diff --git a/spaces/101-5/gpt4free/g4f/.v1/gpt4free/quora/tests/__init__.py b/spaces/101-5/gpt4free/g4f/.v1/gpt4free/quora/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gravity VST The Best Tool for Cinematic Sound Design.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gravity VST The Best Tool for Cinematic Sound Design.md deleted file mode 100644 index d2f811a9f14818e5dea9964e6d6f3b1ac60ce2c0..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Gravity VST The Best Tool for Cinematic Sound Design.md +++ /dev/null @@ -1,22 +0,0 @@ - -

Gravity VST: A Powerful Tool for Cinematic Sound Design

-

If you are looking for a versatile and expressive instrument to create cinematic soundscapes, atmospheres, and effects, you might want to check out Gravity VST by Heavyocity. Gravity VST is a collection of over 2000 sound sources, 1100 presets, and 800 snapshots that can be layered, manipulated, and morphed in various ways. Gravity VST lets you explore the sonic possibilities of organic and synthetic sounds, from ethereal pads and vocal phrases to gritty pulses and impacts.

-

One of the most impressive features of Gravity VST is the Motion Designer, which allows you to animate your sounds with rhythmic patterns, envelopes, filters, and effects. You can choose from over 300 presets or create your own custom motions to add movement and variation to your sounds. You can also use the Motion Designer to modulate the parameters of the four onboard effects: Delay, Reverb, Distortion, and Modulation.

-

gravity vst


DOWNLOAD: https://byltly.com/2uKwIi



-

Another feature that sets Gravity VST apart from other cinematic instruments is the Punish Knob, which lets you dial in some extra intensity and character to your sounds. The Punish Knob is a combination of compression, saturation, distortion, and limiting that can add anything from subtle warmth to extreme distortion. You can use it to make your sounds more punchy, aggressive, or dramatic.

-

Gravity VST is compatible with any DAW that supports VST, AU, or AAX plugins. It requires Kontakt 5.5 or higher (full version) to run. You can buy Gravity VST from Heavyocity's website for $449 USD or get it as part of the Gravity Pack Bundle for $699 USD. If you are looking for a powerful tool for cinematic sound design, Gravity VST might be the perfect choice for you.

- -

In this article, we will take a closer look at some of the features and sounds of Gravity VST and see how it can enhance your cinematic productions. We will also share some tips and tricks on how to get the most out of this powerful instrument.

-

Evocative Pads

-

The Pads section of Gravity VST contains over 1000 sound sources and 400 presets that can be used to create lush and atmospheric textures. Each pad consists of two layers that can be blended, tuned, and panned independently. You can also adjust the volume envelope, filter, EQ, and stereo width of each layer.

-

The Motion Designer plays a big role here as well: you can animate your pads with rhythmic patterns, envelopes, filters, and effects, choose from over 300 motion presets or create your own, and route that movement into the four onboard effects (Delay, Reverb, Distortion, and Modulation).

-

The Pads section also features a Master FX page, where you can apply global effects such as compression, saturation, distortion, limiting, and convolution reverb. You can use the Punish Knob to dial in some extra intensity and character to your pads. You can also use the Twist Knob to modulate the pitch and timbre of your pads with an LFO.

-

The Pads section of Gravity VST is ideal for creating cinematic soundscapes, atmospheres, and backgrounds. You can use them to set the mood and tone of your scenes, or to add depth and dimension to your mixes. You can also layer them with other instruments or sounds to create rich and complex textures.

-

Earth-Shattering Hits

-

The Hits section of Gravity VST contains over 500 sound sources and 200 presets that can be used to create powerful and dramatic impacts. Each hit consists of three layers: Subs, Impacts, and Tails. You can mix and match different elements from each layer to create an unlimited range of unique hit combinations.

-

-

One of the most impressive features of the Hits section is the Designer page, where you can construct layered hits by mixing up Subs, Impacts, Tails and Whooshes. You can drag and drop different elements from the browser onto the timeline, and adjust their timing, volume, pan, pitch, filter, EQ, and effects. You can also use the Snapshots feature to save and recall up to 12 different hit configurations.

-

Like the Pads section, the Hits section has its own Master FX page with global compression, saturation, distortion, limiting, and convolution reverb, along with the Punish Knob for extra intensity and the Twist Knob for LFO-based modulation of pitch and timbre.

-

The Hits section of Gravity VST is ideal for creating cinematic impacts, transitions, accents, and punctuation. You can use them to add weight and drama to your scenes, or to emphasize key moments or events. You can also layer them with other instruments or sounds to create bigger and more epic hits.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Angry Birds 1.6.3.1 For PC [BEST].md b/spaces/1gistliPinn/ChatGPT4/Examples/Angry Birds 1.6.3.1 For PC [BEST].md deleted file mode 100644 index 54454caeaee97cabbf370cf3a52e7f825ade306e..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Angry Birds 1.6.3.1 For PC [BEST].md +++ /dev/null @@ -1,19 +0,0 @@ - -

Angry Birds 1.6.3.1 for PC: The Latest Update of the Classic Game

-

Angry Birds is one of the most popular and addictive games ever created. It was first released for iOS devices in 2009, and since then it has been ported to various platforms, including Windows PC. The game features a flock of colorful birds who are angry at the green pigs who stole their eggs. The player has to use a slingshot to launch the birds at the pigs' structures and destroy them.

-

The latest update of Angry Birds for PC is version 1.6.3.1, which was released in October 2011. This update includes the final chapter of Mine and Dine, the 17th episode of the game, which adds 15 new levels and a new golden egg. The update also fixes some bugs and improves the performance of the game.

-

Angry Birds 1.6.3.1 for PC


Download File: https://imgfil.com/2uy22k



-

To download Angry Birds 1.6.3.1 for PC, you can visit the official website of Rovio Entertainment, the developer of the game, or use one of the alternative sources available online. You can also check out some tips and walkthroughs for the game on various websites and forums dedicated to Angry Birds fans.

-

Angry Birds 1.6.3.1 for PC is a fun and challenging game that will keep you entertained for hours. If you love physics-based puzzles and cute characters, you should definitely give it a try.

- -

Angry Birds is not only a game of skill, but also a game of strategy. You have to plan your moves carefully and use the right bird for the right situation. Here are some tips and tricks to help you master the game and get the best scores possible.

- -

Angry Birds is a game that can be enjoyed by anyone, regardless of age or skill level. It is simple to play but hard to master, and it offers hours of fun and entertainment. With these tips and tricks, you can become an Angry Birds expert and impress your friends with your high scores.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autocad 2015 Keygen Pirate Bay [PORTABLE].md b/spaces/1gistliPinn/ChatGPT4/Examples/Autocad 2015 Keygen Pirate Bay [PORTABLE].md deleted file mode 100644 index 651e34c8da6dccde0377647d8fc524eace28e460..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Autocad 2015 Keygen Pirate Bay [PORTABLE].md +++ /dev/null @@ -1,8 +0,0 @@ - -

Another AutoCAD keygen targets AutoCAD 2017. Other features include the ability to open and modify existing files and to perform basic pre-processing operations on drawings. AutoCAD comes with an AutoCAD LT Viewer. It also offers more advanced functionality, such as the ability to create and open projects, create annotations, create camera views of 3D models, and perform basic geometric transformations.

-

AutoCAD does not include a calculator, but AutoCAD LT does. The new keygen is available as a free download from the AutoCAD LT homepage. A trial version of the software lets you use it for 30 days. The program requires Windows 7 or higher and works with all Windows systems that support AutoCAD LT.

-

Autocad 2015 Keygen Pirate Bay


Download ★★★ https://imgfil.com/2uy1jT



-

After that, you can deploy files. The default partitioning uses C:, but you can change the partitioning and file allocation. In addition, a private image and a default private image are also created. The private image cannot be edited outside of the program, but you can save a private image and open it with AutoCAD. The default private image is a way to transfer the default private image to a file that you can edit. A snapshot of a default private image is created whenever you edit a drawing outside the program. The program will not allow you to open a default private image with AutoCAD.

-

The new version of AutoCAD offers additional tools such as two-dimensional (2D) floor plan and 3D rendering tools. You can create a 2D floor plan and can also use it as a reference when creating 3D objects. New 3D tools include the ability to view 3D models from different perspectives and perform 3D rotations, scaling and translations. The program includes integrated Viewers, which support most industry-standard file formats, including native DWG, DGN, DXF, and PDF. The program also includes a built-in application programming interface (API) that lets programmers automate many AutoCAD tasks. AutoCAD 2017 is a Professional product and is not free.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Code-de-la-route-en-anglais-pdf.md b/spaces/1gistliPinn/ChatGPT4/Examples/Code-de-la-route-en-anglais-pdf.md deleted file mode 100644 index 879b7410b15b8f3ab70da13bfcc0538e857c1256..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Code-de-la-route-en-anglais-pdf.md +++ /dev/null @@ -1,26 +0,0 @@ -
-

How to Learn the French Road Traffic Rules in English

-

If you are planning to drive in France, you need to know the code de la route, or the road traffic rules. These rules are different from those in other countries, and they can be challenging to learn if you don't speak French. Fortunately, there are some resources that can help you learn the code de la route in English.

-

code-de-la-route-en-anglais-pdf


Download ☆☆☆ https://imgfil.com/2uxY7U



-

One of the best ways to prepare for driving in France is to take your theory test, or passer le code de la route. This is a mandatory exam that you need to pass before you can take your practical driving test. The theory test consists of 40 multiple-choice questions based on the French road traffic laws, signs, and signals. You need to answer at least 35 questions correctly to pass.

-

To take your theory test in English, you need to find an authorized center that offers the test in English. You can search for a center near you on the official website of the French Ministry of Interior: https://www.interieur.gouv.fr/Le-ministere/Securite-routiere/Permis-de-conduire/Le-code-de-la-route. You also need to register online and pay a fee of 30 euros. You can then book a date and time for your test.

-

To study for your theory test, you can use various materials that are available in English. One of them is a PDF document that summarizes the main points of the code de la route. You can download it for free from this website: https://www.scribd.com/document/612594500/Code-de-La-Route-en-Anglais-Janv-2003. This document covers topics such as speed limits, priority rules, traffic signs, signals, and markings, parking regulations, alcohol and drug limits, and penalties.

-

Another useful resource is an online dictionary that translates the most common terms and expressions related to the code de la route. You can access it here: https://www.wordreference.com/fren/code%20de%20la%20route. This dictionary can help you understand the questions and answers on the theory test, as well as communicate with other drivers and authorities on the road.

-

-

Finally, you can also watch some videos that explain the code de la route in English. For example, this YouTube channel offers a series of videos that cover different aspects of the code de la route: https://www.youtube.com/watch?v=G5e7HqXboAQ. These videos are short and easy to follow, and they include examples and illustrations.

-

By using these resources, you can learn the code de la route in English and prepare yourself for driving in France. Remember to always respect the rules and be courteous to other road users. Bonne route!

- -

In addition to learning the code de la route in English, you may also want to familiarize yourself with some of the specific features of driving in France. Here are some tips and advice that can help you have a safe and enjoyable driving experience:

- -

By following these tips and advice, you can drive in France with confidence and enjoy the beauty and diversity of this country. Remember to always be respectful and courteous to other road users and follow the code de la route. Bon voyage!

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Caneta Azul Azul Caneta and Join the Fun of the Internet Sensation.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Caneta Azul Azul Caneta and Join the Fun of the Internet Sensation.md deleted file mode 100644 index 712181f4121c8fd45f4f01bbbf13c4cf9d32a35c..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Caneta Azul Azul Caneta and Join the Fun of the Internet Sensation.md +++ /dev/null @@ -1,105 +0,0 @@ - -

Download Caneta Azul: How to Enjoy the Viral Song on Your Device

-

If you are a fan of Brazilian music, you have probably heard of Caneta Azul, the viral song that took the internet by storm in 2019. The catchy tune, composed and performed by Manoel Gomes, has been covered by many famous artists and has inspired countless memes and parodies. But how can you download Caneta Azul and listen to it offline on your device? In this article, we will tell you everything you need to know about this phenomenon and how to enjoy it anytime, anywhere.

-

What is Caneta Azul?

-

The origin and meaning of the song

-

Caneta Azul, which means "blue pen" in Portuguese, is a song written by Manoel Gomes, a former security guard from Maranhão, Brazil. He wrote the song based on a personal experience of losing his blue pen at school and asking his classmates to return it. The lyrics are simple and repetitive, but they convey a sense of nostalgia and sadness for the lost object. The song also has a catchy melody and a distinctive vocal style that sounds like crying.

-

download caneta azul


Download File: https://urlin.us/2uT1Db



-

The popularity and impact of the song

-

The song became viral after Gomes uploaded a video of himself singing it on social media in October 2019. The video quickly gained millions of views and was shared by many celebrities, such as Wesley Safadão, Simone Mendes, Tirullipa, and Neymar. The song also spawned numerous remixes, covers, and parodies in different musical genres and languages. Some examples are AtilaKw's remix with seven musical styles, Dudeth & Lukraya's electronic version, and Manoel Gomes' own bachata version. The song also became a cultural phenomenon, generating memes, merchandise, tattoos, and even a Wikipedia page. The song also earned Gomes fame and recognition, as he signed a contract with a record label and released his debut album in 2020.

-

How to download Caneta Azul?

-

Download Caneta Azul from online platforms

-

If you want to download Caneta Azul to your device, you have several options to choose from. You can use online platforms that allow you to download audio or video files from various sources. Here are some of the most popular ones:

-

YouTube

-

YouTube is the largest video-sharing platform in the world, where you can find many versions of Caneta Azul uploaded by different users. You can use online tools such as Y2mate or SaveFrom to download any YouTube video as an MP3 or MP4 file. Just copy the URL of the video you want to download and paste it into the tool's website. Then, choose the format and quality you prefer and click on "download". You can also use browser extensions or mobile apps that offer similar functions.

-

SoundCloud

-

SoundCloud is a popular audio platform that hosts millions of songs, podcasts, and other audio content. You can find several remixes and covers of Caneta Azul on SoundCloud, such as Dudeth & Lukraya's version. To download SoundCloud tracks, you can use online tools such as KlickAud or ScloudDownloader. Just copy the URL of the track you want to download and paste it into the tool's website. Then, click on "download" and save the file to your device.
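If you would rather script these downloads than rely on the websites above, the open-source yt-dlp project supports both YouTube and SoundCloud. The snippet below is only a minimal sketch of its Python API (install it with `pip install yt-dlp`; converting to MP3 also needs ffmpeg), and the URL is a placeholder rather than a specific Caneta Azul upload.

```python
# Minimal sketch: fetch the best audio stream and convert it to MP3 with yt-dlp.
from yt_dlp import YoutubeDL

options = {
    "format": "bestaudio/best",        # pick the best available audio-only stream
    "outtmpl": "%(title)s.%(ext)s",    # name the file after the track title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",   # requires ffmpeg on your PATH
        "preferredcodec": "mp3",
    }],
}

# Placeholder URL: replace it with the YouTube or SoundCloud link you want to save.
with YoutubeDL(options) as downloader:
    downloader.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```

The same options work for a SoundCloud track URL, since yt-dlp exposes one extractor interface for both sites.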

Spotify

-

Spotify is one of the most popular music streaming platforms in the world, where you can find millions of songs, podcasts, and playlists. You can also find the original version of Caneta Azul by Manoel Gomes on Spotify, as well as his debut album with 18 tracks. To download Caneta Azul from Spotify, you need to have a premium subscription, which costs $9.99 per month. With a premium account, you can download up to 10,000 songs on five different devices and listen to them offline. To download Caneta Azul from Spotify, just follow these steps:

- -

Download Caneta Azul from mobile apps

-

If you prefer to use mobile apps that are dedicated to Caneta Azul, you have some options as well. These apps are designed to let you enjoy the song in different ways, such as playing games, making memes, or singing karaoke. Here are some of the best apps for Caneta Azul:

-

Caneta Azul, Azul Caneta for Android

-

This app is a game based on the song Caneta Azul. You have to help José, the protagonist of the song, to run as far as possible and collect blue pens along the way. You also have to avoid obstacles such as cars, buses, and birds. The app features the original song as the background music and has funny sound effects. You can also share your score with your friends and challenge them to beat it. The app is free to download and play, but it contains ads. You can download it from Google Play Store.

-

Caneta Azul - Dudeth & Lukraya for iOS

-

This app is an electronic version of Caneta Azul by Dudeth & Lukraya, a duo of Brazilian DJs and producers. The app lets you listen to the song and watch a video clip with animations and effects. You can also control the speed and pitch of the song, as well as add filters and stickers to the video. The app is free to download and use, but it requires an internet connection. You can download it from App Store.

-

Caneta Azul - Manoel Gomes for Windows Phone

-

This app is a karaoke app that allows you to sing along with Manoel Gomes' Caneta Azul. The app shows you the lyrics of the song and plays the instrumental version of it. You can also record your voice and share it with your friends. The app is free to download and use, but it contains ads. You can download it from Microsoft Store.

-


-

Conclusion

-

Summary of the main points

-

In this article, we have explained what Caneta Azul is, how it became viral, and how you can download it to your device. We have also suggested some online platforms and mobile apps that let you enjoy the song in different ways. Whether you want to listen to it offline, play a game with it, or sing along with it, there is an option for you.

-

Call to action

-

Now that you know how to download Caneta Azul, why not give it a try? Download your favorite version of the song and have fun with it. You can also share it with your friends and family and spread the joy of Caneta Azul. And if you liked this article, please share it with others who might be interested in Caneta Azul too.

-

Frequently Asked Questions

-
    -
1. What does Caneta Azul mean?

Caneta Azul means "blue pen" in Portuguese. It is the title of a viral song by Manoel Gomes, a Brazilian singer-songwriter who wrote it based on his personal experience of losing his blue pen at school.

2. Who is Manoel Gomes?

Manoel Gomes is a former security guard from Maranhão, Brazil. He became famous after he uploaded a video of himself singing Caneta Azul on social media in 2019. The video went viral and was shared by many celebrities and influencers. He signed a contract with a record label and released his debut album in 2020.

3. How can I download Caneta Azul?

You can download Caneta Azul from various online platforms and mobile apps. Some of the online platforms are YouTube, SoundCloud, and Spotify. Some of the mobile apps are Caneta Azul, Azul Caneta for Android, Caneta Azul - Dudeth & Lukraya for iOS, and Caneta Azul - Manoel Gomes for Windows Phone. You can use online tools or browser extensions to download audio or video files from the online platforms. You can also use the premium subscription of Spotify to download songs offline. You can download the mobile apps from the respective app stores and enjoy the song in different ways.

4. What are some of the benefits of downloading Caneta Azul?

Downloading Caneta Azul can bring you many benefits, such as:

5. What are some of the challenges of downloading Caneta Azul?

Downloading Caneta Azul can also pose some challenges, such as:

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Bloons TD 6 32.4 APK - How to Protect Your Towers and Team Up with Monkeys in this Offline Game.md b/spaces/1phancelerku/anime-remove-background/Bloons TD 6 32.4 APK - How to Protect Your Towers and Team Up with Monkeys in this Offline Game.md deleted file mode 100644 index f9c36a1b02f3cc3abe2ce70e88a6a53e50181226..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Bloons TD 6 32.4 APK - How to Protect Your Towers and Team Up with Monkeys in this Offline Game.md +++ /dev/null @@ -1,159 +0,0 @@ - -

    Bloons TD 6 32.4 APK: A Tower Defense Game with a Twist

    -

    If you are looking for a fun and challenging tower defense game, you might want to check out Bloons TD 6. This game is the latest installment in the popular Bloons series, where you have to pop colorful balloons (or bloons) with your monkey towers and heroes. In this article, we will tell you everything you need to know about Bloons TD 6, including what it is, how to download and install it, how to play it effectively, and how to enjoy it more.

    -

    bloons td 6 32.4 apk


    Download Zip - https://jinyurl.com/2uNMp3



    -

    What is Bloons TD 6?

    -

    Bloons TD 6 is a 3D tower defense game developed and published by Ninja Kiwi, a New Zealand-based game studio. It was released on June 14, 2018 for iOS, Android, and Windows platforms. It is the sixth main game in the Bloons franchise, and the sequel to Bloons TD 5.

    -

    The gameplay of Bloons TD 6

    -

    The gameplay of Bloons TD 6 is similar to other tower defense games. You have to place your monkey towers along a path where the bloons will travel. Your goal is to pop all the bloons before they reach the end of the path and reduce your lives to zero. You can choose from different types of monkey towers, each with their own strengths and weaknesses. You can also upgrade your towers to make them more powerful and unlock new abilities.

    -

    However, Bloons TD 6 also adds some new twists to the tower defense genre. For example, you can also use heroes, which are special monkey units that have unique skills and can level up automatically. You can also use powers and insta-monkeys, which are items that can give you an edge in difficult situations. Moreover, you can also customize your monkeys, bloons, animations, music, and more with the trophy store.

    -

    The features of Bloons TD 6

    -

    Bloons TD 6 has many features that make it a great tower defense game. Some of these features are:

    - -

    How to download and install Bloons TD 6 32.4 APK?

    -

    If you want to play Bloons TD 6 on your Android device, you can download and install the latest version of the game from the Google Play Store. However, if you want to get the game for free, or if you want to access some features that are not available in the official version, you can download and install the Bloons TD 6 32.4 APK file from a third-party source.

    -

    The steps to download and install Bloons TD 6 32.4 APK

    -

    To download and install Bloons TD 6 32.4 APK, you need to follow these steps:

    -

    -
      -
1. Find a reliable source: You need to find a website that offers the Bloons TD 6 32.4 APK file for free and without viruses or malware. You can search for it on Google or use a trusted site like APKPure or APKMirror.
2. Download the file: You need to click on the download button or link and save the Bloons TD 6 32.4 APK file on your device. You might need to enable the option to download files from unknown sources in your device settings.
3. Install the file: You need to locate the Bloons TD 6 32.4 APK file on your device and tap on it to start the installation process. You might need to grant some permissions to the app during the installation.
4. Launch the game: You need to open the app icon on your device and enjoy playing Bloons TD 6.
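If you would rather sideload the file from a computer instead of opening it on the phone, the standard Android adb tool (part of Google's platform-tools) can install the same APK over USB. The snippet below is only a sketch under the assumption that USB debugging is enabled on the device and that the file name matches whatever you actually downloaded.

```python
# Minimal sketch: install a locally downloaded APK over USB with adb.
import subprocess

apk_path = "bloons-td-6-32.4.apk"  # placeholder name for the file you downloaded

# List connected devices first so a missing device fails loudly here.
subprocess.run(["adb", "devices"], check=True)

# "-r" replaces the app if an older version is already installed.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```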
    -

    The benefits of downloading and installing Bloons TD 6 32.4 APK

    -

    By downloading and installing Bloons TD 6 32.4 APK, you can enjoy some benefits that are not available in the official version of the game. Some of these benefits are:

    - -

    How to play Bloons TD 6 effectively?

    -

    Bloons TD 6 is a fun and addictive game, but it can also be challenging and frustrating at times. If you want to play Bloons TD 6 effectively, you need to learn some strategies, tips, and tricks that can help you pop more bloons and win more games.

    -

    The best strategies, tips, and tricks for Bloons TD 6

    -

    Here are some of the best strategies, tips, and tricks for Bloons TD 6:

    - -

    The best towers, heroes, and upgrades for Bloons TD 6

    -

    There is no definitive answer to what are the best towers, heroes, and upgrades for Bloons TD 6, as it depends on your personal preference, play style, and strategy. However, here are some of the most popular and effective ones that you can try:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Tower | Upgrade Path | Description |
| --- | --- | --- |
| Ninja Monkey | 2-0-3 | This upgrade gives the ninja monkey the ability to throw shurikens that can pop four bloons each and can detect camo bloons. It also increases the attack speed and range of the ninja monkey. |
| Super Monkey | 3-0-2 | This upgrade gives the super monkey the ability to shoot plasma blasts that can pop 11 bloons each and have increased pierce and damage. It also increases the attack speed of the super monkey. |
| Bomb Shooter | 2-0-4 | This upgrade gives the bomb shooter the ability to shoot MOAB mauler bombs that deal extra damage to MOAB-class bloons. It also increases the blast radius and damage of the bomb shooter. |
| Alchemist | 4-0-2 | This upgrade gives the alchemist the ability to brew stronger potions that can buff up to three nearby monkeys with increased attack speed, damage, pierce, and range. It also increases the duration of the buff. |
| Banana Farm | 2-3-0 | This upgrade gives the banana farm the ability to produce valuable bananas that are worth more money. It also increases the amount of bananas produced per round. |
    -

    As for heroes, some of the most popular and effective ones are:

    - -

    How to enjoy Bloons TD 6 more?

    -

    Bloons TD 6 is already a very enjoyable game, but there are ways to make it even more fun. Here are some suggestions on how to enjoy Bloons TD 6 more:

    -

    The modes and events of Bloons TD 6

    -

    Bloons TD 6 has various modes and events that can spice up your gameplay and offer different challenges and rewards. Some of these modes and events are:

    - -

    The community and content of Bloons TD 6

    -

    Bloons TD 6 has a large and active community of players and fans that can enhance your gaming experience. You can join the community and access the content of Bloons TD 6 by:

    - -

    Conclusion

    -

    Bloons TD 6 is a tower defense game with a twist. It is a game where you have to pop colorful balloons (or bloons) with your monkey towers and heroes. It is a game that has huge content, epic monkey towers and heroes, endless awesomeness, and more. It is a game that you can download and install for free using the Bloons TD 6 32.4 APK file. It is a game that you can play effectively using the best strategies, tips, tricks, towers, heroes, and upgrades. It is a game that you can enjoy more by playing the modes and events, joining the community, and creating and sharing your own content.

    -

    If you are a fan of tower defense games, or if you are looking for a new and exciting game to play, you should definitely give Bloons TD 6 a try. You will not regret it. You will have a blast popping bloons and saving the world with your monkeys and heroes.

    -

    So, what are you waiting for? Download and install the Bloons TD 6 32.4 APK file now and start your bloon popping adventure!

    -

    FAQs

    -

    Here are some frequently asked questions about Bloons TD 6 and Bloons TD 6 32.4 APK:

    -
      -
    1. Is Bloons TD 6 free? -

      Bloons TD 6 is not free on the Google Play Store. It costs $4.99 to download and install the game. However, you can get the game for free by downloading and installing the Bloons TD 6 32.4 APK file from a third-party source.

    2. -
    3. Is Bloons TD 6 safe? -

      Bloons TD 6 is safe to play on your device. It does not contain any viruses, malware, or harmful content. However, you should be careful when downloading and installing the Bloons TD 6 32.4 APK file from a third-party source. You should only download and install the file from a reliable and trusted website.

    4. -
    5. Is Bloons TD 6 online or offline? -

      Bloons TD 6 can be played both online and offline. You can play online with other players in co-op mode or contested territory. You can also play offline with single player mode even when your WiFi doesn’t work.

    6. -
    7. How to update Bloons TD 6? -

      You can update Bloons TD 6 by downloading and installing the latest version of the game from the Google Play Store. However, if you are using the Bloons TD 6 32.4 APK file, you need to download and install the latest version of the file from a third-party source.

    8. -
    9. How to contact Bloons TD 6 support? -

      You can contact Bloons TD 6 support by emailing them at support@ninjakiwi.com or by visiting their website at https://ninjakiwi.com/support.

    10. -

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Dragon Ball Legend Fighter The Ultimate 3D Battle Game APK Download.md b/spaces/1phancelerku/anime-remove-background/Dragon Ball Legend Fighter The Ultimate 3D Battle Game APK Download.md deleted file mode 100644 index 62062208c257581313a64de4e565a638afc8764c..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Dragon Ball Legend Fighter The Ultimate 3D Battle Game APK Download.md +++ /dev/null @@ -1,101 +0,0 @@ -
    -

    Dragon Ball Legend Fighter APK: A Review

    -

    If you are a fan of Dragon Ball, you might be interested in trying out a new game called Dragon Ball Legend Fighter APK. This is a classic fighting game that features your favorite characters from the popular anime and manga series. You can transform into powerful warriors, fight against epic bosses, and compete with other players online. But is this game worth downloading and playing? In this article, we will review Dragon Ball Legend Fighter APK and tell you everything you need to know about it.

    -

    dragon ball legend fighter apk


    Download File ★★★ https://jinyurl.com/2uNOH0



    -

    What is Dragon Ball Legend Fighter APK?

    -

    Dragon Ball Legend Fighter APK is an Android game developed by OneStick, a studio that specializes in creating action games. The game is inspired by the legendary Dragon Ball franchise, which follows the adventures of Goku and his friends as they protect the Earth from various threats. The game features 29+ characters that you can choose from, each with their own unique abilities and transformations. You can also customize your character's appearance, skills, and equipment.

    -

    The game has four different modes that you can play: Versus, Tournament, Story, and Arcade. In Versus mode, you can fight against another player or the computer in a one-on-one match. In Tournament mode, you can join a bracket of 16 fighters and try to win the championship. In Story mode, you can follow the original plot of Dragon Ball and face off against iconic villains. In Arcade mode, you can challenge yourself with different levels of difficulty and earn rewards.

    -

    How to download and install Dragon Ball Legend Fighter APK?

    -

    If you want to play Dragon Ball Legend Fighter APK on your Android device, you will need to download and install the APK file from a reliable source. Here are the steps that you need to follow:

    -


    -
      -
1. Go to APKCombo and search for "Dragon Ball Legend Fighter APK".
2. Select the latest version of the game (2.9.5) and click on "Download APK (64 MB)".
3. Wait for the download to finish and then open the file.
4. If you see a warning message that says "Install blocked", go to your device's settings and enable "Unknown sources".
5. Tap on "Install" and wait for the installation to complete.
6. Launch the game and enjoy!
    -

    How to play Dragon Ball Legend Fighter APK?

    -

    Playing Dragon Ball Legend Fighter APK is easy and fun. You can control your character using the virtual joystick on the left side of the screen and use the buttons on the right side to perform actions. You can move around, jump, dodge, attack, block, charge energy, use special moves, and transform into different forms. You can also combine different buttons to create combos and unleash powerful attacks.

    -

    The game has a simple interface that shows your health bar, energy bar, transformation bar, ability cards, and timer. You can also see your opponent's information on the opposite side of the screen. The goal of each match is to reduce your opponent's health bar to zero before they do the same to you or before the time runs out.

    What are the pros and cons of Dragon Ball Legend Fighter APK? -

    Like any other game, Dragon Ball Legend Fighter APK has its pros and cons. Here are some of the advantages and disadvantages of playing this game:

    - - - - - - - - - - - - - - - - - - - - - -
| Pros | Cons |
| --- | --- |
| The game has high-quality graphics and sound effects that make the gameplay more immersive and realistic. | The game requires a lot of storage space and may not run smoothly on low-end devices. |
| The game has a large roster of characters that you can unlock and customize to your liking. | The game can be repetitive and boring after a while, especially if you play the same mode or character over and over. |
| The game has a variety of modes that you can choose from, depending on your mood and preference. | The game can be frustrating and challenging, especially if you face stronger opponents or higher difficulty levels. |
| The game has an online multiplayer feature that allows you to compete with other players around the world. | The game can have connectivity issues and lag problems that can affect your performance and enjoyment. |
    -

    How does Dragon Ball Legend Fighter APK compare to other Dragon Ball games?

    -

    Dragon Ball Legend Fighter APK is not the only Dragon Ball game that you can play on your Android device. There are many other games that are based on the same franchise, such as Dragon Ball Z Dokkan Battle, Dragon Ball Legends, Dragon Ball Z Kakarot, and more. How does Dragon Ball Legend Fighter APK compare to these games?

    -

    Well, it depends on what you are looking for in a Dragon Ball game. If you want a casual and simple fighting game that lets you relive the classic battles from the series, then Dragon Ball Legend Fighter APK might be a good choice for you. However, if you want a more complex and strategic game that involves collecting cards, building teams, upgrading characters, and exploring stories, then you might prefer one of the other games. Ultimately, it is up to you to decide which game suits your taste and style best.

    -

    Conclusion

    -

    Dragon Ball Legend Fighter APK is a fun and exciting fighting game that lets you experience the thrill of being a Dragon Ball fighter. You can choose from a wide range of characters, customize your skills and equipment, and fight against various enemies in different modes. You can also play online with other players and test your skills and strategies. However, the game also has some drawbacks, such as requiring a lot of storage space, being repetitive and challenging, and having connectivity issues. Therefore, you should weigh the pros and cons before downloading and playing this game.

    -

    If you are a fan of Dragon Ball and enjoy fighting games, then you might want to give Dragon Ball Legend Fighter APK a try. You can download it from APKCombo for free and start your adventure as a legendary fighter. Who knows, maybe you will become the next Super Saiyan!

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Dragon Ball Legend Fighter APK:

    -
      -
    1. Is Dragon Ball Legend Fighter APK safe to download and install?
      Yes, as long as you download it from a trusted source like APKCombo, which scans all the files for viruses and malware. However, you should always be careful when downloading any APK file from unknown sources, as they might contain harmful or malicious content.
    2. -
    3. Is Dragon Ball Legend Fighter APK legal to play?
      Yes, as long as you do not use any cheats, hacks, or mods that alter the game's functionality or give you an unfair advantage over other players. However, you should be aware that the game is not officially licensed or endorsed by the creators of Dragon Ball, so it might violate some intellectual property rights or terms of service.
    4. -
    5. How can I get more characters in Dragon Ball Legend Fighter APK?
      You can unlock more characters by playing the Story mode or by purchasing them with coins or gems. You can earn coins by winning matches or completing tasks, and you can earn gems by watching ads or buying them with real money.
    6. -
    7. How can I transform into different forms in Dragon Ball Legend Fighter APK?
      You can transform into different forms by filling up your transformation bar with energy. You can charge energy by holding down the charge button or by landing hits on your opponent. Once your transformation bar is full, you can tap on it to activate your transformation. You can also tap on it again to revert back to your normal form. Different forms have different advantages and disadvantages, such as speed, power, defense, and energy consumption.
    8. -
    9. How can I play online with other players in Dragon Ball Legend Fighter APK?
      You can play online with other players by selecting the Versus mode and choosing the Online option. You can then search for an opponent or create a room and invite your friends. You will need a stable internet connection to play online, otherwise you might experience lag or disconnection.
    10. -
    -

    I hope this article has helped you learn more about Dragon Ball Legend Fighter APK and how to play it. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have fun!

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy TikTok on Your Computer with These Easy Steps.md b/spaces/1phancelerku/anime-remove-background/Enjoy TikTok on Your Computer with These Easy Steps.md deleted file mode 100644 index 02bd700e029f0648489d8420ffcaf059f72f663d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy TikTok on Your Computer with These Easy Steps.md +++ /dev/null @@ -1,95 +0,0 @@ - -

    How to Download TikTok in Computer

    -

    TikTok is a video-sharing app that allows users to create and share short-form videos on any topic. It’s mainly mobile-based, although you can still watch TikTok videos using the web app. The platform allows users to get creative with their content using filters, stickers, voiceovers, sound effects, and background music.

    -

    download tiktok in computer


    Download File ··· https://jinyurl.com/2uNOvV



    -

    TikTok is one of the most popular social media apps in the world, with over one billion active users. It’s especially popular among teens and young adults who enjoy watching and making videos that are entertaining, spontaneous, and genuine. Whether you’re a sports fanatic, a pet enthusiast, or just looking for a laugh, there’s something for everyone on TikTok.

    -

    But what if you want to download TikTok in computer? Maybe you want to watch TikTok videos on a bigger screen, or you want to create your own videos using your PC’s camera and microphone. Or maybe you just want to have another option besides your smartphone. Whatever your reason, there are two ways to download TikTok on PC: using an emulator or using the Microsoft Store. In this article, we’ll show you how to do both, as well as how to download TikTok videos on PC using a video downloader software.

    -

    How to Download TikTok in Computer Using an Emulator

    -

    An emulator is a software that mimics a smartphone on your computer. You can use an emulator to run Android apps on your PC, including TikTok. One of the most popular emulators is Bluestacks, which is free and easy to use. Here are the steps to download and install Bluestacks emulator and TikTok app on your PC.

    -
      -
1. Go to Bluestacks website and click Download Bluestacks.
2. Run the installer file and follow the instructions to install Bluestacks on your PC.
3. Launch Bluestacks and sign in with your Google account.
4. Go to Google Play Store on Bluestacks and search for TikTok.
5. Click Install to download and install TikTok app on Bluestacks.
6. Open TikTok app on Bluestacks and sign in with your account or create a new one.
7. Enjoy watching and making TikTok videos on your PC.
    -

    How to Download TikTok in Computer Using the Microsoft Store

    -

    The Microsoft Store is an online marketplace where you can download apps, games, movies, music, books, and more for your Windows devices. Since June 2021, you can also download the TikTok app from the Microsoft Store, which is available for Windows 10 or 11. You can also use the "Get app" button on the TikTok website to access the Microsoft Store. Here

  11. You can discover and watch videos from various categories, such as comedy, music, dance, sports, beauty, fashion, etc.
  12. -
  13. You can follow, like, comment, and chat with other users who share your interests or passions.
  14. -
  15. You can join or create challenges, trends, hashtags, or duets to participate in the TikTok community.
  16. -
  17. You can livestream your activities or events and interact with your fans or viewers in real time.
  18. -
  19. You can earn rewards or gifts from your fans or sponsors by creating quality content or engaging with them.
  20. - -

    What are some of the alternatives to TikTok for PC?

    -

    If you’re looking for some alternatives to TikTok for PC, you can try these apps:

    - -

    How can I use TikTok for business promotion?

    -

    TikTok is not only a platform for entertainment but also a platform for business promotion. You can use TikTok to market your products or services, increase your brand awareness, or generate leads or sales. Here are some ways to use TikTok for business promotion:

    - -

    How can I create a Duet or Stitch video on TikTok?

    -

    A Duet or Stitch video is a type of video that allows you to collaborate with another user on TikTok. A Duet video is when you record a video alongside another user’s video. A Stitch video is when you record a video that adds to another user’s video. Here are the steps to create a Duet or Stitch video on TikTok:

    -
      -
1. Find a video that you want to Duet or Stitch with on TikTok.
2. Tap the Share icon and select Duet or Stitch.
3. Record your video using the camera button. You can also add filters, stickers, voiceovers, sound effects, etc.
4. Tap the Checkmark icon when you're done recording.
5. Edit your video using the tools at the bottom of the screen. You can also add captions, hashtags, tags, etc.
6. Tap Post to share your Duet or Stitch video on TikTok.

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Football Quiz 2022 How many of these World Cup facts can you get right?.md b/spaces/1phancelerku/anime-remove-background/Football Quiz 2022 How many of these World Cup facts can you get right?.md deleted file mode 100644 index 0fa183e2084835683b4a81bcfafec4c479bea5b0..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Football Quiz 2022 How many of these World Cup facts can you get right?.md +++ /dev/null @@ -1,98 +0,0 @@ - -

    Football Quiz 2022: How Well Do You Remember the Year in Soccer?

    -

    If you are a football fan, you probably followed the action-packed year of 2022 with great interest and excitement. From the thrilling World Cup in Qatar to the drama-filled domestic leagues, there was no shortage of memorable moments and stories to keep you entertained.

    -

    Introduction

    -

    Why take this quiz?

    -

    Well, for one thing, it's fun! Who doesn't love a good quiz to test their knowledge and challenge their friends? Plus, it's a great way to refresh your memory and learn some new facts about the beautiful game. You might be surprised by how much you remember or how much you missed.

    -

    football quiz 2022


Download Zip: https://jinyurl.com/2uNTkG



    -

    What to expect from this quiz?

    -

    This quiz consists of 10 questions, each with four possible answers. The questions cover various topics and competitions related to football in 2022, such as the World Cup, the Champions League, the Premier League, and more. Some questions are easy, some are hard, and some are tricky. You will have to use your logic, your intuition, and your memory to get them right. At the end of the quiz, you will get your score and see how you compare to other football fans. Are you ready to take on the challenge?

    -

    The Quiz

    -

    Question 1: Who won the FIFA World Cup in Qatar?

    -

    A) France B) Argentina C) Brazil D) Italy

    -

The correct answer is B) Argentina. The South American giants finally ended their 36-year drought and lifted their third World Cup trophy, beating France on penalties (4-2) after a 3-3 thriller in the final. Lionel Messi, who scored seven goals in the tournament, was named its best player and finally added the World Cup to his long list of honours.

    -

    Question 2: Who was the top scorer of the Premier League in 2022?

    -

    A) Harry Kane B) Mohamed Salah C) Erling Haaland D) Ivan Toney

    -

    The correct answer is D) Ivan Toney. The Brentford striker had a sensational debut season in the top flight, scoring 27 goals and breaking the record for the most goals by a newly promoted player in Premier League history. He also won the Golden Boot award, beating Harry Kane and Mohamed Salah by one goal each.

    -

    Question 3: Which team became the first African team to reach the World Cup semi-finals?

    -

    A) Morocco B) Cameroon C) Ghana D) Egypt

    -

    The correct answer is C) Ghana. The Black Stars made history by becoming the first African team to reach the last four of the World Cup, after stunning Germany 2-1 in the quarter-finals. They were eventually knocked out by Argentina in a 3-2 thriller, but they won the hearts of many fans with their spirited and skillful performances.

    -

    Question 4: Who scored the fastest hat-trick in Champions League history?

    -

    A) Kylian Mbappé B) Robert Lewandowski C) Cristiano Ronaldo D) Gonçalo Ramos

    -

    The correct answer is D) Gonçalo Ramos. The Benfica youngster scored three goals in just six minutes and 18 seconds against Dynamo Kyiv in the group stage, breaking the previous record of eight minutes held by Mike Newell and Bafétimbi Gomis. Ramos, who was only 20 years old at the time, also became the youngest player to score a hat-trick in the Champions League.

    -


    -

Question 5: Which team won the Copa América Femenina in 2022?

    -

    A) Colombia B) Uruguay C) Chile D) Brazil

    -

The correct answer is D) Brazil. The Seleção beat hosts Colombia 1-0 in the final to lift the Copa América Femenina for the eighth time. No men's Copa América was played in 2022; the next edition followed in 2024.

    -

    Question 6: Who was the youngest manager in the World Cup 2022?

    -

    A) Lionel Scaloni B) Aliou Cissé C) Walid Regragui D) Gareth Southgate

    -

The correct answer is A) Lionel Scaloni. At 44, the Argentina coach was the youngest of the 32 head coaches in Qatar, and he finished the tournament by lifting the trophy. Walid Regragui (47), Aliou Cissé (46) and Gareth Southgate (52) were all older.

    Question 7: Which team had three players starting in the World Cup final?

    -

    A) Atlético Madrid B) PSG C) Real Madrid D) Tottenham Hotspur

    -

The correct answer is A) Atlético Madrid. Nahuel Molina and Rodrigo De Paul started the final for Argentina, while Antoine Griezmann started for France. PSG, by contrast, supplied only two starters: Kylian Mbappé, who scored a hat-trick yet still finished on the losing side, and Lionel Messi, who was named man of the match.

    -

    Question 8: Which player did not score a goal in the knockout stages of the World Cup?

    -

    A) Denzel Dumfries B) Jordan Henderson C) Pepe D) Cristiano Ronaldo

    -

The correct answer is D) Cristiano Ronaldo. The Portugal captain scored only once in Qatar, a penalty against Ghana in the group stage, and was dropped to the bench for the knockout rounds. He came on as a substitute in the 6-1 win over Switzerland and again in the quarter-final, where Portugal were knocked out 1-0 by Morocco.

    -

    Question 9: How many yellow cards did England receive in their five matches at the World Cup?

    -

    A) None B) One C) Four D) Six

    -

    The correct answer is A) None. England had a remarkable disciplinary record at the World Cup, as they did not receive any yellow or red cards in their five matches. They were the only team to achieve this feat, and they also conceded the fewest fouls (32) in the tournament.

    -

    Question 10: Which team wore three different shirts in the World Cup?

    -

    A) France B) Canada C) Japan D) Belgium

    -

    The correct answer is C) Japan. The Asian side wore three different shirts in their three group matches, each representing a different aspect of their culture and identity. They wore a blue shirt with a red sun against Colombia, a white shirt with red stripes against Poland, and a red shirt with white dots against Senegal.

    -

    Conclusion

    -

    How did you do?

    -

    So, how many questions did you get right? Did you ace the quiz or did you struggle? Here is a table that shows how well you did compared to other football fans:

| Score | Rating |
| --- | --- |
| 10/10 | You are a football genius! You know everything there is to know about the beautiful game. You should be proud of yourself and brag to your friends. |
| 8-9/10 | You are a football expert! You have an impressive knowledge of the game and its history. You only missed one or two questions, but that's okay. Nobody is perfect. |
| 6-7/10 | You are a football fan! You have a good grasp of the game and its events. You got more than half of the questions right, which is commendable. You still have some room for improvement, though. |
| 4-5/10 | You are a football novice! You have a basic understanding of the game and its rules. You got some questions right, but you also made some mistakes. You need to watch more football and learn more facts. |
| 0-3/10 | You are a football beginner! You have little or no knowledge of the game and its players. You got most of the questions wrong, which is disappointing. You need to start from scratch and study hard. |
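If you are tallying scores for a group of friends, the rating bands above are just a simple range lookup. Here is a minimal, hypothetical Python sketch (the function name and short labels are ours, not part of the original quiz) showing one way to map a score out of 10 to its band:

```python
def rating_for(score: int) -> str:
    """Map a quiz score (0-10) to the rating bands in the table above."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score == 10:
        return "Football genius"
    if score >= 8:
        return "Football expert"
    if score >= 6:
        return "Football fan"
    if score >= 4:
        return "Football novice"
    return "Football beginner"


# Example: 7 correct answers out of 10
print(rating_for(7))  # -> "Football fan"
```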

    What did you learn?

    -

    Regardless of your score, we hope that you learned something new and interesting from this quiz. Maybe you discovered some facts that you didn't know before, or maybe you refreshed your memory on some events that you forgot. Either way, we hope that you enjoyed this quiz and that it sparked your curiosity and passion for football.

    -

    What's next?

    -

    If you liked this quiz, why not share it with your friends and challenge them to beat your score? You can also try some other quizzes on our website, covering different topics and levels of difficulty. Or you can read some of our articles and blogs about football, where you can find more information and insights about the game and its stars.

    -

    Thank you for taking this quiz and we hope to see you again soon!

FAQs

Q: When and where was the World Cup 2022 held?
A: The World Cup 2022 was held from November 20 to December 18 in Qatar, which was the first Arab country to host the tournament.

Q: Who was the oldest player to score at the World Cup 2022?
A: Pepe, the Portugal defender, who was 39 when he netted against Switzerland in the round of 16, becoming the oldest player to score in a World Cup knockout match.

Q: Who was the best goalkeeper in the World Cup 2022?
A: Emiliano Martínez, the Argentina keeper, who won the Golden Glove award after decisive saves in the quarter-final shoot-out against the Netherlands and in the final against France.

Q: Which team scored the most goals in the World Cup 2022?
A: France, who netted 16 times in seven matches, averaging just under 2.3 goals per game.

Q: Which team had the best defensive record in the World Cup 2022?
A: Morocco, who did not concede a goal to an opposition player until the semi-final against France; the only goal they let in before that was an own goal against Canada.

Q: Which player won the Golden Ball award for the best player in the World Cup 2022?
A: Lionel Messi, the Argentina captain, who scored seven goals in seven matches and led his team to their first World Cup title since 1986.

    -
    -
    \ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/ui/separator.tsx b/spaces/2023Liu2023/bingo/src/components/ui/separator.tsx deleted file mode 100644 index 6c55e0b2ca8e2436658a06748aadbff7cd700db0..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/ui/separator.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SeparatorPrimitive from '@radix-ui/react-separator' - -import { cn } from '@/lib/utils' - -const Separator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->( - ( - { className, orientation = 'horizontal', decorative = true, ...props }, - ref - ) => ( - - ) -) -Separator.displayName = SeparatorPrimitive.Root.displayName - -export { Separator } diff --git a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/demo_web.py b/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/demo_web.py deleted file mode 100644 index a195087dbb790088ef4b15aaa189f51803cee063..0000000000000000000000000000000000000000 --- a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/demo_web.py +++ /dev/null @@ -1,124 +0,0 @@ -# -- coding: utf-8 --` -import argparse -import os -import random -import streamlit as st -from streamlit_drawable_canvas import st_canvas -import numpy as np -import cv2 -from PIL import Image, ImageEnhance -import numpy as np -# engine -from stable_diffusion_engine import StableDiffusionEngine -# scheduler -from diffusers import PNDMScheduler - - -def run(engine): - with st.form(key="request"): - with st.sidebar: - prompt = st.text_area(label='Enter prompt') - - with st.expander("Initial image"): - init_image = st.file_uploader("init_image", type=['jpg','png','jpeg']) - stroke_width = st.slider("stroke_width", 1, 100, 50) - stroke_color = st.color_picker("stroke_color", "#00FF00") - canvas_result = st_canvas( - fill_color="rgb(0, 0, 0)", - stroke_width = stroke_width, - stroke_color = stroke_color, - background_color = "#000000", - background_image = Image.open(init_image) if init_image else None, - height = 512, - width = 512, - drawing_mode = "freedraw", - key = "canvas" - ) - - if init_image is not None: - init_image = cv2.cvtColor(np.array(Image.open(init_image)), cv2.COLOR_RGB2BGR) - - if canvas_result.image_data is not None: - mask = cv2.cvtColor(canvas_result.image_data, cv2.COLOR_BGRA2GRAY) - mask[mask > 0] = 255 - else: - mask = None - - num_inference_steps = st.select_slider( - label='num_inference_steps', - options=range(1, 150), - value=32 - ) - - guidance_scale = st.select_slider( - label='guidance_scale', - options=range(1, 21), - value=7 - ) - - strength = st.slider( - label='strength', - min_value = 0.0, - max_value = 1.0, - value = 0.5 - ) - - seed = st.number_input( - label='seed', - min_value = 0, - max_value = 2 ** 31, - value = random.randint(0, 2 ** 31) - ) - - generate = st.form_submit_button(label = 'Generate') - - if prompt: - np.random.seed(seed) - image = engine( - prompt = prompt, - init_image = init_image, - mask = mask, - strength = strength, - num_inference_steps = num_inference_steps, - guidance_scale = guidance_scale - ) - st.image(Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)), width=512) - -@st.cache(allow_output_mutation=True) -def load_engine(args): - scheduler = PNDMScheduler( - beta_start=args.beta_start, - beta_end=args.beta_end, - beta_schedule=args.beta_schedule, - skip_prk_steps = True, - tensor_format="np" - ) - engine = StableDiffusionEngine( - model = args.model, - scheduler = scheduler, - tokenizer = 
args.tokenizer - ) - return engine - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # pipeline configure - parser.add_argument("--model", type=str, default="4eJIoBek/stable-diffusion-v1-4-openvino-fp32", help="model name") - # scheduler params - parser.add_argument("--beta-start", type=float, default=0.00085, help="LMSDiscreteScheduler::beta_start") - parser.add_argument("--beta-end", type=float, default=0.012, help="LMSDiscreteScheduler::beta_end") - parser.add_argument("--beta-schedule", type=str, default="scaled_linear", help="LMSDiscreteScheduler::beta_schedule") - # tokenizer - parser.add_argument("--tokenizer", type=str, default="openai/clip-vit-large-patch14", help="tokenizer") - - try: - args = parser.parse_args() - except SystemExit as e: - # This exception will be raised if --help or invalid command line arguments - # are used. Currently streamlit prevents the program from exiting normally - # so we have to do a hard exit. - os._exit(e.code) - - engine = load_engine(args) - run(engine) diff --git a/spaces/AIFILMS/ControlNet-Video/app.py b/spaces/AIFILMS/ControlNet-Video/app.py deleted file mode 100644 index 8e575424c190d89aebc250bf19e2bb5195010da5..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/ControlNet-Video/app.py +++ /dev/null @@ -1,359 +0,0 @@ -from __future__ import annotations -import gradio as gr -import os -import cv2 -import numpy as np -from PIL import Image -from moviepy.editor import * -from share_btn import community_icon_html, loading_icon_html, share_js - -import pathlib -import shlex -import subprocess - -is_shared_ui = True if "AIFILMS/ControlNet-Video" in os.environ['SPACE_ID'] else False - -if os.getenv('SYSTEM') == 'spaces': - with open('patch') as f: - subprocess.run(shlex.split('patch -p1'), stdin=f, cwd='ControlNet') - -base_url = 'https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/' - -names = [ - 'body_pose_model.pth', - 'dpt_hybrid-midas-501f0c75.pt', - 'hand_pose_model.pth', - 'mlsd_large_512_fp32.pth', - 'mlsd_tiny_512_fp32.pth', - 'network-bsds500.pth', - 'upernet_global_small.pth', -] - -for name in names: - command = f'wget https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/{name} -O {name}' - out_path = pathlib.Path(f'ControlNet/annotator/ckpts/{name}') - if out_path.exists(): - continue - subprocess.run(shlex.split(command), cwd='ControlNet/annotator/ckpts/') - - - -if(not is_shared_ui): - from model import (DEFAULT_BASE_MODEL_FILENAME, DEFAULT_BASE_MODEL_REPO, - DEFAULT_BASE_MODEL_URL, Model) - - model = Model() - - -def controlnet(i, prompt, control_task, seed_in, ddim_steps, scale, low_threshold, high_threshold, value_threshold, distance_threshold, bg_threshold): - img= Image.open(i) - np_img = np.array(img) - - a_prompt = "best quality, extremely detailed" - n_prompt = "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality" - num_samples = 1 - image_resolution = 512 - detect_resolution = 512 - eta = 0.0 - #low_threshold = 100 - #high_threshold = 200 - #value_threshold = 0.1 - #distance_threshold = 0.1 - #bg_threshold = 0.4 - - if control_task == 'Canny': - result = model.process_canny(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, ddim_steps, scale, seed_in, eta, low_threshold, high_threshold) - elif control_task == 'Depth': - result = model.process_depth(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed_in, 
eta) - elif control_task == 'Hed': - result = model.process_hed(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed_in, eta) - elif control_task == 'Hough': - result = model.process_hough(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed_in, eta, value_threshold, - distance_threshold) - elif control_task == 'Normal': - result = model.process_normal(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed_in, eta, bg_threshold) - elif control_task == 'Pose': - result = model.process_pose(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed_in, eta) - elif control_task == 'Scribble': - result = model.process_scribble(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, ddim_steps, scale, seed_in, eta) - elif control_task == 'Seg': - result = model.process_seg(np_img, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed_in, eta) - - #print(result[0]) - processor_im = Image.fromarray(result[0]) - processor_im.save("process_" + control_task + "_" + str(i) + ".jpeg") - im = Image.fromarray(result[1]) - im.save("your_file" + str(i) + ".jpeg") - return "your_file" + str(i) + ".jpeg", "process_" + control_task + "_" + str(i) + ".jpeg" - -def change_task_options(task): - if task == "Canny" : - return canny_opt.update(visible=True), hough_opt.update(visible=False), normal_opt.update(visible=False) - elif task == "Hough" : - return canny_opt.update(visible=False),hough_opt.update(visible=True), normal_opt.update(visible=False) - elif task == "Normal" : - return canny_opt.update(visible=False),hough_opt.update(visible=False), normal_opt.update(visible=True) - else : - return canny_opt.update(visible=False),hough_opt.update(visible=False), normal_opt.update(visible=False) - -def get_frames(video_in): - frames = [] - #resize the video - clip = VideoFileClip(video_in) - - #check fps - if clip.fps > 30: - print("vide rate is over 30, resetting to 30") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=30) - else: - print("video rate is OK") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=clip.fps) - - print("video resized to 512 height") - - # Opens the Video file with CV2 - cap= cv2.VideoCapture("video_resized.mp4") - - fps = cap.get(cv2.CAP_PROP_FPS) - print("video fps: " + str(fps)) - i=0 - while(cap.isOpened()): - ret, frame = cap.read() - if ret == False: - break - cv2.imwrite('kang'+str(i)+'.jpg',frame) - frames.append('kang'+str(i)+'.jpg') - i+=1 - - cap.release() - cv2.destroyAllWindows() - print("broke the video into frames") - - return frames, fps - - -def convert(gif): - if gif != None: - clip = VideoFileClip(gif.name) - clip.write_videofile("my_gif_video.mp4") - return "my_gif_video.mp4" - else: - pass - - -def create_video(frames, fps, type): - print("building video result") - clip = ImageSequenceClip(frames, fps=fps) - clip.write_videofile(type + "_result.mp4", fps=fps) - - return type + "_result.mp4" - - -def infer(prompt,video_in, control_task, seed_in, trim_value, ddim_steps, scale, low_threshold, high_threshold, value_threshold, distance_threshold, bg_threshold, gif_import): - if(is_shared_ui): - raise gr.Error("This Space doesn't work on this shared UI.") - print(f""" - ——————————————— - {prompt} 
- ———————————————""") - - # 1. break video into frames and get FPS - break_vid = get_frames(video_in) - frames_list= break_vid[0] - fps = break_vid[1] - n_frame = int(trim_value*fps) - - if n_frame >= len(frames_list): - print("video is shorter than the cut value") - n_frame = len(frames_list) - - # 2. prepare frames result arrays - processor_result_frames = [] - result_frames = [] - print("set stop frames to: " + str(n_frame)) - - for i in frames_list[0:int(n_frame)]: - controlnet_img = controlnet(i, prompt,control_task, seed_in, ddim_steps, scale, low_threshold, high_threshold, value_threshold, distance_threshold, bg_threshold) - #images = controlnet_img[0] - #rgb_im = images[0].convert("RGB") - - # exporting the image - #rgb_im.save(f"result_img-{i}.jpg") - processor_result_frames.append(controlnet_img[1]) - result_frames.append(controlnet_img[0]) - print("frame " + i + "/" + str(n_frame) + ": done;") - - processor_vid = create_video(processor_result_frames, fps, "processor") - final_vid = create_video(result_frames, fps, "final") - - files = [processor_vid, final_vid] - if gif_import != None: - final_gif = VideoFileClip(final_vid) - final_gif.write_gif("final_result.gif") - final_gif = "final_result.gif" - - files.append(final_gif) - print("finished !") - - return final_vid, gr.Accordion.update(visible=True), gr.Video.update(value=processor_vid, visible=True), gr.File.update(value=files, visible=True), gr.Group.update(visible=True) - - -def clean(): - return gr.Accordion.update(visible=False),gr.Video.update(value=None, visible=False), gr.Video.update(value=None), gr.File.update(value=None, visible=False), gr.Group.update(visible=False) - -title = """ -
    -
    -

    - ControlNet Video -

    -
    -

    - Apply ControlNet to a video -

    -
    -""" - -article = """ - - -
    -

    You may also like:

    -
    - - - - - - - -
    - -
    - -""" - -with gr.Blocks(css='style.css') as demo: - if(is_shared_ui): - with gr.Box(): - top_description = gr.HTML(f''' -
    -

    Attention - This Space doesn't work in this shared UI

    -

    For it to work, you can access the original or duplicate this Space and run it on your own profile using a GPU.  Duplicate Space

    -
    - ''') - with gr.Column(elem_id="col-container"): - gr.HTML(title) - gr.HTML(""" - Duplicate Space - """, elem_id="duplicate-container") - with gr.Row(): - with gr.Column(): - video_inp = gr.Video(label="Video source", source="upload", type="filepath", elem_id="input-vid") - video_out = gr.Video(label="ControlNet video result", elem_id="video-output") - - with gr.Group(elem_id="share-btn-container", visible=False) as share_group: - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - with gr.Accordion("Detailed results", visible=False) as detailed_result: - prep_video_out = gr.Video(label="Preprocessor video result", visible=False, elem_id="prep-video-output") - files = gr.File(label="Files can be downloaded ;)", visible=False) - - with gr.Column(): - #status = gr.Textbox() - - prompt = gr.Textbox(label="Prompt", placeholder="enter prompt", show_label=True, elem_id="prompt-in") - - with gr.Row(): - control_task = gr.Dropdown(label="Control Task", choices=["Canny", "Depth", "Hed", "Hough", "Normal", "Pose", "Scribble", "Seg"], value="Pose", multiselect=False, elem_id="controltask-in") - seed_inp = gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, value=123456, elem_id="seed-in") - - with gr.Row(): - trim_in = gr.Slider(label="Cut video at (s)", minimun=1, maximum=5, step=1, value=1) - - with gr.Accordion("Advanced Options", open=False): - with gr.Tab("Diffusion Settings"): - with gr.Row(visible=False) as canny_opt: - low_threshold = gr.Slider(label='Canny low threshold', minimum=1, maximum=255, value=100, step=1) - high_threshold = gr.Slider(label='Canny high threshold', minimum=1, maximum=255, value=200, step=1) - - with gr.Row(visible=False) as hough_opt: - value_threshold = gr.Slider(label='Hough value threshold (MLSD)', minimum=0.01, maximum=2.0, value=0.1, step=0.01) - distance_threshold = gr.Slider(label='Hough distance threshold (MLSD)', minimum=0.01, maximum=20.0, value=0.1, step=0.01) - - with gr.Row(visible=False) as normal_opt: - bg_threshold = gr.Slider(label='Normal background threshold', minimum=0.0, maximum=1.0, value=0.4, step=0.01) - - ddim_steps = gr.Slider(label='Steps', minimum=1, maximum=100, value=20, step=1) - scale = gr.Slider(label='Guidance Scale', minimum=0.1, maximum=30.0, value=9.0, step=0.1) - - with gr.Tab("GIF import"): - gif_import = gr.File(label="import a GIF instead", file_types=['.gif']) - gif_import.change(convert, gif_import, video_inp, queue=False) - - with gr.Tab("Custom Model"): - current_base_model = gr.Text(label='Current base model', - value="" if is_shared_ui else DEFAULT_BASE_MODEL_URL) - with gr.Row(): - with gr.Column(): - base_model_repo = gr.Text(label='Base model repo', - max_lines=1, - placeholder="" if is_shared_ui else DEFAULT_BASE_MODEL_REPO, - interactive=True) - base_model_filename = gr.Text( - label='Base model file', - max_lines=1, - placeholder="" if is_shared_ui else DEFAULT_BASE_MODEL_FILENAME, - interactive=True) - change_base_model_button = gr.Button('Change base model') - - gr.HTML( - '''

    You can use other base models by specifying the repository name and filename.
    - The base model must be compatible with Stable Diffusion v1.5.

    ''') - if(not is_shared_ui): - change_base_model_button.click(fn=model.set_base_model, - inputs=[ - base_model_repo, - base_model_filename, - ], - outputs=current_base_model, queue=False) - - submit_btn = gr.Button("Generate ControlNet video") - - inputs = [prompt,video_inp,control_task, seed_inp, trim_in, ddim_steps, scale, low_threshold, high_threshold, value_threshold, distance_threshold, bg_threshold, gif_import] - outputs = [video_out, detailed_result, prep_video_out, files, share_group] - #outputs = [status] - - - gr.HTML(article) - control_task.change(change_task_options, inputs=[control_task], outputs=[canny_opt, hough_opt, normal_opt], queue=False) - submit_btn.click(clean, inputs=[], outputs=[detailed_result, prep_video_out, video_out, files, share_group], queue=False) - submit_btn.click(infer, inputs, outputs) - share_button.click(None, [], [], _js=share_js) - - - -demo.queue(max_size=12).launch() \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/__init__.py b/spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AUST001/video/app.py b/spaces/AUST001/video/app.py deleted file mode 100644 index a8d6fe5320b9b16ef4c9b405518744bc0c2acc07..0000000000000000000000000000000000000000 --- a/spaces/AUST001/video/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import gradio as gr -import urllib.request - -url = 'http://aust001.pythonanywhere.com/photo/test.avi' -def to_black(text): - if text=='love': - urllib.request.urlretrieve(url, 'uu.avi') - return 'uu.avi' - -interface = gr.Interface(fn=to_black, inputs="text", outputs="video") -interface.launch() \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Buttons.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Buttons.d.ts deleted file mode 100644 index 06bf44dc463eefb065beceee1912bdb51a1f6706..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Buttons.d.ts +++ /dev/null @@ -1,95 +0,0 @@ -// import * as Phaser from 'phaser'; -import Sizer from '../sizer/Sizer'; -import { IConfig as IConfigButtons } from '../utils/buttongroup/Buttons'; - -export default Buttons; - -declare namespace Buttons { - - type AlignTypes = 'left' | 'top' | 'right' | 'bottom' | 'center'; - - interface IConfig extends Sizer.IConfig, IConfigButtons { - background?: Phaser.GameObjects.GameObject, - - buttons?: Phaser.GameObjects.GameObject[], - - expand?: boolean, - - align?: AlignTypes, - } -} - -declare class Buttons extends Sizer { - constructor( - scene: Phaser.Scene, - config?: Buttons.IConfig - ); - - emitButtonClick( - index: number | Phaser.GameObjects.GameObject - ): this; - - setButtonEnable( - index?: number | Phaser.GameObjects.GameObject | boolean, - enable?: boolean - ): this; - - toggleButtonEnable( - index?: number | Phaser.GameObjects.GameObject - ): this; - - getButtonEnable( - index: number | Phaser.GameObjects.GameObject - ): boolean; - - getButton( - index: number - ): Phaser.GameObjects.GameObject | null; - - addButton( - gameObject: Phaser.GameObjects.GameObject - ): this; - - removeButton( - gameObject: Phaser.GameObjects.GameObject, - destroyChild?: boolean - ): this; - - clearButtons( - destroyChild?: boolean - ): this; - - showButton( - index: number | Phaser.GameObjects.GameObject - ): 
this; - - hideButton( - index: number | Phaser.GameObjects.GameObject - ): this; - - forEachButtton( - callback: (button: Phaser.GameObjects.GameObject, index: number, buttons: Phaser.GameObjects.GameObject[]) => void, - scop?: unknown - ): this; - - readonly buttons: Phaser.GameObjects.GameObject[]; - - value: unknown; - - setSelectedButtonName( - name: string - ): this; - - getSelectedButtonName(): string; - - setButtonState( - name: string, - state?: boolean - ): this; - - getButtonState( - name: string - ): boolean; - - getAllButtonsState(): { [name: string]: boolean }; -} \ No newline at end of file diff --git a/spaces/AliHaider0343/implicit-and-explicit-aspects-Extraction-in-Restaurant-Reviews-Domain/app.py b/spaces/AliHaider0343/implicit-and-explicit-aspects-Extraction-in-Restaurant-Reviews-Domain/app.py deleted file mode 100644 index 880fd74f59896b7f16d5bfa7cd5dccaf3e044428..0000000000000000000000000000000000000000 --- a/spaces/AliHaider0343/implicit-and-explicit-aspects-Extraction-in-Restaurant-Reviews-Domain/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import torch -import streamlit as st -from transformers import RobertaTokenizer, RobertaForSequenceClassification -import re -import string - - - -def tokenize_sentences(sentence): - encoded_dict = tokenizer.encode_plus( - sentence, - add_special_tokens=True, - max_length=128, - padding='max_length', - truncation=True, - return_attention_mask=True, - return_tensors='pt' - ) - return torch.cat([encoded_dict['input_ids']], dim=0), torch.cat([encoded_dict['attention_mask']], dim=0) - - - -def preprocess_query(query): - query = str(query).lower() - query = query.strip() - query=query.translate(str.maketrans("", "", string.punctuation)) - return query - -def predict_aspects(sentence, threshold): - input_ids, attention_mask = tokenize_sentences(sentence) - with torch.no_grad(): - outputs = aspects_model(input_ids, attention_mask=attention_mask) - logits = outputs.logits - predicted_aspects = torch.sigmoid(logits).squeeze().tolist() - results = dict() - for label, prediction in zip(LABEL_COLUMNS_ASPECTS, predicted_aspects): - if prediction < threshold: - continue - precentage = round(float(prediction) * 100, 2) - results[label] = precentage - return results - -# Load tokenizer and model -BERT_MODEL_NAME_FOR_ASPECTS_CLASSIFICATION = 'roberta-large' -tokenizer = RobertaTokenizer.from_pretrained(BERT_MODEL_NAME_FOR_ASPECTS_CLASSIFICATION, do_lower_case=True) - -LABEL_COLUMNS_ASPECTS = ['FOOD-CUISINE', 'FOOD-DEALS', 'FOOD-DIET_OPTION', 'FOOD-EXPERIENCE', 'FOOD-FLAVOR', 'FOOD-GENERAL', 'FOOD-INGREDIENT', 'FOOD-KITCHEN', 'FOOD-MEAL', 'FOOD-MENU', 'FOOD-PORTION', 'FOOD-PRESENTATION', 'FOOD-PRICE', 'FOOD-QUALITY', 'FOOD-RECOMMENDATION', 'FOOD-TASTE', 'GENERAL-GENERAL', 'RESTAURANT-ATMOSPHERE', 'RESTAURANT-BUILDING', 'RESTAURANT-DECORATION', 'RESTAURANT-EXPERIENCE', 'RESTAURANT-FEATURES', 'RESTAURANT-GENERAL', 'RESTAURANT-HYGIENE', 'RESTAURANT-KITCHEN', 'RESTAURANT-LOCATION', 'RESTAURANT-OPTIONS', 'RESTAURANT-RECOMMENDATION', 'RESTAURANT-SEATING_PLAN', 'RESTAURANT-VIEW', 'SERVICE-BEHAVIOUR', 'SERVICE-EXPERIENCE', 'SERVICE-GENERAL', 'SERVICE-WAIT_TIME'] - -aspects_model = RobertaForSequenceClassification.from_pretrained(BERT_MODEL_NAME_FOR_ASPECTS_CLASSIFICATION, num_labels=len(LABEL_COLUMNS_ASPECTS)) -aspects_model.load_state_dict(torch.load('./Aspects_Extraction_Model_updated.pth', map_location=torch.device('cpu'))) -aspects_model.eval() - -# Streamlit App -st.title("Implicit and Explicit Aspect Extraction") - -sentence = st.text_input("Enter a 
sentence:") -threshold = st.slider("Threshold", min_value=0.0, max_value=1.0, step=0.01, value=0.5) - -if sentence: - processed_sentence = preprocess_query(sentence) - results = predict_aspects(processed_sentence, threshold) - if len(results) > 0: - st.write("Predicted Aspects:") - table_data = [["Category","Aspect", "Probability"]] - for aspect, percentage in results.items(): - aspect_parts = aspect.split("-") - table_data.append(aspect_parts + [f"{percentage}%"]) - st.table(table_data) - else: - st.write("No aspects above the threshold.") - diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/conftest.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/conftest.py deleted file mode 100644 index 3a48d18d1cc739f3fbf52c84a9c77afbf5694803..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/conftest.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# tests directory-specific settings - this file is run automatically -# by pytest before any tests are run - -import sys -import warnings -from os.path import abspath, dirname, join - - -# allow having multiple repository checkouts and not needing to remember to rerun -# 'pip install -e .[dev]' when switching between checkouts and running tests. -git_repo_path = abspath(join(dirname(dirname(dirname(__file__))), "src")) -sys.path.insert(1, git_repo_path) - - -# silence FutureWarning warnings in tests since often we can't act on them until -# they become normal warnings - i.e. the tests still need to test the current functionality -warnings.simplefilter(action="ignore", category=FutureWarning) - - -def pytest_addoption(parser): - from diffusers.utils.testing_utils import pytest_addoption_shared - - pytest_addoption_shared(parser) - - -def pytest_terminal_summary(terminalreporter): - from diffusers.utils.testing_utils import pytest_terminal_summary_main - - make_reports = terminalreporter.config.getoption("--make-reports") - if make_reports: - pytest_terminal_summary_main(terminalreporter, id=make_reports) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/dependency_versions_table.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/dependency_versions_table.py deleted file mode 100644 index b26404bdec892f4e71338b2c1865c18924de3cd3..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/dependency_versions_table.py +++ /dev/null @@ -1,44 +0,0 @@ -# THIS FILE HAS BEEN AUTOGENERATED. To update: -# 1. modify the `_deps` dict in setup.py -# 2. 
run `make deps_table_update`` -deps = { - "Pillow": "Pillow", - "accelerate": "accelerate>=0.11.0", - "compel": "compel==0.1.8", - "black": "black~=23.1", - "datasets": "datasets", - "filelock": "filelock", - "flax": "flax>=0.4.1", - "hf-doc-builder": "hf-doc-builder>=0.3.0", - "huggingface-hub": "huggingface-hub>=0.13.2", - "requests-mock": "requests-mock==1.10.0", - "importlib_metadata": "importlib_metadata", - "invisible-watermark": "invisible-watermark>=0.2.0", - "isort": "isort>=5.5.4", - "jax": "jax>=0.2.8,!=0.3.2", - "jaxlib": "jaxlib>=0.1.65", - "Jinja2": "Jinja2", - "k-diffusion": "k-diffusion>=0.0.12", - "torchsde": "torchsde", - "note_seq": "note_seq", - "librosa": "librosa", - "numpy": "numpy", - "omegaconf": "omegaconf", - "parameterized": "parameterized", - "protobuf": "protobuf>=3.20.3,<4", - "pytest": "pytest", - "pytest-timeout": "pytest-timeout", - "pytest-xdist": "pytest-xdist", - "ruff": "ruff>=0.0.241", - "safetensors": "safetensors>=0.3.1", - "sentencepiece": "sentencepiece>=0.1.91,!=0.1.92", - "scipy": "scipy", - "onnx": "onnx", - "regex": "regex!=2019.12.17", - "requests": "requests", - "tensorboard": "tensorboard", - "torch": "torch>=1.4", - "torchvision": "torchvision", - "transformers": "transformers>=4.25.1", - "urllib3": "urllib3<=2.0.0", -} diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d_flax.py deleted file mode 100644 index 69a0704dca9dae32a7d612b82cbedc0454a0a1b5..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_unet_2d_flax.py +++ /dev/null @@ -1,104 +0,0 @@ -import gc -import unittest - -from parameterized import parameterized - -from diffusers import FlaxUNet2DConditionModel -from diffusers.utils import is_flax_available -from diffusers.utils.testing_utils import load_hf_numpy, require_flax, slow - - -if is_flax_available(): - import jax - import jax.numpy as jnp - - -@slow -@require_flax -class FlaxUNet2DConditionModelIntegrationTests(unittest.TestCase): - def get_file_format(self, seed, shape): - return f"gaussian_noise_s={seed}_shape={'_'.join([str(s) for s in shape])}.npy" - - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - - def get_latents(self, seed=0, shape=(4, 4, 64, 64), fp16=False): - dtype = jnp.bfloat16 if fp16 else jnp.float32 - image = jnp.array(load_hf_numpy(self.get_file_format(seed, shape)), dtype=dtype) - return image - - def get_unet_model(self, fp16=False, model_id="CompVis/stable-diffusion-v1-4"): - dtype = jnp.bfloat16 if fp16 else jnp.float32 - revision = "bf16" if fp16 else None - - model, params = FlaxUNet2DConditionModel.from_pretrained( - model_id, subfolder="unet", dtype=dtype, revision=revision - ) - return model, params - - def get_encoder_hidden_states(self, seed=0, shape=(4, 77, 768), fp16=False): - dtype = jnp.bfloat16 if fp16 else jnp.float32 - hidden_states = jnp.array(load_hf_numpy(self.get_file_format(seed, shape)), dtype=dtype) - return hidden_states - - @parameterized.expand( - [ - # fmt: off - [83, 4, [-0.2323, -0.1304, 0.0813, -0.3093, -0.0919, -0.1571, -0.1125, -0.5806]], - [17, 0.55, [-0.0831, -0.2443, 0.0901, -0.0919, 0.3396, 0.0103, -0.3743, 0.0701]], - [8, 0.89, [-0.4863, 0.0859, 0.0875, -0.1658, 0.9199, -0.0114, 0.4839, 0.4639]], - [3, 1000, [-0.5649, 0.2402, -0.5518, 0.1248, 1.1328, -0.2443, -0.0325, -1.0078]], - # fmt: on - ] - ) - def 
test_compvis_sd_v1_4_flax_vs_torch_fp16(self, seed, timestep, expected_slice): - model, params = self.get_unet_model(model_id="CompVis/stable-diffusion-v1-4", fp16=True) - latents = self.get_latents(seed, fp16=True) - encoder_hidden_states = self.get_encoder_hidden_states(seed, fp16=True) - - sample = model.apply( - {"params": params}, - latents, - jnp.array(timestep, dtype=jnp.int32), - encoder_hidden_states=encoder_hidden_states, - ).sample - - assert sample.shape == latents.shape - - output_slice = jnp.asarray(jax.device_get((sample[-1, -2:, -2:, :2].flatten())), dtype=jnp.float32) - expected_output_slice = jnp.array(expected_slice, dtype=jnp.float32) - - # Found torch (float16) and flax (bfloat16) outputs to be within this tolerance, in the same hardware - assert jnp.allclose(output_slice, expected_output_slice, atol=1e-2) - - @parameterized.expand( - [ - # fmt: off - [83, 4, [0.1514, 0.0807, 0.1624, 0.1016, -0.1896, 0.0263, 0.0677, 0.2310]], - [17, 0.55, [0.1164, -0.0216, 0.0170, 0.1589, -0.3120, 0.1005, -0.0581, -0.1458]], - [8, 0.89, [-0.1758, -0.0169, 0.1004, -0.1411, 0.1312, 0.1103, -0.1996, 0.2139]], - [3, 1000, [0.1214, 0.0352, -0.0731, -0.1562, -0.0994, -0.0906, -0.2340, -0.0539]], - # fmt: on - ] - ) - def test_stabilityai_sd_v2_flax_vs_torch_fp16(self, seed, timestep, expected_slice): - model, params = self.get_unet_model(model_id="stabilityai/stable-diffusion-2", fp16=True) - latents = self.get_latents(seed, shape=(4, 4, 96, 96), fp16=True) - encoder_hidden_states = self.get_encoder_hidden_states(seed, shape=(4, 77, 1024), fp16=True) - - sample = model.apply( - {"params": params}, - latents, - jnp.array(timestep, dtype=jnp.int32), - encoder_hidden_states=encoder_hidden_states, - ).sample - - assert sample.shape == latents.shape - - output_slice = jnp.asarray(jax.device_get((sample[-1, -2:, -2:, :2].flatten())), dtype=jnp.float32) - expected_output_slice = jnp.array(expected_slice, dtype=jnp.float32) - - # Found torch (float16) and flax (bfloat16) outputs to be within this tolerance, on the same hardware - assert jnp.allclose(output_slice, expected_output_slice, atol=1e-2) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco.py deleted file mode 100644 index 6c38266f1b3e9a85a88f389a1410638b00b17368..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py' -model = dict( - pretrained='open-mmlab://detectron/resnet101_caffe', - backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py deleted file mode 100644 index 2fdc53c8c04c12bed16a31281127f9774bb70b64..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = './grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch')) diff --git 
a/spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1.py b/spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1.py deleted file mode 100644 index 188186502d56674fa4e6073b39819a209b9a2c1f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-8GF_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-8GF_fpn_1x_coco.py deleted file mode 100644 index b5890264672f0996d98db422365746e85fcea8e6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/regnet/mask_rcnn_regnetx-8GF_fpn_1x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://regnetx_8.0gf', - backbone=dict( - type='RegNet', - arch='regnetx_8.0gf', - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[80, 240, 720, 1920], - out_channels=256, - num_outs=5)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/scnet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/scnet/README.md deleted file mode 100644 index 1749df0cb7858b555a5e6877b09a9bf7a35264e3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/scnet/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# SCNet - -## Introduction - -[ALGORITHM] - -We provide the code for reproducing experiment results of [SCNet](https://arxiv.org/abs/2012.10150). - -``` -@inproceedings{vu2019cascade, - title={SCNet: Training Inference Sample Consistency for Instance Segmentation}, - author={Vu, Thang and Haeyong, Kang and Yoo, Chang D}, - booktitle={AAAI}, - year={2021} -} -``` - -## Dataset - -SCNet requires COCO and [COCO-stuff](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip) dataset for training. You need to download and extract it in the COCO dataset path. -The directory should be like this. - -```none -mmdetection -├── mmdet -├── tools -├── configs -├── data -│ ├── coco -│ │ ├── annotations -│ │ ├── train2017 -│ │ ├── val2017 -│ │ ├── test2017 -| | ├── stuffthingmaps -``` - -## Results and Models - -The results on COCO 2017val are shown in the below table. 
(results on test-dev are usually slightly higher than val) - -| Backbone | Style | Lr schd | Mem (GB) | Inf speed (fps) | box AP | mask AP | TTA box AP | TTA mask AP | Config | Download | -|:---------------:|:-------:|:-------:|:--------:|:---------------:|:------:|:-------:|:----------:|:-----------:|:------:|:------------:| -| R-50-FPN | pytorch | 1x | 7.0 | 6.2 | 43.5 | 39.2 | 44.8 | 40.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py) | [model](https://drive.google.com/file/d/1K5_8-P0EC43WZFtoO3q9_JE-df8pEc7J/view?usp=sharing) \| [log](https://drive.google.com/file/d/1ZFS6QhFfxlOnDYPiGpSDP_Fzgb7iDGN3/view?usp=sharing) | -| R-50-FPN | pytorch | 20e | 7.0 | 6.2 | 44.5 | 40.0 | 45.8 | 41.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_20e_coco.py) | [model](https://drive.google.com/file/d/15VGLCt5-IO5TbzB4Kw6ZyoF6QH0Q511A/view?usp=sharing) \| [log](https://drive.google.com/file/d/1-LnkOXN8n5ojQW34H0qZ625cgrnWpqSX/view?usp=sharing) | -| R-101-FPN | pytorch | 20e | 8.9 | 5.8 | 45.8 | 40.9 | 47.3 | 42.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r101_fpn_20e_coco.py) | [model](https://drive.google.com/file/d/1aeCGHsOBdfIqVBnBPp0JUE_RSIau3583/view?usp=sharing) \| [log](https://drive.google.com/file/d/1iRx-9GRgTaIDsz-we3DGwFVH22nbvCLa/view?usp=sharing) | -| X-101-64x4d-FPN | pytorch | 20e | 13.2 | 4.9 | 47.5 | 42.3 | 48.9 | 44.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_x101_64x4d_fpn_20e_coco.py) | [model](https://drive.google.com/file/d/1YjgutUKz4TTPpqSWGKUTkZJ8_X-kyCfY/view?usp=sharing) \| [log](https://drive.google.com/file/d/1OsfQJ8gwtqIQ61k358yxY21sCvbUcRjs/view?usp=sharing) | - -### Notes - -- Training hyper-parameters are identical to those of [HTC](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc). -- TTA means Test Time Augmentation, which applies horizonal flip and multi-scale testing. Refer to [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py). diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/gaussian_focal_loss.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/gaussian_focal_loss.py deleted file mode 100644 index e45506a38e8e3c187be8288d0b714cc1ee29cf27..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/losses/gaussian_focal_loss.py +++ /dev/null @@ -1,91 +0,0 @@ -import mmcv -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def gaussian_focal_loss(pred, gaussian_target, alpha=2.0, gamma=4.0): - """`Focal Loss `_ for targets in gaussian - distribution. - - Args: - pred (torch.Tensor): The prediction. - gaussian_target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - alpha (float, optional): A balanced form for Focal Loss. - Defaults to 2.0. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 4.0. 
- """ - eps = 1e-12 - pos_weights = gaussian_target.eq(1) - neg_weights = (1 - gaussian_target).pow(gamma) - pos_loss = -(pred + eps).log() * (1 - pred).pow(alpha) * pos_weights - neg_loss = -(1 - pred + eps).log() * pred.pow(alpha) * neg_weights - return pos_loss + neg_loss - - -@LOSSES.register_module() -class GaussianFocalLoss(nn.Module): - """GaussianFocalLoss is a variant of focal loss. - - More details can be found in the `paper - `_ - Code is modified from `kp_utils.py - `_ # noqa: E501 - Please notice that the target in GaussianFocalLoss is a gaussian heatmap, - not 0/1 binary target. - - Args: - alpha (float): Power of prediction. - gamma (float): Power of target for negative samples. - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Loss weight of current loss. - """ - - def __init__(self, - alpha=2.0, - gamma=4.0, - reduction='mean', - loss_weight=1.0): - super(GaussianFocalLoss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction - in gaussian distribution. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_reg = self.loss_weight * gaussian_focal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - reduction=reduction, - avg_factor=avg_factor) - return loss_reg diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/builder.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/builder.py deleted file mode 100644 index f362d1c92ca9d4ed95a2b3d28d3e6baedd14e462..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/builder.py +++ /dev/null @@ -1,14 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -TRANSFORMER = Registry('Transformer') -POSITIONAL_ENCODING = Registry('Position encoding') - - -def build_transformer(cfg, default_args=None): - """Builder for Transformer.""" - return build_from_cfg(cfg, TRANSFORMER, default_args) - - -def build_positional_encoding(cfg, default_args=None): - """Builder for Position Encoding.""" - return build_from_cfg(cfg, POSITIONAL_ENCODING, default_args) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 1491e3b8247c9d163d6016caf2fcd8043a053b7e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3plus_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(num_classes=150), 
auxiliary_head=dict(num_classes=150)) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_40k.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_40k.py deleted file mode 100644 index cdbf841abcb26eed87bf76ab816aff4bae0630ee..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/schedules/schedule_40k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=40000) -checkpoint_config = dict(by_epoch=False, interval=4000) -evaluation = dict(interval=4000, metric='mIoU') diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/version.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/version.py deleted file mode 100644 index c7c8bb6ff4f8ed84e466a66cac6b953b901626ea..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/distlib/version.py +++ /dev/null @@ -1,739 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -""" -Implementation of a flexible versioning scheme providing support for PEP-440, -setuptools-compatible and semantic versioning. -""" - -import logging -import re - -from .compat import string_types -from .util import parse_requirement - -__all__ = ['NormalizedVersion', 'NormalizedMatcher', - 'LegacyVersion', 'LegacyMatcher', - 'SemanticVersion', 'SemanticMatcher', - 'UnsupportedVersionError', 'get_scheme'] - -logger = logging.getLogger(__name__) - - -class UnsupportedVersionError(ValueError): - """This is an unsupported version.""" - pass - - -class Version(object): - def __init__(self, s): - self._string = s = s.strip() - self._parts = parts = self.parse(s) - assert isinstance(parts, tuple) - assert len(parts) > 0 - - def parse(self, s): - raise NotImplementedError('please implement in a subclass') - - def _check_compatible(self, other): - if type(self) != type(other): - raise TypeError('cannot compare %r and %r' % (self, other)) - - def __eq__(self, other): - self._check_compatible(other) - return self._parts == other._parts - - def __ne__(self, other): - return not self.__eq__(other) - - def __lt__(self, other): - self._check_compatible(other) - return self._parts < other._parts - - def __gt__(self, other): - return not (self.__lt__(other) or self.__eq__(other)) - - def __le__(self, other): - return self.__lt__(other) or self.__eq__(other) - - def __ge__(self, other): - return self.__gt__(other) or self.__eq__(other) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - def __hash__(self): - return hash(self._parts) - - def __repr__(self): - return "%s('%s')" % (self.__class__.__name__, self._string) - - def __str__(self): - return self._string - - @property - def is_prerelease(self): - raise NotImplementedError('Please implement in subclasses.') - - -class Matcher(object): - version_class = None - - # value is either a callable or the name of a method - _operators = { - '<': lambda v, c, p: v < c, - '>': lambda v, c, p: v > c, - '<=': lambda v, c, p: v == c or v < c, - '>=': lambda v, c, 
p: v == c or v > c, - '==': lambda v, c, p: v == c, - '===': lambda v, c, p: v == c, - # by default, compatible => >=. - '~=': lambda v, c, p: v == c or v > c, - '!=': lambda v, c, p: v != c, - } - - # this is a method only to support alternative implementations - # via overriding - def parse_requirement(self, s): - return parse_requirement(s) - - def __init__(self, s): - if self.version_class is None: - raise ValueError('Please specify a version class') - self._string = s = s.strip() - r = self.parse_requirement(s) - if not r: - raise ValueError('Not valid: %r' % s) - self.name = r.name - self.key = self.name.lower() # for case-insensitive comparisons - clist = [] - if r.constraints: - # import pdb; pdb.set_trace() - for op, s in r.constraints: - if s.endswith('.*'): - if op not in ('==', '!='): - raise ValueError('\'.*\' not allowed for ' - '%r constraints' % op) - # Could be a partial version (e.g. for '2.*') which - # won't parse as a version, so keep it as a string - vn, prefix = s[:-2], True - # Just to check that vn is a valid version - self.version_class(vn) - else: - # Should parse as a version, so we can create an - # instance for the comparison - vn, prefix = self.version_class(s), False - clist.append((op, vn, prefix)) - self._parts = tuple(clist) - - def match(self, version): - """ - Check if the provided version matches the constraints. - - :param version: The version to match against this instance. - :type version: String or :class:`Version` instance. - """ - if isinstance(version, string_types): - version = self.version_class(version) - for operator, constraint, prefix in self._parts: - f = self._operators.get(operator) - if isinstance(f, string_types): - f = getattr(self, f) - if not f: - msg = ('%r not implemented ' - 'for %s' % (operator, self.__class__.__name__)) - raise NotImplementedError(msg) - if not f(version, constraint, prefix): - return False - return True - - @property - def exact_version(self): - result = None - if len(self._parts) == 1 and self._parts[0][0] in ('==', '==='): - result = self._parts[0][1] - return result - - def _check_compatible(self, other): - if type(self) != type(other) or self.name != other.name: - raise TypeError('cannot compare %s and %s' % (self, other)) - - def __eq__(self, other): - self._check_compatible(other) - return self.key == other.key and self._parts == other._parts - - def __ne__(self, other): - return not self.__eq__(other) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - def __hash__(self): - return hash(self.key) + hash(self._parts) - - def __repr__(self): - return "%s(%r)" % (self.__class__.__name__, self._string) - - def __str__(self): - return self._string - - -PEP440_VERSION_RE = re.compile(r'^v?(\d+!)?(\d+(\.\d+)*)((a|b|c|rc)(\d+))?' - r'(\.(post)(\d+))?(\.(dev)(\d+))?' 
- r'(\+([a-zA-Z\d]+(\.[a-zA-Z\d]+)?))?$') - - -def _pep_440_key(s): - s = s.strip() - m = PEP440_VERSION_RE.match(s) - if not m: - raise UnsupportedVersionError('Not a valid version: %s' % s) - groups = m.groups() - nums = tuple(int(v) for v in groups[1].split('.')) - while len(nums) > 1 and nums[-1] == 0: - nums = nums[:-1] - - if not groups[0]: - epoch = 0 - else: - epoch = int(groups[0][:-1]) - pre = groups[4:6] - post = groups[7:9] - dev = groups[10:12] - local = groups[13] - if pre == (None, None): - pre = () - else: - pre = pre[0], int(pre[1]) - if post == (None, None): - post = () - else: - post = post[0], int(post[1]) - if dev == (None, None): - dev = () - else: - dev = dev[0], int(dev[1]) - if local is None: - local = () - else: - parts = [] - for part in local.split('.'): - # to ensure that numeric compares as > lexicographic, avoid - # comparing them directly, but encode a tuple which ensures - # correct sorting - if part.isdigit(): - part = (1, int(part)) - else: - part = (0, part) - parts.append(part) - local = tuple(parts) - if not pre: - # either before pre-release, or final release and after - if not post and dev: - # before pre-release - pre = ('a', -1) # to sort before a0 - else: - pre = ('z',) # to sort after all pre-releases - # now look at the state of post and dev. - if not post: - post = ('_',) # sort before 'a' - if not dev: - dev = ('final',) - - #print('%s -> %s' % (s, m.groups())) - return epoch, nums, pre, post, dev, local - - -_normalized_key = _pep_440_key - - -class NormalizedVersion(Version): - """A rational version. - - Good: - 1.2 # equivalent to "1.2.0" - 1.2.0 - 1.2a1 - 1.2.3a2 - 1.2.3b1 - 1.2.3c1 - 1.2.3.4 - TODO: fill this out - - Bad: - 1 # minimum two numbers - 1.2a # release level must have a release serial - 1.2.3b - """ - def parse(self, s): - result = _normalized_key(s) - # _normalized_key loses trailing zeroes in the release - # clause, since that's needed to ensure that X.Y == X.Y.0 == X.Y.0.0 - # However, PEP 440 prefix matching needs it: for example, - # (~= 1.4.5.0) matches differently to (~= 1.4.5.0.0). - m = PEP440_VERSION_RE.match(s) # must succeed - groups = m.groups() - self._release_clause = tuple(int(v) for v in groups[1].split('.')) - return result - - PREREL_TAGS = set(['a', 'b', 'c', 'rc', 'dev']) - - @property - def is_prerelease(self): - return any(t[0] in self.PREREL_TAGS for t in self._parts if t) - - -def _match_prefix(x, y): - x = str(x) - y = str(y) - if x == y: - return True - if not x.startswith(y): - return False - n = len(y) - return x[n] == '.' - - -class NormalizedMatcher(Matcher): - version_class = NormalizedVersion - - # value is either a callable or the name of a method - _operators = { - '~=': '_match_compatible', - '<': '_match_lt', - '>': '_match_gt', - '<=': '_match_le', - '>=': '_match_ge', - '==': '_match_eq', - '===': '_match_arbitrary', - '!=': '_match_ne', - } - - def _adjust_local(self, version, constraint, prefix): - if prefix: - strip_local = '+' not in constraint and version._parts[-1] - else: - # both constraint and version are - # NormalizedVersion instances. - # If constraint does not have a local component, - # ensure the version doesn't, either. 
- strip_local = not constraint._parts[-1] and version._parts[-1] - if strip_local: - s = version._string.split('+', 1)[0] - version = self.version_class(s) - return version, constraint - - def _match_lt(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if version >= constraint: - return False - release_clause = constraint._release_clause - pfx = '.'.join([str(i) for i in release_clause]) - return not _match_prefix(version, pfx) - - def _match_gt(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if version <= constraint: - return False - release_clause = constraint._release_clause - pfx = '.'.join([str(i) for i in release_clause]) - return not _match_prefix(version, pfx) - - def _match_le(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - return version <= constraint - - def _match_ge(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - return version >= constraint - - def _match_eq(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if not prefix: - result = (version == constraint) - else: - result = _match_prefix(version, constraint) - return result - - def _match_arbitrary(self, version, constraint, prefix): - return str(version) == str(constraint) - - def _match_ne(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if not prefix: - result = (version != constraint) - else: - result = not _match_prefix(version, constraint) - return result - - def _match_compatible(self, version, constraint, prefix): - version, constraint = self._adjust_local(version, constraint, prefix) - if version == constraint: - return True - if version < constraint: - return False -# if not prefix: -# return True - release_clause = constraint._release_clause - if len(release_clause) > 1: - release_clause = release_clause[:-1] - pfx = '.'.join([str(i) for i in release_clause]) - return _match_prefix(version, pfx) - -_REPLACEMENTS = ( - (re.compile('[.+-]$'), ''), # remove trailing puncts - (re.compile(r'^[.](\d)'), r'0.\1'), # .N -> 0.N at start - (re.compile('^[.-]'), ''), # remove leading puncts - (re.compile(r'^\((.*)\)$'), r'\1'), # remove parentheses - (re.compile(r'^v(ersion)?\s*(\d+)'), r'\2'), # remove leading v(ersion) - (re.compile(r'^r(ev)?\s*(\d+)'), r'\2'), # remove leading v(ersion) - (re.compile('[.]{2,}'), '.'), # multiple runs of '.' - (re.compile(r'\b(alfa|apha)\b'), 'alpha'), # misspelt alpha - (re.compile(r'\b(pre-alpha|prealpha)\b'), - 'pre.alpha'), # standardise - (re.compile(r'\(beta\)$'), 'beta'), # remove parentheses -) - -_SUFFIX_REPLACEMENTS = ( - (re.compile('^[:~._+-]+'), ''), # remove leading puncts - (re.compile('[,*")([\\]]'), ''), # remove unwanted chars - (re.compile('[~:+_ -]'), '.'), # replace illegal chars - (re.compile('[.]{2,}'), '.'), # multiple runs of '.' - (re.compile(r'\.$'), ''), # trailing '.' -) - -_NUMERIC_PREFIX = re.compile(r'(\d+(\.\d+)*)') - - -def _suggest_semantic_version(s): - """ - Try to suggest a semantic form for a version for which - _suggest_normalized_version couldn't come up with anything. - """ - result = s.strip().lower() - for pat, repl in _REPLACEMENTS: - result = pat.sub(repl, result) - if not result: - result = '0.0.0' - - # Now look for numeric prefix, and separate it out from - # the rest. 
- #import pdb; pdb.set_trace() - m = _NUMERIC_PREFIX.match(result) - if not m: - prefix = '0.0.0' - suffix = result - else: - prefix = m.groups()[0].split('.') - prefix = [int(i) for i in prefix] - while len(prefix) < 3: - prefix.append(0) - if len(prefix) == 3: - suffix = result[m.end():] - else: - suffix = '.'.join([str(i) for i in prefix[3:]]) + result[m.end():] - prefix = prefix[:3] - prefix = '.'.join([str(i) for i in prefix]) - suffix = suffix.strip() - if suffix: - #import pdb; pdb.set_trace() - # massage the suffix. - for pat, repl in _SUFFIX_REPLACEMENTS: - suffix = pat.sub(repl, suffix) - - if not suffix: - result = prefix - else: - sep = '-' if 'dev' in suffix else '+' - result = prefix + sep + suffix - if not is_semver(result): - result = None - return result - - -def _suggest_normalized_version(s): - """Suggest a normalized version close to the given version string. - - If you have a version string that isn't rational (i.e. NormalizedVersion - doesn't like it) then you might be able to get an equivalent (or close) - rational version from this function. - - This does a number of simple normalizations to the given string, based - on observation of versions currently in use on PyPI. Given a dump of - those version during PyCon 2009, 4287 of them: - - 2312 (53.93%) match NormalizedVersion without change - with the automatic suggestion - - 3474 (81.04%) match when using this suggestion method - - @param s {str} An irrational version string. - @returns A rational version string, or None, if couldn't determine one. - """ - try: - _normalized_key(s) - return s # already rational - except UnsupportedVersionError: - pass - - rs = s.lower() - - # part of this could use maketrans - for orig, repl in (('-alpha', 'a'), ('-beta', 'b'), ('alpha', 'a'), - ('beta', 'b'), ('rc', 'c'), ('-final', ''), - ('-pre', 'c'), - ('-release', ''), ('.release', ''), ('-stable', ''), - ('+', '.'), ('_', '.'), (' ', ''), ('.final', ''), - ('final', '')): - rs = rs.replace(orig, repl) - - # if something ends with dev or pre, we add a 0 - rs = re.sub(r"pre$", r"pre0", rs) - rs = re.sub(r"dev$", r"dev0", rs) - - # if we have something like "b-2" or "a.2" at the end of the - # version, that is probably beta, alpha, etc - # let's remove the dash or dot - rs = re.sub(r"([abc]|rc)[\-\.](\d+)$", r"\1\2", rs) - - # 1.0-dev-r371 -> 1.0.dev371 - # 0.1-dev-r79 -> 0.1.dev79 - rs = re.sub(r"[\-\.](dev)[\-\.]?r?(\d+)$", r".\1\2", rs) - - # Clean: 2.0.a.3, 2.0.b1, 0.9.0~c1 - rs = re.sub(r"[.~]?([abc])\.?", r"\1", rs) - - # Clean: v0.3, v1.0 - if rs.startswith('v'): - rs = rs[1:] - - # Clean leading '0's on numbers. - #TODO: unintended side-effect on, e.g., "2003.05.09" - # PyPI stats: 77 (~2%) better - rs = re.sub(r"\b0+(\d+)(?!\d)", r"\1", rs) - - # Clean a/b/c with no version. E.g. "1.0a" -> "1.0a0". Setuptools infers - # zero. 
- # PyPI stats: 245 (7.56%) better - rs = re.sub(r"(\d+[abc])$", r"\g<1>0", rs) - - # the 'dev-rNNN' tag is a dev tag - rs = re.sub(r"\.?(dev-r|dev\.r)\.?(\d+)$", r".dev\2", rs) - - # clean the - when used as a pre delimiter - rs = re.sub(r"-(a|b|c)(\d+)$", r"\1\2", rs) - - # a terminal "dev" or "devel" can be changed into ".dev0" - rs = re.sub(r"[\.\-](dev|devel)$", r".dev0", rs) - - # a terminal "dev" can be changed into ".dev0" - rs = re.sub(r"(?![\.\-])dev$", r".dev0", rs) - - # a terminal "final" or "stable" can be removed - rs = re.sub(r"(final|stable)$", "", rs) - - # The 'r' and the '-' tags are post release tags - # 0.4a1.r10 -> 0.4a1.post10 - # 0.9.33-17222 -> 0.9.33.post17222 - # 0.9.33-r17222 -> 0.9.33.post17222 - rs = re.sub(r"\.?(r|-|-r)\.?(\d+)$", r".post\2", rs) - - # Clean 'r' instead of 'dev' usage: - # 0.9.33+r17222 -> 0.9.33.dev17222 - # 1.0dev123 -> 1.0.dev123 - # 1.0.git123 -> 1.0.dev123 - # 1.0.bzr123 -> 1.0.dev123 - # 0.1a0dev.123 -> 0.1a0.dev123 - # PyPI stats: ~150 (~4%) better - rs = re.sub(r"\.?(dev|git|bzr)\.?(\d+)$", r".dev\2", rs) - - # Clean '.pre' (normalized from '-pre' above) instead of 'c' usage: - # 0.2.pre1 -> 0.2c1 - # 0.2-c1 -> 0.2c1 - # 1.0preview123 -> 1.0c123 - # PyPI stats: ~21 (0.62%) better - rs = re.sub(r"\.?(pre|preview|-c)(\d+)$", r"c\g<2>", rs) - - # Tcl/Tk uses "px" for their post release markers - rs = re.sub(r"p(\d+)$", r".post\1", rs) - - try: - _normalized_key(rs) - except UnsupportedVersionError: - rs = None - return rs - -# -# Legacy version processing (distribute-compatible) -# - -_VERSION_PART = re.compile(r'([a-z]+|\d+|[\.-])', re.I) -_VERSION_REPLACE = { - 'pre': 'c', - 'preview': 'c', - '-': 'final-', - 'rc': 'c', - 'dev': '@', - '': None, - '.': None, -} - - -def _legacy_key(s): - def get_parts(s): - result = [] - for p in _VERSION_PART.split(s.lower()): - p = _VERSION_REPLACE.get(p, p) - if p: - if '0' <= p[:1] <= '9': - p = p.zfill(8) - else: - p = '*' + p - result.append(p) - result.append('*final') - return result - - result = [] - for p in get_parts(s): - if p.startswith('*'): - if p < '*final': - while result and result[-1] == '*final-': - result.pop() - while result and result[-1] == '00000000': - result.pop() - result.append(p) - return tuple(result) - - -class LegacyVersion(Version): - def parse(self, s): - return _legacy_key(s) - - @property - def is_prerelease(self): - result = False - for x in self._parts: - if (isinstance(x, string_types) and x.startswith('*') and - x < '*final'): - result = True - break - return result - - -class LegacyMatcher(Matcher): - version_class = LegacyVersion - - _operators = dict(Matcher._operators) - _operators['~='] = '_match_compatible' - - numeric_re = re.compile(r'^(\d+(\.\d+)*)') - - def _match_compatible(self, version, constraint, prefix): - if version < constraint: - return False - m = self.numeric_re.match(str(constraint)) - if not m: - logger.warning('Cannot compute compatible match for version %s ' - ' and constraint %s', version, constraint) - return True - s = m.groups()[0] - if '.' in s: - s = s.rsplit('.', 1)[0] - return _match_prefix(version, s) - -# -# Semantic versioning -# - -_SEMVER_RE = re.compile(r'^(\d+)\.(\d+)\.(\d+)' - r'(-[a-z0-9]+(\.[a-z0-9-]+)*)?' 
- r'(\+[a-z0-9]+(\.[a-z0-9-]+)*)?$', re.I) - - -def is_semver(s): - return _SEMVER_RE.match(s) - - -def _semantic_key(s): - def make_tuple(s, absent): - if s is None: - result = (absent,) - else: - parts = s[1:].split('.') - # We can't compare ints and strings on Python 3, so fudge it - # by zero-filling numeric values so simulate a numeric comparison - result = tuple([p.zfill(8) if p.isdigit() else p for p in parts]) - return result - - m = is_semver(s) - if not m: - raise UnsupportedVersionError(s) - groups = m.groups() - major, minor, patch = [int(i) for i in groups[:3]] - # choose the '|' and '*' so that versions sort correctly - pre, build = make_tuple(groups[3], '|'), make_tuple(groups[5], '*') - return (major, minor, patch), pre, build - - -class SemanticVersion(Version): - def parse(self, s): - return _semantic_key(s) - - @property - def is_prerelease(self): - return self._parts[1][0] != '|' - - -class SemanticMatcher(Matcher): - version_class = SemanticVersion - - -class VersionScheme(object): - def __init__(self, key, matcher, suggester=None): - self.key = key - self.matcher = matcher - self.suggester = suggester - - def is_valid_version(self, s): - try: - self.matcher.version_class(s) - result = True - except UnsupportedVersionError: - result = False - return result - - def is_valid_matcher(self, s): - try: - self.matcher(s) - result = True - except UnsupportedVersionError: - result = False - return result - - def is_valid_constraint_list(self, s): - """ - Used for processing some metadata fields - """ - # See issue #140. Be tolerant of a single trailing comma. - if s.endswith(','): - s = s[:-1] - return self.is_valid_matcher('dummy_name (%s)' % s) - - def suggest(self, s): - if self.suggester is None: - result = None - else: - result = self.suggester(s) - return result - -_SCHEMES = { - 'normalized': VersionScheme(_normalized_key, NormalizedMatcher, - _suggest_normalized_version), - 'legacy': VersionScheme(_legacy_key, LegacyMatcher, lambda self, s: s), - 'semantic': VersionScheme(_semantic_key, SemanticMatcher, - _suggest_semantic_version), -} - -_SCHEMES['default'] = _SCHEMES['normalized'] - - -def get_scheme(name): - if name not in _SCHEMES: - raise ValueError('unknown scheme name: %r' % name) - return _SCHEMES[name] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/_musllinux.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/_musllinux.py deleted file mode 100644 index 8ac3059ba3c246b9a5a6fb8d14936bb07777191e..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/_musllinux.py +++ /dev/null @@ -1,136 +0,0 @@ -"""PEP 656 support. - -This module implements logic to detect if the currently running Python is -linked against musl, and what musl version is used. -""" - -import contextlib -import functools -import operator -import os -import re -import struct -import subprocess -import sys -from typing import IO, Iterator, NamedTuple, Optional, Tuple - - -def _read_unpacked(f: IO[bytes], fmt: str) -> Tuple[int, ...]: - return struct.unpack(fmt, f.read(struct.calcsize(fmt))) - - -def _parse_ld_musl_from_elf(f: IO[bytes]) -> Optional[str]: - """Detect musl libc location by parsing the Python executable. 
- - Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca - ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html - """ - f.seek(0) - try: - ident = _read_unpacked(f, "16B") - except struct.error: - return None - if ident[:4] != tuple(b"\x7fELF"): # Invalid magic, not ELF. - return None - f.seek(struct.calcsize("HHI"), 1) # Skip file type, machine, and version. - - try: - # e_fmt: Format for program header. - # p_fmt: Format for section header. - # p_idx: Indexes to find p_type, p_offset, and p_filesz. - e_fmt, p_fmt, p_idx = { - 1: ("IIIIHHH", "IIIIIIII", (0, 1, 4)), # 32-bit. - 2: ("QQQIHHH", "IIQQQQQQ", (0, 2, 5)), # 64-bit. - }[ident[4]] - except KeyError: - return None - else: - p_get = operator.itemgetter(*p_idx) - - # Find the interpreter section and return its content. - try: - _, e_phoff, _, _, _, e_phentsize, e_phnum = _read_unpacked(f, e_fmt) - except struct.error: - return None - for i in range(e_phnum + 1): - f.seek(e_phoff + e_phentsize * i) - try: - p_type, p_offset, p_filesz = p_get(_read_unpacked(f, p_fmt)) - except struct.error: - return None - if p_type != 3: # Not PT_INTERP. - continue - f.seek(p_offset) - interpreter = os.fsdecode(f.read(p_filesz)).strip("\0") - if "musl" not in interpreter: - return None - return interpreter - return None - - -class _MuslVersion(NamedTuple): - major: int - minor: int - - -def _parse_musl_version(output: str) -> Optional[_MuslVersion]: - lines = [n for n in (n.strip() for n in output.splitlines()) if n] - if len(lines) < 2 or lines[0][:4] != "musl": - return None - m = re.match(r"Version (\d+)\.(\d+)", lines[1]) - if not m: - return None - return _MuslVersion(major=int(m.group(1)), minor=int(m.group(2))) - - -@functools.lru_cache() -def _get_musl_version(executable: str) -> Optional[_MuslVersion]: - """Detect currently-running musl runtime version. - - This is done by checking the specified executable's dynamic linking - information, and invoking the loader to parse its output for a version - string. If the loader is musl, the output would be something like:: - - musl libc (x86_64) - Version 1.2.2 - Dynamic Program Loader - """ - with contextlib.ExitStack() as stack: - try: - f = stack.enter_context(open(executable, "rb")) - except OSError: - return None - ld = _parse_ld_musl_from_elf(f) - if not ld: - return None - proc = subprocess.run([ld], stderr=subprocess.PIPE, universal_newlines=True) - return _parse_musl_version(proc.stderr) - - -def platform_tags(arch: str) -> Iterator[str]: - """Generate musllinux tags compatible to the current platform. - - :param arch: Should be the part of platform tag after the ``linux_`` - prefix, e.g. ``x86_64``. The ``linux_`` prefix is assumed as a - prerequisite for the current platform to be musllinux-compatible. - - :returns: An iterator of compatible musllinux tags. - """ - sys_musl = _get_musl_version(sys.executable) - if sys_musl is None: # Python not dynamically linked against musl. 
- return - for minor in range(sys_musl.minor, -1, -1): - yield f"musllinux_{sys_musl.major}_{minor}_{arch}" - - -if __name__ == "__main__": # pragma: no cover - import sysconfig - - plat = sysconfig.get_platform() - assert plat.startswith("linux-"), "not linux" - - print("plat:", plat) - print("musl:", _get_musl_version(sys.executable)) - print("tags:", end=" ") - for t in platform_tags(re.sub(r"[.-]", "_", plat.split("-", 1)[-1])): - print(t, end="\n ") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/tags.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/tags.py deleted file mode 100644 index 9a3d25a71c75c975291cf987001ecd6882d6417d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/tags.py +++ /dev/null @@ -1,487 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import logging -import platform -import sys -import sysconfig -from importlib.machinery import EXTENSION_SUFFIXES -from typing import ( - Dict, - FrozenSet, - Iterable, - Iterator, - List, - Optional, - Sequence, - Tuple, - Union, - cast, -) - -from . import _manylinux, _musllinux - -logger = logging.getLogger(__name__) - -PythonVersion = Sequence[int] -MacVersion = Tuple[int, int] - -INTERPRETER_SHORT_NAMES: Dict[str, str] = { - "python": "py", # Generic. - "cpython": "cp", - "pypy": "pp", - "ironpython": "ip", - "jython": "jy", -} - - -_32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32 - - -class Tag: - """ - A representation of the tag triple for a wheel. - - Instances are considered immutable and thus are hashable. Equality checking - is also supported. - """ - - __slots__ = ["_interpreter", "_abi", "_platform", "_hash"] - - def __init__(self, interpreter: str, abi: str, platform: str) -> None: - self._interpreter = interpreter.lower() - self._abi = abi.lower() - self._platform = platform.lower() - # The __hash__ of every single element in a Set[Tag] will be evaluated each time - # that a set calls its `.disjoint()` method, which may be called hundreds of - # times when scanning a page of links for packages with tags matching that - # Set[Tag]. Pre-computing the value here produces significant speedups for - # downstream consumers. - self._hash = hash((self._interpreter, self._abi, self._platform)) - - @property - def interpreter(self) -> str: - return self._interpreter - - @property - def abi(self) -> str: - return self._abi - - @property - def platform(self) -> str: - return self._platform - - def __eq__(self, other: object) -> bool: - if not isinstance(other, Tag): - return NotImplemented - - return ( - (self._hash == other._hash) # Short-circuit ASAP for perf reasons. - and (self._platform == other._platform) - and (self._abi == other._abi) - and (self._interpreter == other._interpreter) - ) - - def __hash__(self) -> int: - return self._hash - - def __str__(self) -> str: - return f"{self._interpreter}-{self._abi}-{self._platform}" - - def __repr__(self) -> str: - return f"<{self} @ {id(self)}>" - - -def parse_tag(tag: str) -> FrozenSet[Tag]: - """ - Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances. - - Returning a set is required due to the possibility that the tag is a - compressed tag set. 
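    For example (an illustrative note on the behaviour implemented below), a
    compressed tag set expands to the cross product of its dot-separated
    components:

        >>> sorted(str(t) for t in parse_tag("cp38.cp39-abi3-manylinux1_x86_64"))
        ['cp38-abi3-manylinux1_x86_64', 'cp39-abi3-manylinux1_x86_64']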
- """ - tags = set() - interpreters, abis, platforms = tag.split("-") - for interpreter in interpreters.split("."): - for abi in abis.split("."): - for platform_ in platforms.split("."): - tags.add(Tag(interpreter, abi, platform_)) - return frozenset(tags) - - -def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]: - value = sysconfig.get_config_var(name) - if value is None and warn: - logger.debug( - "Config variable '%s' is unset, Python ABI tag may be incorrect", name - ) - return value - - -def _normalize_string(string: str) -> str: - return string.replace(".", "_").replace("-", "_") - - -def _abi3_applies(python_version: PythonVersion) -> bool: - """ - Determine if the Python version supports abi3. - - PEP 384 was first implemented in Python 3.2. - """ - return len(python_version) > 1 and tuple(python_version) >= (3, 2) - - -def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]: - py_version = tuple(py_version) # To allow for version comparison. - abis = [] - version = _version_nodot(py_version[:2]) - debug = pymalloc = ucs4 = "" - with_debug = _get_config_var("Py_DEBUG", warn) - has_refcount = hasattr(sys, "gettotalrefcount") - # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled - # extension modules is the best option. - # https://github.com/pypa/pip/issues/3383#issuecomment-173267692 - has_ext = "_d.pyd" in EXTENSION_SUFFIXES - if with_debug or (with_debug is None and (has_refcount or has_ext)): - debug = "d" - if py_version < (3, 8): - with_pymalloc = _get_config_var("WITH_PYMALLOC", warn) - if with_pymalloc or with_pymalloc is None: - pymalloc = "m" - if py_version < (3, 3): - unicode_size = _get_config_var("Py_UNICODE_SIZE", warn) - if unicode_size == 4 or ( - unicode_size is None and sys.maxunicode == 0x10FFFF - ): - ucs4 = "u" - elif debug: - # Debug builds can also load "normal" extension modules. - # We can also assume no UCS-4 or pymalloc requirement. - abis.append(f"cp{version}") - abis.insert( - 0, - "cp{version}{debug}{pymalloc}{ucs4}".format( - version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4 - ), - ) - return abis - - -def cpython_tags( - python_version: Optional[PythonVersion] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a CPython interpreter. - - The tags consist of: - - cp-- - - cp-abi3- - - cp-none- - - cp-abi3- # Older Python versions down to 3.2. - - If python_version only specifies a major version then user-provided ABIs and - the 'none' ABItag will be used. - - If 'abi3' or 'none' are specified in 'abis' then they will be yielded at - their normal position and not at the beginning. - """ - if not python_version: - python_version = sys.version_info[:2] - - interpreter = f"cp{_version_nodot(python_version[:2])}" - - if abis is None: - if len(python_version) > 1: - abis = _cpython_abis(python_version, warn) - else: - abis = [] - abis = list(abis) - # 'abi3' and 'none' are explicitly handled later. 
- for explicit_abi in ("abi3", "none"): - try: - abis.remove(explicit_abi) - except ValueError: - pass - - platforms = list(platforms or platform_tags()) - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - if _abi3_applies(python_version): - yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms) - yield from (Tag(interpreter, "none", platform_) for platform_ in platforms) - - if _abi3_applies(python_version): - for minor_version in range(python_version[1] - 1, 1, -1): - for platform_ in platforms: - interpreter = "cp{version}".format( - version=_version_nodot((python_version[0], minor_version)) - ) - yield Tag(interpreter, "abi3", platform_) - - -def _generic_abi() -> Iterator[str]: - abi = sysconfig.get_config_var("SOABI") - if abi: - yield _normalize_string(abi) - - -def generic_tags( - interpreter: Optional[str] = None, - abis: Optional[Iterable[str]] = None, - platforms: Optional[Iterable[str]] = None, - *, - warn: bool = False, -) -> Iterator[Tag]: - """ - Yields the tags for a generic interpreter. - - The tags consist of: - - -- - - The "none" ABI will be added if it was not explicitly provided. - """ - if not interpreter: - interp_name = interpreter_name() - interp_version = interpreter_version(warn=warn) - interpreter = "".join([interp_name, interp_version]) - if abis is None: - abis = _generic_abi() - platforms = list(platforms or platform_tags()) - abis = list(abis) - if "none" not in abis: - abis.append("none") - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - - -def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]: - """ - Yields Python versions in descending order. - - After the latest version, the major-only version will be yielded, and then - all previous versions of that major version. - """ - if len(py_version) > 1: - yield f"py{_version_nodot(py_version[:2])}" - yield f"py{py_version[0]}" - if len(py_version) > 1: - for minor in range(py_version[1] - 1, -1, -1): - yield f"py{_version_nodot((py_version[0], minor))}" - - -def compatible_tags( - python_version: Optional[PythonVersion] = None, - interpreter: Optional[str] = None, - platforms: Optional[Iterable[str]] = None, -) -> Iterator[Tag]: - """ - Yields the sequence of tags that are compatible with a specific version of Python. - - The tags consist of: - - py*-none- - - -none-any # ... if `interpreter` is provided. - - py*-none-any - """ - if not python_version: - python_version = sys.version_info[:2] - platforms = list(platforms or platform_tags()) - for version in _py_interpreter_range(python_version): - for platform_ in platforms: - yield Tag(version, "none", platform_) - if interpreter: - yield Tag(interpreter, "none", "any") - for version in _py_interpreter_range(python_version): - yield Tag(version, "none", "any") - - -def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str: - if not is_32bit: - return arch - - if arch.startswith("ppc"): - return "ppc" - - return "i386" - - -def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]: - formats = [cpu_arch] - if cpu_arch == "x86_64": - if version < (10, 4): - return [] - formats.extend(["intel", "fat64", "fat32"]) - - elif cpu_arch == "i386": - if version < (10, 4): - return [] - formats.extend(["intel", "fat32", "fat"]) - - elif cpu_arch == "ppc64": - # TODO: Need to care about 32-bit PPC for ppc64 through 10.2? 
- if version > (10, 5) or version < (10, 4): - return [] - formats.append("fat64") - - elif cpu_arch == "ppc": - if version > (10, 6): - return [] - formats.extend(["fat32", "fat"]) - - if cpu_arch in {"arm64", "x86_64"}: - formats.append("universal2") - - if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}: - formats.append("universal") - - return formats - - -def mac_platforms( - version: Optional[MacVersion] = None, arch: Optional[str] = None -) -> Iterator[str]: - """ - Yields the platform tags for a macOS system. - - The `version` parameter is a two-item tuple specifying the macOS version to - generate platform tags for. The `arch` parameter is the CPU architecture to - generate platform tags for. Both parameters default to the appropriate value - for the current system. - """ - version_str, _, cpu_arch = platform.mac_ver() - if version is None: - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - else: - version = version - if arch is None: - arch = _mac_arch(cpu_arch) - else: - arch = arch - - if (10, 0) <= version and version < (11, 0): - # Prior to Mac OS 11, each yearly release of Mac OS bumped the - # "minor" version number. The major version was always 10. - for minor_version in range(version[1], -1, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=10, minor=minor_version, binary_format=binary_format - ) - - if version >= (11, 0): - # Starting with Mac OS 11, each yearly release bumps the major version - # number. The minor versions are now the midyear updates. - for major_version in range(version[0], 10, -1): - compat_version = major_version, 0 - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=major_version, minor=0, binary_format=binary_format - ) - - if version >= (11, 0): - # Mac OS 11 on x86_64 is compatible with binaries from previous releases. - # Arm64 support was introduced in 11.0, so no Arm binaries from previous - # releases exist. - # - # However, the "universal2" binary format can have a - # macOS version earlier than 11.0 when the x86_64 part of the binary supports - # that version of macOS. 
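        # For example (illustrative), mac_platforms(version=(12, 0), arch="x86_64")
        # yields macosx_12_0_x86_64, macosx_12_0_intel, macosx_12_0_fat64,
        # macosx_12_0_fat32, macosx_12_0_universal2 and macosx_12_0_universal,
        # repeats those formats for 11_0, and then falls through to the legacy
        # macosx_10_16_* ... macosx_10_4_* compatibility tags handled below.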
- if arch == "x86_64": - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - else: - for minor_version in range(16, 3, -1): - compat_version = 10, minor_version - binary_format = "universal2" - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - - -def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]: - linux = _normalize_string(sysconfig.get_platform()) - if is_32bit: - if linux == "linux_x86_64": - linux = "linux_i686" - elif linux == "linux_aarch64": - linux = "linux_armv7l" - _, arch = linux.split("_", 1) - yield from _manylinux.platform_tags(linux, arch) - yield from _musllinux.platform_tags(arch) - yield linux - - -def _generic_platforms() -> Iterator[str]: - yield _normalize_string(sysconfig.get_platform()) - - -def platform_tags() -> Iterator[str]: - """ - Provides the platform tags for this installation. - """ - if platform.system() == "Darwin": - return mac_platforms() - elif platform.system() == "Linux": - return _linux_platforms() - else: - return _generic_platforms() - - -def interpreter_name() -> str: - """ - Returns the name of the running interpreter. - """ - name = sys.implementation.name - return INTERPRETER_SHORT_NAMES.get(name) or name - - -def interpreter_version(*, warn: bool = False) -> str: - """ - Returns the version of the running interpreter. - """ - version = _get_config_var("py_version_nodot", warn=warn) - if version: - version = str(version) - else: - version = _version_nodot(sys.version_info[:2]) - return version - - -def _version_nodot(version: PythonVersion) -> str: - return "".join(map(str, version)) - - -def sys_tags(*, warn: bool = False) -> Iterator[Tag]: - """ - Returns the sequence of tag triples for the running interpreter. - - The order of the sequence corresponds to priority order for the - interpreter, from most to least important. 
- """ - - interp_name = interpreter_name() - if interp_name == "cp": - yield from cpython_tags(warn=warn) - else: - yield from generic_tags() - - if interp_name == "pp": - yield from compatible_tags(interpreter="pp3") - else: - yield from compatible_tags() diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_compat.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_compat.py deleted file mode 100644 index 95e509c0143e14e6371ec3cd1433ffec50c297fc..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_compat.py +++ /dev/null @@ -1,8 +0,0 @@ -__all__ = ("tomllib",) - -import sys - -if sys.version_info >= (3, 11): - import tomllib -else: - from pip._vendor import tomli as tomllib diff --git a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/timm_model.py b/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/timm_model.py deleted file mode 100644 index c9d1ab4666b5bab5038d44b90c9ddca5087de460..0000000000000000000000000000000000000000 --- a/spaces/Audio-AGI/AudioSep/models/CLAP/open_clip/timm_model.py +++ /dev/null @@ -1,112 +0,0 @@ -""" timm model adapter - -Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model. -""" -from collections import OrderedDict - -import torch.nn as nn - -try: - import timm - from timm.models.layers import Mlp, to_2tuple - from timm.models.layers.attention_pool2d import RotAttentionPool2d - from timm.models.layers.attention_pool2d import ( - AttentionPool2d as AbsAttentionPool2d, - ) -except ImportError as e: - timm = None - -from .utils import freeze_batch_norm_2d - - -class TimmModel(nn.Module): - """timm model adapter - # FIXME this adapter is a work in progress, may change in ways that break weight compat - """ - - def __init__( - self, - model_name, - embed_dim, - image_size=224, - pool="avg", - proj="linear", - drop=0.0, - pretrained=False, - ): - super().__init__() - if timm is None: - raise RuntimeError("Please `pip install timm` to use timm models.") - - self.image_size = to_2tuple(image_size) - self.trunk = timm.create_model(model_name, pretrained=pretrained) - feat_size = self.trunk.default_cfg.get("pool_size", None) - feature_ndim = 1 if not feat_size else 2 - if pool in ("abs_attn", "rot_attn"): - assert feature_ndim == 2 - # if attn pooling used, remove both classifier and default pool - self.trunk.reset_classifier(0, global_pool="") - else: - # reset global pool if pool config set, otherwise leave as network default - reset_kwargs = dict(global_pool=pool) if pool else {} - self.trunk.reset_classifier(0, **reset_kwargs) - prev_chs = self.trunk.num_features - - head_layers = OrderedDict() - if pool == "abs_attn": - head_layers["pool"] = AbsAttentionPool2d( - prev_chs, feat_size=feat_size, out_features=embed_dim - ) - prev_chs = embed_dim - elif pool == "rot_attn": - head_layers["pool"] = RotAttentionPool2d(prev_chs, out_features=embed_dim) - prev_chs = embed_dim - else: - assert proj, "projection layer needed if non-attention pooling is used." 
- - # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used - if proj == "linear": - head_layers["drop"] = nn.Dropout(drop) - head_layers["proj"] = nn.Linear(prev_chs, embed_dim) - elif proj == "mlp": - head_layers["mlp"] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop) - - self.head = nn.Sequential(head_layers) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - """lock modules - Args: - unlocked_groups (int): leave last n layer groups unlocked (default: 0) - """ - if not unlocked_groups: - # lock full model - for param in self.trunk.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self.trunk) - else: - # NOTE: partial freeze requires latest timm (master) branch and is subject to change - try: - # FIXME import here until API stable and in an official release - from timm.models.helpers import group_parameters, group_modules - except ImportError: - raise RuntimeError( - "Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`" - ) - matcher = self.trunk.group_matcher() - gparams = group_parameters(self.trunk, matcher) - max_layer_id = max(gparams.keys()) - max_layer_id = max_layer_id - unlocked_groups - for group_idx in range(max_layer_id + 1): - group = gparams[group_idx] - for param in group: - self.trunk.get_parameter(param).requires_grad = False - if freeze_bn_stats: - gmodules = group_modules(self.trunk, matcher, reverse=True) - gmodules = {k for k, v in gmodules.items() if v <= max_layer_id} - freeze_batch_norm_2d(self.trunk, gmodules) - - def forward(self, x): - x = self.trunk(x) - x = self.head(x) - return x diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/testing.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/testing.py deleted file mode 100644 index 9e5ae625bb0593fc20739dd3ea549157e4df4f3d..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/testing.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -import pprint -import sys -from collections.abc import Mapping - - -def print_csv_format(results): - """ - Print main metrics in a format similar to Detectron, - so that they are easy to copypaste into a spreadsheet. - - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - unordered dict can also be printed, but in arbitrary order - """ - assert isinstance(results, Mapping) or not len(results), results - logger = logging.getLogger(__name__) - for task, res in results.items(): - if isinstance(res, Mapping): - # Don't print "AP-category" metrics since they are usually not tracked. 
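            # For example (illustrative), a result entry such as
            #   {"bbox": {"AP": 38.1, "AP50": 58.2, "AP-person": 50.3}}
            # is logged as the two columns AP and AP50; "AP-person" is skipped
            # below because its key contains a dash.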
- important_res = [(k, v) for k, v in res.items() if "-" not in k] - logger.info("copypaste: Task: {}".format(task)) - logger.info("copypaste: " + ",".join([k[0] for k in important_res])) - logger.info("copypaste: " + ",".join(["{0:.4f}".format(k[1]) for k in important_res])) - else: - logger.info(f"copypaste: {task}={res}") - - -def verify_results(cfg, results): - """ - Args: - results (OrderedDict[dict]): task_name -> {metric -> score} - - Returns: - bool: whether the verification succeeds or not - """ - expected_results = cfg.TEST.EXPECTED_RESULTS - if not len(expected_results): - return True - - ok = True - for task, metric, expected, tolerance in expected_results: - actual = results[task].get(metric, None) - if actual is None: - ok = False - continue - if not np.isfinite(actual): - ok = False - continue - diff = abs(actual - expected) - if diff > tolerance: - ok = False - - logger = logging.getLogger(__name__) - if not ok: - logger.error("Result verification failed!") - logger.error("Expected Results: " + str(expected_results)) - logger.error("Actual Results: " + pprint.pformat(results)) - - sys.exit(1) - else: - logger.info("Results verification passed.") - return ok - - -def flatten_results_dict(results): - """ - Expand a hierarchical dict of scalars into a flat dict of scalars. - If results[k1][k2][k3] = v, the returned dict will have the entry - {"k1/k2/k3": v}. - - Args: - results (dict): - """ - r = {} - for k, v in results.items(): - if isinstance(v, Mapping): - v = flatten_results_dict(v) - for kk, vv in v.items(): - r[k + "/" + kk] = vv - else: - r[k] = v - return r diff --git a/spaces/Benson/text-generation/Examples/Blockman Go Newshungama Mod Apk.md b/spaces/Benson/text-generation/Examples/Blockman Go Newshungama Mod Apk.md deleted file mode 100644 index 10760963474225d71be0b97efa9577047c0d7b8c..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Blockman Go Newshungama Mod Apk.md +++ /dev/null @@ -1,56 +0,0 @@ -
    -

Blockman Go Newshungama Mod Apk: A Fun and Creative Game for Android Users

-

If you are looking for a game that offers you endless possibilities for fun and creativity, then you should check out Blockman Go. This is a sandbox game that lets you build and explore different worlds, play various minigames, and interact with other players online. And if you want to enhance your gaming experience even further, you should try the Newshungama Mod Apk, a modified version of Blockman Go that gives you unlimited money and resources. In this article, we will tell you everything you need to know about Blockman Go and the Newshungama Mod Apk, including what they are, what they offer, and how to get them on your Android device.

    -

What is Blockman Go?

-

Blockman Go is a sandbox game developed by Blockman GO Studio. It was released in 2017 and has since gained millions of fans around the world. The game has several features that make it unique and appealing, such as:

    -

    blockman go newshungama mod apk


    Download Zip ››› https://bltlly.com/2v6KnG



    -

A sandbox game with multiple minigames

-

Blockman Go is not just one game but a collection of many minigames to choose from. You can play games such as Bed Wars, Sky Wars, Murder Mystery, Parkour, Prison Escape, and more. Each game has its own rules, objectives, and challenges that will keep you entertained for hours. You can also create your own games using the built-in editor and share them with other players.

    -

A social platform with chat and voice features

-

Blockman Go is not just a game but also a social platform where you can meet and chat with players from different countries. You can join or create rooms and servers where you can play games together, chat by text or voice message, and make new friends. You can also join clans and guilds where you can cooperate and compete with other members.

    -

A customizable avatar system with skins and accessories

    - -

What is Newshungama Mod Apk?

-

Newshungama Mod Apk is a modified version of Blockman Go created by Newshungama.com. It is a mod apk file that you can download and install on your Android device to enjoy some extra features and benefits in the game, such as:

    -

A modified version of Blockman Go with unlimited money and resources

-

Newshungama Mod Apk gives you unlimited money and resources in the game, which means you can buy or use anything you want without worrying about running out of coins or gems. You can buy more skins and accessories for your avatar, more items for your games, more VIP privileges, and more.

    -

A way to access premium features and items for free

-

Newshungama Mod Apk also lets you access some premium features and items that are normally not available for free in the game. You can get free VIP membership, which gives you extra benefits and privileges such as exclusive skins, badges, chat colors, and more. You can also get free gift codes, which give you random rewards such as coins, gems, skins, and more.

    -

A safe and easy-to-install apk file for Android devices

-

Newshungama Mod Apk is a safe and easy-to-install apk file that does not require any root or jailbreak on your device. You can download it from the official Newshungama.com website, which is a trusted source of mod apk files. You can also follow the instructions on how to install it on your device without any problems.

    -

What are the benefits of using Newshungama Mod Apk?

-

By using Newshungama Mod Apk, you can enjoy more fun and variety in the game. Some of the benefits of using this mod apk are:

    -

    -

You can enjoy more fun and variety in the game

    - -

You can create and join more rooms and servers

-

With unlimited money and resources, you can also create and join more rooms and servers in the game. You can host your own games and invite other players to join you. You can also join other players' games and have fun with them, and chat and talk with them using the game's social features.

    -

You can unlock and use more skins and accessories for your avatar

-

With unlimited money and resources, you can also unlock and use more skins and accessories for your avatar. You can choose from a wide range of options to customize your appearance and express your personality, and mix and match different items to create your own unique look.

    -

How to download and install Newshungama Mod Apk?

-

If you want to download and install Newshungama Mod Apk on your Android device, you can follow these simple steps:

    -

Follow these simple steps to get the mod apk on your device

    -
      -
1. Go to the official Newshungama.com website and find the download link for Blockman Go Newshungama Mod Apk.
2. Click the download link and wait for the apk file to finish downloading to your device.
3. Once the download is complete, go to your device settings and enable installation from unknown sources.
4. Locate the apk file in your device storage and tap it to start the installation process.
5. Follow the on-screen instructions and wait for the installation to finish.
6. Launch the game from the app drawer and enjoy the mod features.
    -

Conclusion

    - -

Frequently Asked Questions

-

Here are some of the most frequently asked questions about Blockman Go Newshungama Mod Apk:

    -

Q: Is Blockman Go Newshungama Mod Apk safe to use?

-

A: Yes, Blockman Go Newshungama Mod Apk is safe to use as long as you download it from the official Newshungama.com website, which is a trusted source of mod apk files. The mod apk does not contain any viruses or malware that could harm your device or data.

    -

Q: Is Blockman Go Newshungama Mod Apk compatible with my device?

-

A: Blockman Go Newshungama Mod Apk is compatible with any Android device running Android 4.1 or higher with at least 2 GB of RAM. You can check your device's specifications and compatibility before downloading the mod apk.

    -

Q: Will Blockman Go Newshungama Mod Apk affect my original game data?

-

A: No, Blockman Go Newshungama Mod Apk will not affect your original game data. The mod apk is a separate file that does not overwrite or interfere with the original game data, so you can still play the original game without any problems.

    -

Q: Can I update Blockman Go Newshungama Mod Apk to the latest version?

-

A: Yes, you can update Blockman Go Newshungama Mod Apk whenever a new update is available. You can check the official Newshungama.com website for the latest updates and download them from there, or enable the auto-update feature in the mod apk settings to get updates automatically.

    -

Q: Can I play Blockman Go Newshungama Mod Apk with other players online?

-

A: Yes, you can play Blockman Go Newshungama Mod Apk with other players online. The mod apk supports online multiplayer mode, which means you can join or create rooms and servers where you can play and chat with other players. However, you may not be able to play with players who are using the original game or a different mod apk, as they may have different versions or features.

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Como Hacer Una Hoja De Presentacin.md b/spaces/Benson/text-generation/Examples/Como Hacer Una Hoja De Presentacin.md deleted file mode 100644 index 75d2a79ea7779552224ea8ff788fd5fab5710849..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Como Hacer Una Hoja De Presentacin.md +++ /dev/null @@ -1,63 +0,0 @@ - -

How to Download Alchemy of Souls Episode 18

-

If you are a fan of Korean fantasy dramas, you may have heard of Alchemy of Souls, a Netflix original series that follows the adventures of young mages who can manipulate souls. The show has been praised for its captivating plot, stunning visuals, and talented cast. But what if you want to watch the latest episode offline, or you don't have access to Netflix in your region? In this article, we will show you how to download Alchemy of Souls episode 18 from two different sources: Netflix and Bilibili. We will also discuss the pros and cons of downloading the episode and answer some frequently asked questions about the show.

    -

how to make a cover sheet


    Download >>>>> https://bltlly.com/2v6M2q



    -

Where to Watch Alchemy of Souls Online

-

The first thing you need to know is where you can watch Alchemy of Souls online. There are two main options:

    -
      -
• Netflix: This is the official streaming platform for Alchemy of Souls, and it offers all the episodes with subtitles in several languages. You can watch the show on any Netflix-compatible device, such as your computer, smartphone, tablet, smart TV, or game console. However, you need a paid subscription to access Netflix content, and the show's availability may vary depending on your location.
    • -
• Bilibili: This is a popular Chinese video-sharing website that offers a wide range of content, including anime, movies, music, games, and more. You can watch Alchemy of Souls on Bilibili with Chinese or English subtitles. You can also interact with other fans through comments, likes, and bullet comments. You don't need to pay anything to watch the show on Bilibili, but you do need to create a free account and log in.
    • -
    -

How to Download Alchemy of Souls from Netflix

-

If you have a Netflix account and want to download Alchemy of Souls episode 18 from there, here are the steps you need to follow:

    -
      - -
1. Search for Alchemy of Souls and select the episode you want to download: You can use the search bar or browse the categories to find Alchemy of Souls on Netflix. Once you find the show, click on it and select the episode you want to download. In this case, it is episode 18, titled "The Final Battle".
2. Tap the download icon next to the episode title and wait for the download to finish: You will see a download icon that looks like a downward arrow next to the episode title. Tap it and the download will begin. You can check the download progress in the app's downloads section. Once the download is complete, you can watch the episode offline whenever you want.
    -

How to Download Alchemy of Souls from Bilibili

-

If you don't have a Netflix account or prefer to watch Alchemy of Souls on Bilibili, here are the steps you need to follow:

    -

    -
      -
1. Visit the Bilibili website or download the app on your device: You can access Bilibili in your web browser or download its app from the Google Play Store or the App Store. You will need a stable internet connection to use Bilibili.
2. Create a free account or log in with your existing account: You can create a free Bilibili account by providing your email address, phone number, username, and password. You can also log in with your existing account if you already have one.
3. Search for Alchemy of Souls and select the episode you want to download: You can use the search bar or browse the categories to find Alchemy of Souls on Bilibili. Once you find the show, click on it and select the episode you want to download. In this case, it is episode 18, titled "The Final Battle".
    -

Pros and Cons of Downloading Alchemy of Souls

-

Downloading Alchemy of Souls episode 18 may seem like a good idea, but there are some pros and cons you should consider before doing so. Here are some of them:

    -
      -
• Pros:
  • You can watch the episode offline, which means you don't need an internet connection or a data plan to enjoy it.
  • You can save data, especially if you have a limited or expensive data plan.
  • You can avoid spoilers, especially if you are behind on the show or live in a different time zone.
  • You can enjoy high-quality video and audio, with no buffering or interruptions.
      • -
      -
    • -
• Cons:
  • You may violate the terms of service of Netflix or Bilibili, which prohibit downloading or distributing their content without permission.
  • You may run into legal trouble, especially if you share or sell the downloaded episode to others.
  • You may encounter malware, viruses, or other harmful software that could damage your device or compromise your security.
  • You may miss out on updates and extras, such as behind-the-scenes footage, interviews, fan events, and more.
      • -
      -
    • -
    -

Conclusion

-

In conclusion, downloading Alchemy of Souls episode 18 is possible from two different sources: Netflix and Bilibili. However, there are some pros and cons you should weigh before doing so. If you decide to download the episode, make sure you do it safely and legally. Alternatively, you can watch the episode online on Netflix or Bilibili and enjoy it with other fans around the world.

    -

Frequently Asked Questions

-

Here are some frequently asked questions about Alchemy of Souls:

    -
      - -
1. How many episodes are there in Alchemy of Souls?: Alchemy of Souls has 20 episodes in total, each about an hour long. The show premiered on Netflix on May 1, 2023 and aired every Monday and Tuesday until June 20, 2023.
2. Is Alchemy of Souls based on a novel or a webtoon?: Alchemy of Souls is based on a webtoon of the same name by Kim Eun-hee and Yang Kyung-il. The webtoon was first published on Naver Webtoon in 2019 and has over 10 million views. The webtoon is also available in English on Webtoon.
3. Who are the main actors in Alchemy of Souls?: The main actors in Alchemy of Souls are:
        -
      • Park Seo-joon como Lee Ji-hoon, un mago genio que puede crear y destruir almas.
      • -
      • Kim Ji-won como Kim Soo-hyun, un mago valiente y leal que puede controlar almas.
      • -
      • Lee Jong-suk como Choi Min-ki, un mago misterioso y poderoso que puede manipular almas.
      • -
      • Park Shin-hye como Yoo Na-ra, un mago amable y gentil que puede sanar almas.
      • -
      -
    6. -¿Habrá una segunda temporada de Alquimia de Almas? : Todavía no hay confirmación oficial, pero el programa ha sido bien recibido por críticos y fans por igual. El espectáculo también ha dejado algunas preguntas sin respuesta y cliffhangers que sugieren una posible continuación. El webtoon todavía está en curso, por lo que hay más material para adaptar. Sin embargo, la decisión final dependerá de las calificaciones, el presupuesto y la disponibilidad del elenco y el equipo. -

    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/client.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/client.py deleted file mode 100644 index ba2366a846e514769fdecd62df3f0bd8355ca5eb..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/docs/client.py +++ /dev/null @@ -1,400 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import os - -from botocore.compat import OrderedDict -from botocore.docs.bcdoc.restdoc import DocumentStructure -from botocore.docs.example import ResponseExampleDocumenter -from botocore.docs.method import ( - document_custom_method, - document_model_driven_method, - get_instance_public_methods, -) -from botocore.docs.params import ResponseParamsDocumenter -from botocore.docs.sharedexample import document_shared_examples -from botocore.docs.utils import DocumentedShape, get_official_service_name - - -def _allowlist_generate_presigned_url(method_name, service_name, **kwargs): - if method_name != 'generate_presigned_url': - return None - return service_name in ['s3'] - - -class ClientDocumenter: - _CLIENT_METHODS_FILTERS = [ - _allowlist_generate_presigned_url, - ] - - def __init__(self, client, root_docs_path, shared_examples=None): - self._client = client - self._client_class_name = self._client.__class__.__name__ - self._root_docs_path = root_docs_path - self._shared_examples = shared_examples - if self._shared_examples is None: - self._shared_examples = {} - self._service_name = self._client.meta.service_model.service_name - - def document_client(self, section): - """Documents a client and its methods - - :param section: The section to write to. - """ - self._add_title(section) - self._add_class_signature(section) - client_methods = self._get_client_methods() - self._add_client_intro(section, client_methods) - self._add_client_methods(client_methods) - - def _get_client_methods(self): - client_methods = get_instance_public_methods(self._client) - return self._filter_client_methods(client_methods) - - def _filter_client_methods(self, client_methods): - filtered_methods = {} - for method_name, method in client_methods.items(): - include = self._filter_client_method( - method=method, - method_name=method_name, - service_name=self._service_name, - ) - if include: - filtered_methods[method_name] = method - return filtered_methods - - def _filter_client_method(self, **kwargs): - # Apply each filter to the method - for filter in self._CLIENT_METHODS_FILTERS: - filter_include = filter(**kwargs) - # Use the first non-None value returned by any of the filters - if filter_include is not None: - return filter_include - # Otherwise default to including it - return True - - def _add_title(self, section): - section.style.h2('Client') - - def _add_client_intro(self, section, client_methods): - section = section.add_new_section('intro') - # Write out the top level description for the client. 
- official_service_name = get_official_service_name( - self._client.meta.service_model - ) - section.write( - f"A low-level client representing {official_service_name}" - ) - section.style.new_line() - section.include_doc_string( - self._client.meta.service_model.documentation - ) - - # Write out the client example instantiation. - self._add_client_creation_example(section) - - # List out all of the possible client methods. - section.style.dedent() - section.style.new_paragraph() - section.writeln('These are the available methods:') - section.style.toctree() - for method_name in sorted(client_methods): - section.style.tocitem(f'{self._service_name}/client/{method_name}') - - def _add_class_signature(self, section): - section.style.start_sphinx_py_class( - class_name=f'{self._client_class_name}.Client' - ) - - def _add_client_creation_example(self, section): - section.style.start_codeblock() - section.style.new_line() - section.write( - 'client = session.create_client(\'{service}\')'.format( - service=self._service_name - ) - ) - section.style.end_codeblock() - - def _add_client_methods(self, client_methods): - for method_name in sorted(client_methods): - # Create a new DocumentStructure for each client method and add contents. - method_doc_structure = DocumentStructure( - method_name, target='html' - ) - self._add_client_method( - method_doc_structure, method_name, client_methods[method_name] - ) - # Write client methods in individual/nested files. - # Path: /reference/services//client/.rst - client_dir_path = os.path.join( - self._root_docs_path, self._service_name, 'client' - ) - method_doc_structure.write_to_file(client_dir_path, method_name) - - def _add_client_method(self, section, method_name, method): - breadcrumb_section = section.add_new_section('breadcrumb') - breadcrumb_section.style.ref( - self._client_class_name, f'../../{self._service_name}' - ) - breadcrumb_section.write(f' / Client / {method_name}') - section.add_title_section(method_name) - method_section = section.add_new_section( - method_name, - context={'qualifier': f'{self._client_class_name}.Client.'}, - ) - if self._is_custom_method(method_name): - self._add_custom_method( - method_section, - method_name, - method, - ) - else: - self._add_model_driven_method(method_section, method_name) - - def _is_custom_method(self, method_name): - return method_name not in self._client.meta.method_to_api_mapping - - def _add_custom_method(self, section, method_name, method): - document_custom_method(section, method_name, method) - - def _add_method_exceptions_list(self, section, operation_model): - error_section = section.add_new_section('exceptions') - error_section.style.new_line() - error_section.style.bold('Exceptions') - error_section.style.new_line() - for error in operation_model.error_shapes: - class_name = ( - f'{self._client_class_name}.Client.exceptions.{error.name}' - ) - error_section.style.li(':py:class:`%s`' % class_name) - - def _add_model_driven_method(self, section, method_name): - service_model = self._client.meta.service_model - operation_name = self._client.meta.method_to_api_mapping[method_name] - operation_model = service_model.operation_model(operation_name) - - example_prefix = 'response = client.%s' % method_name - full_method_name = ( - f"{section.context.get('qualifier', '')}{method_name}" - ) - document_model_driven_method( - section, - full_method_name, - operation_model, - event_emitter=self._client.meta.events, - method_description=operation_model.documentation, - example_prefix=example_prefix, - ) 
- - # Add any modeled exceptions - if operation_model.error_shapes: - self._add_method_exceptions_list(section, operation_model) - - # Add the shared examples - shared_examples = self._shared_examples.get(operation_name) - if shared_examples: - document_shared_examples( - section, operation_model, example_prefix, shared_examples - ) - - -class ClientExceptionsDocumenter: - _USER_GUIDE_LINK = ( - 'https://boto3.amazonaws.com/' - 'v1/documentation/api/latest/guide/error-handling.html' - ) - _GENERIC_ERROR_SHAPE = DocumentedShape( - name='Error', - type_name='structure', - documentation=('Normalized access to common exception attributes.'), - members=OrderedDict( - [ - ( - 'Code', - DocumentedShape( - name='Code', - type_name='string', - documentation=( - 'An identifier specifying the exception type.' - ), - ), - ), - ( - 'Message', - DocumentedShape( - name='Message', - type_name='string', - documentation=( - 'A descriptive message explaining why the exception ' - 'occured.' - ), - ), - ), - ] - ), - ) - - def __init__(self, client, root_docs_path): - self._client = client - self._client_class_name = self._client.__class__.__name__ - self._service_name = self._client.meta.service_model.service_name - self._root_docs_path = root_docs_path - - def document_exceptions(self, section): - self._add_title(section) - self._add_overview(section) - self._add_exceptions_list(section) - self._add_exception_classes() - - def _add_title(self, section): - section.style.h2('Client Exceptions') - - def _add_overview(self, section): - section.style.new_line() - section.write( - 'Client exceptions are available on a client instance ' - 'via the ``exceptions`` property. For more detailed instructions ' - 'and examples on the exact usage of client exceptions, see the ' - 'error handling ' - ) - section.style.external_link( - title='user guide', - link=self._USER_GUIDE_LINK, - ) - section.write('.') - section.style.new_line() - - def _exception_class_name(self, shape): - return f'{self._client_class_name}.Client.exceptions.{shape.name}' - - def _add_exceptions_list(self, section): - error_shapes = self._client.meta.service_model.error_shapes - if not error_shapes: - section.style.new_line() - section.write('This client has no modeled exception classes.') - section.style.new_line() - return - section.style.new_line() - section.writeln('The available client exceptions are:') - section.style.toctree() - for shape in error_shapes: - section.style.tocitem( - f'{self._service_name}/client/exceptions/{shape.name}' - ) - - def _add_exception_classes(self): - for shape in self._client.meta.service_model.error_shapes: - # Create a new DocumentStructure for each exception method and add contents. - exception_doc_structure = DocumentStructure( - shape.name, target='html' - ) - self._add_exception_class(exception_doc_structure, shape) - # Write exceptions in individual/nested files. 
- # Path: /reference/services//client/exceptions/.rst - exception_dir_path = os.path.join( - self._root_docs_path, - self._service_name, - 'client', - 'exceptions', - ) - exception_doc_structure.write_to_file( - exception_dir_path, shape.name - ) - - def _add_exception_class(self, section, shape): - breadcrumb_section = section.add_new_section('breadcrumb') - breadcrumb_section.style.ref( - self._client_class_name, f'../../../{self._service_name}' - ) - breadcrumb_section.write(f' / Client / exceptions / {shape.name}') - section.add_title_section(shape.name) - class_section = section.add_new_section(shape.name) - class_name = self._exception_class_name(shape) - class_section.style.start_sphinx_py_class(class_name=class_name) - self._add_top_level_documentation(class_section, shape) - self._add_exception_catch_example(class_section, shape) - self._add_response_attr(class_section, shape) - class_section.style.end_sphinx_py_class() - - def _add_top_level_documentation(self, section, shape): - if shape.documentation: - section.style.new_line() - section.include_doc_string(shape.documentation) - section.style.new_line() - - def _add_exception_catch_example(self, section, shape): - section.style.new_line() - section.style.bold('Example') - section.style.start_codeblock() - section.write('try:') - section.style.indent() - section.style.new_line() - section.write('...') - section.style.dedent() - section.style.new_line() - section.write('except client.exceptions.%s as e:' % shape.name) - section.style.indent() - section.style.new_line() - section.write('print(e.response)') - section.style.dedent() - section.style.end_codeblock() - - def _add_response_attr(self, section, shape): - response_section = section.add_new_section('response') - response_section.style.start_sphinx_py_attr('response') - self._add_response_attr_description(response_section) - self._add_response_example(response_section, shape) - self._add_response_params(response_section, shape) - response_section.style.end_sphinx_py_attr() - - def _add_response_attr_description(self, section): - section.style.new_line() - section.include_doc_string( - 'The parsed error response. All exceptions have a top level ' - '``Error`` key that provides normalized access to common ' - 'exception atrributes. All other keys are specific to this ' - 'service or exception class.' 
- ) - section.style.new_line() - - def _add_response_example(self, section, shape): - example_section = section.add_new_section('syntax') - example_section.style.new_line() - example_section.style.bold('Syntax') - example_section.style.new_paragraph() - documenter = ResponseExampleDocumenter( - service_name=self._service_name, - operation_name=None, - event_emitter=self._client.meta.events, - ) - documenter.document_example( - example_section, - shape, - include=[self._GENERIC_ERROR_SHAPE], - ) - - def _add_response_params(self, section, shape): - params_section = section.add_new_section('Structure') - params_section.style.new_line() - params_section.style.bold('Structure') - params_section.style.new_paragraph() - documenter = ResponseParamsDocumenter( - service_name=self._service_name, - operation_name=None, - event_emitter=self._client.meta.events, - ) - documenter.document_params( - params_section, - shape, - include=[self._GENERIC_ERROR_SHAPE], - ) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/mercurial.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/mercurial.py deleted file mode 100644 index 2a005e0aff2df95f01aff4706b48af5da0c81db1..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/vcs/mercurial.py +++ /dev/null @@ -1,163 +0,0 @@ -import configparser -import logging -import os -from typing import List, Optional, Tuple - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.utils.misc import HiddenText, display_path -from pip._internal.utils.subprocess import make_command -from pip._internal.utils.urls import path_to_url -from pip._internal.vcs.versioncontrol import ( - RevOptions, - VersionControl, - find_path_to_project_root_from_repo_root, - vcs, -) - -logger = logging.getLogger(__name__) - - -class Mercurial(VersionControl): - name = "hg" - dirname = ".hg" - repo_name = "clone" - schemes = ( - "hg+file", - "hg+http", - "hg+https", - "hg+ssh", - "hg+static-http", - ) - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return [rev] - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info( - "Cloning hg %s%s to %s", - url, - rev_display, - display_path(dest), - ) - if verbosity <= 0: - flags: Tuple[str, ...] 
= ("--quiet",) - elif verbosity == 1: - flags = () - elif verbosity == 2: - flags = ("--verbose",) - else: - flags = ("--verbose", "--debug") - self.run_command(make_command("clone", "--noupdate", *flags, url, dest)) - self.run_command( - make_command("update", *flags, rev_options.to_args()), - cwd=dest, - ) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - repo_config = os.path.join(dest, self.dirname, "hgrc") - config = configparser.RawConfigParser() - try: - config.read(repo_config) - config.set("paths", "default", url.secret) - with open(repo_config, "w") as config_file: - config.write(config_file) - except (OSError, configparser.NoSectionError) as exc: - logger.warning("Could not switch Mercurial repository to %s: %s", url, exc) - else: - cmd_args = make_command("update", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - self.run_command(["pull", "-q"], cwd=dest) - cmd_args = make_command("update", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - @classmethod - def get_remote_url(cls, location: str) -> str: - url = cls.run_command( - ["showconfig", "paths.default"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - if cls._is_local_repository(url): - url = path_to_url(url) - return url.strip() - - @classmethod - def get_revision(cls, location: str) -> str: - """ - Return the repository-local changeset revision number, as an integer. - """ - current_revision = cls.run_command( - ["parents", "--template={rev}"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - return current_revision - - @classmethod - def get_requirement_revision(cls, location: str) -> str: - """ - Return the changeset identification hash, as a 40-character - hexadecimal string - """ - current_rev_hash = cls.run_command( - ["parents", "--template={node}"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - return current_rev_hash - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """Always assume the versions don't match""" - return False - - @classmethod - def get_subdirectory(cls, location: str) -> Optional[str]: - """ - Return the path to Python project root, relative to the repo root. - Return None if the project root is in the repo root. 
- """ - # find the repo root - repo_root = cls.run_command( - ["root"], show_stdout=False, stdout_only=True, cwd=location - ).strip() - if not os.path.isabs(repo_root): - repo_root = os.path.abspath(os.path.join(location, repo_root)) - return find_path_to_project_root_from_repo_root(location, repo_root) - - @classmethod - def get_repository_root(cls, location: str) -> Optional[str]: - loc = super().get_repository_root(location) - if loc: - return loc - try: - r = cls.run_command( - ["root"], - cwd=location, - show_stdout=False, - stdout_only=True, - on_returncode="raise", - log_failed_cmd=False, - ) - except BadCommand: - logger.debug( - "could not determine if %s is under hg control " - "because hg is not available", - location, - ) - return None - except InstallationError: - return None - return os.path.normpath(r.rstrip("\r\n")) - - -vcs.register(Mercurial) diff --git a/spaces/BorisovMaksim/denoising/README.md b/spaces/BorisovMaksim/denoising/README.md deleted file mode 100644 index 1a4519c23760bd3165f7d964962e8a1174fa2bc9..0000000000000000000000000000000000000000 --- a/spaces/BorisovMaksim/denoising/README.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: Denoising -emoji: 🤗 -colorFrom: red -colorTo: orange -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false ---- -This is a repo that implements web interface for DEMUCS model proposed in [Real Time Speech Enhancement in the Waveform Domain](https://arxiv.org/abs/2006.12847). -The model was trained from scratch in Pytorch. The proposed model is based on an encoder-decoder architecture with skip-connections. It is optimized on both time and frequency domains, using multiple loss functions. -You can record your voice in noisy conditions and get denoised version using DEMUCS model. There is also Spectral Gating denoiser as baseline. - -
-
-# Running
-Without docker:
-
-pip install -r requirements.txt
-python app.py
-
-Using docker:
-
-docker build . --tag python-docker
-docker run -p 7860:7860 -e GRADIO_SERVER_NAME=0.0.0.0 -it python-docker:latest
    - - - -# Data -In the scope of this project [Valentini](https://datashare.ed.ac.uk/handle/10283/2791) dataset in used. It is clean and noisy parallel speech database. The database was designed to train and test speech enhancement methods that operate at 48kHz. There are 56 speakers and ~10 gb of speech data. - -For model improvement it is possible to use a bigger training set from [DNS](https://www.bing.com/search?q=dns+challenge&cvid=3773a401b19d40269d725a02faf6f79c&aqs=edge.0.69i59j69i57j0l6j69i60.1021j0j4&FORM=ANAB01&PC=U531) challenge. - -# Training -The training process in impemented in Pytorch. The data is (noisy speech, clean speech) pairs that are loaded as 2 second samples, randomly cutted from audio and padded if necessary. Model is optimized using SGD. In terms of loss functions, the L1 loss and MultiResolutionSTFTLoss are used. MultiResolutionSTFTLoss is the sum of STFT loss over different window sizes, hop sizes and fft sizes. - -$$L_{STFT}= L_{sc} + L_{mag}$$ - -$$L_{sc}= \frac{|| |STFT(\tilde{x})| - |STFT(x)| ||_{F}^{1}}{|STFT(x)|}$$ - -$$L_{mag} = \frac{1}{T}|| log|STFT(\tilde{x})| - log|STFT(x)| ||_{F}^{1}$$ - -where T is the time points in the waveform. - -# Metrics -- Perceptual Evaluation of Speech Quality ([PESQ](https://torchmetrics.readthedocs.io/en/stable/audio/perceptual_evaluation_speech_quality.html)) -- Short-Time Objective Intelligibility ([STOI](https://torchmetrics.readthedocs.io/en/stable/audio/short_time_objective_intelligibility.html)) - -The PESQ metric is used for estimating overall speech quality after denoising and STOI is used for estimating speech intelligibility after denoising. -Intelligibility measure is highly correlated with the intelligibility of degraded speech signals - -# Experiments -For tracking experiments local server of [Weights & Biases](https://wandb.ai/site) is used. To manage configs for different experiments [hydra](https://hydra.cc/) is used. It allows an easy way to track configs and override paramaters. - - -| Experiment | Description | Result | -|--------------|:-----:|--------------------------------------------------------| -| Baseline | Initial experiment with L1 loss | Poor quality | -| Baseline_L1_Multi_STFT_loss | Changed loss to Multi STFT + L1 loss | Better performance | -|L1_Multi_STFT_no_resample | Tried to train without resampling | No impovement, probably because RELU on the last layer | -|Updated_DEMUCS | Used relu in the last layer. Removed it.| Significant improvement | -|wav_normalization | Tried to normalized wav by std during training| Small improvement | -| original_sr| Train with original sample rate | Significant improvement | -|increased_L | Increased number of encoder-decoder pairs from 3 to 5| Performance comparable with original_sr | -| double_sr| Train with double sample rate| Small improvement | -|replicate paper | Lower learning rate and fix bug in dataloader | Massive improvement! 
| - - - -![img.png](images/plot.png) - - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/utils/proc_dict_vqa.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/utils/proc_dict_vqa.py deleted file mode 100644 index 7113f5d5d9f59ea83061db2cb69c1e9881d0f3f7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/utils/proc_dict_vqa.py +++ /dev/null @@ -1,47 +0,0 @@ -# -------------------------------------------------------- -# mcan-vqa (Deep Modular Co-Attention Networks) -# Licensed under The MIT License [see LICENSE for details] -# Written by Yuhao Cui https://github.com/cuiyuhao1996 -# -------------------------------------------------------- - -import sys -sys.path.append('../') -from openvqa.utils.ans_punct import prep_ans -from openvqa.core.path_cfgs import PATH -import json - -path = PATH() - -# Loading answer word list -stat_ans_list = \ - json.load(open(path.RAW_PATH['vqa']['train-anno'], 'r'))['annotations'] + \ - json.load(open(path.RAW_PATH['vqa']['val-anno'], 'r'))['annotations'] - - -def ans_stat(stat_ans_list): - ans_to_ix = {} - ix_to_ans = {} - ans_freq_dict = {} - - for ans in stat_ans_list: - ans_proc = prep_ans(ans['multiple_choice_answer']) - if ans_proc not in ans_freq_dict: - ans_freq_dict[ans_proc] = 1 - else: - ans_freq_dict[ans_proc] += 1 - - ans_freq_filter = ans_freq_dict.copy() - for ans in ans_freq_dict: - if ans_freq_dict[ans] <= 8: - ans_freq_filter.pop(ans) - - for ans in ans_freq_filter: - ix_to_ans[ans_to_ix.__len__()] = ans - ans_to_ix[ans] = ans_to_ix.__len__() - - return ans_to_ix, ix_to_ans - -ans_to_ix, ix_to_ans = ans_stat(stat_ans_list) -print(ans_to_ix) -# print(ans_to_ix.__len__()) -json.dump([ans_to_ix, ix_to_ans], open('../openvqa/datasets/vqa/answer_dict.json', 'w')) diff --git a/spaces/CVPR/LIVE/thrust/CHANGELOG.md b/spaces/CVPR/LIVE/thrust/CHANGELOG.md deleted file mode 100644 index 5e845a81e6ff0a876dffd0c58136282e5ace4439..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/CHANGELOG.md +++ /dev/null @@ -1,1659 +0,0 @@ -# Thrust 1.9.10-1 (NVIDIA HPC SDK 20.7, CUDA Toolkit 11.1) - -## Summary - -Thrust 1.9.10-1 is the minor release accompanying the NVIDIA HPC SDK 20.7 release - and the CUDA Toolkit 11.1 release. - -## Bug Fixes - -- #1214, NVBug 200619442: Stop using `std::allocator` APIs deprecated in C++17. -- #1216, NVBug 200540293: Make `thrust::optional` work with Clang when used - with older libstdc++. -- #1207, NVBug 200618218: Don't force C++14 with older compilers that don't - support it. -- #1218: Wrap includes of `` and `` to avoid circular - inclusion with NVC++. - -# Thrust 1.9.10 (NVIDIA HPC SDK 20.5) - -## Summary - -Thrust 1.9.10 is the release accompanying the NVIDIA HPC SDK 20.5 release. -It adds CMake support for compilation with NVC++ and a number of minor bug fixes - for NVC++. -It also adds CMake `find_package` support, which replaces the broken 3rd-party - legacy `FindThrust.cmake` script. -C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated. -Starting with the upcoming 1.10.0 release, C++03 support will be dropped - entirely. - -## Breaking Changes - -- #1082: Thrust now checks that it is compatible with the version of CUB found - in your include path, generating an error if it is not. - If you are using your own version of CUB, it may be too old. - It is recommended to simply delete your own version of CUB and use the - version of CUB that comes with Thrust. -- #1089: C++03 and C++11 are deprecated. 
- Using these dialects will generate a compile-time warning. - These warnings can be suppressed by defining - `THRUST_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11 - deprecation warnings) or `THRUST_IGNORE_DEPRECATED_CPP11` (to suppress C++11 - deprecation warnings). - Suppression is only a short term solution. - We will be dropping support for C++03 in the 1.10.0 release and C++11 in the - near future. -- #1089: GCC < 5, Clang < 6, and MSVC < 2017 are deprecated. - Using these compilers will generate a compile-time warning. - These warnings can be suppressed by defining - `THRUST_IGNORE_DEPRECATED_COMPILER`. - Suppression is only a short term solution. - We will be dropping support for these compilers in the near future. - -## New Features - -- #1130: CMake `find_package` support. - This is significant because there is a legacy `FindThrust.cmake` script - authored by a third party in widespread use in the community which has a - bug in how it parses Thrust version numbers which will cause it to - incorrectly parse 1.9.10. - This script only handles the first digit of each part of the Thrust version - number correctly: for example, Thrust 17.17.17 would be interpreted as - Thrust 1.1.1701717. - You can find directions for using the new CMake `find_package` support and - migrating away from the legacy `FindThrust.cmake` [here](https://github.com/thrust/thrust/blob/master/thrust/cmake/README.md) -- #1129: Added `thrust::detail::single_device_tls_caching_allocator`, a - convenient way to get an MR caching allocator for device memory, which is - used by NVC++. - -## Other Enhancements - -- #1129: Refactored RDC handling in CMake to be a global option and not create - two targets for each example and test. - -## Bug Fixes - -- #1129: Fix the legacy `thrust::return_temporary_buffer` API to support - passing a size. - This was necessary to enable usage of Thrust caching MR allocators with - synchronous Thrust algorithms. - This change has allowed NVC++’s C++17 Parallel Algorithms implementation to - switch to use Thrust caching MR allocators for device temporary storage, - which gives a 2x speedup on large multi-GPU systems such as V100 and A100 - DGX where `cudaMalloc` is very slow. -- #1128: Respect `CUDA_API_PER_THREAD_DEFAULT_STREAM`. - Thanks to Rong Ou for this contribution. -- #1131: Fix the one-policy overload of `thrust::async::copy` to not copy the - policy, resolving use-afer-move issues. -- #1145: When cleaning up type names in `unittest::base_class_name`, only call - `std::string::replace` if we found the substring we are looking to replace. -- #1139: Don't use `cxx::__demangle` in NVC++. -- #1102: Don't use `thrust::detail::normal_distribution_nvcc` for Feta because - it uses `erfcinv`, a non-standard function that Feta doesn't have. - -# Thrust 1.9.9 (CUDA Toolkit 11.0) - -## Summary - -Thrust 1.9.9 adds support for NVC++, which uses Thrust to implement - GPU-accelerated C++17 Parallel Algorithms. -`thrust::zip_function` and `thrust::shuffle` were also added. -C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated. -Starting with the upcoming 1.10.0 release, C++03 support will be dropped - entirely. -All other deprecated platforms will be dropped in the near future. - -## Breaking Changes - -- #1082: Thrust now checks that it is compatible with the version of CUB found - in your include path, generating an error if it is not. - If you are using your own version of CUB, it may be too old. 
- It is recommended to simply delete your own version of CUB and use the - version of CUB that comes with Thrust. -- #1089: C++03 and C++11 are deprecated. - Using these dialects will generate a compile-time warning. - These warnings can be suppressed by defining - `THRUST_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11 - deprecation warnings) or `THRUST_IGNORE_DEPRECATED_CPP_11` (to suppress C++11 - deprecation warnings). - Suppression is only a short term solution. - We will be dropping support for C++03 in the 1.10.0 release and C++11 in the - near future. -- #1089: GCC < 5, Clang < 6, and MSVC < 2017 are deprecated. - Using these compilers will generate a compile-time warning. - These warnings can be suppressed by defining - `THRUST_IGNORE_DEPRECATED_COMPILER`. - Suppression is only a short term solution. - We will be dropping support for these compilers in the near future. - -## New Features - -- #1086: Support for NVC++ aka "Feta". - The most significant change is in how we use `__CUDA_ARCH__`. - Now, there are four macros that must be used: - - `THRUST_IS_DEVICE_CODE`, which should be used in an `if` statement around - device-only code. - - `THRUST_INCLUDE_DEVICE_CODE`, which should be used in an `#if` preprocessor - directive inside of the `if` statement mentioned in the prior bullet. - - `THRUST_IS_HOST_CODE`, which should be used in an `if` statement around - host-only code. - - `THRUST_INCLUDE_HOST_CODE`, which should be used in an `#if` preprocessor - directive inside of the `if` statement mentioned in the prior bullet. -- #1085: `thrust::shuffle`. - Thanks to Rory Mitchell for this contribution. -- #1029: `thrust::zip_function`, a facility for zipping functions that take N - parameters instead of a tuple of N parameters as `thrust::zip_iterator` - does. - Thanks to Ben Jude for this contribution. -- #1068: `thrust::system::cuda::managed_memory_pointer`, a universal memory - strongly typed pointer compatible with the ISO C++ Standard Library. - -## Other Enhancements - -- #1029: Thrust is now built and tested with NVCC warnings treated as errors. -- #1029: MSVC C++11 support. -- #1029: `THRUST_DEPRECATED` abstraction for generating compile-time - deprecation warning messages. -- #1029: `thrust::pointer::pointer_to(reference)`. -- #1070: Unit test for `thrust::inclusive_scan` with a user defined types. - Thanks to Conor Hoekstra for this contribution. - -## Bug Fixes - -- #1088: Allow `thrust::replace` to take functions that have non-`const` - `operator()`. -- #1094: Add missing `constexpr` to `par_t` constructors. - Thanks to Patrick Stotko for this contribution. -- #1077: Remove `__device__` from CUDA MR-based device allocators to fix - obscure "host function called from host device function" warning that occurs - when you use the new Thrust MR-based allocators. -- #1029: Remove inconsistently-used `THRUST_BEGIN`/`END_NS` macros. -- #1029: Fix C++ dialect detection on newer MSVC. -- #1029 Use `_Pragma`/`__pragma` instead of `#pragma` in macros. -- #1029: Replace raw `__cplusplus` checks with the appropriate Thrust macros. -- #1105: Add a missing `` include. -- #1103: Fix regression of `thrust::detail::temporary_allocator` with non-CUDA - back ends. -- #1111: Use Thrust's random number engine instead of `std::`s in device code. -- #1108: Get rid of a GCC 9 warning about deprecated generation of copy ctors. - -# Thrust 1.9.8-1 (NVIDIA HPC SDK 20.3) - -## Summary - -Thrust 1.9.8-1 is a variant of 1.9.8 accompanying the NVIDIA HPC SDK 20.3 - release. 
-It contains modifications necessary to serve as the implementation of NVC++'s - GPU-accelerated C++17 Parallel Algorithms when using the CUDA Toolkit 11.0 - release. - -# Thrust 1.9.8 (CUDA Toolkit 11.0 Early Access) - -## Summary - -Thrust 1.9.8, which is included in the CUDA Toolkit 11.0 release, removes - Thrust's internal derivative of CUB, upstreams all relevant changes too CUB, - and adds CUB as a Git submodule. -It will now be necessary to do `git clone --recursive` when checking out - Thrust, and to update the CUB submodule when pulling in new Thrust changes. -Additionally, CUB is now included as a first class citizen in the CUDA toolkit. -Thrust 1.9.8 also fixes bugs preventing most Thrust algorithms from working - with more than `2^31-1` elements. -Now, `thrust::reduce`, `thrust::*_scan`, and related algorithms (aka most of - Thrust) work with large element counts. - -## Breaking Changes - -- Thrust will now use the version of CUB in your include path instead of its own - internal copy. - If you are using your own version of CUB, it may be older and incompatible - with Thrust. - It is recommended to simply delete your own version of CUB and use the - version of CUB that comes with Thrust. - -## Other Enhancements - -- Refactor Thrust and CUB to support 64-bit indices in most algorithms. - In most cases, Thrust now selects between kernels that use 32-bit indices and - 64-bit indices at runtime depending on the size of the input. - This means large element counts work, but small element counts do not have to - pay for the register usage of 64-bit indices if they are not needed. - Now, `thrust::reduce`, `thrust::*_scan`, and related algorithms (aka most of - Thrust) work with more than `2^31-1` elements. - Notably, `thrust::sort` is still limited to less than `2^31-1` elements. -- CUB is now a submodule and the internal copy of CUB has been removed. -- #1051: Stop specifying the `__launch_bounds__` minimum blocks parameter - because it messes up register allocation and increases register pressure, - and we don't actually know at compile time how many blocks we will use - (aside from single tile kernels). - -## Bug Fixes - -- #1020: After making a CUDA API call, always clear the global CUDA error state - by calling `cudaGetLastError`. -- #1021: Avoid calling destroy in the destructor of a Thrust vector if the - vector is empty. -- #1046: Actually throw `thrust::bad_alloc` when `thrust::system::cuda::malloc` - fails instead of just constructing a temporary and doing nothing with it. -- Add missing copy constructor or copy assignment operator to all classes that - GCC 9's `-Wdeprecated-copy` complains about -- Add missing move operations to `thrust::system::cuda::vector`. -- #1015: Check that the backend is CUDA before using CUDA-specifics in - `thrust::detail::temporary_allocator`. - Thanks to Hugh Winkler for this contribution. -- #1055: More correctly detect the presence of aligned/sized `new`/`delete`. -- #1043: Fix ill-formed specialization of `thrust::system::is_error_code_enum` - for `thrust::event_errc`. - Thanks to Toru Niina for this contribution. -- #1027: Add tests for `thrust::tuple_for_each` and `thrust::tuple_subset`. - Thanks to Ben Jude for this contribution. -- #1027: Use correct macro in `thrust::tuple_for_each`. - Thanks to Ben Jude for this contribution. -- #1026: Use correct MSVC version formatting in CMake. - Thanks to Ben Jude for this contribution. -- Workaround an NVCC issue with type aliases with template template arguments - containing a parameter pack. 
-- Remove unused functions from the CUDA backend which call slow CUDA attribute - query APIs. -- Replace `CUB_RUNTIME_FUNCTION` with `THRUST_RUNTIME_FUNCTION`. -- Correct typo in `thrust::transform` documentation. - Thanks to Eden Yefet for this contribution. - -## Known Issues - -- `thrust::sort` remains limited to `2^31-1` elements for now. - -# Thrust 1.9.7-1 (CUDA Toolkit 10.2 for Tegra) - -## Summary - -Thrust 1.9.7-1 is a minor release accompanying the CUDA Toolkit 10.2 release - for Tegra. -It is nearly identical to 1.9.7. - -## Bug Fixes - -- Remove support for GCC's broken nodiscard-like attribute. - -# Thrust 1.9.7 (CUDA Toolkit 10.2) - -## Summary - -Thrust 1.9.7 is a minor release accompanying the CUDA Toolkit 10.2 release. -Unfortunately, although the version and patch numbers are identical, one bug - fix present in Thrust 1.9.7 (NVBug 2646034: Fix incorrect dependency handling - for stream acquisition in `thrust::future`) was not included in the CUDA - Toolkit 10.2 preview release for AArch64 SBSA. -The tag `cuda-10.2aarch64sbsa` contains the exact version of Thrust present - in the CUDA Toolkit 10.2 preview release for AArch64 SBSA. - -## Bug Fixes - -- #967, NVBug 2448170: Fix the CUDA backend `thrust::for_each` so that it - supports large input sizes with 64-bit indices. -- NVBug 2646034: Fix incorrect dependency handling for stream acquisition in - `thrust::future`. - - Not present in the CUDA Toolkit 10.2 preview release for AArch64 SBSA. -- #968, NVBug 2612102: Fix the `thrust::mr::polymorphic_adaptor` to actually - use its template parameter. - -# Thrust 1.9.6-1 (NVIDIA HPC SDK 20.3) - -## Summary - -Thrust 1.9.6-1 is a variant of 1.9.6 accompanying the NVIDIA HPC SDK 20.3 - release. -It contains modifications necessary to serve as the implementation of NVC++'s - GPU-accelerated C++17 Parallel Algorithms when using the CUDA Toolkit 10.1 - Update 2 release. - -# Thrust 1.9.6 (CUDA Toolkit 10.1 Update 2) - -## Summary - -Thrust 1.9.6 is a minor release accompanying the CUDA Toolkit 10.1 Update 2 - release. - -## Bug Fixes - -- NVBug 2509847: Inconsistent alignment of `thrust::complex` -- NVBug 2586774: Compilation failure with Clang + older libstdc++ that doesn't - have `std::is_trivially_copyable` -- NVBug 200488234: CUDA header files contain Unicode characters which leads - compiling errors on Windows -- #949, #973, NVBug 2422333, NVBug 2522259, NVBug 2528822: - `thrust::detail::aligned_reinterpret_cast` must be annotated with - `__host__ __device__`. -- NVBug 2599629: Missing include in the OpenMP sort implementation -- NVBug 200513211: Truncation warning in test code under VC142 - -# Thrust 1.9.5 (CUDA Toolkit 10.1 Update 1) - -## Summary - -Thrust 1.9.5 is a minor release accompanying the CUDA Toolkit 10.1 Update 1 - release. - -## Bug Fixes - -- NVBug 2502854: Fixed assignment of - `thrust::device_vector>` between host and device. - -# Thrust 1.9.4 (CUDA Toolkit 10.1) - -## Summary - -Thrust 1.9.4 adds asynchronous interfaces for parallel algorithms, a new - allocator system including caching allocators and unified memory support, as - well as a variety of other enhancements, mostly related to - C++11/C++14/C++17/C++20 support. -The new asynchronous algorithms in the `thrust::async` namespace return - `thrust::event` or `thrust::future` objects, which can be waited upon to - synchronize with the completion of the parallel operation. - -## Breaking Changes - -Synchronous Thrust algorithms now block until all of their operations have - completed. 
-Use the new asynchronous Thrust algorithms for non-blocking behavior. - -## New Features - -- `thrust::event` and `thrust::future`, uniquely-owned asynchronous handles - consisting of a state (ready or not ready), content (some value; for - `thrust::future` only), and an optional set of objects that should be - destroyed only when the future's value is ready and has been consumed. - - The design is loosely based on C++11's `std::future`. - - They can be `.wait`'d on, and the value of a future can be waited on and - retrieved with `.get` or `.extract`. - - Multiple `thrust::event`s and `thrust::future`s can be combined with - `thrust::when_all`. - - `thrust::future`s can be converted to `thrust::event`s. - - Currently, these primitives are only implemented for the CUDA backend and - are C++11 only. -- New asynchronous algorithms that return `thrust::event`/`thrust::future`s, - implemented as C++20 range style customization points: - - `thrust::async::reduce`. - - `thrust::async::reduce_into`, which takes a target location to store the - reduction result into. - - `thrust::async::copy`, including a two-policy overload that allows - explicit cross system copies which execution policy properties can be - attached to. - - `thrust::async::transform`. - - `thrust::async::for_each`. - - `thrust::async::stable_sort`. - - `thrust::async::sort`. - - By default the asynchronous algorithms use the new caching allocators. - Deallocation of temporary storage is deferred until the destruction of - the returned `thrust::future`. The content of `thrust::future`s is - stored in either device or universal memory and transferred to the host - only upon request to prevent unnecessary data migration. - - Asynchronous algorithms are currently only implemented for the CUDA - system and are C++11 only. -- `exec.after(f, g, ...)`, a new execution policy method that takes a set of - `thrust::event`/`thrust::future`s and returns an execution policy that - operations on that execution policy should depend upon. -- New logic and mindset for the type requirements for cross-system sequence - copies (currently only used by `thrust::async::copy`), based on: - - `thrust::is_contiguous_iterator` and `THRUST_PROCLAIM_CONTIGUOUS_ITERATOR` - for detecting/indicating that an iterator points to contiguous storage. - - `thrust::is_trivially_relocatable` and - `THRUST_PROCLAIM_TRIVIALLY_RELOCATABLE` for detecting/indicating that a - type is `memcpy`able (based on principles from - [P1144](https://wg21.link/P1144)). - - The new approach reduces buffering, increases performance, and increases - correctness. - - The fast path is now enabled when copying CUDA `__half` and vector types with - `thrust::async::copy`. -- All Thrust synchronous algorithms for the CUDA backend now actually - synchronize. Previously, any algorithm that did not allocate temporary - storage (counterexample: `thrust::sort`) and did not have a - computation-dependent result (counterexample: `thrust::reduce`) would - actually be launched asynchronously. Additionally, synchronous algorithms - that allocated temporary storage would become asynchronous if a custom - allocator was supplied that did not synchronize on allocation/deallocation, - unlike `cudaMalloc`/`cudaFree`. So, now `thrust::for_each`, - `thrust::transform`, `thrust::sort`, etc are truly synchronous. In some - cases this may be a performance regression; if you need asynchrony, use the - new asynchronous algorithms. -- Thrust's allocator framework has been rewritten. 
It now uses a memory - resource system, similar to C++17's `std::pmr` but supporting static - polymorphism. Memory resources are objects that allocate untyped storage and - allocators are cheap handles to memory resources in this new model. The new - facilities live in ``. - - `thrust::mr::memory_resource`, the memory resource base class, - which takes a (possibly tagged) pointer to `void` type as a parameter. - - `thrust::mr::allocator`, an allocator backed by a memory - resource object. - - `thrust::mr::polymorphic_adaptor_resource`, a type-erased memory - resource adaptor. - - `thrust::mr::polymorphic_allocator`, a C++17-style polymorphic allocator - backed by a type-erased memory resource object. - - New tunable C++17-style caching memory resources, - `thrust::mr::(disjoint_)?(un)?synchronized_pool_resource`, designed to - cache both small object allocations and large repetitive temporary - allocations. The disjoint variants use separate storage for management of - the pool, which is necessary if the memory being allocated cannot be - accessed on the host (e.g. device memory). - - System-specific allocators were rewritten to use the new memory resource - framework. - - New `thrust::device_memory_resource` for allocating device memory. - - New `thrust::universal_memory_resource` for allocating memory that can be - accessed from both the host and device (e.g. `cudaMallocManaged`). - - New `thrust::universal_host_pinned_memory_resource` for allocating memory - that can be accessed from the host and the device but always resides in - host memory (e.g. `cudaMallocHost`). - - `thrust::get_per_device_resource` and `thrust::per_device_allocator`, which - lazily create and retrieve a per-device singleton memory resource. - - Rebinding mechanisms (`rebind_traits` and `rebind_alloc`) for - `thrust::allocator_traits`. - - `thrust::device_make_unique`, a factory function for creating a - `std::unique_ptr` to a newly allocated object in device memory. - - ``, a C++11 implementation of the C++17 - uninitialized memory algorithms. - - `thrust::allocate_unique` and friends, based on the proposed C++23 - [`std::allocate_unique`](https://wg21.link/P0211). -- New type traits and metaprogramming facilities. Type traits are slowly being - migrated out of `thrust::detail::` and ``; their new home - will be `thrust::` and ``. - - `thrust::is_execution_policy`. - - `thrust::is_operator_less_or_greater_function_object`, which detects - `thrust::less`, `thrust::greater`, `std::less`, and `std::greater`. - - `thrust::is_operator_plus_function_object``, which detects `thrust::plus` - and `std::plus`. - - `thrust::remove_cvref(_t)?`, a C++11 implementation of C++20's - `thrust::remove_cvref(_t)?`. - - `thrust::void_t`, and various other new type traits. - - `thrust::integer_sequence` and friends, a C++11 implementation of C++20's - `std::integer_sequence` - - `thrust::conjunction`, `thrust::disjunction`, and `thrust::disjunction`, a - C++11 implementation of C++17's logical metafunctions. - - Some Thrust type traits (such as `thrust::is_constructible`) have been - redefined in terms of C++11's type traits when they are available. -- ``, new `std::tuple` algorithms: - - `thrust::tuple_transform`. - - `thrust::tuple_for_each`. - - `thrust::tuple_subset`. -- Miscellaneous new `std::`-like facilities: - - `thrust::optional`, a C++11 implementation of C++17's `std::optional`. - - `thrust::addressof`, an implementation of C++11's `std::addressof`. 
- - `thrust::next` and `thrust::prev`, an implementation of C++11's `std::next` - and `std::prev`. - - `thrust::square`, a `` style unary function object that - multiplies its argument by itself. - - `` and `thrust::numeric_limits`, a customized version of - `` and `std::numeric_limits`. -- ``, new general purpose preprocessor facilities: - - `THRUST_PP_CAT[2-5]`, concatenates two to five tokens. - - `THRUST_PP_EXPAND(_ARGS)?`, performs double expansion. - - `THRUST_PP_ARITY` and `THRUST_PP_DISPATCH`, tools for macro overloading. - - `THRUST_PP_BOOL`, boolean conversion. - - `THRUST_PP_INC` and `THRUST_PP_DEC`, increment/decrement. - - `THRUST_PP_HEAD`, a variadic macro that expands to the first argument. - - `THRUST_PP_TAIL`, a variadic macro that expands to all its arguments after - the first. - - `THRUST_PP_IIF`, bitwise conditional. - - `THRUST_PP_COMMA_IF`, and `THRUST_PP_HAS_COMMA`, facilities for adding and - detecting comma tokens. - - `THRUST_PP_IS_VARIADIC_NULLARY`, returns true if called with a nullary - `__VA_ARGS__`. - - `THRUST_CURRENT_FUNCTION`, expands to the name of the current function. -- New C++11 compatibility macros: - - `THRUST_NODISCARD`, expands to `[[nodiscard]]` when available and the best - equivalent otherwise. - - `THRUST_CONSTEXPR`, expands to `constexpr` when available and the best - equivalent otherwise. - - `THRUST_OVERRIDE`, expands to `override` when available and the best - equivalent otherwise. - - `THRUST_DEFAULT`, expands to `= default;` when available and the best - equivalent otherwise. - - `THRUST_NOEXCEPT`, expands to `noexcept` when available and the best - equivalent otherwise. - - `THRUST_FINAL`, expands to `final` when available and the best equivalent - otherwise. - - `THRUST_INLINE_CONSTANT`, expands to `inline constexpr` when available and - the best equivalent otherwise. -- ``, new C++11-only type deduction helpers: - - `THRUST_DECLTYPE_RETURNS*`, expand to function definitions with suitable - conditional `noexcept` qualifiers and trailing return types. - - `THRUST_FWD(x)`, expands to `::std::forward(x)`. - - `THRUST_MVCAP`, expands to a lambda move capture. - - `THRUST_RETOF`, expands to a decltype computing the return type of an - invocable. -- New CMake build system. - -## New Examples - -- `mr_basic` demonstrates how to use the new memory resource allocator system. - -## Other Enhancements - -- Tagged pointer enhancements: - - New `thrust::pointer_traits` specialization for `void const*`. - - `nullptr` support to Thrust tagged pointers. - - New `explicit operator bool` for Thrust tagged pointers when using C++11 - for `std::unique_ptr` interoperability. - - Added `thrust::reinterpret_pointer_cast` and `thrust::static_pointer_cast` - for casting Thrust tagged pointers. -- Iterator enhancements: - - `thrust::iterator_system` is now SFINAE friendly. - - Removed cv qualifiers from iterator types when using - `thrust::iterator_system`. -- Static assert enhancements: - - New `THRUST_STATIC_ASSERT_MSG`, takes an optional string constant to be - used as the error message when possible. - - Update `THRUST_STATIC_ASSERT(_MSG)` to use C++11's `static_assert` when - it's available. - - Introduce a way to test for static assertions. -- Testing enhancements: - - Additional scalar and sequence types, including non-builtin types and - vectors with unified memory allocators, have been added to the list of - types used by generic unit tests. 
- - The generation of random input data has been improved to increase the range - of values used and catch more corner cases. - - New `unittest::truncate_to_max_representable` utility for avoiding the - generation of ranges that cannot be represented by the underlying element - type in generic unit test code. - - The test driver now synchronizes with CUDA devices and check for errors - after each test, when switching devices, and after each raw kernel launch. - - The `warningtester` uber header is now compiled with NVCC to avoid needing - to disable CUDA-specific code with the preprocessor. - - Fixed the unit test framework's `ASSERT_*` to print `char`s as `int`s. - - New `DECLARE_INTEGRAL_VARIABLE_UNITTEST` test declaration macro. - - New `DECLARE_VARIABLE_UNITTEST_WITH_TYPES_AND_NAME` test declaration macro. - - `thrust::system_error` in the CUDA backend now print out its `cudaError_t` - enumerator in addition to the diagnostic message. - - Stopped using conditionally signed types like `char`. - -## Bug Fixes - -- #897, NVBug 2062242: Fix compilation error when using `__device__` lambdas - with `thrust::reduce` on MSVC. -- #908, NVBug 2089386: Static assert that `thrust::generate`/`thrust::fill` - isn't operating on const iterators. -- #919 Fix compilation failure with `thrust::zip_iterator` and - `thrust::complex`. -- #924, NVBug 2096679, NVBug 2315990: Fix dispatch for the CUDA backend's - `thrust::reduce` to use two functions (one with the pragma for disabling - exec checks, one with `THRUST_RUNTIME_FUNCTION`) instead of one. This fixes - a regression with device compilation that started in CUDA Toolkit 9.2. -- #928, NVBug 2341455: Add missing `__host__ __device__` annotations to a - `thrust::complex::operator=` to satisfy GoUDA. -- NVBug 2094642: Make `thrust::vector_base::clear` not depend on the element - type being default constructible. -- NVBug 2289115: Remove flaky `simple_cuda_streams` example. -- NVBug 2328572: Add missing `thrust::device_vector` constructor that takes an - allocator parameter. -- NVBug 2455740: Update the `range_view` example to not use device-side launch. -- NVBug 2455943: Ensure that sized unit tests that use - `thrust::counting_iterator` perform proper truncation. -- NVBug 2455952: Refactor questionable `thrust::copy_if` unit tests. - -# Thrust 1.9.3 (CUDA Toolkit 10.0) - -## Summary - -Thrust 1.9.3 unifies and integrates CUDA Thrust and GitHub Thrust. - -## Bug Fixes - -- #725, #850, #855, #859, #860: Unify the `thrust::iter_swap` interface and fix - `thrust::device_reference` swapping. -- NVBug 2004663: Add a `data` method to `thrust::detail::temporary_array` and - refactor temporary memory allocation in the CUDA backend to be exception - and leak safe. -- #886, #894, #914: Various documentation typo fixes. -- #724: Provide `NVVMIR_LIBRARY_DIR` environment variable to NVCC. -- #878: Optimize `thrust::min/max_element` to only use - `thrust::detail::get_iterator_value` for non-numeric types. -- #899: Make `thrust::cuda::experimental::pinned_allocator`'s comparison - operators `const`. -- NVBug 2092152: Remove all includes of ``. -- #911: Fix default comparator element type for `thrust::merge_by_key`. - -## Acknowledgments - -- Thanks to Andrew Corrigan for contributing fixes for swapping interfaces. -- Thanks to Francisco Facioni for contributing optimizations for - `thrust::min/max_element`. - -# Thrust 1.9.2 (CUDA Toolkit 9.2) - -## Summary - -Thrust 1.9.2 brings a variety of performance enhancements, bug fixes and test - improvements. 
-CUB 1.7.5 was integrated, enhancing the performance of `thrust::sort` on - small data types and `thrust::reduce`. -Changes were applied to `complex` to optimize memory access. -Thrust now compiles with compiler warnings enabled and treated as errors. -Additionally, the unit test suite and framework was enhanced to increase - coverage. - -## Breaking Changes - -- The `fallback_allocator` example was removed, as it was buggy and difficult - to support. - -## New Features - -- ``, utilities for memory alignment: - - `thrust::aligned_reinterpret_cast`. - - `thrust::aligned_storage_size`, which computes the amount of storage needed - for an object of a particular size and alignment. - - `thrust::alignment_of`, a C++03 implementation of C++11's - `std::alignment_of`. - - `thrust::aligned_storage`, a C++03 implementation of C++11's - `std::aligned_storage`. - - `thrust::max_align_t`, a C++03 implementation of C++11's - `std::max_align_t`. - -## Bug Fixes - -- NVBug 200385527, NVBug 200385119, NVBug 200385113, NVBug 200349350, NVBug - 2058778: Various compiler warning issues. -- NVBug 200355591: `thrust::reduce` performance issues. -- NVBug 2053727: Fixed an ADL bug that caused user-supplied `allocate` to be - overlooked but `deallocate` to be called with GCC <= 4.3. -- NVBug 1777043: Fixed `thrust::complex` to work with `thrust::sequence`. - -# Thrust 1.9.1-2 (CUDA Toolkit 9.1) - -## Summary - -Thrust 1.9.1-2 integrates version 1.7.4 of CUB and introduces a new CUDA backend - for `thrust::reduce` based on CUB. - -## Bug Fixes - -- NVBug 1965743: Remove unnecessary static qualifiers. -- NVBug 1940974: Fix regression causing a compilation error when using - `thrust::merge_by_key` with `thrust::constant_iterator`s. -- NVBug 1904217: Allow callables that take non-const refs to be used with - `thrust::reduce` and `thrust::*_scan`. - -# Thrust 1.9.0-5 (CUDA Toolkit 9.0) - -## Summary - -Thrust 1.9.0-5 replaces the original CUDA backend (bulk) with a new one - written using CUB, a high performance CUDA collectives library. -This brings a substantial performance improvement to the CUDA backend across - the board. - -## Breaking Changes - -- Any code depending on CUDA backend implementation details will likely be - broken. - -## New Features - -- New CUDA backend based on CUB which delivers substantially higher performance. -- `thrust::transform_output_iterator`, a fancy iterator that applies a function - to the output before storing the result. - -## New Examples - -- `transform_output_iterator` demonstrates use of the new fancy iterator - `thrust::transform_output_iterator`. - -## Other Enhancements - -- When C++11 is enabled, functors do not have to inherit from - `thrust::(unary|binary)_function` anymore to be used with - `thrust::transform_iterator`. -- Added C++11 only move constructors and move assignment operators for - `thrust::detail::vector_base`-based classes, e.g. `thrust::host_vector`, - `thrust::device_vector`, and friends. - -## Bug Fixes - -- `sin(thrust::complex)` no longer has precision loss to float. - -## Acknowledgments - -- Thanks to Manuel Schiller for contributing a C++11 based enhancement - regarding the deduction of functor return types, improving the performance - of `thrust::unique` and implementing `thrust::transform_output_iterator`. -- Thanks to Thibault Notargiacomo for the implementation of move semantics for - the `thrust::vector_base`-based classes. -- Thanks to Duane Merrill for developing CUB and helping to integrate it into - Thrust's backend. 
- -# Thrust 1.8.3 (CUDA Toolkit 8.0) - -## Summary - -Thrust 1.8.3 is a small bug fix release. - -## New Examples - -- `range_view` demonstrates the use of a view (a non-owning wrapper for an - iterator range with a container-like interface). - -## Bug Fixes - -- `thrust::(min|max|minmax)_element` can now accept raw device pointers when - an explicit device execution policy is used. -- `thrust::clear` operations on vector types no longer requires the element - type to have a default constructor. - -# Thrust 1.8.2 (CUDA Toolkit 7.5) - -## Summary - -Thrust 1.8.2 is a small bug fix release. - -## Bug Fixes - -- Avoid warnings and errors concerning user functions called from - `__host__ __device__` functions. -- #632: Fix an error in `thrust::set_intersection_by_key` with the CUDA backend. -- #651: `thrust::copy` between host and device now accepts execution policies - with streams attached, i.e. `thrust::::cuda::par.on(stream)`. -- #664: `thrust::for_each` and algorithms based on it no longer ignore streams - attached to execution policys. - -## Known Issues - -- #628: `thrust::reduce_by_key` for the CUDA backend fails for Compute - Capability 5.0 devices. - -# Thrust 1.8.1 (CUDA Toolkit 7.0) - -## Summary - -Thrust 1.8.1 is a small bug fix release. - -## Bug Fixes - -- #615, #620: Fixed `thrust::for_each` and `thrust::reduce` to no longer fail on - large inputs. - -## Known Issues - -- #628: `thrust::reduce_by_key` for the CUDA backend fails for Compute - Capability 5.0 devices. - -# Thrust 1.8.0 - -## Summary - -Thrust 1.8.0 introduces support for algorithm invocation from CUDA device - code, support for CUDA streams, and algorithm performance improvements. -Users may now invoke Thrust algorithms from CUDA device code, providing a - parallel algorithms library to CUDA programmers authoring custom kernels, as - well as allowing Thrust programmers to nest their algorithm calls within - functors. -The `thrust::seq` execution policy allows users to require sequential algorithm - execution in the calling thread and makes a sequential algorithms library - available to individual CUDA threads. -The `.on(stream)` syntax allows users to request a CUDA stream for kernels - launched during algorithm execution. -Finally, new CUDA algorithm implementations provide substantial performance - improvements. - -## New Features - -- Algorithms in CUDA Device Code: - - Thrust algorithms may now be invoked from CUDA `__device__` and - `__host__` __device__ functions. - Algorithms invoked in this manner must be invoked with an execution - policy as the first parameter. - The following execution policies are supported in CUDA __device__ code: - - `thrust::seq` - - `thrust::cuda::par` - - `thrust::device`, when THRUST_DEVICE_SYSTEM == THRUST_DEVICE_SYSTEM_CUDA. - - Device-side algorithm execution may not be parallelized unless CUDA Dynamic - Parallelism is available. -- Execution Policies: - - CUDA Streams - - The `thrust::cuda::par.on(stream)` syntax allows users to request that - CUDA kernels launched during algorithm execution should occur on a given - stream. - - Algorithms executed with a CUDA stream in this manner may still - synchronize with other streams when allocating temporary storage or - returning results to the CPU. - - `thrust::seq`, which allows users to require that an algorithm execute - sequentially in the calling thread. -- `thrust::complex`, a complex number data type. - -## New Examples - -- simple_cuda_streams demonstrates how to request a CUDA stream during - algorithm execution. 
-- async_reduce demonstrates ways to achieve algorithm invocations which are - asynchronous with the calling thread. - -## Other Enhancements - -- CUDA sort performance for user-defined types is 300% faster on Tesla K20c for - large problem sizes. -- CUDA merge performance is 200% faster on Tesla K20c for large problem sizes. -- CUDA sort performance for primitive types is 50% faster on Tesla K20c for - large problem sizes. -- CUDA reduce_by_key performance is 25% faster on Tesla K20c for large problem - sizes. -- CUDA scan performance is 15% faster on Tesla K20c for large problem sizes. -- fallback_allocator example is simpler. - -## Bug Fixes - -- #364: Iterators with unrelated system tags may be used with algorithms invoked - with an execution policy -- #371: Do not redefine `__CUDA_ARCH__`. -- #379: Fix crash when dereferencing transform_iterator on the host. -- #391: Avoid use of uppercase variable names. -- #392: Fix `thrust::copy` between `cusp::complex` and `std::complex`. -- #396: Program compiled with gcc < 4.3 hangs during comparison sort. -- #406: `fallback_allocator.cu` example checks device for unified addressing support. -- #417: Avoid using `std::less` in binary search algorithms. -- #418: Avoid various warnings. -- #443: Including version.h no longer configures default systems. -- #578: NVCC produces warnings when sequential algorithms are used with CPU systems. - -## Known Issues - -- When invoked with primitive data types, thrust::sort, thrust::sort_by_key, - thrust::stable_sort, & thrust::stable_sort_by_key may -- Sometimes linking fails when compiling with `-rdc=true` with NVCC. -- The CUDA implementation of thrust::reduce_by_key incorrectly outputs the last - element in a segment of equivalent keys instead of the first. - -## Acknowledgments - -- Thanks to Sean Baxter for contributing faster CUDA reduce, merge, and scan - implementations. -- Thanks to Duane Merrill for contributing a faster CUDA radix sort implementation. -- Thanks to Filipe Maia for contributing the implementation of thrust::complex. - -# Thrust 1.7.2 (CUDA Toolkit 6.5) - -## Summary - -Thrust 1.7.2 is a minor bug fix release. - -## Bug Fixes - -- Avoid use of `std::min` in generic find implementation. - -# Thrust 1.7.1 (CUDA Toolkit 6.0) - -## Summary - -Thrust 1.7.1 is a minor bug fix release. - -## Bug Fixes - -- Eliminate identifiers in `set_operations.cu` example with leading underscore. -- Eliminate unused variable warning in CUDA `reduce_by_key` implementation. -- Avoid deriving function objects from `std::unary_function` and - `std::binary_function`. - -# Thrust 1.7.0 (CUDA Toolkit 5.5) - -## Summary - -Thrust 1.7.0 introduces a new interface for controlling algorithm execution as - well as several new algorithms and performance improvements. -With this new interface, users may directly control how algorithms execute as - well as details such as the allocation of temporary storage. -Key/value versions of thrust::merge and the set operation algorithms have been - added, as well stencil versions of partitioning algorithms. -thrust::tabulate has been introduced to tabulate the values of functions taking - integers. -For 32b types, new CUDA merge and set operations provide 2-15x faster - performance while a new CUDA comparison sort provides 1.3-4x faster - performance. -Finally, a new TBB reduce_by_key implementation provides 80% faster - performance. 
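The 1.7.0 summary above introduces the execution-policy interface for directly controlling how an algorithm executes. A minimal sketch, assuming a CUDA build and omitting error checking: a raw pointer from `cudaMalloc` is passed to `thrust::sort` together with the `thrust::device` policy, so no `thrust::device_ptr` wrapper is needed.

```cpp
// Sketch of the explicit execution-policy interface (CUDA build assumed,
// error checking omitted). The policy argument selects how the algorithm runs.
#include <thrust/execution_policy.h>
#include <thrust/sort.h>
#include <cuda_runtime.h>

int main()
{
    const int n = 1 << 20;
    int* raw = nullptr;
    cudaMalloc(&raw, n * sizeof(int));
    // ... fill `raw` with data on the device ...

    // No thrust::device_ptr wrapper is needed; thrust::device requests CUDA execution.
    thrust::sort(thrust::device, raw, raw + n);

    cudaFree(raw);
    return 0;
}
```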
- -## Breaking Changes - -- Dispatch: - - Custom user backend systems' tag types must now inherit from the - corresponding system's execution_policy template (e.g. - thrust::cuda::execution_policy) instead of the tag struct (e.g. - thrust::cuda::tag). Otherwise, algorithm specializations will silently go - unfound during dispatch. See examples/minimal_custom_backend.cu and - examples/cuda/fallback_allocator.cu for usage examples. - - thrust::advance and thrust::distance are no longer dispatched based on - iterator system type and thus may no longer be customized. -- Iterators: - - iterator_facade and iterator_adaptor's Pointer template parameters have - been eliminated. - - iterator_adaptor has been moved into the thrust namespace (previously - thrust::experimental::iterator_adaptor). - - iterator_facade has been moved into the thrust namespace (previously - thrust::experimental::iterator_facade). - - iterator_core_access has been moved into the thrust namespace (previously - thrust::experimental::iterator_core_access). - - All iterators' nested pointer typedef (the type of the result of - operator->) is now void instead of a pointer type to indicate that such - expressions are currently impossible. - - Floating point counting_iterators' nested difference_type typedef is now a - signed integral type instead of a floating point type. -- Other: - - normal_distribution has been moved into the thrust::random namespace - (previously thrust::random::experimental::normal_distribution). - - Placeholder expressions may no longer include the comma operator. - -## New Features -- Execution Policies: - - Users may directly control the dispatch of algorithm invocations with - optional execution policy arguments. - For example, instead of wrapping raw pointers allocated by cudaMalloc with - thrust::device_ptr, the thrust::device execution_policy may be passed as - an argument to an algorithm invocation to enable CUDA execution. - - The following execution policies are supported in this version: - - `thrust::host` - - `thrust::device` - - `thrust::cpp::par` - - `thrust::cuda::par` - - `thrust::omp::par` - - `thrust::tbb::par` -- Algorithms: - - `thrust::merge_by_key` - - `thrust::partition` with stencil - - `thrust::partition_copy` with stencil - - `thrust::set_difference_by_key` - - `thrust::set_intersection_by_key` - - `thrust::set_symmetric_difference_by_key` - - `thrust::set_union_by_key` - - `thrust::stable_partition with stencil` - - `thrust::stable_partition_copy with stencil` - - `thrust::tabulate` -- Memory Allocation: - - `thrust::malloc` - - `thrust::free` - - `thrust::get_temporary_buffer` - - `thrust::return_temporary_buffer` - -## New Examples - -- uninitialized_vector demonstrates how to use a custom allocator to avoid the - automatic initialization of elements in thrust::device_vector. - -## Other Enhancements - -- Authors of custom backend systems may manipulate arbitrary state during - algorithm dispatch by incorporating it into their execution_policy parameter. -- Users may control the allocation of temporary storage during algorithm - execution by passing standard allocators as parameters via execution policies - such as thrust::device. -- THRUST_DEVICE_SYSTEM_CPP has been added as a compile-time target for the - device backend. -- CUDA merge performance is 2-15x faster. -- CUDA comparison sort performance is 1.3-4x faster. -- CUDA set operation performance is 1.5-15x faster. -- TBB reduce_by_key performance is 80% faster. -- Several algorithms have been parallelized with TBB. 
-- Support for user allocators in vectors has been improved. -- The sparse_vector example is now implemented with merge_by_key instead of - sort_by_key. -- Warnings have been eliminated in various contexts. -- Warnings about __host__ or __device__-only functions called from __host__ - __device__ functions have been eliminated in various contexts. -- Documentation about algorithm requirements have been improved. -- Simplified the minimal_custom_backend example. -- Simplified the cuda/custom_temporary_allocation example. -- Simplified the cuda/fallback_allocator example. - -## Bug Fixes - -- #248: Fix broken `thrust::counting_iterator` behavior with OpenMP. -- #231, #209: Fix set operation failures with CUDA. -- #187: Fix incorrect occupancy calculation with CUDA. -- #153: Fix broken multi GPU behavior with CUDA. -- #142: Eliminate warning produced by `thrust::random::taus88` and MSVC 2010. -- #208: Correctly initialize elements in temporary storage when necessary. -- #16: Fix compilation error when sorting bool with CUDA. -- #10: Fix ambiguous overloads of `thrust::reinterpret_tag`. - -## Known Issues - -- GCC 4.3 and lower may fail to dispatch thrust::get_temporary_buffer correctly - causing infinite recursion in examples such as - cuda/custom_temporary_allocation. - -## Acknowledgments - -- Thanks to Sean Baxter, Bryan Catanzaro, and Manjunath Kudlur for contributing - a faster merge implementation for CUDA. -- Thanks to Sean Baxter for contributing a faster set operation implementation - for CUDA. -- Thanks to Cliff Woolley for contributing a correct occupancy calculation - algorithm. - -# Thrust 1.6.0 - -## Summary - -Thrust 1.6.0 provides an interface for customization and extension and a new - backend system based on the Threading Building Blocks library. -With this new interface, programmers may customize the behavior of specific - algorithms as well as control the allocation of temporary storage or invent - entirely new backends. -These enhancements also allow multiple different backend systems - such as CUDA and OpenMP to coexist within a single program. -Support for TBB allows Thrust programs to integrate more naturally into - applications which may already employ the TBB task scheduler. 
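The 1.6.0 summary above mentions the new TBB backend. A hedged sketch of one way to target it: the device system is selected at compile time (the `-DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_TBB` flag and TBB linking are assumptions about the build setup, not something stated in the notes), after which ordinary Thrust calls run through TBB.

```cpp
// Sketch assuming the device system is pointed at TBB at build time, e.g.
//   g++ ... -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_TBB -ltbb
// (the exact build line is an assumption, not part of the release notes).
#include <thrust/device_vector.h>
#include <thrust/reduce.h>

int main()
{
    // With the TBB device system selected, this "device" vector and the
    // reduction below are handled by the TBB backend on the CPU.
    thrust::device_vector<int> v(100, 1);
    int sum = thrust::reduce(v.begin(), v.end());
    return sum == 100 ? 0 : 1;
}
```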
- -## Breaking Changes - -- The header has been moved to - -- thrust::experimental::cuda::pinned_allocator has been moved to - thrust::cuda::experimental::pinned_allocator -- The macro THRUST_DEVICE_BACKEND has been renamed THRUST_DEVICE_SYSTEM -- The macro THRUST_DEVICE_BACKEND_CUDA has been renamed THRUST_DEVICE_SYSTEM_CUDA -- The macro THRUST_DEVICE_BACKEND_OMP has been renamed THRUST_DEVICE_SYSTEM_OMP -- thrust::host_space_tag has been renamed thrust::host_system_tag -- thrust::device_space_tag has been renamed thrust::device_system_tag -- thrust::any_space_tag has been renamed thrust::any_system_tag -- thrust::iterator_space has been renamed thrust::iterator_system - -## New Features - -- Backend Systems - - Threading Building Blocks (TBB) is now supported -- Algorithms - - `thrust::for_each_n` - - `thrust::raw_reference_cast` -- Types - - `thrust::pointer` - - `thrust::reference` - -## New Examples - -- `cuda/custom_temporary_allocation` -- `cuda/fallback_allocator` -- `device_ptr` -- `expand` -- `minimal_custom_backend` -- `raw_reference_cast` -- `set_operations` - -## Other Enhancements -- thrust::for_each now returns the end of the input range similar to most other algorithms -- thrust::pair and thrust::tuple have swap functionality -- All CUDA algorithms now support large data types -- Iterators may be dereferenced in user __device__ or __global__ functions -- The safe use of different backend systems is now possible within a single binary - -## Bug Fixes - -- #469 `min_element` and `max_element` algorithms no longer require a const comparison operator - -## Known Issues - -- NVCC may crash when parsing TBB headers on Windows. - -# Thrust 1.5.3 (CUDA Toolkit 5.0) - -## Summary - -Thrust 1.5.3 is a minor bug fix release. - -## Bug Fixes - -- Avoid warnings about potential race due to `__shared__` non-POD variable - -# Thrust 1.5.2 (CUDA Toolkit 4.2) - -## Summary - -Thrust 1.5.2 is a minor bug fix release. - -## Bug Fixes - -- Fixed warning about C-style initialization of structures - -# Thrust 1.5.1 (CUDA Toolkit 4.1) - -## Summary - -Thrust 1.5.1 is a minor bug fix release. - -## Bug Fixes - -- Sorting data referenced by permutation_iterators on CUDA produces invalid results - -# Thrust 1.5.0 - -## Summary - -Thrust 1.5.0 provides introduces new programmer productivity and performance - enhancements. -New functionality for creating anonymous "lambda" functions has been added. -A faster host sort provides 2-10x faster performance for sorting arithmetic - types on (single-threaded) CPUs. -A new OpenMP sort provides 2.5x-3.0x speedup over the host sort using a - quad-core CPU. -When sorting arithmetic types with the OpenMP backend the combined performance - improvement is 5.9x for 32-bit integers and ranges from 3.0x (64-bit types) to - 14.2x (8-bit types). -A new CUDA `reduce_by_key` implementation provides 2-3x faster - performance. - -## Breaking Changes -- device_ptr no longer unsafely converts to device_ptr without an - explicit cast. - Use the expression device_pointer_cast(static_cast(void_ptr.get())) to - convert, for example, device_ptr to device_ptr. - -## New Features - -- Algorithms: - - Stencil-less `thrust::transform_if`. 
-- Lambda placeholders - -## New Examples -- lambda - -## Other Enhancements - -- Host sort is 2-10x faster for arithmetic types -- OMP sort provides speedup over host sort -- `reduce_by_key` is 2-3x faster -- `reduce_by_key` no longer requires O(N) temporary storage -- CUDA scan algorithms are 10-40% faster -- `host_vector` and `device_vector` are now documented -- out-of-memory exceptions now provide detailed information from CUDART -- improved histogram example -- `device_reference` now has a specialized swap -- `reduce_by_key` and scan algorithms are compatible with `discard_iterator` - -## Bug Fixes - -- #44: Allow `thrust::host_vector` to compile when `value_type` uses - `__align__`. -- #198: Allow `thrust::adjacent_difference` to permit safe in-situ operation. -- #303: Make thrust thread-safe. -- #313: Avoid race conditions in `thrust::device_vector::insert`. -- #314: Avoid unintended ADL invocation when dispatching copy. -- #365: Fix merge and set operation failures. - -## Known Issues - -- None - -## Acknowledgments - -- Thanks to Manjunath Kudlur for contributing his Carbon library, from which - the lambda functionality is derived. -- Thanks to Jean-Francois Bastien for suggesting a fix for #303. - -# Thrust 1.4.0 (CUDA Toolkit 4.0) - -## Summary - -Thrust 1.4.0 is the first release of Thrust to be included in the CUDA Toolkit. -Additionally, it brings many feature and performance improvements. -New set theoretic algorithms operating on sorted sequences have been added. -Additionally, a new fancy iterator allows discarding redundant or otherwise - unnecessary output from algorithms, conserving memory storage and bandwidth. - -## Breaking Changes - -- Eliminations - - `thrust/is_sorted.h` - - `thrust/utility.h` - - `thrust/set_intersection.h` - - `thrust/experimental/cuda/ogl_interop_allocator.h` and the functionality - therein - - `thrust::deprecated::copy_when` - - `thrust::deprecated::absolute_value` - - `thrust::deprecated::copy_when` - - `thrust::deprecated::absolute_value` - - `thrust::deprecated::copy_when` - - `thrust::deprecated::absolute_value` - - `thrust::gather` and `thrust::scatter` from host to device and vice versa - are no longer supported. - - Operations which modify the elements of a thrust::device_vector are no longer - available from source code compiled without nvcc when the device backend - is CUDA. - Instead, use the idiom from the cpp_interop example. - -## New Features - -- Algorithms: - - `thrust::copy_n` - - `thrust::merge` - - `thrust::set_difference` - - `thrust::set_symmetric_difference` - - `thrust::set_union` - -- Types - - `thrust::discard_iterator` - -- Device Support: - - Compute Capability 2.1 GPUs. - -## New Examples - -- run_length_decoding - -## Other Enhancements - -- Compilation warnings are substantially reduced in various contexts. -- The compilation time of thrust::sort, thrust::stable_sort, - thrust::sort_by_key, and thrust::stable_sort_by_key are substantially - reduced. -- A fast sort implementation is used when sorting primitive types with - thrust::greater. -- The performance of thrust::set_intersection is improved. -- The performance of thrust::fill is improved on SM 1.x devices. -- A code example is now provided in each algorithm's documentation. -- thrust::reverse now operates in-place - -## Bug Fixes - -- #212: `thrust::set_intersection` works correctly for large input sizes. -- #275: `thrust::counting_iterator` and `thrust::constant_iterator` work - correctly with OpenMP as the backend when compiling with optimization. 
-- #256: `min` and `max` correctly return their first argument as a tie-breaker -- #248: `NDEBUG` is interpreted incorrectly - -## Known Issues - -- NVCC may generate code containing warnings when compiling some Thrust - algorithms. -- When compiling with `-arch=sm_1x`, some Thrust algorithms may cause NVCC to - issue benign pointer advisories. -- When compiling with `-arch=sm_1x` and -G, some Thrust algorithms may fail to - execute correctly. -- `thrust::inclusive_scan`, `thrust::exclusive_scan`, - `thrust::inclusive_scan_by_key`, and `thrust::exclusive_scan_by_key` are - currently incompatible with `thrust::discard_iterator`. - -## Acknowledgments - -- Thanks to David Tarjan for improving the performance of set_intersection. -- Thanks to Duane Merrill for continued help with sort. -- Thanks to Nathan Whitehead for help with CUDA Toolkit integration. - -# Thrust 1.3.0 - -## Summary - -Thrust 1.3.0 provides support for CUDA Toolkit 3.2 in addition to many feature - and performance enhancements. -Performance of the sort and sort_by_key algorithms is improved by as much as 3x - in certain situations. -The performance of stream compaction algorithms, such as copy_if, is improved - by as much as 2x. -CUDA errors are now converted to runtime exceptions using the system_error - interface. -Combined with a debug mode, also new in 1.3, runtime errors can be located with - greater precision. -Lastly, a few header files have been consolidated or renamed for clarity. -See the deprecations section below for additional details. - -## Breaking Changes - -- Promotions - - thrust::experimental::inclusive_segmented_scan has been renamed - thrust::inclusive_scan_by_key and exposes a different interface - - thrust::experimental::exclusive_segmented_scan has been renamed - thrust::exclusive_scan_by_key and exposes a different interface - - thrust::experimental::partition_copy has been renamed - thrust::partition_copy and exposes a different interface - - thrust::next::gather has been renamed thrust::gather - - thrust::next::gather_if has been renamed thrust::gather_if - - thrust::unique_copy_by_key has been renamed thrust::unique_by_key_copy -- Deprecations - - thrust::copy_when has been renamed thrust::deprecated::copy_when - - thrust::absolute_value has been renamed thrust::deprecated::absolute_value - - The header thrust/set_intersection.h is now deprecated; use - thrust/set_operations.h instead - - The header thrust/utility.h is now deprecated; use thrust/swap.h instead - - The header thrust/swap_ranges.h is now deprecated; use thrust/swap.h instead -- Eliminations - - thrust::deprecated::gather - - thrust::deprecated::gather_if - - thrust/experimental/arch.h and the functions therein - - thrust/sorting/merge_sort.h - - thrust/sorting/radix_sort.h -- NVCC 2.3 is no longer supported - -## New Features - -- Algorithms: - - `thrust::exclusive_scan_by_key` - - `thrust::find` - - `thrust::find_if` - - `thrust::find_if_not` - - `thrust::inclusive_scan_by_key` - - `thrust::is_partitioned` - - `thrust::is_sorted_until` - - `thrust::mismatch` - - `thrust::partition_point` - - `thrust::reverse` - - `thrust::reverse_copy` - - `thrust::stable_partition_copy` - -- Types: - - `thrust::system_error` and related types. - - `thrust::experimental::cuda::ogl_interop_allocator`. - - `thrust::bit_and`, `thrust::bit_or`, and `thrust::bit_xor`. - -- Device Support: - - GF104-based GPUs. 
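The 1.3.0 summary above notes that CUDA errors are now converted to runtime exceptions through the `system_error` interface. A minimal sketch of catching such an error on the host follows; the example is illustrative and will normally complete without throwing.

```cpp
// Sketch of host-side error handling via thrust::system_error, which derives
// from std::runtime_error. The sort here is expected to succeed; the catch
// block only illustrates how a CUDA failure would be reported.
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/system_error.h>
#include <cstdio>

int main()
{
    try
    {
        thrust::device_vector<int> v(1000, 7);
        thrust::sort(v.begin(), v.end());
    }
    catch (thrust::system_error& e)
    {
        std::printf("CUDA error during sort: %s\n", e.what());
        return 1;
    }
    return 0;
}
```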
- -## New Examples - -- opengl_interop.cu -- repeated_range.cu -- simple_moving_average.cu -- sparse_vector.cu -- strided_range.cu - -## Other Enhancements - -- Performance of thrust::sort and thrust::sort_by_key is substantially improved - for primitive key types -- Performance of thrust::copy_if is substantially improved -- Performance of thrust::reduce and related reductions is improved -- THRUST_DEBUG mode added -- Callers of Thrust functions may detect error conditions by catching - thrust::system_error, which derives from std::runtime_error -- The number of compiler warnings generated by Thrust has been substantially - reduced -- Comparison sort now works correctly for input sizes > 32M -- min & max usage no longer collides with definitions -- Compiling against the OpenMP backend no longer requires nvcc -- Performance of device_vector initialized in .cpp files is substantially - improved in common cases -- Performance of thrust::sort_by_key on the host is substantially improved - -## Bug Fixes - -- Debug device code now compiles correctly -- thrust::uninitialized_copy and thrust::uninitialized_fill now dispatch - constructors on the device rather than the host - -## Known Issues - -- #212 set_intersection is known to fail for large input sizes -- partition_point is known to fail for 64b types with nvcc 3.2 - -Acknowledgments -- Thanks to Duane Merrill for contributing a fast CUDA radix sort implementation -- Thanks to Erich Elsen for contributing an implementation of find_if -- Thanks to Andrew Corrigan for contributing changes which allow the OpenMP - backend to compile in the absence of nvcc -- Thanks to Andrew Corrigan, Cliff Wooley, David Coeurjolly, Janick Martinez - Esturo, John Bowers, Maxim Naumov, Michael Garland, and Ryuta Suzuki for - bug reports -- Thanks to Cliff Woolley for help with testing - -# Thrust 1.2.1 - -## Summary - -Small fixes for compatibility for the CUDA Toolkit 3.1. - -## Known Issues - -- `thrust::inclusive_scan` and `thrust::exclusive_scan` may fail with very - large types. -- MSVC may fail to compile code using both sort and binary search algorithms. -- `thrust::uninitialized_fill` and `thrust::uninitialized_copy` dispatch - constructors on the host rather than the device. -- #109: Some algorithms may exhibit poor performance with the OpenMP backend - with large numbers (>= 6) of CPU threads. -- `thrust::default_random_engine::discard` is not accelerated with NVCC 2.3 -- NVCC 3.1 may fail to compile code using types derived from - `thrust::subtract_with_carry_engine`, such as `thrust::ranlux24` and - `thrust::ranlux48`. - -# Thrust 1.2.0 - -## Summary - -Thrust 1.2 introduces support for compilation to multicore CPUs and the Ocelot - virtual machine, and several new facilities for pseudo-random number - generation. -New algorithms such as set intersection and segmented reduction have also been - added. -Lastly, improvements to the robustness of the CUDA backend ensure correctness - across a broad set of (uncommon) use cases. - -## Breaking Changes - -- `thrust::gather`'s interface was incorrect and has been removed. - The old interface is deprecated but will be preserved for Thrust version 1.2 - at `thrust::deprecated::gather` and `thrust::deprecated::gather_if`. - The new interface is provided at `thrust::next::gather` and - `thrust::next::gather_if`. - The new interface will be promoted to `thrust::` in Thrust version 1.3. 
- For more details, please refer to [this thread](http://groups.google.com/group/thrust-users/browse_thread/thread/f5f0583cb97b51fd). -- The `thrust::sorting` namespace has been deprecated in favor of the top-level - sorting functions, such as `thrust::sort` and `thrust::sort_by_key`. -- Removed support for `thrust::equal` between host & device sequences. -- Removed support for `thrust::scatter` between host & device sequences. - -## New Features - -- Algorithms: - - `thrust::reduce_by_key` - - `thrust::set_intersection` - - `thrust::unique_copy` - - `thrust::unique_by_key` - - `thrust::unique_copy_by_key` -- Types -- Random Number Generation: - - `thrust::discard_block_engine` - - `thrust::default_random_engine` - - `thrust::linear_congruential_engine` - - `thrust::linear_feedback_shift_engine` - - `thrust::subtract_with_carry_engine` - - `thrust::xor_combine_engine` - - `thrust::minstd_rand` - - `thrust::minstd_rand0` - - `thrust::ranlux24` - - `thrust::ranlux48` - - `thrust::ranlux24_base` - - `thrust::ranlux48_base` - - `thrust::taus88` - - `thrust::uniform_int_distribution` - - `thrust::uniform_real_distribution` - - `thrust::normal_distribution` (experimental) -- Function Objects: - - `thrust::project1st` - - `thrust::project2nd` -- `thrust::tie` -- Fancy Iterators: - - `thrust::permutation_iterator` - - `thrust::reverse_iterator` -- Vector Functions: - - `operator!=` - - `rbegin` - - `crbegin` - - `rend` - - `crend` - - `data` - - `shrink_to_fit` -- Device Support: - - Multicore CPUs via OpenMP. - - Fermi-class GPUs. - - Ocelot virtual machines. -- Support for NVCC 3.0. - -## New Examples - -- `cpp_integration` -- `histogram` -- `mode` -- `monte_carlo` -- `monte_carlo_disjoint_sequences` -- `padded_grid_reduction` -- `permutation_iterator` -- `row_sum` -- `run_length_encoding` -- `segmented_scan` -- `stream_compaction` -- `summary_statistics` -- `transform_iterator` -- `word_count` - -## Other Enhancements - -- Integer sorting performance is improved when max is large but (max - min) is - small and when min is negative -- Performance of `thrust::inclusive_scan` and `thrust::exclusive_scan` is - improved by 20-25% for primitive types. 
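The 1.2.0 feature list above introduces the pseudo-random number generation facilities. A small sketch of the engine/distribution pattern, mirroring the C++11 `<random>` design; the seed and range are arbitrary choices, not values from the notes.

```cpp
// Sketch of the engine/distribution pattern (seed 1234 and the [0, 1) range
// are arbitrary choices for illustration).
#include <thrust/random.h>
#include <cstdio>

int main()
{
    thrust::default_random_engine rng(1234);
    thrust::uniform_real_distribution<float> dist(0.0f, 1.0f);

    for (int i = 0; i < 4; ++i)
        std::printf("%f\n", dist(rng));  // four pseudo-random values in [0, 1)

    return 0;
}
```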
- -## Bug Fixes - -- #8 cause a compiler error if the required compiler is not found rather than a - mysterious error at link time -- #42 device_ptr & device_reference are classes rather than structs, - eliminating warnings on certain platforms -- #46 gather & scatter handle any space iterators correctly -- #51 thrust::experimental::arch functions gracefully handle unrecognized GPUs -- #52 avoid collisions with common user macros such as BLOCK_SIZE -- #62 provide better documentation for device_reference -- #68 allow built-in CUDA vector types to work with device_vector in pure C++ - mode -- #102 eliminated a race condition in device_vector::erase -- various compilation warnings eliminated - -## Known Issues - -- inclusive_scan & exclusive_scan may fail with very large types -- MSVC may fail to compile code using both sort and binary search algorithms -- uninitialized_fill & uninitialized_copy dispatch constructors on the host - rather than the device -- #109 some algorithms may exhibit poor performance with the OpenMP backend - with large numbers (>= 6) of CPU threads -- default_random_engine::discard is not accelerated with nvcc 2.3 - -## Acknowledgments - -- Thanks to Gregory Diamos for contributing a CUDA implementation of - set_intersection -- Thanks to Ryuta Suzuki & Gregory Diamos for rigorously testing Thrust's unit - tests and examples against Ocelot -- Thanks to Tom Bradley for contributing an implementation of normal_distribution -- Thanks to Joseph Rhoads for contributing the example summary_statistics - -# Thrust 1.1.1 - -## Summary - -Small fixes for compatibility with CUDA Toolkit 2.3a and Mac OSX Snow Leopard. - -# Thrust 1.1.0 - -## Summary - -Thrust 1.1.0 introduces fancy iterators, binary search functions, and several - specialized reduction functions. -Experimental support for segmented scans has also been added. - -## Breaking Changes - -- `thrust::counting_iterator` has been moved into the `thrust` namespace - (previously `thrust::experimental`). - -## New Features - -- Algorithms: - - `thrust::copy_if` - - `thrust::lower_bound` - - `thrust::upper_bound` - - `thrust::vectorized lower_bound` - - `thrust::vectorized upper_bound` - - `thrust::equal_range` - - `thrust::binary_search` - - `thrust::vectorized binary_search` - - `thrust::all_of` - - `thrust::any_of` - - `thrust::none_of` - - `thrust::minmax_element` - - `thrust::advance` - - `thrust::inclusive_segmented_scan` (experimental) - - `thrust::exclusive_segmented_scan` (experimental) -- Types: - - `thrust::pair` - - `thrust::tuple` - - `thrust::device_malloc_allocator` -- Fancy Iterators: - - `thrust::constant_iterator` - - `thrust::counting_iterator` - - `thrust::transform_iterator` - - `thrust::zip_iterator` - -## New Examples - -- Computing the maximum absolute difference between vectors. -- Computing the bounding box of a two-dimensional point set. -- Sorting multiple arrays together (lexicographical sorting). -- Constructing a summed area table. -- Using `thrust::zip_iterator` to mimic an array of structs. -- Using `thrust::constant_iterator` to increment array values. - -## Other Enhancements - -- Added pinned memory allocator (experimental). -- Added more methods to host_vector & device_vector (issue #4). -- Added variant of remove_if with a stencil argument (issue #29). -- Scan and reduce use cudaFuncGetAttributes to determine grid size. -- Exceptions are reported when temporary device arrays cannot be allocated. 
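The 1.1.0 notes above introduce fancy iterators such as `thrust::counting_iterator` and `thrust::transform_iterator`. A minimal sketch of composing them so that a sequence of squares is generated lazily and reduced without ever being materialized; the `square` functor is an assumed helper, not something from the release notes.

```cpp
// Sketch composing counting_iterator and transform_iterator: the squares
// 0, 1, 4, 9, ... are produced lazily and summed without being stored.
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/reduce.h>

struct square
{
    __host__ __device__ int operator()(int x) const { return x * x; }
};

int main()
{
    thrust::counting_iterator<int> first(0), last(10);
    int sum = thrust::reduce(thrust::make_transform_iterator(first, square()),
                             thrust::make_transform_iterator(last, square()));
    return sum == 285 ? 0 : 1;  // 0*0 + 1*1 + ... + 9*9 == 285
}
```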
- -## Bug Fixes - -- #5: Make vector work for larger data types -- #9: stable_partition_copy doesn't respect OutputIterator concept semantics -- #10: scans should return OutputIterator -- #16: make algorithms work for larger data types -- #27: Dispatch radix_sort even when comp=less is explicitly provided - -## Known Issues - -- Using functors with Thrust entry points may not compile on Mac OSX with gcc - 4.0.1. -- `thrust::uninitialized_copy` and `thrust::uninitialized_fill` dispatch - constructors on the host rather than the device. -- `thrust::inclusive_scan`, `thrust::inclusive_scan_by_key`, - `thrust::exclusive_scan`, and `thrust::exclusive_scan_by_key` may fail when - used with large types with the CUDA Toolkit 3.1. - -# Thrust 1.0.0 - -## Breaking Changes - -- Rename top level namespace `komrade` to `thrust`. -- Move `thrust::partition_copy` & `thrust::stable_partition_copy` into - `thrust::experimental` namespace until we can easily provide the standard - interface. -- Rename `thrust::range` to `thrust::sequence` to avoid collision with - Boost.Range. -- Rename `thrust::copy_if` to `thrust::copy_when` due to semantic differences - with C++0x `std::copy_if`. - -## New Features - -- Add C++0x style `cbegin` & `cend` methods to `thrust::host_vector` and - `thrust::device_vector`. -- Add `thrust::transform_if` function. -- Add stencil versions of `thrust::replace_if` & `thrust::replace_copy_if`. -- Allow `counting_iterator` to work with `thrust::for_each`. -- Allow types with constructors in comparison `thrust::sort` and - `thrust::reduce`. - -## Other Enhancements - -- `thrust::merge_sort` and `thrust::stable_merge_sort` are now 2x to 5x faster - when executed on the parallel device. - -## Bug Fixes - -- Komrade 6: Workaround an issue where an incremented iterator causes NVCC to - crash. -- Komrade 7: Fix an issue where `const_iterator`s could not be passed to - `thrust::transform`. - diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/config/cpp_compatibility.h b/spaces/CVPR/LIVE/thrust/thrust/detail/config/cpp_compatibility.h deleted file mode 100644 index 646f57504d202adb9263ccd2b0e92e73e8c82921..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/config/cpp_compatibility.h +++ /dev/null @@ -1,94 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -#include - -#if THRUST_CPP_DIALECT >= 2011 -# ifndef __has_cpp_attribute -# define __has_cpp_attribute(X) 0 -# endif - -# if __has_cpp_attribute(nodiscard) -# define THRUST_NODISCARD [[nodiscard]] -# endif - -# define THRUST_CONSTEXPR constexpr -# define THRUST_OVERRIDE override -# define THRUST_DEFAULT = default; -# define THRUST_NOEXCEPT noexcept -# define THRUST_FINAL final -#else -# define THRUST_CONSTEXPR -# define THRUST_OVERRIDE -# define THRUST_DEFAULT {} -# define THRUST_NOEXCEPT throw() -# define THRUST_FINAL -#endif - -#ifndef THRUST_NODISCARD -# define THRUST_NODISCARD -#endif - -// FIXME: Combine THRUST_INLINE_CONSTANT and -// THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT into one macro when NVCC properly -// supports `constexpr` globals in host and device code. -#if defined(__CUDA_ARCH__) || defined(__NVCOMPILER_CUDA__) -// FIXME: Add this when NVCC supports inline variables. -//# if THRUST_CPP_DIALECT >= 2017 -//# define THRUST_INLINE_CONSTANT inline constexpr -//# define THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT inline constexpr -# if THRUST_CPP_DIALECT >= 2011 -# define THRUST_INLINE_CONSTANT static const __device__ -# define THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT static constexpr -# else -# define THRUST_INLINE_CONSTANT static const __device__ -# define THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT static const -# endif -#else -// FIXME: Add this when NVCC supports inline variables. -//# if THRUST_CPP_DIALECT >= 2017 -//# define THRUST_INLINE_CONSTANT inline constexpr -//# define THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT inline constexpr -# if THRUST_CPP_DIALECT >= 2011 -# define THRUST_INLINE_CONSTANT static constexpr -# define THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT static constexpr -# else -# define THRUST_INLINE_CONSTANT static const -# define THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT static const -# endif -#endif - -#if defined(__NVCOMPILER_CUDA__) -# define THRUST_IS_DEVICE_CODE __builtin_is_device_code() -# define THRUST_IS_HOST_CODE (!__builtin_is_device_code()) -# define THRUST_INCLUDE_DEVICE_CODE 1 -# define THRUST_INCLUDE_HOST_CODE 1 -#elif defined(__CUDA_ARCH__) -# define THRUST_IS_DEVICE_CODE 1 -# define THRUST_IS_HOST_CODE 0 -# define THRUST_INCLUDE_DEVICE_CODE 1 -# define THRUST_INCLUDE_HOST_CODE 0 -#else -# define THRUST_IS_DEVICE_CODE 0 -# define THRUST_IS_HOST_CODE 1 -# define THRUST_INCLUDE_DEVICE_CODE 0 -# define THRUST_INCLUDE_HOST_CODE 1 -#endif - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/remove.h b/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/remove.h deleted file mode 100644 index 49f70588d683a0079dc561ff8a6b0f7e6fbc8468..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/remove.h +++ /dev/null @@ -1,81 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace omp -{ -namespace detail -{ - -template - ForwardIterator remove_if(execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - Predicate pred); - - -template - ForwardIterator remove_if(execution_policy &exec, - ForwardIterator first, - ForwardIterator last, - InputIterator stencil, - Predicate pred); - - -template - OutputIterator remove_copy_if(execution_policy &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - Predicate pred); - - -template - OutputIterator remove_copy_if(execution_policy &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 stencil, - OutputIterator result, - Predicate pred); - - -} // end namespace detail -} // end namespace omp -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py deleted file mode 100644 index d665dfff83855e6db3866c681559ccdef09f9999..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py +++ /dev/null @@ -1,91 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, Linear, constant_init, xavier_init -from mmcv.runner import auto_fp16 - -from mmdet.models.builder import HEADS -from .fcn_mask_head import FCNMaskHead - - -@HEADS.register_module() -class CoarseMaskHead(FCNMaskHead): - """Coarse mask head used in PointRend. - - Compared with standard ``FCNMaskHead``, ``CoarseMaskHead`` will downsample - the input feature map instead of upsample it. - - Args: - num_convs (int): Number of conv layers in the head. Default: 0. - num_fcs (int): Number of fc layers in the head. Default: 2. - fc_out_channels (int): Number of output channels of fc layer. - Default: 1024. - downsample_factor (int): The factor that feature map is downsampled by. - Default: 2. 
- """ - - def __init__(self, - num_convs=0, - num_fcs=2, - fc_out_channels=1024, - downsample_factor=2, - *arg, - **kwarg): - super(CoarseMaskHead, self).__init__( - *arg, num_convs=num_convs, upsample_cfg=dict(type=None), **kwarg) - self.num_fcs = num_fcs - assert self.num_fcs > 0 - self.fc_out_channels = fc_out_channels - self.downsample_factor = downsample_factor - assert self.downsample_factor >= 1 - # remove conv_logit - delattr(self, 'conv_logits') - - if downsample_factor > 1: - downsample_in_channels = ( - self.conv_out_channels - if self.num_convs > 0 else self.in_channels) - self.downsample_conv = ConvModule( - downsample_in_channels, - self.conv_out_channels, - kernel_size=downsample_factor, - stride=downsample_factor, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg) - else: - self.downsample_conv = None - - self.output_size = (self.roi_feat_size[0] // downsample_factor, - self.roi_feat_size[1] // downsample_factor) - self.output_area = self.output_size[0] * self.output_size[1] - - last_layer_dim = self.conv_out_channels * self.output_area - - self.fcs = nn.ModuleList() - for i in range(num_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - output_channels = self.num_classes * self.output_area - self.fc_logits = Linear(last_layer_dim, output_channels) - - def init_weights(self): - for m in self.fcs.modules(): - if isinstance(m, nn.Linear): - xavier_init(m) - constant_init(self.fc_logits, 0.001) - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - x = conv(x) - - if self.downsample_conv is not None: - x = self.downsample_conv(x) - - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_pred = self.fc_logits(x).view( - x.size(0), self.num_classes, *self.output_size) - return mask_pred diff --git a/spaces/ChallengeHub/Chinese-LangChain/app_modules/overwrites.py b/spaces/ChallengeHub/Chinese-LangChain/app_modules/overwrites.py deleted file mode 100644 index 7ef9614b8e1fca9210e1f8f9d5ce8a1243bcb527..0000000000000000000000000000000000000000 --- a/spaces/ChallengeHub/Chinese-LangChain/app_modules/overwrites.py +++ /dev/null @@ -1,49 +0,0 @@ -from __future__ import annotations - -from typing import List, Tuple - -from app_modules.utils import * - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. 
- """ - if y is None or y == []: - return [] - temp = [] - for x in y: - user, bot = x - if not detect_converted_mark(user): - user = convert_asis(user) - if not detect_converted_mark(bot): - bot = convert_mdtext(bot) - temp.append((user, bot)) - return temp - - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", - encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse diff --git a/spaces/CofAI/optor/style.css b/spaces/CofAI/optor/style.css deleted file mode 100644 index 57ac874613ad432d3129fa1757249a319a601f3e..0000000000000000000000000000000000000000 --- a/spaces/CofAI/optor/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/help.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/help.py deleted file mode 100644 index 4334e5001af3416a256add1ec6d32c422d015c8d..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/help.py +++ /dev/null @@ -1,35 +0,0 @@ -import pkgutil -import sys -import fontTools -import importlib -import os -from pathlib import Path - - -def main(): - """Show this help""" - path = fontTools.__path__ - descriptions = {} - for pkg in sorted( - mod.name - for mod in pkgutil.walk_packages([fontTools.__path__[0]], prefix="fontTools.") - ): - try: - imports = __import__(pkg, globals(), locals(), ["main"]) - except ImportError as e: - continue - try: - description = imports.main.__doc__ - if description: - pkg = pkg.replace("fontTools.", "").replace(".__main__", "") - # show the docstring's first line only - descriptions[pkg] = description.splitlines()[0] - except AttributeError as e: - pass - for pkg, description in descriptions.items(): - print("fonttools %-12s %s" % (pkg, description), file=sys.stderr) - - -if __name__ == "__main__": - print("fonttools v%s\n" % fontTools.__version__, file=sys.stderr) - main() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_commit_scheduler.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_commit_scheduler.py deleted file mode 100644 index e190693e38e7b6840cee4340fc43555f0c8f616c..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_commit_scheduler.py +++ /dev/null @@ -1,318 +0,0 @@ -import atexit -import logging -import os -import time -from concurrent.futures import Future -from dataclasses import dataclass -from io import SEEK_END, SEEK_SET, BytesIO -from pathlib import Path -from threading import Lock, Thread -from typing import Dict, List, Optional, Union - 
-from .hf_api import IGNORE_GIT_FOLDER_PATTERNS, CommitInfo, CommitOperationAdd, HfApi -from .utils import filter_repo_objects - - -logger = logging.getLogger(__name__) - - -@dataclass(frozen=True) -class _FileToUpload: - """Temporary dataclass to store info about files to upload. Not meant to be used directly.""" - - local_path: Path - path_in_repo: str - size_limit: int - last_modified: float - - -class CommitScheduler: - """ - Scheduler to upload a local folder to the Hub at regular intervals (e.g. push to hub every 5 minutes). - - The scheduler is started when instantiated and run indefinitely. At the end of your script, a last commit is - triggered. Checkout the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#scheduled-uploads) - to learn more about how to use it. - - Args: - repo_id (`str`): - The id of the repo to commit to. - folder_path (`str` or `Path`): - Path to the local folder to upload regularly. - every (`int` or `float`, *optional*): - The number of minutes between each commit. Defaults to 5 minutes. - path_in_repo (`str`, *optional*): - Relative path of the directory in the repo, for example: `"checkpoints/"`. Defaults to the root folder - of the repository. - repo_type (`str`, *optional*): - The type of the repo to commit to. Defaults to `model`. - revision (`str`, *optional*): - The revision of the repo to commit to. Defaults to `main`. - private (`bool`, *optional*): - Whether to make the repo private. Defaults to `False`. This value is ignored if the repo already exist. - token (`str`, *optional*): - The token to use to commit to the repo. Defaults to the token saved on the machine. - allow_patterns (`List[str]` or `str`, *optional*): - If provided, only files matching at least one pattern are uploaded. - ignore_patterns (`List[str]` or `str`, *optional*): - If provided, files matching any of the patterns are not uploaded. - hf_api (`HfApi`, *optional*): - The [`HfApi`] client to use to commit to the Hub. Can be set with custom settings (user agent, token,...). - - Example: - ```py - >>> from pathlib import Path - >>> from huggingface_hub import CommitScheduler - - # Scheduler uploads every 10 minutes - >>> csv_path = Path("watched_folder/data.csv") - >>> CommitScheduler(repo_id="test_scheduler", repo_type="dataset", folder_path=csv_path.parent, every=10) - - >>> with csv_path.open("a") as f: - ... f.write("first line") - - # Some time later (...) - >>> with csv_path.open("a") as f: - ... 
f.write("second line") - ``` - """ - - def __init__( - self, - *, - repo_id: str, - folder_path: Union[str, Path], - every: Union[int, float] = 5, - path_in_repo: Optional[str] = None, - repo_type: Optional[str] = None, - revision: Optional[str] = None, - private: bool = False, - token: Optional[str] = None, - allow_patterns: Optional[Union[List[str], str]] = None, - ignore_patterns: Optional[Union[List[str], str]] = None, - hf_api: Optional["HfApi"] = None, - ) -> None: - self.api = hf_api or HfApi(token=token) - - # Folder - self.folder_path = Path(folder_path).expanduser().resolve() - self.path_in_repo = path_in_repo or "" - self.allow_patterns = allow_patterns - - if ignore_patterns is None: - ignore_patterns = [] - elif isinstance(ignore_patterns, str): - ignore_patterns = [ignore_patterns] - self.ignore_patterns = ignore_patterns + IGNORE_GIT_FOLDER_PATTERNS - - if self.folder_path.is_file(): - raise ValueError(f"'folder_path' must be a directory, not a file: '{self.folder_path}'.") - self.folder_path.mkdir(parents=True, exist_ok=True) - - # Repository - repo_url = self.api.create_repo(repo_id=repo_id, private=private, repo_type=repo_type, exist_ok=True) - self.repo_id = repo_url.repo_id - self.repo_type = repo_type - self.revision = revision - self.token = token - - # Keep track of already uploaded files - self.last_uploaded: Dict[Path, float] = {} # key is local path, value is timestamp - - # Scheduler - if not every > 0: - raise ValueError(f"'every' must be a positive integer, not '{every}'.") - self.lock = Lock() - self.every = every - - logger.info(f"Scheduled job to push '{self.folder_path}' to '{self.repo_id}' every {self.every} minutes.") - self._scheduler_thread = Thread(target=self._run_scheduler, daemon=True) - self._scheduler_thread.start() - atexit.register(self._push_to_hub) - - self.__stopped = False - - def stop(self) -> None: - """Stop the scheduler. - - A stopped scheduler cannot be restarted. Mostly for tests purposes. - """ - self.__stopped = True - - def _run_scheduler(self) -> None: - """Dumb thread waiting between each scheduled push to Hub.""" - while True: - self.last_future = self.trigger() - time.sleep(self.every * 60) - if self.__stopped: - break - - def trigger(self) -> Future: - """Trigger a `push_to_hub` and return a future. - - This method is automatically called every `every` minutes. You can also call it manually to trigger a commit - immediately, without waiting for the next scheduled commit. - """ - return self.api.run_as_future(self._push_to_hub) - - def _push_to_hub(self) -> Optional[CommitInfo]: - if self.__stopped: # If stopped, already scheduled commits are ignored - return None - - logger.info("(Background) scheduled commit triggered.") - try: - return self.push_to_hub() - except Exception as e: - logger.error(f"Error while pushing to Hub: {e}") # Depending on the setup, error might be silenced - raise - - def push_to_hub(self) -> Optional[CommitInfo]: - """ - Push folder to the Hub and return the commit info. - - - - This method is not meant to be called directly. It is run in the background by the scheduler, respecting a - queue mechanism to avoid concurrent commits. Making a direct call to the method might lead to concurrency - issues. - - - - The default behavior of `push_to_hub` is to assume an append-only folder. It lists all files in the folder and - uploads only changed files. If no changes are found, the method returns without committing anything. 
If you want - to change this behavior, you can inherit from [`CommitScheduler`] and override this method. This can be useful - for example to compress data together in a single file before committing. For more details and examples, check - out our [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads). - """ - # Check files to upload (with lock) - with self.lock: - logger.debug("Listing files to upload for scheduled commit.") - - # List files from folder (taken from `_prepare_upload_folder_additions`) - relpath_to_abspath = { - path.relative_to(self.folder_path).as_posix(): path - for path in sorted(self.folder_path.glob("**/*")) # sorted to be deterministic - if path.is_file() - } - prefix = f"{self.path_in_repo.strip('/')}/" if self.path_in_repo else "" - - # Filter with pattern + filter out unchanged files + retrieve current file size - files_to_upload: List[_FileToUpload] = [] - for relpath in filter_repo_objects( - relpath_to_abspath.keys(), allow_patterns=self.allow_patterns, ignore_patterns=self.ignore_patterns - ): - local_path = relpath_to_abspath[relpath] - stat = local_path.stat() - if self.last_uploaded.get(local_path) is None or self.last_uploaded[local_path] != stat.st_mtime: - files_to_upload.append( - _FileToUpload( - local_path=local_path, - path_in_repo=prefix + relpath, - size_limit=stat.st_size, - last_modified=stat.st_mtime, - ) - ) - - # Return if nothing to upload - if len(files_to_upload) == 0: - logger.debug("Dropping schedule commit: no changed file to upload.") - return None - - # Convert `_FileToUpload` as `CommitOperationAdd` (=> compute file shas + limit to file size) - logger.debug("Removing unchanged files since previous scheduled commit.") - add_operations = [ - CommitOperationAdd( - # Cap the file to its current size, even if the user append data to it while a scheduled commit is happening - path_or_fileobj=PartialFileIO(file_to_upload.local_path, size_limit=file_to_upload.size_limit), - path_in_repo=file_to_upload.path_in_repo, - ) - for file_to_upload in files_to_upload - ] - - # Upload files (append mode expected - no need for lock) - logger.debug("Uploading files for scheduled commit.") - commit_info = self.api.create_commit( - repo_id=self.repo_id, - repo_type=self.repo_type, - operations=add_operations, - commit_message="Scheduled Commit", - revision=self.revision, - ) - - # Successful commit: keep track of the latest "last_modified" for each file - for file in files_to_upload: - self.last_uploaded[file.local_path] = file.last_modified - return commit_info - - -class PartialFileIO(BytesIO): - """A file-like object that reads only the first part of a file. - - Useful to upload a file to the Hub when the user might still be appending data to it. Only the first part of the - file is uploaded (i.e. the part that was available when the filesystem was first scanned). - - In practice, only used internally by the CommitScheduler to regularly push a folder to the Hub with minimal - disturbance for the user. The object is passed to `CommitOperationAdd`. - - Only supports `read`, `tell` and `seek` methods. - - Args: - file_path (`str` or `Path`): - Path to the file to read. - size_limit (`int`): - The maximum number of bytes to read from the file. If the file is larger than this, only the first part - will be read (and uploaded). 
- """ - - def __init__(self, file_path: Union[str, Path], size_limit: int) -> None: - self._file_path = Path(file_path) - self._file = self._file_path.open("rb") - self._size_limit = min(size_limit, os.fstat(self._file.fileno()).st_size) - - def __del__(self) -> None: - self._file.close() - return super().__del__() - - def __repr__(self) -> str: - return f"" - - def __len__(self) -> int: - return self._size_limit - - def __getattribute__(self, name: str): - if name.startswith("_") or name in ("read", "tell", "seek"): # only 3 public methods supported - return super().__getattribute__(name) - raise NotImplementedError(f"PartialFileIO does not support '{name}'.") - - def tell(self) -> int: - """Return the current file position.""" - return self._file.tell() - - def seek(self, __offset: int, __whence: int = SEEK_SET) -> int: - """Change the stream position to the given offset. - - Behavior is the same as a regular file, except that the position is capped to the size limit. - """ - if __whence == SEEK_END: - # SEEK_END => set from the truncated end - __offset = len(self) + __offset - __whence = SEEK_SET - - pos = self._file.seek(__offset, __whence) - if pos > self._size_limit: - return self._file.seek(self._size_limit) - return pos - - def read(self, __size: Optional[int] = -1) -> bytes: - """Read at most `__size` bytes from the file. - - Behavior is the same as a regular file, except that it is capped to the size limit. - """ - current = self._file.tell() - if __size is None or __size < 0: - # Read until file limit - truncated_size = self._size_limit - current - else: - # Read until file limit or __size - truncated_size = min(__size, self._size_limit - current) - return self._file.read(truncated_size) diff --git a/spaces/Dabs/UlamSpiral/README.md b/spaces/Dabs/UlamSpiral/README.md deleted file mode 100644 index 38815e4f377c41678988bea07277bbad2517ae9a..0000000000000000000000000000000000000000 --- a/spaces/Dabs/UlamSpiral/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: UlamSpiral -emoji: 🐨 -colorFrom: indigo -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py deleted file mode 100644 index 55929854a284626862af6666d3d981e83ad486fa..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- -# empty diff --git a/spaces/DravensCursed/OPENAI-REVERSE-PROXY/README.md b/spaces/DravensCursed/OPENAI-REVERSE-PROXY/README.md deleted file mode 100644 index d8e75df04905f01efd52a1d540d29371c758478e..0000000000000000000000000000000000000000 --- a/spaces/DravensCursed/OPENAI-REVERSE-PROXY/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: OPENAI REVERSE PROXY -emoji: 🏆 -colorFrom: indigo -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/pytorch2onnx.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/pytorch2onnx.py deleted file mode 100644 index 09d99b2e0171265e70e7507ed8e882b616b449a1..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/pytorch2onnx.py +++ /dev/null @@ -1,36 +0,0 @@ -import argparse -import torch -import torch.onnx -from basicsr.archs.rrdbnet_arch import RRDBNet - - -def main(args): - # An instance of the model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - if args.params: - keyname = 'params' - else: - keyname = 'params_ema' - model.load_state_dict(torch.load(args.input)[keyname]) - # set the train mode to false since we will only run the forward pass. - model.train(False) - model.cpu().eval() - - # An example input - x = torch.rand(1, 3, 64, 64) - # Export the model - with torch.no_grad(): - torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True) - print(torch_out.shape) - - -if __name__ == '__main__': - """Convert pytorch model to onnx models""" - parser = argparse.ArgumentParser() - parser.add_argument( - '--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path') - parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path') - parser.add_argument('--params', action='store_false', help='Use params instead of params_ema') - args = parser.parse_args() - - main(args) diff --git a/spaces/Epoching/3D_Photo_Inpainting/mesh.py b/spaces/Epoching/3D_Photo_Inpainting/mesh.py deleted file mode 100644 index 95cae5be1c26e517fa4d81bd03325a0f0017f9ad..0000000000000000000000000000000000000000 --- a/spaces/Epoching/3D_Photo_Inpainting/mesh.py +++ /dev/null @@ -1,2296 +0,0 @@ -import os -import numpy as np -try: - import cynetworkx as netx -except ImportError: - import networkx as netx -import matplotlib.pyplot as plt -from functools import partial -from vispy import scene, io -from vispy.scene import visuals -from vispy.visuals.filters import Alpha -import cv2 -from moviepy.editor import ImageSequenceClip -from skimage.transform import resize -import time -import copy -import torch -import os -from utils import path_planning, open_small_mask, clean_far_edge, refine_depth_around_edge -from utils import refine_color_around_edge, filter_irrelevant_edge_new, require_depth_edge, clean_far_edge_new -from utils import create_placeholder, refresh_node, find_largest_rect -from mesh_tools import get_depth_from_maps, get_map_from_ccs, get_edge_from_nodes, get_depth_from_nodes, get_rgb_from_nodes, crop_maps_by_size, convert2tensor, recursive_add_edge, update_info, filter_edge, relabel_node, depth_inpainting -from mesh_tools import refresh_bord_depth, enlarge_border, fill_dummy_bord, extrapolate, fill_missing_node, incomplete_node, get_valid_size, dilate_valid_size, size_operation -import transforms3d -import random -from 
functools import reduce - -def create_mesh(depth, image, int_mtx, config): - H, W, C = image.shape - ext_H, ext_W = H + 2 * config['extrapolation_thickness'], W + 2 * config['extrapolation_thickness'] - LDI = netx.Graph(H=ext_H, W=ext_W, noext_H=H, noext_W=W, cam_param=int_mtx) - xy2depth = {} - int_mtx_pix = int_mtx * np.array([[W], [H], [1.]]) - LDI.graph['cam_param_pix'], LDI.graph['cam_param_pix_inv'] = int_mtx_pix, np.linalg.inv(int_mtx_pix) - disp = 1. / (-depth) - LDI.graph['hoffset'], LDI.graph['woffset'] = config['extrapolation_thickness'], config['extrapolation_thickness'] - LDI.graph['bord_up'], LDI.graph['bord_down'] = LDI.graph['hoffset'] + 0, LDI.graph['hoffset'] + H - LDI.graph['bord_left'], LDI.graph['bord_right'] = LDI.graph['woffset'] + 0, LDI.graph['woffset'] + W - for idx in range(H): - for idy in range(W): - x, y = idx + LDI.graph['hoffset'], idy + LDI.graph['woffset'] - LDI.add_node((x, y, -depth[idx, idy]), - color=image[idx, idy], - disp=disp[idx, idy], - synthesis=False, - cc_id=set()) - xy2depth[(x, y)] = [-depth[idx, idy]] - for x, y, d in LDI.nodes: - two_nes = [ne for ne in [(x+1, y), (x, y+1)] if ne[0] < LDI.graph['bord_down'] and ne[1] < LDI.graph['bord_right']] - [LDI.add_edge((ne[0], ne[1], xy2depth[ne][0]), (x, y, d)) for ne in two_nes] - LDI = calculate_fov(LDI) - image = np.pad(image, - pad_width=((config['extrapolation_thickness'], config['extrapolation_thickness']), - (config['extrapolation_thickness'], config['extrapolation_thickness']), - (0, 0)), - mode='constant') - depth = np.pad(depth, - pad_width=((config['extrapolation_thickness'], config['extrapolation_thickness']), - (config['extrapolation_thickness'], config['extrapolation_thickness'])), - mode='constant') - - return LDI, xy2depth, image, depth - - -def tear_edges(mesh, threshold = 0.00025, xy2depth=None): - remove_edge_list = [] - remove_horizon, remove_vertical = np.zeros((2, mesh.graph['H'], mesh.graph['W'])) - mesh_nodes = mesh.nodes - for edge in mesh.edges: - if abs(mesh_nodes[edge[0]]['disp'] - mesh_nodes[edge[1]]['disp']) > threshold: - remove_edge_list.append((edge[0], edge[1])) - - near, far = edge if abs(edge[0][2]) < abs(edge[1][2]) else edge[::-1] - - mesh_nodes[far]['near'] = [] if mesh_nodes[far].get('near') is None else mesh_nodes[far]['near'].append(near) - mesh_nodes[near]['far'] = [] if mesh_nodes[near].get('far') is None else mesh_nodes[near]['far'].append(far) - - if near[0] == far[0]: - remove_horizon[near[0], np.minimum(near[1], far[1])] = 1 - elif near[1] == far[1]: - remove_vertical[np.minimum(near[0], far[0]), near[1]] = 1 - mesh.remove_edges_from(remove_edge_list) - - remove_edge_list = [] - - dang_horizon = np.where(np.roll(remove_horizon, 1, 0) + np.roll(remove_horizon, -1, 0) - remove_horizon == 2) - dang_vertical = np.where(np.roll(remove_vertical, 1, 1) + np.roll(remove_vertical, -1, 1) - remove_vertical == 2) - - horizon_condition = lambda x, y: mesh.graph['bord_up'] + 1 <= x < mesh.graph['bord_down'] - 1 - vertical_condition = lambda x, y: mesh.graph['bord_left'] + 1 <= y < mesh.graph['bord_right'] - 1 - - prjto3d = lambda x, y: (x, y, xy2depth[(x, y)][0]) - - node_existence = lambda x, y: mesh.has_node(prjto3d(x, y)) - - for x, y in zip(dang_horizon[0], dang_horizon[1]): - if horizon_condition(x, y) and node_existence(x, y) and node_existence(x, y+1): - remove_edge_list.append((prjto3d(x, y), prjto3d(x, y+1))) - for x, y in zip(dang_vertical[0], dang_vertical[1]): - if vertical_condition(x, y) and node_existence(x, y) and node_existence(x+1, y): - 
remove_edge_list.append((prjto3d(x, y), prjto3d(x+1, y))) - mesh.remove_edges_from(remove_edge_list) - - return mesh - -def calculate_fov(mesh): - k = mesh.graph['cam_param'] - mesh.graph['hFov'] = 2 * np.arctan(1. / (2*k[0, 0])) - mesh.graph['vFov'] = 2 * np.arctan(1. / (2*k[1, 1])) - mesh.graph['aspect'] = mesh.graph['noext_H'] / mesh.graph['noext_W'] - - return mesh - -def calculate_fov_FB(mesh): - mesh.graph['aspect'] = mesh.graph['H'] / mesh.graph['W'] - if mesh.graph['H'] > mesh.graph['W']: - mesh.graph['hFov'] = 0.508015513 - half_short = np.tan(mesh.graph['hFov']/2.0) - half_long = half_short * mesh.graph['aspect'] - mesh.graph['vFov'] = 2.0 * np.arctan(half_long) - else: - mesh.graph['vFov'] = 0.508015513 - half_short = np.tan(mesh.graph['vFov']/2.0) - half_long = half_short / mesh.graph['aspect'] - mesh.graph['hFov'] = 2.0 * np.arctan(half_long) - - return mesh - -def reproject_3d_int_detail(sx, sy, z, k_00, k_02, k_11, k_12, w_offset, h_offset): - abs_z = abs(z) - return [abs_z * ((sy+0.5-w_offset) * k_00 + k_02), abs_z * ((sx+0.5-h_offset) * k_11 + k_12), abs_z] - -def reproject_3d_int_detail_FB(sx, sy, z, w_offset, h_offset, mesh): - if mesh.graph.get('tan_hFov') is None: - mesh.graph['tan_hFov'] = np.tan(mesh.graph['hFov'] / 2.) - if mesh.graph.get('tan_vFov') is None: - mesh.graph['tan_vFov'] = np.tan(mesh.graph['vFov'] / 2.) - - ray = np.array([(-1. + 2. * ((sy+0.5-w_offset)/(mesh.graph['W'] - 1))) * mesh.graph['tan_hFov'], - (1. - 2. * (sx+0.5-h_offset)/(mesh.graph['H'] - 1)) * mesh.graph['tan_vFov'], - -1]) - point_3d = ray * np.abs(z) - - return point_3d - - -def reproject_3d_int(sx, sy, z, mesh): - k = mesh.graph['cam_param_pix_inv'].copy() - if k[0, 2] > 0: - k = np.linalg.inv(k) - ray = np.dot(k, np.array([sy-mesh.graph['woffset'], sx-mesh.graph['hoffset'], 1]).reshape(3, 1)) - - point_3d = ray * np.abs(z) - point_3d = point_3d.flatten() - - return point_3d - -def generate_init_node(mesh, config, min_node_in_cc): - mesh_nodes = mesh.nodes - - info_on_pix = {} - - ccs = sorted(netx.connected_components(mesh), key = len, reverse=True) - remove_nodes = [] - - for cc in ccs: - - remove_flag = True if len(cc) < min_node_in_cc else False - if remove_flag is False: - for (nx, ny, nd) in cc: - info_on_pix[(nx, ny)] = [{'depth':nd, - 'color':mesh_nodes[(nx, ny, nd)]['color'], - 'synthesis':False, - 'disp':mesh_nodes[(nx, ny, nd)]['disp']}] - else: - [remove_nodes.append((nx, ny, nd)) for (nx, ny, nd) in cc] - - for node in remove_nodes: - far_nodes = [] if mesh_nodes[node].get('far') is None else mesh_nodes[node]['far'] - for far_node in far_nodes: - if mesh.has_node(far_node) and mesh_nodes[far_node].get('near') is not None and node in mesh_nodes[far_node]['near']: - mesh_nodes[far_node]['near'].remove(node) - near_nodes = [] if mesh_nodes[node].get('near') is None else mesh_nodes[node]['near'] - for near_node in near_nodes: - if mesh.has_node(near_node) and mesh_nodes[near_node].get('far') is not None and node in mesh_nodes[near_node]['far']: - mesh_nodes[near_node]['far'].remove(node) - - [mesh.remove_node(node) for node in remove_nodes] - - return mesh, info_on_pix - -def get_neighbors(mesh, node): - return [*mesh.neighbors(node)] - -def generate_face(mesh, info_on_pix, config): - H, W = mesh.graph['H'], mesh.graph['W'] - str_faces = [] - num_node = len(mesh.nodes) - ply_flag = config.get('save_ply') - def out_fmt(input, cur_id_b, cur_id_self, cur_id_a, ply_flag): - if ply_flag is True: - input.append(' '.join(['3', cur_id_b, cur_id_self, cur_id_a]) + '\n') - else: - 
input.append([cur_id_b, cur_id_self, cur_id_a]) - mesh_nodes = mesh.nodes - for node in mesh_nodes: - cur_id_self = mesh_nodes[node]['cur_id'] - ne_nodes = get_neighbors(mesh, node) - four_dir_nes = {'up': [], 'left': [], - 'down': [], 'right': []} - for ne_node in ne_nodes: - store_tuple = [ne_node, mesh_nodes[ne_node]['cur_id']] - if ne_node[0] == node[0]: - if ne_node[1] == node[1] - 1: - four_dir_nes['left'].append(store_tuple) - else: - four_dir_nes['right'].append(store_tuple) - else: - if ne_node[0] == node[0] - 1: - four_dir_nes['up'].append(store_tuple) - else: - four_dir_nes['down'].append(store_tuple) - for node_a, cur_id_a in four_dir_nes['up']: - for node_b, cur_id_b in four_dir_nes['right']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - for node_a, cur_id_a in four_dir_nes['right']: - for node_b, cur_id_b in four_dir_nes['down']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - for node_a, cur_id_a in four_dir_nes['down']: - for node_b, cur_id_b in four_dir_nes['left']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - for node_a, cur_id_a in four_dir_nes['left']: - for node_b, cur_id_b in four_dir_nes['up']: - out_fmt(str_faces, cur_id_b, cur_id_self, cur_id_a, ply_flag) - - return str_faces - -def reassign_floating_island(mesh, info_on_pix, image, depth): - H, W = mesh.graph['H'], mesh.graph['W'], - mesh_nodes = mesh.nodes - bord_up, bord_down = mesh.graph['bord_up'], mesh.graph['bord_down'] - bord_left, bord_right = mesh.graph['bord_left'], mesh.graph['bord_right'] - W = mesh.graph['W'] - lost_map = np.zeros((H, W)) - - ''' - (5) is_inside(x, y, xmin, xmax, ymin, ymax) : Check if a pixel(x, y) is inside the border. - (6) get_cross_nes(x, y) : Get the four cross neighbors of pixel(x, y). - ''' - key_exist = lambda d, k: k in d - is_inside = lambda x, y, xmin, xmax, ymin, ymax: xmin <= x < xmax and ymin <= y < ymax - get_cross_nes = lambda x, y: [(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)] - ''' - (A) Highlight the pixels on isolated floating island. - (B) Number those isolated floating islands with connected component analysis. - (C) For each isolated island: - (1) Find its longest surrounded depth edge. - (2) Propagate depth from that depth edge to the pixels on the isolated island. - (3) Build the connection between the depth edge and that isolated island.
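 (Note on step (2): each island pixel is assigned the mean depth of its already-filled cross neighbours, i.e. reassign_depth = mean(propagated_depth), and the sweep repeats until every pixel of the island has a depth; the new node is then connected to those neighbours.)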
- ''' - for x in range(H): - for y in range(W): - if is_inside(x, y, bord_up, bord_down, bord_left, bord_right) and not(key_exist(info_on_pix, (x, y))): - lost_map[x, y] = 1 - _, label_lost_map = cv2.connectedComponents(lost_map.astype(np.uint8), connectivity=4) - mask = np.zeros((H, W)) - mask[bord_up:bord_down, bord_left:bord_right] = 1 - label_lost_map = (label_lost_map * mask).astype(np.int) - - for i in range(1, label_lost_map.max()+1): - lost_xs, lost_ys = np.where(label_lost_map == i) - surr_edge_ids = {} - for lost_x, lost_y in zip(lost_xs, lost_ys): - if (lost_x, lost_y) == (295, 389) or (lost_x, lost_y) == (296, 389): - import pdb; pdb.set_trace() - for ne in get_cross_nes(lost_x, lost_y): - if key_exist(info_on_pix, ne): - for info in info_on_pix[ne]: - ne_node = (ne[0], ne[1], info['depth']) - if key_exist(mesh_nodes[ne_node], 'edge_id'): - edge_id = mesh_nodes[ne_node]['edge_id'] - surr_edge_ids[edge_id] = surr_edge_ids[edge_id] + [ne_node] if \ - key_exist(surr_edge_ids, edge_id) else [ne_node] - if len(surr_edge_ids) == 0: - continue - edge_id, edge_nodes = sorted([*surr_edge_ids.items()], key=lambda x: len(x[1]), reverse=True)[0] - edge_depth_map = np.zeros((H, W)) - for node in edge_nodes: - edge_depth_map[node[0], node[1]] = node[2] - lost_xs, lost_ys = np.where(label_lost_map == i) - while lost_xs.shape[0] > 0: - lost_xs, lost_ys = np.where(label_lost_map == i) - for lost_x, lost_y in zip(lost_xs, lost_ys): - propagated_depth = [] - real_nes = [] - for ne in get_cross_nes(lost_x, lost_y): - if not(is_inside(ne[0], ne[1], bord_up, bord_down, bord_left, bord_right)) or \ - edge_depth_map[ne[0], ne[1]] == 0: - continue - propagated_depth.append(edge_depth_map[ne[0], ne[1]]) - real_nes.append(ne) - if len(real_nes) == 0: - continue - reassign_depth = np.mean(propagated_depth) - label_lost_map[lost_x, lost_y] = 0 - edge_depth_map[lost_x, lost_y] = reassign_depth - depth[lost_x, lost_y] = -reassign_depth - mesh.add_node((lost_x, lost_y, reassign_depth), color=image[lost_x, lost_y], - synthesis=False, - disp=1./reassign_depth, - cc_id=set()) - info_on_pix[(lost_x, lost_y)] = [{'depth':reassign_depth, - 'color':image[lost_x, lost_y], - 'synthesis':False, - 'disp':1./reassign_depth}] - new_connections = [((lost_x, lost_y, reassign_depth), - (ne[0], ne[1], edge_depth_map[ne[0], ne[1]])) for ne in real_nes] - mesh.add_edges_from(new_connections) - - return mesh, info_on_pix, depth - -def remove_node_feat(mesh, *feats): - mesh_nodes = mesh.nodes - for node in mesh_nodes: - for feat in feats: - mesh_nodes[node][feat] = None - - return mesh - -def update_status(mesh, info_on_pix, depth=None): - ''' - (2) clear_node_feat(G, *fts) : Clear all the node feature on graph G. - (6) get_cross_nes(x, y) : Get the four cross neighbors of pixel(x, y). 
- ''' - key_exist = lambda d, k: d.get(k) is not None - is_inside = lambda x, y, xmin, xmax, ymin, ymax: xmin <= x < xmax and ymin <= y < ymax - get_cross_nes = lambda x, y: [(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)] - append_element = lambda d, k, x: d[k] + [x] if key_exist(d, k) else [x] - - def clear_node_feat(G, fts): - le_nodes = G.nodes - for k in le_nodes: - v = le_nodes[k] - for ft in fts: - if ft in v: - v[ft] = None - - clear_node_feat(mesh, ['edge_id', 'far', 'near']) - bord_up, bord_down = mesh.graph['bord_up'], mesh.graph['bord_down'] - bord_left, bord_right = mesh.graph['bord_left'], mesh.graph['bord_right'] - - le_nodes = mesh.nodes - - for node_key in le_nodes: - if mesh.neighbors(node_key).__length_hint__() == 4: - continue - four_nes = [xx for xx in get_cross_nes(node_key[0], node_key[1]) if - is_inside(xx[0], xx[1], bord_up, bord_down, bord_left, bord_right) and - xx in info_on_pix] - [four_nes.remove((ne_node[0], ne_node[1])) for ne_node in mesh.neighbors(node_key)] - for ne in four_nes: - for info in info_on_pix[ne]: - assert mesh.has_node((ne[0], ne[1], info['depth'])), "No node_key" - ind_node = le_nodes[node_key] - if abs(node_key[2]) > abs(info['depth']): - ind_node['near'] = append_element(ind_node, 'near', (ne[0], ne[1], info['depth'])) - else: - ind_node['far'] = append_element(ind_node, 'far', (ne[0], ne[1], info['depth'])) - if depth is not None: - for key, value in info_on_pix.items(): - if depth[key[0], key[1]] != abs(value[0]['depth']): - value[0]['disp'] = 1. / value[0]['depth'] - depth[key[0], key[1]] = abs(value[0]['depth']) - - return mesh, depth, info_on_pix - else: - return mesh - -def group_edges(LDI, config, image, remove_conflict_ordinal, spdb=False): - - ''' - (1) add_new_node(G, node) : add "node" to graph "G" - (2) add_new_edge(G, node_a, node_b) : add edge "node_a--node_b" to graph "G" - (3) exceed_thre(x, y, thre) : Check if difference between "x" and "y" exceed threshold "thre" - (4) key_exist(d, k) : Check if key "k' exists in dictionary "d" - (5) comm_opp_bg(G, x, y) : Check if node "x" and "y" in graph "G" treat the same opposite node as background - (6) comm_opp_fg(G, x, y) : Check if node "x" and "y" in graph "G" treat the same opposite node as foreground - ''' - add_new_node = lambda G, node: None if G.has_node(node) else G.add_node(node) - add_new_edge = lambda G, node_a, node_b: None if G.has_edge(node_a, node_b) else G.add_edge(node_a, node_b) - exceed_thre = lambda x, y, thre: (abs(x) - abs(y)) > thre - key_exist = lambda d, k: d.get(k) is not None - comm_opp_bg = lambda G, x, y: key_exist(G.nodes[x], 'far') and key_exist(G.nodes[y], 'far') and \ - not(set(G.nodes[x]['far']).isdisjoint(set(G.nodes[y]['far']))) - comm_opp_fg = lambda G, x, y: key_exist(G.nodes[x], 'near') and key_exist(G.nodes[y], 'near') and \ - not(set(G.nodes[x]['near']).isdisjoint(set(G.nodes[y]['near']))) - discont_graph = netx.Graph() - ''' - (A) Skip the pixel at image boundary, we don't want to deal with them. - (B) Identify discontinuity by the number of its neighbor(degree). - If the degree < 4(up/right/buttom/left). We will go through following steps: - (1) Add the discontinuity pixel "node" to graph "discont_graph". - (2) Find "node"'s cross neighbor(up/right/buttom/left) "ne_node". - - If the cross neighbor "ne_node" is a discontinuity pixel(degree("ne_node") < 4), - (a) add it to graph "discont_graph" and build the connection between "ne_node" and "node". 
- (b) label its cross neighbor as invalid pixels "inval_diag_candi" to avoid building - connection between original discontinuity pixel "node" and "inval_diag_candi". - - Otherwise, find "ne_node"'s cross neighbors, called diagonal candidate "diag_candi". - - The "diag_candi" is diagonal to the original discontinuity pixel "node". - - If "diag_candi" exists, go to step(3). - (3) A diagonal candidate "diag_candi" will be : - - added to the "discont_graph" if its degree < 4. - - connected to the original discontinuity pixel "node" if it satisfied either - one of following criterion: - (a) the difference of disparity between "diag_candi" and "node" is smaller than default threshold. - (b) the "diag_candi" and "node" face the same opposite pixel. (See. function "tear_edges") - (c) Both of "diag_candi" and "node" must_connect to each other. (See. function "combine_end_node") - (C) Aggregate each connected part in "discont_graph" into "discont_ccs" (A.K.A. depth edge). - ''' - for node in LDI.nodes: - if not(LDI.graph['bord_up'] + 1 <= node[0] <= LDI.graph['bord_down'] - 2 and \ - LDI.graph['bord_left'] + 1 <= node[1] <= LDI.graph['bord_right'] - 2): - continue - neighbors = [*LDI.neighbors(node)] - if len(neighbors) < 4: - add_new_node(discont_graph, node) - diag_candi_anc, inval_diag_candi, discont_nes = set(), set(), set() - for ne_node in neighbors: - if len([*LDI.neighbors(ne_node)]) < 4: - add_new_node(discont_graph, ne_node) - add_new_edge(discont_graph, ne_node, node) - discont_nes.add(ne_node) - else: - diag_candi_anc.add(ne_node) - inval_diag_candi = set([inval_diagonal for ne_node in discont_nes for inval_diagonal in LDI.neighbors(ne_node) if \ - abs(inval_diagonal[0] - node[0]) < 2 and abs(inval_diagonal[1] - node[1]) < 2]) - for ne_node in diag_candi_anc: - if ne_node[0] == node[0]: - diagonal_xys = [[ne_node[0] + 1, ne_node[1]], [ne_node[0] - 1, ne_node[1]]] - elif ne_node[1] == node[1]: - diagonal_xys = [[ne_node[0], ne_node[1] + 1], [ne_node[0], ne_node[1] - 1]] - for diag_candi in LDI.neighbors(ne_node): - if [diag_candi[0], diag_candi[1]] in diagonal_xys and LDI.degree(diag_candi) < 4: - if diag_candi not in inval_diag_candi: - if not exceed_thre(1./node[2], 1./diag_candi[2], config['depth_threshold']) or \ - (comm_opp_bg(LDI, diag_candi, node) and comm_opp_fg(LDI, diag_candi, node)): - add_new_node(discont_graph, diag_candi) - add_new_edge(discont_graph, diag_candi, node) - if key_exist(LDI.nodes[diag_candi], 'must_connect') and node in LDI.nodes[diag_candi]['must_connect'] and \ - key_exist(LDI.nodes[node], 'must_connect') and diag_candi in LDI.nodes[node]['must_connect']: - add_new_node(discont_graph, diag_candi) - add_new_edge(discont_graph, diag_candi, node) - if spdb == True: - import pdb; pdb.set_trace() - discont_ccs = [*netx.connected_components(discont_graph)] - ''' - In some corner case, a depth edge "discont_cc" will contain both - foreground(FG) and background(BG) pixels. This violate the assumption that - a depth edge can only composite by one type of pixel(FG or BG). - We need to further divide this depth edge into several sub-part so that the - assumption is satisfied. - (A) A depth edge is invalid if both of its "far_flag"(BG) and - "near_flag"(FG) are True. - (B) If the depth edge is invalid, we need to do: - (1) Find the role("oridinal") of each pixel on the depth edge. - "-1" --> Its opposite pixels has smaller depth(near) than it. - It is a backgorund pixel. - "+1" --> Its opposite pixels has larger depth(far) than it. - It is a foregorund pixel. 
- "0" --> Some of opposite pixels has larger depth(far) than it, - and some has smaller pixel than it. - It is an ambiguous pixel. - (2) For each pixel "discont_node", check if its neigbhors' roles are consistent. - - If not, break the connection between the neighbor "ne_node" that has a role - different from "discont_node". - - If yes, remove all the role that are inconsistent to its neighbors "ne_node". - (3) Connected component analysis to re-identified those divided depth edge. - (C) Aggregate each connected part in "discont_graph" into "discont_ccs" (A.K.A. depth edge). - ''' - if remove_conflict_ordinal: - new_discont_ccs = [] - num_new_cc = 0 - for edge_id, discont_cc in enumerate(discont_ccs): - near_flag = False - far_flag = False - for discont_node in discont_cc: - near_flag = True if key_exist(LDI.nodes[discont_node], 'far') else near_flag - far_flag = True if key_exist(LDI.nodes[discont_node], 'near') else far_flag - if far_flag and near_flag: - break - if far_flag and near_flag: - for discont_node in discont_cc: - discont_graph.nodes[discont_node]['ordinal'] = \ - np.array([key_exist(LDI.nodes[discont_node], 'far'), - key_exist(LDI.nodes[discont_node], 'near')]) * \ - np.array([-1, 1]) - discont_graph.nodes[discont_node]['ordinal'] = \ - np.sum(discont_graph.nodes[discont_node]['ordinal']) - remove_nodes, remove_edges = [], [] - for discont_node in discont_cc: - ordinal_relation = np.sum([discont_graph.nodes[xx]['ordinal'] \ - for xx in discont_graph.neighbors(discont_node)]) - near_side = discont_graph.nodes[discont_node]['ordinal'] <= 0 - if abs(ordinal_relation) < len([*discont_graph.neighbors(discont_node)]): - remove_nodes.append(discont_node) - for ne_node in discont_graph.neighbors(discont_node): - remove_flag = (near_side and not(key_exist(LDI.nodes[ne_node], 'far'))) or \ - (not near_side and not(key_exist(LDI.nodes[ne_node], 'near'))) - remove_edges += [(discont_node, ne_node)] if remove_flag else [] - else: - if near_side and key_exist(LDI.nodes[discont_node], 'near'): - LDI.nodes[discont_node].pop('near') - elif not(near_side) and key_exist(LDI.nodes[discont_node], 'far'): - LDI.nodes[discont_node].pop('far') - discont_graph.remove_edges_from(remove_edges) - sub_mesh = discont_graph.subgraph(list(discont_cc)).copy() - sub_discont_ccs = [*netx.connected_components(sub_mesh)] - is_redun_near = lambda xx: len(xx) == 1 and xx[0] in remove_nodes and key_exist(LDI.nodes[xx[0]], 'far') - for sub_discont_cc in sub_discont_ccs: - if is_redun_near(list(sub_discont_cc)): - LDI.nodes[list(sub_discont_cc)[0]].pop('far') - new_discont_ccs.append(sub_discont_cc) - else: - new_discont_ccs.append(discont_cc) - discont_ccs = new_discont_ccs - new_discont_ccs = None - if spdb == True: - import pdb; pdb.set_trace() - - for edge_id, edge_cc in enumerate(discont_ccs): - for node in edge_cc: - LDI.nodes[node]['edge_id'] = edge_id - - return discont_ccs, LDI, discont_graph - -def combine_end_node(mesh, edge_mesh, edge_ccs, depth): - import collections - mesh_nodes = mesh.nodes - connect_dict = dict() - for valid_edge_id, valid_edge_cc in enumerate(edge_ccs): - connect_info = [] - for valid_edge_node in valid_edge_cc: - single_connect = set() - for ne_node in mesh.neighbors(valid_edge_node): - if mesh_nodes[ne_node].get('far') is not None: - for fn in mesh_nodes[ne_node].get('far'): - if mesh.has_node(fn) and mesh_nodes[fn].get('edge_id') is not None: - single_connect.add(mesh_nodes[fn]['edge_id']) - if mesh_nodes[ne_node].get('near') is not None: - for fn in 
mesh_nodes[ne_node].get('near'): - if mesh.has_node(fn) and mesh_nodes[fn].get('edge_id') is not None: - single_connect.add(mesh_nodes[fn]['edge_id']) - connect_info.extend([*single_connect]) - connect_dict[valid_edge_id] = collections.Counter(connect_info) - - end_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - edge_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for valid_edge_id, valid_edge_cc in enumerate(edge_ccs): - for valid_edge_node in valid_edge_cc: - edge_maps[valid_edge_node[0], valid_edge_node[1]] = valid_edge_id - if len([*edge_mesh.neighbors(valid_edge_node)]) == 1: - num_ne = 1 - if num_ne == 1: - end_maps[valid_edge_node[0], valid_edge_node[1]] = valid_edge_node[2] - nxs, nys = np.where(end_maps != 0) - invalid_nodes = set() - for nx, ny in zip(nxs, nys): - if mesh.has_node((nx, ny, end_maps[nx, ny])) is False: - invalid_nodes.add((nx, ny)) - continue - four_nes = [xx for xx in [(nx - 1, ny), (nx + 1, ny), (nx, ny - 1), (nx, ny + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - end_maps[xx[0], xx[1]] != 0] - mesh_nes = [*mesh.neighbors((nx, ny, end_maps[nx, ny]))] - remove_num = 0 - for fne in four_nes: - if (fne[0], fne[1], end_maps[fne[0], fne[1]]) in mesh_nes: - remove_num += 1 - if remove_num == len(four_nes): - invalid_nodes.add((nx, ny)) - for invalid_node in invalid_nodes: - end_maps[invalid_node[0], invalid_node[1]] = 0 - - nxs, nys = np.where(end_maps != 0) - invalid_nodes = set() - for nx, ny in zip(nxs, nys): - if mesh_nodes[(nx, ny, end_maps[nx, ny])].get('edge_id') is None: - continue - else: - self_id = mesh_nodes[(nx, ny, end_maps[nx, ny])].get('edge_id') - self_connect = connect_dict[self_id] if connect_dict.get(self_id) is not None else dict() - four_nes = [xx for xx in [(nx - 1, ny), (nx + 1, ny), (nx, ny - 1), (nx, ny + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - end_maps[xx[0], xx[1]] != 0] - for fne in four_nes: - if mesh_nodes[(fne[0], fne[1], end_maps[fne[0], fne[1]])].get('edge_id') is None: - continue - else: - ne_id = mesh_nodes[(fne[0], fne[1], end_maps[fne[0], fne[1]])]['edge_id'] - if self_connect.get(ne_id) is None or self_connect.get(ne_id) == 1: - continue - else: - invalid_nodes.add((nx, ny)) - for invalid_node in invalid_nodes: - end_maps[invalid_node[0], invalid_node[1]] = 0 - nxs, nys = np.where(end_maps != 0) - invalid_nodes = set() - for nx, ny in zip(nxs, nys): - four_nes = [xx for xx in [(nx - 1, ny), (nx + 1, ny), (nx, ny - 1), (nx, ny + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - end_maps[xx[0], xx[1]] != 0] - for fne in four_nes: - if mesh.has_node((fne[0], fne[1], end_maps[fne[0], fne[1]])): - node_a, node_b = (fne[0], fne[1], end_maps[fne[0], fne[1]]), (nx, ny, end_maps[nx, ny]) - mesh.add_edge(node_a, node_b) - mesh_nodes[node_b]['must_connect'] = set() if mesh_nodes[node_b].get('must_connect') is None else mesh_nodes[node_b]['must_connect'] - mesh_nodes[node_b]['must_connect'].add(node_a) - mesh_nodes[node_b]['must_connect'] |= set([xx for xx in [*edge_mesh.neighbors(node_a)] if \ - (xx[0] - node_b[0]) < 2 and (xx[1] - node_b[1]) < 2]) - mesh_nodes[node_a]['must_connect'] = set() if mesh_nodes[node_a].get('must_connect') is None else mesh_nodes[node_a]['must_connect'] - mesh_nodes[node_a]['must_connect'].add(node_b) - mesh_nodes[node_a]['must_connect'] |= set([xx for xx in [*edge_mesh.neighbors(node_b)] if \ - (xx[0] - node_a[0]) < 2 and (xx[1] - node_a[1]) < 2]) - invalid_nodes.add((nx, ny)) - for 
invalid_node in invalid_nodes: - end_maps[invalid_node[0], invalid_node[1]] = 0 - - return mesh - -def remove_redundant_edge(mesh, edge_mesh, edge_ccs, info_on_pix, config, redundant_number=1000, invalid=False, spdb=False): - point_to_amount = {} - point_to_id = {} - end_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for valid_edge_id, valid_edge_cc in enumerate(edge_ccs): - for valid_edge_node in valid_edge_cc: - point_to_amount[valid_edge_node] = len(valid_edge_cc) - point_to_id[valid_edge_node] = valid_edge_id - if edge_mesh.has_node(valid_edge_node) is True: - if len([*edge_mesh.neighbors(valid_edge_node)]) == 1: - end_maps[valid_edge_node[0], valid_edge_node[1]] = valid_edge_id - nxs, nys = np.where(end_maps > -1) - point_to_adjoint = {} - for nx, ny in zip(nxs, nys): - adjoint_edges = set([end_maps[x, y] for x, y in [(nx + 1, ny), (nx - 1, ny), (nx, ny + 1), (nx, ny - 1)] if end_maps[x, y] != -1]) - point_to_adjoint[end_maps[nx, ny]] = (point_to_adjoint[end_maps[nx, ny]] | adjoint_edges) if point_to_adjoint.get(end_maps[nx, ny]) is not None else adjoint_edges - valid_edge_ccs = filter_edge(mesh, edge_ccs, config, invalid=invalid) - edge_canvas = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for valid_edge_id, valid_edge_cc in enumerate(valid_edge_ccs): - for valid_edge_node in valid_edge_cc: - edge_canvas[valid_edge_node[0], valid_edge_node[1]] = valid_edge_id - if spdb is True: - plt.imshow(edge_canvas); plt.show() - import pdb; pdb.set_trace() - for valid_edge_id, valid_edge_cc in enumerate(valid_edge_ccs): - end_number = 0 - four_end_number = 0 - eight_end_number = 0 - db_eight_end_number = 0 - if len(valid_edge_cc) > redundant_number: - continue - for valid_edge_node in valid_edge_cc: - if len([*edge_mesh.neighbors(valid_edge_node)]) == 3: - break - elif len([*edge_mesh.neighbors(valid_edge_node)]) == 1: - hx, hy, hz = valid_edge_node - if invalid is False: - eight_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id] - if len(eight_nes) == 0: - end_number += 1 - if invalid is True: - four_nes = []; eight_nes = []; db_eight_nes = [] - four_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id] - eight_nes = [(x, y) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), \ - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id] - db_eight_nes = [(x, y) for x in range(hx - 2, hx + 3) for y in range(hy - 2, hy + 3) \ - if info_on_pix.get((x, y)) is not None and edge_canvas[x, y] != -1 and edge_canvas[x, y] != valid_edge_id and (x, y) != (hx, hy)] - if len(four_nes) == 0 or len(eight_nes) == 0: - end_number += 1 - if len(four_nes) == 0: - four_end_number += 1 - if len(eight_nes) == 0: - eight_end_number += 1 - if len(db_eight_nes) == 0: - db_eight_end_number += 1 - elif len([*edge_mesh.neighbors(valid_edge_node)]) == 0: - hx, hy, hz = valid_edge_node - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and \ - mesh.has_edge(valid_edge_node, (x, y, info_on_pix[(x, 
y)][0]['depth'])) is False] - for ne in four_nes: - try: - if invalid is True or (point_to_amount.get(ne) is None or point_to_amount[ne] < redundant_number) or \ - point_to_id[ne] in point_to_adjoint.get(point_to_id[valid_edge_node], set()): - mesh.add_edge(valid_edge_node, ne) - except: - import pdb; pdb.set_trace() - if (invalid is not True and end_number >= 1) or (invalid is True and end_number >= 2 and eight_end_number >= 1 and db_eight_end_number >= 1): - for valid_edge_node in valid_edge_cc: - hx, hy, _ = valid_edge_node - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and \ - mesh.has_edge(valid_edge_node, (x, y, info_on_pix[(x, y)][0]['depth'])) is False and \ - (edge_canvas[x, y] == -1 or edge_canvas[x, y] == valid_edge_id)] - for ne in four_nes: - if invalid is True or (point_to_amount.get(ne) is None or point_to_amount[ne] < redundant_number) or \ - point_to_id[ne] in point_to_adjoint.get(point_to_id[valid_edge_node], set()): - mesh.add_edge(valid_edge_node, ne) - - return mesh - -def judge_dangle(mark, mesh, node): - if not (1 <= node[0] < mesh.graph['H']-1) or not(1 <= node[1] < mesh.graph['W']-1): - return mark - mesh_neighbors = [*mesh.neighbors(node)] - mesh_neighbors = [xx for xx in mesh_neighbors if 0 < xx[0] < mesh.graph['H'] - 1 and 0 < xx[1] < mesh.graph['W'] - 1] - if len(mesh_neighbors) >= 3: - return mark - elif len(mesh_neighbors) <= 1: - mark[node[0], node[1]] = (len(mesh_neighbors) + 1) - else: - dan_ne_node_a = mesh_neighbors[0] - dan_ne_node_b = mesh_neighbors[1] - if abs(dan_ne_node_a[0] - dan_ne_node_b[0]) > 1 or \ - abs(dan_ne_node_a[1] - dan_ne_node_b[1]) > 1: - mark[node[0], node[1]] = 3 - - return mark - -def remove_dangling(mesh, edge_ccs, edge_mesh, info_on_pix, image, depth, config): - - tmp_edge_ccs = copy.deepcopy(edge_ccs) - for edge_cc_id, valid_edge_cc in enumerate(tmp_edge_ccs): - if len(valid_edge_cc) > 1 or len(valid_edge_cc) == 0: - continue - single_edge_node = [*valid_edge_cc][0] - hx, hy, hz = single_edge_node - eight_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None]) - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None] - sub_mesh = mesh.subgraph(eight_nes).copy() - ccs = netx.connected_components(sub_mesh) - four_ccs = [] - for cc_id, _cc in enumerate(ccs): - four_ccs.append(set()) - for cc_node in _cc: - if abs(cc_node[0] - hx) + abs(cc_node[1] - hy) < 2: - four_ccs[cc_id].add(cc_node) - largest_cc = sorted(four_ccs, key=lambda x: (len(x), -np.sum([abs(xx[2] - hz) for xx in x])))[-1] - if len(largest_cc) < 2: - for ne in four_nes: - mesh.add_edge(single_edge_node, ne) - else: - mesh.remove_edges_from([(single_edge_node, ne) for ne in mesh.neighbors(single_edge_node)]) - new_depth = np.mean([xx[2] for xx in largest_cc]) - info_on_pix[(hx, hy)][0]['depth'] = new_depth - info_on_pix[(hx, hy)][0]['disp'] = 1./new_depth - new_node = (hx, hy, new_depth) - mesh = refresh_node(single_edge_node, mesh.node[single_edge_node], new_node, dict(), mesh) - edge_ccs[edge_cc_id] = set([new_node]) - for ne in largest_cc: - mesh.add_edge(new_node, ne) - - mark = np.zeros((mesh.graph['H'], mesh.graph['W'])) - for edge_idx, edge_cc in 
enumerate(edge_ccs): - for edge_node in edge_cc: - if not (mesh.graph['bord_up'] <= edge_node[0] < mesh.graph['bord_down']-1) or \ - not (mesh.graph['bord_left'] <= edge_node[1] < mesh.graph['bord_right']-1): - continue - mesh_neighbors = [*mesh.neighbors(edge_node)] - mesh_neighbors = [xx for xx in mesh_neighbors \ - if mesh.graph['bord_up'] < xx[0] < mesh.graph['bord_down'] - 1 and \ - mesh.graph['bord_left'] < xx[1] < mesh.graph['bord_right'] - 1] - if len([*mesh.neighbors(edge_node)]) >= 3: - continue - elif len([*mesh.neighbors(edge_node)]) <= 1: - mark[edge_node[0], edge_node[1]] += (len([*mesh.neighbors(edge_node)]) + 1) - else: - dan_ne_node_a = [*mesh.neighbors(edge_node)][0] - dan_ne_node_b = [*mesh.neighbors(edge_node)][1] - if abs(dan_ne_node_a[0] - dan_ne_node_b[0]) > 1 or \ - abs(dan_ne_node_a[1] - dan_ne_node_b[1]) > 1: - mark[edge_node[0], edge_node[1]] += 3 - mxs, mys = np.where(mark == 1) - conn_0_nodes = [(x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']) for x in zip(mxs, mys) \ - if mesh.has_node((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']))] - mxs, mys = np.where(mark == 2) - conn_1_nodes = [(x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']) for x in zip(mxs, mys) \ - if mesh.has_node((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']))] - for node in conn_0_nodes: - hx, hy = node[0], node[1] - four_nes = [(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1)] \ - if info_on_pix.get((x, y)) is not None] - re_depth = {'value' : 0, 'count': 0} - for ne in four_nes: - mesh.add_edge(node, ne) - re_depth['value'] += cc_node[2] - re_depth['count'] += 1. - re_depth = re_depth['value'] / re_depth['count'] - mapping_dict = {node: (node[0], node[1], re_depth)} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - for node in conn_1_nodes: - hx, hy = node[0], node[1] - eight_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None]) - self_nes = set([ne2 for ne1 in mesh.neighbors(node) for ne2 in mesh.neighbors(ne1) if ne2 in eight_nes]) - eight_nes = [*(eight_nes - self_nes)] - sub_mesh = mesh.subgraph(eight_nes).copy() - ccs = netx.connected_components(sub_mesh) - largest_cc = sorted(ccs, key=lambda x: (len(x), -np.sum([abs(xx[0] - node[0]) + abs(xx[1] - node[1]) for xx in x])))[-1] - - mesh.remove_edges_from([(xx, node) for xx in mesh.neighbors(node)]) - re_depth = {'value' : 0, 'count': 0} - for cc_node in largest_cc: - if cc_node[0] == node[0] and cc_node[1] == node[1]: - continue - re_depth['value'] += cc_node[2] - re_depth['count'] += 1. 
- if abs(cc_node[0] - node[0]) + abs(cc_node[1] - node[1]) < 2: - mesh.add_edge(cc_node, node) - try: - re_depth = re_depth['value'] / re_depth['count'] - except: - re_depth = node[2] - renode = (node[0], node[1], re_depth) - mapping_dict = {node: renode} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - edge_mesh, mesh, mark, info_on_pix = recursive_add_edge(edge_mesh, mesh, info_on_pix, renode, mark) - mxs, mys = np.where(mark == 3) - conn_2_nodes = [(x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth']) for x in zip(mxs, mys) \ - if mesh.has_node((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth'])) and \ - mesh.degree((x[0], x[1], info_on_pix[(x[0], x[1])][0]['depth'])) == 2] - sub_mesh = mesh.subgraph(conn_2_nodes).copy() - ccs = netx.connected_components(sub_mesh) - for cc in ccs: - candidate_nodes = [xx for xx in cc if sub_mesh.degree(xx) == 1] - for node in candidate_nodes: - if mesh.has_node(node) is False: - continue - ne_node = [xx for xx in mesh.neighbors(node) if xx not in cc][0] - hx, hy = node[0], node[1] - eight_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and (x, y, info_on_pix[(x, y)][0]['depth']) not in cc]) - ne_sub_mesh = mesh.subgraph(eight_nes).copy() - ne_ccs = netx.connected_components(ne_sub_mesh) - try: - ne_cc = [ne_cc for ne_cc in ne_ccs if ne_node in ne_cc][0] - except: - import pdb; pdb.set_trace() - largest_cc = [xx for xx in ne_cc if abs(xx[0] - node[0]) + abs(xx[1] - node[1]) == 1] - mesh.remove_edges_from([(xx, node) for xx in mesh.neighbors(node)]) - re_depth = {'value' : 0, 'count': 0} - for cc_node in largest_cc: - re_depth['value'] += cc_node[2] - re_depth['count'] += 1. - mesh.add_edge(cc_node, node) - try: - re_depth = re_depth['value'] / re_depth['count'] - except: - re_depth = node[2] - renode = (node[0], node[1], re_depth) - mapping_dict = {node: renode} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - edge_mesh, mesh, mark, info_on_pix = recursive_add_edge(edge_mesh, mesh, info_on_pix, renode, mark) - break - if len(cc) == 1: - node = [node for node in cc][0] - hx, hy = node[0], node[1] - nine_nes = set([(x, y, info_on_pix[(x, y)][0]['depth']) for x, y in [(hx, hy), (hx + 1, hy), (hx - 1, hy), (hx, hy + 1), (hx, hy - 1), - (hx + 1, hy + 1), (hx - 1, hy - 1), (hx - 1, hy + 1), (hx + 1, hy - 1)] \ - if info_on_pix.get((x, y)) is not None and mesh.has_node((x, y, info_on_pix[(x, y)][0]['depth']))]) - ne_sub_mesh = mesh.subgraph(nine_nes).copy() - ne_ccs = netx.connected_components(ne_sub_mesh) - for ne_cc in ne_ccs: - if node in ne_cc: - re_depth = {'value' : 0, 'count': 0} - for ne in ne_cc: - if abs(ne[0] - node[0]) + abs(ne[1] - node[1]) == 1: - mesh.add_edge(node, ne) - re_depth['value'] += ne[2] - re_depth['count'] += 1. 
- re_depth = re_depth['value'] / re_depth['count'] - mapping_dict = {node: (node[0], node[1], re_depth)} - info_on_pix, mesh, edge_mesh = update_info(mapping_dict, info_on_pix, mesh, edge_mesh) - depth[node[0], node[1]] = abs(re_depth) - mark[node[0], node[1]] = 0 - - - return mesh, info_on_pix, edge_mesh, depth, mark - -def context_and_holes(mesh, edge_ccs, config, specific_edge_id, specific_edge_loc, depth_feat_model, - connect_points_ccs=None, inpaint_iter=0, filter_edge=False, vis_edge_id=None): - edge_maps = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - mask_info = {} - for edge_id, edge_cc in enumerate(edge_ccs): - for edge_node in edge_cc: - edge_maps[edge_node[0], edge_node[1]] = edge_id - - context_ccs = [set() for x in range(len(edge_ccs))] - extend_context_ccs = [set() for x in range(len(edge_ccs))] - extend_erode_context_ccs = [set() for x in range(len(edge_ccs))] - extend_edge_ccs = [set() for x in range(len(edge_ccs))] - accomp_extend_context_ccs = [set() for x in range(len(edge_ccs))] - erode_context_ccs = [set() for x in range(len(edge_ccs))] - broken_mask_ccs = [set() for x in range(len(edge_ccs))] - invalid_extend_edge_ccs = [set() for x in range(len(edge_ccs))] - intouched_ccs = [set() for x in range(len(edge_ccs))] - redundant_ccs = [set() for x in range(len(edge_ccs))] - if inpaint_iter == 0: - background_thickness = config['background_thickness'] - context_thickness = config['context_thickness'] - else: - background_thickness = config['background_thickness_2'] - context_thickness = config['context_thickness_2'] - - mesh_nodes = mesh.nodes - for edge_id, edge_cc in enumerate(edge_ccs): - if context_thickness == 0 or (len(specific_edge_id) > 0 and edge_id not in specific_edge_id): - continue - edge_group = {} - for edge_node in edge_cc: - far_nodes = mesh_nodes[edge_node].get('far') - if far_nodes is None: - continue - for far_node in far_nodes: - if far_node in edge_cc: - continue - context_ccs[edge_id].add(far_node) - if mesh_nodes[far_node].get('edge_id') is not None: - if edge_group.get(mesh_nodes[far_node]['edge_id']) is None: - edge_group[mesh_nodes[far_node]['edge_id']] = set() - edge_group[mesh_nodes[far_node]['edge_id']].add(far_node) - if len(edge_cc) > 2: - for edge_key in [*edge_group.keys()]: - if len(edge_group[edge_key]) == 1: - context_ccs[edge_id].remove([*edge_group[edge_key]][0]) - for edge_id, edge_cc in enumerate(edge_ccs): - if inpaint_iter != 0: - continue - tmp_intouched_nodes = set() - for edge_node in edge_cc: - raw_intouched_nodes = set(mesh_nodes[edge_node].get('near')) if mesh_nodes[edge_node].get('near') is not None else set() - tmp_intouched_nodes |= set([xx for xx in raw_intouched_nodes if mesh_nodes[xx].get('edge_id') is not None and \ - len(context_ccs[mesh_nodes[xx].get('edge_id')]) > 0]) - intouched_ccs[edge_id] |= tmp_intouched_nodes - tmp_intouched_nodes = None - mask_ccs = copy.deepcopy(edge_ccs) - forbidden_len = 3 - forbidden_map = np.ones((mesh.graph['H'] - forbidden_len, mesh.graph['W'] - forbidden_len)) - forbidden_map = np.pad(forbidden_map, ((forbidden_len, forbidden_len), (forbidden_len, forbidden_len)), mode='constant').astype(np.bool) - cur_tmp_mask_map = np.zeros_like(forbidden_map).astype(np.bool) - passive_background = 10 if 10 is not None else background_thickness - passive_context = 1 if 1 is not None else context_thickness - - for edge_id, edge_cc in enumerate(edge_ccs): - cur_mask_cc = None; cur_mask_cc = [] - cur_context_cc = None; cur_context_cc = [] - cur_accomp_near_cc = None; cur_accomp_near_cc = [] - 
cur_invalid_extend_edge_cc = None; cur_invalid_extend_edge_cc = [] - cur_comp_far_cc = None; cur_comp_far_cc = [] - tmp_erode = [] - if len(context_ccs[edge_id]) == 0 or (len(specific_edge_id) > 0 and edge_id not in specific_edge_id): - continue - for i in range(max(background_thickness, context_thickness)): - cur_tmp_mask_map.fill(False) - if i == 0: - tmp_mask_nodes = copy.deepcopy(mask_ccs[edge_id]) - tmp_intersect_nodes = [] - tmp_intersect_context_nodes = [] - mask_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - context_depth = np.zeros((mesh.graph['H'], mesh.graph['W'])) - comp_cnt_depth = np.zeros((mesh.graph['H'], mesh.graph['W'])) - connect_map = np.zeros((mesh.graph['H'], mesh.graph['W'])) - for node in tmp_mask_nodes: - mask_map[node[0], node[1]] = True - depth_count = 0 - if mesh_nodes[node].get('far') is not None: - for comp_cnt_node in mesh_nodes[node]['far']: - comp_cnt_depth[node[0], node[1]] += abs(comp_cnt_node[2]) - depth_count += 1 - if depth_count > 0: - comp_cnt_depth[node[0], node[1]] = comp_cnt_depth[node[0], node[1]] / depth_count - connect_node = [] - if mesh_nodes[node].get('connect_point_id') is not None: - connect_node.append(mesh_nodes[node]['connect_point_id']) - connect_point_id = np.bincount(connect_node).argmax() if len(connect_node) > 0 else -1 - if connect_point_id > -1 and connect_points_ccs is not None: - for xx in connect_points_ccs[connect_point_id]: - if connect_map[xx[0], xx[1]] == 0: - connect_map[xx[0], xx[1]] = xx[2] - if mesh_nodes[node].get('connect_point_exception') is not None: - for xx in mesh_nodes[node]['connect_point_exception']: - if connect_map[xx[0], xx[1]] == 0: - connect_map[xx[0], xx[1]] = xx[2] - tmp_context_nodes = [*context_ccs[edge_id]] - tmp_erode.append([*context_ccs[edge_id]]) - context_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - if (context_map.astype(np.uint8) * mask_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - for node in tmp_context_nodes: - context_map[node[0], node[1]] = True - context_depth[node[0], node[1]] = node[2] - context_map[mask_map == True] = False - if (context_map.astype(np.uint8) * mask_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - tmp_intouched_nodes = [*intouched_ccs[edge_id]] - intouched_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - for node in tmp_intouched_nodes: intouched_map[node[0], node[1]] = True - intouched_map[mask_map == True] = False - tmp_redundant_nodes = set() - tmp_noncont_nodes = set() - noncont_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - intersect_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - intersect_context_map = np.zeros((mesh.graph['H'], mesh.graph['W']), dtype=np.bool) - if i > passive_background and inpaint_iter == 0: - new_tmp_intersect_nodes = None - new_tmp_intersect_nodes = [] - for node in tmp_intersect_nodes: - nes = mesh.neighbors(node) - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True and \ - bool(intouched_map[ne[0], ne[1]]) is False and\ - bool(intersect_map[ne[0], ne[1]]) is False and\ - bool(intersect_context_map[ne[0], ne[1]]) is False: - break_flag = False - if (i - passive_background) % 2 == 0 and (i - passive_background) % 8 != 0: - four_nes = [xx for xx in[[ne[0] - 1, ne[1]], [ne[0] + 1, ne[1]], [ne[0], ne[1] - 1], [ne[0], ne[1] + 1]] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W']] - 
for fne in four_nes: - if bool(mask_map[fne[0], fne[1]]) is True: - break_flag = True - break - if break_flag is True: - continue - intersect_map[ne[0], ne[1]] = True - new_tmp_intersect_nodes.append(ne) - tmp_intersect_nodes = None - tmp_intersect_nodes = new_tmp_intersect_nodes - - if i > passive_context and inpaint_iter == 1: - new_tmp_intersect_context_nodes = None - new_tmp_intersect_context_nodes = [] - for node in tmp_intersect_context_nodes: - nes = mesh.neighbors(node) - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True and \ - bool(intouched_map[ne[0], ne[1]]) is False and\ - bool(intersect_map[ne[0], ne[1]]) is False and \ - bool(intersect_context_map[ne[0], ne[1]]) is False: - intersect_context_map[ne[0], ne[1]] = True - new_tmp_intersect_context_nodes.append(ne) - tmp_intersect_context_nodes = None - tmp_intersect_context_nodes = new_tmp_intersect_context_nodes - - new_tmp_mask_nodes = None - new_tmp_mask_nodes = [] - for node in tmp_mask_nodes: - four_nes = {xx:[] for xx in [(node[0] - 1, node[1]), (node[0] + 1, node[1]), (node[0], node[1] - 1), (node[0], node[1] + 1)] if \ - 0 <= xx[0] < connect_map.shape[0] and 0 <= xx[1] < connect_map.shape[1]} - if inpaint_iter > 0: - for ne in four_nes.keys(): - if connect_map[ne[0], ne[1]] == True: - tmp_context_nodes.append((ne[0], ne[1], connect_map[ne[0], ne[1]])) - context_map[ne[0], ne[1]] = True - nes = mesh.neighbors(node) - if inpaint_iter > 0: - for ne in nes: four_nes[(ne[0], ne[1])].append(ne[2]) - nes = [] - for kfne, vfnes in four_nes.items(): vfnes.sort(key = lambda xx: abs(xx), reverse=True) - for kfne, vfnes in four_nes.items(): - for vfne in vfnes: nes.append((kfne[0], kfne[1], vfne)) - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True and \ - bool(intouched_map[ne[0], ne[1]]) is False and \ - bool(intersect_map[ne[0], ne[1]]) is False and \ - bool(intersect_context_map[ne[0], ne[1]]) is False: - if i == passive_background and inpaint_iter == 0: - if np.any(context_map[max(ne[0] - 1, 0):min(ne[0] + 2, mesh.graph['H']), max(ne[1] - 1, 0):min(ne[1] + 2, mesh.graph['W'])]) == True: - intersect_map[ne[0], ne[1]] = True - tmp_intersect_nodes.append(ne) - continue - if i < background_thickness: - if inpaint_iter == 0: - cur_mask_cc.append(ne) - elif mesh_nodes[ne].get('inpaint_id') == 1: - cur_mask_cc.append(ne) - else: - continue - mask_ccs[edge_id].add(ne) - if inpaint_iter == 0: - if comp_cnt_depth[node[0], node[1]] > 0 and comp_cnt_depth[ne[0], ne[1]] == 0: - comp_cnt_depth[ne[0], ne[1]] = comp_cnt_depth[node[0], node[1]] - if mesh_nodes[ne].get('far') is not None: - for comp_far_node in mesh_nodes[ne]['far']: - cur_comp_far_cc.append(comp_far_node) - cur_accomp_near_cc.append(ne) - cur_invalid_extend_edge_cc.append(comp_far_node) - if mesh_nodes[ne].get('edge_id') is not None and \ - len(context_ccs[mesh_nodes[ne].get('edge_id')]) > 0: - intouched_fars = set(mesh_nodes[ne].get('far')) if mesh_nodes[ne].get('far') is not None else set() - accum_intouched_fars = set(intouched_fars) - for intouched_far in intouched_fars: - accum_intouched_fars |= set([*mesh.neighbors(intouched_far)]) - for intouched_far in accum_intouched_fars: - if bool(mask_map[intouched_far[0], intouched_far[1]]) is True or \ - bool(context_map[intouched_far[0], intouched_far[1]]) is True: - continue - tmp_redundant_nodes.add(intouched_far) - 
intouched_map[intouched_far[0], intouched_far[1]] = True - if mesh_nodes[ne].get('near') is not None: - intouched_nears = set(mesh_nodes[ne].get('near')) - for intouched_near in intouched_nears: - if bool(mask_map[intouched_near[0], intouched_near[1]]) is True or \ - bool(context_map[intouched_near[0], intouched_near[1]]) is True: - continue - tmp_redundant_nodes.add(intouched_near) - intouched_map[intouched_near[0], intouched_near[1]] = True - if not (mesh_nodes[ne].get('inpaint_id') != 1 and inpaint_iter == 1): - new_tmp_mask_nodes.append(ne) - mask_map[ne[0], ne[1]] = True - tmp_mask_nodes = new_tmp_mask_nodes - - new_tmp_context_nodes = None - new_tmp_context_nodes = [] - for node in tmp_context_nodes: - nes = mesh.neighbors(node) - if inpaint_iter > 0: - four_nes = {(node[0] - 1, node[1]):[], (node[0] + 1, node[1]):[], (node[0], node[1] - 1):[], (node[0], node[1] + 1):[]} - for ne in nes: four_nes[(ne[0], ne[1])].append(ne[2]) - nes = [] - for kfne, vfnes in four_nes.items(): vfnes.sort(key = lambda xx: abs(xx), reverse=True) - for kfne, vfnes in four_nes.items(): - for vfne in vfnes: nes.append((kfne[0], kfne[1], vfne)) - for ne in nes: - mask_flag = (bool(mask_map[ne[0], ne[1]]) is False) - if bool(context_map[ne[0], ne[1]]) is False and mask_flag and \ - bool(forbidden_map[ne[0], ne[1]]) is True and bool(noncont_map[ne[0], ne[1]]) is False and \ - bool(intersect_context_map[ne[0], ne[1]]) is False: - if i == passive_context and inpaint_iter == 1: - mnes = mesh.neighbors(ne) - if any([mask_map[mne[0], mne[1]] == True for mne in mnes]) is True: - intersect_context_map[ne[0], ne[1]] = True - tmp_intersect_context_nodes.append(ne) - continue - if False and mesh_nodes[ne].get('near') is not None and mesh_nodes[ne].get('edge_id') != edge_id: - noncont_nears = set(mesh_nodes[ne].get('near')) - for noncont_near in noncont_nears: - if bool(context_map[noncont_near[0], noncont_near[1]]) is False: - tmp_noncont_nodes.add(noncont_near) - noncont_map[noncont_near[0], noncont_near[1]] = True - new_tmp_context_nodes.append(ne) - context_map[ne[0], ne[1]] = True - context_depth[ne[0], ne[1]] = ne[2] - cur_context_cc.extend(new_tmp_context_nodes) - tmp_erode.append(new_tmp_context_nodes) - tmp_context_nodes = None - tmp_context_nodes = new_tmp_context_nodes - new_tmp_intouched_nodes = None; new_tmp_intouched_nodes = [] - - for node in tmp_intouched_nodes: - if bool(context_map[node[0], node[1]]) is True or bool(mask_map[node[0], node[1]]) is True: - continue - nes = mesh.neighbors(node) - - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(intouched_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_intouched_nodes.append(ne) - intouched_map[ne[0], ne[1]] = True - tmp_intouched_nodes = None - tmp_intouched_nodes = set(new_tmp_intouched_nodes) - new_tmp_redundant_nodes = None; new_tmp_redundant_nodes = [] - for node in tmp_redundant_nodes: - if bool(context_map[node[0], node[1]]) is True or \ - bool(mask_map[node[0], node[1]]) is True: - continue - nes = mesh.neighbors(node) - - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(intouched_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_redundant_nodes.append(ne) - intouched_map[ne[0], ne[1]] = True - tmp_redundant_nodes = None - tmp_redundant_nodes = set(new_tmp_redundant_nodes) - new_tmp_noncont_nodes = None; new_tmp_noncont_nodes = 
[] - for node in tmp_noncont_nodes: - if bool(context_map[node[0], node[1]]) is True or \ - bool(mask_map[node[0], node[1]]) is True: - continue - nes = mesh.neighbors(node) - rmv_flag = False - for ne in nes: - if bool(context_map[ne[0], ne[1]]) is False and \ - bool(mask_map[ne[0], ne[1]]) is False and \ - bool(noncont_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - patch_context_map = context_map[max(ne[0] - 1, 0):min(ne[0] + 2, context_map.shape[0]), - max(ne[1] - 1, 0):min(ne[1] + 2, context_map.shape[1])] - if bool(np.any(patch_context_map)) is True: - new_tmp_noncont_nodes.append(ne) - noncont_map[ne[0], ne[1]] = True - tmp_noncont_nodes = None - tmp_noncont_nodes = set(new_tmp_noncont_nodes) - if inpaint_iter == 0: - depth_dict = get_depth_from_maps(context_map, mask_map, context_depth, mesh.graph['H'], mesh.graph['W'], log_depth=config['log_depth']) - mask_size = get_valid_size(depth_dict['mask']) - mask_size = dilate_valid_size(mask_size, depth_dict['mask'], dilate=[20, 20]) - context_size = get_valid_size(depth_dict['context']) - context_size = dilate_valid_size(context_size, depth_dict['context'], dilate=[20, 20]) - union_size = size_operation(mask_size, context_size, operation='+') - depth_dict = depth_inpainting(None, None, None, None, mesh, config, union_size, depth_feat_model, None, given_depth_dict=depth_dict, spdb=False) - near_depth_map, raw_near_depth_map = np.zeros((mesh.graph['H'], mesh.graph['W'])), np.zeros((mesh.graph['H'], mesh.graph['W'])) - filtered_comp_far_cc, filtered_accomp_near_cc = set(), set() - for node in cur_accomp_near_cc: - near_depth_map[node[0], node[1]] = depth_dict['output'][node[0], node[1]] - raw_near_depth_map[node[0], node[1]] = node[2] - for node in cur_comp_far_cc: - four_nes = [xx for xx in [(node[0] - 1, node[1]), (node[0] + 1, node[1]), (node[0], node[1] - 1), (node[0], node[1] + 1)] \ - if 0 <= xx[0] < mesh.graph['H'] and 0 <= xx[1] < mesh.graph['W'] and \ - near_depth_map[xx[0], xx[1]] != 0 and \ - abs(near_depth_map[xx[0], xx[1]]) < abs(node[2])] - if len(four_nes) > 0: - filtered_comp_far_cc.add(node) - for ne in four_nes: - filtered_accomp_near_cc.add((ne[0], ne[1], -abs(raw_near_depth_map[ne[0], ne[1]]))) - cur_comp_far_cc, cur_accomp_near_cc = filtered_comp_far_cc, filtered_accomp_near_cc - mask_ccs[edge_id] |= set(cur_mask_cc) - context_ccs[edge_id] |= set(cur_context_cc) - accomp_extend_context_ccs[edge_id] |= set(cur_accomp_near_cc).intersection(cur_mask_cc) - extend_edge_ccs[edge_id] |= set(cur_accomp_near_cc).intersection(cur_mask_cc) - extend_context_ccs[edge_id] |= set(cur_comp_far_cc) - invalid_extend_edge_ccs[edge_id] |= set(cur_invalid_extend_edge_cc) - erode_size = [0] - for tmp in tmp_erode: - erode_size.append(len(tmp)) - if len(erode_size) > 1: - erode_size[-1] += erode_size[-2] - if inpaint_iter == 0: - tmp_width = config['depth_edge_dilate'] - else: - tmp_width = 0 - while float(erode_size[tmp_width]) / (erode_size[-1] + 1e-6) > 0.3: - tmp_width = tmp_width - 1 - try: - if tmp_width == 0: - erode_context_ccs[edge_id] = set([]) - else: - erode_context_ccs[edge_id] = set(reduce(lambda x, y : x + y, [] + tmp_erode[:tmp_width])) - except: - import pdb; pdb.set_trace() - erode_context_cc = copy.deepcopy(erode_context_ccs[edge_id]) - for erode_context_node in erode_context_cc: - if (inpaint_iter != 0 and (mesh_nodes[erode_context_node].get('inpaint_id') is None or - mesh_nodes[erode_context_node].get('inpaint_id') == 0)): - erode_context_ccs[edge_id].remove(erode_context_node) - else: - 
context_ccs[edge_id].remove(erode_context_node) - context_map = np.zeros((mesh.graph['H'], mesh.graph['W'])) - for context_node in context_ccs[edge_id]: - context_map[context_node[0], context_node[1]] = 1 - extend_context_ccs[edge_id] = extend_context_ccs[edge_id] - mask_ccs[edge_id] - accomp_extend_context_ccs[edge_id] - if inpaint_iter == 0: - all_ecnt_cc = set() - for ecnt_id, ecnt_cc in enumerate(extend_context_ccs): - constraint_context_ids = set() - constraint_context_cc = set() - constraint_erode_context_cc = set() - tmp_mask_cc = set() - accum_context_cc = None; accum_context_cc = [] - for ecnt_node in accomp_extend_context_ccs[ecnt_id]: - if edge_maps[ecnt_node[0], ecnt_node[1]] > -1: - constraint_context_ids.add(int(round(edge_maps[ecnt_node[0], ecnt_node[1]]))) - constraint_erode_context_cc = erode_context_ccs[ecnt_id] - for constraint_context_id in constraint_context_ids: - constraint_context_cc = constraint_context_cc | context_ccs[constraint_context_id] | erode_context_ccs[constraint_context_id] - constraint_erode_context_cc = constraint_erode_context_cc | erode_context_ccs[constraint_context_id] - for i in range(background_thickness): - if i == 0: - tmp_context_nodes = copy.deepcopy(ecnt_cc) - tmp_invalid_context_nodes = copy.deepcopy(invalid_extend_edge_ccs[ecnt_id]) - tmp_mask_nodes = copy.deepcopy(accomp_extend_context_ccs[ecnt_id]) - tmp_context_map = np.zeros((mesh.graph['H'], mesh.graph['W'])).astype(np.bool) - tmp_mask_map = np.zeros((mesh.graph['H'], mesh.graph['W'])).astype(np.bool) - tmp_invalid_context_map = np.zeros((mesh.graph['H'], mesh.graph['W'])).astype(np.bool) - for node in tmp_mask_nodes: - tmp_mask_map[node[0], node[1]] = True - for node in context_ccs[ecnt_id]: - tmp_context_map[node[0], node[1]] = True - for node in erode_context_ccs[ecnt_id]: - tmp_context_map[node[0], node[1]] = True - for node in extend_context_ccs[ecnt_id]: - tmp_context_map[node[0], node[1]] = True - for node in invalid_extend_edge_ccs[ecnt_id]: - tmp_invalid_context_map[node[0], node[1]] = True - init_invalid_context_map = tmp_invalid_context_map.copy() - init_context_map = tmp - if (tmp_mask_map.astype(np.uint8) * tmp_context_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - if vis_edge_id is not None and ecnt_id == vis_edge_id: - f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True) - ax1.imshow(tmp_context_map * 1); ax2.imshow(init_invalid_context_map * 1 + tmp_context_map * 2) - plt.show() - import pdb; pdb.set_trace() - else: - tmp_context_nodes = new_tmp_context_nodes - new_tmp_context_nodes = None - tmp_mask_nodes = new_tmp_mask_nodes - new_tmp_mask_nodes = None - tmp_invalid_context_nodes = new_tmp_invalid_context_nodes - new_tmp_invalid_context_nodes = None - new_tmp_context_nodes = None - new_tmp_context_nodes = [] - new_tmp_invalid_context_nodes = None - new_tmp_invalid_context_nodes = [] - new_tmp_mask_nodes = set([]) - for node in tmp_context_nodes: - for ne in mesh.neighbors(node): - if ne in constraint_context_cc and \ - bool(tmp_mask_map[ne[0], ne[1]]) is False and \ - bool(tmp_context_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_context_nodes.append(ne) - tmp_context_map[ne[0], ne[1]] = True - accum_context_cc.extend(new_tmp_context_nodes) - for node in tmp_invalid_context_nodes: - for ne in mesh.neighbors(node): - if bool(tmp_mask_map[ne[0], ne[1]]) is False and \ - bool(tmp_context_map[ne[0], ne[1]]) is False and \ - bool(tmp_invalid_context_map[ne[0], ne[1]]) is False and \ - 
bool(forbidden_map[ne[0], ne[1]]) is True: - tmp_invalid_context_map[ne[0], ne[1]] = True - new_tmp_invalid_context_nodes.append(ne) - for node in tmp_mask_nodes: - for ne in mesh.neighbors(node): - if bool(tmp_mask_map[ne[0], ne[1]]) is False and \ - bool(tmp_context_map[ne[0], ne[1]]) is False and \ - bool(tmp_invalid_context_map[ne[0], ne[1]]) is False and \ - bool(forbidden_map[ne[0], ne[1]]) is True: - new_tmp_mask_nodes.add(ne) - tmp_mask_map[ne[0], ne[1]] = True - init_invalid_context_map[tmp_context_map] = False - _, tmp_label_map = cv2.connectedComponents((init_invalid_context_map | tmp_context_map).astype(np.uint8), connectivity=8) - tmp_label_ids = set(np.unique(tmp_label_map[init_invalid_context_map])) - if (tmp_mask_map.astype(np.uint8) * tmp_context_map.astype(np.uint8)).max() > 0: - import pdb; pdb.set_trace() - if vis_edge_id is not None and ecnt_id == vis_edge_id: - f, ((ax1, ax2)) = plt.subplots(1, 2, sharex=True, sharey=True) - ax1.imshow(tmp_label_map); ax2.imshow(init_invalid_context_map * 1 + tmp_context_map * 2) - plt.show() - import pdb; pdb.set_trace() - extend_context_ccs[ecnt_id] |= set(accum_context_cc) - extend_context_ccs[ecnt_id] = extend_context_ccs[ecnt_id] - mask_ccs[ecnt_id] - extend_erode_context_ccs[ecnt_id] = extend_context_ccs[ecnt_id] & constraint_erode_context_cc - extend_context_ccs[ecnt_id] = extend_context_ccs[ecnt_id] - extend_erode_context_ccs[ecnt_id] - erode_context_ccs[ecnt_id] - tmp_context_cc = context_ccs[ecnt_id] - extend_erode_context_ccs[ecnt_id] - erode_context_ccs[ecnt_id] - if len(tmp_context_cc) > 0: - context_ccs[ecnt_id] = tmp_context_cc - tmp_mask_cc = tmp_mask_cc - context_ccs[ecnt_id] - erode_context_ccs[ecnt_id] - mask_ccs[ecnt_id] = mask_ccs[ecnt_id] | tmp_mask_cc - - return context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, invalid_extend_edge_ccs, edge_maps, extend_context_ccs, extend_edge_ccs, extend_erode_context_ccs - -def DL_inpaint_edge(mesh, - info_on_pix, - config, - image, - depth, - context_ccs, - erode_context_ccs, - extend_context_ccs, - extend_erode_context_ccs, - mask_ccs, - broken_mask_ccs, - edge_ccs, - extend_edge_ccs, - init_mask_connect, - edge_maps, - rgb_model=None, - depth_edge_model=None, - depth_edge_model_init=None, - depth_feat_model=None, - specific_edge_id=-1, - specific_edge_loc=None, - inpaint_iter=0): - - if isinstance(config["gpu_ids"], int) and (config["gpu_ids"] >= 0): - device = config["gpu_ids"] - else: - device = "cpu" - - edge_map = np.zeros_like(depth) - new_edge_ccs = [set() for _ in range(len(edge_ccs))] - edge_maps_with_id = edge_maps - edge_condition = lambda x, m: m.nodes[x].get('far') is not None and len(m.nodes[x].get('far')) > 0 - edge_map = get_map_from_ccs(edge_ccs, mesh.graph['H'], mesh.graph['W'], mesh, edge_condition) - np_depth, np_image = depth.copy(), image.copy() - image_c = image.shape[-1] - image = torch.FloatTensor(image.transpose(2, 0, 1)).unsqueeze(0).to(device) - if depth.ndim < 3: - depth = depth[..., None] - depth = torch.FloatTensor(depth.transpose(2, 0, 1)).unsqueeze(0).to(device) - mesh.graph['max_edge_id'] = len(edge_ccs) - connnect_points_ccs = [set() for _ in range(len(edge_ccs))] - gp_time, tmp_mesh_time, bilateral_time = 0, 0, 0 - edges_infos = dict() - edges_in_mask = [set() for _ in range(len(edge_ccs))] - tmp_specific_edge_id = [] - for edge_id, (context_cc, mask_cc, erode_context_cc, extend_context_cc, edge_cc) in enumerate(zip(context_ccs, mask_ccs, erode_context_ccs, extend_context_ccs, edge_ccs)): - if len(specific_edge_id) > 0: 
- if edge_id not in specific_edge_id: - continue - if len(context_cc) < 1 or len(mask_cc) < 1: - continue - edge_dict = get_edge_from_nodes(context_cc | extend_context_cc, erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, edge_cc, extend_edge_ccs[edge_id], - mesh.graph['H'], mesh.graph['W'], mesh) - edge_dict['edge'], end_depth_maps, _ = \ - filter_irrelevant_edge_new(edge_dict['self_edge'], edge_dict['comp_edge'], - edge_map, - edge_maps_with_id, - edge_id, - edge_dict['context'], - edge_dict['depth'], mesh, context_cc | erode_context_cc | extend_context_cc | extend_erode_context_ccs[edge_id], spdb=False) - if specific_edge_loc is not None and \ - (specific_edge_loc is not None and edge_dict['mask'][specific_edge_loc[0], specific_edge_loc[1]] == 0): - continue - mask_size = get_valid_size(edge_dict['mask']) - mask_size = dilate_valid_size(mask_size, edge_dict['mask'], dilate=[20, 20]) - context_size = get_valid_size(edge_dict['context']) - context_size = dilate_valid_size(context_size, edge_dict['context'], dilate=[20, 20]) - union_size = size_operation(mask_size, context_size, operation='+') - patch_edge_dict = dict() - patch_edge_dict['mask'], patch_edge_dict['context'], patch_edge_dict['rgb'], \ - patch_edge_dict['disp'], patch_edge_dict['edge'] = \ - crop_maps_by_size(union_size, edge_dict['mask'], edge_dict['context'], - edge_dict['rgb'], edge_dict['disp'], edge_dict['edge']) - x_anchor, y_anchor = [union_size['x_min'], union_size['x_max']], [union_size['y_min'], union_size['y_max']] - tensor_edge_dict = convert2tensor(patch_edge_dict) - input_edge_feat = torch.cat((tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - 1 - tensor_edge_dict['context'], - tensor_edge_dict['mask']), dim=1) - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - with torch.no_grad(): - depth_edge_output = depth_edge_model.forward_3P(tensor_edge_dict['mask'], - tensor_edge_dict['context'], - tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - unit_length=128, - cuda=device) - depth_edge_output = depth_edge_output.cpu() - tensor_edge_dict['output'] = (depth_edge_output> config['ext_edge_threshold']).float() * tensor_edge_dict['mask'] + tensor_edge_dict['edge'] - else: - tensor_edge_dict['output'] = tensor_edge_dict['edge'] - depth_edge_output = tensor_edge_dict['edge'] + 0 - patch_edge_dict['output'] = tensor_edge_dict['output'].squeeze().data.cpu().numpy() - edge_dict['output'] = np.zeros((mesh.graph['H'], mesh.graph['W'])) - edge_dict['output'][union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - patch_edge_dict['output'] - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - if ((depth_edge_output> config['ext_edge_threshold']).float() * tensor_edge_dict['mask']).max() > 0: - try: - edge_dict['fpath_map'], edge_dict['npath_map'], break_flag, npaths, fpaths, invalid_edge_id = \ - clean_far_edge_new(edge_dict['output'], end_depth_maps, edge_dict['mask'], edge_dict['context'], mesh, info_on_pix, edge_dict['self_edge'], inpaint_iter, config) - except: - import pdb; pdb.set_trace() - pre_npath_map = edge_dict['npath_map'].copy() - if config.get('repeat_inpaint_edge') is True: - for _ in range(2): - tmp_input_edge = ((edge_dict['npath_map'] > -1) + edge_dict['edge']).clip(0, 1) - patch_tmp_input_edge = crop_maps_by_size(union_size, tmp_input_edge)[0] - tensor_input_edge = torch.FloatTensor(patch_tmp_input_edge)[None, 
None, ...] - depth_edge_output = depth_edge_model.forward_3P(tensor_edge_dict['mask'], - tensor_edge_dict['context'], - tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_input_edge, - unit_length=128, - cuda=device) - depth_edge_output = depth_edge_output.cpu() - depth_edge_output = (depth_edge_output> config['ext_edge_threshold']).float() * tensor_edge_dict['mask'] + tensor_edge_dict['edge'] - depth_edge_output = depth_edge_output.squeeze().data.cpu().numpy() - full_depth_edge_output = np.zeros((mesh.graph['H'], mesh.graph['W'])) - full_depth_edge_output[union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - depth_edge_output - edge_dict['fpath_map'], edge_dict['npath_map'], break_flag, npaths, fpaths, invalid_edge_id = \ - clean_far_edge_new(full_depth_edge_output, end_depth_maps, edge_dict['mask'], edge_dict['context'], mesh, info_on_pix, edge_dict['self_edge'], inpaint_iter, config) - for nid in npaths.keys(): - npath, fpath = npaths[nid], fpaths[nid] - start_mx, start_my, end_mx, end_my = -1, -1, -1, -1 - if end_depth_maps[npath[0][0], npath[0][1]] != 0: - start_mx, start_my = npath[0][0], npath[0][1] - if end_depth_maps[npath[-1][0], npath[-1][1]] != 0: - end_mx, end_my = npath[-1][0], npath[-1][1] - if start_mx == -1: - import pdb; pdb.set_trace() - valid_end_pt = () if end_mx == -1 else (end_mx, end_my, info_on_pix[(end_mx, end_my)][0]['depth']) - new_edge_info = dict(fpath=fpath, - npath=npath, - cont_end_pts=valid_end_pt, - mask_id=edge_id, - comp_edge_id=nid, - depth=end_depth_maps[start_mx, start_my]) - if edges_infos.get((start_mx, start_my)) is None: - edges_infos[(start_mx, start_my)] = [] - edges_infos[(start_mx, start_my)].append(new_edge_info) - edges_in_mask[edge_id].add((start_mx, start_my)) - if len(valid_end_pt) > 0: - new_edge_info = dict(fpath=fpath[::-1], - npath=npath[::-1], - cont_end_pts=(start_mx, start_my, info_on_pix[(start_mx, start_my)][0]['depth']), - mask_id=edge_id, - comp_edge_id=nid, - depth=end_depth_maps[end_mx, end_my]) - if edges_infos.get((end_mx, end_my)) is None: - edges_infos[(end_mx, end_my)] = [] - edges_infos[(end_mx, end_my)].append(new_edge_info) - edges_in_mask[edge_id].add((end_mx, end_my)) - for edge_id, (context_cc, mask_cc, erode_context_cc, extend_context_cc, edge_cc) in enumerate(zip(context_ccs, mask_ccs, erode_context_ccs, extend_context_ccs, edge_ccs)): - if len(specific_edge_id) > 0: - if edge_id not in specific_edge_id: - continue - if len(context_cc) < 1 or len(mask_cc) < 1: - continue - edge_dict = get_edge_from_nodes(context_cc | extend_context_cc, erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, edge_cc, extend_edge_ccs[edge_id], - mesh.graph['H'], mesh.graph['W'], mesh) - if specific_edge_loc is not None and \ - (specific_edge_loc is not None and edge_dict['mask'][specific_edge_loc[0], specific_edge_loc[1]] == 0): - continue - else: - tmp_specific_edge_id.append(edge_id) - edge_dict['edge'], end_depth_maps, _ = \ - filter_irrelevant_edge_new(edge_dict['self_edge'], edge_dict['comp_edge'], - edge_map, - edge_maps_with_id, - edge_id, - edge_dict['context'], - edge_dict['depth'], mesh, context_cc | erode_context_cc | extend_context_cc | extend_erode_context_ccs[edge_id], spdb=False) - discard_map = np.zeros_like(edge_dict['edge']) - mask_size = get_valid_size(edge_dict['mask']) - mask_size = dilate_valid_size(mask_size, edge_dict['mask'], dilate=[20, 20]) - context_size = get_valid_size(edge_dict['context']) - context_size = dilate_valid_size(context_size, 
edge_dict['context'], dilate=[20, 20]) - union_size = size_operation(mask_size, context_size, operation='+') - patch_edge_dict = dict() - patch_edge_dict['mask'], patch_edge_dict['context'], patch_edge_dict['rgb'], \ - patch_edge_dict['disp'], patch_edge_dict['edge'] = \ - crop_maps_by_size(union_size, edge_dict['mask'], edge_dict['context'], - edge_dict['rgb'], edge_dict['disp'], edge_dict['edge']) - x_anchor, y_anchor = [union_size['x_min'], union_size['x_max']], [union_size['y_min'], union_size['y_max']] - tensor_edge_dict = convert2tensor(patch_edge_dict) - input_edge_feat = torch.cat((tensor_edge_dict['rgb'], - tensor_edge_dict['disp'], - tensor_edge_dict['edge'], - 1 - tensor_edge_dict['context'], - tensor_edge_dict['mask']), dim=1) - edge_dict['output'] = edge_dict['edge'].copy() - - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) and inpaint_iter == 0: - edge_dict['fpath_map'], edge_dict['npath_map'] = edge_dict['fpath_map'] * 0 - 1, edge_dict['npath_map'] * 0 - 1 - end_pts = edges_in_mask[edge_id] - for end_pt in end_pts: - cur_edge_infos = edges_infos[(end_pt[0], end_pt[1])] - cur_info = [xx for xx in cur_edge_infos if xx['mask_id'] == edge_id][0] - other_infos = [xx for xx in cur_edge_infos if xx['mask_id'] != edge_id and len(xx['cont_end_pts']) > 0] - if len(cur_info['cont_end_pts']) > 0 or (len(cur_info['cont_end_pts']) == 0 and len(other_infos) == 0): - for fnode in cur_info['fpath']: - edge_dict['fpath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - for fnode in cur_info['npath']: - edge_dict['npath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - fnmap = edge_dict['fpath_map'] * 1 - fnmap[edge_dict['npath_map'] != -1] = edge_dict['npath_map'][edge_dict['npath_map'] != -1] - for end_pt in end_pts: - cur_edge_infos = edges_infos[(end_pt[0], end_pt[1])] - cur_info = [xx for xx in cur_edge_infos if xx['mask_id'] == edge_id][0] - cur_depth = cur_info['depth'] - other_infos = [xx for xx in cur_edge_infos if xx['mask_id'] != edge_id and len(xx['cont_end_pts']) > 0] - comp_edge_id = cur_info['comp_edge_id'] - if len(cur_info['cont_end_pts']) == 0 and len(other_infos) > 0: - other_infos = sorted(other_infos, key=lambda aa: abs(abs(aa['cont_end_pts'][2]) - abs(cur_depth))) - for other_info in other_infos: - tmp_fmap, tmp_nmap = np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1, np.zeros((mesh.graph['H'], mesh.graph['W'])) - 1 - for fnode in other_info['fpath']: - if fnmap[fnode[0], fnode[1]] != -1: - tmp_fmap = tmp_fmap * 0 - 1 - break - else: - tmp_fmap[fnode[0], fnode[1]] = comp_edge_id - if fnmap[fnode[0], fnode[1]] != -1: - continue - for fnode in other_info['npath']: - if fnmap[fnode[0], fnode[1]] != -1: - tmp_nmap = tmp_nmap * 0 - 1 - break - else: - tmp_nmap[fnode[0], fnode[1]] = comp_edge_id - if fnmap[fnode[0], fnode[1]] != -1: - continue - break - if min(tmp_fmap.max(), tmp_nmap.max()) != -1: - edge_dict['fpath_map'] = tmp_fmap - edge_dict['fpath_map'][edge_dict['valid_area'] == 0] = -1 - edge_dict['npath_map'] = tmp_nmap - edge_dict['npath_map'][edge_dict['valid_area'] == 0] = -1 - discard_map = ((tmp_nmap != -1).astype(np.uint8) + (tmp_fmap != -1).astype(np.uint8)) * edge_dict['mask'] - else: - for fnode in cur_info['fpath']: - edge_dict['fpath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - for fnode in cur_info['npath']: - edge_dict['npath_map'][fnode[0], fnode[1]] = cur_info['comp_edge_id'] - if edge_dict['npath_map'].min() == 0 or edge_dict['fpath_map'].min() == 0: - import pdb; pdb.set_trace() - edge_dict['output'] = 
(edge_dict['npath_map'] > -1) * edge_dict['mask'] + edge_dict['context'] * edge_dict['edge'] - mesh, _, _, _ = create_placeholder(edge_dict['context'], edge_dict['mask'], - edge_dict['depth'], edge_dict['fpath_map'], - edge_dict['npath_map'], mesh, inpaint_iter, - edge_ccs, - extend_edge_ccs[edge_id], - edge_maps_with_id, - edge_id) - - dxs, dys = np.where(discard_map != 0) - for dx, dy in zip(dxs, dys): - mesh.nodes[(dx, dy)]['inpaint_twice'] = False - depth_dict = depth_inpainting(context_cc, extend_context_cc, erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, mesh, config, union_size, depth_feat_model, edge_dict['output']) - refine_depth_output = depth_dict['output']*depth_dict['mask'] - for near_id in np.unique(edge_dict['npath_map'])[1:]: - refine_depth_output = refine_depth_around_edge(refine_depth_output.copy(), - (edge_dict['fpath_map'] == near_id).astype(np.uint8) * edge_dict['mask'], - (edge_dict['fpath_map'] == near_id).astype(np.uint8), - (edge_dict['npath_map'] == near_id).astype(np.uint8) * edge_dict['mask'], - depth_dict['mask'].copy(), - depth_dict['output'] * depth_dict['context'], - config) - depth_dict['output'][depth_dict['mask'] > 0] = refine_depth_output[depth_dict['mask'] > 0] - rgb_dict = get_rgb_from_nodes(context_cc | extend_context_cc, - erode_context_cc | extend_erode_context_ccs[edge_id], mask_cc, mesh.graph['H'], mesh.graph['W'], mesh) - if np.all(rgb_dict['mask'] == edge_dict['mask']) is False: - import pdb; pdb.set_trace() - rgb_dict['edge'] = edge_dict['output'] - patch_rgb_dict = dict() - patch_rgb_dict['mask'], patch_rgb_dict['context'], patch_rgb_dict['rgb'], \ - patch_rgb_dict['edge'] = crop_maps_by_size(union_size, rgb_dict['mask'], - rgb_dict['context'], rgb_dict['rgb'], - rgb_dict['edge']) - tensor_rgb_dict = convert2tensor(patch_rgb_dict) - resize_rgb_dict = {k: v.clone() for k, v in tensor_rgb_dict.items()} - max_hw = np.array([*patch_rgb_dict['mask'].shape[-2:]]).max() - init_frac = config['largest_size'] / (np.array([*patch_rgb_dict['mask'].shape[-2:]]).prod() ** 0.5) - resize_hw = [patch_rgb_dict['mask'].shape[-2] * init_frac, patch_rgb_dict['mask'].shape[-1] * init_frac] - resize_max_hw = max(resize_hw) - frac = (np.floor(resize_max_hw / 128.) * 128.) 
/ max_hw - if frac < 1: - resize_mark = torch.nn.functional.interpolate(torch.cat((resize_rgb_dict['mask'], - resize_rgb_dict['context']), - dim=1), - scale_factor=frac, - mode='area') - resize_rgb_dict['mask'] = (resize_mark[:, 0:1] > 0).float() - resize_rgb_dict['context'] = (resize_mark[:, 1:2] == 1).float() - resize_rgb_dict['context'][resize_rgb_dict['mask'] > 0] = 0 - resize_rgb_dict['rgb'] = torch.nn.functional.interpolate(resize_rgb_dict['rgb'], - scale_factor=frac, - mode='area') - resize_rgb_dict['rgb'] = resize_rgb_dict['rgb'] * resize_rgb_dict['context'] - resize_rgb_dict['edge'] = torch.nn.functional.interpolate(resize_rgb_dict['edge'], - scale_factor=frac, - mode='area') - resize_rgb_dict['edge'] = (resize_rgb_dict['edge'] > 0).float() * 0 - resize_rgb_dict['edge'] = resize_rgb_dict['edge'] * (resize_rgb_dict['context'] + resize_rgb_dict['mask']) - rgb_input_feat = torch.cat((resize_rgb_dict['rgb'], resize_rgb_dict['edge']), dim=1) - rgb_input_feat[:, 3] = 1 - rgb_input_feat[:, 3] - resize_mask = open_small_mask(resize_rgb_dict['mask'], resize_rgb_dict['context'], 3, 41) - specified_hole = resize_mask - with torch.no_grad(): - rgb_output = rgb_model.forward_3P(specified_hole, - resize_rgb_dict['context'], - resize_rgb_dict['rgb'], - resize_rgb_dict['edge'], - unit_length=128, - cuda=device) - rgb_output = rgb_output.cpu() - if config.get('gray_image') is True: - rgb_output = rgb_output.mean(1, keepdim=True).repeat((1,3,1,1)) - rgb_output = rgb_output.cpu() - resize_rgb_dict['output'] = rgb_output * resize_rgb_dict['mask'] + resize_rgb_dict['rgb'] - tensor_rgb_dict['output'] = resize_rgb_dict['output'] - if frac < 1: - tensor_rgb_dict['output'] = torch.nn.functional.interpolate(tensor_rgb_dict['output'], - size=tensor_rgb_dict['mask'].shape[-2:], - mode='bicubic') - tensor_rgb_dict['output'] = tensor_rgb_dict['output'] * \ - tensor_rgb_dict['mask'] + (tensor_rgb_dict['rgb'] * tensor_rgb_dict['context']) - patch_rgb_dict['output'] = tensor_rgb_dict['output'].data.cpu().numpy().squeeze().transpose(1,2,0) - rgb_dict['output'] = np.zeros((mesh.graph['H'], mesh.graph['W'], 3)) - rgb_dict['output'][union_size['x_min']:union_size['x_max'], union_size['y_min']:union_size['y_max']] = \ - patch_rgb_dict['output'] - - if require_depth_edge(patch_edge_dict['edge'], patch_edge_dict['mask']) or inpaint_iter > 0: - edge_occlusion = True - else: - edge_occlusion = False - for node in erode_context_cc: - if rgb_dict['mask'][node[0], node[1]] > 0: - for info in info_on_pix[(node[0], node[1])]: - if abs(info['depth']) == abs(node[2]): - info['update_color'] = (rgb_dict['output'][node[0], node[1]] * 255).astype(np.uint8) - if frac < 1.: - depth_edge_dilate_2_color_flag = False - else: - depth_edge_dilate_2_color_flag = True - hxs, hys = np.where((rgb_dict['mask'] > 0) & (rgb_dict['erode'] == 0)) - for hx, hy in zip(hxs, hys): - real_depth = None - if abs(depth_dict['output'][hx, hy]) <= abs(np_depth[hx, hy]): - depth_dict['output'][hx, hy] = np_depth[hx, hy] + 0.01 - node = (hx, hy, -depth_dict['output'][hx, hy]) - if info_on_pix.get((node[0], node[1])) is not None: - for info in info_on_pix.get((node[0], node[1])): - if info.get('inpaint_id') is None or abs(info['inpaint_id'] < mesh.nodes[(hx, hy)]['inpaint_id']): - pre_depth = info['depth'] if info.get('real_depth') is None else info['real_depth'] - if abs(node[2]) < abs(pre_depth): - node = (node[0], node[1], -(abs(pre_depth) + 0.001)) - if mesh.has_node(node): - real_depth = node[2] - while True: - if mesh.has_node(node): - node = (node[0], 
node[1], -(abs(node[2]) + 0.001)) - else: - break - if real_depth == node[2]: - real_depth = None - cur_disp = 1./node[2] - if not(mesh.has_node(node)): - if not mesh.has_node((node[0], node[1])): - print("2D node not found.") - import pdb; pdb.set_trace() - if inpaint_iter == 1: - paint = (rgb_dict['output'][hx, hy] * 255).astype(np.uint8) - else: - paint = (rgb_dict['output'][hx, hy] * 255).astype(np.uint8) - ndict = dict(color=paint, - synthesis=True, - disp=cur_disp, - cc_id=set([edge_id]), - overlap_number=1.0, - refine_depth=False, - edge_occlusion=edge_occlusion, - depth_edge_dilate_2_color_flag=depth_edge_dilate_2_color_flag, - real_depth=real_depth) - mesh, _, _ = refresh_node((node[0], node[1]), mesh.nodes[(node[0], node[1])], node, ndict, mesh, stime=True) - if inpaint_iter == 0 and mesh.degree(node) < 4: - connnect_points_ccs[edge_id].add(node) - if info_on_pix.get((hx, hy)) is None: - info_on_pix[(hx, hy)] = [] - new_info = {'depth':node[2], - 'color': paint, - 'synthesis':True, - 'disp':cur_disp, - 'cc_id':set([edge_id]), - 'inpaint_id':inpaint_iter + 1, - 'edge_occlusion':edge_occlusion, - 'overlap_number':1.0, - 'real_depth': real_depth} - info_on_pix[(hx, hy)].append(new_info) - specific_edge_id = tmp_specific_edge_id - for erode_id, erode_context_cc in enumerate(erode_context_ccs): - if len(specific_edge_id) > 0 and erode_id not in specific_edge_id: - continue - for erode_node in erode_context_cc: - for info in info_on_pix[(erode_node[0], erode_node[1])]: - if info['depth'] == erode_node[2]: - info['color'] = info['update_color'] - mesh.nodes[erode_node]['color'] = info['update_color'] - np_image[(erode_node[0], erode_node[1])] = info['update_color'] - new_edge_ccs = [set() for _ in range(mesh.graph['max_edge_id'] + 1)] - for node in mesh.nodes: - if len(node) == 2: - mesh.remove_node(node) - continue - if mesh.nodes[node].get('edge_id') is not None and mesh.nodes[node].get('inpaint_id') == inpaint_iter + 1: - if mesh.nodes[node].get('inpaint_twice') is False: - continue - try: - new_edge_ccs[mesh.nodes[node].get('edge_id')].add(node) - except: - import pdb; pdb.set_trace() - specific_mask_nodes = None - if inpaint_iter == 0: - mesh, info_on_pix = refine_color_around_edge(mesh, info_on_pix, new_edge_ccs, config, False) - - return mesh, info_on_pix, specific_mask_nodes, new_edge_ccs, connnect_points_ccs, np_image - - -def write_ply(image, - depth, - int_mtx, - ply_name, - config, - rgb_model, - depth_edge_model, - depth_edge_model_init, - depth_feat_model): - depth = depth.astype(np.float64) - input_mesh, xy2depth, image, depth = create_mesh(depth, image, int_mtx, config) - - H, W = input_mesh.graph['H'], input_mesh.graph['W'] - input_mesh = tear_edges(input_mesh, config['depth_threshold'], xy2depth) - input_mesh, info_on_pix = generate_init_node(input_mesh, config, min_node_in_cc=200) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=False) - edge_canvas = np.zeros((H, W)) - 1 - - input_mesh, info_on_pix, depth = reassign_floating_island(input_mesh, info_on_pix, image, depth) - input_mesh = update_status(input_mesh, info_on_pix) - specific_edge_id = [] - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - pre_depth = depth.copy() - input_mesh, info_on_pix, edge_mesh, depth, aft_mark = remove_dangling(input_mesh, edge_ccs, edge_mesh, info_on_pix, image, depth, config) - - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, 
edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - edge_canvas = np.zeros((H, W)) - 1 - - mesh, info_on_pix, depth = fill_missing_node(input_mesh, info_on_pix, image, depth) - if config['extrapolate_border'] is True: - pre_depth = depth.copy() - input_mesh, info_on_pix, depth = refresh_bord_depth(input_mesh, info_on_pix, image, depth) - input_mesh = remove_node_feat(input_mesh, 'edge_id') - aft_depth = depth.copy() - input_mesh, info_on_pix, depth, image = enlarge_border(input_mesh, info_on_pix, depth, image, config) - noext_H, noext_W = H, W - H, W = image.shape[:2] - input_mesh, info_on_pix = fill_dummy_bord(input_mesh, info_on_pix, image, depth, config) - edge_ccs, input_mesh, edge_mesh = \ - group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - input_mesh = combine_end_node(input_mesh, edge_mesh, edge_ccs, depth) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = \ - group_edges(input_mesh, config, image, remove_conflict_ordinal=True, spdb=False) - input_mesh = remove_redundant_edge(input_mesh, edge_mesh, edge_ccs, info_on_pix, config, redundant_number=config['redundant_number'], spdb=False) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - input_mesh = combine_end_node(input_mesh, edge_mesh, edge_ccs, depth) - input_mesh = remove_redundant_edge(input_mesh, edge_mesh, edge_ccs, info_on_pix, config, redundant_number=config['redundant_number'], invalid=True, spdb=False) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - input_mesh = combine_end_node(input_mesh, edge_mesh, edge_ccs, depth) - input_mesh, depth, info_on_pix = update_status(input_mesh, info_on_pix, depth) - edge_ccs, input_mesh, edge_mesh = group_edges(input_mesh, config, image, remove_conflict_ordinal=True) - edge_condition = lambda x, m: m.nodes[x].get('far') is not None and len(m.nodes[x].get('far')) > 0 - edge_map = get_map_from_ccs(edge_ccs, input_mesh.graph['H'], input_mesh.graph['W'], input_mesh, edge_condition) - other_edge_with_id = get_map_from_ccs(edge_ccs, input_mesh.graph['H'], input_mesh.graph['W'], real_id=True) - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="up") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="left") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="down") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="right") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="right-up") - info_on_pix, input_mesh, image, depth, edge_ccs = 
extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="right-down") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="left-up") - info_on_pix, input_mesh, image, depth, edge_ccs = extrapolate(input_mesh, info_on_pix, image, depth, other_edge_with_id, edge_map, edge_ccs, - depth_edge_model, depth_feat_model, rgb_model, config, direc="left-down") - specific_edge_loc = None - specific_edge_id = [] - vis_edge_id = None - context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, \ - init_mask_connect, edge_maps, extend_context_ccs, extend_edge_ccs, extend_erode_context_ccs = \ - context_and_holes(input_mesh, - edge_ccs, - config, - specific_edge_id, - specific_edge_loc, - depth_feat_model, - inpaint_iter=0, - vis_edge_id=vis_edge_id) - edge_canvas = np.zeros((H, W)) - mask = np.zeros((H, W)) - context = np.zeros((H, W)) - vis_edge_ccs = filter_edge(input_mesh, edge_ccs, config) - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - 1 - specific_edge_loc = None - FG_edge_maps = edge_maps.copy() - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - 1 - # for cc_id, cc in enumerate(edge_ccs): - # for node in cc: - # edge_canvas[node[0], node[1]] = cc_id - # f, ((ax0, ax1, ax2)) = plt.subplots(1, 3, sharex=True, sharey=True); ax0.imshow(1./depth); ax1.imshow(image); ax2.imshow(edge_canvas); plt.show() - input_mesh, info_on_pix, specific_edge_nodes, new_edge_ccs, connect_points_ccs, image = DL_inpaint_edge(input_mesh, - info_on_pix, - config, - image, - depth, - context_ccs, - erode_context_ccs, - extend_context_ccs, - extend_erode_context_ccs, - mask_ccs, - broken_mask_ccs, - edge_ccs, - extend_edge_ccs, - init_mask_connect, - edge_maps, - rgb_model, - depth_edge_model, - depth_edge_model_init, - depth_feat_model, - specific_edge_id, - specific_edge_loc, - inpaint_iter=0) - specific_edge_id = [] - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - connect_points_ccs = [set() for _ in connect_points_ccs] - context_ccs, mask_ccs, broken_mask_ccs, edge_ccs, erode_context_ccs, init_mask_connect, \ - edge_maps, extend_context_ccs, extend_edge_ccs, extend_erode_context_ccs = \ - context_and_holes(input_mesh, new_edge_ccs, config, specific_edge_id, specific_edge_loc, depth_feat_model, connect_points_ccs, inpaint_iter=1) - mask_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - context_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - erode_context_ccs_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - # edge_canvas = np.zeros((input_mesh.graph['H'], input_mesh.graph['W'])) - 1 - # for cc_id, cc in enumerate(edge_ccs): - # for node in cc: - # edge_canvas[node[0], node[1]] = cc_id - specific_edge_id = [] - input_mesh, info_on_pix, specific_edge_nodes, new_edge_ccs, _, image = DL_inpaint_edge(input_mesh, - info_on_pix, - config, - image, - depth, - context_ccs, - erode_context_ccs, - extend_context_ccs, - extend_erode_context_ccs, - mask_ccs, - broken_mask_ccs, - edge_ccs, - extend_edge_ccs, - init_mask_connect, - edge_maps, - rgb_model, - depth_edge_model, - depth_edge_model_init, - depth_feat_model, - specific_edge_id, - specific_edge_loc, - inpaint_iter=1) - 
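# both inpainting passes are finished; the code below flattens info_on_pix into vertex/colour/face lists and, if config['save_ply'] is set, writes them out as an ASCII PLY file - 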
vertex_id = 0 - input_mesh.graph['H'], input_mesh.graph['W'] = input_mesh.graph['noext_H'], input_mesh.graph['noext_W'] - background_canvas = np.zeros((input_mesh.graph['H'], - input_mesh.graph['W'], - 3)) - ply_flag = config.get('save_ply') - if ply_flag is True: - node_str_list = [] - else: - node_str_color = [] - node_str_point = [] - out_fmt = lambda x, x_flag: str(x) if x_flag is True else x - point_time = 0 - hlight_time = 0 - cur_id_time = 0 - node_str_time = 0 - generate_face_time = 0 - point_list = [] - k_00, k_02, k_11, k_12 = \ - input_mesh.graph['cam_param_pix_inv'][0, 0], input_mesh.graph['cam_param_pix_inv'][0, 2], \ - input_mesh.graph['cam_param_pix_inv'][1, 1], input_mesh.graph['cam_param_pix_inv'][1, 2] - w_offset = input_mesh.graph['woffset'] - h_offset = input_mesh.graph['hoffset'] - for pix_xy, pix_list in info_on_pix.items(): - for pix_idx, pix_info in enumerate(pix_list): - pix_depth = pix_info['depth'] if pix_info.get('real_depth') is None else pix_info['real_depth'] - str_pt = [out_fmt(x, ply_flag) for x in reproject_3d_int_detail(pix_xy[0], pix_xy[1], pix_depth, - k_00, k_02, k_11, k_12, w_offset, h_offset)] - if input_mesh.has_node((pix_xy[0], pix_xy[1], pix_info['depth'])) is False: - return False - continue - if pix_info.get('overlap_number') is not None: - str_color = [out_fmt(x, ply_flag) for x in (pix_info['color']/pix_info['overlap_number']).astype(np.uint8).tolist()] - else: - str_color = [out_fmt(x, ply_flag) for x in pix_info['color'].tolist()] - if pix_info.get('edge_occlusion') is True: - str_color.append(out_fmt(4, ply_flag)) - else: - if pix_info.get('inpaint_id') is None: - str_color.append(out_fmt(1, ply_flag)) - else: - str_color.append(out_fmt(pix_info.get('inpaint_id') + 1, ply_flag)) - if pix_info.get('modified_border') is True or pix_info.get('ext_pixel') is True: - if len(str_color) == 4: - str_color[-1] = out_fmt(5, ply_flag) - else: - str_color.append(out_fmt(5, ply_flag)) - pix_info['cur_id'] = vertex_id - input_mesh.nodes[(pix_xy[0], pix_xy[1], pix_info['depth'])]['cur_id'] = out_fmt(vertex_id, ply_flag) - vertex_id += 1 - if ply_flag is True: - node_str_list.append(' '.join(str_pt) + ' ' + ' '.join(str_color) + '\n') - else: - node_str_color.append(str_color) - node_str_point.append(str_pt) - str_faces = generate_face(input_mesh, info_on_pix, config) - if config['save_ply'] is True: - print("Writing mesh file %s ..." 
% ply_name) - with open(ply_name, 'w') as ply_fi: - ply_fi.write('ply\n' + 'format ascii 1.0\n') - ply_fi.write('comment H ' + str(int(input_mesh.graph['H'])) + '\n') - ply_fi.write('comment W ' + str(int(input_mesh.graph['W'])) + '\n') - ply_fi.write('comment hFov ' + str(float(input_mesh.graph['hFov'])) + '\n') - ply_fi.write('comment vFov ' + str(float(input_mesh.graph['vFov'])) + '\n') - ply_fi.write('element vertex ' + str(len(node_str_list)) + '\n') - ply_fi.write('property float x\n' + \ - 'property float y\n' + \ - 'property float z\n' + \ - 'property uchar red\n' + \ - 'property uchar green\n' + \ - 'property uchar blue\n' + \ - 'property uchar alpha\n') - ply_fi.write('element face ' + str(len(str_faces)) + '\n') - ply_fi.write('property list uchar int vertex_index\n') - ply_fi.write('end_header\n') - ply_fi.writelines(node_str_list) - ply_fi.writelines(str_faces) - ply_fi.close() - return input_mesh - else: - H = int(input_mesh.graph['H']) - W = int(input_mesh.graph['W']) - hFov = input_mesh.graph['hFov'] - vFov = input_mesh.graph['vFov'] - node_str_color = np.array(node_str_color).astype(np.float32) - node_str_color[..., :3] = node_str_color[..., :3] / 255. - node_str_point = np.array(node_str_point) - str_faces = np.array(str_faces) - - return node_str_point, node_str_color, str_faces, H, W, hFov, vFov - -def read_ply(mesh_fi): - ply_fi = open(mesh_fi, 'r') - Height = None - Width = None - hFov = None - vFov = None - while True: - line = ply_fi.readline().split('\n')[0] - if line.startswith('element vertex'): - num_vertex = int(line.split(' ')[-1]) - elif line.startswith('element face'): - num_face = int(line.split(' ')[-1]) - elif line.startswith('comment'): - if line.split(' ')[1] == 'H': - Height = int(line.split(' ')[-1].split('\n')[0]) - if line.split(' ')[1] == 'W': - Width = int(line.split(' ')[-1].split('\n')[0]) - if line.split(' ')[1] == 'hFov': - hFov = float(line.split(' ')[-1].split('\n')[0]) - if line.split(' ')[1] == 'vFov': - vFov = float(line.split(' ')[-1].split('\n')[0]) - elif line.startswith('end_header'): - break - contents = ply_fi.readlines() - vertex_infos = contents[:num_vertex] - face_infos = contents[num_vertex:] - verts = [] - colors = [] - faces = [] - for v_info in vertex_infos: - str_info = [float(v) for v in v_info.split('\n')[0].split(' ')] - if len(str_info) == 6: - vx, vy, vz, r, g, b = str_info - else: - vx, vy, vz, r, g, b, hi = str_info - verts.append([vx, vy, vz]) - colors.append([r, g, b, hi]) - verts = np.array(verts) - try: - colors = np.array(colors) - colors[..., :3] = colors[..., :3]/255. 
- except: - import pdb - pdb.set_trace() - - for f_info in face_infos: - _, v1, v2, v3 = [int(f) for f in f_info.split('\n')[0].split(' ')] - faces.append([v1, v2, v3]) - faces = np.array(faces) - - - return verts, colors, faces, Height, Width, hFov, vFov - - -class Canvas_view(): - def __init__(self, - fov, - verts, - faces, - colors, - canvas_size, - factor=1, - bgcolor='gray', - proj='perspective', - ): - self.canvas = scene.SceneCanvas(bgcolor=bgcolor, size=(canvas_size*factor, canvas_size*factor)) - self.view = self.canvas.central_widget.add_view() - self.view.camera = 'perspective' - self.view.camera.fov = fov - self.mesh = visuals.Mesh(shading=None) - self.mesh.attach(Alpha(1.0)) - self.view.add(self.mesh) - self.tr = self.view.camera.transform - self.mesh.set_data(vertices=verts, faces=faces, vertex_colors=colors[:, :3]) - self.translate([0,0,0]) - self.rotate(axis=[1,0,0], angle=180) - self.view_changed() - - def translate(self, trans=[0,0,0]): - self.tr.translate(trans) - - def rotate(self, axis=[1,0,0], angle=0): - self.tr.rotate(axis=axis, angle=angle) - - def view_changed(self): - self.view.camera.view_changed() - - def render(self): - return self.canvas.render() - - def reinit_mesh(self, verts, faces, colors): - self.mesh.set_data(vertices=verts, faces=faces, vertex_colors=colors[:, :3]) - - def reinit_camera(self, fov): - self.view.camera.fov = fov - self.view.camera.view_changed() - - -def output_3d_photo(verts, colors, faces, Height, Width, hFov, vFov, tgt_poses, video_traj_types, ref_pose, - output_dir, ref_image, int_mtx, config, image, videos_poses, video_basename, original_H=None, original_W=None, - border=None, depth=None, normal_canvas=None, all_canvas=None, mean_loc_depth=None): - - cam_mesh = netx.Graph() - cam_mesh.graph['H'] = Height - cam_mesh.graph['W'] = Width - cam_mesh.graph['original_H'] = original_H - cam_mesh.graph['original_W'] = original_W - int_mtx_real_x = int_mtx[0] * Width - int_mtx_real_y = int_mtx[1] * Height - cam_mesh.graph['hFov'] = 2 * np.arctan((1. / 2.) * ((cam_mesh.graph['original_W']) / int_mtx_real_x[0])) - cam_mesh.graph['vFov'] = 2 * np.arctan((1. / 2.) 
* ((cam_mesh.graph['original_H']) / int_mtx_real_y[1])) - colors = colors[..., :3] - - fov_in_rad = max(cam_mesh.graph['vFov'], cam_mesh.graph['hFov']) - fov = (fov_in_rad * 180 / np.pi) - print("fov: " + str(fov)) - init_factor = 1 - if config.get('anti_flickering') is True: - init_factor = 3 - if (cam_mesh.graph['original_H'] is not None) and (cam_mesh.graph['original_W'] is not None): - canvas_w = cam_mesh.graph['original_W'] - canvas_h = cam_mesh.graph['original_H'] - else: - canvas_w = cam_mesh.graph['W'] - canvas_h = cam_mesh.graph['H'] - canvas_size = max(canvas_h, canvas_w) - if normal_canvas is None: - normal_canvas = Canvas_view(fov, - verts, - faces, - colors, - canvas_size=canvas_size, - factor=init_factor, - bgcolor='gray', - proj='perspective') - else: - normal_canvas.reinit_mesh(verts, faces, colors) - normal_canvas.reinit_camera(fov) - img = normal_canvas.render() - backup_img, backup_all_img, all_img_wo_bound = img.copy(), img.copy() * 0, img.copy() * 0 - img = cv2.resize(img, (int(img.shape[1] / init_factor), int(img.shape[0] / init_factor)), interpolation=cv2.INTER_AREA) - if border is None: - border = [0, img.shape[0], 0, img.shape[1]] - H, W = cam_mesh.graph['H'], cam_mesh.graph['W'] - if (cam_mesh.graph['original_H'] is not None) and (cam_mesh.graph['original_W'] is not None): - aspect_ratio = cam_mesh.graph['original_H'] / cam_mesh.graph['original_W'] - else: - aspect_ratio = cam_mesh.graph['H'] / cam_mesh.graph['W'] - if aspect_ratio > 1: - img_h_len = cam_mesh.graph['H'] if cam_mesh.graph.get('original_H') is None else cam_mesh.graph['original_H'] - img_w_len = img_h_len / aspect_ratio - anchor = [0, - img.shape[0], - int(max(0, int((img.shape[1])//2 - img_w_len//2))), - int(min(int((img.shape[1])//2 + img_w_len//2), (img.shape[1])-1))] - elif aspect_ratio <= 1: - img_w_len = cam_mesh.graph['W'] if cam_mesh.graph.get('original_W') is None else cam_mesh.graph['original_W'] - img_h_len = img_w_len * aspect_ratio - anchor = [int(max(0, int((img.shape[0])//2 - img_h_len//2))), - int(min(int((img.shape[0])//2 + img_h_len//2), (img.shape[0])-1)), - 0, - img.shape[1]] - anchor = np.array(anchor) - plane_width = np.tan(fov_in_rad/2.) * np.abs(mean_loc_depth) - for video_pose, video_traj_type in zip(videos_poses, video_traj_types): - stereos = [] - tops = []; buttoms = []; lefts = []; rights = [] - for tp_id, tp in enumerate(video_pose): - rel_pose = np.linalg.inv(np.dot(tp, np.linalg.inv(ref_pose))) - axis, angle = transforms3d.axangles.mat2axangle(rel_pose[0:3, 0:3]) - normal_canvas.rotate(axis=axis, angle=(angle*180)/np.pi) - normal_canvas.translate(rel_pose[:3,3]) - new_mean_loc_depth = mean_loc_depth - float(rel_pose[2, 3]) - if 'dolly' in video_traj_type: - new_fov = float((np.arctan2(plane_width, np.array([np.abs(new_mean_loc_depth)])) * 180. 
/ np.pi) * 2) - normal_canvas.reinit_camera(new_fov) - else: - normal_canvas.reinit_camera(fov) - normal_canvas.view_changed() - img = normal_canvas.render() - img = cv2.GaussianBlur(img,(int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0) - img = cv2.resize(img, (int(img.shape[1] / init_factor), int(img.shape[0] / init_factor)), interpolation=cv2.INTER_AREA) - img = img[anchor[0]:anchor[1], anchor[2]:anchor[3]] - img = img[int(border[0]):int(border[1]), int(border[2]):int(border[3])] - - if any(np.array(config['crop_border']) > 0.0): - H_c, W_c, _ = img.shape - o_t = int(H_c * config['crop_border'][0]) - o_l = int(W_c * config['crop_border'][1]) - o_b = int(H_c * config['crop_border'][2]) - o_r = int(W_c * config['crop_border'][3]) - img = img[o_t:H_c-o_b, o_l:W_c-o_r] - img = cv2.resize(img, (W_c, H_c), interpolation=cv2.INTER_CUBIC) - - """ - img = cv2.resize(img, (int(img.shape[1] / init_factor), int(img.shape[0] / init_factor)), interpolation=cv2.INTER_CUBIC) - img = img[anchor[0]:anchor[1], anchor[2]:anchor[3]] - img = img[int(border[0]):int(border[1]), int(border[2]):int(border[3])] - - if config['crop_border'] is True: - top, buttom, left, right = find_largest_rect(img, bg_color=(128, 128, 128)) - tops.append(top); buttoms.append(buttom); lefts.append(left); rights.append(right) - """ - stereos.append(img[..., :3]) - normal_canvas.translate(-rel_pose[:3,3]) - normal_canvas.rotate(axis=axis, angle=-(angle*180)/np.pi) - normal_canvas.view_changed() - """ - if config['crop_border'] is True: - atop, abuttom = min(max(tops), img.shape[0]//2 - 10), max(min(buttoms), img.shape[0]//2 + 10) - aleft, aright = min(max(lefts), img.shape[1]//2 - 10), max(min(rights), img.shape[1]//2 + 10) - atop -= atop % 2; abuttom -= abuttom % 2; aleft -= aleft % 2; aright -= aright % 2 - else: - atop = 0; abuttom = img.shape[0] - img.shape[0] % 2; aleft = 0; aright = img.shape[1] - img.shape[1] % 2 - """ - atop = 0; abuttom = img.shape[0] - img.shape[0] % 2; aleft = 0; aright = img.shape[1] - img.shape[1] % 2 - crop_stereos = [] - for stereo in stereos: - crop_stereos.append((stereo[atop:abuttom, aleft:aright, :3] * 1).astype(np.uint8)) - stereos = crop_stereos - clip = ImageSequenceClip(stereos, fps=config['fps']) - if isinstance(video_basename, list): - video_basename = video_basename[0] - clip.write_videofile(os.path.join(output_dir, video_basename + '_' + video_traj_type + '.mp4'), fps=config['fps']) - - - - return normal_canvas, all_canvas diff --git a/spaces/EuroPython2022/swinunetr-dicom-video/app.py b/spaces/EuroPython2022/swinunetr-dicom-video/app.py deleted file mode 100644 index 446bf1c540c138a9d11a477e1f7d735abc3a3f53..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/swinunetr-dicom-video/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import sys -import os -import glob -import shutil -import torch -import argparse -import mediapy -import cv2 -import numpy as np -import gradio as gr -from skimage import color, img_as_ubyte -from monai import transforms, data - -os.system("git clone https://github.com/darraghdog/Project-MONAI-research-contributions pmrc") -sys.path.append("pmrc/SwinUNETR/BTCV") -from swinunetr import SwinUnetrModelForInference, SwinUnetrConfig - - -ffmpeg_path = shutil.which('ffmpeg') -mediapy.set_ffmpeg(ffmpeg_path) - -# Load model -model = SwinUnetrModelForInference.from_pretrained('darragh/swinunetr-btcv-tiny') -model.eval() - -# Pull files from github -input_files = glob.glob('pmrc/SwinUNETR/BTCV/dataset/imagesSampleTs/*.nii.gz') -input_files = 
dict((f.split('/')[-1], f) for f in input_files) - -# Load and process dicom with monai transforms -test_transform = transforms.Compose( - [ - transforms.LoadImaged(keys=["image"]), - transforms.AddChanneld(keys=["image"]), - transforms.Spacingd(keys="image", - pixdim=(1.5, 1.5, 2.0), - mode="bilinear"), - transforms.ScaleIntensityRanged(keys=["image"], - a_min=-175.0, - a_max=250.0, - b_min=0.0, - b_max=1.0, - clip=True), - # transforms.Resized(keys=["image"], spatial_size = (256,256,-1)), - transforms.ToTensord(keys=["image"]), - ]) - -# Create Data Loader -def create_dl(test_files): - ds = test_transform(test_files) - loader = data.DataLoader(ds, - batch_size=1, - shuffle=False) - return loader - -# Inference and video generation -def generate_dicom_video(selected_file, n_frames): - - # Data processor - test_file = input_files[selected_file] - test_files = [{'image': test_file}] - dl = create_dl(test_files) - batch = next(iter(dl)) - - # Select dicom slices - tst_inputs = batch["image"] - tst_inputs = tst_inputs[:,:,:,:,-n_frames:] - - # Inference - with torch.no_grad(): - outputs = model(tst_inputs, - (96,96,96), - 8, - overlap=0.5, - mode="gaussian") - tst_outputs = torch.softmax(outputs.logits, 1) - tst_outputs = torch.argmax(tst_outputs, axis=1) - - # Write frames to video - for inp, outp in zip(tst_inputs, tst_outputs): - frames = [] - for idx in range(inp.shape[-1]): - # Segmentation - seg = outp[:,:,idx].numpy().astype(np.uint8) - # Input dicom frame - img = (inp[0,:,:,idx]*255).numpy().astype(np.uint8) - img = cv2.cvtColor(img,cv2.COLOR_GRAY2RGB) - frame = color.label2rgb(seg,img, bg_label = 0) - frame = img_as_ubyte(frame) - frame = np.concatenate((img, frame), 1) - frames.append(frame) - mediapy.write_video("dicom.mp4", frames, fps=4) - - return 'dicom.mp4' - - -theme = 'dark-peach' -with gr.Blocks(theme=theme) as demo: - - gr.Markdown('''

    SwinUnetr BTCV

    - This is a Gradio Blocks app of the winning transformer in the Beyond the Cranial Vault (BTCV) Segmentation Challenge, SwinUnetr (tiny version). - ''') - selected_dicom_key = gr.inputs.Dropdown( - choices=sorted(input_files), - type="value", - label="Select a dicom file") - n_frames = gr.Slider(1, 100, value=32, label="Choose the number of dicom slices to process", step = 1) - button_gen_video = gr.Button("Generate Video") - output_interpolation = gr.Video(label="Generated Video") - button_gen_video.click(fn=generate_dicom_video, - inputs=[selected_dicom_key, n_frames], - outputs=output_interpolation) - -demo.launch(debug=True, enable_queue=True) - - diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L12_Onnx.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L12_Onnx.py deleted file mode 100644 index 9ad5085e02654fd1fcfbdad7d476bfa9b763d2c6..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L12_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec256L12_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-256-layer-12.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) diff --git a/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/cantonese.py b/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-pcr/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = 
re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Friklogff/xx-xhai/XhApi.py b/spaces/Friklogff/xx-xhai/XhApi.py deleted file mode 100644 index 8eb0e48be1e234b31c3b1531ef170252a2e1bc4d..0000000000000000000000000000000000000000 --- a/spaces/Friklogff/xx-xhai/XhApi.py +++ /dev/null @@ -1,150 +0,0 @@ -# -*- coding = utf-8 -*- -""" -# @Time : 2023/7/20 12:37 -# @Author : CSDN:FriKlogff -# @File : XhApi.py -# @Software: PyCharm -# @Function: 星火大模型API -""" -import os -os.system("""python -m pip install -i https://mirrors.aliyun.com/pypi/simple/ --upgrade pip setuptools -pip install -i https://mirrors.aliyun.com/pypi/simple/ websocket -pip install -i https://mirrors.aliyun.com/pypi/simple/ websocket-client -pip install -i https://mirrors.aliyun.com/pypi/simple/ gradio -pip install -i https://mirrors.aliyun.com/pypi/simple/ sxtwl -""") -import _thread as thread # 导入线程模块 -import base64 # 导入base64编码模块 -import datetime # 导入datetime模块 -import hashlib # 导入hashlib模块 -import hmac # 导入hmac模块 -import json # 导入json模块 -from urllib.parse import urlparse # 从urllib.parse导入urlparse用于url解析 -import ssl # 导入ssl模块 -from datetime import datetime # 从datetime导入datetime类 -from time import mktime # 从time导入mktime用于生成时间戳 -from urllib.parse import urlencode # 从urllib.parse导入urlencode用于编码请求参数 -from wsgiref.handlers import format_date_time # 从wsgiref.handlers导入format_date_time用于格式化时间 - -import websocket # 导入websocket模块 - -response_content = "" - - -# 请求参数类 -class Ws_Param: - # 初始化 - def __init__(self, APPID, APIKey, APISecret, gpt_url): - self.APPID = APPID # 应用ID - self.APIKey = APIKey # API Key - self.APISecret = APISecret # API Secret - self.host = urlparse(gpt_url).netloc # 从url解析出host - self.path = urlparse(gpt_url).path # 从url解析出path - self.gpt_url = gpt_url # 完整的url - - # 生成签名和url的方法 - def create_url(self): - now = datetime.now() # 当前时间 - date = format_date_time(mktime(now.timetuple())) # 格式化的时间戳 - # 拼接签名原文 - signature_origin = "host: " + self.host + "\n" - signature_origin += "date: " + date + "\n" - signature_origin += "GET " + self.path + " HTTP/1.1" - # 生成签名 - signature_sha = hmac.new(self.APISecret.encode('utf-8'), signature_origin.encode('utf-8'), - digestmod=hashlib.sha256).digest() - signature_sha_base64 = base64.b64encode(signature_sha).decode(encoding='utf-8') - # 生成授权header - authorization_origin = f'api_key="{self.APIKey}", algorithm="hmac-sha256", headers="host date request-line", signature="{signature_sha_base64}"' - authorization = base64.b64encode(authorization_origin.encode('utf-8')).decode(encoding='utf-8') - # 生成url参数字典 - v = { - "authorization": authorization, - "date": date, - "host": self.host - } - # 构造最终url - url = self.gpt_url + '?' 
+ urlencode(v) - return url - - -# 收到websocket错误的处理 -def on_error(ws, error): - print("### error:", error) - - -# 收到websocket关闭的处理 -def on_close(ws): - print("### closed ###") - - -# 收到websocket连接建立的处理 -def on_open(ws): - thread.start_new_thread(run, (ws,)) - - -# 发送请求的方法 -def run(ws, *args): - data = json.dumps(gen_params(appid=ws.appid, question=ws.question)) - ws.send(data) - - -# 收到websocket消息的处理 -def on_message(ws, message): - print(message) - data = json.loads(message) - code = data['header']['code'] - if code != 0: - print(f'请求错误: {code}, {data}') - ws.close() - else: - choices = data["payload"]["choices"] - status = choices["status"] - content = choices["text"][0]["content"] - print(content, end='') - global response_content - response_content += content - if status == 2: - ws.close() - - -# 生成请求参数 -def gen_params(appid, question): - """ - 通过appid和用户的提问来生成请参数 - """ - data = { - "header": { - "app_id": appid, - "uid": "1234" - }, - "parameter": { - "chat": { - "domain": "general", - "random_threshold": 0.5, - "max_tokens": 2048, - "auditing": "default" - } - }, - "payload": { - "message": { - "text": [ - {"role": "user", "content": question} - ] - } - } - } - return data - - - - -def main(appid, api_key, api_secret, gpt_url, question): - wsParam = Ws_Param(appid, api_key, api_secret, gpt_url) - websocket.enableTrace(False) - wsUrl = wsParam.create_url() - ws = websocket.WebSocketApp(wsUrl, on_message=on_message, on_error=on_error, on_close=on_close, on_open=on_open) - ws.appid = appid - ws.question = question - ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE}) - return response_content diff --git a/spaces/GIZ/vulnerability_analysis/appStore/__init__.py b/spaces/GIZ/vulnerability_analysis/appStore/__init__.py deleted file mode 100644 index 802fa483031a6683fa4d7aa4addae78f56f2937b..0000000000000000000000000000000000000000 --- a/spaces/GIZ/vulnerability_analysis/appStore/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# adding for package implementation \ No newline at end of file diff --git a/spaces/GXSA/bingo/src/components/chat.tsx b/spaces/GXSA/bingo/src/components/chat.tsx deleted file mode 100644 index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/components/chat.tsx +++ /dev/null @@ -1,93 +0,0 @@ -'use client' - -import { useCallback, useEffect, useMemo, useState } from 'react' -import { useAtom } from 'jotai' -import Image from 'next/image' -import { cn } from '@/lib/utils' -import { ChatList } from '@/components/chat-list' -import { ChatPanel } from '@/components/chat-panel' -import { WelcomeScreen } from '@/components/welcome-screen' -import { ChatScrollAnchor } from '@/components/chat-scroll-anchor' -import { ToneSelector } from './tone-selector' -import { ChatHeader } from './chat-header' -import { ChatSuggestions } from './chat-suggestions' -import { bingConversationStyleAtom } from '@/state' -import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom' -import StopIcon from '@/assets/images/stop.svg' -import { useBing } from '@/lib/hooks/use-bing' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { ChatNotification } from './chat-notification' -import { Settings } from './settings' -import { ChatHistory } from './chat-history' - -export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] } - -export default function Chat({ className }: ChatProps) { - - const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom) - const { - messages, - 
sendMessage, - resetConversation, - stopGenerating, - setInput, - bot, - input, - generating, - isSpeaking, - uploadImage, - attachmentList, - setAttachmentList, - } = useBing() - - useEffect(() => { - window.scrollTo({ - top: document.body.offsetHeight, - behavior: 'smooth' - }) - }, []) - - return ( -
    - -
    - - - - {messages.length ? ( - <> - - - - - - {generating ? ( -
    - -
    - ) : null} - - ) : null} -
    - - -
    - ) -} diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_smaller.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_smaller.sh deleted file mode 100644 index c4453cde6d4a278e90cc1462ffaf243996926b7a..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train5_gptmixcliport2_smaller.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -#SBATCH -c 10 -#SBATCH -n 1 -#SBATCH -o logs/%j.out -#SBATCH --exclusive -STEPS=${1-'50000'} - - -sh scripts/traintest_scripts/train_test_multi_task_goal_smaller.sh data \ - "[put-block-in-bowl,align-box-corner,color-coordinated-sphere-insertion,rainbow-stack,align-pair-colored-blocks-along-line,vertical-insertion-blocks,stack-blocks-in-container]" \ - "[put-block-in-bowl,align-box-corner]" \ - gpt5_mixcliport3_task $STEPS diff --git a/spaces/Gradio-Blocks/uniformer_image_demo/README.md b/spaces/Gradio-Blocks/uniformer_image_demo/README.md deleted file mode 100644 index de8ef46944886f63059954a8cd9eda98d1f156be..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Uniformer_image_demo -emoji: 📷 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_r101_fpn_20e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_r101_fpn_20e_coco.py deleted file mode 100644 index de3d5b7635a2416c5d8a533631dc5a26201ba72a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/htc/htc_r101_fpn_20e_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = './htc_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) -# learning policy -lr_config = dict(step=[16, 19]) -runner = dict(type='EpochBasedRunner', max_epochs=20) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/drive.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/drive.py deleted file mode 100644 index 06e8ff606e0d2a4514ec8b7d2c6c436a32efcbf4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/datasets/drive.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'DRIVEDataset' -data_root = 'data/DRIVE' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (584, 565) -crop_size = (64, 64) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - 
dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_40k_cityscapes.py deleted file mode 100644 index 99760c36d8399204ca8e35f32690bcd369676852..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_hr18.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/solvers/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/solvers/__init__.py deleted file mode 100644 index ae19f3a8c51abf469697d6affa91449d668716ba..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/solvers/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Solvers. A Solver is a training recipe, combining the dataloaders, models, -optimizer, losses etc into a single convenient object. 
-""" - -# flake8: noqa -from .audiogen import AudioGenSolver -from .builders import get_solver -from .base import StandardSolver -from .compression import CompressionSolver -from .musicgen import MusicGenSolver -from .diffusion import DiffusionSolver diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/pegasus/tokenizers_pegasus.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/pegasus/tokenizers_pegasus.py deleted file mode 100644 index f532875987b59a42aca9ad35eb7a1945c992869b..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/pegasus/tokenizers_pegasus.py +++ /dev/null @@ -1,597 +0,0 @@ -from fengshen.examples.pegasus.data_utils import ( - _is_control, - _is_punctuation, - _is_whitespace, - _is_chinese_char) -from transformers import PreTrainedTokenizer -from transformers import logging -from typing import List, Optional, Tuple, Union -import collections -import os -import unicodedata -import re -import jieba -import sys - -sys.path.append("../../../../") - -jieba.dt.tmp_dir = os.path.expanduser("~/.cache/") -# jieba.enable_parallel(8) -jieba.initialize() - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"} - - -def load_vocab(vocab_file): - """Loads a vocabulary file into a dictionary.""" - vocab = collections.OrderedDict() - with open(vocab_file, "r", encoding="utf-8") as reader: - tokens = reader.readlines() - for index, token in enumerate(tokens): - token = token.rstrip("\n") - vocab[token] = index - return vocab - - -def whitespace_tokenize(text): - """Runs basic whitespace cleaning and splitting on a piece of text.""" - text = text.strip() - if not text: - return [] - tokens = text.split() - return tokens - - -class PegasusTokenizer(PreTrainedTokenizer): - # copy from BertTokenizer - r""" - Construct a Pegasus tokenizer. Based on WordPiece. - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - Args: - vocab_file (`str`): - File containing the vocabulary. - do_lower_case (`bool`, *optional*, defaults to `True`): - Whether or not to lowercase the input when tokenizing. - do_basic_tokenize (`bool`, *optional*, defaults to `True`): - Whether or not to do basic tokenization before WordPiece. - never_split (`Iterable`, *optional*): - Collection of tokens which will never be split during tokenization. Only has an effect when - `do_basic_tokenize=True` - unk_token (`str`, *optional*, defaults to `"[UNK]"`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - sep_token (`str`, *optional*, defaults to `"[SEP]"`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - pad_token (`str`, *optional*, defaults to `"[PAD]"`): - The token used for padding, for example when batching sequences of different lengths. - cls_token (`str`, *optional*, defaults to `"[CLS]"`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - mask_token (`str`, *optional*, defaults to `"[MASK]"`): - The token used for masking values. 
This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): - Whether or not to tokenize Chinese characters. - This should likely be deactivated for Japanese (see this - [issue](https://github.com/huggingface/transformers/issues/328)). - strip_accents (`bool`, *optional*): - Whether or not to strip all accents. If this option is not specified, then it will be determined by the - value for `lowercase` (as in the original BERT). - """ - - vocab_files_names = VOCAB_FILES_NAMES - model_input_names = ["input_ids", "attention_mask"] - - # pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - # pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION - # max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - - def __init__(self, - vocab_file, - do_lower_case=True, - do_basic_tokenize=True, - never_split=None, - pad_token="", - eos_token="", - unk_token="", - mask_token="", - mask_token_sent="", - additional_special_tokens=None, - sep_token="[SEP]", - cls_token="[CLS]", - tokenize_chinese_chars=True, - strip_accents=None, - offset=100, - pre_tokenizer=lambda x: jieba.cut(x, HMM=False), - **kwargs): - self.offset = offset - - if additional_special_tokens is not None: - if not isinstance(additional_special_tokens, list): - raise TypeError( - f"additional_special_tokens should be of type {type(list)}, \ - but is {type(additional_special_tokens)}" - ) - - additional_special_tokens_extended = ( - ([mask_token_sent] + additional_special_tokens) - if mask_token_sent not in additional_special_tokens - and mask_token_sent is not None else additional_special_tokens) - - # fill additional tokens with ..., in case not all additional tokens are already taken - additional_special_tokens_extended += [ - f"" for i in range( - len(additional_special_tokens_extended), self.offset - 1) - ] - - if len(set(additional_special_tokens_extended)) != len( - additional_special_tokens_extended): - raise ValueError( - f"Please make sure that the provided additional_special_tokens \ - do not contain an incorrectly shifted list of tokens. \ - Found {additional_special_tokens_extended}." - ) - additional_special_tokens = additional_special_tokens_extended - else: - additional_special_tokens = [ - mask_token_sent - ] if mask_token_sent is not None else [] - # additional_special_tokens += [f"" for i in range(3, self.offset)] - - # print("additional_special_tokens: ", additional_special_tokens) - - if not os.path.isfile(vocab_file): - raise ValueError( - f"Can't find a vocabulary file at path '{vocab_file}'. 
\ - To load the vocabulary from a Google pretrained " - "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`" - ) - - super().__init__( - do_lower_case=do_lower_case, - do_basic_tokenize=do_basic_tokenize, - never_split=never_split, - unk_token=unk_token, - sep_token=sep_token, - pad_token=pad_token, - cls_token=cls_token, - mask_token=mask_token, - eos_token=eos_token, - tokenize_chinese_chars=tokenize_chinese_chars, - additional_special_tokens=additional_special_tokens, - strip_accents=strip_accents, - **kwargs, - ) - - self.pre_tokenizer = pre_tokenizer - self.mask_token_sent = mask_token_sent - self.vocab = load_vocab(vocab_file) - - self.vocab[self.eos_token] = self.vocab.pop("[unused1]") - # self.vocab[self.eos_token] = self.vocab.pop("[unused2]") - self.vocab[self.pad_token] = self.vocab.pop("[PAD]") - self.vocab[self.unk_token] = self.vocab.pop("[UNK]") - - if self.mask_token_sent is not None: - self.vocab[self.mask_token] = self.vocab.pop("[unused3]") - self.vocab[self.mask_token_sent] = self.vocab.pop("[unused2]") - - self.ids_to_tokens = collections.OrderedDict([ - (ids, tok) for tok, ids in self.vocab.items() - ]) - self.do_basic_tokenize = do_basic_tokenize - if do_basic_tokenize: - self.basic_tokenizer = BasicTokenizer( - do_lower_case=do_lower_case, - never_split=never_split, - tokenize_chinese_chars=tokenize_chinese_chars, - strip_accents=strip_accents, - ) - self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab, - unk_token=self.unk_token) - - @property - def do_lower_case(self): - return self.basic_tokenizer.do_lower_case - - @property - def vocab_size(self): - return len(self.vocab) - - def get_vocab(self): - return dict(self.vocab, **self.added_tokens_encoder) - - def _tokenize(self, text): - split_tokens = [] - # print("pegasus_tokenizer: ", text) - for text in self.pre_tokenizer(text): - if text in self.vocab: - split_tokens.append(text) - else: - if self.do_basic_tokenize: - for token in self.basic_tokenizer.tokenize( - text, never_split=self.all_special_tokens): - - # If the token is part of the never_split set - if token in self.basic_tokenizer.never_split: - split_tokens.append(token) - else: - split_tokens += self.wordpiece_tokenizer.tokenize( - token) - else: - split_tokens = self.wordpiece_tokenizer.tokenize(text) - return split_tokens - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.vocab.get(token, self.vocab.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.ids_to_tokens.get(index, self.unk_token) - - @staticmethod - def _cjk_punctuation(): - return u'\uff02\uff03\uff04\uff05\uff06\uff07\uff08\uff09\uff0a\uff0b\uff0c\uff0d\uff0f\uff1a\uff1b\uff1c\uff1d\ - \uff1e\uff20\uff3b\uff3c\uff3d\uff3e\uff3f\uff40\uff5b\uff5c\uff5d\uff5e\uff5f\uff60\uff62\ - \uff63\uff64\u3000\u3001\u3003\u3008\u3009\u300a\u300b\u300c\u300d\u300e\u300f\u3010\u3011\u3014\ - \u3015\u3016\u3017\u3018\u3019\u301a\u301b\u301c\u301d\u301e\u301f\u3030\u303e\u303f\u2013\u2014\ - \u2018\u2019\u201b\u201c\u201d\u201e\u201f\u2026\u2027\ufe4f\ufe51\ufe54\u00b7\uff01\uff1f\uff61\u3002' - - def convert_ids_to_tokens( - self, - ids: Union[int, List[int]], - skip_special_tokens: bool = False) -> Union[str, List[str]]: - """ - Converts a single index or a sequence of indices in a token or a sequence of tokens, using the vocabulary and - added tokens. 
- Args: - ids (`int` or `List[int]`): - The token id (or token ids) to convert to tokens. - skip_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not to remove special tokens in the decoding. - Returns: - `str` or `List[str]`: The decoded token(s). - """ - if isinstance(ids, int): - if ids in self.added_tokens_decoder: - return self.added_tokens_decoder[ids] - else: - return self._convert_id_to_token(ids) - tokens = [] - for index in ids: - index = int(index) - if skip_special_tokens and index in self.all_special_ids and index != 2: - continue - if index in self.added_tokens_decoder: - tokens.append(self.added_tokens_decoder[index]) - else: - tokens.append(self._convert_id_to_token(index)) - return tokens - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - # for token in - # tokens = tokens or self.ids_to_tokens(ids) - # tokens = [token for token in tokens if not self._is_special(token)] - - text = '' - for i, token in enumerate(tokens): - if token[:2] == '##': - text += token[2:] - elif len(token) == 1 and _is_chinese_char(ord(token)): - text += token - elif len(token) == 1 and _is_punctuation(token): - text += token - text += ' ' - elif i > 0 and _is_chinese_char(ord(text[-1])): - text += token - elif tokens == "": - continue - else: - text += ' ' - text += token - - text = re.sub(' +', ' ', text) - text = re.sub('\' (re|m|s|t|ve|d|ll) ', '\'\\1 ', text) - punctuation = re.sub(' +', '', self._cjk_punctuation()).strip() + '+-/={(<[' - punctuation_regex = '|'.join([re.escape(p) for p in punctuation]) - punctuation_regex = '(%s) ' % punctuation_regex - text = re.sub(punctuation_regex, '\\1', text) - text = re.sub(r'(\d\.) (\d)', '\\1\\2', text) - - return text.strip() - # out_string = " ".join(tokens).replace(" ##", "").strip() - - def build_inputs_with_special_tokens( - self, - token_ids_0: List[int], - token_ids_1: Optional[List[int]] = None) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating - and adding special tokens. A PEGASUS sequence has the following format, where `X` represents the sequence: - - single sequence: `X ` - - pair of sequences: `A B ` (not intended use) - BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a - separator. - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens. - """ - if token_ids_1 is None: - return token_ids_0 + [self.eos_token_id] - return token_ids_0 + token_ids_1 + [self.eos_token_id] - - def _special_token_mask(self, seq): - all_special_ids = set( - self.all_special_ids) # call it once instead of inside list comp - # all_special_ids.remove(self.unk_token_id) # is only sometimes special - - return [1 if x in all_special_ids else 0 for x in seq] - - def get_special_tokens_mask( - self, - token_ids_0: List[int], - token_ids_1: Optional[List[int]] = None, - already_has_special_tokens: bool = False) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - Args: - token_ids_0 (`List[int]`): - List of IDs. 
- token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - - if already_has_special_tokens: - return self._special_token_mask(token_ids_0) - elif token_ids_1 is None: - return self._special_token_mask(token_ids_0) + [self.eos_token_id] - else: - return self._special_token_mask(token_ids_0 + - token_ids_1) + [self.eos_token_id] - - def num_special_tokens_to_add(self, pair=False): - """Just EOS""" - return 1 - - def save_vocabulary(self, - save_directory: str, - filename_prefix: Optional[str] = None) -> Tuple[str]: - index = 0 - if os.path.isdir(save_directory): - vocab_file = os.path.join( - save_directory, - (filename_prefix + "-" if filename_prefix else "") + - VOCAB_FILES_NAMES["vocab_file"]) - else: - vocab_file = (filename_prefix + - "-" if filename_prefix else "") + save_directory - with open(vocab_file, "w", encoding="utf-8") as writer: - for token, token_index in sorted(self.vocab.items(), - key=lambda kv: kv[1]): - if index != token_index: - logger.warning( - f"Saving vocabulary to {vocab_file}: vocabulary indices are not consecutive." - " Please check that the vocabulary is not corrupted!") - index = token_index - writer.write(token + "\n") - index += 1 - return (vocab_file, ) - - -class BasicTokenizer(object): - """ - Constructs a BasicTokenizer that will run basic tokenization (punctuation splitting, lower casing, etc.). - Args: - do_lower_case (`bool`, *optional*, defaults to `True`): - Whether or not to lowercase the input when tokenizing. - never_split (`Iterable`, *optional*): - Collection of tokens which will never be split during tokenization. Only has an effect when - `do_basic_tokenize=True` - tokenize_chinese_chars (`bool`, *optional*, defaults to `True`): - Whether or not to tokenize Chinese characters. - This should likely be deactivated for Japanese (see this - [issue](https://github.com/huggingface/transformers/issues/328)). - strip_accents: (`bool`, *optional*): - Whether or not to strip all accents. If this option is not specified, then it will be determined by the - value for `lowercase` (as in the original BERT). - """ - - def __init__(self, - do_lower_case=True, - never_split=None, - tokenize_chinese_chars=True, - strip_accents=None): - if never_split is None: - never_split = [] - self.do_lower_case = do_lower_case - self.never_split = set(never_split) - self.tokenize_chinese_chars = tokenize_chinese_chars - self.strip_accents = strip_accents - - def tokenize(self, text, never_split=None): - """ - Basic Tokenization of a piece of text. Split on "white spaces" only, for sub-word tokenization, see - WordPieceTokenizer. - Args: - never_split (`List[str]`, *optional*) - Kept for backward compatibility purposes. Now implemented directly at the base class level (see - [`PreTrainedTokenizer.tokenize`]) List of token not to split. - """ - # union() returns a new set by concatenating the two sets. - never_split = self.never_split.union( - set(never_split)) if never_split else self.never_split - text = self._clean_text(text) - - # This was added on November 1st, 2018 for the multilingual and Chinese - # models. 
This is also applied to the English models now, but it doesn't - # matter since the English models were not trained on any Chinese data - # and generally don't have any Chinese data in them (there are Chinese - # characters in the vocabulary because Wikipedia does have some Chinese - # words in the English Wikipedia.). - if self.tokenize_chinese_chars: - text = self._tokenize_chinese_chars(text) - orig_tokens = whitespace_tokenize(text) - split_tokens = [] - for token in orig_tokens: - if token not in never_split: - if self.do_lower_case: - token = token.lower() - if self.strip_accents is not False: - token = self._run_strip_accents(token) - elif self.strip_accents: - token = self._run_strip_accents(token) - split_tokens.extend(self._run_split_on_punc(token, never_split)) - - output_tokens = whitespace_tokenize(" ".join(split_tokens)) - return output_tokens - - def _run_strip_accents(self, text): - """Strips accents from a piece of text.""" - text = unicodedata.normalize("NFD", text) - output = [] - for char in text: - cat = unicodedata.category(char) - if cat == "Mn": - continue - output.append(char) - return "".join(output) - - def _run_split_on_punc(self, text, never_split=None): - """Splits punctuation on a piece of text.""" - if never_split is not None and text in never_split: - return [text] - chars = list(text) - i = 0 - start_new_word = True - output = [] - while i < len(chars): - char = chars[i] - if _is_punctuation(char): - output.append([char]) - start_new_word = True - else: - if start_new_word: - output.append([]) - start_new_word = False - output[-1].append(char) - i += 1 - - return ["".join(x) for x in output] - - def _tokenize_chinese_chars(self, text): - """Adds whitespace around any CJK character.""" - output = [] - for char in text: - cp = ord(char) - if self._is_chinese_char(cp): - output.append(" ") - output.append(char) - output.append(" ") - else: - output.append(char) - return "".join(output) - - def _is_chinese_char(self, cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. 
- if ((cp >= 0x4E00 and cp <= 0x9FFF) - or (cp >= 0x3400 and cp <= 0x4DBF) # - or (cp >= 0x20000 and cp <= 0x2A6DF) # - or (cp >= 0x2A700 and cp <= 0x2B73F) # - or (cp >= 0x2B740 and cp <= 0x2B81F) # - or (cp >= 0x2B820 and cp <= 0x2CEAF) # - or (cp >= 0xF900 and cp <= 0xFAFF) - or (cp >= 0x2F800 and cp <= 0x2FA1F)): # - return True - - return False - - def _clean_text(self, text): - """Performs invalid character removal and whitespace cleanup on text.""" - output = [] - for char in text: - cp = ord(char) - if cp == 0 or cp == 0xFFFD or _is_control(char): - continue - if _is_whitespace(char): - output.append(" ") - else: - output.append(char) - return "".join(output) - - -class WordpieceTokenizer(object): - """Runs WordPiece tokenization.""" - - def __init__(self, vocab, unk_token, max_input_chars_per_word=100): - self.vocab = vocab - self.unk_token = unk_token - self.max_input_chars_per_word = max_input_chars_per_word - - def tokenize(self, text): - """ - Tokenizes a piece of text into its word pieces. This uses a greedy longest-match-first algorithm to perform - tokenization using the given vocabulary. - For example, `input = "unaffable"` wil return as output `["un", "##aff", "##able"]`. - Args: - text: A single token or whitespace separated tokens. This should have - already been passed through *BasicTokenizer*. - Returns: - A list of wordpiece tokens. - """ - - output_tokens = [] - for token in whitespace_tokenize(text): - chars = list(token) - if len(chars) > self.max_input_chars_per_word: - output_tokens.append(self.unk_token) - continue - - is_bad = False - start = 0 - sub_tokens = [] - while start < len(chars): - end = len(chars) - cur_substr = None - while start < end: - substr = "".join(chars[start:end]) - if start > 0: - substr = "##" + substr - if substr in self.vocab: - cur_substr = substr - break - end -= 1 - if cur_substr is None: - is_bad = True - break - sub_tokens.append(cur_substr) - start = end - - if is_bad: - output_tokens.append(self.unk_token) - else: - output_tokens.extend(sub_tokens) - return output_tokens diff --git a/spaces/HarlanHong/DaGAN/depth/depth_decoder.py b/spaces/HarlanHong/DaGAN/depth/depth_decoder.py deleted file mode 100644 index efbdaf73ee199f8d0ca7a6b75b29f82b1711c56a..0000000000000000000000000000000000000000 --- a/spaces/HarlanHong/DaGAN/depth/depth_decoder.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright Niantic 2019. Patent Pending. All rights reserved. -# -# This software is licensed under the terms of the Monodepth2 licence -# which allows for non-commercial use only, the full terms of which are made -# available in the LICENSE file. 
- -from __future__ import absolute_import, division, print_function - -import numpy as np -import torch -import torch.nn as nn - -from collections import OrderedDict -from depth.layers import * - - -class DepthDecoder(nn.Module): - def __init__(self, num_ch_enc, scales=range(4), num_output_channels=1, use_skips=True): - super(DepthDecoder, self).__init__() - - self.num_output_channels = num_output_channels - self.use_skips = use_skips - self.upsample_mode = 'nearest' - self.scales = scales - - self.num_ch_enc = num_ch_enc - self.num_ch_dec = np.array([16, 32, 64, 128, 256]) - - # decoder - self.convs = OrderedDict() - for i in range(4, -1, -1): - # upconv_0 - num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1] - num_ch_out = self.num_ch_dec[i] - self.convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out) - - # upconv_1 - num_ch_in = self.num_ch_dec[i] - if self.use_skips and i > 0: - num_ch_in += self.num_ch_enc[i - 1] - num_ch_out = self.num_ch_dec[i] - self.convs[("upconv", i, 1)] = ConvBlock(num_ch_in, num_ch_out) - - for s in self.scales: - self.convs[("dispconv", s)] = Conv3x3(self.num_ch_dec[s], self.num_output_channels) - - self.decoder = nn.ModuleList(list(self.convs.values())) - self.sigmoid = nn.Sigmoid() - - def forward(self, input_features): - self.outputs = {} - - # decoder - x = input_features[-1] - for i in range(4, -1, -1): - x = self.convs[("upconv", i, 0)](x) - x = [upsample(x)] - if self.use_skips and i > 0: - x += [input_features[i - 1]] - x = torch.cat(x, 1) - x = self.convs[("upconv", i, 1)](x) - if i in self.scales: - self.outputs[("disp", i)] = self.sigmoid(self.convs[("dispconv", i)](x)) - - return self.outputs diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py deleted file mode 100644 index f10d557ff5a4fff03b94f81543bd58cf1a66bc8f..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py +++ /dev/null @@ -1,103 +0,0 @@ -import torch -from librosa.filters import mel as librosa_mel_fn -from .audio_processing import dynamic_range_compression -from .audio_processing import dynamic_range_decompression -from .stft import STFT -from .utils import get_mask_from_lengths - - -class LinearNorm(torch.nn.Module): - def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'): - super(LinearNorm, self).__init__() - self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias) - - torch.nn.init.xavier_uniform_( - self.linear_layer.weight, - gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, x): - return self.linear_layer(x) - - -class ConvNorm(torch.nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, - padding=None, dilation=1, bias=True, w_init_gain='linear'): - super(ConvNorm, self).__init__() - if padding is None: - assert(kernel_size % 2 == 1) - padding = int(dilation * (kernel_size - 1) / 2) - - self.conv = torch.nn.Conv1d(in_channels, out_channels, - kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, - bias=bias) - - torch.nn.init.xavier_uniform_( - self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, signal): - conv_signal = self.conv(signal) - return conv_signal - - -class GlobalAvgPool(torch.nn.Module): - def __init__(self): - super(GlobalAvgPool, 
self).__init__() - - def forward(self, x, lengths=None): - """Average pooling across time steps (dim=1) with optionally lengths. - Args: - x: torch.Tensor of shape (N, T, ...) - lengths: None or torch.Tensor of shape (N,) - dim: dimension to pool - """ - if lengths is None: - return x.mean(dim=1, keepdim=False) - else: - mask = get_mask_from_lengths(lengths).type(x.type()).to(x.device) - mask_shape = list(mask.size()) + [1 for _ in range(x.ndimension()-2)] - mask = mask.reshape(*mask_shape) - numer = (x * mask).sum(dim=1, keepdim=False) - denom = mask.sum(dim=1, keepdim=False) - return numer / denom - - -class TacotronSTFT(torch.nn.Module): - def __init__(self, filter_length=1024, hop_length=256, win_length=1024, - n_mel_channels=80, sampling_rate=22050, mel_fmin=0.0, - mel_fmax=8000.0): - super(TacotronSTFT, self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer('mel_basis', mel_basis) - - def spectral_normalize(self, magnitudes): - output = dynamic_range_compression(magnitudes) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert(torch.min(y.data) >= -1) - assert(torch.max(y.data) <= 1) - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output) - return mel_output diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/wer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/wer.py deleted file mode 100644 index 613ab50d39019f6edf67c56c2353646be2a2f17d..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/scripts/wer.py +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -""" -Implement unsupervised metric for decoding hyperparameter selection: - $$ alpha * LM_PPL + ViterbitUER(%) * 100 $$ -""" -import argparse -import logging -import sys - -import editdistance - -logging.root.setLevel(logging.INFO) -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging.getLogger(__name__) - - -def get_parser(): - parser = argparse.ArgumentParser() - parser.add_argument("-s", "--hypo", help="hypo transcription", required=True) - parser.add_argument( - "-r", "--reference", help="reference transcription", required=True - ) - return parser - - -def compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p): - d_cnt = 0 - w_cnt = 0 - w_cnt_h = 0 - for uid in hyp_uid_to_tra: - ref = ref_uid_to_tra[uid].split() - if g2p is not None: - hyp = g2p(hyp_uid_to_tra[uid]) - hyp = [p for p in hyp if p != "'" and p != " "] - hyp = [p[:-1] if p[-1].isnumeric() else p for p in hyp] - else: - hyp = hyp_uid_to_tra[uid].split() - d_cnt += editdistance.eval(ref, hyp) - w_cnt += len(ref) - w_cnt_h += len(hyp) - wer = float(d_cnt) / w_cnt - logger.debug( - ( - f"wer = {wer * 100:.2f}%; num. of ref words = {w_cnt}; " - f"num. of hyp words = {w_cnt_h}; num. of sentences = {len(ref_uid_to_tra)}" - ) - ) - return wer - - -def main(): - args = get_parser().parse_args() - - errs = 0 - count = 0 - with open(args.hypo, "r") as hf, open(args.reference, "r") as rf: - for h, r in zip(hf, rf): - h = h.rstrip().split() - r = r.rstrip().split() - errs += editdistance.eval(r, h) - count += len(r) - - logger.info(f"UER: {errs / count * 100:.2f}%") - - -if __name__ == "__main__": - main() - - -def load_tra(tra_path): - with open(tra_path, "r") as f: - uid_to_tra = {} - for line in f: - uid, tra = line.split(None, 1) - uid_to_tra[uid] = tra - logger.debug(f"loaded {len(uid_to_tra)} utterances from {tra_path}") - return uid_to_tra diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/sequence_scorer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/sequence_scorer.py deleted file mode 100644 index 411d4df4445ef8dd3f1907ad56f9de6943d1fed8..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/sequence_scorer.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -import torch -from fairseq import utils - - -class SequenceScorer(object): - """Scores the target for a given source sentence.""" - - def __init__( - self, - tgt_dict, - softmax_batch=None, - compute_alignment=False, - eos=None, - symbols_to_strip_from_output=None, - ): - self.pad = tgt_dict.pad() - self.eos = tgt_dict.eos() if eos is None else eos - self.softmax_batch = softmax_batch or sys.maxsize - assert self.softmax_batch > 0 - self.compute_alignment = compute_alignment - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.eos} - ) - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - def batch_for_softmax(dec_out, target): - # assumes decoder_out[0] is the only thing needed (may not be correct for future models!) 
- first, rest = dec_out[0], dec_out[1:] - bsz, tsz, dim = first.shape - if bsz * tsz < self.softmax_batch: - yield dec_out, target, True - else: - flat = first.contiguous().view(1, -1, dim) - flat_tgt = target.contiguous().view(flat.shape[:-1]) - s = 0 - while s < flat.size(1): - e = s + self.softmax_batch - yield (flat[:, s:e],) + rest, flat_tgt[:, s:e], False - s = e - - def gather_target_probs(probs, target): - probs = probs.gather( - dim=2, - index=target.unsqueeze(-1), - ) - return probs - - orig_target = sample["target"] - - # compute scores for each model in the ensemble - avg_probs = None - avg_attn = None - for model in models: - model.eval() - decoder_out = model(**net_input) - attn = decoder_out[1] if len(decoder_out) > 1 else None - if type(attn) is dict: - attn = attn.get("attn", None) - - batched = batch_for_softmax(decoder_out, orig_target) - probs, idx = None, 0 - for bd, tgt, is_single in batched: - sample["target"] = tgt - curr_prob = model.get_normalized_probs( - bd, log_probs=len(models) == 1, sample=sample - ).data - if is_single: - probs = gather_target_probs(curr_prob, orig_target) - else: - if probs is None: - probs = curr_prob.new(orig_target.numel()) - step = curr_prob.size(0) * curr_prob.size(1) - end = step + idx - tgt_probs = gather_target_probs( - curr_prob.view(tgt.shape + (curr_prob.size(-1),)), tgt - ) - probs[idx:end] = tgt_probs.view(-1) - idx = end - sample["target"] = orig_target - - probs = probs.view(sample["target"].shape) - - if avg_probs is None: - avg_probs = probs - else: - avg_probs.add_(probs) - if attn is not None: - if torch.is_tensor(attn): - attn = attn.data - else: - attn = attn[0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(models) > 1: - avg_probs.div_(len(models)) - avg_probs.log_() - if avg_attn is not None: - avg_attn.div_(len(models)) - - bsz = avg_probs.size(0) - hypos = [] - start_idxs = sample["start_indices"] if "start_indices" in sample else [0] * bsz - for i in range(bsz): - # remove padding from ref - ref = ( - utils.strip_pad(sample["target"][i, start_idxs[i] :], self.pad) - if sample["target"] is not None - else None - ) - tgt_len = ref.numel() - avg_probs_i = avg_probs[i][start_idxs[i] : start_idxs[i] + tgt_len] - score_i = avg_probs_i.sum() / tgt_len - if avg_attn is not None: - avg_attn_i = avg_attn[i] - if self.compute_alignment: - alignment = utils.extract_hard_alignment( - avg_attn_i, - sample["net_input"]["src_tokens"][i], - sample["target"][i], - self.pad, - self.eos, - ) - else: - alignment = None - else: - avg_attn_i = alignment = None - hypos.append( - [ - { - "tokens": ref, - "score": score_i, - "attention": avg_attn_i, - "alignment": alignment, - "positional_scores": avg_probs_i, - } - ] - ) - return hypos diff --git a/spaces/HawkEye098432/Vocals_seperator/app.py b/spaces/HawkEye098432/Vocals_seperator/app.py deleted file mode 100644 index 67a6ae7af79b6e80a195729ac563f348bf291c08..0000000000000000000000000000000000000000 --- a/spaces/HawkEye098432/Vocals_seperator/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -import gradio as gr -from scipy.io.wavfile import write - - -def inference(audio): - os.makedirs("out", exist_ok=True) - write('test.wav', audio[0], audio[1]) - os.system("python3 -m demucs.separate -n htdemucs --two-stems=vocals -d cpu test.wav -o out") - return "./out/htdemucs/test/vocals.wav","./out/htdemucs/test/no_vocals.wav" - -title = "Source Separation (v4)" - -gr.Interface( - inference, - gr.Audio(type="numpy", label="Input"), - [gr.Audio(type="filepath", 
label="Vocals"),gr.Audio(type="filepath", label="No Vocals / Instrumental")], - title=title, - ).launch() \ No newline at end of file diff --git a/spaces/HgMenon/Transcribe_V0.2/src/conversion/hf_converter.py b/spaces/HgMenon/Transcribe_V0.2/src/conversion/hf_converter.py deleted file mode 100644 index 6da4f0fd672d63b099f21d0498ba4001d23356f7..0000000000000000000000000000000000000000 --- a/spaces/HgMenon/Transcribe_V0.2/src/conversion/hf_converter.py +++ /dev/null @@ -1,67 +0,0 @@ -# https://github.com/bayartsogt-ya/whisper-multiple-hf-datasets - -from copy import deepcopy -import torch - -WHISPER_MAPPING = { - "layers": "blocks", - "fc1": "mlp.0", - "fc2": "mlp.2", - "final_layer_norm": "mlp_ln", - "layers": "blocks", - ".self_attn.q_proj": ".attn.query", - ".self_attn.k_proj": ".attn.key", - ".self_attn.v_proj": ".attn.value", - ".self_attn_layer_norm": ".attn_ln", - ".self_attn.out_proj": ".attn.out", - ".encoder_attn.q_proj": ".cross_attn.query", - ".encoder_attn.k_proj": ".cross_attn.key", - ".encoder_attn.v_proj": ".cross_attn.value", - ".encoder_attn_layer_norm": ".cross_attn_ln", - ".encoder_attn.out_proj": ".cross_attn.out", - "decoder.layer_norm.": "decoder.ln.", - "encoder.layer_norm.": "encoder.ln_post.", - "embed_tokens": "token_embedding", - "encoder.embed_positions.weight": "encoder.positional_embedding", - "decoder.embed_positions.weight": "decoder.positional_embedding", - "layer_norm": "ln_post", -} - - -def rename_keys(s_dict): - keys = list(s_dict.keys()) - for key in keys: - new_key = key - for k, v in WHISPER_MAPPING.items(): - if k in key: - new_key = new_key.replace(k, v) - - print(f"{key} -> {new_key}") - - s_dict[new_key] = s_dict.pop(key) - return s_dict - - -def convert_hf_whisper(hf_model_name_or_path: str, whisper_state_path: str): - from transformers import WhisperForConditionalGeneration - transformer_model = WhisperForConditionalGeneration.from_pretrained(hf_model_name_or_path) - config = transformer_model.config - - # first build dims - dims = { - 'n_mels': config.num_mel_bins, - 'n_vocab': config.vocab_size, - 'n_audio_ctx': config.max_source_positions, - 'n_audio_state': config.d_model, - 'n_audio_head': config.encoder_attention_heads, - 'n_audio_layer': config.encoder_layers, - 'n_text_ctx': config.max_target_positions, - 'n_text_state': config.d_model, - 'n_text_head': config.decoder_attention_heads, - 'n_text_layer': config.decoder_layers - } - - state_dict = deepcopy(transformer_model.model.state_dict()) - state_dict = rename_keys(state_dict) - - torch.save({"dims": dims, "model_state_dict": state_dict}, whisper_state_path) \ No newline at end of file diff --git a/spaces/Hina4867/bingo/src/app/layout.tsx b/spaces/Hina4867/bingo/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: 
'../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
    - {/* @ts-ignore */} -
    -
    {children}
    -
    - -
    - - - ) -} diff --git a/spaces/Hmjz100/YouTube-to-MT3/app.py b/spaces/Hmjz100/YouTube-to-MT3/app.py deleted file mode 100644 index 2a442c671143a2b5795aec29555fdcdc1d6d92ed..0000000000000000000000000000000000000000 --- a/spaces/Hmjz100/YouTube-to-MT3/app.py +++ /dev/null @@ -1,284 +0,0 @@ -import os -os.system("pip install gradio") - -import gradio as gr -from pathlib import Path -os.system("pip install gsutil") -os.system("​​​pip install glob2​​​") -import glob2 - - -os.system("git clone --branch=main https://github.com/google-research/t5x") -os.system("mv t5x t5x_tmp; mv t5x_tmp/* .; rm -r t5x_tmp") -os.system("sed -i 's:jax\[tpu\]:jax:' setup.py") -os.system("python3 -m pip install -e .") -os.system("python3 -m pip install --upgrade pip") - - -# 安装 mt3 -os.system("git clone --branch=main https://github.com/magenta/mt3") -os.system("mv mt3 mt3_tmp; mv mt3_tmp/* .; rm -r mt3_tmp") -os.system("python3 -m pip install -e .") -os.system("pip install tensorflow_cpu") -# 复制检查点 -os.system("gsutil -q -m cp -r gs://mt3/checkpoints .") - -# 复制 soundfont 文件(原始文件来自 https://sites.google.com/site/soundfonts4u) -os.system("gsutil -q -m cp gs://magentadata/soundfonts/SGM-v2.01-Sal-Guit-Bass-V1.3.sf2 .") - -#@title 导入和定义 -import functools -import os - -import numpy as np -import tensorflow.compat.v2 as tf - -import functools -import gin -import jax -import librosa -import note_seq - -import seqio -import t5 -import t5x - -from mt3 import metrics_utils -from mt3 import models -from mt3 import network -from mt3 import note_sequences -from mt3 import preprocessors -from mt3 import spectrograms -from mt3 import vocabularies - -import nest_asyncio -nest_asyncio.apply() - -SAMPLE_RATE = 16000 -SF2_PATH = 'SGM-v2.01-Sal-Guit-Bass-V1.3.sf2' - -def callbak_audio(audio, sample_rate): - return note_seq.audio_io.wav_data_to_samples_librosa( - audio, sample_rate=sample_rate) - -class InferenceModel(object): - """音乐转录的 T5X 模型包装器。""" - - def __init__(self, checkpoint_path, model_type='mt3'): - - # 模型常量。 - if model_type == 'ismir2021': - num_velocity_bins = 127 - self.encoding_spec = note_sequences.NoteEncodingSpec - self.inputs_length = 512 - elif model_type == 'mt3': - num_velocity_bins = 1 - self.encoding_spec = note_sequences.NoteEncodingWithTiesSpec - self.inputs_length = 256 - else: - raise ValueError('unknown model_type: %s' % model_type) - - gin_files = ['/home/user/app/mt3/gin/model.gin', - '/home/user/app/mt3/gin/mt3.gin'] - - self.batch_size = 8 - self.outputs_length = 1024 - self.sequence_length = {'inputs': self.inputs_length, - 'targets': self.outputs_length} - - self.partitioner = t5x.partitioning.PjitPartitioner( - model_parallel_submesh=(1, 1, 1, 1)) - - # 构建编解码器和词汇表。 - self.spectrogram_config = spectrograms.SpectrogramConfig() - self.codec = vocabularies.build_codec( - vocab_config=vocabularies.VocabularyConfig( - num_velocity_bins=num_velocity_bins)) - self.vocabulary = vocabularies.vocabulary_from_codec(self.codec) - self.output_features = { - 'inputs': seqio.ContinuousFeature(dtype=tf.float32, rank=2), - 'targets': seqio.Feature(vocabulary=self.vocabulary), - } - - # 创建 T5X 模型。 - self._parse_gin(gin_files) - self.model = self._load_model() - - # 从检查点中恢复。 - self.restore_from_checkpoint(checkpoint_path) - - @property - def input_shapes(self): - return { - 'encoder_input_tokens': (self.batch_size, self.inputs_length), - 'decoder_input_tokens': (self.batch_size, self.outputs_length) - } - - def _parse_gin(self, gin_files): - """解析用于训练模型的 gin 文件。""" - gin_bindings = [ - 'from __gin__ import 
dynamic_registration', - 'from mt3 import vocabularies', - 'VOCAB_CONFIG=@vocabularies.VocabularyConfig()', - 'vocabularies.VocabularyConfig.num_velocity_bins=%NUM_VELOCITY_BINS' - ] - with gin.unlock_config(): - gin.parse_config_files_and_bindings( - gin_files, gin_bindings, finalize_config=False) - - def _load_model(self): - """在解析训练 gin 配置后加载 T5X `Model`。""" - model_config = gin.get_configurable(network.T5Config)() - module = network.Transformer(config=model_config) - return models.ContinuousInputsEncoderDecoderModel( - module=module, - input_vocabulary=self.output_features['inputs'].vocabulary, - output_vocabulary=self.output_features['targets'].vocabulary, - optimizer_def=t5x.adafactor.Adafactor(decay_rate=0.8, step_offset=0), - input_depth=spectrograms.input_depth(self.spectrogram_config)) - - - def restore_from_checkpoint(self, checkpoint_path): - """从检查点中恢复训练状态,重置 self._predict_fn()。""" - train_state_initializer = t5x.utils.TrainStateInitializer( - optimizer_def=self.model.optimizer_def, - init_fn=self.model.get_initial_variables, - input_shapes=self.input_shapes, - partitioner=self.partitioner) - - restore_checkpoint_cfg = t5x.utils.RestoreCheckpointConfig( - path=checkpoint_path, mode='specific', dtype='float32') - - train_state_axes = train_state_initializer.train_state_axes - self._predict_fn = self._get_predict_fn(train_state_axes) - self._train_state = train_state_initializer.from_checkpoint_or_scratch( - [restore_checkpoint_cfg], init_rng=jax.random.PRNGKey(0)) - - @functools.lru_cache() - def _get_predict_fn(self, train_state_axes): - """生成一个分区的预测函数用于解码。""" - def partial_predict_fn(params, batch, decode_rng): - return self.model.predict_batch_with_aux( - params, batch, decoder_params={'decode_rng': None}) - return self.partitioner.partition( - partial_predict_fn, - in_axis_resources=( - train_state_axes.params, - t5x.partitioning.PartitionSpec('data',), None), - out_axis_resources=t5x.partitioning.PartitionSpec('data',) - ) - - def predict_tokens(self, batch, seed=0): - """从预处理的数据集批次中预测 tokens。""" - prediction, _ = self._predict_fn( - self._train_state.params, batch, jax.random.PRNGKey(seed)) - return self.vocabulary.decode_tf(prediction).numpy() - - def __call__(self, audio): - """从音频样本推断出音符序列。 - - 参数: - audio:16kHz 的单个音频样本的 1 维 numpy 数组。 - 返回: - 转录音频的音符序列。 - """ - ds = self.audio_to_dataset(audio) - ds = self.preprocess(ds) - - model_ds = self.model.FEATURE_CONVERTER_CLS(pack=False)( - ds, task_feature_lengths=self.sequence_length) - model_ds = model_ds.batch(self.batch_size) - - inferences = (tokens for batch in model_ds.as_numpy_iterator() - for tokens in self.predict_tokens(batch)) - - predictions = [] - for example, tokens in zip(ds.as_numpy_iterator(), inferences): - predictions.append(self.postprocess(tokens, example)) - - result = metrics_utils.event_predictions_to_ns( - predictions, codec=self.codec, encoding_spec=self.encoding_spec) - return result['est_ns'] - - def audio_to_dataset(self, audio): - """从输入音频创建一个包含频谱图的 TF Dataset。""" - frames, frame_times = self._audio_to_frames(audio) - return tf.data.Dataset.from_tensors({ - 'inputs': frames, - 'input_times': frame_times, - }) - - def _audio_to_frames(self, audio): - """从音频计算频谱图帧。""" - frame_size = self.spectrogram_config.hop_width - padding = [0, frame_size - len(audio) % frame_size] - audio = np.pad(audio, padding, mode='constant') - frames = spectrograms.split_audio(audio, self.spectrogram_config) - num_frames = len(audio) // frame_size - times = np.arange(num_frames) / 
self.spectrogram_config.frames_per_second - return frames, times - - def preprocess(self, ds): - pp_chain = [ - functools.partial( - t5.data.preprocessors.split_tokens_to_inputs_length, - sequence_length=self.sequence_length, - output_features=self.output_features, - feature_key='inputs', - additional_feature_keys=['input_times']), - # 在训练期间进行缓存。 - preprocessors.add_dummy_targets, - functools.partial( - preprocessors.compute_spectrograms, - spectrogram_config=self.spectrogram_config) - ] - for pp in pp_chain: - ds = pp(ds) - return ds - - def postprocess(self, tokens, example): - tokens = self._trim_eos(tokens) - start_time = example['input_times'][0] - # 向下取整到最接近的符号化时间步。 - start_time -= start_time % (1 / self.codec.steps_per_second) - return { - 'est_tokens': tokens, - 'start_time': start_time, - # 内部 MT3 代码期望原始输入,这里不使用。 - 'raw_inputs': [] - } - - @staticmethod - def _trim_eos(tokens): - tokens = np.array(tokens, np.int32) - if vocabularies.DECODED_EOS_ID in tokens: - tokens = tokens[:np.argmax(tokens == vocabularies.DECODED_EOS_ID)] - return tokens - -print(glob2.glob(".")) -inference_model = InferenceModel('/home/user/app/checkpoints/mt3/', 'mt3') - -def inference(url): - os.system(f"yt-dlp -x {url} -o 'audio.%(ext)s'") - audio = glob2.glob('audio.*')[0] - with open(audio, 'rb') as fd: - contents = fd.read() - audio = callbak_audio(contents,sample_rate=16000) - est_ns = inference_model(audio) - note_seq.sequence_proto_to_midi_file(est_ns, './transcribed.mid') - return './transcribed.mid' - -title = "YouTube-To-MT3" -description = "将YouTube音频上传到 MT3:多任务多音轨音乐转录的 Gradio 演示。感谢 akhaliq 的原始 Spaces 实现。" - -article = "

 MT3: Multi-Task Multitrack Music Transcription | Github Repo 

    " - -gr.Interface( - inference, - gr.inputs.Textbox(label="URL"), - gr.outputs.File(label="输出"), - title=title, - description=description, - article=article, - enable_queue=True - ).launch() \ No newline at end of file diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/celle/transformer.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/celle/transformer.py deleted file mode 100644 index 9186ab4772261591cbe58c9db5882f14cf3bd66a..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/celle/transformer.py +++ /dev/null @@ -1,213 +0,0 @@ -from functools import partial - -import torch -from torch import nn -import torch.nn.functional as F -from einops import rearrange - -from celle.reversible import SequentialSequence -from celle.attention import Attention - -from rotary_embedding_torch import RotaryEmbedding, broadcat -from celle.utils import exists, default, cast_tuple - -# https://arxiv.org/abs/2103.17239 -class LayerScale(nn.Module): - def __init__(self, dim, depth, fn): - super().__init__() - if depth <= 18: - init_eps = 0.1 - elif depth > 18 and depth <= 24: - init_eps = 1e-5 - else: - init_eps = 1e-6 - - scale = torch.zeros(1, 1, dim).fill_(init_eps) - self.scale = nn.Parameter(scale) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(x, **kwargs) * self.scale - - -# layer norm -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.norm_out = nn.Identity() - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - x = self.fn(x, **kwargs) - return self.norm_out(x) - - -# feed forward - - -class GEGLU(nn.Module): - def forward(self, x): - x, gates = x.chunk(2, dim=-1) - return x * F.gelu(gates) - - -class FeedForward(nn.Module): - def __init__(self, dim, dropout=0.0, mult=4.0): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, dim * mult * 2), - GEGLU(), - nn.Dropout(dropout), - nn.Linear(dim * mult, dim), - ) - - def forward(self, x): - return self.net(x) - - -# main transformer class -class Transformer(nn.Module): - def __init__( - self, - *, - dim, - depth, - seq_len, - causal=True, - heads=8, - dim_head=64, - ff_mult=4, - attn_dropout=0.0, - ff_dropout=0.0, - image_fmap_size=None, - num_images=None, - stable=False, - rotary_emb=True, - ): - super().__init__() - layers = nn.ModuleList([]) - - self.seq_len = seq_len - self.image_fmap_size = image_fmap_size - - for ind in range(depth): - - attn_class = partial(Attention, stable=stable) - - attn = attn_class( - dim, - causal=causal, - seq_len=seq_len, - heads=heads, - dim_head=dim_head, - dropout=attn_dropout, - ) - - ff = FeedForward(dim, mult=ff_mult, dropout=ff_dropout) - - layers.append( - nn.ModuleList( - [ - LayerScale( - dim, ind + 1, PreNorm(dim, attn) - ), - LayerScale( - dim, ind + 1, PreNorm(dim, ff) - ), - ] - ) - ) - - # pairs arguments with attention layer - route_attn = ((True, False),) * depth - attn_route_map = { - "mask": route_attn, - "rotary_pos_emb": route_attn, - } - - self.layers = SequentialSequence(layers, args_route=attn_route_map) - - # generate positional embeddings for rotary - - pos_emb = None - if rotary_emb: - rot_dim = dim_head // 3 - img_seq_len = ((image_fmap_size // num_images) ** 2) * num_images - - text_len = seq_len - img_seq_len + 1 - - text_pos_emb = RotaryEmbedding(dim=rot_dim) - - img_axial_pos_emb = RotaryEmbedding(dim=rot_dim, freqs_for="pixel") - - text_freqs = text_pos_emb(torch.arange(text_len)) - - img_to_text_freqs = text_pos_emb( - 
torch.full((img_seq_len,), 8192) - ) # image is given a position far away from text - - text_freqs = torch.cat((text_freqs, img_to_text_freqs), dim=0) - - img_freqs_axial = img_axial_pos_emb( - torch.linspace(-1, 1, steps=image_fmap_size) - ) - - if num_images > 1: - split_img_freqs_axial = torch.split( - img_freqs_axial, image_fmap_size // num_images, dim=0 - ) - - split_img_freqs = [ - broadcat( - ( - rearrange(img_freqs_axial_per_image, "i d -> i () d"), - rearrange(img_freqs_axial_per_image, "j d -> () j d"), - ), - dim=-1, - ) - for img_freqs_axial_per_image in split_img_freqs_axial - ] - - split_img_freqs = [ - rearrange(img_freqs_per_image, "h w d -> (h w) d") - for img_freqs_per_image in split_img_freqs - ] - - # concat per image-image_freqs - - img_freqs = torch.cat(split_img_freqs, dim=0) - - elif num_images == 1: - img_freqs = broadcat( - ( - rearrange(img_freqs_axial, "i d -> i () d"), - rearrange(img_freqs_axial, "j d -> () j d"), - ), - dim=-1, - ) - - img_freqs = rearrange(img_freqs, "h w d -> (h w) d") - - else: - assert False, "num_images must be int greater than 0" - self.img_axial_pos_emb = img_axial_pos_emb - self.text_pos_emb = text_pos_emb - - text_axial_freqs = img_axial_pos_emb( - torch.full((text_len,), -10.0) - ) # text is given a position of -10 apart from the image axial positions, which is from range [-1, 1] - - text_axial_freqs = torch.cat((text_axial_freqs, text_axial_freqs), dim=-1) - - img_freqs = torch.cat((text_axial_freqs, img_freqs), dim=0) - - pos_emb = torch.cat((text_freqs, img_freqs), dim=-1) - - pos_emb = rearrange(pos_emb, "n d -> () n d") - - self.register_buffer("pos_emb", pos_emb) - - def forward(self, x, **kwargs): - return self.layers(x, rotary_pos_emb=self.pos_emb, **kwargs) \ No newline at end of file diff --git a/spaces/IAMTFRMZA/image-recognition-demo/README.md b/spaces/IAMTFRMZA/image-recognition-demo/README.md deleted file mode 100644 index f77ccee6f2bda2b096842aa2c51798e3a1be3de6..0000000000000000000000000000000000000000 --- a/spaces/IAMTFRMZA/image-recognition-demo/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Image Recognition Demo -emoji: 🚀 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: afl-3.0 -duplicated_from: hasibzunair/image-recognition-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - -# Image Recognition Demo -Code for a simple demo of an image recognition system built with Gradio and served on HuggingFace Spaces. App is live at https://huggingface.co/spaces/hasibzunair/image-recognition-demo. - -### References -* https://huggingface.co/docs/hub/spaces-github-actions -* https://www.gradio.app/image_classification_in_pytorch/ -* GH Actions https://youtu.be/8hOzsFETm4I \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/data/collaters.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/data/collaters.py deleted file mode 100644 index 6acfec876b87e5a00bc92083b1181301a2a18e3f..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/data/collaters.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" - This module contains collection of classes which implement - collate functionalities for various tasks. 
- - Collaters should know what data to expect for each sample - and they should pack / collate them into batches -""" - - -from __future__ import absolute_import, division, print_function, unicode_literals - -import numpy as np -import torch -from fairseq.data import data_utils as fairseq_data_utils - - -class Seq2SeqCollater(object): - """ - Implements collate function mainly for seq2seq tasks - This expects each sample to contain feature (src_tokens) and - targets. - This collator is also used for aligned training task. - """ - - def __init__( - self, - feature_index=0, - label_index=1, - pad_index=1, - eos_index=2, - move_eos_to_beginning=True, - ): - self.feature_index = feature_index - self.label_index = label_index - self.pad_index = pad_index - self.eos_index = eos_index - self.move_eos_to_beginning = move_eos_to_beginning - - def _collate_frames(self, frames): - """Convert a list of 2d frames into a padded 3d tensor - Args: - frames (list): list of 2d frames of size L[i]*f_dim. Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3d tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - len_max = max(frame.size(0) for frame in frames) - f_dim = frames[0].size(1) - res = frames[0].new(len(frames), len_max, f_dim).fill_(0.0) - - for i, v in enumerate(frames): - res[i, : v.size(0)] = v - - return res - - def collate(self, samples): - """ - utility function to collate samples into batch for speech recognition. - """ - if len(samples) == 0: - return {} - - # parse samples into torch tensors - parsed_samples = [] - for s in samples: - # skip invalid samples - if s["data"][self.feature_index] is None: - continue - source = s["data"][self.feature_index] - if isinstance(source, (np.ndarray, np.generic)): - source = torch.from_numpy(source) - target = s["data"][self.label_index] - if isinstance(target, (np.ndarray, np.generic)): - target = torch.from_numpy(target).long() - elif isinstance(target, list): - target = torch.LongTensor(target) - - parsed_sample = {"id": s["id"], "source": source, "target": target} - parsed_samples.append(parsed_sample) - samples = parsed_samples - - id = torch.LongTensor([s["id"] for s in samples]) - frames = self._collate_frames([s["source"] for s in samples]) - # sort samples by descending number of frames - frames_lengths = torch.LongTensor([s["source"].size(0) for s in samples]) - frames_lengths, sort_order = frames_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - frames = frames.index_select(0, sort_order) - - target = None - target_lengths = None - prev_output_tokens = None - if samples[0].get("target", None) is not None: - ntokens = sum(len(s["target"]) for s in samples) - target = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, sort_order) - target_lengths = torch.LongTensor( - [s["target"].size(0) for s in samples] - ).index_select(0, sort_order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=self.move_eos_to_beginning, - ) - prev_output_tokens = prev_output_tokens.index_select(0, sort_order) - else: - ntokens = sum(len(s["source"]) for s in samples) - - batch = { - "id": id, - "ntokens": ntokens, - "net_input": {"src_tokens": frames, "src_lengths": frames_lengths}, - "target": target, - "target_lengths": 
target_lengths, - "nsentences": len(samples), - } - if prev_output_tokens is not None: - batch["net_input"]["prev_output_tokens"] = prev_output_tokens - return batch diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/README.md b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/README.md deleted file mode 100644 index 0b213fd202d04bce2149936ec149c23c6d483745..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/README.md +++ /dev/null @@ -1,103 +0,0 @@ -# wav2vec Unsupervised (wav2vec-U) - -Wav2vec Unsupervised (wav2vec-U) is a framework for building speech recognition systems without any labeled training data as described in [Unsupervised Speech Recognition (Baevski et al., 2021)](https://ai.facebook.com/research/publications/unsupervised-speech-recognition). The model takes as input wav2vec 2.0 or XLSR representations (see [pretrained models](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec)) as well as unlabeled speech and text data. - - The wav2vec-U training procedure consists of three consecutive main steps: -* Preparation of speech representations and text data -* Generative adversarial training (GAN) -* Iterative self-training + Kaldi LM-decoding - -## Preparation of speech and text data -Similar to [wav2vec 2.0](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md), data folders contain {train,valid,test}.{tsv,wrd,phn} files, where audio paths are stored in tsv files, and word, letter or phoneme transcriptions are stored in .{wrd,ltr,phn}. - -In **/path/to/data/with_silence** you need a *train.tsv* file as well as (optionally) *{valid,test}.{tsv,wrd,phn}*. It is nice to have *10h.{tsv,phn}* files there too for reproducing the ablation study on layer selection. In **/path/to/data/without_silence** you have the same files, except *.tsv* files contain audios with silences removed using rVAD. - -Pre-requisites: -* set FAIRSEQ_ROOT environmental variable to your fairseq installation -* set RVAD_ROOT environmental variable to a checkout of [rVADfast](https://github.com/zhenghuatan/rVADfast) -* set KENLM_ROOT environmental variable to the location of [KenLM](https://github.com/kpu/kenlm) binaries -* install [PyKaldi](https://github.com/pykaldi/pykaldi) and set KALDI_ROOT environmental variable to the location of your kaldi installation. To use the version bundled with PyKaldi, you can use /path/to/pykaldi/tools/kaldi - -Create new audio files without silences: -```shell -# create a manifest file for the set original of audio files -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0 - -python scripts/vads.py -r $RVAD_ROOT < /path/to/train.tsv > train.vads - -python scripts/remove_silence.py --tsv /path/to/train.tsv --vads train.vads --out /dir/to/save/audio/files - -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0.01 -``` - -Next, we need to preprocess the audio data to better match phonemized text data: - -```shell -zsh scripts/prepare_audio.sh /dir/with/{train,test,valid}.tsv /output/dir /path/to/wav2vec2/model.pt 512 14 -``` -Note that if you have splits different than train/valid/test, you will need to modify this script. The last two arguments are the PCA dimensionality and the 0-based index of the layer from which to extract representations. 
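For intuition, the sketch below shows what that final PCA step amounts to, assuming the chosen layer's wav2vec 2.0 features have already been dumped to a `(num_frames, feature_dim)` NumPy array; the file names and shapes are illustrative assumptions, not part of the fairseq pipeline itself.

```python
# Minimal sketch of the PCA reduction controlled by the "512" argument.
# Not the fairseq implementation; file names and shapes are assumptions.
import numpy as np
from sklearn.decomposition import PCA

feats = np.load("train_layer14_features.npy")   # hypothetical dump, e.g. shape (N, 1024)

pca = PCA(n_components=512)          # target dimensionality from the command line
reduced = pca.fit_transform(feats)   # shape (N, 512)

np.save("train_pca512_features.npy", reduced)
print(feats.shape, "->", reduced.shape)
```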
- -Now we need to prepare text data: -```shell -zsh scripts/prepare_text.sh language /path/to/text/file /output/dir 1000 espeak /path/to/fasttext/lid/model -``` - -The fourth argument is minimum number observations of phones to keep. If your text corpus is small, you might want to reduce this number. - -The fifth argument is which phonemizer to use. Supported values are [espeak](http://espeak.sourceforge.net/), [espeak-ng](https://github.com/espeak-ng/espeak-ng), and [G2P](https://github.com/Kyubyong/g2p) (english only). - -Pre-trained fasttext LID models can be downloaded [here](https://fasttext.cc/docs/en/language-identification.html). - -### Prepare TIMIT data -TIMIT transcripts include silence. Therefore VAD is not used for audio preprocessing, and we do not wrap transcripts with silences or insert random silence in between words. - -To prepare TIMIT data for both the matched an unmatched setup: -```shell -bash scripts/prepare_timit.sh /dir/to/timit/raw/data /output/dir /path/to/wav2vec2/model.pt -``` - -Note that we assume the TIMIT distribution with capitalized directories and filenames are used (e.g., `TRAIN/DR1/FCJF0/SA1.PHN`). - -## Generative adversarial training (GAN) - -We then use a GAN model to build a first unsupervised ASR model. The data preparation above of both speech features and text data is a necessary procedure that enables the generator to match speech to text in an unsupervised way. - -Launching GAN training on top of preprocessed features, with default hyperparameters can be done with: - -``` -PREFIX=w2v_unsup_gan_xp -TASK_DATA=/path/to/features/precompute_unfiltered_pca512_cls128_mean_pooled -TEXT_DATA=/path/to/data/phones # path to fairseq-preprocessed GAN data (phones dir) -KENLM_PATH=/path/to/data/phones/kenlm.phn.o4.bin # KenLM 4-gram phoneme language model (LM data = GAN data here) - -PYTHONPATH=$FAIRSEQ_ROOT PREFIX=$PREFIX fairseq-hydra-train \ - -m --config-dir config/gan \ - --config-name w2vu \ - task.data=${TASK_DATA} \ - task.text_data=${TEXT_DATA} \ - task.kenlm_path=${KENLM_PATH} \ - common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ - model.code_penalty=2,4 model.gradient_penalty=1.5,2.0 \ - model.smoothness_weight=0.5,0.75,1.0 'common.seed=range(0,5)' -``` - - -Once we find the best checkpoint (chosen using unsupervised metric that combined language model perplexity and vocabulary usage), we can use it to generate phone labels (or word labels with an appropriate kaldi WFST): - -```shell -python w2vu_generate.py --config-dir config/generate --config-name viterbi \ -fairseq.common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ -fairseq.task.data=/path/to/dir/with/features \ -fairseq.common_eval.path=/path/to/gan/checkpoint \ -fairseq.dataset.gen_subset=valid results_path=/where/to/save/transcriptions -``` - -The decoding without LM works best on the same adjacent-mean-pooled features that the gan was trained on, while decoding with LM works better on features before the adjacent timestep mean-pooling step (without the "_pooled" suffix). - -## Iterative self-training + Kaldi LM-decoding -After the GAN training provides a first unsupervised model, we can then progressively refine the quality of transcriptions using several iterations of semi-supervised learning. We perform two iterations: first, pseudo-label the training data with the unsupervised GAN model and train an HMM on the pseudo-labels. 
Second, we relabel the training data with the HMM and then fine-tune the original wav2vec 2.0 model using the HMM pseudo-labels with a CTC loss. Note that HMM models use phonemes as output, while wav2vec 2.0 use letter. Both are decoded using WFST decoders into words. - - -Please see [this README](kaldi_self_train/README.md) for more instructions on how to do iterative self-training + Kaldi LM-decoding. - -*** Note: these instructions are a work in progress and will be updated over the next few days diff --git a/spaces/Ifan/instant-ngp/README.md b/spaces/Ifan/instant-ngp/README.md deleted file mode 100644 index cf4fc8372c24ca4b1163b6057a92cce948dd7916..0000000000000000000000000000000000000000 --- a/spaces/Ifan/instant-ngp/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Instant Ngp -emoji: 📊 -colorFrom: green -colorTo: pink -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/directory.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/directory.py deleted file mode 100644 index bc42e00500c7a5b70b2cef83b03e45b5bb471ff8..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/visualizers/directory.py +++ /dev/null @@ -1,36 +0,0 @@ -import os - -import cv2 -import numpy as np - -from saicinpainting.training.visualizers.base import BaseVisualizer, visualize_mask_and_images_batch -from saicinpainting.utils import check_and_warn_input_range - - -class DirectoryVisualizer(BaseVisualizer): - DEFAULT_KEY_ORDER = 'image predicted_image inpainted'.split(' ') - - def __init__(self, outdir, key_order=DEFAULT_KEY_ORDER, max_items_in_batch=10, - last_without_mask=True, rescale_keys=None): - self.outdir = outdir - os.makedirs(self.outdir, exist_ok=True) - self.key_order = key_order - self.max_items_in_batch = max_items_in_batch - self.last_without_mask = last_without_mask - self.rescale_keys = rescale_keys - - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - check_and_warn_input_range(batch['image'], 0, 1, 'DirectoryVisualizer target image') - vis_img = visualize_mask_and_images_batch(batch, self.key_order, max_items=self.max_items_in_batch, - last_without_mask=self.last_without_mask, - rescale_keys=self.rescale_keys) - - vis_img = np.clip(vis_img * 255, 0, 255).astype('uint8') - - curoutdir = os.path.join(self.outdir, f'epoch{epoch_i:04d}{suffix}') - os.makedirs(curoutdir, exist_ok=True) - rank_suffix = f'_r{rank}' if rank is not None else '' - out_fname = os.path.join(curoutdir, f'batch{batch_i:07d}{rank_suffix}.jpg') - - vis_img = cv2.cvtColor(vis_img, cv2.COLOR_RGB2BGR) - cv2.imwrite(out_fname, vis_img) diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/prompt_encoder.py b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/prompt_encoder.py deleted file mode 100644 index c3143f4f8e02ddd7ca8587b40ff5d47c3a6b7ef3..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/prompt_encoder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
- -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torch import nn - -from typing import Any, Optional, Tuple, Type - -from .common import LayerNorm2d - - -class PromptEncoder(nn.Module): - def __init__( - self, - embed_dim: int, - image_embedding_size: Tuple[int, int], - input_image_size: Tuple[int, int], - mask_in_chans: int, - activation: Type[nn.Module] = nn.GELU, - ) -> None: - """ - Encodes prompts for input to SAM's mask decoder. - - Arguments: - embed_dim (int): The prompts' embedding dimension - image_embedding_size (tuple(int, int)): The spatial size of the - image embedding, as (H, W). - input_image_size (int): The padded size of the image as input - to the image encoder, as (H, W). - mask_in_chans (int): The number of hidden channels used for - encoding input masks. - activation (nn.Module): The activation to use when encoding - input masks. - """ - super().__init__() - self.embed_dim = embed_dim - self.input_image_size = input_image_size - self.image_embedding_size = image_embedding_size - self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) - - self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners - point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)] - self.point_embeddings = nn.ModuleList(point_embeddings) - self.not_a_point_embed = nn.Embedding(1, embed_dim) - - self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1]) - self.mask_downscaling = nn.Sequential( - nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans // 4), - activation(), - nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans), - activation(), - nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), - ) - self.no_mask_embed = nn.Embedding(1, embed_dim) - - def get_dense_pe(self) -> torch.Tensor: - """ - Returns the positional encoding used to encode point prompts, - applied to a dense set of points the shape of the image encoding. 
- - Returns: - torch.Tensor: Positional encoding with shape - 1x(embed_dim)x(embedding_h)x(embedding_w) - """ - return self.pe_layer(self.image_embedding_size).unsqueeze(0) - - def _embed_points( - self, - points: torch.Tensor, - labels: torch.Tensor, - pad: bool, - ) -> torch.Tensor: - """Embeds point prompts.""" - points = points + 0.5 # Shift to center of pixel - if pad: - padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) - padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) - points = torch.cat([points, padding_point], dim=1) - labels = torch.cat([labels, padding_label], dim=1) - point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) - point_embedding[labels == -1] = 0.0 - point_embedding[labels == -1] += self.not_a_point_embed.weight - point_embedding[labels == 0] += self.point_embeddings[0].weight - point_embedding[labels == 1] += self.point_embeddings[1].weight - return point_embedding - - def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: - """Embeds box prompts.""" - boxes = boxes + 0.5 # Shift to center of pixel - coords = boxes.reshape(-1, 2, 2) - corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) - corner_embedding[:, 0, :] += self.point_embeddings[2].weight - corner_embedding[:, 1, :] += self.point_embeddings[3].weight - return corner_embedding - - def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: - """Embeds mask inputs.""" - mask_embedding = self.mask_downscaling(masks) - return mask_embedding - - def _get_batch_size( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> int: - """ - Gets the batch size of the output given the batch size of the input prompts. - """ - if points is not None: - return points[0].shape[0] - elif boxes is not None: - return boxes.shape[0] - elif masks is not None: - return masks.shape[0] - else: - return 1 - - def _get_device(self) -> torch.device: - return self.point_embeddings[0].weight.device - - def forward( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Embeds different types of prompts, returning both sparse and dense - embeddings. - - Arguments: - points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates - and labels to embed. - boxes (torch.Tensor or none): boxes to embed - masks (torch.Tensor or none): masks to embed - - Returns: - torch.Tensor: sparse embeddings for the points and boxes, with shape - BxNx(embed_dim), where N is determined by the number of input points - and boxes. 
- torch.Tensor: dense embeddings for the masks, in the shape - Bx(embed_dim)x(embed_H)x(embed_W) - """ - bs = self._get_batch_size(points, boxes, masks) - sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device()) - if points is not None: - coords, labels = points - point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) - sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) - if boxes is not None: - box_embeddings = self._embed_boxes(boxes) - sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) - - if masks is not None: - dense_embeddings = self._embed_masks(masks) - else: - dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( - bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] - ) - - return sparse_embeddings, dense_embeddings - - -class PositionEmbeddingRandom(nn.Module): - """ - Positional encoding using random spatial frequencies. - """ - - def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: - super().__init__() - if scale is None or scale <= 0.0: - scale = 1.0 - self.register_buffer( - "positional_encoding_gaussian_matrix", - scale * torch.randn((2, num_pos_feats)), - ) - - def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: - """Positionally encode points that are normalized to [0,1].""" - # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape - coords = 2 * coords - 1 - coords = coords @ self.positional_encoding_gaussian_matrix - coords = 2 * np.pi * coords - # outputs d_1 x ... x d_n x C shape - return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) - - def forward(self, size: Tuple[int, int]) -> torch.Tensor: - """Generate positional encoding for a grid of the specified size.""" - h, w = size - device: Any = self.positional_encoding_gaussian_matrix.device - grid = torch.ones((h, w), device=device, dtype=torch.float32) - y_embed = grid.cumsum(dim=0) - 0.5 - x_embed = grid.cumsum(dim=1) - 0.5 - y_embed = y_embed / h - x_embed = x_embed / w - - pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) - return pe.permute(2, 0, 1) # C x H x W - - def forward_with_coords( - self, coords_input: torch.Tensor, image_size: Tuple[int, int] - ) -> torch.Tensor: - """Positionally encode points that are not normalized to [0,1].""" - coords = coords_input.clone() - coords[:, :, 0] = coords[:, :, 0] / image_size[1] - coords[:, :, 1] = coords[:, :, 1] / image_size[0] - return self._pe_encoding(coords.to(torch.float)) # B x N x C diff --git a/spaces/Isaoudata/WaltWhitman-GPT/app.py b/spaces/Isaoudata/WaltWhitman-GPT/app.py deleted file mode 100644 index 8d0e19cd6fe03223b0cf7aaad5e4e16bef911b82..0000000000000000000000000000000000000000 --- a/spaces/Isaoudata/WaltWhitman-GPT/app.py +++ /dev/null @@ -1,33 +0,0 @@ -from transformers import GPT2Tokenizer -import torch -import streamlit as st - - -tokenizer = GPT2Tokenizer.from_pretrained('gpt2') -tokenizer.pad_token = tokenizer.eos_token - -model = torch.load("poem_model.pt") - -def infer(inp): - inp = tokenizer(inp,return_tensors="pt") - X = inp["input_ids"] #.to(device) - a = inp["attention_mask"] #.to(device) - output = model.generate(X, - attention_mask=a, - max_length=100, - min_length=10, - early_stopping=True, - num_beams=5, - no_repeat_ngram_size=2) - - output = tokenizer.decode(output[0]) - - return output - -st.title("WaltWhitman-GPT By Ilyas") - -text = st.text_area("Enter Prompt") -if st.button("Generate Poem"): - if text: - output = 
infer(text) - st.write(output) \ No newline at end of file diff --git a/spaces/Jack7510/trychatgpt/README.md b/spaces/Jack7510/trychatgpt/README.md deleted file mode 100644 index 1149208d02aaa7c6132e348134b7c966fa70d683..0000000000000000000000000000000000000000 --- a/spaces/Jack7510/trychatgpt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Trychatgpt -emoji: 🏢 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JayKen/YSF-External-Testing/README.md b/spaces/JayKen/YSF-External-Testing/README.md deleted file mode 100644 index 6b712581898331774bc443617ec4a23497853a2c..0000000000000000000000000000000000000000 --- a/spaces/JayKen/YSF-External-Testing/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: YSF X Dell Old -emoji: 😻 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/sampler.py b/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/sampler.py deleted file mode 100644 index 2c42d6f964d92658e769df95a81dec92250e5a99..0000000000000000000000000000000000000000 --- a/spaces/Kayson/InstructDiffusion/stable_diffusion/ldm/models/diffusion/dpm_solver/sampler.py +++ /dev/null @@ -1,82 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch - -from .dpm_solver import NoiseScheduleVP, model_wrapper, DPM_Solver - - -class DPMSolverSampler(object): - def __init__(self, model, **kwargs): - super().__init__() - self.model = model - to_torch = lambda x: x.clone().detach().to(torch.float32).to(model.device) - self.register_buffer('alphas_cumprod', to_torch(model.alphas_cumprod)) - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... 
- **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - - # print(f'Data shape for DPM-Solver sampling is {size}, sampling steps {S}') - - device = self.model.betas.device - if x_T is None: - img = torch.randn(size, device=device) - else: - img = x_T - - ns = NoiseScheduleVP('discrete', alphas_cumprod=self.alphas_cumprod) - - model_fn = model_wrapper( - lambda x, t, c: self.model.apply_model(x, t, c), - ns, - model_type="noise", - guidance_type="classifier-free", - condition=conditioning, - unconditional_condition=unconditional_conditioning, - guidance_scale=unconditional_guidance_scale, - ) - - dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False) - x = dpm_solver.sample(img, steps=S, skip_type="time_uniform", method="multistep", order=2, lower_order_final=True) - - return x.to(device), None diff --git a/spaces/Keay/Sae/Dockerfile b/spaces/Keay/Sae/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Keay/Sae/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Kevin676/Shanghainese-TTS-demo/models.py b/spaces/Kevin676/Shanghainese-TTS-demo/models.py deleted file mode 100644 index 0a722b1a69fa5b5bd96da7cf225664df181cd027..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Shanghainese-TTS-demo/models.py +++ /dev/null @@ -1,535 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
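        # VITS-style stochastic duration predictor: the ConvFlow/Flip stack built below lets the reverse pass sample log-durations from noise instead of regressing a single deterministic value.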
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - if self.n_vocab!=0: - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - if self.n_vocab!=0: - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - 
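    # HiFi-GAN-style decoder: conv_pre projects the latent sequence, transposed convolutions upsample it, multi-kernel ResBlocks refine each scale, and conv_post + tanh emit the waveform; an optional speaker embedding g is added after conv_pre.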
def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 
41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * 
logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
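        # Voice conversion path: encode the source audio with the source-speaker embedding, map it into the shared prior space via the flow, invert the flow under the target-speaker embedding, then decode back to a waveform.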
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/KyanChen/BuildingExtraction/Models/BackBone/__init__.py b/spaces/KyanChen/BuildingExtraction/Models/BackBone/__init__.py deleted file mode 100644 index 0da09945c4d8f63fd343aeaf9059c228e554b656..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/BuildingExtraction/Models/BackBone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from Models.BackBone.GetBackbone import * \ No newline at end of file diff --git a/spaces/LEBEI/00002/app.py b/spaces/LEBEI/00002/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- a/spaces/LEBEI/00002/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. - -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - 根据http://www.ietf.org/rfc/rfc1738.txt,由uuid编码扩bai大字符域生成du串 - 包括:[0-9a-zA-Z\-_]共64个 - 长度:(32-2)/3*2=20 - 备注:可在地球上人zhi人都用,使用100年不重复(2^120) - :return:String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize : Cartoonize -) -> tuple[PIL.Image.Image]: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( 
- enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/Lamai/LAMAIGPT/data_ingestion.py b/spaces/Lamai/LAMAIGPT/data_ingestion.py deleted file mode 100644 index b89a33dafd15c2e7bded0445a741a4a1c47ed417..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/data_ingestion.py +++ /dev/null @@ -1,96 +0,0 @@ -import argparse -import logging - -from autogpt.commands.file_operations import ingest_file, search_files -from autogpt.config import Config -from autogpt.memory import get_memory - -cfg = Config() - - -def configure_logging(): - logging.basicConfig( - filename="log-ingestion.txt", - filemode="a", - format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s", - datefmt="%H:%M:%S", - level=logging.DEBUG, - ) - return logging.getLogger("AutoGPT-Ingestion") - - -def ingest_directory(directory, memory, args): - """ - Ingest all files in a directory by calling the ingest_file function for each file. - - :param directory: The directory containing the files to ingest - :param memory: An object with an add() method to store the chunks in memory - """ - try: - files = search_files(directory) - for file in files: - ingest_file(file, memory, args.max_length, args.overlap) - except Exception as e: - print(f"Error while ingesting directory '{directory}': {str(e)}") - - -def main() -> None: - logger = configure_logging() - - parser = argparse.ArgumentParser( - description="Ingest a file or a directory with multiple files into memory. " - "Make sure to set your .env before running this script." - ) - group = parser.add_mutually_exclusive_group(required=True) - group.add_argument("--file", type=str, help="The file to ingest.") - group.add_argument( - "--dir", type=str, help="The directory containing the files to ingest." - ) - parser.add_argument( - "--init", - action="store_true", - help="Init the memory and wipe its content (default: False)", - default=False, - ) - parser.add_argument( - "--overlap", - type=int, - help="The overlap size between chunks when ingesting files (default: 200)", - default=200, - ) - parser.add_argument( - "--max_length", - type=int, - help="The max_length of each chunk when ingesting files (default: 4000)", - default=4000, - ) - - args = parser.parse_args() - - # Initialize memory - memory = get_memory(cfg, init=args.init) - print("Using memory of type: " + memory.__class__.__name__) - - if args.file: - try: - ingest_file(args.file, memory, args.max_length, args.overlap) - print(f"File '{args.file}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting file '{args.file}': {str(e)}") - print(f"Error while ingesting file '{args.file}': {str(e)}") - elif args.dir: - try: - ingest_directory(args.dir, memory, args) - print(f"Directory '{args.dir}' ingested successfully.") - except Exception as e: - logger.error(f"Error while ingesting directory '{args.dir}': {str(e)}") - print(f"Error while ingesting directory '{args.dir}': {str(e)}") - else: - print( - "Please provide either a file path (--file) or a directory name (--dir)" - " inside the auto_gpt_workspace directory as input." 
- ) - - -if __name__ == "__main__": - main() diff --git a/spaces/LaynzKunz/Advanced-RVC-Inference/README.md b/spaces/LaynzKunz/Advanced-RVC-Inference/README.md deleted file mode 100644 index bf2ca8b63ec292bdd3436b7251d727570274cd3d..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Advanced-RVC-Inference/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Advanced RVC Inference -emoji: ⚡ -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.44.4 -app_file: app2.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/train/data_utils.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/train/data_utils.py deleted file mode 100644 index ad2f8a08718c0e55e9809bc9ac7fe70b58f6f310..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/train/data_utils.py +++ /dev/null @@ -1,517 +0,0 @@ -import os -import traceback -import logging - -logger = logging.getLogger(__name__) - -import numpy as np -import torch -import torch.utils.data - -from lib.infer.infer_libs.train.mel_processing import spectrogram_torch -from lib.infer.infer_libs.train.utils import load_filepaths_and_text, load_wav_to_torch - - -class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - pitch = audiopath_and_text[2] - pitchf = audiopath_and_text[3] - dv = audiopath_and_text[4] - - phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - # print(123,phone.shape,pitch.shape,spec.shape) - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - # amor - len_wav = len_min * self.hop_length - - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - - phone = phone[:len_min, :] - pitch = pitch[:len_min] - pitchf = pitchf[:len_min] - 
- return (spec, wav, phone, pitch, pitchf, dv) - - def get_labels(self, phone, pitch, pitchf): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - pitch = np.load(pitch) - pitchf = np.load(pitchf) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - # print(234,phone.shape,pitch.shape) - phone = phone[:n_num, :] - pitch = pitch[:n_num] - pitchf = pitchf[:n_num] - phone = torch.FloatTensor(phone) - pitch = torch.LongTensor(pitch) - pitchf = torch.FloatTensor(pitchf) - return phone, pitch, pitchf - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - logger.warn("%s %s", spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollateMultiNSFsid: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) # (spec, wav, phone, pitch) - pitch_padded = torch.LongTensor(len(batch), max_phone_len) - pitchf_padded = torch.FloatTensor(len(batch), max_phone_len) - phone_padded.zero_() - pitch_padded.zero_() - pitchf_padded.zero_() - # dv = torch.FloatTensor(len(batch), 256)#gin=256 - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = 
phone - phone_lengths[i] = phone.size(0) - - pitch = row[3] - pitch_padded[i, : pitch.size(0)] = pitch - pitchf = row[4] - pitchf_padded[i, : pitchf.size(0)] = pitchf - - # dv[i] = row[5] - sid[i] = row[5] - - return ( - phone_padded, - phone_lengths, - pitch_padded, - pitchf_padded, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - # dv - sid, - ) - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - dv = audiopath_and_text[2] - - phone = self.get_labels(phone) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - len_wav = len_min * self.hop_length - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - phone = phone[:len_min, :] - return (spec, wav, phone, dv) - - def get_labels(self, phone): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - phone = phone[:n_num, :] - phone = torch.FloatTensor(phone) - return phone - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - logger.warn("%s %s", spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - 
center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) - phone_padded.zero_() - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - sid[i] = row[3] - - return ( - phone_padded, - phone_lengths, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - sid, - ) - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
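    A concrete sketch with made-up numbers (the boundaries, lengths and batch
    size below are illustrative assumptions, not values from any training
    config). Every batch the sampler yields draws all of its indices from a
    single bucket, so the spectrogram lengths inside a batch stay similar:

        # dataset.lengths = [120, 250, 90, 310, 180], boundaries = [100, 200, 300]
        # -> bucket 0 (100 < length <= 200): indices 0 and 4
        # -> bucket 1 (200 < length <= 300): index 1
        # -> indices 2 (90 <= 100) and 3 (310 > 300) are discarded
        sampler = DistributedBucketSampler(dataset, batch_size=2,
                                           boundaries=[100, 200, 300],
                                           num_replicas=1, rank=0)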
- """ - - def __init__( - self, - dataset, - batch_size, - boundaries, - num_replicas=None, - rank=None, - shuffle=True, - ): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, -1, -1): # - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = ( - total_batch_size - (len_bucket % total_batch_size) - ) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ( - ids_bucket - + ids_bucket * (rem // len_bucket) - + ids_bucket[: (rem % len_bucket)] - ) - - # subsample - ids_bucket = ids_bucket[self.rank :: self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [ - bucket[idx] - for idx in ids_bucket[ - j * self.batch_size : (j + 1) * self.batch_size - ] - ] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/pretrained.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/pretrained.py deleted file mode 100644 index 6aac5db100cc7a9084af96d2cd083f0c8fac473c..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/demucs/pretrained.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# author: adefossez - -import logging - -from diffq import DiffQuantizer -import torch.hub - -from .model import Demucs -from .tasnet import ConvTasNet -from .utils import set_state - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/" - -PRETRAINED_MODELS = { - 'demucs': 'e07c671f', - 'demucs48_hq': '28a1282c', - 'demucs_extra': '3646af93', - 'demucs_quantized': '07afea75', - 'tasnet': 'beb46fac', - 'tasnet_extra': 'df3777b2', - 'demucs_unittest': '09ebc15f', -} - -SOURCES = ["drums", "bass", "other", "vocals"] - - -def get_url(name): - sig = PRETRAINED_MODELS[name] - return ROOT + name + "-" + sig[:8] + ".th" - - -def is_pretrained(name): - return name in PRETRAINED_MODELS - - -def load_pretrained(name): - if name == "demucs": - return demucs(pretrained=True) - elif name == "demucs48_hq": - return demucs(pretrained=True, hq=True, channels=48) - elif name == "demucs_extra": - return demucs(pretrained=True, extra=True) - elif name == "demucs_quantized": - return demucs(pretrained=True, quantized=True) - elif name == "demucs_unittest": - return demucs_unittest(pretrained=True) - elif name == "tasnet": - return tasnet(pretrained=True) - elif name == "tasnet_extra": - return tasnet(pretrained=True, extra=True) - else: - raise ValueError(f"Invalid pretrained name {name}") - - -def _load_state(name, model, quantizer=None): - url = get_url(name) - state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True) - set_state(model, quantizer, state) - if quantizer: - quantizer.detach() - - -def demucs_unittest(pretrained=True): - model = Demucs(channels=4, sources=SOURCES) - if pretrained: - _load_state('demucs_unittest', model) - return model - - -def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64): - if not pretrained and (extra or quantized or hq): - raise ValueError("if extra or quantized is True, pretrained must be True.") - model = Demucs(sources=SOURCES, channels=channels) - if pretrained: - name = 'demucs' - if channels != 64: - name += str(channels) - quantizer = None - if sum([extra, quantized, hq]) > 1: - raise ValueError("Only one of extra, quantized, hq, can be True.") - if quantized: - quantizer = DiffQuantizer(model, group_size=8, min_size=1) - name += '_quantized' - if extra: - name += '_extra' - if hq: - name += '_hq' - _load_state(name, model, quantizer) - return model - - -def tasnet(pretrained=True, extra=False): - if not pretrained and extra: - raise ValueError("if extra is True, pretrained must be True.") - model = ConvTasNet(X=10, sources=SOURCES) - if pretrained: - name = 'tasnet' - if extra: - name = 'tasnet_extra' - _load_state(name, model) - return model diff --git a/spaces/Ld75/pyannote-voice-activity-detection/README.md b/spaces/Ld75/pyannote-voice-activity-detection/README.md deleted file mode 100644 index c54010fa845fffc08f3c7d6fa9f07a617ab0d78c..0000000000000000000000000000000000000000 --- a/spaces/Ld75/pyannote-voice-activity-detection/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Pyannote Voice Activity Detection -emoji: 🦀 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false -app_port: 7860 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Lippmann/White-box-Cartoonization/app.py b/spaces/Lippmann/White-box-Cartoonization/app.py deleted file mode 100644 index c55ced56bd87a85f59d1c8ef84b7eca87422720f..0000000000000000000000000000000000000000 --- 
a/spaces/Lippmann/White-box-Cartoonization/app.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations -import argparse -import functools -import os -import pathlib -import sys -from typing import Callable -import uuid - -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image - -from io import BytesIO -from wbc.cartoonize import Cartoonize - -ORIGINAL_REPO_URL = 'https://github.com/SystemErrorWang/White-box-Cartoonization' -TITLE = 'SystemErrorWang/White-box-Cartoonization' -DESCRIPTION = f"""This is a demo for {ORIGINAL_REPO_URL}. - -""" -ARTICLE = """ - -""" - -SAFEHASH = [x for x in "0123456789-abcdefghijklmnopqrstuvwxyz_ABCDEFGHIJKLMNOPQRSTUVWXYZ"] -def compress_UUID(): - ''' - 根据http://www.ietf.org/rfc/rfc1738.txt,由uuid编码扩bai大字符域生成du串 - 包括:[0-9a-zA-Z\-_]共64个 - 长度:(32-2)/3*2=20 - 备注:可在地球上人zhi人都用,使用100年不重复(2^120) - :return:String - ''' - row = str(uuid.uuid4()).replace('-', '') - safe_code = '' - for i in range(10): - enbin = "%012d" % int(bin(int(row[i * 3] + row[i * 3 + 1] + row[i * 3 + 2], 16))[2:], 10) - safe_code += (SAFEHASH[int(enbin[0:6], 2)] + SAFEHASH[int(enbin[6:12], 2)]) - safe_code = safe_code.replace('-', '') - return safe_code - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - parser.add_argument('--allow-screenshot', action='store_true') - return parser.parse_args() - -def run( - image, - cartoonize : Cartoonize -) -> tuple[PIL.Image.Image]: - - out_path = compress_UUID()+'.png' - cartoonize.run_sigle(image.name, out_path) - - return PIL.Image.open(out_path) - - -def main(): - gr.close_all() - - args = parse_args() - - cartoonize = Cartoonize(os.path.join(os.path.dirname(os.path.abspath(__file__)),'wbc/saved_models/')) - - func = functools.partial(run, cartoonize=cartoonize) - func = functools.update_wrapper(func, run) - - gr.Interface( - func, - [ - gr.inputs.Image(type='file', label='Input Image'), - ], - [ - gr.outputs.Image( - type='pil', - label='Result'), - ], - # examples=examples, - theme=args.theme, - title=TITLE, - description=DESCRIPTION, - article=ARTICLE, - allow_screenshot=args.allow_screenshot, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/M-A-D/Dar-En-Translation-streamlit-Test/README.md b/spaces/M-A-D/Dar-En-Translation-streamlit-Test/README.md deleted file mode 100644 index bc7c8c87889c32788962fcab5af964fc66001688..0000000000000000000000000000000000000000 --- a/spaces/M-A-D/Dar-En-Translation-streamlit-Test/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dar En Translation Test -emoji: 📚 -colorFrom: red -colorTo: indigo -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/__init__.py b/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/__init__.py deleted file mode 100644 index 
48ae82f3e40ecd1bf17a7de78d87790327af3362..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/Lovelive_Nijigasaki_VITS/text/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Manglik-R/PDF-ChatBot-BCS/README.md b/spaces/Manglik-R/PDF-ChatBot-BCS/README.md deleted file mode 100644 index 39fc1fe43cc4cafa441137c9a3683c4becf911c0..0000000000000000000000000000000000000000 --- a/spaces/Manglik-R/PDF-ChatBot-BCS/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PDF ChatBot BCS -emoji: 🐢 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/predictor.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/predictor.py deleted file mode 100644 index b69f3f6f3fdd32278ceee49f6251243e2e4f6d8e..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/predictor.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import atexit -import bisect -import multiprocessing as mp -from collections import deque -import cv2 -import torch - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.video_visualizer import VideoVisualizer -from detectron2.utils.visualizer import ColorMode, Visualizer - -from .modeling.utils import reset_cls_test - - -def get_clip_embeddings(vocabulary, prompt='a '): - from detic.modeling.text.text_encoder import build_text_encoder - text_encoder = build_text_encoder(pretrain=True) - text_encoder.eval() - texts = [prompt + x for x in vocabulary] - emb = text_encoder(texts).detach().permute(1, 0).contiguous().cpu() - return emb - -BUILDIN_CLASSIFIER = { - 'lvis': 'datasets/metadata/lvis_v1_clip_a+cname.npy', - 'objects365': 'datasets/metadata/o365_clip_a+cnamefix.npy', - 'openimages': 'datasets/metadata/oid_clip_a+cname.npy', - 'coco': 'datasets/metadata/coco_clip_a+cname.npy', -} - -BUILDIN_METADATA_PATH = { - 'lvis': 'lvis_v1_val', - 'objects365': 'objects365_v2_val', - 'openimages': 'oid_val_expanded', - 'coco': 'coco_2017_val', -} - -class VisualizationDemo(object): - def __init__(self, cfg, args, - instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. - Useful since the visualization logic can be slow. - """ - if args.vocabulary == 'custom': - self.metadata = MetadataCatalog.get("__unused") - self.metadata.thing_classes = args.custom_vocabulary.split(',') - classifier = get_clip_embeddings(self.metadata.thing_classes) - else: - self.metadata = MetadataCatalog.get( - BUILDIN_METADATA_PATH[args.vocabulary]) - classifier = BUILDIN_CLASSIFIER[args.vocabulary] - - num_classes = len(self.metadata.thing_classes) - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - num_gpu = torch.cuda.device_count() - self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu) - else: - self.predictor = DefaultPredictor(cfg) - reset_cls_test(self.predictor.model, classifier, num_classes) - - def run_on_image(self, image): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - vis_output = None - predictions = self.predictor(image) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_output = visualizer.draw_panoptic_seg_predictions( - panoptic_seg.to(self.cpu_device), segments_info - ) - else: - if "sem_seg" in predictions: - vis_output = visualizer.draw_sem_seg( - predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - if "instances" in predictions: - instances = predictions["instances"].to(self.cpu_device) - vis_output = visualizer.draw_instance_predictions(predictions=instances) - - return predictions, vis_output - - def _frame_from_video(self, video): - while video.isOpened(): - success, frame = video.read() - if success: - yield frame - else: - break - - def run_on_video(self, video): - """ - Visualizes predictions on frames of the input video. 
- - Args: - video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be - either a webcam or a video file. - - Yields: - ndarray: BGR visualizations of each video frame. - """ - video_visualizer = VideoVisualizer(self.metadata, self.instance_mode) - - def process_predictions(frame, predictions): - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - if "panoptic_seg" in predictions: - panoptic_seg, segments_info = predictions["panoptic_seg"] - vis_frame = video_visualizer.draw_panoptic_seg_predictions( - frame, panoptic_seg.to(self.cpu_device), segments_info - ) - elif "instances" in predictions: - predictions = predictions["instances"].to(self.cpu_device) - vis_frame = video_visualizer.draw_instance_predictions(frame, predictions) - elif "sem_seg" in predictions: - vis_frame = video_visualizer.draw_sem_seg( - frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device) - ) - - # Converts Matplotlib RGB format to OpenCV BGR format - vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR) - return vis_frame - - frame_gen = self._frame_from_video(video) - if self.parallel: - buffer_size = self.predictor.default_buffer_size - - frame_data = deque() - - for cnt, frame in enumerate(frame_gen): - frame_data.append(frame) - self.predictor.put(frame) - - if cnt >= buffer_size: - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - - while len(frame_data): - frame = frame_data.popleft() - predictions = self.predictor.get() - yield process_predictions(frame, predictions) - else: - for frame in frame_gen: - yield process_predictions(frame, self.predictor(frame)) - - -class AsyncPredictor: - """ - A predictor that runs the model asynchronously, possibly on >1 GPUs. - Because rendering the visualization takes considerably amount of time, - this helps improve throughput a little bit when rendering videos. 
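    A minimal usage sketch (cfg is assumed to be an already-built detectron2
    config node, and frame a BGR numpy array, as elsewhere in this file):

        predictor = AsyncPredictor(cfg, num_gpus=2)
        predictor.put(frame)        # enqueue a frame on the worker processes
        outputs = predictor.get()   # results come back in submission order
        outputs = predictor(frame)  # or the blocking put-then-get shorthand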
- """ - - class _StopToken: - pass - - class _PredictWorker(mp.Process): - def __init__(self, cfg, task_queue, result_queue): - self.cfg = cfg - self.task_queue = task_queue - self.result_queue = result_queue - super().__init__() - - def run(self): - predictor = DefaultPredictor(self.cfg) - - while True: - task = self.task_queue.get() - if isinstance(task, AsyncPredictor._StopToken): - break - idx, data = task - result = predictor(data) - self.result_queue.put((idx, result)) - - def __init__(self, cfg, num_gpus: int = 1): - """ - Args: - cfg (CfgNode): - num_gpus (int): if 0, will run on CPU - """ - num_workers = max(num_gpus, 1) - self.task_queue = mp.Queue(maxsize=num_workers * 3) - self.result_queue = mp.Queue(maxsize=num_workers * 3) - self.procs = [] - for gpuid in range(max(num_gpus, 1)): - cfg = cfg.clone() - cfg.defrost() - cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu" - self.procs.append( - AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue) - ) - - self.put_idx = 0 - self.get_idx = 0 - self.result_rank = [] - self.result_data = [] - - for p in self.procs: - p.start() - atexit.register(self.shutdown) - - def put(self, image): - self.put_idx += 1 - self.task_queue.put((self.put_idx, image)) - - def get(self): - self.get_idx += 1 # the index needed for this request - if len(self.result_rank) and self.result_rank[0] == self.get_idx: - res = self.result_data[0] - del self.result_data[0], self.result_rank[0] - return res - - while True: - # make sure the results are returned in the correct order - idx, res = self.result_queue.get() - if idx == self.get_idx: - return res - insert = bisect.bisect(self.result_rank, idx) - self.result_rank.insert(insert, idx) - self.result_data.insert(insert, res) - - def __len__(self): - return self.put_idx - self.get_idx - - def __call__(self, image): - self.put(image) - return self.get() - - def shutdown(self): - for _ in self.procs: - self.task_queue.put(AsyncPredictor._StopToken()) - - @property - def default_buffer_size(self): - return len(self.procs) * 5 - \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/midas/midas/midas_net_custom.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/midas/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/midas/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. 
Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/enc_head.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/enc_head.py deleted file mode 100644 index da57af617e05d41761628fd2d6d232655b32d905..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/enc_head.py +++ /dev/null @@ -1,187 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, build_norm_layer - -from annotator.uniformer.mmseg.ops import Encoding, resize -from ..builder import HEADS, build_loss -from .decode_head import BaseDecodeHead - - -class EncModule(nn.Module): - """Encoding Module used in EncNet. - - Args: - in_channels (int): Input channels. - num_codes (int): Number of code words. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. 
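    A minimal shape sketch (the channel and code counts are assumed values,
    not defaults taken from any config):

        enc = EncModule(in_channels=512, num_codes=32, conv_cfg=None,
                        norm_cfg=dict(type='BN'), act_cfg=dict(type='ReLU'))
        # x: feature map of shape (N, 512, H, W)
        encoding_feat, output = enc(x)
        # encoding_feat: (N, 512) codeword-pooled vector, later fed to the SE head
        # output: (N, 512, H, W) feature map reweighted by the learned channel gates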
- """ - - def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg): - super(EncModule, self).__init__() - self.encoding_project = ConvModule( - in_channels, - in_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # TODO: resolve this hack - # change to 1d - if norm_cfg is not None: - encoding_norm_cfg = norm_cfg.copy() - if encoding_norm_cfg['type'] in ['BN', 'IN']: - encoding_norm_cfg['type'] += '1d' - else: - encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace( - '2d', '1d') - else: - # fallback to BN1d - encoding_norm_cfg = dict(type='BN1d') - self.encoding = nn.Sequential( - Encoding(channels=in_channels, num_codes=num_codes), - build_norm_layer(encoding_norm_cfg, num_codes)[1], - nn.ReLU(inplace=True)) - self.fc = nn.Sequential( - nn.Linear(in_channels, in_channels), nn.Sigmoid()) - - def forward(self, x): - """Forward function.""" - encoding_projection = self.encoding_project(x) - encoding_feat = self.encoding(encoding_projection).mean(dim=1) - batch_size, channels, _, _ = x.size() - gamma = self.fc(encoding_feat) - y = gamma.view(batch_size, channels, 1, 1) - output = F.relu_(x + x * y) - return encoding_feat, output - - -@HEADS.register_module() -class EncHead(BaseDecodeHead): - """Context Encoding for Semantic Segmentation. - - This head is the implementation of `EncNet - `_. - - Args: - num_codes (int): Number of code words. Default: 32. - use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to - regularize the training. Default: True. - add_lateral (bool): Whether use lateral connection to fuse features. - Default: False. - loss_se_decode (dict): Config of decode loss. - Default: dict(type='CrossEntropyLoss', use_sigmoid=True). - """ - - def __init__(self, - num_codes=32, - use_se_loss=True, - add_lateral=False, - loss_se_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=0.2), - **kwargs): - super(EncHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.use_se_loss = use_se_loss - self.add_lateral = add_lateral - self.num_codes = num_codes - self.bottleneck = ConvModule( - self.in_channels[-1], - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if add_lateral: - self.lateral_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the last one - self.lateral_convs.append( - ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.fusion = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.enc_module = EncModule( - self.channels, - num_codes=num_codes, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.use_se_loss: - self.loss_se_decode = build_loss(loss_se_decode) - self.se_layer = nn.Linear(self.channels, self.num_classes) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - feat = self.bottleneck(inputs[-1]) - if self.add_lateral: - laterals = [ - resize( - lateral_conv(inputs[i]), - size=feat.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - feat = self.fusion(torch.cat([feat, *laterals], 1)) - encode_feat, output = self.enc_module(feat) - output = self.cls_seg(output) - if self.use_se_loss: - se_output = self.se_layer(encode_feat) - return 
output, se_output - else: - return output - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, ignore se_loss.""" - if self.use_se_loss: - return self.forward(inputs)[0] - else: - return self.forward(inputs) - - @staticmethod - def _convert_to_onehot_labels(seg_label, num_classes): - """Convert segmentation label to onehot. - - Args: - seg_label (Tensor): Segmentation label of shape (N, H, W). - num_classes (int): Number of classes. - - Returns: - Tensor: Onehot labels of shape (N, num_classes). - """ - - batch_size = seg_label.size(0) - onehot_labels = seg_label.new_zeros((batch_size, num_classes)) - for i in range(batch_size): - hist = seg_label[i].float().histc( - bins=num_classes, min=0, max=num_classes - 1) - onehot_labels[i] = hist > 0 - return onehot_labels - - def losses(self, seg_logit, seg_label): - """Compute segmentation and semantic encoding loss.""" - seg_logit, se_seg_logit = seg_logit - loss = dict() - loss.update(super(EncHead, self).losses(seg_logit, seg_label)) - se_loss = self.loss_se_decode( - se_seg_logit, - self._convert_to_onehot_labels(seg_label, self.num_classes)) - loss['loss_se'] = se_loss - return loss diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/tool_add_control.py b/spaces/Mellow-ai/PhotoAI_Mellow/tool_add_control.py deleted file mode 100644 index 8076b5143405e5516b063f4fd63096f65cffbed2..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/tool_add_control.py +++ /dev/null @@ -1,50 +0,0 @@ -import sys -import os - -assert len(sys.argv) == 3, 'Args are wrong.' - -input_path = sys.argv[1] -output_path = sys.argv[2] - -assert os.path.exists(input_path), 'Input model does not exist.' -assert not os.path.exists(output_path), 'Output filename already exists.' -assert os.path.exists(os.path.dirname(output_path)), 'Output path is not valid.' - -import torch -from share import * -from cldm.model import create_model - - -def get_node_name(name, parent_name): - if len(name) <= len(parent_name): - return False, '' - p = name[:len(parent_name)] - if p != parent_name: - return False, '' - return True, name[len(parent_name):] - - -model = create_model(config_path='./models/cldm_v15.yaml') - -pretrained_weights = torch.load(input_path) -if 'state_dict' in pretrained_weights: - pretrained_weights = pretrained_weights['state_dict'] - -scratch_dict = model.state_dict() - -target_dict = {} -for k in scratch_dict.keys(): - is_control, name = get_node_name(k, 'control_') - if is_control: - copy_k = 'model.diffusion_' + name - else: - copy_k = k - if copy_k in pretrained_weights: - target_dict[k] = pretrained_weights[copy_k].clone() - else: - target_dict[k] = scratch_dict[k].clone() - print(f'These weights are newly added: {k}') - -model.load_state_dict(target_dict, strict=True) -torch.save(model.state_dict(), output_path) -print('Done.') diff --git a/spaces/MestikonAgency/README/CONTRIBUTING.md b/spaces/MestikonAgency/README/CONTRIBUTING.md deleted file mode 100644 index 5eb507d673da326fb608ebccfaaaedc2016e7af2..0000000000000000000000000000000000000000 --- a/spaces/MestikonAgency/README/CONTRIBUTING.md +++ /dev/null @@ -1,31 +0,0 @@ -# Contributing to Llama -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. 
Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to Llama, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/tests/unit/test_browse_scrape_text.py b/spaces/MetaWabbit/Auto-GPT/tests/unit/test_browse_scrape_text.py deleted file mode 100644 index fea5ebfc05d466c7cb5711b5ac10e2ea102ddc45..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/tests/unit/test_browse_scrape_text.py +++ /dev/null @@ -1,98 +0,0 @@ -# Generated by CodiumAI - -import requests - -from autogpt.commands.web_requests import scrape_text - -""" -Code Analysis - -Objective: -The objective of the "scrape_text" function is to scrape the text content from -a given URL and return it as a string, after removing any unwanted HTML tags and scripts. - -Inputs: -- url: a string representing the URL of the webpage to be scraped. - -Flow: -1. Send a GET request to the given URL using the requests library and the user agent header from the config file. -2. Check if the response contains an HTTP error. If it does, return an error message. -3. Use BeautifulSoup to parse the HTML content of the response and extract all script and style tags. -4. Get the text content of the remaining HTML using the get_text() method of BeautifulSoup. -5. Split the text into lines and then into chunks, removing any extra whitespace. -6. Join the chunks into a single string with newline characters between them. -7. Return the cleaned text. - -Outputs: -- A string representing the cleaned text content of the webpage. - -Additional aspects: -- The function uses the requests library and BeautifulSoup to handle the HTTP request and HTML parsing, respectively. -- The function removes script and style tags from the HTML to avoid including unwanted content in the text output. -- The function uses a generator expression to split the text into lines and chunks, which can improve performance for large amounts of text. -""" - - -class TestScrapeText: - # Tests that scrape_text() returns the expected text when given a valid URL. - def test_scrape_text_with_valid_url(self, mocker): - # Mock the requests.get() method to return a response with expected text - expected_text = "This is some sample text" - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = f"

    <html><body><div><p>{expected_text}</p></div></body></html>
    " - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a valid URL and assert that it returns the expected text - url = "http://www.example.com" - assert scrape_text(url) == expected_text - - # Tests that the function returns an error message when an invalid or unreachable url is provided. - def test_invalid_url(self, mocker): - # Mock the requests.get() method to raise an exception - mocker.patch( - "requests.Session.get", side_effect=requests.exceptions.RequestException - ) - - # Call the function with an invalid URL and assert that it returns an error message - url = "http://www.invalidurl.com" - error_message = scrape_text(url) - assert "Error:" in error_message - - # Tests that the function returns an empty string when the html page contains no text to be scraped. - def test_no_text(self, mocker): - # Mock the requests.get() method to return a response with no text - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = "" - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a valid URL and assert that it returns an empty string - url = "http://www.example.com" - assert scrape_text(url) == "" - - # Tests that the function returns an error message when the response status code is an http error (>=400). - def test_http_error(self, mocker): - # Mock the requests.get() method to return a response with a 404 status code - mocker.patch("requests.Session.get", return_value=mocker.Mock(status_code=404)) - - # Call the function with a URL - result = scrape_text("https://www.example.com") - - # Check that the function returns an error message - assert result == "Error: HTTP 404 error" - - # Tests that scrape_text() properly handles HTML tags. - def test_scrape_text_with_html_tags(self, mocker): - # Create a mock response object with HTML containing tags - html = "

    <html><body><p>This is <b>bold</b> text.</p></body></html>
    " - mock_response = mocker.Mock() - mock_response.status_code = 200 - mock_response.text = html - mocker.patch("requests.Session.get", return_value=mock_response) - - # Call the function with a URL - result = scrape_text("https://www.example.com") - - # Check that the function properly handles HTML tags - assert result == "This is bold text." diff --git a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_overlap.py b/spaces/MuGeminorum/insecta/khandy/boxes/boxes_overlap.py deleted file mode 100644 index 1ebfb23068d75c771cf701c596784091e6f8142b..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_overlap.py +++ /dev/null @@ -1,166 +0,0 @@ -import numpy as np - - -def paired_intersection(boxes1, boxes2): - """Compute paired intersection areas between boxes. - Args: - boxes1: a numpy array with shape [N, 4] holding N boxes - boxes2: a numpy array with shape [N, 4] holding N boxes - - Returns: - a numpy array with shape [N,] representing itemwise intersection area - - References: - `core.box_list_ops.matched_intersection` in Tensorflow object detection API - - Notes: - can called as itemwise_intersection, matched_intersection, aligned_intersection - """ - max_x_mins = np.maximum(boxes1[:, 0], boxes2[:, 0]) - max_y_mins = np.maximum(boxes1[:, 1], boxes2[:, 1]) - min_x_maxs = np.minimum(boxes1[:, 2], boxes2[:, 2]) - min_y_maxs = np.minimum(boxes1[:, 3], boxes2[:, 3]) - intersect_widths = np.maximum(0, min_x_maxs - max_x_mins) - intersect_heights = np.maximum(0, min_y_maxs - max_y_mins) - return intersect_widths * intersect_heights - - -def pairwise_intersection(boxes1, boxes2): - """Compute pairwise intersection areas between boxes. - - Args: - boxes1: a numpy array with shape [N, 4] holding N boxes. - boxes2: a numpy array with shape [M, 4] holding M boxes. - - Returns: - a numpy array with shape [N, M] representing pairwise intersection area. - - References: - `core.box_list_ops.intersection` in Tensorflow object detection API - `utils.box_list_ops.intersection` in Tensorflow object detection API - """ - if boxes1.shape[0] * boxes2.shape[0] == 0: - return np.zeros((boxes1.shape[0], boxes2.shape[0]), dtype=boxes1.dtype) - - swap = False - if boxes1.shape[0] > boxes2.shape[0]: - boxes1, boxes2 = boxes2, boxes1 - swap = True - intersect_areas = np.empty((boxes1.shape[0], boxes2.shape[0]), dtype=boxes1.dtype) - - for i in range(boxes1.shape[0]): - max_x_mins = np.maximum(boxes1[i, 0], boxes2[:, 0]) - max_y_mins = np.maximum(boxes1[i, 1], boxes2[:, 1]) - min_x_maxs = np.minimum(boxes1[i, 2], boxes2[:, 2]) - min_y_maxs = np.minimum(boxes1[i, 3], boxes2[:, 3]) - intersect_widths = np.maximum(0, min_x_maxs - max_x_mins) - intersect_heights = np.maximum(0, min_y_maxs - max_y_mins) - intersect_areas[i, :] = intersect_widths * intersect_heights - if swap: - intersect_areas = intersect_areas.T - return intersect_areas - - -def paired_overlap_ratio(boxes1, boxes2, ratio_type='iou'): - """Compute paired overlap ratio between boxes. - - Args: - boxes1: a numpy array with shape [N, 4] holding N boxes - boxes2: a numpy array with shape [N, 4] holding N boxes - ratio_type: - iou: Intersection-over-union (iou). - ioa: Intersection-over-area (ioa) between two boxes box1 and box2 is defined as - their intersection area over box2's area. Note that ioa is not symmetric, - that is, IOA(box1, box2) != IOA(box2, box1). - min: Compute the ratio as the area of intersection between box1 and box2, - divided by the minimum area of the two bounding boxes. 
- - Returns: - a numpy array with shape [N,] representing itemwise overlap ratio. - - References: - `core.box_list_ops.matched_iou` in Tensorflow object detection API - `structures.boxes.matched_boxlist_iou` in detectron2 - `mmdet.core.bbox.bbox_overlaps`, see https://mmdetection.readthedocs.io/en/v2.17.0/api.html#mmdet.core.bbox.bbox_overlaps - """ - intersect_areas = paired_intersection(boxes1, boxes2) - areas1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1]) - areas2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1]) - - if ratio_type in ['union', 'iou', 'giou']: - union_areas = areas1 - intersect_areas - union_areas += areas2 - intersect_areas /= union_areas - elif ratio_type == 'min': - min_areas = np.minimum(areas1, areas2) - intersect_areas /= min_areas - elif ratio_type == 'ioa': - intersect_areas /= areas2 - else: - raise ValueError('Unsupported ratio_type. Got {}'.format(ratio_type)) - - if ratio_type == 'giou': - min_xy_mins = np.minimum(boxes1[:, 0:2], boxes2[:, 0:2]) - max_xy_mins = np.maximum(boxes1[:, 2:4], boxes2[:, 2:4]) - # mebb = minimum enclosing bounding boxes - mebb_whs = np.maximum(0, max_xy_mins - min_xy_mins) - mebb_areas = mebb_whs[:, 0] * mebb_whs[:, 1] - union_areas -= mebb_areas - union_areas /= mebb_areas - intersect_areas += union_areas - return intersect_areas - - -def pairwise_overlap_ratio(boxes1, boxes2, ratio_type='iou'): - """Compute pairwise overlap ratio between boxes. - - Args: - boxes1: a numpy array with shape [N, 4] holding N boxes - boxes2: a numpy array with shape [M, 4] holding M boxes - ratio_type: - iou: Intersection-over-union (iou). - ioa: Intersection-over-area (ioa) between two boxes box1 and box2 is defined as - their intersection area over box2's area. Note that ioa is not symmetric, - that is, IOA(box1, box2) != IOA(box2, box1). - min: Compute the ratio as the area of intersection between box1 and box2, - divided by the minimum area of the two bounding boxes. - - Returns: - a numpy array with shape [N, M] representing pairwise overlap ratio. - - References: - `utils.np_box_ops.iou` in Tensorflow object detection API - `utils.np_box_ops.ioa` in Tensorflow object detection API - `utils.np_box_ops.giou` in Tensorflow object detection API - `mmdet.core.bbox.bbox_overlaps`, see https://mmdetection.readthedocs.io/en/v2.17.0/api.html#mmdet.core.bbox.bbox_overlaps - `torchvision.ops.box_iou`, see https://pytorch.org/vision/stable/ops.html#torchvision.ops.box_iou - `torchvision.ops.generalized_box_iou`, see https://pytorch.org/vision/stable/ops.html#torchvision.ops.generalized_box_iou - http://ww2.mathworks.cn/help/vision/ref/bboxoverlapratio.html - """ - intersect_areas = pairwise_intersection(boxes1, boxes2) - areas1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1]) - areas2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1]) - - if ratio_type in ['union', 'iou', 'giou']: - union_areas = np.expand_dims(areas1, axis=1) - intersect_areas - union_areas += np.expand_dims(areas2, axis=0) - intersect_areas /= union_areas - elif ratio_type == 'min': - min_areas = np.minimum(np.expand_dims(areas1, axis=1), np.expand_dims(areas2, axis=0)) - intersect_areas /= min_areas - elif ratio_type == 'ioa': - intersect_areas /= np.expand_dims(areas2, axis=0) - else: - raise ValueError('Unsupported ratio_type. 
Got {}'.format(ratio_type)) - - if ratio_type == 'giou': - min_xy_mins = np.minimum(boxes1[:, None, 0:2], boxes2[:, 0:2]) - max_xy_mins = np.maximum(boxes1[:, None, 2:4], boxes2[:, 2:4]) - # mebb = minimum enclosing bounding boxes - mebb_whs = np.maximum(0, max_xy_mins - min_xy_mins) - mebb_areas = mebb_whs[:, :, 0] * mebb_whs[:, :, 1] - union_areas -= mebb_areas - union_areas /= mebb_areas - intersect_areas += union_areas - return intersect_areas - diff --git a/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/optimizer_factory.py b/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/optimizer_factory.py deleted file mode 100644 index ccb03d50ee8a5b74cda84cbe261cfdbecce60d23..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/modeling/optimization/optimizer_factory.py +++ /dev/null @@ -1,145 +0,0 @@ -# Lint as: python3 -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Optimizer factory class.""" -from typing import Union - -import tensorflow as tf - -import tensorflow_addons.optimizers as tfa_optimizers - -from official.modeling.optimization import lr_schedule -from official.modeling.optimization.configs import optimization_config as opt_cfg -from official.nlp import optimization as nlp_optimization - -OPTIMIZERS_CLS = { - 'sgd': tf.keras.optimizers.SGD, - 'adam': tf.keras.optimizers.Adam, - 'adamw': nlp_optimization.AdamWeightDecay, - 'lamb': tfa_optimizers.LAMB, - 'rmsprop': tf.keras.optimizers.RMSprop -} - -LR_CLS = { - 'stepwise': tf.keras.optimizers.schedules.PiecewiseConstantDecay, - 'polynomial': tf.keras.optimizers.schedules.PolynomialDecay, - 'exponential': tf.keras.optimizers.schedules.ExponentialDecay, - 'cosine': tf.keras.experimental.CosineDecay -} - -WARMUP_CLS = { - 'linear': lr_schedule.LinearWarmup, - 'polynomial': lr_schedule.PolynomialWarmUp -} - - -class OptimizerFactory(object): - """Optimizer factory class. - - This class builds learning rate and optimizer based on an optimization config. - To use this class, you need to do the following: - (1) Define optimization config, this includes optimizer, and learning rate - schedule. - (2) Initialize the class using the optimization config. - (3) Build learning rate. - (4) Build optimizer. 
- - This is a typical example for using this class: - params = { - 'optimizer': { - 'type': 'sgd', - 'sgd': {'learning_rate': 0.1, 'momentum': 0.9} - }, - 'learning_rate': { - 'type': 'stepwise', - 'stepwise': {'boundaries': [10000, 20000], - 'values': [0.1, 0.01, 0.001]} - }, - 'warmup': { - 'type': 'linear', - 'linear': {'warmup_steps': 500, 'warmup_learning_rate': 0.01} - } - } - opt_config = OptimizationConfig(params) - opt_factory = OptimizerFactory(opt_config) - lr = opt_factory.build_learning_rate() - optimizer = opt_factory.build_optimizer(lr) - """ - - def __init__(self, config: opt_cfg.OptimizationConfig): - """Initializing OptimizerFactory. - - Args: - config: OptimizationConfig instance contain optimization config. - """ - self._config = config - self._optimizer_config = config.optimizer.get() - self._optimizer_type = config.optimizer.type - - if self._optimizer_config is None: - raise ValueError('Optimizer type must be specified') - - self._lr_config = config.learning_rate.get() - self._lr_type = config.learning_rate.type - - self._warmup_config = config.warmup.get() - self._warmup_type = config.warmup.type - - def build_learning_rate(self): - """Build learning rate. - - Builds learning rate from config. Learning rate schedule is built according - to the learning rate config. If there is no learning rate config, optimizer - learning rate is returned. - - Returns: - tf.keras.optimizers.schedules.LearningRateSchedule instance. If no - learning rate schedule defined, optimizer_config.learning_rate is - returned. - """ - - # TODO(arashwan): Explore if we want to only allow explicit const lr sched. - if not self._lr_config: - lr = self._optimizer_config.learning_rate - else: - lr = LR_CLS[self._lr_type](**self._lr_config.as_dict()) - - if self._warmup_config: - lr = WARMUP_CLS[self._warmup_type](lr, **self._warmup_config.as_dict()) - - return lr - - def build_optimizer( - self, lr: Union[tf.keras.optimizers.schedules.LearningRateSchedule, - float]): - """Build optimizer. - - Builds optimizer from config. It takes learning rate as input, and builds - the optimizer according to the optimizer config. Typically, the learning - rate built using self.build_lr() is passed as an argument to this method. - - Args: - lr: A floating point value, or - a tf.keras.optimizers.schedules.LearningRateSchedule instance. - Returns: - tf.keras.optimizers.Optimizer instance. - """ - - optimizer_dict = self._optimizer_config.as_dict() - optimizer_dict['learning_rate'] = lr - - optimizer = OPTIMIZERS_CLS[self._optimizer_type](**optimizer_dict) - return optimizer - diff --git a/spaces/OAOA/DifFace/basicsr/metrics/psnr_ssim.py b/spaces/OAOA/DifFace/basicsr/metrics/psnr_ssim.py deleted file mode 100644 index ab03113f89805c990ff22795601274bf45db23a1..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/metrics/psnr_ssim.py +++ /dev/null @@ -1,231 +0,0 @@ -import cv2 -import numpy as np -import torch -import torch.nn.functional as F - -from basicsr.metrics.metric_util import reorder_image, to_y_channel -from basicsr.utils.color_util import rgb2ycbcr_pt -from basicsr.utils.registry import METRIC_REGISTRY - - -@METRIC_REGISTRY.register() -def calculate_psnr(img, img2, crop_border, input_order='HWC', test_y_channel=False, **kwargs): - """Calculate PSNR (Peak Signal-to-Noise Ratio). - - Reference: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - - Args: - img (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. 
- crop_border (int): Cropped pixels in each edge of an image. These pixels are not involved in the calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: PSNR result. - """ - - assert img.shape == img2.shape, (f'Image shapes are different: {img.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are "HWC" and "CHW"') - img = reorder_image(img, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - - if crop_border != 0: - img = img[crop_border:-crop_border, crop_border:-crop_border, ...] - img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img = to_y_channel(img) - img2 = to_y_channel(img2) - - img = img.astype(np.float64) - img2 = img2.astype(np.float64) - - mse = np.mean((img - img2)**2) - if mse == 0: - return float('inf') - return 10. * np.log10(255. * 255. / mse) - - -@METRIC_REGISTRY.register() -def calculate_psnr_pt(img, img2, crop_border, test_y_channel=False, **kwargs): - """Calculate PSNR (Peak Signal-to-Noise Ratio) (PyTorch version). - - Reference: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio - - Args: - img (Tensor): Images with range [0, 1], shape (n, 3/1, h, w). - img2 (Tensor): Images with range [0, 1], shape (n, 3/1, h, w). - crop_border (int): Cropped pixels in each edge of an image. These pixels are not involved in the calculation. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: PSNR result. - """ - - assert img.shape == img2.shape, (f'Image shapes are different: {img.shape}, {img2.shape}.') - - if crop_border != 0: - img = img[:, :, crop_border:-crop_border, crop_border:-crop_border] - img2 = img2[:, :, crop_border:-crop_border, crop_border:-crop_border] - - if test_y_channel: - img = rgb2ycbcr_pt(img, y_only=True) - img2 = rgb2ycbcr_pt(img2, y_only=True) - - img = img.to(torch.float64) - img2 = img2.to(torch.float64) - - mse = torch.mean((img - img2)**2, dim=[1, 2, 3]) - return 10. * torch.log10(1. / (mse + 1e-8)) - - -@METRIC_REGISTRY.register() -def calculate_ssim(img, img2, crop_border, input_order='HWC', test_y_channel=False, **kwargs): - """Calculate SSIM (structural similarity). - - ``Paper: Image quality assessment: From error visibility to structural similarity`` - - The results are the same as that of the official released MATLAB code in - https://ece.uwaterloo.ca/~z70wang/research/ssim/. - - For three-channel images, SSIM is calculated for each channel and then - averaged. - - Args: - img (ndarray): Images with range [0, 255]. - img2 (ndarray): Images with range [0, 255]. - crop_border (int): Cropped pixels in each edge of an image. These pixels are not involved in the calculation. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - Default: 'HWC'. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: SSIM result. - """ - - assert img.shape == img2.shape, (f'Image shapes are different: {img.shape}, {img2.shape}.') - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are "HWC" and "CHW"') - img = reorder_image(img, input_order=input_order) - img2 = reorder_image(img2, input_order=input_order) - - if crop_border != 0: - img = img[crop_border:-crop_border, crop_border:-crop_border, ...] 
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...] - - if test_y_channel: - img = to_y_channel(img) - img2 = to_y_channel(img2) - - img = img.astype(np.float64) - img2 = img2.astype(np.float64) - - ssims = [] - for i in range(img.shape[2]): - ssims.append(_ssim(img[..., i], img2[..., i])) - return np.array(ssims).mean() - - -@METRIC_REGISTRY.register() -def calculate_ssim_pt(img, img2, crop_border, test_y_channel=False, **kwargs): - """Calculate SSIM (structural similarity) (PyTorch version). - - ``Paper: Image quality assessment: From error visibility to structural similarity`` - - The results are the same as that of the official released MATLAB code in - https://ece.uwaterloo.ca/~z70wang/research/ssim/. - - For three-channel images, SSIM is calculated for each channel and then - averaged. - - Args: - img (Tensor): Images with range [0, 1], shape (n, 3/1, h, w). - img2 (Tensor): Images with range [0, 1], shape (n, 3/1, h, w). - crop_border (int): Cropped pixels in each edge of an image. These pixels are not involved in the calculation. - test_y_channel (bool): Test on Y channel of YCbCr. Default: False. - - Returns: - float: SSIM result. - """ - - assert img.shape == img2.shape, (f'Image shapes are different: {img.shape}, {img2.shape}.') - - if crop_border != 0: - img = img[:, :, crop_border:-crop_border, crop_border:-crop_border] - img2 = img2[:, :, crop_border:-crop_border, crop_border:-crop_border] - - if test_y_channel: - img = rgb2ycbcr_pt(img, y_only=True) - img2 = rgb2ycbcr_pt(img2, y_only=True) - - img = img.to(torch.float64) - img2 = img2.to(torch.float64) - - ssim = _ssim_pth(img * 255., img2 * 255.) - return ssim - - -def _ssim(img, img2): - """Calculate SSIM (structural similarity) for one channel images. - - It is called by func:`calculate_ssim`. - - Args: - img (ndarray): Images with range [0, 255] with order 'HWC'. - img2 (ndarray): Images with range [0, 255] with order 'HWC'. - - Returns: - float: SSIM result. - """ - - c1 = (0.01 * 255)**2 - c2 = (0.03 * 255)**2 - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img, -1, window)[5:-5, 5:-5] # valid mode for window size 11 - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + c1) * (2 * sigma12 + c2)) / ((mu1_sq + mu2_sq + c1) * (sigma1_sq + sigma2_sq + c2)) - return ssim_map.mean() - - -def _ssim_pth(img, img2): - """Calculate SSIM (structural similarity) (PyTorch version). - - It is called by func:`calculate_ssim_pt`. - - Args: - img (Tensor): Images with range [0, 1], shape (n, 3/1, h, w). - img2 (Tensor): Images with range [0, 1], shape (n, 3/1, h, w). - - Returns: - float: SSIM result. 
- """ - c1 = (0.01 * 255)**2 - c2 = (0.03 * 255)**2 - - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - window = torch.from_numpy(window).view(1, 1, 11, 11).expand(img.size(1), 1, 11, 11).to(img.dtype).to(img.device) - - mu1 = F.conv2d(img, window, stride=1, padding=0, groups=img.shape[1]) # valid mode - mu2 = F.conv2d(img2, window, stride=1, padding=0, groups=img2.shape[1]) # valid mode - mu1_sq = mu1.pow(2) - mu2_sq = mu2.pow(2) - mu1_mu2 = mu1 * mu2 - sigma1_sq = F.conv2d(img * img, window, stride=1, padding=0, groups=img.shape[1]) - mu1_sq - sigma2_sq = F.conv2d(img2 * img2, window, stride=1, padding=0, groups=img.shape[1]) - mu2_sq - sigma12 = F.conv2d(img * img2, window, stride=1, padding=0, groups=img.shape[1]) - mu1_mu2 - - cs_map = (2 * sigma12 + c2) / (sigma1_sq + sigma2_sq + c2) - ssim_map = ((2 * mu1_mu2 + c1) / (mu1_sq + mu2_sq + c1)) * cs_map - return ssim_map.mean([1, 2, 3]) diff --git a/spaces/OAOA/DifFace/models/basic_ops.py b/spaces/OAOA/DifFace/models/basic_ops.py deleted file mode 100644 index ca0c97344677a5435bbe3bdebe0f3c1afc391dca..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/models/basic_ops.py +++ /dev/null @@ -1,118 +0,0 @@ -""" -Various utilities for neural networks. -""" - -import math - -import torch as th -import torch.nn as nn - -class SiLU(nn.Module): - def forward(self, x): - return x * th.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. - """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def update_ema(target_params, source_params, rate=0.99): - """ - Update target parameters to be closer to those of source parameters using - an exponential moving average. - - :param target_params: the target parameter sequence. - :param source_params: the source parameter sequence. - :param rate: the EMA rate (closer to 1 means slower). - """ - for targ, src in zip(target_params, source_params): - targ.detach().mul_(rate).add_(src, alpha=1 - rate) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -def timestep_embedding(timesteps, dim, max_period=10000): - """ - Create sinusoidal timestep embeddings. 
- - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - half = dim // 2 - freqs = th.exp( - -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] # B x half - embedding = th.cat([th.cos(args), th.sin(args)], dim=-1) - if dim % 2: - embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1) - return embedding - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/unsupervised_mt/eval.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/unsupervised_mt/eval.sh deleted file mode 100644 index 03b773ed5a522eb82186fea8ffbb6c557e14b6d3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/unsupervised_mt/eval.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -SRC=si_LK -TGT=en_XX -MODEL=criss_checkpoints/criss.3rd.pt - -MULTIBLEU=mosesdecoder/scripts/generic/multi-bleu.perl -MOSES=mosesdecoder -REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl -NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl -TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl -GEN_TMP_DIR=gen_tmp -LANG_DICT=criss_checkpoints/lang_dict.txt - -if [ ! -d "mosesdecoder" ]; then - git clone https://github.com/moses-smt/mosesdecoder -fi -mkdir -p $GEN_TMP_DIR -fairseq-generate data_tmp/${SRC}-${TGT}-flores \ - --task translation_multi_simple_epoch \ - --max-tokens 2000 \ - --path ${MODEL} \ - --skip-invalid-size-inputs-valid-test \ - --beam 5 --lenpen 1.0 --gen-subset test \ - --remove-bpe=sentencepiece \ - --source-lang ${SRC} --target-lang ${TGT} \ - --decoder-langtok --lang-pairs 
'en_XX-ar_AR,en_XX-de_DE,en_XX-es_XX,en_XX-fr_XX,en_XX-hi_IN,en_XX-it_IT,en_XX-ja_XX,en_XX-ko_KR,en_XX-nl_XX,en_XX-ru_RU,en_XX-zh_CN,en_XX-tr_TR,en_XX-vi_VN,en_XX-ro_RO,en_XX-my_MM,en_XX-ne_NP,en_XX-si_LK,en_XX-cs_CZ,en_XX-lt_LT,en_XX-kk_KZ,en_XX-gu_IN,en_XX-fi_FI,en_XX-et_EE,en_XX-lv_LV,ar_AR-en_XX,cs_CZ-en_XX,de_DE-en_XX,es_XX-en_XX,et_EE-en_XX,fi_FI-en_XX,fr_XX-en_XX,gu_IN-en_XX,hi_IN-en_XX,it_IT-en_XX,ja_XX-en_XX,kk_KZ-en_XX,ko_KR-en_XX,lt_LT-en_XX,lv_LV-en_XX,my_MM-en_XX,ne_NP-en_XX,nl_XX-en_XX,ro_RO-en_XX,ru_RU-en_XX,si_LK-en_XX,tr_TR-en_XX,vi_VN-en_XX,zh_CN-en_XX,ar_AR-es_XX,es_XX-ar_AR,ar_AR-hi_IN,hi_IN-ar_AR,ar_AR-zh_CN,zh_CN-ar_AR,cs_CZ-es_XX,es_XX-cs_CZ,cs_CZ-hi_IN,hi_IN-cs_CZ,cs_CZ-zh_CN,zh_CN-cs_CZ,de_DE-es_XX,es_XX-de_DE,de_DE-hi_IN,hi_IN-de_DE,de_DE-zh_CN,zh_CN-de_DE,es_XX-hi_IN,hi_IN-es_XX,es_XX-zh_CN,zh_CN-es_XX,et_EE-es_XX,es_XX-et_EE,et_EE-hi_IN,hi_IN-et_EE,et_EE-zh_CN,zh_CN-et_EE,fi_FI-es_XX,es_XX-fi_FI,fi_FI-hi_IN,hi_IN-fi_FI,fi_FI-zh_CN,zh_CN-fi_FI,fr_XX-es_XX,es_XX-fr_XX,fr_XX-hi_IN,hi_IN-fr_XX,fr_XX-zh_CN,zh_CN-fr_XX,gu_IN-es_XX,es_XX-gu_IN,gu_IN-hi_IN,hi_IN-gu_IN,gu_IN-zh_CN,zh_CN-gu_IN,hi_IN-zh_CN,zh_CN-hi_IN,it_IT-es_XX,es_XX-it_IT,it_IT-hi_IN,hi_IN-it_IT,it_IT-zh_CN,zh_CN-it_IT,ja_XX-es_XX,es_XX-ja_XX,ja_XX-hi_IN,hi_IN-ja_XX,ja_XX-zh_CN,zh_CN-ja_XX,kk_KZ-es_XX,es_XX-kk_KZ,kk_KZ-hi_IN,hi_IN-kk_KZ,kk_KZ-zh_CN,zh_CN-kk_KZ,ko_KR-es_XX,es_XX-ko_KR,ko_KR-hi_IN,hi_IN-ko_KR,ko_KR-zh_CN,zh_CN-ko_KR,lt_LT-es_XX,es_XX-lt_LT,lt_LT-hi_IN,hi_IN-lt_LT,lt_LT-zh_CN,zh_CN-lt_LT,lv_LV-es_XX,es_XX-lv_LV,lv_LV-hi_IN,hi_IN-lv_LV,lv_LV-zh_CN,zh_CN-lv_LV,my_MM-es_XX,es_XX-my_MM,my_MM-hi_IN,hi_IN-my_MM,my_MM-zh_CN,zh_CN-my_MM,ne_NP-es_XX,es_XX-ne_NP,ne_NP-hi_IN,hi_IN-ne_NP,ne_NP-zh_CN,zh_CN-ne_NP,nl_XX-es_XX,es_XX-nl_XX,nl_XX-hi_IN,hi_IN-nl_XX,nl_XX-zh_CN,zh_CN-nl_XX,ro_RO-es_XX,es_XX-ro_RO,ro_RO-hi_IN,hi_IN-ro_RO,ro_RO-zh_CN,zh_CN-ro_RO,ru_RU-es_XX,es_XX-ru_RU,ru_RU-hi_IN,hi_IN-ru_RU,ru_RU-zh_CN,zh_CN-ru_RU,si_LK-es_XX,es_XX-si_LK,si_LK-hi_IN,hi_IN-si_LK,si_LK-zh_CN,zh_CN-si_LK,tr_TR-es_XX,es_XX-tr_TR,tr_TR-hi_IN,hi_IN-tr_TR,tr_TR-zh_CN,zh_CN-tr_TR,vi_VN-es_XX,es_XX-vi_VN,vi_VN-hi_IN,hi_IN-vi_VN,vi_VN-zh_CN,zh_CN-vi_VN' \ - --lang-dict ${LANG_DICT} --lang-tok-style 'mbart' --sampling-method 'temperature' --sampling-temperature '1.0' > $GEN_TMP_DIR/${SRC}_${TGT}.gen -cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^T-" | cut -f2 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.hyp -cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^H-" | cut -f3 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.ref -${MULTIBLEU} $GEN_TMP_DIR/${SRC}_${TGT}.ref < $GEN_TMP_DIR/${SRC}_${TGT}.hyp diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/README.md deleted file mode 100644 index f8b36bc691cb8f5bf82942e07b6d9c014387bdd8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Linformer: Self-Attention with Linear Complexity (Wang et al., 2020) - -This example contains code to train Linformer models as described in our paper -[Linformer: Self-Attention with Linear Complexity](https://arxiv.org/abs/2006.04768). 
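
To make the "linear complexity" claim above concrete, here is a minimal, illustrative sketch of the Linformer idea — not the fairseq `linformer_roberta_base` implementation. Keys and values are projected along the sequence-length dimension from `n` down to a fixed `k` with learned matrices, so attention costs O(n·k) instead of O(n²). The class and parameter names (`LinearAttentionSketch`, `proj_k`, `proj_v`, the single-head layout) are invented for this example and are assumptions, not names from the repository.

```python
# Minimal single-head Linformer-style attention sketch (illustrative only).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttentionSketch(nn.Module):
    def __init__(self, embed_dim: int, seq_len: int, k: int = 256):
        super().__init__()
        self.q = nn.Linear(embed_dim, embed_dim)
        self.kv = nn.Linear(embed_dim, 2 * embed_dim)
        # Learned length-wise projections E, F with shape (k, n):
        # they compress the n keys/values down to k "landmark" rows.
        self.proj_k = nn.Parameter(torch.randn(k, seq_len) / math.sqrt(seq_len))
        self.proj_v = nn.Parameter(torch.randn(k, seq_len) / math.sqrt(seq_len))
        self.scale = embed_dim ** -0.5

    def forward(self, x):  # x: (batch, n, embed_dim)
        q = self.q(x)
        key, value = self.kv(x).chunk(2, dim=-1)
        key = torch.einsum("kn,bnd->bkd", self.proj_k, key)      # (b, k, d)
        value = torch.einsum("kn,bnd->bkd", self.proj_v, value)  # (b, k, d)
        attn = F.softmax(q @ key.transpose(1, 2) * self.scale, dim=-1)  # (b, n, k)
        return attn @ value  # (b, n, d)


x = torch.randn(2, 512, 64)
out = LinearAttentionSketch(embed_dim=64, seq_len=512, k=128)(x)
print(out.shape)  # torch.Size([2, 512, 64])
```

The attention map here is (n × k) rather than (n × n), which is where the linear scaling in sequence length comes from; the actual model adds multiple heads, sharing options for the projections, and the usual RoBERTa machinery.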
- -## Training a new Linformer RoBERTa model - -You can mostly follow the [RoBERTa pretraining README](/examples/roberta/README.pretraining.md), -updating your training command with `--user-dir examples/linformer/linformer_src --arch linformer_roberta_base`. - -## Citation - -If you use our work, please cite: - -```bibtex -@article{wang2020linformer, - title={Linformer: Self-Attention with Linear Complexity}, - author={Wang, Sinong and Li, Belinda and Khabsa, Madian and Fang, Han and Ma, Hao}, - journal={arXiv preprint arXiv:2006.04768}, - year={2020} -} -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py deleted file mode 100644 index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/__init__.py deleted file mode 100644 index d0b96b734c4b5e7cd5d295238d0764c05093dc27..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/distributed/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .distributed_timeout_wrapper import DistributedTimeoutWrapper -from .fully_sharded_data_parallel import fsdp_enable_wrap, fsdp_wrap, FullyShardedDataParallel -from .legacy_distributed_data_parallel import LegacyDistributedDataParallel -from .module_proxy_wrapper import ModuleProxyWrapper -from .tpu_distributed_data_parallel import TPUDistributedDataParallel - - -__all__ = [ - "DistributedTimeoutWrapper", - "fsdp_enable_wrap", - "fsdp_wrap", - "FullyShardedDataParallel", - "LegacyDistributedDataParallel", - "ModuleProxyWrapper", - "TPUDistributedDataParallel", -] diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/docs/vctk_example.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/docs/vctk_example.md deleted file mode 100644 index 2ba78f3f73d6ea30f9de89150fbbc9dd5923b6fa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_synthesis/docs/vctk_example.md +++ /dev/null @@ -1,51 +0,0 @@ -[[Back]](..) - -# VCTK - -[VCTK](https://datashare.ed.ac.uk/handle/10283/3443) is an open English speech corpus. We provide examples -for building [Transformer](https://arxiv.org/abs/1809.08895) models on this dataset. 
- - -## Data preparation -Download data, create splits and generate audio manifests with -```bash -python -m examples.speech_synthesis.preprocessing.get_vctk_audio_manifest \ - --output-data-root ${AUDIO_DATA_ROOT} \ - --output-manifest-root ${AUDIO_MANIFEST_ROOT} -``` - -Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with -```bash -python -m examples.speech_synthesis.preprocessing.get_feature_manifest \ - --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \ - --output-root ${FEATURE_MANIFEST_ROOT} \ - --ipa-vocab --use-g2p -``` -where we use phoneme inputs (`--ipa-vocab --use-g2p`) as example. - -To denoise audio and trim leading/trailing silence using signal processing based VAD, run -```bash -for SPLIT in dev test train; do - python -m examples.speech_synthesis.preprocessing.denoise_and_vad_audio \ - --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \ - --output-dir ${PROCESSED_DATA_ROOT} \ - --denoise --vad --vad-agg-level 3 -done -``` - -## Training -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#transformer).) - -## Inference -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#inference).) - -## Automatic Evaluation -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#automatic-evaluation).) - -## Results - -| --arch | Params | Test MCD | Model | -|---|---|---|---| -| tts_transformer | 54M | 3.4 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/vctk_transformer_phn.tar) | - -[[Back]](..) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/logging/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/logging/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/bart/hub_interface.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/bart/hub_interface.py deleted file mode 100644 index 4d47d9751837c744b1d0d460117b78fcbeeb12d8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/bart/hub_interface.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import copy -import logging -from typing import Dict, List - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import encoders -from fairseq.hub_utils import GeneratorHubInterface -from omegaconf import open_dict - - -logger = logging.getLogger(__name__) - - -class BARTHubInterface(GeneratorHubInterface): - """A simple PyTorch Hub interface to BART. - - Usage: https://github.com/pytorch/fairseq/tree/main/examples/bart - """ - - def __init__(self, cfg, task, model): - super().__init__(cfg, task, [model]) - self.model = self.models[0] - - def encode( - self, sentence: str, *addl_sentences, no_separator=True - ) -> torch.LongTensor: - """ - BPE-encode a sentence (or multiple sentences). - - Every sequence begins with a beginning-of-sentence (``) symbol. - Every sentence ends with an end-of-sentence (``). - - Example (single sentence): ` a b c ` - Example (sentence pair): ` d e f 1 2 3 ` - - The BPE encoding follows GPT-2. One subtle detail is that the GPT-2 BPE - requires leading spaces. 
For example:: - - >>> bart.encode('Hello world').tolist() - [0, 31414, 232, 2] - >>> bart.encode(' world').tolist() - [0, 232, 2] - >>> bart.encode('world').tolist() - [0, 8331, 2] - """ - tokens = self.bpe.encode(sentence) - if len(tokens.split(" ")) > min(self.max_positions) - 2: - tokens = " ".join(tokens.split(" ")[: min(self.max_positions) - 2]) - bpe_sentence = " " + tokens + " " - for s in addl_sentences: - bpe_sentence += " " if not no_separator else "" - bpe_sentence += " " + self.bpe.encode(s) + " " - tokens = self.task.source_dictionary.encode_line(bpe_sentence, append_eos=False) - return tokens.long() - - def decode(self, tokens: torch.LongTensor): - assert tokens.dim() == 1 - tokens = tokens.cpu().numpy() - if tokens[0] == self.task.source_dictionary.bos(): - tokens = tokens[1:] # remove - eos_mask = tokens == self.task.source_dictionary.eos() - doc_mask = eos_mask[1:] & eos_mask[:-1] - sentences = np.split(tokens, doc_mask.nonzero()[0] + 1) - sentences = [ - self.bpe.decode(self.task.source_dictionary.string(s)) for s in sentences - ] - if len(sentences) == 1: - return sentences[0] - return sentences - - def _build_sample(self, src_tokens: List[torch.LongTensor]): - # assert torch.is_tensor(src_tokens) - dataset = self.task.build_dataset_for_inference( - src_tokens, - [x.numel() for x in src_tokens], - ) - sample = dataset.collater(dataset) - sample = utils.apply_to_sample(lambda tensor: tensor.to(self.device), sample) - return sample - - def generate( - self, - tokenized_sentences: List[torch.LongTensor], - *args, - inference_step_args=None, - skip_invalid_size_inputs=False, - **kwargs - ) -> List[List[Dict[str, torch.Tensor]]]: - inference_step_args = inference_step_args or {} - if "prefix_tokens" in inference_step_args: - raise NotImplementedError("prefix generation not implemented for BART") - res = [] - for batch in self._build_batches(tokenized_sentences, skip_invalid_size_inputs): - src_tokens = batch['net_input']['src_tokens'] - inference_step_args["prefix_tokens"] =src_tokens.new_full( - (src_tokens.size(0), 1), fill_value=self.task.source_dictionary.bos() - ).to(device=self.device) - results = super().generate( - src_tokens, - *args, - inference_step_args=inference_step_args, - skip_invalid_size_inputs=skip_invalid_size_inputs, - **kwargs - ) - for id, hypos in zip(batch['id'].tolist(), results): - res.append((id, hypos)) - res = [hypos for _, hypos in sorted(res, key=lambda x: x[0])] - return res - - def extract_features( - self, tokens: torch.LongTensor, return_all_hiddens: bool = False - ) -> torch.Tensor: - if tokens.dim() == 1: - tokens = tokens.unsqueeze(0) - if tokens.size(-1) > min(self.model.max_positions()): - raise ValueError( - "tokens exceeds maximum length: {} > {}".format( - tokens.size(-1), self.model.max_positions() - ) - ) - tokens.to(device=self.device), - prev_output_tokens = tokens.clone() - - prev_output_tokens[:, 0] = tokens.gather( - 1, - (tokens.ne(self.task.source_dictionary.pad()).sum(dim=1) - 1).unsqueeze(-1), - ).squeeze() - - prev_output_tokens[:, 1:] = tokens[:, :-1] - features, extra = self.model( - src_tokens=tokens, - src_lengths=None, - prev_output_tokens=prev_output_tokens, - features_only=True, - return_all_hiddens=return_all_hiddens, - ) - if return_all_hiddens: - # convert from T x B x C -> B x T x C - inner_states = extra["inner_states"] - return [inner_state.transpose(0, 1) for inner_state in inner_states] - else: - return features # just the last layer's features - - def register_classification_head( - self, name: str, 
num_classes: int = None, embedding_size: int = None, **kwargs - ): - self.model.register_classification_head( - name, num_classes=num_classes, embedding_size=embedding_size, **kwargs - ) - - def predict(self, head: str, tokens: torch.LongTensor, return_logits: bool = False): - if tokens.dim() == 1: - tokens = tokens.unsqueeze(0) - features = self.extract_features(tokens.to(device=self.device)) - sentence_representation = features[ - tokens.eq(self.task.source_dictionary.eos()), : - ].view(features.size(0), -1, features.size(-1))[:, -1, :] - - logits = self.model.classification_heads[head](sentence_representation) - if return_logits: - return logits - return F.log_softmax(logits, dim=-1) - - def fill_mask( - self, - masked_inputs: List[str], - topk: int = 5, - match_source_len: bool = True, - **generate_kwargs - ): - masked_token = '' - batch_tokens = [] - for masked_input in masked_inputs: - assert masked_token in masked_input, \ - "please add one {} token for the input".format(masked_token) - - text_spans = masked_input.split(masked_token) - text_spans_bpe = (' {0} '.format(masked_token)).join( - [self.bpe.encode(text_span.rstrip()) for text_span in text_spans] - ).strip() - tokens = self.task.source_dictionary.encode_line( - ' ' + text_spans_bpe + ' ', - append_eos=False, - add_if_not_exist=False, - ).long() - batch_tokens.append(tokens) - - # ensure beam size is at least as big as topk - generate_kwargs['beam'] = max( - topk, - generate_kwargs.get('beam', -1), - ) - generate_kwargs['match_source_len'] = match_source_len - batch_hypos = self.generate(batch_tokens, **generate_kwargs) - - return [ - [(self.decode(hypo['tokens']), hypo['score']) for hypo in hypos[:topk]] - for hypos in batch_hypos - ] diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/unsupervised_mt/eval.sh b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/unsupervised_mt/eval.sh deleted file mode 100644 index 03b773ed5a522eb82186fea8ffbb6c557e14b6d3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/criss/unsupervised_mt/eval.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -SRC=si_LK -TGT=en_XX -MODEL=criss_checkpoints/criss.3rd.pt - -MULTIBLEU=mosesdecoder/scripts/generic/multi-bleu.perl -MOSES=mosesdecoder -REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl -NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl -TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl -GEN_TMP_DIR=gen_tmp -LANG_DICT=criss_checkpoints/lang_dict.txt - -if [ ! 
-d "mosesdecoder" ]; then - git clone https://github.com/moses-smt/mosesdecoder -fi -mkdir -p $GEN_TMP_DIR -fairseq-generate data_tmp/${SRC}-${TGT}-flores \ - --task translation_multi_simple_epoch \ - --max-tokens 2000 \ - --path ${MODEL} \ - --skip-invalid-size-inputs-valid-test \ - --beam 5 --lenpen 1.0 --gen-subset test \ - --remove-bpe=sentencepiece \ - --source-lang ${SRC} --target-lang ${TGT} \ - --decoder-langtok --lang-pairs 'en_XX-ar_AR,en_XX-de_DE,en_XX-es_XX,en_XX-fr_XX,en_XX-hi_IN,en_XX-it_IT,en_XX-ja_XX,en_XX-ko_KR,en_XX-nl_XX,en_XX-ru_RU,en_XX-zh_CN,en_XX-tr_TR,en_XX-vi_VN,en_XX-ro_RO,en_XX-my_MM,en_XX-ne_NP,en_XX-si_LK,en_XX-cs_CZ,en_XX-lt_LT,en_XX-kk_KZ,en_XX-gu_IN,en_XX-fi_FI,en_XX-et_EE,en_XX-lv_LV,ar_AR-en_XX,cs_CZ-en_XX,de_DE-en_XX,es_XX-en_XX,et_EE-en_XX,fi_FI-en_XX,fr_XX-en_XX,gu_IN-en_XX,hi_IN-en_XX,it_IT-en_XX,ja_XX-en_XX,kk_KZ-en_XX,ko_KR-en_XX,lt_LT-en_XX,lv_LV-en_XX,my_MM-en_XX,ne_NP-en_XX,nl_XX-en_XX,ro_RO-en_XX,ru_RU-en_XX,si_LK-en_XX,tr_TR-en_XX,vi_VN-en_XX,zh_CN-en_XX,ar_AR-es_XX,es_XX-ar_AR,ar_AR-hi_IN,hi_IN-ar_AR,ar_AR-zh_CN,zh_CN-ar_AR,cs_CZ-es_XX,es_XX-cs_CZ,cs_CZ-hi_IN,hi_IN-cs_CZ,cs_CZ-zh_CN,zh_CN-cs_CZ,de_DE-es_XX,es_XX-de_DE,de_DE-hi_IN,hi_IN-de_DE,de_DE-zh_CN,zh_CN-de_DE,es_XX-hi_IN,hi_IN-es_XX,es_XX-zh_CN,zh_CN-es_XX,et_EE-es_XX,es_XX-et_EE,et_EE-hi_IN,hi_IN-et_EE,et_EE-zh_CN,zh_CN-et_EE,fi_FI-es_XX,es_XX-fi_FI,fi_FI-hi_IN,hi_IN-fi_FI,fi_FI-zh_CN,zh_CN-fi_FI,fr_XX-es_XX,es_XX-fr_XX,fr_XX-hi_IN,hi_IN-fr_XX,fr_XX-zh_CN,zh_CN-fr_XX,gu_IN-es_XX,es_XX-gu_IN,gu_IN-hi_IN,hi_IN-gu_IN,gu_IN-zh_CN,zh_CN-gu_IN,hi_IN-zh_CN,zh_CN-hi_IN,it_IT-es_XX,es_XX-it_IT,it_IT-hi_IN,hi_IN-it_IT,it_IT-zh_CN,zh_CN-it_IT,ja_XX-es_XX,es_XX-ja_XX,ja_XX-hi_IN,hi_IN-ja_XX,ja_XX-zh_CN,zh_CN-ja_XX,kk_KZ-es_XX,es_XX-kk_KZ,kk_KZ-hi_IN,hi_IN-kk_KZ,kk_KZ-zh_CN,zh_CN-kk_KZ,ko_KR-es_XX,es_XX-ko_KR,ko_KR-hi_IN,hi_IN-ko_KR,ko_KR-zh_CN,zh_CN-ko_KR,lt_LT-es_XX,es_XX-lt_LT,lt_LT-hi_IN,hi_IN-lt_LT,lt_LT-zh_CN,zh_CN-lt_LT,lv_LV-es_XX,es_XX-lv_LV,lv_LV-hi_IN,hi_IN-lv_LV,lv_LV-zh_CN,zh_CN-lv_LV,my_MM-es_XX,es_XX-my_MM,my_MM-hi_IN,hi_IN-my_MM,my_MM-zh_CN,zh_CN-my_MM,ne_NP-es_XX,es_XX-ne_NP,ne_NP-hi_IN,hi_IN-ne_NP,ne_NP-zh_CN,zh_CN-ne_NP,nl_XX-es_XX,es_XX-nl_XX,nl_XX-hi_IN,hi_IN-nl_XX,nl_XX-zh_CN,zh_CN-nl_XX,ro_RO-es_XX,es_XX-ro_RO,ro_RO-hi_IN,hi_IN-ro_RO,ro_RO-zh_CN,zh_CN-ro_RO,ru_RU-es_XX,es_XX-ru_RU,ru_RU-hi_IN,hi_IN-ru_RU,ru_RU-zh_CN,zh_CN-ru_RU,si_LK-es_XX,es_XX-si_LK,si_LK-hi_IN,hi_IN-si_LK,si_LK-zh_CN,zh_CN-si_LK,tr_TR-es_XX,es_XX-tr_TR,tr_TR-hi_IN,hi_IN-tr_TR,tr_TR-zh_CN,zh_CN-tr_TR,vi_VN-es_XX,es_XX-vi_VN,vi_VN-hi_IN,hi_IN-vi_VN,vi_VN-zh_CN,zh_CN-vi_VN' \ - --lang-dict ${LANG_DICT} --lang-tok-style 'mbart' --sampling-method 'temperature' --sampling-temperature '1.0' > $GEN_TMP_DIR/${SRC}_${TGT}.gen -cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^T-" | cut -f2 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.hyp -cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^H-" | cut -f3 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.ref -${MULTIBLEU} $GEN_TMP_DIR/${SRC}_${TGT}.ref < $GEN_TMP_DIR/${SRC}_${TGT}.hyp diff --git a/spaces/Omar7Hany/Conv_Kickstart/README.md b/spaces/Omar7Hany/Conv_Kickstart/README.md deleted file mode 100644 index 2941ef4d118e449b70cbe52c4e8b326b1b3d5fbd..0000000000000000000000000000000000000000 --- a/spaces/Omar7Hany/Conv_Kickstart/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Conv Kickstart 
-emoji: 🏢 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/masks/countless/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/evaluation/masks/countless/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/fetch_data/celebahq_gen_masks.sh b/spaces/OpenGVLab/InternGPT/third-party/lama/fetch_data/celebahq_gen_masks.sh deleted file mode 100644 index 190ccfd53038711df34d402ecf1ee729a7c1e254..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/fetch_data/celebahq_gen_masks.sh +++ /dev/null @@ -1,29 +0,0 @@ -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thick_256.yaml \ -celeba-hq-dataset/val_source_256/ \ -celeba-hq-dataset/val_256/random_thick_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thin_256.yaml \ -celeba-hq-dataset/val_source_256/ \ -celeba-hq-dataset/val_256/random_thin_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_medium_256.yaml \ -celeba-hq-dataset/val_source_256/ \ -celeba-hq-dataset/val_256/random_medium_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thick_256.yaml \ -celeba-hq-dataset/visual_test_source_256/ \ -celeba-hq-dataset/visual_test_256/random_thick_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_thin_256.yaml \ -celeba-hq-dataset/visual_test_source_256/ \ -celeba-hq-dataset/visual_test_256/random_thin_256/ - -python3 bin/gen_mask_dataset.py \ -$(pwd)/configs/data_gen/random_medium_256.yaml \ -celeba-hq-dataset/visual_test_source_256/ \ -celeba-hq-dataset/visual_test_256/random_medium_256/ diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/batchnorm.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/batchnorm.py deleted file mode 100644 index 18318965335b37cc671004a6aceda3229dc7b477..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/batchnorm.py +++ /dev/null @@ -1,329 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
- -import collections - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -from .comm import SyncMaster - -__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d'] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dementions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.001, affine=True): - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - # customed batch norm statistics - self._moving_average_fraction = 1. - momentum - self.register_buffer('_tmp_running_mean', torch.zeros(self.num_features)) - self.register_buffer('_tmp_running_var', torch.ones(self.num_features)) - self.register_buffer('_running_iter', torch.ones(1)) - self._tmp_running_mean = self.running_mean.clone() * self._running_iter - self._tmp_running_var = self.running_var.clone() * self._running_iter - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. 
- if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _add_weighted(self, dest, delta, alpha=1, beta=1, bias=0): - """return *dest* by `dest := dest*alpha + delta*beta + bias`""" - return dest * alpha + delta * beta + bias - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - self._tmp_running_mean = self._add_weighted(self._tmp_running_mean, mean.data, alpha=self._moving_average_fraction) - self._tmp_running_var = self._add_weighted(self._tmp_running_var, unbias_var.data, alpha=self._moving_average_fraction) - self._running_iter = self._add_weighted(self._running_iter, 1, alpha=self._moving_average_fraction) - - self.running_mean = self._tmp_running_mean / self._running_iter - self.running_var = self._tmp_running_var / self._running_iter - - return mean, bias_var.clamp(self.eps) ** -0.5 - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. 
- - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm1d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm2d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm3d, self)._check_input_dim(input) diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/base.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/base.py deleted file mode 100644 index 1c60a6021e8dda4c27bdd8365ba2e298ae6acf76..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/base.py +++ /dev/null @@ -1,84 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2020 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -from dataclasses import dataclass, fields - - -class Transform: - - def collate(self, lst_datastruct): - from ..tools import collate_tensor_with_padding - example = lst_datastruct[0] - - def collate_or_none(key): - if example[key] is None: - return None - key_lst = [x[key] for x in lst_datastruct] - return collate_tensor_with_padding(key_lst) - - kwargs = {key: collate_or_none(key) for key in example.datakeys} - - return self.Datastruct(**kwargs) - - -# Inspired from SMPLX library -# need to define "datakeys" and transforms -@dataclass -class Datastruct: - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - self.__dict__[key] = value - - def get(self, key, default=None): - return getattr(self, key, default) - - def __iter__(self): - return self.keys() - - def keys(self): - keys = [t.name for t in fields(self)] - return iter(keys) - - def values(self): - values = [getattr(self, t.name) for t in fields(self)] - return iter(values) - - def items(self): - data = [(t.name, getattr(self, t.name)) for t in fields(self)] - return iter(data) - - def to(self, *args, **kwargs): - for key in self.datakeys: - if self[key] is not None: - self[key] = self[key].to(*args, **kwargs) - return self - - @property - def device(self): - return self[self.datakeys[0]].device - - def detach(self): - - def detach_or_none(tensor): - if tensor is not None: - return tensor.detach() - return None - - kwargs = {key: detach_or_none(self[key]) for key in self.datakeys} - return self.transforms.Datastruct(**kwargs) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/__init__.py deleted file mode 100644 index 2051b85f7e59bff7bdbaa131849ce8cd31f059a4..0000000000000000000000000000000000000000 --- 
a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/fileio/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .file_client import BaseStorageBackend, FileClient -from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler -from .io import dump, load, register_handler -from .parse import dict_from_file, list_from_file - -__all__ = [ - 'BaseStorageBackend', 'FileClient', 'load', 'dump', 'register_handler', - 'BaseFileHandler', 'JsonHandler', 'PickleHandler', 'YamlHandler', - 'list_from_file', 'dict_from_file' -] diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/core/evaluation/__init__.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/core/evaluation/__init__.py deleted file mode 100644 index f7cc4b23413a0639e9de00eeb0bf600632d2c6cd..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/core/evaluation/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .class_names import get_classes, get_palette -from .eval_hooks import DistEvalHook, EvalHook -from .metrics import eval_metrics, mean_dice, mean_fscore, mean_iou - -__all__ = [ - 'EvalHook', 'DistEvalHook', 'mean_dice', 'mean_iou', 'mean_fscore', - 'eval_metrics', 'get_classes', 'get_palette' -] diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py deleted file mode 100644 index 93452b64696dc9b2cd2a347b8051729864bf9510..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/samplers/iteration_based_batch_sampler.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-from torch.utils.data.sampler import BatchSampler - - -class IterationBasedBatchSampler(BatchSampler): - """ - Wraps a BatchSampler, resampling from it until - a specified number of iterations have been sampled - """ - - def __init__(self, batch_sampler, num_iterations, start_iter=0): - self.batch_sampler = batch_sampler - self.num_iterations = num_iterations - self.start_iter = start_iter - - def __iter__(self): - iteration = self.start_iter - while iteration <= self.num_iterations: - # if the underlying sampler has a set_epoch method, like - # DistributedSampler, used for making each process see - # a different split of the dataset, then set it - if hasattr(self.batch_sampler.sampler, "set_epoch"): - self.batch_sampler.sampler.set_epoch(iteration) - for batch in self.batch_sampler: - iteration += 1 - if iteration > self.num_iterations: - break - yield batch - - def __len__(self): - return self.num_iterations diff --git a/spaces/Pranay009/FACE2COMIC/README.md b/spaces/Pranay009/FACE2COMIC/README.md deleted file mode 100644 index 6d8ab061f47e9e376fefd6fa317d28f1bf7df514..0000000000000000000000000000000000000000 --- a/spaces/Pranay009/FACE2COMIC/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FACE2COMIC -emoji: 🏃 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: artistic-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/deadlock.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/deadlock.py deleted file mode 100644 index 8abd1bbeea5909e664cf816c020bd7c37effdb66..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/utils/deadlock.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
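A minimal usage sketch for the `IterationBasedBatchSampler` removed above; it assumes only that class plus the standard `torch.utils.data` samplers, and the toy dataset and batch size are illustrative:

from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = list(range(10))  # stand-in dataset of 10 items
base = BatchSampler(RandomSampler(dataset), batch_size=4, drop_last=False)

# keep drawing batches from the 3-batch "epoch" until 7 iterations have been yielded
sampler = IterationBasedBatchSampler(base, num_iterations=7)
print(len(sampler))  # 7
for step, batch in enumerate(sampler, start=1):
    print(step, batch)  # batch is a list of dataset indices

Wrapping the base sampler this way makes the loader's length track the iteration budget rather than the dataset size, which is what iteration-based training loops expect.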
- -import logging -import os -from queue import Queue, Empty -import signal -import sys -import threading -import traceback - -logger = logging.getLogger(__name__) - - -class DeadlockDetect: - def __init__(self, use: bool = False, timeout: float = 120.): - self.use = use - self.timeout = timeout - self._queue: Queue = Queue() - - def update(self, stage: str): - if self.use: - self._queue.put(stage) - - def __enter__(self): - if self.use: - self._thread = threading.Thread(target=self._detector_thread) - self._thread.start() - - def __exit__(self, exc_type, exc_val, exc_tb): - if self.use: - self._queue.put(None) - self._thread.join() - - def _detector_thread(self): - logger.debug("Deadlock detector started") - last_stage = "init" - while True: - try: - stage = self._queue.get(timeout=self.timeout) - except Empty: - break - if stage is None: - logger.debug("Exiting deadlock detector thread") - return - else: - last_stage = stage - logger.error("Deadlock detector timed out, last stage was %s", last_stage) - for th in threading.enumerate(): - print(th, file=sys.stderr) - traceback.print_stack(sys._current_frames()[th.ident]) - print(file=sys.stderr) - sys.stdout.flush() - sys.stderr.flush() - os.kill(os.getpid(), signal.SIGKILL) diff --git a/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles/app.py b/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles/app.py deleted file mode 100644 index 12a38d6a3600449362ee2f333ce5194edc2324d8..0000000000000000000000000000000000000000 --- a/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles/app.py +++ /dev/null @@ -1,606 +0,0 @@ -import os -import requests -import json -import base64 - -os.system('git clone https://github.com/ggerganov/whisper.cpp.git') -os.system('make -C ./whisper.cpp') -os.system('bash ./whisper.cpp/models/download-ggml-model.sh small') -os.system('bash ./whisper.cpp/models/download-ggml-model.sh base') -os.system('bash ./whisper.cpp/models/download-ggml-model.sh medium') -os.system('bash ./whisper.cpp/models/download-ggml-model.sh large') -os.system('bash ./whisper.cpp/models/download-ggml-model.sh base.en') - - -import gradio as gr -from pathlib import Path -import pysrt -import pandas as pd -import re -import time - -from pytube import YouTube - -headers = {'Authorization': os.environ['DeepL_API_KEY']} - - -import torch - -whisper_models = ["base", "small", "medium", "large", "base.en"] - -custom_models = ["belarus-small"] - -combined_models = [] -combined_models.extend(whisper_models) -combined_models.extend(custom_models) - -usage = requests.get('https://api-free.deepl.com/v2/usage', headers=headers) -usage = json.loads(usage.text) -deepL_character_usage = str(usage['character_count']) -print("deepL_character_usage") - - - -LANGUAGES = { - "en": "English", - "zh": "Chinese", - "de": "German", - "es": "Spanish", - "ru": "Russian", - "ko": "Korean", - "fr": "French", - "ja": "Japanese", - "pt": "Portuguese", - "tr": "Turkish", - "pl": "Polish", - "ca": "Catalan", - "nl": "Dutch", - "ar": "Arabic", - "sv": "Swedish", - "it": "Italian", - "id": "Indonesian", - "hi": "Hindi", - "fi": "Finnish", - "vi": "Vietnamese", - "he": "Hebrew", - "uk": "Ukrainian", - "el": "Greek", - "ms": "Malay", - "cs": "Czech", - "ro": "Romanian", - "da": "Danish", - "hu": "Hungarian", - "ta": "Tamil", - "no": "Norwegian", - "th": "Thai", - "ur": "Urdu", - "hr": "Croatian", - "bg": "Bulgarian", - "lt": "Lithuanian", - "la": "Latin", - "mi": "Maori", - "ml": "Malayalam", - "cy": "Welsh", - "sk": "Slovak", - "te": "Telugu", - "fa": "Persian", - "lv": 
"Latvian", - "bn": "Bengali", - "sr": "Serbian", - "az": "Azerbaijani", - "sl": "Slovenian", - "kn": "Kannada", - "et": "Estonian", - "mk": "Macedonian", - "br": "Breton", - "eu": "Basque", - "is": "Icelandic", - "hy": "Armenian", - "ne": "Nepali", - "mn": "Mongolian", - "bs": "Bosnian", - "kk": "Kazakh", - "sq": "Albanian", - "sw": "Swahili", - "gl": "Galician", - "mr": "Marathi", - "pa": "Punjabi", - "si": "Sinhala", - "km": "Khmer", - "sn": "Shona", - "yo": "Yoruba", - "so": "Somali", - "af": "Afrikaans", - "oc": "Occitan", - "ka": "Georgian", - "be": "Belarusian", - "tg": "Tajik", - "sd": "Sindhi", - "gu": "Gujarati", - "am": "Amharic", - "yi": "Yiddish", - "lo": "Lao", - "uz": "Uzbek", - "fo": "Faroese", - "ht": "Haitian creole", - "ps": "Pashto", - "tk": "Turkmen", - "nn": "Nynorsk", - "mt": "Maltese", - "sa": "Sanskrit", - "lb": "Luxembourgish", - "my": "Myanmar", - "bo": "Tibetan", - "tl": "Tagalog", - "mg": "Malagasy", - "as": "Assamese", - "tt": "Tatar", - "haw": "Hawaiian", - "ln": "Lingala", - "ha": "Hausa", - "ba": "Bashkir", - "jw": "Javanese", - "su": "Sundanese", -} - -# language code lookup by name, with a few language aliases -source_languages = { - **{language: code for code, language in LANGUAGES.items()}, - "Burmese": "my", - "Valencian": "ca", - "Flemish": "nl", - "Haitian": "ht", - "Letzeburgesch": "lb", - "Pushto": "ps", - "Panjabi": "pa", - "Moldavian": "ro", - "Moldovan": "ro", - "Sinhalese": "si", - "Castilian": "es", - "Let the model analyze": "Let the model analyze" -} - -DeepL_language_codes_for_translation = { -"Bulgarian": "BG", -"Czech": "CS", -"Danish": "DA", -"German": "DE", -"Greek": "EL", -"English": "EN", -"Spanish": "ES", -"Estonian": "ET", -"Finnish": "FI", -"French": "FR", -"Hungarian": "HU", -"Indonesian": "ID", -"Italian": "IT", -"Japanese": "JA", -"Lithuanian": "LT", -"Latvian": "LV", -"Dutch": "NL", -"Polish": "PL", -"Portuguese": "PT", -"Romanian": "RO", -"Russian": "RU", -"Slovak": "SK", -"Slovenian": "SL", -"Swedish": "SV", -"Turkish": "TR", -"Ukrainian": "UK", -"Chinese": "ZH" -} - - -transcribe_options = dict(beam_size=3, best_of=3, without_timestamps=False) - - -source_language_list = [key[0] for key in source_languages.items()] -translation_models_list = [key[0] for key in DeepL_language_codes_for_translation.items()] - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -print("DEVICE IS: ") -print(device) - -videos_out_path = Path("./videos_out") -videos_out_path.mkdir(parents=True, exist_ok=True) - - -def get_youtube(video_url): - yt = YouTube(video_url) - abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - print("LADATATTU POLKUUN") - print(abs_video_path) - - - return abs_video_path - -def speech_to_text(video_file_path, selected_source_lang, whisper_model): - """ - # Youtube with translated subtitles using OpenAI Whisper and Opus-MT models. - # Currently supports only English audio - This space allows you to: - 1. Download youtube video with a given url - 2. Watch it in the first video component - 3. Run automatic speech recognition on the video using fast Whisper models - 4. Translate the recognized transcriptions to 26 languages supported by deepL (If free API usage for the month is not yet fully consumed) - 5. Download generated subtitles in .vtt and .srt formats - 6. 
Watch the the original video with generated subtitles - - Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper - This space is using c++ implementation by https://github.com/ggerganov/whisper.cpp - """ - - if(video_file_path == None): - raise ValueError("Error no video input") - print(video_file_path) - try: - - - - _,file_ending = os.path.splitext(f'{video_file_path}') - print(f'file enging is {file_ending}') - print("starting conversion to wav") - os.system(f'ffmpeg -i "{video_file_path}" -ar 16000 -ac 1 -c:a pcm_s16le "{video_file_path.replace(file_ending, ".wav")}"') - print("conversion to wav ready") - - except Exception as e: - raise RuntimeError("Error Running inference with local model", e) - - try: - - print("starting whisper c++") - srt_path = str(video_file_path.replace(file_ending, ".wav")) + ".srt" - os.system(f'rm -f {srt_path}') - if selected_source_lang == "Let the model analyze": - os.system(f'./whisper.cpp/main "{video_file_path.replace(file_ending, ".wav")}" -t 4 -l "auto" -m ./whisper.cpp/models/ggml-{whisper_model}.bin -osrt') - else: - if whisper_model in custom_models: - os.system(f'./whisper.cpp/main "{video_file_path.replace(file_ending, ".wav")}" -t 4 -l {source_languages.get(selected_source_lang)} -m ./converted_models/ggml-{whisper_model}.bin -osrt') - else: - os.system(f'./whisper.cpp/main "{video_file_path.replace(file_ending, ".wav")}" -t 4 -l {source_languages.get(selected_source_lang)} -m ./whisper.cpp/models/ggml-{whisper_model}.bin -osrt') - print("starting whisper done with whisper") - except Exception as e: - raise RuntimeError("Error running Whisper cpp model") - - try: - - df = pd.DataFrame(columns = ['start','end','text']) - srt_path = str(video_file_path.replace(file_ending, ".wav")) + ".srt" - subs = pysrt.open(srt_path) - - - objects = [] - for sub in subs: - - - start_hours = str(str(sub.start.hours) + "00")[0:2] if len(str(sub.start.hours)) == 2 else str("0" + str(sub.start.hours) + "00")[0:2] - end_hours = str(str(sub.end.hours) + "00")[0:2] if len(str(sub.end.hours)) == 2 else str("0" + str(sub.end.hours) + "00")[0:2] - - start_minutes = str(str(sub.start.minutes) + "00")[0:2] if len(str(sub.start.minutes)) == 2 else str("0" + str(sub.start.minutes) + "00")[0:2] - end_minutes = str(str(sub.end.minutes) + "00")[0:2] if len(str(sub.end.minutes)) == 2 else str("0" + str(sub.end.minutes) + "00")[0:2] - - start_seconds = str(str(sub.start.seconds) + "00")[0:2] if len(str(sub.start.seconds)) == 2 else str("0" + str(sub.start.seconds) + "00")[0:2] - end_seconds = str(str(sub.end.seconds) + "00")[0:2] if len(str(sub.end.seconds)) == 2 else str("0" + str(sub.end.seconds) + "00")[0:2] - - start_millis = str(str(sub.start.milliseconds) + "000")[0:3] - end_millis = str(str(sub.end.milliseconds) + "000")[0:3] - objects.append([sub.text, f'{start_hours}:{start_minutes}:{start_seconds}.{start_millis}', f'{end_hours}:{end_minutes}:{end_seconds}.{end_millis}']) - - for object in objects: - srt_to_df = { - 'start': [object[1]], - 'end': [object[2]], - 'text': [object[0]] - } - - df = pd.concat([df, pd.DataFrame(srt_to_df)]) - except Exception as e: - print("Error creating srt df") - - - try: - usage = requests.get('https://api-free.deepl.com/v2/usage', headers=headers) - usage = json.loads(usage.text) - char_count = str(usage['character_count']) - - print('Usage is at: ' + str(usage['character_count']) + ' characters') - - if usage['character_count'] >= 490000: - print("USAGE CLOSE TO LIMIT") - - except Exception as e: - 
print('Error with DeepL API requesting usage count') - - - return df - - - - -def translate_transcriptions(df, selected_translation_lang_2): - if selected_translation_lang_2 is None: - selected_translation_lang_2 = 'English' - df.reset_index(inplace=True) - - print("start_translation") - translations = [] - - - - text_combined = "" - for i, sentence in enumerate(df['text']): - if i == 0: - text_combined = sentence - else: - text_combined = text_combined + '\n' + sentence - - data = {'text': text_combined, - 'tag_spitting': 'xml', - 'target_lang': DeepL_language_codes_for_translation.get(selected_translation_lang_2) - } - try: - - usage = requests.get('https://api-free.deepl.com/v2/usage', headers=headers) - usage = json.loads(usage.text) - deepL_character_usage = str(usage['character_count']) - try: - print('Usage is at: ' + deepL_character_usage + 'characters') - except Exception as e: - print(e) - - if int(deepL_character_usage) <= 490000: - print("STILL CHARACTERS LEFT") - response = requests.post('https://api-free.deepl.com/v2/translate', headers=headers, data=data) - - # Print the response from the server - translated_sentences = json.loads(response.text) - translated_sentences = translated_sentences['translations'][0]['text'].split('\n') - df['translation'] = translated_sentences - - else: - df['translation'] = df['text'] - - except Exception as e: - print("EXCEPTION WITH DEEPL API") - print(e) - df['translation'] = df['text'] - - print("translations done") - - print("Starting SRT-file creation") - print(df.head()) - df.reset_index(inplace=True) - with open('subtitles.vtt','w', encoding="utf-8") as file: - print("Starting WEBVTT-file creation") - - for i in range(len(df)): - if i == 0: - file.write('WEBVTT') - file.write('\n') - - else: - file.write(str(i+1)) - file.write('\n') - start = df.iloc[i]['start'] - - - file.write(f"{start.strip()}") - - stop = df.iloc[i]['end'] - - - file.write(' --> ') - file.write(f"{stop}") - file.write('\n') - file.writelines(df.iloc[i]['translation']) - if int(i) != len(df)-1: - file.write('\n\n') - - print("WEBVTT DONE") - - with open('subtitles.srt','w', encoding="utf-8") as file: - print("Starting SRT-file creation") - - for i in range(len(df)): - file.write(str(i+1)) - file.write('\n') - start = df.iloc[i]['start'] - - - file.write(f"{start.strip()}") - - stop = df.iloc[i]['end'] - - - file.write(' --> ') - file.write(f"{stop}") - file.write('\n') - file.writelines(df.iloc[i]['translation']) - if int(i) != len(df)-1: - file.write('\n\n') - - print("SRT DONE") - subtitle_files = ['subtitles.vtt','subtitles.srt'] - - return df, subtitle_files - -# def burn_srt_to_video(srt_file, video_in): - -# print("Starting creation of video wit srt") - -# try: -# video_out = video_in.replace('.mp4', '_out.mp4') -# print(os.system('ls -lrth')) -# print(video_in) -# print(video_out) -# command = 'ffmpeg -i "{}" -y -vf subtitles=./subtitles.srt "{}"'.format(video_in, video_out) -# os.system(command) - -# return video_out - -# except Exception as e: -# print(e) -# return video_out - -def create_video_player(subtitle_files, video_in): - - with open(video_in, "rb") as file: - video_base64 = base64.b64encode(file.read()) - with open('./subtitles.vtt', "rb") as file: - subtitle_base64 = base64.b64encode(file.read()) - - video_player = f''' - ''' - #video_player = gr.HTML(video_player) - return video_player - - - - -# ---- Gradio Layout ----- -video_in = gr.Video(label="Video file", mirror_webcam=False) -youtube_url_in = gr.Textbox(label="Youtube url", lines=1, 
interactive=True) -video_out = gr.Video(label="Video Out", mirror_webcam=False) - - - -df_init = pd.DataFrame(columns=['start','end','text', 'translation']) - -selected_source_lang = gr.Dropdown(choices=source_language_list, type="value", value="Let the model analyze", label="Spoken language in video", interactive=True) -selected_translation_lang_2 = gr.Dropdown(choices=translation_models_list, type="value", value="English", label="Into which language do you want the transcriptions translated?", interactive=True) -selected_whisper_model = gr.Dropdown(choices=whisper_models, type="value", value="base", label="Selected Whisper model", interactive=True) - -transcription_df = gr.DataFrame(value=df_init,label="Transcription dataframe", row_count=(0, "dynamic"), max_rows = 10, wrap=True, overflow_row_behaviour='paginate') -transcription_and_translation_df = gr.DataFrame(value=df_init,label="Transcription and translation dataframe", max_rows = 10, wrap=True, overflow_row_behaviour='paginate') - -subtitle_files = gr.File( - label="Download subtitle files (.vtt / .srt)", - file_count="multiple", - type="file", - interactive=False, - ) - -video_player = gr.HTML('

    video will be played here after you press the button at step 4') - - -demo = gr.Blocks(css=''' -#cut_btn, #reset_btn { align-self:stretch; } -#\\31 3 { max-width: 540px; } -.output-markdown {max-width: 65ch !important;} -''') -demo.encrypt = False - - - - -with demo: - transcription_var = gr.Variable() - - with gr.Row(): - with gr.Column(): - gr.Markdown(''' - ### This space allows you to: - 1. Download youtube video with a given url - 2. Watch it in the first video component - 3. Run automatic speech recognition on the video using fast Whisper models - 4. Translate the recognized transcriptions to 26 languages supported by deepL - 5. Download generated subtitles in .vtt and .srt formats - 6. Watch the the original video with generated subtitles - ''') - - with gr.Column(): - gr.Markdown(''' - ### 1. Copy any non-private Youtube video URL to box below or click one of the examples. - (But please **consider using short videos** so others won't get queued)
    - Then press button "1. Download Youtube video"-button: - ''') - examples = gr.Examples(examples= - [ "https://www.youtube.com/watch?v=nlMuHtV82q8&ab_channel=NothingforSale24", - "https://www.youtube.com/watch?v=JzPfMbG1vrE&ab_channel=ExplainerVideosByLauren", - "https://www.youtube.com/watch?v=S68vvV0kod8&ab_channel=Pearl-CohnTelevision"], - label="Examples", inputs=[youtube_url_in]) - # Inspiration from https://huggingface.co/spaces/vumichien/whisper-speaker-diarization - - with gr.Row(): - with gr.Column(): - youtube_url_in.render() - download_youtube_btn = gr.Button("Step 1. Download Youtube video") - download_youtube_btn.click(get_youtube, [youtube_url_in], [ - video_in]) - print(video_in) - - - with gr.Row(): - with gr.Column(): - video_in.render() - with gr.Column(): - gr.Markdown(''' - ##### Here you can start the transcription and translation process. - Be aware that processing will last some time. With base model it is around 3x speed - **Please select source language** for better transcriptions. Using 'Let the model analyze' makes mistakes sometimes and may lead to bad transcriptions - ''') - selected_source_lang.render() - selected_whisper_model.render() - transcribe_btn = gr.Button("Step 2. Transcribe audio") - transcribe_btn.click(speech_to_text, [video_in, selected_source_lang, selected_whisper_model], [transcription_df]) - - - with gr.Row(): - gr.Markdown(''' - ##### Here you will get transcription output - ##### ''') - - with gr.Row(): - with gr.Column(): - transcription_df.render() - - with gr.Row(): - with gr.Column(): - gr.Markdown(''' - ### PLEASE READ BELOW - ### Because of big demand for this demo all credits for translation might be gone already for the month. In this case we return the original transcript from this component :( - ### I might make some adjustments in the future for the translation to use some other model if all the API credits have been used but this is the situation for now - ### Translation credits will reset every 5th of month. - Here you will can translate transcriptions to 26 languages. - If spoken language is not in the list, translation might not work. In this case original transcriptions are used. - ''') - gr.Markdown(f''' - DeepL API character usage: - {deepL_character_usage if deepL_character_usage is not None else ''}/500 000 characters - If usage is over 490 000 characters original transcriptions will be used for subtitles. This value might not properly update so if you get transcriptions in original language that might be the reason. - API usage resets on 5th of every month. - ''') - selected_translation_lang_2.render() - translate_transcriptions_button = gr.Button("Step 3. Translate transcription") - translate_transcriptions_button.click(translate_transcriptions, [transcription_df, selected_translation_lang_2], [transcription_and_translation_df, subtitle_files]) - transcription_and_translation_df.render() - - with gr.Row(): - with gr.Column(): - gr.Markdown('''##### From here you can download subtitles in .srt or .vtt format''') - subtitle_files.render() - - with gr.Row(): - with gr.Column(): - gr.Markdown(''' - ##### Now press the Step 4. Button to create output video with translated transcriptions - ##### ''') - create_video_button = gr.Button("Step 4. 
Create and add subtitles to video") - print(video_in) - create_video_button.click(create_video_player, [subtitle_files,video_in], [ - video_player]) - video_player.render() - - - - -demo.launch() \ No newline at end of file diff --git a/spaces/RMXK/RVC_HFF/lib/infer_pack/models_dml.py b/spaces/RMXK/RVC_HFF/lib/infer_pack/models_dml.py deleted file mode 100644 index 958d7b29259763d2fea94caf8ba7e314c4a77d05..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/lib/infer_pack/models_dml.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - 
x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond 
= nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv.float() - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 
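# Note: the blend below gives voiced frames (uv == 1) the harmonic sines plus
# low-level Gaussian noise (std = noise_std), while unvoiced frames (uv == 0)
# are replaced entirely by noise at roughly one third of the sine amplitude.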
- noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, 
noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = 
self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + 
torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = 
spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = 
[DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Ramos-Ramos/albef-vqa/data/transforms.py b/spaces/Ramos-Ramos/albef-vqa/data/transforms.py deleted file mode 100644 index 8d6eaf5d84ed1e7cc95193a0ee23a619ee95ca9a..0000000000000000000000000000000000000000 --- a/spaces/Ramos-Ramos/albef-vqa/data/transforms.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the BSD-style license found in the -# LICENSE file in the root directory of this source tree. 
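As a shape-only illustration of the period folding that `DiscriminatorP` above applies before its 2-D convolutions, a minimal sketch in plain PyTorch (the period and tensor sizes are arbitrary):

import torch
import torch.nn.functional as F

period = 5
x = torch.randn(1, 1, 23)                # (batch, channels, time)
b, c, t = x.shape
if t % period != 0:                      # reflect-pad so time is a multiple of the period
    n_pad = period - (t % period)
    x = F.pad(x, (0, n_pad), "reflect")
    t = t + n_pad
x2d = x.view(b, c, t // period, period)  # fold the waveform into (time // period, period)
print(x2d.shape)                         # torch.Size([1, 1, 5, 5])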
- -import re -from typing import List, Tuple, Union - -import torch - -from torchtext.transforms import PadTransform, Sequential, ToTensor, Truncate -from torchvision import transforms -from transformers.models.bert.tokenization_bert import BertTokenizer - -# mean and standard deviation from the ALBEF repo: -# https://github.com/salesforce/ALBEF/blob/main/dataset/__init__.py#L16 -MEAN = (0.48145466, 0.4578275, 0.40821073) -STD_DEV = (0.26862954, 0.26130258, 0.27577711) - - -class ALBEFTextTransform: - """ - Remove punctuations and trailing spaces in input text and transform it into - a Tensor of token ids using BERTTokenizer. - - Args: - pretrained_tokenizer (str): Pretrained tokenizer to use. - Default: "bert-base-uncased" - do_pre_process (bool): Whether to pre-process input text. - Defaults to True. - truncate (bool): Whether to truncate input text to max_seq_length. - Defaults to False. - pad_to_max_seq_len (bool): Whether to pad the sequence to max_seq_length. - add_end_token (bool): Whether to add the end-of-sentence token. - Defaults to True. - max_seq_len (int): The max sequence length after truncating or padding. - Defaults to 25. - cls_token_id (int): Value to represent the start of each text. - Defaults to 101, Hugging Face's BERT cls token id. - sep_token_id (int): Value to represent the end of each text. - Defaults to 102, Hugging Face's BERT sep token id. - pad_token_id (int): Value with which to pad each text so that all texts are the same length. - Defaults to 0, Hugging Face's BERT pad token id. - - Inputs: - text (Union[List[str], str]): Input text to transform. - """ - - def __init__( - self, - pretrained_tokenizer: str = "bert-base-uncased", - do_pre_process: bool = True, - truncate: bool = False, - pad_to_max_seq_len: bool = False, - add_end_token: bool = True, - max_seq_len: int = 25, - cls_token_id: int = 101, - sep_token_id: int = 102, - pad_token_id: int = 0, - ): - self.do_pre_process = do_pre_process - self.cls_token_id = cls_token_id - self.sep_token_id = sep_token_id - self.pad_token_id = pad_token_id - self.add_end_token = add_end_token - - self.tokenizer = BertTokenizer.from_pretrained(pretrained_tokenizer) - self.transform = Sequential( - Truncate(max_seq_len=max_seq_len) if truncate else torch.nn.Identity(), - ToTensor(padding_value=self.pad_token_id), - PadTransform(max_length=max_seq_len, pad_value=self.pad_token_id) - if pad_to_max_seq_len - else torch.nn.Identity(), - ) - - def pre_process(self, text: str) -> str: - text = ( - re.sub( - r"([,.'!?\"()*#:;~])", - "", - text, - ) - .replace("-", " ") - .replace("/", " ") - ) - text = text.rstrip(" ") - - return text - - def __call__(self, text: Union[List[str], str]) -> torch.Tensor: - if self.do_pre_process: - if isinstance(text, str): - text = self.pre_process(text) - else: - text = [self.pre_process(t) for t in text] - tokens = self.tokenizer(text)["input_ids"] - if not self.add_end_token and tokens[-1] == self.sep_token_id: - tokens = tokens[:-1] - input_ids = self.transform(tokens) - - return input_ids - - -def training_image_transform( - image_size: int = 384, - scale: Tuple[float, float] = (0.5, 1.0), - image_interpolation=transforms.InterpolationMode.BICUBIC, - mean: Tuple[float, float, float] = MEAN, - std_dev: Tuple[float, float, float] = STD_DEV, -) -> transforms.Compose: - return transforms.Compose( - [ - transforms.RandomResizedCrop( - image_size, scale=scale, interpolation=image_interpolation - ), - transforms.RandomHorizontalFlip(), - transforms.RandAugment(2, 7), - transforms.ToTensor(), - 
transforms.Normalize(mean, std_dev), - ] - ) - - -def testing_image_transform( - image_size: int = 384, - image_interpolation=transforms.InterpolationMode.BICUBIC, - mean: Tuple[float, float, float] = MEAN, - std_dev: Tuple[float, float, float] = STD_DEV, -) -> transforms.Compose: - return transforms.Compose( - [ - transforms.Resize( - (image_size, image_size), interpolation=image_interpolation - ), - transforms.ToTensor(), - transforms.Normalize(mean, std_dev), - ] - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/req/req_install.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/req/req_install.py deleted file mode 100644 index 5f29261c252d897a7f5e03a453d0a7e9fc93bd85..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/req/req_install.py +++ /dev/null @@ -1,942 +0,0 @@ -# The following comment should be removed at some point in the future. -# mypy: strict-optional=False - -import functools -import logging -import os -import shutil -import sys -import uuid -import zipfile -from enum import Enum -from optparse import Values -from typing import Any, Collection, Dict, Iterable, List, Optional, Sequence, Union - -from pip._vendor.packaging.markers import Marker -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.specifiers import SpecifierSet -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import Version -from pip._vendor.packaging.version import parse as parse_version -from pip._vendor.pep517.wrappers import Pep517HookCaller - -from pip._internal.build_env import BuildEnvironment, NoOpBuildEnvironment -from pip._internal.exceptions import InstallationError, LegacyInstallFailure -from pip._internal.locations import get_scheme -from pip._internal.metadata import ( - BaseDistribution, - get_default_environment, - get_directory_distribution, - get_wheel_distribution, -) -from pip._internal.metadata.base import FilesystemWheel -from pip._internal.models.direct_url import DirectUrl -from pip._internal.models.link import Link -from pip._internal.operations.build.metadata import generate_metadata -from pip._internal.operations.build.metadata_editable import generate_editable_metadata -from pip._internal.operations.build.metadata_legacy import ( - generate_metadata as generate_metadata_legacy, -) -from pip._internal.operations.install.editable_legacy import ( - install_editable as install_editable_legacy, -) -from pip._internal.operations.install.legacy import install as install_legacy -from pip._internal.operations.install.wheel import install_wheel -from pip._internal.pyproject import load_pyproject_toml, make_pyproject_path -from pip._internal.req.req_uninstall import UninstallPathSet -from pip._internal.utils.deprecation import LegacyInstallReason, deprecated -from pip._internal.utils.direct_url_helpers import ( - direct_url_for_editable, - direct_url_from_link, -) -from pip._internal.utils.hashes import Hashes -from pip._internal.utils.misc import ( - ConfiguredPep517HookCaller, - ask_path_exists, - backup_dir, - display_path, - hide_url, - redact_auth_from_url, -) -from pip._internal.utils.packaging import safe_extra -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds -from pip._internal.utils.virtualenv import running_under_virtualenv -from pip._internal.vcs import vcs - -logger = 
logging.getLogger(__name__) - - -class InstallRequirement: - """ - Represents something that may be installed later on, may have information - about where to fetch the relevant requirement and also contains logic for - installing the said requirement. - """ - - def __init__( - self, - req: Optional[Requirement], - comes_from: Optional[Union[str, "InstallRequirement"]], - editable: bool = False, - link: Optional[Link] = None, - markers: Optional[Marker] = None, - use_pep517: Optional[bool] = None, - isolated: bool = False, - install_options: Optional[List[str]] = None, - global_options: Optional[List[str]] = None, - hash_options: Optional[Dict[str, List[str]]] = None, - config_settings: Optional[Dict[str, str]] = None, - constraint: bool = False, - extras: Collection[str] = (), - user_supplied: bool = False, - permit_editable_wheels: bool = False, - ) -> None: - assert req is None or isinstance(req, Requirement), req - self.req = req - self.comes_from = comes_from - self.constraint = constraint - self.editable = editable - self.permit_editable_wheels = permit_editable_wheels - self.legacy_install_reason: Optional[LegacyInstallReason] = None - - # source_dir is the local directory where the linked requirement is - # located, or unpacked. In case unpacking is needed, creating and - # populating source_dir is done by the RequirementPreparer. Note this - # is not necessarily the directory where pyproject.toml or setup.py is - # located - that one is obtained via unpacked_source_directory. - self.source_dir: Optional[str] = None - if self.editable: - assert link - if link.is_file: - self.source_dir = os.path.normpath(os.path.abspath(link.file_path)) - - if link is None and req and req.url: - # PEP 508 URL requirement - link = Link(req.url) - self.link = self.original_link = link - self.original_link_is_in_wheel_cache = False - - # Information about the location of the artifact that was downloaded . This - # property is guaranteed to be set in resolver results. - self.download_info: Optional[DirectUrl] = None - - # Path to any downloaded or already-existing package. - self.local_file_path: Optional[str] = None - if self.link and self.link.is_file: - self.local_file_path = self.link.file_path - - if extras: - self.extras = extras - elif req: - self.extras = {safe_extra(extra) for extra in req.extras} - else: - self.extras = set() - if markers is None and req: - markers = req.marker - self.markers = markers - - # This holds the Distribution object if this requirement is already installed. - self.satisfied_by: Optional[BaseDistribution] = None - # Whether the installation process should try to uninstall an existing - # distribution before installing this requirement. - self.should_reinstall = False - # Temporary build location - self._temp_build_dir: Optional[TempDirectory] = None - # Set to True after successful installation - self.install_succeeded: Optional[bool] = None - # Supplied options - self.install_options = install_options if install_options else [] - self.global_options = global_options if global_options else [] - self.hash_options = hash_options if hash_options else {} - self.config_settings = config_settings - # Set to True after successful preparation of this requirement - self.prepared = False - # User supplied requirement are explicitly requested for installation - # by the user via CLI arguments or requirements files, as opposed to, - # e.g. dependencies, extras or constraints. 
- self.user_supplied = user_supplied - - self.isolated = isolated - self.build_env: BuildEnvironment = NoOpBuildEnvironment() - - # For PEP 517, the directory where we request the project metadata - # gets stored. We need this to pass to build_wheel, so the backend - # can ensure that the wheel matches the metadata (see the PEP for - # details). - self.metadata_directory: Optional[str] = None - - # The static build requirements (from pyproject.toml) - self.pyproject_requires: Optional[List[str]] = None - - # Build requirements that we will check are available - self.requirements_to_check: List[str] = [] - - # The PEP 517 backend we should use to build the project - self.pep517_backend: Optional[Pep517HookCaller] = None - - # Are we using PEP 517 for this requirement? - # After pyproject.toml has been loaded, the only valid values are True - # and False. Before loading, None is valid (meaning "use the default"). - # Setting an explicit value before loading pyproject.toml is supported, - # but after loading this flag should be treated as read only. - self.use_pep517 = use_pep517 - - # This requirement needs more preparation before it can be built - self.needs_more_preparation = False - - def __str__(self) -> str: - if self.req: - s = str(self.req) - if self.link: - s += " from {}".format(redact_auth_from_url(self.link.url)) - elif self.link: - s = redact_auth_from_url(self.link.url) - else: - s = "" - if self.satisfied_by is not None: - s += " in {}".format(display_path(self.satisfied_by.location)) - if self.comes_from: - if isinstance(self.comes_from, str): - comes_from: Optional[str] = self.comes_from - else: - comes_from = self.comes_from.from_path() - if comes_from: - s += f" (from {comes_from})" - return s - - def __repr__(self) -> str: - return "<{} object: {} editable={!r}>".format( - self.__class__.__name__, str(self), self.editable - ) - - def format_debug(self) -> str: - """An un-tested helper for getting state, for debugging.""" - attributes = vars(self) - names = sorted(attributes) - - state = ("{}={!r}".format(attr, attributes[attr]) for attr in sorted(names)) - return "<{name} object: {{{state}}}>".format( - name=self.__class__.__name__, - state=", ".join(state), - ) - - # Things that are valid for all kinds of requirements? - @property - def name(self) -> Optional[str]: - if self.req is None: - return None - return self.req.name - - @functools.lru_cache() # use cached_property in python 3.8+ - def supports_pyproject_editable(self) -> bool: - if not self.use_pep517: - return False - assert self.pep517_backend - with self.build_env: - runner = runner_with_spinner_message( - "Checking if build backend supports build_editable" - ) - with self.pep517_backend.subprocess_runner(runner): - return "build_editable" in self.pep517_backend._supported_features() - - @property - def specifier(self) -> SpecifierSet: - return self.req.specifier - - @property - def is_pinned(self) -> bool: - """Return whether I am pinned to an exact version. - - For example, some-package==1.2 is pinned; some-package>1.2 is not. 
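- A requirement with a single === (arbitrary equality) specifier counts as pinned as well.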
- """ - specifiers = self.specifier - return len(specifiers) == 1 and next(iter(specifiers)).operator in {"==", "==="} - - def match_markers(self, extras_requested: Optional[Iterable[str]] = None) -> bool: - if not extras_requested: - # Provide an extra to safely evaluate the markers - # without matching any extra - extras_requested = ("",) - if self.markers is not None: - return any( - self.markers.evaluate({"extra": extra}) for extra in extras_requested - ) - else: - return True - - @property - def has_hash_options(self) -> bool: - """Return whether any known-good hashes are specified as options. - - These activate --require-hashes mode; hashes specified as part of a - URL do not. - - """ - return bool(self.hash_options) - - def hashes(self, trust_internet: bool = True) -> Hashes: - """Return a hash-comparer that considers my option- and URL-based - hashes to be known-good. - - Hashes in URLs--ones embedded in the requirements file, not ones - downloaded from an index server--are almost peers with ones from - flags. They satisfy --require-hashes (whether it was implicitly or - explicitly activated) but do not activate it. md5 and sha224 are not - allowed in flags, which should nudge people toward good algos. We - always OR all hashes together, even ones from URLs. - - :param trust_internet: Whether to trust URL-based (#md5=...) hashes - downloaded from the internet, as by populate_link() - - """ - good_hashes = self.hash_options.copy() - link = self.link if trust_internet else self.original_link - if link and link.hash: - good_hashes.setdefault(link.hash_name, []).append(link.hash) - return Hashes(good_hashes) - - def from_path(self) -> Optional[str]: - """Format a nice indicator to show where this "comes from" """ - if self.req is None: - return None - s = str(self.req) - if self.comes_from: - if isinstance(self.comes_from, str): - comes_from = self.comes_from - else: - comes_from = self.comes_from.from_path() - if comes_from: - s += "->" + comes_from - return s - - def ensure_build_location( - self, build_dir: str, autodelete: bool, parallel_builds: bool - ) -> str: - assert build_dir is not None - if self._temp_build_dir is not None: - assert self._temp_build_dir.path - return self._temp_build_dir.path - if self.req is None: - # Some systems have /tmp as a symlink which confuses custom - # builds (such as numpy). Thus, we ensure that the real path - # is returned. - self._temp_build_dir = TempDirectory( - kind=tempdir_kinds.REQ_BUILD, globally_managed=True - ) - - return self._temp_build_dir.path - - # This is the only remaining place where we manually determine the path - # for the temporary directory. It is only needed for editables where - # it is the value of the --src option. - - # When parallel builds are enabled, add a UUID to the build directory - # name so multiple builds do not interfere with each other. - dir_name: str = canonicalize_name(self.name) - if parallel_builds: - dir_name = f"{dir_name}_{uuid.uuid4().hex}" - - # FIXME: Is there a better place to create the build_dir? (hg and bzr - # need this) - if not os.path.exists(build_dir): - logger.debug("Creating directory %s", build_dir) - os.makedirs(build_dir) - actual_build_dir = os.path.join(build_dir, dir_name) - # `None` indicates that we respect the globally-configured deletion - # settings, which is what we actually want when auto-deleting. 
- delete_arg = None if autodelete else False - return TempDirectory( - path=actual_build_dir, - delete=delete_arg, - kind=tempdir_kinds.REQ_BUILD, - globally_managed=True, - ).path - - def _set_requirement(self) -> None: - """Set requirement after generating metadata.""" - assert self.req is None - assert self.metadata is not None - assert self.source_dir is not None - - # Construct a Requirement object from the generated metadata - if isinstance(parse_version(self.metadata["Version"]), Version): - op = "==" - else: - op = "===" - - self.req = Requirement( - "".join( - [ - self.metadata["Name"], - op, - self.metadata["Version"], - ] - ) - ) - - def warn_on_mismatching_name(self) -> None: - metadata_name = canonicalize_name(self.metadata["Name"]) - if canonicalize_name(self.req.name) == metadata_name: - # Everything is fine. - return - - # If we're here, there's a mismatch. Log a warning about it. - logger.warning( - "Generating metadata for package %s " - "produced metadata for project name %s. Fix your " - "#egg=%s fragments.", - self.name, - metadata_name, - self.name, - ) - self.req = Requirement(metadata_name) - - def check_if_exists(self, use_user_site: bool) -> None: - """Find an installed distribution that satisfies or conflicts - with this requirement, and set self.satisfied_by or - self.should_reinstall appropriately. - """ - if self.req is None: - return - existing_dist = get_default_environment().get_distribution(self.req.name) - if not existing_dist: - return - - version_compatible = self.req.specifier.contains( - existing_dist.version, - prereleases=True, - ) - if not version_compatible: - self.satisfied_by = None - if use_user_site: - if existing_dist.in_usersite: - self.should_reinstall = True - elif running_under_virtualenv() and existing_dist.in_site_packages: - raise InstallationError( - f"Will not install to the user site because it will " - f"lack sys.path precedence to {existing_dist.raw_name} " - f"in {existing_dist.location}" - ) - else: - self.should_reinstall = True - else: - if self.editable: - self.should_reinstall = True - # when installing editables, nothing pre-existing should ever - # satisfy - self.satisfied_by = None - else: - self.satisfied_by = existing_dist - - # Things valid for wheels - @property - def is_wheel(self) -> bool: - if not self.link: - return False - return self.link.is_wheel - - # Things valid for sdists - @property - def unpacked_source_directory(self) -> str: - return os.path.join( - self.source_dir, self.link and self.link.subdirectory_fragment or "" - ) - - @property - def setup_py_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - setup_py = os.path.join(self.unpacked_source_directory, "setup.py") - - return setup_py - - @property - def setup_cfg_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - setup_cfg = os.path.join(self.unpacked_source_directory, "setup.cfg") - - return setup_cfg - - @property - def pyproject_toml_path(self) -> str: - assert self.source_dir, f"No source dir for {self}" - return make_pyproject_path(self.unpacked_source_directory) - - def load_pyproject_toml(self) -> None: - """Load the pyproject.toml file. - - After calling this routine, all of the attributes related to PEP 517 - processing for this requirement have been set. In particular, the - use_pep517 attribute can be used to determine whether we should - follow the PEP 517 or legacy (setup.py) code path. 
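- After this call, use_pep517 is always a definite True or False, never None.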
- """ - pyproject_toml_data = load_pyproject_toml( - self.use_pep517, self.pyproject_toml_path, self.setup_py_path, str(self) - ) - - if pyproject_toml_data is None: - self.use_pep517 = False - return - - self.use_pep517 = True - requires, backend, check, backend_path = pyproject_toml_data - self.requirements_to_check = check - self.pyproject_requires = requires - self.pep517_backend = ConfiguredPep517HookCaller( - self, - self.unpacked_source_directory, - backend, - backend_path=backend_path, - ) - - def isolated_editable_sanity_check(self) -> None: - """Check that an editable requirement if valid for use with PEP 517/518. - - This verifies that an editable that has a pyproject.toml either supports PEP 660 - or as a setup.py or a setup.cfg - """ - if ( - self.editable - and self.use_pep517 - and not self.supports_pyproject_editable() - and not os.path.isfile(self.setup_py_path) - and not os.path.isfile(self.setup_cfg_path) - ): - raise InstallationError( - f"Project {self} has a 'pyproject.toml' and its build " - f"backend is missing the 'build_editable' hook. Since it does not " - f"have a 'setup.py' nor a 'setup.cfg', " - f"it cannot be installed in editable mode. " - f"Consider using a build backend that supports PEP 660." - ) - - def prepare_metadata(self) -> None: - """Ensure that project metadata is available. - - Under PEP 517 and PEP 660, call the backend hook to prepare the metadata. - Under legacy processing, call setup.py egg-info. - """ - assert self.source_dir - details = self.name or f"from {self.link}" - - if self.use_pep517: - assert self.pep517_backend is not None - if ( - self.editable - and self.permit_editable_wheels - and self.supports_pyproject_editable() - ): - self.metadata_directory = generate_editable_metadata( - build_env=self.build_env, - backend=self.pep517_backend, - details=details, - ) - else: - self.metadata_directory = generate_metadata( - build_env=self.build_env, - backend=self.pep517_backend, - details=details, - ) - else: - self.metadata_directory = generate_metadata_legacy( - build_env=self.build_env, - setup_py_path=self.setup_py_path, - source_dir=self.unpacked_source_directory, - isolated=self.isolated, - details=details, - ) - - # Act on the newly generated metadata, based on the name and version. - if not self.name: - self._set_requirement() - else: - self.warn_on_mismatching_name() - - self.assert_source_matches_version() - - @property - def metadata(self) -> Any: - if not hasattr(self, "_metadata"): - self._metadata = self.get_dist().metadata - - return self._metadata - - def get_dist(self) -> BaseDistribution: - if self.metadata_directory: - return get_directory_distribution(self.metadata_directory) - elif self.local_file_path and self.is_wheel: - return get_wheel_distribution( - FilesystemWheel(self.local_file_path), canonicalize_name(self.name) - ) - raise AssertionError( - f"InstallRequirement {self} has no metadata directory and no wheel: " - f"can't make a distribution." 
- ) - - def assert_source_matches_version(self) -> None: - assert self.source_dir - version = self.metadata["version"] - if self.req.specifier and version not in self.req.specifier: - logger.warning( - "Requested %s, but installing version %s", - self, - version, - ) - else: - logger.debug( - "Source in %s has version %s, which satisfies requirement %s", - display_path(self.source_dir), - version, - self, - ) - - # For both source distributions and editables - def ensure_has_source_dir( - self, - parent_dir: str, - autodelete: bool = False, - parallel_builds: bool = False, - ) -> None: - """Ensure that a source_dir is set. - - This will create a temporary build dir if the name of the requirement - isn't known yet. - - :param parent_dir: The ideal pip parent_dir for the source_dir. - Generally src_dir for editables and build_dir for sdists. - :return: self.source_dir - """ - if self.source_dir is None: - self.source_dir = self.ensure_build_location( - parent_dir, - autodelete=autodelete, - parallel_builds=parallel_builds, - ) - - # For editable installations - def update_editable(self) -> None: - if not self.link: - logger.debug( - "Cannot update repository at %s; repository location is unknown", - self.source_dir, - ) - return - assert self.editable - assert self.source_dir - if self.link.scheme == "file": - # Static paths don't get updated - return - vcs_backend = vcs.get_backend_for_scheme(self.link.scheme) - # Editable requirements are validated in Requirement constructors. - # So here, if it's neither a path nor a valid VCS URL, it's a bug. - assert vcs_backend, f"Unsupported VCS URL {self.link.url}" - hidden_url = hide_url(self.link.url) - vcs_backend.obtain(self.source_dir, url=hidden_url, verbosity=0) - - # Top-level Actions - def uninstall( - self, auto_confirm: bool = False, verbose: bool = False - ) -> Optional[UninstallPathSet]: - """ - Uninstall the distribution currently satisfying this requirement. - - Prompts before removing or modifying files unless - ``auto_confirm`` is True. - - Refuses to delete or modify files outside of ``sys.prefix`` - - thus uninstallation within a virtual environment can only - modify that virtual environment, even if the virtualenv is - linked to global site-packages. - - """ - assert self.req - dist = get_default_environment().get_distribution(self.req.name) - if not dist: - logger.warning("Skipping %s as it is not installed.", self.name) - return None - logger.info("Found existing installation: %s", dist) - - uninstalled_pathset = UninstallPathSet.from_dist(dist) - uninstalled_pathset.remove(auto_confirm, verbose) - return uninstalled_pathset - - def _get_archive_name(self, path: str, parentdir: str, rootdir: str) -> str: - def _clean_zip_name(name: str, prefix: str) -> str: - assert name.startswith( - prefix + os.path.sep - ), f"name {name!r} doesn't start with prefix {prefix!r}" - name = name[len(prefix) + 1 :] - name = name.replace(os.path.sep, "/") - return name - - path = os.path.join(parentdir, path) - name = _clean_zip_name(path, rootdir) - return self.name + "/" + name - - def archive(self, build_dir: Optional[str]) -> None: - """Saves archive to provided build_dir. - - Used for saving downloaded VCS requirements as part of `pip download`. 
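- This is a no-op when build_dir is None; if an archive with the same name already exists, the user is prompted to (i)gnore, (w)ipe, (b)ackup or (a)bort.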
- """ - assert self.source_dir - if build_dir is None: - return - - create_archive = True - archive_name = "{}-{}.zip".format(self.name, self.metadata["version"]) - archive_path = os.path.join(build_dir, archive_name) - - if os.path.exists(archive_path): - response = ask_path_exists( - "The file {} exists. (i)gnore, (w)ipe, " - "(b)ackup, (a)bort ".format(display_path(archive_path)), - ("i", "w", "b", "a"), - ) - if response == "i": - create_archive = False - elif response == "w": - logger.warning("Deleting %s", display_path(archive_path)) - os.remove(archive_path) - elif response == "b": - dest_file = backup_dir(archive_path) - logger.warning( - "Backing up %s to %s", - display_path(archive_path), - display_path(dest_file), - ) - shutil.move(archive_path, dest_file) - elif response == "a": - sys.exit(-1) - - if not create_archive: - return - - zip_output = zipfile.ZipFile( - archive_path, - "w", - zipfile.ZIP_DEFLATED, - allowZip64=True, - ) - with zip_output: - dir = os.path.normcase(os.path.abspath(self.unpacked_source_directory)) - for dirpath, dirnames, filenames in os.walk(dir): - for dirname in dirnames: - dir_arcname = self._get_archive_name( - dirname, - parentdir=dirpath, - rootdir=dir, - ) - zipdir = zipfile.ZipInfo(dir_arcname + "/") - zipdir.external_attr = 0x1ED << 16 # 0o755 - zip_output.writestr(zipdir, "") - for filename in filenames: - file_arcname = self._get_archive_name( - filename, - parentdir=dirpath, - rootdir=dir, - ) - filename = os.path.join(dirpath, filename) - zip_output.write(filename, file_arcname) - - logger.info("Saved %s", display_path(archive_path)) - - def install( - self, - install_options: List[str], - global_options: Optional[Sequence[str]] = None, - root: Optional[str] = None, - home: Optional[str] = None, - prefix: Optional[str] = None, - warn_script_location: bool = True, - use_user_site: bool = False, - pycompile: bool = True, - ) -> None: - scheme = get_scheme( - self.name, - user=use_user_site, - home=home, - root=root, - isolated=self.isolated, - prefix=prefix, - ) - - global_options = global_options if global_options is not None else [] - if self.editable and not self.is_wheel: - install_editable_legacy( - install_options, - global_options, - prefix=prefix, - home=home, - use_user_site=use_user_site, - name=self.name, - setup_py_path=self.setup_py_path, - isolated=self.isolated, - build_env=self.build_env, - unpacked_source_directory=self.unpacked_source_directory, - ) - self.install_succeeded = True - return - - if self.is_wheel: - assert self.local_file_path - direct_url = None - # TODO this can be refactored to direct_url = self.download_info - if self.editable: - direct_url = direct_url_for_editable(self.unpacked_source_directory) - elif self.original_link: - direct_url = direct_url_from_link( - self.original_link, - self.source_dir, - self.original_link_is_in_wheel_cache, - ) - install_wheel( - self.name, - self.local_file_path, - scheme=scheme, - req_description=str(self.req), - pycompile=pycompile, - warn_script_location=warn_script_location, - direct_url=direct_url, - requested=self.user_supplied, - ) - self.install_succeeded = True - return - - # TODO: Why don't we do this for editable installs? - - # Extend the list of global and install options passed on to - # the setup.py call with the ones from the requirements file. - # Options specified in requirements file override those - # specified on the command line, since the last option given - # to setup.py is the one that is used. 
- global_options = list(global_options) + self.global_options - install_options = list(install_options) + self.install_options - - try: - if ( - self.legacy_install_reason is not None - and self.legacy_install_reason.emit_before_install - ): - self.legacy_install_reason.emit_deprecation(self.name) - success = install_legacy( - install_options=install_options, - global_options=global_options, - root=root, - home=home, - prefix=prefix, - use_user_site=use_user_site, - pycompile=pycompile, - scheme=scheme, - setup_py_path=self.setup_py_path, - isolated=self.isolated, - req_name=self.name, - build_env=self.build_env, - unpacked_source_directory=self.unpacked_source_directory, - req_description=str(self.req), - ) - except LegacyInstallFailure as exc: - self.install_succeeded = False - raise exc - except Exception: - self.install_succeeded = True - raise - - self.install_succeeded = success - - if ( - success - and self.legacy_install_reason is not None - and self.legacy_install_reason.emit_after_success - ): - self.legacy_install_reason.emit_deprecation(self.name) - - -def check_invalid_constraint_type(req: InstallRequirement) -> str: - - # Check for unsupported forms - problem = "" - if not req.name: - problem = "Unnamed requirements are not allowed as constraints" - elif req.editable: - problem = "Editable requirements are not allowed as constraints" - elif req.extras: - problem = "Constraints cannot have extras" - - if problem: - deprecated( - reason=( - "Constraints are only allowed to take the form of a package " - "name and a version specifier. Other forms were originally " - "permitted as an accident of the implementation, but were " - "undocumented. The new implementation of the resolver no " - "longer supports these forms." - ), - replacement="replacing the constraint with a requirement", - # No plan yet for when the new resolver becomes default - gone_in=None, - issue=8210, - ) - - return problem - - -def _has_option(options: Values, reqs: List[InstallRequirement], option: str) -> bool: - if getattr(options, option, None): - return True - for req in reqs: - if getattr(req, option, None): - return True - return False - - -def _install_option_ignored( - install_options: List[str], reqs: List[InstallRequirement] -) -> bool: - for req in reqs: - if (install_options or req.install_options) and not req.use_pep517: - return False - return True - - -class LegacySetupPyOptionsCheckMode(Enum): - INSTALL = 1 - WHEEL = 2 - DOWNLOAD = 3 - - -def check_legacy_setup_py_options( - options: Values, - reqs: List[InstallRequirement], - mode: LegacySetupPyOptionsCheckMode, -) -> None: - has_install_options = _has_option(options, reqs, "install_options") - has_build_options = _has_option(options, reqs, "build_options") - has_global_options = _has_option(options, reqs, "global_options") - legacy_setup_py_options_present = ( - has_install_options or has_build_options or has_global_options - ) - if not legacy_setup_py_options_present: - return - - options.format_control.disallow_binaries() - logger.warning( - "Implying --no-binary=:all: due to the presence of " - "--build-option / --global-option / --install-option. 
" - "Consider using --config-settings for more flexibility.", - ) - if mode == LegacySetupPyOptionsCheckMode.INSTALL and has_install_options: - if _install_option_ignored(options.install_options, reqs): - logger.warning( - "Ignoring --install-option when building using PEP 517", - ) - else: - deprecated( - reason=( - "--install-option is deprecated because " - "it forces pip to use the 'setup.py install' " - "command which is itself deprecated." - ), - issue=11358, - replacement="to use --config-settings", - gone_in="23.1", - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/emoji.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/emoji.py deleted file mode 100644 index 791f0465de136088e33cdc6ef5696590df1e4f86..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/emoji.py +++ /dev/null @@ -1,96 +0,0 @@ -import sys -from typing import TYPE_CHECKING, Optional, Union - -from .jupyter import JupyterMixin -from .segment import Segment -from .style import Style -from ._emoji_codes import EMOJI -from ._emoji_replace import _emoji_replace - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from pip._vendor.typing_extensions import Literal # pragma: no cover - - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult - - -EmojiVariant = Literal["emoji", "text"] - - -class NoEmoji(Exception): - """No emoji by that name.""" - - -class Emoji(JupyterMixin): - __slots__ = ["name", "style", "_char", "variant"] - - VARIANTS = {"text": "\uFE0E", "emoji": "\uFE0F"} - - def __init__( - self, - name: str, - style: Union[str, Style] = "none", - variant: Optional[EmojiVariant] = None, - ) -> None: - """A single emoji character. - - Args: - name (str): Name of emoji. - style (Union[str, Style], optional): Optional style. Defaults to None. - - Raises: - NoEmoji: If the emoji doesn't exist. - """ - self.name = name - self.style = style - self.variant = variant - try: - self._char = EMOJI[name] - except KeyError: - raise NoEmoji(f"No emoji called {name!r}") - if variant is not None: - self._char += self.VARIANTS.get(variant, "") - - @classmethod - def replace(cls, text: str) -> str: - """Replace emoji markup with corresponding unicode characters. - - Args: - text (str): A string with emojis codes, e.g. "Hello :smiley:!" - - Returns: - str: A string with emoji codes replaces with actual emoji. 
- """ - return _emoji_replace(text) - - def __repr__(self) -> str: - return f"" - - def __str__(self) -> str: - return self._char - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - yield Segment(self._char, console.get_style(self.style)) - - -if __name__ == "__main__": # pragma: no cover - import sys - - from pip._vendor.rich.columns import Columns - from pip._vendor.rich.console import Console - - console = Console(record=True) - - columns = Columns( - (f":{name}: {name}" for name in sorted(EMOJI.keys()) if "\u200D" not in name), - column_first=True, - ) - - console.print(columns) - if len(sys.argv) > 1: - console.save_html(sys.argv[1]) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/palette.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/palette.py deleted file mode 100644 index fa0c4dd40381addf5b42fae4228b6d8fef03abd9..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/palette.py +++ /dev/null @@ -1,100 +0,0 @@ -from math import sqrt -from functools import lru_cache -from typing import Sequence, Tuple, TYPE_CHECKING - -from .color_triplet import ColorTriplet - -if TYPE_CHECKING: - from pip._vendor.rich.table import Table - - -class Palette: - """A palette of available colors.""" - - def __init__(self, colors: Sequence[Tuple[int, int, int]]): - self._colors = colors - - def __getitem__(self, number: int) -> ColorTriplet: - return ColorTriplet(*self._colors[number]) - - def __rich__(self) -> "Table": - from pip._vendor.rich.color import Color - from pip._vendor.rich.style import Style - from pip._vendor.rich.text import Text - from pip._vendor.rich.table import Table - - table = Table( - "index", - "RGB", - "Color", - title="Palette", - caption=f"{len(self._colors)} colors", - highlight=True, - caption_justify="right", - ) - for index, color in enumerate(self._colors): - table.add_row( - str(index), - repr(color), - Text(" " * 16, style=Style(bgcolor=Color.from_rgb(*color))), - ) - return table - - # This is somewhat inefficient and needs caching - @lru_cache(maxsize=1024) - def match(self, color: Tuple[int, int, int]) -> int: - """Find a color from a palette that most closely matches a given color. - - Args: - color (Tuple[int, int, int]): RGB components in range 0 > 255. - - Returns: - int: Index of closes matching color. 
- """ - red1, green1, blue1 = color - _sqrt = sqrt - get_color = self._colors.__getitem__ - - def get_color_distance(index: int) -> float: - """Get the distance to a color.""" - red2, green2, blue2 = get_color(index) - red_mean = (red1 + red2) // 2 - red = red1 - red2 - green = green1 - green2 - blue = blue1 - blue2 - return _sqrt( - (((512 + red_mean) * red * red) >> 8) - + 4 * green * green - + (((767 - red_mean) * blue * blue) >> 8) - ) - - min_index = min(range(len(self._colors)), key=get_color_distance) - return min_index - - -if __name__ == "__main__": # pragma: no cover - import colorsys - from typing import Iterable - from pip._vendor.rich.color import Color - from pip._vendor.rich.console import Console, ConsoleOptions - from pip._vendor.rich.segment import Segment - from pip._vendor.rich.style import Style - - class ColorBox: - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> Iterable[Segment]: - height = console.size.height - 3 - for y in range(0, height): - for x in range(options.max_width): - h = x / options.max_width - l = y / (height + 1) - r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0) - r2, g2, b2 = colorsys.hls_to_rgb(h, l + (1 / height / 2), 1.0) - bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255) - color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255) - yield Segment("▄", Style(color=color, bgcolor=bgcolor)) - yield Segment.line() - - console = Console() - console.print(ColorBox()) diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/train_sg.sh b/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/train_sg.sh deleted file mode 100644 index a6ba093dfcaad6005520b65a068c60d7e93b03f8..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/train_sg.sh +++ /dev/null @@ -1,10 +0,0 @@ -OMP_NUM_THREADS=2 CUDA_VISIBLE_DEVICES='0' python -m torch.distributed.launch --nproc_per_node=1 --master_port 23003 main.py \ ---model_name=SG \ ---config_path=configs/sg.yaml \ ---rawdata_path=rawdata \ ---desc_path=desc_path \ ---desc_suffix=_root_1000.hdf5 \ ---dataset_path=dataset_path \ ---log_base=log_root_1k_sg \ ---num_kpt=1000 \ ---train_iter=900000 \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/__init__.py deleted file mode 100644 index a0b6b345640a895368ac8a647afef6f24333d90e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base import LoggerHook -from .dvclive import DvcliveLoggerHook -from .mlflow import MlflowLoggerHook -from .neptune import NeptuneLoggerHook -from .pavi import PaviLoggerHook -from .tensorboard import TensorboardLoggerHook -from .text import TextLoggerHook -from .wandb import WandbLoggerHook - -__all__ = [ - 'LoggerHook', 'MlflowLoggerHook', 'PaviLoggerHook', - 'TensorboardLoggerHook', 'TextLoggerHook', 'WandbLoggerHook', - 'NeptuneLoggerHook', 'DvcliveLoggerHook' -] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/utils.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/utils.py deleted file mode 100644 index 157c9a2e1fe009552fdec9b9c9e7a33ed46d51ff..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/utils.py +++ /dev/null @@ -1,158 +0,0 @@ -import copy -import warnings - -from mmcv.cnn import VGG -from mmcv.runner.hooks import HOOKS, Hook - -from mmdet.datasets.builder import PIPELINES -from mmdet.datasets.pipelines import LoadAnnotations, LoadImageFromFile -from mmdet.models.dense_heads import GARPNHead, RPNHead -from mmdet.models.roi_heads.mask_heads import FusedSemanticHead - - -def replace_ImageToTensor(pipelines): - """Replace the ImageToTensor transform in a data pipeline to - DefaultFormatBundle, which is normally useful in batch inference. - - Args: - pipelines (list[dict]): Data pipeline configs. - - Returns: - list: The new pipeline list with all ImageToTensor replaced by - DefaultFormatBundle. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='ImageToTensor', keys=['img']), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict( - ... type='MultiScaleFlipAug', - ... img_scale=(1333, 800), - ... flip=False, - ... transforms=[ - ... dict(type='Resize', keep_ratio=True), - ... dict(type='RandomFlip'), - ... dict(type='Normalize', mean=[0, 0, 0], std=[1, 1, 1]), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img']), - ... ]) - ... ] - >>> assert expected_pipelines == replace_ImageToTensor(pipelines) - """ - pipelines = copy.deepcopy(pipelines) - for i, pipeline in enumerate(pipelines): - if pipeline['type'] == 'MultiScaleFlipAug': - assert 'transforms' in pipeline - pipeline['transforms'] = replace_ImageToTensor( - pipeline['transforms']) - elif pipeline['type'] == 'ImageToTensor': - warnings.warn( - '"ImageToTensor" pipeline is replaced by ' - '"DefaultFormatBundle" for batch inference. It is ' - 'recommended to manually replace it in the test ' - 'data pipeline in your config file.', UserWarning) - pipelines[i] = {'type': 'DefaultFormatBundle'} - return pipelines - - -def get_loading_pipeline(pipeline): - """Only keep loading image and annotations related configuration. - - Args: - pipeline (list[dict]): Data pipeline configs. - - Returns: - list[dict]: The new pipeline list with only keep - loading image and annotations related configuration. - - Examples: - >>> pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True), - ... 
dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - ... dict(type='RandomFlip', flip_ratio=0.5), - ... dict(type='Normalize', **img_norm_cfg), - ... dict(type='Pad', size_divisor=32), - ... dict(type='DefaultFormatBundle'), - ... dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) - ... ] - >>> expected_pipelines = [ - ... dict(type='LoadImageFromFile'), - ... dict(type='LoadAnnotations', with_bbox=True) - ... ] - >>> assert expected_pipelines ==\ - ... get_loading_pipeline(pipelines) - """ - loading_pipeline_cfg = [] - for cfg in pipeline: - obj_cls = PIPELINES.get(cfg['type']) - # TODO:use more elegant way to distinguish loading modules - if obj_cls is not None and obj_cls in (LoadImageFromFile, - LoadAnnotations): - loading_pipeline_cfg.append(cfg) - assert len(loading_pipeline_cfg) == 2, \ - 'The data pipeline in your config file must include ' \ - 'loading image and annotations related pipeline.' - return loading_pipeline_cfg - - -@HOOKS.register_module() -class NumClassCheckHook(Hook): - - def _check_head(self, runner): - """Check whether the `num_classes` in head matches the length of - `CLASSSES` in `dataset`. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - model = runner.model - dataset = runner.data_loader.dataset - if dataset.CLASSES is None: - runner.logger.warning( - f'Please set `CLASSES` ' - f'in the {dataset.__class__.__name__} and' - f'check if it is consistent with the `num_classes` ' - f'of head') - else: - for name, module in model.named_modules(): - if hasattr(module, 'num_classes') and not isinstance( - module, (RPNHead, VGG, FusedSemanticHead, GARPNHead)): - assert module.num_classes == len(dataset.CLASSES), \ - (f'The `num_classes` ({module.num_classes}) in ' - f'{module.__class__.__name__} of ' - f'{model.__class__.__name__} does not matches ' - f'the length of `CLASSES` ' - f'{len(dataset.CLASSES)}) in ' - f'{dataset.__class__.__name__}') - - def before_train_epoch(self, runner): - """Check whether the training dataset is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) - - def before_val_epoch(self, runner): - """Check whether the dataset in val epoch is compatible with head. - - Args: - runner (obj:`EpochBasedRunner`): Epoch based Runner. - """ - self._check_head(runner) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/free_anchor_retina_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/free_anchor_retina_head.py deleted file mode 100644 index 79879fdc3171b8e34b606b27eb1ceb67f4473e3e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/dense_heads/free_anchor_retina_head.py +++ /dev/null @@ -1,270 +0,0 @@ -import torch -import torch.nn.functional as F - -from mmdet.core import bbox_overlaps -from ..builder import HEADS -from .retina_head import RetinaHead - -EPS = 1e-12 - - -@HEADS.register_module() -class FreeAnchorRetinaHead(RetinaHead): - """FreeAnchor RetinaHead used in https://arxiv.org/abs/1909.02466. - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - stacked_convs (int): Number of conv layers in cls and reg tower. - Default: 4. - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None. - norm_cfg (dict): dictionary to construct and config norm layer. 
- Default: norm_cfg=dict(type='GN', num_groups=32, - requires_grad=True). - pre_anchor_topk (int): Number of boxes that be token in each bag. - bbox_thr (float): The threshold of the saturated linear function. It is - usually the same with the IoU threshold used in NMS. - gamma (float): Gamma parameter in focal loss. - alpha (float): Alpha parameter in focal loss. - """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - pre_anchor_topk=50, - bbox_thr=0.6, - gamma=2.0, - alpha=0.5, - **kwargs): - super(FreeAnchorRetinaHead, - self).__init__(num_classes, in_channels, stacked_convs, conv_cfg, - norm_cfg, **kwargs) - - self.pre_anchor_topk = pre_anchor_topk - self.bbox_thr = bbox_thr - self.gamma = gamma - self.alpha = alpha - - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == len(self.anchor_generator.base_anchors) - - anchor_list, _ = self.get_anchors(featmap_sizes, img_metas) - anchors = [torch.cat(anchor) for anchor in anchor_list] - - # concatenate each level - cls_scores = [ - cls.permute(0, 2, 3, - 1).reshape(cls.size(0), -1, self.cls_out_channels) - for cls in cls_scores - ] - bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(bbox_pred.size(0), -1, 4) - for bbox_pred in bbox_preds - ] - cls_scores = torch.cat(cls_scores, dim=1) - bbox_preds = torch.cat(bbox_preds, dim=1) - - cls_prob = torch.sigmoid(cls_scores) - box_prob = [] - num_pos = 0 - positive_losses = [] - for _, (anchors_, gt_labels_, gt_bboxes_, cls_prob_, - bbox_preds_) in enumerate( - zip(anchors, gt_labels, gt_bboxes, cls_prob, bbox_preds)): - - with torch.no_grad(): - if len(gt_bboxes_) == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.cls_out_channels).type_as(bbox_preds_) - else: - # box_localization: a_{j}^{loc}, shape: [j, 4] - pred_boxes = self.bbox_coder.decode(anchors_, bbox_preds_) - - # object_box_iou: IoU_{ij}^{loc}, shape: [i, j] - object_box_iou = bbox_overlaps(gt_bboxes_, pred_boxes) - - # object_box_prob: P{a_{j} -> b_{i}}, shape: [i, j] - t1 = self.bbox_thr - t2 = object_box_iou.max( - dim=1, keepdim=True).values.clamp(min=t1 + 1e-12) - object_box_prob = ((object_box_iou - t1) / - (t2 - t1)).clamp( - min=0, max=1) - - # object_cls_box_prob: P{a_{j} -> b_{i}}, shape: [i, c, j] - num_obj = gt_labels_.size(0) - indices = torch.stack([ - torch.arange(num_obj).type_as(gt_labels_), gt_labels_ - ], - dim=0) - object_cls_box_prob = torch.sparse_coo_tensor( - indices, object_box_prob) - - # image_box_iou: P{a_{j} \in A_{+}}, shape: [c, j] - """ - from "start" to "end" implement: - image_box_iou = torch.sparse.max(object_cls_box_prob, - dim=0).t() - - """ - # 
start - box_cls_prob = torch.sparse.sum( - object_cls_box_prob, dim=0).to_dense() - - indices = torch.nonzero(box_cls_prob, as_tuple=False).t_() - if indices.numel() == 0: - image_box_prob = torch.zeros( - anchors_.size(0), - self.cls_out_channels).type_as(object_box_prob) - else: - nonzero_box_prob = torch.where( - (gt_labels_.unsqueeze(dim=-1) == indices[0]), - object_box_prob[:, indices[1]], - torch.tensor([ - 0 - ]).type_as(object_box_prob)).max(dim=0).values - - # upmap to shape [j, c] - image_box_prob = torch.sparse_coo_tensor( - indices.flip([0]), - nonzero_box_prob, - size=(anchors_.size(0), - self.cls_out_channels)).to_dense() - # end - - box_prob.append(image_box_prob) - - # construct bags for objects - match_quality_matrix = bbox_overlaps(gt_bboxes_, anchors_) - _, matched = torch.topk( - match_quality_matrix, - self.pre_anchor_topk, - dim=1, - sorted=False) - del match_quality_matrix - - # matched_cls_prob: P_{ij}^{cls} - matched_cls_prob = torch.gather( - cls_prob_[matched], 2, - gt_labels_.view(-1, 1, 1).repeat(1, self.pre_anchor_topk, - 1)).squeeze(2) - - # matched_box_prob: P_{ij}^{loc} - matched_anchors = anchors_[matched] - matched_object_targets = self.bbox_coder.encode( - matched_anchors, - gt_bboxes_.unsqueeze(dim=1).expand_as(matched_anchors)) - loss_bbox = self.loss_bbox( - bbox_preds_[matched], - matched_object_targets, - reduction_override='none').sum(-1) - matched_box_prob = torch.exp(-loss_bbox) - - # positive_losses: {-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )} - num_pos += len(gt_bboxes_) - positive_losses.append( - self.positive_bag_loss(matched_cls_prob, matched_box_prob)) - positive_loss = torch.cat(positive_losses).sum() / max(1, num_pos) - - # box_prob: P{a_{j} \in A_{+}} - box_prob = torch.stack(box_prob, dim=0) - - # negative_loss: - # \sum_{j}{ FL((1 - P{a_{j} \in A_{+}}) * (1 - P_{j}^{bg})) } / n||B|| - negative_loss = self.negative_bag_loss(cls_prob, box_prob).sum() / max( - 1, num_pos * self.pre_anchor_topk) - - # avoid the absence of gradients in regression subnet - # when no ground-truth in a batch - if num_pos == 0: - positive_loss = bbox_preds.sum() * 0 - - losses = { - 'positive_bag_loss': positive_loss, - 'negative_bag_loss': negative_loss - } - return losses - - def positive_bag_loss(self, matched_cls_prob, matched_box_prob): - """Compute positive bag loss. - - :math:`-log( Mean-max(P_{ij}^{cls} * P_{ij}^{loc}) )`. - - :math:`P_{ij}^{cls}`: matched_cls_prob, classification probability of matched samples. - - :math:`P_{ij}^{loc}`: matched_box_prob, box probability of matched samples. - - Args: - matched_cls_prob (Tensor): Classification probabilty of matched - samples in shape (num_gt, pre_anchor_topk). - matched_box_prob (Tensor): BBox probability of matched samples, - in shape (num_gt, pre_anchor_topk). - - Returns: - Tensor: Positive bag loss in shape (num_gt,). - """ # noqa: E501, W605 - # bag_prob = Mean-max(matched_prob) - matched_prob = matched_cls_prob * matched_box_prob - weight = 1 / torch.clamp(1 - matched_prob, 1e-12, None) - weight /= weight.sum(dim=1).unsqueeze(dim=-1) - bag_prob = (weight * matched_prob).sum(dim=1) - # positive_bag_loss = -self.alpha * log(bag_prob) - return self.alpha * F.binary_cross_entropy( - bag_prob, torch.ones_like(bag_prob), reduction='none') - - def negative_bag_loss(self, cls_prob, box_prob): - """Compute negative bag loss. - - :math:`FL((1 - P_{a_{j} \in A_{+}}) * (1 - P_{j}^{bg}))`. - - :math:`P_{a_{j} \in A_{+}}`: Box_probability of matched samples. 
- - :math:`P_{j}^{bg}`: Classification probability of negative samples. - - Args: - cls_prob (Tensor): Classification probability, in shape - (num_img, num_anchors, num_classes). - box_prob (Tensor): Box probability, in shape - (num_img, num_anchors, num_classes). - - Returns: - Tensor: Negative bag loss in shape (num_img, num_anchors, num_classes). - """ # noqa: E501, W605 - prob = cls_prob * (1 - box_prob) - # There are some cases when neg_prob = 0. - # This will cause the neg_prob.log() to be inf without clamp. - prob = prob.clamp(min=EPS, max=1 - EPS) - negative_bag_loss = prob**self.gamma * F.binary_cross_entropy( - prob, torch.zeros_like(prob), reduction='none') - return (1 - self.alpha) * negative_bag_loss diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/weight_init.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/weight_init.py deleted file mode 100644 index 38141ba3d61f64ddfc0a31574b4648cbad96d7dd..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/weight_init.py +++ /dev/null @@ -1,62 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import math -import warnings - -import torch - - -def _no_grad_trunc_normal_(tensor, mean, std, a, b): - """Reference: https://people.sc.fsu.edu/~jburkardt/presentations - /truncated_normal.pdf""" - - def norm_cdf(x): - # Computes standard normal cumulative distribution function - return (1. + math.erf(x / math.sqrt(2.))) / 2. - - if (mean < a - 2 * std) or (mean > b + 2 * std): - warnings.warn( - 'mean is more than 2 std from [a, b] in nn.init.trunc_normal_. ' - 'The distribution of values may be incorrect.', - stacklevel=2) - - with torch.no_grad(): - # Values are generated by using a truncated uniform distribution and - # then using the inverse CDF for the normal distribution. - # Get upper and lower cdf values - lower_bound = norm_cdf((a - mean) / std) - upper_bound = norm_cdf((b - mean) / std) - - # Uniformly fill tensor with values from [l, u], then translate to - # [2l-1, 2u-1]. - tensor.uniform_(2 * lower_bound - 1, 2 * upper_bound - 1) - - # Use inverse cdf transform for normal distribution to get truncated - # standard normal - tensor.erfinv_() - - # Transform to proper mean, std - tensor.mul_(std * math.sqrt(2.)) - tensor.add_(mean) - - # Clamp to ensure it's in the proper range - tensor.clamp_(min=a, max=b) - return tensor - - -def trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.): - r"""Fills the input Tensor with values drawn from a truncated - normal distribution. The values are effectively drawn from the - normal distribution :math:`\mathcal{N}(\text{mean}, \text{std}^2)` - with values outside :math:`[a, b]` redrawn until they are within - the bounds. The method used for generating the random values works - best when :math:`a \leq \text{mean} \leq b`. 
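- Internally, values are drawn by inverse-CDF (erfinv) sampling of a uniform variable restricted to [a, b], so no rejection sampling is needed.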
- Args: - tensor (``torch.Tensor``): an n-dimensional `torch.Tensor` - mean (float): the mean of the normal distribution - std (float): the standard deviation of the normal distribution - a (float): the minimum cutoff value - b (float): the maximum cutoff value - """ - return _no_grad_trunc_normal_(tensor, mean, std, a, b) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/scatter_points.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/scatter_points.py deleted file mode 100644 index 2b8aa4169e9f6ca4a6f845ce17d6d1e4db416bb8..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/scatter_points.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', - ['dynamic_point_to_voxel_forward', 'dynamic_point_to_voxel_backward']) - - -class _DynamicScatter(Function): - - @staticmethod - def forward(ctx, feats, coors, reduce_type='max'): - """convert kitti points(N, >=3) to voxels. - - Args: - feats (torch.Tensor): [N, C]. Points features to be reduced - into voxels. - coors (torch.Tensor): [N, ndim]. Corresponding voxel coordinates - (specifically multi-dim voxel index) of each points. - reduce_type (str, optional): Reduce op. support 'max', 'sum' and - 'mean'. Default: 'max'. - - Returns: - voxel_feats (torch.Tensor): [M, C]. Reduced features, input - features that shares the same voxel coordinates are reduced to - one row. - voxel_coors (torch.Tensor): [M, ndim]. Voxel coordinates. - """ - results = ext_module.dynamic_point_to_voxel_forward( - feats, coors, reduce_type) - (voxel_feats, voxel_coors, point2voxel_map, - voxel_points_count) = results - ctx.reduce_type = reduce_type - ctx.save_for_backward(feats, voxel_feats, point2voxel_map, - voxel_points_count) - ctx.mark_non_differentiable(voxel_coors) - return voxel_feats, voxel_coors - - @staticmethod - def backward(ctx, grad_voxel_feats, grad_voxel_coors=None): - (feats, voxel_feats, point2voxel_map, - voxel_points_count) = ctx.saved_tensors - grad_feats = torch.zeros_like(feats) - # TODO: whether to use index put or use cuda_backward - # To use index put, need point to voxel index - ext_module.dynamic_point_to_voxel_backward( - grad_feats, grad_voxel_feats.contiguous(), feats, voxel_feats, - point2voxel_map, voxel_points_count, ctx.reduce_type) - return grad_feats, None, None - - -dynamic_scatter = _DynamicScatter.apply - - -class DynamicScatter(nn.Module): - """Scatters points into voxels, used in the voxel encoder with dynamic - voxelization. - - Note: - The CPU and GPU implementation get the same output, but have numerical - difference after summation and division (e.g., 5e-7). - - Args: - voxel_size (list): list [x, y, z] size of three dimension. - point_cloud_range (list): The coordinate range of points, [x_min, - y_min, z_min, x_max, y_max, z_max]. - average_points (bool): whether to use avg pooling to scatter points - into voxel. - """ - - def __init__(self, voxel_size, point_cloud_range, average_points: bool): - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.average_points = average_points - - def forward_single(self, points, coors): - """Scatters points into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. 
- coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - voxel_feats (torch.Tensor): Reduced features, input features that - shares the same voxel coordinates are reduced to one row. - voxel_coors (torch.Tensor): Voxel coordinates. - """ - reduce = 'mean' if self.average_points else 'max' - return dynamic_scatter(points.contiguous(), coors.contiguous(), reduce) - - def forward(self, points, coors): - """Scatters points/features into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. - coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - voxel_feats (torch.Tensor): Reduced features, input features that - shares the same voxel coordinates are reduced to one row. - voxel_coors (torch.Tensor): Voxel coordinates. - """ - if coors.size(-1) == 3: - return self.forward_single(points, coors) - else: - batch_size = coors[-1, 0] + 1 - voxels, voxel_coors = [], [] - for i in range(batch_size): - inds = torch.where(coors[:, 0] == i) - voxel, voxel_coor = self.forward_single( - points[inds], coors[inds][:, 1:]) - coor_pad = nn.functional.pad( - voxel_coor, (1, 0), mode='constant', value=i) - voxel_coors.append(coor_pad) - voxels.append(voxel) - features = torch.cat(voxels, dim=0) - feature_coors = torch.cat(voxel_coors, dim=0) - - return features, feature_coors - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', average_points=' + str(self.average_points) - s += ')' - return s diff --git a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/wav_processors/__init__.py b/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/wav_processors/__init__.py deleted file mode 100644 index 4be97b377dcb95a0e6bceb876ac0ce93c8290249..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/wav_processors/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from . import base_processor -from . 
import common_processors diff --git a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/wav_processors/base_processor.py b/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/wav_processors/base_processor.py deleted file mode 100644 index e8200dc58a9388ac94a5ec34b8a65f75e380255b..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/data_gen/tts/wav_processors/base_processor.py +++ /dev/null @@ -1,25 +0,0 @@ -REGISTERED_WAV_PROCESSORS = {} - - -def register_wav_processors(name): - def _f(cls): - REGISTERED_WAV_PROCESSORS[name] = cls - return cls - - return _f - - -def get_wav_processor_cls(name): - return REGISTERED_WAV_PROCESSORS.get(name, None) - - -class BaseWavProcessor: - @property - def name(self): - raise NotImplementedError - - def output_fn(self, input_fn): - return f'{input_fn[:-4]}_{self.name}.wav' - - def process(self, input_fn, sr, tmp_dir, processed_dir, item_name, preprocess_args): - raise NotImplementedError diff --git a/spaces/Rongjiehuang/GenerSpeech/modules/hifigan/mel_utils.py b/spaces/Rongjiehuang/GenerSpeech/modules/hifigan/mel_utils.py deleted file mode 100644 index 06e0f7d4d16fa3e4aefc8949347455f5a6e938da..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/modules/hifigan/mel_utils.py +++ /dev/null @@ -1,80 +0,0 @@ -import numpy as np -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram(y, hparams, center=False, complex=False): - # hop_size: 512 # For 22050Hz, 275 ~= 12.5 ms (0.0125 * sample_rate) - # win_size: 2048 # For 22050Hz, 1100 ~= 50 ms (If None, win_size: fft_size) (0.05 * sample_rate) - # fmin: 55 # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - # fmax: 10000 # To be increased/reduced depending on data. - # fft_size: 2048 # Extra window size is filled with 0 paddings to match this parameter - # n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, - n_fft = hparams['fft_size'] - num_mels = hparams['audio_num_mel_bins'] - sampling_rate = hparams['audio_sample_rate'] - hop_size = hparams['hop_size'] - win_size = hparams['win_size'] - fmin = hparams['fmin'] - fmax = hparams['fmax'] - y = y.clamp(min=-1., max=1.) 
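The dynamic_range_compression_torch / dynamic_range_decompression_torch pair defined above is the log-domain normalization applied to mel magnitudes before they reach the vocoder; clip_val keeps the log finite for near-silent bins. A standalone round-trip check with arbitrary tensor sizes:

import torch

def compress(x, C=1, clip_val=1e-5):
    # log-compression of magnitudes, as in dynamic_range_compression_torch above
    return torch.log(torch.clamp(x, min=clip_val) * C)

def decompress(x, C=1):
    return torch.exp(x) / C

mag = torch.rand(80, 200) + 0.1            # fake mel magnitudes, all above clip_val
restored = decompress(compress(mag))
print(torch.allclose(mag, restored, atol=1e-6))  # True: values above clip_val round-trip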
- global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax) + '_' + str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - if not complex: - spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) - spec = torch.matmul(mel_basis[str(fmax) + '_' + str(y.device)], spec) - spec = spectral_normalize_torch(spec) - else: - B, C, T, _ = spec.shape - spec = spec.transpose(1, 2) # [B, T, n_fft, 2] - return spec diff --git a/spaces/Rongjiehuang/GenerSpeech/utils/pitch_utils.py b/spaces/Rongjiehuang/GenerSpeech/utils/pitch_utils.py deleted file mode 100644 index f7fd166abd3a03bac5909e498669b482447435cf..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/utils/pitch_utils.py +++ /dev/null @@ -1,76 +0,0 @@ -######### -# world -########## -import librosa -import numpy as np -import torch - -gamma = 0 -mcepInput = 3 # 0 for dB, 3 for magnitude -alpha = 0.45 -en_floor = 10 ** (-80 / 20) -FFT_SIZE = 2048 - - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def norm_f0(f0, uv, hparams): - is_torch = isinstance(f0, torch.Tensor) - if hparams['pitch_norm'] == 'standard': - f0 = (f0 - hparams['f0_mean']) / hparams['f0_std'] - if hparams['pitch_norm'] == 'log': - f0 = torch.log2(f0) if is_torch else np.log2(f0) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - return f0 - - -def norm_interp_f0(f0, hparams): - is_torch = isinstance(f0, torch.Tensor) - if is_torch: - device = f0.device - f0 = f0.data.cpu().numpy() - uv = f0 == 0 - f0 = norm_f0(f0, uv, hparams) - if sum(uv) == len(f0): - f0[uv] = 0 - elif sum(uv) > 0: - f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv]) - uv = torch.FloatTensor(uv) - f0 = torch.FloatTensor(f0) - if is_torch: - f0 = f0.to(device) - return f0, uv - - -def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None): - if hparams['pitch_norm'] == 'standard': - f0 = f0 * hparams['f0_std'] + hparams['f0_mean'] - if hparams['pitch_norm'] == 'log': - f0 = 2 ** f0 - if min is not None: - f0 = f0.clamp(min=min) - if max is not None: - f0 = f0.clamp(max=max) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - if pitch_padding is not None: - f0[pitch_padding] = 0 - return f0 diff --git a/spaces/Rongjiehuang/ProDiff/usr/diff/diffusion.py b/spaces/Rongjiehuang/ProDiff/usr/diff/diffusion.py deleted file mode 100644 index e874d64d4636c0b842392b91e92c7586770cbe58..0000000000000000000000000000000000000000 --- 
a/spaces/Rongjiehuang/ProDiff/usr/diff/diffusion.py +++ /dev/null @@ -1,333 +0,0 @@ -import math -import random -from functools import partial -from inspect import isfunction -from pathlib import Path -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from tqdm import tqdm -from einops import rearrange - -from modules.fastspeech.fs2 import FastSpeech2 -from utils.hparams import hparams - - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def cycle(dl): - while True: - for data in dl: - yield data - - -def num_to_groups(num, divisor): - groups = num // divisor - remainder = num % divisor - arr = [divisor] * groups - if remainder > 0: - arr.append(remainder) - return arr - - -class Residual(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - - def forward(self, x, *args, **kwargs): - return self.fn(x, *args, **kwargs) + x - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class Mish(nn.Module): - def forward(self, x): - return x * torch.tanh(F.softplus(x)) - - -class Upsample(nn.Module): - def __init__(self, dim): - super().__init__() - self.conv = nn.ConvTranspose2d(dim, dim, 4, 2, 1) - - def forward(self, x): - return self.conv(x) - - -class Downsample(nn.Module): - def __init__(self, dim): - super().__init__() - self.conv = nn.Conv2d(dim, dim, 3, 2, 1) - - def forward(self, x): - return self.conv(x) - - -class Rezero(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - self.g = nn.Parameter(torch.zeros(1)) - - def forward(self, x): - return self.fn(x) * self.g - - -# building block modules - -class Block(nn.Module): - def __init__(self, dim, dim_out, groups=8): - super().__init__() - self.block = nn.Sequential( - nn.Conv2d(dim, dim_out, 3, padding=1), - nn.GroupNorm(groups, dim_out), - Mish() - ) - - def forward(self, x): - return self.block(x) - - -class ResnetBlock(nn.Module): - def __init__(self, dim, dim_out, *, time_emb_dim, groups=8): - super().__init__() - self.mlp = nn.Sequential( - Mish(), - nn.Linear(time_emb_dim, dim_out) - ) - - self.block1 = Block(dim, dim_out) - self.block2 = Block(dim_out, dim_out) - self.res_conv = nn.Conv2d(dim, dim_out, 1) if dim != dim_out else nn.Identity() - - def forward(self, x, time_emb): - h = self.block1(x) - h += self.mlp(time_emb)[:, :, None, None] - h = self.block2(h) - return h + self.res_conv(x) - - -class LinearAttention(nn.Module): - def __init__(self, dim, heads=4, dim_head=32): - super().__init__() - self.heads = heads - hidden_dim = dim_head * heads - self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False) - self.to_out = nn.Conv2d(hidden_dim, dim, 1) - - def forward(self, x): - b, c, h, w = x.shape - qkv = self.to_qkv(x) - q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads=self.heads, qkv=3) - k = k.softmax(dim=-1) - context = torch.einsum('bhdn,bhen->bhde', k, v) - out = torch.einsum('bhde,bhdn->bhen', context, q) - out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w) - return self.to_out(out) - - -# gaussian diffusion trainer class - -def extract(a, t, x_shape): - b, *_ 
= t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() - - -def cosine_beta_schedule(timesteps, s=0.008): - """ - cosine schedule - as proposed in https://openreview.net/forum?id=-NEXDKk8gZ - """ - steps = timesteps + 1 - x = np.linspace(0, steps, steps) - alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2 - alphas_cumprod = alphas_cumprod / alphas_cumprod[0] - betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) - return np.clip(betas, a_min=0, a_max=0.999) - - -class GaussianDiffusion(nn.Module): - def __init__(self, phone_encoder, out_dims, denoise_fn, - timesteps=1000, loss_type='l1', betas=None, spec_min=None, spec_max=None): - super().__init__() - self.denoise_fn = denoise_fn - if hparams.get('use_midi') is not None and hparams['use_midi']: - self.fs2 = FastSpeech2MIDI(phone_encoder, out_dims) - else: - self.fs2 = FastSpeech2(phone_encoder, out_dims) - self.fs2.decoder = None - self.mel_bins = out_dims - - if exists(betas): - betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas - else: - betas = cosine_beta_schedule(timesteps) - - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.loss_type = loss_type - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']]) - self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']]) - - def q_mean_variance(self, x_start, t): - mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = extract(1. 
- self.alphas_cumprod, t, x_start.shape) - log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, cond, clip_denoised: bool): - noise_pred = self.denoise_fn(x, t, cond=cond) - x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred) - - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return ( - extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def p_losses(self, x_start, t, cond, noise=None, nonpadding=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - x_recon = self.denoise_fn(x_noisy, t, cond) - - if self.loss_type == 'l1': - if nonpadding is not None: - loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean() - else: - # print('are you sure w/o nonpadding?') - loss = (noise - x_recon).abs().mean() - - elif self.loss_type == 'l2': - loss = F.mse_loss(noise, x_recon) - else: - raise NotImplementedError() - - return loss - - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, - ref_mels=None, f0=None, uv=None, energy=None, infer=False): - b, *_, device = *txt_tokens.shape, txt_tokens.device - ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy, - skip_decoder=True, infer=infer) - cond = ret['decoder_inp'].transpose(1, 2) - if not infer: - t = torch.randint(0, self.num_timesteps, (b,), device=device).long() - x = ref_mels - x = self.norm_spec(x) - x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T] - nonpadding = (mel2ph != 0).float() - ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding) - else: - t = self.num_timesteps - shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2]) - x = torch.randn(shape, device=device) - for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t): - x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond) - x = x[:, 0].transpose(1, 2) - ret['mel_out'] = self.denorm_spec(x) - - return ret - - def norm_spec(self, x): - return (x - self.spec_min) / 
(self.spec_max - self.spec_min) * 2 - 1 - - def denorm_spec(self, x): - return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min - - def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph): - return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph) - - def out2mel(self, x): - return x diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py deleted file mode 100644 index acd00238895d57ba878fd0211d5654250fb10061..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/ONNXVITS_models.py +++ /dev/null @@ -1,509 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import ONNXVITS_modules as modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - self.w = None - self.reverse = None - self.noise_scale = None - def forward(self, x, x_mask, g=None): - w = self.w - reverse = self.reverse - noise_scale = self.noise_scale - - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = 
self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - self.reverse = None - def forward(self, x, x_mask, g=None): - reverse = self.reverse - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask # x_in : 
[b, c, t] -> [b, h, t] - x = self.enc(x, x_mask, g=g) # x_in : [b, h, t], g : [b, h, 1], x = x_in + g - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask # z, m, logs : [b, h, t] - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, 
use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - - if n_speakers > 0: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, sid=None, noise_scale=.667, length_scale=1, noise_scale_w=.8, max_len=None): - torch.onnx.export( - 
self.enc_p, - (x, x_lengths), - "ONNX_net/enc_p.onnx", - input_names=["x", "x_lengths"], - output_names=["xout", "m_p", "logs_p", "x_mask"], - dynamic_axes={ - "x" : [1], - "xout" : [2], - "m_p" : [2], - "logs_p" : [2], - "x_mask" : [2] - }, - verbose=True, - ) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - self.dp.reverse = True - self.dp.noise_scale = noise_scale_w - torch.onnx.export( - self.dp, - (x, x_mask, g), - "ONNX_net/dp.onnx", - input_names=["x", "x_mask", "g"], - output_names=["logw"], - dynamic_axes={ - "x" : [2], - "x_mask" : [2], - "logw" : [2] - }, - verbose=True, - ) - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - - self.flow.reverse = True - torch.onnx.export( - self.flow, - (z_p, y_mask, g), - "ONNX_net/flow.onnx", - input_names=["z_p", "y_mask", "g"], - output_names=["z"], - dynamic_axes={ - "z_p" : [2], - "y_mask" : [2], - "z" : [2] - }, - verbose=True, - ) - z = self.flow(z_p, y_mask, g=g) - z_in = (z * y_mask)[:,:,:max_len] - - torch.onnx.export( - self.dec, - (z_in, g), - "ONNX_net/dec.onnx", - input_names=["z_in", "g"], - output_names=["o"], - dynamic_axes={ - "z_in" : [2], - "o" : [2] - }, - verbose=True, - ) - o = self.dec(z_in, g=g) - return o diff --git a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/ngu_dialect.py b/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/ngu_dialect.py deleted file mode 100644 index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000 --- a/spaces/SERER/VITS-Umamusume-voice-synthesizer/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC(dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/sample_model.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/sample_model.py deleted file mode 100644 index 4c60e3f8ff81ed867cc0d0bfa7bc28594d70d59e..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/Text2Human/Text2Human/models/sample_model.py +++ /dev/null @@ -1,500 +0,0 @@ -import logging - -import numpy as np -import torch -import torch.distributions as dists -import torch.nn.functional as F -from torchvision.utils import save_image - -from models.archs.fcn_arch import FCNHead, MultiHeadFCNHead -from models.archs.shape_attr_embedding_arch import ShapeAttrEmbedding -from models.archs.transformer_arch import TransformerMultiHead -from models.archs.unet_arch import ShapeUNet, UNet -from models.archs.vqgan_arch import (Decoder, DecoderRes, Encoder, - VectorQuantizer, - VectorQuantizerSpatialTextureAware, - VectorQuantizerTexture) - -logger = logging.getLogger('base') - - -class BaseSampleModel(): - """Base Model""" - - def __init__(self, opt): - self.opt = opt - self.device = torch.device('cuda') - - # hierarchical VQVAE - self.decoder = Decoder( - in_channels=opt['top_in_channels'], - resolution=opt['top_resolution'], - z_channels=opt['top_z_channels'], - ch=opt['top_ch'], - out_ch=opt['top_out_ch'], - num_res_blocks=opt['top_num_res_blocks'], - attn_resolutions=opt['top_attn_resolutions'], - ch_mult=opt['top_ch_mult'], - dropout=opt['top_dropout'], - resamp_with_conv=True, - give_pre_end=False).to(self.device) - self.top_quantize = VectorQuantizerTexture( - 1024, opt['embed_dim'], beta=0.25).to(self.device) - self.top_post_quant_conv = torch.nn.Conv2d(opt['embed_dim'], - opt["top_z_channels"], - 1).to(self.device) - self.load_top_pretrain_models() - - self.bot_decoder_res = DecoderRes( - in_channels=opt['bot_in_channels'], - resolution=opt['bot_resolution'], - z_channels=opt['bot_z_channels'], - ch=opt['bot_ch'], - num_res_blocks=opt['bot_num_res_blocks'], - ch_mult=opt['bot_ch_mult'], - dropout=opt['bot_dropout'], - give_pre_end=False).to(self.device) - self.bot_quantize = VectorQuantizerSpatialTextureAware( - opt['bot_n_embed'], - opt['embed_dim'], - beta=0.25, - spatial_size=opt['bot_codebook_spatial_size']).to(self.device) - self.bot_post_quant_conv = torch.nn.Conv2d(opt['embed_dim'], - opt["bot_z_channels"], - 1).to(self.device) - self.load_bot_pretrain_network() - - # top -> bot prediction - self.index_pred_guidance_encoder = UNet( - in_channels=opt['index_pred_encoder_in_channels']).to(self.device) - self.index_pred_decoder = MultiHeadFCNHead( - in_channels=opt['index_pred_fc_in_channels'], - in_index=opt['index_pred_fc_in_index'], - channels=opt['index_pred_fc_channels'], - num_convs=opt['index_pred_fc_num_convs'], - concat_input=opt['index_pred_fc_concat_input'], - dropout_ratio=opt['index_pred_fc_dropout_ratio'], - num_classes=opt['index_pred_fc_num_classes'], - align_corners=opt['index_pred_fc_align_corners'], - num_head=18).to(self.device) - self.load_index_pred_network() - - # VAE for segmentation mask - self.segm_encoder = Encoder( - ch=opt['segm_ch'], - num_res_blocks=opt['segm_num_res_blocks'], - attn_resolutions=opt['segm_attn_resolutions'], - ch_mult=opt['segm_ch_mult'], - in_channels=opt['segm_in_channels'], - resolution=opt['segm_resolution'], - z_channels=opt['segm_z_channels'], - double_z=opt['segm_double_z'], - dropout=opt['segm_dropout']).to(self.device) - self.segm_quantizer = VectorQuantizer( - opt['segm_n_embed'], - 
opt['segm_embed_dim'], - beta=0.25, - sane_index_shape=True).to(self.device) - self.segm_quant_conv = torch.nn.Conv2d(opt["segm_z_channels"], - opt['segm_embed_dim'], - 1).to(self.device) - self.load_pretrained_segm_token() - - # define sampler - self.sampler_fn = TransformerMultiHead( - codebook_size=opt['codebook_size'], - segm_codebook_size=opt['segm_codebook_size'], - texture_codebook_size=opt['texture_codebook_size'], - bert_n_emb=opt['bert_n_emb'], - bert_n_layers=opt['bert_n_layers'], - bert_n_head=opt['bert_n_head'], - block_size=opt['block_size'], - latent_shape=opt['latent_shape'], - embd_pdrop=opt['embd_pdrop'], - resid_pdrop=opt['resid_pdrop'], - attn_pdrop=opt['attn_pdrop'], - num_head=opt['num_head']).to(self.device) - self.load_sampler_pretrained_network() - - self.shape = tuple(opt['latent_shape']) - - self.mask_id = opt['codebook_size'] - self.sample_steps = opt['sample_steps'] - - def load_top_pretrain_models(self): - # load pretrained vqgan - top_vae_checkpoint = torch.load(self.opt['top_vae_path']) - - self.decoder.load_state_dict( - top_vae_checkpoint['decoder'], strict=True) - self.top_quantize.load_state_dict( - top_vae_checkpoint['quantize'], strict=True) - self.top_post_quant_conv.load_state_dict( - top_vae_checkpoint['post_quant_conv'], strict=True) - - self.decoder.eval() - self.top_quantize.eval() - self.top_post_quant_conv.eval() - - def load_bot_pretrain_network(self): - checkpoint = torch.load(self.opt['bot_vae_path']) - self.bot_decoder_res.load_state_dict( - checkpoint['bot_decoder_res'], strict=True) - self.decoder.load_state_dict(checkpoint['decoder'], strict=True) - self.bot_quantize.load_state_dict( - checkpoint['bot_quantize'], strict=True) - self.bot_post_quant_conv.load_state_dict( - checkpoint['bot_post_quant_conv'], strict=True) - - self.bot_decoder_res.eval() - self.decoder.eval() - self.bot_quantize.eval() - self.bot_post_quant_conv.eval() - - def load_pretrained_segm_token(self): - # load pretrained vqgan for segmentation mask - segm_token_checkpoint = torch.load(self.opt['segm_token_path']) - self.segm_encoder.load_state_dict( - segm_token_checkpoint['encoder'], strict=True) - self.segm_quantizer.load_state_dict( - segm_token_checkpoint['quantize'], strict=True) - self.segm_quant_conv.load_state_dict( - segm_token_checkpoint['quant_conv'], strict=True) - - self.segm_encoder.eval() - self.segm_quantizer.eval() - self.segm_quant_conv.eval() - - def load_index_pred_network(self): - checkpoint = torch.load(self.opt['pretrained_index_network']) - self.index_pred_guidance_encoder.load_state_dict( - checkpoint['guidance_encoder'], strict=True) - self.index_pred_decoder.load_state_dict( - checkpoint['index_decoder'], strict=True) - - self.index_pred_guidance_encoder.eval() - self.index_pred_decoder.eval() - - def load_sampler_pretrained_network(self): - checkpoint = torch.load(self.opt['pretrained_sampler']) - self.sampler_fn.load_state_dict(checkpoint, strict=True) - self.sampler_fn.eval() - - def bot_index_prediction(self, feature_top, texture_mask): - self.index_pred_guidance_encoder.eval() - self.index_pred_decoder.eval() - - texture_tokens = F.interpolate( - texture_mask, (32, 16), mode='nearest').view(self.batch_size, - -1).long() - - texture_mask_flatten = texture_tokens.view(-1) - min_encodings_indices_list = [ - torch.full( - texture_mask_flatten.size(), - fill_value=-1, - dtype=torch.long, - device=texture_mask_flatten.device) for _ in range(18) - ] - with torch.no_grad(): - feature_enc = self.index_pred_guidance_encoder(feature_top) - 
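The index-prediction decoder used here emits one logit tensor per texture codebook, and the loop just below keeps each head's argmax only inside its own texture region before merging. Stripped of the model, the merge pattern reduces to the following sketch; sizes and the number of codebooks are toy values:

import torch

num_codebooks = 3
texture_mask = torch.tensor([0, 1, 1, 2, 0])          # which codebook owns each position
# one (positions, vocab) logit tensor per head; random stand-ins for the decoder output
logits_per_head = [torch.randn(5, 8) for _ in range(num_codebooks)]

merged = torch.full((5,), -1, dtype=torch.long)
for codebook_idx, logits in enumerate(logits_per_head):
    region = texture_mask == codebook_idx
    if region.any():
        pred = logits.argmax(dim=1)
        merged[region] = pred[region]                  # keep only this head's region
print(merged)                                          # every position filled by its own head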
memory_logits_list = self.index_pred_decoder(feature_enc) - for codebook_idx, memory_logits in enumerate(memory_logits_list): - region_of_interest = texture_mask_flatten == codebook_idx - if torch.sum(region_of_interest) > 0: - memory_indices_pred = memory_logits.argmax(dim=1).view(-1) - memory_indices_pred = memory_indices_pred - min_encodings_indices_list[codebook_idx][ - region_of_interest] = memory_indices_pred[ - region_of_interest] - min_encodings_indices_return_list = [ - min_encodings_indices.view((1, 32, 16)) - for min_encodings_indices in min_encodings_indices_list - ] - - return min_encodings_indices_return_list - - def sample_and_refine(self, save_dir=None, img_name=None): - # sample 32x16 features indices - sampled_top_indices_list = self.sample_fn( - temp=1, sample_steps=self.sample_steps) - - for sample_idx in range(self.batch_size): - sample_indices = [ - sampled_indices_cur[sample_idx:sample_idx + 1] - for sampled_indices_cur in sampled_top_indices_list - ] - top_quant = self.top_quantize.get_codebook_entry( - sample_indices, self.texture_mask[sample_idx:sample_idx + 1], - (sample_indices[0].size(0), self.shape[0], self.shape[1], - self.opt["top_z_channels"])) - - top_quant = self.top_post_quant_conv(top_quant) - - bot_indices_list = self.bot_index_prediction( - top_quant, self.texture_mask[sample_idx:sample_idx + 1]) - - quant_bot = self.bot_quantize.get_codebook_entry( - bot_indices_list, self.texture_mask[sample_idx:sample_idx + 1], - (bot_indices_list[0].size(0), bot_indices_list[0].size(1), - bot_indices_list[0].size(2), - self.opt["bot_z_channels"])) #.permute(0, 3, 1, 2) - quant_bot = self.bot_post_quant_conv(quant_bot) - bot_dec_res = self.bot_decoder_res(quant_bot) - - dec = self.decoder(top_quant, bot_h=bot_dec_res) - - dec = ((dec + 1) / 2) - dec = dec.clamp_(0, 1) - if save_dir is None and img_name is None: - return dec - else: - save_image( - dec, - f'{save_dir}/{img_name[sample_idx]}', - nrow=1, - padding=4) - - def sample_fn(self, temp=1.0, sample_steps=None): - self.sampler_fn.eval() - - x_t = torch.ones((self.batch_size, np.prod(self.shape)), - device=self.device).long() * self.mask_id - unmasked = torch.zeros_like(x_t, device=self.device).bool() - sample_steps = list(range(1, sample_steps + 1)) - - texture_tokens = F.interpolate( - self.texture_mask, (32, 16), - mode='nearest').view(self.batch_size, -1).long() - - texture_mask_flatten = texture_tokens.view(-1) - - # min_encodings_indices_list would be used to visualize the image - min_encodings_indices_list = [ - torch.full( - texture_mask_flatten.size(), - fill_value=-1, - dtype=torch.long, - device=texture_mask_flatten.device) for _ in range(18) - ] - - for t in reversed(sample_steps): - t = torch.full((self.batch_size, ), - t, - device=self.device, - dtype=torch.long) - - # where to unmask - changes = torch.rand( - x_t.shape, device=self.device) < 1 / t.float().unsqueeze(-1) - # don't unmask somewhere already unmasked - changes = torch.bitwise_xor(changes, - torch.bitwise_and(changes, unmasked)) - # update mask with changes - unmasked = torch.bitwise_or(unmasked, changes) - - x_0_logits_list = self.sampler_fn( - x_t, self.segm_tokens, texture_tokens, t=t) - - changes_flatten = changes.view(-1) - ori_shape = x_t.shape # [b, h*w] - x_t = x_t.view(-1) # [b*h*w] - for codebook_idx, x_0_logits in enumerate(x_0_logits_list): - if torch.sum(texture_mask_flatten[changes_flatten] == - codebook_idx) > 0: - # scale by temperature - x_0_logits = x_0_logits / temp - x_0_dist = dists.Categorical(logits=x_0_logits) - 
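The sample_fn above runs absorbing-state discrete sampling: every token starts as mask_id, and at step t each still-masked position is revealed with probability 1/t, so the final step (t == 1) reveals whatever remains. A toy version of just that unmasking schedule, with a random stand-in for the transformer's token prediction:

import torch

num_tokens, steps, mask_id = 16, 8, 999
x_t = torch.full((1, num_tokens), mask_id)            # start fully masked
unmasked = torch.zeros_like(x_t, dtype=torch.bool)

for t in reversed(range(1, steps + 1)):
    # reveal each position with probability 1/t ...
    changes = torch.rand(x_t.shape) < 1.0 / t
    # ... unless it was already revealed at an earlier step
    changes &= ~unmasked
    unmasked |= changes
    # stand-in for the sampled x_0 prediction: random token ids
    x_0_hat = torch.randint(0, 512, x_t.shape)
    x_t[changes] = x_0_hat[changes]

print(bool(unmasked.all()))   # True: at t == 1 every remaining position is revealed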
x_0_hat = x_0_dist.sample().long() - x_0_hat = x_0_hat.view(-1) - - # only replace the changed indices with corresponding codebook_idx - changes_segm = torch.bitwise_and( - changes_flatten, texture_mask_flatten == codebook_idx) - - # x_t would be the input to the transformer, so the index range should be continual one - x_t[changes_segm] = x_0_hat[ - changes_segm] + 1024 * codebook_idx - min_encodings_indices_list[codebook_idx][ - changes_segm] = x_0_hat[changes_segm] - - x_t = x_t.view(ori_shape) # [b, h*w] - - min_encodings_indices_return_list = [ - min_encodings_indices.view(ori_shape) - for min_encodings_indices in min_encodings_indices_list - ] - - self.sampler_fn.train() - - return min_encodings_indices_return_list - - @torch.no_grad() - def get_quantized_segm(self, segm): - segm_one_hot = F.one_hot( - segm.squeeze(1).long(), - num_classes=self.opt['segm_num_segm_classes']).permute( - 0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - encoded_segm_mask = self.segm_encoder(segm_one_hot) - encoded_segm_mask = self.segm_quant_conv(encoded_segm_mask) - _, _, [_, _, segm_tokens] = self.segm_quantizer(encoded_segm_mask) - - return segm_tokens - - -class SampleFromParsingModel(BaseSampleModel): - """SampleFromParsing model. - """ - - def feed_data(self, data): - self.segm = data['segm'].to(self.device) - self.texture_mask = data['texture_mask'].to(self.device) - self.batch_size = self.segm.size(0) - - self.segm_tokens = self.get_quantized_segm(self.segm) - self.segm_tokens = self.segm_tokens.view(self.batch_size, -1) - - def inference(self, data_loader, save_dir): - for _, data in enumerate(data_loader): - img_name = data['img_name'] - self.feed_data(data) - with torch.no_grad(): - self.sample_and_refine(save_dir, img_name) - - -class SampleFromPoseModel(BaseSampleModel): - """SampleFromPose model. 
- """ - - def __init__(self, opt): - super().__init__(opt) - # pose-to-parsing - self.shape_attr_embedder = ShapeAttrEmbedding( - dim=opt['shape_embedder_dim'], - out_dim=opt['shape_embedder_out_dim'], - cls_num_list=opt['shape_attr_class_num']).to(self.device) - self.shape_parsing_encoder = ShapeUNet( - in_channels=opt['shape_encoder_in_channels']).to(self.device) - self.shape_parsing_decoder = FCNHead( - in_channels=opt['shape_fc_in_channels'], - in_index=opt['shape_fc_in_index'], - channels=opt['shape_fc_channels'], - num_convs=opt['shape_fc_num_convs'], - concat_input=opt['shape_fc_concat_input'], - dropout_ratio=opt['shape_fc_dropout_ratio'], - num_classes=opt['shape_fc_num_classes'], - align_corners=opt['shape_fc_align_corners'], - ).to(self.device) - self.load_shape_generation_models() - - self.palette = [[0, 0, 0], [255, 250, 250], [220, 220, 220], - [250, 235, 215], [255, 250, 205], [211, 211, 211], - [70, 130, 180], [127, 255, 212], [0, 100, 0], - [50, 205, 50], [255, 255, 0], [245, 222, 179], - [255, 140, 0], [255, 0, 0], [16, 78, 139], - [144, 238, 144], [50, 205, 174], [50, 155, 250], - [160, 140, 88], [213, 140, 88], [90, 140, 90], - [185, 210, 205], [130, 165, 180], [225, 141, 151]] - - def load_shape_generation_models(self): - checkpoint = torch.load(self.opt['pretrained_parsing_gen']) - - self.shape_attr_embedder.load_state_dict( - checkpoint['embedder'], strict=True) - self.shape_attr_embedder.eval() - - self.shape_parsing_encoder.load_state_dict( - checkpoint['encoder'], strict=True) - self.shape_parsing_encoder.eval() - - self.shape_parsing_decoder.load_state_dict( - checkpoint['decoder'], strict=True) - self.shape_parsing_decoder.eval() - - def feed_data(self, data): - self.pose = data['densepose'].to(self.device) - self.batch_size = self.pose.size(0) - - self.shape_attr = data['shape_attr'].to(self.device) - self.upper_fused_attr = data['upper_fused_attr'].to(self.device) - self.lower_fused_attr = data['lower_fused_attr'].to(self.device) - self.outer_fused_attr = data['outer_fused_attr'].to(self.device) - - def inference(self, data_loader, save_dir): - for _, data in enumerate(data_loader): - img_name = data['img_name'] - self.feed_data(data) - with torch.no_grad(): - self.generate_parsing_map() - self.generate_quantized_segm() - self.generate_texture_map() - self.sample_and_refine(save_dir, img_name) - - def generate_parsing_map(self): - with torch.no_grad(): - attr_embedding = self.shape_attr_embedder(self.shape_attr) - pose_enc = self.shape_parsing_encoder(self.pose, attr_embedding) - seg_logits = self.shape_parsing_decoder(pose_enc) - self.segm = seg_logits.argmax(dim=1) - self.segm = self.segm.unsqueeze(1) - - def generate_quantized_segm(self): - self.segm_tokens = self.get_quantized_segm(self.segm) - self.segm_tokens = self.segm_tokens.view(self.batch_size, -1) - - def generate_texture_map(self): - upper_cls = [1., 4.] - lower_cls = [3., 5., 21.] - outer_cls = [2.] 
- - mask_batch = [] - for idx in range(self.batch_size): - mask = torch.zeros_like(self.segm[idx]) - upper_fused_attr = self.upper_fused_attr[idx] - lower_fused_attr = self.lower_fused_attr[idx] - outer_fused_attr = self.outer_fused_attr[idx] - if upper_fused_attr != 17: - for cls in upper_cls: - mask[self.segm[idx] == cls] = upper_fused_attr + 1 - - if lower_fused_attr != 17: - for cls in lower_cls: - mask[self.segm[idx] == cls] = lower_fused_attr + 1 - - if outer_fused_attr != 17: - for cls in outer_cls: - mask[self.segm[idx] == cls] = outer_fused_attr + 1 - - mask_batch.append(mask) - self.texture_mask = torch.stack(mask_batch, dim=0).to(torch.float32) - - def feed_pose_data(self, pose_img): - # for ui demo - - self.pose = pose_img.to(self.device) - self.batch_size = self.pose.size(0) - - def feed_shape_attributes(self, shape_attr): - # for ui demo - - self.shape_attr = shape_attr.to(self.device) - - def feed_texture_attributes(self, texture_attr): - # for ui demo - - self.upper_fused_attr = texture_attr[0].unsqueeze(0).to(self.device) - self.lower_fused_attr = texture_attr[1].unsqueeze(0).to(self.device) - self.outer_fused_attr = texture_attr[2].unsqueeze(0).to(self.device) - - def palette_result(self, result): - - seg = result[0] - palette = np.array(self.palette) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - # color_seg = color_seg[..., ::-1] - return color_seg diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/monotonic_align/setup.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip_models/blip_retrieval.py b/spaces/SeViLA/SeViLA/lavis/models/blip_models/blip_retrieval.py deleted file mode 100644 index 44e9c5c998d60400c2443112f69f4be5ad415048..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/blip_models/blip_retrieval.py +++ /dev/null @@ -1,396 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from copy import deepcopy - -import torch -import torch.nn.functional as F -from lavis.common.registry import registry -from lavis.models.albef_models import compute_sim_matrix -from lavis.models.base_model import ( - MomentumDistilationMixin, - SharedQueueMixin, - all_gather_with_grad, - concat_all_gather, -) -from lavis.models.blip_models.blip import BlipBase -from lavis.models.blip_models.blip_outputs import ( - BlipOutput, - BlipSimilarity, - BlipIntermediateOutput, -) -from lavis.models.med import XBertEncoder -from lavis.models.vit import VisionTransformerEncoder -from torch import nn - - -@registry.register_model("blip_retrieval") -class BlipRetrieval(BlipBase, MomentumDistilationMixin, SharedQueueMixin): - """ - BLIP retrieval model. 
- - Supported model types: - - coco: fine-tuned BLIP base model on COCO dataset (Karpathy split). - - flickr: fine-tuned BLIP base model on Flickr30k dataset. - - Usage: - >>> from lavis.models import load_model - >>> model = load_model("blip_retrieval", "coco") - >>> model = load_model("blip_retrieval", "flickr") - """ - - PRETRAINED_MODEL_CONFIG_DICT = { - "coco": "configs/models/blip_retrieval_coco.yaml", - "flickr": "configs/models/blip_retrieval_flickr.yaml", - } - - def __init__( - self, - image_encoder, - text_encoder, - queue_size, - alpha=0.4, - embed_dim=256, - momentum=0.995, - negative_all_rank=False, - max_txt_len=35, - ): - """ """ - super().__init__() - - self.tokenizer = self.init_tokenizer() - - self.visual_encoder = image_encoder - - self.text_encoder = text_encoder - - # creating projection layers for ITC - text_width = text_encoder.config.hidden_size - vision_width = image_encoder.vision_width - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - # create the momentum encoder - self.visual_encoder_m = deepcopy(self.visual_encoder) - self.text_encoder_m = deepcopy(self.text_encoder) - - self.vision_proj_m = deepcopy(self.vision_proj) - self.text_proj_m = deepcopy(self.text_proj) - - self.model_pairs = [ - [self.visual_encoder, self.visual_encoder_m], - [self.text_encoder, self.text_encoder_m], - [self.vision_proj, self.vision_proj_m], - [self.text_proj, self.text_proj_m], - ] - self.copy_params() - - # create the queue - self.register_buffer("image_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("text_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("idx_queue", torch.full((1, queue_size), -100)) - self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long)) - - self.image_queue = nn.functional.normalize(self.image_queue, dim=0) - self.text_queue = nn.functional.normalize(self.text_queue, dim=0) - - self.queue_size = queue_size - self.momentum = momentum - self.temp = nn.Parameter(0.07 * torch.ones([])) - - self.alpha = alpha - self.max_txt_len = max_txt_len - - self.negative_all_rank = negative_all_rank - - def _rampup_factor(self, epoch, iters, num_iters_per_epoch): - return min(1, (epoch * num_iters_per_epoch + iters) / (2 * num_iters_per_epoch)) - - def forward(self, samples): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). The input images. - - text_input (list): A list of length batch_size, each element is a string of text/caption. - - image_id (torch.Tensor): A tensor of shape (batch_size, ). The image ids, used to identify same images in batch. - - epoch (int): The current epoch. - - iters (int): The current iteration. - - num_iters_per_epoch (int): The number of iterations per epoch. - - Returns: - BlipOutput: A BlipOutput object. See ``lavis.models.blip_models.blip_outputs.BlipOutput`` for more details. 
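The forward pass below builds soft image-text contrastive targets: hard positive-pair targets (samples sharing an image_id count as positives) blended with the momentum encoders' softmax, weighted by the ramped-up alpha. Ignoring the feature queue, the target construction reduces to this sketch with toy shapes:

import torch
import torch.nn.functional as F

sim_i2t_m = torch.randn(4, 4)                          # toy momentum-encoder similarities
idx = torch.tensor([[1], [1], [2], [3]])               # image ids; first two share an image
pos = torch.eq(idx, idx.t()).float()
sim_targets = pos / pos.sum(1, keepdim=True)           # multiple positives share the mass

alpha = 0.4
soft_targets = alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets
print(soft_targets.sum(1))                             # each row still sums to 1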
- - Examples: - >>> import torch - >>> from lavis.models import load_model - >>> model = load_model("blip_retrieval", "coco") - >>> images = torch.randn(4, 3, 384, 384) - >>> text_input = ["caption of image 1", "another caption of image 1", "caption of image 2", "caption of image 3"] - >>> image_id = torch.tensor([1, 1, 2, 3]) - >>> samples = {"image": images, "text_input": text_input, "image_id": image_id, "epoch": 0, "iters": 0, "num_iters_per_epoch": 100} - >>> output = model(samples) - >>> output.keys() - odict_keys(['sims', 'intermediate_output', 'loss', 'loss_itc', 'loss_itm']) - """ - image = samples["image"] - caption = samples["text_input"] - idx = samples["image_id"] - - alpha = self.alpha * self._rampup_factor( - epoch=samples["epoch"], - iters=samples["iters"], - num_iters_per_epoch=samples["num_iters_per_epoch"], - ) - - with torch.no_grad(): - self.temp.clamp_(0.001, 0.5) - - image_embeds = self.visual_encoder.forward_features(image) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - image.device - ) - image_feat = F.normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1) - - text = self.tokenizer( - caption, - padding="max_length", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(image.device) - - text_output = self.text_encoder.forward_text(text) - text_embeds = text_output.last_hidden_state - text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1) - - # Image-text Contrastive Learning - idx = idx.view(-1, 1) - idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()], dim=1) - pos_idx = torch.eq(idx, idx_all).float() - sim_targets = pos_idx / pos_idx.sum(1, keepdim=True) - - # get momentum features - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - image_feat_m = F.normalize( - self.vision_proj_m(image_embeds_m[:, 0, :]), dim=-1 - ) - image_feat_m_all = torch.cat( - [image_feat_m.t(), self.image_queue.clone().detach()], dim=1 - ) - - text_output_m = self.text_encoder_m.forward_text(text) - text_embeds_m = text_output_m.last_hidden_state - text_feat_m = F.normalize(self.text_proj_m(text_embeds_m[:, 0, :]), dim=-1) - text_feat_m_all = torch.cat( - [text_feat_m.t(), self.text_queue.clone().detach()], dim=1 - ) - - sim_i2t_m = image_feat_m @ text_feat_m_all / self.temp - sim_t2i_m = text_feat_m @ image_feat_m_all / self.temp - - sim_i2t_targets = ( - alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets - ) - sim_t2i_targets = ( - alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets - ) - - sim_i2t = image_feat @ text_feat_m_all / self.temp - sim_t2i = text_feat @ image_feat_m_all / self.temp - - loss_i2t = -torch.sum( - F.log_softmax(sim_i2t, dim=1) * sim_i2t_targets, dim=1 - ).mean() - loss_t2i = -torch.sum( - F.log_softmax(sim_t2i, dim=1) * sim_t2i_targets, dim=1 - ).mean() - - loss_itc = (loss_i2t + loss_t2i) / 2 - - self._dequeue_and_enqueue(image_feat_m, text_feat_m, idx) - - # Image-text Matching - encoder_input_ids = text.input_ids.clone() - encoder_input_ids[:, 0] = self.tokenizer.enc_token_id - - # forward the positve image-text pair - bs = image.size(0) - output_pos = self.text_encoder( - encoder_input_ids, - attention_mask=text.attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - idxs = concat_all_gather(idx) - if self.negative_all_rank: - # compute sample similarity - with torch.no_grad(): - mask = torch.eq(idx, idxs.t()) - - image_feat_world = 
concat_all_gather(image_feat) - text_feat_world = concat_all_gather(text_feat) - - sim_i2t = image_feat @ text_feat_world.t() / self.temp - sim_t2i = text_feat @ image_feat_world.t() / self.temp - - weights_i2t = F.softmax(sim_i2t, dim=1) - weights_i2t.masked_fill_(mask, 0) - - weights_t2i = F.softmax(sim_t2i, dim=1) - weights_t2i.masked_fill_(mask, 0) - - image_embeds_world = all_gather_with_grad(image_embeds) - - # select a negative image (from all ranks) for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds_world[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg, dim=0) - - # select a negative text (from all ranks) for each image - input_ids_world = concat_all_gather(encoder_input_ids) - att_mask_world = concat_all_gather(text.attention_mask) - - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(input_ids_world[neg_idx]) - text_atts_neg.append(att_mask_world[neg_idx]) - - else: - with torch.no_grad(): - mask = torch.eq(idx, idx.t()) - - sim_i2t = image_feat @ text_feat.t() / self.temp - sim_t2i = text_feat @ image_feat.t() / self.temp - - weights_i2t = F.softmax(sim_i2t, dim=1) - weights_i2t.masked_fill_(mask, 0) - - weights_t2i = F.softmax(sim_t2i, dim=1) - weights_t2i.masked_fill_(mask, 0) - - # select a negative image (from same rank) for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg, dim=0) - - # select a negative text (from same rank) for each image - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(encoder_input_ids[neg_idx]) - text_atts_neg.append(text.attention_mask[neg_idx]) - - text_ids_neg = torch.stack(text_ids_neg, dim=0) - text_atts_neg = torch.stack(text_atts_neg, dim=0) - - text_ids_all = torch.cat([encoder_input_ids, text_ids_neg], dim=0) - text_atts_all = torch.cat([text.attention_mask, text_atts_neg], dim=0) - - image_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0) - image_atts_all = torch.cat([image_atts, image_atts], dim=0) - - output_neg = self.text_encoder( - text_ids_all, - attention_mask=text_atts_all, - encoder_hidden_states=image_embeds_all, - encoder_attention_mask=image_atts_all, - return_dict=True, - ) - - vl_embeddings = torch.cat( - [ - output_pos.last_hidden_state[:, 0, :], - output_neg.last_hidden_state[:, 0, :], - ], - dim=0, - ) - itm_logits = self.itm_head(vl_embeddings) - - itm_labels = torch.cat( - [torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)], - dim=0, - ).to(self.device) - loss_itm = F.cross_entropy(itm_logits, itm_labels) - - return BlipOutput( - loss=loss_itc + loss_itm, - loss_itc=loss_itc, - loss_itm=loss_itm, - sims=BlipSimilarity( - sim_i2t=sim_i2t, - sim_t2i=sim_t2i, - sim_i2t_m=sim_i2t_m, - sim_t2i_m=sim_t2i_m, - sim_i2t_targets=sim_i2t_targets, - sim_t2i_targets=sim_t2i_targets, - ), - intermediate_output=BlipIntermediateOutput( - image_embeds=image_embeds, - image_embeds_m=image_embeds_m, - text_embeds=text_embeds, - text_embeds_m=text_embeds_m, - encoder_output=output_pos, - encoder_output_neg=output_neg, - itm_logits=itm_logits, - itm_labels=itm_labels, - ), - ) - - def reset_queue_ptr(self): - self.queue_ptr = torch.zeros(1, dtype=torch.long) - 
- @classmethod - def from_config(cls, cfg=None): - # set from_pretrained=True to load weights for 'bert-base-uncased' - image_encoder = VisionTransformerEncoder.from_config(cfg) - text_encoder = XBertEncoder.from_config(cfg) - - embed_dim = cfg.get("embed_dim", 256) - momentum = cfg.get("momentum", 0.995) - alpha = cfg.get("alpha", 0.4) - negative_all_rank = cfg.get("negative_all_rank", False) - - queue_size = cfg.get("queue_size", 0) - max_txt_len = cfg.get("max_txt_len", 35) - - model = cls( - image_encoder=image_encoder, - text_encoder=text_encoder, - queue_size=queue_size, - alpha=alpha, - embed_dim=embed_dim, - momentum=momentum, - negative_all_rank=negative_all_rank, - max_txt_len=max_txt_len, - ) - - model.load_checkpoint_from_config(cfg) - model.reset_queue_ptr() - - return model - - def compute_sim_matrix(self, data_loader, task_cfg): - """ - Compute similarity i2t, t2i matrix for the given data loader. - """ - k_test = task_cfg.k_test - - return compute_sim_matrix(model=self, data_loader=data_loader, k_test=k_test) diff --git a/spaces/Smotto/Vocal-Isolator/src/Sound_Feature_Extraction/short_time_fourier_transform.py b/spaces/Smotto/Vocal-Isolator/src/Sound_Feature_Extraction/short_time_fourier_transform.py deleted file mode 100644 index b782e726cd4393426bd15d42e53646b9b5144500..0000000000000000000000000000000000000000 --- a/spaces/Smotto/Vocal-Isolator/src/Sound_Feature_Extraction/short_time_fourier_transform.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch - - -class STFT: - def __init__(self, n_fft, hop_length, dim_f): - self.n_fft = n_fft - self.hop_length = hop_length - self.window = torch.hann_window(window_length=n_fft, periodic=True) - self.dim_f = dim_f - - def __call__(self, x): - window = self.window.to(x.device) - batch_dims = x.shape[:-2] - c, t = x.shape[-2:] - x = x.reshape([-1, t]) - x = torch.stft( - x, - n_fft=self.n_fft, - hop_length=self.hop_length, - window=window, - center=True, - return_complex=True, - ) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([*batch_dims, c, 2, -1, x.shape[-1]]).reshape( - [*batch_dims, c * 2, -1, x.shape[-1]] - ) - return x[..., : self.dim_f, :] - - def inverse(self, x): - window = self.window.to(x.device) - batch_dims = x.shape[:-3] - c, f, t = x.shape[-3:] - n = self.n_fft // 2 + 1 - f_pad = torch.zeros([*batch_dims, c, n - f, t]).to(x.device) - x = torch.cat([x, f_pad], -2) - x = x.reshape([*batch_dims, c // 2, 2, n, t]).reshape([-1, 2, n, t]) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - t_complex = torch.view_as_complex(x) - x = torch.istft( - t_complex, - n_fft=self.n_fft, - hop_length=self.hop_length, - window=window, - center=True, - ) - x = x.reshape([*batch_dims, 2, -1]) - return x diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiofiles/ospath.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiofiles/ospath.py deleted file mode 100644 index a0a60f7acba19c5af4fcb20d13f65e564ca72820..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiofiles/ospath.py +++ /dev/null @@ -1,15 +0,0 @@ -"""Async executor versions of file functions from the os.path module.""" - -from .os import wrap -from os import path - -exists = wrap(path.exists) -isfile = wrap(path.isfile) -isdir = wrap(path.isdir) -islink = wrap(path.islink) -getsize = wrap(path.getsize) -getmtime = wrap(path.getmtime) -getatime = wrap(path.getatime) -getctime = wrap(path.getctime) -samefile = wrap(path.samefile) -sameopenfile = wrap(path.sameopenfile) 
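The `aiofiles/ospath.py` module deleted just above only delegates the blocking `os.path` helpers to an executor, so each wrapped function returns a coroutine and must be awaited. A minimal usage sketch (the file name `data.csv` is purely illustrative):

```python
import asyncio
from aiofiles import ospath  # the async wrappers defined in the module above

async def main():
    # each wrapper returns a coroutine; await it inside a running event loop
    if await ospath.exists("data.csv"):
        size = await ospath.getsize("data.csv")
        print(f"data.csv is {size} bytes")

asyncio.run(main())
```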
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm_constants.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm_constants.py deleted file mode 100644 index ad05a3250747f95a5c6bd06bc2c4ebc6184ca299..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm_constants.py +++ /dev/null @@ -1,208 +0,0 @@ -CMD_RUN = 101 -CMD_LIST_THREADS = 102 -CMD_THREAD_CREATE = 103 -CMD_THREAD_KILL = 104 -CMD_THREAD_SUSPEND = 105 -CMD_THREAD_RUN = 106 -CMD_STEP_INTO = 107 -CMD_STEP_OVER = 108 -CMD_STEP_RETURN = 109 -CMD_GET_VARIABLE = 110 -CMD_SET_BREAK = 111 -CMD_REMOVE_BREAK = 112 -CMD_EVALUATE_EXPRESSION = 113 -CMD_GET_FRAME = 114 -CMD_EXEC_EXPRESSION = 115 -CMD_WRITE_TO_CONSOLE = 116 -CMD_CHANGE_VARIABLE = 117 -CMD_RUN_TO_LINE = 118 -CMD_RELOAD_CODE = 119 -CMD_GET_COMPLETIONS = 120 - -# Note: renumbered (conflicted on merge) -CMD_CONSOLE_EXEC = 121 -CMD_ADD_EXCEPTION_BREAK = 122 -CMD_REMOVE_EXCEPTION_BREAK = 123 -CMD_LOAD_SOURCE = 124 -CMD_ADD_DJANGO_EXCEPTION_BREAK = 125 -CMD_REMOVE_DJANGO_EXCEPTION_BREAK = 126 -CMD_SET_NEXT_STATEMENT = 127 -CMD_SMART_STEP_INTO = 128 -CMD_EXIT = 129 -CMD_SIGNATURE_CALL_TRACE = 130 - -CMD_SET_PY_EXCEPTION = 131 -CMD_GET_FILE_CONTENTS = 132 -CMD_SET_PROPERTY_TRACE = 133 -# Pydev debug console commands -CMD_EVALUATE_CONSOLE_EXPRESSION = 134 -CMD_RUN_CUSTOM_OPERATION = 135 -CMD_GET_BREAKPOINT_EXCEPTION = 136 -CMD_STEP_CAUGHT_EXCEPTION = 137 -CMD_SEND_CURR_EXCEPTION_TRACE = 138 -CMD_SEND_CURR_EXCEPTION_TRACE_PROCEEDED = 139 -CMD_IGNORE_THROWN_EXCEPTION_AT = 140 -CMD_ENABLE_DONT_TRACE = 141 -CMD_SHOW_CONSOLE = 142 - -CMD_GET_ARRAY = 143 -CMD_STEP_INTO_MY_CODE = 144 -CMD_GET_CONCURRENCY_EVENT = 145 -CMD_SHOW_RETURN_VALUES = 146 -CMD_INPUT_REQUESTED = 147 -CMD_GET_DESCRIPTION = 148 - -CMD_PROCESS_CREATED = 149 -CMD_SHOW_CYTHON_WARNING = 150 -CMD_LOAD_FULL_VALUE = 151 - -CMD_GET_THREAD_STACK = 152 - -# This is mostly for unit-tests to diagnose errors on ci. -CMD_THREAD_DUMP_TO_STDERR = 153 - -# Sent from the client to signal that we should stop when we start executing user code. -CMD_STOP_ON_START = 154 - -# When the debugger is stopped in an exception, this command will provide the details of the current exception (in the current thread). -CMD_GET_EXCEPTION_DETAILS = 155 - -# Allows configuring pydevd settings (can be called multiple times and only keys -# available in the json will be configured -- keys not passed will not change the -# previous configuration). -CMD_PYDEVD_JSON_CONFIG = 156 - -CMD_THREAD_SUSPEND_SINGLE_NOTIFICATION = 157 -CMD_THREAD_RESUME_SINGLE_NOTIFICATION = 158 - -CMD_STEP_OVER_MY_CODE = 159 -CMD_STEP_RETURN_MY_CODE = 160 - -CMD_SET_PY_EXCEPTION_JSON = 161 -CMD_SET_PATH_MAPPING_JSON = 162 - -CMD_GET_SMART_STEP_INTO_VARIANTS = 163 # XXX: PyCharm has 160 for this (we're currently incompatible anyways). 
- -CMD_REDIRECT_OUTPUT = 200 -CMD_GET_NEXT_STATEMENT_TARGETS = 201 -CMD_SET_PROJECT_ROOTS = 202 - -CMD_MODULE_EVENT = 203 -CMD_PROCESS_EVENT = 204 - -CMD_AUTHENTICATE = 205 - -CMD_STEP_INTO_COROUTINE = 206 - -CMD_LOAD_SOURCE_FROM_FRAME_ID = 207 - -CMD_SET_FUNCTION_BREAK = 208 - -CMD_VERSION = 501 -CMD_RETURN = 502 -CMD_SET_PROTOCOL = 503 -CMD_ERROR = 901 - -# this number can be changed if there's need to do so -# if the io is too big, we'll not send all (could make the debugger too non-responsive) -MAX_IO_MSG_SIZE = 10000 - -VERSION_STRING = "@@BUILD_NUMBER@@" - -from _pydev_bundle._pydev_filesystem_encoding import getfilesystemencoding -file_system_encoding = getfilesystemencoding() -filesystem_encoding_is_utf8 = file_system_encoding.lower() in ('utf-8', 'utf_8', 'utf8') - -ID_TO_MEANING = { - '101': 'CMD_RUN', - '102': 'CMD_LIST_THREADS', - '103': 'CMD_THREAD_CREATE', - '104': 'CMD_THREAD_KILL', - '105': 'CMD_THREAD_SUSPEND', - '106': 'CMD_THREAD_RUN', - '107': 'CMD_STEP_INTO', - '108': 'CMD_STEP_OVER', - '109': 'CMD_STEP_RETURN', - '110': 'CMD_GET_VARIABLE', - '111': 'CMD_SET_BREAK', - '112': 'CMD_REMOVE_BREAK', - '113': 'CMD_EVALUATE_EXPRESSION', - '114': 'CMD_GET_FRAME', - '115': 'CMD_EXEC_EXPRESSION', - '116': 'CMD_WRITE_TO_CONSOLE', - '117': 'CMD_CHANGE_VARIABLE', - '118': 'CMD_RUN_TO_LINE', - '119': 'CMD_RELOAD_CODE', - '120': 'CMD_GET_COMPLETIONS', - '121': 'CMD_CONSOLE_EXEC', - '122': 'CMD_ADD_EXCEPTION_BREAK', - '123': 'CMD_REMOVE_EXCEPTION_BREAK', - '124': 'CMD_LOAD_SOURCE', - '125': 'CMD_ADD_DJANGO_EXCEPTION_BREAK', - '126': 'CMD_REMOVE_DJANGO_EXCEPTION_BREAK', - '127': 'CMD_SET_NEXT_STATEMENT', - '128': 'CMD_SMART_STEP_INTO', - '129': 'CMD_EXIT', - '130': 'CMD_SIGNATURE_CALL_TRACE', - - '131': 'CMD_SET_PY_EXCEPTION', - '132': 'CMD_GET_FILE_CONTENTS', - '133': 'CMD_SET_PROPERTY_TRACE', - '134': 'CMD_EVALUATE_CONSOLE_EXPRESSION', - '135': 'CMD_RUN_CUSTOM_OPERATION', - '136': 'CMD_GET_BREAKPOINT_EXCEPTION', - '137': 'CMD_STEP_CAUGHT_EXCEPTION', - '138': 'CMD_SEND_CURR_EXCEPTION_TRACE', - '139': 'CMD_SEND_CURR_EXCEPTION_TRACE_PROCEEDED', - '140': 'CMD_IGNORE_THROWN_EXCEPTION_AT', - '141': 'CMD_ENABLE_DONT_TRACE', - '142': 'CMD_SHOW_CONSOLE', - '143': 'CMD_GET_ARRAY', - '144': 'CMD_STEP_INTO_MY_CODE', - '145': 'CMD_GET_CONCURRENCY_EVENT', - '146': 'CMD_SHOW_RETURN_VALUES', - '147': 'CMD_INPUT_REQUESTED', - '148': 'CMD_GET_DESCRIPTION', - - '149': 'CMD_PROCESS_CREATED', # Note: this is actually a notification of a sub-process created. - '150': 'CMD_SHOW_CYTHON_WARNING', - '151': 'CMD_LOAD_FULL_VALUE', - '152': 'CMD_GET_THREAD_STACK', - '153': 'CMD_THREAD_DUMP_TO_STDERR', - '154': 'CMD_STOP_ON_START', - '155': 'CMD_GET_EXCEPTION_DETAILS', - '156': 'CMD_PYDEVD_JSON_CONFIG', - '157': 'CMD_THREAD_SUSPEND_SINGLE_NOTIFICATION', - '158': 'CMD_THREAD_RESUME_SINGLE_NOTIFICATION', - - '159': 'CMD_STEP_OVER_MY_CODE', - '160': 'CMD_STEP_RETURN_MY_CODE', - - '161': 'CMD_SET_PY_EXCEPTION_JSON', - '162': 'CMD_SET_PATH_MAPPING_JSON', - '163': 'CMD_GET_SMART_STEP_INTO_VARIANTS', - - '200': 'CMD_REDIRECT_OUTPUT', - '201': 'CMD_GET_NEXT_STATEMENT_TARGETS', - '202': 'CMD_SET_PROJECT_ROOTS', - '203': 'CMD_MODULE_EVENT', - '204': 'CMD_PROCESS_EVENT', # DAP process event. 
- - '205': 'CMD_AUTHENTICATE', - - '206': 'CMD_STEP_INTO_COROUTINE', - - '207': 'CMD_LOAD_SOURCE_FROM_FRAME_ID', - - '501': 'CMD_VERSION', - '502': 'CMD_RETURN', - '503': 'CMD_SET_PROTOCOL', - '901': 'CMD_ERROR', -} - - -def constant_to_str(constant): - s = ID_TO_MEANING.get(str(constant)) - if not s: - s = '' % (constant,) - return s diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/enc_head.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/enc_head.py deleted file mode 100644 index da57af617e05d41761628fd2d6d232655b32d905..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/decode_heads/enc_head.py +++ /dev/null @@ -1,187 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, build_norm_layer - -from annotator.uniformer.mmseg.ops import Encoding, resize -from ..builder import HEADS, build_loss -from .decode_head import BaseDecodeHead - - -class EncModule(nn.Module): - """Encoding Module used in EncNet. - - Args: - in_channels (int): Input channels. - num_codes (int): Number of code words. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - """ - - def __init__(self, in_channels, num_codes, conv_cfg, norm_cfg, act_cfg): - super(EncModule, self).__init__() - self.encoding_project = ConvModule( - in_channels, - in_channels, - 1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - # TODO: resolve this hack - # change to 1d - if norm_cfg is not None: - encoding_norm_cfg = norm_cfg.copy() - if encoding_norm_cfg['type'] in ['BN', 'IN']: - encoding_norm_cfg['type'] += '1d' - else: - encoding_norm_cfg['type'] = encoding_norm_cfg['type'].replace( - '2d', '1d') - else: - # fallback to BN1d - encoding_norm_cfg = dict(type='BN1d') - self.encoding = nn.Sequential( - Encoding(channels=in_channels, num_codes=num_codes), - build_norm_layer(encoding_norm_cfg, num_codes)[1], - nn.ReLU(inplace=True)) - self.fc = nn.Sequential( - nn.Linear(in_channels, in_channels), nn.Sigmoid()) - - def forward(self, x): - """Forward function.""" - encoding_projection = self.encoding_project(x) - encoding_feat = self.encoding(encoding_projection).mean(dim=1) - batch_size, channels, _, _ = x.size() - gamma = self.fc(encoding_feat) - y = gamma.view(batch_size, channels, 1, 1) - output = F.relu_(x + x * y) - return encoding_feat, output - - -@HEADS.register_module() -class EncHead(BaseDecodeHead): - """Context Encoding for Semantic Segmentation. - - This head is the implementation of `EncNet - `_. - - Args: - num_codes (int): Number of code words. Default: 32. - use_se_loss (bool): Whether use Semantic Encoding Loss (SE-loss) to - regularize the training. Default: True. - add_lateral (bool): Whether use lateral connection to fuse features. - Default: False. - loss_se_decode (dict): Config of decode loss. - Default: dict(type='CrossEntropyLoss', use_sigmoid=True). 
- """ - - def __init__(self, - num_codes=32, - use_se_loss=True, - add_lateral=False, - loss_se_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=0.2), - **kwargs): - super(EncHead, self).__init__( - input_transform='multiple_select', **kwargs) - self.use_se_loss = use_se_loss - self.add_lateral = add_lateral - self.num_codes = num_codes - self.bottleneck = ConvModule( - self.in_channels[-1], - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if add_lateral: - self.lateral_convs = nn.ModuleList() - for in_channels in self.in_channels[:-1]: # skip the last one - self.lateral_convs.append( - ConvModule( - in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - self.fusion = ConvModule( - len(self.in_channels) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.enc_module = EncModule( - self.channels, - num_codes=num_codes, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if self.use_se_loss: - self.loss_se_decode = build_loss(loss_se_decode) - self.se_layer = nn.Linear(self.channels, self.num_classes) - - def forward(self, inputs): - """Forward function.""" - inputs = self._transform_inputs(inputs) - feat = self.bottleneck(inputs[-1]) - if self.add_lateral: - laterals = [ - resize( - lateral_conv(inputs[i]), - size=feat.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - for i, lateral_conv in enumerate(self.lateral_convs) - ] - feat = self.fusion(torch.cat([feat, *laterals], 1)) - encode_feat, output = self.enc_module(feat) - output = self.cls_seg(output) - if self.use_se_loss: - se_output = self.se_layer(encode_feat) - return output, se_output - else: - return output - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, ignore se_loss.""" - if self.use_se_loss: - return self.forward(inputs)[0] - else: - return self.forward(inputs) - - @staticmethod - def _convert_to_onehot_labels(seg_label, num_classes): - """Convert segmentation label to onehot. - - Args: - seg_label (Tensor): Segmentation label of shape (N, H, W). - num_classes (int): Number of classes. - - Returns: - Tensor: Onehot labels of shape (N, num_classes). 
- """ - - batch_size = seg_label.size(0) - onehot_labels = seg_label.new_zeros((batch_size, num_classes)) - for i in range(batch_size): - hist = seg_label[i].float().histc( - bins=num_classes, min=0, max=num_classes - 1) - onehot_labels[i] = hist > 0 - return onehot_labels - - def losses(self, seg_logit, seg_label): - """Compute segmentation and semantic encoding loss.""" - seg_logit, se_seg_logit = seg_logit - loss = dict() - loss.update(super(EncHead, self).losses(seg_logit, seg_label)) - se_loss = self.loss_se_decode( - se_seg_logit, - self._convert_to_onehot_labels(seg_label, self.num_classes)) - loss['loss_se'] = se_loss - return loss diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/packaging/markers.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/packaging/markers.py deleted file mode 100644 index 540e7a4dc79d02a820e291b57c43335d5aa25a41..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/packaging/markers.py +++ /dev/null @@ -1,304 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import operator -import os -import platform -import sys -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -from pip._vendor.pyparsing import ( # noqa: N817 - Forward, - Group, - Literal as L, - ParseException, - ParseResults, - QuotedString, - ZeroOrMore, - stringEnd, - stringStart, -) - -from .specifiers import InvalidSpecifier, Specifier - -__all__ = [ - "InvalidMarker", - "UndefinedComparison", - "UndefinedEnvironmentName", - "Marker", - "default_environment", -] - -Operator = Callable[[str, str], bool] - - -class InvalidMarker(ValueError): - """ - An invalid marker was found, users should refer to PEP 508. - """ - - -class UndefinedComparison(ValueError): - """ - An invalid operation was attempted on a value that doesn't support it. - """ - - -class UndefinedEnvironmentName(ValueError): - """ - A name was attempted to be used that does not exist inside of the - environment. 
- """ - - -class Node: - def __init__(self, value: Any) -> None: - self.value = value - - def __str__(self) -> str: - return str(self.value) - - def __repr__(self) -> str: - return f"<{self.__class__.__name__}('{self}')>" - - def serialize(self) -> str: - raise NotImplementedError - - -class Variable(Node): - def serialize(self) -> str: - return str(self) - - -class Value(Node): - def serialize(self) -> str: - return f'"{self}"' - - -class Op(Node): - def serialize(self) -> str: - return str(self) - - -VARIABLE = ( - L("implementation_version") - | L("platform_python_implementation") - | L("implementation_name") - | L("python_full_version") - | L("platform_release") - | L("platform_version") - | L("platform_machine") - | L("platform_system") - | L("python_version") - | L("sys_platform") - | L("os_name") - | L("os.name") # PEP-345 - | L("sys.platform") # PEP-345 - | L("platform.version") # PEP-345 - | L("platform.machine") # PEP-345 - | L("platform.python_implementation") # PEP-345 - | L("python_implementation") # undocumented setuptools legacy - | L("extra") # PEP-508 -) -ALIASES = { - "os.name": "os_name", - "sys.platform": "sys_platform", - "platform.version": "platform_version", - "platform.machine": "platform_machine", - "platform.python_implementation": "platform_python_implementation", - "python_implementation": "platform_python_implementation", -} -VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0]))) - -VERSION_CMP = ( - L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<") -) - -MARKER_OP = VERSION_CMP | L("not in") | L("in") -MARKER_OP.setParseAction(lambda s, l, t: Op(t[0])) - -MARKER_VALUE = QuotedString("'") | QuotedString('"') -MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0])) - -BOOLOP = L("and") | L("or") - -MARKER_VAR = VARIABLE | MARKER_VALUE - -MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR) -MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0])) - -LPAREN = L("(").suppress() -RPAREN = L(")").suppress() - -MARKER_EXPR = Forward() -MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN) -MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR) - -MARKER = stringStart + MARKER_EXPR + stringEnd - - -def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]: - if isinstance(results, ParseResults): - return [_coerce_parse_result(i) for i in results] - else: - return results - - -def _format_marker( - marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True -) -> str: - - assert isinstance(marker, (list, tuple, str)) - - # Sometimes we have a structure like [[...]] which is a single item list - # where the single item is itself it's own list. In that case we want skip - # the rest of this function so that we don't get extraneous () on the - # outside. 
- if ( - isinstance(marker, list) - and len(marker) == 1 - and isinstance(marker[0], (list, tuple)) - ): - return _format_marker(marker[0]) - - if isinstance(marker, list): - inner = (_format_marker(m, first=False) for m in marker) - if first: - return " ".join(inner) - else: - return "(" + " ".join(inner) + ")" - elif isinstance(marker, tuple): - return " ".join([m.serialize() for m in marker]) - else: - return marker - - -_operators: Dict[str, Operator] = { - "in": lambda lhs, rhs: lhs in rhs, - "not in": lambda lhs, rhs: lhs not in rhs, - "<": operator.lt, - "<=": operator.le, - "==": operator.eq, - "!=": operator.ne, - ">=": operator.ge, - ">": operator.gt, -} - - -def _eval_op(lhs: str, op: Op, rhs: str) -> bool: - try: - spec = Specifier("".join([op.serialize(), rhs])) - except InvalidSpecifier: - pass - else: - return spec.contains(lhs) - - oper: Optional[Operator] = _operators.get(op.serialize()) - if oper is None: - raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.") - - return oper(lhs, rhs) - - -class Undefined: - pass - - -_undefined = Undefined() - - -def _get_env(environment: Dict[str, str], name: str) -> str: - value: Union[str, Undefined] = environment.get(name, _undefined) - - if isinstance(value, Undefined): - raise UndefinedEnvironmentName( - f"{name!r} does not exist in evaluation environment." - ) - - return value - - -def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool: - groups: List[List[bool]] = [[]] - - for marker in markers: - assert isinstance(marker, (list, tuple, str)) - - if isinstance(marker, list): - groups[-1].append(_evaluate_markers(marker, environment)) - elif isinstance(marker, tuple): - lhs, op, rhs = marker - - if isinstance(lhs, Variable): - lhs_value = _get_env(environment, lhs.value) - rhs_value = rhs.value - else: - lhs_value = lhs.value - rhs_value = _get_env(environment, rhs.value) - - groups[-1].append(_eval_op(lhs_value, op, rhs_value)) - else: - assert marker in ["and", "or"] - if marker == "or": - groups.append([]) - - return any(all(item) for item in groups) - - -def format_full_version(info: "sys._version_info") -> str: - version = "{0.major}.{0.minor}.{0.micro}".format(info) - kind = info.releaselevel - if kind != "final": - version += kind[0] + str(info.serial) - return version - - -def default_environment() -> Dict[str, str]: - iver = format_full_version(sys.implementation.version) - implementation_name = sys.implementation.name - return { - "implementation_name": implementation_name, - "implementation_version": iver, - "os_name": os.name, - "platform_machine": platform.machine(), - "platform_release": platform.release(), - "platform_system": platform.system(), - "platform_version": platform.version(), - "python_full_version": platform.python_version(), - "platform_python_implementation": platform.python_implementation(), - "python_version": ".".join(platform.python_version_tuple()[:2]), - "sys_platform": sys.platform, - } - - -class Marker: - def __init__(self, marker: str) -> None: - try: - self._markers = _coerce_parse_result(MARKER.parseString(marker)) - except ParseException as e: - raise InvalidMarker( - f"Invalid marker: {marker!r}, parse error at " - f"{marker[e.loc : e.loc + 8]!r}" - ) - - def __str__(self) -> str: - return _format_marker(self._markers) - - def __repr__(self) -> str: - return f"" - - def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool: - """Evaluate a marker. - - Return the boolean from evaluating the given marker against the - environment. 
environment is an optional argument to override all or - part of the determined environment. - - The environment is determined from the current Python process. - """ - current_environment = default_environment() - if environment is not None: - current_environment.update(environment) - - return _evaluate_markers(self._markers, current_environment) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/convert.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/convert.py deleted file mode 100644 index 1ce9b5f3c16adcd07672d5dbddcff9f44f4b82a7..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/convert.py +++ /dev/null @@ -1,273 +0,0 @@ -from __future__ import annotations - -import os.path -import re -import shutil -import tempfile -import zipfile -from glob import iglob - -from ..bdist_wheel import bdist_wheel -from ..wheelfile import WheelFile -from . import WheelError - -try: - from setuptools import Distribution -except ImportError: - from distutils.dist import Distribution - -egg_info_re = re.compile( - r""" - (?P.+?)-(?P.+?) - (-(?Ppy\d\.\d+) - (-(?P.+?))? - )?.egg$""", - re.VERBOSE, -) - - -class _bdist_wheel_tag(bdist_wheel): - # allow the client to override the default generated wheel tag - # The default bdist_wheel implementation uses python and abi tags - # of the running python process. This is not suitable for - # generating/repackaging prebuild binaries. - - full_tag_supplied = False - full_tag = None # None or a (pytag, soabitag, plattag) triple - - def get_tag(self): - if self.full_tag_supplied and self.full_tag is not None: - return self.full_tag - else: - return bdist_wheel.get_tag(self) - - -def egg2wheel(egg_path: str, dest_dir: str): - filename = os.path.basename(egg_path) - match = egg_info_re.match(filename) - if not match: - raise WheelError(f"Invalid egg file name: {filename}") - - egg_info = match.groupdict() - dir = tempfile.mkdtemp(suffix="_e2w") - if os.path.isfile(egg_path): - # assume we have a bdist_egg otherwise - with zipfile.ZipFile(egg_path) as egg: - egg.extractall(dir) - else: - # support buildout-style installed eggs directories - for pth in os.listdir(egg_path): - src = os.path.join(egg_path, pth) - if os.path.isfile(src): - shutil.copy2(src, dir) - else: - shutil.copytree(src, os.path.join(dir, pth)) - - pyver = egg_info["pyver"] - if pyver: - pyver = egg_info["pyver"] = pyver.replace(".", "") - - arch = (egg_info["arch"] or "any").replace(".", "_").replace("-", "_") - - # assume all binary eggs are for CPython - abi = "cp" + pyver[2:] if arch != "any" else "none" - - root_is_purelib = egg_info["arch"] is None - if root_is_purelib: - bw = bdist_wheel(Distribution()) - else: - bw = _bdist_wheel_tag(Distribution()) - - bw.root_is_pure = root_is_purelib - bw.python_tag = pyver - bw.plat_name_supplied = True - bw.plat_name = egg_info["arch"] or "any" - if not root_is_purelib: - bw.full_tag_supplied = True - bw.full_tag = (pyver, abi, arch) - - dist_info_dir = os.path.join(dir, "{name}-{ver}.dist-info".format(**egg_info)) - bw.egg2dist(os.path.join(dir, "EGG-INFO"), dist_info_dir) - bw.write_wheelfile(dist_info_dir, generator="egg2wheel") - wheel_name = "{name}-{ver}-{pyver}-{}-{}.whl".format(abi, arch, **egg_info) - with WheelFile(os.path.join(dest_dir, wheel_name), "w") as wf: - wf.write_files(dir) - - shutil.rmtree(dir) - - -def parse_wininst_info(wininfo_name, egginfo_name): - """Extract metadata from 
filenames. - - Extracts the 4 metadataitems needed (name, version, pyversion, arch) from - the installer filename and the name of the egg-info directory embedded in - the zipfile (if any). - - The egginfo filename has the format:: - - name-ver(-pyver)(-arch).egg-info - - The installer filename has the format:: - - name-ver.arch(-pyver).exe - - Some things to note: - - 1. The installer filename is not definitive. An installer can be renamed - and work perfectly well as an installer. So more reliable data should - be used whenever possible. - 2. The egg-info data should be preferred for the name and version, because - these come straight from the distutils metadata, and are mandatory. - 3. The pyver from the egg-info data should be ignored, as it is - constructed from the version of Python used to build the installer, - which is irrelevant - the installer filename is correct here (even to - the point that when it's not there, any version is implied). - 4. The architecture must be taken from the installer filename, as it is - not included in the egg-info data. - 5. Architecture-neutral installers still have an architecture because the - installer format itself (being executable) is architecture-specific. We - should therefore ignore the architecture if the content is pure-python. - """ - - egginfo = None - if egginfo_name: - egginfo = egg_info_re.search(egginfo_name) - if not egginfo: - raise ValueError(f"Egg info filename {egginfo_name} is not valid") - - # Parse the wininst filename - # 1. Distribution name (up to the first '-') - w_name, sep, rest = wininfo_name.partition("-") - if not sep: - raise ValueError(f"Installer filename {wininfo_name} is not valid") - - # Strip '.exe' - rest = rest[:-4] - # 2. Python version (from the last '-', must start with 'py') - rest2, sep, w_pyver = rest.rpartition("-") - if sep and w_pyver.startswith("py"): - rest = rest2 - w_pyver = w_pyver.replace(".", "") - else: - # Not version specific - use py2.py3. While it is possible that - # pure-Python code is not compatible with both Python 2 and 3, there - # is no way of knowing from the wininst format, so we assume the best - # here (the user can always manually rename the wheel to be more - # restrictive if needed). - w_pyver = "py2.py3" - # 3. Version and architecture - w_ver, sep, w_arch = rest.rpartition(".") - if not sep: - raise ValueError(f"Installer filename {wininfo_name} is not valid") - - if egginfo: - w_name = egginfo.group("name") - w_ver = egginfo.group("ver") - - return {"name": w_name, "ver": w_ver, "arch": w_arch, "pyver": w_pyver} - - -def wininst2wheel(path, dest_dir): - with zipfile.ZipFile(path) as bdw: - # Search for egg-info in the archive - egginfo_name = None - for filename in bdw.namelist(): - if ".egg-info" in filename: - egginfo_name = filename - break - - info = parse_wininst_info(os.path.basename(path), egginfo_name) - - root_is_purelib = True - for zipinfo in bdw.infolist(): - if zipinfo.filename.startswith("PLATLIB"): - root_is_purelib = False - break - if root_is_purelib: - paths = {"purelib": ""} - else: - paths = {"platlib": ""} - - dist_info = "{name}-{ver}".format(**info) - datadir = "%s.data/" % dist_info - - # rewrite paths to trick ZipFile into extracting an egg - # XXX grab wininst .ini - between .exe, padding, and first zip file. 
- members = [] - egginfo_name = "" - for zipinfo in bdw.infolist(): - key, basename = zipinfo.filename.split("/", 1) - key = key.lower() - basepath = paths.get(key, None) - if basepath is None: - basepath = datadir + key.lower() + "/" - oldname = zipinfo.filename - newname = basepath + basename - zipinfo.filename = newname - del bdw.NameToInfo[oldname] - bdw.NameToInfo[newname] = zipinfo - # Collect member names, but omit '' (from an entry like "PLATLIB/" - if newname: - members.append(newname) - # Remember egg-info name for the egg2dist call below - if not egginfo_name: - if newname.endswith(".egg-info"): - egginfo_name = newname - elif ".egg-info/" in newname: - egginfo_name, sep, _ = newname.rpartition("/") - dir = tempfile.mkdtemp(suffix="_b2w") - bdw.extractall(dir, members) - - # egg2wheel - abi = "none" - pyver = info["pyver"] - arch = (info["arch"] or "any").replace(".", "_").replace("-", "_") - # Wininst installers always have arch even if they are not - # architecture-specific (because the format itself is). - # So, assume the content is architecture-neutral if root is purelib. - if root_is_purelib: - arch = "any" - # If the installer is architecture-specific, it's almost certainly also - # CPython-specific. - if arch != "any": - pyver = pyver.replace("py", "cp") - wheel_name = "-".join((dist_info, pyver, abi, arch)) - if root_is_purelib: - bw = bdist_wheel(Distribution()) - else: - bw = _bdist_wheel_tag(Distribution()) - - bw.root_is_pure = root_is_purelib - bw.python_tag = pyver - bw.plat_name_supplied = True - bw.plat_name = info["arch"] or "any" - - if not root_is_purelib: - bw.full_tag_supplied = True - bw.full_tag = (pyver, abi, arch) - - dist_info_dir = os.path.join(dir, "%s.dist-info" % dist_info) - bw.egg2dist(os.path.join(dir, egginfo_name), dist_info_dir) - bw.write_wheelfile(dist_info_dir, generator="wininst2wheel") - - wheel_path = os.path.join(dest_dir, wheel_name) - with WheelFile(wheel_path, "w") as wf: - wf.write_files(dir) - - shutil.rmtree(dir) - - -def convert(files, dest_dir, verbose): - for pat in files: - for installer in iglob(pat): - if os.path.splitext(installer)[1] == ".egg": - conv = egg2wheel - else: - conv = wininst2wheel - - if verbose: - print(f"{installer}... 
", flush=True) - - conv(installer, dest_dir) - if verbose: - print("OK") diff --git a/spaces/TandCAcceptMe/face-swap-docker/plugins/codeformer_face_helper_cv2.py b/spaces/TandCAcceptMe/face-swap-docker/plugins/codeformer_face_helper_cv2.py deleted file mode 100644 index 3e849f3b282a57f09a594360779100e2a2ba98e3..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/plugins/codeformer_face_helper_cv2.py +++ /dev/null @@ -1,94 +0,0 @@ -from codeformer.facelib.utils.face_restoration_helper import FaceRestoreHelper - -import numpy as np -from codeformer.basicsr.utils.misc import get_device - -class FaceRestoreHelperOptimized(FaceRestoreHelper): - def __init__( - self, - upscale_factor, - face_size=512, - crop_ratio=(1, 1), - det_model="retinaface_resnet50", - save_ext="png", - template_3points=False, - pad_blur=False, - use_parse=False, - device=None, - ): - self.template_3points = template_3points # improve robustness - self.upscale_factor = int(upscale_factor) - # the cropped face ratio based on the square face - self.crop_ratio = crop_ratio # (h, w) - assert self.crop_ratio[0] >= 1 and self.crop_ratio[1] >= 1, "crop ration only supports >=1" - self.face_size = (int(face_size * self.crop_ratio[1]), int(face_size * self.crop_ratio[0])) - self.det_model = det_model - - if self.det_model == "dlib": - # standard 5 landmarks for FFHQ faces with 1024 x 1024 - self.face_template = np.array( - [ - [686.77227723, 488.62376238], - [586.77227723, 493.59405941], - [337.91089109, 488.38613861], - [437.95049505, 493.51485149], - [513.58415842, 678.5049505], - ] - ) - self.face_template = self.face_template / (1024 // face_size) - elif self.template_3points: - self.face_template = np.array([[192, 240], [319, 240], [257, 371]]) - else: - # standard 5 landmarks for FFHQ faces with 512 x 512 - # facexlib - self.face_template = np.array( - [ - [192.98138, 239.94708], - [318.90277, 240.1936], - [256.63416, 314.01935], - [201.26117, 371.41043], - [313.08905, 371.15118], - ] - ) - - # dlib: left_eye: 36:41 right_eye: 42:47 nose: 30,32,33,34 left mouth corner: 48 right mouth corner: 54 - # self.face_template = np.array([[193.65928, 242.98541], [318.32558, 243.06108], [255.67984, 328.82894], - # [198.22603, 372.82502], [313.91018, 372.75659]]) - - self.face_template = self.face_template * (face_size / 512.0) - if self.crop_ratio[0] > 1: - self.face_template[:, 1] += face_size * (self.crop_ratio[0] - 1) / 2 - if self.crop_ratio[1] > 1: - self.face_template[:, 0] += face_size * (self.crop_ratio[1] - 1) / 2 - self.save_ext = save_ext - self.pad_blur = pad_blur - if self.pad_blur is True: - self.template_3points = False - - self.all_landmarks_5 = [] - self.det_faces = [] - self.affine_matrices = [] - self.inverse_affine_matrices = [] - self.cropped_faces = [] - self.restored_faces = [] - self.pad_input_imgs = [] - - if device is None: - # self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.device = get_device() - else: - self.device = device - - # init face detection model - # if self.det_model == "dlib": - # self.face_detector, self.shape_predictor_5 = self.init_dlib( - # dlib_model_url["face_detector"], dlib_model_url["shape_predictor_5"] - # ) - # else: - # self.face_detector = init_detection_model(det_model, half=False, device=self.device) - - # init face parsing model - self.use_parse = use_parse - #self.face_parse = init_parsing_model(model_name="parsenet", device=self.device) - - # MUST set face_detector and face_parse!!! 
\ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/run.py b/spaces/TandCAcceptMe/face-swap-docker/run.py deleted file mode 100644 index b52e5cc4a8ea9ce5cadd4e7111fb15531f380314..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/run.py +++ /dev/null @@ -1,6 +0,0 @@ -#!/usr/bin/env python3 - -from roop import core - -if __name__ == '__main__': - core.run() diff --git a/spaces/Tatusho/TTS/README.md b/spaces/Tatusho/TTS/README.md deleted file mode 100644 index 137543c7ea3207f6d9d1b1a9bda52ee0eb6295b4..0000000000000000000000000000000000000000 --- a/spaces/Tatusho/TTS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TTS -emoji: 🔥 -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py deleted file mode 100644 index 7b86ea8c6c5c48f5d26c9e0df7cf96e745b17b34..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py +++ /dev/null @@ -1,14 +0,0 @@ -from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -train.max_iter *= 4 # 100ep -> 400ep - -lr_multiplier.scheduler.milestones = [ - milestone * 4 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/ThomasSimonini/Murder-on-horsea-island-prototype/README.md b/spaces/ThomasSimonini/Murder-on-horsea-island-prototype/README.md deleted file mode 100644 index 2eba9339f3948c154ae33fa0833e165ef3bd8d04..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Murder-on-horsea-island-prototype/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Murder On Horsea Island Prototype -emoji: ⚡ -colorFrom: red -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/Um124/Global_Warming_Analysis/pages/Forest Coverage data Analysis.py b/spaces/Um124/Global_Warming_Analysis/pages/Forest Coverage data Analysis.py deleted file mode 100644 index 67425404781b45c1a88150f2a7420fd721c8047a..0000000000000000000000000000000000000000 --- a/spaces/Um124/Global_Warming_Analysis/pages/Forest Coverage data Analysis.py +++ /dev/null @@ -1,125 +0,0 @@ -import pandas as pd -import numpy as np -import plotly.express as px -import streamlit as st - -st.set_page_config( - page_title=' Forest Coverage data Analysis', - page_icon='📈', - layout='wide' -) - - -Years=['1990','1991','1992','1993','1994','1995','1996','1997','1998','1999', -'2000','2001','2002','2003','2004','2005','2006','2007','2008','2009','2010','2011','2012','2013','2014','2015'] - -@st.cache_data -def load_data(): - df=pd.read_csv('data/forest_coverage_percent.csv') - df.rename({'geo':'Country'},axis=1,inplace=True) - df.set_index('Country',inplace=True) - df.sort_values('Country',inplace=True) - df['Total']=df[Years].sum(axis=1) - df['Average']=df.mean(axis=1) - df['Minimum']=df.min(axis=1) - df['Maximum']=df.max(axis=1) - return df - -st.title('Forest Coverage Percentage') -df=load_data() -st.dataframe(df,use_container_width=True) - -countries= df.index.unique().tolist() -Graphs = ['bar','pie','line','area','funnel'] -c1,c2 = st.columns(2) -country = c1.selectbox("Select a Country", countries) -Graph = c2.selectbox("Select a Graph type", Graphs) - - - -st.header('Country wise Visualization') -cdf = df.loc[country,Years].reset_index() -cdf.rename({'index':'Years'},axis=1, inplace=True) -if Graph == Graphs[0]: - fig = px.bar(cdf, 'Years',country, title=f'{country} forest coverage percentage') -if Graph == Graphs[1]: - fig = px.pie(cdf, 'Years',country, title=f'{country} forest coverage percentage') -if Graph == Graphs[2]: - fig = px.line(cdf, 'Years',country, title=f'{country} forest coverage percentage') -if Graph == Graphs[3]: - fig = px.area(cdf, 'Years',country, title=f'{country} forest coverage percentage') -if Graph == Graphs[4]: - fig = px.funnel(cdf, 'Years',country, title=f'{country} forest coverage percentage') -st.plotly_chart(fig, use_container_width=True) - -st.header('Comparison of Country') -clist = st.multiselect("Select countries to compare", countries, default='India') -cdf = df.loc[clist, Years].T # T to rotate the data in 90deg -cdf.rename({'index':'Years'},axis=1,inplace=True) -st.write(cdf) -figc = px.line(cdf,cdf.index, clist, title=f'Comparing {", ".join(clist)}') - -st.plotly_chart(figc, use_container_width=True) - - -df.sort_values(by='Total', ascending=False, inplace=True) -fig1=px.bar(df, x=df.index, y='Total',title='Total forest coverage percent') -st.plotly_chart(fig1,use_container_width=True) - -dfavg = df.sort_values(by='Average').reset_index() -dfavg.rename({'index':'Country'},axis=1,inplace=True) -fig2=px.bar(dfavg, 'Country', 'Average', title="Average percent of forest coverage by Country") -st.plotly_chart(fig2,use_container_width=True) - -dfmin=df.sort_values(by='Minimum').reset_index() -dfmin.rename({'index':'Country'},axis=1,inplace=True) -fig3=px.bar(dfmin,'Country','Minimum',title='Minimum forest coverage by the Country' ) -st.plotly_chart(fig3,use_container_width=True) - -dfmax=df.sort_values(by='Maximum').reset_index() -dfmax.rename({'index':'Country'},axis=1,inplace=True) -fig4=px.bar(dfmax,'Country','Maximum',title='Maximum forest coverage by the Country' ) -st.plotly_chart(fig4,use_container_width=True) - 
-dfcomp=df.sort_values(by='Country',ascending=False,inplace=True) -fig5 = px.line(df, x=df.index, y='Maximum',title='Maximum and Minimum forest coverage by Country comparison') -fig5.add_scatter(x=df.index, y=df['Minimum'], mode='lines',) -st.plotly_chart(fig5,use_container_width=True) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/spaces/VasudevaK/Information_Extractor/app.py b/spaces/VasudevaK/Information_Extractor/app.py deleted file mode 100644 index cbbd36f7dc90a537ecbf709361bdf59f281cbd5c..0000000000000000000000000000000000000000 --- a/spaces/VasudevaK/Information_Extractor/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import streamlit as st -from PIL import Image -# from pdf2image import convert_from_path -import pandas as pd -import yake -import fitz -import nltk -from gtts import gTTS -nltk.download('punkt') -nltk.download('wordnet') -nltk.download('omw-1.4') -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import cosine_similarity -import string -import os -import re - -os.system('pip install -q pytesseract') -import pytesseract - -st.title("Extract info from Files") - -st.sidebar.title('Hyper Params') - -menu = ["Image","Dataset","DocumentFiles","About"] -choice = st.sidebar.selectbox("Select the type of data", menu) - -no_of_keys = st.sidebar.slider('Select the no of keywords', 1, 20, 2, 2) - -output = 'response' -output = st.selectbox('Select the type of output', ('keys', 'response')) - -# pre processing the images -filters = ['Gaussian', 'Low pass', 'High Pass', 'System defined'] -filter = st.sidebar.selectbox("Select the type of filter to preprocess the image", filters) - -# tes = 'C:\\Program Files\\Tesseract-OCR\\tesseract.exe' -# pytesseract.pytesseract.tesseract_cmd = tes - -extractor = yake.KeywordExtractor() -language = 'en' -max_ngram_size = st.sidebar.slider('Select the parameter for ngram', 1, 20, 3, 2) -deduplication_threshold = st.sidebar.slider('Select the parameter for DD threshold', 1, 10, 9, 1) -deduplication_threshold = deduplication_threshold/10 -numOfKeywords = 100 -custom_kw_extractor = yake.KeywordExtractor(lan=language, n=max_ngram_size, dedupLim=deduplication_threshold, top=numOfKeywords, features=None) - -lemmer = nltk.stem.WordNetLemmatizer() - -def LemTokens(tokens): - return [lemmer.lemmatize(token) for token in tokens] -remove_punct_dict= dict((ord(punct), None) for punct in string.punctuation) - -def LemNormalize(text): - return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict))) - -def rees(glo_text, keys): - for key in keys[:no_of_keys]: - # st.write(type(glo_text)) - sent_tokens = nltk.sent_tokenize(glo_text) - word_tokens = nltk.word_tokenize(glo_text) - sent_tokens.append(key) - word_tokens = word_tokens + nltk.word_tokenize(key) - TfidfVec = TfidfVectorizer(tokenizer = LemNormalize, stop_words='english') - tfidf = TfidfVec.fit_transform(sent_tokens) - vals = cosine_similarity(tfidf[-1], tfidf) - idx = vals.argsort()[0][-2] - response = sent_tokens[idx] - if(output == 'response'): - st.write(' - ' + key + ':' + response) - else: - st.write(' - ' + key) - response = re.sub("[^a-zA-Z0-9]","",response) - myobj = gTTS(text=response, lang=language, slow=False) - myobj.save("audio.mp3") - st.audio("audio.mp3", format='audio/ogg') - os.remove("audio.mp3") - -def load_image(image_file): - img = Image.open(image_file) - st.image(img, width=250) - text = pytesseract.image_to_string(img) - img.close() - return text - # text = 
pytesseract.image_to_string(img) - -def load_pdf(data_file): - doc = fitz.open(stream=data_file.read(), filetype="pdf") - text = "" - glo_text = '' - for page in doc: - text = text + page.get_text() - glo_text += text - keywords = custom_kw_extractor.extract_keywords(text) - - for kw in keywords[::-1]: - if(kw[1] > 0.1): - keys.append(kw[0]) - # st.write(keys) - doc.close() - return glo_text, keys - -keys = [] - -def tes_image(image_file): - if image_file != None: - # add filters if time permits - glo_text = '' - # text = pytesseract.image_to_string(load_image(image_file)) # can add a specific language to detect the text on the screen - # st.image(load_image(image_file),width=250) - # st.write(text) - text = load_image(image_file) - glo_text += text - keywords = custom_kw_extractor.extract_keywords(text) - - for kw in keywords[::-1]: - if(kw[1] > 0.1): - keys.append(kw[0]) - - # st.write(keys) - return glo_text, keys - -def tes_doc(data_file): - if data_file != None: - tup = load_pdf(data_file) - return tup - -def convert_df_to_text(df): - pass # implement key to text here using key2text package - -if choice == "Image": - st.subheader("Image") - image_file = st.file_uploader("Upload Images", type=["png","jpg","jpeg"]) - if image_file != None: - file_details = {"filename":image_file.name, "filetype":image_file.type, "filesize":image_file.size} - st.write(file_details) - glo_text, keys = tes_image(image_file) - rees(glo_text, keys) - -elif choice == "Dataset": - st.subheader("Dataset") - data_file = st.file_uploader("Upload CSV",type=["csv"]) - if data_file != None: - file_details = {"filename":data_file, "filetype":data_file.type, "filesize":data_file.size} - st.write(file_details) - df = pd.read_csv(data_file) - st.write(df) - convert_df_to_text(df) - - -elif choice == "DocumentFiles": - st.subheader("DocumentFiles") - docx_file = st.file_uploader("Upload Document", type=["pdf","docx","txt"]) - if st.button("Process"): - if docx_file is not None: - file_details = {"filename":docx_file.name, "filetype":docx_file.type, "filesize":docx_file.size} - st.write(file_details) - glo_text, keys = tes_doc(docx_file) - rees(glo_text, keys) \ No newline at end of file diff --git a/spaces/Vokturz/can-it-run-llm/src/utils.py b/spaces/Vokturz/can-it-run-llm/src/utils.py deleted file mode 100644 index 9a1dcdd08f9d023e05a07021bc5360f95cf903a7..0000000000000000000000000000000000000000 --- a/spaces/Vokturz/can-it-run-llm/src/utils.py +++ /dev/null @@ -1,103 +0,0 @@ -# using https://huggingface.co/spaces/hf-accelerate/model-memory-usage/blob/main/src/model_utils.py - -import torch -from accelerate.commands.estimate import check_has_model, create_empty_model -from urllib.parse import urlparse -from accelerate.utils import calculate_maximum_sizes -from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError -import streamlit as st - -DTYPE_MODIFIER = {"float32": 1, "float16/bfloat16": 2, "int8": 4, "int4": 8} - -def translate_llama2(text): - "Translates llama-2 to its hf counterpart" - if not text.endswith("-hf"): - return text + "-hf" - return text - -def get_model(model_name: str, library: str, access_token: str): - "Finds and grabs model from the Hub, and initializes on `meta`" - if "meta-llama" in model_name: - model_name = translate_llama2(model_name) - if library == "auto": - library = None - model_name = extract_from_url(model_name) - try: - model = create_empty_model(model_name, library_name=library, trust_remote_code=True, access_token=access_token) - except GatedRepoError: - st.error( - 
f"Model `{model_name}` is a gated model, please ensure to pass in your access token and try again if you have access. You can find your access token here : https://huggingface.co/settings/tokens. " - ) - st.stop() - except RepositoryNotFoundError: - st.error(f"Model `{model_name}` was not found on the Hub, please try another model name.") - st.stop() - except ValueError: - st.error( - f"Model `{model_name}` does not have any library metadata on the Hub, please manually select a library_name to use (such as `transformers`)" - ) - st.stop() - except (RuntimeError, OSError) as e: - library = check_has_model(e) - if library != "unknown": - st.error( - f"Tried to load `{model_name}` with `{library}` but a possible model to load was not found inside the repo." - ) - st.stop() - st.error( - f"Model `{model_name}` had an error, please open a discussion on the model's page with the error message and name: `{e}`" - ) - st.stop() - except ImportError: - # hacky way to check if it works with `trust_remote_code=False` - model = create_empty_model( - model_name, library_name=library, trust_remote_code=False, access_token=access_token - ) - except Exception as e: - st.error( - f"Model `{model_name}` had an error, please open a discussion on the model's page with the error message and name: `{e}`" - ) - st.stop() - return model - -def extract_from_url(name: str): - "Checks if `name` is a URL, and if so converts it to a model name" - is_url = False - try: - result = urlparse(name) - is_url = all([result.scheme, result.netloc]) - except Exception: - is_url = False - # Pass through if not a URL - if not is_url: - return name - else: - path = result.path - return path[1:] - -def calculate_memory(model: torch.nn.Module, options: list): - "Calculates the memory usage for a model init on `meta` device" - total_size, largest_layer = calculate_maximum_sizes(model) - num_parameters = sum(p.numel() for p in model.parameters() if p.requires_grad) - data = [] - for dtype in options: - dtype_total_size = total_size - dtype_largest_layer = largest_layer[0] - - modifier = DTYPE_MODIFIER[dtype] - dtype_total_size /= modifier - dtype_largest_layer /= modifier - - dtype_training_size = dtype_total_size * 4 / (1024**3) - dtype_inference = dtype_total_size * 1.2 / (1024**3) - dtype_total_size = dtype_total_size / (1024**3) - data.append( - { - "dtype": dtype, - "Total Size (GB)": dtype_total_size, - "Inference (GB)" : dtype_inference, - "Training using Adam (GB)": dtype_training_size, - "Parameters (Billion)" : num_parameters / 1e9 - } - ) - return data \ No newline at end of file diff --git a/spaces/Vrk/SkimLit/app.py b/spaces/Vrk/SkimLit/app.py deleted file mode 100644 index 70412503d0e6081fbc884227ef3d4a688e48f657..0000000000000000000000000000000000000000 --- a/spaces/Vrk/SkimLit/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import streamlit as st -import torch -import spacy -# from spacy.lang.en import English -# from utils import spacy_function, make_predictions, example_input - -from Dataset import SkimlitDataset -from Embeddings import get_embeddings -from Model import SkimlitModel -from Tokenizer import Tokenizer -from LabelEncoder import LabelEncoder -from MakePredictions import make_skimlit_predictions -from RandomAbstract import Choose_Random_text - -MODEL_PATH = 'skimlit-model-final-1.pt' -TOKENIZER_PATH = 'tokenizer.json' -LABEL_ENOCDER_PATH = "label_encoder.json" -EMBEDDING_FILE_PATH = 'glove.6B.300d.txt' - -@st.cache() -def create_utils(model_path, tokenizer_path, label_encoder_path, embedding_file_path): - tokenizer 
= Tokenizer.load(fp=tokenizer_path) - label_encoder = LabelEncoder.load(fp=label_encoder_path) - embedding_matrix = get_embeddings(embedding_file_path, tokenizer, 300) - model = SkimlitModel(embedding_dim=300, vocab_size=len(tokenizer), hidden_dim=128, n_layers=3, linear_output=128, num_classes=len(label_encoder), pretrained_embeddings=embedding_matrix) - model.load_state_dict(torch.load(model_path, map_location='cpu')) - print(model) - return model, tokenizer, label_encoder - -def model_prediction(abstract, model, tokenizer, label_encoder): - objective = '' - background = '' - method = '' - conclusion = '' - result = '' - - lines, pred = make_skimlit_predictions(abstract, model, tokenizer, label_encoder) - # pred, lines = make_predictions(abstract) - - for i, line in enumerate(lines): - if pred[i] == 'OBJECTIVE': - objective = objective + line - - elif pred[i] == 'BACKGROUND': - background = background + line - - elif pred[i] == 'METHODS': - method = method + line - - elif pred[i] == 'RESULTS': - result = result + line - - elif pred[i] == 'CONCLUSIONS': - conclusion = conclusion + line - - return objective, background, method, conclusion, result - - - -def main(): - - st.set_page_config( - page_title="SkimLit", - page_icon="📄", - layout="wide", - initial_sidebar_state="expanded" - ) - - st.title('SkimLit📄🔥') - st.caption('An NLP model to classify medical abstract sentences into the role they play (e.g. objective, methods, results, etc..) to enable researchers to skim through the literature and dive deeper when necessary.') - - # creating model, tokenizer and labelEncoder - # if PREP_MODEL: - # skimlit_model, tokenizer, label_encoder = create_utils(MODEL_PATH, TOKENIZER_PATH, LABEL_ENOCDER_PATH, EMBEDDING_FILE_PATH) - # PREP_MODEL = False - - col1, col2 = st.columns(2) - - with col1: - st.write('#### Entre Abstract Here !!') - abstract = st.text_area(label='', height=200) - - agree = st.checkbox('Show Example Abstract') - predict = st.button('Extract !') - - if agree: - example_input = Choose_Random_text() - st.info(example_input) - - # make prediction button logic - if predict: - with col2: - with st.spinner('Wait for prediction....'): - skimlit_model, tokenizer, label_encoder = create_utils(MODEL_PATH, TOKENIZER_PATH, LABEL_ENOCDER_PATH, EMBEDDING_FILE_PATH) - objective, background, methods, conclusion, result = model_prediction(abstract, skimlit_model, tokenizer, label_encoder) - - st.markdown(f'### Objective : ') - st.info(objective) - # st.write(f'{objective}') - st.markdown(f'### Background : ') - st.info(background) - # st.write(f'{background}') - st.markdown(f'### Methods : ') - st.info(methods) - # st.write(f'{methods}') - st.markdown(f'### Result : ') - st.info(result) - # st.write(f'{result}') - st.markdown(f'### Conclusion : ') - st.info(conclusion) - # st.write(f'{conclusion}') - - - -if __name__=='__main__': - main() \ No newline at end of file diff --git a/spaces/Wootang01/text_generator_three/app.py b/spaces/Wootang01/text_generator_three/app.py deleted file mode 100644 index 359a09ce2c6a86b35bd8763379d13866597b1956..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/text_generator_three/app.py +++ /dev/null @@ -1,20 +0,0 @@ -#level 5 text generator -import gradio as gr - -api = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") - - -def complete_with_gpt(text): - # Use the last 50 characters of the text as context - return text[:-50] + api(text[-50:]) - - -with gr.Blocks() as demo: - with gr.Row(): - textbox = gr.Textbox(placeholder="Type here and press 
enter...", lines=8) - with gr.Column(): - btn = gr.Button("Generate") - - btn.click(complete_with_gpt, textbox, textbox) - -demo.launch() \ No newline at end of file diff --git a/spaces/Xenova/distil-whisper-web/index.html b/spaces/Xenova/distil-whisper-web/index.html deleted file mode 100644 index 4f1255e4a79608155ce2fe3950259b78eb1f3971..0000000000000000000000000000000000000000 --- a/spaces/Xenova/distil-whisper-web/index.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - - - Whisper Web - - - - -

    - - - diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/commons.py b/spaces/XzJosh/ShanBao-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * 
s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/XzJosh/otto-Bert-VITS2/attentions.py b/spaces/XzJosh/otto-Bert-VITS2/attentions.py deleted file mode 100644 index 1192dd7268c20c11010e73a6017ed09549695afe..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/otto-Bert-VITS2/attentions.py +++ /dev/null @@ -1,344 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import logging - -logger = logging.getLogger(__name__) - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - #if isflow: - # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - # self.cond_layer = weight_norm(cond_layer, name='weight') - # self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear 
= nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - logging.debug(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, 
p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py deleted file mode 100644 index 489d501bef364020212306d81e9b85c8daa27491..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py +++ /dev/null @@ -1,413 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from: -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/functions/ms_deform_attn_func.py -# https://github.com/fundamentalvision/Deformable-DETR/blob/main/models/ops/modules/ms_deform_attn.py -# https://github.com/open-mmlab/mmcv/blob/master/mmcv/ops/multi_scale_deform_attn.py -# ------------------------------------------------------------------------------------------------ - -import math -import warnings -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.init import constant_, xavier_uniform_ - -try: - from groundingdino import _C -except: - warnings.warn("Failed to load custom C++ ops. 
Running on CPU mode Only!") - - -# helpers -def _is_power_of_2(n): - if (not isinstance(n, int)) or (n < 0): - raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n))) - return (n & (n - 1) == 0) and n != 0 - - -class MultiScaleDeformableAttnFunction(Function): - @staticmethod - def forward( - ctx, - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - im2col_step, - ): - ctx.im2col_step = im2col_step - output = _C.ms_deform_attn_forward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ctx.im2col_step, - ) - ctx.save_for_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - ( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - ) = ctx.saved_tensors - grad_value, grad_sampling_loc, grad_attn_weight = _C.ms_deform_attn_backward( - value, - value_spatial_shapes, - value_level_start_index, - sampling_locations, - attention_weights, - grad_output, - ctx.im2col_step, - ) - - return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None - - -def multi_scale_deformable_attn_pytorch( - value: torch.Tensor, - value_spatial_shapes: torch.Tensor, - sampling_locations: torch.Tensor, - attention_weights: torch.Tensor, -) -> torch.Tensor: - - bs, _, num_heads, embed_dims = value.shape - _, num_queries, num_heads, num_levels, num_points, _ = sampling_locations.shape - value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1) - sampling_grids = 2 * sampling_locations - 1 - sampling_value_list = [] - for level, (H_, W_) in enumerate(value_spatial_shapes): - # bs, H_*W_, num_heads, embed_dims -> - # bs, H_*W_, num_heads*embed_dims -> - # bs, num_heads*embed_dims, H_*W_ -> - # bs*num_heads, embed_dims, H_, W_ - value_l_ = ( - value_list[level].flatten(2).transpose(1, 2).reshape(bs * num_heads, embed_dims, H_, W_) - ) - # bs, num_queries, num_heads, num_points, 2 -> - # bs, num_heads, num_queries, num_points, 2 -> - # bs*num_heads, num_queries, num_points, 2 - sampling_grid_l_ = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1) - # bs*num_heads, embed_dims, num_queries, num_points - sampling_value_l_ = F.grid_sample( - value_l_, sampling_grid_l_, mode="bilinear", padding_mode="zeros", align_corners=False - ) - sampling_value_list.append(sampling_value_l_) - # (bs, num_queries, num_heads, num_levels, num_points) -> - # (bs, num_heads, num_queries, num_levels, num_points) -> - # (bs, num_heads, 1, num_queries, num_levels*num_points) - attention_weights = attention_weights.transpose(1, 2).reshape( - bs * num_heads, 1, num_queries, num_levels * num_points - ) - output = ( - (torch.stack(sampling_value_list, dim=-2).flatten(-2) * attention_weights) - .sum(-1) - .view(bs, num_heads * embed_dims, num_queries) - ) - return output.transpose(1, 2).contiguous() - - -class MultiScaleDeformableAttention(nn.Module): - """Multi-Scale Deformable Attention Module used in Deformable-DETR - - `Deformable DETR: Deformable Transformers for End-to-End Object Detection. - `_. - - Args: - embed_dim (int): The embedding dimension of Attention. Default: 256. - num_heads (int): The number of attention heads. Default: 8. - num_levels (int): The number of feature map used in Attention. Default: 4. 
- num_points (int): The number of sampling points for each query - in each head. Default: 4. - img2col_steps (int): The step used in image_to_column. Defualt: 64. - dropout (float): Dropout layer used in output. Default: 0.1. - batch_first (bool): if ``True``, then the input and output tensor will be - provided as `(bs, n, embed_dim)`. Default: False. `(n, bs, embed_dim)` - """ - - def __init__( - self, - embed_dim: int = 256, - num_heads: int = 8, - num_levels: int = 4, - num_points: int = 4, - img2col_step: int = 64, - batch_first: bool = False, - ): - super().__init__() - if embed_dim % num_heads != 0: - raise ValueError( - "embed_dim must be divisible by num_heads, but got {} and {}".format( - embed_dim, num_heads - ) - ) - head_dim = embed_dim // num_heads - - self.batch_first = batch_first - - if not _is_power_of_2(head_dim): - warnings.warn( - """ - You'd better set d_model in MSDeformAttn to make sure that - each dim of the attention head a power of 2, which is more efficient. - """ - ) - - self.im2col_step = img2col_step - self.embed_dim = embed_dim - self.num_heads = num_heads - self.num_levels = num_levels - self.num_points = num_points - self.sampling_offsets = nn.Linear(embed_dim, num_heads * num_levels * num_points * 2) - self.attention_weights = nn.Linear(embed_dim, num_heads * num_levels * num_points) - self.value_proj = nn.Linear(embed_dim, embed_dim) - self.output_proj = nn.Linear(embed_dim, embed_dim) - - self.init_weights() - - def _reset_parameters(self): - return self.init_weights() - - def init_weights(self): - """ - Default initialization for Parameters of Module. - """ - constant_(self.sampling_offsets.weight.data, 0.0) - thetas = torch.arange(self.num_heads, dtype=torch.float32) * ( - 2.0 * math.pi / self.num_heads - ) - grid_init = torch.stack([thetas.cos(), thetas.sin()], -1) - grid_init = ( - (grid_init / grid_init.abs().max(-1, keepdim=True)[0]) - .view(self.num_heads, 1, 1, 2) - .repeat(1, self.num_levels, self.num_points, 1) - ) - for i in range(self.num_points): - grid_init[:, :, i, :] *= i + 1 - with torch.no_grad(): - self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1)) - constant_(self.attention_weights.weight.data, 0.0) - constant_(self.attention_weights.bias.data, 0.0) - xavier_uniform_(self.value_proj.weight.data) - constant_(self.value_proj.bias.data, 0.0) - xavier_uniform_(self.output_proj.weight.data) - constant_(self.output_proj.bias.data, 0.0) - - def freeze_sampling_offsets(self): - print("Freeze sampling offsets") - self.sampling_offsets.weight.requires_grad = False - self.sampling_offsets.bias.requires_grad = False - - def freeze_attention_weights(self): - print("Freeze attention weights") - self.attention_weights.weight.requires_grad = False - self.attention_weights.bias.requires_grad = False - - def forward( - self, - query: torch.Tensor, - key: Optional[torch.Tensor] = None, - value: Optional[torch.Tensor] = None, - query_pos: Optional[torch.Tensor] = None, - key_padding_mask: Optional[torch.Tensor] = None, - reference_points: Optional[torch.Tensor] = None, - spatial_shapes: Optional[torch.Tensor] = None, - level_start_index: Optional[torch.Tensor] = None, - **kwargs - ) -> torch.Tensor: - - """Forward Function of MultiScaleDeformableAttention - - Args: - query (torch.Tensor): Query embeddings with shape - `(num_query, bs, embed_dim)` - key (torch.Tensor): Key embeddings with shape - `(num_key, bs, embed_dim)` - value (torch.Tensor): Value embeddings with shape - `(num_key, bs, embed_dim)` - query_pos (torch.Tensor): The position 
embedding for `query`. Default: None. - key_padding_mask (torch.Tensor): ByteTensor for `query`, with shape `(bs, num_key)`, - indicating which elements within `key` to be ignored in attention. - reference_points (torch.Tensor): The normalized reference points - with shape `(bs, num_query, num_levels, 2)`, - all elements is range in [0, 1], top-left (0, 0), - bottom-right (1, 1), including padding are. - or `(N, Length_{query}, num_levels, 4)`, add additional - two dimensions `(h, w)` to form reference boxes. - spatial_shapes (torch.Tensor): Spatial shape of features in different levels. - With shape `(num_levels, 2)`, last dimension represents `(h, w)`. - level_start_index (torch.Tensor): The start index of each level. A tensor with - shape `(num_levels, )` which can be represented as - `[0, h_0 * w_0, h_0 * w_0 + h_1 * w_1, ...]`. - - Returns: - torch.Tensor: forward results with shape `(num_query, bs, embed_dim)` - """ - - if value is None: - value = query - - if query_pos is not None: - query = query + query_pos - - if not self.batch_first: - # change to (bs, num_query ,embed_dims) - query = query.permute(1, 0, 2) - value = value.permute(1, 0, 2) - - bs, num_query, _ = query.shape - bs, num_value, _ = value.shape - - assert (spatial_shapes[:, 0] * spatial_shapes[:, 1]).sum() == num_value - - value = self.value_proj(value) - if key_padding_mask is not None: - value = value.masked_fill(key_padding_mask[..., None], float(0)) - value = value.view(bs, num_value, self.num_heads, -1) - sampling_offsets = self.sampling_offsets(query).view( - bs, num_query, self.num_heads, self.num_levels, self.num_points, 2 - ) - attention_weights = self.attention_weights(query).view( - bs, num_query, self.num_heads, self.num_levels * self.num_points - ) - attention_weights = attention_weights.softmax(-1) - attention_weights = attention_weights.view( - bs, - num_query, - self.num_heads, - self.num_levels, - self.num_points, - ) - - # bs, num_query, num_heads, num_levels, num_points, 2 - if reference_points.shape[-1] == 2: - offset_normalizer = torch.stack([spatial_shapes[..., 1], spatial_shapes[..., 0]], -1) - sampling_locations = ( - reference_points[:, :, None, :, None, :] - + sampling_offsets / offset_normalizer[None, None, None, :, None, :] - ) - elif reference_points.shape[-1] == 4: - sampling_locations = ( - reference_points[:, :, None, :, None, :2] - + sampling_offsets - / self.num_points - * reference_points[:, :, None, :, None, 2:] - * 0.5 - ) - else: - raise ValueError( - "Last dim of reference_points must be 2 or 4, but get {} instead.".format( - reference_points.shape[-1] - ) - ) - - if torch.cuda.is_available() and value.is_cuda: - halffloat = False - if value.dtype == torch.float16: - halffloat = True - value = value.float() - sampling_locations = sampling_locations.float() - attention_weights = attention_weights.float() - - output = MultiScaleDeformableAttnFunction.apply( - value, - spatial_shapes, - level_start_index, - sampling_locations, - attention_weights, - self.im2col_step, - ) - - if halffloat: - output = output.half() - else: - output = multi_scale_deformable_attn_pytorch( - value, spatial_shapes, sampling_locations, attention_weights - ) - - output = self.output_proj(output) - - if not self.batch_first: - output = output.permute(1, 0, 2) - - return output - - -def create_dummy_class(klass, dependency, message=""): - """ - When a dependency of a class is not available, create a dummy class which throws ImportError - when used. - - Args: - klass (str): name of the class. 
- dependency (str): name of the dependency. - message: extra message to print - Returns: - class: a class object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, klass) - if message: - err = err + " " + message - - class _DummyMetaClass(type): - # throw error on class attribute access - def __getattr__(_, __): # noqa: B902 - raise ImportError(err) - - class _Dummy(object, metaclass=_DummyMetaClass): - # throw error on constructor - def __init__(self, *args, **kwargs): - raise ImportError(err) - - return _Dummy - - -def create_dummy_func(func, dependency, message=""): - """ - When a dependency of a function is not available, create a dummy function which throws - ImportError when used. - - Args: - func (str): name of the function. - dependency (str or list[str]): name(s) of the dependency. - message: extra message to print - Returns: - function: a function object - """ - err = "Cannot import '{}', therefore '{}' is not available.".format(dependency, func) - if message: - err = err + " " + message - - if isinstance(dependency, (list, tuple)): - dependency = ",".join(dependency) - - def _dummy(*args, **kwargs): - raise ImportError(err) - - return _dummy diff --git a/spaces/Yuelili/RealNagrse/Training.md b/spaces/Yuelili/RealNagrse/Training.md deleted file mode 100644 index 64704e1d2e1f334984232afd12b245235b274a9e..0000000000000000000000000000000000000000 --- a/spaces/Yuelili/RealNagrse/Training.md +++ /dev/null @@ -1,100 +0,0 @@ -# :computer: How to Train Real-ESRGAN - -The training codes have been released.
-Note that the codes have been heavily refactored, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models. - -## Overview - -The training has been divided into two stages. These two stages have the same data synthesis process and training pipeline, except for the loss functions. Specifically, - -1. We first train Real-ESRNet with L1 loss from the pre-trained model ESRGAN. -1. We then use the trained Real-ESRNet model as an initialization of the generator, and train Real-ESRGAN with a combination of L1 loss, perceptual loss and GAN loss. - -## Dataset Preparation - -We use the DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required.
-You can download them from: - -1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip -2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar -3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip - -For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample HR images to obtain several Ground-Truth images with different scales. - -We then crop DF2K images into sub-images for faster IO and processing. - -You need to prepare a txt file containing the image paths. The following are some examples from `meta_info_DF2Kmultiscale+OST_sub.txt` (since different users may partition the sub-images differently, this file will not match your data, and you need to prepare your own txt file): - -```txt -DF2K_HR_sub/000001_s001.png -DF2K_HR_sub/000001_s002.png -DF2K_HR_sub/000001_s003.png -... -``` - -## Train Real-ESRNet - -1. Download the pre-trained model [ESRGAN](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) into `experiments/pretrained_models`. - ```bash - wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth -P experiments/pretrained_models - ``` -1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly: - ```yml - train: - name: DF2K+OST - type: RealESRGANDataset - dataroot_gt: datasets/DF2K # modify to the root path of your folder - meta_info: realesrgan/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt # modify to your own generated meta info txt - io_backend: - type: disk - ``` -1. If you want to perform validation during training, uncomment those lines and modify accordingly: - ```yml - # Uncomment these for validation - # val: - # name: validation - # type: PairedImageDataset - # dataroot_gt: path_to_gt - # dataroot_lq: path_to_lq - # io_backend: - # type: disk - - ... - - # Uncomment these for validation - # validation settings - # val: - # val_freq: !!float 5e3 - # save_img: True - - # metrics: - # psnr: # metric name, can be arbitrary - # type: calculate_psnr - # crop_border: 4 - # test_y_channel: false - ``` -1. Before the formal training, you may run in the `--debug` mode to see whether everything is OK. 
We use four GPUs for training: - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug - ``` -1. The formal training. We use four GPUs for training. We use the `--auto_resume` argument to automatically resume the training if necessary. - ```bash - CUDA_VISIBLE_DEVICES=0,1,2,3 \ - python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 realesrgan/train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume - ``` diff --git a/spaces/Zengyf-CVer/gradio_yolov5_det/README.md b/spaces/Zengyf-CVer/gradio_yolov5_det/README.md deleted file mode 100644 index f62981b207994eafcfb24618b8b6c28c80dfa8ca..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/gradio_yolov5_det/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Gradio_YOLOv5_Det_v0.1 -emoji: 🚀 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - -🚀 Project homepage: https://gitee.com/CV_Lab/gradio_yolov5_det \ No newline at end of file diff --git a/spaces/ZilliaxOfficial/nyaru-svc-3.0/preprocess_hubert_f0.py b/spaces/ZilliaxOfficial/nyaru-svc-3.0/preprocess_hubert_f0.py deleted file mode 100644 index 4fe7f21541acb01537797f430d53b3c0e63279e1..0000000000000000000000000000000000000000 --- a/spaces/ZilliaxOfficial/nyaru-svc-3.0/preprocess_hubert_f0.py +++ /dev/null @@ -1,106 +0,0 @@ -import os -import argparse - -import torch -import json -from glob import glob - -from pyworld import pyworld -from tqdm import tqdm -from scipy.io import wavfile - -import utils -from mel_processing import mel_spectrogram_torch -#import h5py -import logging -logging.getLogger('numba').setLevel(logging.WARNING) - -import parselmouth -import librosa -import numpy as np - - -def get_f0(path,p_len=None, f0_up_key=0): - x, _ = librosa.load(path, 32000) - if p_len is None: - p_len = x.shape[0]//320 - else: - assert abs(p_len-x.shape[0]//320) < 3, (path, p_len, x.shape) - time_step = 320 / 32000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 32000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0bak = f0.copy() - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak - -def resize2d(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0(path, c_len): - x, sr = librosa.load(path, sr=32000) - f0, t = pyworld.dio( - x.astype(np.double), - fs=sr, - f0_ceil=800, - frame_period=1000 * 320 / sr, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, 32000) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - assert 
abs(c_len - x.shape[0]//320) < 3, (c_len, f0.shape) - - return None, resize2d(f0, c_len) - - -def process(filename): - print(filename) - save_name = filename+".soft.pt" - if not os.path.exists(save_name): - devive = torch.device("cuda" if torch.cuda.is_available() else "cpu") - wav, _ = librosa.load(filename, sr=16000) - wav = torch.from_numpy(wav).unsqueeze(0).to(devive) - c = utils.get_hubert_content(hmodel, wav) - torch.save(c.cpu(), save_name) - else: - c = torch.load(save_name) - f0path = filename+".f0.npy" - if not os.path.exists(f0path): - cf0, f0 = compute_f0(filename, c.shape[-1] * 2) - np.save(f0path, f0) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in_dir", type=str, default="dataset/32k", help="path to input dir") - args = parser.parse_args() - - print("Loading hubert for content...") - hmodel = utils.get_hubert_model(0 if torch.cuda.is_available() else None) - print("Loaded hubert.") - - filenames = glob(f'{args.in_dir}/*/*.wav', recursive=True)#[:10] - - for filename in tqdm(filenames): - process(filename) - \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ga_retina_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ga_retina_head.py deleted file mode 100644 index 8822d1ca78ee2fa2f304a0649e81274830383533..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/ga_retina_head.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.ops import MaskedConv2d - -from ..builder import HEADS -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead - - -@HEADS.register_module() -class GARetinaHead(GuidedAnchorHead): - """Guided-Anchor-based RetinaNet head.""" - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=None, - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(GARetinaHead, self).__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - - self.conv_loc = nn.Conv2d(self.feat_channels, 1, 1) - self.conv_shape = nn.Conv2d(self.feat_channels, self.num_anchors * 2, - 1) - self.feature_adaption_cls = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.feature_adaption_reg = FeatureAdaption( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.retina_cls = MaskedConv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.retina_reg = MaskedConv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the layer.""" - for m in self.cls_convs: - normal_init(m.conv, std=0.01) - for m in 
self.reg_convs: - normal_init(m.conv, std=0.01) - - self.feature_adaption_cls.init_weights() - self.feature_adaption_reg.init_weights() - - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_loc, std=0.01, bias=bias_cls) - normal_init(self.conv_shape, std=0.01) - normal_init(self.retina_cls, std=0.01, bias=bias_cls) - normal_init(self.retina_reg, std=0.01) - - def forward_single(self, x): - """Forward feature map of a single scale level.""" - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - - loc_pred = self.conv_loc(cls_feat) - shape_pred = self.conv_shape(reg_feat) - - cls_feat = self.feature_adaption_cls(cls_feat, shape_pred) - reg_feat = self.feature_adaption_reg(reg_feat, shape_pred) - - if not self.training: - mask = loc_pred.sigmoid()[0] >= self.loc_filter_thr - else: - mask = None - cls_score = self.retina_cls(cls_feat, mask) - bbox_pred = self.retina_reg(reg_feat, mask) - return cls_score, bbox_pred, shape_pred, loc_pred diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/seg/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/seg/__init__.py deleted file mode 100644 index 93bc129b685e4a3efca2cc891729981b2865900d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/seg/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .builder import build_pixel_sampler -from .sampler import BasePixelSampler, OHEMPixelSampler - -__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler'] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/backbones/resnet.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/backbones/resnet.py deleted file mode 100644 index 4e52bf048d28ecb069db4728e5f05ad85ac53198..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/backbones/resnet.py +++ /dev/null @@ -1,688 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (build_conv_layer, build_norm_layer, build_plugin_layer, - constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(nn.Module): - """Basic block for ResNet.""" - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(BasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' 
- - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - """Bottleneck block for ResNet. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. - """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not 
self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. - """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - """Forward function for plugins.""" - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default" 3. - stem_channels (int): Number of stem channels. Default: 64. - base_channels (int): Number of base channels of res layer. Default: 64. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. 
- dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - - position (str, required): Position inside block to insert plugin, - options: 'after_conv1', 'after_conv2', 'after_conv3'. - - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages' - multi_grid (Sequence[int]|None): Multi grid dilation rates of last - stage. Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from annotator.uniformer.mmseg.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=64, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - multi_grid=None, - contract_dilation=False, - with_cp=False, - zero_init_residual=True): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - self.depth = depth - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.multi_grid = multi_grid - self.contract_dilation = contract_dilation - self.zero_init_residual = zero_init_residual - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - # multi grid is applied to last layer only - stage_multi_grid = multi_grid if i == len( - self.stage_blocks) - 1 else None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - multi_grid=stage_multi_grid, - contract_dilation=contract_dilation) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i+1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """make plugins for ResNet 'stage_idx'th stage . - - Currently we support to insert 'context_block', - 'empirical_attention_block', 'nonlocal_block' into the backbone like - ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be : - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... 
dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose 'stage_idx=0', the structure of blocks in the stage would be: - conv1-> conv2->conv3->yyy->zzz1->zzz2 - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. - stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - """Make stem layer for ResNet.""" - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
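-
-        Example (editor sketch, not in the original file; the checkpoint path
-        below is hypothetical):
-
-            >>> backbone = ResNet(depth=50)
-            >>> backbone.init_weights(pretrained='checkpoints/resnet50.pth')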
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottleneck) and hasattr( - m, 'conv2_offset'): - constant_init(m.conv2_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1c(ResNet): - """ResNetV1c variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1c replaces the 7x7 conv - in the input stem with three 3x3 convs. - - References: - .. [1] https://arxiv.org/pdf/1812.01187.pdf - """ - - def __init__(self, **kwargs): - super(ResNetV1c, self).__init__( - deep_stem=True, avg_down=False, **kwargs) - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - """ResNetV1d variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. - """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/hrf.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/hrf.py deleted file mode 100644 index a7afb56eb0120d578ce31070dc8131cf15ac07a6..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/hrf.py +++ /dev/null @@ -1,39 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. 
The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/abidlabs/ControlNet/app.py b/spaces/abidlabs/ControlNet/app.py deleted file mode 100644 index c5297c9fedd08a3467f23e5b163a431bdd18db5f..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/ControlNet/app.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import pathlib -import shlex -import subprocess - -import gradio as gr - -if os.getenv('SYSTEM') == 'spaces': - with open('patch') as f: - subprocess.run(shlex.split('patch -p1'), stdin=f, cwd='ControlNet') - -base_url = 'https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/' -names = [ - 'body_pose_model.pth', - 'dpt_hybrid-midas-501f0c75.pt', - 'hand_pose_model.pth', - 'mlsd_large_512_fp32.pth', - 'mlsd_tiny_512_fp32.pth', - 'network-bsds500.pth', - 'upernet_global_small.pth', -] -for name in names: - command = f'wget https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/{name} -O {name}' - out_path = pathlib.Path(f'ControlNet/annotator/ckpts/{name}') - if out_path.exists(): - continue - subprocess.run(shlex.split(command), cwd='ControlNet/annotator/ckpts/') - -from gradio_canny2image import create_demo as create_demo_canny -from gradio_depth2image import create_demo as create_demo_depth -from gradio_fake_scribble2image import create_demo as create_demo_fake_scribble -from gradio_hed2image import create_demo as create_demo_hed -from gradio_hough2image import create_demo as create_demo_hough -from gradio_normal2image import create_demo as create_demo_normal -from gradio_pose2image import create_demo as create_demo_pose -from gradio_scribble2image import create_demo as create_demo_scribble -from gradio_scribble2image_interactive import \ - create_demo as create_demo_scribble_interactive -from gradio_seg2image import create_demo as create_demo_seg -from model import Model - -MAX_IMAGES = 1 - - -model = Model() - -with gr.Blocks(css='style.css') as demo: - create_demo_canny(model.process_canny, max_images=MAX_IMAGES) - -demo.queue(api_open=False).launch() diff --git a/spaces/aijack/jojo/model.py b/spaces/aijack/jojo/model.py deleted file mode 100644 index 497bf78d57c54d58cd3b55f26c718be2470a04f1..0000000000000000000000000000000000000000 --- a/spaces/aijack/jojo/model.py +++ /dev/null @@ -1,688 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import conv2d_gradfix -if torch.cuda.is_available(): - from op.fused_act import FusedLeakyReLU, fused_leaky_relu - from op.upfirdn2d import upfirdn2d -else: - from op.fused_act_cpu import FusedLeakyReLU, fused_leaky_relu - from op.upfirdn2d_cpu import upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - 
super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = conv2d_gradfix.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})" - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - ) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - fused=True, - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - 
pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - self.fused = fused - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - if not self.fused: - weight = self.scale * self.weight.squeeze(0) - style = self.modulation(style) - - if self.demodulate: - w = weight.unsqueeze(0) * style.view(batch, 1, in_channel, 1, 1) - dcoefs = (w.square().sum((2, 3, 4)) + 1e-8).rsqrt() - - input = input * style.reshape(batch, in_channel, 1, 1) - - if self.upsample: - weight = weight.transpose(0, 1) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2 - ) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - out = conv2d_gradfix.conv2d(input, weight, padding=0, stride=2) - - else: - out = conv2d_gradfix.conv2d(input, weight, padding=self.padding) - - if self.demodulate: - out = out * dcoefs.view(batch, -1, 1, 1) - - return out - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = conv2d_gradfix.conv_transpose2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=0, stride=2, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = conv2d_gradfix.conv2d( - input, weight, padding=self.padding, groups=batch - ) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( 
- self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu" - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - @torch.no_grad() - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - @torch.no_grad() - def get_latent(self, input): - return 
self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f"noise_{i}") for i in range(self.num_layers) - ] - - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - latent = styles[0].unsqueeze(1).repeat(1, self.n_latent, 1) - else: - latent = styles - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - return image - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, 
channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/akhaliq/Kapao/models/experimental.py b/spaces/akhaliq/Kapao/models/experimental.py deleted file mode 100644 index e25a4e1779fa7847f79e8570e7cf0a23e845a9f0..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Kapao/models/experimental.py +++ /dev/null @@ -1,115 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Experimental modules -""" - -import numpy as np -import torch -import torch.nn as nn - -from models.common import Conv -from utils.downloads import attempt_download - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Sum(nn.Module): - # Weighted sum of 2 or more layers https://arxiv.org/abs/1911.09070 - def __init__(self, n, weight=False): # n: number of inputs - super().__init__() - self.weight = weight # apply weights boolean - self.iter = range(n - 1) # iter object - if weight: - self.w = nn.Parameter(-torch.arange(1., n) / 2, requires_grad=True) # layer weights - - def forward(self, x): - y = x[0] # no weight - if self.weight: - w = torch.sigmoid(self.w) * 2 - for i in self.iter: - y = y + x[i + 1] * w[i] - else: - for i in self.iter: - y = y + x[i + 1] - return y - - -class MixConv2d(nn.Module): - # Mixed Depth-wise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super().__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1E-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) - - -class Ensemble(nn.ModuleList): - # Ensemble of models - def __init__(self): - super().__init__() - - def forward(self, x, augment=False, profile=False, visualize=False): - y = [] - for module in self: - y.append(module(x, augment, profile, visualize)[0]) - # y = torch.stack(y).max(0)[0] # max ensemble - # y = torch.stack(y).mean(0) # mean ensemble - y = torch.cat(y, 1) # nms ensemble - return y, None # inference, train output - - -def attempt_load(weights, map_location=None, inplace=True, fuse=True): - from models.yolo import Detect, Model - - # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a - model = Ensemble() - for w in weights if isinstance(weights, list) else 
[weights]: - ckpt = torch.load(attempt_download(w), map_location=map_location) # load - if fuse: - model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model - else: - model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().eval()) # without layer fuse - - - # Compatibility updates - for m in model.modules(): - if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU, Detect, Model]: - m.inplace = inplace # pytorch 1.7.0 compatibility - elif type(m) is Conv: - m._non_persistent_buffers_set = set() # pytorch 1.6.0 compatibility - - if len(model) == 1: - return model[-1] # return model - else: - print(f'Ensemble created with {weights}\n') - for k in ['names']: - setattr(model, k, getattr(model[-1], k)) - model.stride = model[torch.argmax(torch.tensor([m.stride.max() for m in model])).int()].stride # max stride - return model # return ensemble diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/train.py b/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/train.py deleted file mode 100644 index 6dc2f892e1fc134b311e2c9ee42250a2d3713547..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/train.py +++ /dev/null @@ -1,127 +0,0 @@ -from vocoder.models.fatchord_version import WaveRNN -from vocoder.vocoder_dataset import VocoderDataset, collate_vocoder -from vocoder.distribution import discretized_mix_logistic_loss -from vocoder.display import stream, simple_table -from vocoder.gen_wavernn import gen_testset -from torch.utils.data import DataLoader -from pathlib import Path -from torch import optim -import torch.nn.functional as F -import vocoder.hparams as hp -import numpy as np -import time -import torch -import platform - -def train(run_id: str, syn_dir: Path, voc_dir: Path, models_dir: Path, ground_truth: bool, - save_every: int, backup_every: int, force_restart: bool): - # Check to make sure the hop length is correctly factorised - assert np.cumprod(hp.voc_upsample_factors)[-1] == hp.hop_length - - # Instantiate the model - print("Initializing the model...") - model = WaveRNN( - rnn_dims=hp.voc_rnn_dims, - fc_dims=hp.voc_fc_dims, - bits=hp.bits, - pad=hp.voc_pad, - upsample_factors=hp.voc_upsample_factors, - feat_dims=hp.num_mels, - compute_dims=hp.voc_compute_dims, - res_out_dims=hp.voc_res_out_dims, - res_blocks=hp.voc_res_blocks, - hop_length=hp.hop_length, - sample_rate=hp.sample_rate, - mode=hp.voc_mode - ) - - if torch.cuda.is_available(): - model = model.cuda() - device = torch.device('cuda') - else: - device = torch.device('cpu') - - # Initialize the optimizer - optimizer = optim.Adam(model.parameters()) - for p in optimizer.param_groups: - p["lr"] = hp.voc_lr - loss_func = F.cross_entropy if model.mode == "RAW" else discretized_mix_logistic_loss - - # Load the weights - model_dir = models_dir.joinpath(run_id) - model_dir.mkdir(exist_ok=True) - weights_fpath = model_dir.joinpath(run_id + ".pt") - if force_restart or not weights_fpath.exists(): - print("\nStarting the training of WaveRNN from scratch\n") - model.save(weights_fpath, optimizer) - else: - print("\nLoading weights at %s" % weights_fpath) - model.load(weights_fpath, optimizer) - print("WaveRNN weights loaded from step %d" % model.step) - - # Initialize the dataset - metadata_fpath = syn_dir.joinpath("train.txt") if ground_truth else \ - voc_dir.joinpath("synthesized.txt") - mel_dir = syn_dir.joinpath("mels") if ground_truth else voc_dir.joinpath("mels_gta") - wav_dir = syn_dir.joinpath("audio") - dataset = 
VocoderDataset(metadata_fpath, mel_dir, wav_dir) - test_loader = DataLoader(dataset, - batch_size=1, - shuffle=True, - pin_memory=True) - - # Begin the training - simple_table([('Batch size', hp.voc_batch_size), - ('LR', hp.voc_lr), - ('Sequence Len', hp.voc_seq_len)]) - - for epoch in range(1, 350): - data_loader = DataLoader(dataset, - collate_fn=collate_vocoder, - batch_size=hp.voc_batch_size, - num_workers=2 if platform.system() != "Windows" else 0, - shuffle=True, - pin_memory=True) - start = time.time() - running_loss = 0. - - for i, (x, y, m) in enumerate(data_loader, 1): - if torch.cuda.is_available(): - x, m, y = x.cuda(), m.cuda(), y.cuda() - - # Forward pass - y_hat = model(x, m) - if model.mode == 'RAW': - y_hat = y_hat.transpose(1, 2).unsqueeze(-1) - elif model.mode == 'MOL': - y = y.float() - y = y.unsqueeze(-1) - - # Backward pass - loss = loss_func(y_hat, y) - optimizer.zero_grad() - loss.backward() - optimizer.step() - - running_loss += loss.item() - speed = i / (time.time() - start) - avg_loss = running_loss / i - - step = model.get_step() - k = step // 1000 - - if backup_every != 0 and step % backup_every == 0 : - model.checkpoint(model_dir, optimizer) - - if save_every != 0 and step % save_every == 0 : - model.save(weights_fpath, optimizer) - - msg = f"| Epoch: {epoch} ({i}/{len(data_loader)}) | " \ - f"Loss: {avg_loss:.4f} | {speed:.1f} " \ - f"steps/s | Step: {k}k | " - stream(msg) - - - gen_testset(model, test_loader, hp.voc_gen_at_checkpoint, hp.voc_gen_batched, - hp.voc_target, hp.voc_overlap, model_dir) - print("") diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/torch/index.html b/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/torch/index.html deleted file mode 100644 index 6468d9bc5da8da7fad63dee970ec8b1339134a10..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/torch/index.html +++ /dev/null @@ -1,65 +0,0 @@ - - - - - - -infinibatch.torch API documentation - - - - - - - - - -
    - - -
    - - - - - \ No newline at end of file diff --git a/spaces/akhaliq/deeplab2/model/__init__.py b/spaces/akhaliq/deeplab2/model/__init__.py deleted file mode 100644 index 35e4ce02ff422f3aa84ab644b88d65b13e0cbc03..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treebuilders/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treebuilders/__init__.py deleted file mode 100644 index d44447eaf5a3912ea699e6d895d51f9b0782cfba..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/treebuilders/__init__.py +++ /dev/null @@ -1,88 +0,0 @@ -"""A collection of modules for building different kinds of trees from HTML -documents. - -To create a treebuilder for a new type of tree, you need to do -implement several things: - -1. A set of classes for various types of elements: Document, Doctype, Comment, - Element. These must implement the interface of ``base.treebuilders.Node`` - (although comment nodes have a different signature for their constructor, - see ``treebuilders.etree.Comment``) Textual content may also be implemented - as another node type, or not, as your tree implementation requires. - -2. A treebuilder object (called ``TreeBuilder`` by convention) that inherits - from ``treebuilders.base.TreeBuilder``. This has 4 required attributes: - - * ``documentClass`` - the class to use for the bottommost node of a document - * ``elementClass`` - the class to use for HTML Elements - * ``commentClass`` - the class to use for comments - * ``doctypeClass`` - the class to use for doctypes - - It also has one required method: - - * ``getDocument`` - Returns the root node of the complete document tree - -3. If you wish to run the unit tests, you must also create a ``testSerializer`` - method on your treebuilder which accepts a node and returns a string - containing Node and its children serialized according to the format used in - the unittests - -""" - -from __future__ import absolute_import, division, unicode_literals - -from .._utils import default_etree - -treeBuilderCache = {} - - -def getTreeBuilder(treeType, implementation=None, **kwargs): - """Get a TreeBuilder class for various types of trees with built-in support - - :arg treeType: the name of the tree type required (case-insensitive). Supported - values are: - - * "dom" - A generic builder for DOM implementations, defaulting to a - xml.dom.minidom based implementation. - * "etree" - A generic builder for tree implementations exposing an - ElementTree-like interface, defaulting to xml.etree.cElementTree if - available and xml.etree.ElementTree if not. - * "lxml" - A etree-based builder for lxml.etree, handling limitations - of lxml's implementation. 
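-
-    A usage sketch for the "lxml" tree type (editor addition, assuming the
-    lxml package is importable; the built-in example further below covers
-    "etree"):
-
-    >>> from html5lib.treebuilders import getTreeBuilder
-    >>> lxml_builder = getTreeBuilder('lxml')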
- - :arg implementation: (Currently applies to the "etree" and "dom" tree - types). A module implementing the tree type e.g. xml.etree.ElementTree - or xml.etree.cElementTree. - - :arg kwargs: Any additional options to pass to the TreeBuilder when - creating it. - - Example: - - >>> from html5lib.treebuilders import getTreeBuilder - >>> builder = getTreeBuilder('etree') - - """ - - treeType = treeType.lower() - if treeType not in treeBuilderCache: - if treeType == "dom": - from . import dom - # Come up with a sane default (pref. from the stdlib) - if implementation is None: - from xml.dom import minidom - implementation = minidom - # NEVER cache here, caching is done in the dom submodule - return dom.getDomModule(implementation, **kwargs).TreeBuilder - elif treeType == "lxml": - from . import etree_lxml - treeBuilderCache[treeType] = etree_lxml.TreeBuilder - elif treeType == "etree": - from . import etree - if implementation is None: - implementation = default_etree - # NEVER cache here, caching is done in the etree submodule - return etree.getETreeModule(implementation, **kwargs).TreeBuilder - else: - raise ValueError("""Unrecognised treebuilder "%s" """ % treeType) - return treeBuilderCache.get(treeType) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/common.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/common.py deleted file mode 100644 index 1859fb79cc4e78850b69742fca56698041ce59f8..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/common.py +++ /dev/null @@ -1,424 +0,0 @@ -# common.py -from .core import * -from .helpers import delimited_list, any_open_tag, any_close_tag -from datetime import datetime - - -# some other useful expressions - using lower-case class name since we are really using this as a namespace -class pyparsing_common: - """Here are some common low-level expressions that may be useful in - jump-starting parser development: - - - numeric forms (:class:`integers`, :class:`reals`, - :class:`scientific notation`) - - common :class:`programming identifiers` - - network addresses (:class:`MAC`, - :class:`IPv4`, :class:`IPv6`) - - ISO8601 :class:`dates` and - :class:`datetime` - - :class:`UUID` - - :class:`comma-separated list` - - :class:`url` - - Parse actions: - - - :class:`convertToInteger` - - :class:`convertToFloat` - - :class:`convertToDate` - - :class:`convertToDatetime` - - :class:`stripHTMLTags` - - :class:`upcaseTokens` - - :class:`downcaseTokens` - - Example:: - - pyparsing_common.number.runTests(''' - # any int or real number, returned as the appropriate type - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - ''') - - pyparsing_common.fnumber.runTests(''' - # any int or real number, returned as float - 100 - -100 - +100 - 3.14159 - 6.02e23 - 1e-12 - ''') - - pyparsing_common.hex_integer.runTests(''' - # hex numbers - 100 - FF - ''') - - pyparsing_common.fraction.runTests(''' - # fractions - 1/2 - -3/4 - ''') - - pyparsing_common.mixed_integer.runTests(''' - # mixed fractions - 1 - 1/2 - -3/4 - 1-3/4 - ''') - - import uuid - pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID)) - pyparsing_common.uuid.runTests(''' - # uuid - 12345678-1234-5678-1234-567812345678 - ''') - - prints:: - - # any int or real number, returned as the appropriate type - 100 - [100] - - -100 - [-100] - - +100 - [100] - - 3.14159 - [3.14159] - - 6.02e23 - [6.02e+23] - - 1e-12 - [1e-12] - - # any int or real number, 
returned as float - 100 - [100.0] - - -100 - [-100.0] - - +100 - [100.0] - - 3.14159 - [3.14159] - - 6.02e23 - [6.02e+23] - - 1e-12 - [1e-12] - - # hex numbers - 100 - [256] - - FF - [255] - - # fractions - 1/2 - [0.5] - - -3/4 - [-0.75] - - # mixed fractions - 1 - [1] - - 1/2 - [0.5] - - -3/4 - [-0.75] - - 1-3/4 - [1.75] - - # uuid - 12345678-1234-5678-1234-567812345678 - [UUID('12345678-1234-5678-1234-567812345678')] - """ - - convert_to_integer = token_map(int) - """ - Parse action for converting parsed integers to Python int - """ - - convert_to_float = token_map(float) - """ - Parse action for converting parsed numbers to Python float - """ - - integer = Word(nums).set_name("integer").set_parse_action(convert_to_integer) - """expression that parses an unsigned integer, returns an int""" - - hex_integer = ( - Word(hexnums).set_name("hex integer").set_parse_action(token_map(int, 16)) - ) - """expression that parses a hexadecimal integer, returns an int""" - - signed_integer = ( - Regex(r"[+-]?\d+") - .set_name("signed integer") - .set_parse_action(convert_to_integer) - ) - """expression that parses an integer with optional leading sign, returns an int""" - - fraction = ( - signed_integer().set_parse_action(convert_to_float) - + "/" - + signed_integer().set_parse_action(convert_to_float) - ).set_name("fraction") - """fractional expression of an integer divided by an integer, returns a float""" - fraction.add_parse_action(lambda tt: tt[0] / tt[-1]) - - mixed_integer = ( - fraction | signed_integer + Opt(Opt("-").suppress() + fraction) - ).set_name("fraction or mixed integer-fraction") - """mixed integer of the form 'integer - fraction', with optional leading integer, returns float""" - mixed_integer.add_parse_action(sum) - - real = ( - Regex(r"[+-]?(?:\d+\.\d*|\.\d+)") - .set_name("real number") - .set_parse_action(convert_to_float) - ) - """expression that parses a floating point number and returns a float""" - - sci_real = ( - Regex(r"[+-]?(?:\d+(?:[eE][+-]?\d+)|(?:\d+\.\d*|\.\d+)(?:[eE][+-]?\d+)?)") - .set_name("real number with scientific notation") - .set_parse_action(convert_to_float) - ) - """expression that parses a floating point number with optional - scientific notation and returns a float""" - - # streamlining this expression makes the docs nicer-looking - number = (sci_real | real | signed_integer).setName("number").streamline() - """any numeric expression, returns the corresponding Python type""" - - fnumber = ( - Regex(r"[+-]?\d+\.?\d*([eE][+-]?\d+)?") - .set_name("fnumber") - .set_parse_action(convert_to_float) - ) - """any int or real number, returned as float""" - - identifier = Word(identchars, identbodychars).set_name("identifier") - """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')""" - - ipv4_address = Regex( - r"(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}" - ).set_name("IPv4 address") - "IPv4 address (``0.0.0.0 - 255.255.255.255``)" - - _ipv6_part = Regex(r"[0-9a-fA-F]{1,4}").set_name("hex_integer") - _full_ipv6_address = (_ipv6_part + (":" + _ipv6_part) * 7).set_name( - "full IPv6 address" - ) - _short_ipv6_address = ( - Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6)) - + "::" - + Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6)) - ).set_name("short IPv6 address") - _short_ipv6_address.add_condition( - lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8 - ) - _mixed_ipv6_address = ("::ffff:" + ipv4_address).set_name("mixed IPv6 address") - ipv6_address = Combine( - 
(_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).set_name( - "IPv6 address" - ) - ).set_name("IPv6 address") - "IPv6 address (long, short, or mixed form)" - - mac_address = Regex( - r"[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}" - ).set_name("MAC address") - "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' delimiters)" - - @staticmethod - def convert_to_date(fmt: str = "%Y-%m-%d"): - """ - Helper to create a parse action for converting parsed date string to Python datetime.date - - Params - - - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%d"``) - - Example:: - - date_expr = pyparsing_common.iso8601_date.copy() - date_expr.setParseAction(pyparsing_common.convertToDate()) - print(date_expr.parseString("1999-12-31")) - - prints:: - - [datetime.date(1999, 12, 31)] - """ - - def cvt_fn(ss, ll, tt): - try: - return datetime.strptime(tt[0], fmt).date() - except ValueError as ve: - raise ParseException(ss, ll, str(ve)) - - return cvt_fn - - @staticmethod - def convert_to_datetime(fmt: str = "%Y-%m-%dT%H:%M:%S.%f"): - """Helper to create a parse action for converting parsed - datetime string to Python datetime.datetime - - Params - - - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%dT%H:%M:%S.%f"``) - - Example:: - - dt_expr = pyparsing_common.iso8601_datetime.copy() - dt_expr.setParseAction(pyparsing_common.convertToDatetime()) - print(dt_expr.parseString("1999-12-31T23:59:59.999")) - - prints:: - - [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)] - """ - - def cvt_fn(s, l, t): - try: - return datetime.strptime(t[0], fmt) - except ValueError as ve: - raise ParseException(s, l, str(ve)) - - return cvt_fn - - iso8601_date = Regex( - r"(?P\d{4})(?:-(?P\d\d)(?:-(?P\d\d))?)?" - ).set_name("ISO8601 date") - "ISO8601 date (``yyyy-mm-dd``)" - - iso8601_datetime = Regex( - r"(?P\d{4})-(?P\d\d)-(?P\d\d)[T ](?P\d\d):(?P\d\d)(:(?P\d\d(\.\d*)?)?)?(?PZ|[+-]\d\d:?\d\d)?" 
- ).set_name("ISO8601 datetime") - "ISO8601 datetime (``yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)``) - trailing seconds, milliseconds, and timezone optional; accepts separating ``'T'`` or ``' '``" - - uuid = Regex(r"[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}").set_name("UUID") - "UUID (``xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx``)" - - _html_stripper = any_open_tag.suppress() | any_close_tag.suppress() - - @staticmethod - def strip_html_tags(s: str, l: int, tokens: ParseResults): - """Parse action to remove HTML tags from web page HTML source - - Example:: - - # strip HTML links from normal text - text = 'More info at the pyparsing wiki page' - td, td_end = makeHTMLTags("TD") - table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end - print(table_text.parseString(text).body) - - Prints:: - - More info at the pyparsing wiki page - """ - return pyparsing_common._html_stripper.transform_string(tokens[0]) - - _commasepitem = ( - Combine( - OneOrMore( - ~Literal(",") - + ~LineEnd() - + Word(printables, exclude_chars=",") - + Opt(White(" \t") + ~FollowedBy(LineEnd() | ",")) - ) - ) - .streamline() - .set_name("commaItem") - ) - comma_separated_list = delimited_list( - Opt(quoted_string.copy() | _commasepitem, default="") - ).set_name("comma separated list") - """Predefined expression of 1 or more printable words or quoted strings, separated by commas.""" - - upcase_tokens = staticmethod(token_map(lambda t: t.upper())) - """Parse action to convert tokens to upper case.""" - - downcase_tokens = staticmethod(token_map(lambda t: t.lower())) - """Parse action to convert tokens to lower case.""" - - # fmt: off - url = Regex( - # https://mathiasbynens.be/demo/url-regex - # https://gist.github.com/dperini/729294 - r"^" + - # protocol identifier (optional) - # short syntax // still required - r"(?:(?:(?Phttps?|ftp):)?\/\/)" + - # user:pass BasicAuth (optional) - r"(?:(?P\S+(?::\S*)?)@)?" + - r"(?P" + - # IP address exclusion - # private & local networks - r"(?!(?:10|127)(?:\.\d{1,3}){3})" + - r"(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})" + - r"(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})" + - # IP address dotted notation octets - # excludes loopback network 0.0.0.0 - # excludes reserved space >= 224.0.0.0 - # excludes network & broadcast addresses - # (first & last IP address of each class) - r"(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])" + - r"(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}" + - r"(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))" + - r"|" + - # host & domain names, may end with dot - # can be replaced by a shortest alternative - # (?![-_])(?:[-\w\u00a1-\uffff]{0,63}[^-_]\.)+ - r"(?:" + - r"(?:" + - r"[a-z0-9\u00a1-\uffff]" + - r"[a-z0-9\u00a1-\uffff_-]{0,62}" + - r")?" + - r"[a-z0-9\u00a1-\uffff]\." + - r")+" + - # TLD identifier name, may end with dot - r"(?:[a-z\u00a1-\uffff]{2,}\.?)" + - r")" + - # port number (optional) - r"(:(?P\d{2,5}))?" + - # resource path (optional) - r"(?P\/[^?# ]*)?" + - # query string (optional) - r"(\?(?P[^#]*))?" + - # fragment (optional) - r"(#(?P\S*))?" 
+ - r"$" - ).set_name("url") - # fmt: on - - # pre-PEP8 compatibility names - convertToInteger = convert_to_integer - convertToFloat = convert_to_float - convertToDate = convert_to_date - convertToDatetime = convert_to_datetime - stripHTMLTags = strip_html_tags - upcaseTokens = upcase_tokens - downcaseTokens = downcase_tokens - - -_builtin_exprs = [ - v for v in vars(pyparsing_common).values() if isinstance(v, ParserElement) -] diff --git a/spaces/alexyuyxj/zh-en-translation/README.md b/spaces/alexyuyxj/zh-en-translation/README.md deleted file mode 100644 index 3b12a8c3306219dbc76d99422f54050614f75513..0000000000000000000000000000000000000000 --- a/spaces/alexyuyxj/zh-en-translation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Zh En Translation -emoji: 👀 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Trainers/Tasks.py b/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Trainers/Tasks.py deleted file mode 100644 index 7463abfd9d547af935838c85d0b711998d620902..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/Models/Trainers/Tasks.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. - - -class Task: - """ - This class is the ensemble of two classes: BatchGen and Eval. - The `setup_task` function defines tasks w.r.t the three components based - on the `task_name`. - """ - - def __init__(self, batch_gen, evaluator): - self.batch_gen = batch_gen - self.evaluator = evaluator - - @classmethod - def setup_task(cls, task_name, opt, save_dir): - - if task_name == "HMNet": - from model.third_party.HMNet.Utils.HMNet.InfinibatchLoader import ( - HMNetBatchGen, - ) - - batch_gen = HMNetBatchGen - from model.third_party.HMNet.Evaluation.ROUGEEval import ROUGEEval - - evaluator = ROUGEEval(opt["datadir"], save_dir, opt) - else: - assert False - print("ERROR: Task {} not defined".format(task_name)) - - return cls(batch_gen, evaluator) diff --git a/spaces/allknowingroger/Image-Models-Test21/README.md b/spaces/allknowingroger/Image-Models-Test21/README.md deleted file mode 100644 index 37ed449bb6af79ef0ffd2a40143a46616af89e88..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test21/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test20 ---- - - \ No newline at end of file diff --git a/spaces/amin2809/rvc-models/app.py b/spaces/amin2809/rvc-models/app.py deleted file mode 100644 index 3eea1979c8f7338d48722ad8ef74271a77fd86b2..0000000000000000000000000000000000000000 --- a/spaces/amin2809/rvc-models/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in 
huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 60 and limitation: - return "Please upload an audio file that is less than 2000 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = 
SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
    RVC Models\n" - "##
    The input audio should be clean and pure voice without background music.\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ardha27.Rvc-Models)\n\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n" - "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n" - "[![Train Own Voice](https://badgen.net/badge/icon/github?icon=github&label=Train%20Voice)](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n" - "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/R6R7AH1FA)\n\n" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
    ' - f'
    {title}
    \n'+ - (f'
    Model author: {author}
    ' if author else "")+ - (f'' if cover else "")+ - '
    ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/angelhimi/anime-remove-background/app.py b/spaces/angelhimi/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/angelhimi/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = 
gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/aphenx/bingo/src/pages/api/healthz.ts b/spaces/aphenx/bingo/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/aronvandepol/KGPT/app.py b/spaces/aronvandepol/KGPT/app.py deleted file mode 100644 index eac409808d1540f657802cd8df7cb96cdc8b1ac2..0000000000000000000000000000000000000000 --- a/spaces/aronvandepol/KGPT/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import gradio as gr -import random -import os -from transformers import AutoTokenizer, AutoModelForCausalLM - -auth_token = os.environ['TOKEN'] or True - -model = AutoModelForCausalLM.from_pretrained("aronvandepol/K-GPT125M", use_auth_token=auth_token) - -tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M") - -def get_gen(sample: str, length: int=30, beam: int =1): - input_ids = tokenizer(sample, return_tensors='pt').input_ids - output = model.generate(input_ids, - max_length = length, - num_beams = 5, - no_repeat_ngram_size = 3, - early_stopping = True, - do_sample=True, - num_return_sequences = 10, - pad_token_id=tokenizer.eos_token_id - ) - generation = tokenizer.batch_decode(output, skip_special_tokens=True) - gen_text = generation[beam-1] - return gen_text - - -with gr.Blocks() as app: - gr.Markdown("

    K-GPT_NEO

    ") - gr.Markdown( - """ - - Interact with the K-GPT_NEO model and generate kpop song texts! By entering a few words into the input prompt and press generate to get the most probable sentence. If you want to see some less probable results press the I'm feeling lucky button. - - """ - ) - beam=gr.Number(value=1, visible=False, precision=0) - with gr.Row(): - length = gr.Slider(0, 100, step=5, label="Max generated words", value=30) - with gr.Group(): - txt1 = gr.Textbox(label = "Input", placeholder="Type here and press enter...", lines=4) - txt2 = gr.Textbox(label = "Output", placeholder="Generated sentence will appear here", lines=4, interactive=False) - with gr.Row(): - btn = gr.Button("Generate most probable") - rnd = gr.Button("Feeling Lucky!") - btn.click(fn=get_gen, inputs=[txt1, length, beam], outputs=txt2) - rnd.click(fn=get_gen, inputs=[txt1, length, gr.Number(value=random.randint(2, 10), visible=False, precision=0)], outputs=txt2) - gr.Examples(examples=[['I miss you'], ['My Love has not faded yet'], ['Dancing the stars away']], inputs=txt1, outputs=txt2, fn=get_gen, cache_examples=True) - gr.Markdown( - """ -
    -
    -
    - - K-GPT_NEO is based on the GPT-Neo text generation model developed by Eleuther AI. - This architecture was fine-tuned on 2000 English translations of K-pop songs ranging from BTS and BLACKPINK, to TWICE and ZONE. For more information on the training and data, please visit: - - """ - ) - -app.launch() \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_neuralhmm_tts_train.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_neuralhmm_tts_train.py deleted file mode 100644 index 25d9aa8148aff95a75aad823eb8d7bff13f09e12..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_neuralhmm_tts_train.py +++ /dev/null @@ -1,92 +0,0 @@ -import glob -import json -import os -import shutil - -import torch -from trainer import get_last_checkpoint - -from tests import get_device_id, get_tests_output_path, run_cli -from TTS.tts.configs.neuralhmm_tts_config import NeuralhmmTTSConfig - -config_path = os.path.join(get_tests_output_path(), "test_model_config.json") -output_path = os.path.join(get_tests_output_path(), "train_outputs") -parameter_path = os.path.join(get_tests_output_path(), "lj_parameters.pt") - -torch.save({"mean": -5.5138, "std": 2.0636, "init_transition_prob": 0.3212}, parameter_path) - -config = NeuralhmmTTSConfig( - batch_size=3, - eval_batch_size=3, - num_loader_workers=0, - num_eval_loader_workers=0, - text_cleaner="phoneme_cleaners", - use_phonemes=True, - phoneme_language="en-us", - phoneme_cache_path=os.path.join(get_tests_output_path(), "train_outputs/phoneme_cache/"), - run_eval=True, - test_delay_epochs=-1, - mel_statistics_parameter_path=parameter_path, - epochs=1, - print_step=1, - test_sentences=[ - "Be a voice, not an echo.", - ], - print_eval=True, - max_sampling_time=50, -) -config.audio.do_trim_silence = True -config.audio.trim_db = 60 -config.save_json(config_path) - - -# train the model for one epoch when mel parameters exists -command_train = ( - f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --config_path {config_path} " - f"--coqpit.output_path {output_path} " - "--coqpit.datasets.0.formatter ljspeech " - "--coqpit.datasets.0.meta_file_train metadata.csv " - "--coqpit.datasets.0.meta_file_val metadata.csv " - "--coqpit.datasets.0.path tests/data/ljspeech " - "--coqpit.test_delay_epochs 0 " -) -run_cli(command_train) - - -# train the model for one epoch when mel parameters have to be computed from the dataset -if os.path.exists(parameter_path): - os.remove(parameter_path) -command_train = ( - f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --config_path {config_path} " - f"--coqpit.output_path {output_path} " - "--coqpit.datasets.0.formatter ljspeech " - "--coqpit.datasets.0.meta_file_train metadata.csv " - "--coqpit.datasets.0.meta_file_val metadata.csv " - "--coqpit.datasets.0.path tests/data/ljspeech " - "--coqpit.test_delay_epochs 0 " -) -run_cli(command_train) - -# Find latest folder -continue_path = max(glob.glob(os.path.join(output_path, "*/")), key=os.path.getmtime) - -# Inference using TTS API -continue_config_path = os.path.join(continue_path, "config.json") -continue_restore_path, _ = get_last_checkpoint(continue_path) -out_wav_path = os.path.join(get_tests_output_path(), "output.wav") - -# Check integrity of the config -with open(continue_config_path, "r", encoding="utf-8") as f: - config_loaded = json.load(f) -assert config_loaded["characters"] is not None -assert 
config_loaded["output_path"] in continue_path -assert config_loaded["test_delay_epochs"] == 0 - -# Load the model and run inference -inference_command = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' tts --text 'This is an example.' --config_path {continue_config_path} --model_path {continue_restore_path} --out_path {out_wav_path}" -run_cli(inference_command) - -# restore the model and continue training for one more epoch -command_train = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_tts.py --continue_path {continue_path} " -run_cli(command_train) -shutil.rmtree(continue_path) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/st_common.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/st_common.py deleted file mode 100644 index e098d8129af179e404e6ee777d9cf792bb5e4c86..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/st_common.py +++ /dev/null @@ -1,55 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/st_common.py: Common functions for SelfTest modules -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Common functions for SelfTest modules""" - -import unittest -import binascii -from Crypto.Util.py3compat import b - - -def list_test_cases(class_): - """Return a list of TestCase instances given a TestCase class - - This is useful when you have defined test* methods on your TestCase class. 
- """ - return unittest.TestLoader().loadTestsFromTestCase(class_) - -def strip_whitespace(s): - """Remove whitespace from a text or byte string""" - if isinstance(s,str): - return b("".join(s.split())) - else: - return b("").join(s.split()) - -def a2b_hex(s): - """Convert hexadecimal to binary, ignoring whitespace""" - return binascii.a2b_hex(strip_whitespace(s)) - -def b2a_hex(s): - """Convert binary to hexadecimal""" - # For completeness - return binascii.b2a_hex(s) - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interactive_layered_crossfilter.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interactive_layered_crossfilter.py deleted file mode 100644 index edde1957b2d7a2ea72a8bc700933129dea1b69d9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interactive_layered_crossfilter.py +++ /dev/null @@ -1,45 +0,0 @@ -""" -Interactive Crossfilter -======================= -This example shows a multi-panel view of the same data, where you can interactively -select a portion of the data in any of the panels to highlight that portion in any -of the other panels. -""" -# category: interactive charts -import altair as alt -from vega_datasets import data - -source = alt.UrlData( - data.flights_2k.url, - format={'parse': {'date': 'date'}} -) - -brush = alt.selection(type='interval', encodings=['x']) - -# Define the base chart, with the common parts of the -# background and highlights -base = alt.Chart().mark_bar().encode( - x=alt.X(alt.repeat('column'), type='quantitative', bin=alt.Bin(maxbins=20)), - y='count()' -).properties( - width=160, - height=130 -) - -# gray background with selection -background = base.encode( - color=alt.value('#ddd') -).add_selection(brush) - -# blue highlights on the transformed data -highlight = base.transform_filter(brush) - -# layer the two charts & repeat -alt.layer( - background, - highlight, - data=source -).transform_calculate( - "time", - "hours(datum.date)" -).repeat(column=["distance", "delay", "time"]) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attr/_funcs.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attr/_funcs.py deleted file mode 100644 index 4c90085a4013bf906a726597f52b206d4c842b22..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attr/_funcs.py +++ /dev/null @@ -1,422 +0,0 @@ -# SPDX-License-Identifier: MIT - -from __future__ import absolute_import, division, print_function - -import copy - -from ._compat import iteritems -from ._make import NOTHING, _obj_setattr, fields -from .exceptions import AttrsAttributeNotFoundError - - -def asdict( - inst, - recurse=True, - filter=None, - dict_factory=dict, - retain_collection_types=False, - value_serializer=None, -): - """ - Return the ``attrs`` attribute values of *inst* as a dict. - - Optionally recurse into other ``attrs``-decorated classes. - - :param inst: Instance of an ``attrs``-decorated class. - :param bool recurse: Recurse into classes that are also - ``attrs``-decorated. - :param callable filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). Is - called with the `attrs.Attribute` as the first argument and the - value as the second argument. - :param callable dict_factory: A callable to produce dictionaries from. 
For - example, to produce ordered dictionaries instead of normal Python - dictionaries, pass in ``collections.OrderedDict``. - :param bool retain_collection_types: Do not convert to ``list`` when - encountering an attribute whose type is ``tuple`` or ``set``. Only - meaningful if ``recurse`` is ``True``. - :param Optional[callable] value_serializer: A hook that is called for every - attribute or dict key/value. It receives the current instance, field - and value and must return the (updated) value. The hook is run *after* - the optional *filter* has been applied. - - :rtype: return type of *dict_factory* - - :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` - class. - - .. versionadded:: 16.0.0 *dict_factory* - .. versionadded:: 16.1.0 *retain_collection_types* - .. versionadded:: 20.3.0 *value_serializer* - .. versionadded:: 21.3.0 If a dict has a collection for a key, it is - serialized as a tuple. - """ - attrs = fields(inst.__class__) - rv = dict_factory() - for a in attrs: - v = getattr(inst, a.name) - if filter is not None and not filter(a, v): - continue - - if value_serializer is not None: - v = value_serializer(inst, a, v) - - if recurse is True: - if has(v.__class__): - rv[a.name] = asdict( - v, - recurse=True, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - elif isinstance(v, (tuple, list, set, frozenset)): - cf = v.__class__ if retain_collection_types is True else list - rv[a.name] = cf( - [ - _asdict_anything( - i, - is_key=False, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - for i in v - ] - ) - elif isinstance(v, dict): - df = dict_factory - rv[a.name] = df( - ( - _asdict_anything( - kk, - is_key=True, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - _asdict_anything( - vv, - is_key=False, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - ) - for kk, vv in iteritems(v) - ) - else: - rv[a.name] = v - else: - rv[a.name] = v - return rv - - -def _asdict_anything( - val, - is_key, - filter, - dict_factory, - retain_collection_types, - value_serializer, -): - """ - ``asdict`` only works on attrs instances, this works on anything. - """ - if getattr(val.__class__, "__attrs_attrs__", None) is not None: - # Attrs class. 
- rv = asdict( - val, - recurse=True, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - elif isinstance(val, (tuple, list, set, frozenset)): - if retain_collection_types is True: - cf = val.__class__ - elif is_key: - cf = tuple - else: - cf = list - - rv = cf( - [ - _asdict_anything( - i, - is_key=False, - filter=filter, - dict_factory=dict_factory, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ) - for i in val - ] - ) - elif isinstance(val, dict): - df = dict_factory - rv = df( - ( - _asdict_anything( - kk, - is_key=True, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - _asdict_anything( - vv, - is_key=False, - filter=filter, - dict_factory=df, - retain_collection_types=retain_collection_types, - value_serializer=value_serializer, - ), - ) - for kk, vv in iteritems(val) - ) - else: - rv = val - if value_serializer is not None: - rv = value_serializer(None, None, rv) - - return rv - - -def astuple( - inst, - recurse=True, - filter=None, - tuple_factory=tuple, - retain_collection_types=False, -): - """ - Return the ``attrs`` attribute values of *inst* as a tuple. - - Optionally recurse into other ``attrs``-decorated classes. - - :param inst: Instance of an ``attrs``-decorated class. - :param bool recurse: Recurse into classes that are also - ``attrs``-decorated. - :param callable filter: A callable whose return code determines whether an - attribute or element is included (``True``) or dropped (``False``). Is - called with the `attrs.Attribute` as the first argument and the - value as the second argument. - :param callable tuple_factory: A callable to produce tuples from. For - example, to produce lists instead of tuples. - :param bool retain_collection_types: Do not convert to ``list`` - or ``dict`` when encountering an attribute which type is - ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is - ``True``. - - :rtype: return type of *tuple_factory* - - :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` - class. - - .. versionadded:: 16.2.0 - """ - attrs = fields(inst.__class__) - rv = [] - retain = retain_collection_types # Very long. :/ - for a in attrs: - v = getattr(inst, a.name) - if filter is not None and not filter(a, v): - continue - if recurse is True: - if has(v.__class__): - rv.append( - astuple( - v, - recurse=True, - filter=filter, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - ) - elif isinstance(v, (tuple, list, set, frozenset)): - cf = v.__class__ if retain is True else list - rv.append( - cf( - [ - astuple( - j, - recurse=True, - filter=filter, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(j.__class__) - else j - for j in v - ] - ) - ) - elif isinstance(v, dict): - df = v.__class__ if retain is True else dict - rv.append( - df( - ( - astuple( - kk, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(kk.__class__) - else kk, - astuple( - vv, - tuple_factory=tuple_factory, - retain_collection_types=retain, - ) - if has(vv.__class__) - else vv, - ) - for kk, vv in iteritems(v) - ) - ) - else: - rv.append(v) - else: - rv.append(v) - - return rv if tuple_factory is list else tuple_factory(rv) - - -def has(cls): - """ - Check whether *cls* is a class with ``attrs`` attributes. - - :param type cls: Class to introspect. 
- :raise TypeError: If *cls* is not a class. - - :rtype: bool - """ - return getattr(cls, "__attrs_attrs__", None) is not None - - -def assoc(inst, **changes): - """ - Copy *inst* and apply *changes*. - - :param inst: Instance of a class with ``attrs`` attributes. - :param changes: Keyword changes in the new copy. - - :return: A copy of inst with *changes* incorporated. - - :raise attr.exceptions.AttrsAttributeNotFoundError: If *attr_name* couldn't - be found on *cls*. - :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` - class. - - .. deprecated:: 17.1.0 - Use `attrs.evolve` instead if you can. - This function will not be removed du to the slightly different approach - compared to `attrs.evolve`. - """ - import warnings - - warnings.warn( - "assoc is deprecated and will be removed after 2018/01.", - DeprecationWarning, - stacklevel=2, - ) - new = copy.copy(inst) - attrs = fields(inst.__class__) - for k, v in iteritems(changes): - a = getattr(attrs, k, NOTHING) - if a is NOTHING: - raise AttrsAttributeNotFoundError( - "{k} is not an attrs attribute on {cl}.".format( - k=k, cl=new.__class__ - ) - ) - _obj_setattr(new, k, v) - return new - - -def evolve(inst, **changes): - """ - Create a new instance, based on *inst* with *changes* applied. - - :param inst: Instance of a class with ``attrs`` attributes. - :param changes: Keyword changes in the new copy. - - :return: A copy of inst with *changes* incorporated. - - :raise TypeError: If *attr_name* couldn't be found in the class - ``__init__``. - :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` - class. - - .. versionadded:: 17.1.0 - """ - cls = inst.__class__ - attrs = fields(cls) - for a in attrs: - if not a.init: - continue - attr_name = a.name # To deal with private attributes. - init_name = attr_name if attr_name[0] != "_" else attr_name[1:] - if init_name not in changes: - changes[init_name] = getattr(inst, attr_name) - - return cls(**changes) - - -def resolve_types(cls, globalns=None, localns=None, attribs=None): - """ - Resolve any strings and forward annotations in type annotations. - - This is only required if you need concrete types in `Attribute`'s *type* - field. In other words, you don't need to resolve your types if you only - use them for static type checking. - - With no arguments, names will be looked up in the module in which the class - was created. If this is not what you want, e.g. if the name only exists - inside a method, you may pass *globalns* or *localns* to specify other - dictionaries in which to look up these names. See the docs of - `typing.get_type_hints` for more details. - - :param type cls: Class to resolve. - :param Optional[dict] globalns: Dictionary containing global variables. - :param Optional[dict] localns: Dictionary containing local variables. - :param Optional[list] attribs: List of attribs for the given class. - This is necessary when calling from inside a ``field_transformer`` - since *cls* is not an ``attrs`` class yet. - - :raise TypeError: If *cls* is not a class. - :raise attr.exceptions.NotAnAttrsClassError: If *cls* is not an ``attrs`` - class and you didn't pass any attribs. - :raise NameError: If types cannot be resolved because of missing variables. - - :returns: *cls* so you can use this function also as a class decorator. - Please note that you have to apply it **after** `attrs.define`. That - means the decorator has to come in the line **before** `attrs.define`. - - .. versionadded:: 20.1.0 - .. 
versionadded:: 21.1.0 *attribs* - - """ - # Since calling get_type_hints is expensive we cache whether we've - # done it already. - if getattr(cls, "__attrs_types_resolved__", None) != cls: - import typing - - hints = typing.get_type_hints(cls, globalns=globalns, localns=localns) - for field in fields(cls) if attribs is None else attribs: - if field.name in hints: - # Since fields have been frozen we must work around it. - _obj_setattr(field, "type", hints[field.name]) - # We store the class we resolved so that subclasses know they haven't - # been resolved. - cls.__attrs_types_resolved__ = cls - - # Return the class so you can use it as a decorator too. - return cls diff --git a/spaces/awacke1/LawsofSuccessandPower/app.py b/spaces/awacke1/LawsofSuccessandPower/app.py deleted file mode 100644 index 514a7cf550ebbfc6f55948fdf52153b14a77c453..0000000000000000000000000000000000000000 --- a/spaces/awacke1/LawsofSuccessandPower/app.py +++ /dev/null @@ -1,150 +0,0 @@ -import streamlit as st - -st.markdown(""" -# Laws of Power -- Never Outshine the Master 🌟 -- Always Say Less Than Necessary 🤐 -- Keep Your Friends Close, but Your Enemies Closer 👥🔍 -- Conceal Your Intentions 🙅‍♂️💭 -- Crush Your Enemy Totally 💥👊 -- Use Absence to Increase Respect and Honor 🚶‍♂️👀 -- Get Others to Do the Work for You 👥💪 -- Think as You Like, but Behave Like Others 🤔👥 -- Play to People's Fantasies 🌟🧚‍♀️ -- Avoid Stepping on the Toes of Those Above You 🚶‍♂️👞 - -# Laws of Success -- Set Clear Goals and Visualize Them 🎯🖼️ -- Take Action and Persevere 💪🚀 -- Continuously Learn and Adapt 📚🔄 -- Build Strong Relationships and Networks 👥🤝 -- Embrace Failure and Learn from It ❌📖 -- Take Ownership of Your Actions and Results 🙋‍♂️📊 -- Stay Focused and Prioritize 🎯🔝 -- Maintain a Positive Attitude and Mindset 😄🌈 -- Practice Discipline and Consistency 📆🔁 -- Celebrate Milestones and Reward Yourself 🎉🏆 - -## Intersection List -- Laws of Power: - - Never Outshine the Master - - Always Say Less Than Necessary - - Keep Your Friends Close, but Your Enemies Closer - - Conceal Your Intentions - - Crush Your Enemy Totally - - Use Absence to Increase Respect and Honor - - Get Others to Do the Work for You - - Think as You Like, but Behave Like Others - - Play to People's Fantasies - - Avoid Stepping on the Toes of Those Above You - -- Laws of Success: - - Set Clear Goals and Visualize Them - - Take Action and Persevere - - Continuously Learn and Adapt - - Build Strong Relationships and Networks - - Embrace Failure and Learn from It - - Take Ownership of Your Actions and Results - - Stay Focused and Prioritize - - Maintain a Positive Attitude and Mindset - - Practice Discipline and Consistency - - Celebrate Milestones and Reward Yourself - -## Heatmap of Dimensions -- Success: - - Goals - - Action - - Learning - - Relationships - - Failure - - Ownership - - Focus - - Attitude - - Discipline - - Milestones - -- Power: - - Master - - Necessary - - Enemies - - Intentions - - Enemy - - Absence - - Work - - Behave - - Fantasies - - Above -""") - -import nltk -from nltk.corpus import wordnet as wn - -nltk.download('wordnet') - -# Laws of Power -laws_of_power = [ - ("Never Outshine the Master", "Avoid overshadowing or showing up your superiors, as it can lead to jealousy and resentment."), - ("Always Say Less Than Necessary", "Control your words and reveal only what is necessary, as excessive talking can lead to mistakes or giving away too much information."), - ("Keep Your Friends Close, but Your Enemies Closer", "Stay aware of your enemies and their 
actions, as understanding their motives and intentions can help you anticipate and counter their moves."), - ("Conceal Your Intentions", "Keep your true intentions hidden to maintain an element of surprise and gain an advantage in negotiations or conflicts."), - ("Crush Your Enemy Totally", "When facing opposition, eliminate any chance of retaliation or future threats by decisively defeating your enemies."), - ("Use Absence to Increase Respect and Honor", "Create a sense of importance and value by occasionally withdrawing from the spotlight, making others appreciate your presence more."), - ("Get Others to Do the Work for You", "Delegate tasks and responsibilities to others, allowing you to focus on higher-level strategies and maintain control over the outcome."), - ("Think as You Like, but Behave Like Others", "Blend in with your surroundings and adapt to social norms, as deviating too much from the norm can lead to isolation or suspicion."), - ("Play to People's Fantasies", "Appeal to people's desires and aspirations, tapping into their fantasies to gain their support or influence their decisions."), - ("Avoid Stepping on the Toes of Those Above You", "Be cautious not to offend or challenge those in positions of power, as it can lead to negative consequences for your own advancement.") -] - -# Laws of Success -laws_of_success = [ - ("Set Clear Goals and Visualize Them", "Define specific and achievable goals, and visualize yourself achieving them to increase motivation and focus."), - ("Take Action and Persevere", "Act upon your goals and persist through challenges and setbacks, as consistent effort is essential for success."), - ("Continuously Learn and Adapt", "Embrace a growth mindset and seek knowledge and new skills, as the ability to adapt to changing circumstances is crucial for success."), - ("Build Strong Relationships and Networks", "Cultivate meaningful connections with others, as strong relationships and networks can provide support, opportunities, and valuable insights."), - ("Embrace Failure and Learn from It", "See failure as a learning opportunity and use it to improve and grow, as resilience and the ability to bounce back are vital for success."), - ("Take Ownership of Your Actions and Results", "Accept responsibility for your choices and outcomes, as taking ownership empowers you to make necessary changes and drive your own success."), - ("Stay Focused and Prioritize", "Concentrate your time and energy on the most important tasks and goals, avoiding distractions that can hinder progress."), - ("Maintain a Positive Attitude and Mindset", "Cultivate optimism and resilience, as a positive attitude can help you overcome challenges and attract opportunities."), - ("Practice Discipline and Consistency", "Develop self-discipline and maintain consistent effort towards your goals, as small daily actions can lead to significant long-term success."), - ("Celebrate Milestones and Reward Yourself", "Acknowledge and celebrate your achievements along the way, as rewarding yourself reinforces positive behavior and motivates further success.") -] - -# Intersection List -intersection_list = [law[0] for law in laws_of_power if law[0] in [law[0] for law in laws_of_success]] - -# Heatmap of Dimensions -success_words = ['Goals', 'Action', 'Learning', 'Relationships', 'Failure', 'Ownership', 'Focus', 'Attitude', 'Discipline', 'Milestones'] -power_words = ['Master', 'Necessary', 'Enemies', 'Intentions', 'Enemy', 'Absence', 'Work', 'Behave', 'Fantasies', 'Above'] - -heatmap = [] -for word in 
success_words: - if word in power_words: - heatmap.append(word) - -# Print the laws of power -print("## Laws of Power") -for law in laws_of_power: - print(f"- {law[0]} {law[1]}") - -# Print the laws of success -print("\n## Laws of Success") -for law in laws_of_success: - print(f"- {law[0]} {law[1]}") - -# Print the intersection list -print("\n## Intersection List") -for law in intersection_list: - print(f"- {law}") - -# Print the heatmap of dimensions -print("\n## Heatmap of Dimensions") -print("- Success:") -for word in success_words: - if word in heatmap: - print(f" - {word}") -print("\n- Power:") -for word in power_words: - if word in heatmap: - print(f" - {word}") - - diff --git a/spaces/awacke1/Mp4VideoGallery/README.md b/spaces/awacke1/Mp4VideoGallery/README.md deleted file mode 100644 index 9403df9c29aea44ee7fca09405b08bef81ace6e4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Mp4VideoGallery/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Mp4VideoGallery -emoji: 👀 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigcode/license/index.html b/spaces/bigcode/license/index.html deleted file mode 100644 index 19aa821655f94b7c55007832c7807ec2e0f174fd..0000000000000000000000000000000000000000 --- a/spaces/bigcode/license/index.html +++ /dev/null @@ -1,1465 +0,0 @@ - - - - - - - - -

            

CodeML Open RAIL-M v0.1 License

dated 22 December 2022

Download as .txt, .docx, .pdf, or .html


Section I: PREAMBLE

This OpenRAIL-M License was created under BigCode, an open and collaborative research project aimed at the responsible development and use of Large Language Models ("LLMs") for code generation.

This license is generally applicable to any machine-learning Model used in the context of coding-related tasks. As with any other model, models applied to the context of coding tasks, such as code-generating models, have the potential to generate a broad positive impact in society and, at the same time, negative effects if misused.

This License strives for both the open and responsible use of the accompanying Model. When it comes to the open character, the License is inspired by open source permissive licenses for the grant of IP rights. Referring to responsible use, the addition of use restrictions not permitting the use of the Model in specific scenarios enables the Licensor to enforce the License in case potential misuse of the Model may occur. Even though derivative versions of the Model could be released under different licensing terms, the License specifies the obligation to include - at minimum - the same use restrictions as the ones in the original License (this license).

This License governs the use of the Model and its derivatives and is informed by the model card associated with the Model.

NOW THEREFORE, You and Licensor agree as follows:

1. Definitions

a. "License" means the terms and conditions for use, reproduction, and Distribution as defined in this document.
b. "Data" means a collection of information and/or content extracted from the dataset used with the Model, including to train, pretrain, or otherwise evaluate the Model. The Data is not licensed under this License.
c. "Output" means the results of operating a Model as embodied in informational content resulting therefrom.
d. "Model" means any accompanying machine-learning based assemblies (including checkpoints), consisting of learnt weights, parameters (including optimizer states), corresponding to the model architecture as embodied in the Complementary Material, that have been trained or tuned, in whole or in part on the Data, using the Complementary Material.
e. "Derivatives of the Model" means all modifications to the Model, works based on the Model, or any other model which is created or initialized by transfer of patterns of the weights, parameters, activations or output of the Model, to the other model, in order to cause the other model to perform similarly to the Model, including - but not limited to - distillation methods entailing the use of intermediate data representations or methods based on the generation of synthetic data by the Model for training the other model.
f. "Complementary Material" means the accompanying source code and scripts used to define, run, load, benchmark or evaluate the Model, and used to prepare data for training or evaluation, if any. This includes any accompanying documentation, tutorials, examples, etc, if any. Complementary Material is not licensed under this License.
g. "Distribution" means any transmission, reproduction, publication or other sharing of the Model or Derivatives of the Model to a third party, including providing the Model as a hosted service made available by electronic or other remote means - e.g. API-based or web access.
h. "Licensor" means the rights owner or entity authorized by the rights owner that is granting the License, including the persons or entities that may have rights in the Model and/or distributing the Model.
i. "You" (or "Your") means an individual or Legal Entity exercising permissions granted by this License and/or making use of the Model for whichever purpose and in any field of use, including usage of the Model in an end-use application - e.g. chatbot, translator, image generator.
j. "Third Parties" means individuals or legal entities that are not under common control with Licensor or You.
k. "Contribution" means any work, including the original version of the Model and any modifications or additions to that Model or Derivatives of the Model thereof, that is intentionally submitted to Licensor for inclusion in the Model by the rights owner or by an individual or Legal Entity authorized to submit on behalf of the rights owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Model, but excluding communication that is conspicuously marked or otherwise designated in writing by the rights owner as "Not a Contribution."
l. "Contributor" means Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Model.


Section II: INTELLECTUAL PROPERTY RIGHTS

Both copyright and patent grants apply to the Model and Derivatives of the Model. The Model and Derivatives of the Model are subject to additional terms as described in Section III, which shall govern the use of the Model and Derivatives of the Model even in the event Section II is held unenforceable.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Model and Derivatives of the Model.

3. Grant of Patent License. Subject to the terms and conditions of this License and where and as applicable, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this paragraph) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model and/or Derivatives of the Model where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Model or Derivatives of the Model to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model or Derivative of the Model and/or a Contribution incorporated within the Model or Derivative of the Model constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for the Model and/or Derivative of the Model shall terminate as of the date such litigation is asserted or filed.


Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION

4. Distribution and Redistribution. You may host for Third Party remote access purposes (e.g. software-as-a-service), reproduce and distribute copies of the Model or Derivatives of the Model thereof in any medium, with or without modifications, provided that You meet the following conditions:

a. Use-based restrictions as referenced in paragraph 5 MUST be included as an enforceable provision by You in any type of legal agreement (e.g. a license) governing the use and/or distribution of the Model or Derivatives of the Model, and You shall give notice to subsequent users You Distribute to, that the Model or Derivatives of the Model are subject to paragraph 5.
b. You must give any Third Party recipients of the Model or Derivatives of the Model a copy of this License;
c. You must cause any modified files to carry prominent notices stating that You changed the files;
d. You must retain all copyright, patent, trademark, and attribution notices excluding those notices that do not pertain to any part of the Model, Derivatives of the Model.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions - respecting paragraph 4.a. - for use, reproduction, or Distribution of Your modifications, or for any such Derivatives of the Model as a whole, provided Your use, reproduction, and Distribution of the Model otherwise complies with the conditions stated in this License.

5. Use-based restrictions. The restrictions set forth in Attachment A are considered Use-based restrictions. Therefore You cannot use the Model and the Derivatives of the Model for the specified restricted uses. You may use the Model subject to this License, including only for lawful purposes and in accordance with the License. Use may include creating any content with, finetuning, updating, running, training, evaluating and/or reparametrizing the Model. You shall require all of Your users who use the Model or a Derivative of the Model to comply with the terms of this paragraph (paragraph 5).

6. The Output You Generate. Except as set forth herein, Licensor claims no rights in the Output You generate using the Model. You are accountable for the Output you generate and its subsequent uses. No use of the output can contravene any provision as stated in the License.


Section IV: OTHER PROVISIONS

7. Updates and Runtime Restrictions. To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this License, update the Model through electronic means, or modify the Output of the Model based on updates.

8. Trademarks and related. Nothing in this License permits You to make use of Licensors' trademarks, trade names, logos or to otherwise suggest endorsement or misrepresent the relationship between the parties; and any rights not expressly granted herein are reserved by the Licensors.

9. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Model (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model and Derivatives of the Model, and assume any risks associated with Your exercise of permissions under this License.

10. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

11. Accepting Warranty or Additional Liability. While redistributing the Model, Derivatives of the Model thereof, You may choose to offer and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

12. If any provision of this License is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.

END OF TERMS AND CONDITIONS


Attachment A

Use Restrictions

You agree not to use the Model or Derivatives of the Model:

a. In any way that violates any applicable national, federal, state, local or international law or regulation;
b. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
c. To generate and/or disseminate malware (e.g. ransomware) or any other content to be used for the purpose of harming electronic systems;
d. To generate or disseminate verifiably false information and/or content with the purpose of harming others;
e. To generate or disseminate personal identifiable information that can be used to harm an individual;
f. To generate or disseminate information and/or content (e.g. images, code, posts, articles), and place the information and/or content in any public context (e.g. bot generating tweets) without expressly and intelligibly disclaiming that the information and/or content is machine generated;
g. To defame, disparage or otherwise harass others;
h. To impersonate or attempt to impersonate (e.g. deepfakes) others without their consent;
i. For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation;
j. For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
k. To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
l. For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
m. To provide medical advice and medical results interpretation;
n. To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
    - - - \ No newline at end of file diff --git a/spaces/bigcode/search/README.md b/spaces/bigcode/search/README.md deleted file mode 100644 index b6503cfeb5abee8e28b9dcd4788ce88002b6d0e7..0000000000000000000000000000000000000000 --- a/spaces/bigcode/search/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: StarCoder Search -emoji: 🔎📑 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: bigcode/py-search ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Descargarsolucionariodellibrocalculointegralmoiseslazaro.md b/spaces/bioriAsaeru/text-to-voice/Descargarsolucionariodellibrocalculointegralmoiseslazaro.md deleted file mode 100644 index 0980bbe381a41ca94e9ee35080bcb7ec2acd958d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Descargarsolucionariodellibrocalculointegralmoiseslazaro.md +++ /dev/null @@ -1,11 +0,0 @@ - -

    https://firstlady-realestate.com/2022/09/09/descargarsolucionariodellibocalculintegralmoisseaqslazaro/ https://wmondemand.com/p=42793. descargarsolucionariodellibocalculintegralmoisseaqslazaro crack phan mem du toan escon 2012 simple port forwarding pro 3.8.5 serial 775l.

    -

    descargarsolucionariodellibrocalculointegralmoiseslazaro


    Download ►►►►► https://urloso.com/2uyRNc



    -

    https://www.jint.com/redirect-to-the-official-ask-a-geek-subreddit-with-nicholson-roast/254. descargarsolucionariodellibrocalculointegralmoiseslazaro free download for windows 7 form the galaxy 5 online dating apps for android 10 how to install and use im on windows 7.

    -

    descargarsolucionariodellibocalculo membraje 10000 calculera memro alg piu efetivo pdf de libro pdf ebook writer mocio (pdf, doc, zip, rar) parable of the lessons of history tkuk dance chantler 2012 professional camtasia editor 10.

    -

    https://www.jint.com/how-to-restart-dialer-app-on-ios-10/254. descargarsolucionariodellibocalculo membraje 10000 calculera memro alg piu efetivo pdf de libro pdf ebook writer mocio (pdf, doc, zip, rar) parable of the lessons of history tkuk dance chantler 2012 professional camtasia editor 10.

    -

    -

    I am using a Mac, though that is not relevant to the issue; it is just to clarify how I am getting the problem to show up. On this page, I need to have the file saved locally first. The file is loaded, but the lack of a .php extension will cause the page not to run properly. How would I get rid of this issue on my Mac?

    -

    descargarsolucionariodellibrocalculointegralmoiseslazaro patched. download. 1 / 2. page 2. descargarsolucionariodellibocalculoinintegralmoiseslazaro. related links: rac book by rs khurmi pdf free download v20 lore of the clans pdf 28 descargarsolucionariodellibocalculoinintegralmoiseslazaro.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/bla/tranny/App/Users/Model.py b/spaces/bla/tranny/App/Users/Model.py deleted file mode 100644 index dc6c82ebb23c9ee8b7d90dd74ba05d4007b94cb4..0000000000000000000000000000000000000000 --- a/spaces/bla/tranny/App/Users/Model.py +++ /dev/null @@ -1,28 +0,0 @@ -import asyncio -import orm -import psycopg2 -import datetime -import pydantic -from passlib.context import CryptContext -from App.modelInit import database, models - -pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto") - - -class User(orm.Model): - tablename = "users" - registry = models - fields = { - "id": orm.Integer(primary_key=True), - "name": orm.String(max_length=100, index=True), - "email": orm.String(max_length=100, index=True, unique=True), - "password": orm.String(max_length=100, index=True), - "phoneNumber": orm.String(max_length=100, index=True, allow_null=True), - "account_type": orm.Integer(index=True, default=1), - "createdAt": orm.DateTime(index=True, default=datetime.datetime.now), - "updatedAt": orm.DateTime(index=True, default=datetime.datetime.now), - "lastLogin": orm.DateTime(index=True, default=datetime.datetime.now), - } - - def verify_password(self, plain_password): - return pwd_context.verify(plain_password, self.password) diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/IptcImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/IptcImagePlugin.py deleted file mode 100644 index 4c47b55c1a5c7445e430a55e984de303ed4713f5..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/IptcImagePlugin.py +++ /dev/null @@ -1,230 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# IPTC/NAA file handling -# -# history: -# 1995-10-01 fl Created -# 1998-03-09 fl Cleaned up and added to PIL -# 2002-06-18 fl Added getiptcinfo helper -# -# Copyright (c) Secret Labs AB 1997-2002. -# Copyright (c) Fredrik Lundh 1995. -# -# See the README file for information on usage and redistribution. -# -import os -import tempfile - -from . import Image, ImageFile -from ._binary import i8 -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 - -COMPRESSION = {1: "raw", 5: "jpeg"} - -PAD = o8(0) * 4 - - -# -# Helpers - - -def i(c): - return i32((PAD + c)[-4:]) - - -def dump(c): - for i in c: - print("%02x" % i8(i), end=" ") - print() - - -## -# Image plugin for IPTC/NAA datastreams. To read IPTC/NAA fields -# from TIFF and JPEG files, use the getiptcinfo function. 
- - -class IptcImageFile(ImageFile.ImageFile): - format = "IPTC" - format_description = "IPTC/NAA" - - def getint(self, key): - return i(self.info[key]) - - def field(self): - # - # get a IPTC field header - s = self.fp.read(5) - if not len(s): - return None, 0 - - tag = s[1], s[2] - - # syntax - if s[0] != 0x1C or tag[0] < 1 or tag[0] > 9: - msg = "invalid IPTC/NAA file" - raise SyntaxError(msg) - - # field size - size = s[3] - if size > 132: - msg = "illegal field length in IPTC/NAA file" - raise OSError(msg) - elif size == 128: - size = 0 - elif size > 128: - size = i(self.fp.read(size - 128)) - else: - size = i16(s, 3) - - return tag, size - - def _open(self): - # load descriptive fields - while True: - offset = self.fp.tell() - tag, size = self.field() - if not tag or tag == (8, 10): - break - if size: - tagdata = self.fp.read(size) - else: - tagdata = None - if tag in self.info: - if isinstance(self.info[tag], list): - self.info[tag].append(tagdata) - else: - self.info[tag] = [self.info[tag], tagdata] - else: - self.info[tag] = tagdata - - # mode - layers = i8(self.info[(3, 60)][0]) - component = i8(self.info[(3, 60)][1]) - if (3, 65) in self.info: - id = i8(self.info[(3, 65)][0]) - 1 - else: - id = 0 - if layers == 1 and not component: - self.mode = "L" - elif layers == 3 and component: - self.mode = "RGB"[id] - elif layers == 4 and component: - self.mode = "CMYK"[id] - - # size - self._size = self.getint((3, 20)), self.getint((3, 30)) - - # compression - try: - compression = COMPRESSION[self.getint((3, 120))] - except KeyError as e: - msg = "Unknown IPTC image compression" - raise OSError(msg) from e - - # tile - if tag == (8, 10): - self.tile = [ - ("iptc", (compression, offset), (0, 0, self.size[0], self.size[1])) - ] - - def load(self): - if len(self.tile) != 1 or self.tile[0][0] != "iptc": - return ImageFile.ImageFile.load(self) - - type, tile, box = self.tile[0] - - encoding, offset = tile - - self.fp.seek(offset) - - # Copy image data to temporary file - o_fd, outfile = tempfile.mkstemp(text=False) - o = os.fdopen(o_fd) - if encoding == "raw": - # To simplify access to the extracted file, - # prepend a PPM header - o.write("P5\n%d %d\n255\n" % self.size) - while True: - type, size = self.field() - if type != (8, 10): - break - while size > 0: - s = self.fp.read(min(size, 8192)) - if not s: - break - o.write(s) - size -= len(s) - o.close() - - try: - with Image.open(outfile) as _im: - _im.load() - self.im = _im.im - finally: - try: - os.unlink(outfile) - except OSError: - pass - - -Image.register_open(IptcImageFile.format, IptcImageFile) - -Image.register_extension(IptcImageFile.format, ".iim") - - -def getiptcinfo(im): - """ - Get IPTC information from TIFF, JPEG, or IPTC file. - - :param im: An image containing IPTC data. - :returns: A dictionary containing IPTC information, or None if - no IPTC information block was found. - """ - import io - - from . import JpegImagePlugin, TiffImagePlugin - - data = None - - if isinstance(im, IptcImageFile): - # return info dictionary right away - return im.info - - elif isinstance(im, JpegImagePlugin.JpegImageFile): - # extract the IPTC/NAA resource - photoshop = im.info.get("photoshop") - if photoshop: - data = photoshop.get(0x0404) - - elif isinstance(im, TiffImagePlugin.TiffImageFile): - # get raw data from the IPTC/NAA tag (PhotoShop tags the data - # as 4-byte integers, so we cannot use the get method...) 
- try: - data = im.tag.tagdata[TiffImagePlugin.IPTC_NAA_CHUNK] - except (AttributeError, KeyError): - pass - - if data is None: - return None # no properties - - # create an IptcImagePlugin object without initializing it - class FakeImage: - pass - - im = FakeImage() - im.__class__ = IptcImageFile - - # parse the IPTC information chunk - im.info = {} - im.fp = io.BytesIO(data) - - try: - im._open() - except (IndexError, KeyError): - pass # expected failure - - return im.info diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/mask_rcnn_vitdet.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/mask_rcnn_vitdet.py deleted file mode 100644 index d6f5244402734a3f9f675c5c4e42439ea708d24d..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/configs/common/models/mask_rcnn_vitdet.py +++ /dev/null @@ -1,59 +0,0 @@ -from functools import partial -import torch.nn as nn -from detectron2.config import LazyCall as L -from detectron2.modeling import ViT, SimpleFeaturePyramid -from detectron2.modeling.backbone.fpn import LastLevelMaxPool - -from .mask_rcnn_fpn import model -from ..data.constants import constants - -model.pixel_mean = constants.imagenet_rgb256_mean -model.pixel_std = constants.imagenet_rgb256_std -model.input_format = "RGB" - -# Base -embed_dim, depth, num_heads, dp = 768, 12, 12, 0.1 -# Creates Simple Feature Pyramid from ViT backbone -model.backbone = L(SimpleFeaturePyramid)( - net=L(ViT)( # Single-scale ViT backbone - img_size=1024, - patch_size=16, - embed_dim=embed_dim, - depth=depth, - num_heads=num_heads, - drop_path_rate=dp, - window_size=14, - mlp_ratio=4, - qkv_bias=True, - norm_layer=partial(nn.LayerNorm, eps=1e-6), - window_block_indexes=[ - # 2, 5, 8 11 for global attention - 0, - 1, - 3, - 4, - 6, - 7, - 9, - 10, - ], - residual_block_indexes=[], - use_rel_pos=True, - out_feature="last_feat", - ), - in_feature="${.net.out_feature}", - out_channels=256, - scale_factors=(4.0, 2.0, 1.0, 0.5), - top_block=L(LastLevelMaxPool)(), - norm="LN", - square_pad=1024, -) - -model.roi_heads.box_head.conv_norm = model.roi_heads.mask_head.conv_norm = "LN" - -# 2conv in RPN: -model.proposal_generator.head.conv_dims = [-1, -1] - -# 4conv1fc box head -model.roi_heads.box_head.conv_dims = [256, 256, 256, 256] -model.roi_heads.box_head.fc_dims = [1024] diff --git a/spaces/chansung/zero2story/constants/__init__.py b/spaces/chansung/zero2story/constants/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chasemcdo/hf_localai/api/localai.go b/spaces/chasemcdo/hf_localai/api/localai.go deleted file mode 100644 index b719689611a788941469a56d09077febe225c44f..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/api/localai.go +++ /dev/null @@ -1,78 +0,0 @@ -package api - -import ( - "fmt" - "os" - "path/filepath" - - model "github.com/go-skynet/LocalAI/pkg/model" - "github.com/go-skynet/LocalAI/pkg/tts" - "github.com/go-skynet/LocalAI/pkg/utils" - llama "github.com/go-skynet/go-llama.cpp" - "github.com/gofiber/fiber/v2" -) - -type TTSRequest struct { - Model string `json:"model" yaml:"model"` - Input string `json:"input" yaml:"input"` -} - -func generateUniqueFileName(dir, baseName, ext string) string { - counter := 1 - fileName := baseName + ext - - for { - filePath := filepath.Join(dir, fileName) - _, err := os.Stat(filePath) - if os.IsNotExist(err) { - return fileName - 
} - - counter++ - fileName = fmt.Sprintf("%s_%d%s", baseName, counter, ext) - } -} - -func ttsEndpoint(cm *ConfigMerger, o *Option) func(c *fiber.Ctx) error { - return func(c *fiber.Ctx) error { - - input := new(TTSRequest) - // Get input data from the request body - if err := c.BodyParser(input); err != nil { - return err - } - - piperModel, err := o.loader.BackendLoader(model.PiperBackend, input.Model, []llama.ModelOption{}, uint32(0), o.assetsDestination) - if err != nil { - return err - } - - if piperModel == nil { - return fmt.Errorf("could not load piper model") - } - - w, ok := piperModel.(*tts.Piper) - if !ok { - return fmt.Errorf("loader returned non-piper object %+v", w) - } - - if err := os.MkdirAll(o.audioDir, 0755); err != nil { - return err - } - - fileName := generateUniqueFileName(o.audioDir, "piper", ".wav") - filePath := filepath.Join(o.audioDir, fileName) - - modelPath := filepath.Join(o.loader.ModelPath, input.Model) - - if err := utils.VerifyPath(modelPath, o.loader.ModelPath); err != nil { - return err - } - - if err := w.TTS(input.Input, modelPath, filePath); err != nil { - return err - } - - return c.Download(filePath) - } -} diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/tools/export_torchscript.py b/spaces/chendl/compositional_test/multimodal/YOLOX/tools/export_torchscript.py deleted file mode 100644 index 16a563bc56fe7c61475aec31ab5f2b604398cda9..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/tools/export_torchscript.py +++ /dev/null @@ -1,80 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import argparse -import os -from loguru import logger - -import torch - -from yolox.exp import get_exp - - -def make_parser(): - parser = argparse.ArgumentParser("YOLOX torchscript deploy") - parser.add_argument( - "--output-name", type=str, default="yolox.torchscript.pt", help="output name of models" - ) - parser.add_argument("--batch-size", type=int, default=1, help="batch size") - parser.add_argument( - "-f", - "--exp_file", - default=None, - type=str, - help="experiment description file", - ) - parser.add_argument("-expn", "--experiment-name", type=str, default=None) - parser.add_argument("-n", "--name", type=str, default=None, help="model name") - parser.add_argument("-c", "--ckpt", default=None, type=str, help="ckpt path") - parser.add_argument( - "--decode_in_inference", - action="store_true", - help="decode in inference or not" - ) - parser.add_argument( - "opts", - help="Modify config options using the command-line", - default=None, - nargs=argparse.REMAINDER, - ) - - return parser - - -@logger.catch -def main(): - args = make_parser().parse_args() - logger.info("args value: {}".format(args)) - exp = get_exp(args.exp_file, args.name) - exp.merge(args.opts) - - if not args.experiment_name: - args.experiment_name = exp.exp_name - - model = exp.get_model() - if args.ckpt is None: - file_name = os.path.join(exp.output_dir, args.experiment_name) - ckpt_file = os.path.join(file_name, "best_ckpt.pth") - else: - ckpt_file = args.ckpt - - # load the model state dict - ckpt = torch.load(ckpt_file, map_location="cpu") - - model.eval() - if "model" in ckpt: - ckpt = ckpt["model"] - model.load_state_dict(ckpt) - model.head.decode_in_inference = args.decode_in_inference - - logger.info("loading checkpoint done.") - dummy_input = torch.randn(args.batch_size, 3, exp.test_size[0], exp.test_size[1]) - - mod = torch.jit.trace(model, dummy_input) - 
mod.save(args.output_name) - logger.info("generated torchscript model named {}".format(args.output_name)) - - -if __name__ == "__main__": - main() diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/modeling_frcnn.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/modeling_frcnn.py deleted file mode 100644 index edbd224cbe08d71a6d19cd3e0437e3def7d00210..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/lxmert/modeling_frcnn.py +++ /dev/null @@ -1,1921 +0,0 @@ -""" - coding=utf-8 - Copyright 2018, Antonio Mendoza Hao Tan, Mohit Bansal - Adapted From Facebook Inc, Detectron2 && Huggingface Co. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License.import copy - """ -import itertools -import math -import os -from abc import ABCMeta, abstractmethod -from collections import OrderedDict, namedtuple -from typing import Dict, List, Tuple - -import numpy as np -import torch -from torch import nn -from torch.nn.modules.batchnorm import BatchNorm2d -from torchvision.ops import RoIPool -from torchvision.ops.boxes import batched_nms, nms - -from utils import WEIGHTS_NAME, Config, cached_path, hf_bucket_url, is_remote_url, load_checkpoint - - -# other: -def norm_box(boxes, raw_sizes): - if not isinstance(boxes, torch.Tensor): - normalized_boxes = boxes.copy() - else: - normalized_boxes = boxes.clone() - normalized_boxes[:, :, (0, 2)] /= raw_sizes[:, 1] - normalized_boxes[:, :, (1, 3)] /= raw_sizes[:, 0] - return normalized_boxes - - -def pad_list_tensors( - list_tensors, - preds_per_image, - max_detections=None, - return_tensors=None, - padding=None, - pad_value=0, - location=None, -): - """ - location will always be cpu for np tensors - """ - if location is None: - location = "cpu" - assert return_tensors in {"pt", "np", None} - assert padding in {"max_detections", "max_batch", None} - new = [] - if padding is None: - if return_tensors is None: - return list_tensors - elif return_tensors == "pt": - if not isinstance(list_tensors, torch.Tensor): - return torch.stack(list_tensors).to(location) - else: - return list_tensors.to(location) - else: - if not isinstance(list_tensors, list): - return np.array(list_tensors.to(location)) - else: - return list_tensors.to(location) - if padding == "max_detections": - assert max_detections is not None, "specify max number of detections per batch" - elif padding == "max_batch": - max_detections = max(preds_per_image) - for i in range(len(list_tensors)): - too_small = False - tensor_i = list_tensors.pop(0) - if tensor_i.ndim < 2: - too_small = True - tensor_i = tensor_i.unsqueeze(-1) - assert isinstance(tensor_i, torch.Tensor) - tensor_i = nn.functional.pad( - input=tensor_i, - pad=(0, 0, 0, max_detections - preds_per_image[i]), - mode="constant", - value=pad_value, - ) - if too_small: - tensor_i = tensor_i.squeeze(-1) - if return_tensors is None: - if location == "cpu": - tensor_i = tensor_i.cpu() - tensor_i = tensor_i.tolist() - if return_tensors 
== "np": - if location == "cpu": - tensor_i = tensor_i.cpu() - tensor_i = tensor_i.numpy() - else: - if location == "cpu": - tensor_i = tensor_i.cpu() - new.append(tensor_i) - if return_tensors == "np": - return np.stack(new, axis=0) - elif return_tensors == "pt" and not isinstance(new, torch.Tensor): - return torch.stack(new, dim=0) - else: - return list_tensors - - -def do_nms(boxes, scores, image_shape, score_thresh, nms_thresh, mind, maxd): - scores = scores[:, :-1] - num_bbox_reg_classes = boxes.shape[1] // 4 - # Convert to Boxes to use the `clip` function ... - boxes = boxes.reshape(-1, 4) - _clip_box(boxes, image_shape) - boxes = boxes.view(-1, num_bbox_reg_classes, 4) # R x C x 4 - - # Select max scores - max_scores, max_classes = scores.max(1) # R x C --> R - num_objs = boxes.size(0) - boxes = boxes.view(-1, 4) - idxs = torch.arange(num_objs).to(boxes.device) * num_bbox_reg_classes + max_classes - max_boxes = boxes[idxs] # Select max boxes according to the max scores. - - # Apply NMS - keep = nms(max_boxes, max_scores, nms_thresh) - keep = keep[:maxd] - if keep.shape[-1] >= mind and keep.shape[-1] <= maxd: - max_boxes, max_scores = max_boxes[keep], max_scores[keep] - classes = max_classes[keep] - return max_boxes, max_scores, classes, keep - else: - return None - - -# Helper Functions -def _clip_box(tensor, box_size: Tuple[int, int]): - assert torch.isfinite(tensor).all(), "Box tensor contains infinite or NaN!" - h, w = box_size - tensor[:, 0].clamp_(min=0, max=w) - tensor[:, 1].clamp_(min=0, max=h) - tensor[:, 2].clamp_(min=0, max=w) - tensor[:, 3].clamp_(min=0, max=h) - - -def _nonempty_boxes(box, threshold: float = 0.0) -> torch.Tensor: - widths = box[:, 2] - box[:, 0] - heights = box[:, 3] - box[:, 1] - keep = (widths > threshold) & (heights > threshold) - return keep - - -def get_norm(norm, out_channels): - if isinstance(norm, str): - if len(norm) == 0: - return None - norm = { - "BN": BatchNorm2d, - "GN": lambda channels: nn.GroupNorm(32, channels), - "nnSyncBN": nn.SyncBatchNorm, # keep for debugging - "": lambda x: x, - }[norm] - return norm(out_channels) - - -def _create_grid_offsets(size: List[int], stride: int, offset: float, device): - grid_height, grid_width = size - shifts_x = torch.arange( - offset * stride, - grid_width * stride, - step=stride, - dtype=torch.float32, - device=device, - ) - shifts_y = torch.arange( - offset * stride, - grid_height * stride, - step=stride, - dtype=torch.float32, - device=device, - ) - - shift_y, shift_x = torch.meshgrid(shifts_y, shifts_x) - shift_x = shift_x.reshape(-1) - shift_y = shift_y.reshape(-1) - return shift_x, shift_y - - -def build_backbone(cfg): - input_shape = ShapeSpec(channels=len(cfg.MODEL.PIXEL_MEAN)) - norm = cfg.RESNETS.NORM - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - caffe_maxpool=cfg.MODEL.MAX_POOL, - ) - freeze_at = cfg.BACKBONE.FREEZE_AT - - if freeze_at >= 1: - for p in stem.parameters(): - p.requires_grad = False - - out_features = cfg.RESNETS.OUT_FEATURES - depth = cfg.RESNETS.DEPTH - num_groups = cfg.RESNETS.NUM_GROUPS - width_per_group = cfg.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group - in_channels = cfg.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.RESNETS.STRIDE_IN_1X1 - res5_dilation = cfg.RESNETS.RES5_DILATION - assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation) - - num_blocks_per_stage = {50: [3, 4, 6, 3], 101: [3, 
4, 23, 3], 152: [3, 8, 36, 3]}[depth] - - stages = [] - out_stage_idx = [{"res2": 2, "res3": 3, "res4": 4, "res5": 5}[f] for f in out_features] - max_stage_idx = max(out_stage_idx) - for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)): - dilation = res5_dilation if stage_idx == 5 else 1 - first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "first_stride": first_stride, - "in_channels": in_channels, - "bottleneck_channels": bottleneck_channels, - "out_channels": out_channels, - "num_groups": num_groups, - "norm": norm, - "stride_in_1x1": stride_in_1x1, - "dilation": dilation, - } - - stage_kargs["block_class"] = BottleneckBlock - blocks = ResNet.make_stage(**stage_kargs) - in_channels = out_channels - out_channels *= 2 - bottleneck_channels *= 2 - - if freeze_at >= stage_idx: - for block in blocks: - block.freeze() - stages.append(blocks) - - return ResNet(stem, stages, out_features=out_features) - - -def find_top_rpn_proposals( - proposals, - pred_objectness_logits, - images, - image_sizes, - nms_thresh, - pre_nms_topk, - post_nms_topk, - min_box_side_len, - training, -): - """Args: - proposals (list[Tensor]): (L, N, Hi*Wi*A, 4). - pred_objectness_logits: tensors of length L. - nms_thresh (float): IoU threshold to use for NMS - pre_nms_topk (int): before nms - post_nms_topk (int): after nms - min_box_side_len (float): minimum proposal box side - training (bool): True if proposals are to be used in training, - Returns: - results (List[Dict]): stores post_nms_topk object proposals for image i. - """ - num_images = len(images) - device = proposals[0].device - - # 1. Select top-k anchor for every level and every image - topk_scores = [] # #lvl Tensor, each of shape N x topk - topk_proposals = [] - level_ids = [] # #lvl Tensor, each of shape (topk,) - batch_idx = torch.arange(num_images, device=device) - for level_id, proposals_i, logits_i in zip(itertools.count(), proposals, pred_objectness_logits): - Hi_Wi_A = logits_i.shape[1] - num_proposals_i = min(pre_nms_topk, Hi_Wi_A) - - # sort is faster than topk (https://github.com/pytorch/pytorch/issues/22812) - # topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) - logits_i, idx = logits_i.sort(descending=True, dim=1) - topk_scores_i = logits_i[batch_idx, :num_proposals_i] - topk_idx = idx[batch_idx, :num_proposals_i] - - # each is N x topk - topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 4 - - topk_proposals.append(topk_proposals_i) - topk_scores.append(topk_scores_i) - level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device)) - - # 2. Concat all levels together - topk_scores = torch.cat(topk_scores, dim=1) - topk_proposals = torch.cat(topk_proposals, dim=1) - level_ids = torch.cat(level_ids, dim=0) - - # if I change to batched_nms, I wonder if this will make a difference - # 3. For each image, run a per-level NMS, and choose topk results. 
- results = [] - for n, image_size in enumerate(image_sizes): - boxes = topk_proposals[n] - scores_per_img = topk_scores[n] - # I will have to take a look at the boxes clip method - _clip_box(boxes, image_size) - # filter empty boxes - keep = _nonempty_boxes(boxes, threshold=min_box_side_len) - lvl = level_ids - if keep.sum().item() != len(boxes): - boxes, scores_per_img, lvl = ( - boxes[keep], - scores_per_img[keep], - level_ids[keep], - ) - - keep = batched_nms(boxes, scores_per_img, lvl, nms_thresh) - keep = keep[:post_nms_topk] - - res = (boxes[keep], scores_per_img[keep]) - results.append(res) - - # I wonder if it would be possible for me to pad all these things. - return results - - -def subsample_labels(labels, num_samples, positive_fraction, bg_label): - """ - Returns: - pos_idx, neg_idx (Tensor): - 1D vector of indices. The total length of both is `num_samples` or fewer. - """ - positive = torch.nonzero((labels != -1) & (labels != bg_label)).squeeze(1) - negative = torch.nonzero(labels == bg_label).squeeze(1) - - num_pos = int(num_samples * positive_fraction) - # protect against not enough positive examples - num_pos = min(positive.numel(), num_pos) - num_neg = num_samples - num_pos - # protect against not enough negative examples - num_neg = min(negative.numel(), num_neg) - - # randomly select positive and negative examples - perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos] - perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg] - - pos_idx = positive[perm1] - neg_idx = negative[perm2] - return pos_idx, neg_idx - - -def add_ground_truth_to_proposals(gt_boxes, proposals): - raise NotImplementedError() - - -def add_ground_truth_to_proposals_single_image(gt_boxes, proposals): - raise NotImplementedError() - - -def _fmt_box_list(box_tensor, batch_index: int): - repeated_index = torch.full( - (len(box_tensor), 1), - batch_index, - dtype=box_tensor.dtype, - device=box_tensor.device, - ) - return torch.cat((repeated_index, box_tensor), dim=1) - - -def convert_boxes_to_pooler_format(box_lists: List[torch.Tensor]): - pooler_fmt_boxes = torch.cat( - [_fmt_box_list(box_list, i) for i, box_list in enumerate(box_lists)], - dim=0, - ) - return pooler_fmt_boxes - - -def assign_boxes_to_levels( - box_lists: List[torch.Tensor], - min_level: int, - max_level: int, - canonical_box_size: int, - canonical_level: int, -): - box_sizes = torch.sqrt(torch.cat([boxes.area() for boxes in box_lists])) - # Eqn.(1) in FPN paper - level_assignments = torch.floor(canonical_level + torch.log2(box_sizes / canonical_box_size + 1e-8)) - # clamp level to (min, max), in case the box size is too large or too small - # for the available feature maps - level_assignments = torch.clamp(level_assignments, min=min_level, max=max_level) - return level_assignments.to(torch.int64) - min_level - - -# Helper Classes -class _NewEmptyTensorOp(torch.autograd.Function): - @staticmethod - def forward(ctx, x, new_shape): - ctx.shape = x.shape - return x.new_empty(new_shape) - - @staticmethod - def backward(ctx, grad): - shape = ctx.shape - return _NewEmptyTensorOp.apply(grad, shape), None - - -class ShapeSpec(namedtuple("_ShapeSpec", ["channels", "height", "width", "stride"])): - def __new__(cls, *, channels=None, height=None, width=None, stride=None): - return super().__new__(cls, channels, height, width, stride) - - -class Box2BoxTransform(object): - """ - This R-CNN transformation scales the box's width and height - by exp(dw), exp(dh) and shifts a box's center by the offset - (dx * 
width, dy * height). - """ - - def __init__(self, weights: Tuple[float, float, float, float], scale_clamp: float = None): - """ - Args: - weights (4-element tuple): Scaling factors that are applied to the - (dx, dy, dw, dh) deltas. In Fast R-CNN, these were originally set - such that the deltas have unit variance; now they are treated as - hyperparameters of the system. - scale_clamp (float): When predicting deltas, the predicted box scaling - factors (dw and dh) are clamped such that they are <= scale_clamp. - """ - self.weights = weights - if scale_clamp is not None: - self.scale_clamp = scale_clamp - else: - """ - Value for clamping large dw and dh predictions. - The heuristic is that we clamp such that dw and dh are no larger - than what would transform a 16px box into a 1000px box - (based on a small anchor, 16px, and a typical image size, 1000px). - """ - self.scale_clamp = math.log(1000.0 / 16) - - def get_deltas(self, src_boxes, target_boxes): - """ - Get box regression transformation deltas (dx, dy, dw, dh) that can be used - to transform the `src_boxes` into the `target_boxes`. That is, the relation - ``target_boxes == self.apply_deltas(deltas, src_boxes)`` is true (unless - any delta is too large and is clamped). - Args: - src_boxes (Tensor): source boxes, e.g., object proposals - target_boxes (Tensor): target of the transformation, e.g., ground-truth - boxes. - """ - assert isinstance(src_boxes, torch.Tensor), type(src_boxes) - assert isinstance(target_boxes, torch.Tensor), type(target_boxes) - - src_widths = src_boxes[:, 2] - src_boxes[:, 0] - src_heights = src_boxes[:, 3] - src_boxes[:, 1] - src_ctr_x = src_boxes[:, 0] + 0.5 * src_widths - src_ctr_y = src_boxes[:, 1] + 0.5 * src_heights - - target_widths = target_boxes[:, 2] - target_boxes[:, 0] - target_heights = target_boxes[:, 3] - target_boxes[:, 1] - target_ctr_x = target_boxes[:, 0] + 0.5 * target_widths - target_ctr_y = target_boxes[:, 1] + 0.5 * target_heights - - wx, wy, ww, wh = self.weights - dx = wx * (target_ctr_x - src_ctr_x) / src_widths - dy = wy * (target_ctr_y - src_ctr_y) / src_heights - dw = ww * torch.log(target_widths / src_widths) - dh = wh * torch.log(target_heights / src_heights) - - deltas = torch.stack((dx, dy, dw, dh), dim=1) - assert (src_widths > 0).all().item(), "Input boxes to Box2BoxTransform are not valid!" - return deltas - - def apply_deltas(self, deltas, boxes): - """ - Apply transformation `deltas` (dx, dy, dw, dh) to `boxes`. - Args: - deltas (Tensor): transformation deltas of shape (N, k*4), where k >= 1. - deltas[i] represents k potentially different class-specific - box transformations for the single box boxes[i]. 
- boxes (Tensor): boxes to transform, of shape (N, 4) - """ - boxes = boxes.to(deltas.dtype) - - widths = boxes[:, 2] - boxes[:, 0] - heights = boxes[:, 3] - boxes[:, 1] - ctr_x = boxes[:, 0] + 0.5 * widths - ctr_y = boxes[:, 1] + 0.5 * heights - - wx, wy, ww, wh = self.weights - dx = deltas[:, 0::4] / wx - dy = deltas[:, 1::4] / wy - dw = deltas[:, 2::4] / ww - dh = deltas[:, 3::4] / wh - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=self.scale_clamp) - dh = torch.clamp(dh, max=self.scale_clamp) - - pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] - pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] - pred_w = torch.exp(dw) * widths[:, None] - pred_h = torch.exp(dh) * heights[:, None] - - pred_boxes = torch.zeros_like(deltas) - pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * pred_w # x1 - pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * pred_h # y1 - pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * pred_w # x2 - pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * pred_h # y2 - return pred_boxes - - -class Matcher(object): - """ - This class assigns to each predicted "element" (e.g., a box) a ground-truth - element. Each predicted element will have exactly zero or one matches; each - ground-truth element may be matched to zero or more predicted elements. - The matching is determined by the MxN match_quality_matrix, that characterizes - how well each (ground-truth, prediction)-pair match each other. For example, - if the elements are boxes, this matrix may contain box intersection-over-union - overlap values. - The matcher returns (a) a vector of length N containing the index of the - ground-truth element m in [0, M) that matches to prediction n in [0, N). - (b) a vector of length N containing the labels for each prediction. - """ - - def __init__( - self, - thresholds: List[float], - labels: List[int], - allow_low_quality_matches: bool = False, - ): - """ - Args: - thresholds (list): a list of thresholds used to stratify predictions - into levels. - labels (list): a list of values to label predictions belonging at - each level. A label can be one of {-1, 0, 1} signifying - {ignore, negative class, positive class}, respectively. - allow_low_quality_matches (bool): if True, produce additional matches or predictions with maximum match quality lower than high_threshold. - For example, thresholds = [0.3, 0.5] labels = [0, -1, 1] All predictions with iou < 0.3 will be marked with 0 and - thus will be considered as false positives while training. All predictions with 0.3 <= iou < 0.5 will be marked with -1 and - thus will be ignored. All predictions with 0.5 <= iou will be marked with 1 and thus will be considered as true positives. - """ - thresholds = thresholds[:] - assert thresholds[0] > 0 - thresholds.insert(0, -float("inf")) - thresholds.append(float("inf")) - assert all([low <= high for (low, high) in zip(thresholds[:-1], thresholds[1:])]) - assert all([label_i in [-1, 0, 1] for label_i in labels]) - assert len(labels) == len(thresholds) - 1 - self.thresholds = thresholds - self.labels = labels - self.allow_low_quality_matches = allow_low_quality_matches - - def __call__(self, match_quality_matrix): - """ - Args: - match_quality_matrix (Tensor[float]): an MxN tensor, containing the pairwise quality between M ground-truth elements and N predicted - elements. All elements must be >= 0 (due to the us of `torch.nonzero` for selecting indices in :meth:`set_low_quality_matches_`). 
- Returns: - matches (Tensor[int64]): a vector of length N, where matches[i] is a matched ground-truth index in [0, M) - match_labels (Tensor[int8]): a vector of length N, where pred_labels[i] indicates true or false positive or ignored - """ - assert match_quality_matrix.dim() == 2 - if match_quality_matrix.numel() == 0: - default_matches = match_quality_matrix.new_full((match_quality_matrix.size(1),), 0, dtype=torch.int64) - # When no gt boxes exist, we define IOU = 0 and therefore set labels - # to `self.labels[0]`, which usually defaults to background class 0 - # To choose to ignore instead, - # can make labels=[-1,0,-1,1] + set appropriate thresholds - default_match_labels = match_quality_matrix.new_full( - (match_quality_matrix.size(1),), self.labels[0], dtype=torch.int8 - ) - return default_matches, default_match_labels - - assert torch.all(match_quality_matrix >= 0) - - # match_quality_matrix is M (gt) x N (predicted) - # Max over gt elements (dim 0) to find best gt candidate for each prediction - matched_vals, matches = match_quality_matrix.max(dim=0) - - match_labels = matches.new_full(matches.size(), 1, dtype=torch.int8) - - for l, low, high in zip(self.labels, self.thresholds[:-1], self.thresholds[1:]): - low_high = (matched_vals >= low) & (matched_vals < high) - match_labels[low_high] = l - - if self.allow_low_quality_matches: - self.set_low_quality_matches_(match_labels, match_quality_matrix) - - return matches, match_labels - - def set_low_quality_matches_(self, match_labels, match_quality_matrix): - """ - Produce additional matches for predictions that have only low-quality matches. - Specifically, for each ground-truth G find the set of predictions that have - maximum overlap with it (including ties); for each prediction in that set, if - it is unmatched, then match it to the ground-truth G. - This function implements the RPN assignment case (i) - in Sec. 3.1.2 of Faster R-CNN. - """ - # For each gt, find the prediction with which it has highest quality - highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1) - # Find the highest quality match available, even if it is low, including ties. - # Note that the matches qualities must be positive due to the use of - # `torch.nonzero`. - of_quality_inds = match_quality_matrix == highest_quality_foreach_gt[:, None] - if of_quality_inds.dim() == 0: - (_, pred_inds_with_highest_quality) = of_quality_inds.unsqueeze(0).nonzero().unbind(1) - else: - (_, pred_inds_with_highest_quality) = of_quality_inds.nonzero().unbind(1) - match_labels[pred_inds_with_highest_quality] = 1 - - -class RPNOutputs(object): - def __init__( - self, - box2box_transform, - anchor_matcher, - batch_size_per_image, - positive_fraction, - images, - pred_objectness_logits, - pred_anchor_deltas, - anchors, - boundary_threshold=0, - gt_boxes=None, - smooth_l1_beta=0.0, - ): - """ - Args: - box2box_transform (Box2BoxTransform): :class:`Box2BoxTransform` instance for anchor-proposal transformations. - anchor_matcher (Matcher): :class:`Matcher` instance for matching anchors to ground-truth boxes; used to determine training labels. - batch_size_per_image (int): number of proposals to sample when training - positive_fraction (float): target fraction of sampled proposals that should be positive - images (ImageList): :class:`ImageList` instance representing N input images - pred_objectness_logits (list[Tensor]): A list of L elements. Element i is a tensor of shape (N, A, Hi, W) - pred_anchor_deltas (list[Tensor]): A list of L elements. 
Element i is a tensor of shape (N, A*4, Hi, Wi) - anchors (list[torch.Tensor]): nested list of boxes. anchors[i][j] at (n, l) stores anchor array for feature map l - boundary_threshold (int): if >= 0, then anchors that extend beyond the image boundary by more than boundary_thresh are not used in training. - gt_boxes (list[Boxes], optional): A list of N elements. - smooth_l1_beta (float): The transition point between L1 and L2 lossn. When set to 0, the loss becomes L1. When +inf, it is ignored - """ - self.box2box_transform = box2box_transform - self.anchor_matcher = anchor_matcher - self.batch_size_per_image = batch_size_per_image - self.positive_fraction = positive_fraction - self.pred_objectness_logits = pred_objectness_logits - self.pred_anchor_deltas = pred_anchor_deltas - - self.anchors = anchors - self.gt_boxes = gt_boxes - self.num_feature_maps = len(pred_objectness_logits) - self.num_images = len(images) - self.boundary_threshold = boundary_threshold - self.smooth_l1_beta = smooth_l1_beta - - def _get_ground_truth(self): - raise NotImplementedError() - - def predict_proposals(self): - # pred_anchor_deltas: (L, N, ? Hi, Wi) - # anchors:(N, L, -1, B) - # here we loop over specific feature map, NOT images - proposals = [] - anchors = self.anchors.transpose(0, 1) - for anchors_i, pred_anchor_deltas_i in zip(anchors, self.pred_anchor_deltas): - B = anchors_i.size(-1) - N, _, Hi, Wi = pred_anchor_deltas_i.shape - anchors_i = anchors_i.flatten(start_dim=0, end_dim=1) - pred_anchor_deltas_i = pred_anchor_deltas_i.view(N, -1, B, Hi, Wi).permute(0, 3, 4, 1, 2).reshape(-1, B) - proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i) - # Append feature map proposals with shape (N, Hi*Wi*A, B) - proposals.append(proposals_i.view(N, -1, B)) - proposals = torch.stack(proposals) - return proposals - - def predict_objectness_logits(self): - """ - Returns: - pred_objectness_logits (list[Tensor]) -> (N, Hi*Wi*A). - """ - pred_objectness_logits = [ - # Reshape: (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) - score.permute(0, 2, 3, 1).reshape(self.num_images, -1) - for score in self.pred_objectness_logits - ] - return pred_objectness_logits - - -# Main Classes -class Conv2d(nn.Conv2d): - def __init__(self, *args, **kwargs): - norm = kwargs.pop("norm", None) - activation = kwargs.pop("activation", None) - super().__init__(*args, **kwargs) - - self.norm = norm - self.activation = activation - - def forward(self, x): - if x.numel() == 0 and self.training: - assert not isinstance(self.norm, nn.SyncBatchNorm) - if x.numel() == 0: - assert not isinstance(self.norm, nn.GroupNorm) - output_shape = [ - (i + 2 * p - (di * (k - 1) + 1)) // s + 1 - for i, p, di, k, s in zip( - x.shape[-2:], - self.padding, - self.dilation, - self.kernel_size, - self.stride, - ) - ] - output_shape = [x.shape[0], self.weight.shape[0]] + output_shape - empty = _NewEmptyTensorOp.apply(x, output_shape) - if self.training: - _dummy = sum(x.view(-1)[0] for x in self.parameters()) * 0.0 - return empty + _dummy - else: - return empty - - x = super().forward(x) - if self.norm is not None: - x = self.norm(x) - if self.activation is not None: - x = self.activation(x) - return x - - -class LastLevelMaxPool(nn.Module): - """ - This module is used in the original FPN to generate a downsampled P6 feature from P5. 
- """ - - def __init__(self): - super().__init__() - self.num_levels = 1 - self.in_feature = "p5" - - def forward(self, x): - return [nn.functional.max_pool2d(x, kernel_size=1, stride=2, padding=0)] - - -class LastLevelP6P7(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7 from C5 feature. - """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.num_levels = 2 - self.in_feature = "res5" - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(nn.functional.relu(p6)) - return [p6, p7] - - -class BasicStem(nn.Module): - def __init__(self, in_channels=3, out_channels=64, norm="BN", caffe_maxpool=False): - super().__init__() - self.conv1 = Conv2d( - in_channels, - out_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False, - norm=get_norm(norm, out_channels), - ) - self.caffe_maxpool = caffe_maxpool - # use pad 1 instead of pad zero - - def forward(self, x): - x = self.conv1(x) - x = nn.functional.relu_(x) - if self.caffe_maxpool: - x = nn.functional.max_pool2d(x, kernel_size=3, stride=2, padding=0, ceil_mode=True) - else: - x = nn.functional.max_pool2d(x, kernel_size=3, stride=2, padding=1) - return x - - @property - def out_channels(self): - return self.conv1.out_channels - - @property - def stride(self): - return 4 # = stride 2 conv -> stride 2 max pool - - -class ResNetBlockBase(nn.Module): - def __init__(self, in_channels, out_channels, stride): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.stride = stride - - def freeze(self): - for p in self.parameters(): - p.requires_grad = False - return self - - -class BottleneckBlock(ResNetBlockBase): - def __init__( - self, - in_channels, - out_channels, - bottleneck_channels, - stride=1, - num_groups=1, - norm="BN", - stride_in_1x1=False, - dilation=1, - ): - super().__init__(in_channels, out_channels, stride) - - if in_channels != out_channels: - self.shortcut = Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - bias=False, - norm=get_norm(norm, out_channels), - ) - else: - self.shortcut = None - - # The original MSRA ResNet models have stride in the first 1x1 conv - # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have - # stride in the 3x3 conv - stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride) - - self.conv1 = Conv2d( - in_channels, - bottleneck_channels, - kernel_size=1, - stride=stride_1x1, - bias=False, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv2 = Conv2d( - bottleneck_channels, - bottleneck_channels, - kernel_size=3, - stride=stride_3x3, - padding=1 * dilation, - bias=False, - groups=num_groups, - dilation=dilation, - norm=get_norm(norm, bottleneck_channels), - ) - - self.conv3 = Conv2d( - bottleneck_channels, - out_channels, - kernel_size=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - - def forward(self, x): - out = self.conv1(x) - out = nn.functional.relu_(out) - - out = self.conv2(out) - out = nn.functional.relu_(out) - - out = self.conv3(out) - - if self.shortcut is not None: - shortcut = self.shortcut(x) - else: - shortcut = x - - out += shortcut - out = nn.functional.relu_(out) - return out - - -class Backbone(nn.Module, metaclass=ABCMeta): - def __init__(self): - super().__init__() - - @abstractmethod - def forward(self): - pass - - @property - def size_divisibility(self): - """ - Some 
backbones require the input height and width to be divisible by a specific integer. This is - typically true for encoder / decoder type networks with lateral connection (e.g., FPN) for which feature maps need to match - dimension in the "bottom up" and "top down" paths. Set to 0 if no specific input size divisibility is required. - """ - return 0 - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], - stride=self._out_feature_strides[name], - ) - for name in self._out_features - } - - @property - def out_features(self): - """deprecated""" - return self._out_features - - @property - def out_feature_strides(self): - """deprecated""" - return {f: self._out_feature_strides[f] for f in self._out_features} - - @property - def out_feature_channels(self): - """deprecated""" - return {f: self._out_feature_channels[f] for f in self._out_features} - - -class ResNet(Backbone): - def __init__(self, stem, stages, num_classes=None, out_features=None): - """ - Args: - stem (nn.Module): a stem module - stages (list[list[ResNetBlock]]): several (typically 4) stages, each contains multiple :class:`ResNetBlockBase`. - num_classes (None or int): if None, will not perform classification. - out_features (list[str]): name of the layers whose outputs should be returned in forward. Can be anything in: - "stem", "linear", or "res2" ... If None, will return the output of the last layer. - """ - super(ResNet, self).__init__() - self.stem = stem - self.num_classes = num_classes - - current_stride = self.stem.stride - self._out_feature_strides = {"stem": current_stride} - self._out_feature_channels = {"stem": self.stem.out_channels} - - self.stages_and_names = [] - for i, blocks in enumerate(stages): - for block in blocks: - assert isinstance(block, ResNetBlockBase), block - curr_channels = block.out_channels - stage = nn.Sequential(*blocks) - name = "res" + str(i + 2) - self.add_module(name, stage) - self.stages_and_names.append((stage, name)) - self._out_feature_strides[name] = current_stride = int( - current_stride * np.prod([k.stride for k in blocks]) - ) - self._out_feature_channels[name] = blocks[-1].out_channels - - if num_classes is not None: - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - self.linear = nn.Linear(curr_channels, num_classes) - - # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour": - # "The 1000-way fully-connected layer is initialized by - # drawing weights from a zero-mean Gaussian with std of 0.01." 
- nn.init.normal_(self.linear.weight, stddev=0.01) - name = "linear" - - if out_features is None: - out_features = [name] - self._out_features = out_features - assert len(self._out_features) - children = [x[0] for x in self.named_children()] - for out_feature in self._out_features: - assert out_feature in children, "Available children: {}".format(", ".join(children)) - - def forward(self, x): - outputs = {} - x = self.stem(x) - if "stem" in self._out_features: - outputs["stem"] = x - for stage, name in self.stages_and_names: - x = stage(x) - if name in self._out_features: - outputs[name] = x - if self.num_classes is not None: - x = self.avgpool(x) - x = self.linear(x) - if "linear" in self._out_features: - outputs["linear"] = x - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], - stride=self._out_feature_strides[name], - ) - for name in self._out_features - } - - @staticmethod - def make_stage( - block_class, - num_blocks, - first_stride=None, - *, - in_channels, - out_channels, - **kwargs, - ): - """ - Usually, layers that produce the same feature map spatial size - are defined as one "stage". - Under such definition, stride_per_block[1:] should all be 1. - """ - if first_stride is not None: - assert "stride" not in kwargs and "stride_per_block" not in kwargs - kwargs["stride_per_block"] = [first_stride] + [1] * (num_blocks - 1) - blocks = [] - for i in range(num_blocks): - curr_kwargs = {} - for k, v in kwargs.items(): - if k.endswith("_per_block"): - assert ( - len(v) == num_blocks - ), f"Argument '{k}' of make_stage should have the same length as num_blocks={num_blocks}." - newk = k[: -len("_per_block")] - assert newk not in kwargs, f"Cannot call make_stage with both {k} and {newk}!" - curr_kwargs[newk] = v[i] - else: - curr_kwargs[k] = v - - blocks.append(block_class(in_channels=in_channels, out_channels=out_channels, **curr_kwargs)) - in_channels = out_channels - - return blocks - - -class ROIPooler(nn.Module): - """ - Region of interest feature map pooler that supports pooling from one or more - feature maps. - """ - - def __init__( - self, - output_size, - scales, - sampling_ratio, - canonical_box_size=224, - canonical_level=4, - ): - super().__init__() - # assumption that stride is a power of 2. 
- min_level = -math.log2(scales[0]) - max_level = -math.log2(scales[-1]) - - # a bunch of testing - assert math.isclose(min_level, int(min_level)) and math.isclose(max_level, int(max_level)) - assert len(scales) == max_level - min_level + 1, "not pyramid" - assert 0 < min_level and min_level <= max_level - if isinstance(output_size, int): - output_size = (output_size, output_size) - assert len(output_size) == 2 and isinstance(output_size[0], int) and isinstance(output_size[1], int) - if len(scales) > 1: - assert min_level <= canonical_level and canonical_level <= max_level - assert canonical_box_size > 0 - - self.output_size = output_size - self.min_level = int(min_level) - self.max_level = int(max_level) - self.level_poolers = nn.ModuleList(RoIPool(output_size, spatial_scale=scale) for scale in scales) - self.canonical_level = canonical_level - self.canonical_box_size = canonical_box_size - - def forward(self, feature_maps, boxes): - """ - Args: - feature_maps: List[torch.Tensor(N,C,W,H)] - box_lists: list[torch.Tensor]) - Returns: - A tensor of shape(N*B, Channels, output_size, output_size) - """ - x = list(feature_maps.values()) - num_level_assignments = len(self.level_poolers) - assert len(x) == num_level_assignments and len(boxes) == x[0].size(0) - - pooler_fmt_boxes = convert_boxes_to_pooler_format(boxes) - - if num_level_assignments == 1: - return self.level_poolers[0](x[0], pooler_fmt_boxes) - - level_assignments = assign_boxes_to_levels( - boxes, - self.min_level, - self.max_level, - self.canonical_box_size, - self.canonical_level, - ) - - num_boxes = len(pooler_fmt_boxes) - num_channels = x[0].shape[1] - output_size = self.output_size[0] - - dtype, device = x[0].dtype, x[0].device - output = torch.zeros( - (num_boxes, num_channels, output_size, output_size), - dtype=dtype, - device=device, - ) - - for level, (x_level, pooler) in enumerate(zip(x, self.level_poolers)): - inds = torch.nonzero(level_assignments == level).squeeze(1) - pooler_fmt_boxes_level = pooler_fmt_boxes[inds] - output[inds] = pooler(x_level, pooler_fmt_boxes_level) - - return output - - -class ROIOutputs(object): - def __init__(self, cfg, training=False): - self.smooth_l1_beta = cfg.ROI_BOX_HEAD.SMOOTH_L1_BETA - self.box2box_transform = Box2BoxTransform(weights=cfg.ROI_BOX_HEAD.BBOX_REG_WEIGHTS) - self.training = training - self.score_thresh = cfg.ROI_HEADS.SCORE_THRESH_TEST - self.min_detections = cfg.MIN_DETECTIONS - self.max_detections = cfg.MAX_DETECTIONS - - nms_thresh = cfg.ROI_HEADS.NMS_THRESH_TEST - if not isinstance(nms_thresh, list): - nms_thresh = [nms_thresh] - self.nms_thresh = nms_thresh - - def _predict_boxes(self, proposals, box_deltas, preds_per_image): - num_pred = box_deltas.size(0) - B = proposals[0].size(-1) - K = box_deltas.size(-1) // B - box_deltas = box_deltas.view(num_pred * K, B) - proposals = torch.cat(proposals, dim=0).unsqueeze(-2).expand(num_pred, K, B) - proposals = proposals.reshape(-1, B) - boxes = self.box2box_transform.apply_deltas(box_deltas, proposals) - return boxes.view(num_pred, K * B).split(preds_per_image, dim=0) - - def _predict_objs(self, obj_logits, preds_per_image): - probs = nn.functional.softmax(obj_logits, dim=-1) - probs = probs.split(preds_per_image, dim=0) - return probs - - def _predict_attrs(self, attr_logits, preds_per_image): - attr_logits = attr_logits[..., :-1].softmax(-1) - attr_probs, attrs = attr_logits.max(-1) - return attr_probs.split(preds_per_image, dim=0), attrs.split(preds_per_image, dim=0) - - @torch.no_grad() - def inference( - self, - 
obj_logits, - attr_logits, - box_deltas, - pred_boxes, - features, - sizes, - scales=None, - ): - # only the pred boxes is the - preds_per_image = [p.size(0) for p in pred_boxes] - boxes_all = self._predict_boxes(pred_boxes, box_deltas, preds_per_image) - obj_scores_all = self._predict_objs(obj_logits, preds_per_image) # list of length N - attr_probs_all, attrs_all = self._predict_attrs(attr_logits, preds_per_image) - features = features.split(preds_per_image, dim=0) - - # fun for each image too, also I can experiment and do multiple images - final_results = [] - zipped = zip(boxes_all, obj_scores_all, attr_probs_all, attrs_all, sizes) - for i, (boxes, obj_scores, attr_probs, attrs, size) in enumerate(zipped): - for nms_t in self.nms_thresh: - outputs = do_nms( - boxes, - obj_scores, - size, - self.score_thresh, - nms_t, - self.min_detections, - self.max_detections, - ) - if outputs is not None: - max_boxes, max_scores, classes, ids = outputs - break - - if scales is not None: - scale_yx = scales[i] - max_boxes[:, 0::2] *= scale_yx[1] - max_boxes[:, 1::2] *= scale_yx[0] - - final_results.append( - ( - max_boxes, - classes, - max_scores, - attrs[ids], - attr_probs[ids], - features[i][ids], - ) - ) - boxes, classes, class_probs, attrs, attr_probs, roi_features = map(list, zip(*final_results)) - return boxes, classes, class_probs, attrs, attr_probs, roi_features - - def training(self, obj_logits, attr_logits, box_deltas, pred_boxes, features, sizes): - pass - - def __call__( - self, - obj_logits, - attr_logits, - box_deltas, - pred_boxes, - features, - sizes, - scales=None, - ): - if self.training: - raise NotImplementedError() - return self.inference( - obj_logits, - attr_logits, - box_deltas, - pred_boxes, - features, - sizes, - scales=scales, - ) - - -class Res5ROIHeads(nn.Module): - """ - ROIHeads perform all per-region computation in an R-CNN. - It contains logic of cropping the regions, extract per-region features - (by the res-5 block in this case), and make per-region predictions. - """ - - def __init__(self, cfg, input_shape): - super().__init__() - self.batch_size_per_image = cfg.RPN.BATCH_SIZE_PER_IMAGE - self.positive_sample_fraction = cfg.ROI_HEADS.POSITIVE_FRACTION - self.in_features = cfg.ROI_HEADS.IN_FEATURES - self.num_classes = cfg.ROI_HEADS.NUM_CLASSES - self.proposal_append_gt = cfg.ROI_HEADS.PROPOSAL_APPEND_GT - self.feature_strides = {k: v.stride for k, v in input_shape.items()} - self.feature_channels = {k: v.channels for k, v in input_shape.items()} - self.cls_agnostic_bbox_reg = cfg.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG - self.stage_channel_factor = 2**3 # res5 is 8x res2 - self.out_channels = cfg.RESNETS.RES2_OUT_CHANNELS * self.stage_channel_factor - - # self.proposal_matcher = Matcher( - # cfg.ROI_HEADS.IOU_THRESHOLDS, - # cfg.ROI_HEADS.IOU_LABELS, - # allow_low_quality_matches=False, - # ) - - pooler_resolution = cfg.ROI_BOX_HEAD.POOLER_RESOLUTION - pooler_scales = (1.0 / self.feature_strides[self.in_features[0]],) - sampling_ratio = cfg.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO - res5_halve = cfg.ROI_BOX_HEAD.RES5HALVE - use_attr = cfg.ROI_BOX_HEAD.ATTR - num_attrs = cfg.ROI_BOX_HEAD.NUM_ATTRS - - self.pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - ) - - self.res5 = self._build_res5_block(cfg) - if not res5_halve: - """ - Modifications for VG in RoI heads: - 1. Change the stride of conv1 and shortcut in Res5.Block1 from 2 to 1 - 2. 
Modifying all conv2 with (padding: 1 --> 2) and (dilation: 1 --> 2) - """ - self.res5[0].conv1.stride = (1, 1) - self.res5[0].shortcut.stride = (1, 1) - for i in range(3): - self.res5[i].conv2.padding = (2, 2) - self.res5[i].conv2.dilation = (2, 2) - - self.box_predictor = FastRCNNOutputLayers( - self.out_channels, - self.num_classes, - self.cls_agnostic_bbox_reg, - use_attr=use_attr, - num_attrs=num_attrs, - ) - - def _build_res5_block(self, cfg): - stage_channel_factor = self.stage_channel_factor # res5 is 8x res2 - num_groups = cfg.RESNETS.NUM_GROUPS - width_per_group = cfg.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group * stage_channel_factor - out_channels = self.out_channels - stride_in_1x1 = cfg.RESNETS.STRIDE_IN_1X1 - norm = cfg.RESNETS.NORM - - blocks = ResNet.make_stage( - BottleneckBlock, - 3, - first_stride=2, - in_channels=out_channels // 2, - bottleneck_channels=bottleneck_channels, - out_channels=out_channels, - num_groups=num_groups, - norm=norm, - stride_in_1x1=stride_in_1x1, - ) - return nn.Sequential(*blocks) - - def _shared_roi_transform(self, features, boxes): - x = self.pooler(features, boxes) - return self.res5(x) - - def forward(self, features, proposal_boxes, gt_boxes=None): - if self.training: - """ - see https://github.com/airsplay/py-bottom-up-attention/\ - blob/master/detectron2/modeling/roi_heads/roi_heads.py - """ - raise NotImplementedError() - - assert not proposal_boxes[0].requires_grad - box_features = self._shared_roi_transform(features, proposal_boxes) - feature_pooled = box_features.mean(dim=[2, 3]) # pooled to 1x1 - obj_logits, attr_logits, pred_proposal_deltas = self.box_predictor(feature_pooled) - return obj_logits, attr_logits, pred_proposal_deltas, feature_pooled - - -class AnchorGenerator(nn.Module): - """ - For a set of image sizes and feature maps, computes a set of anchors. - """ - - def __init__(self, cfg, input_shape: List[ShapeSpec]): - super().__init__() - sizes = cfg.ANCHOR_GENERATOR.SIZES - aspect_ratios = cfg.ANCHOR_GENERATOR.ASPECT_RATIOS - self.strides = [x.stride for x in input_shape] - self.offset = cfg.ANCHOR_GENERATOR.OFFSET - assert 0.0 <= self.offset < 1.0, self.offset - - """ - sizes (list[list[int]]): sizes[i] is the list of anchor sizes for feat map i - 1. given in absolute lengths in units of the input image; - 2. they do not dynamically scale if the input image size changes. - aspect_ratios (list[list[float]]) - strides (list[int]): stride of each input feature. - """ - - self.num_features = len(self.strides) - self.cell_anchors = nn.ParameterList(self._calculate_anchors(sizes, aspect_ratios)) - self._spacial_feat_dim = 4 - - def _calculate_anchors(self, sizes, aspect_ratios): - # If one size (or aspect ratio) is specified and there are multiple feature - # maps, then we "broadcast" anchors of that single size (or aspect ratio) - if len(sizes) == 1: - sizes *= self.num_features - if len(aspect_ratios) == 1: - aspect_ratios *= self.num_features - assert self.num_features == len(sizes) - assert self.num_features == len(aspect_ratios) - - cell_anchors = [self.generate_cell_anchors(s, a).float() for s, a in zip(sizes, aspect_ratios)] - - return cell_anchors - - @property - def box_dim(self): - return self._spacial_feat_dim - - @property - def num_cell_anchors(self): - """ - Returns: - list[int]: Each int is the number of anchors at every pixel location, on that feature map. 
- """ - return [len(cell_anchors) for cell_anchors in self.cell_anchors] - - def grid_anchors(self, grid_sizes): - anchors = [] - for size, stride, base_anchors in zip(grid_sizes, self.strides, self.cell_anchors): - shift_x, shift_y = _create_grid_offsets(size, stride, self.offset, base_anchors.device) - shifts = torch.stack((shift_x, shift_y, shift_x, shift_y), dim=1) - - anchors.append((shifts.view(-1, 1, 4) + base_anchors.view(1, -1, 4)).reshape(-1, 4)) - - return anchors - - def generate_cell_anchors(self, sizes=(32, 64, 128, 256, 512), aspect_ratios=(0.5, 1, 2)): - """ - anchors are continuous geometric rectangles - centered on one feature map point sample. - We can later build the set of anchors - for the entire feature map by tiling these tensors - """ - - anchors = [] - for size in sizes: - area = size**2.0 - for aspect_ratio in aspect_ratios: - w = math.sqrt(area / aspect_ratio) - h = aspect_ratio * w - x0, y0, x1, y1 = -w / 2.0, -h / 2.0, w / 2.0, h / 2.0 - anchors.append([x0, y0, x1, y1]) - return nn.Parameter(torch.tensor(anchors)) - - def forward(self, features): - """ - Args: - features List[torch.Tensor]: list of feature maps on which to generate anchors. - Returns: - torch.Tensor: a list of #image elements. - """ - num_images = features[0].size(0) - grid_sizes = [feature_map.shape[-2:] for feature_map in features] - anchors_over_all_feature_maps = self.grid_anchors(grid_sizes) - anchors_over_all_feature_maps = torch.stack(anchors_over_all_feature_maps) - return anchors_over_all_feature_maps.unsqueeze(0).repeat_interleave(num_images, dim=0) - - -class RPNHead(nn.Module): - """ - RPN classification and regression heads. Uses a 3x3 conv to produce a shared - hidden state from which one 1x1 conv predicts objectness logits for each anchor - and a second 1x1 conv predicts bounding-box deltas specifying how to deform - each anchor into an object proposal. - """ - - def __init__(self, cfg, input_shape: List[ShapeSpec]): - super().__init__() - - # Standard RPN is shared across levels: - in_channels = [s.channels for s in input_shape] - assert len(set(in_channels)) == 1, "Each level must have the same channel!" 
- in_channels = in_channels[0] - - anchor_generator = AnchorGenerator(cfg, input_shape) - num_cell_anchors = anchor_generator.num_cell_anchors - box_dim = anchor_generator.box_dim - assert len(set(num_cell_anchors)) == 1, "Each level must have the same number of cell anchors" - num_cell_anchors = num_cell_anchors[0] - - if cfg.PROPOSAL_GENERATOR.HIDDEN_CHANNELS == -1: - hid_channels = in_channels - else: - hid_channels = cfg.PROPOSAL_GENERATOR.HIDDEN_CHANNELS - # Modifications for VG in RPN (modeling/proposal_generator/rpn.py) - # Use hidden dim instead fo the same dim as Res4 (in_channels) - - # 3x3 conv for the hidden representation - self.conv = nn.Conv2d(in_channels, hid_channels, kernel_size=3, stride=1, padding=1) - # 1x1 conv for predicting objectness logits - self.objectness_logits = nn.Conv2d(hid_channels, num_cell_anchors, kernel_size=1, stride=1) - # 1x1 conv for predicting box2box transform deltas - self.anchor_deltas = nn.Conv2d(hid_channels, num_cell_anchors * box_dim, kernel_size=1, stride=1) - - for layer in [self.conv, self.objectness_logits, self.anchor_deltas]: - nn.init.normal_(layer.weight, std=0.01) - nn.init.constant_(layer.bias, 0) - - def forward(self, features): - """ - Args: - features (list[Tensor]): list of feature maps - """ - pred_objectness_logits = [] - pred_anchor_deltas = [] - for x in features: - t = nn.functional.relu(self.conv(x)) - pred_objectness_logits.append(self.objectness_logits(t)) - pred_anchor_deltas.append(self.anchor_deltas(t)) - return pred_objectness_logits, pred_anchor_deltas - - -class RPN(nn.Module): - """ - Region Proposal Network, introduced by the Faster R-CNN paper. - """ - - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super().__init__() - - self.min_box_side_len = cfg.PROPOSAL_GENERATOR.MIN_SIZE - self.in_features = cfg.RPN.IN_FEATURES - self.nms_thresh = cfg.RPN.NMS_THRESH - self.batch_size_per_image = cfg.RPN.BATCH_SIZE_PER_IMAGE - self.positive_fraction = cfg.RPN.POSITIVE_FRACTION - self.smooth_l1_beta = cfg.RPN.SMOOTH_L1_BETA - self.loss_weight = cfg.RPN.LOSS_WEIGHT - - self.pre_nms_topk = { - True: cfg.RPN.PRE_NMS_TOPK_TRAIN, - False: cfg.RPN.PRE_NMS_TOPK_TEST, - } - self.post_nms_topk = { - True: cfg.RPN.POST_NMS_TOPK_TRAIN, - False: cfg.RPN.POST_NMS_TOPK_TEST, - } - self.boundary_threshold = cfg.RPN.BOUNDARY_THRESH - - self.anchor_generator = AnchorGenerator(cfg, [input_shape[f] for f in self.in_features]) - self.box2box_transform = Box2BoxTransform(weights=cfg.RPN.BBOX_REG_WEIGHTS) - self.anchor_matcher = Matcher( - cfg.RPN.IOU_THRESHOLDS, - cfg.RPN.IOU_LABELS, - allow_low_quality_matches=True, - ) - self.rpn_head = RPNHead(cfg, [input_shape[f] for f in self.in_features]) - - def training(self, images, image_shapes, features, gt_boxes): - pass - - def inference(self, outputs, images, image_shapes, features, gt_boxes=None): - outputs = find_top_rpn_proposals( - outputs.predict_proposals(), - outputs.predict_objectness_logits(), - images, - image_shapes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_side_len, - self.training, - ) - - results = [] - for img in outputs: - im_boxes, img_box_logits = img - img_box_logits, inds = img_box_logits.sort(descending=True) - im_boxes = im_boxes[inds] - results.append((im_boxes, img_box_logits)) - - (proposal_boxes, logits) = tuple(map(list, zip(*results))) - return proposal_boxes, logits - - def forward(self, images, image_shapes, features, gt_boxes=None): - """ - Args: - images (torch.Tensor): input images of 
length `N` - features (dict[str: Tensor]) - gt_instances - """ - # features is dict, key = block level, v = feature_map - features = [features[f] for f in self.in_features] - pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features) - anchors = self.anchor_generator(features) - outputs = RPNOutputs( - self.box2box_transform, - self.anchor_matcher, - self.batch_size_per_image, - self.positive_fraction, - images, - pred_objectness_logits, - pred_anchor_deltas, - anchors, - self.boundary_threshold, - gt_boxes, - self.smooth_l1_beta, - ) - # For RPN-only models, the proposals are the final output - - if self.training: - raise NotImplementedError() - return self.training(outputs, images, image_shapes, features, gt_boxes) - else: - return self.inference(outputs, images, image_shapes, features, gt_boxes) - - -class FastRCNNOutputLayers(nn.Module): - """ - Two linear layers for predicting Fast R-CNN outputs: - (1) proposal-to-detection box regression deltas - (2) classification scores - """ - - def __init__( - self, - input_size, - num_classes, - cls_agnostic_bbox_reg, - box_dim=4, - use_attr=False, - num_attrs=-1, - ): - """ - Args: - input_size (int): channels, or (channels, height, width) - num_classes (int) - cls_agnostic_bbox_reg (bool) - box_dim (int) - """ - super().__init__() - - if not isinstance(input_size, int): - input_size = np.prod(input_size) - - # (do + 1 for background class) - self.cls_score = nn.Linear(input_size, num_classes + 1) - num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes - self.bbox_pred = nn.Linear(input_size, num_bbox_reg_classes * box_dim) - - self.use_attr = use_attr - if use_attr: - """ - Modifications for VG in RoI heads - Embedding: {num_classes + 1} --> {input_size // 8} - Linear: {input_size + input_size // 8} --> {input_size // 4} - Linear: {input_size // 4} --> {num_attrs + 1} - """ - self.cls_embedding = nn.Embedding(num_classes + 1, input_size // 8) - self.fc_attr = nn.Linear(input_size + input_size // 8, input_size // 4) - self.attr_score = nn.Linear(input_size // 4, num_attrs + 1) - - nn.init.normal_(self.cls_score.weight, std=0.01) - nn.init.normal_(self.bbox_pred.weight, std=0.001) - for item in [self.cls_score, self.bbox_pred]: - nn.init.constant_(item.bias, 0) - - def forward(self, roi_features): - if roi_features.dim() > 2: - roi_features = torch.flatten(roi_features, start_dim=1) - scores = self.cls_score(roi_features) - proposal_deltas = self.bbox_pred(roi_features) - if self.use_attr: - _, max_class = scores.max(-1) # [b, c] --> [b] - cls_emb = self.cls_embedding(max_class) # [b] --> [b, 256] - roi_features = torch.cat([roi_features, cls_emb], -1) # [b, 2048] + [b, 256] --> [b, 2304] - roi_features = self.fc_attr(roi_features) - roi_features = nn.functional.relu(roi_features) - attr_scores = self.attr_score(roi_features) - return scores, attr_scores, proposal_deltas - else: - return scores, proposal_deltas - - -class GeneralizedRCNN(nn.Module): - def __init__(self, cfg): - super().__init__() - - self.device = torch.device(cfg.MODEL.DEVICE) - self.backbone = build_backbone(cfg) - self.proposal_generator = RPN(cfg, self.backbone.output_shape()) - self.roi_heads = Res5ROIHeads(cfg, self.backbone.output_shape()) - self.roi_outputs = ROIOutputs(cfg) - self.to(self.device) - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs): - config = kwargs.pop("config", None) - state_dict = kwargs.pop("state_dict", None) - cache_dir = kwargs.pop("cache_dir", None) - from_tf = 
kwargs.pop("from_tf", False) - force_download = kwargs.pop("force_download", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", False) - use_cdn = kwargs.pop("use_cdn", True) - - # Load config if we don't provide a configuration - if not isinstance(config, Config): - config_path = config if config is not None else pretrained_model_name_or_path - # try: - config = Config.from_pretrained( - config_path, - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - ) - - # Load model - if pretrained_model_name_or_path is not None: - if os.path.isdir(pretrained_model_name_or_path): - if os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)): - # Load from a PyTorch checkpoint - archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME) - else: - raise EnvironmentError( - "Error no file named {} found in directory {} ".format( - WEIGHTS_NAME, - pretrained_model_name_or_path, - ) - ) - elif os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path): - archive_file = pretrained_model_name_or_path - elif os.path.isfile(pretrained_model_name_or_path + ".index"): - assert ( - from_tf - ), "We found a TensorFlow checkpoint at {}, please set from_tf to True to load from this checkpoint".format( - pretrained_model_name_or_path + ".index" - ) - archive_file = pretrained_model_name_or_path + ".index" - else: - archive_file = hf_bucket_url( - pretrained_model_name_or_path, - filename=WEIGHTS_NAME, - use_cdn=use_cdn, - ) - - try: - # Load from URL or cache if already cached - resolved_archive_file = cached_path( - archive_file, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - ) - if resolved_archive_file is None: - raise EnvironmentError - except EnvironmentError: - msg = f"Can't load weights for '{pretrained_model_name_or_path}'." - raise EnvironmentError(msg) - - if resolved_archive_file == archive_file: - print("loading weights file {}".format(archive_file)) - else: - print("loading weights file {} from cache at {}".format(archive_file, resolved_archive_file)) - else: - resolved_archive_file = None - - # Instantiate model. - model = cls(config) - - if state_dict is None: - try: - try: - state_dict = torch.load(resolved_archive_file, map_location="cpu") - except Exception: - state_dict = load_checkpoint(resolved_archive_file) - - except Exception: - raise OSError( - "Unable to load weights from pytorch checkpoint file. " - "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. 
" - ) - - missing_keys = [] - unexpected_keys = [] - error_msgs = [] - - # Convert old format to new format if needed from a PyTorch state_dict - old_keys = [] - new_keys = [] - for key in state_dict.keys(): - new_key = None - if "gamma" in key: - new_key = key.replace("gamma", "weight") - if "beta" in key: - new_key = key.replace("beta", "bias") - if new_key: - old_keys.append(key) - new_keys.append(new_key) - for old_key, new_key in zip(old_keys, new_keys): - state_dict[new_key] = state_dict.pop(old_key) - - # copy state_dict so _load_from_state_dict can modify it - metadata = getattr(state_dict, "_metadata", None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - model_to_load = model - model_to_load.load_state_dict(state_dict) - - if model.__class__.__name__ != model_to_load.__class__.__name__: - base_model_state_dict = model_to_load.state_dict().keys() - head_model_state_dict_without_base_prefix = [ - key.split(cls.base_model_prefix + ".")[-1] for key in model.state_dict().keys() - ] - missing_keys.extend(head_model_state_dict_without_base_prefix - base_model_state_dict) - - if len(unexpected_keys) > 0: - print( - f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when" - f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are" - f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task or" - " with another architecture (e.g. initializing a BertForSequenceClassification model from a" - " BertForPreTraining model).\n- This IS NOT expected if you are initializing" - f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly identical" - " (initializing a BertForSequenceClassification model from a BertForSequenceClassification model)." - ) - else: - print(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n") - if len(missing_keys) > 0: - print( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably" - " TRAIN this model on a down-stream task to be able to use it for predictions and inference." - ) - else: - print( - f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at" - f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the checkpoint" - f" was trained on, you can already use {model.__class__.__name__} for predictions without further" - " training." 
- ) - if len(error_msgs) > 0: - raise RuntimeError( - "Error(s) in loading state_dict for {}:\n\t{}".format( - model.__class__.__name__, "\n\t".join(error_msgs) - ) - ) - # Set model in evaluation mode to deactivate DropOut modules by default - model.eval() - - return model - - def forward( - self, - images, - image_shapes, - gt_boxes=None, - proposals=None, - scales_yx=None, - **kwargs, - ): - """ - kwargs: - max_detections (int), return_tensors {"np", "pt", None}, padding {None, - "max_detections"}, pad_value (int), location = {"cuda", "cpu"} - """ - if self.training: - raise NotImplementedError() - return self.inference( - images=images, - image_shapes=image_shapes, - gt_boxes=gt_boxes, - proposals=proposals, - scales_yx=scales_yx, - **kwargs, - ) - - @torch.no_grad() - def inference( - self, - images, - image_shapes, - gt_boxes=None, - proposals=None, - scales_yx=None, - **kwargs, - ): - # run images through backbone - original_sizes = image_shapes * scales_yx - features = self.backbone(images) - - # generate proposals if none are available - if proposals is None: - proposal_boxes, _ = self.proposal_generator(images, image_shapes, features, gt_boxes) - else: - assert proposals is not None - - # pool object features from either gt_boxes, or from proposals - obj_logits, attr_logits, box_deltas, feature_pooled = self.roi_heads(features, proposal_boxes, gt_boxes) - - # prepare FRCNN Outputs and select top proposals - boxes, classes, class_probs, attrs, attr_probs, roi_features = self.roi_outputs( - obj_logits=obj_logits, - attr_logits=attr_logits, - box_deltas=box_deltas, - pred_boxes=proposal_boxes, - features=feature_pooled, - sizes=image_shapes, - scales=scales_yx, - ) - - # will we pad??? - subset_kwargs = { - "max_detections": kwargs.get("max_detections", None), - "return_tensors": kwargs.get("return_tensors", None), - "pad_value": kwargs.get("pad_value", 0), - "padding": kwargs.get("padding", None), - } - preds_per_image = torch.tensor([p.size(0) for p in boxes]) - boxes = pad_list_tensors(boxes, preds_per_image, **subset_kwargs) - classes = pad_list_tensors(classes, preds_per_image, **subset_kwargs) - class_probs = pad_list_tensors(class_probs, preds_per_image, **subset_kwargs) - attrs = pad_list_tensors(attrs, preds_per_image, **subset_kwargs) - attr_probs = pad_list_tensors(attr_probs, preds_per_image, **subset_kwargs) - roi_features = pad_list_tensors(roi_features, preds_per_image, **subset_kwargs) - subset_kwargs["padding"] = None - preds_per_image = pad_list_tensors(preds_per_image, None, **subset_kwargs) - sizes = pad_list_tensors(image_shapes, None, **subset_kwargs) - normalized_boxes = norm_box(boxes, original_sizes) - return OrderedDict( - { - "obj_ids": classes, - "obj_probs": class_probs, - "attr_ids": attrs, - "attr_probs": attr_probs, - "boxes": boxes, - "sizes": sizes, - "preds_per_image": preds_per_image, - "roi_features": roi_features, - "normalized_boxes": normalized_boxes, - } - ) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/euctwfreq.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/euctwfreq.py deleted file mode 100644 index 4900ccc160a1dbf4de3a01c234735c21dd4417d6..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/euctwfreq.py +++ /dev/null @@ -1,388 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. 
-# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -# EUCTW frequency table -# Converted from big5 work -# by Taiwan's Mandarin Promotion Council -# - -# 128 --> 0.42261 -# 256 --> 0.57851 -# 512 --> 0.74851 -# 1024 --> 0.89384 -# 2048 --> 0.97583 -# -# Idea Distribution Ratio = 0.74851/(1-0.74851) =2.98 -# Random Distribution Ration = 512/(5401-512)=0.105 -# -# Typical Distribution Ratio about 25% of Ideal one, still much higher than RDR - -EUCTW_TYPICAL_DISTRIBUTION_RATIO = 0.75 - -# Char to FreqOrder table -EUCTW_TABLE_SIZE = 5376 - -# fmt: off -EUCTW_CHAR_TO_FREQ_ORDER = ( - 1, 1800, 1506, 255, 1431, 198, 9, 82, 6, 7310, 177, 202, 3615, 1256, 2808, 110, # 2742 - 3735, 33, 3241, 261, 76, 44, 2113, 16, 2931, 2184, 1176, 659, 3868, 26, 3404, 2643, # 2758 - 1198, 3869, 3313, 4060, 410, 2211, 302, 590, 361, 1963, 8, 204, 58, 4296, 7311, 1931, # 2774 - 63, 7312, 7313, 317, 1614, 75, 222, 159, 4061, 2412, 1480, 7314, 3500, 3068, 224, 2809, # 2790 - 3616, 3, 10, 3870, 1471, 29, 2774, 1135, 2852, 1939, 873, 130, 3242, 1123, 312, 7315, # 2806 - 4297, 2051, 507, 252, 682, 7316, 142, 1914, 124, 206, 2932, 34, 3501, 3173, 64, 604, # 2822 - 7317, 2494, 1976, 1977, 155, 1990, 645, 641, 1606, 7318, 3405, 337, 72, 406, 7319, 80, # 2838 - 630, 238, 3174, 1509, 263, 939, 1092, 2644, 756, 1440, 1094, 3406, 449, 69, 2969, 591, # 2854 - 179, 2095, 471, 115, 2034, 1843, 60, 50, 2970, 134, 806, 1868, 734, 2035, 3407, 180, # 2870 - 995, 1607, 156, 537, 2893, 688, 7320, 319, 1305, 779, 2144, 514, 2374, 298, 4298, 359, # 2886 - 2495, 90, 2707, 1338, 663, 11, 906, 1099, 2545, 20, 2436, 182, 532, 1716, 7321, 732, # 2902 - 1376, 4062, 1311, 1420, 3175, 25, 2312, 1056, 113, 399, 382, 1949, 242, 3408, 2467, 529, # 2918 - 3243, 475, 1447, 3617, 7322, 117, 21, 656, 810, 1297, 2295, 2329, 3502, 7323, 126, 4063, # 2934 - 706, 456, 150, 613, 4299, 71, 1118, 2036, 4064, 145, 3069, 85, 835, 486, 2114, 1246, # 2950 - 1426, 428, 727, 1285, 1015, 800, 106, 623, 303, 1281, 7324, 2127, 2354, 347, 3736, 221, # 2966 - 3503, 3110, 7325, 1955, 1153, 4065, 83, 296, 1199, 3070, 192, 624, 93, 7326, 822, 1897, # 2982 - 2810, 3111, 795, 2064, 991, 1554, 1542, 1592, 27, 43, 2853, 859, 139, 1456, 860, 4300, # 2998 - 437, 712, 3871, 164, 2392, 3112, 695, 211, 3017, 2096, 195, 3872, 1608, 3504, 3505, 3618, # 3014 - 3873, 234, 811, 2971, 2097, 3874, 2229, 1441, 3506, 1615, 2375, 668, 2076, 1638, 305, 228, # 3030 - 1664, 4301, 467, 415, 7327, 262, 2098, 1593, 239, 108, 300, 200, 1033, 512, 1247, 2077, # 3046 - 7328, 7329, 2173, 3176, 3619, 
2673, 593, 845, 1062, 3244, 88, 1723, 2037, 3875, 1950, 212, # 3062 - 266, 152, 149, 468, 1898, 4066, 4302, 77, 187, 7330, 3018, 37, 5, 2972, 7331, 3876, # 3078 - 7332, 7333, 39, 2517, 4303, 2894, 3177, 2078, 55, 148, 74, 4304, 545, 483, 1474, 1029, # 3094 - 1665, 217, 1869, 1531, 3113, 1104, 2645, 4067, 24, 172, 3507, 900, 3877, 3508, 3509, 4305, # 3110 - 32, 1408, 2811, 1312, 329, 487, 2355, 2247, 2708, 784, 2674, 4, 3019, 3314, 1427, 1788, # 3126 - 188, 109, 499, 7334, 3620, 1717, 1789, 888, 1217, 3020, 4306, 7335, 3510, 7336, 3315, 1520, # 3142 - 3621, 3878, 196, 1034, 775, 7337, 7338, 929, 1815, 249, 439, 38, 7339, 1063, 7340, 794, # 3158 - 3879, 1435, 2296, 46, 178, 3245, 2065, 7341, 2376, 7342, 214, 1709, 4307, 804, 35, 707, # 3174 - 324, 3622, 1601, 2546, 140, 459, 4068, 7343, 7344, 1365, 839, 272, 978, 2257, 2572, 3409, # 3190 - 2128, 1363, 3623, 1423, 697, 100, 3071, 48, 70, 1231, 495, 3114, 2193, 7345, 1294, 7346, # 3206 - 2079, 462, 586, 1042, 3246, 853, 256, 988, 185, 2377, 3410, 1698, 434, 1084, 7347, 3411, # 3222 - 314, 2615, 2775, 4308, 2330, 2331, 569, 2280, 637, 1816, 2518, 757, 1162, 1878, 1616, 3412, # 3238 - 287, 1577, 2115, 768, 4309, 1671, 2854, 3511, 2519, 1321, 3737, 909, 2413, 7348, 4069, 933, # 3254 - 3738, 7349, 2052, 2356, 1222, 4310, 765, 2414, 1322, 786, 4311, 7350, 1919, 1462, 1677, 2895, # 3270 - 1699, 7351, 4312, 1424, 2437, 3115, 3624, 2590, 3316, 1774, 1940, 3413, 3880, 4070, 309, 1369, # 3286 - 1130, 2812, 364, 2230, 1653, 1299, 3881, 3512, 3882, 3883, 2646, 525, 1085, 3021, 902, 2000, # 3302 - 1475, 964, 4313, 421, 1844, 1415, 1057, 2281, 940, 1364, 3116, 376, 4314, 4315, 1381, 7, # 3318 - 2520, 983, 2378, 336, 1710, 2675, 1845, 321, 3414, 559, 1131, 3022, 2742, 1808, 1132, 1313, # 3334 - 265, 1481, 1857, 7352, 352, 1203, 2813, 3247, 167, 1089, 420, 2814, 776, 792, 1724, 3513, # 3350 - 4071, 2438, 3248, 7353, 4072, 7354, 446, 229, 333, 2743, 901, 3739, 1200, 1557, 4316, 2647, # 3366 - 1920, 395, 2744, 2676, 3740, 4073, 1835, 125, 916, 3178, 2616, 4317, 7355, 7356, 3741, 7357, # 3382 - 7358, 7359, 4318, 3117, 3625, 1133, 2547, 1757, 3415, 1510, 2313, 1409, 3514, 7360, 2145, 438, # 3398 - 2591, 2896, 2379, 3317, 1068, 958, 3023, 461, 311, 2855, 2677, 4074, 1915, 3179, 4075, 1978, # 3414 - 383, 750, 2745, 2617, 4076, 274, 539, 385, 1278, 1442, 7361, 1154, 1964, 384, 561, 210, # 3430 - 98, 1295, 2548, 3515, 7362, 1711, 2415, 1482, 3416, 3884, 2897, 1257, 129, 7363, 3742, 642, # 3446 - 523, 2776, 2777, 2648, 7364, 141, 2231, 1333, 68, 176, 441, 876, 907, 4077, 603, 2592, # 3462 - 710, 171, 3417, 404, 549, 18, 3118, 2393, 1410, 3626, 1666, 7365, 3516, 4319, 2898, 4320, # 3478 - 7366, 2973, 368, 7367, 146, 366, 99, 871, 3627, 1543, 748, 807, 1586, 1185, 22, 2258, # 3494 - 379, 3743, 3180, 7368, 3181, 505, 1941, 2618, 1991, 1382, 2314, 7369, 380, 2357, 218, 702, # 3510 - 1817, 1248, 3418, 3024, 3517, 3318, 3249, 7370, 2974, 3628, 930, 3250, 3744, 7371, 59, 7372, # 3526 - 585, 601, 4078, 497, 3419, 1112, 1314, 4321, 1801, 7373, 1223, 1472, 2174, 7374, 749, 1836, # 3542 - 690, 1899, 3745, 1772, 3885, 1476, 429, 1043, 1790, 2232, 2116, 917, 4079, 447, 1086, 1629, # 3558 - 7375, 556, 7376, 7377, 2020, 1654, 844, 1090, 105, 550, 966, 1758, 2815, 1008, 1782, 686, # 3574 - 1095, 7378, 2282, 793, 1602, 7379, 3518, 2593, 4322, 4080, 2933, 2297, 4323, 3746, 980, 2496, # 3590 - 544, 353, 527, 4324, 908, 2678, 2899, 7380, 381, 2619, 1942, 1348, 7381, 1341, 1252, 560, # 3606 - 3072, 7382, 3420, 2856, 7383, 2053, 973, 886, 2080, 143, 4325, 7384, 7385, 157, 3886, 496, # 3622 
- 4081, 57, 840, 540, 2038, 4326, 4327, 3421, 2117, 1445, 970, 2259, 1748, 1965, 2081, 4082, # 3638 - 3119, 1234, 1775, 3251, 2816, 3629, 773, 1206, 2129, 1066, 2039, 1326, 3887, 1738, 1725, 4083, # 3654 - 279, 3120, 51, 1544, 2594, 423, 1578, 2130, 2066, 173, 4328, 1879, 7386, 7387, 1583, 264, # 3670 - 610, 3630, 4329, 2439, 280, 154, 7388, 7389, 7390, 1739, 338, 1282, 3073, 693, 2857, 1411, # 3686 - 1074, 3747, 2440, 7391, 4330, 7392, 7393, 1240, 952, 2394, 7394, 2900, 1538, 2679, 685, 1483, # 3702 - 4084, 2468, 1436, 953, 4085, 2054, 4331, 671, 2395, 79, 4086, 2441, 3252, 608, 567, 2680, # 3718 - 3422, 4087, 4088, 1691, 393, 1261, 1791, 2396, 7395, 4332, 7396, 7397, 7398, 7399, 1383, 1672, # 3734 - 3748, 3182, 1464, 522, 1119, 661, 1150, 216, 675, 4333, 3888, 1432, 3519, 609, 4334, 2681, # 3750 - 2397, 7400, 7401, 7402, 4089, 3025, 0, 7403, 2469, 315, 231, 2442, 301, 3319, 4335, 2380, # 3766 - 7404, 233, 4090, 3631, 1818, 4336, 4337, 7405, 96, 1776, 1315, 2082, 7406, 257, 7407, 1809, # 3782 - 3632, 2709, 1139, 1819, 4091, 2021, 1124, 2163, 2778, 1777, 2649, 7408, 3074, 363, 1655, 3183, # 3798 - 7409, 2975, 7410, 7411, 7412, 3889, 1567, 3890, 718, 103, 3184, 849, 1443, 341, 3320, 2934, # 3814 - 1484, 7413, 1712, 127, 67, 339, 4092, 2398, 679, 1412, 821, 7414, 7415, 834, 738, 351, # 3830 - 2976, 2146, 846, 235, 1497, 1880, 418, 1992, 3749, 2710, 186, 1100, 2147, 2746, 3520, 1545, # 3846 - 1355, 2935, 2858, 1377, 583, 3891, 4093, 2573, 2977, 7416, 1298, 3633, 1078, 2549, 3634, 2358, # 3862 - 78, 3750, 3751, 267, 1289, 2099, 2001, 1594, 4094, 348, 369, 1274, 2194, 2175, 1837, 4338, # 3878 - 1820, 2817, 3635, 2747, 2283, 2002, 4339, 2936, 2748, 144, 3321, 882, 4340, 3892, 2749, 3423, # 3894 - 4341, 2901, 7417, 4095, 1726, 320, 7418, 3893, 3026, 788, 2978, 7419, 2818, 1773, 1327, 2859, # 3910 - 3894, 2819, 7420, 1306, 4342, 2003, 1700, 3752, 3521, 2359, 2650, 787, 2022, 506, 824, 3636, # 3926 - 534, 323, 4343, 1044, 3322, 2023, 1900, 946, 3424, 7421, 1778, 1500, 1678, 7422, 1881, 4344, # 3942 - 165, 243, 4345, 3637, 2521, 123, 683, 4096, 764, 4346, 36, 3895, 1792, 589, 2902, 816, # 3958 - 626, 1667, 3027, 2233, 1639, 1555, 1622, 3753, 3896, 7423, 3897, 2860, 1370, 1228, 1932, 891, # 3974 - 2083, 2903, 304, 4097, 7424, 292, 2979, 2711, 3522, 691, 2100, 4098, 1115, 4347, 118, 662, # 3990 - 7425, 611, 1156, 854, 2381, 1316, 2861, 2, 386, 515, 2904, 7426, 7427, 3253, 868, 2234, # 4006 - 1486, 855, 2651, 785, 2212, 3028, 7428, 1040, 3185, 3523, 7429, 3121, 448, 7430, 1525, 7431, # 4022 - 2164, 4348, 7432, 3754, 7433, 4099, 2820, 3524, 3122, 503, 818, 3898, 3123, 1568, 814, 676, # 4038 - 1444, 306, 1749, 7434, 3755, 1416, 1030, 197, 1428, 805, 2821, 1501, 4349, 7435, 7436, 7437, # 4054 - 1993, 7438, 4350, 7439, 7440, 2195, 13, 2779, 3638, 2980, 3124, 1229, 1916, 7441, 3756, 2131, # 4070 - 7442, 4100, 4351, 2399, 3525, 7443, 2213, 1511, 1727, 1120, 7444, 7445, 646, 3757, 2443, 307, # 4086 - 7446, 7447, 1595, 3186, 7448, 7449, 7450, 3639, 1113, 1356, 3899, 1465, 2522, 2523, 7451, 519, # 4102 - 7452, 128, 2132, 92, 2284, 1979, 7453, 3900, 1512, 342, 3125, 2196, 7454, 2780, 2214, 1980, # 4118 - 3323, 7455, 290, 1656, 1317, 789, 827, 2360, 7456, 3758, 4352, 562, 581, 3901, 7457, 401, # 4134 - 4353, 2248, 94, 4354, 1399, 2781, 7458, 1463, 2024, 4355, 3187, 1943, 7459, 828, 1105, 4101, # 4150 - 1262, 1394, 7460, 4102, 605, 4356, 7461, 1783, 2862, 7462, 2822, 819, 2101, 578, 2197, 2937, # 4166 - 7463, 1502, 436, 3254, 4103, 3255, 2823, 3902, 2905, 3425, 3426, 7464, 2712, 2315, 7465, 7466, # 4182 - 2332, 
2067, 23, 4357, 193, 826, 3759, 2102, 699, 1630, 4104, 3075, 390, 1793, 1064, 3526, # 4198 - 7467, 1579, 3076, 3077, 1400, 7468, 4105, 1838, 1640, 2863, 7469, 4358, 4359, 137, 4106, 598, # 4214 - 3078, 1966, 780, 104, 974, 2938, 7470, 278, 899, 253, 402, 572, 504, 493, 1339, 7471, # 4230 - 3903, 1275, 4360, 2574, 2550, 7472, 3640, 3029, 3079, 2249, 565, 1334, 2713, 863, 41, 7473, # 4246 - 7474, 4361, 7475, 1657, 2333, 19, 463, 2750, 4107, 606, 7476, 2981, 3256, 1087, 2084, 1323, # 4262 - 2652, 2982, 7477, 1631, 1623, 1750, 4108, 2682, 7478, 2864, 791, 2714, 2653, 2334, 232, 2416, # 4278 - 7479, 2983, 1498, 7480, 2654, 2620, 755, 1366, 3641, 3257, 3126, 2025, 1609, 119, 1917, 3427, # 4294 - 862, 1026, 4109, 7481, 3904, 3760, 4362, 3905, 4363, 2260, 1951, 2470, 7482, 1125, 817, 4110, # 4310 - 4111, 3906, 1513, 1766, 2040, 1487, 4112, 3030, 3258, 2824, 3761, 3127, 7483, 7484, 1507, 7485, # 4326 - 2683, 733, 40, 1632, 1106, 2865, 345, 4113, 841, 2524, 230, 4364, 2984, 1846, 3259, 3428, # 4342 - 7486, 1263, 986, 3429, 7487, 735, 879, 254, 1137, 857, 622, 1300, 1180, 1388, 1562, 3907, # 4358 - 3908, 2939, 967, 2751, 2655, 1349, 592, 2133, 1692, 3324, 2985, 1994, 4114, 1679, 3909, 1901, # 4374 - 2185, 7488, 739, 3642, 2715, 1296, 1290, 7489, 4115, 2198, 2199, 1921, 1563, 2595, 2551, 1870, # 4390 - 2752, 2986, 7490, 435, 7491, 343, 1108, 596, 17, 1751, 4365, 2235, 3430, 3643, 7492, 4366, # 4406 - 294, 3527, 2940, 1693, 477, 979, 281, 2041, 3528, 643, 2042, 3644, 2621, 2782, 2261, 1031, # 4422 - 2335, 2134, 2298, 3529, 4367, 367, 1249, 2552, 7493, 3530, 7494, 4368, 1283, 3325, 2004, 240, # 4438 - 1762, 3326, 4369, 4370, 836, 1069, 3128, 474, 7495, 2148, 2525, 268, 3531, 7496, 3188, 1521, # 4454 - 1284, 7497, 1658, 1546, 4116, 7498, 3532, 3533, 7499, 4117, 3327, 2684, 1685, 4118, 961, 1673, # 4470 - 2622, 190, 2005, 2200, 3762, 4371, 4372, 7500, 570, 2497, 3645, 1490, 7501, 4373, 2623, 3260, # 4486 - 1956, 4374, 584, 1514, 396, 1045, 1944, 7502, 4375, 1967, 2444, 7503, 7504, 4376, 3910, 619, # 4502 - 7505, 3129, 3261, 215, 2006, 2783, 2553, 3189, 4377, 3190, 4378, 763, 4119, 3763, 4379, 7506, # 4518 - 7507, 1957, 1767, 2941, 3328, 3646, 1174, 452, 1477, 4380, 3329, 3130, 7508, 2825, 1253, 2382, # 4534 - 2186, 1091, 2285, 4120, 492, 7509, 638, 1169, 1824, 2135, 1752, 3911, 648, 926, 1021, 1324, # 4550 - 4381, 520, 4382, 997, 847, 1007, 892, 4383, 3764, 2262, 1871, 3647, 7510, 2400, 1784, 4384, # 4566 - 1952, 2942, 3080, 3191, 1728, 4121, 2043, 3648, 4385, 2007, 1701, 3131, 1551, 30, 2263, 4122, # 4582 - 7511, 2026, 4386, 3534, 7512, 501, 7513, 4123, 594, 3431, 2165, 1821, 3535, 3432, 3536, 3192, # 4598 - 829, 2826, 4124, 7514, 1680, 3132, 1225, 4125, 7515, 3262, 4387, 4126, 3133, 2336, 7516, 4388, # 4614 - 4127, 7517, 3912, 3913, 7518, 1847, 2383, 2596, 3330, 7519, 4389, 374, 3914, 652, 4128, 4129, # 4630 - 375, 1140, 798, 7520, 7521, 7522, 2361, 4390, 2264, 546, 1659, 138, 3031, 2445, 4391, 7523, # 4646 - 2250, 612, 1848, 910, 796, 3765, 1740, 1371, 825, 3766, 3767, 7524, 2906, 2554, 7525, 692, # 4662 - 444, 3032, 2624, 801, 4392, 4130, 7526, 1491, 244, 1053, 3033, 4131, 4132, 340, 7527, 3915, # 4678 - 1041, 2987, 293, 1168, 87, 1357, 7528, 1539, 959, 7529, 2236, 721, 694, 4133, 3768, 219, # 4694 - 1478, 644, 1417, 3331, 2656, 1413, 1401, 1335, 1389, 3916, 7530, 7531, 2988, 2362, 3134, 1825, # 4710 - 730, 1515, 184, 2827, 66, 4393, 7532, 1660, 2943, 246, 3332, 378, 1457, 226, 3433, 975, # 4726 - 3917, 2944, 1264, 3537, 674, 696, 7533, 163, 7534, 1141, 2417, 2166, 713, 3538, 3333, 4394, # 4742 - 
3918, 7535, 7536, 1186, 15, 7537, 1079, 1070, 7538, 1522, 3193, 3539, 276, 1050, 2716, 758, # 4758 - 1126, 653, 2945, 3263, 7539, 2337, 889, 3540, 3919, 3081, 2989, 903, 1250, 4395, 3920, 3434, # 4774 - 3541, 1342, 1681, 1718, 766, 3264, 286, 89, 2946, 3649, 7540, 1713, 7541, 2597, 3334, 2990, # 4790 - 7542, 2947, 2215, 3194, 2866, 7543, 4396, 2498, 2526, 181, 387, 1075, 3921, 731, 2187, 3335, # 4806 - 7544, 3265, 310, 313, 3435, 2299, 770, 4134, 54, 3034, 189, 4397, 3082, 3769, 3922, 7545, # 4822 - 1230, 1617, 1849, 355, 3542, 4135, 4398, 3336, 111, 4136, 3650, 1350, 3135, 3436, 3035, 4137, # 4838 - 2149, 3266, 3543, 7546, 2784, 3923, 3924, 2991, 722, 2008, 7547, 1071, 247, 1207, 2338, 2471, # 4854 - 1378, 4399, 2009, 864, 1437, 1214, 4400, 373, 3770, 1142, 2216, 667, 4401, 442, 2753, 2555, # 4870 - 3771, 3925, 1968, 4138, 3267, 1839, 837, 170, 1107, 934, 1336, 1882, 7548, 7549, 2118, 4139, # 4886 - 2828, 743, 1569, 7550, 4402, 4140, 582, 2384, 1418, 3437, 7551, 1802, 7552, 357, 1395, 1729, # 4902 - 3651, 3268, 2418, 1564, 2237, 7553, 3083, 3772, 1633, 4403, 1114, 2085, 4141, 1532, 7554, 482, # 4918 - 2446, 4404, 7555, 7556, 1492, 833, 1466, 7557, 2717, 3544, 1641, 2829, 7558, 1526, 1272, 3652, # 4934 - 4142, 1686, 1794, 416, 2556, 1902, 1953, 1803, 7559, 3773, 2785, 3774, 1159, 2316, 7560, 2867, # 4950 - 4405, 1610, 1584, 3036, 2419, 2754, 443, 3269, 1163, 3136, 7561, 7562, 3926, 7563, 4143, 2499, # 4966 - 3037, 4406, 3927, 3137, 2103, 1647, 3545, 2010, 1872, 4144, 7564, 4145, 431, 3438, 7565, 250, # 4982 - 97, 81, 4146, 7566, 1648, 1850, 1558, 160, 848, 7567, 866, 740, 1694, 7568, 2201, 2830, # 4998 - 3195, 4147, 4407, 3653, 1687, 950, 2472, 426, 469, 3196, 3654, 3655, 3928, 7569, 7570, 1188, # 5014 - 424, 1995, 861, 3546, 4148, 3775, 2202, 2685, 168, 1235, 3547, 4149, 7571, 2086, 1674, 4408, # 5030 - 3337, 3270, 220, 2557, 1009, 7572, 3776, 670, 2992, 332, 1208, 717, 7573, 7574, 3548, 2447, # 5046 - 3929, 3338, 7575, 513, 7576, 1209, 2868, 3339, 3138, 4409, 1080, 7577, 7578, 7579, 7580, 2527, # 5062 - 3656, 3549, 815, 1587, 3930, 3931, 7581, 3550, 3439, 3777, 1254, 4410, 1328, 3038, 1390, 3932, # 5078 - 1741, 3933, 3778, 3934, 7582, 236, 3779, 2448, 3271, 7583, 7584, 3657, 3780, 1273, 3781, 4411, # 5094 - 7585, 308, 7586, 4412, 245, 4413, 1851, 2473, 1307, 2575, 430, 715, 2136, 2449, 7587, 270, # 5110 - 199, 2869, 3935, 7588, 3551, 2718, 1753, 761, 1754, 725, 1661, 1840, 4414, 3440, 3658, 7589, # 5126 - 7590, 587, 14, 3272, 227, 2598, 326, 480, 2265, 943, 2755, 3552, 291, 650, 1883, 7591, # 5142 - 1702, 1226, 102, 1547, 62, 3441, 904, 4415, 3442, 1164, 4150, 7592, 7593, 1224, 1548, 2756, # 5158 - 391, 498, 1493, 7594, 1386, 1419, 7595, 2055, 1177, 4416, 813, 880, 1081, 2363, 566, 1145, # 5174 - 4417, 2286, 1001, 1035, 2558, 2599, 2238, 394, 1286, 7596, 7597, 2068, 7598, 86, 1494, 1730, # 5190 - 3936, 491, 1588, 745, 897, 2948, 843, 3340, 3937, 2757, 2870, 3273, 1768, 998, 2217, 2069, # 5206 - 397, 1826, 1195, 1969, 3659, 2993, 3341, 284, 7599, 3782, 2500, 2137, 2119, 1903, 7600, 3938, # 5222 - 2150, 3939, 4151, 1036, 3443, 1904, 114, 2559, 4152, 209, 1527, 7601, 7602, 2949, 2831, 2625, # 5238 - 2385, 2719, 3139, 812, 2560, 7603, 3274, 7604, 1559, 737, 1884, 3660, 1210, 885, 28, 2686, # 5254 - 3553, 3783, 7605, 4153, 1004, 1779, 4418, 7606, 346, 1981, 2218, 2687, 4419, 3784, 1742, 797, # 5270 - 1642, 3940, 1933, 1072, 1384, 2151, 896, 3941, 3275, 3661, 3197, 2871, 3554, 7607, 2561, 1958, # 5286 - 4420, 2450, 1785, 7608, 7609, 7610, 3942, 4154, 1005, 1308, 3662, 4155, 2720, 4421, 4422, 
1528, # 5302 - 2600, 161, 1178, 4156, 1982, 987, 4423, 1101, 4157, 631, 3943, 1157, 3198, 2420, 1343, 1241, # 5318 - 1016, 2239, 2562, 372, 877, 2339, 2501, 1160, 555, 1934, 911, 3944, 7611, 466, 1170, 169, # 5334 - 1051, 2907, 2688, 3663, 2474, 2994, 1182, 2011, 2563, 1251, 2626, 7612, 992, 2340, 3444, 1540, # 5350 - 2721, 1201, 2070, 2401, 1996, 2475, 7613, 4424, 528, 1922, 2188, 1503, 1873, 1570, 2364, 3342, # 5366 - 3276, 7614, 557, 1073, 7615, 1827, 3445, 2087, 2266, 3140, 3039, 3084, 767, 3085, 2786, 4425, # 5382 - 1006, 4158, 4426, 2341, 1267, 2176, 3664, 3199, 778, 3945, 3200, 2722, 1597, 2657, 7616, 4427, # 5398 - 7617, 3446, 7618, 7619, 7620, 3277, 2689, 1433, 3278, 131, 95, 1504, 3946, 723, 4159, 3141, # 5414 - 1841, 3555, 2758, 2189, 3947, 2027, 2104, 3665, 7621, 2995, 3948, 1218, 7622, 3343, 3201, 3949, # 5430 - 4160, 2576, 248, 1634, 3785, 912, 7623, 2832, 3666, 3040, 3786, 654, 53, 7624, 2996, 7625, # 5446 - 1688, 4428, 777, 3447, 1032, 3950, 1425, 7626, 191, 820, 2120, 2833, 971, 4429, 931, 3202, # 5462 - 135, 664, 783, 3787, 1997, 772, 2908, 1935, 3951, 3788, 4430, 2909, 3203, 282, 2723, 640, # 5478 - 1372, 3448, 1127, 922, 325, 3344, 7627, 7628, 711, 2044, 7629, 7630, 3952, 2219, 2787, 1936, # 5494 - 3953, 3345, 2220, 2251, 3789, 2300, 7631, 4431, 3790, 1258, 3279, 3954, 3204, 2138, 2950, 3955, # 5510 - 3956, 7632, 2221, 258, 3205, 4432, 101, 1227, 7633, 3280, 1755, 7634, 1391, 3281, 7635, 2910, # 5526 - 2056, 893, 7636, 7637, 7638, 1402, 4161, 2342, 7639, 7640, 3206, 3556, 7641, 7642, 878, 1325, # 5542 - 1780, 2788, 4433, 259, 1385, 2577, 744, 1183, 2267, 4434, 7643, 3957, 2502, 7644, 684, 1024, # 5558 - 4162, 7645, 472, 3557, 3449, 1165, 3282, 3958, 3959, 322, 2152, 881, 455, 1695, 1152, 1340, # 5574 - 660, 554, 2153, 4435, 1058, 4436, 4163, 830, 1065, 3346, 3960, 4437, 1923, 7646, 1703, 1918, # 5590 - 7647, 932, 2268, 122, 7648, 4438, 947, 677, 7649, 3791, 2627, 297, 1905, 1924, 2269, 4439, # 5606 - 2317, 3283, 7650, 7651, 4164, 7652, 4165, 84, 4166, 112, 989, 7653, 547, 1059, 3961, 701, # 5622 - 3558, 1019, 7654, 4167, 7655, 3450, 942, 639, 457, 2301, 2451, 993, 2951, 407, 851, 494, # 5638 - 4440, 3347, 927, 7656, 1237, 7657, 2421, 3348, 573, 4168, 680, 921, 2911, 1279, 1874, 285, # 5654 - 790, 1448, 1983, 719, 2167, 7658, 7659, 4441, 3962, 3963, 1649, 7660, 1541, 563, 7661, 1077, # 5670 - 7662, 3349, 3041, 3451, 511, 2997, 3964, 3965, 3667, 3966, 1268, 2564, 3350, 3207, 4442, 4443, # 5686 - 7663, 535, 1048, 1276, 1189, 2912, 2028, 3142, 1438, 1373, 2834, 2952, 1134, 2012, 7664, 4169, # 5702 - 1238, 2578, 3086, 1259, 7665, 700, 7666, 2953, 3143, 3668, 4170, 7667, 4171, 1146, 1875, 1906, # 5718 - 4444, 2601, 3967, 781, 2422, 132, 1589, 203, 147, 273, 2789, 2402, 898, 1786, 2154, 3968, # 5734 - 3969, 7668, 3792, 2790, 7669, 7670, 4445, 4446, 7671, 3208, 7672, 1635, 3793, 965, 7673, 1804, # 5750 - 2690, 1516, 3559, 1121, 1082, 1329, 3284, 3970, 1449, 3794, 65, 1128, 2835, 2913, 2759, 1590, # 5766 - 3795, 7674, 7675, 12, 2658, 45, 976, 2579, 3144, 4447, 517, 2528, 1013, 1037, 3209, 7676, # 5782 - 3796, 2836, 7677, 3797, 7678, 3452, 7679, 2602, 614, 1998, 2318, 3798, 3087, 2724, 2628, 7680, # 5798 - 2580, 4172, 599, 1269, 7681, 1810, 3669, 7682, 2691, 3088, 759, 1060, 489, 1805, 3351, 3285, # 5814 - 1358, 7683, 7684, 2386, 1387, 1215, 2629, 2252, 490, 7685, 7686, 4173, 1759, 2387, 2343, 7687, # 5830 - 4448, 3799, 1907, 3971, 2630, 1806, 3210, 4449, 3453, 3286, 2760, 2344, 874, 7688, 7689, 3454, # 5846 - 3670, 1858, 91, 2914, 3671, 3042, 3800, 4450, 7690, 3145, 3972, 2659, 
7691, 3455, 1202, 1403, # 5862 - 3801, 2954, 2529, 1517, 2503, 4451, 3456, 2504, 7692, 4452, 7693, 2692, 1885, 1495, 1731, 3973, # 5878 - 2365, 4453, 7694, 2029, 7695, 7696, 3974, 2693, 1216, 237, 2581, 4174, 2319, 3975, 3802, 4454, # 5894 - 4455, 2694, 3560, 3457, 445, 4456, 7697, 7698, 7699, 7700, 2761, 61, 3976, 3672, 1822, 3977, # 5910 - 7701, 687, 2045, 935, 925, 405, 2660, 703, 1096, 1859, 2725, 4457, 3978, 1876, 1367, 2695, # 5926 - 3352, 918, 2105, 1781, 2476, 334, 3287, 1611, 1093, 4458, 564, 3146, 3458, 3673, 3353, 945, # 5942 - 2631, 2057, 4459, 7702, 1925, 872, 4175, 7703, 3459, 2696, 3089, 349, 4176, 3674, 3979, 4460, # 5958 - 3803, 4177, 3675, 2155, 3980, 4461, 4462, 4178, 4463, 2403, 2046, 782, 3981, 400, 251, 4179, # 5974 - 1624, 7704, 7705, 277, 3676, 299, 1265, 476, 1191, 3804, 2121, 4180, 4181, 1109, 205, 7706, # 5990 - 2582, 1000, 2156, 3561, 1860, 7707, 7708, 7709, 4464, 7710, 4465, 2565, 107, 2477, 2157, 3982, # 6006 - 3460, 3147, 7711, 1533, 541, 1301, 158, 753, 4182, 2872, 3562, 7712, 1696, 370, 1088, 4183, # 6022 - 4466, 3563, 579, 327, 440, 162, 2240, 269, 1937, 1374, 3461, 968, 3043, 56, 1396, 3090, # 6038 - 2106, 3288, 3354, 7713, 1926, 2158, 4467, 2998, 7714, 3564, 7715, 7716, 3677, 4468, 2478, 7717, # 6054 - 2791, 7718, 1650, 4469, 7719, 2603, 7720, 7721, 3983, 2661, 3355, 1149, 3356, 3984, 3805, 3985, # 6070 - 7722, 1076, 49, 7723, 951, 3211, 3289, 3290, 450, 2837, 920, 7724, 1811, 2792, 2366, 4184, # 6086 - 1908, 1138, 2367, 3806, 3462, 7725, 3212, 4470, 1909, 1147, 1518, 2423, 4471, 3807, 7726, 4472, # 6102 - 2388, 2604, 260, 1795, 3213, 7727, 7728, 3808, 3291, 708, 7729, 3565, 1704, 7730, 3566, 1351, # 6118 - 1618, 3357, 2999, 1886, 944, 4185, 3358, 4186, 3044, 3359, 4187, 7731, 3678, 422, 413, 1714, # 6134 - 3292, 500, 2058, 2345, 4188, 2479, 7732, 1344, 1910, 954, 7733, 1668, 7734, 7735, 3986, 2404, # 6150 - 4189, 3567, 3809, 4190, 7736, 2302, 1318, 2505, 3091, 133, 3092, 2873, 4473, 629, 31, 2838, # 6166 - 2697, 3810, 4474, 850, 949, 4475, 3987, 2955, 1732, 2088, 4191, 1496, 1852, 7737, 3988, 620, # 6182 - 3214, 981, 1242, 3679, 3360, 1619, 3680, 1643, 3293, 2139, 2452, 1970, 1719, 3463, 2168, 7738, # 6198 - 3215, 7739, 7740, 3361, 1828, 7741, 1277, 4476, 1565, 2047, 7742, 1636, 3568, 3093, 7743, 869, # 6214 - 2839, 655, 3811, 3812, 3094, 3989, 3000, 3813, 1310, 3569, 4477, 7744, 7745, 7746, 1733, 558, # 6230 - 4478, 3681, 335, 1549, 3045, 1756, 4192, 3682, 1945, 3464, 1829, 1291, 1192, 470, 2726, 2107, # 6246 - 2793, 913, 1054, 3990, 7747, 1027, 7748, 3046, 3991, 4479, 982, 2662, 3362, 3148, 3465, 3216, # 6262 - 3217, 1946, 2794, 7749, 571, 4480, 7750, 1830, 7751, 3570, 2583, 1523, 2424, 7752, 2089, 984, # 6278 - 4481, 3683, 1959, 7753, 3684, 852, 923, 2795, 3466, 3685, 969, 1519, 999, 2048, 2320, 1705, # 6294 - 7754, 3095, 615, 1662, 151, 597, 3992, 2405, 2321, 1049, 275, 4482, 3686, 4193, 568, 3687, # 6310 - 3571, 2480, 4194, 3688, 7755, 2425, 2270, 409, 3218, 7756, 1566, 2874, 3467, 1002, 769, 2840, # 6326 - 194, 2090, 3149, 3689, 2222, 3294, 4195, 628, 1505, 7757, 7758, 1763, 2177, 3001, 3993, 521, # 6342 - 1161, 2584, 1787, 2203, 2406, 4483, 3994, 1625, 4196, 4197, 412, 42, 3096, 464, 7759, 2632, # 6358 - 4484, 3363, 1760, 1571, 2875, 3468, 2530, 1219, 2204, 3814, 2633, 2140, 2368, 4485, 4486, 3295, # 6374 - 1651, 3364, 3572, 7760, 7761, 3573, 2481, 3469, 7762, 3690, 7763, 7764, 2271, 2091, 460, 7765, # 6390 - 4487, 7766, 3002, 962, 588, 3574, 289, 3219, 2634, 1116, 52, 7767, 3047, 1796, 7768, 7769, # 6406 - 7770, 1467, 7771, 1598, 1143, 3691, 
4198, 1984, 1734, 1067, 4488, 1280, 3365, 465, 4489, 1572, # 6422 - 510, 7772, 1927, 2241, 1812, 1644, 3575, 7773, 4490, 3692, 7774, 7775, 2663, 1573, 1534, 7776, # 6438 - 7777, 4199, 536, 1807, 1761, 3470, 3815, 3150, 2635, 7778, 7779, 7780, 4491, 3471, 2915, 1911, # 6454 - 2796, 7781, 3296, 1122, 377, 3220, 7782, 360, 7783, 7784, 4200, 1529, 551, 7785, 2059, 3693, # 6470 - 1769, 2426, 7786, 2916, 4201, 3297, 3097, 2322, 2108, 2030, 4492, 1404, 136, 1468, 1479, 672, # 6486 - 1171, 3221, 2303, 271, 3151, 7787, 2762, 7788, 2049, 678, 2727, 865, 1947, 4493, 7789, 2013, # 6502 - 3995, 2956, 7790, 2728, 2223, 1397, 3048, 3694, 4494, 4495, 1735, 2917, 3366, 3576, 7791, 3816, # 6518 - 509, 2841, 2453, 2876, 3817, 7792, 7793, 3152, 3153, 4496, 4202, 2531, 4497, 2304, 1166, 1010, # 6534 - 552, 681, 1887, 7794, 7795, 2957, 2958, 3996, 1287, 1596, 1861, 3154, 358, 453, 736, 175, # 6550 - 478, 1117, 905, 1167, 1097, 7796, 1853, 1530, 7797, 1706, 7798, 2178, 3472, 2287, 3695, 3473, # 6566 - 3577, 4203, 2092, 4204, 7799, 3367, 1193, 2482, 4205, 1458, 2190, 2205, 1862, 1888, 1421, 3298, # 6582 - 2918, 3049, 2179, 3474, 595, 2122, 7800, 3997, 7801, 7802, 4206, 1707, 2636, 223, 3696, 1359, # 6598 - 751, 3098, 183, 3475, 7803, 2797, 3003, 419, 2369, 633, 704, 3818, 2389, 241, 7804, 7805, # 6614 - 7806, 838, 3004, 3697, 2272, 2763, 2454, 3819, 1938, 2050, 3998, 1309, 3099, 2242, 1181, 7807, # 6630 - 1136, 2206, 3820, 2370, 1446, 4207, 2305, 4498, 7808, 7809, 4208, 1055, 2605, 484, 3698, 7810, # 6646 - 3999, 625, 4209, 2273, 3368, 1499, 4210, 4000, 7811, 4001, 4211, 3222, 2274, 2275, 3476, 7812, # 6662 - 7813, 2764, 808, 2606, 3699, 3369, 4002, 4212, 3100, 2532, 526, 3370, 3821, 4213, 955, 7814, # 6678 - 1620, 4214, 2637, 2427, 7815, 1429, 3700, 1669, 1831, 994, 928, 7816, 3578, 1260, 7817, 7818, # 6694 - 7819, 1948, 2288, 741, 2919, 1626, 4215, 2729, 2455, 867, 1184, 362, 3371, 1392, 7820, 7821, # 6710 - 4003, 4216, 1770, 1736, 3223, 2920, 4499, 4500, 1928, 2698, 1459, 1158, 7822, 3050, 3372, 2877, # 6726 - 1292, 1929, 2506, 2842, 3701, 1985, 1187, 2071, 2014, 2607, 4217, 7823, 2566, 2507, 2169, 3702, # 6742 - 2483, 3299, 7824, 3703, 4501, 7825, 7826, 666, 1003, 3005, 1022, 3579, 4218, 7827, 4502, 1813, # 6758 - 2253, 574, 3822, 1603, 295, 1535, 705, 3823, 4219, 283, 858, 417, 7828, 7829, 3224, 4503, # 6774 - 4504, 3051, 1220, 1889, 1046, 2276, 2456, 4004, 1393, 1599, 689, 2567, 388, 4220, 7830, 2484, # 6790 - 802, 7831, 2798, 3824, 2060, 1405, 2254, 7832, 4505, 3825, 2109, 1052, 1345, 3225, 1585, 7833, # 6806 - 809, 7834, 7835, 7836, 575, 2730, 3477, 956, 1552, 1469, 1144, 2323, 7837, 2324, 1560, 2457, # 6822 - 3580, 3226, 4005, 616, 2207, 3155, 2180, 2289, 7838, 1832, 7839, 3478, 4506, 7840, 1319, 3704, # 6838 - 3705, 1211, 3581, 1023, 3227, 1293, 2799, 7841, 7842, 7843, 3826, 607, 2306, 3827, 762, 2878, # 6854 - 1439, 4221, 1360, 7844, 1485, 3052, 7845, 4507, 1038, 4222, 1450, 2061, 2638, 4223, 1379, 4508, # 6870 - 2585, 7846, 7847, 4224, 1352, 1414, 2325, 2921, 1172, 7848, 7849, 3828, 3829, 7850, 1797, 1451, # 6886 - 7851, 7852, 7853, 7854, 2922, 4006, 4007, 2485, 2346, 411, 4008, 4009, 3582, 3300, 3101, 4509, # 6902 - 1561, 2664, 1452, 4010, 1375, 7855, 7856, 47, 2959, 316, 7857, 1406, 1591, 2923, 3156, 7858, # 6918 - 1025, 2141, 3102, 3157, 354, 2731, 884, 2224, 4225, 2407, 508, 3706, 726, 3583, 996, 2428, # 6934 - 3584, 729, 7859, 392, 2191, 1453, 4011, 4510, 3707, 7860, 7861, 2458, 3585, 2608, 1675, 2800, # 6950 - 919, 2347, 2960, 2348, 1270, 4511, 4012, 73, 7862, 7863, 647, 7864, 3228, 2843, 2255, 
1550, # 6966 - 1346, 3006, 7865, 1332, 883, 3479, 7866, 7867, 7868, 7869, 3301, 2765, 7870, 1212, 831, 1347, # 6982 - 4226, 4512, 2326, 3830, 1863, 3053, 720, 3831, 4513, 4514, 3832, 7871, 4227, 7872, 7873, 4515, # 6998 - 7874, 7875, 1798, 4516, 3708, 2609, 4517, 3586, 1645, 2371, 7876, 7877, 2924, 669, 2208, 2665, # 7014 - 2429, 7878, 2879, 7879, 7880, 1028, 3229, 7881, 4228, 2408, 7882, 2256, 1353, 7883, 7884, 4518, # 7030 - 3158, 518, 7885, 4013, 7886, 4229, 1960, 7887, 2142, 4230, 7888, 7889, 3007, 2349, 2350, 3833, # 7046 - 516, 1833, 1454, 4014, 2699, 4231, 4519, 2225, 2610, 1971, 1129, 3587, 7890, 2766, 7891, 2961, # 7062 - 1422, 577, 1470, 3008, 1524, 3373, 7892, 7893, 432, 4232, 3054, 3480, 7894, 2586, 1455, 2508, # 7078 - 2226, 1972, 1175, 7895, 1020, 2732, 4015, 3481, 4520, 7896, 2733, 7897, 1743, 1361, 3055, 3482, # 7094 - 2639, 4016, 4233, 4521, 2290, 895, 924, 4234, 2170, 331, 2243, 3056, 166, 1627, 3057, 1098, # 7110 - 7898, 1232, 2880, 2227, 3374, 4522, 657, 403, 1196, 2372, 542, 3709, 3375, 1600, 4235, 3483, # 7126 - 7899, 4523, 2767, 3230, 576, 530, 1362, 7900, 4524, 2533, 2666, 3710, 4017, 7901, 842, 3834, # 7142 - 7902, 2801, 2031, 1014, 4018, 213, 2700, 3376, 665, 621, 4236, 7903, 3711, 2925, 2430, 7904, # 7158 - 2431, 3302, 3588, 3377, 7905, 4237, 2534, 4238, 4525, 3589, 1682, 4239, 3484, 1380, 7906, 724, # 7174 - 2277, 600, 1670, 7907, 1337, 1233, 4526, 3103, 2244, 7908, 1621, 4527, 7909, 651, 4240, 7910, # 7190 - 1612, 4241, 2611, 7911, 2844, 7912, 2734, 2307, 3058, 7913, 716, 2459, 3059, 174, 1255, 2701, # 7206 - 4019, 3590, 548, 1320, 1398, 728, 4020, 1574, 7914, 1890, 1197, 3060, 4021, 7915, 3061, 3062, # 7222 - 3712, 3591, 3713, 747, 7916, 635, 4242, 4528, 7917, 7918, 7919, 4243, 7920, 7921, 4529, 7922, # 7238 - 3378, 4530, 2432, 451, 7923, 3714, 2535, 2072, 4244, 2735, 4245, 4022, 7924, 1764, 4531, 7925, # 7254 - 4246, 350, 7926, 2278, 2390, 2486, 7927, 4247, 4023, 2245, 1434, 4024, 488, 4532, 458, 4248, # 7270 - 4025, 3715, 771, 1330, 2391, 3835, 2568, 3159, 2159, 2409, 1553, 2667, 3160, 4249, 7928, 2487, # 7286 - 2881, 2612, 1720, 2702, 4250, 3379, 4533, 7929, 2536, 4251, 7930, 3231, 4252, 2768, 7931, 2015, # 7302 - 2736, 7932, 1155, 1017, 3716, 3836, 7933, 3303, 2308, 201, 1864, 4253, 1430, 7934, 4026, 7935, # 7318 - 7936, 7937, 7938, 7939, 4254, 1604, 7940, 414, 1865, 371, 2587, 4534, 4535, 3485, 2016, 3104, # 7334 - 4536, 1708, 960, 4255, 887, 389, 2171, 1536, 1663, 1721, 7941, 2228, 4027, 2351, 2926, 1580, # 7350 - 7942, 7943, 7944, 1744, 7945, 2537, 4537, 4538, 7946, 4539, 7947, 2073, 7948, 7949, 3592, 3380, # 7366 - 2882, 4256, 7950, 4257, 2640, 3381, 2802, 673, 2703, 2460, 709, 3486, 4028, 3593, 4258, 7951, # 7382 - 1148, 502, 634, 7952, 7953, 1204, 4540, 3594, 1575, 4541, 2613, 3717, 7954, 3718, 3105, 948, # 7398 - 3232, 121, 1745, 3837, 1110, 7955, 4259, 3063, 2509, 3009, 4029, 3719, 1151, 1771, 3838, 1488, # 7414 - 4030, 1986, 7956, 2433, 3487, 7957, 7958, 2093, 7959, 4260, 3839, 1213, 1407, 2803, 531, 2737, # 7430 - 2538, 3233, 1011, 1537, 7960, 2769, 4261, 3106, 1061, 7961, 3720, 3721, 1866, 2883, 7962, 2017, # 7446 - 120, 4262, 4263, 2062, 3595, 3234, 2309, 3840, 2668, 3382, 1954, 4542, 7963, 7964, 3488, 1047, # 7462 - 2704, 1266, 7965, 1368, 4543, 2845, 649, 3383, 3841, 2539, 2738, 1102, 2846, 2669, 7966, 7967, # 7478 - 1999, 7968, 1111, 3596, 2962, 7969, 2488, 3842, 3597, 2804, 1854, 3384, 3722, 7970, 7971, 3385, # 7494 - 2410, 2884, 3304, 3235, 3598, 7972, 2569, 7973, 3599, 2805, 4031, 1460, 856, 7974, 3600, 7975, # 7510 - 2885, 2963, 7976, 
2886, 3843, 7977, 4264, 632, 2510, 875, 3844, 1697, 3845, 2291, 7978, 7979, # 7526 - 4544, 3010, 1239, 580, 4545, 4265, 7980, 914, 936, 2074, 1190, 4032, 1039, 2123, 7981, 7982, # 7542 - 7983, 3386, 1473, 7984, 1354, 4266, 3846, 7985, 2172, 3064, 4033, 915, 3305, 4267, 4268, 3306, # 7558 - 1605, 1834, 7986, 2739, 398, 3601, 4269, 3847, 4034, 328, 1912, 2847, 4035, 3848, 1331, 4270, # 7574 - 3011, 937, 4271, 7987, 3602, 4036, 4037, 3387, 2160, 4546, 3388, 524, 742, 538, 3065, 1012, # 7590 - 7988, 7989, 3849, 2461, 7990, 658, 1103, 225, 3850, 7991, 7992, 4547, 7993, 4548, 7994, 3236, # 7606 - 1243, 7995, 4038, 963, 2246, 4549, 7996, 2705, 3603, 3161, 7997, 7998, 2588, 2327, 7999, 4550, # 7622 - 8000, 8001, 8002, 3489, 3307, 957, 3389, 2540, 2032, 1930, 2927, 2462, 870, 2018, 3604, 1746, # 7638 - 2770, 2771, 2434, 2463, 8003, 3851, 8004, 3723, 3107, 3724, 3490, 3390, 3725, 8005, 1179, 3066, # 7654 - 8006, 3162, 2373, 4272, 3726, 2541, 3163, 3108, 2740, 4039, 8007, 3391, 1556, 2542, 2292, 977, # 7670 - 2887, 2033, 4040, 1205, 3392, 8008, 1765, 3393, 3164, 2124, 1271, 1689, 714, 4551, 3491, 8009, # 7686 - 2328, 3852, 533, 4273, 3605, 2181, 617, 8010, 2464, 3308, 3492, 2310, 8011, 8012, 3165, 8013, # 7702 - 8014, 3853, 1987, 618, 427, 2641, 3493, 3394, 8015, 8016, 1244, 1690, 8017, 2806, 4274, 4552, # 7718 - 8018, 3494, 8019, 8020, 2279, 1576, 473, 3606, 4275, 3395, 972, 8021, 3607, 8022, 3067, 8023, # 7734 - 8024, 4553, 4554, 8025, 3727, 4041, 4042, 8026, 153, 4555, 356, 8027, 1891, 2888, 4276, 2143, # 7750 - 408, 803, 2352, 8028, 3854, 8029, 4277, 1646, 2570, 2511, 4556, 4557, 3855, 8030, 3856, 4278, # 7766 - 8031, 2411, 3396, 752, 8032, 8033, 1961, 2964, 8034, 746, 3012, 2465, 8035, 4279, 3728, 698, # 7782 - 4558, 1892, 4280, 3608, 2543, 4559, 3609, 3857, 8036, 3166, 3397, 8037, 1823, 1302, 4043, 2706, # 7798 - 3858, 1973, 4281, 8038, 4282, 3167, 823, 1303, 1288, 1236, 2848, 3495, 4044, 3398, 774, 3859, # 7814 - 8039, 1581, 4560, 1304, 2849, 3860, 4561, 8040, 2435, 2161, 1083, 3237, 4283, 4045, 4284, 344, # 7830 - 1173, 288, 2311, 454, 1683, 8041, 8042, 1461, 4562, 4046, 2589, 8043, 8044, 4563, 985, 894, # 7846 - 8045, 3399, 3168, 8046, 1913, 2928, 3729, 1988, 8047, 2110, 1974, 8048, 4047, 8049, 2571, 1194, # 7862 - 425, 8050, 4564, 3169, 1245, 3730, 4285, 8051, 8052, 2850, 8053, 636, 4565, 1855, 3861, 760, # 7878 - 1799, 8054, 4286, 2209, 1508, 4566, 4048, 1893, 1684, 2293, 8055, 8056, 8057, 4287, 4288, 2210, # 7894 - 479, 8058, 8059, 832, 8060, 4049, 2489, 8061, 2965, 2490, 3731, 990, 3109, 627, 1814, 2642, # 7910 - 4289, 1582, 4290, 2125, 2111, 3496, 4567, 8062, 799, 4291, 3170, 8063, 4568, 2112, 1737, 3013, # 7926 - 1018, 543, 754, 4292, 3309, 1676, 4569, 4570, 4050, 8064, 1489, 8065, 3497, 8066, 2614, 2889, # 7942 - 4051, 8067, 8068, 2966, 8069, 8070, 8071, 8072, 3171, 4571, 4572, 2182, 1722, 8073, 3238, 3239, # 7958 - 1842, 3610, 1715, 481, 365, 1975, 1856, 8074, 8075, 1962, 2491, 4573, 8076, 2126, 3611, 3240, # 7974 - 433, 1894, 2063, 2075, 8077, 602, 2741, 8078, 8079, 8080, 8081, 8082, 3014, 1628, 3400, 8083, # 7990 - 3172, 4574, 4052, 2890, 4575, 2512, 8084, 2544, 2772, 8085, 8086, 8087, 3310, 4576, 2891, 8088, # 8006 - 4577, 8089, 2851, 4578, 4579, 1221, 2967, 4053, 2513, 8090, 8091, 8092, 1867, 1989, 8093, 8094, # 8022 - 8095, 1895, 8096, 8097, 4580, 1896, 4054, 318, 8098, 2094, 4055, 4293, 8099, 8100, 485, 8101, # 8038 - 938, 3862, 553, 2670, 116, 8102, 3863, 3612, 8103, 3498, 2671, 2773, 3401, 3311, 2807, 8104, # 8054 - 3613, 2929, 4056, 1747, 2930, 2968, 8105, 8106, 207, 8107, 
8108, 2672, 4581, 2514, 8109, 3015, # 8070 - 890, 3614, 3864, 8110, 1877, 3732, 3402, 8111, 2183, 2353, 3403, 1652, 8112, 8113, 8114, 941, # 8086 - 2294, 208, 3499, 4057, 2019, 330, 4294, 3865, 2892, 2492, 3733, 4295, 8115, 8116, 8117, 8118, # 8102 -) -# fmt: on diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/tools.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/tools.py deleted file mode 100644 index b72c6f286db91ef6dbe4a0a1ec14b8f06f2899c6..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/tools.py +++ /dev/null @@ -1,19 +0,0 @@ -from typing import Optional, Sequence, Dict, Any - -from clickhouse_connect.driver import Client -from clickhouse_connect.driver.summary import QuerySummary -from clickhouse_connect.driver.query import quote_identifier - - -def insert_file(client: Client, - table: str, - file_path: str, - fmt: Optional[str] = None, - column_names: Optional[Sequence[str]] = None, - database: Optional[str] = None, - settings: Optional[Dict[str, Any]] = None) -> QuerySummary: - full_table = f'{quote_identifier(database)}.{quote_identifier(table)}' if database else quote_identifier(table) - if not fmt: - fmt = 'CSV' if column_names else 'CSVWithNames' - with open(file_path, 'rb') as file: - return client.raw_insert(full_table, column_names=column_names, insert_block=file, fmt=fmt, settings=settings) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/kdf/x963kdf.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/kdf/x963kdf.py deleted file mode 100644 index 17acc5174bb09c6056569f9198e253275dfc40bf..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/cryptography/hazmat/primitives/kdf/x963kdf.py +++ /dev/null @@ -1,61 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -from __future__ import annotations - -import typing - -from cryptography import utils -from cryptography.exceptions import AlreadyFinalized, InvalidKey -from cryptography.hazmat.primitives import constant_time, hashes -from cryptography.hazmat.primitives.kdf import KeyDerivationFunction - - -def _int_to_u32be(n: int) -> bytes: - return n.to_bytes(length=4, byteorder="big") - - -class X963KDF(KeyDerivationFunction): - def __init__( - self, - algorithm: hashes.HashAlgorithm, - length: int, - sharedinfo: typing.Optional[bytes], - backend: typing.Any = None, - ): - max_len = algorithm.digest_size * (2**32 - 1) - if length > max_len: - raise ValueError(f"Cannot derive keys larger than {max_len} bits.") - if sharedinfo is not None: - utils._check_bytes("sharedinfo", sharedinfo) - - self._algorithm = algorithm - self._length = length - self._sharedinfo = sharedinfo - self._used = False - - def derive(self, key_material: bytes) -> bytes: - if self._used: - raise AlreadyFinalized - self._used = True - utils._check_byteslike("key_material", key_material) - output = [b""] - outlen = 0 - counter = 1 - - while self._length > outlen: - h = hashes.Hash(self._algorithm) - h.update(key_material) - h.update(_int_to_u32be(counter)) - if self._sharedinfo is not None: - h.update(self._sharedinfo) - output.append(h.finalize()) - outlen += len(output[-1]) - counter += 1 - - return b"".join(output)[: self._length] - - def verify(self, key_material: bytes, expected_key: bytes) -> None: - if not constant_time.bytes_eq(self.derive(key_material), expected_key): - raise InvalidKey diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/analytics.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/analytics.py deleted file mode 100644 index 6724619ae13cd5f05165a41033bf72a86f66b020..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/analytics.py +++ /dev/null @@ -1,190 +0,0 @@ -""" Functions related to analytics and telemetry. """ -from __future__ import annotations - -import json -import os -import pkgutil -import threading -import warnings -from distutils.version import StrictVersion -from typing import Any - -import requests - -import gradio -from gradio.context import Context -from gradio.utils import GRADIO_VERSION - -ANALYTICS_URL = "https://api.gradio.app/" -PKG_VERSION_URL = "https://api.gradio.app/pkg-version" - - -def analytics_enabled() -> bool: - """ - Returns: True if analytics are enabled, False otherwise. - """ - return os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True" - - -def _do_analytics_request(url: str, data: dict[str, Any]) -> None: - try: - requests.post(url, data=data, timeout=5) - except (requests.ConnectionError, requests.exceptions.ReadTimeout): - pass # do not push analytics if no network - - -def version_check(): - if not analytics_enabled(): - return - try: - version_data = pkgutil.get_data(__name__, "version.txt") - if not version_data: - raise FileNotFoundError - current_pkg_version = version_data.decode("ascii").strip() - latest_pkg_version = requests.get(url=PKG_VERSION_URL, timeout=3).json()[ - "version" - ] - if StrictVersion(latest_pkg_version) > StrictVersion(current_pkg_version): - print( - f"IMPORTANT: You are using gradio version {current_pkg_version}, " - f"however version {latest_pkg_version} is available, please upgrade." 
- ) - print("--------") - except json.decoder.JSONDecodeError: - warnings.warn("unable to parse version details from package URL.") - except KeyError: - warnings.warn("package URL does not contain version info.") - except Exception: - pass - - -def get_local_ip_address() -> str: - """ - Gets the public IP address or returns the string "No internet connection" if unable - to obtain it or the string "Analytics disabled" if a user has disabled analytics. - Does not make a new request if the IP address has already been obtained in the - same Python session. - """ - if not analytics_enabled(): - return "Analytics disabled" - - if Context.ip_address is None: - try: - ip_address = requests.get( - "https://checkip.amazonaws.com/", timeout=3 - ).text.strip() - except (requests.ConnectionError, requests.exceptions.ReadTimeout): - ip_address = "No internet connection" - Context.ip_address = ip_address - else: - ip_address = Context.ip_address - return ip_address - - -def initiated_analytics(data: dict[str, Any]) -> None: - if not analytics_enabled(): - return - - threading.Thread( - target=_do_analytics_request, - kwargs={ - "url": f"{ANALYTICS_URL}gradio-initiated-analytics/", - "data": {**data, "ip_address": get_local_ip_address()}, - }, - ).start() - - -def launched_analytics(blocks: gradio.Blocks, data: dict[str, Any]) -> None: - if not analytics_enabled(): - return - - blocks_telemetry, inputs_telemetry, outputs_telemetry, targets_telemetry = ( - [], - [], - [], - [], - ) - - from gradio.blocks import BlockContext - - for x in list(blocks.blocks.values()): - blocks_telemetry.append(x.get_block_name()) if isinstance( - x, BlockContext - ) else blocks_telemetry.append(str(x)) - - for x in blocks.dependencies: - targets_telemetry = targets_telemetry + [ - # Sometimes the target can be the Blocks object itself, so we need to check if its in blocks.blocks - str(blocks.blocks[y]) - for y in x["targets"] - if y in blocks.blocks - ] - inputs_telemetry = inputs_telemetry + [ - str(blocks.blocks[y]) for y in x["inputs"] if y in blocks.blocks - ] - outputs_telemetry = outputs_telemetry + [ - str(blocks.blocks[y]) for y in x["outputs"] if y in blocks.blocks - ] - additional_data = { - "version": GRADIO_VERSION, - "is_kaggle": blocks.is_kaggle, - "is_sagemaker": blocks.is_sagemaker, - "using_auth": blocks.auth is not None, - "dev_mode": blocks.dev_mode, - "show_api": blocks.show_api, - "show_error": blocks.show_error, - "title": blocks.title, - "inputs": blocks.input_components - if blocks.mode == "interface" - else inputs_telemetry, - "outputs": blocks.output_components - if blocks.mode == "interface" - else outputs_telemetry, - "targets": targets_telemetry, - "blocks": blocks_telemetry, - "events": [str(x["trigger"]) for x in blocks.dependencies], - } - - data.update(additional_data) - data.update({"ip_address": get_local_ip_address()}) - - threading.Thread( - target=_do_analytics_request, - kwargs={ - "url": f"{ANALYTICS_URL}gradio-launched-telemetry/", - "data": data, - }, - ).start() - - -def integration_analytics(data: dict[str, Any]) -> None: - if not analytics_enabled(): - return - - threading.Thread( - target=_do_analytics_request, - kwargs={ - "url": f"{ANALYTICS_URL}gradio-integration-analytics/", - "data": {**data, "ip_address": get_local_ip_address()}, - }, - ).start() - - -def error_analytics(message: str) -> None: - """ - Send error analytics if there is network - Parameters: - message: Details about error - """ - if not analytics_enabled(): - return - - data = {"ip_address": 
get_local_ip_address(), "error": message} - - threading.Thread( - target=_do_analytics_request, - kwargs={ - "url": f"{ANALYTICS_URL}gradio-error-analytics/", - "data": data, - }, - ).start() diff --git a/spaces/cihyFjudo/fairness-paper-search/Commercial Series Radio Cps R05.10 25 The Ultimate Guide for Radio Enthusiasts.md b/spaces/cihyFjudo/fairness-paper-search/Commercial Series Radio Cps R05.10 25 The Ultimate Guide for Radio Enthusiasts.md deleted file mode 100644 index 3450d499d0995052a8b300690cacb8a296b36621..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Commercial Series Radio Cps R05.10 25 The Ultimate Guide for Radio Enthusiasts.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Commercial Series Radio Cps R05.10 25


    Download ✸✸✸ https://tinurli.com/2uwi7x



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Explicit Sex In Mainstream Erika Lust The Good Girl.avi.rar VERIFIED.md b/spaces/cihyFjudo/fairness-paper-search/Explicit Sex In Mainstream Erika Lust The Good Girl.avi.rar VERIFIED.md deleted file mode 100644 index 1eacff35f10e11be82371355bbfc385c227cd383..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Explicit Sex In Mainstream Erika Lust The Good Girl.avi.rar VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Explicit Sex In Mainstream Erika Lust The Good Girl.avi.rar


    Download File 🗸🗸🗸 https://tinurli.com/2uwkCZ



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Fileice Serial Numbertxt Download 13 How to Bypass Surveys and Get Your Files Fast.md b/spaces/cihyFjudo/fairness-paper-search/Fileice Serial Numbertxt Download 13 How to Bypass Surveys and Get Your Files Fast.md deleted file mode 100644 index c55d1bab35322fa8c0ca5dfe46d892efa32ba11e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Fileice Serial Numbertxt Download 13 How to Bypass Surveys and Get Your Files Fast.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fileice Serial Numbertxt Download 13


Download https://tinurli.com/2uwjnU



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Lbgh Vickie 11 12 Added By Users.md b/spaces/cihyFjudo/fairness-paper-search/Lbgh Vickie 11 12 Added By Users.md deleted file mode 100644 index 975d5d9469de5e3530b6f4915328fd861e105cdf..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Lbgh Vickie 11 12 Added By Users.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Lbgh Vickie 11 12 | added by users


    Download Zip ✪✪✪ https://tinurli.com/2uwjVs



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Manon Thomas Naakt Foto.33 !!LINK!!.md b/spaces/cihyFjudo/fairness-paper-search/Manon Thomas Naakt Foto.33 !!LINK!!.md deleted file mode 100644 index dd2954b63e060a268a937b36fa9c34dcb4cebb83..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Manon Thomas Naakt Foto.33 !!LINK!!.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    Juist manon schatje doe mu ook maar een naakt reportage in de bekende
    mannen bladen zoals Playboy of penthouse want je bent nu gezien op het web.al dan niet gejat foto werk van je zelf zo bloot.Je bent wel een onwijs mooie vrouw van 44 lentes jong.

    -

    Manon Thomas Naakt Foto.33


    Download Zip 🔗 https://tinurli.com/2uwkKH



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Soggade Chinni Nayana Full Movie Download 26 Learn More About the Plot and Characters of the Movie.md b/spaces/cihyFjudo/fairness-paper-search/Soggade Chinni Nayana Full Movie Download 26 Learn More About the Plot and Characters of the Movie.md deleted file mode 100644 index f615d582e6cb60caefe4c6123c135e680e5f4b30..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Soggade Chinni Nayana Full Movie Download 26 Learn More About the Plot and Characters of the Movie.md +++ /dev/null @@ -1,18 +0,0 @@ -
    -

Are you trying to do the same (run SCARM 1.0.0 in Linux/Wine), or do you have another problem? If you have another problem, please restart your PC and then try to download and install the license again in SCARM. If that does not work, write here again.

    -

There is no difference between the old and new, free and paid versions regarding the roads. To select the Roadways library, open the tracks selection menu, navigate to the Objects sub-menu at the bottom and select Roadways from it. Read more about the roads in SCARM here: -stuff/the-roads-in-scarm-part-1-basic-usage/

    -

    SCARM 1.0.0 Crack


DOWNLOAD https://tinurli.com/2uwil9



    -

I do a bit of traveling. I have SCARM installed on two computers: one at home and a laptop. I have one key, and I take the laptop with me when traveling. Is it OK to uninstall and reinstall the key multiple times so I can work on my layout when I am not at home?

    -

The most important benefit that you get with a license (and that cannot be seen directly in the menus) is that you can use an unlimited number of tracks, objects and layers in your SCARM projects. Otherwise, you are limited to only 100 tracks/objects per project and up to 5 layers. All differences between the free and paid versions of SCARM are described in the Feature comparison table here: =extensions&ext=scarm_license_key.

    -

Please look at the screenshots, which you can find under the following Dropbox link:
    =0
    What can I do now?
(Note: today I updated SCARM to 1.6.0. The files were created with an earlier version.)

    -

    Many apologies! You reset the license and I immediately re-registered the old bad license by mistake, so I ended up right back where I started. Would you please reset the license again so I can delete it using the uninstall license option on the Help menu? I also sent another note to scarm@scarm.info saying the same thing.
    Sorry about that.

    -

I purchased this program a couple of years ago. My computer broke and I gave it to a technician to fix. He reinstalled Windows, losing my SCARM!! How can I get my SCARM back? Also, the email address provided to contact the owner is not a valid address; my mail client refuses to accept this email: scarm@scarm.info

    -

    Hi Milen,
    I have purchased a license and have been using SCARM on my laptop more than happily; however, my laptop has crashed and I can no longer use it.
    Is there any way I can transfer the license to my PC? I have already downloaded 1.7.

    -

Recently I moved to a new computer (new mainboard, CPU and GPU).
    I walked through all steps described.
I even did a full reinstall, making sure I deleted every SCARM-related file from my Program Files folder.

    -

    Hello Jan,
    I just checked your new license and it is still pending (not yet activated).
In order to activate it, please follow the instructions in the purchase confirmation email. You can also see them here: =webshop&item=scarm_license_key&step=guide

    -

    -

My computer failed and I cannot start it, so I am now using my laptop for the SCARM program, but it will not run without the license. I cannot start the old computer to uninstall the license so I can use it on the laptop. What do I do?

    -

On 13 February 2007, Hester and Long were contacted by Petit, who informed them of his support and encouraged them to continue development. Plans were then made to reintegrate MediaFork as a direct successor to HandBrake. The MediaFork website and forums were moved to HandBrake's, and the next release was officially named HandBrake.[3] On 24 December 2016, after more than 13 years of development, HandBrake 1.0.0 was released.[4]

    -

Specifies the name of the file containing Diffie-Hellman parameters used for the so-called ephemeral DH family of SSL ciphers. The default is empty, in which case the compiled-in default DH parameters are used. Using custom DH parameters reduces the exposure if an attacker manages to crack the well-known compiled-in DH parameters. You can create your own DH parameters file with the command openssl dhparam -out dhparams.pem 2048.
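    As a minimal sketch of how such a custom parameters file can be generated and then wired into a TLS server, assuming hypothetical file names (dhparams.pem, server.crt, server.key) and using Python's standard ssl module rather than any particular server product:

    # Generate the parameters once on the command line (slow for 2048+ bits):
    #   openssl dhparam -out dhparams.pem 2048

    import ssl

    # Hypothetical file names; substitute your own certificate, key and DH parameters file.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    ctx.load_dh_params("dhparams.pem")  # enables ephemeral DHE ciphers with the custom parameters

    Servers that read their DH parameters from a configuration setting typically just point that setting at the same dhparams.pem file instead.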

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/woff2.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/woff2.py deleted file mode 100644 index 3e247b02e93da590d0e03cd2e019d1763e84f40b..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/woff2.py +++ /dev/null @@ -1,1688 +0,0 @@ -from io import BytesIO -import sys -import array -import struct -from collections import OrderedDict -from fontTools.misc import sstruct -from fontTools.misc.arrayTools import calcIntBounds -from fontTools.misc.textTools import Tag, bytechr, byteord, bytesjoin, pad -from fontTools.ttLib import ( - TTFont, - TTLibError, - getTableModule, - getTableClass, - getSearchRange, -) -from fontTools.ttLib.sfnt import ( - SFNTReader, - SFNTWriter, - DirectoryEntry, - WOFFFlavorData, - sfntDirectoryFormat, - sfntDirectorySize, - SFNTDirectoryEntry, - sfntDirectoryEntrySize, - calcChecksum, -) -from fontTools.ttLib.tables import ttProgram, _g_l_y_f -import logging - - -log = logging.getLogger("fontTools.ttLib.woff2") - -haveBrotli = False -try: - try: - import brotlicffi as brotli - except ImportError: - import brotli - haveBrotli = True -except ImportError: - pass - - -class WOFF2Reader(SFNTReader): - - flavor = "woff2" - - def __init__(self, file, checkChecksums=0, fontNumber=-1): - if not haveBrotli: - log.error( - "The WOFF2 decoder requires the Brotli Python extension, available at: " - "https://github.com/google/brotli" - ) - raise ImportError("No module named brotli") - - self.file = file - - signature = Tag(self.file.read(4)) - if signature != b"wOF2": - raise TTLibError("Not a WOFF2 font (bad signature)") - - self.file.seek(0) - self.DirectoryEntry = WOFF2DirectoryEntry - data = self.file.read(woff2DirectorySize) - if len(data) != woff2DirectorySize: - raise TTLibError("Not a WOFF2 font (not enough data)") - sstruct.unpack(woff2DirectoryFormat, data, self) - - self.tables = OrderedDict() - offset = 0 - for i in range(self.numTables): - entry = self.DirectoryEntry() - entry.fromFile(self.file) - tag = Tag(entry.tag) - self.tables[tag] = entry - entry.offset = offset - offset += entry.length - - totalUncompressedSize = offset - compressedData = self.file.read(self.totalCompressedSize) - decompressedData = brotli.decompress(compressedData) - if len(decompressedData) != totalUncompressedSize: - raise TTLibError( - "unexpected size for decompressed font data: expected %d, found %d" - % (totalUncompressedSize, len(decompressedData)) - ) - self.transformBuffer = BytesIO(decompressedData) - - self.file.seek(0, 2) - if self.length != self.file.tell(): - raise TTLibError("reported 'length' doesn't match the actual file size") - - self.flavorData = WOFF2FlavorData(self) - - # make empty TTFont to store data while reconstructing tables - self.ttFont = TTFont(recalcBBoxes=False, recalcTimestamp=False) - - def __getitem__(self, tag): - """Fetch the raw table data. 
Reconstruct transformed tables.""" - entry = self.tables[Tag(tag)] - if not hasattr(entry, "data"): - if entry.transformed: - entry.data = self.reconstructTable(tag) - else: - entry.data = entry.loadData(self.transformBuffer) - return entry.data - - def reconstructTable(self, tag): - """Reconstruct table named 'tag' from transformed data.""" - entry = self.tables[Tag(tag)] - rawData = entry.loadData(self.transformBuffer) - if tag == "glyf": - # no need to pad glyph data when reconstructing - padding = self.padding if hasattr(self, "padding") else None - data = self._reconstructGlyf(rawData, padding) - elif tag == "loca": - data = self._reconstructLoca() - elif tag == "hmtx": - data = self._reconstructHmtx(rawData) - else: - raise TTLibError("transform for table '%s' is unknown" % tag) - return data - - def _reconstructGlyf(self, data, padding=None): - """Return recostructed glyf table data, and set the corresponding loca's - locations. Optionally pad glyph offsets to the specified number of bytes. - """ - self.ttFont["loca"] = WOFF2LocaTable() - glyfTable = self.ttFont["glyf"] = WOFF2GlyfTable() - glyfTable.reconstruct(data, self.ttFont) - if padding: - glyfTable.padding = padding - data = glyfTable.compile(self.ttFont) - return data - - def _reconstructLoca(self): - """Return reconstructed loca table data.""" - if "loca" not in self.ttFont: - # make sure glyf is reconstructed first - self.tables["glyf"].data = self.reconstructTable("glyf") - locaTable = self.ttFont["loca"] - data = locaTable.compile(self.ttFont) - if len(data) != self.tables["loca"].origLength: - raise TTLibError( - "reconstructed 'loca' table doesn't match original size: " - "expected %d, found %d" % (self.tables["loca"].origLength, len(data)) - ) - return data - - def _reconstructHmtx(self, data): - """Return reconstructed hmtx table data.""" - # Before reconstructing 'hmtx' table we need to parse other tables: - # 'glyf' is required for reconstructing the sidebearings from the glyphs' - # bounding box; 'hhea' is needed for the numberOfHMetrics field. 
- if "glyf" in self.flavorData.transformedTables: - # transformed 'glyf' table is self-contained, thus 'loca' not needed - tableDependencies = ("maxp", "hhea", "glyf") - else: - # decompiling untransformed 'glyf' requires 'loca', which requires 'head' - tableDependencies = ("maxp", "head", "hhea", "loca", "glyf") - for tag in tableDependencies: - self._decompileTable(tag) - hmtxTable = self.ttFont["hmtx"] = WOFF2HmtxTable() - hmtxTable.reconstruct(data, self.ttFont) - data = hmtxTable.compile(self.ttFont) - return data - - def _decompileTable(self, tag): - """Decompile table data and store it inside self.ttFont.""" - data = self[tag] - if self.ttFont.isLoaded(tag): - return self.ttFont[tag] - tableClass = getTableClass(tag) - table = tableClass(tag) - self.ttFont.tables[tag] = table - table.decompile(data, self.ttFont) - - -class WOFF2Writer(SFNTWriter): - - flavor = "woff2" - - def __init__( - self, - file, - numTables, - sfntVersion="\000\001\000\000", - flavor=None, - flavorData=None, - ): - if not haveBrotli: - log.error( - "The WOFF2 encoder requires the Brotli Python extension, available at: " - "https://github.com/google/brotli" - ) - raise ImportError("No module named brotli") - - self.file = file - self.numTables = numTables - self.sfntVersion = Tag(sfntVersion) - self.flavorData = WOFF2FlavorData(data=flavorData) - - self.directoryFormat = woff2DirectoryFormat - self.directorySize = woff2DirectorySize - self.DirectoryEntry = WOFF2DirectoryEntry - - self.signature = Tag("wOF2") - - self.nextTableOffset = 0 - self.transformBuffer = BytesIO() - - self.tables = OrderedDict() - - # make empty TTFont to store data while normalising and transforming tables - self.ttFont = TTFont(recalcBBoxes=False, recalcTimestamp=False) - - def __setitem__(self, tag, data): - """Associate new entry named 'tag' with raw table data.""" - if tag in self.tables: - raise TTLibError("cannot rewrite '%s' table" % tag) - if tag == "DSIG": - # always drop DSIG table, since the encoding process can invalidate it - self.numTables -= 1 - return - - entry = self.DirectoryEntry() - entry.tag = Tag(tag) - entry.flags = getKnownTagIndex(entry.tag) - # WOFF2 table data are written to disk only on close(), after all tags - # have been specified - entry.data = data - - self.tables[tag] = entry - - def close(self): - """All tags must have been specified. Now write the table data and directory.""" - if len(self.tables) != self.numTables: - raise TTLibError( - "wrong number of tables; expected %d, found %d" - % (self.numTables, len(self.tables)) - ) - - if self.sfntVersion in ("\x00\x01\x00\x00", "true"): - isTrueType = True - elif self.sfntVersion == "OTTO": - isTrueType = False - else: - raise TTLibError("Not a TrueType or OpenType font (bad sfntVersion)") - - # The WOFF2 spec no longer requires the glyph offsets to be 4-byte aligned. - # However, the reference WOFF2 implementation still fails to reconstruct - # 'unpadded' glyf tables, therefore we need to 'normalise' them. - # See: - # https://github.com/khaledhosny/ots/issues/60 - # https://github.com/google/woff2/issues/15 - if ( - isTrueType - and "glyf" in self.flavorData.transformedTables - and "glyf" in self.tables - ): - self._normaliseGlyfAndLoca(padding=4) - self._setHeadTransformFlag() - - # To pass the legacy OpenType Sanitiser currently included in browsers, - # we must sort the table directory and data alphabetically by tag. 
- # See: - # https://github.com/google/woff2/pull/3 - # https://lists.w3.org/Archives/Public/public-webfonts-wg/2015Mar/0000.html - # - # 2023: We rely on this in _transformTables where we expect that - # "loca" comes after "glyf" table. - self.tables = OrderedDict(sorted(self.tables.items())) - - self.totalSfntSize = self._calcSFNTChecksumsLengthsAndOffsets() - - fontData = self._transformTables() - compressedFont = brotli.compress(fontData, mode=brotli.MODE_FONT) - - self.totalCompressedSize = len(compressedFont) - self.length = self._calcTotalSize() - self.majorVersion, self.minorVersion = self._getVersion() - self.reserved = 0 - - directory = self._packTableDirectory() - self.file.seek(0) - self.file.write(pad(directory + compressedFont, size=4)) - self._writeFlavorData() - - def _normaliseGlyfAndLoca(self, padding=4): - """Recompile glyf and loca tables, aligning glyph offsets to multiples of - 'padding' size. Update the head table's 'indexToLocFormat' accordingly while - compiling loca. - """ - if self.sfntVersion == "OTTO": - return - - for tag in ("maxp", "head", "loca", "glyf", "fvar"): - if tag in self.tables: - self._decompileTable(tag) - self.ttFont["glyf"].padding = padding - for tag in ("glyf", "loca"): - self._compileTable(tag) - - def _setHeadTransformFlag(self): - """Set bit 11 of 'head' table flags to indicate that the font has undergone - a lossless modifying transform. Re-compile head table data.""" - self._decompileTable("head") - self.ttFont["head"].flags |= 1 << 11 - self._compileTable("head") - - def _decompileTable(self, tag): - """Fetch table data, decompile it, and store it inside self.ttFont.""" - tag = Tag(tag) - if tag not in self.tables: - raise TTLibError("missing required table: %s" % tag) - if self.ttFont.isLoaded(tag): - return - data = self.tables[tag].data - if tag == "loca": - tableClass = WOFF2LocaTable - elif tag == "glyf": - tableClass = WOFF2GlyfTable - elif tag == "hmtx": - tableClass = WOFF2HmtxTable - else: - tableClass = getTableClass(tag) - table = tableClass(tag) - self.ttFont.tables[tag] = table - table.decompile(data, self.ttFont) - - def _compileTable(self, tag): - """Compile table and store it in its 'data' attribute.""" - self.tables[tag].data = self.ttFont[tag].compile(self.ttFont) - - def _calcSFNTChecksumsLengthsAndOffsets(self): - """Compute the 'original' SFNT checksums, lengths and offsets for checksum - adjustment calculation. Return the total size of the uncompressed font. - """ - offset = sfntDirectorySize + sfntDirectoryEntrySize * len(self.tables) - for tag, entry in self.tables.items(): - data = entry.data - entry.origOffset = offset - entry.origLength = len(data) - if tag == "head": - entry.checkSum = calcChecksum(data[:8] + b"\0\0\0\0" + data[12:]) - else: - entry.checkSum = calcChecksum(data) - offset += (entry.origLength + 3) & ~3 - return offset - - def _transformTables(self): - """Return transformed font data.""" - transformedTables = self.flavorData.transformedTables - for tag, entry in self.tables.items(): - data = None - if tag in transformedTables: - data = self.transformTable(tag) - if data is not None: - entry.transformed = True - if data is None: - if tag == "glyf": - # Currently we always sort table tags so - # 'loca' comes after 'glyf'. 
- transformedTables.discard("loca") - # pass-through the table data without transformation - data = entry.data - entry.transformed = False - entry.offset = self.nextTableOffset - entry.saveData(self.transformBuffer, data) - self.nextTableOffset += entry.length - self.writeMasterChecksum() - fontData = self.transformBuffer.getvalue() - return fontData - - def transformTable(self, tag): - """Return transformed table data, or None if some pre-conditions aren't - met -- in which case, the non-transformed table data will be used. - """ - if tag == "loca": - data = b"" - elif tag == "glyf": - for tag in ("maxp", "head", "loca", "glyf"): - self._decompileTable(tag) - glyfTable = self.ttFont["glyf"] - data = glyfTable.transform(self.ttFont) - elif tag == "hmtx": - if "glyf" not in self.tables: - return - for tag in ("maxp", "head", "hhea", "loca", "glyf", "hmtx"): - self._decompileTable(tag) - hmtxTable = self.ttFont["hmtx"] - data = hmtxTable.transform(self.ttFont) # can be None - else: - raise TTLibError("Transform for table '%s' is unknown" % tag) - return data - - def _calcMasterChecksum(self): - """Calculate checkSumAdjustment.""" - tags = list(self.tables.keys()) - checksums = [] - for i in range(len(tags)): - checksums.append(self.tables[tags[i]].checkSum) - - # Create a SFNT directory for checksum calculation purposes - self.searchRange, self.entrySelector, self.rangeShift = getSearchRange( - self.numTables, 16 - ) - directory = sstruct.pack(sfntDirectoryFormat, self) - tables = sorted(self.tables.items()) - for tag, entry in tables: - sfntEntry = SFNTDirectoryEntry() - sfntEntry.tag = entry.tag - sfntEntry.checkSum = entry.checkSum - sfntEntry.offset = entry.origOffset - sfntEntry.length = entry.origLength - directory = directory + sfntEntry.toString() - - directory_end = sfntDirectorySize + len(self.tables) * sfntDirectoryEntrySize - assert directory_end == len(directory) - - checksums.append(calcChecksum(directory)) - checksum = sum(checksums) & 0xFFFFFFFF - # BiboAfba! 
- checksumadjustment = (0xB1B0AFBA - checksum) & 0xFFFFFFFF - return checksumadjustment - - def writeMasterChecksum(self): - """Write checkSumAdjustment to the transformBuffer.""" - checksumadjustment = self._calcMasterChecksum() - self.transformBuffer.seek(self.tables["head"].offset + 8) - self.transformBuffer.write(struct.pack(">L", checksumadjustment)) - - def _calcTotalSize(self): - """Calculate total size of WOFF2 font, including any meta- and/or private data.""" - offset = self.directorySize - for entry in self.tables.values(): - offset += len(entry.toString()) - offset += self.totalCompressedSize - offset = (offset + 3) & ~3 - offset = self._calcFlavorDataOffsetsAndSize(offset) - return offset - - def _calcFlavorDataOffsetsAndSize(self, start): - """Calculate offsets and lengths for any meta- and/or private data.""" - offset = start - data = self.flavorData - if data.metaData: - self.metaOrigLength = len(data.metaData) - self.metaOffset = offset - self.compressedMetaData = brotli.compress( - data.metaData, mode=brotli.MODE_TEXT - ) - self.metaLength = len(self.compressedMetaData) - offset += self.metaLength - else: - self.metaOffset = self.metaLength = self.metaOrigLength = 0 - self.compressedMetaData = b"" - if data.privData: - # make sure private data is padded to 4-byte boundary - offset = (offset + 3) & ~3 - self.privOffset = offset - self.privLength = len(data.privData) - offset += self.privLength - else: - self.privOffset = self.privLength = 0 - return offset - - def _getVersion(self): - """Return the WOFF2 font's (majorVersion, minorVersion) tuple.""" - data = self.flavorData - if data.majorVersion is not None and data.minorVersion is not None: - return data.majorVersion, data.minorVersion - else: - # if None, return 'fontRevision' from 'head' table - if "head" in self.tables: - return struct.unpack(">HH", self.tables["head"].data[4:8]) - else: - return 0, 0 - - def _packTableDirectory(self): - """Return WOFF2 table directory data.""" - directory = sstruct.pack(self.directoryFormat, self) - for entry in self.tables.values(): - directory = directory + entry.toString() - return directory - - def _writeFlavorData(self): - """Write metadata and/or private data using appropiate padding.""" - compressedMetaData = self.compressedMetaData - privData = self.flavorData.privData - if compressedMetaData and privData: - compressedMetaData = pad(compressedMetaData, size=4) - if compressedMetaData: - self.file.seek(self.metaOffset) - assert self.file.tell() == self.metaOffset - self.file.write(compressedMetaData) - if privData: - self.file.seek(self.privOffset) - assert self.file.tell() == self.privOffset - self.file.write(privData) - - def reordersTables(self): - return True - - -# -- woff2 directory helpers and cruft - -woff2DirectoryFormat = """ - > # big endian - signature: 4s # "wOF2" - sfntVersion: 4s - length: L # total woff2 file size - numTables: H # number of tables - reserved: H # set to 0 - totalSfntSize: L # uncompressed size - totalCompressedSize: L # compressed size - majorVersion: H # major version of WOFF file - minorVersion: H # minor version of WOFF file - metaOffset: L # offset to metadata block - metaLength: L # length of compressed metadata - metaOrigLength: L # length of uncompressed metadata - privOffset: L # offset to private data block - privLength: L # length of private data block -""" - -woff2DirectorySize = sstruct.calcsize(woff2DirectoryFormat) - -woff2KnownTags = ( - "cmap", - "head", - "hhea", - "hmtx", - "maxp", - "name", - "OS/2", - "post", - "cvt ", - 
"fpgm", - "glyf", - "loca", - "prep", - "CFF ", - "VORG", - "EBDT", - "EBLC", - "gasp", - "hdmx", - "kern", - "LTSH", - "PCLT", - "VDMX", - "vhea", - "vmtx", - "BASE", - "GDEF", - "GPOS", - "GSUB", - "EBSC", - "JSTF", - "MATH", - "CBDT", - "CBLC", - "COLR", - "CPAL", - "SVG ", - "sbix", - "acnt", - "avar", - "bdat", - "bloc", - "bsln", - "cvar", - "fdsc", - "feat", - "fmtx", - "fvar", - "gvar", - "hsty", - "just", - "lcar", - "mort", - "morx", - "opbd", - "prop", - "trak", - "Zapf", - "Silf", - "Glat", - "Gloc", - "Feat", - "Sill", -) - -woff2FlagsFormat = """ - > # big endian - flags: B # table type and flags -""" - -woff2FlagsSize = sstruct.calcsize(woff2FlagsFormat) - -woff2UnknownTagFormat = """ - > # big endian - tag: 4s # 4-byte tag (optional) -""" - -woff2UnknownTagSize = sstruct.calcsize(woff2UnknownTagFormat) - -woff2UnknownTagIndex = 0x3F - -woff2Base128MaxSize = 5 -woff2DirectoryEntryMaxSize = ( - woff2FlagsSize + woff2UnknownTagSize + 2 * woff2Base128MaxSize -) - -woff2TransformedTableTags = ("glyf", "loca") - -woff2GlyfTableFormat = """ - > # big endian - version: H # = 0x0000 - optionFlags: H # Bit 0: we have overlapSimpleBitmap[], Bits 1-15: reserved - numGlyphs: H # Number of glyphs - indexFormat: H # Offset format for loca table - nContourStreamSize: L # Size of nContour stream - nPointsStreamSize: L # Size of nPoints stream - flagStreamSize: L # Size of flag stream - glyphStreamSize: L # Size of glyph stream - compositeStreamSize: L # Size of composite stream - bboxStreamSize: L # Comnined size of bboxBitmap and bboxStream - instructionStreamSize: L # Size of instruction stream -""" - -woff2GlyfTableFormatSize = sstruct.calcsize(woff2GlyfTableFormat) - -bboxFormat = """ - > # big endian - xMin: h - yMin: h - xMax: h - yMax: h -""" - -woff2OverlapSimpleBitmapFlag = 0x0001 - - -def getKnownTagIndex(tag): - """Return index of 'tag' in woff2KnownTags list. 
Return 63 if not found.""" - for i in range(len(woff2KnownTags)): - if tag == woff2KnownTags[i]: - return i - return woff2UnknownTagIndex - - -class WOFF2DirectoryEntry(DirectoryEntry): - def fromFile(self, file): - pos = file.tell() - data = file.read(woff2DirectoryEntryMaxSize) - left = self.fromString(data) - consumed = len(data) - len(left) - file.seek(pos + consumed) - - def fromString(self, data): - if len(data) < 1: - raise TTLibError("can't read table 'flags': not enough data") - dummy, data = sstruct.unpack2(woff2FlagsFormat, data, self) - if self.flags & 0x3F == 0x3F: - # if bits [0..5] of the flags byte == 63, read a 4-byte arbitrary tag value - if len(data) < woff2UnknownTagSize: - raise TTLibError("can't read table 'tag': not enough data") - dummy, data = sstruct.unpack2(woff2UnknownTagFormat, data, self) - else: - # otherwise, tag is derived from a fixed 'Known Tags' table - self.tag = woff2KnownTags[self.flags & 0x3F] - self.tag = Tag(self.tag) - self.origLength, data = unpackBase128(data) - self.length = self.origLength - if self.transformed: - self.length, data = unpackBase128(data) - if self.tag == "loca" and self.length != 0: - raise TTLibError("the transformLength of the 'loca' table must be 0") - # return left over data - return data - - def toString(self): - data = bytechr(self.flags) - if (self.flags & 0x3F) == 0x3F: - data += struct.pack(">4s", self.tag.tobytes()) - data += packBase128(self.origLength) - if self.transformed: - data += packBase128(self.length) - return data - - @property - def transformVersion(self): - """Return bits 6-7 of table entry's flags, which indicate the preprocessing - transformation version number (between 0 and 3). - """ - return self.flags >> 6 - - @transformVersion.setter - def transformVersion(self, value): - assert 0 <= value <= 3 - self.flags |= value << 6 - - @property - def transformed(self): - """Return True if the table has any transformation, else return False.""" - # For all tables in a font, except for 'glyf' and 'loca', the transformation - # version 0 indicates the null transform (where the original table data is - # passed directly to the Brotli compressor). For 'glyf' and 'loca' tables, - # transformation version 3 indicates the null transform - if self.tag in {"glyf", "loca"}: - return self.transformVersion != 3 - else: - return self.transformVersion != 0 - - @transformed.setter - def transformed(self, booleanValue): - # here we assume that a non-null transform means version 0 for 'glyf' and - # 'loca' and 1 for every other table (e.g. hmtx); but that may change as - # new transformation formats are introduced in the future (if ever). - if self.tag in {"glyf", "loca"}: - self.transformVersion = 3 if not booleanValue else 0 - else: - self.transformVersion = int(booleanValue) - - -class WOFF2LocaTable(getTableClass("loca")): - """Same as parent class. The only difference is that it attempts to preserve - the 'indexFormat' as encoded in the WOFF2 glyf table. 
- """ - - def __init__(self, tag=None): - self.tableTag = Tag(tag or "loca") - - def compile(self, ttFont): - try: - max_location = max(self.locations) - except AttributeError: - self.set([]) - max_location = 0 - if "glyf" in ttFont and hasattr(ttFont["glyf"], "indexFormat"): - # copile loca using the indexFormat specified in the WOFF2 glyf table - indexFormat = ttFont["glyf"].indexFormat - if indexFormat == 0: - if max_location >= 0x20000: - raise TTLibError("indexFormat is 0 but local offsets > 0x20000") - if not all(l % 2 == 0 for l in self.locations): - raise TTLibError( - "indexFormat is 0 but local offsets not multiples of 2" - ) - locations = array.array("H") - for i in range(len(self.locations)): - locations.append(self.locations[i] // 2) - else: - locations = array.array("I", self.locations) - if sys.byteorder != "big": - locations.byteswap() - data = locations.tobytes() - else: - # use the most compact indexFormat given the current glyph offsets - data = super(WOFF2LocaTable, self).compile(ttFont) - return data - - -class WOFF2GlyfTable(getTableClass("glyf")): - """Decoder/Encoder for WOFF2 'glyf' table transform.""" - - subStreams = ( - "nContourStream", - "nPointsStream", - "flagStream", - "glyphStream", - "compositeStream", - "bboxStream", - "instructionStream", - ) - - def __init__(self, tag=None): - self.tableTag = Tag(tag or "glyf") - - def reconstruct(self, data, ttFont): - """Decompile transformed 'glyf' data.""" - inputDataSize = len(data) - - if inputDataSize < woff2GlyfTableFormatSize: - raise TTLibError("not enough 'glyf' data") - dummy, data = sstruct.unpack2(woff2GlyfTableFormat, data, self) - offset = woff2GlyfTableFormatSize - - for stream in self.subStreams: - size = getattr(self, stream + "Size") - setattr(self, stream, data[:size]) - data = data[size:] - offset += size - - hasOverlapSimpleBitmap = self.optionFlags & woff2OverlapSimpleBitmapFlag - self.overlapSimpleBitmap = None - if hasOverlapSimpleBitmap: - overlapSimpleBitmapSize = (self.numGlyphs + 7) >> 3 - self.overlapSimpleBitmap = array.array("B", data[:overlapSimpleBitmapSize]) - offset += overlapSimpleBitmapSize - - if offset != inputDataSize: - raise TTLibError( - "incorrect size of transformed 'glyf' table: expected %d, received %d bytes" - % (offset, inputDataSize) - ) - - bboxBitmapSize = ((self.numGlyphs + 31) >> 5) << 2 - bboxBitmap = self.bboxStream[:bboxBitmapSize] - self.bboxBitmap = array.array("B", bboxBitmap) - self.bboxStream = self.bboxStream[bboxBitmapSize:] - - self.nContourStream = array.array("h", self.nContourStream) - if sys.byteorder != "big": - self.nContourStream.byteswap() - assert len(self.nContourStream) == self.numGlyphs - - if "head" in ttFont: - ttFont["head"].indexToLocFormat = self.indexFormat - try: - self.glyphOrder = ttFont.getGlyphOrder() - except: - self.glyphOrder = None - if self.glyphOrder is None: - self.glyphOrder = [".notdef"] - self.glyphOrder.extend(["glyph%.5d" % i for i in range(1, self.numGlyphs)]) - else: - if len(self.glyphOrder) != self.numGlyphs: - raise TTLibError( - "incorrect glyphOrder: expected %d glyphs, found %d" - % (len(self.glyphOrder), self.numGlyphs) - ) - - glyphs = self.glyphs = {} - for glyphID, glyphName in enumerate(self.glyphOrder): - glyph = self._decodeGlyph(glyphID) - glyphs[glyphName] = glyph - - def transform(self, ttFont): - """Return transformed 'glyf' data""" - self.numGlyphs = len(self.glyphs) - assert len(self.glyphOrder) == self.numGlyphs - if "maxp" in ttFont: - ttFont["maxp"].numGlyphs = self.numGlyphs - self.indexFormat 
= ttFont["head"].indexToLocFormat - - for stream in self.subStreams: - setattr(self, stream, b"") - bboxBitmapSize = ((self.numGlyphs + 31) >> 5) << 2 - self.bboxBitmap = array.array("B", [0] * bboxBitmapSize) - - self.overlapSimpleBitmap = array.array("B", [0] * ((self.numGlyphs + 7) >> 3)) - for glyphID in range(self.numGlyphs): - try: - self._encodeGlyph(glyphID) - except NotImplementedError: - return None - hasOverlapSimpleBitmap = any(self.overlapSimpleBitmap) - - self.bboxStream = self.bboxBitmap.tobytes() + self.bboxStream - for stream in self.subStreams: - setattr(self, stream + "Size", len(getattr(self, stream))) - self.version = 0 - self.optionFlags = 0 - if hasOverlapSimpleBitmap: - self.optionFlags |= woff2OverlapSimpleBitmapFlag - data = sstruct.pack(woff2GlyfTableFormat, self) - data += bytesjoin([getattr(self, s) for s in self.subStreams]) - if hasOverlapSimpleBitmap: - data += self.overlapSimpleBitmap.tobytes() - return data - - def _decodeGlyph(self, glyphID): - glyph = getTableModule("glyf").Glyph() - glyph.numberOfContours = self.nContourStream[glyphID] - if glyph.numberOfContours == 0: - return glyph - elif glyph.isComposite(): - self._decodeComponents(glyph) - else: - self._decodeCoordinates(glyph) - self._decodeOverlapSimpleFlag(glyph, glyphID) - self._decodeBBox(glyphID, glyph) - return glyph - - def _decodeComponents(self, glyph): - data = self.compositeStream - glyph.components = [] - more = 1 - haveInstructions = 0 - while more: - component = getTableModule("glyf").GlyphComponent() - more, haveInstr, data = component.decompile(data, self) - haveInstructions = haveInstructions | haveInstr - glyph.components.append(component) - self.compositeStream = data - if haveInstructions: - self._decodeInstructions(glyph) - - def _decodeCoordinates(self, glyph): - data = self.nPointsStream - endPtsOfContours = [] - endPoint = -1 - for i in range(glyph.numberOfContours): - ptsOfContour, data = unpack255UShort(data) - endPoint += ptsOfContour - endPtsOfContours.append(endPoint) - glyph.endPtsOfContours = endPtsOfContours - self.nPointsStream = data - self._decodeTriplets(glyph) - self._decodeInstructions(glyph) - - def _decodeOverlapSimpleFlag(self, glyph, glyphID): - if self.overlapSimpleBitmap is None or glyph.numberOfContours <= 0: - return - byte = glyphID >> 3 - bit = glyphID & 7 - if self.overlapSimpleBitmap[byte] & (0x80 >> bit): - glyph.flags[0] |= _g_l_y_f.flagOverlapSimple - - def _decodeInstructions(self, glyph): - glyphStream = self.glyphStream - instructionStream = self.instructionStream - instructionLength, glyphStream = unpack255UShort(glyphStream) - glyph.program = ttProgram.Program() - glyph.program.fromBytecode(instructionStream[:instructionLength]) - self.glyphStream = glyphStream - self.instructionStream = instructionStream[instructionLength:] - - def _decodeBBox(self, glyphID, glyph): - haveBBox = bool(self.bboxBitmap[glyphID >> 3] & (0x80 >> (glyphID & 7))) - if glyph.isComposite() and not haveBBox: - raise TTLibError("no bbox values for composite glyph %d" % glyphID) - if haveBBox: - dummy, self.bboxStream = sstruct.unpack2(bboxFormat, self.bboxStream, glyph) - else: - glyph.recalcBounds(self) - - def _decodeTriplets(self, glyph): - def withSign(flag, baseval): - assert 0 <= baseval and baseval < 65536, "integer overflow" - return baseval if flag & 1 else -baseval - - nPoints = glyph.endPtsOfContours[-1] + 1 - flagSize = nPoints - if flagSize > len(self.flagStream): - raise TTLibError("not enough 'flagStream' data") - flagsData = self.flagStream[:flagSize] 
- self.flagStream = self.flagStream[flagSize:] - flags = array.array("B", flagsData) - - triplets = array.array("B", self.glyphStream) - nTriplets = len(triplets) - assert nPoints <= nTriplets - - x = 0 - y = 0 - glyph.coordinates = getTableModule("glyf").GlyphCoordinates.zeros(nPoints) - glyph.flags = array.array("B") - tripletIndex = 0 - for i in range(nPoints): - flag = flags[i] - onCurve = not bool(flag >> 7) - flag &= 0x7F - if flag < 84: - nBytes = 1 - elif flag < 120: - nBytes = 2 - elif flag < 124: - nBytes = 3 - else: - nBytes = 4 - assert (tripletIndex + nBytes) <= nTriplets - if flag < 10: - dx = 0 - dy = withSign(flag, ((flag & 14) << 7) + triplets[tripletIndex]) - elif flag < 20: - dx = withSign(flag, (((flag - 10) & 14) << 7) + triplets[tripletIndex]) - dy = 0 - elif flag < 84: - b0 = flag - 20 - b1 = triplets[tripletIndex] - dx = withSign(flag, 1 + (b0 & 0x30) + (b1 >> 4)) - dy = withSign(flag >> 1, 1 + ((b0 & 0x0C) << 2) + (b1 & 0x0F)) - elif flag < 120: - b0 = flag - 84 - dx = withSign(flag, 1 + ((b0 // 12) << 8) + triplets[tripletIndex]) - dy = withSign( - flag >> 1, 1 + (((b0 % 12) >> 2) << 8) + triplets[tripletIndex + 1] - ) - elif flag < 124: - b2 = triplets[tripletIndex + 1] - dx = withSign(flag, (triplets[tripletIndex] << 4) + (b2 >> 4)) - dy = withSign( - flag >> 1, ((b2 & 0x0F) << 8) + triplets[tripletIndex + 2] - ) - else: - dx = withSign( - flag, (triplets[tripletIndex] << 8) + triplets[tripletIndex + 1] - ) - dy = withSign( - flag >> 1, - (triplets[tripletIndex + 2] << 8) + triplets[tripletIndex + 3], - ) - tripletIndex += nBytes - x += dx - y += dy - glyph.coordinates[i] = (x, y) - glyph.flags.append(int(onCurve)) - bytesConsumed = tripletIndex - self.glyphStream = self.glyphStream[bytesConsumed:] - - def _encodeGlyph(self, glyphID): - glyphName = self.getGlyphName(glyphID) - glyph = self[glyphName] - self.nContourStream += struct.pack(">h", glyph.numberOfContours) - if glyph.numberOfContours == 0: - return - elif glyph.isComposite(): - self._encodeComponents(glyph) - elif glyph.isVarComposite(): - raise NotImplementedError - else: - self._encodeCoordinates(glyph) - self._encodeOverlapSimpleFlag(glyph, glyphID) - self._encodeBBox(glyphID, glyph) - - def _encodeComponents(self, glyph): - lastcomponent = len(glyph.components) - 1 - more = 1 - haveInstructions = 0 - for i in range(len(glyph.components)): - if i == lastcomponent: - haveInstructions = hasattr(glyph, "program") - more = 0 - component = glyph.components[i] - self.compositeStream += component.compile(more, haveInstructions, self) - if haveInstructions: - self._encodeInstructions(glyph) - - def _encodeCoordinates(self, glyph): - lastEndPoint = -1 - if _g_l_y_f.flagCubic in glyph.flags: - raise NotImplementedError - for endPoint in glyph.endPtsOfContours: - ptsOfContour = endPoint - lastEndPoint - self.nPointsStream += pack255UShort(ptsOfContour) - lastEndPoint = endPoint - self._encodeTriplets(glyph) - self._encodeInstructions(glyph) - - def _encodeOverlapSimpleFlag(self, glyph, glyphID): - if glyph.numberOfContours <= 0: - return - if glyph.flags[0] & _g_l_y_f.flagOverlapSimple: - byte = glyphID >> 3 - bit = glyphID & 7 - self.overlapSimpleBitmap[byte] |= 0x80 >> bit - - def _encodeInstructions(self, glyph): - instructions = glyph.program.getBytecode() - self.glyphStream += pack255UShort(len(instructions)) - self.instructionStream += instructions - - def _encodeBBox(self, glyphID, glyph): - assert glyph.numberOfContours != 0, "empty glyph has no bbox" - if not glyph.isComposite(): - # for simple 
glyphs, compare the encoded bounding box info with the calculated - # values, and if they match omit the bounding box info - currentBBox = glyph.xMin, glyph.yMin, glyph.xMax, glyph.yMax - calculatedBBox = calcIntBounds(glyph.coordinates) - if currentBBox == calculatedBBox: - return - self.bboxBitmap[glyphID >> 3] |= 0x80 >> (glyphID & 7) - self.bboxStream += sstruct.pack(bboxFormat, glyph) - - def _encodeTriplets(self, glyph): - assert len(glyph.coordinates) == len(glyph.flags) - coordinates = glyph.coordinates.copy() - coordinates.absoluteToRelative() - - flags = array.array("B") - triplets = array.array("B") - for i in range(len(coordinates)): - onCurve = glyph.flags[i] & _g_l_y_f.flagOnCurve - x, y = coordinates[i] - absX = abs(x) - absY = abs(y) - onCurveBit = 0 if onCurve else 128 - xSignBit = 0 if (x < 0) else 1 - ySignBit = 0 if (y < 0) else 1 - xySignBits = xSignBit + 2 * ySignBit - - if x == 0 and absY < 1280: - flags.append(onCurveBit + ((absY & 0xF00) >> 7) + ySignBit) - triplets.append(absY & 0xFF) - elif y == 0 and absX < 1280: - flags.append(onCurveBit + 10 + ((absX & 0xF00) >> 7) + xSignBit) - triplets.append(absX & 0xFF) - elif absX < 65 and absY < 65: - flags.append( - onCurveBit - + 20 - + ((absX - 1) & 0x30) - + (((absY - 1) & 0x30) >> 2) - + xySignBits - ) - triplets.append((((absX - 1) & 0xF) << 4) | ((absY - 1) & 0xF)) - elif absX < 769 and absY < 769: - flags.append( - onCurveBit - + 84 - + 12 * (((absX - 1) & 0x300) >> 8) - + (((absY - 1) & 0x300) >> 6) - + xySignBits - ) - triplets.append((absX - 1) & 0xFF) - triplets.append((absY - 1) & 0xFF) - elif absX < 4096 and absY < 4096: - flags.append(onCurveBit + 120 + xySignBits) - triplets.append(absX >> 4) - triplets.append(((absX & 0xF) << 4) | (absY >> 8)) - triplets.append(absY & 0xFF) - else: - flags.append(onCurveBit + 124 + xySignBits) - triplets.append(absX >> 8) - triplets.append(absX & 0xFF) - triplets.append(absY >> 8) - triplets.append(absY & 0xFF) - - self.flagStream += flags.tobytes() - self.glyphStream += triplets.tobytes() - - -class WOFF2HmtxTable(getTableClass("hmtx")): - def __init__(self, tag=None): - self.tableTag = Tag(tag or "hmtx") - - def reconstruct(self, data, ttFont): - (flags,) = struct.unpack(">B", data[:1]) - data = data[1:] - if flags & 0b11111100 != 0: - raise TTLibError("Bits 2-7 of '%s' flags are reserved" % self.tableTag) - - # When bit 0 is _not_ set, the lsb[] array is present - hasLsbArray = flags & 1 == 0 - # When bit 1 is _not_ set, the leftSideBearing[] array is present - hasLeftSideBearingArray = flags & 2 == 0 - if hasLsbArray and hasLeftSideBearingArray: - raise TTLibError( - "either bits 0 or 1 (or both) must set in transformed '%s' flags" - % self.tableTag - ) - - glyfTable = ttFont["glyf"] - headerTable = ttFont["hhea"] - glyphOrder = glyfTable.glyphOrder - numGlyphs = len(glyphOrder) - numberOfHMetrics = min(int(headerTable.numberOfHMetrics), numGlyphs) - - assert len(data) >= 2 * numberOfHMetrics - advanceWidthArray = array.array("H", data[: 2 * numberOfHMetrics]) - if sys.byteorder != "big": - advanceWidthArray.byteswap() - data = data[2 * numberOfHMetrics :] - - if hasLsbArray: - assert len(data) >= 2 * numberOfHMetrics - lsbArray = array.array("h", data[: 2 * numberOfHMetrics]) - if sys.byteorder != "big": - lsbArray.byteswap() - data = data[2 * numberOfHMetrics :] - else: - # compute (proportional) glyphs' lsb from their xMin - lsbArray = array.array("h") - for i, glyphName in enumerate(glyphOrder): - if i >= numberOfHMetrics: - break - glyph = glyfTable[glyphName] - 
xMin = getattr(glyph, "xMin", 0) - lsbArray.append(xMin) - - numberOfSideBearings = numGlyphs - numberOfHMetrics - if hasLeftSideBearingArray: - assert len(data) >= 2 * numberOfSideBearings - leftSideBearingArray = array.array("h", data[: 2 * numberOfSideBearings]) - if sys.byteorder != "big": - leftSideBearingArray.byteswap() - data = data[2 * numberOfSideBearings :] - else: - # compute (monospaced) glyphs' leftSideBearing from their xMin - leftSideBearingArray = array.array("h") - for i, glyphName in enumerate(glyphOrder): - if i < numberOfHMetrics: - continue - glyph = glyfTable[glyphName] - xMin = getattr(glyph, "xMin", 0) - leftSideBearingArray.append(xMin) - - if data: - raise TTLibError("too much '%s' table data" % self.tableTag) - - self.metrics = {} - for i in range(numberOfHMetrics): - glyphName = glyphOrder[i] - advanceWidth, lsb = advanceWidthArray[i], lsbArray[i] - self.metrics[glyphName] = (advanceWidth, lsb) - lastAdvance = advanceWidthArray[-1] - for i in range(numberOfSideBearings): - glyphName = glyphOrder[i + numberOfHMetrics] - self.metrics[glyphName] = (lastAdvance, leftSideBearingArray[i]) - - def transform(self, ttFont): - glyphOrder = ttFont.getGlyphOrder() - glyf = ttFont["glyf"] - hhea = ttFont["hhea"] - numberOfHMetrics = hhea.numberOfHMetrics - - # check if any of the proportional glyphs has left sidebearings that - # differ from their xMin bounding box values. - hasLsbArray = False - for i in range(numberOfHMetrics): - glyphName = glyphOrder[i] - lsb = self.metrics[glyphName][1] - if lsb != getattr(glyf[glyphName], "xMin", 0): - hasLsbArray = True - break - - # do the same for the monospaced glyphs (if any) at the end of hmtx table - hasLeftSideBearingArray = False - for i in range(numberOfHMetrics, len(glyphOrder)): - glyphName = glyphOrder[i] - lsb = self.metrics[glyphName][1] - if lsb != getattr(glyf[glyphName], "xMin", 0): - hasLeftSideBearingArray = True - break - - # if we need to encode both sidebearings arrays, then no transformation is - # applicable, and we must use the untransformed hmtx data - if hasLsbArray and hasLeftSideBearingArray: - return - - # set bit 0 and 1 when the respective arrays are _not_ present - flags = 0 - if not hasLsbArray: - flags |= 1 << 0 - if not hasLeftSideBearingArray: - flags |= 1 << 1 - - data = struct.pack(">B", flags) - - advanceWidthArray = array.array( - "H", - [ - self.metrics[glyphName][0] - for i, glyphName in enumerate(glyphOrder) - if i < numberOfHMetrics - ], - ) - if sys.byteorder != "big": - advanceWidthArray.byteswap() - data += advanceWidthArray.tobytes() - - if hasLsbArray: - lsbArray = array.array( - "h", - [ - self.metrics[glyphName][1] - for i, glyphName in enumerate(glyphOrder) - if i < numberOfHMetrics - ], - ) - if sys.byteorder != "big": - lsbArray.byteswap() - data += lsbArray.tobytes() - - if hasLeftSideBearingArray: - leftSideBearingArray = array.array( - "h", - [ - self.metrics[glyphOrder[i]][1] - for i in range(numberOfHMetrics, len(glyphOrder)) - ], - ) - if sys.byteorder != "big": - leftSideBearingArray.byteswap() - data += leftSideBearingArray.tobytes() - - return data - - -class WOFF2FlavorData(WOFFFlavorData): - - Flavor = "woff2" - - def __init__(self, reader=None, data=None, transformedTables=None): - """Data class that holds the WOFF2 header major/minor version, any - metadata or private data (as bytes strings), and the set of - table tags that have transformations applied (if reader is not None), - or will have once the WOFF2 font is compiled. 
- - Args: - reader: an SFNTReader (or subclass) object to read flavor data from. - data: another WOFFFlavorData object to initialise data from. - transformedTables: set of strings containing table tags to be transformed. - - Raises: - ImportError if the brotli module is not installed. - - NOTE: The 'reader' argument, on the one hand, and the 'data' and - 'transformedTables' arguments, on the other hand, are mutually exclusive. - """ - if not haveBrotli: - raise ImportError("No module named brotli") - - if reader is not None: - if data is not None: - raise TypeError("'reader' and 'data' arguments are mutually exclusive") - if transformedTables is not None: - raise TypeError( - "'reader' and 'transformedTables' arguments are mutually exclusive" - ) - - if transformedTables is not None and ( - "glyf" in transformedTables - and "loca" not in transformedTables - or "loca" in transformedTables - and "glyf" not in transformedTables - ): - raise ValueError("'glyf' and 'loca' must be transformed (or not) together") - super(WOFF2FlavorData, self).__init__(reader=reader) - if reader: - transformedTables = [ - tag for tag, entry in reader.tables.items() if entry.transformed - ] - elif data: - self.majorVersion = data.majorVersion - self.minorVersion = data.minorVersion - self.metaData = data.metaData - self.privData = data.privData - if transformedTables is None and hasattr(data, "transformedTables"): - transformedTables = data.transformedTables - - if transformedTables is None: - transformedTables = woff2TransformedTableTags - - self.transformedTables = set(transformedTables) - - def _decompress(self, rawData): - return brotli.decompress(rawData) - - -def unpackBase128(data): - r"""Read one to five bytes from UIntBase128-encoded input string, and return - a tuple containing the decoded integer plus any leftover data. - - >>> unpackBase128(b'\x3f\x00\x00') == (63, b"\x00\x00") - True - >>> unpackBase128(b'\x8f\xff\xff\xff\x7f')[0] == 4294967295 - True - >>> unpackBase128(b'\x80\x80\x3f') # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - File "", line 1, in ? - TTLibError: UIntBase128 value must not start with leading zeros - >>> unpackBase128(b'\x8f\xff\xff\xff\xff\x7f')[0] # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - File "", line 1, in ? - TTLibError: UIntBase128-encoded sequence is longer than 5 bytes - >>> unpackBase128(b'\x90\x80\x80\x80\x00')[0] # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - File "", line 1, in ?
- TTLibError: UIntBase128 value exceeds 2**32-1 - """ - if len(data) == 0: - raise TTLibError("not enough data to unpack UIntBase128") - result = 0 - if byteord(data[0]) == 0x80: - # font must be rejected if UIntBase128 value starts with 0x80 - raise TTLibError("UIntBase128 value must not start with leading zeros") - for i in range(woff2Base128MaxSize): - if len(data) == 0: - raise TTLibError("not enough data to unpack UIntBase128") - code = byteord(data[0]) - data = data[1:] - # if any of the top seven bits are set then we're about to overflow - if result & 0xFE000000: - raise TTLibError("UIntBase128 value exceeds 2**32-1") - # set current value = old value times 128 bitwise-or (byte bitwise-and 127) - result = (result << 7) | (code & 0x7F) - # repeat until the most significant bit of byte is false - if (code & 0x80) == 0: - # return result plus left over data - return result, data - # make sure not to exceed the size bound - raise TTLibError("UIntBase128-encoded sequence is longer than 5 bytes") - - -def base128Size(n): - """Return the length in bytes of a UIntBase128-encoded sequence with value n. - - >>> base128Size(0) - 1 - >>> base128Size(24567) - 3 - >>> base128Size(2**32-1) - 5 - """ - assert n >= 0 - size = 1 - while n >= 128: - size += 1 - n >>= 7 - return size - - -def packBase128(n): - r"""Encode unsigned integer in range 0 to 2**32-1 (inclusive) to a string of - bytes using UIntBase128 variable-length encoding. Produce the shortest possible - encoding. - - >>> packBase128(63) == b"\x3f" - True - >>> packBase128(2**32-1) == b'\x8f\xff\xff\xff\x7f' - True - """ - if n < 0 or n >= 2**32: - raise TTLibError("UIntBase128 format requires 0 <= integer <= 2**32-1") - data = b"" - size = base128Size(n) - for i in range(size): - b = (n >> (7 * (size - i - 1))) & 0x7F - if i < size - 1: - b |= 0x80 - data += struct.pack("B", b) - return data - - -def unpack255UShort(data): - """Read one to three bytes from 255UInt16-encoded input string, and return a - tuple containing the decoded integer plus any leftover data. - - >>> unpack255UShort(bytechr(252))[0] - 252 - - Note that some numbers (e.g. 506) can have multiple encodings: - >>> unpack255UShort(struct.pack("BB", 254, 0))[0] - 506 - >>> unpack255UShort(struct.pack("BB", 255, 253))[0] - 506 - >>> unpack255UShort(struct.pack("BBB", 253, 1, 250))[0] - 506 - """ - code = byteord(data[:1]) - data = data[1:] - if code == 253: - # read two more bytes as an unsigned short - if len(data) < 2: - raise TTLibError("not enough data to unpack 255UInt16") - (result,) = struct.unpack(">H", data[:2]) - data = data[2:] - elif code == 254: - # read another byte, plus 253 * 2 - if len(data) == 0: - raise TTLibError("not enough data to unpack 255UInt16") - result = byteord(data[:1]) - result += 506 - data = data[1:] - elif code == 255: - # read another byte, plus 253 - if len(data) == 0: - raise TTLibError("not enough data to unpack 255UInt16") - result = byteord(data[:1]) - result += 253 - data = data[1:] - else: - # leave as is if lower than 253 - result = code - # return result plus left over data - return result, data - - -def pack255UShort(value): - r"""Encode unsigned integer in range 0 to 65535 (inclusive) to a bytestring - using 255UInt16 variable-length encoding. 
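- Values below 253 are stored in a single byte; values 253-505 are stored as 0xFF followed by (value - 253), values 506-761 as 0xFE followed by (value - 506), and anything larger as 0xFD followed by the value as a big-endian uint16.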
- - >>> pack255UShort(252) == b'\xfc' - True - >>> pack255UShort(506) == b'\xfe\x00' - True - >>> pack255UShort(762) == b'\xfd\x02\xfa' - True - """ - if value < 0 or value > 0xFFFF: - raise TTLibError("255UInt16 format requires 0 <= integer <= 65535") - if value < 253: - return struct.pack(">B", value) - elif value < 506: - return struct.pack(">BB", 255, value - 253) - elif value < 762: - return struct.pack(">BB", 254, value - 506) - else: - return struct.pack(">BH", 253, value) - - -def compress(input_file, output_file, transform_tables=None): - """Compress OpenType font to WOFF2. - - Args: - input_file: a file path, file or file-like object (open in binary mode) - containing an OpenType font (either CFF- or TrueType-flavored). - output_file: a file path, file or file-like object where to save the - compressed WOFF2 font. - transform_tables: Optional[Iterable[str]]: a set of table tags for which - to enable preprocessing transformations. By default, only 'glyf' - and 'loca' tables are transformed. An empty set means disable all - transformations. - """ - log.info("Processing %s => %s" % (input_file, output_file)) - - font = TTFont(input_file, recalcBBoxes=False, recalcTimestamp=False) - font.flavor = "woff2" - - if transform_tables is not None: - font.flavorData = WOFF2FlavorData( - data=font.flavorData, transformedTables=transform_tables - ) - - font.save(output_file, reorderTables=False) - - -def decompress(input_file, output_file): - """Decompress WOFF2 font to OpenType font. - - Args: - input_file: a file path, file or file-like object (open in binary mode) - containing a compressed WOFF2 font. - output_file: a file path, file or file-like object where to save the - decompressed OpenType font. - """ - log.info("Processing %s => %s" % (input_file, output_file)) - - font = TTFont(input_file, recalcBBoxes=False, recalcTimestamp=False) - font.flavor = None - font.flavorData = None - font.save(output_file, reorderTables=True) - - -def main(args=None): - """Compress and decompress WOFF2 fonts""" - import argparse - from fontTools import configLogger - from fontTools.ttx import makeOutputFileName - - class _HelpAction(argparse._HelpAction): - def __call__(self, parser, namespace, values, option_string=None): - subparsers_actions = [ - action - for action in parser._actions - if isinstance(action, argparse._SubParsersAction) - ] - for subparsers_action in subparsers_actions: - for choice, subparser in subparsers_action.choices.items(): - print(subparser.format_help()) - parser.exit() - - class _NoGlyfTransformAction(argparse.Action): - def __call__(self, parser, namespace, values, option_string=None): - namespace.transform_tables.difference_update({"glyf", "loca"}) - - class _HmtxTransformAction(argparse.Action): - def __call__(self, parser, namespace, values, option_string=None): - namespace.transform_tables.add("hmtx") - - parser = argparse.ArgumentParser( - prog="fonttools ttLib.woff2", description=main.__doc__, add_help=False - ) - - parser.add_argument( - "-h", "--help", action=_HelpAction, help="show this help message and exit" - ) - - parser_group = parser.add_subparsers(title="sub-commands") - parser_compress = parser_group.add_parser( - "compress", description="Compress a TTF or OTF font to WOFF2" - ) - parser_decompress = parser_group.add_parser( - "decompress", description="Decompress a WOFF2 font to OTF" - ) - - for subparser in (parser_compress, parser_decompress): - group = subparser.add_mutually_exclusive_group(required=False) - group.add_argument( - "-v", - "--verbose", - 
action="store_true", - help="print more messages to console", - ) - group.add_argument( - "-q", - "--quiet", - action="store_true", - help="do not print messages to console", - ) - - parser_compress.add_argument( - "input_file", - metavar="INPUT", - help="the input OpenType font (.ttf or .otf)", - ) - parser_decompress.add_argument( - "input_file", - metavar="INPUT", - help="the input WOFF2 font", - ) - - parser_compress.add_argument( - "-o", - "--output-file", - metavar="OUTPUT", - help="the output WOFF2 font", - ) - parser_decompress.add_argument( - "-o", - "--output-file", - metavar="OUTPUT", - help="the output OpenType font", - ) - - transform_group = parser_compress.add_argument_group() - transform_group.add_argument( - "--no-glyf-transform", - dest="transform_tables", - nargs=0, - action=_NoGlyfTransformAction, - help="Do not transform glyf (and loca) tables", - ) - transform_group.add_argument( - "--hmtx-transform", - dest="transform_tables", - nargs=0, - action=_HmtxTransformAction, - help="Enable optional transformation for 'hmtx' table", - ) - - parser_compress.set_defaults( - subcommand=compress, - transform_tables={"glyf", "loca"}, - ) - parser_decompress.set_defaults(subcommand=decompress) - - options = vars(parser.parse_args(args)) - - subcommand = options.pop("subcommand", None) - if not subcommand: - parser.print_help() - return - - quiet = options.pop("quiet") - verbose = options.pop("verbose") - configLogger( - level=("ERROR" if quiet else "DEBUG" if verbose else "INFO"), - ) - - if not options["output_file"]: - if subcommand is compress: - extension = ".woff2" - elif subcommand is decompress: - # choose .ttf/.otf file extension depending on sfntVersion - with open(options["input_file"], "rb") as f: - f.seek(4) # skip 'wOF2' signature - sfntVersion = f.read(4) - assert len(sfntVersion) == 4, "not enough data" - extension = ".otf" if sfntVersion == b"OTTO" else ".ttf" - else: - raise AssertionError(subcommand) - options["output_file"] = makeOutputFileName( - options["input_file"], outputDir=None, extension=extension - ) - - try: - subcommand(**options) - except TTLibError as e: - parser.error(e) - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/colakin/video-generater/public/ffmpeg/doc/examples/resample_audio.c b/spaces/colakin/video-generater/public/ffmpeg/doc/examples/resample_audio.c deleted file mode 100644 index db9b4e5e087e33f028ce0d938337934155b4e26b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/doc/examples/resample_audio.c +++ /dev/null @@ -1,220 +0,0 @@ -/* - * Copyright (c) 2012 Stefano Sabatini - * - * Permission is hereby granted, free of charge, to any person obtaining a copy - * of this software and associated documentation files (the "Software"), to deal - * in the Software without restriction, including without limitation the rights - * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - * copies of the Software, and to permit persons to whom the Software is - * furnished to do so, subject to the following conditions: - * - * The above copyright notice and this permission notice shall be included in - * all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL - * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN - * THE SOFTWARE. - */ - -/** - * @file audio resampling API usage example - * @example resample_audio.c - * - * Generate a synthetic audio signal, and use libswresample API to perform audio - * resampling. The output is written to a raw audio file to be played with - * ffplay. - */ - -#include <libavutil/channel_layout.h> -#include <libavutil/opt.h> -#include <libavutil/samplefmt.h> -#include <libswresample/swresample.h> - -static int get_format_from_sample_fmt(const char **fmt, - enum AVSampleFormat sample_fmt) -{ - int i; - struct sample_fmt_entry { - enum AVSampleFormat sample_fmt; const char *fmt_be, *fmt_le; - } sample_fmt_entries[] = { - { AV_SAMPLE_FMT_U8, "u8", "u8" }, - { AV_SAMPLE_FMT_S16, "s16be", "s16le" }, - { AV_SAMPLE_FMT_S32, "s32be", "s32le" }, - { AV_SAMPLE_FMT_FLT, "f32be", "f32le" }, - { AV_SAMPLE_FMT_DBL, "f64be", "f64le" }, - }; - *fmt = NULL; - - for (i = 0; i < FF_ARRAY_ELEMS(sample_fmt_entries); i++) { - struct sample_fmt_entry *entry = &sample_fmt_entries[i]; - if (sample_fmt == entry->sample_fmt) { - *fmt = AV_NE(entry->fmt_be, entry->fmt_le); - return 0; - } - } - - fprintf(stderr, - "Sample format %s not supported as output format\n", - av_get_sample_fmt_name(sample_fmt)); - return AVERROR(EINVAL); -} - -/** - * Fill dst buffer with nb_samples, generated starting from t. - */ -static void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t) -{ - int i, j; - double tincr = 1.0 / sample_rate, *dstp = dst; - const double c = 2 * M_PI * 440.0; - - /* generate sin tone with 440Hz frequency and duplicated channels */ - for (i = 0; i < nb_samples; i++) { - *dstp = sin(c * *t); - for (j = 1; j < nb_channels; j++) - dstp[j] = dstp[0]; - dstp += nb_channels; - *t += tincr; - } -} - -int main(int argc, char **argv) -{ - AVChannelLayout src_ch_layout = AV_CHANNEL_LAYOUT_STEREO, dst_ch_layout = AV_CHANNEL_LAYOUT_SURROUND; - int src_rate = 48000, dst_rate = 44100; - uint8_t **src_data = NULL, **dst_data = NULL; - int src_nb_channels = 0, dst_nb_channels = 0; - int src_linesize, dst_linesize; - int src_nb_samples = 1024, dst_nb_samples, max_dst_nb_samples; - enum AVSampleFormat src_sample_fmt = AV_SAMPLE_FMT_DBL, dst_sample_fmt = AV_SAMPLE_FMT_S16; - const char *dst_filename = NULL; - FILE *dst_file; - int dst_bufsize; - const char *fmt; - struct SwrContext *swr_ctx; - char buf[64]; - double t; - int ret; - - if (argc != 2) { - fprintf(stderr, "Usage: %s output_file\n" - "API example program to show how to resample an audio stream with libswresample.\n" - "This program generates a series of audio frames, resamples them to a specified " - "output format and rate and saves them to an output file named output_file.\n", - argv[0]); - exit(1); - } - dst_filename = argv[1]; - - dst_file = fopen(dst_filename, "wb"); - if (!dst_file) { - fprintf(stderr, "Could not open destination file %s\n", dst_filename); - exit(1); - } - - /* create resampler context */ - swr_ctx = swr_alloc(); - if (!swr_ctx) { - fprintf(stderr, "Could not allocate resampler context\n"); - ret = AVERROR(ENOMEM); - goto end; - } - - /* set options */ - av_opt_set_chlayout(swr_ctx, "in_chlayout", &src_ch_layout, 0); - av_opt_set_int(swr_ctx, "in_sample_rate", src_rate, 0); - av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0); - - av_opt_set_chlayout(swr_ctx, "out_chlayout", &dst_ch_layout, 0); -
av_opt_set_int(swr_ctx, "out_sample_rate", dst_rate, 0); - av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0); - - /* initialize the resampling context */ - if ((ret = swr_init(swr_ctx)) < 0) { - fprintf(stderr, "Failed to initialize the resampling context\n"); - goto end; - } - - /* allocate source and destination samples buffers */ - - src_nb_channels = src_ch_layout.nb_channels; - ret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels, - src_nb_samples, src_sample_fmt, 0); - if (ret < 0) { - fprintf(stderr, "Could not allocate source samples\n"); - goto end; - } - - /* compute the number of converted samples: buffering is avoided - * ensuring that the output buffer will contain at least all the - * converted input samples */ - max_dst_nb_samples = dst_nb_samples = - av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP); - - /* buffer is going to be directly written to a rawaudio file, no alignment */ - dst_nb_channels = dst_ch_layout.nb_channels; - ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels, - dst_nb_samples, dst_sample_fmt, 0); - if (ret < 0) { - fprintf(stderr, "Could not allocate destination samples\n"); - goto end; - } - - t = 0; - do { - /* generate synthetic audio */ - fill_samples((double *)src_data[0], src_nb_samples, src_nb_channels, src_rate, &t); - - /* compute destination number of samples */ - dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) + - src_nb_samples, dst_rate, src_rate, AV_ROUND_UP); - if (dst_nb_samples > max_dst_nb_samples) { - av_freep(&dst_data[0]); - ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels, - dst_nb_samples, dst_sample_fmt, 1); - if (ret < 0) - break; - max_dst_nb_samples = dst_nb_samples; - } - - /* convert to destination format */ - ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)src_data, src_nb_samples); - if (ret < 0) { - fprintf(stderr, "Error while converting\n"); - goto end; - } - dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels, - ret, dst_sample_fmt, 1); - if (dst_bufsize < 0) { - fprintf(stderr, "Could not get sample buffer size\n"); - goto end; - } - printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret); - fwrite(dst_data[0], 1, dst_bufsize, dst_file); - } while (t < 10); - - if ((ret = get_format_from_sample_fmt(&fmt, dst_sample_fmt)) < 0) - goto end; - av_channel_layout_describe(&dst_ch_layout, buf, sizeof(buf)); - fprintf(stderr, "Resampling succeeded. Play the output file with the command:\n" - "ffplay -f %s -channel_layout %s -channels %d -ar %d %s\n", - fmt, buf, dst_nb_channels, dst_rate, dst_filename); - -end: - fclose(dst_file); - - if (src_data) - av_freep(&src_data[0]); - av_freep(&src_data); - - if (dst_data) - av_freep(&dst_data[0]); - av_freep(&dst_data); - - swr_free(&swr_ctx); - return ret < 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac3plus.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac3plus.c deleted file mode 100644 index 5661654ce31385468ab66e95de90a1f563a39332..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac3plus.c +++ /dev/null @@ -1,1716 +0,0 @@ -/* - * ATRAC3+ compatible decoder - * - * Copyright (c) 2010-2013 Maxim Poliakovski - * - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Bitstream parser for ATRAC3+ decoder. - */ - -#include "libavutil/avassert.h" -#include "avcodec.h" -#include "get_bits.h" -#include "atrac3plus.h" -#include "atrac3plus_data.h" - -static VLCElem tables_data[154276]; -static VLC wl_vlc_tabs[4]; -static VLC sf_vlc_tabs[8]; -static VLC ct_vlc_tabs[4]; -static VLC spec_vlc_tabs[112]; -static VLC gain_vlc_tabs[11]; -static VLC tone_vlc_tabs[7]; - -/** - * Generate canonical VLC table from given descriptor. - * - * @param[in] cb ptr to codebook descriptor - * @param[in,out] xlat ptr to ptr to translation table - * @param[in,out] tab_offset starting offset to the generated vlc table - * @param[out] out_vlc ptr to vlc table to be generated - */ -static av_cold void build_canonical_huff(const uint8_t *cb, const uint8_t **xlat, - int *tab_offset, VLC *out_vlc) -{ - int i, max_len; - uint8_t bits[256]; - int index = 0; - - for (int b = 1; b <= 12; b++) { - for (i = *cb++; i > 0; i--) { - av_assert0(index < 256); - bits[index] = b; - index++; - } - } - max_len = bits[index - 1]; - - out_vlc->table = &tables_data[*tab_offset]; - out_vlc->table_allocated = 1 << max_len; - - ff_init_vlc_from_lengths(out_vlc, max_len, index, bits, 1, - *xlat, 1, 1, 0, INIT_VLC_USE_NEW_STATIC, NULL); - - *tab_offset += 1 << max_len; - *xlat += index; -} - -av_cold void ff_atrac3p_init_vlcs(void) -{ - int i, tab_offset = 0; - const uint8_t *xlats; - - xlats = atrac3p_wl_ct_xlats; - for (int i = 0; i < 4; i++) { - build_canonical_huff(atrac3p_wl_cbs[i], &xlats, - &tab_offset, &wl_vlc_tabs[i]); - build_canonical_huff(atrac3p_ct_cbs[i], &xlats, - &tab_offset, &ct_vlc_tabs[i]); - } - - xlats = atrac3p_sf_xlats; - for (int i = 0; i < 8; i++) - build_canonical_huff(atrac3p_sf_cbs[i], &xlats, - &tab_offset, &sf_vlc_tabs[i]); - - /* build huffman tables for spectrum decoding */ - xlats = atrac3p_spectra_xlats; - for (i = 0; i < 112; i++) { - if (atrac3p_spectra_cbs[i][0] >= 0) - build_canonical_huff(atrac3p_spectra_cbs[i], - &xlats, &tab_offset, &spec_vlc_tabs[i]); - else /* Reuse already initialized VLC table */ - spec_vlc_tabs[i] = spec_vlc_tabs[-atrac3p_spectra_cbs[i][0]]; - } - - /* build huffman tables for gain data decoding */ - xlats = atrac3p_gain_xlats; - for (i = 0; i < 11; i++) - build_canonical_huff(atrac3p_gain_cbs[i], &xlats, - &tab_offset, &gain_vlc_tabs[i]); - - /* build huffman tables for tone decoding */ - xlats = atrac3p_tone_xlats; - for (i = 0; i < 7; i++) - build_canonical_huff(atrac3p_tone_cbs[i], &xlats, - &tab_offset, &tone_vlc_tabs[i]); -} - -/** - * Decode number of coded quantization units. 
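- * - * A 2-bit fill mode is read first: mode 0 means all quantization units are coded, otherwise a 5-bit count of coded values follows (validated against the total number of quant units), and fill mode 3 additionally carries a 2-bit split point.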
- * - * @param[in] gb the GetBit context - * @param[in,out] chan ptr to the channel parameters - * @param[in,out] ctx ptr to the channel unit context - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int num_coded_units(GetBitContext *gb, Atrac3pChanParams *chan, - Atrac3pChanUnitCtx *ctx, AVCodecContext *avctx) -{ - chan->fill_mode = get_bits(gb, 2); - if (!chan->fill_mode) { - chan->num_coded_vals = ctx->num_quant_units; - } else { - chan->num_coded_vals = get_bits(gb, 5); - if (chan->num_coded_vals > ctx->num_quant_units) { - av_log(avctx, AV_LOG_ERROR, - "Invalid number of transmitted units!\n"); - return AVERROR_INVALIDDATA; - } - - if (chan->fill_mode == 3) - chan->split_point = get_bits(gb, 2) + (chan->ch_num << 1) + 1; - } - - return 0; -} - -/** - * Add weighting coefficients to the decoded word-length information. - * - * @param[in,out] ctx ptr to the channel unit context - * @param[in,out] chan ptr to the channel parameters - * @param[in] wtab_idx index of the table of weights - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int add_wordlen_weights(Atrac3pChanUnitCtx *ctx, - Atrac3pChanParams *chan, int wtab_idx, - AVCodecContext *avctx) -{ - int i; - const int8_t *weights_tab = - &atrac3p_wl_weights[chan->ch_num * 3 + wtab_idx - 1][0]; - - for (i = 0; i < ctx->num_quant_units; i++) { - chan->qu_wordlen[i] += weights_tab[i]; - if (chan->qu_wordlen[i] < 0 || chan->qu_wordlen[i] > 7) { - av_log(avctx, AV_LOG_ERROR, - "WL index out of range: pos=%d, val=%d!\n", - i, chan->qu_wordlen[i]); - return AVERROR_INVALIDDATA; - } - } - - return 0; -} - -/** - * Subtract weighting coefficients from decoded scalefactors. - * - * @param[in,out] ctx ptr to the channel unit context - * @param[in,out] chan ptr to the channel parameters - * @param[in] wtab_idx index of table of weights - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int subtract_sf_weights(Atrac3pChanUnitCtx *ctx, - Atrac3pChanParams *chan, int wtab_idx, - AVCodecContext *avctx) -{ - int i; - const int8_t *weights_tab = &atrac3p_sf_weights[wtab_idx - 1][0]; - - for (i = 0; i < ctx->used_quant_units; i++) { - chan->qu_sf_idx[i] -= weights_tab[i]; - if (chan->qu_sf_idx[i] < 0 || chan->qu_sf_idx[i] > 63) { - av_log(avctx, AV_LOG_ERROR, - "SF index out of range: pos=%d, val=%d!\n", - i, chan->qu_sf_idx[i]); - return AVERROR_INVALIDDATA; - } - } - - return 0; -} - -/** - * Unpack vector quantization tables. - * - * @param[in] start_val start value for the unpacked table - * @param[in] shape_vec ptr to table to unpack - * @param[out] dst ptr to output array - * @param[in] num_values number of values to unpack - */ -static inline void unpack_vq_shape(int start_val, const int8_t *shape_vec, - int *dst, int num_values) -{ - int i; - - if (num_values) { - dst[0] = dst[1] = dst[2] = start_val; - for (i = 3; i < num_values; i++) - dst[i] = start_val - shape_vec[atrac3p_qu_num_to_seg[i] - 1]; - } -} - -#define UNPACK_SF_VQ_SHAPE(gb, dst, num_vals) \ - start_val = get_bits((gb), 6); \ - unpack_vq_shape(start_val, &atrac3p_sf_shapes[get_bits((gb), 6)][0], \ - (dst), (num_vals)) - -/** - * Decode word length for each quantization unit of a channel. 
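- * - * One of four coding modes is used, ranging from directly coded 3-bit values to VLC-coded deltas against the reference (first) channel, a vector-quantized shape or the previously decoded value; an optional weight table is applied afterwards (see add_wordlen_weights()).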
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_channel_wordlen(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, AVCodecContext *avctx) -{ - int i, weight_idx = 0, delta, diff, pos, delta_bits, min_val, flag, - ret, start_val; - VLC *vlc_tab; - Atrac3pChanParams *chan = &ctx->channels[ch_num]; - Atrac3pChanParams *ref_chan = &ctx->channels[0]; - - chan->fill_mode = 0; - - switch (get_bits(gb, 2)) { /* switch according to coding mode */ - case 0: /* coded using constant number of bits */ - for (i = 0; i < ctx->num_quant_units; i++) - chan->qu_wordlen[i] = get_bits(gb, 3); - break; - case 1: - if (ch_num) { - if ((ret = num_coded_units(gb, chan, ctx, avctx)) < 0) - return ret; - - if (chan->num_coded_vals) { - vlc_tab = &wl_vlc_tabs[get_bits(gb, 2)]; - - for (i = 0; i < chan->num_coded_vals; i++) { - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_wordlen[i] = (ref_chan->qu_wordlen[i] + delta) & 7; - } - } - } else { - weight_idx = get_bits(gb, 2); - if ((ret = num_coded_units(gb, chan, ctx, avctx)) < 0) - return ret; - - if (chan->num_coded_vals) { - pos = get_bits(gb, 5); - if (pos > chan->num_coded_vals) { - av_log(avctx, AV_LOG_ERROR, - "WL mode 1: invalid position!\n"); - return AVERROR_INVALIDDATA; - } - - delta_bits = get_bits(gb, 2); - min_val = get_bits(gb, 3); - - for (i = 0; i < pos; i++) - chan->qu_wordlen[i] = get_bits(gb, 3); - - for (i = pos; i < chan->num_coded_vals; i++) - chan->qu_wordlen[i] = (min_val + get_bitsz(gb, delta_bits)) & 7; - } - } - break; - case 2: - if ((ret = num_coded_units(gb, chan, ctx, avctx)) < 0) - return ret; - - if (ch_num && chan->num_coded_vals) { - vlc_tab = &wl_vlc_tabs[get_bits(gb, 2)]; - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_wordlen[0] = (ref_chan->qu_wordlen[0] + delta) & 7; - - for (i = 1; i < chan->num_coded_vals; i++) { - diff = ref_chan->qu_wordlen[i] - ref_chan->qu_wordlen[i - 1]; - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_wordlen[i] = (chan->qu_wordlen[i - 1] + diff + delta) & 7; - } - } else if (chan->num_coded_vals) { - flag = get_bits(gb, 1); - vlc_tab = &wl_vlc_tabs[get_bits(gb, 1)]; - - start_val = get_bits(gb, 3); - unpack_vq_shape(start_val, - &atrac3p_wl_shapes[start_val][get_bits(gb, 4)][0], - chan->qu_wordlen, chan->num_coded_vals); - - if (!flag) { - for (i = 0; i < chan->num_coded_vals; i++) { - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_wordlen[i] = (chan->qu_wordlen[i] + delta) & 7; - } - } else { - for (i = 0; i < (chan->num_coded_vals & - 2); i += 2) - if (!get_bits1(gb)) { - chan->qu_wordlen[i] = (chan->qu_wordlen[i] + - get_vlc2(gb, vlc_tab->table, - vlc_tab->bits, 1)) & 7; - chan->qu_wordlen[i + 1] = (chan->qu_wordlen[i + 1] + - get_vlc2(gb, vlc_tab->table, - vlc_tab->bits, 1)) & 7; - } - - if (chan->num_coded_vals & 1) - chan->qu_wordlen[i] = (chan->qu_wordlen[i] + - get_vlc2(gb, vlc_tab->table, - vlc_tab->bits, 1)) & 7; - } - } - break; - case 3: - weight_idx = get_bits(gb, 2); - if ((ret = num_coded_units(gb, chan, ctx, avctx)) < 0) - return ret; - - if (chan->num_coded_vals) { - vlc_tab = &wl_vlc_tabs[get_bits(gb, 2)]; - - /* first coefficient is coded directly */ - chan->qu_wordlen[0] = get_bits(gb, 3); - - for (i = 1; i < chan->num_coded_vals; i++) { - delta = get_vlc2(gb, vlc_tab->table, 
vlc_tab->bits, 1); - chan->qu_wordlen[i] = (chan->qu_wordlen[i - 1] + delta) & 7; - } - } - break; - } - - if (chan->fill_mode == 2) { - for (i = chan->num_coded_vals; i < ctx->num_quant_units; i++) - chan->qu_wordlen[i] = ch_num ? get_bits1(gb) : 1; - } else if (chan->fill_mode == 3) { - pos = ch_num ? chan->num_coded_vals + chan->split_point - : ctx->num_quant_units - chan->split_point; - if (pos > FF_ARRAY_ELEMS(chan->qu_wordlen)) { - av_log(avctx, AV_LOG_ERROR, "Split point beyond array\n"); - pos = FF_ARRAY_ELEMS(chan->qu_wordlen); - } - for (i = chan->num_coded_vals; i < pos; i++) - chan->qu_wordlen[i] = 1; - } - - if (weight_idx) - return add_wordlen_weights(ctx, chan, weight_idx, avctx); - - return 0; -} - -/** - * Decode scale factor indexes for each quant unit of a channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_channel_sf_idx(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, AVCodecContext *avctx) -{ - int i, weight_idx = 0, delta, diff, num_long_vals, - delta_bits, min_val, vlc_sel, start_val; - VLC *vlc_tab; - Atrac3pChanParams *chan = &ctx->channels[ch_num]; - Atrac3pChanParams *ref_chan = &ctx->channels[0]; - - switch (get_bits(gb, 2)) { /* switch according to coding mode */ - case 0: /* coded using constant number of bits */ - for (i = 0; i < ctx->used_quant_units; i++) - chan->qu_sf_idx[i] = get_bits(gb, 6); - break; - case 1: - if (ch_num) { - vlc_tab = &sf_vlc_tabs[get_bits(gb, 2)]; - - for (i = 0; i < ctx->used_quant_units; i++) { - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_sf_idx[i] = (ref_chan->qu_sf_idx[i] + delta) & 0x3F; - } - } else { - weight_idx = get_bits(gb, 2); - if (weight_idx == 3) { - UNPACK_SF_VQ_SHAPE(gb, chan->qu_sf_idx, ctx->used_quant_units); - - num_long_vals = get_bits(gb, 5); - delta_bits = get_bits(gb, 2); - min_val = get_bits(gb, 4) - 7; - - for (i = 0; i < num_long_vals; i++) - chan->qu_sf_idx[i] = (chan->qu_sf_idx[i] + - get_bits(gb, 4) - 7) & 0x3F; - - /* all others are: min_val + delta */ - for (i = num_long_vals; i < ctx->used_quant_units; i++) - chan->qu_sf_idx[i] = (chan->qu_sf_idx[i] + min_val + - get_bitsz(gb, delta_bits)) & 0x3F; - } else { - num_long_vals = get_bits(gb, 5); - delta_bits = get_bits(gb, 3); - min_val = get_bits(gb, 6); - if (num_long_vals > ctx->used_quant_units || delta_bits == 7) { - av_log(avctx, AV_LOG_ERROR, - "SF mode 1: invalid parameters!\n"); - return AVERROR_INVALIDDATA; - } - - /* read full-precision SF indexes */ - for (i = 0; i < num_long_vals; i++) - chan->qu_sf_idx[i] = get_bits(gb, 6); - - /* all others are: min_val + delta */ - for (i = num_long_vals; i < ctx->used_quant_units; i++) - chan->qu_sf_idx[i] = (min_val + - get_bitsz(gb, delta_bits)) & 0x3F; - } - } - break; - case 2: - if (ch_num) { - vlc_tab = &sf_vlc_tabs[get_bits(gb, 2)]; - - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_sf_idx[0] = (ref_chan->qu_sf_idx[0] + delta) & 0x3F; - - for (i = 1; i < ctx->used_quant_units; i++) { - diff = ref_chan->qu_sf_idx[i] - ref_chan->qu_sf_idx[i - 1]; - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_sf_idx[i] = (chan->qu_sf_idx[i - 1] + diff + delta) & 0x3F; - } - } else { - vlc_tab = &sf_vlc_tabs[get_bits(gb, 2) + 4]; - - UNPACK_SF_VQ_SHAPE(gb, chan->qu_sf_idx, ctx->used_quant_units); - - for (i = 0; i < 
ctx->used_quant_units; i++) { - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_sf_idx[i] = (chan->qu_sf_idx[i] + - sign_extend(delta, 4)) & 0x3F; - } - } - break; - case 3: - if (ch_num) { - /* copy coefficients from reference channel */ - for (i = 0; i < ctx->used_quant_units; i++) - chan->qu_sf_idx[i] = ref_chan->qu_sf_idx[i]; - } else { - weight_idx = get_bits(gb, 2); - vlc_sel = get_bits(gb, 2); - vlc_tab = &sf_vlc_tabs[vlc_sel]; - - if (weight_idx == 3) { - vlc_tab = &sf_vlc_tabs[vlc_sel + 4]; - - UNPACK_SF_VQ_SHAPE(gb, chan->qu_sf_idx, ctx->used_quant_units); - - diff = (get_bits(gb, 4) + 56) & 0x3F; - chan->qu_sf_idx[0] = (chan->qu_sf_idx[0] + diff) & 0x3F; - - for (i = 1; i < ctx->used_quant_units; i++) { - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - diff = (diff + sign_extend(delta, 4)) & 0x3F; - chan->qu_sf_idx[i] = (diff + chan->qu_sf_idx[i]) & 0x3F; - } - } else { - /* 1st coefficient is coded directly */ - chan->qu_sf_idx[0] = get_bits(gb, 6); - - for (i = 1; i < ctx->used_quant_units; i++) { - delta = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - chan->qu_sf_idx[i] = (chan->qu_sf_idx[i - 1] + delta) & 0x3F; - } - } - } - break; - } - - if (weight_idx && weight_idx < 3) - return subtract_sf_weights(ctx, chan, weight_idx, avctx); - - return 0; -} - -/** - * Decode word length information for each channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_quant_wordlen(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int ch_num, i, ret; - - for (ch_num = 0; ch_num < num_channels; ch_num++) { - memset(ctx->channels[ch_num].qu_wordlen, 0, - sizeof(ctx->channels[ch_num].qu_wordlen)); - - if ((ret = decode_channel_wordlen(gb, ctx, ch_num, avctx)) < 0) - return ret; - } - - /* scan for last non-zero coeff in both channels and - * set number of quant units having coded spectrum */ - for (i = ctx->num_quant_units - 1; i >= 0; i--) - if (ctx->channels[0].qu_wordlen[i] || - (num_channels == 2 && ctx->channels[1].qu_wordlen[i])) - break; - ctx->used_quant_units = i + 1; - - return 0; -} - -/** - * Decode scale factor indexes for each channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_scale_factors(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int ch_num, ret; - - if (!ctx->used_quant_units) - return 0; - - for (ch_num = 0; ch_num < num_channels; ch_num++) { - memset(ctx->channels[ch_num].qu_sf_idx, 0, - sizeof(ctx->channels[ch_num].qu_sf_idx)); - - if ((ret = decode_channel_sf_idx(gb, ctx, ch_num, avctx)) < 0) - return ret; - } - - return 0; -} - -/** - * Decode number of code table values. 
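- * - * A single flag bit selects between an explicit 5-bit count (validated against the number of quant units in use) and the default of one value per used quant unit.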
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int get_num_ct_values(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - AVCodecContext *avctx) -{ - int num_coded_vals; - - if (get_bits1(gb)) { - num_coded_vals = get_bits(gb, 5); - if (num_coded_vals > ctx->used_quant_units) { - av_log(avctx, AV_LOG_ERROR, - "Invalid number of code table indexes: %d!\n", num_coded_vals); - return AVERROR_INVALIDDATA; - } - return num_coded_vals; - } else - return ctx->used_quant_units; -} - -#define DEC_CT_IDX_COMMON(OP) \ - num_vals = get_num_ct_values(gb, ctx, avctx); \ - if (num_vals < 0) \ - return num_vals; \ - \ - for (i = 0; i < num_vals; i++) { \ - if (chan->qu_wordlen[i]) { \ - chan->qu_tab_idx[i] = OP; \ - } else if (ch_num && ref_chan->qu_wordlen[i]) \ - /* get clone master flag */ \ - chan->qu_tab_idx[i] = get_bits1(gb); \ - } - -#define CODING_DIRECT get_bits(gb, num_bits) - -#define CODING_VLC get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1) - -#define CODING_VLC_DELTA \ - (!i) ? CODING_VLC \ - : (pred + get_vlc2(gb, delta_vlc->table, \ - delta_vlc->bits, 1)) & mask; \ - pred = chan->qu_tab_idx[i] - -#define CODING_VLC_DIFF \ - (ref_chan->qu_tab_idx[i] + \ - get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1)) & mask - -/** - * Decode code table indexes for each quant unit of a channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_channel_code_tab(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, AVCodecContext *avctx) -{ - int i, num_vals, num_bits, pred; - int mask = ctx->use_full_table ? 7 : 3; /* mask for modular arithmetic */ - VLC *vlc_tab, *delta_vlc; - Atrac3pChanParams *chan = &ctx->channels[ch_num]; - Atrac3pChanParams *ref_chan = &ctx->channels[0]; - - chan->table_type = get_bits1(gb); - - switch (get_bits(gb, 2)) { /* switch according to coding mode */ - case 0: /* directly coded */ - num_bits = ctx->use_full_table + 2; - DEC_CT_IDX_COMMON(CODING_DIRECT); - break; - case 1: /* entropy-coded */ - vlc_tab = ctx->use_full_table ? &ct_vlc_tabs[1] - : ct_vlc_tabs; - DEC_CT_IDX_COMMON(CODING_VLC); - break; - case 2: /* entropy-coded delta */ - if (ctx->use_full_table) { - vlc_tab = &ct_vlc_tabs[1]; - delta_vlc = &ct_vlc_tabs[2]; - } else { - vlc_tab = ct_vlc_tabs; - delta_vlc = ct_vlc_tabs; - } - pred = 0; - DEC_CT_IDX_COMMON(CODING_VLC_DELTA); - break; - case 3: /* entropy-coded difference to master */ - if (ch_num) { - vlc_tab = ctx->use_full_table ? &ct_vlc_tabs[3] - : ct_vlc_tabs; - DEC_CT_IDX_COMMON(CODING_VLC_DIFF); - } - break; - } - - return 0; -} - -/** - * Decode code table indexes for each channel. 
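- * - * The shared 'use_full_table' flag is read first; it selects between the restricted (2-bit) and the full (3-bit) range of code table indexes for all channels.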
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_code_table_indexes(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int ch_num, ret; - - if (!ctx->used_quant_units) - return 0; - - ctx->use_full_table = get_bits1(gb); - - for (ch_num = 0; ch_num < num_channels; ch_num++) { - memset(ctx->channels[ch_num].qu_tab_idx, 0, - sizeof(ctx->channels[ch_num].qu_tab_idx)); - - if ((ret = decode_channel_code_tab(gb, ctx, ch_num, avctx)) < 0) - return ret; - } - - return 0; -} - -/** - * Decode huffman-coded spectral lines for a given quant unit. - * - * This is a generalized version for all known coding modes. - * Its speed can be improved by creating separate functions for each mode. - * - * @param[in] gb the GetBit context - * @param[in] tab code table telling how to decode spectral lines - * @param[in] vlc_tab ptr to the huffman table associated with the code table - * @param[out] out pointer to buffer where decoded data should be stored - * @param[in] num_specs number of spectral lines to decode - */ -static void decode_qu_spectra(GetBitContext *gb, const Atrac3pSpecCodeTab *tab, - VLC *vlc_tab, int16_t *out, const int num_specs) -{ - int i, j, pos, cf; - int group_size = tab->group_size; - int num_coeffs = tab->num_coeffs; - int bits = tab->bits; - int is_signed = tab->is_signed; - unsigned val; - - for (pos = 0; pos < num_specs;) { - if (group_size == 1 || get_bits1(gb)) { - for (j = 0; j < group_size; j++) { - val = get_vlc2(gb, vlc_tab->table, vlc_tab->bits, 1); - - for (i = 0; i < num_coeffs; i++) { - cf = av_mod_uintp2(val, bits); - if (is_signed) - cf = sign_extend(cf, bits); - else if (cf && get_bits1(gb)) - cf = -cf; - - out[pos++] = cf; - val >>= bits; - } - } - } else /* group skipped */ - pos += group_size * num_coeffs; - } -} - -/** - * Decode huffman-coded IMDCT spectrum for all channels. 
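- * - * The word length and code table index of each quant unit select one of the spectral Huffman tables; slave-channel units with a zero word length and a zero table index copy the master channel's coefficients instead. Power compensation levels follow when more than two quant units are in use.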
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - * @param[in] avctx ptr to the AVCodecContext - */ -static void decode_spectrum(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int i, ch_num, qu, wordlen, codetab, tab_index, num_specs; - const Atrac3pSpecCodeTab *tab; - Atrac3pChanParams *chan; - - for (ch_num = 0; ch_num < num_channels; ch_num++) { - chan = &ctx->channels[ch_num]; - - memset(chan->spectrum, 0, sizeof(chan->spectrum)); - - /* set power compensation level to disabled */ - memset(chan->power_levs, ATRAC3P_POWER_COMP_OFF, sizeof(chan->power_levs)); - - for (qu = 0; qu < ctx->used_quant_units; qu++) { - num_specs = ff_atrac3p_qu_to_spec_pos[qu + 1] - - ff_atrac3p_qu_to_spec_pos[qu]; - - wordlen = chan->qu_wordlen[qu]; - codetab = chan->qu_tab_idx[qu]; - if (wordlen) { - if (!ctx->use_full_table) - codetab = atrac3p_ct_restricted_to_full[chan->table_type][wordlen - 1][codetab]; - - tab_index = (chan->table_type * 8 + codetab) * 7 + wordlen - 1; - tab = &atrac3p_spectra_tabs[tab_index]; - - decode_qu_spectra(gb, tab, &spec_vlc_tabs[tab_index], - &chan->spectrum[ff_atrac3p_qu_to_spec_pos[qu]], - num_specs); - } else if (ch_num && ctx->channels[0].qu_wordlen[qu] && !codetab) { - /* copy coefficients from master */ - memcpy(&chan->spectrum[ff_atrac3p_qu_to_spec_pos[qu]], - &ctx->channels[0].spectrum[ff_atrac3p_qu_to_spec_pos[qu]], - num_specs * - sizeof(chan->spectrum[ff_atrac3p_qu_to_spec_pos[qu]])); - chan->qu_wordlen[qu] = ctx->channels[0].qu_wordlen[qu]; - } - } - - /* Power compensation levels only present in the bitstream - * if there are more than 2 quant units. The lowest two units - * correspond to the frequencies 0...351 Hz, whose shouldn't - * be affected by the power compensation. */ - if (ctx->used_quant_units > 2) { - num_specs = atrac3p_subband_to_num_powgrps[ctx->num_coded_subbands - 1]; - for (i = 0; i < num_specs; i++) - chan->power_levs[i] = get_bits(gb, 4); - } - } -} - -/** - * Retrieve specified amount of flag bits from the input bitstream. - * The data can be shortened in the case of the following two common conditions: - * if all bits are zero then only one signal bit = 0 will be stored, - * if all bits are ones then two signal bits = 1,0 will be stored. - * Otherwise, all necessary bits will be directly stored - * prefixed by two signal bits = 1,1. - * - * @param[in] gb ptr to the GetBitContext - * @param[out] out where to place decoded flags - * @param[in] num_flags number of flags to process - * @return: 0 = all flag bits are zero, 1 = there is at least one non-zero flag bit - */ -static int get_subband_flags(GetBitContext *gb, uint8_t *out, int num_flags) -{ - int i, result; - - memset(out, 0, num_flags); - - result = get_bits1(gb); - if (result) { - if (get_bits1(gb)) - for (i = 0; i < num_flags; i++) - out[i] = get_bits1(gb); - else - memset(out, 1, num_flags); - } - - return result; -} - -/** - * Decode mdct window shape flags for all channels. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - */ -static void decode_window_shape(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels) -{ - int ch_num; - - for (ch_num = 0; ch_num < num_channels; ch_num++) - get_subband_flags(gb, ctx->channels[ch_num].wnd_shape, - ctx->num_subbands); -} - -/** - * Decode number of gain control points. 
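- * - * Point counts are coded as fixed 3-bit values, VLC codes, VLC deltas to the master channel or to the previous subband, or a short 'min + delta' form; slave channels may also simply copy the master's counts.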
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] coded_subbands number of subbands to process - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_gainc_npoints(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int coded_subbands) -{ - int i, delta, delta_bits, min_val; - Atrac3pChanParams *chan = &ctx->channels[ch_num]; - Atrac3pChanParams *ref_chan = &ctx->channels[0]; - - switch (get_bits(gb, 2)) { /* switch according to coding mode */ - case 0: /* fixed-length coding */ - for (i = 0; i < coded_subbands; i++) - chan->gain_data[i].num_points = get_bits(gb, 3); - break; - case 1: /* variable-length coding */ - for (i = 0; i < coded_subbands; i++) - chan->gain_data[i].num_points = - get_vlc2(gb, gain_vlc_tabs[0].table, - gain_vlc_tabs[0].bits, 1); - break; - case 2: - if (ch_num) { /* VLC modulo delta to master channel */ - for (i = 0; i < coded_subbands; i++) { - delta = get_vlc2(gb, gain_vlc_tabs[1].table, - gain_vlc_tabs[1].bits, 1); - chan->gain_data[i].num_points = - (ref_chan->gain_data[i].num_points + delta) & 7; - } - } else { /* VLC modulo delta to previous */ - chan->gain_data[0].num_points = - get_vlc2(gb, gain_vlc_tabs[0].table, - gain_vlc_tabs[0].bits, 1); - - for (i = 1; i < coded_subbands; i++) { - delta = get_vlc2(gb, gain_vlc_tabs[1].table, - gain_vlc_tabs[1].bits, 1); - chan->gain_data[i].num_points = - (chan->gain_data[i - 1].num_points + delta) & 7; - } - } - break; - case 3: - if (ch_num) { /* copy data from master channel */ - for (i = 0; i < coded_subbands; i++) - chan->gain_data[i].num_points = - ref_chan->gain_data[i].num_points; - } else { /* shorter delta to min */ - delta_bits = get_bits(gb, 2); - min_val = get_bits(gb, 3); - - for (i = 0; i < coded_subbands; i++) { - chan->gain_data[i].num_points = min_val + get_bitsz(gb, delta_bits); - if (chan->gain_data[i].num_points > 7) - return AVERROR_INVALIDDATA; - } - } - } - - return 0; -} - -/** - * Implements coding mode 3 (slave) for gain compensation levels. - * - * @param[out] dst ptr to the output array - * @param[in] ref ptr to the reference channel - */ -static inline void gainc_level_mode3s(AtracGainInfo *dst, AtracGainInfo *ref) -{ - int i; - - for (i = 0; i < dst->num_points; i++) - dst->lev_code[i] = (i >= ref->num_points) ? 7 : ref->lev_code[i]; -} - -/** - * Implements coding mode 1 (master) for gain compensation levels. - * - * @param[in] gb the GetBit context - * @param[in] ctx ptr to the channel unit context - * @param[out] dst ptr to the output array - */ -static inline void gainc_level_mode1m(GetBitContext *gb, - Atrac3pChanUnitCtx *ctx, - AtracGainInfo *dst) -{ - int i, delta; - - if (dst->num_points > 0) - dst->lev_code[0] = get_vlc2(gb, gain_vlc_tabs[2].table, - gain_vlc_tabs[2].bits, 1); - - for (i = 1; i < dst->num_points; i++) { - delta = get_vlc2(gb, gain_vlc_tabs[3].table, - gain_vlc_tabs[3].bits, 1); - dst->lev_code[i] = (dst->lev_code[i - 1] + delta) & 0xF; - } -} - -/** - * Decode level code for each gain control point. 
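- * - * Level codes are 4-bit values coded directly, as VLC deltas to the master channel, to the previous point or to the previous subband, by cloning the master channel, or in a short 'min + delta' form, depending on the 2-bit coding mode.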
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] coded_subbands number of subbands to process - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_gainc_levels(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int coded_subbands) -{ - int sb, i, delta, delta_bits, min_val, pred; - Atrac3pChanParams *chan = &ctx->channels[ch_num]; - Atrac3pChanParams *ref_chan = &ctx->channels[0]; - - switch (get_bits(gb, 2)) { /* switch according to coding mode */ - case 0: /* fixed-length coding */ - for (sb = 0; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) - chan->gain_data[sb].lev_code[i] = get_bits(gb, 4); - break; - case 1: - if (ch_num) { /* VLC modulo delta to master channel */ - for (sb = 0; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) { - delta = get_vlc2(gb, gain_vlc_tabs[5].table, - gain_vlc_tabs[5].bits, 1); - pred = (i >= ref_chan->gain_data[sb].num_points) - ? 7 : ref_chan->gain_data[sb].lev_code[i]; - chan->gain_data[sb].lev_code[i] = (pred + delta) & 0xF; - } - } else { /* VLC modulo delta to previous */ - for (sb = 0; sb < coded_subbands; sb++) - gainc_level_mode1m(gb, ctx, &chan->gain_data[sb]); - } - break; - case 2: - if (ch_num) { /* VLC modulo delta to previous or clone master */ - for (sb = 0; sb < coded_subbands; sb++) - if (chan->gain_data[sb].num_points > 0) { - if (get_bits1(gb)) - gainc_level_mode1m(gb, ctx, &chan->gain_data[sb]); - else - gainc_level_mode3s(&chan->gain_data[sb], - &ref_chan->gain_data[sb]); - } - } else { /* VLC modulo delta to lev_codes of previous subband */ - if (chan->gain_data[0].num_points > 0) - gainc_level_mode1m(gb, ctx, &chan->gain_data[0]); - - for (sb = 1; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) { - delta = get_vlc2(gb, gain_vlc_tabs[4].table, - gain_vlc_tabs[4].bits, 1); - pred = (i >= chan->gain_data[sb - 1].num_points) - ? 7 : chan->gain_data[sb - 1].lev_code[i]; - chan->gain_data[sb].lev_code[i] = (pred + delta) & 0xF; - } - } - break; - case 3: - if (ch_num) { /* clone master */ - for (sb = 0; sb < coded_subbands; sb++) - gainc_level_mode3s(&chan->gain_data[sb], - &ref_chan->gain_data[sb]); - } else { /* shorter delta to min */ - delta_bits = get_bits(gb, 2); - min_val = get_bits(gb, 4); - - for (sb = 0; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) { - chan->gain_data[sb].lev_code[i] = min_val + get_bitsz(gb, delta_bits); - if (chan->gain_data[sb].lev_code[i] > 15) - return AVERROR_INVALIDDATA; - } - } - break; - } - - return 0; -} - -/** - * Implements coding mode 0 for gain compensation locations. - * - * @param[in] gb the GetBit context - * @param[in] ctx ptr to the channel unit context - * @param[out] dst ptr to the output array - * @param[in] pos position of the value to be processed - */ -static inline void gainc_loc_mode0(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - AtracGainInfo *dst, int pos) -{ - int delta_bits; - - if (!pos || dst->loc_code[pos - 1] < 15) - dst->loc_code[pos] = get_bits(gb, 5); - else if (dst->loc_code[pos - 1] >= 30) - dst->loc_code[pos] = 31; - else { - delta_bits = av_log2(30 - dst->loc_code[pos - 1]) + 1; - dst->loc_code[pos] = dst->loc_code[pos - 1] + - get_bits(gb, delta_bits) + 1; - } -} - -/** - * Implements coding mode 1 for gain compensation locations. 
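- * - * The first location is stored directly with 5 bits; each subsequent location is coded as a VLC increment over the previous one, with the VLC table selected by the direction (ascending or descending) of the gain curve at that point.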
- * - * @param[in] gb the GetBit context - * @param[in] ctx ptr to the channel unit context - * @param[out] dst ptr to the output array - */ -static inline void gainc_loc_mode1(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - AtracGainInfo *dst) -{ - int i; - VLC *tab; - - if (dst->num_points > 0) { - /* 1st coefficient is stored directly */ - dst->loc_code[0] = get_bits(gb, 5); - - for (i = 1; i < dst->num_points; i++) { - /* switch VLC according to the curve direction - * (ascending/descending) */ - tab = (dst->lev_code[i] <= dst->lev_code[i - 1]) - ? &gain_vlc_tabs[7] - : &gain_vlc_tabs[9]; - dst->loc_code[i] = dst->loc_code[i - 1] + - get_vlc2(gb, tab->table, tab->bits, 1); - } - } -} - -/** - * Decode location code for each gain control point. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] coded_subbands number of subbands to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_gainc_loc_codes(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int coded_subbands, - AVCodecContext *avctx) -{ - int sb, i, delta, delta_bits, min_val, pred, more_than_ref; - AtracGainInfo *dst, *ref; - VLC *tab; - Atrac3pChanParams *chan = &ctx->channels[ch_num]; - Atrac3pChanParams *ref_chan = &ctx->channels[0]; - - switch (get_bits(gb, 2)) { /* switch according to coding mode */ - case 0: /* sequence of numbers in ascending order */ - for (sb = 0; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) - gainc_loc_mode0(gb, ctx, &chan->gain_data[sb], i); - break; - case 1: - if (ch_num) { - for (sb = 0; sb < coded_subbands; sb++) { - if (chan->gain_data[sb].num_points <= 0) - continue; - dst = &chan->gain_data[sb]; - ref = &ref_chan->gain_data[sb]; - - /* 1st value is vlc-coded modulo delta to master */ - delta = get_vlc2(gb, gain_vlc_tabs[10].table, - gain_vlc_tabs[10].bits, 1); - pred = ref->num_points > 0 ? ref->loc_code[0] : 0; - dst->loc_code[0] = (pred + delta) & 0x1F; - - for (i = 1; i < dst->num_points; i++) { - more_than_ref = i >= ref->num_points; - if (dst->lev_code[i] > dst->lev_code[i - 1]) { - /* ascending curve */ - if (more_than_ref) { - delta = - get_vlc2(gb, gain_vlc_tabs[9].table, - gain_vlc_tabs[9].bits, 1); - dst->loc_code[i] = dst->loc_code[i - 1] + delta; - } else { - if (get_bits1(gb)) - gainc_loc_mode0(gb, ctx, dst, i); // direct coding - else - dst->loc_code[i] = ref->loc_code[i]; // clone master - } - } else { /* descending curve */ - tab = more_than_ref ? 
&gain_vlc_tabs[7] - : &gain_vlc_tabs[10]; - delta = get_vlc2(gb, tab->table, tab->bits, 1); - if (more_than_ref) - dst->loc_code[i] = dst->loc_code[i - 1] + delta; - else - dst->loc_code[i] = (ref->loc_code[i] + delta) & 0x1F; - } - } - } - } else /* VLC delta to previous */ - for (sb = 0; sb < coded_subbands; sb++) - gainc_loc_mode1(gb, ctx, &chan->gain_data[sb]); - break; - case 2: - if (ch_num) { - for (sb = 0; sb < coded_subbands; sb++) { - if (chan->gain_data[sb].num_points <= 0) - continue; - dst = &chan->gain_data[sb]; - ref = &ref_chan->gain_data[sb]; - if (dst->num_points > ref->num_points || get_bits1(gb)) - gainc_loc_mode1(gb, ctx, dst); - else /* clone master for the whole subband */ - for (i = 0; i < chan->gain_data[sb].num_points; i++) - dst->loc_code[i] = ref->loc_code[i]; - } - } else { - /* data for the first subband is coded directly */ - for (i = 0; i < chan->gain_data[0].num_points; i++) - gainc_loc_mode0(gb, ctx, &chan->gain_data[0], i); - - for (sb = 1; sb < coded_subbands; sb++) { - if (chan->gain_data[sb].num_points <= 0) - continue; - dst = &chan->gain_data[sb]; - - /* 1st value is vlc-coded modulo delta to the corresponding - * value of the previous subband if any or zero */ - delta = get_vlc2(gb, gain_vlc_tabs[6].table, - gain_vlc_tabs[6].bits, 1); - pred = dst[-1].num_points > 0 - ? dst[-1].loc_code[0] : 0; - dst->loc_code[0] = (pred + delta) & 0x1F; - - for (i = 1; i < dst->num_points; i++) { - more_than_ref = i >= dst[-1].num_points; - /* Select VLC table according to curve direction and - * presence of prediction. */ - tab = &gain_vlc_tabs[(dst->lev_code[i] > dst->lev_code[i - 1]) * - 2 + more_than_ref + 6]; - delta = get_vlc2(gb, tab->table, tab->bits, 1); - if (more_than_ref) - dst->loc_code[i] = dst->loc_code[i - 1] + delta; - else - dst->loc_code[i] = (dst[-1].loc_code[i] + delta) & 0x1F; - } - } - } - break; - case 3: - if (ch_num) { /* clone master or direct or direct coding */ - for (sb = 0; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) { - if (i >= ref_chan->gain_data[sb].num_points) - gainc_loc_mode0(gb, ctx, &chan->gain_data[sb], i); - else - chan->gain_data[sb].loc_code[i] = - ref_chan->gain_data[sb].loc_code[i]; - } - } else { /* shorter delta to min */ - delta_bits = get_bits(gb, 2) + 1; - min_val = get_bits(gb, 5); - - for (sb = 0; sb < coded_subbands; sb++) - for (i = 0; i < chan->gain_data[sb].num_points; i++) - chan->gain_data[sb].loc_code[i] = min_val + i + - get_bits(gb, delta_bits); - } - break; - } - - /* Validate decoded information */ - for (sb = 0; sb < coded_subbands; sb++) { - dst = &chan->gain_data[sb]; - for (i = 0; i < chan->gain_data[sb].num_points; i++) { - if (dst->loc_code[i] < 0 || dst->loc_code[i] > 31 || - (i && dst->loc_code[i] <= dst->loc_code[i - 1])) { - av_log(avctx, AV_LOG_ERROR, - "Invalid gain location: ch=%d, sb=%d, pos=%d, val=%d\n", - ch_num, sb, i, dst->loc_code[i]); - return AVERROR_INVALIDDATA; - } - } - } - - return 0; -} - -/** - * Decode gain control data for all channels. 
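- * - * A per-channel presence bit is read first; when set, the number of coded subbands (and optionally a wider replication range) follows, then the point counts, level codes and location codes. The gain data of the last coded subband is replicated into any remaining gain subbands.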
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_gainc_data(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int ch_num, coded_subbands, sb, ret; - - for (ch_num = 0; ch_num < num_channels; ch_num++) { - memset(ctx->channels[ch_num].gain_data, 0, - sizeof(*ctx->channels[ch_num].gain_data) * ATRAC3P_SUBBANDS); - - if (get_bits1(gb)) { /* gain control data present? */ - coded_subbands = get_bits(gb, 4) + 1; - if (get_bits1(gb)) /* is high band gain data replication on? */ - ctx->channels[ch_num].num_gain_subbands = get_bits(gb, 4) + 1; - else - ctx->channels[ch_num].num_gain_subbands = coded_subbands; - - if ((ret = decode_gainc_npoints(gb, ctx, ch_num, coded_subbands)) < 0 || - (ret = decode_gainc_levels(gb, ctx, ch_num, coded_subbands)) < 0 || - (ret = decode_gainc_loc_codes(gb, ctx, ch_num, coded_subbands, avctx)) < 0) - return ret; - - if (coded_subbands > 0) { /* propagate gain data if requested */ - for (sb = coded_subbands; sb < ctx->channels[ch_num].num_gain_subbands; sb++) - ctx->channels[ch_num].gain_data[sb] = - ctx->channels[ch_num].gain_data[sb - 1]; - } - } else { - ctx->channels[ch_num].num_gain_subbands = 0; - } - } - - return 0; -} - -/** - * Decode envelope for all tones of a channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] band_has_tones ptr to an array of per-band-flags: - * 1 - tone data present - */ -static void decode_tones_envelope(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int band_has_tones[]) -{ - int sb; - Atrac3pWavesData *dst = ctx->channels[ch_num].tones_info; - Atrac3pWavesData *ref = ctx->channels[0].tones_info; - - if (!ch_num || !get_bits1(gb)) { /* mode 0: fixed-length coding */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb]) - continue; - dst[sb].pend_env.has_start_point = get_bits1(gb); - dst[sb].pend_env.start_pos = dst[sb].pend_env.has_start_point - ? get_bits(gb, 5) : -1; - dst[sb].pend_env.has_stop_point = get_bits1(gb); - dst[sb].pend_env.stop_pos = dst[sb].pend_env.has_stop_point - ? get_bits(gb, 5) : 32; - } - } else { /* mode 1(slave only): copy master */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb]) - continue; - dst[sb].pend_env.has_start_point = ref[sb].pend_env.has_start_point; - dst[sb].pend_env.has_stop_point = ref[sb].pend_env.has_stop_point; - dst[sb].pend_env.start_pos = ref[sb].pend_env.start_pos; - dst[sb].pend_env.stop_pos = ref[sb].pend_env.stop_pos; - } - } -} - -/** - * Decode number of tones for each subband of a channel. 
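The tone-count decoder that follows selects one of four coding modes from a field of ch_num + 1 bits, so the master channel can only use modes 0 and 1. A compact summary drawn directly from the switch below (illustrative only):

    mode 0: fixed 4-bit count per tone band
    mode 1: VLC-coded count (tone_vlc_tabs[1])
    mode 2: VLC-coded delta to the master channel, modulo 16 (slave only)
    mode 3: copy of the master channel's count (slave only)

The per-band counts are then accumulated into the tone pool shared by both channels, and the frame is rejected if the total exceeds 48 tones.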
- * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] band_has_tones ptr to an array of per-band-flags: - * 1 - tone data present - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_band_numwavs(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int band_has_tones[], - AVCodecContext *avctx) -{ - int mode, sb, delta; - Atrac3pWavesData *dst = ctx->channels[ch_num].tones_info; - Atrac3pWavesData *ref = ctx->channels[0].tones_info; - - mode = get_bits(gb, ch_num + 1); - switch (mode) { - case 0: /** fixed-length coding */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) - if (band_has_tones[sb]) - dst[sb].num_wavs = get_bits(gb, 4); - break; - case 1: /** variable-length coding */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) - if (band_has_tones[sb]) - dst[sb].num_wavs = - get_vlc2(gb, tone_vlc_tabs[1].table, - tone_vlc_tabs[1].bits, 1); - break; - case 2: /** VLC modulo delta to master (slave only) */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) - if (band_has_tones[sb]) { - delta = get_vlc2(gb, tone_vlc_tabs[2].table, - tone_vlc_tabs[2].bits, 1); - delta = sign_extend(delta, 3); - dst[sb].num_wavs = (ref[sb].num_wavs + delta) & 0xF; - } - break; - case 3: /** copy master (slave only) */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) - if (band_has_tones[sb]) - dst[sb].num_wavs = ref[sb].num_wavs; - break; - } - - /** initialize start tone index for each subband */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) - if (band_has_tones[sb]) { - if (ctx->waves_info->tones_index + dst[sb].num_wavs > 48) { - av_log(avctx, AV_LOG_ERROR, - "Too many tones: %d (max. 48), frame: %"PRId64"!\n", - ctx->waves_info->tones_index + dst[sb].num_wavs, - avctx->frame_num); - return AVERROR_INVALIDDATA; - } - dst[sb].start_index = ctx->waves_info->tones_index; - ctx->waves_info->tones_index += dst[sb].num_wavs; - } - - return 0; -} - -/** - * Decode frequency information for each subband of a channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] band_has_tones ptr to an array of per-band-flags: - * 1 - tone data present - */ -static void decode_tones_frequency(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int band_has_tones[]) -{ - int sb, i, direction, nbits, pred, delta; - Atrac3pWaveParam *iwav, *owav; - Atrac3pWavesData *dst = ctx->channels[ch_num].tones_info; - Atrac3pWavesData *ref = ctx->channels[0].tones_info; - - if (!ch_num || !get_bits1(gb)) { /* mode 0: fixed-length coding */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb] || !dst[sb].num_wavs) - continue; - iwav = &ctx->waves_info->waves[dst[sb].start_index]; - direction = (dst[sb].num_wavs > 1) ? 
get_bits1(gb) : 0; - if (direction) { /** packed numbers in descending order */ - if (dst[sb].num_wavs) - iwav[dst[sb].num_wavs - 1].freq_index = get_bits(gb, 10); - for (i = dst[sb].num_wavs - 2; i >= 0 ; i--) { - nbits = av_log2(iwav[i+1].freq_index) + 1; - iwav[i].freq_index = get_bits(gb, nbits); - } - } else { /** packed numbers in ascending order */ - for (i = 0; i < dst[sb].num_wavs; i++) { - if (!i || iwav[i - 1].freq_index < 512) - iwav[i].freq_index = get_bits(gb, 10); - else { - nbits = av_log2(1023 - iwav[i - 1].freq_index) + 1; - iwav[i].freq_index = get_bits(gb, nbits) + - 1024 - (1 << nbits); - } - } - } - } - } else { /* mode 1: VLC modulo delta to master (slave only) */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb] || !dst[sb].num_wavs) - continue; - iwav = &ctx->waves_info->waves[ref[sb].start_index]; - owav = &ctx->waves_info->waves[dst[sb].start_index]; - for (i = 0; i < dst[sb].num_wavs; i++) { - delta = get_vlc2(gb, tone_vlc_tabs[6].table, - tone_vlc_tabs[6].bits, 1); - delta = sign_extend(delta, 8); - pred = (i < ref[sb].num_wavs) ? iwav[i].freq_index : - (ref[sb].num_wavs ? iwav[ref[sb].num_wavs - 1].freq_index : 0); - owav[i].freq_index = (pred + delta) & 0x3FF; - } - } - } -} - -/** - * Decode amplitude information for each subband of a channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] band_has_tones ptr to an array of per-band-flags: - * 1 - tone data present - */ -static void decode_tones_amplitude(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int band_has_tones[]) -{ - int mode, sb, j, i, diff, maxdiff, fi, delta, pred; - Atrac3pWaveParam *wsrc, *wref; - int refwaves[48] = { 0 }; - Atrac3pWavesData *dst = ctx->channels[ch_num].tones_info; - Atrac3pWavesData *ref = ctx->channels[0].tones_info; - - if (ch_num) { - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb] || !dst[sb].num_wavs) - continue; - wsrc = &ctx->waves_info->waves[dst[sb].start_index]; - wref = &ctx->waves_info->waves[ref[sb].start_index]; - for (j = 0; j < dst[sb].num_wavs; j++) { - for (i = 0, fi = 0, maxdiff = 1024; i < ref[sb].num_wavs; i++) { - diff = FFABS(wsrc[j].freq_index - wref[i].freq_index); - if (diff < maxdiff) { - maxdiff = diff; - fi = i; - } - } - - if (maxdiff < 8) - refwaves[dst[sb].start_index + j] = fi + ref[sb].start_index; - else if (j < ref[sb].num_wavs) - refwaves[dst[sb].start_index + j] = j + ref[sb].start_index; - else - refwaves[dst[sb].start_index + j] = -1; - } - } - } - - mode = get_bits(gb, ch_num + 1); - - switch (mode) { - case 0: /** fixed-length coding */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb] || !dst[sb].num_wavs) - continue; - if (ctx->waves_info->amplitude_mode) - for (i = 0; i < dst[sb].num_wavs; i++) - ctx->waves_info->waves[dst[sb].start_index + i].amp_sf = get_bits(gb, 6); - else - ctx->waves_info->waves[dst[sb].start_index].amp_sf = get_bits(gb, 6); - } - break; - case 1: /** min + VLC delta */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb] || !dst[sb].num_wavs) - continue; - if (ctx->waves_info->amplitude_mode) - for (i = 0; i < dst[sb].num_wavs; i++) - ctx->waves_info->waves[dst[sb].start_index + i].amp_sf = - get_vlc2(gb, tone_vlc_tabs[3].table, - tone_vlc_tabs[3].bits, 1) + 20; - else - ctx->waves_info->waves[dst[sb].start_index].amp_sf = - get_vlc2(gb, 
tone_vlc_tabs[4].table, - tone_vlc_tabs[4].bits, 1) + 24; - } - break; - case 2: /** VLC modulo delta to master (slave only) */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb] || !dst[sb].num_wavs) - continue; - for (i = 0; i < dst[sb].num_wavs; i++) { - delta = get_vlc2(gb, tone_vlc_tabs[5].table, - tone_vlc_tabs[5].bits, 1); - delta = sign_extend(delta, 5); - pred = refwaves[dst[sb].start_index + i] >= 0 ? - ctx->waves_info->waves[refwaves[dst[sb].start_index + i]].amp_sf : 34; - ctx->waves_info->waves[dst[sb].start_index + i].amp_sf = (pred + delta) & 0x3F; - } - } - break; - case 3: /** clone master (slave only) */ - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb]) - continue; - for (i = 0; i < dst[sb].num_wavs; i++) - ctx->waves_info->waves[dst[sb].start_index + i].amp_sf = - refwaves[dst[sb].start_index + i] >= 0 - ? ctx->waves_info->waves[refwaves[dst[sb].start_index + i]].amp_sf - : 32; - } - break; - } -} - -/** - * Decode phase information for each subband of a channel. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] ch_num channel to process - * @param[in] band_has_tones ptr to an array of per-band-flags: - * 1 - tone data present - */ -static void decode_tones_phase(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int ch_num, int band_has_tones[]) -{ - int sb, i; - Atrac3pWaveParam *wparam; - Atrac3pWavesData *dst = ctx->channels[ch_num].tones_info; - - for (sb = 0; sb < ctx->waves_info->num_tone_bands; sb++) { - if (!band_has_tones[sb]) - continue; - wparam = &ctx->waves_info->waves[dst[sb].start_index]; - for (i = 0; i < dst[sb].num_wavs; i++) - wparam[i].phase_index = get_bits(gb, 5); - } -} - -/** - * Decode tones info for all channels. - * - * @param[in] gb the GetBit context - * @param[in,out] ctx ptr to the channel unit context - * @param[in] num_channels number of channels to process - * @param[in] avctx ptr to the AVCodecContext - * @return result code: 0 = OK, otherwise - error code - */ -static int decode_tones_info(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int ch_num, i, ret; - int band_has_tones[16]; - - for (ch_num = 0; ch_num < num_channels; ch_num++) - memset(ctx->channels[ch_num].tones_info, 0, - sizeof(*ctx->channels[ch_num].tones_info) * ATRAC3P_SUBBANDS); - - ctx->waves_info->tones_present = get_bits1(gb); - if (!ctx->waves_info->tones_present) - return 0; - - memset(ctx->waves_info->waves, 0, sizeof(ctx->waves_info->waves)); - - ctx->waves_info->amplitude_mode = get_bits1(gb); - if (!ctx->waves_info->amplitude_mode) { - avpriv_report_missing_feature(avctx, "GHA amplitude mode 0"); - return AVERROR_PATCHWELCOME; - } - - ctx->waves_info->num_tone_bands = - get_vlc2(gb, tone_vlc_tabs[0].table, - tone_vlc_tabs[0].bits, 1) + 1; - - if (num_channels == 2) { - get_subband_flags(gb, ctx->waves_info->tone_sharing, ctx->waves_info->num_tone_bands); - get_subband_flags(gb, ctx->waves_info->tone_master, ctx->waves_info->num_tone_bands); - get_subband_flags(gb, ctx->waves_info->invert_phase, ctx->waves_info->num_tone_bands); - } - - ctx->waves_info->tones_index = 0; - - for (ch_num = 0; ch_num < num_channels; ch_num++) { - for (i = 0; i < ctx->waves_info->num_tone_bands; i++) - band_has_tones[i] = !ch_num ? 
1 : !ctx->waves_info->tone_sharing[i]; - - decode_tones_envelope(gb, ctx, ch_num, band_has_tones); - if ((ret = decode_band_numwavs(gb, ctx, ch_num, band_has_tones, - avctx)) < 0) - return ret; - - decode_tones_frequency(gb, ctx, ch_num, band_has_tones); - decode_tones_amplitude(gb, ctx, ch_num, band_has_tones); - decode_tones_phase(gb, ctx, ch_num, band_has_tones); - } - - if (num_channels == 2) { - for (i = 0; i < ctx->waves_info->num_tone_bands; i++) { - if (ctx->waves_info->tone_sharing[i]) - ctx->channels[1].tones_info[i] = ctx->channels[0].tones_info[i]; - - if (ctx->waves_info->tone_master[i]) - FFSWAP(Atrac3pWavesData, ctx->channels[0].tones_info[i], - ctx->channels[1].tones_info[i]); - } - } - - return 0; -} - -int ff_atrac3p_decode_channel_unit(GetBitContext *gb, Atrac3pChanUnitCtx *ctx, - int num_channels, AVCodecContext *avctx) -{ - int ret; - - /* parse sound header */ - ctx->num_quant_units = get_bits(gb, 5) + 1; - if (ctx->num_quant_units > 28 && ctx->num_quant_units < 32) { - av_log(avctx, AV_LOG_ERROR, - "Invalid number of quantization units: %d!\n", - ctx->num_quant_units); - return AVERROR_INVALIDDATA; - } - - ctx->mute_flag = get_bits1(gb); - - /* decode various sound parameters */ - if ((ret = decode_quant_wordlen(gb, ctx, num_channels, avctx)) < 0) - return ret; - - ctx->num_subbands = atrac3p_qu_to_subband[ctx->num_quant_units - 1] + 1; - ctx->num_coded_subbands = ctx->used_quant_units - ? atrac3p_qu_to_subband[ctx->used_quant_units - 1] + 1 - : 0; - - if ((ret = decode_scale_factors(gb, ctx, num_channels, avctx)) < 0) - return ret; - - if ((ret = decode_code_table_indexes(gb, ctx, num_channels, avctx)) < 0) - return ret; - - decode_spectrum(gb, ctx, num_channels, avctx); - - if (num_channels == 2) { - get_subband_flags(gb, ctx->swap_channels, ctx->num_coded_subbands); - get_subband_flags(gb, ctx->negate_coeffs, ctx->num_coded_subbands); - } - - decode_window_shape(gb, ctx, num_channels); - - if ((ret = decode_gainc_data(gb, ctx, num_channels, avctx)) < 0) - return ret; - - if ((ret = decode_tones_info(gb, ctx, num_channels, avctx)) < 0) - return ret; - - /* decode global noise info */ - ctx->noise_present = get_bits1(gb); - if (ctx->noise_present) { - ctx->noise_level_index = get_bits(gb, 4); - ctx->noise_table_index = get_bits(gb, 4); - } - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadsp.h deleted file mode 100644 index c29755267b3d5e5e7e07aecb757894beca1b2a18..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dcadsp.h +++ /dev/null @@ -1,100 +0,0 @@ -/* - * Copyright (C) 2016 foo86 - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_DCADSP_H -#define AVCODEC_DCADSP_H - -#include "libavutil/common.h" -#include "libavutil/tx.h" - -#include "dcadct.h" -#include "synth_filter.h" - -typedef struct DCADSPContext { - void (*decode_hf)(int32_t **dst, - const int32_t *vq_index, - const int8_t hf_vq[1024][32], - int32_t scale_factors[32][2], - ptrdiff_t sb_start, ptrdiff_t sb_end, - ptrdiff_t ofs, ptrdiff_t len); - - void (*decode_joint)(int32_t **dst, int32_t **src, - const int32_t *scale_factors, - ptrdiff_t sb_start, ptrdiff_t sb_end, - ptrdiff_t ofs, ptrdiff_t len); - - void (*lfe_fir_float[2])(float *pcm_samples, int32_t *lfe_samples, - const float *filter_coeff, ptrdiff_t npcmblocks); - - void (*lfe_x96_float)(float *dst, const float *src, - float *hist, ptrdiff_t len); - - void (*sub_qmf_float[2])(SynthFilterContext *synth, - AVTXContext *imdct, - av_tx_fn imdct_fn, - float *pcm_samples, - int32_t **subband_samples_lo, - int32_t **subband_samples_hi, - float *hist1, int *offset, float *hist2, - const float *filter_coeff, ptrdiff_t npcmblocks, - float scale); - - void (*lfe_fir_fixed)(int32_t *pcm_samples, int32_t *lfe_samples, - const int32_t *filter_coeff, ptrdiff_t npcmblocks); - - void (*lfe_x96_fixed)(int32_t *dst, const int32_t *src, - int32_t *hist, ptrdiff_t len); - - void (*sub_qmf_fixed[2])(SynthFilterContext *synth, - DCADCTContext *imdct, - int32_t *pcm_samples, - int32_t **subband_samples_lo, - int32_t **subband_samples_hi, - int32_t *hist1, int *offset, int32_t *hist2, - const int32_t *filter_coeff, ptrdiff_t npcmblocks); - - void (*decor)(int32_t *dst, const int32_t *src, int coeff, ptrdiff_t len); - - void (*dmix_sub_xch)(int32_t *dst1, int32_t *dst2, - const int32_t *src, ptrdiff_t len); - - void (*dmix_sub)(int32_t *dst, const int32_t *src, int coeff, ptrdiff_t len); - - void (*dmix_add)(int32_t *dst, const int32_t *src, int coeff, ptrdiff_t len); - - void (*dmix_scale)(int32_t *dst, int scale, ptrdiff_t len); - - void (*dmix_scale_inv)(int32_t *dst, int scale_inv, ptrdiff_t len); - - void (*assemble_freq_bands)(int32_t *dst, int32_t *src0, int32_t *src1, - const int32_t *coeff, ptrdiff_t len); - - void (*lbr_bank)(float output[32][4], float **input, - const float *coeff, ptrdiff_t ofs, ptrdiff_t len); - - void (*lfe_iir)(float *output, const float *input, - const float iir[5][4], float hist[5][2], - ptrdiff_t factor); -} DCADSPContext; - -av_cold void ff_dcadsp_init(DCADSPContext *s); -av_cold void ff_dcadsp_init_x86(DCADSPContext *s); - -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flacenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flacenc.c deleted file mode 100644 index a449b73235370e328e71d4141b4f9eecfb8bae3a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/flacenc.c +++ /dev/null @@ -1,1768 +0,0 @@ -/* - * FLAC audio encoder - * Copyright (c) 2006 Justin Ruggles - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/avassert.h" -#include "libavutil/channel_layout.h" -#include "libavutil/crc.h" -#include "libavutil/intmath.h" -#include "libavutil/md5.h" -#include "libavutil/opt.h" - -#include "avcodec.h" -#include "bswapdsp.h" -#include "codec_internal.h" -#include "encode.h" -#include "put_bits.h" -#include "lpc.h" -#include "flac.h" -#include "flacdata.h" -#include "flacencdsp.h" - -#define FLAC_SUBFRAME_CONSTANT 0 -#define FLAC_SUBFRAME_VERBATIM 1 -#define FLAC_SUBFRAME_FIXED 8 -#define FLAC_SUBFRAME_LPC 32 - -#define MAX_FIXED_ORDER 4 -#define MAX_PARTITION_ORDER 8 -#define MAX_PARTITIONS (1 << MAX_PARTITION_ORDER) -#define MAX_LPC_PRECISION 15 -#define MIN_LPC_SHIFT 0 -#define MAX_LPC_SHIFT 15 - -enum CodingMode { - CODING_MODE_RICE = 4, - CODING_MODE_RICE2 = 5, -}; - -typedef struct CompressionOptions { - int compression_level; - int block_time_ms; - enum FFLPCType lpc_type; - int lpc_passes; - int lpc_coeff_precision; - int min_prediction_order; - int max_prediction_order; - int prediction_order_method; - int min_partition_order; - int max_partition_order; - int ch_mode; - int exact_rice_parameters; - int multi_dim_quant; -} CompressionOptions; - -typedef struct RiceContext { - enum CodingMode coding_mode; - int porder; - int params[MAX_PARTITIONS]; -} RiceContext; - -typedef struct FlacSubframe { - int type; - int type_code; - int obits; - int wasted; - int order; - int32_t coefs[MAX_LPC_ORDER]; - int shift; - - RiceContext rc; - uint32_t rc_udata[FLAC_MAX_BLOCKSIZE]; - uint64_t rc_sums[32][MAX_PARTITIONS]; - - int32_t samples[FLAC_MAX_BLOCKSIZE]; - int32_t residual[FLAC_MAX_BLOCKSIZE+11]; -} FlacSubframe; - -typedef struct FlacFrame { - FlacSubframe subframes[FLAC_MAX_CHANNELS]; - int64_t samples_33bps[FLAC_MAX_BLOCKSIZE]; - int blocksize; - int bs_code[2]; - uint8_t crc8; - int ch_mode; - int verbatim_only; -} FlacFrame; - -typedef struct FlacEncodeContext { - AVClass *class; - PutBitContext pb; - int channels; - int samplerate; - int sr_code[2]; - int bps_code; - int max_blocksize; - int min_framesize; - int max_framesize; - int max_encoded_framesize; - uint32_t frame_count; - uint64_t sample_count; - uint8_t md5sum[16]; - FlacFrame frame; - CompressionOptions options; - AVCodecContext *avctx; - LPCContext lpc_ctx; - struct AVMD5 *md5ctx; - uint8_t *md5_buffer; - unsigned int md5_buffer_size; - BswapDSPContext bdsp; - FLACEncDSPContext flac_dsp; - - int flushed; - int64_t next_pts; -} FlacEncodeContext; - - -/** - * Write streaminfo metadata block to byte array. 
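As a quick cross-check on write_streaminfo() below: the fixed field widths it emits, plus the 128-bit MD5 signature that is appended with memcpy(), add up to the 34-byte STREAMINFO block. A minimal standalone sketch, not part of the original file:

    #include <stdio.h>

    int main(void)
    {
        /* widths written by write_streaminfo(): min/max block size, min/max
         * frame size, sample rate, channels - 1, bits per sample - 1, the
         * 36-bit sample count (24 + 12 bits), and the 16-byte MD5 signature */
        const int widths[] = { 16, 16, 24, 24, 20, 3, 5, 24, 12, 128 };
        int bits = 0;
        for (unsigned i = 0; i < sizeof(widths) / sizeof(widths[0]); i++)
            bits += widths[i];
        printf("%d bits = %d bytes\n", bits, bits / 8); /* 272 bits = 34 bytes */
        return 0;
    }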
- */ -static void write_streaminfo(FlacEncodeContext *s, uint8_t *header) -{ - PutBitContext pb; - - memset(header, 0, FLAC_STREAMINFO_SIZE); - init_put_bits(&pb, header, FLAC_STREAMINFO_SIZE); - - /* streaminfo metadata block */ - put_bits(&pb, 16, s->max_blocksize); - put_bits(&pb, 16, s->max_blocksize); - put_bits(&pb, 24, s->min_framesize); - put_bits(&pb, 24, s->max_framesize); - put_bits(&pb, 20, s->samplerate); - put_bits(&pb, 3, s->channels-1); - put_bits(&pb, 5, s->avctx->bits_per_raw_sample - 1); - /* write 36-bit sample count in 2 put_bits() calls */ - put_bits(&pb, 24, (s->sample_count & 0xFFFFFF000LL) >> 12); - put_bits(&pb, 12, s->sample_count & 0x000000FFFLL); - flush_put_bits(&pb); - memcpy(&header[18], s->md5sum, 16); -} - - -/** - * Calculate an estimate for the maximum frame size based on verbatim mode. - * @param blocksize block size, in samples - * @param ch number of channels - * @param bps bits-per-sample - */ -static int flac_get_max_frame_size(int blocksize, int ch, int bps) -{ - /* Technically, there is no limit to FLAC frame size, but an encoder - should not write a frame that is larger than if verbatim encoding mode - were to be used. */ - - int count; - - count = 16; /* frame header */ - count += ch * ((7+bps+7)/8); /* subframe headers */ - if (ch == 2) { - /* for stereo, need to account for using decorrelation */ - count += (( 2*bps+1) * blocksize + 7) / 8; - } else { - count += ( ch*bps * blocksize + 7) / 8; - } - count += 2; /* frame footer */ - - return count; -} - - -/** - * Set blocksize based on samplerate. - * Choose the closest predefined blocksize >= BLOCK_TIME_MS milliseconds. - */ -static int select_blocksize(int samplerate, int block_time_ms) -{ - int i; - int target; - int blocksize; - - av_assert0(samplerate > 0); - blocksize = ff_flac_blocksize_table[1]; - target = (samplerate * block_time_ms) / 1000; - for (i = 0; i < 16; i++) { - if (target >= ff_flac_blocksize_table[i] && - ff_flac_blocksize_table[i] > blocksize) { - blocksize = ff_flac_blocksize_table[i]; - } - } - return blocksize; -} - - -static av_cold void dprint_compression_options(FlacEncodeContext *s) -{ - AVCodecContext *avctx = s->avctx; - CompressionOptions *opt = &s->options; - - av_log(avctx, AV_LOG_DEBUG, " compression: %d\n", opt->compression_level); - - switch (opt->lpc_type) { - case FF_LPC_TYPE_NONE: - av_log(avctx, AV_LOG_DEBUG, " lpc type: None\n"); - break; - case FF_LPC_TYPE_FIXED: - av_log(avctx, AV_LOG_DEBUG, " lpc type: Fixed pre-defined coefficients\n"); - break; - case FF_LPC_TYPE_LEVINSON: - av_log(avctx, AV_LOG_DEBUG, " lpc type: Levinson-Durbin recursion with Welch window\n"); - break; - case FF_LPC_TYPE_CHOLESKY: - av_log(avctx, AV_LOG_DEBUG, " lpc type: Cholesky factorization, %d pass%s\n", - opt->lpc_passes, opt->lpc_passes == 1 ? 
"" : "es"); - break; - } - - av_log(avctx, AV_LOG_DEBUG, " prediction order: %d, %d\n", - opt->min_prediction_order, opt->max_prediction_order); - - switch (opt->prediction_order_method) { - case ORDER_METHOD_EST: - av_log(avctx, AV_LOG_DEBUG, " order method: %s\n", "estimate"); - break; - case ORDER_METHOD_2LEVEL: - av_log(avctx, AV_LOG_DEBUG, " order method: %s\n", "2-level"); - break; - case ORDER_METHOD_4LEVEL: - av_log(avctx, AV_LOG_DEBUG, " order method: %s\n", "4-level"); - break; - case ORDER_METHOD_8LEVEL: - av_log(avctx, AV_LOG_DEBUG, " order method: %s\n", "8-level"); - break; - case ORDER_METHOD_SEARCH: - av_log(avctx, AV_LOG_DEBUG, " order method: %s\n", "full search"); - break; - case ORDER_METHOD_LOG: - av_log(avctx, AV_LOG_DEBUG, " order method: %s\n", "log search"); - break; - } - - - av_log(avctx, AV_LOG_DEBUG, " partition order: %d, %d\n", - opt->min_partition_order, opt->max_partition_order); - - av_log(avctx, AV_LOG_DEBUG, " block size: %d\n", avctx->frame_size); - - av_log(avctx, AV_LOG_DEBUG, " lpc precision: %d\n", - opt->lpc_coeff_precision); -} - - -static av_cold int flac_encode_init(AVCodecContext *avctx) -{ - int freq = avctx->sample_rate; - int channels = avctx->ch_layout.nb_channels; - FlacEncodeContext *s = avctx->priv_data; - int i, level, ret; - uint8_t *streaminfo; - - s->avctx = avctx; - - switch (avctx->sample_fmt) { - case AV_SAMPLE_FMT_S16: - avctx->bits_per_raw_sample = 16; - s->bps_code = 4; - break; - case AV_SAMPLE_FMT_S32: - if (avctx->bits_per_raw_sample <= 24) { - if (avctx->bits_per_raw_sample < 24) - av_log(avctx, AV_LOG_WARNING, "encoding as 24 bits-per-sample\n"); - avctx->bits_per_raw_sample = 24; - s->bps_code = 6; - } else if (avctx->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL) { - av_log(avctx, AV_LOG_WARNING, - "encoding as 24 bits-per-sample, more is considered " - "experimental. 
Add -strict experimental if you want " - "to encode more than 24 bits-per-sample\n"); - avctx->bits_per_raw_sample = 24; - s->bps_code = 6; - } else { - avctx->bits_per_raw_sample = 32; - s->bps_code = 7; - } - break; - } - - if (channels < 1 || channels > FLAC_MAX_CHANNELS) { - av_log(avctx, AV_LOG_ERROR, "%d channels not supported (max %d)\n", - channels, FLAC_MAX_CHANNELS); - return AVERROR(EINVAL); - } - s->channels = channels; - - /* find samplerate in table */ - if (freq < 1) - return AVERROR(EINVAL); - for (i = 1; i < 12; i++) { - if (freq == ff_flac_sample_rate_table[i]) { - s->samplerate = ff_flac_sample_rate_table[i]; - s->sr_code[0] = i; - s->sr_code[1] = 0; - break; - } - } - /* if not in table, samplerate is non-standard */ - if (i == 12) { - if (freq % 1000 == 0 && freq < 255000) { - s->sr_code[0] = 12; - s->sr_code[1] = freq / 1000; - } else if (freq % 10 == 0 && freq < 655350) { - s->sr_code[0] = 14; - s->sr_code[1] = freq / 10; - } else if (freq < 65535) { - s->sr_code[0] = 13; - s->sr_code[1] = freq; - } else if (freq < 1048576) { - s->sr_code[0] = 0; - s->sr_code[1] = 0; - } else { - av_log(avctx, AV_LOG_ERROR, "%d Hz not supported\n", freq); - return AVERROR(EINVAL); - } - s->samplerate = freq; - } - - /* set compression option defaults based on avctx->compression_level */ - if (avctx->compression_level < 0) - s->options.compression_level = 5; - else - s->options.compression_level = avctx->compression_level; - - level = s->options.compression_level; - if (level > 12) { - av_log(avctx, AV_LOG_ERROR, "invalid compression level: %d\n", - s->options.compression_level); - return AVERROR(EINVAL); - } - - s->options.block_time_ms = ((int[]){ 27, 27, 27,105,105,105,105,105,105,105,105,105,105})[level]; - - if (s->options.lpc_type == FF_LPC_TYPE_DEFAULT) - s->options.lpc_type = ((int[]){ FF_LPC_TYPE_FIXED, FF_LPC_TYPE_FIXED, FF_LPC_TYPE_FIXED, - FF_LPC_TYPE_LEVINSON, FF_LPC_TYPE_LEVINSON, FF_LPC_TYPE_LEVINSON, - FF_LPC_TYPE_LEVINSON, FF_LPC_TYPE_LEVINSON, FF_LPC_TYPE_LEVINSON, - FF_LPC_TYPE_LEVINSON, FF_LPC_TYPE_LEVINSON, FF_LPC_TYPE_LEVINSON, - FF_LPC_TYPE_LEVINSON})[level]; - - if (s->options.min_prediction_order < 0) - s->options.min_prediction_order = ((int[]){ 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1})[level]; - if (s->options.max_prediction_order < 0) - s->options.max_prediction_order = ((int[]){ 3, 4, 4, 6, 8, 8, 8, 8, 12, 12, 12, 32, 32})[level]; - - if (s->options.prediction_order_method < 0) - s->options.prediction_order_method = ((int[]){ ORDER_METHOD_EST, ORDER_METHOD_EST, ORDER_METHOD_EST, - ORDER_METHOD_EST, ORDER_METHOD_EST, ORDER_METHOD_EST, - ORDER_METHOD_4LEVEL, ORDER_METHOD_LOG, ORDER_METHOD_4LEVEL, - ORDER_METHOD_LOG, ORDER_METHOD_SEARCH, ORDER_METHOD_LOG, - ORDER_METHOD_SEARCH})[level]; - - if (s->options.min_partition_order > s->options.max_partition_order) { - av_log(avctx, AV_LOG_ERROR, "invalid partition orders: min=%d max=%d\n", - s->options.min_partition_order, s->options.max_partition_order); - return AVERROR(EINVAL); - } - if (s->options.min_partition_order < 0) - s->options.min_partition_order = ((int[]){ 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0})[level]; - if (s->options.max_partition_order < 0) - s->options.max_partition_order = ((int[]){ 2, 2, 3, 3, 3, 8, 8, 8, 8, 8, 8, 8, 8})[level]; - - if (s->options.lpc_type == FF_LPC_TYPE_NONE) { - s->options.min_prediction_order = 0; - s->options.max_prediction_order = 0; - } else if (s->options.lpc_type == FF_LPC_TYPE_FIXED) { - if (s->options.min_prediction_order > MAX_FIXED_ORDER) { - av_log(avctx, 
AV_LOG_WARNING, - "invalid min prediction order %d, clamped to %d\n", - s->options.min_prediction_order, MAX_FIXED_ORDER); - s->options.min_prediction_order = MAX_FIXED_ORDER; - } - if (s->options.max_prediction_order > MAX_FIXED_ORDER) { - av_log(avctx, AV_LOG_WARNING, - "invalid max prediction order %d, clamped to %d\n", - s->options.max_prediction_order, MAX_FIXED_ORDER); - s->options.max_prediction_order = MAX_FIXED_ORDER; - } - } - - if (s->options.max_prediction_order < s->options.min_prediction_order) { - av_log(avctx, AV_LOG_ERROR, "invalid prediction orders: min=%d max=%d\n", - s->options.min_prediction_order, s->options.max_prediction_order); - return AVERROR(EINVAL); - } - - if (avctx->frame_size > 0) { - if (avctx->frame_size < FLAC_MIN_BLOCKSIZE || - avctx->frame_size > FLAC_MAX_BLOCKSIZE) { - av_log(avctx, AV_LOG_ERROR, "invalid block size: %d\n", - avctx->frame_size); - return AVERROR(EINVAL); - } - } else { - s->avctx->frame_size = select_blocksize(s->samplerate, s->options.block_time_ms); - } - s->max_blocksize = s->avctx->frame_size; - - /* set maximum encoded frame size in verbatim mode */ - s->max_framesize = flac_get_max_frame_size(s->avctx->frame_size, - s->channels, - s->avctx->bits_per_raw_sample); - - /* initialize MD5 context */ - s->md5ctx = av_md5_alloc(); - if (!s->md5ctx) - return AVERROR(ENOMEM); - av_md5_init(s->md5ctx); - - streaminfo = av_malloc(FLAC_STREAMINFO_SIZE); - if (!streaminfo) - return AVERROR(ENOMEM); - write_streaminfo(s, streaminfo); - avctx->extradata = streaminfo; - avctx->extradata_size = FLAC_STREAMINFO_SIZE; - - s->frame_count = 0; - s->min_framesize = s->max_framesize; - - if ((channels == 3 && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_SURROUND)) || - (channels == 4 && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_2_2) && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_QUAD)) || - (channels == 5 && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT0) && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT0_BACK)) || - (channels == 6 && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT1) && - av_channel_layout_compare(&avctx->ch_layout, &(AVChannelLayout)AV_CHANNEL_LAYOUT_5POINT1_BACK))) { - if (avctx->ch_layout.order != AV_CHANNEL_ORDER_UNSPEC) { - av_log(avctx, AV_LOG_ERROR, "Channel layout not supported by Flac, " - "output stream will have incorrect " - "channel layout.\n"); - } else { - av_log(avctx, AV_LOG_WARNING, "No channel layout specified. 
The encoder " - "will use Flac channel layout for " - "%d channels.\n", channels); - } - } - - ret = ff_lpc_init(&s->lpc_ctx, avctx->frame_size, - s->options.max_prediction_order, FF_LPC_TYPE_LEVINSON); - - ff_bswapdsp_init(&s->bdsp); - ff_flacencdsp_init(&s->flac_dsp); - - dprint_compression_options(s); - - return ret; -} - - -static void init_frame(FlacEncodeContext *s, int nb_samples) -{ - int i, ch; - FlacFrame *frame; - - frame = &s->frame; - - for (i = 0; i < 16; i++) { - if (nb_samples == ff_flac_blocksize_table[i]) { - frame->blocksize = ff_flac_blocksize_table[i]; - frame->bs_code[0] = i; - frame->bs_code[1] = 0; - break; - } - } - if (i == 16) { - frame->blocksize = nb_samples; - if (frame->blocksize <= 256) { - frame->bs_code[0] = 6; - frame->bs_code[1] = frame->blocksize-1; - } else { - frame->bs_code[0] = 7; - frame->bs_code[1] = frame->blocksize-1; - } - } - - for (ch = 0; ch < s->channels; ch++) { - FlacSubframe *sub = &frame->subframes[ch]; - - sub->wasted = 0; - sub->obits = s->avctx->bits_per_raw_sample; - - if (sub->obits > 16) - sub->rc.coding_mode = CODING_MODE_RICE2; - else - sub->rc.coding_mode = CODING_MODE_RICE; - } - - frame->verbatim_only = 0; -} - - -/** - * Copy channel-interleaved input samples into separate subframes. - */ -static void copy_samples(FlacEncodeContext *s, const void *samples) -{ - int i, j, ch; - FlacFrame *frame; - int shift = av_get_bytes_per_sample(s->avctx->sample_fmt) * 8 - - s->avctx->bits_per_raw_sample; - -#define COPY_SAMPLES(bits) do { \ - const int ## bits ## _t *samples0 = samples; \ - frame = &s->frame; \ - for (i = 0, j = 0; i < frame->blocksize; i++) \ - for (ch = 0; ch < s->channels; ch++, j++) \ - frame->subframes[ch].samples[i] = samples0[j] >> shift; \ -} while (0) - - if (s->avctx->sample_fmt == AV_SAMPLE_FMT_S16) - COPY_SAMPLES(16); - else - COPY_SAMPLES(32); -} - - -static uint64_t rice_count_exact(const int32_t *res, int n, int k) -{ - int i; - uint64_t count = 0; - - for (i = 0; i < n; i++) { - unsigned v = ((unsigned)(res[i]) << 1) ^ (res[i] >> 31); - count += (v >> k) + 1 + k; - } - return count; -} - - -static uint64_t subframe_count_exact(FlacEncodeContext *s, FlacSubframe *sub, - int pred_order) -{ - int p, porder, psize; - int i, part_end; - uint64_t count = 0; - - /* subframe header */ - count += 8; - - if (sub->wasted) - count += sub->wasted; - - /* subframe */ - if (sub->type == FLAC_SUBFRAME_CONSTANT) { - count += sub->obits; - } else if (sub->type == FLAC_SUBFRAME_VERBATIM) { - count += s->frame.blocksize * sub->obits; - } else { - /* warm-up samples */ - count += pred_order * sub->obits; - - /* LPC coefficients */ - if (sub->type == FLAC_SUBFRAME_LPC) - count += 4 + 5 + pred_order * s->options.lpc_coeff_precision; - - /* rice-encoded block */ - count += 2; - - /* partition order */ - porder = sub->rc.porder; - psize = s->frame.blocksize >> porder; - count += 4; - - /* residual */ - i = pred_order; - part_end = psize; - for (p = 0; p < 1 << porder; p++) { - int k = sub->rc.params[p]; - count += sub->rc.coding_mode; - count += rice_count_exact(&sub->residual[i], part_end - i, k); - i = part_end; - part_end = FFMIN(s->frame.blocksize, part_end + psize); - } - } - - return count; -} - - -#define rice_encode_count(sum, n, k) (((n)*((k)+1))+((sum-(n>>1))>>(k))) - -/** - * Solve for d/dk(rice_encode_count) = n-((sum-(n>>1))>>(k+1)) = 0. 
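A self-contained sketch of the closed-form estimate used below; the helper name and the example numbers are made up for illustration, only the formula mirrors find_optimal_param():

    #include <stdint.h>
    #include <stdio.h>

    /* For n residuals whose zig-zag mapped values sum to `sum`, pick
     * k ~ floor(log2((sum - n/2) / n)), clamped to the largest parameter the
     * coding mode can signal (14 for RICE, 30 for RICE2). */
    static int estimate_rice_k(uint64_t sum, int n, int max_param)
    {
        uint64_t sum2;
        int k = 0;

        if (sum <= (uint64_t)(n >> 1))
            return 0;
        sum2 = sum - (n >> 1);
        while ((sum2 / n) >> (k + 1))   /* integer log2 of sum2 / n */
            k++;
        return k < max_param ? k : max_param;
    }

    int main(void)
    {
        /* 100 residuals with zig-zag sum 12800: sum2 / n = 127, so k = 6 */
        printf("k = %d\n", estimate_rice_k(12800, 100, 14));
        return 0;
    }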
- */ -static int find_optimal_param(uint64_t sum, int n, int max_param) -{ - int k; - uint64_t sum2; - - if (sum <= n >> 1) - return 0; - sum2 = sum - (n >> 1); - k = av_log2(av_clipl_int32(sum2 / n)); - return FFMIN(k, max_param); -} - -static int find_optimal_param_exact(uint64_t sums[32][MAX_PARTITIONS], int i, int max_param) -{ - int bestk = 0; - int64_t bestbits = INT64_MAX; - int k; - - for (k = 0; k <= max_param; k++) { - int64_t bits = sums[k][i]; - if (bits < bestbits) { - bestbits = bits; - bestk = k; - } - } - - return bestk; -} - -static uint64_t calc_optimal_rice_params(RiceContext *rc, int porder, - uint64_t sums[32][MAX_PARTITIONS], - int n, int pred_order, int max_param, int exact) -{ - int i; - int k, cnt, part; - uint64_t all_bits; - - part = (1 << porder); - all_bits = 4 * part; - - cnt = (n >> porder) - pred_order; - for (i = 0; i < part; i++) { - if (exact) { - k = find_optimal_param_exact(sums, i, max_param); - all_bits += sums[k][i]; - } else { - k = find_optimal_param(sums[0][i], cnt, max_param); - all_bits += rice_encode_count(sums[0][i], cnt, k); - } - rc->params[i] = k; - cnt = n >> porder; - } - - rc->porder = porder; - - return all_bits; -} - - -static void calc_sum_top(int pmax, int kmax, const uint32_t *data, int n, int pred_order, - uint64_t sums[32][MAX_PARTITIONS]) -{ - int i, k; - int parts; - const uint32_t *res, *res_end; - - /* sums for highest level */ - parts = (1 << pmax); - - for (k = 0; k <= kmax; k++) { - res = &data[pred_order]; - res_end = &data[n >> pmax]; - for (i = 0; i < parts; i++) { - if (kmax) { - uint64_t sum = (1LL + k) * (res_end - res); - while (res < res_end) - sum += *(res++) >> k; - sums[k][i] = sum; - } else { - uint64_t sum = 0; - while (res < res_end) - sum += *(res++); - sums[k][i] = sum; - } - res_end += n >> pmax; - } - } -} - -static void calc_sum_next(int level, uint64_t sums[32][MAX_PARTITIONS], int kmax) -{ - int i, k; - int parts = (1 << level); - for (i = 0; i < parts; i++) { - for (k=0; k<=kmax; k++) - sums[k][i] = sums[k][2*i] + sums[k][2*i+1]; - } -} - -static uint64_t calc_rice_params(RiceContext *rc, - uint32_t udata[FLAC_MAX_BLOCKSIZE], - uint64_t sums[32][MAX_PARTITIONS], - int pmin, int pmax, - const int32_t *data, int n, int pred_order, int exact) -{ - int i; - uint64_t bits[MAX_PARTITION_ORDER+1]; - int opt_porder; - RiceContext tmp_rc; - int kmax = (1 << rc->coding_mode) - 2; - - av_assert1(pmin >= 0 && pmin <= MAX_PARTITION_ORDER); - av_assert1(pmax >= 0 && pmax <= MAX_PARTITION_ORDER); - av_assert1(pmin <= pmax); - - tmp_rc.coding_mode = rc->coding_mode; - - for (i = pred_order; i < n; i++) - udata[i] = ((unsigned)(data[i]) << 1) ^ (data[i] >> 31); - - calc_sum_top(pmax, exact ? kmax : 0, udata, n, pred_order, sums); - - opt_porder = pmin; - bits[pmin] = UINT32_MAX; - for (i = pmax; ; ) { - bits[i] = calc_optimal_rice_params(&tmp_rc, i, sums, n, pred_order, kmax, exact); - if (bits[i] < bits[opt_porder] || pmax == pmin) { - opt_porder = i; - *rc = tmp_rc; - } - if (i == pmin) - break; - calc_sum_next(--i, sums, exact ? 
kmax : 0); - } - - return bits[opt_porder]; -} - - -static int get_max_p_order(int max_porder, int n, int order) -{ - int porder = FFMIN(max_porder, av_log2(n^(n-1))); - if (order > 0) - porder = FFMIN(porder, av_log2(n/order)); - return porder; -} - - -static uint64_t find_subframe_rice_params(FlacEncodeContext *s, - FlacSubframe *sub, int pred_order) -{ - int pmin = get_max_p_order(s->options.min_partition_order, - s->frame.blocksize, pred_order); - int pmax = get_max_p_order(s->options.max_partition_order, - s->frame.blocksize, pred_order); - - uint64_t bits = 8 + pred_order * sub->obits + 2 + sub->rc.coding_mode; - if (sub->type == FLAC_SUBFRAME_LPC) - bits += 4 + 5 + pred_order * s->options.lpc_coeff_precision; - bits += calc_rice_params(&sub->rc, sub->rc_udata, sub->rc_sums, pmin, pmax, sub->residual, - s->frame.blocksize, pred_order, s->options.exact_rice_parameters); - return bits; -} - - -static void encode_residual_fixed(int32_t *res, const int32_t *smp, int n, - int order) -{ - int i; - - for (i = 0; i < order; i++) - res[i] = smp[i]; - - if (order == 0) { - for (i = order; i < n; i++) - res[i] = smp[i]; - } else if (order == 1) { - for (i = order; i < n; i++) - res[i] = smp[i] - smp[i-1]; - } else if (order == 2) { - int a = smp[order-1] - smp[order-2]; - for (i = order; i < n; i += 2) { - int b = smp[i ] - smp[i-1]; - res[i] = b - a; - a = smp[i+1] - smp[i ]; - res[i+1] = a - b; - } - } else if (order == 3) { - int a = smp[order-1] - smp[order-2]; - int c = smp[order-1] - 2*smp[order-2] + smp[order-3]; - for (i = order; i < n; i += 2) { - int b = smp[i ] - smp[i-1]; - int d = b - a; - res[i] = d - c; - a = smp[i+1] - smp[i ]; - c = a - b; - res[i+1] = c - d; - } - } else { - int a = smp[order-1] - smp[order-2]; - int c = smp[order-1] - 2*smp[order-2] + smp[order-3]; - int e = smp[order-1] - 3*smp[order-2] + 3*smp[order-3] - smp[order-4]; - for (i = order; i < n; i += 2) { - int b = smp[i ] - smp[i-1]; - int d = b - a; - int f = d - c; - res[i ] = f - e; - a = smp[i+1] - smp[i ]; - c = a - b; - e = c - d; - res[i+1] = e - f; - } - } -} - - -/* These four functions check for every residual whether it can be - * contained in a 32-bit integer and return 1 if it cannot, so that the - * caller can fall back to another coding mode. */ -#define ENCODE_RESIDUAL_FIXED_WITH_RESIDUAL_LIMIT() \ -{ \ - for (int i = 0; i < order; i++) \ - res[i] = smp[i]; \ - if (order == 0) { \ - for (int i = order; i < n; i++) { \ - int64_t res64 = (int64_t)smp[i]; \ - if (res64 <= INT32_MIN || res64 > INT32_MAX) \ - return 1; \ - res[i] = res64; \ - } \ - } else if (order == 1) { \ - for (int i = order; i < n; i++) { \ - int64_t res64 = (int64_t)smp[i] - smp[i-1]; \ - if (res64 <= INT32_MIN || res64 > INT32_MAX) \ - return 1; \ - res[i] = res64; \ - } \ - } else if (order == 2) { \ - for (int i = order; i < n; i++) { \ - int64_t res64 = (int64_t)smp[i] - 2*(int64_t)smp[i-1] + smp[i-2]; \ - if (res64 <= INT32_MIN || res64 > INT32_MAX) \ - return 1; \ - res[i] = res64; \ - } \ - } else if (order == 3) { \ - for (int i = order; i < n; i++) { \ - int64_t res64 = (int64_t)smp[i] - 3*(int64_t)smp[i-1] + 3*(int64_t)smp[i-2] - smp[i-3]; \ - if (res64 <= INT32_MIN || res64 > INT32_MAX) \ - return 1; \ - res[i] = res64; \ - } \ - } else { \ - for (int i = order; i < n; i++) { \ - int64_t res64 = (int64_t)smp[i] - 4*(int64_t)smp[i-1] + 6*(int64_t)smp[i-2] - 4*(int64_t)smp[i-3] + smp[i-4]; \ - if (res64 <= INT32_MIN || res64 > INT32_MAX) \ - return 1; \ - res[i] = res64; \ - } \ - } \ - return 0; \ -} - -static int encode_residual_fixed_with_residual_limit(int32_t *res, const int32_t *smp, - int n, int order) -{ - ENCODE_RESIDUAL_FIXED_WITH_RESIDUAL_LIMIT(); -} - - -static int encode_residual_fixed_with_residual_limit_33bps(int32_t *res, const int64_t *smp, - int n, int order) -{ - ENCODE_RESIDUAL_FIXED_WITH_RESIDUAL_LIMIT(); -} - -#define LPC_ENCODE_WITH_RESIDUAL_LIMIT() \ -{ \ - for (int i = 0; i < order; i++) \ - res[i] = smp[i]; \ - for (int i = order; i < len; i++) { \ - int64_t p = 0, tmp; \ - for (int j = 0; j < order; j++) \ - p += 
(int64_t)coefs[j]*smp[(i-1)-j]; \ - p >>= shift; \ - tmp = smp[i] - p; \ - if (tmp <= INT32_MIN || tmp > INT32_MAX) \ - return 1; \ - res[i] = tmp; \ - } \ - return 0; \ -} - -static int lpc_encode_with_residual_limit(int32_t *res, const int32_t *smp, int len, - int order, int32_t *coefs, int shift) -{ - LPC_ENCODE_WITH_RESIDUAL_LIMIT(); -} - -static int lpc_encode_with_residual_limit_33bps(int32_t *res, const int64_t *smp, int len, - int order, int32_t *coefs, int shift) -{ - LPC_ENCODE_WITH_RESIDUAL_LIMIT(); -} - -static int lpc_encode_choose_datapath(FlacEncodeContext *s, int32_t bps, - int32_t *res, const int32_t *smp, - const int64_t *smp_33bps, int len, - int order, int32_t *coefs, int shift) -{ - uint64_t max_residual_value = 0; - int64_t max_sample_value = ((int64_t)(1) << (bps-1)); - /* This calculates the max size of any residual with the current - * predictor, so we know whether we need to check the residual */ - for (int i = 0; i < order; i++) - max_residual_value += FFABS(max_sample_value * coefs[i]); - max_residual_value >>= shift; - max_residual_value += max_sample_value; - if (bps > 32) { - if (lpc_encode_with_residual_limit_33bps(res, smp_33bps, len, order, coefs, shift)) - return 1; - } else if (max_residual_value > INT32_MAX) { - if (lpc_encode_with_residual_limit(res, smp, len, order, coefs, shift)) - return 1; - } else if (bps + s->options.lpc_coeff_precision + av_log2(order) <= 32) { - s->flac_dsp.lpc16_encode(res, smp, len, order, coefs, shift); - } else { - s->flac_dsp.lpc32_encode(res, smp, len, order, coefs, shift); - } - return 0; -} - -#define DEFAULT_TO_VERBATIM() \ -{ \ - sub->type = sub->type_code = FLAC_SUBFRAME_VERBATIM; \ - if (sub->obits <= 32) \ - memcpy(res, smp, n * sizeof(int32_t)); \ - return subframe_count_exact(s, sub, 0); \ -} - -static int encode_residual_ch(FlacEncodeContext *s, int ch) -{ - int i, n; - int min_order, max_order, opt_order, omethod; - FlacFrame *frame; - FlacSubframe *sub; - int32_t coefs[MAX_LPC_ORDER][MAX_LPC_ORDER]; - int shift[MAX_LPC_ORDER]; - int32_t *res, *smp; - int64_t *smp_33bps; - - frame = &s->frame; - sub = &frame->subframes[ch]; - res = sub->residual; - smp = sub->samples; - smp_33bps = frame->samples_33bps; - n = frame->blocksize; - - /* CONSTANT */ - if (sub->obits > 32) { - for (i = 1; i < n; i++) - if(smp_33bps[i] != smp_33bps[0]) - break; - if (i == n) { - sub->type = sub->type_code = FLAC_SUBFRAME_CONSTANT; - return subframe_count_exact(s, sub, 0); - } - } else { - for (i = 1; i < n; i++) - if(smp[i] != smp[0]) - break; - if (i == n) { - sub->type = sub->type_code = FLAC_SUBFRAME_CONSTANT; - res[0] = smp[0]; - return subframe_count_exact(s, sub, 0); - } - } - - /* VERBATIM */ - if (frame->verbatim_only || n < 5) { - DEFAULT_TO_VERBATIM(); - } - - min_order = s->options.min_prediction_order; - max_order = s->options.max_prediction_order; - omethod = s->options.prediction_order_method; - - /* FIXED */ - sub->type = FLAC_SUBFRAME_FIXED; - if (s->options.lpc_type == FF_LPC_TYPE_NONE || - s->options.lpc_type == FF_LPC_TYPE_FIXED || n <= max_order) { - uint64_t bits[MAX_FIXED_ORDER+1]; - if (max_order > MAX_FIXED_ORDER) - max_order = MAX_FIXED_ORDER; - opt_order = 0; - bits[0] = UINT32_MAX; - for (i = min_order; i <= max_order; i++) { - if (sub->obits == 33) { - if (encode_residual_fixed_with_residual_limit_33bps(res, smp_33bps, n, i)) - continue; - } else if (sub->obits + i >= 32) { - if (encode_residual_fixed_with_residual_limit(res, smp, n, i)) - continue; - } else - encode_residual_fixed(res, smp, n, i); - 
bits[i] = find_subframe_rice_params(s, sub, i); - if (bits[i] < bits[opt_order]) - opt_order = i; - } - if (opt_order == 0 && bits[0] == UINT32_MAX) { - /* No predictor found with residuals that fit into 32 bits, - * so fall back to verbatim coding */ - DEFAULT_TO_VERBATIM(); - } - - sub->order = opt_order; - sub->type_code = sub->type | sub->order; - if (sub->order != max_order) { - if (sub->obits == 33) - encode_residual_fixed_with_residual_limit_33bps(res, smp_33bps, n, sub->order); - else if (sub->obits + i >= 32) - encode_residual_fixed_with_residual_limit(res, smp, n, sub->order); - else - encode_residual_fixed(res, smp, n, sub->order); - find_subframe_rice_params(s, sub, sub->order); - } - return subframe_count_exact(s, sub, sub->order); - } - - /* LPC */ - sub->type = FLAC_SUBFRAME_LPC; - if (sub->obits == 33) - /* As ff_lpc_calc_coefs is shared with other codecs and the LSB - * probably isn't predictable anyway, throw away LSB for analysis - * so it fits 32 bit int and existing function can be used - * unmodified */ - for (i = 0; i < n; i++) - smp[i] = smp_33bps[i] >> 1; - - opt_order = ff_lpc_calc_coefs(&s->lpc_ctx, smp, n, min_order, max_order, - s->options.lpc_coeff_precision, coefs, shift, s->options.lpc_type, - s->options.lpc_passes, omethod, - MIN_LPC_SHIFT, MAX_LPC_SHIFT, 0); - - if (omethod == ORDER_METHOD_2LEVEL || - omethod == ORDER_METHOD_4LEVEL || - omethod == ORDER_METHOD_8LEVEL) { - int levels = 1 << omethod; - uint64_t bits[1 << ORDER_METHOD_8LEVEL]; - int order = -1; - int opt_index = levels-1; - opt_order = max_order-1; - bits[opt_index] = UINT32_MAX; - for (i = levels-1; i >= 0; i--) { - int last_order = order; - order = min_order + (((max_order-min_order+1) * (i+1)) / levels)-1; - order = av_clip(order, min_order - 1, max_order - 1); - if (order == last_order) - continue; - if(lpc_encode_choose_datapath(s, sub->obits, res, smp, smp_33bps, n, order+1, coefs[order], shift[order])) - continue; - bits[i] = find_subframe_rice_params(s, sub, order+1); - if (bits[i] < bits[opt_index]) { - opt_index = i; - opt_order = order; - } - } - opt_order++; - } else if (omethod == ORDER_METHOD_SEARCH) { - // brute-force optimal order search - uint64_t bits[MAX_LPC_ORDER]; - opt_order = 0; - bits[0] = UINT32_MAX; - for (i = min_order-1; i < max_order; i++) { - if(lpc_encode_choose_datapath(s, sub->obits, res, smp, smp_33bps, n, i+1, coefs[i], shift[i])) - continue; - bits[i] = find_subframe_rice_params(s, sub, i+1); - if (bits[i] < bits[opt_order]) - opt_order = i; - } - opt_order++; - } else if (omethod == ORDER_METHOD_LOG) { - uint64_t bits[MAX_LPC_ORDER]; - int step; - - opt_order = min_order - 1 + (max_order-min_order)/3; - memset(bits, -1, sizeof(bits)); - - for (step = 16; step; step >>= 1) { - int last = opt_order; - for (i = last-step; i <= last+step; i += step) { - if (i < min_order-1 || i >= max_order || bits[i] < UINT32_MAX) - continue; - if(lpc_encode_choose_datapath(s, sub->obits, res, smp, smp_33bps, n, i+1, coefs[i], shift[i])) - continue; - bits[i] = find_subframe_rice_params(s, sub, i+1); - if (bits[i] < bits[opt_order]) - opt_order = i; - } - } - opt_order++; - } - - if (s->options.multi_dim_quant) { - int allsteps = 1; - int i, step, improved; - int64_t best_score = INT64_MAX; - int32_t qmax; - - qmax = (1 << (s->options.lpc_coeff_precision - 1)) - 1; - - for (i=0; i8) - continue; - - if(lpc_encode_choose_datapath(s, sub->obits, res, smp, smp_33bps, n, opt_order, lpc_try, shift[opt_order-1])) - continue; - score = find_subframe_rice_params(s, sub, opt_order); - if (score < best_score) { - best_score = score; - memcpy(coefs[opt_order-1], lpc_try, 
sizeof(*coefs)); - improved=1; - } - } - } while(improved); - } - - sub->order = opt_order; - sub->type_code = sub->type | (sub->order-1); - sub->shift = shift[sub->order-1]; - for (i = 0; i < sub->order; i++) - sub->coefs[i] = coefs[sub->order-1][i]; - - if(lpc_encode_choose_datapath(s, sub->obits, res, smp, smp_33bps, n, sub->order, sub->coefs, sub->shift)) { - /* No predictor found with residuals that fit into 32 bits, - * so fall back to verbatim coding */ - DEFAULT_TO_VERBATIM(); - } - - find_subframe_rice_params(s, sub, sub->order); - - return subframe_count_exact(s, sub, sub->order); -} - - -static int count_frame_header(FlacEncodeContext *s) -{ - uint8_t av_unused tmp; - int count; - - /* - <14> Sync code - <1> Reserved - <1> Blocking strategy - <4> Block size in inter-channel samples - <4> Sample rate - <4> Channel assignment - <3> Sample size in bits - <1> Reserved - */ - count = 32; - - /* coded frame number */ - PUT_UTF8(s->frame_count, tmp, count += 8;) - - /* explicit block size */ - if (s->frame.bs_code[0] == 6) - count += 8; - else if (s->frame.bs_code[0] == 7) - count += 16; - - /* explicit sample rate */ - count += ((s->sr_code[0] == 12) + (s->sr_code[0] > 12) * 2) * 8; - - /* frame header CRC-8 */ - count += 8; - - return count; -} - - -static int encode_frame(FlacEncodeContext *s) -{ - int ch; - uint64_t count; - - count = count_frame_header(s); - - for (ch = 0; ch < s->channels; ch++) - count += encode_residual_ch(s, ch); - - count += (8 - (count & 7)) & 7; // byte alignment - count += 16; // CRC-16 - - count >>= 3; - if (count > INT_MAX) - return AVERROR_BUG; - return count; -} - - -static void remove_wasted_bits(FlacEncodeContext *s) -{ - int ch, i, wasted_bits; - - for (ch = 0; ch < s->channels; ch++) { - FlacSubframe *sub = &s->frame.subframes[ch]; - - if (sub->obits > 32) { - int64_t v = 0; - for (i = 0; i < s->frame.blocksize; i++) { - v |= s->frame.samples_33bps[i]; - if (v & 1) - break; - } - - if (!v || (v & 1)) - return; - - v = ff_ctzll(v); - - /* If any wasted bits are found, samples are moved - * from frame.samples_33bps to frame.subframes[ch] */ - for (i = 0; i < s->frame.blocksize; i++) - sub->samples[i] = s->frame.samples_33bps[i] >> v; - wasted_bits = v; - } else { - int32_t v = 0; - for (i = 0; i < s->frame.blocksize; i++) { - v |= sub->samples[i]; - if (v & 1) - break; - } - - if (!v || (v & 1)) - return; - - v = ff_ctz(v); - - for (i = 0; i < s->frame.blocksize; i++) - sub->samples[i] >>= v; - wasted_bits = v; - } - - sub->wasted = wasted_bits; - sub->obits -= wasted_bits; - - /* for 24-bit, check if removing wasted bits makes the range better - * suited for using RICE instead of RICE2 for entropy coding */ - if (sub->obits <= 17) - sub->rc.coding_mode = CODING_MODE_RICE; - } -} - - -static int estimate_stereo_mode(const int32_t *left_ch, const int32_t *right_ch, int n, - int max_rice_param, int bps) -{ - int best; - uint64_t sum[4]; - uint64_t score[4]; - int k; - - /* calculate sum of 2nd order residual for each channel */ - sum[0] = sum[1] = sum[2] = sum[3] = 0; - if(bps < 30) { - int32_t lt, rt; - for (int i = 2; i < n; i++) { - lt = left_ch[i] - 2*left_ch[i-1] + left_ch[i-2]; - rt = right_ch[i] - 2*right_ch[i-1] + right_ch[i-2]; - sum[2] += FFABS((lt + rt) >> 1); - sum[3] += FFABS(lt - rt); - sum[0] += FFABS(lt); - sum[1] += FFABS(rt); - } - } else { - int64_t lt, rt; - for (int i = 2; i < n; i++) { - lt = (int64_t)left_ch[i] - 2*(int64_t)left_ch[i-1] + left_ch[i-2]; - rt = (int64_t)right_ch[i] - 2*(int64_t)right_ch[i-1] + right_ch[i-2]; - sum[2] += FFABS((lt + rt) >> 1); - sum[3] += FFABS(lt - rt); - sum[0] += FFABS(lt); - sum[1] += FFABS(rt); - } - } - /* estimate 
bit counts */ - for (int i = 0; i < 4; i++) { - k = find_optimal_param(2 * sum[i], n, max_rice_param); - sum[i] = rice_encode_count( 2 * sum[i], n, k); - } - - /* calculate score for each mode */ - score[0] = sum[0] + sum[1]; - score[1] = sum[0] + sum[3]; - score[2] = sum[1] + sum[3]; - score[3] = sum[2] + sum[3]; - - /* return mode with lowest score */ - best = 0; - for (int i = 1; i < 4; i++) - if (score[i] < score[best]) - best = i; - - return best; -} - - -/** - * Perform stereo channel decorrelation. - */ -static void channel_decorrelation(FlacEncodeContext *s) -{ - FlacFrame *frame; - int32_t *left, *right; - int64_t *side_33bps; - int n; - - frame = &s->frame; - n = frame->blocksize; - left = frame->subframes[0].samples; - right = frame->subframes[1].samples; - side_33bps = frame->samples_33bps; - - if (s->channels != 2) { - frame->ch_mode = FLAC_CHMODE_INDEPENDENT; - return; - } - - if (s->options.ch_mode < 0) { - int max_rice_param = (1 << frame->subframes[0].rc.coding_mode) - 2; - frame->ch_mode = estimate_stereo_mode(left, right, n, max_rice_param, s->avctx->bits_per_raw_sample); - } else - frame->ch_mode = s->options.ch_mode; - - /* perform decorrelation and adjust bits-per-sample */ - if (frame->ch_mode == FLAC_CHMODE_INDEPENDENT) - return; - if(s->avctx->bits_per_raw_sample == 32) { - if (frame->ch_mode == FLAC_CHMODE_MID_SIDE) { - int64_t tmp; - for (int i = 0; i < n; i++) { - tmp = left[i]; - left[i] = (tmp + right[i]) >> 1; - side_33bps[i] = tmp - right[i]; - } - frame->subframes[1].obits++; - } else if (frame->ch_mode == FLAC_CHMODE_LEFT_SIDE) { - for (int i = 0; i < n; i++) - side_33bps[i] = (int64_t)left[i] - right[i]; - frame->subframes[1].obits++; - } else { - for (int i = 0; i < n; i++) - side_33bps[i] = (int64_t)left[i] - right[i]; - frame->subframes[0].obits++; - } - } else { - if (frame->ch_mode == FLAC_CHMODE_MID_SIDE) { - int32_t tmp; - for (int i = 0; i < n; i++) { - tmp = left[i]; - left[i] = (tmp + right[i]) >> 1; - right[i] = tmp - right[i]; - } - frame->subframes[1].obits++; - } else if (frame->ch_mode == FLAC_CHMODE_LEFT_SIDE) { - for (int i = 0; i < n; i++) - right[i] = left[i] - right[i]; - frame->subframes[1].obits++; - } else { - for (int i = 0; i < n; i++) - left[i] -= right[i]; - frame->subframes[0].obits++; - } - } -} - - -static void write_utf8(PutBitContext *pb, uint32_t val) -{ - uint8_t tmp; - PUT_UTF8(val, tmp, put_bits(pb, 8, tmp);) -} - - -static void write_frame_header(FlacEncodeContext *s) -{ - FlacFrame *frame; - int crc; - - frame = &s->frame; - - put_bits(&s->pb, 16, 0xFFF8); - put_bits(&s->pb, 4, frame->bs_code[0]); - put_bits(&s->pb, 4, s->sr_code[0]); - - if (frame->ch_mode == FLAC_CHMODE_INDEPENDENT) - put_bits(&s->pb, 4, s->channels-1); - else - put_bits(&s->pb, 4, frame->ch_mode + FLAC_MAX_CHANNELS - 1); - - put_bits(&s->pb, 3, s->bps_code); - put_bits(&s->pb, 1, 0); - write_utf8(&s->pb, s->frame_count); - - if (frame->bs_code[0] == 6) - put_bits(&s->pb, 8, frame->bs_code[1]); - else if (frame->bs_code[0] == 7) - put_bits(&s->pb, 16, frame->bs_code[1]); - - if (s->sr_code[0] == 12) - put_bits(&s->pb, 8, s->sr_code[1]); - else if (s->sr_code[0] > 12) - put_bits(&s->pb, 16, s->sr_code[1]); - - flush_put_bits(&s->pb); - crc = av_crc(av_crc_get_table(AV_CRC_8_ATM), 0, s->pb.buf, - put_bytes_output(&s->pb)); - put_bits(&s->pb, 8, crc); -} - - -static inline void set_sr_golomb_flac(PutBitContext *pb, int i, int k) -{ - unsigned v, e; - - v = ((unsigned)(i) << 1) ^ (i >> 31); - - e = (v >> k) + 1; - while (e > 31) { - put_bits(pb, 31, 0); 
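/* put_bits() can write at most 31 bits per call, so the unary part of the Rice code, (v >> k) zero bits followed by a terminating 1 bit, is emitted here in 31-bit chunks; the k least-significant bits of v are appended below. */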
- e -= 31; - } - put_bits(pb, e, 1); - if (k) { - unsigned mask = UINT32_MAX >> (32-k); - put_bits(pb, k, v & mask); - } -} - - -static void write_subframes(FlacEncodeContext *s) -{ - int ch; - - for (ch = 0; ch < s->channels; ch++) { - FlacSubframe *sub = &s->frame.subframes[ch]; - int p, porder, psize; - int32_t *part_end; - int32_t *res = sub->residual; - int32_t *frame_end = &sub->residual[s->frame.blocksize]; - - /* subframe header */ - put_bits(&s->pb, 1, 0); - put_bits(&s->pb, 6, sub->type_code); - put_bits(&s->pb, 1, !!sub->wasted); - if (sub->wasted) - put_bits(&s->pb, sub->wasted, 1); - - /* subframe */ - if (sub->type == FLAC_SUBFRAME_CONSTANT) { - if(sub->obits == 33) - put_sbits63(&s->pb, 33, s->frame.samples_33bps[0]); - else if(sub->obits == 32) - put_bits32(&s->pb, res[0]); - else - put_sbits(&s->pb, sub->obits, res[0]); - } else if (sub->type == FLAC_SUBFRAME_VERBATIM) { - if (sub->obits == 33) { - int64_t *res64 = s->frame.samples_33bps; - int64_t *frame_end64 = &s->frame.samples_33bps[s->frame.blocksize]; - while (res64 < frame_end64) - put_sbits63(&s->pb, 33, (*res64++)); - } else if (sub->obits == 32) { - while (res < frame_end) - put_bits32(&s->pb, *res++); - } else { - while (res < frame_end) - put_sbits(&s->pb, sub->obits, *res++); - } - } else { - /* warm-up samples */ - if (sub->obits == 33) { - for (int i = 0; i < sub->order; i++) - put_sbits63(&s->pb, 33, s->frame.samples_33bps[i]); - res += sub->order; - } else if (sub->obits == 32) { - for (int i = 0; i < sub->order; i++) - put_bits32(&s->pb, *res++); - } else { - for (int i = 0; i < sub->order; i++) - put_sbits(&s->pb, sub->obits, *res++); - } - - /* LPC coefficients */ - if (sub->type == FLAC_SUBFRAME_LPC) { - int cbits = s->options.lpc_coeff_precision; - put_bits( &s->pb, 4, cbits-1); - put_sbits(&s->pb, 5, sub->shift); - for (int i = 0; i < sub->order; i++) - put_sbits(&s->pb, cbits, sub->coefs[i]); - } - - /* rice-encoded block */ - put_bits(&s->pb, 2, sub->rc.coding_mode - 4); - - /* partition order */ - porder = sub->rc.porder; - psize = s->frame.blocksize >> porder; - put_bits(&s->pb, 4, porder); - - /* residual */ - part_end = &sub->residual[psize]; - for (p = 0; p < 1 << porder; p++) { - int k = sub->rc.params[p]; - put_bits(&s->pb, sub->rc.coding_mode, k); - while (res < part_end) - set_sr_golomb_flac(&s->pb, *res++, k); - part_end = FFMIN(frame_end, part_end + psize); - } - } - } -} - - -static void write_frame_footer(FlacEncodeContext *s) -{ - int crc; - flush_put_bits(&s->pb); - crc = av_bswap16(av_crc(av_crc_get_table(AV_CRC_16_ANSI), 0, s->pb.buf, - put_bytes_output(&s->pb))); - put_bits(&s->pb, 16, crc); - flush_put_bits(&s->pb); -} - - -static int write_frame(FlacEncodeContext *s, AVPacket *avpkt) -{ - init_put_bits(&s->pb, avpkt->data, avpkt->size); - write_frame_header(s); - write_subframes(s); - write_frame_footer(s); - return put_bytes_output(&s->pb); -} - - -static int update_md5_sum(FlacEncodeContext *s, const void *samples) -{ - const uint8_t *buf; - int buf_size = s->frame.blocksize * s->channels * - ((s->avctx->bits_per_raw_sample + 7) / 8); - - if (s->avctx->bits_per_raw_sample > 16 || HAVE_BIGENDIAN) { - av_fast_malloc(&s->md5_buffer, &s->md5_buffer_size, buf_size); - if (!s->md5_buffer) - return AVERROR(ENOMEM); - } - - if (s->avctx->bits_per_raw_sample <= 16) { - buf = (const uint8_t *)samples; -#if HAVE_BIGENDIAN - s->bdsp.bswap16_buf((uint16_t *) s->md5_buffer, - (const uint16_t *) samples, buf_size / 2); - buf = s->md5_buffer; -#endif - } else if (s->avctx->bits_per_raw_sample 
<= 24) { - int i; - const int32_t *samples0 = samples; - uint8_t *tmp = s->md5_buffer; - - for (i = 0; i < s->frame.blocksize * s->channels; i++) { - int32_t v = samples0[i] >> 8; - AV_WL24(tmp + 3*i, v); - } - buf = s->md5_buffer; - } else { - /* s->avctx->bits_per_raw_sample <= 32 */ - int i; - const int32_t *samples0 = samples; - uint8_t *tmp = s->md5_buffer; - - for (i = 0; i < s->frame.blocksize * s->channels; i++) - AV_WL32(tmp + 4*i, samples0[i]); - buf = s->md5_buffer; - } - av_md5_update(s->md5ctx, buf, buf_size); - - return 0; -} - - -static int flac_encode_frame(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - FlacEncodeContext *s; - int frame_bytes, out_bytes, ret; - - s = avctx->priv_data; - - /* when the last block is reached, update the header in extradata */ - if (!frame) { - s->max_framesize = s->max_encoded_framesize; - av_md5_final(s->md5ctx, s->md5sum); - write_streaminfo(s, avctx->extradata); - - if (!s->flushed) { - uint8_t *side_data = av_packet_new_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, - avctx->extradata_size); - if (!side_data) - return AVERROR(ENOMEM); - memcpy(side_data, avctx->extradata, avctx->extradata_size); - - avpkt->pts = s->next_pts; - - *got_packet_ptr = 1; - s->flushed = 1; - } - - return 0; - } - - /* change max_framesize for small final frame */ - if (frame->nb_samples < s->frame.blocksize) { - s->max_framesize = flac_get_max_frame_size(frame->nb_samples, - s->channels, - avctx->bits_per_raw_sample); - } - - init_frame(s, frame->nb_samples); - - copy_samples(s, frame->data[0]); - - channel_decorrelation(s); - - remove_wasted_bits(s); - - frame_bytes = encode_frame(s); - - /* Fall back on verbatim mode if the compressed frame is larger than it - would be if encoded uncompressed. 
*/ - if (frame_bytes < 0 || frame_bytes > s->max_framesize) { - s->frame.verbatim_only = 1; - frame_bytes = encode_frame(s); - if (frame_bytes < 0) { - av_log(avctx, AV_LOG_ERROR, "Bad frame count\n"); - return frame_bytes; - } - } - - if ((ret = ff_get_encode_buffer(avctx, avpkt, frame_bytes, 0)) < 0) - return ret; - - out_bytes = write_frame(s, avpkt); - - s->frame_count++; - s->sample_count += frame->nb_samples; - if ((ret = update_md5_sum(s, frame->data[0])) < 0) { - av_log(avctx, AV_LOG_ERROR, "Error updating MD5 checksum\n"); - return ret; - } - if (out_bytes > s->max_encoded_framesize) - s->max_encoded_framesize = out_bytes; - if (out_bytes < s->min_framesize) - s->min_framesize = out_bytes; - - s->next_pts = frame->pts + ff_samples_to_time_base(avctx, frame->nb_samples); - - av_shrink_packet(avpkt, out_bytes); - - *got_packet_ptr = 1; - return 0; -} - - -static av_cold int flac_encode_close(AVCodecContext *avctx) -{ - FlacEncodeContext *s = avctx->priv_data; - - av_freep(&s->md5ctx); - av_freep(&s->md5_buffer); - ff_lpc_end(&s->lpc_ctx); - return 0; -} - -#define FLAGS AV_OPT_FLAG_ENCODING_PARAM | AV_OPT_FLAG_AUDIO_PARAM -static const AVOption options[] = { -{ "lpc_coeff_precision", "LPC coefficient precision", offsetof(FlacEncodeContext, options.lpc_coeff_precision), AV_OPT_TYPE_INT, {.i64 = 15 }, 0, MAX_LPC_PRECISION, FLAGS }, -{ "lpc_type", "LPC algorithm", offsetof(FlacEncodeContext, options.lpc_type), AV_OPT_TYPE_INT, {.i64 = FF_LPC_TYPE_DEFAULT }, FF_LPC_TYPE_DEFAULT, FF_LPC_TYPE_NB-1, FLAGS, "lpc_type" }, -{ "none", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_LPC_TYPE_NONE }, INT_MIN, INT_MAX, FLAGS, "lpc_type" }, -{ "fixed", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_LPC_TYPE_FIXED }, INT_MIN, INT_MAX, FLAGS, "lpc_type" }, -{ "levinson", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_LPC_TYPE_LEVINSON }, INT_MIN, INT_MAX, FLAGS, "lpc_type" }, -{ "cholesky", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_LPC_TYPE_CHOLESKY }, INT_MIN, INT_MAX, FLAGS, "lpc_type" }, -{ "lpc_passes", "Number of passes to use for Cholesky factorization during LPC analysis", offsetof(FlacEncodeContext, options.lpc_passes), AV_OPT_TYPE_INT, {.i64 = 2 }, 1, INT_MAX, FLAGS }, -{ "min_partition_order", NULL, offsetof(FlacEncodeContext, options.min_partition_order), AV_OPT_TYPE_INT, {.i64 = -1 }, -1, MAX_PARTITION_ORDER, FLAGS }, -{ "max_partition_order", NULL, offsetof(FlacEncodeContext, options.max_partition_order), AV_OPT_TYPE_INT, {.i64 = -1 }, -1, MAX_PARTITION_ORDER, FLAGS }, -{ "prediction_order_method", "Search method for selecting prediction order", offsetof(FlacEncodeContext, options.prediction_order_method), AV_OPT_TYPE_INT, {.i64 = -1 }, -1, ORDER_METHOD_LOG, FLAGS, "predm" }, -{ "estimation", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = ORDER_METHOD_EST }, INT_MIN, INT_MAX, FLAGS, "predm" }, -{ "2level", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = ORDER_METHOD_2LEVEL }, INT_MIN, INT_MAX, FLAGS, "predm" }, -{ "4level", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = ORDER_METHOD_4LEVEL }, INT_MIN, INT_MAX, FLAGS, "predm" }, -{ "8level", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = ORDER_METHOD_8LEVEL }, INT_MIN, INT_MAX, FLAGS, "predm" }, -{ "search", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = ORDER_METHOD_SEARCH }, INT_MIN, INT_MAX, FLAGS, "predm" }, -{ "log", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = ORDER_METHOD_LOG }, INT_MIN, INT_MAX, FLAGS, "predm" }, -{ "ch_mode", "Stereo decorrelation mode", offsetof(FlacEncodeContext, options.ch_mode), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, FLAC_CHMODE_MID_SIDE, FLAGS, "ch_mode" }, -{ "auto", NULL, 0, AV_OPT_TYPE_CONST, { 
.i64 = -1 }, INT_MIN, INT_MAX, FLAGS, "ch_mode" }, -{ "indep", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FLAC_CHMODE_INDEPENDENT }, INT_MIN, INT_MAX, FLAGS, "ch_mode" }, -{ "left_side", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FLAC_CHMODE_LEFT_SIDE }, INT_MIN, INT_MAX, FLAGS, "ch_mode" }, -{ "right_side", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FLAC_CHMODE_RIGHT_SIDE }, INT_MIN, INT_MAX, FLAGS, "ch_mode" }, -{ "mid_side", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FLAC_CHMODE_MID_SIDE }, INT_MIN, INT_MAX, FLAGS, "ch_mode" }, -{ "exact_rice_parameters", "Calculate rice parameters exactly", offsetof(FlacEncodeContext, options.exact_rice_parameters), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS }, -{ "multi_dim_quant", "Multi-dimensional quantization", offsetof(FlacEncodeContext, options.multi_dim_quant), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS }, -{ "min_prediction_order", NULL, offsetof(FlacEncodeContext, options.min_prediction_order), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, MAX_LPC_ORDER, FLAGS }, -{ "max_prediction_order", NULL, offsetof(FlacEncodeContext, options.max_prediction_order), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, MAX_LPC_ORDER, FLAGS }, - -{ NULL }, -}; - -static const AVClass flac_encoder_class = { - .class_name = "FLAC encoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_flac_encoder = { - .p.name = "flac", - CODEC_LONG_NAME("FLAC (Free Lossless Audio Codec)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_FLAC, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_SMALL_LAST_FRAME | - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(FlacEncodeContext), - .init = flac_encode_init, - FF_CODEC_ENCODE_CB(flac_encode_frame), - .close = flac_encode_close, - .p.sample_fmts = (const enum AVSampleFormat[]){ AV_SAMPLE_FMT_S16, - AV_SAMPLE_FMT_S32, - AV_SAMPLE_FMT_NONE }, - .p.priv_class = &flac_encoder_class, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP | FF_CODEC_CAP_EOF_FLUSH, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download APK TikTok Biasa The Best Way to Enjoy Short-Form Videos on Android.md b/spaces/congsaPfin/Manga-OCR/logs/Download APK TikTok Biasa The Best Way to Enjoy Short-Form Videos on Android.md deleted file mode 100644 index 34ba0444c44b00936a1c25c131cee88a58c6f53b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download APK TikTok Biasa The Best Way to Enjoy Short-Form Videos on Android.md +++ /dev/null @@ -1,216 +0,0 @@ - -

    How to Download and Install TikTok APK on Android

    -

    TikTok is a video-sharing app that allows users to create and share short-form videos on any topic. It’s mainly mobile-based, although you can still watch TikTok videos using the web app. The platform allows users to get creative with their content using filters, stickers, voiceovers, sound effects, and background music.

    -

    download apk tiktok biasa


    Download ->>> https://urlca.com/2uOdRn



    -

    TikTok has become a go-to source of entertainment, information, and inspiration for millions of users around the world. Whether you’re into comedy, gaming, DIY, food, sports, memes, pets, or anything else, you can find something for you on TikTok. You can also follow your favorite creators, interact with other users, and join various challenges and trends.

    -

    But before you can enjoy all that TikTok has to offer, you need to download and install the app on your Android device. In this article, we’ll show you how to do that using the APK file of TikTok. We’ll also introduce you to some of the best features and benefits of TikTok, as well as some of the best alternatives to TikTok in case you want to try something different.

    -

    How to Download and Install TikTok APK on Android

    -

    An APK file is an Android Package file that contains all the necessary files and data for an app to run on an Android device. Sometimes, you may need to download an APK file instead of getting an app from the Google Play Store. This could be because the app is not available in your region, or because you want to get an older or newer version of the app.
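
    Because an APK is simply a ZIP archive laid out in a fixed way, you can peek inside one from a desktop machine before installing it. The short Python sketch below is only an illustration; the file name tiktok.apk is a placeholder for whichever APK you have downloaded.

        import zipfile

        # An APK is a ZIP archive with a fixed layout (AndroidManifest.xml, classes.dex,
        # resources.arsc, res/, META-INF/, ...). "tiktok.apk" is a placeholder file name.
        with zipfile.ZipFile("tiktok.apk") as apk:
            for name in sorted(apk.namelist())[:15]:
                print(name)

    Listing the entries is a quick sanity check that the file you downloaded really is an app package and not a renamed archive of something else.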

    -

    To download and install an APK file on your Android device, you need to follow these steps:

    -

    How to Allow Unknown Apps on Android

    -

    By default, Android devices only allow you to install apps from the Google Play Store. To install apps from other sources, such as APK files, you need to enable a setting called Unknown Sources (on Android 7 and earlier) or Install Unknown Apps (a per-app permission on Android 8 and later). This will allow your device to accept apps from outside the Play Store.

    -

    download tiktok apk for android
    -download tiktok apk latest version
    -download tiktok apk without watermark
    -download tiktok apk mod
    -download tiktok apk uptodown
    -download tiktok apk for pc
    -download tiktok apk no watermark
    -download tiktok apk free
    -download tiktok apk old version
    -download tiktok apk pure
    -download tiktok apk 2023
    -download tiktok apk mirror
    -download tiktok apk android 5.0
    -download tiktok apk with music
    -download tiktok apk offline
    -download tiktok apk from google play
    -download tiktok apk for android tv
    -download tiktok apk pro
    -download tiktok apk full version
    -download tiktok apk hack
    -download tiktok apk premium
    -download tiktok apk file
    -download tiktok apk online
    -download tiktok apk terbaru
    -download tiktok apk update
    -download tiktok apk video downloader
    -download tiktok apk unlimited likes
    -download tiktok apk 30.0.3
    -download tiktok apk for samsung
    -download tiktok apk for huawei
    -download tiktok apk for xiaomi
    -download tiktok apk for oppo
    -download tiktok apk for vivo
    -download tiktok apk for realme
    -download tiktok apk for nokia
    -download tiktok apk for lg
    -download tiktok apk for sony
    -download tiktok apk for lenovo
    -download tiktok apk for asus
    -download tiktok apk for oneplus
    -download tiktok apk for motorola
    -download tiktok apk for zte
    -download tiktok apk for tecno
    -download tiktok apk for infinix
    -download tiktok apk for itel
    -download tiktok apk for gionee
    -download tiktok apk for micromax
    -download tiktok apk for lava
    -download tiktok apk for karbonn

    -

    To enable this setting, follow these steps:

    -
    1. Go to your device settings and tap Apps & Notifications (or Apps in older versions of Android).
    2. Tap the three dots in the upper-right corner.
    3. Tap Special access.
    4. Tap Install unknown apps.
    5. Tap Chrome (or whichever web browser you use to download APK files).
    6. Toggle on Allow from this source.

    You can also enable this setting for specific apps by going to their app info page and tapping Advanced > Install unknown apps.
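
    If the phone is connected to a computer with USB debugging enabled, you can also jump straight to that settings screen with adb. This is a minimal sketch, assuming adb (from Android's platform-tools) is installed and on your PATH; the action string is the standard Android settings intent for managing unknown-app sources.

        import subprocess

        # Opens the "Install unknown apps" settings screen on the connected device.
        # Assumes adb is installed and the phone has USB debugging enabled.
        subprocess.run(["adb", "shell", "am", "start",
                        "-a", "android.settings.MANAGE_UNKNOWN_APP_SOURCES"])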

    -

    How to Download the APK File of TikTok from Uptodown

    -

    Uptodown is a website that offers a large collection of APK files for various Android apps and games. You can use Uptodown to download the APK file of TikTok from a reliable and secure source. To do that, follow these steps:

    -
    1. Open Chrome (or your preferred web browser) on your Android device and go to https://www.uptodown.com/android.
    2. In the search box, type TikTok and tap the magnifying glass icon.
    3. Tap the TikTok icon from the search results.
    4. Tap the green Download button.
    5. Tap OK to confirm the download.
    6. Wait for the download to finish. You can check the progress in the notification bar.
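
    The six steps above cover the on-device browser route. If you would rather fetch the file from a computer first, the Python sketch below shows the same idea; the URL is a placeholder, not a real Uptodown link, since Uptodown generates the final download link on its download page.

        import os
        import urllib.request

        # Placeholder URL -- substitute the actual link shown on the download page.
        url = "https://example.com/tiktok.apk"
        path, _ = urllib.request.urlretrieve(url, "tiktok.apk")
        print(f"Saved {path} ({os.path.getsize(path)} bytes)")

    You can then copy the file to the phone over USB, or install it directly with adb as sketched a little further below.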

    How to Install the APK File of TikTok on Android

    -

    Once you have downloaded the APK file of TikTok, you can install it on your Android device by following these steps:

    -
    1. Open your file manager app and locate the downloaded APK file. It should be in the Downloads folder by default.
    2. Tap the APK file to open it.
    3. If prompted, tap Settings and enable Unknown Sources or Install Unknown Apps for your file manager app.
    4. Tap Install and wait for the installation to complete.
    5. Tap Open to launch TikTok or tap Done to exit.
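
    Step 2 above assumes you open the file on the phone itself. If the APK is sitting on a computer instead, you can sideload it over USB with adb. A minimal sketch, assuming adb is installed and USB debugging is enabled on the device; the file name is a placeholder.

        import subprocess

        # Installs (or reinstalls, with -r) the APK on the connected device via adb.
        result = subprocess.run(["adb", "install", "-r", "tiktok.apk"],
                                capture_output=True, text=True)
        print(result.stdout or result.stderr)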

    How to Use TikTok on Android

    -

    Now that you have installed TikTok on your Android device, you can start using it to create and watch amazing videos. Here are some of the basic steps you need to follow:

    -

    How to Create an Account and Set Up Your Profile

    -

    To use TikTok, you need to create an account and set up your profile. You can do that by following these steps:

    -
    1. Open TikTok and tap Me in the bottom-right corner.
    2. Tap Sign up with phone or email or Sign up with Facebook/Google/Twitter/Instagram (depending on your preference).
    3. Enter your phone number or email address and tap Next. If you choose a social media option, follow the instructions on the screen.
    4. Create a password and tap Next.
    5. Select your birthday and tap Next.
    6. Select a username and tap Next.
    7. TikTok will send you a verification code via SMS or email. Enter the code and tap Next.
    8. Congratulations, you have created your TikTok account!

    To set up your profile, follow these steps:

    -
    1. Tap Edit profile.
    2. Add a profile photo or video by tapping the camera icon.
    3. Add a bio, website, or Instagram/Snapchat/YouTube account by tapping the corresponding fields.
    4. Tap Save.

    How to Browse, Watch, and Like Videos on TikTok

    -

    TikTok has two main feeds where you can browse and watch videos: For You and Following. The For You feed shows you videos that are personalized for you based on your preferences, interactions, and location. The Following feed shows you videos from the users you follow. To switch between the feeds, swipe left or right on the screen.

    -

    To watch a video on TikTok, simply tap on it. You can also swipe up or down to see more videos. To pause a video, tap and hold on it. To resume playing, release your finger. To like a video, double-tap on it or tap the heart icon on the right side of the screen. You can also comment, share, or save a video by tapping the corresponding icons on the right side of the screen.

    -

    How to Create, Edit, and Share Videos on TikTok

    -

    TikTok allows you to create videos up to 60 seconds long using various tools and features. To create a video on TikTok, follow these steps:

    -
    1. Tap the plus icon in the bottom-center of the screen.
    2. Select a recording mode: 15s (for 15-second videos), 60s (for 60-second videos), Templates (for pre-made templates), or Photo Templates (for photo slideshows).
    3. Select a sound by tapping Sounds at the top of the screen. You can browse through different categories or search for a specific song or sound.
    4. Adjust the speed, beauty, filters, and timer by tapping the icons on the right side of the screen.
    5. Record your video by tapping and holding the red button. You can also tap it once to start and stop recording. You can record multiple clips and stitch them together.
    6. Edit your video by tapping Next. You can trim, cut, duplicate, or delete clips by tapping the scissors icon. You can also add effects, stickers, text, or voiceovers by tapping the icons on the bottom of the screen.
    7. Share your video by tapping Next. You can add a caption, hashtags, mentions, or location by tapping the corresponding fields. You can also choose who can view, comment, duet, or stitch your video by tapping Who can view this video. You can also save your video to your device or drafts by tapping Save or Draft.
    8. Tap Post to publish your video on TikTok.

    Best Features and Benefits of TikTok

    -

    TikTok is more than just a video-sharing app. It’s also a platform where you can express yourself, discover new things, learn new skills, and connect with others. Here are some of the best features and benefits of TikTok that make it stand out from other apps:

    -

    TikTok's Unique Algorithm that Shows Personalized Videos

    -

    One of the reasons why TikTok is so addictive is because of its unique algorithm that shows you videos that match your interests, preferences, and behavior. The algorithm analyzes various factors such as your watch time, likes, comments, shares, follows, and device settings to determine what kind of videos you like and dislike. It then shows you more videos that you are likely to enjoy and less videos that you are likely to skip. This way, you can always find something new and relevant to watch on TikTok.
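
    TikTok has never published its ranking formula, so the snippet below is only a toy illustration of the general idea of weighted engagement scoring described above. It is not TikTok's actual algorithm, and the signal names and weights are invented for the example.

        # Toy weighted-engagement score -- NOT TikTok's real ranking system.
        WEIGHTS = {"watch_ratio": 4.0, "liked": 2.0, "commented": 3.0, "shared": 5.0}

        def score(signals: dict) -> float:
            """Signals are 0..1 values; a higher score means 'show more like this'."""
            return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

        print(score({"watch_ratio": 0.9, "liked": 1.0}))  # 5.6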

    -

    TikTok's User-Friendly Interface and Powerful Video Editor

    -

    TikTok has a simple and intuitive interface that makes it easy to use for anyone. You can swipe left or right to switch between feeds, tap to watch or pause videos, double-tap to like videos, and swipe up or down to see more videos. You can also access various features and settings by tapping the icons on the bottom or top of the screen.

    -

    TikTok also has a powerful video editor that lets you create amazing videos with ease. You can record up to 60 seconds of video using different modes, sounds, speeds, filters, and timers. You can also edit your video using various tools such as effects, stickers, text, voiceovers, and more. You can also use features such as duet, stitch, react, and green screen to collaborate with other users or add more fun to your videos.

    -

    TikTok's Huge Library of Music and Sounds

    -

    TikTok has a huge library of music and sounds that you can use for your videos. You can choose from different genres such as pop, rock, hip hop, rap, country, EDM, classical, and more. You can also find songs from popular artists such as Taylor Swift, Ed Sheeran, Drake, Ariana Grande, and more. You can also find sounds from movies, TV shows, memes, viral videos, and more. You can also create your own sounds by recording your voice or using the voice changer feature.

    -

    TikTok's Creative Effects and Filters

    -

    TikTok has a variety of effects and filters that you can use to enhance your videos and make them more fun and engaging. You can find effects such as beauty, bling, face morph, animal, anime, and more. You can also find filters such as vintage, glitch, neon, rainbow, and more. You can apply effects and filters before or after recording your video, or even while recording. You can also adjust the intensity and duration of the effects and filters to suit your preference.

    -

    TikTok's Global Community of Creators and Influencers

    -

    TikTok has a global community of creators and influencers that you can follow, interact with, and learn from. You can find creators from different categories such as comedy, music, dance, art, beauty, fashion, sports, education, and more. You can also find influencers from different industries such as entertainment, business, politics, health, and more. You can watch their videos, like their posts, comment on their content, send them messages, or even collaborate with them. You can also join various challenges and trends that they initiate or participate in.

    -

    Best Alternatives to TikTok

    -

    TikTok is not the only app that lets you create and watch short videos. There are many other apps that offer similar or different features and benefits. Here are some of the best alternatives to TikTok that you can try:

    -

    Snapchat: Best for Original Content and Snap Originals

    -

    Snapchat is a social media app that lets you send and receive photos and videos that disappear after a few seconds. You can also create stories that last for 24 hours or longer using the Spotlight feature. Snapchat also has a feature called Snap Originals, which are exclusive shows created by Snapchat in collaboration with various celebrities and creators. You can watch these shows on the Discover tab or on the Snap Originals website.

    -

    Snapchat is best for creating original content that is authentic and spontaneous. You can use various tools such as lenses, filters, and stickers.

    Clash: Best for Vine Lovers and Creator Community

    Clash is a video app that lets you create and watch short videos that are fun and authentic. You can use various features such as music, filters, stickers, captions, and more to make your videos more lively and expressive. You can also browse through different categories such as comedy, music, art, fashion, and more to find videos that inspire you.

    -

    Clash is best for Vine lovers and creator community who miss the original app and want to support independent creators. You can use Clash to create videos that are humorous, artistic, or educational. You can also interact with other users, tip your favorite creators, and join the Clash community.

    -

    YouTube Shorts: Best for YouTube Fans and Short Videos

    -

    YouTube Shorts is a feature of YouTube that lets you create and watch short videos on the app. You can use various tools such as music, filters, text, stickers, and more to make your shorts more fun and interesting. You can also browse through different categories such as comedy, music, gaming, beauty, and more to find shorts that entertain you.

    -

    YouTube Shorts is best for YouTube fans and short videos who want to enjoy the content from their favorite YouTube channels in a shorter format. You can use shorts to watch snippets of videos from your favorite creators, discover new channels, or create your own shorts using the YouTube camera. You can also interact with other users, like, comment, or share shorts on social media.

    -

    Conclusion

    -

    TikTok is a video-sharing app that lets you create and watch short videos on any topic. It has many features and benefits that make it a popular and engaging platform for users of all ages and interests. However, if you want to download and install TikTok on your Android device, you may need to use the APK file of TikTok instead of getting it from the Google Play Store. This article showed you how to do that using Uptodown as a reliable source of APK files.

    -

    We also introduced you to some of the best alternatives to TikTok that you can try if you want to explore other apps that offer similar or different features and benefits. Whether you choose Snapchat, Funimate, Instagram Reels, Clash, or YouTube Shorts, you can enjoy creating and watching short videos on your Android device.

    -

    Whichever app you choose, here are some tips and suggestions for using it:

    -
    • Be creative and original with your content. Don't copy or plagiarize other users' videos.
    • Be respectful and positive with your interactions. Don't post or comment anything that is hateful, offensive, or inappropriate.
    • Be safe and smart with your privacy. Don't share any personal or sensitive information on your videos or profile.
    • Be aware and responsible with your usage. Don't spend too much time or money on the app.
    • Have fun and enjoy yourself. Don't take the app too seriously or stress yourself out over it.

    FAQs

    -

    What is an APK file and why do I need it?

    -

    An APK file is an Android Package file that contains all the necessary files and data for an app to run on an Android device. Sometimes, you may need to download an APK file instead of getting an app from the Google Play Store. This could be because the app is not available in your region, or because you want to get an older or newer version of the app. To download and install an APK file on your Android device, you need to enable a setting called Unknown Sources or Install Unknown Apps. This will allow your device to accept apps from outside the Play Store.

    -

    Is TikTok safe to use on Android?

    -

    TikTok is generally safe to use on Android, as long as you download it from a trusted source such as Uptodown. However, like any other app, TikTok may have some risks and issues that you should be aware of. Some of these include:

    -
    • Data privacy and security: TikTok may collect and use your personal data for various purposes, such as advertising, analytics, and content moderation. TikTok may also share your data with third parties, such as its parent company ByteDance, its affiliates, or its partners. TikTok may also be subject to government requests or legal actions that may affect your data privacy and security.
    • Content moderation and censorship: TikTok may remove or restrict your content or account if it violates its community guidelines or terms of service. TikTok may also censor or limit your content or account based on your location, language, or political views.
    • Addiction and mental health: TikTok may be addictive and harmful to your mental health if you use it excessively or obsessively. TikTok may also expose you to negative or inappropriate content that may affect your mood, self-esteem, or well-being.

    To use TikTok safely on Android, you should follow some best practices such as:

    -
    • Reviewing and adjusting your privacy and security settings on the app.
    • Being careful and selective with what you share and post on the app.
    • Being respectful and positive with what you watch and comment on the app.
    • Limiting your time and frequency of using the app.
    • Seeking help and support if you experience any problems or issues with the app.

    How can I make money on TikTok?

    -

    TikTok is not only a platform for entertainment and creativity, but also a platform for monetization and income. There are several ways that you can make money on TikTok, such as:

    -
    • Joining the TikTok Creator Fund: The TikTok Creator Fund is a program that pays eligible creators for their views and engagement on the app. To join the program, you need to meet certain criteria, such as having at least 10,000 followers, 10,000 views in the last 30 days, and being 18 years old or older. You also need to follow the community guidelines and terms of service of TikTok.
    • Getting sponsored by brands: You can get sponsored by brands that want to promote their products or services on your videos. You can either reach out to brands directly or use platforms such as FameBit, AspireIQ, or Upfluence to connect with them. You can then negotiate the terms and conditions of the sponsorship deal, such as the payment amount, the content requirements, and the disclosure rules.
    • Selling your own products or services: You can sell your own products or services on your videos by using features such as TikTok Shop or Shopify. You can also use links in your bio or captions to direct your viewers to your website or online store. You can then showcase your products or services on your videos, such as clothing, accessories, art, music, coaching, etc. You can then encourage your viewers to buy your products or services by using calls to action, discounts, or testimonials.
    • Collecting donations from fans: You can collect donations from your fans who want to support your content and show their appreciation. You can use features such as TikTok Live or TikTok Gifts to receive donations from your viewers. You can also use platforms such as Patreon, Ko-fi, or Buy Me a Coffee to receive donations from your fans. You can then thank your donors and reward them with exclusive content, shoutouts, or perks.

    How can I delete my TikTok account?

    -

    If you want to delete your TikTok account, you need to follow these steps:

    -
    1. Open TikTok and tap Me in the bottom-right corner.
    2. Tap the three dots in the upper-right corner.
    3. Tap Manage account.
    4. Tap Delete account at the bottom of the screen.
    5. Follow the instructions on the screen to verify your identity and confirm your decision.

    Note that deleting your TikTok account will result in the following consequences:

    -
    • You will lose access to your account and all your videos, likes, comments, messages, and followers.
    • You will not be able to log in with the same account again.
    • You will not be able to get a refund for any purchases you made on the app.
    • Your account may still be visible to others for up to 30 days before it is permanently deleted.

    How can I contact TikTok support?

    -

    If you have any questions, issues, or feedback regarding TikTok, you can contact TikTok support by following these steps:

    -
    1. Open TikTok and tap Me in the bottom-right corner.
    2. Tap the three dots in the upper-right corner.
    3. Tap Report a problem.
    4. Select a category and a subcategory that best describes your problem.
    5. Read the suggested solutions or tap No, I still need help.
    6. Tap Still have problem.
    7. Fill out the form with your details and description of your problem.
    8. Tap Submit.

    You can also contact TikTok support by sending an email to feedback@tiktok.com or visiting their website at https://www.tiktok.com/contact-us.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Various Mini Games with 1 2 3 4 5 6 Player Games APK - No Internet Required.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Various Mini Games with 1 2 3 4 5 6 Player Games APK - No Internet Required.md deleted file mode 100644 index 75e6ed4d10ff609e23873fd8c44b7c7e7ef25b48..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Various Mini Games with 1 2 3 4 5 6 Player Games APK - No Internet Required.md +++ /dev/null @@ -1,146 +0,0 @@ - -

    1 2 3 4 5 6 Player Games APK: A Guide to the Best Multiplayer Games for Android

    -

    Do you love playing games with your friends and family on your Android device? Are you looking for some fun and exciting multiplayer games that can support up to six players at once? If yes, then you are in luck! In this article, we will introduce you to the concept of 1 2 3 4 5 6 player games apk, which are applications that allow you to play various games with multiple players on your device. We will also show you how to download and install these apk files on your device, and recommend some of the best multiplayer games for different genres and preferences. So, let's get started!

    -

    1 2 3 4 5 6 player games apk


    Download Zip ---> https://urlca.com/2uO6f7



    -

    What are 1 2 3 4 5 6 player games apk?

    -

    1 2 3 4 5 6 player games apk are applications that contain a collection of games and minigames that can be played by one, two, three, four, five, or six players on the same device. These games are usually simple and easy to play, but also very fun and addictive. They can be played offline or online, depending on the game. Some of these games require only one button to control the action, while others require more coordination and skill. These games are perfect for parties, gatherings, or just casual gaming with your friends and family.

    -

    The benefits of playing multiplayer games on Android

    -

    Playing multiplayer games on Android has many benefits, such as:

    -
    • Socializing: Playing multiplayer games can help you bond with your friends and family, as well as meet new people online. You can chat, cooperate, compete, or just have fun with others who share your interest in gaming.
    • Entertaining: Playing multiplayer games can keep you entertained for hours, as there are many different genres and styles to choose from. You can enjoy action, arcade, puzzle, brain, sports, racing, and many more types of games with multiple players.
    • Challenging: Playing multiplayer games can challenge your skills, reflexes, strategy, and creativity. You can test yourself against other players who may have different levels of experience and expertise. You can also learn from others and improve your own gameplay.
    • Economical: Playing multiplayer games on Android can save you money, as you don't need to buy expensive consoles or accessories to enjoy gaming with others. You can use your existing device and download free or cheap apk files that contain multiple games.

    The challenges of finding and downloading multiplayer games on Android

    -

    However, playing multiplayer games on Android also has some challenges, such as:

    -
    • Finding: Finding multiplayer games on Android can be difficult, as there are many options available on the Google Play Store and other sources, but not all of them are of good quality or compatible with your device. You may have to spend a lot of time and effort searching for the best games that suit your taste and needs.
    • Downloading: Downloading multiplayer games on Android can be risky, as some apk files may contain malware, viruses, or other harmful elements that can damage your device or compromise your privacy. You may also have to deal with annoying ads, pop-ups, or in-app purchases that can ruin your gaming experience.

      How to download and install 1 2 3 4 5 6 player games apk on your device

      -

      If you want to download and install 1 2 3 4 5 6 player games apk on your device, you need to follow these steps:

      -

      The steps to download and install apk files from trusted sources

      -
    1. Find a trusted source: The first step is to find a reliable and reputable source that offers apk files for multiplayer games. You can use Google or other search engines to look for websites or blogs that review and recommend multiplayer games for Android. You can also check the ratings, reviews, and feedback from other users to verify the quality and safety of the apk files.
    2. Download the apk file: The next step is to download the apk file of the game you want to play. You can click on the download link or button provided by the source, and choose a location on your device where you want to save the file. You may have to wait for a few minutes for the download to complete, depending on the size of the file and your internet speed.
    3. Install the apk file: The final step is to install the apk file on your device. You need to enable the option of "Unknown sources" in your device settings, which allows you to install applications from sources other than the Google Play Store. Then, you need to locate the apk file on your device, tap on it, and follow the instructions on the screen to complete the installation. You may have to grant some permissions to the game to access certain features or functions on your device.
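
    One practical way to confirm that step 2 gave you exactly the file the source published is to compare checksums, when the site lists one. A minimal Python sketch, assuming the source publishes a SHA-256 value; both the file name and the expected hash are placeholders.

        import hashlib

        expected = "paste-the-published-sha256-value-here"  # placeholder

        with open("game.apk", "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        # Any mismatch means the file was corrupted or tampered with in transit.
        print("checksum OK" if digest == expected else f"MISMATCH: {digest}")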

      The precautions to take before installing apk files from unknown sources

      -

      However, before installing apk files from unknown sources, you need to take some precautions, such as:

      -

      -1 2 3 4 5 6 player games apk download
      -1 2 3 4 5 6 player games apk mod
      -1 2 3 4 5 6 player games apk offline
      -1 2 3 4 5 6 player games apk online
      -1 2 3 4 5 6 player games apk free
      -best 1 2 3 4 5 6 player games apk
      -fun 1 2 3 4 5 6 player games apk
      -multiplayer games apk for android
      -multiplayer games apk offline
      -multiplayer games apk online
      -multiplayer games apk mod
      -multiplayer games apk download
      -multiplayer games apk free
      -best multiplayer games apk
      -fun multiplayer games apk
      -local multiplayer games apk
      -bluetooth multiplayer games apk
      -wifi multiplayer games apk
      -lan multiplayer games apk
      -co op multiplayer games apk
      -party games apk for android
      -party games apk offline
      -party games apk online
      -party games apk mod
      -party games apk download
      -party games apk free
      -best party games apk
      -fun party games apk
      -local party games apk
      -bluetooth party games apk
      -wifi party games apk
      -lan party games apk
      -co op party games apk
      -family games apk for android
      -family games apk offline
      -family games apk online
      -family games apk mod
      -family games apk download
      -family games apk free
      -best family games apk
      -fun family games apk
      -local family games apk
      -bluetooth family games apk
      -wifi family games apk
      -lan family games apk

      -
    • Scan the apk file: You need to scan the apk file with a reliable antivirus or anti-malware software before installing it on your device. This can help you detect and remove any potential threats or infections that may harm your device or data.
    • Back up your data: You need to back up your data, such as photos, videos, contacts, and messages, before installing apk files from unknown sources. This can help you restore your data in case something goes wrong during or after the installation.
    • Read the terms and conditions: You need to read the terms and conditions of the game before installing it on your device. This can help you understand what kind of permissions, access, or information the game requires from you, and whether you agree with them or not.
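
    As a complement to the first precaution above, you can also check who signed the APK before installing it. The sketch below shells out to apksigner, which ships with the Android SDK build-tools; it assumes that tool is on your PATH, and the file name is a placeholder.

        import subprocess

        # Prints the APK's signing certificate(s). If the signer does not match the
        # official developer, treat the file as suspect and do not install it.
        result = subprocess.run(["apksigner", "verify", "--print-certs", "game.apk"],
                                capture_output=True, text=True)
        print(result.stdout or result.stderr)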

      The best 1 2 3 4 5 6 player games apk for different genres and preferences

      -

      Now that you know how to download and install 1 2 3 4 5 6 player games apk on your device, you may wonder which games are worth playing. Well, there are many options available for different genres and preferences, but here are some of the best ones that we recommend:

      -

      Action and arcade games

      -

      If you like action and arcade games that are fast-paced, thrilling, and fun, then you should try these games:

      -

      1 2 3 4 Player Games - Offline

      -

      This is a game that contains more than 20 minigames that can be played by one, two, three, or four players on the same device. The games include tank battles, soccer matches, wrestling fights, car races, zombie survival, and more. The games are simple but addictive, and you can customize your characters and settings. You can also play offline without internet connection.

      -

      MiniBattles

      -

      This is another game that contains more than 30 minigames that can be played by two, three, four, five, or six players on the same device. The games include archery duels, basketball shootouts, sword fights, volleyball matches, golf tournaments, and more. The games are easy to play but hard to master, and you can challenge your friends and family to see who is the best. You can also play online with other players around the world.

      -

      Puzzle and brain games

      -

      If you like puzzle and brain games that are challenging, stimulating, and educational, then you should try these games:

      -

      Brain It On! - Physics Puzzles

      -

      This is a game that tests your physics knowledge and creativity. You have to draw shapes on the screen to solve various puzzles, such as balancing objects, launching projectiles, destroying structures, and more. You can play by yourself or with up to four players on the same device. You can also create your own puzzles and share them with others.

      -

      Spaceteam

      -

      This is a game that tests your communication and teamwork skills. You have to work with up to eight players on the same device or over WiFi to operate a spaceship. You have to follow instructions, press buttons, flip switches, and shout commands to avoid disasters and reach your destination. The game is hilarious and chaotic, and you will have a blast with your friends and family.

      -

      Sports and racing games

      -

      If you like sports and racing games that are competitive, exciting, and realistic, then you should try these games:

      -

      Soccer Stars

      -

      This is a game that lets you play soccer with up to six players on the same device or online. You have to flick your soccer pieces to score goals and win matches. You can customize your team, choose your formation, and collect different soccer pieces. You can also join tournaments, leagues, and cups to compete with other players around the world.

      -

      Drive Ahead!

      -

      This is a game that lets you drive various vehicles and smash your opponents' heads. You can play with up to four players on the same device or online. You can choose from cars, trucks, bikes, tanks, robots, and more. You can also play in different arenas, modes, and events. The game is crazy and fun, and you will enjoy crashing and exploding your friends and foes.

      -

      Conclusion

      -

      In conclusion, 1 2 3 4 5 6 player games apk are applications that allow you to play multiple games with multiple players on your Android device. They are great for socializing, entertaining, challenging, and economical gaming. However, you need to be careful when downloading and installing apk files from unknown sources, as they may contain harmful elements or unwanted features. You also need to find the best games that suit your genre and preference. We have recommended some of the best action, arcade, puzzle, brain, sports, and racing games for you to try. We hope you enjoy playing these games with your friends and family!

      -

      FAQs

      -

      Here are some frequently asked questions about 1 2 3 4 5 6 player games apk:

      -
    • Q: What are the advantages of playing games with apk files?
      A: Some of the advantages of playing games with apk files are:
      • You can access games that are not available on the Google Play Store or in your region.
      • You can get the latest updates and features of the games before they are officially released.
      • You can modify or customize the games according to your preferences.

    • Q: What are the disadvantages of playing games with apk files?
      A: Some of the disadvantages of playing games with apk files are:
      • You may expose your device or data to malware, viruses, or other harmful elements.
      • You may violate the terms and conditions of the game developers or publishers.
      • You may encounter compatibility or performance issues with your device or game.

    • Q: How can I find more 1 2 3 4 5 6 player games apk?
      A: You can find more 1 2 3 4 5 6 player games apk by:
      • Searching on Google or other search engines for websites or blogs that review and recommend multiplayer games for Android.
      • Asking for recommendations from other gamers on online forums or communities.
      • Exploring the categories or genres of multiplayer games on the Google Play Store or other sources.

    • Q: How can I play online multiplayer games with apk files?
      A: You can play online multiplayer games with apk files by:
      • Connecting your device to a stable and secure internet connection.
      • Using the same apk file and version as the other players you want to play with.
      • Following the instructions or rules of the game to join or create a multiplayer session.

    • Q: How can I play offline multiplayer games with apk files?
      A: You can play offline multiplayer games with apk files by:
      • Downloading and installing the apk file of the game you want to play on your device.
      • Using the same device or connecting multiple devices via Bluetooth, WiFi, or cable.
      • Choosing the offline or local mode of the game and selecting the number of players.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Get Tour of Neverland MOD APK with Unlimited Pearls and Resources.md b/spaces/congsaPfin/Manga-OCR/logs/How to Get Tour of Neverland MOD APK with Unlimited Pearls and Resources.md deleted file mode 100644 index ef8d220433600bb91a5f30585b7de0f25ae9ff3b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Get Tour of Neverland MOD APK with Unlimited Pearls and Resources.md +++ /dev/null @@ -1,95 +0,0 @@ - -

      Download Game Tour of Neverland Mod Apk

      -

      Do you love farming games with cute graphics and relaxing gameplay? If so, you might want to check out Tour of Neverland, a new game from Mars Game Hongkong that lets you run your own farm on a tropical island. In this game, you can plant crops, raise animals, make food, go fishing, snorkeling, mining, decorating, and more. You can also visit other players' islands, trade items in the market, and make new friends. Sounds fun, right?

      -

      But what if you want to enjoy the game without any limitations or interruptions? What if you want to have unlimited money and resources, unlock all the items and outfits, remove all the ads and pop-ups, and improve the graphics and performance? Well, there is a way to do that. You can download the mod apk version of Tour of Neverland, which is a modified version of the original game that gives you access to all these benefits. In this article, we will tell you more about the features and benefits of Tour of Neverland mod apk, and how to download and install it on your Android device.

      -

      download game tour of neverland mod apk


      Download Filehttps://urlca.com/2uO9US



      -

      Features of Tour of Neverland

      -

      Tour of Neverland is a game that offers you a lot of features to keep you entertained and relaxed. Here are some of them:

      -

      Farming and crafting

      -

      As a farmer, you can grow over 30 kinds of crops and livestock on your island. You can also construct various farming facilities, such as barns, mills, bakeries, etc. You can use your crops and animals to make food, such as bread, cheese, jam, etc. You can also craft other items, such as furniture, decorations, clothes, etc.

      -

      Fishing and snorkeling

      -

      If you love the sea, you can catch fish from the ocean or go snorkeling to catch sea creatures. You can also build a hotel to attract visitors who will pay you for your services. You can also make friends with some fascinating animals, such as dolphins, turtles, sharks, etc.

      -

      Exploring and mining

      -

      If you are feeling adventurous, you can jump in a minecart and explore the depths of the mines in search of treasures. You can find gems, ores, fossils, etc. You can also encounter some dangers, such as bats, spiders, etc. You can also explore other areas of the island, such as forests, caves, volcanoes, etc.

      -

      Decorating and customizing

      -

      If you want to express your creativity, you can decorate your cabin and your island with hundreds of items and furniture. You can also customize your appearance with tons of outfits and pets. You can change your hairstyle, clothes, accessories, etc.

      -

      Socializing and trading

      -

      If you want to interact with other players, you can visit their islands, trade items in the open market, or board the airship to make new friends. You can also chat with them, send gifts to them, or join them in events.

      -

      download game tour of neverland mod apk unlimited pearls
      -how to download game tour of neverland mod apk for android
      -download game tour of neverland mod apk latest version
      -download game tour of neverland mod apk free shopping
      -download game tour of neverland mod apk offline
      -download game tour of neverland mod apk no root
      -download game tour of neverland mod apk hack
      -download game tour of neverland mod apk 2023
      -download game tour of neverland mod apk full unlocked
      -download game tour of neverland mod apk obb
      -download game tour of neverland mod apk revdl
      -download game tour of neverland mod apk rexdl
      -download game tour of neverland mod apk happymod
      -download game tour of neverland mod apk an1
      -download game tour of neverland mod apk android 1
      -download game tour of neverland mod apk pure
      -download game tour of neverland mod apk uptodown
      -download game tour of neverland mod apk apkpure
      -download game tour of neverland mod apk apkmody
      -download game tour of neverland mod apk apkmirror
      -download game tour of neverland mod apk mob.org
      -download game tour of neverland mod apk mobpark
      -download game tour of neverland mod apk platinmods
      -download game tour of neverland mod apk blackmod
      -download game tour of neverland mod apk andropalace
      -download game tour of neverland mod apk androeed.ru
      -download game tour of neverland mod apk androgamer.org
      -download game tour of neverland mod apk androidoyun.club
      -download game tour of neverland mod apk android republic
      -download game tour of neverland mod apk ihackedit.com
      -download game tour of neverland mod apk lenov.ru
      -download game tour of neverland mod apk sbenny.com
      -download game tour of neverland mod apk 5play.ru
      -download game tour of neverland mod apk onhax.me
      -download game tour of neverland mod apk douploads.net
      -download game tour of neverland mod apk mediafire.com
      -download game tour of neverland mod apk mega.nz
      -download game tour of neverland mod apk zippyshare.com
      -download game tour of neverland mod apk google drive link
      -download game tour of neverland mod apk direct link no ads

      -

      Benefits of Tour of Neverland Mod Apk

      -

      While Tour of Neverland is a fun game to play, it also has some drawbacks that might affect your enjoyment. For example For example, you might run out of money and resources to buy or upgrade things, you might have to wait for a long time to complete tasks or unlock new features, you might get annoyed by the ads and pop-ups that interrupt your gameplay, or you might experience some lag or glitches that affect the graphics and performance of the game. That's why you might want to download the mod apk version of Tour of Neverland, which gives you the following benefits:

      -

      Unlimited money and resources

      -

      With the mod apk version, you don't have to worry about running out of money and resources. You can have unlimited coins, gems, wood, stone, etc. You can use them to buy anything you want, such as seeds, animals, food, items, outfits, etc. You can also use them to upgrade your facilities, expand your island, or speed up your tasks.

      -

      Unlocked items and outfits

      -

      With the mod apk version, you don't have to wait for a long time to unlock new items and outfits. You can have access to all the items and outfits in the game from the start. You can choose from hundreds of items and furniture to decorate your cabin and island. You can also choose from tons of outfits and pets to customize your appearance.

      -

      No ads and pop-ups

      -

      With the mod apk version, you don't have to get annoyed by the ads and pop-ups that interrupt your gameplay. You can enjoy the game without any distractions or interruptions. You can also save your data and battery life by not having to watch or click on ads.

      -

      Enhanced graphics and performance

      -

      With the mod apk version, you don't have to experience any lag or glitches that affect the graphics and performance of the game. You can enjoy the game with enhanced graphics and performance. You can see the details and colors of the game more clearly and smoothly. You can also adjust the settings according to your preference.

      -

      How to Download and Install Tour of Neverland Mod Apk

      -

      If you are interested in downloading and installing Tour of Neverland mod apk on your Android device, you can follow these simple steps:

      -

      Step 1: Enable unknown sources

      -

      Before you download the mod apk file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.

      -

      Step 2: Download the mod apk file

      -

      Next, you need to download the mod apk file from a reliable source. You can search for Tour of Neverland mod apk on Google or use this link to download it directly. Make sure you have enough space on your device to store the file.

      -

      Step 3: Install the mod apk file

      -

      After you download the mod apk file, you need to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a pop-up asking for your permission to install the app. Tap on Install and wait for the installation process to finish.
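
If you prefer to install from a computer instead of tapping through a file manager, the same step can be done over USB with adb. The short Python sketch below is only an illustration under a few assumptions: adb is installed and on your PATH, USB debugging is enabled on your phone, and the file name used here is a hypothetical placeholder for whatever you downloaded.

import subprocess  # standard library; used to call the adb command-line tool

APK_PATH = "tour-of-neverland-mod.apk"  # hypothetical placeholder for your downloaded file

def install_apk(apk_path: str) -> None:
    # "adb install -r" installs the APK and keeps existing app data if the game is already installed.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    install_apk(APK_PATH)

Either way, the result is the same as tapping Install in the pop-up; the on-device steps in this guide remain the simplest option if you do not use a computer.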

      -

      Step 4: Launch the game and enjoy

      -

      Finally, you can launch the game and enjoy all the benefits of Tour of Neverland mod apk. You will see a lot of money and resources in your account, as well as all the items and outfits unlocked. You will also notice that there are no ads and pop-ups in the game, and that the graphics and performance are improved.

      -

      Conclusion

      -

Tour of Neverland is a fun and relaxing game that lets you run your own farm on a tropical island. You can do a lot of activities in this game, such as farming, crafting, fishing, snorkeling, exploring, mining, decorating, customizing, socializing, and trading. However, if you want to enjoy the game without any limitations or interruptions, you might want to download Tour of Neverland mod apk, which gives you unlimited money and resources, unlocked items and outfits, no ads and pop-ups, and enhanced graphics and performance.

      -

      If you are interested in downloading Tour of Neverland mod apk on your Android device, you can follow our guide above. It is very easy and simple to do. Just make sure you download the file from a reliable source and enable unknown sources on your device before installing it.

      -

We hope this article was helpful for you. If you have any questions or feedback about Tour of Neverland mod apk, you can leave them in the comment section below. We will try to answer them as soon as possible. Thank you for reading and happy gaming!

      -

      FAQs

      -

      Here are some frequently asked questions about Tour of Neverland mod apk:

      -

      Is Tour of Neverland mod apk safe to use?

      -

      Yes, Tour of Neverland mod apk is safe to use as long as you download it from a reliable source. However, you should always be careful when installing apps from unknown sources, as they might contain viruses or malware that can harm your device. You should also backup your data before installing the mod apk, in case something goes wrong.

      -

      Is Tour of Neverland mod apk compatible with my device?

      -

      Tour of Neverland mod apk is compatible with most Android devices that run on Android 4.4 or higher. However, some devices might not support the mod apk due to different specifications or settings. You should check the compatibility of your device before downloading the mod apk.

      -

      Will I get banned for using Tour of Neverland mod apk?

      -

      There is a low chance of getting banned for using Tour of Neverland mod apk, as the game does not have a strict anti-cheat system. However, you should still be careful when using the mod apk, as you might get reported by other players or detected by the game developers. You should not use the mod apk to cheat or abuse the game, as this might ruin the fun for yourself and others.

      -

      Can I update Tour of Neverland mod apk?

      -

      Yes, you can update Tour of Neverland mod apk whenever there is a new version available. However, you should not update the mod apk from the Google Play Store, as this might overwrite the modded features and cause errors. You should update the mod apk from the same source where you downloaded it, or from another reliable source that offers the latest version.

      -

      Can I play Tour of Neverland mod apk offline?

      -

      No, you cannot play Tour of Neverland mod apk offline, as the game requires an internet connection to run. You need to connect to the internet to access all the features and content of the game, such as visiting other players' islands, trading items in the market, joining events, etc. You also need to connect to the internet to save your progress and sync your data.

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/PUBG Mobile Highly Compressed 2020 - APK OBB Download - Livik Map Cheer Park Library Map and More.md b/spaces/congsaPfin/Manga-OCR/logs/PUBG Mobile Highly Compressed 2020 - APK OBB Download - Livik Map Cheer Park Library Map and More.md deleted file mode 100644 index 08425114aa128596e531e2d686a9c8ee6cf2b3eb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/PUBG Mobile Highly Compressed 2020 - APK OBB Download - Livik Map Cheer Park Library Map and More.md +++ /dev/null @@ -1,150 +0,0 @@ - -

      Download PUBG Mobile apk + obb highly compressed

      -

      PUBG Mobile is one of the most popular and addictive mobile games in the world. It is a battle royale game where you parachute down to a remote island and fight to be the last one standing. You can play solo or in teams of up to four players, with various maps, modes, weapons, vehicles, and items to choose from.

      -

      Downloading PUBG Mobile can be a challenge for some players, especially if they have limited data plans or slow internet speeds. The game is quite large in size, with regular updates adding more content and features. That's why some players prefer to download the game in a highly compressed format, which reduces the file size significantly without compromising the quality or performance.

      -

      download pubg mobile apk + obb highly compressed


      Download File ✔✔✔ https://urlca.com/2uOaY3



      -

      In this article, we will show you how to download PUBG Mobile apk + obb highly compressed, as well as some tips and tricks to improve your gameplay and win more matches. But first, let's take a look at the system requirements and features of PUBG Mobile.

      -

      System requirements and features of PUBG Mobile

      -

      PUBG Mobile is a resource-intensive game that requires a stable network connection and a decent device to run smoothly. According to the official Google Play Store page, PUBG Mobile's recommended system requirements are:

      -
• Android version: 5.1.1 or newer
• RAM: 2GB or more (4GB recommended)
• Storage: 2GB or more (4GB recommended) of free space
• Processor: Snapdragon 430/ Kirin 655/ Exynos 8895/ Mediatek Helio G90T or higher
      -

      If your device doesn't meet these requirements, you can try PUBG Mobile Lite, which is a lighter version of the game designed for low-end devices.

      -

      PUBG Mobile has many features that make it one of the best mobile shooting games. Some of these features are:

      -
• Epic battle royale gameplay with up to 100 players in each match
• Lots of maps and battles with different terrains, weather, and time of day
• Various game modes such as Classic, Payload, Arena, Infection, etc.
• Customizable controls, training mode, and voice chat with friends
• Realistic graphics, physics, and sound effects
• Frequent updates with new items, maps, modes, events, etc.
      -

      How to download PUBG Mobile apk + obb highly compressed

      -

      To download PUBG Mobile apk + obb highly compressed, you will need two files: an APK file and an OBB file. The APK file is the application file that installs the game on your device. The OBB file is the data file that contains all the game assets such as graphics, sounds, etc.

      -

You can download these files from various sources on the internet, but make sure you use trusted and reliable links. Here are some links that you can use to download PUBG Mobile apk + obb highly compressed:

• APK file: [Download PUBG Mobile 1.6.0 APK]
• OBB file: [Download PUBG Mobile 1.6.0 OBB]

These files are from the latest update of PUBG Mobile, which is version 1.6.0. The update brings a new mode called Flora Menace, which features alien plants invading the maps. The update also adds new weapons, vehicles, skins, and more. To install PUBG Mobile apk + obb highly compressed, follow these steps:

      Step 1: Download the APK and OBB files from the links provided

      -

      First, you need to download the APK and OBB files from the links above. You can use any browser or download manager to do this. Make sure you have enough storage space on your device before downloading the files.

      -


      -

      Step 2: Enable the installation from unknown sources option on your device

      -

      Since you are installing the game from a third-party source, you need to enable the installation from unknown sources option on your device. This will allow you to install apps that are not from the Google Play Store.

      -

      To enable this option, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but just ignore it and tap OK.

      -

      Step 3: Install the APK file but do not open it yet

      -

      Next, you need to install the APK file that you downloaded in step 1. To do this, locate the file in your device's file manager and tap on it. You may see a prompt asking you to confirm the installation, just tap Install and wait for it to finish.

      -

      Do not open the game yet after installing the APK file. You still need to copy the OBB file to the right folder.

      -

      Step 4: Copy the OBB file to the Android/OBB/com.tencent.ig folder

      -

      Now, you need to copy the OBB file that you downloaded in step 1 to the Android/OBB/com.tencent.ig folder on your device's internal storage. This folder is where all the game data is stored.

      -

      If you don't have this folder, you can create it manually using your file manager. Just make sure you name it exactly as shown above.

      -

      Once you have copied the OBB file to the folder, you are ready to launch the game.
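
For readers comfortable with a command line, steps 3 and 4 can also be done over USB with adb instead of a file manager. The Python sketch below is only an illustration under a few assumptions: adb is installed and on your PATH, USB debugging is enabled on your phone, and the local file names are hypothetical placeholders for the files you downloaded; the destination is the Android/OBB/com.tencent.ig folder described above.

import subprocess  # standard library; used to call the adb command-line tool

APK_LOCAL = "pubg-mobile-1.6.0.apk"             # hypothetical placeholder for the downloaded APK
OBB_LOCAL = "pubg-mobile-1.6.0.obb"             # hypothetical placeholder for the downloaded OBB
OBB_DIR = "/sdcard/Android/obb/com.tencent.ig"  # the OBB folder named in Step 4

def sideload(apk_path: str, obb_path: str, obb_dir: str) -> None:
    # Install the APK without launching it (matches Step 3).
    subprocess.run(["adb", "install", "-r", apk_path], check=True)
    # Create the OBB folder if it does not exist, then copy the OBB file into it (matches Step 4).
    subprocess.run(["adb", "shell", "mkdir", "-p", obb_dir], check=True)
    subprocess.run(["adb", "push", obb_path, obb_dir + "/"], check=True)

if __name__ == "__main__":
    sideload(APK_LOCAL, OBB_LOCAL, OBB_DIR)

The effect is identical to copying the file manually; once the OBB is in place, you can move on to Step 5 and launch the game.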

      -

      Step 5: Launch the game and enjoy the latest update

      -

      Finally, you can launch PUBG Mobile and enjoy the latest update with all its features and content. You may need to verify your account and download some additional data before playing.

      -

      You can also check for updates within the game by tapping on the Settings icon and then on Update.

      -

      Tips and tricks for PUBG Mobile

      -

      PUBG Mobile is a fun and challenging game that requires skill, strategy, and luck to win. Here are some tips and tricks that can help you improve your gameplay and win more matches:

      -

      Choose where to land carefully and loot efficiently

      -

      The first thing you need to do in PUBG Mobile is to choose where to land on the map. This will determine how much loot you can find and how many enemies you will encounter.

      -

      You can either land in a hot spot with lots of loot and players, such as Pochinki, School, or Military Base, or in a more secluded area with less loot and players, such as Farm, Shelter, or Ruins.

      -

      The choice depends on your play style and preference. If you want more action and challenge, go for a hot spot. If you want more survival and stealth, go for a secluded area.

      -

      Once you land, loot as fast as possible and equip yourself with weapons, armor, ammo, meds, etc. Don't waste time looting unnecessary items or searching empty buildings. Also, be aware of your surroundings and watch out for enemies nearby.

      -

      Manage your ammunition and share resources with your teammates

      -

      Ammunition is one of the most important resources in PUBG Mobile. You need it to shoot your enemies and defend yourself. However, ammo is also limited and scarce in some areas.

      -

You should always manage your ammunition wisely and avoid wasting it on unnecessary or missed shots. Carry different types of ammo for your different weapons, such as 5.56mm, 7.62mm, and 9mm, and use the right ammo for the right weapon, for example 5.56mm for the M416 or 7.62mm for the AKM.

      -

      If you are playing in a team, you should also share your resources with your teammates. You can drop ammo, meds, attachments, etc. for your teammates to pick up. You can also request resources from your teammates by using the quick chat or voice chat options. Sharing resources can help you and your teammates survive longer and fight better.

      -

      Use the best graphics settings and customize your controls

      -

      PUBG Mobile has various graphics settings that you can adjust to optimize your game performance and experience. You can access these settings by tapping on the Settings icon and then on Graphics.

      -

      You can choose from different graphics options such as Smooth, Balanced, HD, HDR, or Ultra HD. You can also choose from different frame rate options such as Low, Medium, High, Ultra, or Extreme. The higher the graphics and frame rate options, the better the game will look and feel, but the more battery and data it will consume.

      -

      You should choose the graphics settings that suit your device and network conditions. You can also use the Auto Adjust Graphics option to let the game automatically adjust the graphics settings based on your device's performance.

      -

      Another thing you can do to improve your gameplay is to customize your controls. You can access these settings by tapping on the Settings icon and then on Controls.

      -

      You can choose from different control presets such as Classic, Dynamic, or Customizable. You can also adjust the size, position, and opacity of the buttons on your screen. You can also enable or disable various features such as Aim Assist, Peek & Fire, Gyroscope, etc.

      -

      You should customize your controls according to your preference and comfort. You can also use the Training Mode to practice your controls and skills.

      -

      Avoid unnecessary risks and learn from your mistakes

      -

      PUBG Mobile is a game of survival and strategy. You should always avoid unnecessary risks that can get you killed or expose you to enemies. Some of these risks are:

      -
• Landing in a crowded area with many enemies
• Looting in an open area without cover
• Shooting without a suppressor or a scope
• Driving recklessly or loudly
• Moving out of the safe zone too late or too early
• Engaging in a fight without backup or advantage
      -

      You should always play smart and safe, and use your common sense and intuition to make the best decisions. You should also learn from your mistakes and analyze what went wrong and how you can improve.

      -

      Conclusion

      -

      PUBG Mobile is an amazing game that offers a thrilling and immersive battle royale experience on your mobile device. You can download PUBG Mobile apk + obb highly compressed by following the steps we have shown in this article. You can also use our tips and tricks to enhance your gameplay and win more matches.

      -

      We hope you enjoyed this article and found it helpful. If you did, please share it with your friends and fellow PUBG Mobile players. Also, don't forget to leave us a comment below and let us know what you think about PUBG Mobile and its latest update.

      -

      FAQs

      -

      Here are some of the frequently asked questions about PUBG Mobile apk + obb highly compressed:

      -

      What is the size of PUBG Mobile apk + obb highly compressed?

      -

      The size of PUBG Mobile apk + obb highly compressed is about 770 MB in total. The APK file is about 70 MB, while the OBB file is about 700 MB. The size of the game after installation will be about 1 GB.

      -

      How to update PUBG Mobile apk + obb highly compressed?

      -

      To update PUBG Mobile apk + obb highly compressed, you will need to download the latest version of the APK and OBB files from the links we have provided in this article. Then, you will need to follow the same steps as before to install them on your device.

      -

      Is PUBG Mobile apk + obb highly compressed safe and legal?

      -

      PUBG Mobile apk + obb highly compressed is safe and legal as long as you download it from trusted and reliable sources like ours. However, we do not take any responsibility for any issues or damages that may occur due to downloading or installing these files. Use them at your own risk.

      -

      How to play PUBG Mobile with friends?

      -

      To play PUBG Mobile with friends, you need to create or join a team. You can do this by tapping on the Team icon on the bottom left corner of the main menu. You can either invite your friends from your contacts, social media, or game friends list, or join a random team with other players. You can also create a room and invite your friends to join with a room ID and password.

      -

      What are some of the best weapons and vehicles in PUBG Mobile?

      -

      PUBG Mobile has a wide variety of weapons and vehicles that you can use to fight and move around the map. Some of the best weapons are:

      -
• M416: A versatile assault rifle that can be equipped with various attachments and has good accuracy, stability, and fire rate.
• AWM: A powerful sniper rifle that can kill enemies with one shot if aimed at the head or chest. It uses rare .300 Magnum ammo and can only be found in air drops.
• DP-28: A light machine gun that has a large magazine capacity and high damage output. It is ideal for suppressing fire and close-range combat.
• Pan: A melee weapon that can also deflect bullets and grenades. It is useful for finishing off enemies or protecting yourself from behind.
      -

      Some of the best vehicles are:

      -
• UAZ: A four-seater jeep that has good speed, durability, and off-road capability. It is suitable for most terrains and situations.
• Buggy: A two-seater car that has excellent acceleration, maneuverability, and stability. It is ideal for escaping or chasing enemies.
• Motorcycle: A two-seater bike that has the fastest speed and the highest jump ability. It is great for performing stunts and avoiding obstacles.
• Boat: A water vehicle that can carry up to four players. It is useful for crossing rivers or lakes or reaching islands.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Whats New in MOTOTRBO CPS 2.0 Version 2.21.61.0? A Complete Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Whats New in MOTOTRBO CPS 2.0 Version 2.21.61.0? A Complete Guide.md deleted file mode 100644 index bf66e5231549bc87169647030798399cfeccb860..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Whats New in MOTOTRBO CPS 2.0 Version 2.21.61.0? A Complete Guide.md +++ /dev/null @@ -1,142 +0,0 @@ -
      -

      How to Download and Install MOTOTRBO CPS 2.0 Version 2.21.61.0

      -

      If you are a dealer or service technician who needs to program and configure Motorola MOTOTRBO radios, you will need a reliable software tool that can help you do that. One such tool is MOTOTRBO Customer Programming Software (CPS) 2.0, which is a radio programming software for Windows PCs.

      -

      MOTOTRBO CPS 2.0 allows you to access and program (Read, Write, or Clone) the codeplug of the MOTOTRBO subscriber and repeater in various systems, such as conventional, IP Site Connect, Capacity Plus, Linked Capacity Plus, Connect Plus, Capacity Max, and WAVE PTX. It also allows you to update or recover the codeplug and firmware of your radios as well as manage them in-the-field.

      -

      mototrbo cps 2.0 version 2.21.61.0 download


      Downloadhttps://urlca.com/2uO9wm



      -

      In this article, we will show you how to download and install MOTOTRBO CPS 2.0 version 2.21.61.0, which is the latest version as of June 2021. We will also show you how to register and activate its features, how to use it to program and configure your radios, how to update or recover their codeplug and firmware, and how to troubleshoot some common issues with it.

      -

      Before we begin, make sure you have a compatible Windows PC with an Internet connection, a USB cable or a programming cable for your radio, and an entitlement ID (EID) for the software features you want to use. If you don't have an EID, you can purchase one from a Motorola Solutions sales representative or an authorized dealer.

      -

      How to Register and Activate MOTOTRBO CPS 2.0 Features

      -

      MOTOTRBO CPS 2.0 has several features that require registration and activation with an EID. These features include:

      -
• Basic Privacy
• Enhanced Privacy
• Transmit Interrupt
• Man Down
• Indoor Location Tracking
• Outdoor Location Tracking
• Over the Air Programming (OTAP)
• Text to Speech
• Bluetooth Audio
• Bluetooth Data
• Wi-Fi
• WAVE PTX
      -

      To register and activate these features, follow these steps:

      -
1. Open MOTOTRBO CPS 2.0 and click on Licenses -> Register Application Licenses.
2. Enter your EID and click Query. The software will connect to the Motorola Solutions server and display the available features for your EID.
3. Select the feature you want to register and click Register. The software will register the feature and show a confirmation message.
4. Verify that the feature is activated by clicking on Licenses -> View Application Licenses. The software will show the status of your registered features.
      -

      How to Use MOTOTRBO CPS 2.0 to Program and Configure Your Radios

      -

      MOTOTRBO CPS 2.0 allows you to program and configure your radios by reading and writing their codeplug, which is a file that contains all the settings and parameters of the radio. You can make changes to the codeplug using the tabs and menus of the software, such as General Settings, Channels, Zones, Scan Lists, Contacts, Privacy, etc. You can also clone the codeplug from one radio to another radio of the same model. To use MOTOTRBO CPS 2.0 to program and configure your radios, follow these steps:

      -
1. Connect your radio to your PC using a USB cable or a programming cable. Make sure the radio is turned on and the PC recognizes it as a device.
2. Click on Device -> Read to read the codeplug from your radio. The software will display the codeplug information in the main window.
3. Make the changes you want in the codeplug using the tabs and menus of the software. You can use the Help -> Contents option to access the user guide for more details on each tab and menu.
4. Click on Device -> Write to write the codeplug back to your radio. The software will show a progress bar and a confirmation message when done.
      -

      How to Update or Recover the Codeplug and Firmware of Your Radios Using MOTOTRBO CPS 2.0

      -

      MOTOTRBO CPS 2.0 also allows you to update or recover the codeplug and firmware of your radios using a firmware update wizard. Firmware is a software that controls the functionality of the radio hardware. Updating or recovering the codeplug and firmware can help you fix some issues with your radios or enhance their performance and compatibility. To update or recover the codeplug and firmware of your radios using MOTOTRBO CPS 2.0, follow these steps:

      -
1. Connect your radio to your PC using a USB cable or a programming cable. Make sure the radio is turned on and the PC recognizes it as a device.
2. Click on Device -> Update Firmware to open the firmware update wizard.
3. Select the firmware version you want to update to from the drop-down list and click Next. The software will show you a summary of the update process and ask you to confirm.
4. Follow the instructions on the screen and wait for the update to complete. The software will show a progress bar and a confirmation message when done.
      -
      How to Troubleshoot Common Issues with MOTOTRBO CPS 2.0
      -

      MOTOTRBO CPS 2.0 is a powerful and user-friendly software tool, but sometimes you may encounter some issues with it that prevent you from programming or configuring your radios properly. Here are some of the common issues and their possible solutions:

Issue: The software cannot read or write to your radio.
Solution: Check your cable connection, COM port settings, radio model, and firmware version. Make sure they are compatible and working properly. You can use the Device -> Test Port option to test your cable connection and COM port settings. You can also try using a different cable or PC.

Issue: The software shows an error message that says "The Software Update Management policy enabled on this device has expired and is not entitled to be updated to the specified firmware version".
Solution: This means that your radio has a Software Update Management (SUM) policy that limits the firmware versions it can be updated to. You need to contact a Motorola Solutions sales representative for more information or purchase a new EID for your device that allows you to update to the desired firmware version.

Issue: The software shows an error message that says "The codeplug is not compatible with the device".
Solution: This means that the codeplug you are trying to read or write is not compatible with the radio model or firmware version. You need to use a codeplug that matches your radio model and firmware version. You can use the Tools -> Codeplug Convert option to convert a codeplug from one radio model to another, but some settings may not be transferred correctly.

Issue: The software shows an error message that says "The device is not responding".
Solution: This means that the software cannot communicate with the radio. You need to check your cable connection, COM port settings, radio power, and battery level. You can also try resetting your radio by turning it off and on again.

Issue: The software shows an error message that says "The device is locked".
Solution: This means that the radio has a password protection feature that prevents unauthorized access. You need to enter the correct password to read or write the codeplug. If you don't know the password, you need to contact the owner or administrator of the radio.
      -
      Conclusion and FAQs
      -

      MOTOTRBO CPS 2.0 is a great software tool for programming and configuring your Motorola MOTOTRBO radios. It allows you to access and modify the codeplug of your radios, update or recover their codeplug and firmware, register and activate their features, and troubleshoot some common issues with them. In this article, we have shown you how to download and install MOTOTRBO CPS 2.0 version 2.21.61.0, which is the latest version as of June 2021. We have also shown you how to use it to program and configure your radios, how to update or recover their codeplug and firmware, and how to troubleshoot some common issues with it.

      -


      -

      We hope you have found this article helpful and informative. If you have any feedback or suggestions, please let us know in the comments section below. Thank you for reading!

      -

      Here are some FAQs about MOTOTRBO CPS 2.0:

      -
1. What are the system requirements for MOTOTRBO CPS 2.0?

MOTOTRBO CPS 2.0 requires a Windows PC with at least 4 GB of RAM, 1 GB of free disk space, a USB port, an Internet connection, and one of the following operating systems: Windows 7 (32-bit or 64-bit), Windows 8 (32-bit or 64-bit), Windows 8.1 (32-bit or 64-bit), Windows 10 (32-bit or 64-bit).

2. How much does MOTOTRBO CPS 2.0 cost?

MOTOTRBO CPS 2.0 is paid software that requires a license key to activate. The license key can be purchased from a Motorola Solutions sales representative or an authorized dealer. The price may vary depending on the region, currency, and features you want to use.

3. How can I get the latest firmware version for my radio?

You can get the latest firmware version for your radio from the Motorola Solutions support portal. You need to log in with your Motorola Solutions account and enter your radio serial number or model number to download the firmware file. You can then use MOTOTRBO CPS 2.0 to update the firmware of your radio using the firmware update wizard.

4. How can I backup and restore my codeplug?

You can backup and restore your codeplug using MOTOTRBO CPS 2.0. To backup your codeplug, read it from your radio and save it as a file on your PC using the File -> Save As option, with a name and location of your choice. To restore your codeplug, open the codeplug file on your PC with the File -> Open option and write it to your radio with the Device -> Write option.

5. How can I contact Motorola Solutions for support?

You can contact Motorola Solutions for support by visiting their website or calling their toll-free number. You can also use their online chat or email service to get in touch with their customer service or technical support team. You can find their contact information on their website or in the user guide of MOTOTRBO CPS 2.0.

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Casas Muertas Miguel Otero Silva Pdf Descargar Facil HOT.md b/spaces/contluForse/HuggingGPT/assets/Casas Muertas Miguel Otero Silva Pdf Descargar Facil HOT.md deleted file mode 100644 index e05378c9be9597f75052068524d4c43014428d68..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Casas Muertas Miguel Otero Silva Pdf Descargar Facil HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

      casas muertas miguel otero silva pdf descargar facil


      Download > https://ssurll.com/2uzxLb



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/contluForse/HuggingGPT/assets/Edius 6.53 Serial Number !NEW!.md b/spaces/contluForse/HuggingGPT/assets/Edius 6.53 Serial Number !NEW!.md deleted file mode 100644 index 37188366e7997205889c9f8003bfa51db8c92cc4..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Edius 6.53 Serial Number !NEW!.md +++ /dev/null @@ -1,39 +0,0 @@ -
      -

      How to Register and Activate EDIUS 6.53

      -

EDIUS 6.53 is video editing software that offers high performance and versatility. It supports various formats, resolutions, and frame rates, and allows you to edit in real time. To use EDIUS 6.53, you need to register and activate it with a serial number.

      -

A serial number is a combination of a 6-digit and a 16-digit code that is pasted on the product package. You can use EDIUS in TRIAL mode for 31 days without a serial number, but after that you need to register it online or offline. Here are the steps to register and activate EDIUS 6.53:

      -

      edius 6.53 serial number


      Download File ★★★ https://ssurll.com/2uzy5h



      -

      Online Registration

      -

      If you have an internet connection, you can register and activate EDIUS online. This is the easiest and fastest way to get started with EDIUS.

      -
1. Double-click the [EDIUS] icon on the desktop. During the first launch of EDIUS, the serial number input screen appears. Register according to the on-screen instructions.
2. Enter the serial number of 6 and 16 digits, which is pasted on the product package. Please note that the serial number cannot be reissued, so keep it in a safe place.
3. Click [Next] and follow the instructions to complete the online activation.
      -

      You can also access the serial number registration from [Help] → [Serial number registration] on EDIUS, or from [Start] → [All Programs] → [Grass Valley] → [Serial number registration].[^1^]

      -

      Offline Registration

      -

      If you do not have an internet connection, you can register and activate EDIUS offline. This requires a separate device that can access the internet, such as a smartphone or a tablet.

      -

      -
1. Start up GV LicenseManager on the EDIUS terminal. You can find it in [Start] → [All Programs] → [Grass Valley] → [GV LicenseManager].
2. Select products to activate licenses in the [License List] dialog box.
3. Click [Offline Activation Create ID File] and save the ID file to a USB drive or other removable media.
4. Transfer the ID file to a device that can access the internet.
5. Go to https://activation.grassvalley.com/activation/ on your device and upload the ID file.
6. Download the activation file from the website and transfer it back to the EDIUS terminal.
7. Click [Offline Activation Register Activation File] on GV LicenseManager and select the activation file.
8. Click [OK] to complete the offline activation.
      -

      You can also deactivate licenses offline by following similar steps with [Offline Deactivation Create ID File] and [Offline Deactivation Register Activation File].[^2^]

      -

      Moving License in Online Environment

      -

      If you want to move licenses between EDIUS terminals in the online environment, you can do so by deactivating licenses on the move source EDIUS terminal and activating them on the move destination EDIUS terminal.

      -
1. Start up GV LicenseManager on the move source EDIUS terminal.
2. Select products to deactivate licenses in the [License List] dialog box.
3. Click [Online deactivation], and then click [Yes]. The software accesses the activation server automatically and deactivates the licenses.
4. Start up EDIUS on the move destination EDIUS terminal.
5. Enter the same serial number as before and follow the instructions to complete the online activation.
      -

      You can also check part of your serial number by right-clicking the product in the [License List] dialog box and clicking [Confirm part of serial number].[^2^]

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/nfnet.py b/spaces/cooelf/Multimodal-CoT/timm/models/nfnet.py deleted file mode 100644 index 4e0f2b211155dc1e304cf076506929817c78d913..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/nfnet.py +++ /dev/null @@ -1,966 +0,0 @@ -""" Normalization Free Nets. NFNet, NF-RegNet, NF-ResNet (pre-activation) Models - -Paper: `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - -Paper: `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - -Official Deepmind JAX code: https://github.com/deepmind/deepmind-research/tree/master/nfnets - -Status: -* These models are a work in progress, experiments ongoing. -* Pretrained weights for two models so far, more to come. -* Model details updated to closer match official JAX code now that it's released -* NF-ResNet, NF-RegNet-B, and NFNet-F models supported - -Hacked together by / copyright Ross Wightman, 2021. -""" -import math -from dataclasses import dataclass, field -from collections import OrderedDict -from typing import Tuple, Optional -from functools import partial - -import torch -import torch.nn as nn - -from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from .helpers import build_model_with_cfg -from .registry import register_model -from .layers import ClassifierHead, DropPath, AvgPool2dSame, ScaledStdConv2d, ScaledStdConv2dSame,\ - get_act_layer, get_act_fn, get_attn, make_divisible - - -def _dcfg(url='', **kwargs): - return { - 'url': url, - 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7), - 'crop_pct': 0.9, 'interpolation': 'bicubic', - 'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD, - 'first_conv': 'stem.conv1', 'classifier': 'head.fc', - **kwargs - } - - -default_cfgs = dict( - dm_nfnet_f0=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f0-604f9c3a.pth', - pool_size=(6, 6), input_size=(3, 192, 192), test_input_size=(3, 256, 256), crop_pct=.9), - dm_nfnet_f1=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f1-fc540f82.pth', - pool_size=(7, 7), input_size=(3, 224, 224), test_input_size=(3, 320, 320), crop_pct=0.91), - dm_nfnet_f2=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f2-89875923.pth', - pool_size=(8, 8), input_size=(3, 256, 256), test_input_size=(3, 352, 352), crop_pct=0.92), - dm_nfnet_f3=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f3-d74ab3aa.pth', - pool_size=(10, 10), input_size=(3, 320, 320), test_input_size=(3, 416, 416), crop_pct=0.94), - dm_nfnet_f4=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f4-0ac5b10b.pth', - pool_size=(12, 12), input_size=(3, 384, 384), test_input_size=(3, 512, 512), crop_pct=0.951), - dm_nfnet_f5=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f5-ecb20ab1.pth', - pool_size=(13, 13), input_size=(3, 416, 416), test_input_size=(3, 544, 544), crop_pct=0.954), - dm_nfnet_f6=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-dnf-weights/dm_nfnet_f6-e0f12116.pth', - pool_size=(14, 14), input_size=(3, 448, 448), 
test_input_size=(3, 576, 576), crop_pct=0.956), - - nfnet_f0=_dcfg( - url='', pool_size=(6, 6), input_size=(3, 192, 192), test_input_size=(3, 256, 256)), - nfnet_f1=_dcfg( - url='', pool_size=(7, 7), input_size=(3, 224, 224), test_input_size=(3, 320, 320)), - nfnet_f2=_dcfg( - url='', pool_size=(8, 8), input_size=(3, 256, 256), test_input_size=(3, 352, 352)), - nfnet_f3=_dcfg( - url='', pool_size=(10, 10), input_size=(3, 320, 320), test_input_size=(3, 416, 416)), - nfnet_f4=_dcfg( - url='', pool_size=(12, 12), input_size=(3, 384, 384), test_input_size=(3, 512, 512)), - nfnet_f5=_dcfg( - url='', pool_size=(13, 13), input_size=(3, 416, 416), test_input_size=(3, 544, 544)), - nfnet_f6=_dcfg( - url='', pool_size=(14, 14), input_size=(3, 448, 448), test_input_size=(3, 576, 576)), - nfnet_f7=_dcfg( - url='', pool_size=(15, 15), input_size=(3, 480, 480), test_input_size=(3, 608, 608)), - - nfnet_f0s=_dcfg( - url='', pool_size=(6, 6), input_size=(3, 192, 192), test_input_size=(3, 256, 256)), - nfnet_f1s=_dcfg( - url='', pool_size=(7, 7), input_size=(3, 224, 224), test_input_size=(3, 320, 320)), - nfnet_f2s=_dcfg( - url='', pool_size=(8, 8), input_size=(3, 256, 256), test_input_size=(3, 352, 352)), - nfnet_f3s=_dcfg( - url='', pool_size=(10, 10), input_size=(3, 320, 320), test_input_size=(3, 416, 416)), - nfnet_f4s=_dcfg( - url='', pool_size=(12, 12), input_size=(3, 384, 384), test_input_size=(3, 512, 512)), - nfnet_f5s=_dcfg( - url='', pool_size=(13, 13), input_size=(3, 416, 416), test_input_size=(3, 544, 544)), - nfnet_f6s=_dcfg( - url='', pool_size=(14, 14), input_size=(3, 448, 448), test_input_size=(3, 576, 576)), - nfnet_f7s=_dcfg( - url='', pool_size=(15, 15), input_size=(3, 480, 480), test_input_size=(3, 608, 608)), - - nfnet_l0=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/nfnet_l0_ra2-45c6688d.pth', - pool_size=(7, 7), input_size=(3, 224, 224), test_input_size=(3, 288, 288), crop_pct=1.0), - eca_nfnet_l0=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecanfnet_l0_ra2-e3e9ac50.pth', - hf_hub='timm/eca_nfnet_l0', - pool_size=(7, 7), input_size=(3, 224, 224), test_input_size=(3, 288, 288), crop_pct=1.0), - eca_nfnet_l1=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecanfnet_l1_ra2-7dce93cd.pth', - pool_size=(8, 8), input_size=(3, 256, 256), test_input_size=(3, 320, 320), crop_pct=1.0), - eca_nfnet_l2=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ecanfnet_l2_ra3-da781a61.pth', - pool_size=(10, 10), input_size=(3, 320, 320), test_input_size=(3, 384, 384), crop_pct=1.0), - eca_nfnet_l3=_dcfg( - url='', - pool_size=(11, 11), input_size=(3, 352, 352), test_input_size=(3, 448, 448), crop_pct=1.0), - - nf_regnet_b0=_dcfg( - url='', pool_size=(6, 6), input_size=(3, 192, 192), test_input_size=(3, 256, 256), first_conv='stem.conv'), - nf_regnet_b1=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/nf_regnet_b1_256_ra2-ad85cfef.pth', - pool_size=(8, 8), input_size=(3, 256, 256), test_input_size=(3, 288, 288), first_conv='stem.conv'), # NOT to paper spec - nf_regnet_b2=_dcfg( - url='', pool_size=(8, 8), input_size=(3, 240, 240), test_input_size=(3, 272, 272), first_conv='stem.conv'), - nf_regnet_b3=_dcfg( - url='', pool_size=(9, 9), input_size=(3, 288, 288), test_input_size=(3, 320, 320), first_conv='stem.conv'), - nf_regnet_b4=_dcfg( - url='', pool_size=(10, 10), 
input_size=(3, 320, 320), test_input_size=(3, 384, 384), first_conv='stem.conv'), - nf_regnet_b5=_dcfg( - url='', pool_size=(12, 12), input_size=(3, 384, 384), test_input_size=(3, 456, 456), first_conv='stem.conv'), - - nf_resnet26=_dcfg(url='', first_conv='stem.conv'), - nf_resnet50=_dcfg( - url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/nf_resnet50_ra2-9f236009.pth', - pool_size=(8, 8), input_size=(3, 256, 256), test_input_size=(3, 288, 288), crop_pct=0.94, first_conv='stem.conv'), - nf_resnet101=_dcfg(url='', first_conv='stem.conv'), - - nf_seresnet26=_dcfg(url='', first_conv='stem.conv'), - nf_seresnet50=_dcfg(url='', first_conv='stem.conv'), - nf_seresnet101=_dcfg(url='', first_conv='stem.conv'), - - nf_ecaresnet26=_dcfg(url='', first_conv='stem.conv'), - nf_ecaresnet50=_dcfg(url='', first_conv='stem.conv'), - nf_ecaresnet101=_dcfg(url='', first_conv='stem.conv'), -) - - -@dataclass -class NfCfg: - depths: Tuple[int, int, int, int] - channels: Tuple[int, int, int, int] - alpha: float = 0.2 - stem_type: str = '3x3' - stem_chs: Optional[int] = None - group_size: Optional[int] = None - attn_layer: Optional[str] = None - attn_kwargs: dict = None - attn_gain: float = 2.0 # NF correction gain to apply if attn layer is used - width_factor: float = 1.0 - bottle_ratio: float = 0.5 - num_features: int = 0 # num out_channels for final conv, no final_conv if 0 - ch_div: int = 8 # round channels % 8 == 0 to keep tensor-core use optimal - reg: bool = False # enables EfficientNet-like options used in RegNet variants, expand from in_chs, se in middle - extra_conv: bool = False # extra 3x3 bottleneck convolution for NFNet models - gamma_in_act: bool = False - same_padding: bool = False - std_conv_eps: float = 1e-5 - skipinit: bool = False # disabled by default, non-trivial performance impact - zero_init_fc: bool = False - act_layer: str = 'silu' - - -def _nfres_cfg( - depths, channels=(256, 512, 1024, 2048), group_size=None, act_layer='relu', attn_layer=None, attn_kwargs=None): - attn_kwargs = attn_kwargs or {} - cfg = NfCfg( - depths=depths, channels=channels, stem_type='7x7_pool', stem_chs=64, bottle_ratio=0.25, - group_size=group_size, act_layer=act_layer, attn_layer=attn_layer, attn_kwargs=attn_kwargs) - return cfg - - -def _nfreg_cfg(depths, channels=(48, 104, 208, 440)): - num_features = 1280 * channels[-1] // 440 - attn_kwargs = dict(rd_ratio=0.5) - cfg = NfCfg( - depths=depths, channels=channels, stem_type='3x3', group_size=8, width_factor=0.75, bottle_ratio=2.25, - num_features=num_features, reg=True, attn_layer='se', attn_kwargs=attn_kwargs) - return cfg - - -def _nfnet_cfg( - depths, channels=(256, 512, 1536, 1536), group_size=128, bottle_ratio=0.5, feat_mult=2., - act_layer='gelu', attn_layer='se', attn_kwargs=None): - num_features = int(channels[-1] * feat_mult) - attn_kwargs = attn_kwargs if attn_kwargs is not None else dict(rd_ratio=0.5) - cfg = NfCfg( - depths=depths, channels=channels, stem_type='deep_quad', stem_chs=128, group_size=group_size, - bottle_ratio=bottle_ratio, extra_conv=True, num_features=num_features, act_layer=act_layer, - attn_layer=attn_layer, attn_kwargs=attn_kwargs) - return cfg - - -def _dm_nfnet_cfg(depths, channels=(256, 512, 1536, 1536), act_layer='gelu', skipinit=True): - cfg = NfCfg( - depths=depths, channels=channels, stem_type='deep_quad', stem_chs=128, group_size=128, - bottle_ratio=0.5, extra_conv=True, gamma_in_act=True, same_padding=True, skipinit=skipinit, - num_features=int(channels[-1] * 2.0), 
act_layer=act_layer, attn_layer='se', attn_kwargs=dict(rd_ratio=0.5)) - return cfg - - - -model_cfgs = dict( - # NFNet-F models w/ GELU compatible with DeepMind weights - dm_nfnet_f0=_dm_nfnet_cfg(depths=(1, 2, 6, 3)), - dm_nfnet_f1=_dm_nfnet_cfg(depths=(2, 4, 12, 6)), - dm_nfnet_f2=_dm_nfnet_cfg(depths=(3, 6, 18, 9)), - dm_nfnet_f3=_dm_nfnet_cfg(depths=(4, 8, 24, 12)), - dm_nfnet_f4=_dm_nfnet_cfg(depths=(5, 10, 30, 15)), - dm_nfnet_f5=_dm_nfnet_cfg(depths=(6, 12, 36, 18)), - dm_nfnet_f6=_dm_nfnet_cfg(depths=(7, 14, 42, 21)), - - # NFNet-F models w/ GELU (I will likely deprecate/remove these models and just keep dm_ ver for GELU) - nfnet_f0=_nfnet_cfg(depths=(1, 2, 6, 3)), - nfnet_f1=_nfnet_cfg(depths=(2, 4, 12, 6)), - nfnet_f2=_nfnet_cfg(depths=(3, 6, 18, 9)), - nfnet_f3=_nfnet_cfg(depths=(4, 8, 24, 12)), - nfnet_f4=_nfnet_cfg(depths=(5, 10, 30, 15)), - nfnet_f5=_nfnet_cfg(depths=(6, 12, 36, 18)), - nfnet_f6=_nfnet_cfg(depths=(7, 14, 42, 21)), - nfnet_f7=_nfnet_cfg(depths=(8, 16, 48, 24)), - - # NFNet-F models w/ SiLU (much faster in PyTorch) - nfnet_f0s=_nfnet_cfg(depths=(1, 2, 6, 3), act_layer='silu'), - nfnet_f1s=_nfnet_cfg(depths=(2, 4, 12, 6), act_layer='silu'), - nfnet_f2s=_nfnet_cfg(depths=(3, 6, 18, 9), act_layer='silu'), - nfnet_f3s=_nfnet_cfg(depths=(4, 8, 24, 12), act_layer='silu'), - nfnet_f4s=_nfnet_cfg(depths=(5, 10, 30, 15), act_layer='silu'), - nfnet_f5s=_nfnet_cfg(depths=(6, 12, 36, 18), act_layer='silu'), - nfnet_f6s=_nfnet_cfg(depths=(7, 14, 42, 21), act_layer='silu'), - nfnet_f7s=_nfnet_cfg(depths=(8, 16, 48, 24), act_layer='silu'), - - # Experimental 'light' versions of NFNet-F that are little leaner - nfnet_l0=_nfnet_cfg( - depths=(1, 2, 6, 3), feat_mult=1.5, group_size=64, bottle_ratio=0.25, - attn_kwargs=dict(rd_ratio=0.25, rd_divisor=8), act_layer='silu'), - eca_nfnet_l0=_nfnet_cfg( - depths=(1, 2, 6, 3), feat_mult=1.5, group_size=64, bottle_ratio=0.25, - attn_layer='eca', attn_kwargs=dict(), act_layer='silu'), - eca_nfnet_l1=_nfnet_cfg( - depths=(2, 4, 12, 6), feat_mult=2, group_size=64, bottle_ratio=0.25, - attn_layer='eca', attn_kwargs=dict(), act_layer='silu'), - eca_nfnet_l2=_nfnet_cfg( - depths=(3, 6, 18, 9), feat_mult=2, group_size=64, bottle_ratio=0.25, - attn_layer='eca', attn_kwargs=dict(), act_layer='silu'), - eca_nfnet_l3=_nfnet_cfg( - depths=(4, 8, 24, 12), feat_mult=2, group_size=64, bottle_ratio=0.25, - attn_layer='eca', attn_kwargs=dict(), act_layer='silu'), - - # EffNet influenced RegNet defs. - # NOTE: These aren't quite the official ver, ch_div=1 must be set for exact ch counts. I round to ch_div=8. 
- nf_regnet_b0=_nfreg_cfg(depths=(1, 3, 6, 6)), - nf_regnet_b1=_nfreg_cfg(depths=(2, 4, 7, 7)), - nf_regnet_b2=_nfreg_cfg(depths=(2, 4, 8, 8), channels=(56, 112, 232, 488)), - nf_regnet_b3=_nfreg_cfg(depths=(2, 5, 9, 9), channels=(56, 128, 248, 528)), - nf_regnet_b4=_nfreg_cfg(depths=(2, 6, 11, 11), channels=(64, 144, 288, 616)), - nf_regnet_b5=_nfreg_cfg(depths=(3, 7, 14, 14), channels=(80, 168, 336, 704)), - # FIXME add B6-B8 - - # ResNet (preact, D style deep stem/avg down) defs - nf_resnet26=_nfres_cfg(depths=(2, 2, 2, 2)), - nf_resnet50=_nfres_cfg(depths=(3, 4, 6, 3)), - nf_resnet101=_nfres_cfg(depths=(3, 4, 23, 3)), - - nf_seresnet26=_nfres_cfg(depths=(2, 2, 2, 2), attn_layer='se', attn_kwargs=dict(rd_ratio=1/16)), - nf_seresnet50=_nfres_cfg(depths=(3, 4, 6, 3), attn_layer='se', attn_kwargs=dict(rd_ratio=1/16)), - nf_seresnet101=_nfres_cfg(depths=(3, 4, 23, 3), attn_layer='se', attn_kwargs=dict(rd_ratio=1/16)), - - nf_ecaresnet26=_nfres_cfg(depths=(2, 2, 2, 2), attn_layer='eca', attn_kwargs=dict()), - nf_ecaresnet50=_nfres_cfg(depths=(3, 4, 6, 3), attn_layer='eca', attn_kwargs=dict()), - nf_ecaresnet101=_nfres_cfg(depths=(3, 4, 23, 3), attn_layer='eca', attn_kwargs=dict()), - -) - - -class GammaAct(nn.Module): - def __init__(self, act_type='relu', gamma: float = 1.0, inplace=False): - super().__init__() - self.act_fn = get_act_fn(act_type) - self.gamma = gamma - self.inplace = inplace - - def forward(self, x): - return self.act_fn(x, inplace=self.inplace).mul_(self.gamma) - - -def act_with_gamma(act_type, gamma: float = 1.): - def _create(inplace=False): - return GammaAct(act_type, gamma=gamma, inplace=inplace) - return _create - - -class DownsampleAvg(nn.Module): - def __init__( - self, in_chs, out_chs, stride=1, dilation=1, first_dilation=None, conv_layer=ScaledStdConv2d): - """ AvgPool Downsampling as in 'D' ResNet variants. Support for dilation.""" - super(DownsampleAvg, self).__init__() - avg_stride = stride if dilation == 1 else 1 - if stride > 1 or dilation > 1: - avg_pool_fn = AvgPool2dSame if avg_stride == 1 and dilation > 1 else nn.AvgPool2d - self.pool = avg_pool_fn(2, avg_stride, ceil_mode=True, count_include_pad=False) - else: - self.pool = nn.Identity() - self.conv = conv_layer(in_chs, out_chs, 1, stride=1) - - def forward(self, x): - return self.conv(self.pool(x)) - - -class NormFreeBlock(nn.Module): - """Normalization-Free pre-activation block. 
- """ - - def __init__( - self, in_chs, out_chs=None, stride=1, dilation=1, first_dilation=None, - alpha=1.0, beta=1.0, bottle_ratio=0.25, group_size=None, ch_div=1, reg=True, extra_conv=False, - skipinit=False, attn_layer=None, attn_gain=2.0, act_layer=None, conv_layer=None, drop_path_rate=0.): - super().__init__() - first_dilation = first_dilation or dilation - out_chs = out_chs or in_chs - # RegNet variants scale bottleneck from in_chs, otherwise scale from out_chs like ResNet - mid_chs = make_divisible(in_chs * bottle_ratio if reg else out_chs * bottle_ratio, ch_div) - groups = 1 if not group_size else mid_chs // group_size - if group_size and group_size % ch_div == 0: - mid_chs = group_size * groups # correct mid_chs if group_size divisible by ch_div, otherwise error - self.alpha = alpha - self.beta = beta - self.attn_gain = attn_gain - - if in_chs != out_chs or stride != 1 or dilation != first_dilation: - self.downsample = DownsampleAvg( - in_chs, out_chs, stride=stride, dilation=dilation, first_dilation=first_dilation, conv_layer=conv_layer) - else: - self.downsample = None - - self.act1 = act_layer() - self.conv1 = conv_layer(in_chs, mid_chs, 1) - self.act2 = act_layer(inplace=True) - self.conv2 = conv_layer(mid_chs, mid_chs, 3, stride=stride, dilation=first_dilation, groups=groups) - if extra_conv: - self.act2b = act_layer(inplace=True) - self.conv2b = conv_layer(mid_chs, mid_chs, 3, stride=1, dilation=dilation, groups=groups) - else: - self.act2b = None - self.conv2b = None - if reg and attn_layer is not None: - self.attn = attn_layer(mid_chs) # RegNet blocks apply attn btw conv2 & 3 - else: - self.attn = None - self.act3 = act_layer() - self.conv3 = conv_layer(mid_chs, out_chs, 1, gain_init=1. if skipinit else 0.) - if not reg and attn_layer is not None: - self.attn_last = attn_layer(out_chs) # ResNet blocks apply attn after conv3 - else: - self.attn_last = None - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity() - self.skipinit_gain = nn.Parameter(torch.tensor(0.)) if skipinit else None - - def forward(self, x): - out = self.act1(x) * self.beta - - # shortcut branch - shortcut = x - if self.downsample is not None: - shortcut = self.downsample(out) - - # residual branch - out = self.conv1(out) - out = self.conv2(self.act2(out)) - if self.conv2b is not None: - out = self.conv2b(self.act2b(out)) - if self.attn is not None: - out = self.attn_gain * self.attn(out) - out = self.conv3(self.act3(out)) - if self.attn_last is not None: - out = self.attn_gain * self.attn_last(out) - out = self.drop_path(out) - - if self.skipinit_gain is not None: - out.mul_(self.skipinit_gain) # this slows things down more than expected, TBD - out = out * self.alpha + shortcut - return out - - -def create_stem(in_chs, out_chs, stem_type='', conv_layer=None, act_layer=None, preact_feature=True): - stem_stride = 2 - stem_feature = dict(num_chs=out_chs, reduction=2, module='stem.conv') - stem = OrderedDict() - assert stem_type in ('', 'deep', 'deep_tiered', 'deep_quad', '3x3', '7x7', 'deep_pool', '3x3_pool', '7x7_pool') - if 'deep' in stem_type: - if 'quad' in stem_type: - # 4 deep conv stack as in NFNet-F models - assert not 'pool' in stem_type - stem_chs = (out_chs // 8, out_chs // 4, out_chs // 2, out_chs) - strides = (2, 1, 1, 2) - stem_stride = 4 - stem_feature = dict(num_chs=out_chs // 2, reduction=2, module='stem.conv3') - else: - if 'tiered' in stem_type: - stem_chs = (3 * out_chs // 8, out_chs // 2, out_chs) # 'T' resnets in resnet.py - else: - stem_chs = (out_chs // 
2, out_chs // 2, out_chs) # 'D' ResNets - strides = (2, 1, 1) - stem_feature = dict(num_chs=out_chs // 2, reduction=2, module='stem.conv2') - last_idx = len(stem_chs) - 1 - for i, (c, s) in enumerate(zip(stem_chs, strides)): - stem[f'conv{i + 1}'] = conv_layer(in_chs, c, kernel_size=3, stride=s) - if i != last_idx: - stem[f'act{i + 2}'] = act_layer(inplace=True) - in_chs = c - elif '3x3' in stem_type: - # 3x3 stem conv as in RegNet - stem['conv'] = conv_layer(in_chs, out_chs, kernel_size=3, stride=2) - else: - # 7x7 stem conv as in ResNet - stem['conv'] = conv_layer(in_chs, out_chs, kernel_size=7, stride=2) - - if 'pool' in stem_type: - stem['pool'] = nn.MaxPool2d(3, stride=2, padding=1) - stem_stride = 4 - - return nn.Sequential(stem), stem_stride, stem_feature - - -# from https://github.com/deepmind/deepmind-research/tree/master/nfnets -_nonlin_gamma = dict( - identity=1.0, - celu=1.270926833152771, - elu=1.2716004848480225, - gelu=1.7015043497085571, - leaky_relu=1.70590341091156, - log_sigmoid=1.9193484783172607, - log_softmax=1.0002083778381348, - relu=1.7139588594436646, - relu6=1.7131484746932983, - selu=1.0008515119552612, - sigmoid=4.803835391998291, - silu=1.7881293296813965, - softsign=2.338853120803833, - softplus=1.9203323125839233, - tanh=1.5939117670059204, -) - - -class NormFreeNet(nn.Module): - """ Normalization-Free Network - - As described in : - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - and - `High-Performance Large-Scale Image Recognition Without Normalization` - https://arxiv.org/abs/2102.06171 - - This model aims to cover both the NFRegNet-Bx models as detailed in the paper's code snippets and - the (preact) ResNet models described earlier in the paper. - - There are a few differences: - * channels are rounded to be divisible by 8 by default (keep tensor core kernels happy), - this changes channel dim and param counts slightly from the paper models - * activation correcting gamma constants are moved into the ScaledStdConv as it has less performance - impact in PyTorch when done with the weight scaling there. This likely wasn't a concern in the JAX impl. - * a config option `gamma_in_act` can be enabled to not apply gamma in StdConv as described above, but - apply it in each activation. This is slightly slower, numerically different, but matches official impl. - * skipinit is disabled by default, it seems to have a rather drastic impact on GPU memory use and throughput - for what it is/does. Approx 8-10% throughput loss. - """ - def __init__(self, cfg: NfCfg, num_classes=1000, in_chans=3, global_pool='avg', output_stride=32, - drop_rate=0., drop_path_rate=0.): - super().__init__() - self.num_classes = num_classes - self.drop_rate = drop_rate - assert cfg.act_layer in _nonlin_gamma, f"Please add non-linearity constants for activation ({cfg.act_layer})." 
- conv_layer = ScaledStdConv2dSame if cfg.same_padding else ScaledStdConv2d - if cfg.gamma_in_act: - act_layer = act_with_gamma(cfg.act_layer, gamma=_nonlin_gamma[cfg.act_layer]) - conv_layer = partial(conv_layer, eps=cfg.std_conv_eps) - else: - act_layer = get_act_layer(cfg.act_layer) - conv_layer = partial(conv_layer, gamma=_nonlin_gamma[cfg.act_layer], eps=cfg.std_conv_eps) - attn_layer = partial(get_attn(cfg.attn_layer), **cfg.attn_kwargs) if cfg.attn_layer else None - - stem_chs = make_divisible((cfg.stem_chs or cfg.channels[0]) * cfg.width_factor, cfg.ch_div) - self.stem, stem_stride, stem_feat = create_stem( - in_chans, stem_chs, cfg.stem_type, conv_layer=conv_layer, act_layer=act_layer) - - self.feature_info = [stem_feat] - drop_path_rates = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(cfg.depths)).split(cfg.depths)] - prev_chs = stem_chs - net_stride = stem_stride - dilation = 1 - expected_var = 1.0 - stages = [] - for stage_idx, stage_depth in enumerate(cfg.depths): - stride = 1 if stage_idx == 0 and stem_stride > 2 else 2 - if net_stride >= output_stride and stride > 1: - dilation *= stride - stride = 1 - net_stride *= stride - first_dilation = 1 if dilation in (1, 2) else 2 - - blocks = [] - for block_idx in range(cfg.depths[stage_idx]): - first_block = block_idx == 0 and stage_idx == 0 - out_chs = make_divisible(cfg.channels[stage_idx] * cfg.width_factor, cfg.ch_div) - blocks += [NormFreeBlock( - in_chs=prev_chs, out_chs=out_chs, - alpha=cfg.alpha, - beta=1. / expected_var ** 0.5, - stride=stride if block_idx == 0 else 1, - dilation=dilation, - first_dilation=first_dilation, - group_size=cfg.group_size, - bottle_ratio=1. if cfg.reg and first_block else cfg.bottle_ratio, - ch_div=cfg.ch_div, - reg=cfg.reg, - extra_conv=cfg.extra_conv, - skipinit=cfg.skipinit, - attn_layer=attn_layer, - attn_gain=cfg.attn_gain, - act_layer=act_layer, - conv_layer=conv_layer, - drop_path_rate=drop_path_rates[stage_idx][block_idx], - )] - if block_idx == 0: - expected_var = 1. # expected var is reset after first block of each stage - expected_var += cfg.alpha ** 2 # Even if reset occurs, increment expected variance - first_dilation = dilation - prev_chs = out_chs - self.feature_info += [dict(num_chs=prev_chs, reduction=net_stride, module=f'stages.{stage_idx}')] - stages += [nn.Sequential(*blocks)] - self.stages = nn.Sequential(*stages) - - if cfg.num_features: - # The paper NFRegNet models have an EfficientNet-like final head convolution. 
- self.num_features = make_divisible(cfg.width_factor * cfg.num_features, cfg.ch_div) - self.final_conv = conv_layer(prev_chs, self.num_features, 1) - self.feature_info[-1] = dict(num_chs=self.num_features, reduction=net_stride, module=f'final_conv') - else: - self.num_features = prev_chs - self.final_conv = nn.Identity() - self.final_act = act_layer(inplace=cfg.num_features > 0) - - self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate) - - for n, m in self.named_modules(): - if 'fc' in n and isinstance(m, nn.Linear): - if cfg.zero_init_fc: - nn.init.zeros_(m.weight) - else: - nn.init.normal_(m.weight, 0., .01) - if m.bias is not None: - nn.init.zeros_(m.bias) - elif isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='linear') - if m.bias is not None: - nn.init.zeros_(m.bias) - - def get_classifier(self): - return self.head.fc - - def reset_classifier(self, num_classes, global_pool='avg'): - self.head = ClassifierHead(self.num_features, num_classes, pool_type=global_pool, drop_rate=self.drop_rate) - - def forward_features(self, x): - x = self.stem(x) - x = self.stages(x) - x = self.final_conv(x) - x = self.final_act(x) - return x - - def forward(self, x): - x = self.forward_features(x) - x = self.head(x) - return x - - -def _create_normfreenet(variant, pretrained=False, **kwargs): - model_cfg = model_cfgs[variant] - feature_cfg = dict(flatten_sequential=True) - return build_model_with_cfg( - NormFreeNet, variant, pretrained, - default_cfg=default_cfgs[variant], - model_cfg=model_cfg, - feature_cfg=feature_cfg, - **kwargs) - - -@register_model -def dm_nfnet_f0(pretrained=False, **kwargs): - """ NFNet-F0 (DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f0', pretrained=pretrained, **kwargs) - - -@register_model -def dm_nfnet_f1(pretrained=False, **kwargs): - """ NFNet-F1 (DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f1', pretrained=pretrained, **kwargs) - - -@register_model -def dm_nfnet_f2(pretrained=False, **kwargs): - """ NFNet-F2 (DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f2', pretrained=pretrained, **kwargs) - - -@register_model -def dm_nfnet_f3(pretrained=False, **kwargs): - """ NFNet-F3 (DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f3', pretrained=pretrained, **kwargs) - - -@register_model -def dm_nfnet_f4(pretrained=False, **kwargs): - """ NFNet-F4 (DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f4', pretrained=pretrained, **kwargs) - - -@register_model -def dm_nfnet_f5(pretrained=False, **kwargs): - """ NFNet-F5 (DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f5', pretrained=pretrained, **kwargs) - - -@register_model -def dm_nfnet_f6(pretrained=False, **kwargs): - """ NFNet-F6 
(DeepMind weight compatible) - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('dm_nfnet_f6', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f0(pretrained=False, **kwargs): - """ NFNet-F0 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f0', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f1(pretrained=False, **kwargs): - """ NFNet-F1 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f1', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f2(pretrained=False, **kwargs): - """ NFNet-F2 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f2', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f3(pretrained=False, **kwargs): - """ NFNet-F3 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f3', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f4(pretrained=False, **kwargs): - """ NFNet-F4 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f4', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f5(pretrained=False, **kwargs): - """ NFNet-F5 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f5', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f6(pretrained=False, **kwargs): - """ NFNet-F6 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f6', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f7(pretrained=False, **kwargs): - """ NFNet-F7 - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f7', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f0s(pretrained=False, **kwargs): - """ NFNet-F0 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f0s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f1s(pretrained=False, **kwargs): - """ NFNet-F1 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f1s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f2s(pretrained=False, **kwargs): - """ NFNet-F2 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f2s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f3s(pretrained=False, **kwargs): - """ NFNet-F3 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f3s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f4s(pretrained=False, **kwargs): 
- """ NFNet-F4 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f4s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f5s(pretrained=False, **kwargs): - """ NFNet-F5 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f5s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f6s(pretrained=False, **kwargs): - """ NFNet-F6 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f6s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_f7s(pretrained=False, **kwargs): - """ NFNet-F7 w/ SiLU - `High-Performance Large-Scale Image Recognition Without Normalization` - - https://arxiv.org/abs/2102.06171 - """ - return _create_normfreenet('nfnet_f7s', pretrained=pretrained, **kwargs) - - -@register_model -def nfnet_l0(pretrained=False, **kwargs): - """ NFNet-L0b w/ SiLU - My experimental 'light' model w/ F0 repeats, 1.5x final_conv mult, 64 group_size, .25 bottleneck & SE ratio - """ - return _create_normfreenet('nfnet_l0', pretrained=pretrained, **kwargs) - - -@register_model -def eca_nfnet_l0(pretrained=False, **kwargs): - """ ECA-NFNet-L0 w/ SiLU - My experimental 'light' model w/ F0 repeats, 1.5x final_conv mult, 64 group_size, .25 bottleneck & ECA attn - """ - return _create_normfreenet('eca_nfnet_l0', pretrained=pretrained, **kwargs) - - -@register_model -def eca_nfnet_l1(pretrained=False, **kwargs): - """ ECA-NFNet-L1 w/ SiLU - My experimental 'light' model w/ F1 repeats, 2.0x final_conv mult, 64 group_size, .25 bottleneck & ECA attn - """ - return _create_normfreenet('eca_nfnet_l1', pretrained=pretrained, **kwargs) - - -@register_model -def eca_nfnet_l2(pretrained=False, **kwargs): - """ ECA-NFNet-L2 w/ SiLU - My experimental 'light' model w/ F2 repeats, 2.0x final_conv mult, 64 group_size, .25 bottleneck & ECA attn - """ - return _create_normfreenet('eca_nfnet_l2', pretrained=pretrained, **kwargs) - - -@register_model -def eca_nfnet_l3(pretrained=False, **kwargs): - """ ECA-NFNet-L3 w/ SiLU - My experimental 'light' model w/ F3 repeats, 2.0x final_conv mult, 64 group_size, .25 bottleneck & ECA attn - """ - return _create_normfreenet('eca_nfnet_l3', pretrained=pretrained, **kwargs) - - -@register_model -def nf_regnet_b0(pretrained=False, **kwargs): - """ Normalization-Free RegNet-B0 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_regnet_b0', pretrained=pretrained, **kwargs) - - -@register_model -def nf_regnet_b1(pretrained=False, **kwargs): - """ Normalization-Free RegNet-B1 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_regnet_b1', pretrained=pretrained, **kwargs) - - -@register_model -def nf_regnet_b2(pretrained=False, **kwargs): - """ Normalization-Free RegNet-B2 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_regnet_b2', pretrained=pretrained, **kwargs) - - -@register_model -def nf_regnet_b3(pretrained=False, **kwargs): - """ Normalization-Free RegNet-B3 - `Characterizing 
signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_regnet_b3', pretrained=pretrained, **kwargs) - - -@register_model -def nf_regnet_b4(pretrained=False, **kwargs): - """ Normalization-Free RegNet-B4 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_regnet_b4', pretrained=pretrained, **kwargs) - - -@register_model -def nf_regnet_b5(pretrained=False, **kwargs): - """ Normalization-Free RegNet-B5 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_regnet_b5', pretrained=pretrained, **kwargs) - - -@register_model -def nf_resnet26(pretrained=False, **kwargs): - """ Normalization-Free ResNet-26 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_resnet26', pretrained=pretrained, **kwargs) - - -@register_model -def nf_resnet50(pretrained=False, **kwargs): - """ Normalization-Free ResNet-50 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_resnet50', pretrained=pretrained, **kwargs) - - -@register_model -def nf_resnet101(pretrained=False, **kwargs): - """ Normalization-Free ResNet-101 - `Characterizing signal propagation to close the performance gap in unnormalized ResNets` - - https://arxiv.org/abs/2101.08692 - """ - return _create_normfreenet('nf_resnet101', pretrained=pretrained, **kwargs) - - -@register_model -def nf_seresnet26(pretrained=False, **kwargs): - """ Normalization-Free SE-ResNet26 - """ - return _create_normfreenet('nf_seresnet26', pretrained=pretrained, **kwargs) - - -@register_model -def nf_seresnet50(pretrained=False, **kwargs): - """ Normalization-Free SE-ResNet50 - """ - return _create_normfreenet('nf_seresnet50', pretrained=pretrained, **kwargs) - - -@register_model -def nf_seresnet101(pretrained=False, **kwargs): - """ Normalization-Free SE-ResNet101 - """ - return _create_normfreenet('nf_seresnet101', pretrained=pretrained, **kwargs) - - -@register_model -def nf_ecaresnet26(pretrained=False, **kwargs): - """ Normalization-Free ECA-ResNet26 - """ - return _create_normfreenet('nf_ecaresnet26', pretrained=pretrained, **kwargs) - - -@register_model -def nf_ecaresnet50(pretrained=False, **kwargs): - """ Normalization-Free ECA-ResNet50 - """ - return _create_normfreenet('nf_ecaresnet50', pretrained=pretrained, **kwargs) - - -@register_model -def nf_ecaresnet101(pretrained=False, **kwargs): - """ Normalization-Free ECA-ResNet101 - """ - return _create_normfreenet('nf_ecaresnet101', pretrained=pretrained, **kwargs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mlsd/utils.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mlsd/utils.py deleted file mode 100644 index e24e7fbb028b34d5871bb7a5d96c68f35774bbd0..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mlsd/utils.py +++ /dev/null @@ -1,582 +0,0 @@ -''' -modified by lihaoweicv -pytorch version -''' - -''' -M-LSD -Copyright 2021-present NAVER Corp. 
-Apache License v2.0 -''' - -import os -import numpy as np -import cv2 -import torch -from torch.nn import functional as F - - -def deccode_output_score_and_ptss(tpMap, topk_n = 200, ksize = 5): - ''' - tpMap: - center: tpMap[1, 0, :, :] - displacement: tpMap[1, 1:5, :, :] - ''' - b, c, h, w = tpMap.shape - assert b==1, 'only support bsize==1' - displacement = tpMap[:, 1:5, :, :][0] - center = tpMap[:, 0, :, :] - heat = torch.sigmoid(center) - hmax = F.max_pool2d( heat, (ksize, ksize), stride=1, padding=(ksize-1)//2) - keep = (hmax == heat).float() - heat = heat * keep - heat = heat.reshape(-1, ) - - scores, indices = torch.topk(heat, topk_n, dim=-1, largest=True) - yy = torch.floor_divide(indices, w).unsqueeze(-1) - xx = torch.fmod(indices, w).unsqueeze(-1) - ptss = torch.cat((yy, xx),dim=-1) - - ptss = ptss.detach().cpu().numpy() - scores = scores.detach().cpu().numpy() - displacement = displacement.detach().cpu().numpy() - displacement = displacement.transpose((1,2,0)) - return ptss, scores, displacement - - -def pred_lines(image, model, - input_shape=[512, 512], - score_thr=0.10, - dist_thr=20.0): - h, w, _ = image.shape - h_ratio, w_ratio = [h / input_shape[0], w / input_shape[1]] - - resized_image = np.concatenate([cv2.resize(image, (input_shape[1], input_shape[0]), interpolation=cv2.INTER_AREA), - np.ones([input_shape[0], input_shape[1], 1])], axis=-1) - - resized_image = resized_image.transpose((2,0,1)) - batch_image = np.expand_dims(resized_image, axis=0).astype('float32') - batch_image = (batch_image / 127.5) - 1.0 - -# batch_image = torch.from_numpy(batch_image).float().cuda() - batch_image = torch.from_numpy(batch_image).float().cpu() - outputs = model(batch_image) - pts, pts_score, vmap = deccode_output_score_and_ptss(outputs, 200, 3) - start = vmap[:, :, :2] - end = vmap[:, :, 2:] - dist_map = np.sqrt(np.sum((start - end) ** 2, axis=-1)) - - segments_list = [] - for center, score in zip(pts, pts_score): - y, x = center - distance = dist_map[y, x] - if score > score_thr and distance > dist_thr: - disp_x_start, disp_y_start, disp_x_end, disp_y_end = vmap[y, x, :] - x_start = x + disp_x_start - y_start = y + disp_y_start - x_end = x + disp_x_end - y_end = y + disp_y_end - segments_list.append([x_start, y_start, x_end, y_end]) - - lines = 2 * np.array(segments_list) # 256 > 512 - lines[:, 0] = lines[:, 0] * w_ratio - lines[:, 1] = lines[:, 1] * h_ratio - lines[:, 2] = lines[:, 2] * w_ratio - lines[:, 3] = lines[:, 3] * h_ratio - - return lines - - -def pred_squares(image, - model, - input_shape=[512, 512], - params={'score': 0.06, - 'outside_ratio': 0.28, - 'inside_ratio': 0.45, - 'w_overlap': 0.0, - 'w_degree': 1.95, - 'w_length': 0.0, - 'w_area': 1.86, - 'w_center': 0.14}): - ''' - shape = [height, width] - ''' - h, w, _ = image.shape - original_shape = [h, w] - - resized_image = np.concatenate([cv2.resize(image, (input_shape[0], input_shape[1]), interpolation=cv2.INTER_AREA), - np.ones([input_shape[0], input_shape[1], 1])], axis=-1) - resized_image = resized_image.transpose((2, 0, 1)) - batch_image = np.expand_dims(resized_image, axis=0).astype('float32') - batch_image = (batch_image / 127.5) - 1.0 - -# batch_image = torch.from_numpy(batch_image).float().cuda() - batch_image = torch.from_numpy(batch_image).float().cpu() - outputs = model(batch_image) - - pts, pts_score, vmap = deccode_output_score_and_ptss(outputs, 200, 3) - start = vmap[:, :, :2] # (x, y) - end = vmap[:, :, 2:] # (x, y) - dist_map = np.sqrt(np.sum((start - end) ** 2, axis=-1)) - - junc_list = [] - 
segments_list = [] - for junc, score in zip(pts, pts_score): - y, x = junc - distance = dist_map[y, x] - if score > params['score'] and distance > 20.0: - junc_list.append([x, y]) - disp_x_start, disp_y_start, disp_x_end, disp_y_end = vmap[y, x, :] - d_arrow = 1.0 - x_start = x + d_arrow * disp_x_start - y_start = y + d_arrow * disp_y_start - x_end = x + d_arrow * disp_x_end - y_end = y + d_arrow * disp_y_end - segments_list.append([x_start, y_start, x_end, y_end]) - - segments = np.array(segments_list) - - ####### post processing for squares - # 1. get unique lines - point = np.array([[0, 0]]) - point = point[0] - start = segments[:, :2] - end = segments[:, 2:] - diff = start - end - a = diff[:, 1] - b = -diff[:, 0] - c = a * start[:, 0] + b * start[:, 1] - - d = np.abs(a * point[0] + b * point[1] - c) / np.sqrt(a ** 2 + b ** 2 + 1e-10) - theta = np.arctan2(diff[:, 0], diff[:, 1]) * 180 / np.pi - theta[theta < 0.0] += 180 - hough = np.concatenate([d[:, None], theta[:, None]], axis=-1) - - d_quant = 1 - theta_quant = 2 - hough[:, 0] //= d_quant - hough[:, 1] //= theta_quant - _, indices, counts = np.unique(hough, axis=0, return_index=True, return_counts=True) - - acc_map = np.zeros([512 // d_quant + 1, 360 // theta_quant + 1], dtype='float32') - idx_map = np.zeros([512 // d_quant + 1, 360 // theta_quant + 1], dtype='int32') - 1 - yx_indices = hough[indices, :].astype('int32') - acc_map[yx_indices[:, 0], yx_indices[:, 1]] = counts - idx_map[yx_indices[:, 0], yx_indices[:, 1]] = indices - - acc_map_np = acc_map - # acc_map = acc_map[None, :, :, None] - # - # ### fast suppression using tensorflow op - # acc_map = tf.constant(acc_map, dtype=tf.float32) - # max_acc_map = tf.keras.layers.MaxPool2D(pool_size=(5, 5), strides=1, padding='same')(acc_map) - # acc_map = acc_map * tf.cast(tf.math.equal(acc_map, max_acc_map), tf.float32) - # flatten_acc_map = tf.reshape(acc_map, [1, -1]) - # topk_values, topk_indices = tf.math.top_k(flatten_acc_map, k=len(pts)) - # _, h, w, _ = acc_map.shape - # y = tf.expand_dims(topk_indices // w, axis=-1) - # x = tf.expand_dims(topk_indices % w, axis=-1) - # yx = tf.concat([y, x], axis=-1) - - ### fast suppression using pytorch op - acc_map = torch.from_numpy(acc_map_np).unsqueeze(0).unsqueeze(0) - _,_, h, w = acc_map.shape - max_acc_map = F.max_pool2d(acc_map,kernel_size=5, stride=1, padding=2) - acc_map = acc_map * ( (acc_map == max_acc_map).float() ) - flatten_acc_map = acc_map.reshape([-1, ]) - - scores, indices = torch.topk(flatten_acc_map, len(pts), dim=-1, largest=True) - yy = torch.div(indices, w, rounding_mode='floor').unsqueeze(-1) - xx = torch.fmod(indices, w).unsqueeze(-1) - yx = torch.cat((yy, xx), dim=-1) - - yx = yx.detach().cpu().numpy() - - topk_values = scores.detach().cpu().numpy() - indices = idx_map[yx[:, 0], yx[:, 1]] - basis = 5 // 2 - - merged_segments = [] - for yx_pt, max_indice, value in zip(yx, indices, topk_values): - y, x = yx_pt - if max_indice == -1 or value == 0: - continue - segment_list = [] - for y_offset in range(-basis, basis + 1): - for x_offset in range(-basis, basis + 1): - indice = idx_map[y + y_offset, x + x_offset] - cnt = int(acc_map_np[y + y_offset, x + x_offset]) - if indice != -1: - segment_list.append(segments[indice]) - if cnt > 1: - check_cnt = 1 - current_hough = hough[indice] - for new_indice, new_hough in enumerate(hough): - if (current_hough == new_hough).all() and indice != new_indice: - segment_list.append(segments[new_indice]) - check_cnt += 1 - if check_cnt == cnt: - break - group_segments = 
np.array(segment_list).reshape([-1, 2]) - sorted_group_segments = np.sort(group_segments, axis=0) - x_min, y_min = sorted_group_segments[0, :] - x_max, y_max = sorted_group_segments[-1, :] - - deg = theta[max_indice] - if deg >= 90: - merged_segments.append([x_min, y_max, x_max, y_min]) - else: - merged_segments.append([x_min, y_min, x_max, y_max]) - - # 2. get intersections - new_segments = np.array(merged_segments) # (x1, y1, x2, y2) - start = new_segments[:, :2] # (x1, y1) - end = new_segments[:, 2:] # (x2, y2) - new_centers = (start + end) / 2.0 - diff = start - end - dist_segments = np.sqrt(np.sum(diff ** 2, axis=-1)) - - # ax + by = c - a = diff[:, 1] - b = -diff[:, 0] - c = a * start[:, 0] + b * start[:, 1] - pre_det = a[:, None] * b[None, :] - det = pre_det - np.transpose(pre_det) - - pre_inter_y = a[:, None] * c[None, :] - inter_y = (pre_inter_y - np.transpose(pre_inter_y)) / (det + 1e-10) - pre_inter_x = c[:, None] * b[None, :] - inter_x = (pre_inter_x - np.transpose(pre_inter_x)) / (det + 1e-10) - inter_pts = np.concatenate([inter_x[:, :, None], inter_y[:, :, None]], axis=-1).astype('int32') - - # 3. get corner information - # 3.1 get distance - ''' - dist_segments: - | dist(0), dist(1), dist(2), ...| - dist_inter_to_segment1: - | dist(inter,0), dist(inter,0), dist(inter,0), ... | - | dist(inter,1), dist(inter,1), dist(inter,1), ... | - ... - dist_inter_to_semgnet2: - | dist(inter,0), dist(inter,1), dist(inter,2), ... | - | dist(inter,0), dist(inter,1), dist(inter,2), ... | - ... - ''' - - dist_inter_to_segment1_start = np.sqrt( - np.sum(((inter_pts - start[:, None, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - dist_inter_to_segment1_end = np.sqrt( - np.sum(((inter_pts - end[:, None, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - dist_inter_to_segment2_start = np.sqrt( - np.sum(((inter_pts - start[None, :, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - dist_inter_to_segment2_end = np.sqrt( - np.sum(((inter_pts - end[None, :, :]) ** 2), axis=-1, keepdims=True)) # [n_batch, n_batch, 1] - - # sort ascending - dist_inter_to_segment1 = np.sort( - np.concatenate([dist_inter_to_segment1_start, dist_inter_to_segment1_end], axis=-1), - axis=-1) # [n_batch, n_batch, 2] - dist_inter_to_segment2 = np.sort( - np.concatenate([dist_inter_to_segment2_start, dist_inter_to_segment2_end], axis=-1), - axis=-1) # [n_batch, n_batch, 2] - - # 3.2 get degree - inter_to_start = new_centers[:, None, :] - inter_pts - deg_inter_to_start = np.arctan2(inter_to_start[:, :, 1], inter_to_start[:, :, 0]) * 180 / np.pi - deg_inter_to_start[deg_inter_to_start < 0.0] += 360 - inter_to_end = new_centers[None, :, :] - inter_pts - deg_inter_to_end = np.arctan2(inter_to_end[:, :, 1], inter_to_end[:, :, 0]) * 180 / np.pi - deg_inter_to_end[deg_inter_to_end < 0.0] += 360 - - ''' - B -- G - | | - C -- R - B : blue / G: green / C: cyan / R: red - - 0 -- 1 - | | - 3 -- 2 - ''' - # rename variables - deg1_map, deg2_map = deg_inter_to_start, deg_inter_to_end - # sort deg ascending - deg_sort = np.sort(np.concatenate([deg1_map[:, :, None], deg2_map[:, :, None]], axis=-1), axis=-1) - - deg_diff_map = np.abs(deg1_map - deg2_map) - # we only consider the smallest degree of intersect - deg_diff_map[deg_diff_map > 180] = 360 - deg_diff_map[deg_diff_map > 180] - - # define available degree range - deg_range = [60, 120] - - corner_dict = {corner_info: [] for corner_info in range(4)} - inter_points = [] - for i in range(inter_pts.shape[0]): - for j in range(i + 1, inter_pts.shape[1]): - # 
i, j > line index, always i < j - x, y = inter_pts[i, j, :] - deg1, deg2 = deg_sort[i, j, :] - deg_diff = deg_diff_map[i, j] - - check_degree = deg_diff > deg_range[0] and deg_diff < deg_range[1] - - outside_ratio = params['outside_ratio'] # over ratio >>> drop it! - inside_ratio = params['inside_ratio'] # over ratio >>> drop it! - check_distance = ((dist_inter_to_segment1[i, j, 1] >= dist_segments[i] and \ - dist_inter_to_segment1[i, j, 0] <= dist_segments[i] * outside_ratio) or \ - (dist_inter_to_segment1[i, j, 1] <= dist_segments[i] and \ - dist_inter_to_segment1[i, j, 0] <= dist_segments[i] * inside_ratio)) and \ - ((dist_inter_to_segment2[i, j, 1] >= dist_segments[j] and \ - dist_inter_to_segment2[i, j, 0] <= dist_segments[j] * outside_ratio) or \ - (dist_inter_to_segment2[i, j, 1] <= dist_segments[j] and \ - dist_inter_to_segment2[i, j, 0] <= dist_segments[j] * inside_ratio)) - - if check_degree and check_distance: - corner_info = None - - if (deg1 >= 0 and deg1 <= 45 and deg2 >= 45 and deg2 <= 120) or \ - (deg2 >= 315 and deg1 >= 45 and deg1 <= 120): - corner_info, color_info = 0, 'blue' - elif (deg1 >= 45 and deg1 <= 125 and deg2 >= 125 and deg2 <= 225): - corner_info, color_info = 1, 'green' - elif (deg1 >= 125 and deg1 <= 225 and deg2 >= 225 and deg2 <= 315): - corner_info, color_info = 2, 'black' - elif (deg1 >= 0 and deg1 <= 45 and deg2 >= 225 and deg2 <= 315) or \ - (deg2 >= 315 and deg1 >= 225 and deg1 <= 315): - corner_info, color_info = 3, 'cyan' - else: - corner_info, color_info = 4, 'red' # we don't use it - continue - - corner_dict[corner_info].append([x, y, i, j]) - inter_points.append([x, y]) - - square_list = [] - connect_list = [] - segments_list = [] - for corner0 in corner_dict[0]: - for corner1 in corner_dict[1]: - connect01 = False - for corner0_line in corner0[2:]: - if corner0_line in corner1[2:]: - connect01 = True - break - if connect01: - for corner2 in corner_dict[2]: - connect12 = False - for corner1_line in corner1[2:]: - if corner1_line in corner2[2:]: - connect12 = True - break - if connect12: - for corner3 in corner_dict[3]: - connect23 = False - for corner2_line in corner2[2:]: - if corner2_line in corner3[2:]: - connect23 = True - break - if connect23: - for corner3_line in corner3[2:]: - if corner3_line in corner0[2:]: - # SQUARE!!! - ''' - 0 -- 1 - | | - 3 -- 2 - square_list: - order: 0 > 1 > 2 > 3 - | x0, y0, x1, y1, x2, y2, x3, y3 | - | x0, y0, x1, y1, x2, y2, x3, y3 | - ... - connect_list: - order: 01 > 12 > 23 > 30 - | line_idx01, line_idx12, line_idx23, line_idx30 | - | line_idx01, line_idx12, line_idx23, line_idx30 | - ... - segments_list: - order: 0 > 1 > 2 > 3 - | line_idx0_i, line_idx0_j, line_idx1_i, line_idx1_j, line_idx2_i, line_idx2_j, line_idx3_i, line_idx3_j | - | line_idx0_i, line_idx0_j, line_idx1_i, line_idx1_j, line_idx2_i, line_idx2_j, line_idx3_i, line_idx3_j | - ... 
- ''' - square_list.append(corner0[:2] + corner1[:2] + corner2[:2] + corner3[:2]) - connect_list.append([corner0_line, corner1_line, corner2_line, corner3_line]) - segments_list.append(corner0[2:] + corner1[2:] + corner2[2:] + corner3[2:]) - - def check_outside_inside(segments_info, connect_idx): - # return 'outside or inside', min distance, cover_param, peri_param - if connect_idx == segments_info[0]: - check_dist_mat = dist_inter_to_segment1 - else: - check_dist_mat = dist_inter_to_segment2 - - i, j = segments_info - min_dist, max_dist = check_dist_mat[i, j, :] - connect_dist = dist_segments[connect_idx] - if max_dist > connect_dist: - return 'outside', min_dist, 0, 1 - else: - return 'inside', min_dist, -1, -1 - - top_square = None - - try: - map_size = input_shape[0] / 2 - squares = np.array(square_list).reshape([-1, 4, 2]) - score_array = [] - connect_array = np.array(connect_list) - segments_array = np.array(segments_list).reshape([-1, 4, 2]) - - # get degree of corners: - squares_rollup = np.roll(squares, 1, axis=1) - squares_rolldown = np.roll(squares, -1, axis=1) - vec1 = squares_rollup - squares - normalized_vec1 = vec1 / (np.linalg.norm(vec1, axis=-1, keepdims=True) + 1e-10) - vec2 = squares_rolldown - squares - normalized_vec2 = vec2 / (np.linalg.norm(vec2, axis=-1, keepdims=True) + 1e-10) - inner_products = np.sum(normalized_vec1 * normalized_vec2, axis=-1) # [n_squares, 4] - squares_degree = np.arccos(inner_products) * 180 / np.pi # [n_squares, 4] - - # get square score - overlap_scores = [] - degree_scores = [] - length_scores = [] - - for connects, segments, square, degree in zip(connect_array, segments_array, squares, squares_degree): - ''' - 0 -- 1 - | | - 3 -- 2 - - # segments: [4, 2] - # connects: [4] - ''' - - ###################################### OVERLAP SCORES - cover = 0 - perimeter = 0 - # check 0 > 1 > 2 > 3 - square_length = [] - - for start_idx in range(4): - end_idx = (start_idx + 1) % 4 - - connect_idx = connects[start_idx] # segment idx of segment01 - start_segments = segments[start_idx] - end_segments = segments[end_idx] - - start_point = square[start_idx] - end_point = square[end_idx] - - # check whether outside or inside - start_position, start_min, start_cover_param, start_peri_param = check_outside_inside(start_segments, - connect_idx) - end_position, end_min, end_cover_param, end_peri_param = check_outside_inside(end_segments, connect_idx) - - cover += dist_segments[connect_idx] + start_cover_param * start_min + end_cover_param * end_min - perimeter += dist_segments[connect_idx] + start_peri_param * start_min + end_peri_param * end_min - - square_length.append( - dist_segments[connect_idx] + start_peri_param * start_min + end_peri_param * end_min) - - overlap_scores.append(cover / perimeter) - ###################################### - ###################################### DEGREE SCORES - ''' - deg0 vs deg2 - deg1 vs deg3 - ''' - deg0, deg1, deg2, deg3 = degree - deg_ratio1 = deg0 / deg2 - if deg_ratio1 > 1.0: - deg_ratio1 = 1 / deg_ratio1 - deg_ratio2 = deg1 / deg3 - if deg_ratio2 > 1.0: - deg_ratio2 = 1 / deg_ratio2 - degree_scores.append((deg_ratio1 + deg_ratio2) / 2) - ###################################### - ###################################### LENGTH SCORES - ''' - len0 vs len2 - len1 vs len3 - ''' - len0, len1, len2, len3 = square_length - len_ratio1 = len0 / len2 if len2 > len0 else len2 / len0 - len_ratio2 = len1 / len3 if len3 > len1 else len3 / len1 - length_scores.append((len_ratio1 + len_ratio2) / 2) - - 
###################################### - - overlap_scores = np.array(overlap_scores) - overlap_scores /= np.max(overlap_scores) - - degree_scores = np.array(degree_scores) - # degree_scores /= np.max(degree_scores) - - length_scores = np.array(length_scores) - - ###################################### AREA SCORES - area_scores = np.reshape(squares, [-1, 4, 2]) - area_x = area_scores[:, :, 0] - area_y = area_scores[:, :, 1] - correction = area_x[:, -1] * area_y[:, 0] - area_y[:, -1] * area_x[:, 0] - area_scores = np.sum(area_x[:, :-1] * area_y[:, 1:], axis=-1) - np.sum(area_y[:, :-1] * area_x[:, 1:], axis=-1) - area_scores = 0.5 * np.abs(area_scores + correction) - area_scores /= (map_size * map_size) # np.max(area_scores) - ###################################### - - ###################################### CENTER SCORES - centers = np.array([[256 // 2, 256 // 2]], dtype='float32') # [1, 2] - # squares: [n, 4, 2] - square_centers = np.mean(squares, axis=1) # [n, 2] - center2center = np.sqrt(np.sum((centers - square_centers) ** 2)) - center_scores = center2center / (map_size / np.sqrt(2.0)) - - ''' - score_w = [overlap, degree, area, center, length] - ''' - score_w = [0.0, 1.0, 10.0, 0.5, 1.0] - score_array = params['w_overlap'] * overlap_scores \ - + params['w_degree'] * degree_scores \ - + params['w_area'] * area_scores \ - - params['w_center'] * center_scores \ - + params['w_length'] * length_scores - - best_square = [] - - sorted_idx = np.argsort(score_array)[::-1] - score_array = score_array[sorted_idx] - squares = squares[sorted_idx] - - except Exception as e: - pass - - '''return list - merged_lines, squares, scores - ''' - - try: - new_segments[:, 0] = new_segments[:, 0] * 2 / input_shape[1] * original_shape[1] - new_segments[:, 1] = new_segments[:, 1] * 2 / input_shape[0] * original_shape[0] - new_segments[:, 2] = new_segments[:, 2] * 2 / input_shape[1] * original_shape[1] - new_segments[:, 3] = new_segments[:, 3] * 2 / input_shape[0] * original_shape[0] - except: - new_segments = [] - - try: - squares[:, :, 0] = squares[:, :, 0] * 2 / input_shape[1] * original_shape[1] - squares[:, :, 1] = squares[:, :, 1] * 2 / input_shape[0] * original_shape[0] - except: - squares = [] - score_array = [] - - try: - inter_points = np.array(inter_points) - inter_points[:, 0] = inter_points[:, 0] * 2 / input_shape[1] * original_shape[1] - inter_points[:, 1] = inter_points[:, 1] * 2 / input_shape[0] * original_shape[0] - except: - inter_points = [] - - return new_segments, squares, score_array, inter_points diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/sep_aspp_head.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/sep_aspp_head.py deleted file mode 100644 index 3339a7ac56e77dfc638e9bffb557d4699148686b..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/decode_heads/sep_aspp_head.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule, DepthwiseSeparableConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .aspp_head import ASPPHead, ASPPModule - - -class DepthwiseSeparableASPPModule(ASPPModule): - """Atrous Spatial Pyramid Pooling (ASPP) Module with depthwise separable - conv.""" - - def __init__(self, **kwargs): - super(DepthwiseSeparableASPPModule, self).__init__(**kwargs) - for i, 
dilation in enumerate(self.dilations): - if dilation > 1: - self[i] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - 3, - dilation=dilation, - padding=dilation, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - -@HEADS.register_module() -class DepthwiseSeparableASPPHead(ASPPHead): - """Encoder-Decoder with Atrous Separable Convolution for Semantic Image - Segmentation. - - This head is the implementation of `DeepLabV3+ - `_. - - Args: - c1_in_channels (int): The input channels of c1 decoder. If is 0, - the no decoder will be used. - c1_channels (int): The intermediate channels of c1 decoder. - """ - - def __init__(self, c1_in_channels, c1_channels, **kwargs): - super(DepthwiseSeparableASPPHead, self).__init__(**kwargs) - assert c1_in_channels >= 0 - self.aspp_modules = DepthwiseSeparableASPPModule( - dilations=self.dilations, - in_channels=self.in_channels, - channels=self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - if c1_in_channels > 0: - self.c1_bottleneck = ConvModule( - c1_in_channels, - c1_channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - else: - self.c1_bottleneck = None - self.sep_bottleneck = nn.Sequential( - DepthwiseSeparableConvModule( - self.channels + c1_channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg), - DepthwiseSeparableConvModule( - self.channels, - self.channels, - 3, - padding=1, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - aspp_outs = [ - resize( - self.image_pool(x), - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ] - aspp_outs.extend(self.aspp_modules(x)) - aspp_outs = torch.cat(aspp_outs, dim=1) - output = self.bottleneck(aspp_outs) - if self.c1_bottleneck is not None: - c1_output = self.c1_bottleneck(inputs[0]) - output = resize( - input=output, - size=c1_output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - output = torch.cat([output, c1_output], dim=1) - output = self.sep_bottleneck(output) - output = self.cls_seg(output) - return output diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/data/EvalDataset.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/data/EvalDataset.py deleted file mode 100644 index ad42b46459aa099ed48780b5cff0cb9099f82b71..0000000000000000000000000000000000000000 --- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/data/EvalDataset.py +++ /dev/null @@ -1,166 +0,0 @@ -from torch.utils.data import Dataset -import numpy as np -import os -import random -import torchvision.transforms as transforms -from PIL import Image, ImageOps -import cv2 -import torch -from PIL.ImageFilter import GaussianBlur -import trimesh -import cv2 - - -class EvalDataset(Dataset): - @staticmethod - def modify_commandline_options(parser): - return parser - - def __init__(self, opt, root=None): - self.opt = opt - self.projection_mode = 'orthogonal' - - # Path setup - self.root = self.opt.dataroot - if root is not None: - self.root = root - self.RENDER = os.path.join(self.root, 'RENDER') - self.MASK = os.path.join(self.root, 'MASK') - self.PARAM = os.path.join(self.root, 'PARAM') - self.OBJ = os.path.join(self.root, 'GEO', 'OBJ') - - self.phase = 'val' - self.load_size = self.opt.loadSize - - self.num_views = self.opt.num_views - - self.max_view_angle = 360 - self.interval = 1 - self.subjects = self.get_subjects() - - # PIL to tensor - 
self.to_tensor = transforms.Compose([ - transforms.Resize(self.load_size), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) - ]) - - def get_subjects(self): - var_file = os.path.join(self.root, 'val.txt') - if os.path.exists(var_file): - var_subjects = np.loadtxt(var_file, dtype=str) - return sorted(list(var_subjects)) - all_subjects = os.listdir(self.RENDER) - return sorted(list(all_subjects)) - - def __len__(self): - return len(self.subjects) * self.max_view_angle // self.interval - - def get_render(self, subject, num_views, view_id=None, random_sample=False): - ''' - Return the render data - :param subject: subject name - :param num_views: how many views to return - :param view_id: the first view_id. If None, select a random one. - :return: - 'img': [num_views, C, W, H] images - 'calib': [num_views, 4, 4] calibration matrix - 'extrinsic': [num_views, 4, 4] extrinsic matrix - 'mask': [num_views, 1, W, H] masks - ''' - # For now we only have pitch = 00. Hard code it here - pitch = 0 - # Select a random view_id from self.max_view_angle if not given - if view_id is None: - view_id = np.random.randint(self.max_view_angle) - # The ids are an even distribution of num_views around view_id - view_ids = [(view_id + self.max_view_angle // num_views * offset) % self.max_view_angle - for offset in range(num_views)] - if random_sample: - view_ids = np.random.choice(self.max_view_angle, num_views, replace=False) - - calib_list = [] - render_list = [] - mask_list = [] - extrinsic_list = [] - - for vid in view_ids: - param_path = os.path.join(self.PARAM, subject, '%d_%02d.npy' % (vid, pitch)) - render_path = os.path.join(self.RENDER, subject, '%d_%02d.jpg' % (vid, pitch)) - mask_path = os.path.join(self.MASK, subject, '%d_%02d.png' % (vid, pitch)) - - # loading calibration data - param = np.load(param_path) - # pixel unit / world unit - ortho_ratio = param.item().get('ortho_ratio') - # world unit / model unit - scale = param.item().get('scale') - # camera center world coordinate - center = param.item().get('center') - # model rotation - R = param.item().get('R') - - translate = -np.matmul(R, center).reshape(3, 1) - extrinsic = np.concatenate([R, translate], axis=1) - extrinsic = np.concatenate([extrinsic, np.array([0, 0, 0, 1]).reshape(1, 4)], 0) - # Match camera space to image pixel space - scale_intrinsic = np.identity(4) - scale_intrinsic[0, 0] = scale / ortho_ratio - scale_intrinsic[1, 1] = -scale / ortho_ratio - scale_intrinsic[2, 2] = -scale / ortho_ratio - # Match image pixel space to image uv space - uv_intrinsic = np.identity(4) - uv_intrinsic[0, 0] = 1.0 / float(self.opt.loadSize // 2) - uv_intrinsic[1, 1] = 1.0 / float(self.opt.loadSize // 2) - uv_intrinsic[2, 2] = 1.0 / float(self.opt.loadSize // 2) - # Transform under image pixel space - trans_intrinsic = np.identity(4) - - mask = Image.open(mask_path).convert('L') - render = Image.open(render_path).convert('RGB') - - intrinsic = np.matmul(trans_intrinsic, np.matmul(uv_intrinsic, scale_intrinsic)) - calib = torch.Tensor(np.matmul(intrinsic, extrinsic)).float() - extrinsic = torch.Tensor(extrinsic).float() - - mask = transforms.Resize(self.load_size)(mask) - mask = transforms.ToTensor()(mask).float() - mask_list.append(mask) - - render = self.to_tensor(render) - render = mask.expand_as(render) * render - - render_list.append(render) - calib_list.append(calib) - extrinsic_list.append(extrinsic) - - return { - 'img': torch.stack(render_list, dim=0), - 'calib': torch.stack(calib_list, dim=0), - 'extrinsic': 
torch.stack(extrinsic_list, dim=0), - 'mask': torch.stack(mask_list, dim=0) - } - - def get_item(self, index): - # In case of a missing file or IO error, switch to a random sample instead - try: - sid = index % len(self.subjects) - vid = (index // len(self.subjects)) * self.interval - # name of the subject 'rp_xxxx_xxx' - subject = self.subjects[sid] - res = { - 'name': subject, - 'mesh_path': os.path.join(self.OBJ, subject + '.obj'), - 'sid': sid, - 'vid': vid, - } - render_data = self.get_render(subject, num_views=self.num_views, view_id=vid, - random_sample=self.opt.random_multiview) - res.update(render_data) - return res - except Exception as e: - print(e) - return self.get_item(index=random.randint(0, self.__len__() - 1)) - - def __getitem__(self, index): - return self.get_item(index) diff --git a/spaces/cynika/taffy/mel_processing.py b/spaces/cynika/taffy/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/cynika/taffy/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def 
mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
-    if torch.min(y) < -1.:
-        print('min value is ', torch.min(y))
-    if torch.max(y) > 1.:
-        print('max value is ', torch.max(y))
-
-    global mel_basis, hann_window
-    dtype_device = str(y.dtype) + '_' + str(y.device)
-    fmax_dtype_device = str(fmax) + '_' + dtype_device
-    wnsize_dtype_device = str(win_size) + '_' + dtype_device
-    if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
-        mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
-    if wnsize_dtype_device not in hann_window:
-        hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
-    y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
-    y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
-    spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
-    spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
-    spec = spectral_normalize_torch(spec)
-
-    return spec
diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/README.md b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/README.md
deleted file mode 100644
index 2ee63a861229b68873561fa39bfa7c9a8b53b947..0000000000000000000000000000000000000000
--- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/models/arcface_torch/README.md
+++ /dev/null
@@ -1,164 +0,0 @@
-# Distributed Arcface Training in Pytorch
-
-This is a deep learning library that makes face recognition efficient and effective, and that can train tens of millions of identities on a single server.
-
-## Requirements
-
-- Install [pytorch](http://pytorch.org) (torch>=1.6.0); see our doc [install.md](docs/install.md).
-- `pip install -r requirements.txt`.
-- Download the dataset from [https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_).
-
-## How to Train
-
-To train a model, run `train.py` with the path to the configs:
-
-### 1. Single node, 8 GPUs:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-### 2. Multiple nodes, each node 8 GPUs:
-
-Node 0:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-Node 1:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-### 3. Training resnet2060 with 8 GPUs:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r2060.py
-```
-
-## Model Zoo
-
-- The models are available for non-commercial research purposes only.
-- All models can be found here.
-- [Baidu Yun Pan](https://pan.baidu.com/s/1CL-l4zWqsI1oDuEEYVhj-g): e8pw
-- [onedrive](https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d)
-
-### Performance on [**ICCV2021-MFR**](http://iccv21-mfr.com/)
-
-The ICCV2021-MFR test set consists of non-celebrities, so it has very little overlap with publicly available face
-recognition training sets such as MS1M and CASIA, which are mostly collected from online celebrities.
-As a result, we can evaluate how fairly different algorithms perform.
-
-For the **ICCV2021-MFR-ALL** set, TAR is measured on an all-to-all 1:1 protocol, with FAR less than 0.000001 (1e-6). The
-globalised multi-racial test set contains 242,143 identities and 1,624,305 images.
-
-For the **ICCV2021-MFR-MASK** set, TAR is measured on a mask-to-nonmask 1:1 protocol, with FAR less than 0.0001 (1e-4).
-The mask test set contains 6,964 identities, 6,964 masked images and 13,928 non-masked images.
-In total there are 13,928 positive pairs and 96,983,824 negative pairs.
-
-| Datasets | backbone | Training throughput | Size / MB | **ICCV2021-MFR-MASK** | **ICCV2021-MFR-ALL** |
-| :---: | :--- | :--- | :--- |:--- |:--- |
-| MS1MV3 | r18 | - | 91 | **47.85** | **68.33** |
-| Glint360k | r18 | 8536 | 91 | **53.32** | **72.07** |
-| MS1MV3 | r34 | - | 130 | **58.72** | **77.36** |
-| Glint360k | r34 | 6344 | 130 | **65.10** | **83.02** |
-| MS1MV3 | r50 | 5500 | 166 | **63.85** | **80.53** |
-| Glint360k | r50 | 5136 | 166 | **70.23** | **87.08** |
-| MS1MV3 | r100 | - | 248 | **69.09** | **84.31** |
-| Glint360k | r100 | 3332 | 248 | **75.57** | **90.66** |
-| MS1MV3 | mobilefacenet | 12185 | 7.8 | **41.52** | **65.26** |
-| Glint360k | mobilefacenet | 11197 | 7.8 | **44.52** | **66.48** |
-
-### Performance on IJB-C and Verification Datasets
-
-| Datasets | backbone | IJBC(1e-05) | IJBC(1e-04) | agedb30 | cfp_fp | lfw | log |
-| :---: | :--- | :--- | :--- | :--- |:--- |:--- |:--- |
-| MS1MV3 | r18 | 92.07 | 94.66 | 97.77 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r18_fp16/training.log)|
-| MS1MV3 | r34 | 94.10 | 95.90 | 98.10 | 98.67 | 99.80 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r34_fp16/training.log)|
-| MS1MV3 | r50 | 94.79 | 96.46 | 98.35 | 98.96 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r50_fp16/training.log)|
-| MS1MV3 | r100 | 95.31 | 96.81 | 98.48 | 99.06 | 99.85 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r100_fp16/training.log)|
-| MS1MV3 | **r2060**| 95.34 | 97.11 | 98.67 | 99.24 | 99.87 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r2060_fp16/training.log)|
-| Glint360k |r18-0.1 | 93.16 | 95.33 | 97.72 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r18_fp16_0.1/training.log)|
-| Glint360k |r34-0.1 | 95.16 | 96.56 | 98.33 | 98.78 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r34_fp16_0.1/training.log)|
-| Glint360k |r50-0.1 | 95.61 | 96.97 | 98.38 | 99.20 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r50_fp16_0.1/training.log)|
-| Glint360k |r100-0.1 | 95.88 | 97.32 | 98.48 | 99.29 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r100_fp16_0.1/training.log)|
-
-[comment]: <> (More details see [model.md](docs/modelzoo.md) in docs.)
-
-
-## [Speed Benchmark](docs/speed_benchmark.md)
-
-**Arcface Torch** can train large-scale face recognition training sets efficiently and quickly. When the number of
-classes in the training set is greater than 300K and training is sufficient, the partial FC sampling strategy achieves
-the same accuracy with several times faster training and a smaller GPU memory footprint.
-Partial FC is a sparse variant of the model-parallel architecture for large-scale face recognition. Partial FC uses a
-sparse softmax, where each batch dynamically samples a subset of class centers for training. In each iteration, only a
-sparse subset of the parameters is updated, which greatly reduces GPU memory and computation. With Partial FC,
-we can scale to a training set of 29 million identities, the largest to date. Partial FC also supports multi-machine
-distributed training and mixed precision training. A minimal sketch of this sampling idea is shown further below.
-
-![Image text](https://github.com/anxiangsir/insightface_arcface_log/blob/master/partial_fc_v2.png)
-
-For more details, see
-[speed_benchmark.md](docs/speed_benchmark.md) in docs.
-
-### 1. Training speed of different parallel methods (samples / second), Tesla V100 32GB * 8. (Larger is better)
-
-`-` means training failed because of GPU memory limitations.
-
-| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
-| :--- | :--- | :--- | :--- |
-|125000 | 4681 | 4824 | 5004 |
-|1400000 | **1672** | 3043 | 4738 |
-|5500000 | **-** | **1389** | 3975 |
-|8000000 | **-** | **-** | 3565 |
-|16000000 | **-** | **-** | 2679 |
-|29000000 | **-** | **-** | **1855** |
-
-### 2. GPU memory cost of different parallel methods (MB per GPU), Tesla V100 32GB * 8. (Smaller is better)
-
-| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 |
-| :--- | :--- | :--- | :--- |
-|125000 | 7358 | 5306 | 4868 |
-|1400000 | 32252 | 11178 | 6056 |
-|5500000 | **-** | 32188 | 9854 |
-|8000000 | **-** | **-** | 12310 |
-|16000000 | **-** | **-** | 19950 |
-|29000000 | **-** | **-** | 32324 |
-
-## Evaluation on ICCV2021-MFR and IJB-C
-
-For more details, see [eval.md](docs/eval.md) in docs.
-
-## Test
-
-We tested many versions of PyTorch. Please create an issue if you are having trouble.
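For a concrete picture of the sampling idea described under Speed Benchmark, the following is a minimal, single-GPU sketch of a sampled ("partial") softmax head in plain PyTorch. It only illustrates the idea and is not this repository's Partial FC implementation (which shards class centers across GPUs and adds margin-based losses and distributed gradient handling); the class name `PartialSoftmaxHead`, the 10% sample rate, and the fixed logit scale of 64 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class PartialSoftmaxHead(torch.nn.Module):
    """Illustrative sampled-softmax head: each step only touches a subset of class centers.

    A simplified, single-GPU sketch of the sampling idea, not the distributed
    Partial FC implementation.
    """

    def __init__(self, embedding_size: int, num_classes: int, sample_rate: float = 0.1):
        super().__init__()
        # Full matrix of class centers (num_classes x embedding_size).
        self.weight = torch.nn.Parameter(torch.normal(0.0, 0.01, (num_classes, embedding_size)))
        self.num_classes = num_classes
        self.sample_rate = sample_rate

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        num_sample = max(int(self.num_classes * self.sample_rate), labels.numel())

        # Always keep the centers of the classes present in this batch ("positive" centers).
        positive = labels.unique()

        # Fill the rest of the sample with randomly chosen "negative" centers.
        perm = torch.randperm(self.num_classes, device=labels.device)
        is_negative = torch.ones(self.num_classes, dtype=torch.bool, device=labels.device)
        is_negative[positive] = False
        negative = perm[is_negative[perm]][: num_sample - positive.numel()]
        index = torch.cat([positive, negative])

        # Logits are computed only against the sampled centers, so memory and compute
        # scale with num_sample instead of num_classes.
        logits = F.linear(F.normalize(embeddings), F.normalize(self.weight[index]))

        # Remap the original labels to their positions inside the sampled index.
        remap = torch.full((self.num_classes,), -1, dtype=torch.long, device=labels.device)
        remap[index] = torch.arange(index.numel(), device=labels.device)

        # Scale the cosine logits before the softmax (margin terms omitted for brevity).
        return F.cross_entropy(logits * 64.0, remap[labels])


# Tiny usage example with random data.
head = PartialSoftmaxHead(embedding_size=512, num_classes=100_000, sample_rate=0.1)
features = torch.randn(8, 512)
labels = torch.randint(0, 100_000, (8,))
loss = head(features, labels)
loss.backward()
```

Because the logits are computed only against the sampled class centers, both the activation memory and the softmax cost grow with the sample size rather than with the full number of identities, which is the effect reported in the tables above. The PyTorch versions tested with the actual training code are listed below.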
- -- [x] torch 1.6.0 -- [x] torch 1.7.1 -- [x] torch 1.8.0 -- [x] torch 1.9.0 - -## Citation - -``` -@inproceedings{deng2019arcface, - title={Arcface: Additive angular margin loss for deep face recognition}, - author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - pages={4690--4699}, - year={2019} -} -@inproceedings{an2020partical_fc, - title={Partial FC: Training 10 Million Identities on a Single Machine}, - author={An, Xiang and Zhu, Xuhan and Xiao, Yang and Wu, Lan and Zhang, Ming and Gao, Yuan and Qin, Bin and - Zhang, Debing and Fu Ying}, - booktitle={Arxiv 2010.05222}, - year={2020} -} -``` diff --git a/spaces/dawdqd/ChuanhuChatGPT/modules/models/configuration_moss.py b/spaces/dawdqd/ChuanhuChatGPT/modules/models/configuration_moss.py deleted file mode 100644 index 9bad4396ecea6578c1628732d0ef077d8964d45d..0000000000000000000000000000000000000000 --- a/spaces/dawdqd/ChuanhuChatGPT/modules/models/configuration_moss.py +++ /dev/null @@ -1,118 +0,0 @@ -""" Moss model configuration""" - -from transformers.utils import logging -from transformers.configuration_utils import PretrainedConfig - - -logger = logging.get_logger(__name__) - - -class MossConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a - Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the Moss - [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. Configuration objects - inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from - [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 107008): - Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`MossModel`]. - n_positions (`int`, *optional*, defaults to 2048): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - n_embd (`int`, *optional*, defaults to 4096): - Dimensionality of the embeddings and hidden states. - n_layer (`int`, *optional*, defaults to 28): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - rotary_dim (`int`, *optional*, defaults to 64): - Number of dimensions in the embedding that Rotary Position Embedding is applied to. - n_inner (`int`, *optional*, defaults to None): - Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd - activation_function (`str`, *optional*, defaults to `"gelu_new"`): - Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - attn_pdrop (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-5): - The epsilon to use in the layer normalization layers. 
- initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). - - Example: - - ```python - >>> from modeling_moss import MossModel - >>> from configuration_moss import MossConfig - - >>> # Initializing a moss-moon-003-base configuration - >>> configuration = MossConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = MossModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "moss" - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=107008, - n_positions=2048, - n_ctx=2048, - n_embd=4096, - n_layer=28, - n_head=16, - rotary_dim=64, - n_inner=None, - activation_function="gelu_new", - resid_pdrop=0.0, - embd_pdrop=0.0, - attn_pdrop=0.0, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - use_cache=True, - bos_token_id=106028, - eos_token_id=106068, - tie_word_embeddings=False, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_ctx = n_ctx - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.n_inner = n_inner - self.rotary_dim = rotary_dim - self.activation_function = activation_function - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.attn_pdrop = attn_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - self.use_cache = use_cache - - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id - - super().__init__( - bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs - ) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/explicitClosingLinePen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/explicitClosingLinePen.py deleted file mode 100644 index e3c9c943cc504e970d4e9ec9f96c3817d8383ccf..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/explicitClosingLinePen.py +++ /dev/null @@ -1,101 +0,0 @@ -from fontTools.pens.filterPen import ContourFilterPen - - -class ExplicitClosingLinePen(ContourFilterPen): - """A filter pen that adds an explicit lineTo to the first point of each closed - contour if the end point of the last segment is not already the same as the first point. - Otherwise, it passes the contour through unchanged. 
- - >>> from pprint import pprint - >>> from fontTools.pens.recordingPen import RecordingPen - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.lineTo((100, 0)) - >>> pen.lineTo((100, 100)) - >>> pen.closePath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), - ('lineTo', ((100, 0),)), - ('lineTo', ((100, 100),)), - ('lineTo', ((0, 0),)), - ('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.lineTo((100, 0)) - >>> pen.lineTo((100, 100)) - >>> pen.lineTo((0, 0)) - >>> pen.closePath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), - ('lineTo', ((100, 0),)), - ('lineTo', ((100, 100),)), - ('lineTo', ((0, 0),)), - ('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.curveTo((100, 0), (0, 100), (100, 100)) - >>> pen.closePath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), - ('curveTo', ((100, 0), (0, 100), (100, 100))), - ('lineTo', ((0, 0),)), - ('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.curveTo((100, 0), (0, 100), (100, 100)) - >>> pen.lineTo((0, 0)) - >>> pen.closePath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), - ('curveTo', ((100, 0), (0, 100), (100, 100))), - ('lineTo', ((0, 0),)), - ('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.curveTo((100, 0), (0, 100), (0, 0)) - >>> pen.closePath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), - ('curveTo', ((100, 0), (0, 100), (0, 0))), - ('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.closePath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), ('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.closePath() - >>> pprint(rec.value) - [('closePath', ())] - >>> rec = RecordingPen() - >>> pen = ExplicitClosingLinePen(rec) - >>> pen.moveTo((0, 0)) - >>> pen.lineTo((100, 0)) - >>> pen.lineTo((100, 100)) - >>> pen.endPath() - >>> pprint(rec.value) - [('moveTo', ((0, 0),)), - ('lineTo', ((100, 0),)), - ('lineTo', ((100, 100),)), - ('endPath', ())] - """ - - def filterContour(self, contour): - if ( - not contour - or contour[0][0] != "moveTo" - or contour[-1][0] != "closePath" - or len(contour) < 3 - ): - return - movePt = contour[0][1][0] - lastSeg = contour[-2][1] - if lastSeg and movePt != lastSeg[-1]: - contour[-1:] = [("lineTo", (movePt,)), ("closePath", ())] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/label.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/label.py deleted file mode 100644 index 5a2c40fd387b7250cd75d3dfd7ade49ab5343b51..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/label.py +++ /dev/null @@ -1,182 +0,0 @@ -"""gr.Label() component.""" - -from __future__ import annotations - -import operator -from pathlib import Path -from typing import Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import ( - JSONSerializable, -) - -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_style_method_deprecation -from gradio.events import ( - Changeable, - 
EventListenerMethod, - Selectable, -) - -set_documentation_group("component") - - -@document() -class Label(Changeable, Selectable, IOComponent, JSONSerializable): - """ - Displays a classification label, along with confidence scores of top categories, if provided. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a {Dict[str, float]} of classes and confidences, or {str} with just the class or an {int}/{float} for regression outputs, or a {str} path to a .json file containing a json dictionary in the structure produced by Label.postprocess(). - - Demos: main_note, titanic_survival - Guides: image-classification-in-pytorch, image-classification-in-tensorflow, image-classification-with-vision-transformers, building-a-pictionary-app - """ - - CONFIDENCES_KEY = "confidences" - - def __init__( - self, - value: dict[str, float] | str | float | Callable | None = None, - *, - num_top_classes: int | None = None, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - color: str | None = None, - **kwargs, - ): - """ - Parameters: - value: Default value to show in the component. If a str or number is provided, simply displays the string or number. If a {Dict[str, float]} of classes and confidences is provided, displays the top class on top and the `num_top_classes` below, along with their confidence bars. If callable, the function will be called whenever the app loads to set the initial value of the component. - num_top_classes: number of most confident classes to show. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - color: The background color of the label (either a valid css color name or hexadecimal string). - """ - self.num_top_classes = num_top_classes - self.color = color - self.select: EventListenerMethod - """ - Event listener for when the user selects a category from Label. - Uses event data gradio.SelectData to carry `value` referring to name of selected category, and `index` to refer to index. - See EventData documentation on how to use this event data. 
- """ - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - return { - "num_top_classes": self.num_top_classes, - "value": self.value, - "color": self.color, - "selectable": self.selectable, - **IOComponent.get_config(self), - } - - def postprocess(self, y: dict[str, float] | str | float | None) -> dict | None: - """ - Parameters: - y: a dictionary mapping labels to confidence value, or just a string/numerical label by itself - Returns: - Object with key 'label' representing primary label, and key 'confidences' representing a list of label-confidence pairs - """ - if y is None or y == {}: - return {} - if isinstance(y, str) and y.endswith(".json") and Path(y).exists(): - return self.serialize(y) - if isinstance(y, (str, float, int)): - return {"label": str(y)} - if isinstance(y, dict): - if "confidences" in y and isinstance(y["confidences"], dict): - y = y["confidences"] - y = {c["label"]: c["confidence"] for c in y} - sorted_pred = sorted(y.items(), key=operator.itemgetter(1), reverse=True) - if self.num_top_classes is not None: - sorted_pred = sorted_pred[: self.num_top_classes] - return { - "label": sorted_pred[0][0], - "confidences": [ - {"label": pred[0], "confidence": pred[1]} for pred in sorted_pred - ], - } - raise ValueError( - "The `Label` output interface expects one of: a string label, or an int label, a " - "float label, or a dictionary whose keys are labels and values are confidences. " - f"Instead, got a {type(y)}" - ) - - @staticmethod - def update( - value: dict[str, float] - | str - | float - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - color: str | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - ): - # If color is not specified (NO_VALUE) map it to None so that - # it gets filtered out in postprocess. This will mean the color - # will not be updated in the front-end - if color is _Keywords.NO_VALUE: - color = None - # If the color was specified by the developer as None - # Map is so that the color is updated to be transparent, - # e.g. no background default state. - elif color is None: - color = "transparent" - return { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "color": color, - "__type__": "update", - } - - def style( - self, - *, - container: bool | None = None, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. 
- """ - warn_style_method_deprecation() - if container is not None: - self.container = container - return self diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4.py deleted file mode 100644 index 48f23c0498c788ba2df7cb72ae846ea57ad2ef5b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_gtk4.py +++ /dev/null @@ -1,610 +0,0 @@ -import functools -import io -import os - -import matplotlib as mpl -from matplotlib import _api, backend_tools, cbook -from matplotlib.backend_bases import ( - ToolContainerBase, KeyEvent, LocationEvent, MouseEvent, ResizeEvent, - CloseEvent) - -try: - import gi -except ImportError as err: - raise ImportError("The GTK4 backends require PyGObject") from err - -try: - # :raises ValueError: If module/version is already loaded, already - # required, or unavailable. - gi.require_version("Gtk", "4.0") -except ValueError as e: - # in this case we want to re-raise as ImportError so the - # auto-backend selection logic correctly skips. - raise ImportError(e) from e - -from gi.repository import Gio, GLib, Gtk, Gdk, GdkPixbuf -from . import _backend_gtk -from ._backend_gtk import ( # noqa: F401 # pylint: disable=W0611 - _BackendGTK, _FigureCanvasGTK, _FigureManagerGTK, _NavigationToolbar2GTK, - TimerGTK as TimerGTK4, -) - - -class FigureCanvasGTK4(_FigureCanvasGTK, Gtk.DrawingArea): - required_interactive_framework = "gtk4" - supports_blit = False - manager_class = _api.classproperty(lambda cls: FigureManagerGTK4) - _context_is_scaled = False - - def __init__(self, figure=None): - super().__init__(figure=figure) - - self.set_hexpand(True) - self.set_vexpand(True) - - self._idle_draw_id = 0 - self._rubberband_rect = None - - self.set_draw_func(self._draw_func) - self.connect('resize', self.resize_event) - self.connect('notify::scale-factor', self._update_device_pixel_ratio) - - click = Gtk.GestureClick() - click.set_button(0) # All buttons. 
- click.connect('pressed', self.button_press_event) - click.connect('released', self.button_release_event) - self.add_controller(click) - - key = Gtk.EventControllerKey() - key.connect('key-pressed', self.key_press_event) - key.connect('key-released', self.key_release_event) - self.add_controller(key) - - motion = Gtk.EventControllerMotion() - motion.connect('motion', self.motion_notify_event) - motion.connect('enter', self.enter_notify_event) - motion.connect('leave', self.leave_notify_event) - self.add_controller(motion) - - scroll = Gtk.EventControllerScroll.new( - Gtk.EventControllerScrollFlags.VERTICAL) - scroll.connect('scroll', self.scroll_event) - self.add_controller(scroll) - - self.set_focusable(True) - - css = Gtk.CssProvider() - style = '.matplotlib-canvas { background-color: white; }' - if Gtk.check_version(4, 9, 3) is None: - css.load_from_data(style, -1) - else: - css.load_from_data(style.encode('utf-8')) - style_ctx = self.get_style_context() - style_ctx.add_provider(css, Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION) - style_ctx.add_class("matplotlib-canvas") - - def destroy(self): - CloseEvent("close_event", self)._process() - - def set_cursor(self, cursor): - # docstring inherited - self.set_cursor_from_name(_backend_gtk.mpl_to_gtk_cursor_name(cursor)) - - def _mpl_coords(self, xy=None): - """ - Convert the *xy* position of a GTK event, or of the current cursor - position if *xy* is None, to Matplotlib coordinates. - - GTK use logical pixels, but the figure is scaled to physical pixels for - rendering. Transform to physical pixels so that all of the down-stream - transforms work as expected. - - Also, the origin is different and needs to be corrected. - """ - if xy is None: - surface = self.get_native().get_surface() - is_over, x, y, mask = surface.get_device_position( - self.get_display().get_default_seat().get_pointer()) - else: - x, y = xy - x = x * self.device_pixel_ratio - # flip y so y=0 is bottom of canvas - y = self.figure.bbox.height - y * self.device_pixel_ratio - return x, y - - def scroll_event(self, controller, dx, dy): - MouseEvent( - "scroll_event", self, *self._mpl_coords(), step=dy, - modifiers=self._mpl_modifiers(controller), - )._process() - return True - - def button_press_event(self, controller, n_press, x, y): - MouseEvent( - "button_press_event", self, *self._mpl_coords((x, y)), - controller.get_current_button(), - modifiers=self._mpl_modifiers(controller), - )._process() - self.grab_focus() - - def button_release_event(self, controller, n_press, x, y): - MouseEvent( - "button_release_event", self, *self._mpl_coords((x, y)), - controller.get_current_button(), - modifiers=self._mpl_modifiers(controller), - )._process() - - def key_press_event(self, controller, keyval, keycode, state): - KeyEvent( - "key_press_event", self, self._get_key(keyval, keycode, state), - *self._mpl_coords(), - )._process() - return True - - def key_release_event(self, controller, keyval, keycode, state): - KeyEvent( - "key_release_event", self, self._get_key(keyval, keycode, state), - *self._mpl_coords(), - )._process() - return True - - def motion_notify_event(self, controller, x, y): - MouseEvent( - "motion_notify_event", self, *self._mpl_coords((x, y)), - modifiers=self._mpl_modifiers(controller), - )._process() - - def enter_notify_event(self, controller, x, y): - LocationEvent( - "figure_enter_event", self, *self._mpl_coords((x, y)), - modifiers=self._mpl_modifiers(), - )._process() - - def leave_notify_event(self, controller): - LocationEvent( - "figure_leave_event", self, 
*self._mpl_coords(), - modifiers=self._mpl_modifiers(), - )._process() - - def resize_event(self, area, width, height): - self._update_device_pixel_ratio() - dpi = self.figure.dpi - winch = width * self.device_pixel_ratio / dpi - hinch = height * self.device_pixel_ratio / dpi - self.figure.set_size_inches(winch, hinch, forward=False) - ResizeEvent("resize_event", self)._process() - self.draw_idle() - - def _mpl_modifiers(self, controller=None): - if controller is None: - surface = self.get_native().get_surface() - is_over, x, y, event_state = surface.get_device_position( - self.get_display().get_default_seat().get_pointer()) - else: - event_state = controller.get_current_event_state() - mod_table = [ - ("ctrl", Gdk.ModifierType.CONTROL_MASK), - ("alt", Gdk.ModifierType.ALT_MASK), - ("shift", Gdk.ModifierType.SHIFT_MASK), - ("super", Gdk.ModifierType.SUPER_MASK), - ] - return [name for name, mask in mod_table if event_state & mask] - - def _get_key(self, keyval, keycode, state): - unikey = chr(Gdk.keyval_to_unicode(keyval)) - key = cbook._unikey_or_keysym_to_mplkey( - unikey, - Gdk.keyval_name(keyval)) - modifiers = [ - ("ctrl", Gdk.ModifierType.CONTROL_MASK, "control"), - ("alt", Gdk.ModifierType.ALT_MASK, "alt"), - ("shift", Gdk.ModifierType.SHIFT_MASK, "shift"), - ("super", Gdk.ModifierType.SUPER_MASK, "super"), - ] - mods = [ - mod for mod, mask, mod_key in modifiers - if (mod_key != key and state & mask - and not (mod == "shift" and unikey.isprintable()))] - return "+".join([*mods, key]) - - def _update_device_pixel_ratio(self, *args, **kwargs): - # We need to be careful in cases with mixed resolution displays if - # device_pixel_ratio changes. - if self._set_device_pixel_ratio(self.get_scale_factor()): - self.draw() - - def _draw_rubberband(self, rect): - self._rubberband_rect = rect - # TODO: Only update the rubberband area. - self.queue_draw() - - def _draw_func(self, drawing_area, ctx, width, height): - self.on_draw_event(self, ctx) - self._post_draw(self, ctx) - - def _post_draw(self, widget, ctx): - if self._rubberband_rect is None: - return - - lw = 1 - dash = 3 - if not self._context_is_scaled: - x0, y0, w, h = (dim / self.device_pixel_ratio - for dim in self._rubberband_rect) - else: - x0, y0, w, h = self._rubberband_rect - lw *= self.device_pixel_ratio - dash *= self.device_pixel_ratio - x1 = x0 + w - y1 = y0 + h - - # Draw the lines from x0, y0 towards x1, y1 so that the - # dashes don't "jump" when moving the zoom box. 
- ctx.move_to(x0, y0) - ctx.line_to(x0, y1) - ctx.move_to(x0, y0) - ctx.line_to(x1, y0) - ctx.move_to(x0, y1) - ctx.line_to(x1, y1) - ctx.move_to(x1, y0) - ctx.line_to(x1, y1) - - ctx.set_antialias(1) - ctx.set_line_width(lw) - ctx.set_dash((dash, dash), 0) - ctx.set_source_rgb(0, 0, 0) - ctx.stroke_preserve() - - ctx.set_dash((dash, dash), dash) - ctx.set_source_rgb(1, 1, 1) - ctx.stroke() - - def on_draw_event(self, widget, ctx): - # to be overwritten by GTK4Agg or GTK4Cairo - pass - - def draw(self): - # docstring inherited - if self.is_drawable(): - self.queue_draw() - - def draw_idle(self): - # docstring inherited - if self._idle_draw_id != 0: - return - def idle_draw(*args): - try: - self.draw() - finally: - self._idle_draw_id = 0 - return False - self._idle_draw_id = GLib.idle_add(idle_draw) - - def flush_events(self): - # docstring inherited - context = GLib.MainContext.default() - while context.pending(): - context.iteration(True) - - -class NavigationToolbar2GTK4(_NavigationToolbar2GTK, Gtk.Box): - @_api.delete_parameter("3.6", "window") - def __init__(self, canvas, window=None): - self._win = window - Gtk.Box.__init__(self) - - self.add_css_class('toolbar') - - self._gtk_ids = {} - for text, tooltip_text, image_file, callback in self.toolitems: - if text is None: - self.append(Gtk.Separator()) - continue - image = Gtk.Image.new_from_gicon( - Gio.Icon.new_for_string( - str(cbook._get_data_path('images', - f'{image_file}-symbolic.svg')))) - self._gtk_ids[text] = button = ( - Gtk.ToggleButton() if callback in ['zoom', 'pan'] else - Gtk.Button()) - button.set_child(image) - button.add_css_class('flat') - button.add_css_class('image-button') - # Save the handler id, so that we can block it as needed. - button._signal_handler = button.connect( - 'clicked', getattr(self, callback)) - button.set_tooltip_text(tooltip_text) - self.append(button) - - # This filler item ensures the toolbar is always at least two text - # lines high. Otherwise the canvas gets redrawn as the mouse hovers - # over images because those use two-line messages which resize the - # toolbar. - label = Gtk.Label() - label.set_markup( - '\N{NO-BREAK SPACE}\n\N{NO-BREAK SPACE}') - label.set_hexpand(True) # Push real message to the right. - self.append(label) - - self.message = Gtk.Label() - self.message.set_justify(Gtk.Justification.RIGHT) - self.append(self.message) - - _NavigationToolbar2GTK.__init__(self, canvas) - - win = _api.deprecated("3.6")(property(lambda self: self._win)) - - def save_figure(self, *args): - dialog = Gtk.FileChooserNative( - title='Save the figure', - transient_for=self.canvas.get_root(), - action=Gtk.FileChooserAction.SAVE, - modal=True) - self._save_dialog = dialog # Must keep a reference. - - ff = Gtk.FileFilter() - ff.set_name('All files') - ff.add_pattern('*') - dialog.add_filter(ff) - dialog.set_filter(ff) - - formats = [] - default_format = None - for i, (name, fmts) in enumerate( - self.canvas.get_supported_filetypes_grouped().items()): - ff = Gtk.FileFilter() - ff.set_name(name) - for fmt in fmts: - ff.add_pattern(f'*.{fmt}') - dialog.add_filter(ff) - formats.append(name) - if self.canvas.get_default_filetype() in fmts: - default_format = i - # Setting the choice doesn't always work, so make sure the default - # format is first. 
- formats = [formats[default_format], *formats[:default_format], - *formats[default_format+1:]] - dialog.add_choice('format', 'File format', formats, formats) - dialog.set_choice('format', formats[default_format]) - - dialog.set_current_folder(Gio.File.new_for_path( - os.path.expanduser(mpl.rcParams['savefig.directory']))) - dialog.set_current_name(self.canvas.get_default_filename()) - - @functools.partial(dialog.connect, 'response') - def on_response(dialog, response): - file = dialog.get_file() - fmt = dialog.get_choice('format') - fmt = self.canvas.get_supported_filetypes_grouped()[fmt][0] - dialog.destroy() - self._save_dialog = None - if response != Gtk.ResponseType.ACCEPT: - return - # Save dir for next time, unless empty str (which means use cwd). - if mpl.rcParams['savefig.directory']: - parent = file.get_parent() - mpl.rcParams['savefig.directory'] = parent.get_path() - try: - self.canvas.figure.savefig(file.get_path(), format=fmt) - except Exception as e: - msg = Gtk.MessageDialog( - transient_for=self.canvas.get_root(), - message_type=Gtk.MessageType.ERROR, - buttons=Gtk.ButtonsType.OK, modal=True, - text=str(e)) - msg.show() - - dialog.show() - - -class ToolbarGTK4(ToolContainerBase, Gtk.Box): - _icon_extension = '-symbolic.svg' - - def __init__(self, toolmanager): - ToolContainerBase.__init__(self, toolmanager) - Gtk.Box.__init__(self) - self.set_property('orientation', Gtk.Orientation.HORIZONTAL) - - # Tool items are created later, but must appear before the message. - self._tool_box = Gtk.Box() - self.append(self._tool_box) - self._groups = {} - self._toolitems = {} - - # This filler item ensures the toolbar is always at least two text - # lines high. Otherwise the canvas gets redrawn as the mouse hovers - # over images because those use two-line messages which resize the - # toolbar. - label = Gtk.Label() - label.set_markup( - '\N{NO-BREAK SPACE}\n\N{NO-BREAK SPACE}') - label.set_hexpand(True) # Push real message to the right. 
- self.append(label) - - self._message = Gtk.Label() - self._message.set_justify(Gtk.Justification.RIGHT) - self.append(self._message) - - def add_toolitem(self, name, group, position, image_file, description, - toggle): - if toggle: - button = Gtk.ToggleButton() - else: - button = Gtk.Button() - button.set_label(name) - button.add_css_class('flat') - - if image_file is not None: - image = Gtk.Image.new_from_gicon( - Gio.Icon.new_for_string(image_file)) - button.set_child(image) - button.add_css_class('image-button') - - if position is None: - position = -1 - - self._add_button(button, group, position) - signal = button.connect('clicked', self._call_tool, name) - button.set_tooltip_text(description) - self._toolitems.setdefault(name, []) - self._toolitems[name].append((button, signal)) - - def _find_child_at_position(self, group, position): - children = [None] - child = self._groups[group].get_first_child() - while child is not None: - children.append(child) - child = child.get_next_sibling() - return children[position] - - def _add_button(self, button, group, position): - if group not in self._groups: - if self._groups: - self._add_separator() - group_box = Gtk.Box() - self._tool_box.append(group_box) - self._groups[group] = group_box - self._groups[group].insert_child_after( - button, self._find_child_at_position(group, position)) - - def _call_tool(self, btn, name): - self.trigger_tool(name) - - def toggle_toolitem(self, name, toggled): - if name not in self._toolitems: - return - for toolitem, signal in self._toolitems[name]: - toolitem.handler_block(signal) - toolitem.set_active(toggled) - toolitem.handler_unblock(signal) - - def remove_toolitem(self, name): - if name not in self._toolitems: - self.toolmanager.message_event(f'{name} not in toolbar', self) - return - - for group in self._groups: - for toolitem, _signal in self._toolitems[name]: - if toolitem in self._groups[group]: - self._groups[group].remove(toolitem) - del self._toolitems[name] - - def _add_separator(self): - sep = Gtk.Separator() - sep.set_property("orientation", Gtk.Orientation.VERTICAL) - self._tool_box.append(sep) - - def set_message(self, s): - self._message.set_label(s) - - -@backend_tools._register_tool_class(FigureCanvasGTK4) -class SaveFigureGTK4(backend_tools.SaveFigureBase): - def trigger(self, *args, **kwargs): - NavigationToolbar2GTK4.save_figure( - self._make_classic_style_pseudo_toolbar()) - - -@backend_tools._register_tool_class(FigureCanvasGTK4) -class HelpGTK4(backend_tools.ToolHelpBase): - def _normalize_shortcut(self, key): - """ - Convert Matplotlib key presses to GTK+ accelerator identifiers. - - Related to `FigureCanvasGTK4._get_key`. - """ - special = { - 'backspace': 'BackSpace', - 'pagedown': 'Page_Down', - 'pageup': 'Page_Up', - 'scroll_lock': 'Scroll_Lock', - } - - parts = key.split('+') - mods = ['<' + mod + '>' for mod in parts[:-1]] - key = parts[-1] - - if key in special: - key = special[key] - elif len(key) > 1: - key = key.capitalize() - elif key.isupper(): - mods += [''] - - return ''.join(mods) + key - - def _is_valid_shortcut(self, key): - """ - Check for a valid shortcut to be displayed. - - - GTK will never send 'cmd+' (see `FigureCanvasGTK4._get_key`). - - The shortcut window only shows keyboard shortcuts, not mouse buttons. 
- """ - return 'cmd+' not in key and not key.startswith('MouseButton.') - - def trigger(self, *args): - section = Gtk.ShortcutsSection() - - for name, tool in sorted(self.toolmanager.tools.items()): - if not tool.description: - continue - - # Putting everything in a separate group allows GTK to - # automatically split them into separate columns/pages, which is - # useful because we have lots of shortcuts, some with many keys - # that are very wide. - group = Gtk.ShortcutsGroup() - section.append(group) - # A hack to remove the title since we have no group naming. - child = group.get_first_child() - while child is not None: - child.set_visible(False) - child = child.get_next_sibling() - - shortcut = Gtk.ShortcutsShortcut( - accelerator=' '.join( - self._normalize_shortcut(key) - for key in self.toolmanager.get_tool_keymap(name) - if self._is_valid_shortcut(key)), - title=tool.name, - subtitle=tool.description) - group.append(shortcut) - - window = Gtk.ShortcutsWindow( - title='Help', - modal=True, - transient_for=self._figure.canvas.get_root()) - window.set_child(section) - - window.show() - - -@backend_tools._register_tool_class(FigureCanvasGTK4) -class ToolCopyToClipboardGTK4(backend_tools.ToolCopyToClipboardBase): - def trigger(self, *args, **kwargs): - with io.BytesIO() as f: - self.canvas.print_rgba(f) - w, h = self.canvas.get_width_height() - pb = GdkPixbuf.Pixbuf.new_from_data(f.getbuffer(), - GdkPixbuf.Colorspace.RGB, True, - 8, w, h, w*4) - clipboard = self.canvas.get_clipboard() - clipboard.set(pb) - - -backend_tools._register_tool_class( - FigureCanvasGTK4, _backend_gtk.ConfigureSubplotsGTK) -backend_tools._register_tool_class( - FigureCanvasGTK4, _backend_gtk.RubberbandGTK) -Toolbar = ToolbarGTK4 - - -class FigureManagerGTK4(_FigureManagerGTK): - _toolbar2_class = NavigationToolbar2GTK4 - _toolmanager_toolbar_class = ToolbarGTK4 - - -@_BackendGTK.export -class _BackendGTK4(_BackendGTK): - FigureCanvas = FigureCanvasGTK4 - FigureManager = FigureManagerGTK4 diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/hteyun.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/hteyun.py deleted file mode 100644 index a6eba7c00331d720afb47215e818f5900d4aedcf..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/hteyun.py +++ /dev/null @@ -1,34 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://hteyun.com' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': 'application/json, text/plain, */*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - 'Origin': 'https://hteyun.com', - 'Referer': 'https://hteyun.com/chat/', - } - data = { - 'messages': messages, - 'model': model, - 'systemMessage': 'You are ChatGPT, a large language model trained by OpenAI. Follow the user\'s instructions carefully. 
Respond using russian language.', - 'temperature': 0.7, - 'presence_penalty': 0, - } - response = requests.post(url + '/api/chat-stream', json=data, headers=headers, stream=True) - print(response.json()) - - # Извлечение текста из response - return response.json()['text'] - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/declare-lab/tango/audioldm/hifigan/models.py b/spaces/declare-lab/tango/audioldm/hifigan/models.py deleted file mode 100644 index c4382cc39de0463f9b7c0f33f037dbc233e7cb36..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/audioldm/hifigan/models.py +++ /dev/null @@ -1,174 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn.utils import weight_norm, remove_weight_norm - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -class ResBlock(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock, self).__init__() - self.h = h - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm( - Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3) - ) - resblock = ResBlock - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2**i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, 
d) in enumerate( - zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes) - ): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - # print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py deleted file mode 100644 index 69703fb8d82c20ea0288d2bf6f6aced2f741c1db..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py +++ /dev/null @@ -1,702 +0,0 @@ -import inspect -from itertools import repeat -from typing import Callable, List, Optional, Union - -import torch -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import logging, randn_tensor -from . import SemanticStableDiffusionPipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import SemanticStableDiffusionPipeline - - >>> pipe = SemanticStableDiffusionPipeline.from_pretrained( - ... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 - ... ) - >>> pipe = pipe.to("cuda") - - >>> out = pipe( - ... prompt="a photo of the face of a woman", - ... num_images_per_prompt=1, - ... guidance_scale=7, - ... editing_prompt=[ - ... "smiling, smile", # Concepts to apply - ... "glasses, wearing glasses", - ... "curls, wavy hair, curly hair", - ... "beard, full beard, mustache", - ... ], - ... reverse_editing_direction=[ - ... False, - ... False, - ... False, - ... False, - ... ], # Direction of guidance i.e. increase all concepts - ... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept - ... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept - ... edit_threshold=[ - ... 0.99, - ... 0.975, - ... 0.925, - ... 0.96, - ... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions - ... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance - ... edit_mom_beta=0.6, # Momentum beta - ... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other - ... 
) - >>> image = out.images[0] - ``` -""" - - -class SemanticStableDiffusionPipeline(DiffusionPipeline): - r""" - Pipeline for text-to-image generation with latent editing. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - This model builds on the implementation of ['StableDiffusionPipeline'] - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`Q16SafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
- ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: int = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - editing_prompt: Optional[Union[str, List[str]]] = None, - editing_prompt_embeddings: Optional[torch.Tensor] = None, - reverse_editing_direction: Optional[Union[bool, List[bool]]] = False, - edit_guidance_scale: Optional[Union[float, List[float]]] = 5, - edit_warmup_steps: Optional[Union[int, List[int]]] = 10, - edit_cooldown_steps: Optional[Union[int, List[int]]] = None, - edit_threshold: Optional[Union[float, List[float]]] = 0.9, - edit_momentum_scale: Optional[float] = 0.1, - edit_mom_beta: Optional[float] = 0.4, - edit_weights: Optional[List[float]] = None, - sem_guidance: Optional[List[torch.Tensor]] = None, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - editing_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to use for Semantic guidance. Semantic guidance is disabled by setting - `editing_prompt = None`. Guidance direction of prompt should be specified via - `reverse_editing_direction`. - editing_prompt_embeddings (`torch.Tensor>`, *optional*): - Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be - specified via `reverse_editing_direction`. - reverse_editing_direction (`bool` or `List[bool]`, *optional*, defaults to `False`): - Whether the corresponding prompt in `editing_prompt` should be increased or decreased. - edit_guidance_scale (`float` or `List[float]`, *optional*, defaults to 5): - Guidance scale for semantic guidance. If provided as list values should correspond to `editing_prompt`. - `edit_guidance_scale` is defined as `s_e` of equation 6 of [SEGA - Paper](https://arxiv.org/pdf/2301.12247.pdf). - edit_warmup_steps (`float` or `List[float]`, *optional*, defaults to 10): - Number of diffusion steps (for each prompt) for which semantic guidance will not be applied. Momentum - will still be calculated for those steps and applied once all warmup periods are over. - `edit_warmup_steps` is defined as `delta` (δ) of [SEGA Paper](https://arxiv.org/pdf/2301.12247.pdf). - edit_cooldown_steps (`float` or `List[float]`, *optional*, defaults to `None`): - Number of diffusion steps (for each prompt) after which semantic guidance will no longer be applied. 
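To make the `edit_*` arguments documented above and below concrete, here is a minimal usage sketch of this pipeline's semantic (SEGA) guidance. It assumes the class defined in this deleted file is reachable as `diffusers.SemanticStableDiffusionPipeline` and that a CUDA device is available; the model id, prompts and scale values are illustrative choices, not values taken from this file.

```python
import torch
from diffusers import SemanticStableDiffusionPipeline  # assumed import path

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="a photo of a castle in the mountains",
    num_inference_steps=50,
    guidance_scale=7.5,
    editing_prompt=["snow", "sunset lighting"],   # one concept per list entry
    reverse_editing_direction=[False, False],     # push towards (not away from) each concept
    edit_guidance_scale=[5.0, 5.0],               # s_e per concept (eq. 6 of the SEGA paper)
    edit_warmup_steps=[10, 10],                   # steps before a concept's guidance is applied
    edit_threshold=[0.9, 0.9],                    # keep only the top quantile of guidance values
    edit_momentum_scale=0.1,                      # s_m (eq. 7)
    edit_mom_beta=0.4,                            # beta_m (eq. 8)
)
out.images[0].save("castle_sega.png")
```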
- edit_threshold (`float` or `List[float]`, *optional*, defaults to 0.9): - Threshold of semantic guidance. - edit_momentum_scale (`float`, *optional*, defaults to 0.1): - Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0 - momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller - than `sld_warmup_steps`. Momentum will only be added to latent guidance once all warmup periods are - finished. `edit_momentum_scale` is defined as `s_m` of equation 7 of [SEGA - Paper](https://arxiv.org/pdf/2301.12247.pdf). - edit_mom_beta (`float`, *optional*, defaults to 0.4): - Defines how semantic guidance momentum builds up. `edit_mom_beta` indicates how much of the previous - momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller - than `edit_warmup_steps`. `edit_mom_beta` is defined as `beta_m` (β) of equation 8 of [SEGA - Paper](https://arxiv.org/pdf/2301.12247.pdf). - edit_weights (`List[float]`, *optional*, defaults to `None`): - Indicates how much each individual concept should influence the overall guidance. If no weights are - provided all concepts are applied equally. `edit_mom_beta` is defined as `g_i` of equation 9 of [SEGA - Paper](https://arxiv.org/pdf/2301.12247.pdf). - sem_guidance (`List[torch.Tensor]`, *optional*): - List of pre-generated guidance vectors to be applied at generation. Length of the list has to - correspond to `num_inference_steps`. - - Returns: - [`~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput`] if `return_dict` is True, - otherwise a `tuple. When returning a tuple, the first element is a list with the generated images, and the - second element is a list of `bool`s denoting whether the corresponding generated image likely represents - "not-safe-for-work" (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # 2. 
Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - - if editing_prompt: - enable_edit_guidance = True - if isinstance(editing_prompt, str): - editing_prompt = [editing_prompt] - enabled_editing_prompts = len(editing_prompt) - elif editing_prompt_embeddings is not None: - enable_edit_guidance = True - enabled_editing_prompts = editing_prompt_embeddings.shape[0] - else: - enabled_editing_prompts = 0 - enable_edit_guidance = False - - # get prompt text embeddings - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - - if text_input_ids.shape[-1] > self.tokenizer.model_max_length: - removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length] - text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - if enable_edit_guidance: - # get safety text embeddings - if editing_prompt_embeddings is None: - edit_concepts_input = self.tokenizer( - [x for item in editing_prompt for x in repeat(item, batch_size)], - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ) - - edit_concepts_input_ids = edit_concepts_input.input_ids - - if edit_concepts_input_ids.shape[-1] > self.tokenizer.model_max_length: - removed_text = self.tokenizer.batch_decode( - edit_concepts_input_ids[:, self.tokenizer.model_max_length :] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - edit_concepts_input_ids = edit_concepts_input_ids[:, : self.tokenizer.model_max_length] - edit_concepts = self.text_encoder(edit_concepts_input_ids.to(self.device))[0] - else: - edit_concepts = editing_prompt_embeddings.to(self.device).repeat(batch_size, 1, 1) - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed_edit, seq_len_edit, _ = edit_concepts.shape - edit_concepts = edit_concepts.repeat(1, num_images_per_prompt, 1) - edit_concepts = edit_concepts.view(bs_embed_edit * num_images_per_prompt, seq_len_edit, -1) - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." 
- ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - if enable_edit_guidance: - text_embeddings = torch.cat([uncond_embeddings, text_embeddings, edit_concepts]) - else: - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - # get the initial random noise unless the user supplied it - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=self.device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - text_embeddings.dtype, - self.device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. 
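The denoising loop that follows combines several noise estimates per step. As a reading aid, here is a compact paraphrase of what the loop below computes for a single edit concept, with the per-concept warmup/cooldown bookkeeping and the cross-concept weighting omitted; `w` is `guidance_scale`, and `s_e`, `thr`, `s_m`, `beta_m` are the `edit_*` arguments documented above.

```python
# Paraphrase of the loop below (one edit concept, simplified):
#   eps_uncond, eps_text, eps_edit = unet(cat([z_t] * 3), t, cat([uncond, text, edit])).chunk(3)
#   g_text = w * (eps_text - eps_uncond)                             # classifier-free guidance
#   g_edit = s_e * (eps_edit - eps_uncond)                           # semantic direction (sign flipped
#                                                                    # if reverse_editing_direction)
#   g_edit = where(|g_edit| >= quantile(|g_edit|, thr), g_edit, 0)   # edit_threshold masking
#   g_edit = g_edit + s_m * momentum                                 # edit_momentum_scale
#   momentum = beta_m * momentum + (1 - beta_m) * g_edit             # edit_mom_beta
#   eps = eps_uncond + g_text + g_edit                               # g_edit applied only after warmup
#   z_prev = scheduler.step(eps, t, z_t, **extra_step_kwargs).prev_sample
```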
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # Initialize edit_momentum to None - edit_momentum = None - - self.uncond_estimates = None - self.text_estimates = None - self.edit_estimates = None - self.sem_guidance = None - - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = ( - torch.cat([latents] * (2 + enabled_editing_prompts)) if do_classifier_free_guidance else latents - ) - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_out = noise_pred.chunk(2 + enabled_editing_prompts) # [b,4, 64, 64] - noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1] - noise_pred_edit_concepts = noise_pred_out[2:] - - # default text guidance - noise_guidance = guidance_scale * (noise_pred_text - noise_pred_uncond) - # noise_guidance = (noise_pred_text - noise_pred_edit_concepts[0]) - - if self.uncond_estimates is None: - self.uncond_estimates = torch.zeros((num_inference_steps + 1, *noise_pred_uncond.shape)) - self.uncond_estimates[i] = noise_pred_uncond.detach().cpu() - - if self.text_estimates is None: - self.text_estimates = torch.zeros((num_inference_steps + 1, *noise_pred_text.shape)) - self.text_estimates[i] = noise_pred_text.detach().cpu() - - if self.edit_estimates is None and enable_edit_guidance: - self.edit_estimates = torch.zeros( - (num_inference_steps + 1, len(noise_pred_edit_concepts), *noise_pred_edit_concepts[0].shape) - ) - - if self.sem_guidance is None: - self.sem_guidance = torch.zeros((num_inference_steps + 1, *noise_pred_text.shape)) - - if edit_momentum is None: - edit_momentum = torch.zeros_like(noise_guidance) - - if enable_edit_guidance: - concept_weights = torch.zeros( - (len(noise_pred_edit_concepts), noise_guidance.shape[0]), - device=self.device, - dtype=noise_guidance.dtype, - ) - noise_guidance_edit = torch.zeros( - (len(noise_pred_edit_concepts), *noise_guidance.shape), - device=self.device, - dtype=noise_guidance.dtype, - ) - # noise_guidance_edit = torch.zeros_like(noise_guidance) - warmup_inds = [] - for c, noise_pred_edit_concept in enumerate(noise_pred_edit_concepts): - self.edit_estimates[i, c] = noise_pred_edit_concept - if isinstance(edit_guidance_scale, list): - edit_guidance_scale_c = edit_guidance_scale[c] - else: - edit_guidance_scale_c = edit_guidance_scale - - if isinstance(edit_threshold, list): - edit_threshold_c = edit_threshold[c] - else: - edit_threshold_c = edit_threshold - if isinstance(reverse_editing_direction, list): - reverse_editing_direction_c = reverse_editing_direction[c] - else: - reverse_editing_direction_c = reverse_editing_direction - if edit_weights: - edit_weight_c = edit_weights[c] - else: - edit_weight_c = 1.0 - if isinstance(edit_warmup_steps, list): - edit_warmup_steps_c = edit_warmup_steps[c] - else: - edit_warmup_steps_c = edit_warmup_steps - - if isinstance(edit_cooldown_steps, list): - edit_cooldown_steps_c = edit_cooldown_steps[c] - elif edit_cooldown_steps is None: - edit_cooldown_steps_c = i + 1 - else: - edit_cooldown_steps_c = edit_cooldown_steps - if i >= edit_warmup_steps_c: - warmup_inds.append(c) - if i >= edit_cooldown_steps_c: - noise_guidance_edit[c, :, :, :, :] = torch.zeros_like(noise_pred_edit_concept) - continue - - noise_guidance_edit_tmp = 
noise_pred_edit_concept - noise_pred_uncond - # tmp_weights = (noise_pred_text - noise_pred_edit_concept).sum(dim=(1, 2, 3)) - tmp_weights = (noise_guidance - noise_pred_edit_concept).sum(dim=(1, 2, 3)) - - tmp_weights = torch.full_like(tmp_weights, edit_weight_c) # * (1 / enabled_editing_prompts) - if reverse_editing_direction_c: - noise_guidance_edit_tmp = noise_guidance_edit_tmp * -1 - concept_weights[c, :] = tmp_weights - - noise_guidance_edit_tmp = noise_guidance_edit_tmp * edit_guidance_scale_c - - # torch.quantile function expects float32 - if noise_guidance_edit_tmp.dtype == torch.float32: - tmp = torch.quantile( - torch.abs(noise_guidance_edit_tmp).flatten(start_dim=2), - edit_threshold_c, - dim=2, - keepdim=False, - ) - else: - tmp = torch.quantile( - torch.abs(noise_guidance_edit_tmp).flatten(start_dim=2).to(torch.float32), - edit_threshold_c, - dim=2, - keepdim=False, - ).to(noise_guidance_edit_tmp.dtype) - - noise_guidance_edit_tmp = torch.where( - torch.abs(noise_guidance_edit_tmp) >= tmp[:, :, None, None], - noise_guidance_edit_tmp, - torch.zeros_like(noise_guidance_edit_tmp), - ) - noise_guidance_edit[c, :, :, :, :] = noise_guidance_edit_tmp - - # noise_guidance_edit = noise_guidance_edit + noise_guidance_edit_tmp - - warmup_inds = torch.tensor(warmup_inds).to(self.device) - if len(noise_pred_edit_concepts) > warmup_inds.shape[0] > 0: - concept_weights = concept_weights.to("cpu") # Offload to cpu - noise_guidance_edit = noise_guidance_edit.to("cpu") - - concept_weights_tmp = torch.index_select(concept_weights.to(self.device), 0, warmup_inds) - concept_weights_tmp = torch.where( - concept_weights_tmp < 0, torch.zeros_like(concept_weights_tmp), concept_weights_tmp - ) - concept_weights_tmp = concept_weights_tmp / concept_weights_tmp.sum(dim=0) - # concept_weights_tmp = torch.nan_to_num(concept_weights_tmp) - - noise_guidance_edit_tmp = torch.index_select( - noise_guidance_edit.to(self.device), 0, warmup_inds - ) - noise_guidance_edit_tmp = torch.einsum( - "cb,cbijk->bijk", concept_weights_tmp, noise_guidance_edit_tmp - ) - noise_guidance_edit_tmp = noise_guidance_edit_tmp - noise_guidance = noise_guidance + noise_guidance_edit_tmp - - self.sem_guidance[i] = noise_guidance_edit_tmp.detach().cpu() - - del noise_guidance_edit_tmp - del concept_weights_tmp - concept_weights = concept_weights.to(self.device) - noise_guidance_edit = noise_guidance_edit.to(self.device) - - concept_weights = torch.where( - concept_weights < 0, torch.zeros_like(concept_weights), concept_weights - ) - - concept_weights = torch.nan_to_num(concept_weights) - - noise_guidance_edit = torch.einsum("cb,cbijk->bijk", concept_weights, noise_guidance_edit) - - noise_guidance_edit = noise_guidance_edit + edit_momentum_scale * edit_momentum - - edit_momentum = edit_mom_beta * edit_momentum + (1 - edit_mom_beta) * noise_guidance_edit - - if warmup_inds.shape[0] == len(noise_pred_edit_concepts): - noise_guidance = noise_guidance + noise_guidance_edit - self.sem_guidance[i] = noise_guidance_edit.detach().cpu() - - if sem_guidance is not None: - edit_guidance = sem_guidance[i].to(self.device) - noise_guidance = noise_guidance + edit_guidance - - noise_pred = noise_pred_uncond + noise_guidance - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. 
Post-processing - image = self.decode_latents(latents) - - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to( - self.device - ) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype) - ) - else: - has_nsfw_concept = None - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return SemanticStableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py deleted file mode 100644 index 46adb69671407174afeeb858ebc911e75b619d7d..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py +++ /dev/null @@ -1,1050 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import math -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import torch -from torch.nn import functional as F -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...loaders import TextualInversionLoaderMixin -from ...models import AutoencoderKL, UNet2DConditionModel -from ...models.attention_processor import Attention -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import is_accelerate_available, is_accelerate_version, logging, randn_tensor, replace_example_docstring -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> from diffusers import StableDiffusionAttendAndExcitePipeline - - >>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( - ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 - ... ).to("cuda") - - - >>> prompt = "a cat and a frog" - - >>> # use get_indices function to find out indices of the tokens you want to alter - >>> pipe.get_indices(prompt) - {0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} - - >>> token_indices = [2, 5] - >>> seed = 6141 - >>> generator = torch.Generator("cuda").manual_seed(seed) - - >>> images = pipe( - ... prompt=prompt, - ... token_indices=token_indices, - ... guidance_scale=7.5, - ... generator=generator, - ... num_inference_steps=50, - ... max_iter_to_alter=25, - ... 
).images - - >>> image = images[0] - >>> image.save(f"../images/{prompt}_{seed}.png") - ``` -""" - - -class AttentionStore: - @staticmethod - def get_empty_store(): - return {"down": [], "mid": [], "up": []} - - def __call__(self, attn, is_cross: bool, place_in_unet: str): - if self.cur_att_layer >= 0 and is_cross: - if attn.shape[1] == self.attn_res**2: - self.step_store[place_in_unet].append(attn) - - self.cur_att_layer += 1 - if self.cur_att_layer == self.num_att_layers: - self.cur_att_layer = 0 - self.between_steps() - - def between_steps(self): - self.attention_store = self.step_store - self.step_store = self.get_empty_store() - - def get_average_attention(self): - average_attention = self.attention_store - return average_attention - - def aggregate_attention(self, from_where: List[str]) -> torch.Tensor: - """Aggregates the attention across the different layers and heads at the specified resolution.""" - out = [] - attention_maps = self.get_average_attention() - for location in from_where: - for item in attention_maps[location]: - cross_maps = item.reshape(-1, self.attn_res, self.attn_res, item.shape[-1]) - out.append(cross_maps) - out = torch.cat(out, dim=0) - out = out.sum(0) / out.shape[0] - return out - - def reset(self): - self.cur_att_layer = 0 - self.step_store = self.get_empty_store() - self.attention_store = {} - - def __init__(self, attn_res=16): - """ - Initialize an empty AttentionStore :param step_index: used to visualize only a specific step in the diffusion - process - """ - self.num_att_layers = -1 - self.cur_att_layer = 0 - self.step_store = self.get_empty_store() - self.attention_store = {} - self.curr_step_index = 0 - self.attn_res = attn_res - - -class AttendExciteAttnProcessor: - def __init__(self, attnstore, place_in_unet): - super().__init__() - self.attnstore = attnstore - self.place_in_unet = place_in_unet - - def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None): - batch_size, sequence_length, _ = hidden_states.shape - attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size) - - query = attn.to_q(hidden_states) - - is_cross = encoder_hidden_states is not None - encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states - key = attn.to_k(encoder_hidden_states) - value = attn.to_v(encoder_hidden_states) - - query = attn.head_to_batch_dim(query) - key = attn.head_to_batch_dim(key) - value = attn.head_to_batch_dim(value) - - attention_probs = attn.get_attention_scores(query, key, attention_mask) - - # only need to store attention maps during the Attend and Excite process - if attention_probs.requires_grad: - self.attnstore(attention_probs, is_cross, self.place_in_unet) - - hidden_states = torch.bmm(attention_probs, value) - hidden_states = attn.batch_to_head_dim(hidden_states) - - # linear proj - hidden_states = attn.to_out[0](hidden_states) - # dropout - hidden_states = attn.to_out[1](hidden_states) - - return hidden_states - - -class StableDiffusionAttendAndExcitePipeline(DiffusionPipeline, TextualInversionLoaderMixin): - r""" - Pipeline for text-to-image generation using Stable Diffusion and Attend and Excite. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
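Before the constructor arguments listed below, a short self-contained illustration of the quantity the two helper classes above exist to expose: an aggregated cross-attention map from which a per-token maximum, and hence the attend-and-excite loss, can be read off. The tensor here is random stand-in data; the real pipeline obtains it from `AttentionStore.aggregate_attention` after a UNet forward pass, and additionally Gaussian-smooths each token's map before taking the maximum.

```python
import torch

# Stand-in for store.aggregate_attention(("up", "down", "mid")):
# one (attn_res, attn_res, num_tokens) map, averaged over layers and heads.
attn_res, num_tokens = 16, 77
maps = torch.rand(attn_res, attn_res, num_tokens)

token_indices = [2, 5]                       # prompt tokens to strengthen, e.g. "cat" and "frog"
text = maps[:, :, 1:-1] * 100                # drop BOS/EOS and rescale, as in the pipeline
text = torch.nn.functional.softmax(text, dim=-1)

max_per_token = [text[:, :, i - 1].max() for i in token_indices]   # shift by -1: BOS was removed
loss = max(max(0.0, 1.0 - float(m)) for m in max_per_token)        # attend-and-excite loss
# During the first max_iter_to_alter steps the pipeline nudges the latents along
# -d(loss)/d(latents), scaled by step_size, to boost the weakest token's attention.
```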
- - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. 
- """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.14.0"): - from accelerate import cpu_offload - else: - raise ImportError("`enable_sequential_cpu_offload` requires `accelerate v0.14.0` or higher") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. 
Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - indices, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - indices_is_list_ints = isinstance(indices, list) and isinstance(indices[0], int) - indices_is_list_list_ints = ( - isinstance(indices, list) and isinstance(indices[0], list) and isinstance(indices[0][0], int) - ) - - if not indices_is_list_ints and not indices_is_list_list_ints: - raise TypeError("`indices` must be a list of ints or a list of a list of ints") - - if indices_is_list_ints: - indices_batch_size = 1 - elif indices_is_list_list_ints: - indices_batch_size = len(indices) - - if prompt is not None and isinstance(prompt, str): - prompt_batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - prompt_batch_size = len(prompt) - elif prompt_embeds is not None: - prompt_batch_size = prompt_embeds.shape[0] - - if indices_batch_size != prompt_batch_size: - raise ValueError( - f"indices batch size must be same as prompt batch size. 
indices batch size: {indices_batch_size}, prompt batch size: {prompt_batch_size}" - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @staticmethod - def _compute_max_attention_per_index( - attention_maps: torch.Tensor, - indices: List[int], - ) -> List[torch.Tensor]: - """Computes the maximum attention value for each of the tokens we wish to alter.""" - attention_for_text = attention_maps[:, :, 1:-1] - attention_for_text *= 100 - attention_for_text = torch.nn.functional.softmax(attention_for_text, dim=-1) - - # Shift indices since we removed the first token - indices = [index - 1 for index in indices] - - # Extract the maximum values - max_indices_list = [] - for i in indices: - image = attention_for_text[:, :, i] - smoothing = GaussianSmoothing().to(attention_maps.device) - input = F.pad(image.unsqueeze(0).unsqueeze(0), (1, 1, 1, 1), mode="reflect") - image = smoothing(input).squeeze(0).squeeze(0) - max_indices_list.append(image.max()) - return max_indices_list - - def _aggregate_and_get_max_attention_per_token( - self, - indices: List[int], - ): - """Aggregates the attention for each token and computes the max activation value for each token to alter.""" - attention_maps = self.attention_store.aggregate_attention( - from_where=("up", "down", "mid"), - ) - max_attention_per_index = self._compute_max_attention_per_index( - attention_maps=attention_maps, - indices=indices, - ) - return max_attention_per_index - - @staticmethod - def _compute_loss(max_attention_per_index: List[torch.Tensor]) -> torch.Tensor: - """Computes the attend-and-excite loss using the maximum attention value for each token.""" - losses = [max(0, 1.0 - curr_max) for curr_max in max_attention_per_index] - loss = max(losses) - return loss - - @staticmethod - def _update_latent(latents: torch.Tensor, loss: torch.Tensor, step_size: float) -> torch.Tensor: - """Update the latent according to the computed loss.""" - grad_cond = torch.autograd.grad(loss.requires_grad_(True), [latents], retain_graph=True)[0] - latents = latents - step_size * grad_cond - return latents - - def _perform_iterative_refinement_step( - self, - latents: torch.Tensor, - indices: List[int], - loss: torch.Tensor, - threshold: float, - text_embeddings: torch.Tensor, - step_size: float, - t: int, - max_refinement_steps: int = 20, - ): - """ - Performs the iterative latent refinement introduced in the paper. Here, we continuously update the latent code - according to our loss objective until the given threshold is reached for all tokens. 
- """ - iteration = 0 - target_loss = max(0, 1.0 - threshold) - while loss > target_loss: - iteration += 1 - - latents = latents.clone().detach().requires_grad_(True) - self.unet(latents, t, encoder_hidden_states=text_embeddings).sample - self.unet.zero_grad() - - # Get max activation value for each subject token - max_attention_per_index = self._aggregate_and_get_max_attention_per_token( - indices=indices, - ) - - loss = self._compute_loss(max_attention_per_index) - - if loss != 0: - latents = self._update_latent(latents, loss, step_size) - - logger.info(f"\t Try {iteration}. loss: {loss}") - - if iteration >= max_refinement_steps: - logger.info(f"\t Exceeded max number of iterations ({max_refinement_steps})! ") - break - - # Run one more time but don't compute gradients and update the latents. - # We just need to compute the new loss - the grad update will occur below - latents = latents.clone().detach().requires_grad_(True) - _ = self.unet(latents, t, encoder_hidden_states=text_embeddings).sample - self.unet.zero_grad() - - # Get max activation value for each subject token - max_attention_per_index = self._aggregate_and_get_max_attention_per_token( - indices=indices, - ) - loss = self._compute_loss(max_attention_per_index) - logger.info(f"\t Finished with loss of: {loss}") - return loss, latents, max_attention_per_index - - def register_attention_control(self): - attn_procs = {} - cross_att_count = 0 - for name in self.unet.attn_processors.keys(): - if name.startswith("mid_block"): - place_in_unet = "mid" - elif name.startswith("up_blocks"): - place_in_unet = "up" - elif name.startswith("down_blocks"): - place_in_unet = "down" - else: - continue - - cross_att_count += 1 - attn_procs[name] = AttendExciteAttnProcessor(attnstore=self.attention_store, place_in_unet=place_in_unet) - - self.unet.set_attn_processor(attn_procs) - self.attention_store.num_att_layers = cross_att_count - - def get_indices(self, prompt: str) -> Dict[str, int]: - """Utility function to list the indices of the tokens you wish to alte""" - ids = self.tokenizer(prompt).input_ids - indices = {i: tok for tok, i in zip(self.tokenizer.convert_ids_to_tokens(ids), range(len(ids)))} - return indices - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]], - token_indices: Union[List[int], List[List[int]]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: int = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - max_iter_to_alter: int = 25, - thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8}, - scale_factor: int = 20, - attn_res: int = 16, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - token_indices (`List[int]`): - The token indices to alter with attend-and-excite. 
- height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. 
- cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - max_iter_to_alter (`int`, *optional*, defaults to `25`): - Number of denoising steps to apply attend-and-excite. The first denoising steps are - where the attend-and-excite is applied. I.e. if `max_iter_to_alter` is 25 and there are a total of `30` - denoising steps, the first 25 denoising steps will apply attend-and-excite and the last 5 will not - apply attend-and-excite. - thresholds (`dict`, *optional*, defaults to `{0: 0.05, 10: 0.5, 20: 0.8}`): - Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. - scale_factor (`int`, *optional*, default to 20): - Scale factor that controls the step size of each Attend and Excite update. - attn_res (`int`, *optional*, default to 16): - The resolution of most semantic attention map. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. :type attention_store: object - """ - - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - token_indices, - height, - width, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. 
TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - self.attention_store = AttentionStore(attn_res=attn_res) - self.register_attention_control() - - # default config for step size from original repo - scale_range = np.linspace(1.0, 0.5, len(self.scheduler.timesteps)) - step_size = scale_factor * np.sqrt(scale_range) - - text_embeddings = ( - prompt_embeds[batch_size * num_images_per_prompt :] if do_classifier_free_guidance else prompt_embeds - ) - - if isinstance(token_indices[0], int): - token_indices = [token_indices] - - indices = [] - - for ind in token_indices: - indices = indices + [ind] * num_images_per_prompt - - # 7. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # Attend and excite process - with torch.enable_grad(): - latents = latents.clone().detach().requires_grad_(True) - updated_latents = [] - for latent, index, text_embedding in zip(latents, indices, text_embeddings): - # Forward pass of denoising with text conditioning - latent = latent.unsqueeze(0) - text_embedding = text_embedding.unsqueeze(0) - - self.unet( - latent, - t, - encoder_hidden_states=text_embedding, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - self.unet.zero_grad() - - # Get max activation value for each subject token - max_attention_per_index = self._aggregate_and_get_max_attention_per_token( - indices=index, - ) - - loss = self._compute_loss(max_attention_per_index=max_attention_per_index) - - # If this is an iterative refinement step, verify we have reached the desired threshold for all - if i in thresholds.keys() and loss > 1.0 - thresholds[i]: - loss, latent, max_attention_per_index = self._perform_iterative_refinement_step( - latents=latent, - indices=index, - loss=loss, - threshold=thresholds[i], - text_embeddings=text_embedding, - step_size=step_size[i], - t=t, - ) - - # Perform gradient update - if i < max_iter_to_alter: - if loss != 0: - latent = self._update_latent( - latents=latent, - loss=loss, - step_size=step_size[i], - ) - logger.info(f"Iteration {i} | Loss: {loss:0.4f}") - - updated_latents.append(latent) - - latents = torch.cat(updated_latents, dim=0) - - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) - - -class GaussianSmoothing(torch.nn.Module): - """ - Arguments: - Apply gaussian smoothing on a 1d, 2d or 3d tensor. Filtering is performed seperately for each channel in the input - using a depthwise convolution. - channels (int, sequence): Number of channels of the input tensors. Output will - have this number of channels as well. - kernel_size (int, sequence): Size of the gaussian kernel. sigma (float, sequence): Standard deviation of the - gaussian kernel. dim (int, optional): The number of dimensions of the data. - Default value is 2 (spatial). - """ - - # channels=1, kernel_size=kernel_size, sigma=sigma, dim=2 - def __init__( - self, - channels: int = 1, - kernel_size: int = 3, - sigma: float = 0.5, - dim: int = 2, - ): - super().__init__() - - if isinstance(kernel_size, int): - kernel_size = [kernel_size] * dim - if isinstance(sigma, float): - sigma = [sigma] * dim - - # The gaussian kernel is the product of the - # gaussian function of each dimension. - kernel = 1 - meshgrids = torch.meshgrid([torch.arange(size, dtype=torch.float32) for size in kernel_size]) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= 1 / (std * math.sqrt(2 * math.pi)) * torch.exp(-(((mgrid - mean) / (2 * std)) ** 2)) - - # Make sure sum of values in gaussian kernel equals 1. - kernel = kernel / torch.sum(kernel) - - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) - - self.register_buffer("weight", kernel) - self.groups = channels - - if dim == 1: - self.conv = F.conv1d - elif dim == 2: - self.conv = F.conv2d - elif dim == 3: - self.conv = F.conv3d - else: - raise RuntimeError("Only 1, 2 and 3 dimensions are supported. Received {}.".format(dim)) - - def forward(self, input): - """ - Arguments: - Apply gaussian filter to input. - input (torch.Tensor): Input to apply gaussian filter on. - Returns: - filtered (torch.Tensor): Filtered output. - """ - return self.conv(input, weight=self.weight.to(input.dtype), groups=self.groups) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/stable_unclip_image_normalizer.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/stable_unclip_image_normalizer.py deleted file mode 100644 index 7362df7e80e72719133f1804600a618fe161f668..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/stable_unclip_image_normalizer.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
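A quick aside on the `GaussianSmoothing` module defined just above: its depthwise convolution uses no padding, so the output loses `kernel_size - 1` pixels per spatial dimension unless the caller pads first. The sketch below is a minimal, hypothetical check of that behaviour, assuming the class exactly as written; the reflect padding mirrors how an attention map would typically be kept at its original resolution before smoothing, and the tensor contents are arbitrary.

```python
import torch
import torch.nn.functional as F

# Minimal sketch: smooth a fake 16x16 attention map with the GaussianSmoothing
# module defined above (channels=1, kernel_size=3, sigma=0.5, dim=2 are its defaults).
smoothing = GaussianSmoothing(channels=1, kernel_size=3, sigma=0.5, dim=2)

attn_map = torch.rand(1, 1, 16, 16)                      # (batch, channels, H, W); values are illustrative only
padded = F.pad(attn_map, (1, 1, 1, 1), mode="reflect")   # pad by kernel_size // 2 on each side
smoothed = smoothing(padded)                              # valid conv shrinks the padded map back to 16x16

assert smoothed.shape == attn_map.shape
```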
- -from typing import Optional, Union - -import torch -from torch import nn - -from ...configuration_utils import ConfigMixin, register_to_config -from ...models.modeling_utils import ModelMixin - - -class StableUnCLIPImageNormalizer(ModelMixin, ConfigMixin): - """ - This class is used to hold the mean and standard deviation of the CLIP embedder used in stable unCLIP. - - It is used to normalize the image embeddings before the noise is applied and un-normalize the noised image - embeddings. - """ - - @register_to_config - def __init__( - self, - embedding_dim: int = 768, - ): - super().__init__() - - self.mean = nn.Parameter(torch.zeros(1, embedding_dim)) - self.std = nn.Parameter(torch.ones(1, embedding_dim)) - - def to( - self, - torch_device: Optional[Union[str, torch.device]] = None, - torch_dtype: Optional[torch.dtype] = None, - ): - self.mean = nn.Parameter(self.mean.to(torch_device).to(torch_dtype)) - self.std = nn.Parameter(self.std.to(torch_device).to(torch_dtype)) - return self - - def scale(self, embeds): - embeds = (embeds - self.mean) * 1.0 / self.std - return embeds - - def unscale(self, embeds): - embeds = (embeds * self.std) + self.mean - return embeds diff --git a/spaces/deelerb/3dselfie/README.md b/spaces/deelerb/3dselfie/README.md deleted file mode 100644 index 53e34df08da55169377741bd2e7676843237d810..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: PIFu Clothed Human Digitization -emoji: "🧍🏽‍♀️🧍🏻🧍🏽‍♂️\_" -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.2 -app_file: ./PIFu/spaces.py -pinned: false -python_version: 3.7.13 -duplicated_from: radames/PIFu-Clothed-Human-Digitization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/deepskyreal/ai-mixer-hotchpotch/comic_style/face_detection.py b/spaces/deepskyreal/ai-mixer-hotchpotch/comic_style/face_detection.py deleted file mode 100644 index 9a202a511900dfbd2c25816081e468a652b538d9..0000000000000000000000000000000000000000 --- a/spaces/deepskyreal/ai-mixer-hotchpotch/comic_style/face_detection.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) 2021 Justin Pinkney - -import cv2 -import dlib -import numpy as np -from PIL import Image -from PIL import ImageOps -from scipy.ndimage import gaussian_filter - -MODEL_PATH = "comic_style/shape_predictor_5_face_landmarks.dat" -detector = dlib.get_frontal_face_detector() - - -def align(image_in, face_index=0, output_size=256): - try: - image_in = ImageOps.exif_transpose(image_in) - except: - print("exif problem, not rotating") - - landmarks = list(get_landmarks(image_in)) - n_faces = len(landmarks) - face_index = min(n_faces - 1, face_index) - if n_faces == 0: - aligned_image = image_in - quad = None - else: - aligned_image, quad = image_align(image_in, landmarks[face_index], output_size=output_size) - - return aligned_image, n_faces, quad - - -def composite_images(quad, img, output): - """Composite an image into and output canvas according to transformed co-ords""" - output = output.convert("RGBA") - img = img.convert("RGBA") - input_size = img.size - src = np.array(((0, 0), (0, input_size[1]), input_size, (input_size[0], 0)), dtype=np.float32) - dst = np.float32(quad) - mtx = cv2.getPerspectiveTransform(dst, src) - img = img.transform(output.size, Image.PERSPECTIVE, mtx.flatten(), Image.BILINEAR) - output.alpha_composite(img) - - return output.convert("RGB") - - -def get_landmarks(image): - """Get landmarks from PIL image""" - 
shape_predictor = dlib.shape_predictor(MODEL_PATH) - - max_size = max(image.size) - reduction_scale = int(max_size / 512) - if reduction_scale == 0: - reduction_scale = 1 - downscaled = image.reduce(reduction_scale) - img = np.array(downscaled) - detections = detector(img, 0) - - for detection in detections: - try: - face_landmarks = [(reduction_scale * item.x, reduction_scale * item.y) for item in - shape_predictor(img, detection).parts()] - yield face_landmarks - except Exception as e: - print(e) - - -def image_align(src_img, face_landmarks, output_size=512, transform_size=2048, enable_padding=True, x_scale=1, y_scale=1, - em_scale=0.1, alpha=False): - # Align function modified from ffhq-dataset - # See https://github.com/NVlabs/ffhq-dataset for license - - lm = np.array(face_landmarks) - lm_eye_left = lm[2:3] # left-clockwise - lm_eye_right = lm[0:1] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = 0.71 * (eye_right - eye_left) - mouth_avg = lm[4] - eye_to_mouth = 1.35 * (mouth_avg - eye_avg) - - # Choose oriented crop rectangle. - x = eye_to_eye.copy() - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - x *= x_scale - y = np.flipud(x) * [-y_scale, y_scale] - c = eye_avg + eye_to_mouth * em_scale - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - quad_orig = quad.copy() - qsize = np.hypot(*x) * 2 - - img = src_img.convert('RGBA').convert('RGB') - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - crop = ( - max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))), - int(np.ceil(max(quad[:, 1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), - max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]), - 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3])) - blur = qsize * 0.02 - img += (gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0) - img = np.uint8(np.clip(np.rint(img), 0, 255)) - if alpha: - mask = 1 - np.clip(3.0 * mask, 0.0, 1.0) - mask = np.uint8(np.clip(np.rint(mask * 255), 0, 255)) - img = np.concatenate((img, mask), axis=2) - img = Image.fromarray(img, 'RGBA') - else: - img = Image.fromarray(img, 'RGB') - quad += pad[:2] - - # Transform. 
- img = img.transform((transform_size, transform_size), Image.QUAD, (quad + 0.5).flatten(), Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), Image.ANTIALIAS) - - return img, quad_orig diff --git a/spaces/dhanilka/illusion-image-ai/illusion_style.py b/spaces/dhanilka/illusion-image-ai/illusion_style.py deleted file mode 100644 index 54a3614533167bcee0d4ba77c2f07294c1ed1690..0000000000000000000000000000000000000000 --- a/spaces/dhanilka/illusion-image-ai/illusion_style.py +++ /dev/null @@ -1,10 +0,0 @@ -css=''' -#share-btn-container {padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; max-width: 13rem; margin-left: auto;} -div#share-btn-container > div {flex-direction: row;background: black;align-items: center} -#share-btn-container:hover {background-color: #060606} -#share-btn {all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.5rem !important; padding-bottom: 0.5rem !important;right:0;} -#share-btn * {all: unset} -#share-btn-container div:nth-child(-n+2){width: auto !important;min-height: 0px !important;} -#share-btn-container .wrap {display: none !important} -#share-btn-container.hidden {display: none!important} -''' \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Password.txt - 0.01 KB.rar _TOP_.md b/spaces/diacanFperku/AutoGPT/Password.txt - 0.01 KB.rar _TOP_.md deleted file mode 100644 index 02a5039a0e8067e9c3160f674848a361149c5913..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Password.txt - 0.01 KB.rar _TOP_.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Password.txt - 0.01 KB.rar


      Download Filehttps://gohhs.com/2uFV9g



      -
      - 3cee63e6c2
      -
      -
      -

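Looking back at `comic_style/face_detection.py` above: `align()` and `composite_images()` are meant to be used as a round trip — crop and align a face, process the aligned crop, then paste it back onto the original canvas via the returned `quad`. The sketch below is a minimal, hypothetical usage of those two helpers; the file names are placeholders, the "stylization" step is a no-op stand-in, and it assumes the dlib 5-point landmark model exists at `MODEL_PATH`.

```python
from PIL import Image

# Hypothetical round trip with align() / composite_images() from comic_style/face_detection.py above.
portrait = Image.open("portrait.jpg")                        # illustrative input path

aligned, n_faces, quad = align(portrait, face_index=0, output_size=256)
if n_faces > 0:
    stylized = aligned                                        # placeholder for an actual stylization step
    result = composite_images(quad, stylized, portrait)       # paste the processed crop back using the quad
    result.save("portrait_composited.jpg")
else:
    print("no face detected, nothing to composite")
```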
      diff --git a/spaces/dineshreddy/WALT/configs/_base_/schedules/schedule_1x.py b/spaces/dineshreddy/WALT/configs/_base_/schedules/schedule_1x.py deleted file mode 100644 index 13b3783cbbe93b6c32bc415dc50f633dffa4aec7..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/configs/_base_/schedules/schedule_1x.py +++ /dev/null @@ -1,11 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) diff --git a/spaces/divilis/chatgpt/llama_func.py b/spaces/divilis/chatgpt/llama_func.py deleted file mode 100644 index c71027dd4e6f99c0c12626cbbf276f407877be04..0000000000000000000000000000000000000000 --- a/spaces/divilis/chatgpt/llama_func.py +++ /dev/null @@ -1,192 +0,0 @@ -import os -import logging - -from llama_index import GPTSimpleVectorIndex -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -from langchain.llms import OpenAI -import colorama - - -from presets import * -from utils import * - - -def get_documents(file_src): - documents = [] - index_name = "" - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - logging.debug(f"file: {file.name}") - index_name += file.name - if os.path.splitext(file.name)[1] == ".pdf": - logging.debug("Loading PDF...") - CJKPDFReader = download_loader("CJKPDFReader") - loader = CJKPDFReader() - documents += loader.load_data(file=file.name) - elif os.path.splitext(file.name)[1] == ".docx": - logging.debug("Loading DOCX...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - documents += loader.load_data(file=file.name) - elif os.path.splitext(file.name)[1] == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - documents += loader.load_data(file=file.name) - else: - logging.debug("Loading text file...") - with open(file.name, "r", encoding="utf-8") as f: - text = add_space(f.read()) - documents += [Document(text)] - index_name = sha1sum(index_name) - return documents, index_name - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=1, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" ", - num_children=10, - max_keywords_per_chunk=10, -): - os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=OpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper( - max_input_size, - num_outputs, - max_chunk_overlap, - embedding_limit, - chunk_size_limit, - separator=separator, - ) - documents, index_name = get_documents(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - logging.debug("构建索引中……") - index = GPTSimpleVectorIndex( - documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper - ) - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - return index - 
except Exception as e: - print(e) - return None - - -def chat_ai( - api_key, - index, - question, - context, - chatbot, -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.info(f"Question: {question}") - - response, chatbot_display, status_text = ask_ai( - api_key, - index, - question, - replace_today(PROMPT_TEMPLATE), - REFINE_TEMPLATE, - SIM_K, - INDEX_QUERY_TEMPRATURE, - context, - ) - if response is None: - status_text = "查询失败,请换个问法试试" - return context, chatbot - response = response - - context.append({"role": "user", "content": question}) - context.append({"role": "assistant", "content": response}) - chatbot.append((question, chatbot_display)) - - os.environ["OPENAI_API_KEY"] = "" - return context, chatbot, status_text - - -def ask_ai( - api_key, - index, - question, - prompt_tmpl, - refine_tmpl, - sim_k=1, - temprature=0, - prefix_messages=[], -): - os.environ["OPENAI_API_KEY"] = api_key - - logging.debug("Index file found") - logging.debug("Querying index...") - llm_predictor = LLMPredictor( - llm=OpenAI( - temperature=temprature, - model_name="gpt-3.5-turbo-0301", - prefix_messages=prefix_messages, - ) - ) - - response = None # Initialize response variable to avoid UnboundLocalError - qa_prompt = QuestionAnswerPrompt(prompt_tmpl) - rf_prompt = RefinePrompt(refine_tmpl) - response = index.query( - question, - llm_predictor=llm_predictor, - similarity_top_k=sim_k, - text_qa_template=qa_prompt, - refine_template=rf_prompt, - response_mode="compact", - ) - - if response is not None: - logging.info(f"Response: {response}") - ret_text = response.response - nodes = [] - for index, node in enumerate(response.source_nodes): - brief = node.source_text[:25].replace("\n", "") - nodes.append( - f"
      [{index+1}]\t{brief}...

      {node.source_text}

      " - ) - new_response = ret_text + "\n----------\n" + "\n\n".join(nodes) - logging.info( - f"Response: {colorama.Fore.BLUE}{ret_text}{colorama.Style.RESET_ALL}" - ) - os.environ["OPENAI_API_KEY"] = "" - return ret_text, new_response, f"查询消耗了{llm_predictor.last_token_usage} tokens" - else: - logging.warning("No response found, returning None") - os.environ["OPENAI_API_KEY"] = "" - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/dolceschokolade/chatbot-mini/utils/app/codeblock.ts b/spaces/dolceschokolade/chatbot-mini/utils/app/codeblock.ts deleted file mode 100644 index d28c8aa97bd045cf8711c2e2284aa3aee035c453..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/utils/app/codeblock.ts +++ /dev/null @@ -1,39 +0,0 @@ -interface languageMap { - [key: string]: string | undefined; -} - -export const programmingLanguages: languageMap = { - javascript: '.js', - python: '.py', - java: '.java', - c: '.c', - cpp: '.cpp', - 'c++': '.cpp', - 'c#': '.cs', - ruby: '.rb', - php: '.php', - swift: '.swift', - 'objective-c': '.m', - kotlin: '.kt', - typescript: '.ts', - go: '.go', - perl: '.pl', - rust: '.rs', - scala: '.scala', - haskell: '.hs', - lua: '.lua', - shell: '.sh', - sql: '.sql', - html: '.html', - css: '.css', - // add more file extensions here, make sure the key is same as language prop in CodeBlock.tsx component -}; - -export const generateRandomString = (length: number, lowercase = false) => { - const chars = 'ABCDEFGHJKLMNPQRSTUVWXY3456789'; // excluding similar looking characters like Z, 2, I, 1, O, 0 - let result = ''; - for (let i = 0; i < length; i++) { - result += chars.charAt(Math.floor(Math.random() * chars.length)); - } - return lowercase ? result.toLowerCase() : result; -}; diff --git a/spaces/dorkai/ChatUIPro/next.config.js b/spaces/dorkai/ChatUIPro/next.config.js deleted file mode 100644 index 161c84c4cf3b29b8906163dcd729acbc94d5361f..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/next.config.js +++ /dev/null @@ -1,21 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - productionBrowserSourceMaps: false, // enable browser source map generation during the production build - // Configure pageExtensions to include md and mdx - pageExtensions: ['ts', 'tsx', 'js', 'jsx', 'md', 'mdx'], - experimental: { - appDir: true, - }, - // fix all before production. Now it slow the develop speed. - eslint: { - // Warning: This allows production builds to successfully complete even if - // your project has ESLint errors. 
- ignoreDuringBuilds: true, - }, - typescript: { - // https://nextjs.org/docs/api-reference/next.config.js/ignoring-typescript-errors - ignoreBuildErrors: true, - } -} - -module.exports = nextConfig diff --git a/spaces/ehcalabres/EMOVoice/app.py b/spaces/ehcalabres/EMOVoice/app.py deleted file mode 100644 index fd40c031f0043c339df199e1d1adb29e314300e6..0000000000000000000000000000000000000000 --- a/spaces/ehcalabres/EMOVoice/app.py +++ /dev/null @@ -1,53 +0,0 @@ -import json -import os -import requests -import json -import streamlit as st - -EXAMPLE_PATH = [] -for f in os.listdir('data/'): - EXAMPLE_PATH.append(f) - -st.sidebar.image('img/love_emoji_128.png') -st.sidebar.title('EMOVoice') -st.sidebar.write('Welcome to EMOVoice, a tool for Speech Emotion Recognition based on the Wav2Vec2 model.') - -st.title('EMOVoice') -st.write("This is a work in progress, stay tuned!") - -st.sidebar.subheader('Model input') -input_mode = st.sidebar.radio('Select your input mode:', ['Upload audio', 'Select example']) - -file = None - -if input_mode == 'Upload audio': - file = st.sidebar.file_uploader("Choose a file", type=['mp3', 'mp4', 'wav', 'flac']) - file_size = file.size if file else None -elif input_mode == 'Select example': - example_selected = st.sidebar.selectbox('Choose an audio example', EXAMPLE_PATH) - file = open('data/' + example_selected, 'rb') - file_size = os.stat('data/' + example_selected).st_size - -if file is not None: - st.write('Audio added!') - audio_bytes = file.read() - st.audio(audio_bytes) - - - url = "https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" - - payload=file - headers = { - 'Content-Type': 'audio/mp3', - 'Authorization': 'Bearer ' + st.secrets['API_TOKEN'] - } - - response = requests.request("POST", url, headers=headers, data=audio_bytes) - - response.request - - decoded_response = json.loads(response.text) - st.write(decoded_response) - - - file.close() diff --git a/spaces/ennov8ion/comicbook-models/app.py b/spaces/ennov8ion/comicbook-models/app.py deleted file mode 100644 index 8b84ee5db4e6da91fb1c03a64f1e413179b79cf0..0000000000000000000000000000000000000000 --- a/spaces/ennov8ion/comicbook-models/app.py +++ /dev/null @@ -1,95 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path - -models = [ - {"name": "Archer Diffusion", "url": "nitrosocke/archer-diffusion"}, - {"name": "Icomix 2", "url": "stablediffusionapi/icomix-2"}, - {"name": "Mo-Di Diffusion", "url": "nitrosocke/mo-di-diffusion"}, - {"name": "Comic Diffusion", "url": "ogkalu/Comic-Diffusion"}, - {"name": "Marvel WhatIf Diffusion", "url": "ItsJayQz/Marvel_WhatIf_Diffusion"}, - {"name": "Nitro Diffusion", "url": "nitrosocke/Nitro-Diffusion"}, -] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/daspartho/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks() as myface: - gr.HTML( - - ) - - with gr.Row(): - with gr.Row(): - input_text = gr.Textbox(label="Prompt idea", placeholder="", lines=1) - # Model selection dropdown - 
model_name1 = gr.Dropdown( - label="Choose Model", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - ) - with gr.Row(): - see_prompts = gr.Button("Generate Prompts") - run = gr.Button("Generate Images", variant="primary") - - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - with gr.Row(): - output4 = gr.Image(label="") - output5 = gr.Image(label="") - output6 = gr.Image(label="") - with gr.Row(): - magic4 = gr.Textbox(label="Generated Prompt", lines=2) - magic5 = gr.Textbox(label="Generated Prompt", lines=2) - magic6 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - run.click(send_it, inputs=[magic4, model_name1], outputs=[output4]) - run.click(send_it, inputs=[magic5, model_name1], outputs=[output5]) - run.click(send_it, inputs=[magic6, model_name1], outputs=[output6]) - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic4]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic5]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic6]) - -myface.queue(concurrency_count=200) -myface.launch(inline=True, show_api=False, max_threads=400) \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/app.py b/spaces/eson/tokenizer-arena/app.py deleted file mode 100644 index 716689c168d5045379610c0f3fca66c3fdb49ffd..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/app.py +++ /dev/null @@ -1,199 +0,0 @@ -# coding=utf-8 -# author: xusong -# time: 2022/8/23 16:06 - -""" -## TODO: -- i18 国际化 https://blog.csdn.net/qq_26212731/article/details/78457198 request.header中也有language -- iter_vocab 的 warmup -- 开关 - - add_special_token 开关 - - theme 开关 light/dark - - token_id/tokens/bytes 开关 - - 中文字词统计,是否要包括 _ G 等字符 -- 评测 - - OOV评测 -- 通过 javascript 添加 hover_text -- 英文 utf-8编码 -- 词典支持下载,借用image下载的标签, -- baichuan的单字数量怎么两万多个? -- qwen: ValueError: Unclosed image token - -plots - -table - -## related demo -- [](http://text-processing.com/demo/tokenize/) -- [gpt-tokenizer](https://gpt-tokenizer.dev/) -- [llama-tokenizer-js](https://belladoreai.github.io/llama-tokenizer-js/example-demo/build/) -- [](https://huggingface.co/spaces/Xenova/the-tokenizer-playground) - -## 可视化 - -[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ] -""" - -import gradio as gr -from vocab import all_tokenizers -from util import * -from examples import example_fn - -get_window_url_params = """ - function(url_params) { - const params = new URLSearchParams(window.location.search); - url_params = JSON.stringify(Object.fromEntries(params)); - return url_params; - } - """ - -with gr.Blocks(css="css/style.css", title="Tokenizer Arena") as demo: - gr.HTML("""

      Tokenizer Arena ⚔️

      """) - # links: https://www.coderstool.com/utf8-encoding-decoding - # 功能:输入文本,进行分词 - # 分词器:常见的分词器有集中, - # 背景:方便分词、看词粒度、对比 - - with gr.Row(): - gr.Markdown("## Input Text") - dropdown_examples = gr.Dropdown( - # ["空格测试", "标点测试", "符号测试", "数字测试"], - ["spaces", "punctuations", "symbols", "digits"], - value="Examples", - type="index", - show_label=False, - container=False, - scale=0, - elem_classes="example-style" - ) - user_input = gr.Textbox( - # value=default_user_input, - label="Input Text", - lines=5, - show_label=False, - ) - gr.Markdown("## Tokenization") - with gr.Row(): - with gr.Column(scale=6): - with gr.Group(): - tokenizer_type_1 = gr.Dropdown( - all_tokenizers, - label="Tokenizer 1", - ) - with gr.Group(): - """ -
      69
      Characters
      - """ - with gr.Row(): - stats_vocab_size_1 = gr.TextArea( - label="VocabSize", - lines=1, - elem_classes="statistics" - ) - stats_zh_token_size_1 = gr.TextArea( - label="ZH char/word", - lines=1, - elem_classes="statistics" - ) - stats_overlap_token_size_1 = gr.TextArea( - # value=default_stats_overlap_token_size, - label="Overlap Tokens", - lines=1, - elem_classes="statistics" - ) - # stats_3 = gr.TextArea( - # label="Compress Rate", - # lines=1, - # elem_classes="statistics" - # ) - # https://www.onlinewebfonts.com/icon/418591 - gr.Image("images/VS.svg", scale=1, show_label=False, - show_download_button=False, container=False, - show_share_button=False) - with gr.Column(scale=6): - with gr.Group(): - tokenizer_type_2 = gr.Dropdown( - all_tokenizers, - label="Tokenizer 2", - ) - with gr.Group(): - with gr.Row(): - stats_vocab_size_2 = gr.TextArea( - label="VocabSize", - lines=1, - elem_classes="statistics" - ) - stats_zh_token_size_2 = gr.TextArea( - label="ZH char/word", # 中文字/词 - lines=1, - elem_classes="statistics" - ) - # stats_6 = gr.TextArea( - # label="Compress Rate", - # lines=1, - # elem_classes="statistics" - # ) - stats_overlap_token_size_2 = gr.TextArea( - label="Overlap Tokens", - lines=1, - elem_classes="statistics" - ) - - # TODO: 图 表 压缩率 - with gr.Row(): - with gr.Column(): - output_text_1 = gr.Highlightedtext( - show_legend=True, - elem_classes="space-show" - ) - with gr.Column(): - output_text_2 = gr.Highlightedtext( - show_legend=True, - elem_classes="space-show" - ) - - with gr.Row(): - output_table_1 = gr.Dataframe() - output_table_2 = gr.Dataframe() - - tokenizer_type_1.change(tokenize, [user_input, tokenizer_type_1], - [output_text_1, output_table_1]) - tokenizer_type_1.change(basic_count, [tokenizer_type_1], [stats_vocab_size_1, stats_zh_token_size_1]) - tokenizer_type_1.change(get_overlap_token_size, [tokenizer_type_1, tokenizer_type_2], - [stats_overlap_token_size_1, stats_overlap_token_size_2]) - - user_input.change(tokenize_pair, - [user_input, tokenizer_type_1, tokenizer_type_2], - [output_text_1, output_table_1, output_text_2, output_table_2]) # , pass_request=1 - - tokenizer_type_2.change(tokenize, [user_input, tokenizer_type_2], - [output_text_2, output_table_2]) - tokenizer_type_2.change(basic_count, [tokenizer_type_2], [stats_vocab_size_2, stats_zh_token_size_2]) - tokenizer_type_2.change(get_overlap_token_size, [tokenizer_type_1, tokenizer_type_2], - [stats_overlap_token_size_1, stats_overlap_token_size_2]) - - dropdown_examples.change( - example_fn, - dropdown_examples, - [user_input, tokenizer_type_1, tokenizer_type_2] - ) - - demo.load(_js=open("js/onload.js", "r", encoding="utf-8").read()) - demo.load( - fn=on_load, - inputs=[user_input], # 这里只需要传个空object即可。 - outputs=[user_input, tokenizer_type_1, tokenizer_type_2], - _js=get_window_url_params - ) - - -if __name__ == "__main__": - print("http://127.0.0.1:7860/?tokenizer1=llama&tokenizer2=chinese_llama2&text=fdsjlk") # llama chinese_llama2 - print( - "http://127.0.0.1:7860/?tokenizer1=chinese_llama&tokenizer2=chinese_llama2&text=fdsjlk") # llama chinese_llama2 - print("http://127.0.0.1:7860/?tokenizer1=baichuan&tokenizer2=baichuan2&text=sss") # baichuan 1 VS 2 - print("http://127.0.0.1:7860/?tokenizer1=bert&tokenizer2=clue&text=sss") # bert VS clue - print("http://127.0.0.1:7860/?tokenizer1=clue&tokenizer2=kplug&text=sss") # clue VS kplug - print("http://127.0.0.1:7860/?tokenizer1=baichuan&tokenizer2=baichuan2&text=sss") # - # demo.queue(max_size=20).launch() - demo.launch() diff --git 
a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/README.md b/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/README.md deleted file mode 100644 index 6c6fcd97193ff5a3bd1f323e64714586bdaaf46a..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_neox_chinese_v1/README.md +++ /dev/null @@ -1,64 +0,0 @@ - - -``` -added vocab (size: 54634) with 22 dummy tokens (new size: 54656) -Vocab size: 54634 - -训练数据 -``` - - -https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py - - -## 20B - -[configs/20B.yml](https://github.com/EleutherAI/gpt-neox/blob/main/configs/20B.yml#L7) -``` - "vocab-file": "./20B_checkpoints/20B_tokenizer.json", -``` - -Vocab size: 50277 -self.padded_vocab_size = 50304 - - -padded vocab (size: 50277) with 27 dummy tokens (new size: 50304) - -## 词典 - -见 convert_vocab_to_txt.py - -``` -{"id": 13609, "token": "\u00e4\u00b8\u0143", "token_decode": "\u4e2d"} 中 - -# 多个符号拼接在一起的 -{"id": 13663, "token": ".*]{}", "token_decode": ".*]{}"} .*]{} - -# ss - -``` - - -## 中文支持 - -基本没有OOV。 - -gpt-neox是在800G英文数据集上训练的,为啥词典支持中文?因为是byte-level BPE - -``` -丁 [3218, 212] -七 [3218, 214] -万 [3218, 218] -诀 [11894, 211] -证 [11894, 212] -``` - - -编码长度统计: Counter({2: 4190, 3: 1295, 1: 285}) -平均编码长度: 2.1750433275563257 - - -## ss - - - diff --git a/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/test_zh_coding_len.py b/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/test_zh_coding_len.py deleted file mode 100644 index 2baf006802ca75bd1f62545f355bc3bce7d8484b..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/gpt_nexo_20b/test_zh_coding_len.py +++ /dev/null @@ -1,18 +0,0 @@ -""" -1. jd_vocab_tokens的中文: -编码长度统计: Counter({2: 4190, 3: 1295, 1: 285}) -平均编码长度: 2.1750433275563257 - - -2. 中文标点 -编码长度统计: Counter({2: 55, 1: 23, 3: 3}) -平均编码长度: 1.7530864197530864 - -3. 全中文(单字) unicode -编码长度统计: Counter({2: 13342, 3: 7257, 1: 302}) -平均编码长度: 2.3327591981244917 - - -4. 全中文() -中文汉字数:313, 中文标点数: 86 -""" diff --git a/spaces/facebook/MusicGen/audiocraft/data/__init__.py b/spaces/facebook/MusicGen/audiocraft/data/__init__.py deleted file mode 100644 index 2906ff12bc85a894837579f3137f6f71a0438329..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Audio loading and writing support. Datasets for raw audio -or also including some metadata.""" - -# flake8: noqa -from . import audio, audio_dataset, info_audio_dataset, music_dataset, sound_dataset diff --git a/spaces/facebook/MusicGen/audiocraft/solvers/base.py b/spaces/facebook/MusicGen/audiocraft/solvers/base.py deleted file mode 100644 index 0432e44a36838c5731711f9d54f81822b21f20bd..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/solvers/base.py +++ /dev/null @@ -1,631 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -from contextlib import contextmanager -from pathlib import Path -import typing as tp - -import flashy -import omegaconf -import torch -from torch import nn - -from .. 
import optim -from ..optim import fsdp -from ..utils import checkpoint -from ..utils.autocast import TorchAutocast -from ..utils.best_state import BestStateDictManager -from ..utils.deadlock import DeadlockDetect -from ..utils.profiler import Profiler -from ..utils.utils import copy_state, dict_from_config, model_hash, with_rank_rng - - -class StandardSolver(ABC, flashy.BaseSolver): - """Standard solver for AudioCraft. - - The standard solver implements a base training loop with the following stages: - train, valid, evaluate and generate that are expected to be all defined for - solvers in AudioCraft. It also provides a nice default management of Dora history replay, - checkpoint management across epoch, and logging configuration. - - AudioCraft solvers must inherit from the StandardSolver and define the methods - associated to each stage as well as the show, build_model and build_dataloaders methods. - """ - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__() - self.logger.info(f"Instantiating solver {self.__class__.__name__} for XP {self.xp.sig}") - self.logger.info(f"All XP logs are stored in {self.xp.folder}") - self.cfg = cfg - self.device = cfg.device - self.model: nn.Module - self._continue_best_source_keys = ['best_state', 'fsdp_best_state'] - self._fsdp_modules: tp.List[fsdp.FSDP] = [] - self._ema_sources: nn.ModuleDict = nn.ModuleDict() - self.ema: tp.Optional[optim.ModuleDictEMA] = None - self.dataloaders: tp.Dict[str, torch.utils.data.DataLoader] = dict() - self._log_updates = self.cfg.logging.get('log_updates', 10) - if self.cfg.logging.log_tensorboard: - self.init_tensorboard(**self.cfg.get('tensorboard')) - if self.cfg.logging.log_wandb and self: - self.init_wandb(**self.cfg.get('wandb')) - # keep a copy of the best performing state for stateful objects - # used for evaluation and generation stages - dtype_best: tp.Optional[torch.dtype] = None - if self.cfg.fsdp.use: - dtype_best = getattr(torch, self.cfg.fsdp.param_dtype) # type: ignore - assert isinstance(dtype_best, torch.dtype) - elif self.cfg.autocast: - dtype_best = getattr(torch, self.cfg.autocast_dtype) # type: ignore - assert isinstance(dtype_best, torch.dtype) - self.best_state: BestStateDictManager = BestStateDictManager(dtype=dtype_best) - # Hacky support for keeping a copy of the full best state in rank0. - self.fsdp_best_state: tp.Dict[str, tp.Any] = {} - self.register_stateful('best_state', 'fsdp_best_state') # register best_state object to keep it in state_dict - self._new_best_state: bool = False # should save a new checkpoint - # instantiate datasets and appropriate number of updates per epoch - self.build_dataloaders() - if self.cfg.execute_only is None: - assert 'train' in self.dataloaders, "The train dataset split must be provided." - assert 'valid' in self.dataloaders, "The valid dataset split must be provided." - self.train_updates_per_epoch = len(self.dataloaders['train']) if 'train' in self.dataloaders else 0 - if self.cfg.optim.updates_per_epoch: - self.train_updates_per_epoch = self.cfg.optim.updates_per_epoch - self.total_updates = self.train_updates_per_epoch * self.cfg.optim.epochs - # instantiate model & exponential moving average on the model - self.build_model() - self.logger.info("Model hash: %s", model_hash(self.model)) - assert 'model' in self.stateful.sources, \ - "Please register the model to stateful with self.register_stateful('model') in build_model." 
- self.profiler = Profiler(self.model, **self.cfg.profiler) - self.initialize_ema() - self.register_stateful('ema') - assert self.ema is None or 'ema' in self.stateful.sources, \ - "Please register the ema to stateful with self.register_stateful('ema') in build_model." - self.deadlock_detect = DeadlockDetect(**self.cfg.deadlock) - # basic statistics on the trained model - model_size = sum(p.numel() for p in self.model.parameters() if p.requires_grad) / 1e6 - # one copy of grad, one copy of momentum, one copy of denominator and model weights. - # and 4 bytes for each float! - mem_usage = model_size * 4 * 4 / 1000 - self.logger.info("Model size: %.2f M params", model_size) - self.logger.info("Base memory usage, with model, grad and optim: %.2f GB", mem_usage) - - @property - def autocast(self): - """Convenient autocast (or not) using the solver configuration.""" - return TorchAutocast(enabled=self.cfg.autocast, device_type=self.device, dtype=self.autocast_dtype) - - def _get_state_source(self, name) -> flashy.state.StateDictSource: - # Internal utility to get a state source from the solver - return self.stateful.sources[name] - - @property - def best_metric_name(self) -> tp.Optional[str]: - """Metric name used to identify the best state. This metric should be stored in the metrics - used on the stage for best state identification (most likely, `valid`). If None, then - no best state is saved. - """ - return None - - def register_best_state(self, *args: str): - """Register state sources in `BestStateDictManager` to keep their best states along with their - latest states. The best state will be used at evaluation stages instead of the latest states. - - Shortcut around `BestStateDictManager.register` method. You can pass any number of - attribute, included nested attributes and those will be included into the checkpoints - and automatically restored when `BaseSolver.restore` is called. - """ - for name in args: - state_source = self._get_state_source(name) - assert name in self.stateful.sources, "Registered states in best should be registered in stateful first!" - self.best_state.register(name, state_source) - - def register_ema(self, *args: str): - """Register state sources for exponential moving average. - - The registered sources are used to instantiate a ModuleDictEMA instance. - The ModuleDictEMA keeps a `nn.ModuleDict` module that is updated when self.ema.step() is called - and swapped with the original state sources with self.swap_ema_state() method. - - Usage: - self.register_ema('model') - """ - assert self.ema is None, "Cannot register state source to already instantiated EMA." - for name in args: - self._ema_sources[name] = getattr(self, name) - - def wrap_with_fsdp(self, model: torch.nn.Module, *args, **kwargs): - model = fsdp.wrap_with_fsdp(self.cfg.fsdp, model, *args, **kwargs) - if isinstance(model, fsdp.FSDP): - self._fsdp_modules.append(model) - return model - - def update_best_state_from_stage(self, stage_name: str = 'valid'): - """Update latest best state based on pending metrics of a given stage. This method relies - on the `BestStateDictManager.update` method to update the best state_dict with latest weights - if the registered states happen to match to the best performing setup. 
- """ - if self.best_metric_name is None: - # when no best metric is defined, the last state is always the best - self._new_best_state = True - self.logger.info("Updating best state with current state.") - else: - assert stage_name in self._pending_metrics, f"Metrics for stage {stage_name} not found." - assert self.best_metric_name in self._pending_metrics[stage_name], \ - f"Best metric not found in {stage_name} metrics. Cannot register best state" - current_score = self._pending_metrics[stage_name][self.best_metric_name] - all_best_metric_scores = [ - past_metrics[stage_name][self.best_metric_name] - for past_metrics in self.history - ] - all_best_metric_scores.append(current_score) - best_score = min(all_best_metric_scores) - self._new_best_state = current_score == best_score - if self._new_best_state: - old_best = min(all_best_metric_scores[:-1] + [float('inf')]) - self.logger.info( - f"New best state with {self.best_metric_name}={current_score:.3f} (was {old_best:.3f})") - - if self._new_best_state: - if self.cfg.fsdp.use: - # this will give an empty state dict on all ranks but the rank 0 - # which will have a copy in memory of the full model. - with fsdp.switch_to_full_state_dict(self._fsdp_modules): - for name in self.best_state.states.keys(): - state_source = self._get_state_source(name) - self.best_state.update(name, state_source) - # we save to a different dict. - self.fsdp_best_state.update(self.best_state.state_dict()) - # We cannot efficiently load fsdp_best_state when using FSDP, - # so we have do do a second pass, with the local shards. - for name in self.best_state.states.keys(): - state_source = self._get_state_source(name) - self.best_state.update(name, state_source) - - def _load_new_state_dict(self, state_dict: dict) -> dict: - old_states = {} - for name, new_state in state_dict.items(): - state_source = self._get_state_source(name) - old_states[name] = copy_state(state_source.state_dict()) - state_source.load_state_dict(new_state) - return old_states - - @contextmanager - def swap_best_state(self): - self.logger.debug(f"Swapping to best state for: {', '.join(self.best_state.state_dict().keys())}") - old_states = self._load_new_state_dict(self.best_state.state_dict()) - try: - yield - finally: - self.logger.debug("Swapping back from best to original state") - for name, old_state in old_states.items(): - state_source = self._get_state_source(name) - state_source.load_state_dict(old_state) - - @contextmanager - def swap_ema_state(self): - if self.ema is None: - yield - else: - ema_state_dict = self.ema.state_dict()['state'] - self.logger.debug(f"Swapping to EMA state for: {', '.join(ema_state_dict.keys())}") - old_states = self._load_new_state_dict(ema_state_dict) - try: - yield - finally: - self.logger.debug("Swapping back from EMA state to original state") - for name, old_state in old_states.items(): - state_source = self._get_state_source(name) - state_source.load_state_dict(old_state) - - @property - def is_training(self): - return self.current_stage == 'train' - - def log_model_summary(self, model: nn.Module): - """Log model summary, architecture and size of the model.""" - self.logger.info(model) - mb = sum(p.numel() for p in model.parameters()) * 4 / 2 ** 20 - self.logger.info("Size: %.1f MB", mb) - - @abstractmethod - def build_model(self): - """Method to implement to initialize model.""" - ... - - def initialize_ema(self): - """Initialize exponential moving average with the registered sources. - EMA object is created if the optim.ema.model.decay value is non-null. 
- """ - from .builders import get_ema - self.ema = get_ema(self._ema_sources, self.cfg.optim.ema) - if self.ema is None: - self.logger.info('No EMA on the model.') - else: - assert self.cfg.optim.ema.updates > 0 - self.logger.info( - f'Initializing EMA on the model with decay = {self.ema.decay}' - f' every {self.cfg.optim.ema.updates} updates' - ) - - @abstractmethod - def build_dataloaders(self): - """Method to implement to initialize dataloaders.""" - ... - - @abstractmethod - def show(self): - """Method to log any information without running the job.""" - ... - - @property - def log_updates(self): - # convenient access to log updates - return self._log_updates - - def checkpoint_path(self, **kwargs): - kwargs.setdefault('use_fsdp', self.cfg.fsdp.use) - return self.folder / checkpoint.checkpoint_name(**kwargs) - - def epoch_checkpoint_path(self, epoch: int, **kwargs): - kwargs.setdefault('use_fsdp', self.cfg.fsdp.use) - return self.folder / checkpoint.checkpoint_name(str(epoch), **kwargs) - - def checkpoint_path_with_name(self, name: str, **kwargs): - kwargs.setdefault('use_fsdp', self.cfg.fsdp.use) - return self.folder / checkpoint.checkpoint_name(name=name, **kwargs) - - def save_checkpoints(self): - """Save checkpoint, optionally keeping a copy for a given epoch.""" - is_sharded = self.cfg.fsdp.use - if not flashy.distrib.is_rank_zero() and not is_sharded: - return - self.logger.info("Model hash: %s", model_hash(self.model)) - state = self.state_dict() - epoch = self.epoch - 1 # pushing metrics will increase the epoch in Flashy, so we do -1 here - - # save minimal state_dict as new checkpoint every X epoch - if self.cfg.checkpoint.save_every: - if epoch % self.cfg.checkpoint.save_every == 0: - minimal_state = state - if self.cfg.checkpoint.keep_every_states is not None and len(self.cfg.checkpoint.keep_every_states) > 0: - minimal_state = { - name: source for name, source in state.items() - if name in self.cfg.checkpoint.keep_every_states - } - epoch_checkpoint_path = self.epoch_checkpoint_path(epoch) - checkpoint.save_checkpoint(minimal_state, epoch_checkpoint_path, is_sharded) - - # save checkpoint as latest checkpoint - if self.cfg.checkpoint.save_last: - last_checkpoint_path = self.checkpoint_path() - checkpoint.save_checkpoint(state, last_checkpoint_path, is_sharded) - - # flush any stale checkpoint to reduce disk footprint - checkpoint.flush_stale_checkpoints(self.checkpoint_path()) - - def load_from_pretrained(self, name: str) -> dict: - raise NotImplementedError("Solver does not provide a way to load pretrained models.") - - def load_checkpoints(self, load_best: bool = False, ignore_state_keys: tp.List[str] = []) -> tp.Optional[dict]: - """Load last checkpoint or the one specified in continue_from. - - Args: - load_best (bool): Whether to load from best state dict or not. - Best state dict is always used when not loading the current xp. - ignore_state_keys (list of str): List of sources to ignore when loading the state, e.g. `optimizer`. - Returns: - state (dict, optional): The loaded state dictionary. 
- """ - # load checkpoints from xp folder or cfg.continue_from - is_sharded = self.cfg.fsdp.use - load_from_path: tp.Optional[Path] = None - checkpoint_source: tp.Optional[checkpoint.CheckpointSource] = None - - if load_best: - self.logger.info("Trying to load state_dict from best state.") - - state: tp.Optional[dict] = None - rank0_checkpoint_path = self.checkpoint_path(use_fsdp=False) - current_checkpoint_path = self.checkpoint_path() - _pretrained_prefix = '//pretrained/' - continue_pretrained = (self.cfg.continue_from or '').startswith(_pretrained_prefix) - if rank0_checkpoint_path.exists(): - self.logger.info(f"Loading existing checkpoint: {current_checkpoint_path}") - load_from_path = current_checkpoint_path - checkpoint.check_sharded_checkpoint(current_checkpoint_path, rank0_checkpoint_path) - checkpoint_source = checkpoint.CheckpointSource.CURRENT_XP - elif self.cfg.continue_from and not continue_pretrained: - self.logger.info(f"Continuing from provided checkpoint: {self.cfg.continue_from}") - # we're always continuing from consolidated checkpoints: self.cfg.use_fsdp and not continue_best - load_from_path = checkpoint.resolve_checkpoint_path(self.cfg.continue_from, use_fsdp=False) - if load_from_path is None: - self.logger.error('Could not resolve the continue_from checkpoint %s', self.cfg.continue_from) - raise RuntimeError(f'Could not resolve continue_from checkpoint {self.cfg.continue_from}') - checkpoint_source = checkpoint.CheckpointSource.OTHER - - if load_from_path is not None: - state = checkpoint.load_checkpoint(load_from_path, is_sharded) - elif continue_pretrained: - self.logger.info("Loading a pretrained model. Ignoring 'load_best' and 'ignore_state_keys' params.") - state = self.load_from_pretrained(self.cfg.continue_from[len(_pretrained_prefix):]) - checkpoint_source = checkpoint.CheckpointSource.PRETRAINED - load_best = True - - # checkpoints are not from the current xp, we only retrieve the best state - if checkpoint_source is not None and checkpoint_source != checkpoint.CheckpointSource.CURRENT_XP: - assert state is not None - self.logger.info("Checkpoint source is not the current xp: Load state_dict from best state.") - load_best = True - state = {key: state[key] for key in self._continue_best_source_keys if key in state} - # loaded checkpoints are FSDP checkpoints: we're reading the best state - # from FSDP and we drop the regular best_state - if 'fsdp_best_state' in state and state['fsdp_best_state']: - state.pop('best_state', None) - self.logger.info("... Loaded checkpoint has FSDP best state") - # FSDP is enabled in the solver, if the loaded checkpoints do not have FSDP support - # then we're initializing FSDP best state with the regular best state - elif self.cfg.fsdp.use: - if 'fsdp_best_state' not in state or not state['fsdp_best_state']: - # we swap non-FSDP checkpoints best_state to FSDP-compatible best state - state['fsdp_best_state'] = state.pop('best_state') - self.logger.info("... Loaded checkpoint does not have FSDP best state. 
Use regular best state") - - if state is not None: - if load_best: - self.logger.info("Ignoring keys when loading best %r", ignore_state_keys) - for key in set(ignore_state_keys): - if key in state: - state.pop(key) - has_best_state = 'best_state' in state or 'fsdp_best_state' in state - assert has_best_state, ("Trying to load best state but neither 'best_state'", - " or 'fsdp_best_state' found in checkpoints.") - self.load_state_dict(state) - - # for FSDP, let's make extra sure nothing bad happened with out of sync - # checkpoints across workers. - epoch = float(self.epoch) - avg_epoch = flashy.distrib.average_metrics({'epoch': epoch})['epoch'] - if avg_epoch != epoch: - raise RuntimeError( - f"Inconsistent loading of checkpoints happened, our epoch is {epoch} " - f"but average of epochs is {avg_epoch}, at least one gpu must have a " - "different epoch number.") - - # on load_best, properly reinitialize state_dict, best states and ema - # otherwise we load from the current xp and don't alter anything - if load_best: - self.logger.info("Loading state_dict from best state.") - if not self.cfg.fsdp.use and self.fsdp_best_state: - # loading from an FSDP checkpoint but with FSDP deactivated - self.logger.info("... Loading from FSDP best state dict.") - self.best_state.load_state_dict(self.fsdp_best_state) - - # if load_best, we permanently override the regular state_dict with the best state - if self.cfg.fsdp.use: - self.logger.info("FSDP is used, loading from FSDP best state.") - with fsdp.switch_to_full_state_dict(self._fsdp_modules): - # this might be really fragile but okay for now. - self.load_state_dict(self.fsdp_best_state) - else: - # we permanently swap the stateful objects to their best state - self._load_new_state_dict(self.best_state.state_dict()) - - # the EMA modules should also be instantiated with best state. - # the easiest way to do so is to reinitialize a new EMA with best state loaded. - if self.ema is not None: - self.logger.info("Re-initializing EMA from best state") - self.initialize_ema() - - if self.cfg.fsdp.use: - self.logger.info("Re-initializing best state after using FSDP best state.") - for name in self.best_state.states.keys(): - state_source = self._get_state_source(name) - self.best_state.update(name, state_source) - - return state - - def restore(self, load_best: bool = False, replay_metrics: bool = False, - ignore_state_keys: tp.List[str] = []) -> bool: - """Restore the status of a solver for a given xp. - - Args: - load_best (bool): if `True`, load the best state from the checkpoint. - replay_metrics (bool): if `True`, logs all the metrics from past epochs. - ignore_state_keys (list of str): list of sources to ignore when loading the state, e.g. `optimizer`. 
- """ - self.logger.info("Restoring weights and history.") - restored_checkpoints = self.load_checkpoints(load_best, ignore_state_keys) - - self.logger.info("Model hash: %s", model_hash(self.model)) - - if replay_metrics and len(self.history) > 0: - self.logger.info("Replaying past metrics...") - for epoch, stages in enumerate(self.history): - for stage_name, metrics in stages.items(): - # We manually log the metrics summary to the result logger - # as we don't want to add them to the pending metrics - self.result_logger._log_summary(stage_name, metrics, step=epoch + 1, step_name='epoch', - formatter=self.get_formatter(stage_name)) - return restored_checkpoints is not None - - def commit(self, save_checkpoints: bool = True): - """Commit metrics to dora and save checkpoints at the end of an epoch.""" - # we override commit to introduce more complex checkpoint saving behaviors - self.history.append(self._pending_metrics) # This will increase self.epoch - if save_checkpoints: - self.save_checkpoints() - self._start_epoch() - if flashy.distrib.is_rank_zero(): - self.xp.link.update_history(self.history) - - def run_epoch(self): - """Run a single epoch with all stages. - - Metrics for a given stage are stored in _pending_metrics and committed by the solver afterwards. - Children solvers can extend this method with custom behavior, e.g.: - - def run_epoch(self): - ... # custom code - super().run_epoch() - ... # custom code - """ - self.run_stage('train', self.train) - with torch.no_grad(): - with self.swap_ema_state(): - self.run_stage('valid', self.valid) - # the best state is updated with EMA states if available - self.update_best_state_from_stage('valid') - with self.swap_best_state(): - if self.should_run_stage('evaluate'): - self.run_stage('evaluate', self.evaluate) - if self.should_run_stage('generate'): - self.run_stage('generate', with_rank_rng()(self.generate)) - - def run(self): - """Training loop.""" - assert len(self.state_dict()) > 0 - self.restore(replay_metrics=True) # load checkpoint and replay history - self.log_hyperparams(dict_from_config(self.cfg)) - for epoch in range(self.epoch, self.cfg.optim.epochs + 1): - if self.should_stop_training(): - return - self.run_epoch() - # Commit will send the metrics to Dora and save checkpoints by default. - self.commit() - - def should_stop_training(self) -> bool: - """Check whether we should stop training or not.""" - return self.epoch > self.cfg.optim.epochs - - def should_run_stage(self, stage_name) -> bool: - """Check whether we want to run the specified stages.""" - stage_every = self.cfg[stage_name].get('every', None) - is_last_epoch = self.epoch == self.cfg.optim.epochs - is_epoch_every = (stage_every and self.epoch % stage_every == 0) - return is_last_epoch or is_epoch_every - - @abstractmethod - def run_step(self, idx: int, batch: tp.Any, metrics: dict): - """Perform one training or valid step on a given batch.""" - ... 
- - def common_train_valid(self, dataset_split: str, **kwargs: tp.Any): - """Common logic for train and valid stages.""" - self.model.train(self.is_training) - - loader = self.dataloaders[dataset_split] - # get a different order for distributed training, otherwise this will get ignored - if flashy.distrib.world_size() > 1 \ - and isinstance(loader.sampler, torch.utils.data.distributed.DistributedSampler): - loader.sampler.set_epoch(self.epoch) - updates_per_epoch = self.train_updates_per_epoch if self.is_training else len(loader) - if self.cfg.benchmark_no_load: - self.logger.warning("Fake loading for benchmarking: re-using first batch") - batch = next(iter(loader)) - loader = [batch] * updates_per_epoch # type: ignore - lp = self.log_progress(self.current_stage, loader, total=updates_per_epoch, updates=self.log_updates) - average = flashy.averager() # epoch wise average - instant_average = flashy.averager() # average between two logging - metrics: dict = {} - - with self.profiler, self.deadlock_detect: # profiler will only run for the first 20 updates. - for idx, batch in enumerate(lp): - self.deadlock_detect.update('batch') - if idx >= updates_per_epoch: - break - metrics = {} - metrics = self.run_step(idx, batch, metrics) - self.deadlock_detect.update('step') - # run EMA step - if self.ema is not None and self.is_training and (idx + 1) % self.cfg.optim.ema.updates == 0: - self.logger.debug("EMA model step") - self.ema.step() - self.deadlock_detect.update('ema') - self.profiler.step() - instant_metrics = instant_average(metrics) - if lp.update(**instant_metrics): - instant_average = flashy.averager() # reset averager between two logging - metrics = average(metrics) # epoch wise average - self.deadlock_detect.update('end_batch') - - metrics = flashy.distrib.average_metrics(metrics, updates_per_epoch) - return metrics - - def train(self): - """Train stage.""" - return self.common_train_valid('train') - - def valid(self): - """Valid stage.""" - return self.common_train_valid('valid') - - @abstractmethod - def evaluate(self): - """Evaluate stage.""" - ... - - @abstractmethod - def generate(self): - """Generate stage.""" - ... - - def run_one_stage(self, stage_name: str): - """Run only the specified stage. - This method is useful to only generate samples from a trained experiment - or rerun the validation or evaluation stages. - """ - fn = { - 'generate': with_rank_rng()(self.generate), - 'evaluate': self.evaluate, - 'valid': self.valid, - } - if stage_name not in fn: - raise ValueError(f'Trying to run stage {stage_name} is not supported.') - assert len(self.state_dict()) > 0 - self._start_epoch() - with torch.no_grad(), self.swap_best_state(): - self.run_stage(stage_name, fn[stage_name]) - if not self.cfg.execute_inplace: - self.commit(save_checkpoints=False) - - @staticmethod - def get_eval_solver_from_sig(sig: str, dtype: tp.Optional[str] = None, - device: tp.Optional[str] = None, autocast: bool = True, - batch_size: tp.Optional[int] = None, - override_cfg: tp.Optional[tp.Union[dict, omegaconf.DictConfig]] = None, - **kwargs): - """Mostly a convenience function around audiocraft.train.get_solver_from_sig, - populating all the proper param, deactivating EMA, FSDP, loading the best state, - basically all you need to get a solver ready to "play" with in single GPU mode - and with minimal memory overhead. - - Args: - sig (str): signature to load. - dtype (str or None): potential dtype, as a string, i.e. 'float16'. - device (str or None): potential device, as a string, i.e. 'cuda'. 
- override_cfg (dict or omegaconf.DictConfig or None): potential device, as a string, i.e. 'cuda'. - """ - from audiocraft import train - our_override_cfg: tp.Dict[str, tp.Any] = {'optim': {'ema': {'use': False}}} - our_override_cfg['autocast'] = autocast - if dtype is not None: - our_override_cfg['dtype'] = dtype - if device is not None: - our_override_cfg['device'] = device - if batch_size is not None: - our_override_cfg['dataset'] = {'batch_size': batch_size} - if override_cfg is None: - override_cfg = {} - override_cfg = omegaconf.OmegaConf.merge( - omegaconf.DictConfig(override_cfg), omegaconf.DictConfig(our_override_cfg)) # type: ignore - solver = train.get_solver_from_sig( - sig, override_cfg=override_cfg, - load_best=True, disable_fsdp=True, - ignore_state_keys=['optimizer', 'ema'], **kwargs) - solver.model.eval() - return solver diff --git a/spaces/failfast/nextjs-hf-spaces/src/lib/createEmotionCache.ts b/spaces/failfast/nextjs-hf-spaces/src/lib/createEmotionCache.ts deleted file mode 100644 index c069bdc93f94312f5f6f1962bb3a7818588128a0..0000000000000000000000000000000000000000 --- a/spaces/failfast/nextjs-hf-spaces/src/lib/createEmotionCache.ts +++ /dev/null @@ -1,19 +0,0 @@ -import createCache from "@emotion/cache"; - -const isBrowser = typeof document !== "undefined"; - -// On the client side, Create a meta tag at the top of the and set it as insertionPoint. -// This assures that MUI styles are loaded first. -// It allows developers to easily override MUI styles with other styling solutions, like CSS modules. -export default function createEmotionCache() { - let insertionPoint; - - if (isBrowser) { - const emotionInsertionPoint = document.querySelector( - 'meta[name="emotion-insertion-point"]' - ); - insertionPoint = emotionInsertionPoint ?? undefined; - } - - return createCache({ key: "mui-style", insertionPoint }); -} diff --git a/spaces/falterWliame/Face_Mask_Detection/Creative Sound Blaster X-Fi MB Activation Key.rar.md b/spaces/falterWliame/Face_Mask_Detection/Creative Sound Blaster X-Fi MB Activation Key.rar.md deleted file mode 100644 index f3cedc4fd6865bd84a77ebf0e64f4507f938e960..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Creative Sound Blaster X-Fi MB Activation Key.rar.md +++ /dev/null @@ -1,26 +0,0 @@ -
      -

      How to Download and Activate Creative Sound Blaster X-Fi MB for Free

      -

      Creative Sound Blaster X-Fi MB is a software solution that enhances the audio quality of your PC systems equipped with basic onboard audio. It offers various features and effects, such as EAX® ADVANCED HD 4.0, which delivers a realistic and immersive 3D gaming experience[^1^]. However, this software is not free and requires an activation key to unlock its full potential.

      -

      Creative sound blaster X-Fi MB activation key.rar


      Download ►►► https://urlca.com/2uDdKx



      -

      If you are looking for a way to download and activate Creative Sound Blaster X-Fi MB for free, you may have come across a file named "Creative sound blaster X-Fi MB activation key.rar" on the internet. This file claims to contain a crack or a serial number that can bypass the activation process of the software. But is it safe and reliable?

      -

      The Risks of Using Creative Sound Blaster X-Fi MB Activation Key.rar

      -

      Before you download and use Creative sound blaster X-Fi MB activation key.rar, you should be aware of the possible risks and consequences of doing so. Here are some of them:

      -
        -
      • It may not work. There is no guarantee that the file contains a valid or working activation key for Creative Sound Blaster X-Fi MB. It may be outdated, fake, or incompatible with your system or software version. You may end up wasting your time and bandwidth downloading a useless file.
      • -
      • It may contain malware. The file may be infected with viruses, trojans, worms, spyware, ransomware, or other malicious programs that can harm your computer or compromise your personal data. You may expose yourself to identity theft, fraud, data loss, or other cyberattacks by opening or running the file.
      • -
      • It may violate the law. The file may be illegal or infringe the intellectual property rights of Creative Technology Ltd., the developer of Creative Sound Blaster X-Fi MB. You may be breaking the law or violating the terms of service of the software by using an unauthorized activation key. You may face legal actions, fines, or penalties from Creative Technology Ltd. or other authorities for doing so.
      • -
      -

      The Alternative Way to Download and Activate Creative Sound Blaster X-Fi MB for Free

      -

      Instead of using Creative sound blaster X-Fi MB activation key.rar, there is a safer and more reliable way to download and activate Creative Sound Blaster X-Fi MB for free. That is to use the official website of Creative Technology Ltd. Here are the steps to do so:

      -

      -
        -
      1. Visit the official website of Creative Technology Ltd. Go to https://www.creative.com/ and browse through their products and services.
      2. -
      3. Find Creative Sound Blaster X-Fi MB. Search for Creative Sound Blaster X-Fi MB in the search bar or navigate to the software section of the website. You should see a page with a description, features, screenshots, and system requirements of the software.
      4. -
      5. Download Creative Sound Blaster X-Fi MB. Click on the download button or link on the page and follow the instructions to download the software installer file to your computer. Make sure you have enough disk space and a stable internet connection.
      6. -
      7. Install Creative Sound Blaster X-Fi MB. Run the installer file and follow the steps to install the software on your computer. You may need to restart your computer after the installation is complete.
      8. -
      9. Activate Creative Sound Blaster X-Fi MB. Launch the software and enter your email address when prompted. You should receive an email from Creative Technology Ltd. with an activation link. Click on the link and follow the instructions to activate your software for free. You should be able to enjoy all the features and effects of Creative Sound Blaster X-Fi MB without any limitations.
      10. -
      -

      Conclusion

      -

Creative Sound Blaster X-Fi MB is a great software solution that can enhance the audio quality and gaming experience on your PC. However, you should avoid using Creative sound blaster X-Fi MB activation key.rar or any other similar files that claim to offer a free activation key: they may not work, they may carry malware, and they may put you on the wrong side of the law. Download the software from the official Creative Technology Ltd. website and activate it there instead, as described above.

      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Fcap Array Software V3.0 !NEW! Download Firefox.md b/spaces/falterWliame/Face_Mask_Detection/Fcap Array Software V3.0 !NEW! Download Firefox.md deleted file mode 100644 index 1a283e7f588b719e0d6b6d172bd38580a7a12421..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Fcap Array Software V3.0 !NEW! Download Firefox.md +++ /dev/null @@ -1,156 +0,0 @@ -
      -

      FCAP Array Software V3.0 Download Firefox: A Guide for Bioscience Researchers

      - -

      If you are a bioscience researcher who uses flow cytometry to measure multiple analytes in a single sample, you might be interested in FCAP Array Software V3.0, a software that allows you to analyze and interpret multiplex bead-based immunoassays. FCAP Array Software V3.0 is a product of BD Biosciences, a global leader in medical technology and innovation. In this article, we will show you how to download FCAP Array Software V3.0 for Firefox, one of the compatible browsers for this software.

      -

      Fcap Array Software V3.0 Download Firefox


      Download 🗸 https://urlca.com/2uDdQ8



      - -

      What is FCAP Array Software V3.0?

      - -

      FCAP Array Software V3.0 is a software that enables you to perform data analysis and interpretation of multiplex bead-based immunoassays, such as BD Cytometric Bead Array (CBA) and BD™ CBA Flex Sets. These assays use fluorescent beads coated with specific capture antibodies to measure multiple analytes simultaneously in a single sample.

      - -

      With FCAP Array Software V3.0, you can:

      - -
        -
      • Import data from BD FACS™ instruments or other flow cytometers
      • -
      • Calibrate data using standard curves
      • -
      • Perform quality control checks
      • -
      • Generate reports with graphs, tables, and statistics
      • -
      • Export data to Excel or other formats
      • -
      - -

      FCAP Array Software V3.0 is designed to be user-friendly and intuitive, with features such as drag-and-drop functionality, customizable templates, and automated calculations. You can also access a protocol library with predefined settings for various assays and analytes.

      -

      - -

      Why use Firefox to download FCAP Array Software V3.0?

      - -

      FCAP Array Software V3.0 is compatible with Firefox browser, as well as Chrome and Safari. You can use Firefox to access the online version of FCAP Array Software V3.0, which is hosted on BD Biosciences website. The online version allows you to perform data analysis and interpretation without installing the software on your computer. You can also upload your own data files or use sample data files provided by BD Biosciences.

      - -

To use the online version of FCAP Array Software V3.0 with Firefox, you need Firefox version 68 or later installed on your computer. You also need an internet connection and a valid account on the BD Biosciences website. You can create an account for free by registering on the website.

      - -

      If you prefer to install FCAP Array Software V3.0 on your computer, you can also download it from BD Biosciences website using Firefox browser. The download process is simple and straightforward, as we will explain in the next section.

      - -

      How to download FCAP Array Software V3.0 for Firefox?

      - -

      If you want to download FCAP Array Software V3.0 for Firefox, you can follow these steps:

      - -
        -
      1. Go to https://www.bdbiosciences.com/en-us/products/instruments/software-informatics/instrument-software/fcap-array-software-v3-0
      2. -
      3. Select your region and language from the drop-down menu
      4. -
      5. Click on the "Download" button under "FCAP Array Software V3.0 (RUO)"
      6. -
      7. Enter your email address and password to log in to your account or create a new one if you don't have one
      8. -
      9. Follow the instructions on the screen to download and install the software on your computer
      10. -
      - -

      The download file size is about 200 MB and the installation process takes about 10 minutes. You can use FCAP Array Software V3.0 with any flow cytometer that produces FCS 2.0 or FCS 3.0 files.

      - -

      Conclusion

      - -

      FCAP Array Software V3.0 is a powerful tool for analyzing and interpreting multiplex bead-based immunoassays, such as BD CBA and BD™ CBA Flex Sets. It is compatible with Firefox browser, as well as Chrome and Safari. You can use Firefox to access the online version of FCAP Array Software V3.0 or download it from BD Biosciences website for offline use.

      - -

      If you are interested in learning more about FCAP Array Software V3.0 or other products from BD Biosciences, you can visit their website or contact their customer service team.

      -

      What are the benefits of using FCAP Array Software V3.0 with Firefox?

      - -

      Using FCAP Array Software V3.0 with Firefox can offer you several benefits, such as:

      - -
        -
      • Speed: Firefox is one of the fastest browsers available, which means you can access and use FCAP Array Software V3.0 online without any delays or interruptions.
      • -
      • Security: Firefox is one of the most secure browsers available, which means you can protect your data and privacy while using FCAP Array Software V3.0 online. Firefox also has features such as tracking protection, password manager, and private browsing mode.
      • -
      • Compatibility: Firefox is one of the most compatible browsers available, which means you can use FCAP Array Software V3.0 online with any operating system, device, or screen size.
      • -
      • Customization: Firefox is one of the most customizable browsers available, which means you can personalize your browsing experience while using FCAP Array Software V3.0 online. Firefox also has features such as themes, extensions, and bookmarks.
      • -
      - -

      Using FCAP Array Software V3.0 with Firefox can help you to optimize your workflow and productivity while performing multiplex bead-based immunoassays.

      - -

      How to get support for FCAP Array Software V3.0 with Firefox?

      - -

      If you have any questions or issues while using FCAP Array Software V3.0 with Firefox, you can get support from BD Biosciences or Firefox teams.

      - -

      For support from BD Biosciences, you can:

      - -
        -
      • Visit their website and access their online resources, such as data sheets, user guides, manuals, FAQs, and videos.
      • -
      • Contact their customer service team by phone, email, or chat.
      • -
      • Request a demo or a quote for FCAP Array Software V3.0 or other products.
      • -
      - -

      For support from Firefox, you can:

      - -
        -
      • Visit their website and access their online resources, such as help articles, tutorials, forums, and blogs.
      • -
      • Contact their community support team by email or social media.
      • -
      • Report a bug or a feedback for Firefox browser or FCAP Array Software V3.0 online version.
      • -
      - -

      Using FCAP Array Software V3.0 with Firefox can provide you with reliable and responsive support from both BD Biosciences and Firefox teams.

      -

      What are the applications of FCAP Array Software V3.0 with Firefox?

      - -

      FCAP Array Software V3.0 with Firefox can be used for various applications in bioscience research, such as:

      - -
        -
      • Inflammation and immune response: You can use FCAP Array Software V3.0 with Firefox to measure cytokines, chemokines, growth factors, and other biomarkers involved in inflammation and immune response.
      • -
      • Infectious diseases: You can use FCAP Array Software V3.0 with Firefox to detect and quantify pathogens, antibodies, antigens, and other markers related to infectious diseases.
      • -
      • Cancer: You can use FCAP Array Software V3.0 with Firefox to assess tumor markers, angiogenesis factors, apoptosis indicators, and other markers related to cancer.
      • -
      • Neuroscience: You can use FCAP Array Software V3.0 with Firefox to evaluate neurotrophins, neurotransmitters, receptors, and other markers related to neuroscience.
      • -
      • Cardiovascular diseases: You can use FCAP Array Software V3.0 with Firefox to monitor cardiac markers, coagulation factors, vascular endothelial growth factors, and other markers related to cardiovascular diseases.
      • -
      - -

      FCAP Array Software V3.0 with Firefox can help you to perform multiplex bead-based immunoassays for various applications in bioscience research.

      - -

      How to update FCAP Array Software V3.0 with Firefox?

      - -

      If you want to update FCAP Array Software V3.0 with Firefox, you can follow these steps:

      - -
        -
      1. Go to https://www.bdbiosciences.com/en-us/products/instruments/software-informatics/instrument-software/fcap-array-software-v3-0
      2. -
      3. Select your region and language from the drop-down menu
      4. -
      5. Click on the "Download" button under "FCAP Array Software v3.0 (RUO)"
      6. -
      7. Enter your email address and password to log in to your account or create a new one if you don't have one
      8. -
      9. Follow the instructions on the screen to download and install the latest version of the software on your computer
      10. -
      - -

      The update process is similar to the download process and takes about 10 minutes. You can check the version number of your software by clicking on the "Help" menu and selecting "About".

      - -

      Updating FCAP Array Software V3.0 with Firefox can help you to access the latest features and improvements of the software.

      -

      What are the features of FCAP Array Software V3.0 with Firefox?

      - -

      FCAP Array Software V3.0 with Firefox has many features that make it a versatile and efficient software for multiplex bead-based immunoassays, such as:

      - -
        -
      • Data import: You can import data from BD FACS™ instruments or other flow cytometers that produce FCS 2.0 or FCS 3.0 files. You can also import data from Excel or CSV files.
      • -
      • Data calibration: You can calibrate data using standard curves generated from known concentrations of analytes. You can also use built-in algorithms to correct for background noise, bead loss, and sample dilution.
      • -
      • Data quality control: You can perform quality control checks to ensure the validity and reliability of your data. You can use features such as outlier detection, coefficient of variation, and signal-to-noise ratio.
      • -
      • Data analysis: You can perform data analysis using various methods, such as median fluorescence intensity, concentration, fold change, and z-score. You can also use features such as clustering, heat map, and scatter plot.
      • -
      • Data interpretation: You can interpret data using various tools, such as graphs, tables, and statistics. You can also use features such as annotation, legend, and axis.
      • -
      • Data export: You can export data to Excel or other formats for further analysis or presentation. You can also export data to PDF or PNG files for printing or sharing.
      • -
      - -

      FCAP Array Software V3.0 with Firefox has many features that make it a comprehensive and user-friendly software for multiplex bead-based immunoassays.

      - -

      How to get started with FCAP Array Software V3.0 with Firefox?

      - -

      If you want to get started with FCAP Array Software V3.0 with Firefox, you can follow these steps:

      - -
        -
      1. Go to https://www.bdbiosciences.com/en-us/products/instruments/software-informatics/instrument-software/fcap-array-software-v3-0
      2. -
      3. Select your region and language from the drop-down menu
      4. -
      5. Click on the "Online Version" button under "FCAP Array Software v3.0 (RUO)"
      6. -
      7. Enter your email address and password to log in to your account or create a new one if you don't have one
      8. -
      9. Choose a protocol from the protocol library or create your own protocol
      10. -
      11. Upload your data file or use a sample data file
      12. -
      13. Perform data calibration, quality control, analysis, interpretation, and export
      14. -
      - -

      You can also watch a video tutorial on how to use FCAP Array Software V3.0 with Firefox on BD Biosciences website.

      - -

      Getting started with FCAP Array Software V3.0 with Firefox is easy and fast.

      -

      Conclusion

      - -

      FCAP Array Software V3.0 is a powerful software for analyzing and interpreting multiplex bead-based immunoassays, such as BD CBA and BD™ CBA Flex Sets. It is compatible with Firefox browser, as well as Chrome and Safari. You can use Firefox to access the online version of FCAP Array Software V3.0 or download it from BD Biosciences website for offline use.

      - -

      FCAP Array Software V3.0 has many features and benefits that make it a versatile and efficient software for bioscience research. It can help you to perform data import, calibration, quality control, analysis, interpretation, and export. It can also help you to perform various applications, such as inflammation and immune response, infectious diseases, cancer, neuroscience, and cardiovascular diseases.

      - -

      If you want to learn more about FCAP Array Software V3.0 or other products from BD Biosciences, you can visit their website or contact their customer service team. You can also get support from BD Biosciences or Firefox teams if you have any questions or issues while using FCAP Array Software V3.0 with Firefox.

      - -

      We hope this article has helped you to understand what FCAP Array Software V3.0 is and how to download it for Firefox. Thank you for reading and happy data analysis!

      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Fifa 08 Crack !EXCLUSIVE! Download Tpb File.md b/spaces/falterWliame/Face_Mask_Detection/Fifa 08 Crack !EXCLUSIVE! Download Tpb File.md deleted file mode 100644 index df35a809d92def5b7b8ff48360b20c3e41fde04e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Fifa 08 Crack !EXCLUSIVE! Download Tpb File.md +++ /dev/null @@ -1,22 +0,0 @@ -
      -

      How to Download and Install FIFA 08 on PC

      -

      FIFA 08 is a soccer video game released by EA Sports in 2007. It features more than 30 leagues, 620 teams, and 15,000 players from around the world. If you want to play FIFA 08 on your PC, you will need to download a file called "tpb file" which contains the game data and a crack to bypass the copy protection.

      -

      A tpb file is a torrent file that can be downloaded using a peer-to-peer (P2P) network such as BitTorrent. A torrent file contains information about the files and folders that are shared by other users who have the same file. To download a tpb file, you will need a torrent client such as uTorrent or BitComet.

      -

      fifa 08 crack download tpb file


      Download » https://urlca.com/2uDdBv



      -

      Here are the steps to download and install FIFA 08 on PC using a tpb file:

      -
        -
      1. Go to a torrent website such as The Pirate Bay or Kickass Torrents and search for "fifa 08 tpb file". You will see a list of results with different sizes and seeds. Seeds are the number of users who have the complete file and are sharing it with others. Choose a result with a high number of seeds and a reasonable size. Click on the magnet link or download the torrent file to your computer.
      2. -
      3. Open your torrent client and add the torrent file or magnet link. The download will start automatically. Depending on your internet speed and the number of seeds, it may take some time to complete. You can check the progress and speed of the download in your torrent client.
      4. -
      5. Once the download is finished, you will see a folder containing several files and folders. One of them will be named "FIFA 08.iso". This is an image file that contains the game data. You will need to mount this file using a virtual drive software such as Daemon Tools or PowerISO.
      6. -
      7. After mounting the image file, you will see a new drive in your computer with the FIFA 08 logo. Open this drive and run the setup.exe file. Follow the instructions to install the game on your PC. You may need to enter a serial key during the installation. You can find one in the folder named "Crack" or "Keygen".
      8. -
      9. After installing the game, you will need to copy the crack file from the folder named "Crack" or "NoCD" to the game installation folder. This will replace the original game executable with a modified one that will bypass the copy protection. You can find the game installation folder by right-clicking on the FIFA 08 shortcut on your desktop and choosing "Open file location".
      10. -
      11. Now you can launch the game from your desktop or start menu and enjoy playing FIFA 08 on your PC.
      12. -
      -

      Note: Downloading and installing FIFA 08 using a tpb file may be illegal in some countries and regions. It may also expose your computer to viruses and malware. Use it at your own risk.

      - -

      If you want to play FIFA 08 online with other players, you will need to create an EA account and register your game. You can do this by launching the game and choosing "Online" from the main menu. You will be asked to enter your email address and password. If you don't have an EA account, you can create one for free. You will also need to enter your serial key that you used during the installation.

      -

      After creating and logging in to your EA account, you will be able to join or create online matches with other players. You can choose from different modes such as friendly, ranked, tournament, or league. You can also customize your online profile and settings. You can check your online stats and rankings on the EA website.

      -

      -

      FIFA 08 is a fun and realistic soccer game that you can play on your PC. It has many features and options that will keep you entertained for hours. Whether you play solo or online, you will enjoy the thrill of scoring goals and winning matches.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Lumerical Fdtd License Crack Software !EXCLUSIVE!.md b/spaces/falterWliame/Face_Mask_Detection/Lumerical Fdtd License Crack Software !EXCLUSIVE!.md deleted file mode 100644 index 70dc37f4aa78b1a0fec04df8c5daa18bfe55f8ed..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Lumerical Fdtd License Crack Software !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Lumerical Fdtd License Crack Software


      Download ✦✦✦ https://urlca.com/2uDd2K



      -
      -Through a donation from Lumerical, this license has been extended to the ... of the FDTD Solutions software you want to use prior to running fdtd-run-pbs.sh:. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/falterWliame/Face_Mask_Detection/PATCHED Crack Taos Contaplus Elite 2012.md b/spaces/falterWliame/Face_Mask_Detection/PATCHED Crack Taos Contaplus Elite 2012.md deleted file mode 100644 index 75edd935c50098c3f2d5dcf06e937e390e99da92..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/PATCHED Crack Taos Contaplus Elite 2012.md +++ /dev/null @@ -1,6 +0,0 @@ -

      crack taos contaplus elite 2012


      Download Zip ————— https://urlca.com/2uDd7c



      -
      -Keygen Taos Serial Number, key, crack, keygen. ... Contaplus taos, keygen taos contaplus 2012, medicina taos ... FacturaPlus Elite 2012. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Bel-Air Season 1 Download Links - Drama Comedy and Nostalgia.md b/spaces/fatiXbelha/sd/Bel-Air Season 1 Download Links - Drama Comedy and Nostalgia.md deleted file mode 100644 index 26a6d469366b9c798e286ad514f92e5db1606b07..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Bel-Air Season 1 Download Links - Drama Comedy and Nostalgia.md +++ /dev/null @@ -1,198 +0,0 @@ -
      -

      Bel-Air Season 1 Download: How to Watch the Reimagined Fresh Prince of Bel-Air

      -

      If you are a fan of the classic sitcom The Fresh Prince of Bel-Air, you might be curious about the new drama series Bel-Air, which is a modern and dramatic take on the same story. In this article, we will tell you everything you need to know about Bel-Air season 1, including what it is, how to watch it online, and how to download it offline. Read on to find out more.

      -

      bel air season 1 download


      Download Filehttps://urllie.com/2uNCUK



      -

      What is Bel-Air?

      -

      Bel-Air is a TV series that reimagines the beloved sitcom The Fresh Prince of Bel-Air through a new lens. It follows the journey of Will Smith, a street-smart teenager from West Philadelphia, who is sent to live with his wealthy relatives at their Bel Air mansion. There, he faces the challenges and opportunities of a new world, while also dealing with his past and his identity.

      -

      A brief summary of the show

      -

      The show is based on a viral fan-made trailer by Morgan Cooper, who also serves as a co-writer and director for the series. It is produced by Universal Television and Westbrook Studios, a division of Jada Pinkett Smith and Will Smith's media company Westbrook Inc. Will Smith himself is an executive producer for the show, along with Quincy Jones, Benny Medina, Andy Borowitz, Susan Borowitz, Malcolm Spellman, T.J. Brady, Rasheed Newson, Terence Carter, James Lassiter, Miguel Melendez, and Morgan Cooper.

      -

      The show is set in modern-day America and explores the themes of race, class, culture, family, identity, and love. It also pays homage to the original sitcom by incorporating some of its iconic elements, such as the theme song, the characters' names, and some of the memorable scenes and jokes.

      -

      The cast and crew of Bel-Air

      -

      The show features a talented cast of actors who bring the characters to life. Here are some of the main cast members and their roles:

      -
        -
      • Jabari Banks as Will Smith: The protagonist of the show, who moves from Philadelphia to Bel Air after getting into trouble with a local gang.
      • -
      • Cassandra Freeman as Vivian Banks: Will's aunt and Phil's wife, who is a successful lawyer and a loving mother.
      • -
      • Jimmy Akingbola as Philip Banks: Will's uncle and Vivian's husband, who is a prominent judge and a strict father.
      • -
      • Olly Sholotan as Carlton Banks: Will's cousin and Phil's son, who is a smart but insecure student at Bel Air Academy.
      • -
      • Coco Jones as Hilary Banks: Will's cousin and Phil's daughter, who is a social media influencer and a spoiled fashionista.
      • -
      • Akira Akbar as Ashley Banks: Will's cousin and Phil's daughter, who is a rebellious teenager who looks up to Will.
      • -
      • Adrian Holmes as Geoffrey Butler: The Banks family's butler, who is sarcastic and loyal.
      • -
      • Aliyah Royale as Lisa Wilkes: Will's love interest and Phil's campaign manager, who is passionate and outspoken.
      • -
      -

The show also features other supporting and recurring characters who round out Will's new life in Bel Air.

Here are some of the main differences between Bel-Air and the original sitcom:

• Bel-Air takes a different approach to plot and character development. It explores the backstory and motivation of Will and the other characters, while The Fresh Prince of Bel-Air focuses on comedy and the situations Will gets into.
    • -
• The two shows also differ in style and format. Bel-Air has a cinematic, dramatic look and hour-long episodes, while The Fresh Prince of Bel-Air has a bright sitcom look and half-hour episodes.
    • -
    -

    Despite these differences, both shows share the same core message and theme: the importance of family, friendship, and self-discovery.

    -


    -

    How to watch Bel-Air season 1 online

    -

    If you are interested in watching Bel-Air season 1 online, you have several options to choose from. Here are some of the details that you need to know:

    -

    The release date and schedule of Bel-Air season 1

    -

    Bel-Air season 1 is expected to premiere in 2022, although the exact date has not been announced yet. The show will have 10 episodes, each lasting for an hour. The show will be released weekly, rather than all at once.

    -

    The streaming platforms that offer Bel-Air season 1

    -

    Bel-Air season 1 will be exclusively available on Peacock, which is a streaming service owned by NBCUniversal. Peacock is the home of many NBC shows, such as The Office, Parks and Recreation, Saturday Night Live, and more. Peacock also offers original content, such as Brave New World, Girls5eva, Dr. Death, and more.

    -

    Peacock is not the only streaming platform that you can use to watch Bel-Air season 1 online. You can also use other platforms that have a deal with Peacock, such as Hulu, YouTube TV, Sling TV, Fubo TV, and Cox Contour. These platforms allow you to access Peacock as part of their packages or add-ons.

    -

    The subscription plans and prices of the streaming platforms

    -

    If you want to watch Bel-Air season 1 online, you need to subscribe to one of the streaming platforms that offer it. Here are some of the subscription plans and prices that you can choose from:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Streaming Platform | Subscription Plan | Price | Features |
| --- | --- | --- | --- |
| Peacock | Peacock Free | $0 per month | Limited content, ads, no offline downloads |
| Peacock | Peacock Premium | $4.99 per month or $49.99 per year | All content, ads, offline downloads on mobile devices |
| Peacock | Peacock Premium Plus | $9.99 per month or $99.99 per year | All content, no ads, offline downloads on mobile devices |
| Hulu | Hulu + Live TV + Peacock Premium Add-on | $64.99 per month + $4.99 per month for Peacock Premium Add-on | Hulu content, live TV channels, Peacock content, ads on Hulu and Peacock, offline downloads on Hulu only |
| YouTube TV | YouTube TV + Peacock Premium Add-on | $64.99 per month + $4.99 per month for Peacock Premium Add-on | YouTube TV content, live TV channels, Peacock content, ads on YouTube TV and Peacock, offline downloads on YouTube TV only |
| Sling TV | Sling Blue + Peacock Premium Add-on | $35 per month + $4.99 per month for Peacock Premium Add-on | Sling TV content, live TV channels, Peacock content, ads on Sling TV and Peacock, no offline downloads |
| Fubo TV | Fubo TV + Peacock Premium Add-on | $64.99 per month + $4.99 per month for Peacock Premium Add-on | Fubo TV content, live TV channels, Peacock content, ads on Fubo TV and Peacock, offline downloads on Fubo TV only |
| Cox Contour | Cox Contour + Peacock Premium Add-on | $69.99 per month + $4.99 per month for Peacock Premium Add-on | Cox Contour content, live TV channels, Peacock content, ads on Cox Contour and Peacock, no offline downloads |
    -

    As you can see, the prices and features vary depending on the streaming platform and the subscription plan that you choose. You should compare them carefully and pick the one that suits your budget and preferences.

    -

    The free trial options and discounts of the streaming platforms

    -

    If you are not sure whether you want to commit to a subscription plan or not, you can take advantage of the free trial options and discounts that some of the streaming platforms offer. Here are some of them:

    -
      -
    • Peacock offers a 7-day free trial for both Peacock Premium and Peacock Premium Plus plans. You can cancel anytime before the trial ends and you will not be charged.
    • -
    • Hulu offers a 7-day free trial for Hulu + Live TV plan. You can also get a 30-day free trial for Hulu (without Live TV) plan if you sign up through select partners, such as Spotify, Sprint, or Verizon.
    • -
    • YouTube TV offers a 14-day free trial for YouTube TV plan. You can also get a 30-day free trial for YouTube Premium plan if you sign up through select partners, such as T-Mobile or Google One.
    • -
    • Sling TV offers a 3-day free trial for Sling Blue plan. You can also get a $10 discount for your first month of Sling Blue plan if you sign up through select partners, such as Best Buy or Samsung.
    • -
    • Fubo TV offers a 7-day free trial for Fubo TV plan. You can also get a $10 discount for your first month of Fubo TV plan if you sign up through select partners, such as Roku or LG.
    • -
    • Cox Contour offers a 30-day money-back guarantee for Cox Contour plan. You can also get a $10 discount for your first month of Cox Contour plan if you sign up through select partners, such as Xfinity or AT&T.
    • -
    -

    These free trial options and discounts are subject to change and availability, so you should check the official websites of the streaming platforms for the latest information and terms and conditions.

    -

    How to download Bel-Air season 1 offline

    -

    If you want to watch Bel-Air season 1 offline, you need to download it to your device first. This can be useful if you want to save data, avoid buffering, or watch it when you don't have an internet connection. Here are some of the benefits and steps of downloading Bel-Air season 1 offline:

    -

    The benefits of downloading Bel-Air season 1 offline

    -

    Downloading Bel-Air season 1 offline has several benefits, such as:

    -
      -
    • You can watch it anytime and anywhere, without relying on an internet connection.
    • -
    • You can save data and bandwidth, especially if you have a limited or expensive data plan.
    • -
    • You can avoid buffering and loading issues, which can ruin your viewing experience.
    • -
    • You can have more control over your viewing preferences, such as the video quality, the subtitles, and the playback speed.
    • -
    • You can share it with your friends and family, who might not have access to the same streaming platform as you.
    • -
    -

    The steps to download Bel-Air season 1 offline on different devices

    -

    The steps to download Bel-Air season 1 offline vary depending on the device and the streaming platform that you use. Here are some of the general steps that you can follow:

    -
      -
    1. Make sure that you have a valid subscription to one of the streaming platforms that offer Bel-Air season 1.
    2. -
    3. Make sure that you have enough storage space on your device to download the episodes that you want.
    4. -
    5. Make sure that you have a stable internet connection to download the episodes quickly and smoothly.
    6. -
    7. Open the app or the website of the streaming platform on your device.
    8. -
    9. Search for Bel-Air season 1 and select the episode that you want to download.
    10. -
    11. Look
    12. Look for the download icon or button, which is usually a downward arrow or a cloud with an arrow. Tap or click on it to start the download process.
    13. -
    14. Wait for the download to finish, which may take a few minutes depending on the size and quality of the episode.
    15. -
    16. Once the download is complete, you can find the episode in your device's library or downloads folder. You can then watch it offline whenever you want.
    17. -
    -

    Note that some streaming platforms may have different or additional steps to download Bel-Air season 1 offline, such as requiring you to use a specific app or device, or limiting the number or duration of downloads. You should check the official help pages of the streaming platforms for more details and instructions.

    -

    The tips and tricks to download Bel-Air season 1 offline safely and legally

    -

    Downloading Bel-Air season 1 offline can be easy and convenient, but you should also be aware of some tips and tricks to do it safely and legally, such as:

    -
      -
    • Only download Bel-Air season 1 from the authorized streaming platforms that have the rights to distribute it. Do not use illegal or pirated websites or apps that may contain viruses, malware, or spyware.
    • -
    • Only download Bel-Air season 1 for your personal and non-commercial use. Do not share, sell, or distribute it to others without the permission of the creators and owners of the show.
    • -
    • Only download Bel-Air season 1 within the terms and conditions of the streaming platforms that you use. Do not violate their policies or rules, such as downloading more than the allowed limit, using unauthorized devices or apps, or modifying or copying the content.
    • -
    • Only download Bel-Air season 1 within the availability and expiration dates of the show. Do not download it before it is released or after it is removed from the streaming platforms.
    • -
    • Delete Bel-Air season 1 from your device when you are done watching it or when you no longer need it. This can help you save storage space, avoid clutter, and respect the rights of the show.
    • -
    -

    By following these tips and tricks, you can enjoy watching Bel-Air season 1 offline without any hassle or risk.

    -

    Conclusion

    -

    Bel-Air season 1 is a TV series that reimagines the classic sitcom The Fresh Prince of Bel-Air as a modern and dramatic drama. It follows the story of Will Smith, a teenager from Philadelphia who moves to Bel Air to live with his wealthy relatives. The show explores the themes of race, class, culture, family, identity, and love in contemporary America.

    -

    If you want to watch Bel-Air season 1 online, you can use one of the streaming platforms that offer it, such as Peacock, Hulu, YouTube TV, Sling TV, Fubo TV, or Cox Contour. You can compare their subscription plans and prices, and take advantage of their free trial options and discounts.

    -

    If you want to watch Bel-Air season 1 offline, you can download it to your device from one of the streaming platforms that offer it. You can enjoy the benefits of downloading it offline, such as saving data, avoiding buffering, and watching it anytime and anywhere. You can also follow the steps to download it offline on different devices, and use the tips and tricks to do it safely and legally.

    -

    We hope that this article has helped you learn more about Bel-Air season 1 and how to watch it online and offline. If you are interested in watching this show, don't miss this opportunity to see a new version of a beloved story. Happy watching!

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Bel-Air season 1:

    -
      -
    1. Q: Is Bel-Air season 1 a remake or a reboot of The Fresh Prince of Bel-Air?
    2. -
    3. A: No, Bel-Air season 1 is not a remake or a reboot of The Fresh Prince of Bel-Air. It is a reimagining that takes a different approach to the story and the characters.
    4. -
    5. Q: When will Bel-Air season 1 be released?
    6. -
    7. A: Bel-Air season 1 is expected to be released in 2022, although the exact date has not been announced yet.
    8. -
    9. Q: How many episodes will Bel-Air season 1 have?
    10. -
    11. A: Bel-Air season 1 will have 10 episodes, each lasting for an hour.
    12. -
    13. Q: Who are the cast and crew of Bel-Air season 1?
    14. -
    15. A: Bel-Air season 1 features a talented cast of actors who play the roles of Will Smith and his relatives and friends. The show is also produced by Will Smith himself, along with other notable names such as Quincy Jones, Benny Medina, and Morgan Cooper.
    16. -
    17. Q: Where can I watch Bel-Air season 1 online?
    18. -
    19. A: You can watch Bel-Air season 1 online on Peacock, which is a streaming service owned by NBCUniversal. You can also watch it on other streaming platforms that have a deal with Peacock, such as Hulu, YouTube TV, Sling TV, Fubo TV, and Cox Contour.
    20. -
    21. Q: How can I download Bel-Air season 1 offline?
    22. -
    23. A: You can download Bel-Air season 1 offline from one of the streaming platforms that offer it, such as Peacock, Hulu, YouTube TV, Sling TV, or Fubo TV. You need to have a valid subscription to the streaming platform and enough storage space on your device. You also need to follow the steps to download it offline on different devices and use the tips and tricks to do it safely and legally.
    24. -

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Carrom Pool Disc Game Cheat APK How to Get Free Coins and Diamonds.md b/spaces/fatiXbelha/sd/Carrom Pool Disc Game Cheat APK How to Get Free Coins and Diamonds.md deleted file mode 100644 index df3b8fdb556f2595ed81d79a38d1b35c3c52a83b..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Carrom Pool Disc Game Cheat APK How to Get Free Coins and Diamonds.md +++ /dev/null @@ -1,99 +0,0 @@ -
    -

    Carrom Disc Pool Game Hack APK Download: A Complete Guide

    -

    If you are a fan of carrom, you might have heard of Carrom Disc Pool Game, a popular mobile game that lets you play carrom with your friends or other players online. But did you know that there is a way to get unlimited money and gems, unlock all modes and tables, and enjoy the game without any ads or restrictions? Yes, you heard it right. In this article, we will show you how to download and install Carrom Disc Pool Game Hack APK, a modified version of the original game that gives you access to all these features and more. Read on to find out more.

    -

    carrom disc pool game hack apk download


    DOWNLOAD ::: https://urllie.com/2uNwk3



    -

    What is Carrom Disc Pool Game?

    -

    Carrom Disc Pool Game is a multiplayer board game that simulates the classic Indian game of carrom. The game has two modes: carrom and disc pool. In carrom mode, you have to strike the red queen and your own pieces into the pockets before your opponent does. In disc pool mode, you have to pot all your pieces before your opponent does. You can play with your friends or other players online, or practice offline against the computer. You can also customize your pieces, strikers, boards, and tables with various designs and colors. The game has realistic physics, smooth controls, stunning graphics, and addictive gameplay.

    -

    What is Carrom Disc Pool Game Hack APK?

    -

    Carrom Disc Pool Game Hack APK is a modified version of the original game that gives you unlimited money and gems, which are the in-game currencies. You can use them to buy new pieces, strikers, boards, and tables, as well as upgrade your skills and abilities. You can also unlock all modes and tables, which are otherwise locked behind levels or payments. Moreover, you can enjoy the game without any ads or interruptions, as well as without any root requirement. With Carrom Disc Pool Game Hack APK, you can have more fun and excitement playing carrom on your mobile device.

    -

    Why do you need Carrom Disc Pool Game Hack APK?

    -

    You might be wondering why you need Carrom Disc Pool Game Hack APK when you can play the original game for free. Well, there are several reasons why you might want to try this hack version. First of all, you can save a lot of time and money by getting unlimited money and gems for free. You don't have to watch ads or complete surveys to earn them, or spend real money to buy them. You can get everything you want in the game without any hassle. Secondly, you can unlock all modes and tables, which are otherwise limited by your level or payment. You can play any mode and table you like, and enjoy the variety and challenge of the game. Thirdly, you can enjoy the game without any ads or interruptions, which can be annoying and distracting. You can focus on the game and have a smooth and satisfying experience. Lastly, you can use Carrom Disc Pool Game Hack APK without any root requirement, which means you don't have to risk your device's security or warranty. You can use the hack version safely and easily.

    How to download and install Carrom Disc Pool Game Hack APK?

    -

    Now that you know what Carrom Disc Pool Game Hack APK is and why you need it, you might be wondering how to download and install it on your device. Well, don't worry, we have got you covered. Just follow these simple steps and you will be ready to play the game with unlimited money and gems, all modes and tables unlocked, no ads, and no root required.

    -

    Step 1: Enable unknown sources

    -

    Before you download the APK file, you need to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, then security, then unknown sources, and toggle it on. You might see a warning message, but just ignore it and proceed.

    -

    Step 2: Download the APK file

    -

    Next, you need to download the APK file of Carrom Disc Pool Game Hack APK from a reliable source. You can use the link below to download it directly to your device. The file size is about 30 MB, so make sure you have enough space and a stable internet connection.

    -

    carrom pool disc game mod apk unlimited money
    -carrom pool disc game apk download latest version
    -carrom pool disc game hack online generator
    -carrom pool disc game cheat codes android
    -carrom pool disc game free download for pc
    -carrom pool disc game mod menu apk
    -carrom pool disc game hack apk no root
    -carrom pool disc game unlimited gems and coins
    -carrom pool disc game apk pure download
    -carrom pool disc game hack tool download
    -carrom pool disc game mod apk revdl
    -carrom pool disc game offline mode apk
    -carrom pool disc game hack version download
    -carrom pool disc game premium apk free
    -carrom pool disc game mod apk rexdl
    -carrom pool disc game hack apk 2023
    -carrom pool disc game cracked apk download
    -carrom pool disc game mod apk happymod
    -carrom pool disc game hack without human verification
    -carrom pool disc game vip mod apk
    -carrom pool disc game hack apk ios
    -carrom pool disc game original apk download
    -carrom pool disc game mod apk android 1
    -carrom pool disc game hack script download
    -carrom pool disc game pro mod apk
    -carrom pool disc game hack apk latest version
    -carrom pool disc game full unlocked apk
    -carrom pool disc game mod apk an1
    -carrom pool disc game hack online free
    -carrom pool disc game mega mod apk
    -carrom pool disc game hack apk obb
    -carrom pool disc game old version apk download
    -carrom pool disc game mod apk unlimited everything
    -carrom pool disc game hack app download
    -carrom pool disc game mod apk 2023 download

    -

    Download Carrom Disc Pool Game Hack APK here

    -

    Step 3: Install the APK file

    -

    Once you have downloaded the APK file, you need to install it on your device. To do this, locate the file in your device's file manager or downloads folder, and tap on it. You might see a pop-up asking for permission to install the app, just tap on install and wait for the process to finish. After that, you will see a confirmation message that the app has been installed successfully.

    -

    How to use Carrom Disc Pool Game Hack APK?

    -

    Now that you have installed Carrom Disc Pool Game Hack APK on your device, you are ready to use it and enjoy its features. Here are some steps on how to use it:

    -

    Step 1: Launch the game

    -

    To launch the game, just tap on its icon on your device's home screen or app drawer. You will see the game's logo and loading screen, followed by the main menu. You will also notice that you have unlimited money and gems in your account.

    -

    Step 2: Choose your mode and table

    -

    To choose your mode and table, just tap on the play button on the main menu. You will see two options: carrom and disc pool. Tap on the one you want to play, and then choose your table from the available ones. You will also see that all modes and tables are unlocked for you.

    -

    Step 3: Enjoy unlimited money and gems

    -

    To enjoy unlimited money and gems, just play the game as usual. You can use them to buy new pieces, strikers, boards, and tables from the shop, as well as upgrade your skills and abilities from the profile section. You can also use them to enter higher stakes matches or tournaments.

    -

    What are the features of Carrom Disc Pool Game Hack APK?

    -

    Carrom Disc Pool Game Hack APK has many features that make it better than the original game. Here are some of them:

    -

    Feature 1: Unlimited money and gems

    -

    This is probably the most obvious feature of Carrom Disc Pool Game Hack APK. You get unlimited money and gems in your account, which you can use for anything you want in the game. You don't have to watch ads or complete surveys to earn them, or spend real money to buy them. You can get everything you want in the game without any hassle.

    -

    Feature 2: All modes and tables unlocked

    -

    This is another great feature of Carrom Disc Pool Game Hack APK. You get access to all modes and tables in the game, which are otherwise locked behind levels or payments. You can play any mode and table you like, and enjoy the variety and challenge of the game. You can also try new and exclusive tables that are not available in the original game.

    -

    Feature 3: No ads and no root required

    -

    This is another feature that makes Carrom Disc Pool Game Hack APK better than the original game. You can enjoy the game without any ads or interruptions, which can be annoying and distracting. You can focus on the game and have a smooth and satisfying experience. Moreover, you can use Carrom Disc Pool Game Hack APK without any root requirement, which means you don't have to risk your device's security or warranty. You can use the hack version safely and easily.

    -

    What are the pros and cons of Carrom Disc Pool Game Hack APK?

    -

    Carrom Disc Pool Game Hack APK has many pros and cons that you should be aware of before using it. Here are some of them:

    -

    Pro 1: Free and easy to use

    -

    One of the main advantages of Carrom Disc Pool Game Hack APK is that it is free and easy to use. You don't have to pay anything to download and install it, or to use its features. You also don't need any technical skills or knowledge to use it. You just need to follow the simple steps we have provided above and you will be ready to play the game with unlimited money and gems, all modes and tables unlocked, no ads, and no root required.

    -

    Pro 2: Enhanced gameplay and graphics

    -

    Another advantage of Carrom Disc Pool Game Hack APK is that it enhances the gameplay and graphics of the original game. You can enjoy a more realistic and immersive experience playing carrom on your mobile device. The game has smooth controls, accurate physics, stunning graphics, and addictive gameplay. You can also customize your pieces, strikers, boards, and tables with various designs and colors.

    -

    Pro 3: Compatible with most devices

    -

Another advantage of Carrom Disc Pool Game Hack APK is that it is compatible with most devices. You can use it on any Android device running Android 4.1 or higher, without worrying about compatibility or performance problems. The game runs smoothly on most devices.

    -

    Con 1: Risk of malware and viruses

    -

    One of the main disadvantages of Carrom Disc Pool Game Hack APK is that it comes with a risk of malware and viruses. Since you are downloading and installing an app from an unknown source, you might expose your device to harmful software that can damage your device or steal your data. You should always be careful when downloading and installing apps from unknown sources, and use a reliable antivirus program to scan the app before using it.
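A lightweight complement to the antivirus advice above, and my own addition rather than something the article prescribes: if the download page publishes a checksum, compare it against the SHA-256 hash of the file you actually received. Both the file name and the expected hash below are placeholders.

```python
# Verify a downloaded APK against a published SHA-256 checksum (if one exists).
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123456789abcdef..."  # placeholder: hash published by the download page
actual = sha256_of("carrom-disc-pool-hack.apk")  # placeholder file name
print("OK" if actual == expected else f"MISMATCH: {actual}")
```

A mismatch only tells you the file was corrupted or altered in transit; a match does not prove the APK is safe, so this replaces neither the antivirus scan nor caution about the source.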

    -

    Con 2: Risk of ban and account suspension

    -

    Another disadvantage of Carrom Disc Pool Game Hack APK is that it comes with a risk of ban and account suspension. Since you are using a modified version of the original game, you might violate the terms and conditions of the game developer or publisher. They might detect your hack version and ban your account or suspend your access to the game. You should always be aware of the consequences of using a hack version, and use it at your own risk.

    -

    Con 3: Unfair advantage over other players

    -

    Another disadvantage of Carrom Disc Pool Game Hack APK is that it gives you an unfair advantage over other players. Since you have unlimited money and gems, all modes and tables unlocked, no ads, and no root required, you can easily win any match or tournament in the game. This might ruin the fun and challenge of the game for you and other players. You should always respect the rules and ethics of the game, and play fair with other players.

    -

    Conclusion

    -

In conclusion, Carrom Disc Pool Game Hack APK is a modified version of the original game that gives you unlimited money and gems, all modes and tables unlocked, no ads, and no root required. It is free and easy to use, improves on the original game's gameplay and graphics, and is compatible with most devices. However, it also comes with real drawbacks: the risk of malware and viruses, the risk of a ban or account suspension, and an unfair advantage over other players. Download it only from a source you trust, scan it with a reliable antivirus program before using it, understand the consequences of using a hacked version and use it at your own risk, and respect the rules of the game by playing fair with other players.

    FAQs

    -

    Here are some frequently asked questions about Carrom Disc Pool Game Hack APK:

    -

    FAQ 1: Is Carrom Disc Pool Game Hack APK safe to use?

    -

    Carrom Disc Pool Game Hack APK is not completely safe to use, as it comes with a risk of malware and viruses. You should always be careful when downloading and installing apps from unknown sources, and use a reliable antivirus program to scan the app before using it.

    -

    FAQ 2: Is Carrom Disc Pool Game Hack APK legal to use?

    -

    Carrom Disc Pool Game Hack APK is not legal to use, as it violates the terms and conditions of the game developer or publisher. They might detect your hack version and ban your account or suspend your access to the game. You should always be aware of the consequences of using a hack version, and use it at your own risk.

    -

    FAQ 3: How can I update Carrom Disc Pool Game Hack APK?

    -

    To update Carrom Disc Pool Game Hack APK, you need to download and install the latest version of the app from a reliable source. You can use the link below to download it directly to your device. However, you should always check the app for malware and viruses before using it.

    -

    Download Carrom Disc Pool Game Hack APK here

    -

    FAQ 4: Can I play online with Carrom Disc Pool Game Hack APK?

    -

    You can play online with Carrom Disc Pool Game Hack APK, but you might face some issues or problems. For example, you might not be able to connect to the server or join a match. You might also encounter other players who are using the hack version or report you for cheating. You should always respect the rules and ethics of the game, and play fair with other players.

    -

    FAQ 5: Can I use Carrom Disc Pool Game Hack APK on iOS devices?

    -

    No, you cannot use Carrom Disc Pool Game Hack APK on iOS devices, as it is only compatible with Android devices. You need an Android device that runs on Android 4.1 or higher to use the app.

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Descarga metal slug 1 2 3 4 5 6 apk y revive la nostalgia.md b/spaces/fatiXbelha/sd/Descarga metal slug 1 2 3 4 5 6 apk y revive la nostalgia.md deleted file mode 100644 index 135feddb993da339aac8354c006011ea3189aaa4..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Descarga metal slug 1 2 3 4 5 6 apk y revive la nostalgia.md +++ /dev/null @@ -1,111 +0,0 @@ -
    -

    Descargar Metal Slug 1 2 3 4 5 6 APK: How to Play the Classic Run and Gun Games on Your Android Device

    -

    If you are a fan of retro arcade games, you probably have heard of Metal Slug, a series of run and gun video games created by SNK. Metal Slug games are known for their fast-paced action, humorous graphics, and addictive gameplay. They have been released on various platforms such as Neo Geo, PlayStation, Xbox, Nintendo DS, and more.

    -




    -

But did you know that you can also play Metal Slug games on your Android device? Yes, you can enjoy these classic games on your smartphone or tablet with just a few steps. In this article, we will show you how to download and install Metal Slug APK, a package that contains the main games in the series: Metal Slug 1, 2, X, 3, 4, 5, and 6. We will also give you an overview of each game and some tips and tricks for playing them.

    -

    So what are you waiting for? Let's get started!

    -

    Metal Slug Series Overview

    -

    Metal Slug is a series of run and gun video games that started in 1996 with Metal Slug: Super Vehicle-001. The games follow the adventures of the Peregrine Falcon Squad, a group of elite soldiers who fight against various enemies such as rebels, aliens, zombies, mummies, and more. The games are famous for their cartoonish graphics, humorous animations, explosive sound effects, and diverse weapons and vehicles.

    -


    -

    Here is a brief overview of each game in the series:

    -

    Metal Slug 1

    -

    Metal Slug was released in 1996 for Neo Geo arcade machines and home consoles. It was also ported to other platforms such as Sega Saturn, PlayStation, and PC. It introduced the main characters of the series: Marco Rossi, Tarma Roving, General Morden, and Allen O'Neil. The game has six stages that take place in various locations such as forests, deserts, snowfields, and military bases. The game features a variety of weapons such as pistols, machine guns, rocket launchers, grenades, and the iconic Metal Slug tank. The game also has hidden items and prisoners of war that can be rescued for extra points and bonuses.

    -

    Metal Slug 2 / X

    -

    Metal Slug 2 was released in 1998 for Neo Geo arcade machines and home consoles. It was also ported to other platforms such as PlayStation, PC, and iOS. It added two new playable characters: Eri Kasamoto and Fio Germi. The game has six stages that take place in new locations such as ancient ruins, Arabian towns, alien spaceships, and pyramids. The game features new weapons such as lasers, flame shots, iron lizards, and enemy chasers. The game also introduces new enemies such as mummies, aliens, and mutants. The game also has new vehicles such as camels, planes, and submarines.

    -

    Metal Slug X was released in 1999 for Neo Geo arcade machines and home consoles. It was also ported to other platforms such as PlayStation, PC, iOS, and Android. It is an improved version of Metal Slug 2 that fixes some of the issues of the original game such as slowdowns and glitches. It also changes some of the stage layouts, enemy placements, weapon drops, and boss battles. It also adds some new features such as time attack mode, combat school mode, and secret paths.

    -

    Metal Slug 3

    -

    Metal Slug 3 was released in 2000 for Neo Geo arcade machines and home consoles. It was also ported to other platforms such as PlayStation 2, Xbox, PC, iOS, Android, and Nintendo Switch. It is considered by many fans to be the best game in the series due to its variety and replay value. The game has five stages that take place in diverse locations such as jungles, oceans, caves, factories, and outer space. The game features new weapons such as shotguns, homing missiles, dual machine guns, and satellite lasers. The game also introduces new enemies such as zombies, giant crabs, yetis, and martians. The game also has new vehicles such as elephants, ostriches, and helicopters. The game also has branching paths that lead to different endings and bonus stages.

    -

    Metal Slug 4

    -

    Metal Slug 4 was released in 2002 for Neo Geo arcade machines and home consoles. It was also ported to other platforms such as PlayStation 2, Xbox, PC, and Nintendo Switch. It replaced Eri and Tarma with two new playable characters: Nadia Cassel and Trevor Spacey. The game has six stages that take place in urban settings such as cities, subways, airports, and military bases. The game features new weapons such as dual pistols, thunder shots, and landmines. The game also introduces new enemies such as cyborgs, robots, and hackers. The game also has new vehicles such as motorcycles, trucks, and tanks.

    -

    Metal Slug 5

    -

    Metal Slug 5 was released in 2003 for Neo Geo arcade machines and home consoles. It was also ported to other platforms such as PlayStation 2, Xbox, PC, and Nintendo Switch. It brought back Eri and Tarma as playable characters along with Marco and Fio. The game has six stages that take place in exotic locations such as jungles, waterfalls, ancient ruins, and underground caves. The game features new weapons such as flame whips, grenade launchers, and laser rifles. The game also introduces new enemies such as masked soldiers, ninjas, and giant worms. The game also has new vehicles such as boats, jet skis, and slides.

    -

    Metal Slug 6

    -

    Metal Slug 6 was released in 2006 for Atomiswave arcade machines and PlayStation 2. It was also ported to other platforms such as PC and Nintendo Wii. It added two new playable characters: Ralf Jones and Clark Still from the King of Fighters and Ikari Warriors series. The game has seven stages that take place in futuristic settings such as space stations, moon bases, and alien planets. The game features new weapons such as machine guns, flame throwers, and rocket launchers. The game also introduces new enemies such as clones, mutants, and aliens. The game also has new vehicles such as mechs, hovercrafts, and spaceships.

    -

    How to Download and Install Metal Slug APK on Android

    -

    Now that you have a brief idea of what each Metal Slug game is about, you might be wondering how to play them on your Android device. Well, it's not that hard if you follow these simple steps:

    -

    Download a PPSSPP emulator and a file manager app

    -

The first thing you need to do is to download a PPSSPP emulator and a file manager app on your Android device. PPSSPP is an emulator that lets you run PlayStation Portable games on your device, and a file manager app lets you browse and manage the files stored on it.

    -

    You can download PPSSPP from Google Play Store or from its official website at https://www.ppsspp.org/. You can download a file manager app like ZArchiver or ES File Explorer from Google Play Store or from their official websites at https://zarchiver.en.softonic.com/android or https://es-file-explorer.en.softonic.com/android.

    -

    Once you have downloaded both apps, install them on your device by following the instructions on the screen.

    -

    Download the Metal Slug ISO files from a trusted source

    -

The next thing you need to do is to download the Metal Slug ISO files from a trusted source. An ISO file is a disc image that contains a game's data; in this case, you need the ISO files of the Metal Slug games that were released for the PlayStation Portable.

    -

    You can find and download the ISO files for each Metal Slug game from a reliable website or torrent. Some of the websites that offer these files are https://www.emuparadise.me/, https://www.freeroms.com/, or https://www.coolrom.com/. Some of the torrents that offer these files are https://thepiratebay.org/, https://1337x.to/, or https://rarbg.to/.

    -

    Make sure you check the file size and format of the downloaded files before opening them. The ISO files should be around 200 MB to 500 MB in size and have the .iso extension. If the files are compressed in ZIP or RAR format, you need to extract them using the file manager app.
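The same checks can be scripted. The sketch below is an illustration rather than part of the original guide: it verifies the extension and the rough 200 MB to 500 MB size range quoted above, and unpacks a ZIP archive if that is what arrived (RAR files would need a separate tool). The file name at the bottom is a placeholder.

```python
# Sanity-check a downloaded Metal Slug ISO and extract it if it came as a ZIP.
import os
import zipfile

def prepare_iso(path, min_mb=200, max_mb=500):
    if path.lower().endswith(".zip"):
        folder = os.path.dirname(path) or "."
        with zipfile.ZipFile(path) as archive:
            iso_names = [n for n in archive.namelist() if n.lower().endswith(".iso")]
            archive.extractall(folder)
        if not iso_names:
            raise ValueError("archive does not contain an .iso file")
        path = os.path.join(folder, iso_names[0])

    if not path.lower().endswith(".iso"):
        raise ValueError(f"unexpected extension: {path}")

    size_mb = os.path.getsize(path) / (1024 * 1024)
    if not min_mb <= size_mb <= max_mb:
        print(f"warning: {path} is {size_mb:.0f} MB, outside the expected range")
    return path

print(prepare_iso("metal_slug_x.zip"))  # placeholder file name
```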

    -

    Load the Metal Slug ISO files on PPSSPP and start playing

    -

    The final thing you need to do is to load the Metal Slug ISO files on PPSSPP and start playing. To do this, you need to open PPSSPP and locate the folder where the ISO files are stored using the file manager app. You can create a separate folder for the Metal Slug games on your device's internal storage or external SD card for easier access.
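If you like keeping things tidy, the housekeeping sketch below gathers every .iso from the download folder into one directory that PPSSPP can browse to. It is an optional extra, not part of the original instructions, and both paths are placeholders for whatever your device actually uses.

```python
# Move all downloaded .iso files into a single folder for PPSSPP's game browser.
import os
import shutil
from glob import glob

downloads = "/storage/emulated/0/Download"  # placeholder: common Android download path
target = "/storage/emulated/0/MetalSlug"    # placeholder: folder to point PPSSPP at

os.makedirs(target, exist_ok=True)
for iso in glob(os.path.join(downloads, "*.iso")):
    shutil.move(iso, os.path.join(target, os.path.basename(iso)))
    print("moved", os.path.basename(iso))
```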

    -

    Once you have found the folder, select and load the desired Metal Slug game on PPSSPP. You can adjust the settings of the emulator such as graphics, sound, controls, and performance according to your preference. You can also save and load your progress using the save states feature of PPSSPP.

    -

    To play the game, you can use the virtual buttons on the screen or connect a controller to your device via Bluetooth or USB. You can also play with your friends using the multiplayer mode of PPSSPP. You can either join an online server or create a local network with your friends using Wi-Fi or hotspot.

    -

    Conclusion

    -

    Playing Metal Slug games on your Android device is a great way to relive the nostalgia of these classic run and gun games. You can enjoy the fast-paced action, humorous graphics, and addictive gameplay of these games anytime and anywhere with just a few steps. All you need is a PPSSPP emulator, a file manager app, and the Metal Slug ISO files.

    -

    Here are some tips and tricks for playing Metal Slug games on your Android device:

    -
      -
    • Use different weapons and vehicles to deal with different enemies and situations. Don't be afraid to experiment with different combinations.
    • -
    • Rescue as many prisoners of war as possible to get extra points and bonuses. Some of them may also give you special items or weapons.
    • -
    • Look for hidden items and secrets in each stage. Some of them may reveal new paths, modes, or characters.
    • -
    • Use cheats if you want to have some fun or challenge yourself. Some of the cheats include unlimited ammo, invincibility, level select, and more.
    • -
    • Have fun and don't give up. Metal Slug games are known for their difficulty and unpredictability. But they are also rewarding and satisfying once you complete them.
    • -
    -

    We hope this article has helped you learn how to download and install Metal Slug APK on your Android device. If you have any feedback or questions, please feel free to leave a comment below or contact us for more information. Thank you for reading!

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Metal Slug APK:

    -

    Q: Is Metal Slug APK safe to download?

    -

    A: Yes, as long as you download it from a trusted source and scan it with an antivirus app before opening it. However, we do not endorse or promote any illegal downloading or piracy of these games. Please support the original developers by buying their games from official sources.

    -

    Q: Is Metal Slug APK free to download?

    -

    A: Yes, most of the websites or torrents that offer these files do not charge any fee for downloading them. However, some of them may require you to register an account or complete a survey before accessing them. Please be careful of any scams or malware that may harm your device or data.

    -

    Q: Can I play Metal Slug APK offline?

    -

    A: Yes, you can play these games offline once you have downloaded and installed them on your device. You do not need an internet connection to play them unless you want to use the multiplayer mode of PPSSPP.

    -

    Q: Can I play Metal Slug APK on other devices?

    -

    A: Yes, you can play these games on other devices that support PPSSPP emulator such as Windows PC, Mac OS, Linux, iOS, PSP, PS Vita, and more. You just need to download and install PPSSPP emulator and the Metal Slug ISO files on those devices.

    -

    Q: Which Metal Slug game is the best?

    -

    A: This is a subjective question that depends on your personal preference and taste. However, most fans agree that Metal Slug 3 is the best game in the series due to its variety and replay value. But you can also try other games in the series and see which one suits you best.

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Lagu Eyes Blue X Terpikat Senyummu Lirik dan Terjemahan.md b/spaces/fatiXbelha/sd/Download Lagu Eyes Blue X Terpikat Senyummu Lirik dan Terjemahan.md deleted file mode 100644 index e09c4350c68e07b3601e71cd3bd2fce7a4b1b705..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Lagu Eyes Blue X Terpikat Senyummu Lirik dan Terjemahan.md +++ /dev/null @@ -1,31 +0,0 @@ -
    -

    Method 1: Use a YouTube Converter

    - One of the simplest ways to download Lagu Eyes Blue X Terpikat Senyummu is to use a YouTube converter. This is a tool that can convert any YouTube video into an MP3 file that you can save on your device. Here are the steps to follow:
      -
1. Go to YouTube and search for Lagu Eyes Blue X Terpikat Senyummu. You should find several videos that have this song.
2. Copy the URL of the video that you want to download.
3. Go to a YouTube converter website, such as 4K Download or Ubersuggest.
4. Paste the URL into the search box and click Convert.
5. Choose the quality and format of the MP3 file that you want to download.
6. Click Download and save the file on your device.
    -
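For readers working from a computer, a scripted alternative to the web converters above is the open-source yt-dlp library together with ffmpeg. This is my own suggestion, not something the article covers; the URL is a placeholder, and it should only be used on videos whose license allows downloading (see the FAQ further below).

```python
# Download a video's audio track and convert it to MP3 with yt-dlp (needs ffmpeg installed).
from yt_dlp import YoutubeDL

VIDEO_URL = "https://www.youtube.com/watch?v=XXXXXXXXXXX"  # placeholder URL

options = {
    "format": "bestaudio/best",        # pick the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",    # name the output after the video title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",   # convert the download to audio
        "preferredcodec": "mp3",
        "preferredquality": "192",     # target bitrate in kbps
    }],
}

with YoutubeDL(options) as ydl:
    ydl.download([VIDEO_URL])
```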

    Method 2: Use SoundCloud

    - Another way to download Lagu Eyes Blue X Terpikat Senyummu is to use SoundCloud. This is a platform that hosts millions of songs and podcasts from various artists and creators. You can find Lagu Eyes Blue X Terpikat Senyummu on SoundCloud by following these steps:
      -
1. Go to SoundCloud and search for Lagu Eyes Blue X Terpikat Senyummu. You should find a track by amirxhani that has this song.
2. Click on the track and then click on the More button (three dots).
3. Select Download file from the menu.
4. Save the file on your device.
    - Note: You may need to create a free account on SoundCloud to download tracks.

    Method 3: Use Spotify

    - The third way to download Lagu Eyes Blue X Terpikat Senyummu is to use Spotify. This is a popular music streaming service that offers millions of songs and playlists. You can find Lagu Eyes Blue X Terpikat Senyummu on Spotify by following these steps:
      -
1. Go to Spotify and search for Lagu Eyes Blue X Terpikat Senyummu. You should find a playlist by Nadin Amizah that has this song.
2. Click on the playlist and then click on the Heart button to like it.
3. Go to Your Library and select Playlists.
4. Find the playlist that you liked and click on the Download button (down arrow).
5. Wait for the playlist to download on your device.
    - Note: You need to have a premium subscription on Spotify to download playlists.

    Conclusion

    - Lagu Eyes Blue X Terpikat Senyummu is a lovely song that you can enjoy anytime, anywhere. You can download it using any of the three methods above: YouTube converter, SoundCloud, or Spotify. Choose the one that suits you best and enjoy listening to this song offline.

    Frequently Asked Questions

- Q: What does lagu mean?
A: Lagu is a word that has different meanings in different languages; in Indonesian, it means song.
Q: Who are Conan Gray and Nadin Amizah?
A: Conan Gray is an American singer-songwriter who rose to fame with his debut album Kid Krow in 2020. Nadin Amizah is an Indonesian singer-songwriter known for her indie folk music.
Q: How can I find more songs like Lagu Eyes Blue X Terpikat Senyummu?
A: You can search for more songs by Conan Gray or Nadin Amizah on YouTube, SoundCloud, or Spotify. You can also look for other mashups or covers of their songs.
Q: Is it legal to download music from YouTube?
A: It depends on the terms and conditions of the YouTube video and the website that you use to convert it. Some videos have a Creative Commons license that allows you to download and reuse them, while others have a standard YouTube license that prohibits it. Always check the license of the video before downloading it and respect the rights of the original creators.
Q: How can I support the artists who made Lagu Eyes Blue X Terpikat Senyummu?
A: You can support them by streaming their music on official platforms, buying their albums or merchandise, following them on social media, or attending their concerts. You can also share their music with your friends and family and spread the word about their talent.
I hope you enjoyed this article and learned something new. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Leps World 4 APK and Join Lep in His Quest for Gold.md b/spaces/fatiXbelha/sd/Download Leps World 4 APK and Join Lep in His Quest for Gold.md deleted file mode 100644 index 939452af9fbba307254b7d95c00514ce15675e26..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Leps World 4 APK and Join Lep in His Quest for Gold.md +++ /dev/null @@ -1,204 +0,0 @@ -
    -

    Lep's World 4 APK Download: A Guide for Android Users

    -

    If you are a fan of classic platformer games like Super Mario, then you will love Lep's World 4. This is a fun and addictive game that will take you on an adventure through different worlds and levels. You will play as Lep, a brave leprechaun who has to find his gold that was stolen by evil monsters. Along the way, you will encounter various enemies, obstacles, power-ups, and boss fights. You will also be able to choose from different characters, each with their own abilities and skills.

    -

    Lep's World 4 is available for both iOS and Android devices, but in this article, we will focus on how to download and install the game on your Android device. We will also give you some tips and tricks on how to play the game better and have more fun. So, without further ado, let's get started!

    -




    -

    How to Download and Install Lep's World 4 APK on Your Android Device

    -

    There are two ways to download and install Lep's World 4 APK on your Android device. You can either go to the official website of Lep's World or use a trusted third-party source. Here are the steps for both methods:

    -

    Step1 : Go to the official website of Lep's World or use a trusted third-party source

    -

The official website of Lep's World is https://www.lepsworld4.com/. This is the safest and most reliable way to get the latest version of the game. You can also find more information about the game, such as its features, screenshots, videos, and reviews. To download the game from the official website, follow these steps:

    -
      -
    • Go to https://www.lepsworld4.com/ on your Android device's browser.
    • -
    • Tap on the "Download" button at the top right corner of the screen.
    • -
    • You will be redirected to the Google Play Store page of Lep's World 4.
    • -
    • Tap on the "Install" button and wait for the game to download and install on your device.
    • -
    -

    If you prefer to use a third-party source, you can also find Lep's World 4 APK on various websites that offer free APK downloads. However, you should be careful and only use trusted and reputable sources, as some websites may contain malware or viruses that can harm your device. To download the game from a third-party source, follow these steps:

    -
      -
• Go to a website that offers Lep's World 4 APK, such as https://apkpure.com/lep-s-world-4-%F0%9F%8D%80-jump-n-run-games/at.ner.lepsWorld4 or https://apkdone.com/leps-world-4/, on your Android device's browser.
    • -
    • Tap on the "Download APK" button and wait for the file to download on your device.
    • -
    -

    Step 2: Download the APK file and allow unknown sources in your settings

    -

    Once you have downloaded the APK file of Lep's World 4, you need to allow unknown sources in your device's settings. This is because Android devices normally block the installation of apps from sources other than the Google Play Store. To allow unknown sources, follow these steps:

    -
      -
    • Go to your device's settings and tap on "Security" or "Privacy".
    • -
    • Find and enable the option that says "Unknown sources" or "Install unknown apps".
    • -
    • You may see a warning message that says installing apps from unknown sources may harm your device. Tap on "OK" or "Allow" to proceed.
    • -
    -

    Step 3: Locate the APK file and tap on it to install it

    -

    The final step is to locate the APK file of Lep's World 4 and tap on it to install it on your device. To do this, follow these steps:

    -


    -
      -
    • Go to your device's file manager and find the folder where you downloaded the APK file. It may be in your "Downloads" folder or in a folder named after the website you used.
    • -
    • Tap on the APK file of Lep's World 4 and you will see a pop-up window that asks you if you want to install this application.
    • -
    • Tap on "Install" and wait for the installation process to complete.
    • -
    • You may see another pop-up window that asks you if you want to open this application. Tap on "Open" and enjoy playing Lep's World 4!
    • -
    -

    How to Play Lep's World 4 on Your Android Device

    -

    Now that you have successfully downloaded and installed Lep's World 4 APK on your Android device, you are ready to play this amazing game. Here are some steps on how to play Lep's World 4 on your Android device:

    -

    Step 1: Launch the game and choose your character

    -

    When you launch the game, you will see a splash screen with the logo of Lep's World 4. After that, you will see a main menu with different options, such as Play, Settings, Achievements, Leaderboards, and More Games. Tap on "Play" to start playing the game.

    -

    You will then see a screen where you can choose your character. You can choose from four different characters: Lep, Lily, Mike, and Louie. Each character has their own special skill that can help you in different situations. For example, Lep can throw pinecones faster, Lily can jump higher, Mike can run faster, and Louie can break blocks easier. You can also unlock more characters by collecting enough coins in the game.

    -

    To choose your character, simply tap on their icon and then tap on "Select".

    Step 2: Explore the different worlds and levels

    -

    After choosing your character, you will see a world map where you can select the world and the level you want to play. There are four different worlds in Lep's World 4, each with a different theme and environment. They are:

    -
      -
    • World 1: The Forest - A green and lush forest full of trees, flowers, mushrooms, and animals.
    • -
    • World 2: The Desert - A hot and dry desert with cacti, sand, rocks, and scorpions.
    • -
    • World 3: The Ice - A cold and snowy ice land with icebergs, snowmen, penguins, and polar bears.
    • -
    • World 4: The Castle - A dark and spooky castle with ghosts, bats, skeletons, and witches.
    • -
    -

    Each world has 25 levels that you can play in any order. However, some levels are locked and require you to collect a certain number of clovers to unlock them. Clovers are special items that are hidden in some levels. You can find them by breaking blocks, hitting switches, or exploring secret areas. You can also buy clovers with coins in the shop.

    -

    To select a world and a level, simply tap on their icons and then tap on "Play".

    -

    Step 3: Collect coins, clovers, pinecones, and power-ups

    -

    Once you start playing a level, you will see a user interface that shows your score, lives, coins, clovers, pinecones, and power-ups. You can collect these items by jumping on them or hitting them with pinecones. Here is what each item does:

    -
      -
    • Coins: These are the currency of the game. You can use them to buy more lives, power-ups, characters, or clovers in the shop. You can also use them to play bonus games or revive yourself when you die.
    • -
    • Clovers: These are the keys to unlock more levels. You need to collect a certain number of clovers to access some levels. You can also use them to play bonus games or revive yourself when you die.
    • -
    • Pinecones: These are your weapons. You can throw them at enemies or blocks to defeat them or break them. You can also use them to activate switches or reveal hidden items. You can carry up to 10 pinecones at a time.
    • -
    • Power-ups: These are special items that give you extra abilities or advantages. There are four types of power-ups in Lep's World 4:
    • -
        -
      • Magnet: This attracts all coins and clovers to you automatically.
      • -
      • Shield: This protects you from one hit by an enemy or an obstacle.
      • -
      • Speed: This makes you run faster and jump higher.
      • -
      • Fireball: This allows you to shoot fireballs instead of pinecones.
      • -
      -
    -

    Step 4: Defeat enemies and bosses

    -

    As you play through the levels, you will encounter various enemies that will try to stop you or harm you. Some of the enemies you will face are:

    -
      -
    • Bee: A flying insect that shoots stingers at you.
    • -
    • Frog: A hopping amphibian that tries to bite you.
    • -
    • Hedgehog: A spiky mammal that rolls into a ball and charges at you.
    • -
    • Snake: A slithering reptile that spits venom at you.
    • -
    • Turtle: A shelled animal that hides in its shell and spins around.
    • -
    -

    You can defeat most enemies by jumping on them or throwing pinecones at them. However, some enemies are immune to certain attacks or have special abilities that make them harder to defeat. For example, bees are immune to pinecones, frogs can jump high, hedgehogs are immune to jumping, snakes can hide in bushes, and turtles can reflect pinecones back at you.

    -

    At the end of each world, you will face a boss that is much bigger and stronger than the regular enemies. Each boss has a different appearance, behavior, and attack pattern. You will need to dodge their attacks and hit their weak spots with pinecones or fireballs. Some of the bosses you will face are:

    -
      -
    • Giant Spider: A huge arachnid that hangs from the ceiling and drops webs and eggs at you.
    • -
    • Giant Scorpion: A massive scorpion that crawls on the ground and swings its tail and claws at you.
    • -
    • Giant Snowman: A colossal snowman that throws snowballs and icicles at you.
    • -
    • Giant Witch: A wicked witch that flies on a broom and casts spells at you.
    • -
    -

    To defeat a boss, you will need to hit them a certain number of times with pinecones or fireballs. You will also need to avoid their attacks and watch out for their health bar. When you defeat a boss, you will be rewarded with a lot of coins and clovers, and you will unlock the next world.

    -

    Features of Lep's World 4 That Make It a Great Platformer Game

    -

    Lep's World 4 is not just a simple platformer game. It has many features that make it stand out from other games in the genre. Here are some of the features that make Lep's World 4 a great platformer game:

    -

    Feature 1: 4 different addictive worlds with different themes and challenges

    -

    Lep's World 4 offers you a variety of worlds to explore and enjoy. Each world has a different theme and environment, such as forest, desert, ice, and castle. Each world also has different challenges and obstacles, such as moving platforms, spikes, traps, and puzzles. You will never get bored of playing Lep's World 4, as each world offers you something new and exciting.

    -

    Feature 2: 100 beautiful levels with increasing difficulty and hidden secrets

    -

    Lep's World 4 has 100 levels that you can play in any order. Each level has a different layout, design, and objective. Some levels are easy and straightforward, while others are hard and complex. Some levels have hidden secrets, such as bonus levels, hidden blocks, or secret areas. You will have to use your skills and creativity to complete each level and find all the secrets.

    -

    Feature 3: Awesome boss fights with unique strategies and skills

    -

    Lep's World 4 has some of the most awesome boss fights you will ever see in a platformer game. Each boss is unique and has its own appearance, behavior, and attack pattern. You will have to use your strategy and skills to defeat each boss and advance to the next world. Each boss fight is challenging and rewarding, as you will feel a sense of accomplishment when you win.

    -

    Feature 4: Many power-ups, bonus levels, hidden blocks, and bonus items

    -

    Lep's World 4 has many power-ups that can help you in your adventure. You can find them in blocks or bricks that you can break with pinecones or fireballs. Some of the power-ups are magnet, shield, speed, and fireball. Each power-up gives you an extra ability or advantage for a limited time. You can also find bonus levels that can give you more coins and clovers. You can access them by finding hidden blocks or switches that open secret doors or pipes. You can also find bonus items that can give you extra lives or pinecones. You can find them in chests or pots that you can open with pinecones or fireballs.

    -

    Feature 5: Over 20 different animated enemies with different behaviors and attacks

    -

    Lep's World 4 has over 20 different enemies that will try to stop you or harm you. Each enemy is animated and has its own behavior and attack. Some enemies are passive and only move around, while others are aggressive and chase you or shoot at you. Some enemies are immune to certain attacks or have special abilities that make them harder to defeat. You will have to learn their patterns and weaknesses to overcome them.

    -

    Tips and Tricks to Master Lep's World 4 and Have More Fun

    -

    Lep's World 4 is a fun and addictive game that anyone can play and enjoy. However, if you want to master the game and have more fun, here are some tips and tricks that can help you:

    -

    Tip 1: Use the pinecones to break blocks and hit enemies from a distance

    -

    Pinecones are your main weapon in Lep's World 4. You can throw them at enemies or blocks to defeat them or break them. You can also use them to activate switches or reveal hidden items. You can carry up to 10 pinecones at a time, but you can find more in blocks or pots. You can also upgrade your pinecones in the shop to make them faster or stronger.

    -

    Pinecones are useful for hitting enemies or blocks from a distance, especially if they are out of your reach or too dangerous to approach. For example, you can use pinecones to hit bees that fly above you, frogs that jump high, snakes that hide in bushes, or turtles that reflect pinecones back at you.

    -

    Tip 2: Replay previous levels to grind for more coins and lives

    -

    Coins and lives are important resources in Lep's World 4. You need coins to buy more lives, power-ups, characters, or clovers in the shop. You also need coins to play bonus games or revive yourself when you die. You need lives to continue playing the game and avoid losing your progress. You start with 5 lives, but you can lose them by dying or quitting a level.

    -

    One way to get more coins and lives is to replay previous levels that you have already completed. You can do this by tapping on the level icon on the world map and then tapping on "Replay". By replaying previous levels, you can collect more coins and clovers that you may have missed or ignored before. You can also find more lives or pinecones in blocks or pots that respawn every time you replay a level.

    -

    Replaying previous levels can also help you improve your skills and strategies, as you can practice your moves and learn from your mistakes. You can also try to beat your previous score or time, or find all the secrets and hidden items.

    -

    Tip 3: Use the special skills hidden in destroyable blocks and bricks

    -

    In Lep's World 4, there are some blocks and bricks that you can destroy with pinecones or fireballs. These blocks and bricks may contain coins, clovers, pinecones, power-ups, or bonus items. However, some of them may also contain special skills that can give you an edge in the game. These special skills are hidden and only appear when you break the block or brick that contains them.

    -

    Some of the special skills that you can find are:

    -
      -
    • Double Jump: This allows you to jump twice in the air.
    • -
    • Glide: This allows you to glide in the air for a short distance.
    • -
    • Wall Jump: This allows you to jump off walls.
    • -
    • Dash: This allows you to dash forward quickly.
    • -
    -

    These special skills can help you reach higher places, avoid obstacles, cross gaps, or escape enemies. You can use them by tapping on the screen or swiping in a certain direction. However, these special skills only last for a limited time, so use them wisely.

    -

    Reviews of Lep's World 4 by Other Players

    -

    Lep's World 4 is a popular game that has received many positive reviews from other players. Here are some of the reviews that show why people love this game:

    -

    Review 1: A positive review from a satisfied player who praises the game's graphics, music, controls, and gameplay

    -

    "This game is awesome! I love the graphics, they are so colorful and detailed. The music is catchy and fits the mood of each world. The controls are easy and responsive, I can move and jump with no problem. The gameplay is fun and addictive, I can't stop playing it. There are so many levels to play and secrets to find. The boss fights are epic and challenging. This game is a masterpiece of platformer games!"

    -

    Review 2: A negative review from a disappointed player who criticizes the game's ads, bugs, difficulty, and lack of originality

    -

    "This game is terrible! I hate the ads, they are so annoying and intrusive. They pop up every time I finish a level or die. The game is also full of bugs and glitches, sometimes it crashes or freezes. The game is too hard and unfair, some levels are impossible to beat without power-ups or clovers. The game is also boring and repetitive, it is just a copy of Super Mario with different characters. This game is a waste of time and space!"

    -

    Conclusion: Why You Should Download Lep's World 4 APK Today

    -

    Lep's World 4 is a great platformer game that will keep you entertained for hours. It has many features that make it stand out from other games in the genre, such as:

    -
      -
    • 4 different addictive worlds with different themes and challenges
    • -
    • 100 beautiful levels with increasing difficulty and hidden secrets
    • -
    • Awesome boss fights with unique strategies and skills
    • -
    • Many power-ups, bonus levels, hidden blocks, and bonus items
    • -
    • Over 20 different animated enemies with different behaviors and attacks
    • -
    • 4 different characters with their own special skills
    • -
    • Catchy music and sound effects
    • -
    • Easy and responsive controls
    • -
    • Friendly user interface and graphics
    • -
    • Free to play and download
    • -
    -

    If you are looking for a fun and addictive platformer game that will challenge your skills and entertain you for hours, then you should download Lep's World 4 APK today. You will not regret it, as this game is one of the best platformer games you will ever play. You can download it from the official website of Lep's World or from a trusted third-party source. Just follow the steps we have provided in this article and you will be able to enjoy this amazing game on your Android device.

    -

    So, what are you waiting for? Download Lep's World 4 APK today and join Lep and his friends in their quest to find their gold and save their world from the evil monsters. You will have a blast playing this game and discovering all its secrets and surprises. Have fun and good luck!

    -

    FAQs About Lep's World 4 APK Download

    -

    Here are some of the frequently asked questions about Lep's World 4 APK download:

    -

    FAQ 1: Is Lep's World 4 free to play?

    -

    Yes, Lep's World 4 is free to play and download. However, the game contains ads that may interrupt your gameplay or offer you in-app purchases. You can remove the ads or buy more coins, clovers, lives, or power-ups with real money if you want to. But you can also play the game without spending any money, as the game is generous with its rewards and bonuses.

    -

    FAQ 2: Is Lep's World 4 safe to download?

    -

    Yes, Lep's World 4 is safe to download if you use the official website of Lep's World or a trusted third-party source. However, you should be careful and only use reputable and reliable sources, as some websites may contain malware or viruses that can harm your device. You should also scan the APK file with an antivirus software before installing it on your device.

    -

    FAQ 3: How can I play Lep's World 4 offline?

    -

    You can play Lep's World 4 offline if you have already downloaded and installed the game on your device. You can also play the game without an internet connection if you have already unlocked the levels that you want to play. However, you will not be able to access some features of the game that require an internet connection, such as achievements, leaderboards, bonus games, or online multiplayer.

    -

    FAQ 4: How can I contact the developer of Lep's World 4?

    -

You can contact the developer of Lep's World 4 by sending an email to support@lepsworld.zendesk.com or by visiting their website at https://www.lepsworld4.com/. You can also follow them on social media platforms such as Facebook, Twitter, Instagram, or YouTube. You can find the links to their social media accounts on their website or on the game's main menu.

    -

    FAQ 5: How can I get more lives in Lep's World 4?

    -

    You can get more lives in Lep's World 4 by doing one of the following:

    -
      -
    • Buying more lives with coins in the shop.
    • -
    • Finding more lives in blocks or pots in the levels.
    • -
    • Watching a video ad to get a free life when you die.
    • -
    • Waiting for your lives to refill over time.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/FR Legends Bike Mod Apk The Most Realistic and Fun Bike Racing Game Ever.md b/spaces/fatiXbelha/sd/FR Legends Bike Mod Apk The Most Realistic and Fun Bike Racing Game Ever.md deleted file mode 100644 index 050f3a9585af50efddb229c42880c4c0a9a4e272..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/FR Legends Bike Mod Apk The Most Realistic and Fun Bike Racing Game Ever.md +++ /dev/null @@ -1,82 +0,0 @@ -
    -

    FR Legends Bike Mod APK: How to Download and Install It

    -

    If you are a fan of FR Legends, the popular drifting game for mobile devices, you might be interested in trying out the FR Legends Bike Mod APK. This is a fan-made modification that adds bikes to the game, allowing you to drift on two wheels instead of four. In this article, I will show you how to download and install the FR Legends Bike Mod APK on your Android device.

    -




    -

    What is FR Legends?

    -

    FR Legends is a racing game that focuses on drifting, the art of sliding your car sideways around corners. The game features realistic physics, customizable cars, and various tracks and modes to test your skills. You can also compete with other players online or challenge yourself in solo mode.

    -

    What is FR Legends Bike Mod APK?

    -

    FR Legends Bike Mod APK is a modification of the original FR Legends game that adds bikes as a new vehicle type. You can choose from different models of bikes, such as sport bikes, dirt bikes, and choppers. You can also customize your bike with different parts, colors, and stickers. The mod also adds new maps and challenges that are suitable for bike drifting.

    -

    Why should you try FR Legends Bike Mod APK?

    -

    If you love FR Legends and want to experience a new way of drifting, you should give FR Legends Bike Mod APK a try. You will be able to enjoy the following benefits:

    -


    -
      -
    • More variety: You can switch between cars and bikes anytime you want, and explore different styles of drifting.
    • -
    • More challenge: Drifting on a bike is harder than on a car, as you have to balance your speed, angle, and throttle. You will need more skill and practice to master bike drifting.
    • -
    • More fun: Drifting on a bike is more thrilling and exciting than on a car, as you can perform more tricks and stunts. You can also show off your bike to other players online.
    • -
    -

    How to download and install FR Legends Bike Mod APK?

    -

    To download and install FR Legends Bike Mod APK on your Android device, you need to follow these steps:

    -
      -
1. Download the FR Legends Bike Mod APK file from a trusted source. You can find it on various websites that offer modded games, such as frlmods.com or android-1.com.
2. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
3. Locate the downloaded APK file on your device using a file manager app. Tap on it and follow the instructions to install it.
4. Launch the game and enjoy drifting on bikes.
    -

    Conclusion

    -

    FR Legends Bike Mod APK is a great way to spice up your drifting experience on FR Legends. You can download and install it easily on your Android device and have fun with bikes. However, be aware that this is an unofficial modification that may not be compatible with future updates of the original game. Also, make sure you download the mod from a safe source and scan it for viruses before installing it.


    Frequently Asked Questions

    • Is FR Legends Bike Mod APK free?
      Yes, FR Legends Bike Mod APK is free to download and play. However, some features may require in-app purchases or ads.
    • Is FR Legends Bike Mod APK safe?
      FR Legends Bike Mod APK is generally safe to use, as long as you download it from a reliable source and scan it for malware before installing it. However, since it is an unofficial modification, it may have some bugs or glitches that could affect your gameplay or device performance.
    • Can I play FR Legends Bike Mod APK online?
      Yes, you can play FR Legends Bike Mod APK online with other players who have the same mod installed. However, you may not be able to play with players who have the original game or a different mod installed.
    • Can I play FR Legends Bike Mod APK offline?
      Yes, you can play FR Legends Bike Mod APK offline in solo mode or local multiplayer mode with your friends. However, you will need an internet connection to download and update the mod.
    • How can I uninstall FR Legends Bike Mod APK?
      To uninstall FR Legends Bike Mod APK, go to Settings > Apps > FR Legends and tap Uninstall. Note that deleting the downloaded APK file only removes the installer, not the installed game. A command-line alternative is sketched below.
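
    For the uninstall question above, a command-line alternative with adb looks roughly like this. The package id below is only a placeholder, since the mod's actual package name depends on the APK you installed; list the installed packages first to find it.

    # list installed packages and look for the FR Legends entry
    adb shell pm list packages | grep -i legends
    # remove the app (replace the package id with the one found above)
    adb uninstall com.example.frlegends.mod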

    I hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Happy drifting!

    \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/Makefile b/spaces/fffiloni/Image-to-MusicGen/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/utils/notebook.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/utils/notebook.py deleted file mode 100644 index 019b9d19e5bef976bedddf428fd25da42a8a9726..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/utils/notebook.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -try: - import IPython.display as ipd # type: ignore -except ImportError: - # Note in a notebook... - pass - - -import torch - - -def display_audio(samples: torch.Tensor, sample_rate: int): - """Renders an audio player for the given audio samples. - - Args: - samples (torch.Tensor): a Tensor of decoded audio samples - with shapes [B, C, T] or [C, T] - sample_rate (int): sample rate audio should be displayed with. - """ - assert samples.dim() == 2 or samples.dim() == 3 - - samples = samples.detach().cpu() - if samples.dim() == 2: - samples = samples[None, ...] 
- - for audio in samples: - ipd.display(ipd.Audio(audio, rate=sample_rate)) diff --git a/spaces/fffiloni/video_frame_interpolation/app.py b/spaces/fffiloni/video_frame_interpolation/app.py deleted file mode 100644 index 090a67637269d8ecc2636c2c174753766ab69213..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/video_frame_interpolation/app.py +++ /dev/null @@ -1,177 +0,0 @@ -import os -os.system("git clone https://github.com/google-research/frame-interpolation") -import sys -sys.path.append("frame-interpolation") - -import cv2 -import numpy as np -import tensorflow as tf -import mediapy -from PIL import Image - -import gradio as gr - -from huggingface_hub import snapshot_download - -from image_tools.sizes import resize_and_crop -from moviepy.editor import * - - -model = snapshot_download(repo_id="akhaliq/frame-interpolation-film-style") -from eval import interpolator, util -interpolator = interpolator.Interpolator(model, None) - -ffmpeg_path = util.get_ffmpeg_path() -mediapy.set_ffmpeg(ffmpeg_path) - - - -def do_interpolation(frame1, frame2, times_to_interpolate): - print(frame1, frame2) - input_frames = [frame1, frame2] - #times_to_interpolate = 2 - frames = list( - util.interpolate_recursively_from_files( - input_frames, times_to_interpolate, interpolator)) - - #print(frames) - mediapy.write_video(f"{frame1}_to_{frame2}_out.mp4", frames, fps=12) - return f"{frame1}_to_{frame2}_out.mp4" - -def get_frames(video_in, step, name): - frames = [] - #resize the video - clip = VideoFileClip(video_in) - - #check fps - if clip.fps > 30: - print("vide rate is over 30, resetting to 30") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=30) - else: - print("video rate is OK") - clip_resized = clip.resize(height=512) - clip_resized.write_videofile("video_resized.mp4", fps=clip.fps) - - print("video resized to 512 height") - - # Opens the Video file with CV2 - cap= cv2.VideoCapture("video_resized.mp4") - - fps = cap.get(cv2.CAP_PROP_FPS) - print("video fps: " + str(fps)) - i=0 - while(cap.isOpened()): - ret, frame = cap.read() - if ret == False: - break - cv2.imwrite(f"{name}_{step}{str(i)}.jpg",frame) - frames.append(f"{name}_{step}{str(i)}.jpg") - i+=1 - - cap.release() - cv2.destroyAllWindows() - print("broke the video into frames") - - return frames, fps - - -def create_video(frames, fps, type): - print("building video result") - clip = ImageSequenceClip(frames, fps=fps) - clip.write_videofile(type + "_result.mp4", fps=fps) - - return type + "_result.mp4" - - -def infer(video_in,interpolation,fps_output): - - - # 1. break video into frames and get FPS - break_vid = get_frames(video_in, "vid_input_frame", "origin") - frames_list= break_vid[0] - fps = break_vid[1] - print(f"ORIGIN FPS: {fps}") - n_frame = int(4*fps) #limited to 4 seconds - #n_frame = len(frames_list) - - if n_frame >= len(frames_list): - print("video is shorter than the cut value") - n_frame = len(frames_list) - - # 2. 
prepare frames result arrays - result_frames = [] - print("set stop frames to: " + str(n_frame)) - - - - - for idx, frame in enumerate(frames_list[0:int(n_frame)]): - if idx < len(frames_list) - 1: - next_frame = frames_list[idx+1] - interpolated_frames = do_interpolation(frame, next_frame,interpolation) # should return a list of 3 interpolated frames - break_interpolated_video = get_frames(interpolated_frames, "interpol",f"{idx}_") - print(break_interpolated_video[0]) - for j, img in enumerate(break_interpolated_video[0][0:len(break_interpolated_video[0])-1]): - print(f"IMG:{img}") - os.rename(img, f"{frame}_to_{next_frame}_{j}.jpg") - result_frames.append(f"{frame}_to_{next_frame}_{j}.jpg") - - print("frames " + str(idx) + " & " + str(idx+1) + "/" + str(n_frame) + ": done;") - #print(f"CURRENT FRAMES: {result_frames}") - result_frames.append(f"{frames_list[n_frame-1]}") - final_vid = create_video(result_frames, fps_output, "interpolated") - - files = final_vid - - return final_vid, files - -title=""" -
    -
    -

    - Video interpolation with FILM -

    - -
    -

    This space uses FILM to generate interpolation frames in a video you need to fluidify.
    - Generation is limited to 4 seconds, from the beginning of your video input.
    - Duplicate Space -

    -
    -""" - -with gr.Blocks() as demo: - with gr.Column(): - gr.HTML(title) - with gr.Row(): - with gr.Column(): - video_input = gr.Video(source="upload", type="filepath") - with gr.Row(): - interpolation = gr.Slider(minimum=1,maximum=4,step=1, value=1, label="Interpolation Steps") - fps_output = gr.Radio([8, 12, 24], label="FPS output", value=8) - submit_btn = gr.Button("Submit") - - with gr.Column(): - video_output = gr.Video() - file_output = gr.File() - - gr.Examples( - examples=[["./examples/yoda-fps2.mp4", 1, 12]], - fn=infer, - inputs=[video_input,interpolation,fps_output], - outputs=[video_output,file_output], - cache_examples=True - ) - - submit_btn.click(fn=infer, inputs=[video_input,interpolation,fps_output], outputs=[video_output, file_output]) - -demo.launch() \ No newline at end of file diff --git a/spaces/flax-community/SinhalaLanguageDemos/model.py b/spaces/flax-community/SinhalaLanguageDemos/model.py deleted file mode 100644 index d7b2c1072f71610d404e2c2fec2d81e505862ba5..0000000000000000000000000000000000000000 --- a/spaces/flax-community/SinhalaLanguageDemos/model.py +++ /dev/null @@ -1,11 +0,0 @@ -import streamlit as st - -from transformers import AutoTokenizer, AutoModelForCausalLM - -def load_model(model_name): - with st.spinner('Waiting for the model to load.....'): - # snapshot_download('flax-community/Sinhala-gpt2') - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id) - st.success('Model loaded!!') - return model, tokenizer \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/talkitoutnoliarpolite.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/talkitoutnoliarpolite.py deleted file mode 100644 index 248fede7bebc471394579675db392972edac0556..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/backup_envs/talkitoutnoliarpolite.py +++ /dev/null @@ -1,428 +0,0 @@ -from gym_minigrid.minigrid import * -from gym_minigrid.register import register - - -class Wizard(NPC): - """ - A simple NPC that knows who is telling the truth - """ - - def __init__(self, color, name, env): - super().__init__(color) - self.name = name - self.env = env - self.npc_dir = 1 # NPC initially looks downward - # todo: this should be id == name - self.npc_type = 0 # this will be put into the encoding - self.was_introduced_to = False - - def can_overlap(self): - # If the NPC is hidden, agent can overlap on it - return self.env.hidden_npc - - def encode(self, nb_dims=3): - if self.env.hidden_npc: - if nb_dims == 3: - return (1, 0, 0) - elif nb_dims == 4: - return (1, 0, 0, 0) - else: - return super().encode(nb_dims=nb_dims) - - def listen(self, utterance): - if self.env.hidden_npc: - return None - - if self.was_introduced_to: - if utterance == TalkItOutNoLiarPoliteGrammar.construct_utterance([0, 1]): - if self.env.nameless: - return "Ask the {} guide.".format(self.env.true_guide.color) - else: - return "Ask {}.".format(self.env.true_guide.name) - else: - if utterance == TalkItOutNoLiarPoliteGrammar.construct_utterance([3, 3]): - self.was_introduced_to = True - return "I am well." - - return None - - - -class Guide(NPC): - """ - A simple NPC that knows the correct door. 
- """ - - def __init__(self, color, name, env, liar=False): - super().__init__(color) - self.name = name - self.env = env - self.liar = liar - self.npc_dir = 1 # NPC initially looks downward - # todo: this should be id == name - self.npc_type = 1 # this will be put into the encoding - self.was_introduced_to = False - assert not self.liar # in this env the guide is always good - - # Select a random target object as mission - obj_idx = self.env._rand_int(0, len(self.env.door_pos)) - self.target_pos = self.env.door_pos[obj_idx] - self.target_color = self.env.door_colors[obj_idx] - - def can_overlap(self): - # If the NPC is hidden, agent can overlap on it - return self.env.hidden_npc - - def encode(self, nb_dims=3): - if self.env.hidden_npc: - if nb_dims == 3: - return (1, 0, 0) - elif nb_dims == 4: - return (1, 0, 0, 0) - else: - return super().encode(nb_dims=nb_dims) - - def listen(self, utterance): - if self.was_introduced_to: - if utterance == TalkItOutNoLiarPoliteGrammar.construct_utterance([0, 1]): - return self.env.mission - else: - if utterance == TalkItOutNoLiarPoliteGrammar.construct_utterance([3, 3]): - self.was_introduced_to = True - return "I am well." - - - def render(self, img): - c = COLORS[self.color] - - npc_shapes = [] - # Draw eyes - npc_shapes.append(point_in_circle(cx=0.70, cy=0.50, r=0.10)) - npc_shapes.append(point_in_circle(cx=0.30, cy=0.50, r=0.10)) - - # Draw mouth - npc_shapes.append(point_in_rect(0.20, 0.80, 0.72, 0.81)) - - # todo: move this to super function - # todo: super.render should be able to take the npc_shapes and then rotate them - - if hasattr(self, "npc_dir"): - # Pre-rotation to ensure npc_dir = 1 means NPC looks downwards - npc_shapes = [rotate_fn(v, cx=0.5, cy=0.5, theta=-1*(math.pi / 2)) for v in npc_shapes] - # Rotate npc based on its direction - npc_shapes = [rotate_fn(v, cx=0.5, cy=0.5, theta=(math.pi/2) * self.npc_dir) for v in npc_shapes] - - # Draw shapes - for v in npc_shapes: - fill_coords(img, v, c) - - def is_near_agent(self): - ax, ay = self.env.agent_pos - wx, wy = self.cur_pos - if (ax == wx and abs(ay - wy) == 1) or (ay == wy and abs(ax - wx) == 1): - return True - return False - - -class TalkItOutNoLiarPoliteGrammar(object): - - templates = ["Where is", "Open", "Close", "How are"] - things = [ - "sesame", "the exit", "the wall", "you", "the ceiling", "the window", "the entrance", "the closet", - "the drawer", "the fridge", "the floor", "the lamp", "the trash can", "the chair", "the bed", "the sofa" - ] - assert len(templates)*len(things) == 64 - print("language complexity {}:".format(len(templates)*len(things))) - - grammar_action_space = spaces.MultiDiscrete([len(templates), len(things)]) - - @classmethod - def construct_utterance(cls, action): - return cls.templates[int(action[0])] + " " + cls.things[int(action[1])] + " " - - -class TalkItOutNoLiarPoliteEnv(MultiModalMiniGridEnv): - """ - Environment in which the agent is instructed to go to a given object - named using an English text string - """ - - def __init__( - self, - size=5, - hear_yourself=False, - diminished_reward=True, - step_penalty=False, - nameless=False, - max_steps=100, - hidden_npc=False, - ): - assert size >= 5 - self.empty_symbol = "NA \n" - self.hear_yourself = hear_yourself - self.diminished_reward = diminished_reward - self.step_penalty = step_penalty - self.nameless = nameless - self.hidden_npc = hidden_npc - - if max_steps is None: - max_steps = 5*size**2 - - super().__init__( - grid_size=size, - max_steps=max_steps, - # Set this to True for maximum speed 
- see_through_walls=True, - actions=MiniGridEnv.Actions, - action_space=spaces.MultiDiscrete([ - len(MiniGridEnv.Actions), - *TalkItOutNoLiarPoliteGrammar.grammar_action_space.nvec - ]), - add_npc_direction=True - ) - - print({ - "size": size, - "hear_yourself": hear_yourself, - "diminished_reward": diminished_reward, - "step_penalty": step_penalty, - }) - - def _gen_grid(self, width, height): - # Create the grid - self.grid = Grid(width, height, nb_obj_dims=4) - - # Randomly vary the room width and height - width = self._rand_int(5, width+1) - height = self._rand_int(5, height+1) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, width, height) - - # Generate the 4 doors at random positions - self.door_pos = [] - self.door_front_pos = [] # Remembers positions in front of door to avoid setting wizard here - - self.door_pos.append((self._rand_int(2, width-2), 0)) - self.door_front_pos.append((self.door_pos[-1][0], self.door_pos[-1][1]+1)) - - self.door_pos.append((self._rand_int(2, width-2), height-1)) - self.door_front_pos.append((self.door_pos[-1][0], self.door_pos[-1][1] - 1)) - - self.door_pos.append((0, self._rand_int(2, height-2))) - self.door_front_pos.append((self.door_pos[-1][0] + 1, self.door_pos[-1][1])) - - self.door_pos.append((width-1, self._rand_int(2, height-2))) - self.door_front_pos.append((self.door_pos[-1][0] - 1, self.door_pos[-1][1])) - - # Generate the door colors - self.door_colors = [] - while len(self.door_colors) < len(self.door_pos): - color = self._rand_elem(COLOR_NAMES) - if color in self.door_colors: - continue - self.door_colors.append(color) - - # Place the doors in the grid - for idx, pos in enumerate(self.door_pos): - color = self.door_colors[idx] - self.grid.set(*pos, Door(color)) - - - # Set a randomly coloured WIZARD at a random position - color = self._rand_elem(COLOR_NAMES) - self.wizard = Wizard(color, "Gandalf", self) - - # Place it randomly, omitting front of door positions - self.place_obj(self.wizard, - size=(width, height), - reject_fn=lambda _, p: tuple(p) in self.door_front_pos) - - # Set a randomly coloured TRUE GUIDE at a random position - name = "John" - color = self._rand_elem(COLOR_NAMES) - self.true_guide = Guide(color, name, self, liar=False) - - # Place it randomly, omitting invalid positions - self.place_obj(self.true_guide, - size=(width, height), - # reject_fn=lambda _, p: tuple(p) in self.door_front_pos) - reject_fn=lambda _, p: tuple(p) in [*self.door_front_pos, tuple(self.wizard.cur_pos)]) - - # Randomize the agent's start position and orientation - self.place_agent(size=(width, height)) - - # Select a random target door - self.doorIdx = self._rand_int(0, len(self.door_pos)) - self.target_pos = self.door_pos[self.doorIdx] - self.target_color = self.door_colors[self.doorIdx] - - # Generate the mission string - self.mission = 'go to the %s door' % self.target_color - - # Dummy beginning string - self.beginning_string = "This is what you hear. 
\n" - self.utterance = self.beginning_string - - # utterance appended at the end of each step - self.utterance_history = "" - - # used for rendering - self.conversation = self.utterance - self.outcome_info = None - - def step(self, action): - p_action = action[0] - utterance_action = action[1:] - - # assert all nan or neither nan - assert len(set(np.isnan(utterance_action))) == 1 - - speak_flag = not all(np.isnan(utterance_action)) - - obs, reward, done, info = super().step(p_action) - - if speak_flag: - utterance = TalkItOutNoLiarPoliteGrammar.construct_utterance(utterance_action) - if self.hear_yourself: - if self.nameless: - self.utterance += "{} \n".format(utterance) - else: - self.utterance += "YOU: {} \n".format(utterance) - - self.conversation += "YOU: {} \n".format(utterance) - - # check if near wizard - if self.wizard.is_near_agent(): - reply = self.wizard.listen(utterance) - - if reply: - if self.nameless: - self.utterance += "{} \n".format(reply) - else: - self.utterance += "{}: {} \n".format(self.wizard.name, reply) - - self.conversation += "{}: {} \n".format(self.wizard.name, reply) - - if self.true_guide.is_near_agent(): - reply = self.true_guide.listen(utterance) - - if reply: - if self.nameless: - self.utterance += "{} \n".format(reply) - else: - self.utterance += "{}: {} \n".format(self.true_guide.name, reply) - - self.conversation += "{}: {} \n".format(self.true_guide.name, reply) - - if utterance == TalkItOutNoLiarPoliteGrammar.construct_utterance([1, 0]): - ax, ay = self.agent_pos - tx, ty = self.target_pos - - if (ax == tx and abs(ay - ty) == 1) or (ay == ty and abs(ax - tx) == 1): - reward = self._reward() - - for dx, dy in self.door_pos: - if (ax == dx and abs(ay - dy) == 1) or (ay == dy and abs(ax - dx) == 1): - # agent has chosen some door episode, regardless of if the door is correct the episode is over - done = True - - # Don't let the agent open any of the doors - if p_action == self.actions.toggle: - done = True - - if p_action == self.actions.done: - done = True - - # discount - if self.step_penalty: - reward = reward - 0.01 - - if self.hidden_npc: - # all npc are hidden - assert np.argwhere(obs['image'][:,:,0] == OBJECT_TO_IDX['npc']).size == 0 - assert "{}:".format(self.wizard.name) not in self.utterance - #assert "{}:".format(self.true_guide.name) not in self.utterance - - # fill observation with text - self.append_existing_utterance_to_history() - obs = self.add_utterance_to_observation(obs) - self.reset_utterance() - - if done: - if reward > 0: - self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1)) - else: - self.outcome_info = "FAILURE: agent got {} reward \n".format(reward) - - return obs, reward, done, info - - def _reward(self): - if self.diminished_reward: - return super()._reward() - else: - return 1.0 - - def render(self, *args, **kwargs): - obs = super().render(*args, **kwargs) - - self.window.clear_text() # erase previous text - - self.window.set_caption(self.conversation, [ - "Gandalf:", - "Jack:", - "John:", - "Where is the exit", - "Open sesame", - ]) - - self.window.ax.set_title("correct door: {}".format(self.true_guide.target_color), loc="left", fontsize=10) - if self.outcome_info: - color = None - if "SUCCESS" in self.outcome_info: - color = "lime" - elif "FAILURE" in self.outcome_info: - color = "red" - self.window.add_text(*(0.01, 0.85, self.outcome_info), - **{'fontsize':15, 'color':color, 'weight':"bold"}) - - self.window.show_img(obs) # re-draw image to add changes to window - return obs - - -class 
TalkItOutNoLiarPolite8x8Env(TalkItOutNoLiarPoliteEnv): - def __init__(self, **kwargs): - super().__init__(size=8, max_steps=100, **kwargs) - - -class TalkItOutNoLiarPolite6x6Env(TalkItOutNoLiarPoliteEnv): - def __init__(self): - super().__init__(size=6, max_steps=100) - - -class TalkItOutNoLiarPoliteNameless8x8Env(TalkItOutNoLiarPoliteEnv): - def __init__(self): - super().__init__(size=8, max_steps=100, nameless=True) - -register( - id='MiniGrid-TalkItOutNoLiarPolite-5x5-v0', - entry_point='gym_minigrid.envs:TalkItOutNoLiarPoliteEnv' -) - -register( - id='MiniGrid-TalkItOutNoLiarPolite-6x6-v0', - entry_point='gym_minigrid.envs:TalkItOutNoLiarPolite6x6Env' -) - -register( - id='MiniGrid-TalkItOutNoLiarPolite-8x8-v0', - entry_point='gym_minigrid.envs:TalkItOutNoLiarPolite8x8Env' -) - -register( - id='MiniGrid-TalkItOutNoLiarPoliteNameless-8x8-v0', - entry_point='gym_minigrid.envs:TalkItOutNoLiarPoliteNameless8x8Env' -) \ No newline at end of file diff --git a/spaces/freddyaboulton/test-blue/theme_dropdown.py b/spaces/freddyaboulton/test-blue/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/test-blue/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/bbox.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/bbox.py deleted file mode 100644 index 0c4d58b6c91f652933974f519acd3403a833e906..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/ops/bbox.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['bbox_overlaps']) - - -def bbox_overlaps(bboxes1, bboxes2, mode='iou', aligned=False, offset=0): - """Calculate overlap between two set of bboxes. - - If ``aligned`` is ``False``, then calculate the ious between each bbox - of bboxes1 and bboxes2, otherwise the ious between each aligned pair of - bboxes1 and bboxes2. 
- - Args: - bboxes1 (Tensor): shape (m, 4) in format or empty. - bboxes2 (Tensor): shape (n, 4) in format or empty. - If aligned is ``True``, then m and n must be equal. - mode (str): "iou" (intersection over union) or iof (intersection over - foreground). - - Returns: - ious(Tensor): shape (m, n) if aligned == False else shape (m, 1) - - Example: - >>> bboxes1 = torch.FloatTensor([ - >>> [0, 0, 10, 10], - >>> [10, 10, 20, 20], - >>> [32, 32, 38, 42], - >>> ]) - >>> bboxes2 = torch.FloatTensor([ - >>> [0, 0, 10, 20], - >>> [0, 10, 10, 19], - >>> [10, 10, 20, 20], - >>> ]) - >>> bbox_overlaps(bboxes1, bboxes2) - tensor([[0.5000, 0.0000, 0.0000], - [0.0000, 0.0000, 1.0000], - [0.0000, 0.0000, 0.0000]]) - - Example: - >>> empty = torch.FloatTensor([]) - >>> nonempty = torch.FloatTensor([ - >>> [0, 0, 10, 9], - >>> ]) - >>> assert tuple(bbox_overlaps(empty, nonempty).shape) == (0, 1) - >>> assert tuple(bbox_overlaps(nonempty, empty).shape) == (1, 0) - >>> assert tuple(bbox_overlaps(empty, empty).shape) == (0, 0) - """ - - mode_dict = {'iou': 0, 'iof': 1} - assert mode in mode_dict.keys() - mode_flag = mode_dict[mode] - # Either the boxes are empty or the length of boxes' last dimension is 4 - assert (bboxes1.size(-1) == 4 or bboxes1.size(0) == 0) - assert (bboxes2.size(-1) == 4 or bboxes2.size(0) == 0) - assert offset == 1 or offset == 0 - - rows = bboxes1.size(0) - cols = bboxes2.size(0) - if aligned: - assert rows == cols - - if rows * cols == 0: - return bboxes1.new(rows, 1) if aligned else bboxes1.new(rows, cols) - - if aligned: - ious = bboxes1.new_zeros(rows) - else: - ious = bboxes1.new_zeros((rows, cols)) - ext_module.bbox_overlaps( - bboxes1, bboxes2, ious, mode=mode_flag, aligned=aligned, offset=offset) - return ious diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/upscaling.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/upscaling.py deleted file mode 100644 index 03816662098ce1ffac79bd939b892e867ab91988..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/upscaling.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import torch.nn as nn -import numpy as np -from functools import partial - -from ldm.modules.diffusionmodules.util import extract_into_tensor, make_beta_schedule -from ldm.util import default - - -class AbstractLowScaleModel(nn.Module): - # for concatenating a downsampled image to the latent representation - def __init__(self, noise_schedule_config=None): - super(AbstractLowScaleModel, self).__init__() - if noise_schedule_config is not None: - self.register_schedule(**noise_schedule_config) - - def register_schedule(self, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. 
- betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def forward(self, x): - return x, None - - def decode(self, x): - return x - - -class SimpleImageConcat(AbstractLowScaleModel): - # no noise level conditioning - def __init__(self): - super(SimpleImageConcat, self).__init__(noise_schedule_config=None) - self.max_noise_level = 0 - - def forward(self, x): - # fix to constant noise level - return x, torch.zeros(x.shape[0], device=x.device).long() - - -class ImageConcatWithNoiseAugmentation(AbstractLowScaleModel): - def __init__(self, noise_schedule_config, max_noise_level=1000, to_cuda=False): - super().__init__(noise_schedule_config=noise_schedule_config) - self.max_noise_level = max_noise_level - - def forward(self, x, noise_level=None): - if noise_level is None: - noise_level = torch.randint(0, self.max_noise_level, (x.shape[0],), device=x.device).long() - else: - assert isinstance(noise_level, torch.Tensor) - z = self.q_sample(x, noise_level) - return z, noise_level - - - diff --git a/spaces/gkw2004/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/gkw2004/QQsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index e3c701d4a36fef5ecd4ebe0ac807d091c2722d27..0000000000000000000000000000000000000000 --- a/spaces/gkw2004/QQsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. 
-@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.7.jar;%APP_HOME%\lib\unidbg-1.0.2.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-c
ommon-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/glyszt/vt/vtoonify/model/dualstylegan.py b/spaces/glyszt/vt/vtoonify/model/dualstylegan.py deleted file mode 100644 index 60d9850ad049a2751781871d6ae0c2779ecc863f..0000000000000000000000000000000000000000 --- a/spaces/glyszt/vt/vtoonify/model/dualstylegan.py +++ /dev/null @@ -1,203 +0,0 @@ -import random -import torch -from torch import nn -from model.stylegan.model import ConvLayer, PixelNorm, EqualLinear, Generator - -class AdaptiveInstanceNorm(nn.Module): - def __init__(self, fin, style_dim=512): - super().__init__() - - self.norm = nn.InstanceNorm2d(fin, affine=False) - self.style = nn.Linear(style_dim, fin * 2) - - self.style.bias.data[:fin] = 1 - self.style.bias.data[fin:] = 0 - - def forward(self, input, style): - style = self.style(style).unsqueeze(2).unsqueeze(3) - gamma, beta = style.chunk(2, 1) - out = self.norm(input) - out = gamma * out + beta - return out - -# modulative residual blocks (ModRes) -class AdaResBlock(nn.Module): - def __init__(self, fin, style_dim=512, dilation=1): # modified - super().__init__() - - self.conv = ConvLayer(fin, fin, 3, dilation=dilation) # modified - self.conv2 = ConvLayer(fin, fin, 3, dilation=dilation) # modified - self.norm = AdaptiveInstanceNorm(fin, style_dim) - self.norm2 = AdaptiveInstanceNorm(fin, style_dim) - - # model initialization - # the convolution filters are set to values close to 0 to produce negligible residual features - self.conv[0].weight.data *= 0.01 - self.conv2[0].weight.data *= 0.01 - - def forward(self, x, s, w=1): - skip = x - if w == 0: - return skip - out = self.conv(self.norm(x, s)) - out = self.conv2(self.norm2(out, s)) - out = out * w + skip - return out - -class DualStyleGAN(nn.Module): - def __init__(self, size, style_dim, n_mlp, channel_multiplier=2, twoRes=True, res_index=6): - super().__init__() - - layers = [PixelNorm()] - for i in range(n_mlp-6): - layers.append(EqualLinear(512, 512, lr_mul=0.01, activation="fused_lrelu")) - # color transform blocks T_c - self.style = nn.Sequential(*layers) - # StyleGAN2 - self.generator = Generator(size, style_dim, n_mlp, channel_multiplier) - # The extrinsic style path - self.res = nn.ModuleList() - self.res_index = res_index//2 * 2 - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1 - for i in range(3, self.generator.log_size + 1): - out_channel = self.generator.channels[2 ** i] - if i < 3 + self.res_index//2: - # ModRes - self.res.append(AdaResBlock(out_channel)) - self.res.append(AdaResBlock(out_channel)) - else: - # structure transform block T_s - self.res.append(EqualLinear(512, 512)) - # FC layer is initialized with identity matrices, meaning no changes to the input latent code - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.res.append(EqualLinear(512, 512)) - 
self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.res.append(EqualLinear(512, 512)) # for to_rgb7 - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.size = self.generator.size - self.style_dim = self.generator.style_dim - self.log_size = self.generator.log_size - self.num_layers = self.generator.num_layers - self.n_latent = self.generator.n_latent - self.channels = self.generator.channels - - def forward( - self, - styles, # intrinsic style code - exstyles, # extrinsic style code - return_latents=False, - return_feat=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, # intrinsic style code is z+ or z - use_res=True, # whether to use the extrinsic style path - fuse_index=18, # layers > fuse_index do not use the extrinsic style path - interp_weights=[1]*18, # weight vector for style combination of two paths - ): - - if not input_is_latent: - if not z_plus_latent: - styles = [self.generator.style(s) for s in styles] - else: - styles = [self.generator.style(s.reshape(s.shape[0]*s.shape[1], s.shape[2])).reshape(s.shape) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.generator.num_layers - else: - noise = [ - getattr(self.generator.noises, f"noise_{i}") for i in range(self.generator.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.generator.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.generator.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.generator.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - if use_res: - if exstyles.ndim < 3: - resstyles = self.style(exstyles).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = exstyles.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - else: - nB, nL, nD = exstyles.shape - resstyles = self.style(exstyles.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = exstyles - - out = self.generator.input(latent) - out = self.generator.conv1(out, latent[:, 0], noise=noise[0]) - if use_res and fuse_index > 0: - out = self.res[0](out, resstyles[:, 0], interp_weights[0]) - - skip = self.generator.to_rgb1(out, latent[:, 1]) - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.generator.convs[::2], self.generator.convs[1::2], noise[1::2], noise[2::2], self.generator.to_rgbs): - if use_res and fuse_index >= i and i > self.res_index: - out = conv1(out, interp_weights[i] * self.res[i](adastyles[:, i]) + - (1-interp_weights[i]) * latent[:, i], noise=noise1) - else: - out = conv1(out, latent[:, i], noise=noise1) - if use_res and fuse_index >= i and i <= self.res_index: - out = self.res[i](out, resstyles[:, i], interp_weights[i]) - if use_res and fuse_index >= (i+1) and i > self.res_index: - out = conv2(out, interp_weights[i+1] * self.res[i+1](adastyles[:, i+1]) + - (1-interp_weights[i+1]) * latent[:, i+1], noise=noise2) - else: - out = conv2(out, latent[:, i + 1], 
noise=noise2) - if use_res and fuse_index >= (i+1) and i <= self.res_index: - out = self.res[i+1](out, resstyles[:, i+1], interp_weights[i+1]) - if use_res and fuse_index >= (i+2) and i >= self.res_index-1: - skip = to_rgb(out, interp_weights[i+2] * self.res[i+2](adastyles[:, i+2]) + - (1-interp_weights[i+2]) * latent[:, i + 2], skip) - else: - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - if i > self.res_index and return_feat: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - def make_noise(self): - return self.generator.make_noise() - - def mean_latent(self, n_latent): - return self.generator.mean_latent(n_latent) - - def get_latent(self, input): - return self.generator.style(input) \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Color Efex Pro 4 Crack Dll Files ((FULL)).md b/spaces/gotiQspiryo/whisper-ui/examples/Color Efex Pro 4 Crack Dll Files ((FULL)).md deleted file mode 100644 index 1a0baf1542e066a217ba3009261b3aaddf34226d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Color Efex Pro 4 Crack Dll Files ((FULL)).md +++ /dev/null @@ -1,7 +0,0 @@ - -

    Some of the most useful tools in the entire Color Efex collection are the hue/saturation tool, the curves tool, and the exposure tool. The hue/saturation tool lets you adjust the color of your image in a number of ways: you can shift the red, yellow, and blue channels, change the hue, saturation, brightness, and lightness, and adjust the overall color of the image. The curves tool lets you adjust the contrast, brightness, and saturation of the image; you can use it to make fine adjustments, or to adjust the entire image in one step. The exposure tool lets you control the brightness of the image, with changes to contrast, brightness, and the highlight and shadow points.


    Color Efex Pro 4 Crack Dll Files


    Download File ——— https://urlgoal.com/2uyN3R




    Also, you can use the Color Efex Pro 4 crack DLL files to produce realistic special effects. The variance slider controls the amount of noise in the image, and the noise filter lets you soften the edges of the image, add grain, blur the image, and more. The high pass filter creates a soft, blurred effect. The grain filter produces a realistic film look: you can use it to add grain, light, and dark effects, and even to create the effect of a rippled surface. The motion blur filter lets you blur the image, or you can use it to produce the effect of a moving image.


    You can also use the lens distortion filter to create an effect similar to the lens distortion found in many lenses. You can use the perspective filter to create an effect similar to that found in old cameras. Color Efex is only available for Windows. The software works on 32-bit and 64-bit versions of Windows XP and later.

    \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Google to create RCS client for Android How it will change SMS communication.md b/spaces/gotiQspiryo/whisper-ui/examples/Google to create RCS client for Android How it will change SMS communication.md deleted file mode 100644 index 40e06f2b492a3a4880a2f3a94c124507e0c5e3bf..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Google to create RCS client for Android How it will change SMS communication.md +++ /dev/null @@ -1,11 +0,0 @@ - -

    RCS is a set of standards developed by the GSMA (the Global System for Mobile Communications, which represents mobile carriers worldwide and establishes the standards they use) for enhanced mobile messaging. With it, you can send messages and media (such as photos and videos) at a higher quality between devices with benefits like reactions and typing indicators. It even operates with any data connection, including Wi-Fi. It brings all the benefits of not-so-modern instant messaging to replace the aging SMS standard.


    This does mean that the US carriers will likely provide at least an Android app to be pre-loaded on devices from the carriers. Does that mean that existing RCS Messaging apps such as Android Messages or Samsung Messages will become obsolete? Not likely. All RCS messaging clients on Android should work, regardless of the backend RCS solution the carrier is using.


    Google to create RCS client for Android, enhanced SMS for all carriers


    Download Zip > https://urlgoal.com/2uyMEX




    Meanwhile, using RCS with the Google Messages app has the support of all of the major carriers in the U.S., removing any potential friction. Samsung even went so far as to ship a customized version of the Google Messages app, making it look more like what you'll find with the company's own messaging client.


    With this solution, customers benefit from our near real-time RCS routing data, flexible deployment options, lightning-fast provisioning and fully redundant managed service to improve efficiency and create an enhanced mobile user experience.


    Apple probably didn't create iMessage as a way to lock people into iOS at the start, but the platform has certainly grown that way. It's effectively a social network, one that Apple can upgrade without waiting on industry associations and carriers.


    Right, it's a joke, but AOSP does have a messaging app that I believe is distinct from Google Messages. They could simply go back to that. I'm not suggesting carriers stop shipping an SMS client (though, at some point ten, twenty years from now, that, too, might just happen).


    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/data/concat_dataset.py b/spaces/gradio/HuBERT/fairseq/data/concat_dataset.py deleted file mode 100644 index 01a4078bb159fa44b2d1062b9a971fe7f1abd1c2..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/concat_dataset.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import bisect - -import numpy as np -from torch.utils.data.dataloader import default_collate - -from . import FairseqDataset - - -class ConcatDataset(FairseqDataset): - @staticmethod - def cumsum(sequence, sample_ratios): - r, s = [], 0 - for e, ratio in zip(sequence, sample_ratios): - curr_len = int(ratio * len(e)) - r.append(curr_len + s) - s += curr_len - return r - - def __init__(self, datasets, sample_ratios=1): - super(ConcatDataset, self).__init__() - assert len(datasets) > 0, "datasets should not be an empty iterable" - self.datasets = list(datasets) - if isinstance(sample_ratios, int): - sample_ratios = [sample_ratios] * len(self.datasets) - self.sample_ratios = sample_ratios - self.cumulative_sizes = self.cumsum(self.datasets, sample_ratios) - self.real_sizes = [len(d) for d in self.datasets] - - def __len__(self): - return self.cumulative_sizes[-1] - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return self.datasets[dataset_idx][sample_idx] - - def _get_dataset_and_sample_index(self, idx: int): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - sample_idx = sample_idx % self.real_sizes[dataset_idx] - return dataset_idx, sample_idx - - def collater(self, samples, **extra_args): - # For now only supports datasets with same underlying collater implementations - if hasattr(self.datasets[0], "collater"): - return self.datasets[0].collater(samples, **extra_args) - else: - return default_collate(samples, **extra_args) - - def size(self, idx: int): - """ - Return an example's size as a float or tuple. - """ - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return self.datasets[dataset_idx].size(sample_idx) - - def num_tokens(self, index: int): - return np.max(self.size(index)) - - def attr(self, attr: str, index: int): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, index) - return getattr(self.datasets[dataset_idx], attr, None) - - @property - def sizes(self): - _dataset_sizes = [] - for ds, sr in zip(self.datasets, self.sample_ratios): - if isinstance(ds.sizes, np.ndarray): - _dataset_sizes.append(np.tile(ds.sizes, sr)) - else: - # Only support underlying dataset with single size array. - assert isinstance(ds.sizes, list) - _dataset_sizes.append(np.tile(ds.sizes[0], sr)) - return np.concatenate(_dataset_sizes) - - @property - def supports_prefetch(self): - return all(d.supports_prefetch for d in self.datasets) - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. 
- """ - if isinstance(self.sizes, np.ndarray) and len(self.sizes.shape) > 1: - # special handling for concatenating lang_pair_datasets - indices = np.arange(len(self)) - sizes = self.sizes - tgt_sizes = ( - sizes[:, 1] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else None - ) - src_sizes = ( - sizes[:, 0] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else sizes - ) - # sort by target length, then source length - if tgt_sizes is not None: - indices = indices[np.argsort(tgt_sizes[indices], kind="mergesort")] - return indices[np.argsort(src_sizes[indices], kind="mergesort")] - else: - return np.argsort(self.sizes) - - def prefetch(self, indices): - frm = 0 - for to, ds in zip(self.cumulative_sizes, self.datasets): - real_size = len(ds) - if getattr(ds, "supports_prefetch", False): - ds.prefetch([(i - frm) % real_size for i in indices if frm <= i < to]) - frm = to - - @property - def can_reuse_epoch_itr_across_epochs(self): - return all(d.can_reuse_epoch_itr_across_epochs for d in self.datasets) - - def set_epoch(self, epoch): - super().set_epoch(epoch) - for ds in self.datasets: - if hasattr(ds, "set_epoch"): - ds.set_epoch(epoch) diff --git a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/fma.py b/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/fma.py deleted file mode 100644 index 51a45dfa0829987e8ee5214663e068cb3af2a8b9..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/models/torch_utils/ops/fma.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
- -"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`.""" - -import torch - -#---------------------------------------------------------------------------- - -def fma(a, b, c): # => a * b + c - return _FusedMultiplyAdd.apply(a, b, c) - -#---------------------------------------------------------------------------- - -class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c - @staticmethod - def forward(ctx, a, b, c): # pylint: disable=arguments-differ - out = torch.addcmul(c, a, b) - ctx.save_for_backward(a, b) - ctx.c_shape = c.shape - return out - - @staticmethod - def backward(ctx, dout): # pylint: disable=arguments-differ - a, b = ctx.saved_tensors - c_shape = ctx.c_shape - da = None - db = None - dc = None - - if ctx.needs_input_grad[0]: - da = _unbroadcast(dout * b, a.shape) - - if ctx.needs_input_grad[1]: - db = _unbroadcast(dout * a, b.shape) - - if ctx.needs_input_grad[2]: - dc = _unbroadcast(dout, c_shape) - - return da, db, dc - -#---------------------------------------------------------------------------- - -def _unbroadcast(x, shape): - extra_dims = x.ndim - len(shape) - assert extra_dims >= 0 - dim = [i for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1)] - if len(dim): - x = x.sum(dim=dim, keepdim=True) - if extra_dims: - x = x.reshape(-1, *x.shape[extra_dims+1:]) - assert x.shape == shape - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/data_augmentation/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/data_augmentation/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/housexu123/bingo-2.0/src/components/external-link.tsx b/spaces/housexu123/bingo-2.0/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/housexu123/bingo-2.0/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - - {children} - - - ) -} diff --git a/spaces/huggingface-projects/wordalle/static/_app/immutable/start-cc027d18.js b/spaces/huggingface-projects/wordalle/static/_app/immutable/start-cc027d18.js deleted file mode 100644 index 837bd7c66765c4492b424d81006b7bb5824a6397..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/wordalle/static/_app/immutable/start-cc027d18.js +++ /dev/null @@ -1 +0,0 @@ -var Ze=Object.defineProperty,Qe=Object.defineProperties;var et=Object.getOwnPropertyDescriptors;var fe=Object.getOwnPropertySymbols;var De=Object.prototype.hasOwnProperty,Ve=Object.prototype.propertyIsEnumerable;var Ce=(n,e,t)=>e in n?Ze(n,e,{enumerable:!0,configurable:!0,writable:!0,value:t}):n[e]=t,P=(n,e)=>{for(var t in e||(e={}))De.call(e,t)&&Ce(n,t,e[t]);if(fe)for(var t of fe(e))Ve.call(e,t)&&Ce(n,t,e[t]);return n},ne=(n,e)=>Qe(n,et(e));var ze=(n,e)=>{var t={};for(var i in n)De.call(n,i)&&e.indexOf(i)<0&&(t[i]=n[i]);if(n!=null&&fe)for(var i of fe(n))e.indexOf(i)<0&&Ve.call(n,i)&&(t[i]=n[i]);return t};import{S as tt,i as nt,s as st,e as rt,c as it,a as at,d as V,b as be,f as B,g as z,t as ot,h as ct,j as lt,k as ft,l as T,m as ut,n as Y,o as j,p as G,q as I,r as dt,u as pt,v as ke,w as 
q,x as re,y as J,z as ie,A as ae,B as K,C as oe,D as qe}from"./chunks/index-86f4d6c3.js";import{_ as ye,s as ht,w as ue,a as _t}from"./chunks/preload-helper-359634c4.js";function mt(n){let e,t,i;const l=[n[1]||{}];var c=n[0][0];function u(s){let r={};for(let a=0;a{K(d,1)}),G()}c?(e=new c(u()),q(e.$$.fragment),I(e.$$.fragment,1),J(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(s){i||(e&&I(e.$$.fragment,s),i=!0)},o(s){e&&j(e.$$.fragment,s),i=!1},d(s){s&&V(t),e&&K(e,s)}}}function gt(n){let e,t,i;const l=[n[1]||{}];var c=n[0][0];function u(s){let r={$$slots:{default:[vt]},$$scope:{ctx:s}};for(let a=0;a{K(d,1)}),G()}c?(e=new c(u(s)),q(e.$$.fragment),I(e.$$.fragment,1),J(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(s){i||(e&&I(e.$$.fragment,s),i=!0)},o(s){e&&j(e.$$.fragment,s),i=!1},d(s){s&&V(t),e&&K(e,s)}}}function wt(n){let e,t,i;const l=[n[2]||{}];var c=n[0][1];function u(s){let r={};for(let a=0;a{K(d,1)}),G()}c?(e=new c(u()),q(e.$$.fragment),I(e.$$.fragment,1),J(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(s){i||(e&&I(e.$$.fragment,s),i=!0)},o(s){e&&j(e.$$.fragment,s),i=!1},d(s){s&&V(t),e&&K(e,s)}}}function bt(n){let e,t,i;const l=[n[2]||{}];var c=n[0][1];function u(s){let r={$$slots:{default:[yt]},$$scope:{ctx:s}};for(let a=0;a{K(d,1)}),G()}c?(e=new c(u(s)),q(e.$$.fragment),I(e.$$.fragment,1),J(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(s){i||(e&&I(e.$$.fragment,s),i=!0)},o(s){e&&j(e.$$.fragment,s),i=!1},d(s){s&&V(t),e&&K(e,s)}}}function yt(n){let e,t,i;const l=[n[3]||{}];var c=n[0][2];function u(s){let r={};for(let a=0;a{K(d,1)}),G()}c?(e=new c(u()),q(e.$$.fragment),I(e.$$.fragment,1),J(e,t.parentNode,t)):e=null}else c&&e.$set(a)},i(s){i||(e&&I(e.$$.fragment,s),i=!0)},o(s){e&&j(e.$$.fragment,s),i=!1},d(s){s&&V(t),e&&K(e,s)}}}function vt(n){let e,t,i,l;const c=[bt,wt],u=[];function s(r,a){return r[0][2]?0:1}return e=s(n),t=u[e]=c[e](n),{c(){t.c(),i=T()},l(r){t.l(r),i=T()},m(r,a){u[e].m(r,a),z(r,i,a),l=!0},p(r,a){let d=e;e=s(r),e===d?u[e].p(r,a):(Y(),j(u[d],1,1,()=>{u[d]=null}),G(),t=u[e],t?t.p(r,a):(t=u[e]=c[e](r),t.c()),I(t,1),t.m(i.parentNode,i))},i(r){l||(I(t),l=!0)},o(r){j(t),l=!1},d(r){u[e].d(r),r&&V(i)}}}function Je(n){let e,t=n[5]&&Ke(n);return{c(){e=rt("div"),t&&t.c(),this.h()},l(i){e=it(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var l=at(e);t&&t.l(l),l.forEach(V),this.h()},h(){be(e,"id","svelte-announcer"),be(e,"aria-live","assertive"),be(e,"aria-atomic","true"),B(e,"position","absolute"),B(e,"left","0"),B(e,"top","0"),B(e,"clip","rect(0 0 0 0)"),B(e,"clip-path","inset(50%)"),B(e,"overflow","hidden"),B(e,"white-space","nowrap"),B(e,"width","1px"),B(e,"height","1px")},m(i,l){z(i,e,l),t&&t.m(e,null)},p(i,l){i[5]?t?t.p(i,l):(t=Ke(i),t.c(),t.m(e,null)):t&&(t.d(1),t=null)},d(i){i&&V(e),t&&t.d()}}}function Ke(n){let e;return{c(){e=ot(n[6])},l(t){e=ct(t,n[6])},m(t,i){z(t,e,i)},p(t,i){i&64&<(e,t[6])},d(t){t&&V(e)}}}function $t(n){let e,t,i,l,c;const u=[gt,mt],s=[];function r(d,L){return d[0][1]?0:1}e=r(n),t=s[e]=u[e](n);let a=n[4]&&Je(n);return{c(){t.c(),i=ft(),a&&a.c(),l=T()},l(d){t.l(d),i=ut(d),a&&a.l(d),l=T()},m(d,L){s[e].m(d,L),z(d,i,L),a&&a.m(d,L),z(d,l,L),c=!0},p(d,[L]){let E=e;e=r(d),e===E?s[e].p(d,L):(Y(),j(s[E],1,1,()=>{s[E]=null}),G(),t=s[e],t?t.p(d,L):(t=s[e]=u[e](d),t.c()),I(t,1),t.m(i.parentNode,i)),d[4]?a?a.p(d,L):(a=Je(d),a.c(),a.m(l.parentNode,l)):a&&(a.d(1),a=null)},i(d){c||(I(t),c=!0)},o(d){j(t),c=!1},d(d){s[e].d(d),d&&V(i),a&&a.d(d),d&&V(l)}}}function 
kt(n,e,t){let{stores:i}=e,{page:l}=e,{components:c}=e,{props_0:u=null}=e,{props_1:s=null}=e,{props_2:r=null}=e;dt("__svelte__",i),pt(i.page.notify);let a=!1,d=!1,L=null;return ke(()=>{const E=i.page.subscribe(()=>{a&&(t(5,d=!0),t(6,L=document.title||"untitled page"))});return t(4,a=!0),E}),n.$$set=E=>{"stores"in E&&t(7,i=E.stores),"page"in E&&t(8,l=E.page),"components"in E&&t(0,c=E.components),"props_0"in E&&t(1,u=E.props_0),"props_1"in E&&t(2,s=E.props_1),"props_2"in E&&t(3,r=E.props_2)},n.$$.update=()=>{n.$$.dirty&384&&i.page.set(l)},[c,u,s,r,a,d,L,i,l]}class Et extends tt{constructor(e){super(),nt(this,e,kt,$t,st,{stores:7,page:8,components:0,props_0:1,props_1:2,props_2:3})}}const Rt={},de=[()=>ye(()=>import("./pages/__layout.svelte-53f051f3.js"),["pages/__layout.svelte-53f051f3.js","assets/pages/__layout.svelte-7926a3a8.css","chunks/index-86f4d6c3.js"]),()=>ye(()=>import("./error.svelte-ca9403a0.js"),["error.svelte-ca9403a0.js","chunks/index-86f4d6c3.js"]),()=>ye(()=>import("./pages/index.svelte-e9dccd76.js"),["pages/index.svelte-e9dccd76.js","assets/pages/index.svelte-b52b250e.css","chunks/index-86f4d6c3.js","chunks/preload-helper-359634c4.js"])],Lt={"":[[0,2],[1]]};function St(n){n.client}function Be(n){return n instanceof Error||n&&n.name&&n.message?n:new Error(JSON.stringify(n))}function Me(n){if(n.fallthrough)throw new Error("fallthrough is no longer supported. Use matchers instead: https://kit.svelte.dev/docs/routing#advanced-routing-matching");if("maxage"in n)throw new Error("maxage should be replaced with cache: { maxage }");const e=n.status&&n.status>=400&&n.status<=599&&!n.redirect;if(n.error||e){const t=n.status;if(!n.error&&e)return{status:t||500,error:new Error};const i=typeof n.error=="string"?new Error(n.error):n.error;return i instanceof Error?!t||t<400||t>599?(console.warn('"error" returned from load() without a valid status code \u2014 defaulting to 500'),{status:500,error:i}):{status:t,error:i}:{status:500,error:new Error(`"error" property returned from load() must be a string or instance of Error, received type "${typeof i}"`)}}if(n.redirect){if(!n.status||Math.floor(n.status/100)!==3)throw new Error('"redirect" property returned from load() must be accompanied by a 3xx status code');if(typeof n.redirect!="string")throw new Error('"redirect" property returned from load() must be a string')}if(n.dependencies&&(!Array.isArray(n.dependencies)||n.dependencies.some(t=>typeof t!="string")))throw new Error('"dependencies" property returned from load() must be of type string[]');if(n.context)throw new Error('You are returning "context" from a load function. "context" was renamed to "stuff", please adjust your code accordingly.');return n}function Ut(n,e){return n==="/"||e==="ignore"?n:e==="never"?n.endsWith("/")?n.slice(0,-1):n:e==="always"&&!n.endsWith("/")?n+"/":n}class At extends URL{get hash(){throw new Error("url.hash is inaccessible from load. 
Consider accessing hash from the page store within the script tag of your component.")}}function We(n){let e=n.baseURI;if(!e){const t=n.getElementsByTagName("base");e=t.length?t[0].href:n.URL}return e}function Ee(){return{x:pageXOffset,y:pageYOffset}}function Ye(n){return n.composedPath().find(t=>t instanceof Node&&t.nodeName.toUpperCase()==="A")}function Ge(n){return n instanceof SVGAElement?new URL(n.href.baseVal,document.baseURI):new URL(n.href)}function Fe(n){const e=ue(n);let t=!0;function i(){t=!0,e.update(u=>u)}function l(u){t=!1,e.set(u)}function c(u){let s;return e.subscribe(r=>{(s===void 0||t&&r!==s)&&u(s=r)})}return{notify:i,set:l,subscribe:c}}function Nt(){const{set:n,subscribe:e}=ue(!1),t="1666720273436";let i;async function l(){clearTimeout(i);const u=await fetch(`${_t}/_app/version.json`,{headers:{pragma:"no-cache","cache-control":"no-cache"}});if(u.ok){const{version:s}=await u.json(),r=s!==t;return r&&(n(!0),clearTimeout(i)),r}else throw new Error(`Version check failed: ${u.status}`)}return{subscribe:e,check:l}}function xt(n){let e=5381,t=n.length;if(typeof n=="string")for(;t;)e=e*33^n.charCodeAt(--t);else for(;t;)e=e*33^n[--t];return(e>>>0).toString(36)}const Re=window.fetch;function Ot(n,e){let i=`script[sveltekit\\:data-type="data"][sveltekit\\:data-url=${JSON.stringify(typeof n=="string"?n:n.url)}]`;e&&typeof e.body=="string"&&(i+=`[sveltekit\\:data-body="${xt(e.body)}"]`);const l=document.querySelector(i);if(l&&l.textContent){const c=JSON.parse(l.textContent),{body:u}=c,s=ze(c,["body"]);return Promise.resolve(new Response(u,s))}return Re(n,e)}const Pt=/^(\.\.\.)?(\w+)(?:=(\w+))?$/;function Tt(n){const e=[],t=[];let i=!0;return{pattern:n===""?/^\/$/:new RegExp(`^${decodeURIComponent(n).split(/(?:@[a-zA-Z0-9_-]+)?(?:\/|$)/).map((c,u,s)=>{const r=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(c);if(r)return e.push(r[1]),t.push(r[2]),"(?:/(.*))?";const a=u===s.length-1;return c&&"/"+c.split(/\[(.+?)\]/).map((d,L)=>{if(L%2){const[,E,H,F]=Pt.exec(d);return e.push(H),t.push(F),E?"(.*?)":"([^/]+?)"}return a&&d.includes(".")&&(i=!1),d.normalize().replace(/%5[Bb]/g,"[").replace(/%5[Dd]/g,"]").replace(/#/g,"%23").replace(/\?/g,"%3F").replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}).join("")}).join("")}${i?"/?":""}$`),names:e,types:t}}function jt(n,e,t,i){const l={};for(let c=0;c{const{pattern:r,names:a,types:d}=Tt(l);return{id:l,exec:L=>{const E=r.exec(L);if(E)return jt(E,a,d,t)},a:c.map(L=>n[L]),b:u.map(L=>n[L]),has_shadow:!!s}})}const He="sveltekit:scroll",M="sveltekit:index",ve=It(de,Lt,Rt),Ct=de[0](),Dt=de[1](),Xe={};let se={};try{se=JSON.parse(sessionStorage[He])}catch{}function $e(n){se[n]=Ee()}function Vt({target:n,session:e,base:t,trailing_slash:i}){var je;const l=new Map,c=[],u={url:Fe({}),page:Fe({}),navigating:ue(null),session:ue(e),updated:Nt()},s={id:null,promise:null},r={before_navigate:[],after_navigate:[]};let a={branch:[],error:null,session_id:0,stuff:Xe,url:null},d=!1,L=!0,E=!1,H=1,F=null,Le,Se,Ue=!1;u.session.subscribe(async o=>{Se=o,Ue&&(H+=1,me(new URL(location.href),[],!0))}),Ue=!0;let X=!0,C=(je=history.state)==null?void 0:je[M];C||(C=Date.now(),history.replaceState(ne(P({},history.state),{[M]:C}),"",location.href));const pe=se[C];pe&&(history.scrollRestoration="manual",scrollTo(pe.x,pe.y));let he=!1,_e,Ae;async function Ne(o,{noscroll:p=!1,replaceState:w=!1,keepfocus:f=!1,state:h={}},b){if(typeof o=="string"&&(o=new URL(o,We(document))),X)return we({url:o,scroll:p?Ee():null,keepfocus:f,redirect_chain:b,details:{state:h,replaceState:w},accepted:()=>{},blocked:()=>{}});await 
ee(o)}async function xe(o){const p=Te(o);if(!p)throw new Error("Attempted to prefetch a URL that does not belong to this app");return s.promise=Pe(p,!1),s.id=p.id,s.promise}async function me(o,p,w,f,h){var R,S,N;const b=Te(o),v=Ae={};let _=b&&await Pe(b,w);if(!_&&o.origin===location.origin&&o.pathname===location.pathname&&(_=await Q({status:404,error:new Error(`Not found: ${o.pathname}`),url:o,routeId:null})),!_)return await ee(o),!1;if(Ae!==v)return!1;if(c.length=0,_.redirect)if(p.length>10||p.includes(o.pathname))_=await Q({status:500,error:new Error("Redirect loop"),url:o,routeId:null});else return X?Ne(new URL(_.redirect,o).href,{},[...p,o.pathname]):await ee(new URL(_.redirect,location.href)),!1;else((S=(R=_.props)==null?void 0:R.page)==null?void 0:S.status)>=400&&await u.updated.check()&&await ee(o);if(E=!0,f&&f.details){const{details:$}=f,y=$.replaceState?0:1;$.state[M]=C+=y,history[$.replaceState?"replaceState":"pushState"]($.state,"",o)}if(d?(a=_.state,_.props.page&&(_.props.page.url=o),Le.$set(_.props)):Oe(_),f){const{scroll:$,keepfocus:y}=f;if(!y){const U=document.body,g=U.getAttribute("tabindex");(N=getSelection())==null||N.removeAllRanges(),U.tabIndex=-1,U.focus({preventScroll:!0}),g!==null?U.setAttribute("tabindex",g):U.removeAttribute("tabindex")}if(await qe(),L){const U=o.hash&&document.getElementById(o.hash.slice(1));$?scrollTo($.x,$.y):U?U.scrollIntoView():scrollTo(0,0)}}else await qe();s.promise=null,s.id=null,L=!0,_.props.page&&(_e=_.props.page);const m=_.state.branch[_.state.branch.length-1];X=(m==null?void 0:m.module.router)!==!1,h&&h(),E=!1}function Oe(o){a=o.state;const p=document.querySelector("style[data-sveltekit]");if(p&&p.remove(),_e=o.props.page,Le=new Et({target:n,props:ne(P({},o.props),{stores:u}),hydrate:!0}),X){const w={from:null,to:new URL(location.href)};r.after_navigate.forEach(f=>f(w))}d=!0}async function ge({url:o,params:p,stuff:w,branch:f,status:h,error:b,routeId:v}){var y,U;const _=f.filter(Boolean),m=_.find(g=>{var x;return(x=g.loaded)==null?void 0:x.redirect}),R={redirect:(y=m==null?void 0:m.loaded)==null?void 0:y.redirect,state:{url:o,params:p,branch:f,error:b,stuff:w,session_id:H},props:{components:_.map(g=>g.module.default)}};for(let g=0;g<_.length;g+=1){const x=_[g].loaded;R.props[`props_${g}`]=x?await x.props:null}if(!a.url||o.href!==a.url.href||a.error!==b||a.stuff!==w){R.props.page={error:b,params:p,routeId:v,status:h,stuff:w,url:o};const g=(x,k)=>{Object.defineProperty(R.props.page,x,{get:()=>{throw new Error(`$page.${x} has been replaced by $page.url.${k}`)}})};g("origin","origin"),g("path","pathname"),g("query","searchParams")}const N=_[_.length-1],$=(U=N==null?void 0:N.loaded)==null?void 0:U.cache;if($){const g=o.pathname+o.search;let x=!1;const k=()=>{l.get(g)===R&&l.delete(g),O(),clearTimeout(A)},A=setTimeout(k,$.maxage*1e3),O=u.session.subscribe(()=>{x&&k()});x=!0,l.set(g,R)}return R}async function Z({status:o,error:p,module:w,url:f,params:h,stuff:b,props:v,routeId:_}){const m={module:w,uses:{params:new Set,url:!1,session:!1,stuff:!1,dependencies:new Set},loaded:null,stuff:b};function R(y){const{href:U}=new URL(y,f);m.uses.dependencies.add(U)}v&&m.uses.dependencies.add(f.href);const S={};for(const y in h)Object.defineProperty(S,y,{get(){return m.uses.params.add(y),h[y]},enumerable:!0});const N=Se,$=new At(f);if(w.load){const y={routeId:_,params:S,props:v||{},get url(){return m.uses.url=!0,$},get session(){return m.uses.session=!0,N},get stuff(){return m.uses.stuff=!0,P({},b)},async fetch(g,x){let k;typeof 
g=="string"?k=g:(k=g.url,x=P({body:g.method==="GET"||g.method==="HEAD"?void 0:await g.blob(),cache:g.cache,credentials:g.credentials,headers:g.headers,integrity:g.integrity,keepalive:g.keepalive,method:g.method,mode:g.mode,redirect:g.redirect,referrer:g.referrer,referrerPolicy:g.referrerPolicy,signal:g.signal},x));const A=new URL(k,f).href;return R(A),d?Re(A,x):Ot(k,x)},status:o!=null?o:null,error:p!=null?p:null};let U;if(U=await w.load.call(null,y),!U)throw new Error("load function must return a value");m.loaded=Me(U),m.loaded.stuff&&(m.stuff=m.loaded.stuff),m.loaded.dependencies&&m.loaded.dependencies.forEach(R)}else v&&(m.loaded=Me({props:v}));return m}async function Pe({id:o,url:p,params:w,route:f},h){var U,g,x;if(s.id===o&&s.promise)return s.promise;if(!h){const k=l.get(o);if(k)return k}const{a:b,b:v,has_shadow:_}=f,m=a.url&&{url:o!==a.url.pathname+a.url.search,params:Object.keys(w).filter(k=>a.params[k]!==w[k]),session:H!==a.session_id};let R=[],S=Xe,N=!1,$=200,y=null;b.forEach(k=>k().catch(()=>{}));e:for(let k=0;kD.uses.params.has(W))||m.session&&D.uses.session||Array.from(D.uses.dependencies).some(W=>c.some(le=>le(W)))||N&&D.uses.stuff){let W={};const le=_&&k===b.length-1;if(le){const te=await Re(`${p.pathname}${p.pathname.endsWith("/")?"":"/"}__data.json${p.search}`,{headers:{"x-sveltekit-load":"true"}});if(te.ok){const Ie=te.headers.get("x-sveltekit-location");if(Ie)return{redirect:Ie,props:{},state:a};W=te.status===204?{}:await te.json()}else $=te.status,y=new Error("Failed to load data")}if(y||(A=await Z({module:O,url:p,params:w,props:W,stuff:S,routeId:f.id})),A&&(le&&(A.uses.url=!0),A.loaded)){if(A.loaded.error&&($=A.loaded.status,y=A.loaded.error),A.loaded.redirect)return{redirect:A.loaded.redirect,props:{},state:a};A.loaded.stuff&&(N=!0)}}else A=D}catch(O){$=500,y=Be(O)}if(y){for(;k--;)if(v[k]){let O,D,ce=k;for(;!(D=R[ce]);)ce-=1;try{if(O=await Z({status:$,error:y,module:await v[k](),url:p,params:w,stuff:D.stuff,routeId:f.id}),(U=O==null?void 0:O.loaded)!=null&&U.error)continue;(g=O==null?void 0:O.loaded)!=null&&g.stuff&&(S=P(P({},S),O.loaded.stuff)),R=R.slice(0,ce+1).concat(O);break e}catch{continue}}return await Q({status:$,error:y,url:p,routeId:f.id})}else(x=A==null?void 0:A.loaded)!=null&&x.stuff&&(S=P(P({},S),A.loaded.stuff)),R.push(A)}return await ge({url:p,params:w,stuff:S,branch:R,status:$,error:y,routeId:f.id})}async function Q({status:o,error:p,url:w,routeId:f}){var _,m;const h={},b=await Z({module:await Ct,url:w,params:h,stuff:{},routeId:f}),v=await Z({status:o,error:p,module:await Dt,url:w,params:h,stuff:b&&b.loaded&&b.loaded.stuff||{},routeId:f});return await ge({url:w,params:h,stuff:P(P({},(_=b==null?void 0:b.loaded)==null?void 0:_.stuff),(m=v==null?void 0:v.loaded)==null?void 0:m.stuff),branch:[b,v],status:o,error:p,routeId:f})}function Te(o){if(o.origin!==location.origin||!o.pathname.startsWith(t))return;const p=decodeURI(o.pathname.slice(t.length)||"/");for(const w of ve){const f=w.exec(p);if(f)return{id:o.pathname+o.search,route:w,params:f,url:o}}}async function we({url:o,scroll:p,keepfocus:w,redirect_chain:f,details:h,accepted:b,blocked:v}){const _=a.url;let m=!1;const R={from:_,to:o,cancel:()=>m=!0};if(r.before_navigate.forEach($=>$(R)),m){v();return}const S=Ut(o.pathname,i),N=new URL(o.origin+S+o.search+o.hash);$e(C),b(),d&&u.navigating.set({from:a.url,to:N}),await me(N,f,!1,{scroll:p,keepfocus:w,details:h},()=>{const $={from:_,to:N};r.after_navigate.forEach(y=>y($)),u.navigating.set(null)})}function ee(o){return location.href=o.href,new 
Promise(()=>{})}return{after_navigate:o=>{ke(()=>(r.after_navigate.push(o),()=>{const p=r.after_navigate.indexOf(o);r.after_navigate.splice(p,1)}))},before_navigate:o=>{ke(()=>(r.before_navigate.push(o),()=>{const p=r.before_navigate.indexOf(o);r.before_navigate.splice(p,1)}))},disable_scroll_handling:()=>{(E||!d)&&(L=!1)},goto:(o,p={})=>Ne(o,p,[]),invalidate:o=>{if(typeof o=="function")c.push(o);else{const{href:p}=new URL(o,location.href);c.push(w=>w===p)}return F||(F=Promise.resolve().then(async()=>{await me(new URL(location.href),[],!0),F=null})),F},prefetch:async o=>{const p=new URL(o,We(document));await xe(p)},prefetch_routes:async o=>{const w=(o?ve.filter(f=>o.some(h=>f.exec(h))):ve).map(f=>Promise.all(f.a.map(h=>h())));await Promise.all(w)},_start_router:()=>{history.scrollRestoration="manual",addEventListener("beforeunload",f=>{let h=!1;const b={from:a.url,to:null,cancel:()=>h=!0};r.before_navigate.forEach(v=>v(b)),h?(f.preventDefault(),f.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{if(document.visibilityState==="hidden"){$e(C);try{sessionStorage[He]=JSON.stringify(se)}catch{}}});const o=f=>{const h=Ye(f);h&&h.href&&h.hasAttribute("sveltekit:prefetch")&&xe(Ge(h))};let p;const w=f=>{clearTimeout(p),p=setTimeout(()=>{var h;(h=f.target)==null||h.dispatchEvent(new CustomEvent("sveltekit:trigger_prefetch",{bubbles:!0}))},20)};addEventListener("touchstart",o),addEventListener("mousemove",w),addEventListener("sveltekit:trigger_prefetch",o),addEventListener("click",f=>{if(!X||f.button||f.which!==1||f.metaKey||f.ctrlKey||f.shiftKey||f.altKey||f.defaultPrevented)return;const h=Ye(f);if(!h||!h.href)return;const b=h instanceof SVGAElement,v=Ge(h);if(!b&&v.origin==="null")return;const _=(h.getAttribute("rel")||"").split(/\s+/);if(h.hasAttribute("download")||_.includes("external")||h.hasAttribute("sveltekit:reload")||(b?h.target.baseVal:h.target))return;const[m,R]=v.href.split("#");if(R!==void 0&&m===location.href.split("#")[0]){he=!0,$e(C),u.page.set(ne(P({},_e),{url:v})),u.page.notify();return}we({url:v,scroll:h.hasAttribute("sveltekit:noscroll")?Ee():null,keepfocus:!1,redirect_chain:[],details:{state:{},replaceState:v.href===location.href},accepted:()=>f.preventDefault(),blocked:()=>f.preventDefault()})}),addEventListener("popstate",f=>{if(f.state&&X){if(f.state[M]===C)return;we({url:new URL(location.href),scroll:se[f.state[M]],keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{C=f.state[M]},blocked:()=>{const h=C-f.state[M];history.go(h)}})}}),addEventListener("hashchange",()=>{he&&(he=!1,history.replaceState(ne(P({},history.state),{[M]:++C}),"",location.href))})},_hydrate:async({status:o,error:p,nodes:w,params:f,routeId:h})=>{const b=new URL(location.href),v=[];let _={},m,R;try{for(let S=0;S 10: - # lang = detect(text) - #if lang != 'en': - # raise Exception(F"""Non English text detected. Restore Punctuation works only for English. - # If you are certain the input is English, pass argument lang='en' to this function. 
- # Punctuate received: {text}""") - - def chunks(L, n): - return [L[x : x + n] for x in range(0, len(L), n)] - - - - # plit up large text into bert digestable chunks - splits = self.split_on_toks(text, self.wrds_per_pred, self.overlap_wrds) - - texts = [i["text"] for i in splits] - batches = chunks(texts, batch_size) - preds_lst = [] - - - for batch in batches: - batch_preds, _ = self.model.predict(batch) - preds_lst.extend(batch_preds) - - - # predict slices - # full_preds_lst contains tuple of labels and logits - #full_preds_lst = [self.predict(i['text']) for i in splits] - # extract predictions, and discard logits - #preds_lst = [i[0][0] for i in full_preds_lst] - # join text slices - combined_preds = self.combine_results(text, preds_lst) - # create punctuated prediction - punct_text = self.punctuate_texts(combined_preds) - return punct_text - - def predict(self, input_slice): - """ - Passes the unpunctuated text to the model for punctuation. - """ - predictions, raw_outputs = self.model.predict([input_slice]) - return predictions, raw_outputs - - @staticmethod - def split_on_toks(text, length, overlap): - """ - Splits text into predefined slices of overlapping text with indexes (offsets) - that tie-back to original text. - This is done to bypass 512 token limit on transformer models by sequentially - feeding chunks of < 512 toks. - Example output: - [{...}, {"text": "...", 'start_idx': 31354, 'end_idx': 32648}, {...}] - """ - wrds = text.replace('\n', ' ').split(" ") - resp = [] - lst_chunk_idx = 0 - i = 0 - - while True: - # words in the chunk and the overlapping portion - wrds_len = wrds[(length * i):(length * (i + 1))] - wrds_ovlp = wrds[(length * (i + 1)):((length * (i + 1)) + overlap)] - wrds_split = wrds_len + wrds_ovlp - - # Break loop if no more words - if not wrds_split: - break - - wrds_str = " ".join(wrds_split) - nxt_chunk_start_idx = len(" ".join(wrds_len)) - lst_char_idx = len(" ".join(wrds_split)) - - resp_obj = { - "text": wrds_str, - "start_idx": lst_chunk_idx, - "end_idx": lst_char_idx + lst_chunk_idx, - } - - resp.append(resp_obj) - lst_chunk_idx += nxt_chunk_start_idx + 1 - i += 1 - logging.info(f"Sliced transcript into {len(resp)} slices.") - return resp - - @staticmethod - def combine_results(full_text: str, text_slices): - """ - Given a full text and predictions of each slice combines predictions into a single text again. 
- Performs validataion wether text was combined correctly - """ - split_full_text = full_text.replace('\n', ' ').split(" ") - split_full_text = [i for i in split_full_text if i] - split_full_text_len = len(split_full_text) - output_text = [] - index = 0 - - if len(text_slices[-1]) <= 3 and len(text_slices) > 1: - text_slices = text_slices[:-1] - - for _slice in text_slices: - slice_wrds = len(_slice) - for ix, wrd in enumerate(_slice): - # print(index, "|", str(list(wrd.keys())[0]), "|", split_full_text[index]) - if index == split_full_text_len: - break - - if split_full_text[index] == str(list(wrd.keys())[0]) and \ - ix <= slice_wrds - 3 and text_slices[-1] != _slice: - index += 1 - pred_item_tuple = list(wrd.items())[0] - output_text.append(pred_item_tuple) - elif split_full_text[index] == str(list(wrd.keys())[0]) and text_slices[-1] == _slice: - index += 1 - pred_item_tuple = list(wrd.items())[0] - output_text.append(pred_item_tuple) - assert [i[0] for i in output_text] == split_full_text - return output_text - - @staticmethod - def punctuate_texts(full_pred: list): - """ - Given a list of Predictions from the model, applies the predictions to text, - thus punctuating it. - """ - punct_resp = "" - for i in full_pred: - word, label = i - if label[-1] == "U": - punct_wrd = word.capitalize() - else: - punct_wrd = word - - if label[0] != "O": - punct_wrd += label[0] - - punct_resp += punct_wrd + " " - punct_resp = punct_resp.strip() - # Append trailing period if doesnt exist. - if punct_resp[-1].isalnum(): - punct_resp += "." - return punct_resp - - -if __name__ == "__main__": - - start = time.time() - punct_model = RestorePuncts() - - load_model = time.time() - print(f'Time to load model: {load_model - start}') - # read test file - # with open('en_lower.txt', 'r') as fp: - # # test_sample = fp.read() - # lines = fp.readlines() - - with open('sample.vtt', 'r') as fp: - source_text = fp.read() - - # captions = webvtt.read_buffer(StringIO(source_text)) - captions = webvtt.read('sample.vtt') - source_sentences = [caption.text.replace('\r', '').replace('\n', ' ') for caption in captions] - - # print(source_sentences) - - sent = ' '.join(source_sentences) - punctuated = punct_model.punctuate(sent) - - tokenised = sent_tokenize(punctuated) - # print(tokenised) - - for i in range(len(tokenised)): - captions[i].text = tokenised[i] - # return captions.content - captions.save('my_captions.vtt') - - end = time.time() - print(f'Time for run: {end - load_model}') - print(f'Total time: {end - start}') diff --git a/spaces/hylee/u2net_portrait/U-2-Net/u2net_portrait_demo.py b/spaces/hylee/u2net_portrait/U-2-Net/u2net_portrait_demo.py deleted file mode 100644 index 516272a61d6533b8ebf8e466dfa3bda2d9c4e9a3..0000000000000000000000000000000000000000 --- a/spaces/hylee/u2net_portrait/U-2-Net/u2net_portrait_demo.py +++ /dev/null @@ -1,175 +0,0 @@ -import cv2 -import torch -from model import U2NET -from torch.autograd import Variable -import numpy as np -from glob import glob -import os - -def detect_single_face(face_cascade,img): - # Convert into grayscale - gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - - # Detect faces - faces = face_cascade.detectMultiScale(gray, 1.1, 4) - if(len(faces)==0): - print("Warming: no face detection, the portrait u2net will run on the whole image!") - return None - - # filter to keep the largest face - wh = 0 - idx = 0 - for i in range(0,len(faces)): - (x,y,w,h) = faces[i] - if(whwidth): - r = right-width - right = width - - tpad = int(float(h)*0.6) - top = y - tpad - if(top<0): - 
t = tpad-y - top = 0 - - bpad = int(float(h)*0.2) - bottom = y+h+bpad - if(bottom>height): - b = bottom-height - bottom = height - - - im_face = img[top:bottom,left:right] - if(len(im_face.shape)==2): - im_face = np.repeat(im_face[:,:,np.newaxis],(1,1,3)) - - im_face = np.pad(im_face,((t,b),(l,r),(0,0)),mode='constant',constant_values=((255,255),(255,255),(255,255))) - - # pad to achieve image with square shape for avoding face deformation after resizing - hf,wf = im_face.shape[0:2] - if(hf-2>wf): - wfp = int((hf-wf)/2) - im_face = np.pad(im_face,((0,0),(wfp,wfp),(0,0)),mode='constant',constant_values=((255,255),(255,255),(255,255))) - elif(wf-2>hf): - hfp = int((wf-hf)/2) - im_face = np.pad(im_face,((hfp,hfp),(0,0),(0,0)),mode='constant',constant_values=((255,255),(255,255),(255,255))) - - # resize to have 512x512 resolution - im_face = cv2.resize(im_face, (512,512), interpolation = cv2.INTER_AREA) - - return im_face - -def normPRED(d): - ma = torch.max(d) - mi = torch.min(d) - - dn = (d-mi)/(ma-mi) - - return dn - -def inference(net,input): - - # normalize the input - tmpImg = np.zeros((input.shape[0],input.shape[1],3)) - input = input/np.max(input) - - tmpImg[:,:,0] = (input[:,:,2]-0.406)/0.225 - tmpImg[:,:,1] = (input[:,:,1]-0.456)/0.224 - tmpImg[:,:,2] = (input[:,:,0]-0.485)/0.229 - - # convert BGR to RGB - tmpImg = tmpImg.transpose((2, 0, 1)) - tmpImg = tmpImg[np.newaxis,:,:,:] - tmpImg = torch.from_numpy(tmpImg) - - # convert numpy array to torch tensor - tmpImg = tmpImg.type(torch.FloatTensor) - - if torch.cuda.is_available(): - tmpImg = Variable(tmpImg.cuda()) - else: - tmpImg = Variable(tmpImg) - - # inference - d1,d2,d3,d4,d5,d6,d7= net(tmpImg) - - # normalization - pred = 1.0 - d1[:,0,:,:] - pred = normPRED(pred) - - # convert torch tensor to numpy array - pred = pred.squeeze() - pred = pred.cpu().data.numpy() - - del d1,d2,d3,d4,d5,d6,d7 - - return pred - -def main(): - - # get the image path list for inference - im_list = glob('./test_data/test_portrait_images/your_portrait_im/*') - print("Number of images: ",len(im_list)) - # indicate the output directory - out_dir = './test_data/test_portrait_images/your_portrait_results' - if(not os.path.exists(out_dir)): - os.mkdir(out_dir) - - # Load the cascade face detection model - face_cascade = cv2.CascadeClassifier('./saved_models/face_detection_cv2/haarcascade_frontalface_default.xml') - # u2net_portrait path - model_dir = './saved_models/u2net_portrait/u2net_portrait.pth' - - # load u2net_portrait model - net = U2NET(3,1) - net.load_state_dict(torch.load(model_dir)) - if torch.cuda.is_available(): - net.cuda() - net.eval() - - # do the inference one-by-one - for i in range(0,len(im_list)): - print("--------------------------") - print("inferencing ", i, "/", len(im_list), im_list[i]) - - # load each image - img = cv2.imread(im_list[i]) - height,width = img.shape[0:2] - face = detect_single_face(face_cascade,img) - im_face = crop_face(img, face) - im_portrait = inference(net,im_face) - - # save the output - cv2.imwrite(out_dir+"/"+im_list[i].split('/')[-1][0:-4]+'.png',(im_portrait*255).astype(np.uint8)) - -if __name__ == '__main__': - main() diff --git a/spaces/iamironman4279/SadTalker/src/face3d/util/html.py b/spaces/iamironman4279/SadTalker/src/face3d/util/html.py deleted file mode 100644 index cc3262a1eafda34842e4dbad47bb6ba72f0c5a68..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/util/html.py +++ /dev/null @@ -1,86 +0,0 @@ -import dominate -from dominate.tags import meta, h3, table, 
tr, td, p, a, img, br -import os - - -class HTML: - """This HTML class allows us to save images and write texts into a single HTML file. - - It consists of functions such as (add a text header to the HTML file), - (add a row of images to the HTML file), and (save the HTML to the disk). - It is based on Python library 'dominate', a Python library for creating and manipulating HTML documents using a DOM API. - """ - - def __init__(self, web_dir, title, refresh=0): - """Initialize the HTML classes - - Parameters: - web_dir (str) -- a directory that stores the webpage. HTML file will be created at /index.html; images will be saved at 0: - with self.doc.head: - meta(http_equiv="refresh", content=str(refresh)) - - def get_image_dir(self): - """Return the directory that stores images""" - return self.img_dir - - def add_header(self, text): - """Insert a header to the HTML file - - Parameters: - text (str) -- the header text - """ - with self.doc: - h3(text) - - def add_images(self, ims, txts, links, width=400): - """add images to the HTML file - - Parameters: - ims (str list) -- a list of image paths - txts (str list) -- a list of image names shown on the website - links (str list) -- a list of hyperref links; when you click an image, it will redirect you to a new page - """ - self.t = table(border=1, style="table-layout: fixed;") # Insert a table - self.doc.add(self.t) - with self.t: - with tr(): - for im, txt, link in zip(ims, txts, links): - with td(style="word-wrap: break-word;", halign="center", valign="top"): - with p(): - with a(href=os.path.join('images', link)): - img(style="width:%dpx" % width, src=os.path.join('images', im)) - br() - p(txt) - - def save(self): - """save the current content to the HMTL file""" - html_file = '%s/index.html' % self.web_dir - f = open(html_file, 'wt') - f.write(self.doc.render()) - f.close() - - -if __name__ == '__main__': # we show an example usage here. 
- html = HTML('web/', 'test_html') - html.add_header('hello world') - - ims, txts, links = [], [], [] - for n in range(4): - ims.append('image_%d.png' % n) - txts.append('text_%d' % n) - links.append('image_%d.png' % n) - html.add_images(ims, txts, links) - html.save() diff --git a/spaces/indikamk/MisconAI/README.md b/spaces/indikamk/MisconAI/README.md deleted file mode 100644 index 982156100affe2e455fb8b5466a8f77416b485b6..0000000000000000000000000000000000000000 --- a/spaces/indikamk/MisconAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MisconAI -emoji: 🚀 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/innat/HybridModel-GradCAM/layers/__init__.py b/spaces/innat/HybridModel-GradCAM/layers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/innnky/vits-nyaru/utils.py b/spaces/innnky/vits-nyaru/utils.py deleted file mode 100644 index a311e1c75de8f65f7edb49e0e6d5cdea085b5e5c..0000000000000000000000000000000000000000 --- a/spaces/innnky/vits-nyaru/utils.py +++ /dev/null @@ -1,258 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = 
glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("../drive/MyDrive", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - 
source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Company Of Heroes 2 Reloaded Skirmish Offline.epub.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Company Of Heroes 2 Reloaded Skirmish Offline.epub.md deleted file mode 100644 index 5f74604150c781be3d1c362574086b0b307cb28b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Company Of Heroes 2 Reloaded Skirmish Offline.epub.md +++ /dev/null @@ -1,12 +0,0 @@ -

    Company Of Heroes 2 Reloaded Skirmish Offline.epub


    Download https://urlin.us/2uEwBe



    - -January 25, 2014 — . /6d/company-of-heroes-2-skirmish-offline-cracked.html . coub.com/stories/2186910-choplifter-hd-update-1-skidrow-corepack-. /tag/ -The company is preparing to update its cracking tools with a new version of the patch. -There is no doubt it will keep being updated up to the final version of the game. -The patch is available now. -It makes it possible to play in both online and offline modes. -If you need the game cracked without owning a copy, you have to pay a few pounds to the Russian website. -This crack is not free either, but you can download it for free. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Naa Peru Surya Na Illu India Full Mo) [PORTABLE].md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Naa Peru Surya Na Illu India Full Mo) [PORTABLE].md deleted file mode 100644 index 29378dfc289fb4178b639a81e131945a94d393ac..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (Naa Peru Surya Na Illu India Full Mo) [PORTABLE].md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Naa Peru Surya Na Illu India full mo)


    Download File https://urlin.us/2uEyAt



    - -Video permanently removed due to copyright complaint (sorry about that). To view in HD VISIT OUR . In response to a recent copyright infringement suit on a video that was uploaded more than 14 years before the first lawsuit was published, Warner Music Group, which owns Universal Music Group in the UK, has removed videos that were first uploaded to YouTube in 1996.Warner Music Group sued a man last week, alleging he illegally uploaded a video to YouTube just six months before his first lawsuit was filed. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Iqbal Hd 1080p Blu-ray [BEST] Download Torrent.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Iqbal Hd 1080p Blu-ray [BEST] Download Torrent.md deleted file mode 100644 index 5d3809d919a26f489206b4da7da4491e9fe0b0dc..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Iqbal Hd 1080p Blu-ray [BEST] Download Torrent.md +++ /dev/null @@ -1,8 +0,0 @@ -

    Iqbal hd 1080p blu-ray download torrent


    Download Zip ⚹⚹⚹ https://urlin.us/2uExla



    - -Jun 19, 2018 - If you trade without a VPN, your ISP can see what you're trading and can ... 720p.BLU 1080p.BLU. 839.3 MB. 1280*534. English 2.0. Download Vivarium / Virabyum (2019) movie from torrent free, without registration in good quality or watch online in HD 1080p / HD 720p. -File: Virabyum - Virabyum (2019) - BDRip 720p.torrent Format: MKV Video codec: AVC Audio codec: AC3 Video: MPEG-4 AVC, 4700 Kbps, 1280x536 Audio: Russian (AC3, 6 ch, 384 Kbps), English (DTS, 6 ch, 1509 Kbps) Translation: Professional (multi-voice sub-title) License. -Subtitles: Russian, English. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/17habaibberpengaruhdiindonesiapdffree FREE.md b/spaces/inreVtussa/clothingai/Examples/17habaibberpengaruhdiindonesiapdffree FREE.md deleted file mode 100644 index 269c014292b0b7f147136f1bd272f15a4d6a02e9..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/17habaibberpengaruhdiindonesiapdffree FREE.md +++ /dev/null @@ -1,6 +0,0 @@ - -

    Karena terdapat sekitar 50-an contoh yang bisa mengubah garis keamanan, kita semua menemukan munculnya yang kita kenal dengan rasa-rasa kelelahan. Namun, ini dilakukan oleh struktur berantai, dan sebab membuatnya berbeda dengan 17habaibberpengaruhdiindonesiapdffree

    pertanyaan yang kita bicarakan di sekolah. Berikut faktanya, kami menemukan munculnya sejumlah contoh ilmu pengetahuan yang kami dapatkan, seperti yang ini. Hal ini akan menjadi contoh luar biasa bagi kami untuk menyingkapkan cara menulis kode 17habaibberpengaruhdiindonesiapdffree

    yang dapat mengubah garis ketenaran bagi semua orang yang mengenalkan kami di internet.

    17habaibberpengaruhdiindonesiapdffree

    karena kami menulisnya, ialah Nomor File Ini

    17habaibberpengaruhdiindonesiapdffree

    dan cukup berklik ke file ini sebagai bagian dari ini

    17habaibberpengaruhdiindonesiapdffree

    karena kami menulisnya, ialah Nomor File Ini

    17habaibberpengaruhdiindonesiapdffree

    dan cukup berklik ke file ini sebagai bagian dari ini

    -

    17habaibberpengaruhdiindonesiapdffree


    Download ✒ ✒ ✒ https://tiurll.com/2uCk4f



    -

    some questions:

    • i saw this article about some syrian immigrants in germany getting deported; how do you think the project should address what the us is doing to immigrants in general?
    • what is your plan for making the catalogue open source?
    • how much of the work is done (at least as of today) by volunteers?
    • how can participation be organized in other countries?
    • how much would it cost to sustain the project, and in what ways?
    • do you have a final list of which cities need to be included? if so, could you send me a copy?
    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/City Guide 7 Wince Crack PATCHED.md b/spaces/inreVtussa/clothingai/Examples/City Guide 7 Wince Crack PATCHED.md deleted file mode 100644 index 2bd9cafd05b1bbf1b95058f77af3784278e03c3c..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/City Guide 7 Wince Crack PATCHED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    City Guide 7 Wince Crack


    Download Zip ——— https://tiurll.com/2uCiAa



    -
    -City Guide 7 Wince Crack. Hitman Contracts Cheats Hitman Absolution Crack Games Crack free hack patch crack keygen key games look ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/ispast/Genshin_MB_VITS_TTS/commons.py b/spaces/ispast/Genshin_MB_VITS_TTS/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/ispast/Genshin_MB_VITS_TTS/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * 
s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/jackli888/stable-diffusion-webui/modules/script_callbacks.py b/spaces/jackli888/stable-diffusion-webui/modules/script_callbacks.py deleted file mode 100644 index c98c2395b6fe46ddec2a10cc6a54ee0f3ba248f5..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/script_callbacks.py +++ /dev/null @@ -1,359 +0,0 @@ -import sys -import traceback -from collections import namedtuple -import inspect -from typing import Optional, Dict, Any - -from fastapi import FastAPI -from gradio import Blocks - - -def report_exception(c, job): - print(f"Error executing callback {job} for {c.script}", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - -class ImageSaveParams: - def __init__(self, image, p, filename, pnginfo): - self.image = image - """the PIL image itself""" - - self.p = p - """p object with processing parameters; either StableDiffusionProcessing or an object with same fields""" - - self.filename = filename - """name of file that the image would be saved to""" - - self.pnginfo = pnginfo - """dictionary with parameters for image's PNG info data; infotext will have the key 'parameters'""" - - -class CFGDenoiserParams: - def __init__(self, x, image_cond, sigma, sampling_step, total_sampling_steps): - self.x = x - """Latent image representation in the process of being denoised""" - - self.image_cond = image_cond - """Conditioning image""" - - self.sigma = sigma - """Current sigma noise step value""" - - self.sampling_step = sampling_step - """Current Sampling step number""" - - self.total_sampling_steps = total_sampling_steps - """Total number of sampling steps planned""" - - -class CFGDenoisedParams: - def __init__(self, x, sampling_step, total_sampling_steps): - self.x = x - """Latent image representation in the process of being denoised""" - - self.sampling_step = sampling_step - """Current Sampling step number""" - - self.total_sampling_steps = total_sampling_steps - """Total number of sampling 
steps planned""" - - -class UiTrainTabParams: - def __init__(self, txt2img_preview_params): - self.txt2img_preview_params = txt2img_preview_params - - -class ImageGridLoopParams: - def __init__(self, imgs, cols, rows): - self.imgs = imgs - self.cols = cols - self.rows = rows - - -ScriptCallback = namedtuple("ScriptCallback", ["script", "callback"]) -callback_map = dict( - callbacks_app_started=[], - callbacks_model_loaded=[], - callbacks_ui_tabs=[], - callbacks_ui_train_tabs=[], - callbacks_ui_settings=[], - callbacks_before_image_saved=[], - callbacks_image_saved=[], - callbacks_cfg_denoiser=[], - callbacks_cfg_denoised=[], - callbacks_before_component=[], - callbacks_after_component=[], - callbacks_image_grid=[], - callbacks_infotext_pasted=[], - callbacks_script_unloaded=[], - callbacks_before_ui=[], -) - - -def clear_callbacks(): - for callback_list in callback_map.values(): - callback_list.clear() - - -def app_started_callback(demo: Optional[Blocks], app: FastAPI): - for c in callback_map['callbacks_app_started']: - try: - c.callback(demo, app) - except Exception: - report_exception(c, 'app_started_callback') - - -def model_loaded_callback(sd_model): - for c in callback_map['callbacks_model_loaded']: - try: - c.callback(sd_model) - except Exception: - report_exception(c, 'model_loaded_callback') - - -def ui_tabs_callback(): - res = [] - - for c in callback_map['callbacks_ui_tabs']: - try: - res += c.callback() or [] - except Exception: - report_exception(c, 'ui_tabs_callback') - - return res - - -def ui_train_tabs_callback(params: UiTrainTabParams): - for c in callback_map['callbacks_ui_train_tabs']: - try: - c.callback(params) - except Exception: - report_exception(c, 'callbacks_ui_train_tabs') - - -def ui_settings_callback(): - for c in callback_map['callbacks_ui_settings']: - try: - c.callback() - except Exception: - report_exception(c, 'ui_settings_callback') - - -def before_image_saved_callback(params: ImageSaveParams): - for c in callback_map['callbacks_before_image_saved']: - try: - c.callback(params) - except Exception: - report_exception(c, 'before_image_saved_callback') - - -def image_saved_callback(params: ImageSaveParams): - for c in callback_map['callbacks_image_saved']: - try: - c.callback(params) - except Exception: - report_exception(c, 'image_saved_callback') - - -def cfg_denoiser_callback(params: CFGDenoiserParams): - for c in callback_map['callbacks_cfg_denoiser']: - try: - c.callback(params) - except Exception: - report_exception(c, 'cfg_denoiser_callback') - - -def cfg_denoised_callback(params: CFGDenoisedParams): - for c in callback_map['callbacks_cfg_denoised']: - try: - c.callback(params) - except Exception: - report_exception(c, 'cfg_denoised_callback') - - -def before_component_callback(component, **kwargs): - for c in callback_map['callbacks_before_component']: - try: - c.callback(component, **kwargs) - except Exception: - report_exception(c, 'before_component_callback') - - -def after_component_callback(component, **kwargs): - for c in callback_map['callbacks_after_component']: - try: - c.callback(component, **kwargs) - except Exception: - report_exception(c, 'after_component_callback') - - -def image_grid_callback(params: ImageGridLoopParams): - for c in callback_map['callbacks_image_grid']: - try: - c.callback(params) - except Exception: - report_exception(c, 'image_grid') - - -def infotext_pasted_callback(infotext: str, params: Dict[str, Any]): - for c in callback_map['callbacks_infotext_pasted']: - try: - c.callback(infotext, params) - except Exception: 
- report_exception(c, 'infotext_pasted') - - -def script_unloaded_callback(): - for c in reversed(callback_map['callbacks_script_unloaded']): - try: - c.callback() - except Exception: - report_exception(c, 'script_unloaded') - - -def before_ui_callback(): - for c in reversed(callback_map['callbacks_before_ui']): - try: - c.callback() - except Exception: - report_exception(c, 'before_ui') - - -def add_callback(callbacks, fun): - stack = [x for x in inspect.stack() if x.filename != __file__] - filename = stack[0].filename if len(stack) > 0 else 'unknown file' - - callbacks.append(ScriptCallback(filename, fun)) - - -def remove_current_script_callbacks(): - stack = [x for x in inspect.stack() if x.filename != __file__] - filename = stack[0].filename if len(stack) > 0 else 'unknown file' - if filename == 'unknown file': - return - for callback_list in callback_map.values(): - for callback_to_remove in [cb for cb in callback_list if cb.script == filename]: - callback_list.remove(callback_to_remove) - - -def remove_callbacks_for_function(callback_func): - for callback_list in callback_map.values(): - for callback_to_remove in [cb for cb in callback_list if cb.callback == callback_func]: - callback_list.remove(callback_to_remove) - - -def on_app_started(callback): - """register a function to be called when the webui started, the gradio `Block` component and - fastapi `FastAPI` object are passed as the arguments""" - add_callback(callback_map['callbacks_app_started'], callback) - - -def on_model_loaded(callback): - """register a function to be called when the stable diffusion model is created; the model is - passed as an argument; this function is also called when the script is reloaded. """ - add_callback(callback_map['callbacks_model_loaded'], callback) - - -def on_ui_tabs(callback): - """register a function to be called when the UI is creating new tabs. - The function must either return a None, which means no new tabs to be added, or a list, where - each element is a tuple: - (gradio_component, title, elem_id) - - gradio_component is a gradio component to be used for contents of the tab (usually gr.Blocks) - title is tab text displayed to user in the UI - elem_id is HTML id for the tab - """ - add_callback(callback_map['callbacks_ui_tabs'], callback) - - -def on_ui_train_tabs(callback): - """register a function to be called when the UI is creating new tabs for the train tab. - Create your new tabs with gr.Tab. - """ - add_callback(callback_map['callbacks_ui_train_tabs'], callback) - - -def on_ui_settings(callback): - """register a function to be called before UI settings are populated; add your settings - by using shared.opts.add_option(shared.OptionInfo(...)) """ - add_callback(callback_map['callbacks_ui_settings'], callback) - - -def on_before_image_saved(callback): - """register a function to be called before an image is saved to a file. - The callback is called with one argument: - - params: ImageSaveParams - parameters the image is to be saved with. You can change fields in this object. - """ - add_callback(callback_map['callbacks_before_image_saved'], callback) - - -def on_image_saved(callback): - """register a function to be called after an image is saved to a file. - The callback is called with one argument: - - params: ImageSaveParams - parameters the image was saved with. Changing fields in this object does nothing. 
- """ - add_callback(callback_map['callbacks_image_saved'], callback) - - -def on_cfg_denoiser(callback): - """register a function to be called in the kdiffussion cfg_denoiser method after building the inner model inputs. - The callback is called with one argument: - - params: CFGDenoiserParams - parameters to be passed to the inner model and sampling state details. - """ - add_callback(callback_map['callbacks_cfg_denoiser'], callback) - - -def on_cfg_denoised(callback): - """register a function to be called in the kdiffussion cfg_denoiser method after building the inner model inputs. - The callback is called with one argument: - - params: CFGDenoisedParams - parameters to be passed to the inner model and sampling state details. - """ - add_callback(callback_map['callbacks_cfg_denoised'], callback) - - -def on_before_component(callback): - """register a function to be called before a component is created. - The callback is called with arguments: - - component - gradio component that is about to be created. - - **kwargs - args to gradio.components.IOComponent.__init__ function - - Use elem_id/label fields of kwargs to figure out which component it is. - This can be useful to inject your own components somewhere in the middle of vanilla UI. - """ - add_callback(callback_map['callbacks_before_component'], callback) - - -def on_after_component(callback): - """register a function to be called after a component is created. See on_before_component for more.""" - add_callback(callback_map['callbacks_after_component'], callback) - - -def on_image_grid(callback): - """register a function to be called before making an image grid. - The callback is called with one argument: - - params: ImageGridLoopParams - parameters to be used for grid creation. Can be modified. - """ - add_callback(callback_map['callbacks_image_grid'], callback) - - -def on_infotext_pasted(callback): - """register a function to be called before applying an infotext. - The callback is called with two arguments: - - infotext: str - raw infotext. - - result: Dict[str, any] - parsed infotext parameters. - """ - add_callback(callback_map['callbacks_infotext_pasted'], callback) - - -def on_script_unloaded(callback): - """register a function to be called before the script is unloaded. Any hooks/hijacks/monkeying about that - the script did should be reverted here""" - - add_callback(callback_map['callbacks_script_unloaded'], callback) - - -def on_before_ui(callback): - """register a function to be called before the UI is created.""" - - add_callback(callback_map['callbacks_before_ui'], callback) diff --git a/spaces/jbilcke-hf/VideoQuest/Dockerfile b/spaces/jbilcke-hf/VideoQuest/Dockerfile deleted file mode 100644 index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM node:18-alpine AS base - -# Install dependencies only when needed -FROM base AS deps -# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. -RUN apk add --no-cache libc6-compat -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." 
&& exit 1; \ - fi - -# Uncomment the following lines if you want to use a secret at buildtime, -# for example to access your private npm packages -# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \ -# $(cat /run/secrets/HF_EXAMPLE_SECRET) - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -# RUN yarn build - -# If you use yarn, comment out this line and use the line above -RUN npm run build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN addgroup --system --gid 1001 nodejs -RUN adduser --system --uid 1001 nextjs - -COPY --from=builder /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ -COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static -COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache -# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/jianyq/ResumeBot/README copy.md b/spaces/jianyq/ResumeBot/README copy.md deleted file mode 100644 index 0a42fc7b5b6a7aaea9fec22c25fc31b0497ec67c..0000000000000000000000000000000000000000 --- a/spaces/jianyq/ResumeBot/README copy.md +++ /dev/null @@ -1,2 +0,0 @@ -# resume-chatbot -A chatbot built using chatGPT to discuss my resume with recruiters. 
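The ResumeBot space above only ships this two-line README; the diff does not show how the chatbot itself is wired up. Purely as an illustration of the pattern it describes, here is a minimal sketch of a resume Q&A loop. It assumes the pre-1.0 `openai` Python package, an `OPENAI_API_KEY` environment variable, and a hypothetical `resume.txt` file — none of these come from the deleted space.

```python
# Minimal resume Q&A sketch (illustrative only, not the space's actual code).
import os
import openai  # assumes the pre-1.0 openai package API

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical resume file; the real space's data source is not shown in the diff.
with open("resume.txt", encoding="utf-8") as f:
    resume_text = f.read()

messages = [{
    "role": "system",
    "content": "Answer recruiter questions using only this resume:\n" + resume_text,
}]

while True:
    question = input("Recruiter: ").strip()
    if not question:
        break
    messages.append({"role": "user", "content": question})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```

A deployed space would normally expose a loop like this through a gradio or streamlit UI rather than `input()`.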
diff --git a/spaces/jmesikto/whisper-webui/app-network.py b/spaces/jmesikto/whisper-webui/app-network.py deleted file mode 100644 index 4f0e565b9029761d4b995fe32a65c58d1de55f53..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/app-network.py +++ /dev/null @@ -1,5 +0,0 @@ -# Run the app with no audio file restrictions, and make it available on the network -from app import create_ui -from src.config import ApplicationConfig - -create_ui(ApplicationConfig.create_default(input_audio_max_duration=-1, server_name="0.0.0.0")) \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fsspec/implementations/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/johnslegers/ImageProcessService/U2Net/processor.py b/spaces/johnslegers/ImageProcessService/U2Net/processor.py deleted file mode 100644 index 94eb5091850ba4e412e04e0b92b164bb29e8a8b4..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/ImageProcessService/U2Net/processor.py +++ /dev/null @@ -1,83 +0,0 @@ -from os import mkdir -from os.path import join, exists, isfile -from cv2 import imread, resize, imwrite -import numpy as np - -__all__ = ['Processor'] - -class Processor(): - def __init__(self, paths, images, batch_size, input_size): - # Image list - self.imgs = self.load_datas(paths, images) - # Input data - self.input_datas = self.preprocess(self.imgs, batch_size, input_size) - - # Read data function - def load_datas(self, paths, images): - datas = [] - # Read data list - if paths is not None: - for im_path in paths: - assert isfile(im_path), "The {} isn't a valid file path.".format(im_path) - im = imread(im_path) - datas.append(im) - if images is not None: - datas = images - # Return data list - return datas - - # Preprocessing - def preprocess(self, imgs, batch_size=1, input_size=320): - input_datas = [] - for image in imgs: - image = resize(image, (input_size, input_size)) - tmpImg = np.zeros((image.shape[0],image.shape[1],3)) - image = image/np.max(image) - - tmpImg[:,:,0] = (image[:,:,0]-0.485)/0.229 - tmpImg[:,:,1] = (image[:,:,1]-0.456)/0.224 - tmpImg[:,:,2] = (image[:,:,2]-0.406)/0.225 - - # Convert BGR to RGB - tmpImg = tmpImg.transpose((2, 0, 1)) - tmpImg = tmpImg[np.newaxis,:,:,:] - input_datas.append(tmpImg) - - input_datas = np.concatenate(input_datas, 0) - datas_num = input_datas.shape[0] - split_num = datas_num//batch_size+1 if datas_num%batch_size!=0 else datas_num//batch_size - input_datas = np.array_split(input_datas, split_num) - return input_datas - - def normPRED(self, d): - ma = np.max(d) - mi = np.min(d) - return (d-mi)/(ma-mi) - - # Post-processing - def postprocess(self, outputs, visualization=False, output_dir='output'): - results = [] - if visualization and not exists(output_dir): - mkdir(output_dir) - - for i, image in enumerate(self.imgs): - # Normalization - pred = outputs[i,0,:,:] - pred = self.normPRED(pred) - - # Convert torch tensor to numpy array - h, w = image.shape[:2] - mask = resize(pred, (w, h)) - output_img = (image*mask[..., np.newaxis] + (1-mask[..., np.newaxis])*255).astype(np.uint8) - mask = (mask*255).astype(np.uint8) - - if visualization: - imwrite(join(output_dir, 'result_mask_%d.png' % i), mask) - imwrite(join(output_dir, 'result_%d.png' % i), output_img) - - results.append({ - 'mask': mask, - 'front': 
output_img - }) - - return results diff --git a/spaces/johnslegers/ImageProcessService/modules/server.py b/spaces/johnslegers/ImageProcessService/modules/server.py deleted file mode 100644 index b80ab3ecbc0da18fdb5a3767bab97cd0d9ca5824..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/ImageProcessService/modules/server.py +++ /dev/null @@ -1,48 +0,0 @@ -from fastapi import FastAPI, Request, HTTPException -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse, StreamingResponse, RedirectResponse, HTMLResponse - -from modules.client import app -from modules.u2net import u2net_inference -from modules.config import API_TOKEN -import logging - -gunicorn_logger = logging.getLogger("gunicorn") -logger = logging.getLogger(__name__) -logger.setLevel(gunicorn_logger.level) -logger.propagate = True - -routes = FastAPI() - -routes.mount("/app", app) - -routes.mount("/static", StaticFiles(directory="static"), name="static") - -@routes.get("/") -async def redirect(request: Request): - return RedirectResponse("/app/"); - -@routes.get('/api') -async def get_author(request: Request): - return { - "name": "PaddleHub Service", - "author": "John Slegers", - "version": "0.1" - } - -@routes.get("/api/u2net") -async def get_u2net(request: Request): - return {"message": "Please use a POST request to send the image"} - -@routes.post("/api/u2net") -async def post_u2net(request: Request): - try: - input = await request.json() - output = u2net_inference(input['img']) - logging.debug(output) - t = [output[1], output[2], output[3]] - return t - except Exception as err: - message = str(err) - logging.exception(message) - raise HTTPException(status_code=500, detail=message) diff --git a/spaces/jordonpeter01/MusicGen2/Makefile b/spaces/jordonpeter01/MusicGen2/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen2/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git a/spaces/joshen/gpt-academic/app.py b/spaces/joshen/gpt-academic/app.py deleted file mode 100644 index 49f6d1778173f817f1480063695ad8a82d6d45d5..0000000000000000000000000000000000000000 --- a/spaces/joshen/gpt-academic/app.py +++ /dev/null @@ -1,150 +0,0 @@ -import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 -import gradio as gr -from predict import predict -from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf - -# 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 -proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT = \ - get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT') - -# 如果WEB_PORT是-1, 则随机选取WEB端口 -PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT -if not AUTHENTICATION: AUTHENTICATION = None - -initial_prompt = "Serve me as a writing and programming assistant." -title_html = "

    ChatGPT 学术优化

    " -description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)""" - -# 问询记录, python 版本建议3.9+(越新越好) -import logging -os.makedirs("gpt_log", exist_ok=True) -try:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, encoding="utf-8") -except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO) -print("所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!") - -# 一些普通功能模块 -from functional import get_functionals -functional = get_functionals() - -# 高级函数插件 -from functional_crazy import get_crazy_functionals -crazy_fns = get_crazy_functionals() - -# 处理markdown文本格式的转变 -gr.Chatbot.postprocess = format_io - -# 做一些外观色彩上的调整 -from theme import adjust_theme, advanced_css -set_theme = adjust_theme() - -cancel_handles = [] -with gr.Blocks(theme=set_theme, analytics_enabled=False, css=advanced_css) as demo: - gr.HTML(title_html) - # To add a Duplicate Space badge - gr.HTML('''
    Duplicate Space请您打开此页面后务必点击上方的“复制空间”(Duplicate Space)按钮!
    切忌在“复制空间”(Duplicate Space)之前填入API_KEY或进行提问,否则您的API_KEY将极可能被空间所有者攫取!
    ''') - - with gr.Row().style(equal_height=True): - with gr.Column(scale=2): - chatbot = gr.Chatbot() - chatbot.style(height=CHATBOT_HEIGHT) - history = gr.State([]) - with gr.Column(scale=1): - with gr.Row(): - api_key = gr.Textbox(show_label=False, placeholder="输入API_KEY,输入后自动生效.").style(container=False) - with gr.Row(): - txt = gr.Textbox(show_label=False, placeholder="输入问题.").style(container=False) - with gr.Row(): - submitBtn = gr.Button("提交", variant="primary") - with gr.Row(): - resetBtn = gr.Button("重置", variant="secondary"); resetBtn.style(size="sm") - stopBtn = gr.Button("停止", variant="secondary"); stopBtn.style(size="sm") - with gr.Row(): - from check_proxy import check_proxy - status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {check_proxy(proxies)}") - with gr.Accordion("基础功能区", open=True) as area_basic_fn: - with gr.Row(): - for k in functional: - variant = functional[k]["Color"] if "Color" in functional[k] else "secondary" - functional[k]["Button"] = gr.Button(k, variant=variant) - with gr.Accordion("函数插件区", open=True) as area_crazy_fn: - with gr.Row(): - gr.Markdown("注意:以下“红颜色”标识的函数插件需从input区读取路径作为参数.") - with gr.Row(): - for k in crazy_fns: - if not crazy_fns[k].get("AsButton", True): continue - variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary" - crazy_fns[k]["Button"] = gr.Button(k, variant=variant) - with gr.Row(): - with gr.Accordion("更多函数插件", open=True): - dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)] - with gr.Column(scale=1): - dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False) - with gr.Column(scale=1): - switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary") - with gr.Row(): - with gr.Accordion("点击展开“文件上传区”。上传本地文件可供红色函数插件调用。", open=False) as area_file_up: - file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple") - with gr.Accordion("展开SysPrompt & 交互界面布局 & Github地址", open=False): - system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt) - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",) - checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区") - gr.Markdown(description) - # 功能区显示开关与功能区的互动 - def fn_area_visibility(a): - ret = {} - ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))}) - ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))}) - return ret - checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn] ) - # 整理反复出现的控件句柄组合 - input_combo = [txt, top_p, api_key, temperature, chatbot, history, system_prompt] - output_combo = [chatbot, history, status] - predict_args = dict(fn=predict, inputs=input_combo, outputs=output_combo) - empty_txt_args = dict(fn=lambda: "", inputs=[], outputs=[txt]) # 用于在提交后清空输入栏 - # 提交按钮、重置按钮 - cancel_handles.append(txt.submit(**predict_args)) #; txt.submit(**empty_txt_args) 在提交后清空输入栏 - cancel_handles.append(submitBtn.click(**predict_args)) #; submitBtn.click(**empty_txt_args) 在提交后清空输入栏 - resetBtn.click(lambda: ([], [], "已重置"), None, output_combo) - # 基础功能区的回调函数注册 - for k in functional: - click_handle = functional[k]["Button"].click(predict, [*input_combo, gr.State(True), gr.State(k)], output_combo) - cancel_handles.append(click_handle) - # 
文件上传区,接收文件后与chatbot的互动 - file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt], [chatbot, txt]) - # 函数插件-固定按钮区 - for k in crazy_fns: - if not crazy_fns[k].get("AsButton", True): continue - click_handle = crazy_fns[k]["Button"].click(crazy_fns[k]["Function"], [*input_combo, gr.State(PORT)], output_combo) - click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot]) - cancel_handles.append(click_handle) - # 函数插件-下拉菜单与随变按钮的互动 - def on_dropdown_changed(k): - variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary" - return {switchy_bt: gr.update(value=k, variant=variant)} - dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt] ) - # 随变按钮的回调函数注册 - def route(k, *args, **kwargs): - if k in [r"打开插件列表", r"请先从插件列表中选择"]: return - yield from crazy_fns[k]["Function"](*args, **kwargs) - click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo) - click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot]) - # def expand_file_area(file_upload, area_file_up): - # if len(file_upload)>0: return {area_file_up: gr.update(open=True)} - # click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up]) - cancel_handles.append(click_handle) - # 终止按钮的回调函数注册 - stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles) - -# gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数 -def auto_opentab_delay(): - import threading, webbrowser, time - print(f"如果浏览器没有自动打开,请复制并转到以下URL: http://localhost:{PORT}") - def open(): - time.sleep(2) - webbrowser.open_new_tab(f"http://localhost:{PORT}") - threading.Thread(target=open, name="open-browser", daemon=True).start() - -auto_opentab_delay() -demo.title = "ChatGPT 学术优化" -demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False) diff --git a/spaces/joshen/gpt-academic/self_analysis.md b/spaces/joshen/gpt-academic/self_analysis.md deleted file mode 100644 index acfbd3e91b46738af42c4a4859b08570be59d485..0000000000000000000000000000000000000000 --- a/spaces/joshen/gpt-academic/self_analysis.md +++ /dev/null @@ -1,175 +0,0 @@ -# chatgpt-academic项目自译解报告 -(Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄) - -## [0/18] 程序摘要: functional_crazy.py - -这是一个功能扩展的程序,文件名为 `functional_crazy.py`。代码的主要功能是通过提供一系列函数插件,增强程序的功能,让用户可以通过界面中的按钮,快速调用对应的函数插件实现相应的操作。代码中使用了 `HotReload` 函数插件,可以在不重启程序的情况下更新函数插件的代码,让其生效。同时,通过 `UserVisibleLevel` 变量的设置,可以控制哪些插件会在UI界面显示出来。函数插件列表包括了以下功能:解析项目本身、解析一个Python项目、解析一个C++项目头文件、解析一个C++项目、读取文章并生成摘要、批量生成函数注释、全项目切换成英文、批量总结PDF文档、批量总结PDF文档pdfminer、批量总结Word文档、高阶功能模板函数、以及其他未经充分测试的函数插件。 - -## [1/18] 程序摘要: main.py - -该程序是一个基于Gradio构建的对话生成模型的Web界面示例,包含了以下主要功能: - -1.加载模型并对用户输入进行响应; -2.通过调用外部函数库来获取用户的输入,并在模型生成的过程中进行处理; -3.支持用户上传本地文件,供外部函数库调用; -4.支持停止当前的生成过程; -5.保存用户的历史记录,并将其记录在本地日志文件中,以供后续分析和使用。 - -该程序需要依赖于一些外部库和软件包,如Gradio、torch等。用户需要确保这些依赖项已经安装,并且在运行该程序前对config_private.py配置文件进行相应的修改。 - -## [2/18] 程序摘要: functional.py - -该文件定义了一个名为“functional”的函数,函数的作用是返回一个包含多个字典(键值对)的字典,每个键值对表示一种功能。该字典的键值由功能名称和对应的数据组成。其中的每个字典都包含4个键值对,分别为“Prefix”、“Suffix”、“Color”和“PreProcess”,分别表示前缀、后缀、按钮颜色和预处理函数。如果某些键值对没有给出,那么程序中默认相应的值,如按钮颜色默认为“secondary”等。每个功能描述了不同的学术润色/翻译/其他服务,如“英语学术润色”、“中文学术润色”、“查找语法错误”等。函数还引用了一个名为“clear_line_break”的函数,用于预处理修改前的文本。 - -## [3/18] 程序摘要: show_math.py - 
-该程序文件名为show_math.py,主要用途是将Markdown和LaTeX混合格式转换成带有MathML的HTML格式。该程序通过递归地处理LaTeX和Markdown混合段落逐一转换成HTML/MathML标记出来,并在LaTeX公式创建中进行错误处理。在程序文件中定义了3个变量,分别是incomplete,convError和convert,其中convert函数是用来执行转换的主要函数。程序使用正则表达式进行LaTeX格式和Markdown段落的分割,从而实现转换。如果在Latex转换过程中发生错误,程序将输出相应的错误信息。 - -## [4/18] 程序摘要: predict.py - -本程序文件的文件名为"./predict.py",主要包含三个函数: - -1. predict:正常对话时使用,具备完备的交互功能,不可多线程; -2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑; -3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程。 - -其中,predict函数用于基础的对话功能,发送至chatGPT,流式获取输出,根据点击的哪个按钮,进行对话预处理等额外操作;predict_no_ui函数用于payload比较大的情况,或者用于实现多线、带嵌套的复杂功能;predict_no_ui_long_connection实现调用predict_no_ui处理长文档时,避免连接断掉的情况,支持多线程。 - -## [5/18] 程序摘要: check_proxy.py - -该程序文件名为check_proxy.py,主要功能是检查代理服务器的可用性并返回代理服务器的地理位置信息或错误提示。具体实现方式如下: - -首先使用requests模块向指定网站(https://ipapi.co/json/)发送GET请求,请求结果以JSON格式返回。如果代理服务器参数(proxies)是有效的且没有指明'https'代理,则用默认字典值'无'替代。 - -然后,程序会解析返回的JSON数据,并根据数据中是否包含国家名字字段来判断代理服务器的地理位置。如果有国家名字字段,则将其打印出来并返回代理服务器的相关信息。如果没有国家名字字段,但有错误信息字段,则返回其他错误提示信息。 - -在程序执行前,程序会先设置环境变量no_proxy,并使用toolbox模块中的get_conf函数从配置文件中读取代理参数。 - -最后,检测程序会输出检查结果并返回对应的结果字符串。 - -## [6/18] 程序摘要: config_private.py - -本程序文件名为`config_private.py`,其功能为配置私有信息以便在主程序中使用。主要功能包括: - -- 配置OpenAI API的密钥和API URL -- 配置是否使用代理,如果使用代理配置代理地址和端口 -- 配置发送请求的超时时间和失败重试次数的限制 -- 配置并行使用线程数和用户名密码 -- 提供检查功能以确保API密钥已经正确设置 - -其中,需要特别注意的是:最后一个检查功能要求在运行之前必须将API密钥正确设置,否则程序会直接退出。 - -## [7/18] 程序摘要: config.py - -该程序文件是一个配置文件,用于配置OpenAI的API参数和优化体验的相关参数,具体包括以下几个步骤: - -1.设置OpenAI的API密钥。 - -2.选择是否使用代理,如果使用则需要设置代理地址和端口等参数。 - -3.设置请求OpenAI后的超时时间、网页的端口、重试次数、选择的OpenAI模型、API的网址等。 - -4.设置并行使用的线程数和用户名密码。 - -该程序文件的作用为在使用OpenAI API时进行相关参数的配置,以保证请求的正确性和速度,并且优化使用体验。 - -## [8/18] 程序摘要: theme.py - -该程序是一个自定义Gradio主题的Python模块。主题文件名为"./theme.py"。程序引入了Gradio模块,并定义了一个名为"adjust_theme()"的函数。该函数根据输入值调整Gradio的默认主题,返回一个包含所需自定义属性的主题对象。主题属性包括颜色、字体、过渡、阴影、按钮边框和渐变等。主题颜色列表包括石板色、灰色、锌色、中性色、石头色、红色、橙色、琥珀色、黄色、酸橙色、绿色、祖母绿、青蓝色、青色、天蓝色、蓝色、靛蓝色、紫罗兰色、紫色、洋红色、粉红色和玫瑰色。如果Gradio版本较旧,则不能自定义字体和颜色。 - -## [9/18] 程序摘要: toolbox.py - -该程序文件包含了一系列函数,用于实现聊天程序所需的各种功能,如预测对话、将对话记录写入文件、将普通文本转换为Markdown格式文本、装饰器函数CatchException和HotReload等。其中一些函数用到了第三方库,如Python-Markdown、mdtex2html、zipfile、tarfile、rarfile和py7zr。除此之外,还有一些辅助函数,如get_conf、clear_line_break和extract_archive等。主要功能包括: - -1. 导入markdown、mdtex2html、threading、functools等模块。 -2. 定义函数predict_no_ui_but_counting_down,用于生成对话。 -3. 定义函数write_results_to_file,用于将对话记录生成Markdown文件。 -4. 定义函数regular_txt_to_markdown,将普通文本转换为Markdown格式的文本。 -5. 定义装饰器函数CatchException,用于捕获函数执行异常并返回生成器。 -6. 定义函数report_execption,用于向chatbot中添加错误信息。 -7. 定义函数text_divide_paragraph,用于将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。 -8. 定义函数markdown_convertion,用于将Markdown格式的文本转换为HTML格式。 -9. 定义函数format_io,用于将输入和输出解析为HTML格式。 -10. 定义函数find_free_port,用于返回当前系统中可用的未使用端口。 -11. 定义函数extract_archive,用于解压归档文件。 -12. 定义函数find_recent_files,用于查找最近创建的文件。 -13. 定义函数on_file_uploaded,用于处理上传文件的操作。 -14. 
定义函数on_report_generated,用于处理生成报告文件的操作。 - - -## [10/18] 程序摘要: crazy_functions/生成函数注释.py - -该程序文件是一个Python脚本,文件名为“生成函数注释.py”,位于“./crazy_functions/”目录下。该程序实现了一个批量生成函数注释的功能,可以对指定文件夹下的所有Python和C++源代码文件中的所有函数进行注释,使用Markdown表格输出注释结果。 - -该程序引用了predict.py和toolbox.py两个模块,其中predict.py实现了一个基于GPT模型的文本生成功能,用于生成函数注释,而toolbox.py实现了一些工具函数,包括异常处理函数、文本写入函数等。另外,该程序还定义了两个函数,一个是“生成函数注释”函数,用于处理单个文件的注释生成;另一个是“批量生成函数注释”函数,用于批量处理多个文件的注释生成。 - -## [11/18] 程序摘要: crazy_functions/读文章写摘要.py - -这个程序文件是一个名为“读文章写摘要”的函数。该函数的输入包括文章的文本内容、top_p(生成文本时选择最可能的词语的概率阈值)、temperature(控制生成文本的随机性的因子)、对话历史等参数,以及一个聊天机器人和一个系统提示的文本。该函数的主要工作是解析一组.tex文件,然后生成一段学术性语言的中文和英文摘要。在解析过程中,该函数使用一个名为“toolbox”的模块中的辅助函数和一个名为“predict”的模块中的函数来执行GPT-2模型的推理工作,然后将结果返回给聊天机器人。另外,该程序还包括一个名为“fast_debug”的bool型变量,用于调试和测试。 - -## [12/18] 程序摘要: crazy_functions/代码重写为全英文_多线程.py - -该程序文件实现了一个多线程操作,用于将指定目录下的所有 Python 文件中的中文转化为英文,并将转化后的文件存入另一个目录中。具体实现过程如下: - -1. 集合目标文件路径并清空历史记录。 -2. 循环目标文件,对每个文件启动一个线程进行任务操作。 -3. 各个线程同时开始执行任务函数,并在任务完成后将转化后的文件写入指定目录,最终生成一份任务执行报告。 - -## [13/18] 程序摘要: crazy_functions/高级功能函数模板.py - -该程序文件名为高级功能函数模板.py,它包含了一个名为“高阶功能模板函数”的函数,这个函数可以作为开发新功能函数的模板。该函数引用了predict.py和toolbox.py文件中的函数。在该函数内部,它首先清空了历史记录,然后对于今天和今天以后的四天,它问用户历史中哪些事件发生在这些日期,并列举两条事件并发送相关的图片。在向用户询问问题时,使用了GPT进行响应。由于请求GPT需要一定的时间,所以函数会在重新显示状态之前等待一段时间。在每次与用户的互动中,使用yield关键字生成器函数来输出聊天机器人的当前状态,包括聊天消息、历史记录和状态('正常')。最后,程序调用write_results_to_file函数将聊天的结果写入文件,以供后续的评估和分析。 - -## [14/18] 程序摘要: crazy_functions/总结word文档.py - -该程序文件名为总结word文档.py,主要功能是批量总结Word文档。具体实现过程是解析docx格式和doc格式文件,生成文件内容,然后使用自然语言处理工具对文章内容做中英文概述,最后给出建议。该程序需要依赖python-docx和pywin32,如果没有安装,会给出安装建议。 - -## [15/18] 程序摘要: crazy_functions/批量总结PDF文档pdfminer.py - -该程序文件名为pdfminer.py,位于./crazy_functions/目录下。程序实现了批量读取PDF文件,并使用pdfminer解析PDF文件内容。此外,程序还根据解析得到的文本内容,调用机器学习模型生成对每篇文章的概述,最终生成全文摘要。程序中还对模块依赖进行了导入检查,若缺少依赖,则会提供安装建议。 - -## [16/18] 程序摘要: crazy_functions/解析项目源代码.py - -这个程序文件中包含了几个函数,分别是: - -1. `解析源代码(file_manifest, project_folder, top_p, api_key, temperature, chatbot, history, systemPromptTxt)`:通过输入文件路径列表对程序文件进行逐文件分析,根据分析结果做出整体功能和构架的概括,并生成包括每个文件功能的markdown表格。 -2. `解析项目本身(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对当前文件夹下的所有Python文件及其子文件夹进行逐文件分析,并生成markdown表格。 -3. `解析一个Python项目(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有Python文件及其子文件夹进行逐文件分析,并生成markdown表格。 -4. `解析一个C项目的头文件(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有头文件进行逐文件分析,并生成markdown表格。 -5. 
`解析一个C项目(txt, top_p, api_key, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有.h、.cpp、.c文件及其子文件夹进行逐文件分析,并生成markdown表格。 - -程序中还包含了一些辅助函数和变量,如CatchException装饰器函数,report_execption函数、write_results_to_file函数等。在执行过程中还会调用其他模块中的函数,如toolbox模块的函数和predict模块的函数。 - -## [17/18] 程序摘要: crazy_functions/批量总结PDF文档.py - -这个程序文件是一个名为“批量总结PDF文档”的函数插件。它导入了predict和toolbox模块,并定义了一些函数,包括is_paragraph_break,normalize_text和clean_text。这些函数是对输入文本进行预处理和清洗的功能函数。主要的功能函数是解析PDF,它打开每个PDF文件并将其内容存储在file_content变量中,然后传递给聊天机器人,以产生一句话的概括。在解析PDF文件之后,该函数连接了所有文件的摘要,以产生一段学术语言和英文摘要。最后,函数批量处理目标文件夹中的所有PDF文件,并输出结果。 - -## 根据以上你自己的分析,对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。 - -该程序是一个聊天机器人,使用了OpenAI的GPT语言模型以及一些特殊的辅助功能去处理各种学术写作和科研润色任务。整个程序由一些函数组成,每个函数都代表了不同的学术润色/翻译/其他服务。 - -下面是程序中每个文件的功能列表: - -| 文件名 | 功能 | -|--------|--------| -| functional_crazy.py | 实现高级功能函数模板和其他一些辅助功能函数 | -| main.py | 程序的主要入口,负责程序的启动和UI的展示 | -| functional.py | 定义各种功能按钮的颜色和响应函数 | -| show_math.py | 解析LaTeX文本,将其转换为Markdown格式 | -| predict.py | 基础的对话功能,用于与chatGPT进行交互 | -| check_proxy.py | 检查代理设置的正确性 | -| config_private.py | 配置程序的API密钥和其他私有信息 | -| config.py | 配置OpenAI的API参数和程序的其他属性 | -| theme.py | 设置程序主题样式 | -| toolbox.py | 存放一些辅助函数供程序使用 | -| crazy_functions/生成函数注释.py | 生成Python文件中所有函数的注释 | -| crazy_functions/读文章写摘要.py | 解析文章文本,生成中英文摘要 | -| crazy_functions/代码重写为全英文_多线程.py | 将中文代码内容转化为英文 | -| crazy_functions/高级功能函数模板.py | 实现高级功能函数模板 | -| crazy_functions/总结word文档.py | 解析Word文件,生成文章内容的概要 | -| crazy_functions/批量总结PDF文档pdfminer.py | 解析PDF文件,生成文章内容的概要(使用pdfminer库) | -| crazy_functions/批量总结PDF文档.py | 解析PDF文件,生成文章内容的概要(使用PyMuPDF库) | -| crazy_functions/解析项目源代码.py | 解析C/C++源代码,生成markdown表格 | -| crazy_functions/批量总结PDF文档.py | 对PDF文件进行批量摘要生成 | - -总的来说,该程序提供了一系列的学术润色和翻译的工具,支持对各种类型的文件进行分析和处理。同时也提供了对话式用户界面,便于用户使用和交互。 - diff --git a/spaces/jvcanavarro/emotion-recognition/README.md b/spaces/jvcanavarro/emotion-recognition/README.md deleted file mode 100644 index c7c700b283b67b1810d49b40953002c44740df40..0000000000000000000000000000000000000000 --- a/spaces/jvcanavarro/emotion-recognition/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Emotion Recognition -emoji: 🐠 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kazuk/youtube-whisper-06/README.md b/spaces/kazuk/youtube-whisper-06/README.md deleted file mode 100644 index c3180680339155aaf1d27f629129b68d12cac021..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-06/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kdrkdrkdr/AzusaTTS/export_model.py b/spaces/kdrkdrkdr/AzusaTTS/export_model.py deleted file mode 100644 index 98a49835df5a7a2486e76ddf94fbbb4444b52203..0000000000000000000000000000000000000000 --- a/spaces/kdrkdrkdr/AzusaTTS/export_model.py +++ /dev/null @@ -1,13 +0,0 @@ -import torch - -if __name__ == '__main__': - model_path = "saved_model/11/model.pth" - output_path = "saved_model/11/model1.pth" - checkpoint_dict = torch.load(model_path, map_location='cpu') - checkpoint_dict_new = {} - for k, v in checkpoint_dict.items(): - if k == "optimizer": - print("remove optimizer") - continue - 
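        # every non-optimizer entry is copied over, so the re-saved checkpoint
        # stays loadable for inference while dropping the bulky optimizer state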
checkpoint_dict_new[k] = v - torch.save(checkpoint_dict_new, output_path) diff --git a/spaces/keras-io/semi-supervised-classification/README.md b/spaces/keras-io/semi-supervised-classification/README.md deleted file mode 100644 index 896bdbc1139db397ef4bee801065a372cfeb38a9..0000000000000000000000000000000000000000 --- a/spaces/keras-io/semi-supervised-classification/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Semi-Supervised Contrastive Learning with SimCLR -emoji: 👨‍🏫 -colorFrom: red -colorTo: green -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Configuration - -`title`: _string_ -Supervised Contrastive Learning - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
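For context on how these front-matter fields are consumed: with `sdk: gradio` and `app_file: app.py`, the Space runtime simply runs `app.py`, which is expected to build and launch a gradio app. The sketch below is a generic placeholder under that assumption — it is not this space's actual demo (the real one serves a SimCLR-based semi-supervised classifier), and the `classify` stub is invented for illustration.

```python
# Generic placeholder app.py for a gradio Space (illustrative stub only).
import gradio as gr

def classify(text: str) -> str:
    # stand-in for real model inference
    return f"received {len(text)} characters"

demo = gr.Interface(
    fn=classify,
    inputs="text",
    outputs="text",
    title="Semi-Supervised Contrastive Learning with SimCLR",
)

if __name__ == "__main__":
    demo.launch()
```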
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py deleted file mode 100644 index 93d0701c0094517cec147c382b005e8063938548..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r100.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "cosface" -config.network = "r100" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/glint360k" -config.num_classes = 360232 -config.num_image = 17091657 -config.num_epoch = 20 -config.warmup_epoch = -1 -config.decay_epoch = [8, 12, 15, 18] -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/kevinwang676/VoiceChanger/src/face3d/util/my_awing_arch.py b/spaces/kevinwang676/VoiceChanger/src/face3d/util/my_awing_arch.py deleted file mode 100644 index cd5656177dc5a1dde82ffee5d43434bc5e69c88e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChanger/src/face3d/util/my_awing_arch.py +++ /dev/null @@ -1,378 +0,0 @@ -import cv2 -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def calculate_points(heatmaps): - # change heatmaps to landmarks - B, N, H, W = heatmaps.shape - HW = H * W - BN_range = np.arange(B * N) - - heatline = heatmaps.reshape(B, N, HW) - indexes = np.argmax(heatline, axis=2) - - preds = np.stack((indexes % W, indexes // W), axis=2) - preds = preds.astype(np.float, copy=False) - - inr = indexes.ravel() - - heatline = heatline.reshape(B * N, HW) - x_up = heatline[BN_range, inr + 1] - x_down = heatline[BN_range, inr - 1] - # y_up = heatline[BN_range, inr + W] - - if any((inr + W) >= 4096): - y_up = heatline[BN_range, 4095] - else: - y_up = heatline[BN_range, inr + W] - if any((inr - W) <= 0): - y_down = heatline[BN_range, 0] - else: - y_down = heatline[BN_range, inr - W] - - think_diff = np.sign(np.stack((x_up - x_down, y_up - y_down), axis=1)) - think_diff *= .25 - - preds += think_diff.reshape(B, N, 2) - preds += .5 - return preds - - -class AddCoordsTh(nn.Module): - - def __init__(self, x_dim=64, y_dim=64, with_r=False, with_boundary=False): - super(AddCoordsTh, self).__init__() - self.x_dim = x_dim - self.y_dim = y_dim - self.with_r = with_r - self.with_boundary = with_boundary - - def forward(self, input_tensor, heatmap=None): - """ - input_tensor: (batch, c, x_dim, y_dim) - """ - batch_size_tensor = input_tensor.shape[0] - - xx_ones = torch.ones([1, self.y_dim], dtype=torch.int32, device=input_tensor.device) - xx_ones = xx_ones.unsqueeze(-1) - - xx_range = torch.arange(self.x_dim, dtype=torch.int32, device=input_tensor.device).unsqueeze(0) - xx_range = xx_range.unsqueeze(1) - - xx_channel = torch.matmul(xx_ones.float(), xx_range.float()) - xx_channel = xx_channel.unsqueeze(-1) - - yy_ones = torch.ones([1, self.x_dim], dtype=torch.int32, device=input_tensor.device) - yy_ones = yy_ones.unsqueeze(1) - - yy_range = torch.arange(self.y_dim, dtype=torch.int32, device=input_tensor.device).unsqueeze(0) - yy_range = yy_range.unsqueeze(-1) - - yy_channel = 
torch.matmul(yy_range.float(), yy_ones.float()) - yy_channel = yy_channel.unsqueeze(-1) - - xx_channel = xx_channel.permute(0, 3, 2, 1) - yy_channel = yy_channel.permute(0, 3, 2, 1) - - xx_channel = xx_channel / (self.x_dim - 1) - yy_channel = yy_channel / (self.y_dim - 1) - - xx_channel = xx_channel * 2 - 1 - yy_channel = yy_channel * 2 - 1 - - xx_channel = xx_channel.repeat(batch_size_tensor, 1, 1, 1) - yy_channel = yy_channel.repeat(batch_size_tensor, 1, 1, 1) - - if self.with_boundary and heatmap is not None: - boundary_channel = torch.clamp(heatmap[:, -1:, :, :], 0.0, 1.0) - - zero_tensor = torch.zeros_like(xx_channel) - xx_boundary_channel = torch.where(boundary_channel > 0.05, xx_channel, zero_tensor) - yy_boundary_channel = torch.where(boundary_channel > 0.05, yy_channel, zero_tensor) - if self.with_boundary and heatmap is not None: - xx_boundary_channel = xx_boundary_channel.to(input_tensor.device) - yy_boundary_channel = yy_boundary_channel.to(input_tensor.device) - ret = torch.cat([input_tensor, xx_channel, yy_channel], dim=1) - - if self.with_r: - rr = torch.sqrt(torch.pow(xx_channel, 2) + torch.pow(yy_channel, 2)) - rr = rr / torch.max(rr) - ret = torch.cat([ret, rr], dim=1) - - if self.with_boundary and heatmap is not None: - ret = torch.cat([ret, xx_boundary_channel, yy_boundary_channel], dim=1) - return ret - - -class CoordConvTh(nn.Module): - """CoordConv layer as in the paper.""" - - def __init__(self, x_dim, y_dim, with_r, with_boundary, in_channels, first_one=False, *args, **kwargs): - super(CoordConvTh, self).__init__() - self.addcoords = AddCoordsTh(x_dim=x_dim, y_dim=y_dim, with_r=with_r, with_boundary=with_boundary) - in_channels += 2 - if with_r: - in_channels += 1 - if with_boundary and not first_one: - in_channels += 2 - self.conv = nn.Conv2d(in_channels=in_channels, *args, **kwargs) - - def forward(self, input_tensor, heatmap=None): - ret = self.addcoords(input_tensor, heatmap) - last_channel = ret[:, -2:, :, :] - ret = self.conv(ret) - return ret, last_channel - - -def conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False, dilation=1): - '3x3 convolution with padding' - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=strd, padding=padding, bias=bias, dilation=dilation) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - # self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - # self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.relu(out) - - out = self.conv2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ConvBlock(nn.Module): - - def __init__(self, in_planes, out_planes): - super(ConvBlock, self).__init__() - self.bn1 = nn.BatchNorm2d(in_planes) - self.conv1 = conv3x3(in_planes, int(out_planes / 2)) - self.bn2 = nn.BatchNorm2d(int(out_planes / 2)) - self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4), padding=1, dilation=1) - self.bn3 = nn.BatchNorm2d(int(out_planes / 4)) - self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4), padding=1, dilation=1) - - if in_planes != out_planes: - self.downsample = nn.Sequential( - nn.BatchNorm2d(in_planes), - nn.ReLU(True), - nn.Conv2d(in_planes, out_planes, kernel_size=1, 
stride=1, bias=False), - ) - else: - self.downsample = None - - def forward(self, x): - residual = x - - out1 = self.bn1(x) - out1 = F.relu(out1, True) - out1 = self.conv1(out1) - - out2 = self.bn2(out1) - out2 = F.relu(out2, True) - out2 = self.conv2(out2) - - out3 = self.bn3(out2) - out3 = F.relu(out3, True) - out3 = self.conv3(out3) - - out3 = torch.cat((out1, out2, out3), 1) - - if self.downsample is not None: - residual = self.downsample(residual) - - out3 += residual - - return out3 - - -class HourGlass(nn.Module): - - def __init__(self, num_modules, depth, num_features, first_one=False): - super(HourGlass, self).__init__() - self.num_modules = num_modules - self.depth = depth - self.features = num_features - self.coordconv = CoordConvTh( - x_dim=64, - y_dim=64, - with_r=True, - with_boundary=True, - in_channels=256, - first_one=first_one, - out_channels=256, - kernel_size=1, - stride=1, - padding=0) - self._generate_network(self.depth) - - def _generate_network(self, level): - self.add_module('b1_' + str(level), ConvBlock(256, 256)) - - self.add_module('b2_' + str(level), ConvBlock(256, 256)) - - if level > 1: - self._generate_network(level - 1) - else: - self.add_module('b2_plus_' + str(level), ConvBlock(256, 256)) - - self.add_module('b3_' + str(level), ConvBlock(256, 256)) - - def _forward(self, level, inp): - # Upper branch - up1 = inp - up1 = self._modules['b1_' + str(level)](up1) - - # Lower branch - low1 = F.avg_pool2d(inp, 2, stride=2) - low1 = self._modules['b2_' + str(level)](low1) - - if level > 1: - low2 = self._forward(level - 1, low1) - else: - low2 = low1 - low2 = self._modules['b2_plus_' + str(level)](low2) - - low3 = low2 - low3 = self._modules['b3_' + str(level)](low3) - - up2 = F.interpolate(low3, scale_factor=2, mode='nearest') - - return up1 + up2 - - def forward(self, x, heatmap): - x, last_channel = self.coordconv(x, heatmap) - return self._forward(self.depth, x), last_channel - - -class FAN(nn.Module): - - def __init__(self, num_modules=1, end_relu=False, gray_scale=False, num_landmarks=68, device='cuda'): - super(FAN, self).__init__() - self.device = device - self.num_modules = num_modules - self.gray_scale = gray_scale - self.end_relu = end_relu - self.num_landmarks = num_landmarks - - # Base part - if self.gray_scale: - self.conv1 = CoordConvTh( - x_dim=256, - y_dim=256, - with_r=True, - with_boundary=False, - in_channels=3, - out_channels=64, - kernel_size=7, - stride=2, - padding=3) - else: - self.conv1 = CoordConvTh( - x_dim=256, - y_dim=256, - with_r=True, - with_boundary=False, - in_channels=3, - out_channels=64, - kernel_size=7, - stride=2, - padding=3) - self.bn1 = nn.BatchNorm2d(64) - self.conv2 = ConvBlock(64, 128) - self.conv3 = ConvBlock(128, 128) - self.conv4 = ConvBlock(128, 256) - - # Stacking part - for hg_module in range(self.num_modules): - if hg_module == 0: - first_one = True - else: - first_one = False - self.add_module('m' + str(hg_module), HourGlass(1, 4, 256, first_one)) - self.add_module('top_m_' + str(hg_module), ConvBlock(256, 256)) - self.add_module('conv_last' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256)) - self.add_module('l' + str(hg_module), nn.Conv2d(256, num_landmarks + 1, kernel_size=1, stride=1, padding=0)) - - if hg_module < self.num_modules - 1: - self.add_module('bl' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module('al' + str(hg_module), - nn.Conv2d(num_landmarks + 1, 256, 
kernel_size=1, stride=1, padding=0)) - - def forward(self, x): - x, _ = self.conv1(x) - x = F.relu(self.bn1(x), True) - # x = F.relu(self.bn1(self.conv1(x)), True) - x = F.avg_pool2d(self.conv2(x), 2, stride=2) - x = self.conv3(x) - x = self.conv4(x) - - previous = x - - outputs = [] - boundary_channels = [] - tmp_out = None - for i in range(self.num_modules): - hg, boundary_channel = self._modules['m' + str(i)](previous, tmp_out) - - ll = hg - ll = self._modules['top_m_' + str(i)](ll) - - ll = F.relu(self._modules['bn_end' + str(i)](self._modules['conv_last' + str(i)](ll)), True) - - # Predict heatmaps - tmp_out = self._modules['l' + str(i)](ll) - if self.end_relu: - tmp_out = F.relu(tmp_out) # HACK: Added relu - outputs.append(tmp_out) - boundary_channels.append(boundary_channel) - - if i < self.num_modules - 1: - ll = self._modules['bl' + str(i)](ll) - tmp_out_ = self._modules['al' + str(i)](tmp_out) - previous = previous + ll + tmp_out_ - - return outputs, boundary_channels - - def get_landmarks(self, img): - H, W, _ = img.shape - offset = W / 64, H / 64, 0, 0 - - img = cv2.resize(img, (256, 256)) - inp = img[..., ::-1] - inp = torch.from_numpy(np.ascontiguousarray(inp.transpose((2, 0, 1)))).float() - inp = inp.to(self.device) - inp.div_(255.0).unsqueeze_(0) - - outputs, _ = self.forward(inp) - out = outputs[-1][:, :-1, :, :] - heatmaps = out.detach().cpu().numpy() - - pred = calculate_points(heatmaps).reshape(-1, 2) - - pred *= offset[:2] - pred += offset[-2:] - - return pred diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/multilingual/data_scripts/utils/dedup.py b/spaces/koajoel/PolyFormer/fairseq/examples/multilingual/data_scripts/utils/dedup.py deleted file mode 100644 index d6fed8c695cf218d3502d6ed8d23015520c0e179..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/multilingual/data_scripts/utils/dedup.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
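# dedup.py: drop exact duplicate (source, target) sentence pairs from a parallel
# corpus, keeping only the first occurrence of each pair and reporting the count
# of discarded duplicates.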
- - -import argparse - -def deup(src_file, tgt_file, src_file_out, tgt_file_out): - seen = set() - dup_count = 0 - with open(src_file, encoding='utf-8') as fsrc, \ - open(tgt_file, encoding='utf-8') as ftgt, \ - open(src_file_out, 'w', encoding='utf-8') as fsrc_out, \ - open(tgt_file_out, 'w', encoding='utf-8') as ftgt_out: - for s, t in zip(fsrc, ftgt): - if (s, t) not in seen: - fsrc_out.write(s) - ftgt_out.write(t) - seen.add((s, t)) - else: - dup_count += 1 - print(f'number of duplication: {dup_count}') - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--src-file", type=str, required=True, - help="src file") - parser.add_argument("--tgt-file", type=str, required=True, - help="tgt file") - parser.add_argument("--src-file-out", type=str, required=True, - help="src ouptut file") - parser.add_argument("--tgt-file-out", type=str, required=True, - help="tgt ouput file") - args = parser.parse_args() - deup(args.src_file, args.tgt_file, args.src_file_out, args.tgt_file_out) - - -if __name__ == "__main__": - main() diff --git a/spaces/kquote03/lama-video-watermark-remover/bin/filter_sharded_dataset.py b/spaces/kquote03/lama-video-watermark-remover/bin/filter_sharded_dataset.py deleted file mode 100644 index b3c2b490e88bb3b55c6bb717e08f97f7a396d5fa..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/bin/filter_sharded_dataset.py +++ /dev/null @@ -1,69 +0,0 @@ -#!/usr/bin/env python3 - - -import math -import os -import random - -import braceexpand -import webdataset as wds - -DEFAULT_CATS_FILE = os.path.join(os.path.dirname(__file__), '..', 'configs', 'places2-categories_157.txt') - -def is_good_key(key, cats): - return any(c in key for c in cats) - - -def main(args): - if args.categories == 'nofilter': - good_categories = None - else: - with open(args.categories, 'r') as f: - good_categories = set(line.strip().split(' ')[0] for line in f if line.strip()) - - all_input_files = list(braceexpand.braceexpand(args.infile)) - chunk_size = int(math.ceil(len(all_input_files) / args.n_read_streams)) - - input_iterators = [iter(wds.Dataset(all_input_files[start : start + chunk_size]).shuffle(args.shuffle_buffer)) - for start in range(0, len(all_input_files), chunk_size)] - output_datasets = [wds.ShardWriter(args.outpattern.format(i)) for i in range(args.n_write_streams)] - - good_readers = list(range(len(input_iterators))) - step_i = 0 - good_samples = 0 - bad_samples = 0 - while len(good_readers) > 0: - if step_i % args.print_freq == 0: - print(f'Iterations done {step_i}; readers alive {good_readers}; good samples {good_samples}; bad samples {bad_samples}') - - step_i += 1 - - ri = random.choice(good_readers) - try: - sample = next(input_iterators[ri]) - except StopIteration: - good_readers = list(set(good_readers) - {ri}) - continue - - if good_categories is not None and not is_good_key(sample['__key__'], good_categories): - bad_samples += 1 - continue - - wi = random.randint(0, args.n_write_streams - 1) - output_datasets[wi].write(sample) - good_samples += 1 - - -if __name__ == '__main__': - import argparse - - aparser = argparse.ArgumentParser() - aparser.add_argument('--categories', type=str, default=DEFAULT_CATS_FILE) - aparser.add_argument('--shuffle-buffer', type=int, default=10000) - aparser.add_argument('--n-read-streams', type=int, default=10) - aparser.add_argument('--n-write-streams', type=int, default=10) - aparser.add_argument('--print-freq', type=int, default=1000) - aparser.add_argument('infile', type=str) - 
aparser.add_argument('outpattern', type=str) - - main(aparser.parse_args()) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_compatibilty_files.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_compatibilty_files.py deleted file mode 100644 index 13ad0dfb21a1d5b7fb91f2419b78b9bdf90f0ec3..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/importlib_resources/tests/test_compatibilty_files.py +++ /dev/null @@ -1,104 +0,0 @@ -import io -import unittest - -import importlib_resources as resources - -from importlib_resources._adapters import ( - CompatibilityFiles, - wrap_spec, -) - -from . import util - - -class CompatibilityFilesTests(unittest.TestCase): - @property - def package(self): - bytes_data = io.BytesIO(b'Hello, world!') - return util.create_package( - file=bytes_data, - path='some_path', - contents=('a', 'b', 'c'), - ) - - @property - def files(self): - return resources.files(self.package) - - def test_spec_path_iter(self): - self.assertEqual( - sorted(path.name for path in self.files.iterdir()), - ['a', 'b', 'c'], - ) - - def test_child_path_iter(self): - self.assertEqual(list((self.files / 'a').iterdir()), []) - - def test_orphan_path_iter(self): - self.assertEqual(list((self.files / 'a' / 'a').iterdir()), []) - self.assertEqual(list((self.files / 'a' / 'a' / 'a').iterdir()), []) - - def test_spec_path_is(self): - self.assertFalse(self.files.is_file()) - self.assertFalse(self.files.is_dir()) - - def test_child_path_is(self): - self.assertTrue((self.files / 'a').is_file()) - self.assertFalse((self.files / 'a').is_dir()) - - def test_orphan_path_is(self): - self.assertFalse((self.files / 'a' / 'a').is_file()) - self.assertFalse((self.files / 'a' / 'a').is_dir()) - self.assertFalse((self.files / 'a' / 'a' / 'a').is_file()) - self.assertFalse((self.files / 'a' / 'a' / 'a').is_dir()) - - def test_spec_path_name(self): - self.assertEqual(self.files.name, 'testingpackage') - - def test_child_path_name(self): - self.assertEqual((self.files / 'a').name, 'a') - - def test_orphan_path_name(self): - self.assertEqual((self.files / 'a' / 'b').name, 'b') - self.assertEqual((self.files / 'a' / 'b' / 'c').name, 'c') - - def test_spec_path_open(self): - self.assertEqual(self.files.read_bytes(), b'Hello, world!') - self.assertEqual(self.files.read_text(encoding='utf-8'), 'Hello, world!') - - def test_child_path_open(self): - self.assertEqual((self.files / 'a').read_bytes(), b'Hello, world!') - self.assertEqual( - (self.files / 'a').read_text(encoding='utf-8'), 'Hello, world!' 
- ) - - def test_orphan_path_open(self): - with self.assertRaises(FileNotFoundError): - (self.files / 'a' / 'b').read_bytes() - with self.assertRaises(FileNotFoundError): - (self.files / 'a' / 'b' / 'c').read_bytes() - - def test_open_invalid_mode(self): - with self.assertRaises(ValueError): - self.files.open('0') - - def test_orphan_path_invalid(self): - with self.assertRaises(ValueError): - CompatibilityFiles.OrphanPath() - - def test_wrap_spec(self): - spec = wrap_spec(self.package) - self.assertIsInstance(spec.loader.get_resource_reader(None), CompatibilityFiles) - - -class CompatibilityFilesNoReaderTests(unittest.TestCase): - @property - def package(self): - return util.create_package_from_loader(None) - - @property - def files(self): - return resources.files(self.package) - - def test_spec_path_joinpath(self): - self.assertIsInstance(self.files / 'a', CompatibilityFiles.OrphanPath) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_agg.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_agg.py deleted file mode 100644 index 5285a24f01f6ee4cdff98cdc5c5de14387207d5e..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_agg.py +++ /dev/null @@ -1,338 +0,0 @@ -import io - -import numpy as np -from numpy.testing import assert_array_almost_equal -from PIL import Image, TiffTags -import pytest - - -from matplotlib import ( - collections, patheffects, pyplot as plt, transforms as mtransforms, - rcParams, rc_context) -from matplotlib.backends.backend_agg import RendererAgg -from matplotlib.figure import Figure -from matplotlib.image import imread -from matplotlib.path import Path -from matplotlib.testing.decorators import image_comparison -from matplotlib.transforms import IdentityTransform - - -def test_repeated_save_with_alpha(): - # We want an image which has a background color of bluish green, with an - # alpha of 0.25. - - fig = Figure([1, 0.4]) - fig.set_facecolor((0, 1, 0.4)) - fig.patch.set_alpha(0.25) - - # The target color is fig.patch.get_facecolor() - - buf = io.BytesIO() - - fig.savefig(buf, - facecolor=fig.get_facecolor(), - edgecolor='none') - - # Save the figure again to check that the - # colors don't bleed from the previous renderer. - buf.seek(0) - fig.savefig(buf, - facecolor=fig.get_facecolor(), - edgecolor='none') - - # Check the first pixel has the desired color & alpha - # (approx: 0, 1.0, 0.4, 0.25) - buf.seek(0) - assert_array_almost_equal(tuple(imread(buf)[0, 0]), - (0.0, 1.0, 0.4, 0.250), - decimal=3) - - -def test_large_single_path_collection(): - buff = io.BytesIO() - - # Generates a too-large single path in a path collection that - # would cause a segfault if the draw_markers optimization is - # applied. 
- f, ax = plt.subplots() - collection = collections.PathCollection( - [Path([[-10, 5], [10, 5], [10, -5], [-10, -5], [-10, 5]])]) - ax.add_artist(collection) - ax.set_xlim(10**-3, 1) - plt.savefig(buff) - - -def test_marker_with_nan(): - # This creates a marker with nans in it, which was segfaulting the - # Agg backend (see #3722) - fig, ax = plt.subplots(1) - steps = 1000 - data = np.arange(steps) - ax.semilogx(data) - ax.fill_between(data, data*0.8, data*1.2) - buf = io.BytesIO() - fig.savefig(buf, format='png') - - -def test_long_path(): - buff = io.BytesIO() - fig = Figure() - ax = fig.subplots() - points = np.ones(100_000) - points[::2] *= -1 - ax.plot(points) - fig.savefig(buff, format='png') - - -@image_comparison(['agg_filter.png'], remove_text=True) -def test_agg_filter(): - def smooth1d(x, window_len): - # copied from https://scipy-cookbook.readthedocs.io/ - s = np.r_[ - 2*x[0] - x[window_len:1:-1], x, 2*x[-1] - x[-1:-window_len:-1]] - w = np.hanning(window_len) - y = np.convolve(w/w.sum(), s, mode='same') - return y[window_len-1:-window_len+1] - - def smooth2d(A, sigma=3): - window_len = max(int(sigma), 3) * 2 + 1 - A = np.apply_along_axis(smooth1d, 0, A, window_len) - A = np.apply_along_axis(smooth1d, 1, A, window_len) - return A - - class BaseFilter: - - def get_pad(self, dpi): - return 0 - - def process_image(self, padded_src, dpi): - raise NotImplementedError("Should be overridden by subclasses") - - def __call__(self, im, dpi): - pad = self.get_pad(dpi) - padded_src = np.pad(im, [(pad, pad), (pad, pad), (0, 0)], - "constant") - tgt_image = self.process_image(padded_src, dpi) - return tgt_image, -pad, -pad - - class OffsetFilter(BaseFilter): - - def __init__(self, offsets=(0, 0)): - self.offsets = offsets - - def get_pad(self, dpi): - return int(max(self.offsets) / 72 * dpi) - - def process_image(self, padded_src, dpi): - ox, oy = self.offsets - a1 = np.roll(padded_src, int(ox / 72 * dpi), axis=1) - a2 = np.roll(a1, -int(oy / 72 * dpi), axis=0) - return a2 - - class GaussianFilter(BaseFilter): - """Simple Gaussian filter.""" - - def __init__(self, sigma, alpha=0.5, color=(0, 0, 0)): - self.sigma = sigma - self.alpha = alpha - self.color = color - - def get_pad(self, dpi): - return int(self.sigma*3 / 72 * dpi) - - def process_image(self, padded_src, dpi): - tgt_image = np.empty_like(padded_src) - tgt_image[:, :, :3] = self.color - tgt_image[:, :, 3] = smooth2d(padded_src[:, :, 3] * self.alpha, - self.sigma / 72 * dpi) - return tgt_image - - class DropShadowFilter(BaseFilter): - - def __init__(self, sigma, alpha=0.3, color=(0, 0, 0), offsets=(0, 0)): - self.gauss_filter = GaussianFilter(sigma, alpha, color) - self.offset_filter = OffsetFilter(offsets) - - def get_pad(self, dpi): - return max(self.gauss_filter.get_pad(dpi), - self.offset_filter.get_pad(dpi)) - - def process_image(self, padded_src, dpi): - t1 = self.gauss_filter.process_image(padded_src, dpi) - t2 = self.offset_filter.process_image(t1, dpi) - return t2 - - fig, ax = plt.subplots() - - # draw lines - line1, = ax.plot([0.1, 0.5, 0.9], [0.1, 0.9, 0.5], "bo-", - mec="b", mfc="w", lw=5, mew=3, ms=10, label="Line 1") - line2, = ax.plot([0.1, 0.5, 0.9], [0.5, 0.2, 0.7], "ro-", - mec="r", mfc="w", lw=5, mew=3, ms=10, label="Line 1") - - gauss = DropShadowFilter(4) - - for line in [line1, line2]: - - # draw shadows with same lines with slight offset. 
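        # (each shadow copies the original line's style, is shifted by a few points
        # via offset_copy, placed just below the line with a lower zorder, and run
        # through the DropShadowFilter agg filter)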
- xx = line.get_xdata() - yy = line.get_ydata() - shadow, = ax.plot(xx, yy) - shadow.update_from(line) - - # offset transform - transform = mtransforms.offset_copy(line.get_transform(), ax.figure, - x=4.0, y=-6.0, units='points') - shadow.set_transform(transform) - - # adjust zorder of the shadow lines so that it is drawn below the - # original lines - shadow.set_zorder(line.get_zorder() - 0.5) - shadow.set_agg_filter(gauss) - shadow.set_rasterized(True) # to support mixed-mode renderers - - ax.set_xlim(0., 1.) - ax.set_ylim(0., 1.) - - ax.xaxis.set_visible(False) - ax.yaxis.set_visible(False) - - -def test_too_large_image(): - fig = plt.figure(figsize=(300, 1000)) - buff = io.BytesIO() - with pytest.raises(ValueError): - fig.savefig(buff) - - -def test_chunksize(): - x = range(200) - - # Test without chunksize - fig, ax = plt.subplots() - ax.plot(x, np.sin(x)) - fig.canvas.draw() - - # Test with chunksize - fig, ax = plt.subplots() - rcParams['agg.path.chunksize'] = 105 - ax.plot(x, np.sin(x)) - fig.canvas.draw() - - -@pytest.mark.backend('Agg') -def test_jpeg_dpi(): - # Check that dpi is set correctly in jpg files. - plt.plot([0, 1, 2], [0, 1, 0]) - buf = io.BytesIO() - plt.savefig(buf, format="jpg", dpi=200) - im = Image.open(buf) - assert im.info['dpi'] == (200, 200) - - -def test_pil_kwargs_png(): - from PIL.PngImagePlugin import PngInfo - buf = io.BytesIO() - pnginfo = PngInfo() - pnginfo.add_text("Software", "test") - plt.figure().savefig(buf, format="png", pil_kwargs={"pnginfo": pnginfo}) - im = Image.open(buf) - assert im.info["Software"] == "test" - - -def test_pil_kwargs_tiff(): - buf = io.BytesIO() - pil_kwargs = {"description": "test image"} - plt.figure().savefig(buf, format="tiff", pil_kwargs=pil_kwargs) - im = Image.open(buf) - tags = {TiffTags.TAGS_V2[k].name: v for k, v in im.tag_v2.items()} - assert tags["ImageDescription"] == "test image" - - -def test_pil_kwargs_webp(): - plt.plot([0, 1, 2], [0, 1, 0]) - buf_small = io.BytesIO() - pil_kwargs_low = {"quality": 1} - plt.savefig(buf_small, format="webp", pil_kwargs=pil_kwargs_low) - assert len(pil_kwargs_low) == 1 - buf_large = io.BytesIO() - pil_kwargs_high = {"quality": 100} - plt.savefig(buf_large, format="webp", pil_kwargs=pil_kwargs_high) - assert len(pil_kwargs_high) == 1 - assert buf_large.getbuffer().nbytes > buf_small.getbuffer().nbytes - - -def test_webp_alpha(): - plt.plot([0, 1, 2], [0, 1, 0]) - buf = io.BytesIO() - plt.savefig(buf, format="webp", transparent=True) - im = Image.open(buf) - assert im.mode == "RGBA" - - -def test_draw_path_collection_error_handling(): - fig, ax = plt.subplots() - ax.scatter([1], [1]).set_paths(Path([(0, 1), (2, 3)])) - with pytest.raises(TypeError): - fig.canvas.draw() - - -def test_chunksize_fails(): - # NOTE: This test covers multiple independent test scenarios in a single - # function, because each scenario uses ~2GB of memory and we don't - # want parallel test executors to accidentally run multiple of these - # at the same time. 
- - N = 100_000 - dpi = 500 - w = 5*dpi - h = 6*dpi - - # make a Path that spans the whole w-h rectangle - x = np.linspace(0, w, N) - y = np.ones(N) * h - y[::2] = 0 - path = Path(np.vstack((x, y)).T) - # effectively disable path simplification (but leaving it "on") - path.simplify_threshold = 0 - - # setup the minimal GraphicsContext to draw a Path - ra = RendererAgg(w, h, dpi) - gc = ra.new_gc() - gc.set_linewidth(1) - gc.set_foreground('r') - - gc.set_hatch('/') - with pytest.raises(OverflowError, match='can not split hatched path'): - ra.draw_path(gc, path, IdentityTransform()) - gc.set_hatch(None) - - with pytest.raises(OverflowError, match='can not split filled path'): - ra.draw_path(gc, path, IdentityTransform(), (1, 0, 0)) - - # Set to zero to disable, currently defaults to 0, but let's be sure. - with rc_context({'agg.path.chunksize': 0}): - with pytest.raises(OverflowError, match='Please set'): - ra.draw_path(gc, path, IdentityTransform()) - - # Set big enough that we do not try to chunk. - with rc_context({'agg.path.chunksize': 1_000_000}): - with pytest.raises(OverflowError, match='Please reduce'): - ra.draw_path(gc, path, IdentityTransform()) - - # Small enough we will try to chunk, but big enough we will fail to render. - with rc_context({'agg.path.chunksize': 90_000}): - with pytest.raises(OverflowError, match='Please reduce'): - ra.draw_path(gc, path, IdentityTransform()) - - path.should_simplify = False - with pytest.raises(OverflowError, match="should_simplify is False"): - ra.draw_path(gc, path, IdentityTransform()) - - -def test_non_tuple_rgbaface(): - # This passes rgbaFace as a ndarray to draw_path. - fig = plt.figure() - fig.add_subplot(projection="3d").scatter( - [0, 1, 2], [0, 1, 2], path_effects=[patheffects.Stroke(linewidth=4)]) - fig.canvas.draw() diff --git a/spaces/leafShen/CodeFormer/CodeFormer/README.md b/spaces/leafShen/CodeFormer/CodeFormer/README.md deleted file mode 100644 index 65810cdf4ce36d8ba152de80df00fa4c8802ee81..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/README.md +++ /dev/null @@ -1,123 +0,0 @@ -

    - -

    - -## Towards Robust Blind Face Restoration with Codebook Lookup Transformer - -[Paper](https://arxiv.org/abs/2206.11253) | [Project Page](https://shangchenzhou.com/projects/CodeFormer/) | [Video](https://youtu.be/d3VDpkXlueI) - - -google colab logo [![Replicate](https://img.shields.io/badge/Demo-%F0%9F%9A%80%20Replicate-blue)](https://replicate.com/sczhou/codeformer) ![visitors](https://visitor-badge.glitch.me/badge?page_id=sczhou/CodeFormer) - -[Shangchen Zhou](https://shangchenzhou.com/), [Kelvin C.K. Chan](https://ckkelvinchan.github.io/), [Chongyi Li](https://li-chongyi.github.io/), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/) - -S-Lab, Nanyang Technological University - - - - -:star: If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! :hugs: - -### Update - -- **2022.09.09**: Integrated to :rocket: [Replicate](https://replicate.com/). Try out online demo! [![Replicate](https://img.shields.io/badge/Demo-%F0%9F%9A%80%20Replicate-blue)](https://replicate.com/sczhou/codeformer) -- **2022.09.04**: Add face upsampling `--face_upsample` for high-resolution AI-created face enhancement. -- **2022.08.23**: Some modifications on face detection and fusion for better AI-created face enhancement. -- **2022.08.07**: Integrate [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) to support background image enhancement. -- **2022.07.29**: Integrate new face detectors of `['RetinaFace'(default), 'YOLOv5']`. -- **2022.07.17**: Add Colab demo of CodeFormer. google colab logo -- **2022.07.16**: Release inference code for face restoration. :blush: -- **2022.06.21**: This repo is created. - -### TODO -- [ ] Add checkpoint for face inpainting -- [ ] Add training code and config files -- [x] ~~Add background image enhancement~~ - -#### Face Restoration - - - - -#### Face Color Enhancement and Restoration - - - -#### Face Inpainting - - - - - -### Dependencies and Installation - -- Pytorch >= 1.7.1 -- CUDA >= 10.1 -- Other required packages in `requirements.txt` -``` -# git clone this repository -git clone https://github.com/sczhou/CodeFormer -cd CodeFormer - -# create new anaconda env -conda create -n codeformer python=3.8 -y -conda activate codeformer - -# install python dependencies -pip3 install -r requirements.txt -python basicsr/setup.py develop -``` - - -### Quick Inference - -##### Download Pre-trained Models: -Download the facelib pretrained models from [[Google Drive](https://drive.google.com/drive/folders/1b_3qwrzY_kTQh0-SnBoGBgOrJ_PLZSKm?usp=sharing) | [OneDrive](https://entuedu-my.sharepoint.com/:f:/g/personal/s200094_e_ntu_edu_sg/EvDxR7FcAbZMp_MA9ouq7aQB8XTppMb3-T0uGZ_2anI2mg?e=DXsJFo)] to the `weights/facelib` folder. You can manually download the pretrained models OR download by runing the following command. -``` -python scripts/download_pretrained_models.py facelib -``` - -Download the CodeFormer pretrained models from [[Google Drive](https://drive.google.com/drive/folders/1CNNByjHDFt0b95q54yMVp6Ifo5iuU6QS?usp=sharing) | [OneDrive](https://entuedu-my.sharepoint.com/:f:/g/personal/s200094_e_ntu_edu_sg/EoKFj4wo8cdIn2-TY2IV6CYBhZ0pIG4kUOeHdPR_A5nlbg?e=AO8UN9)] to the `weights/CodeFormer` folder. You can manually download the pretrained models OR download by runing the following command. -``` -python scripts/download_pretrained_models.py CodeFormer -``` - -##### Prepare Testing Data: -You can put the testing images in the `inputs/TestWhole` folder. 
If you would like to test on cropped and aligned faces, you can put them in the `inputs/cropped_faces` folder. - - -##### Testing on Face Restoration: -``` -# For cropped and aligned faces -python inference_codeformer.py --w 0.5 --has_aligned --test_path [input folder] - -# For the whole images -# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN -# Add '--face_upsample' to further upsample restorated face with Real-ESRGAN -python inference_codeformer.py --w 0.7 --test_path [input folder] -``` - -NOTE that *w* is in [0, 1]. Generally, smaller *w* tends to produce a higher-quality result, while larger *w* yields a higher-fidelity result. - -The results will be saved in the `results` folder. - -### Citation -If our work is useful for your research, please consider citing: - - @article{zhou2022codeformer, - author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change}, - title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer}, - journal = {arXiv preprint arXiv:2206.11253}, - year = {2022} - } - -### License - -Creative Commons License
    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. - -### Acknowledgement - -This project is based on [BasicSR](https://github.com/XPixelGroup/BasicSR). We also borrow some codes from [Unleashing Transformers](https://github.com/samb-t/unleashing-transformers), [YOLOv5-face](https://github.com/deepcam-cn/yolov5-face), and [FaceXLib](https://github.com/xinntao/facexlib). Thanks for their awesome works. - -### Contact -If you have any question, please feel free to reach me out at `shangchenzhou@gmail.com`. \ No newline at end of file diff --git a/spaces/legoandmars/glide-inpainting/glide_text2im/clip/encoders.py b/spaces/legoandmars/glide-inpainting/glide_text2im/clip/encoders.py deleted file mode 100644 index ee72773c2c891d2dda6d02933e88599b5330b052..0000000000000000000000000000000000000000 --- a/spaces/legoandmars/glide-inpainting/glide_text2im/clip/encoders.py +++ /dev/null @@ -1,497 +0,0 @@ -import math -from collections import OrderedDict -from typing import List, Optional, Tuple, cast - -import attr -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .attention import ( - AttentionInfo, - DenseAttentionMask, - DenseCausalAttentionMask, - make_full_layout, - to_attention_info, -) -from .utils import Affine, LayerNorm, zero_key_bias_grad - -# Constants used in the original CLIP implementation. -image_channel_means = [122.77093945, 116.74601272, 104.09373519] -image_channel_stds = [68.50053285, 66.63215831, 70.32316309] - - -@attr.s(eq=False, repr=False) -class TextEmbedding(nn.Module): - n_vocab: int = attr.ib() - n_context: int = attr.ib() - n_state: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - w_voc = torch.empty((self.n_vocab, self.n_state), dtype=torch.float32, device=self.device) - w_pos = torch.empty((self.n_context, self.n_state), dtype=torch.float32, device=self.device) - - with torch.no_grad(): - w_voc.normal_(std=0.02) - w_pos.normal_(std=0.01) - - self.w_voc = nn.Parameter(w_voc) - self.w_pos = nn.Parameter(w_pos) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - if len(x.shape) != 2: - raise ValueError() - - return F.embedding(x, self.w_voc) + self.w_pos[None, :, :] - - -@attr.s(eq=False, repr=False) -class ImageEmbedding(nn.Module): - image_size: int = attr.ib() - patch_size: int = attr.ib() - n_state: int = attr.ib() - n_timestep: int = attr.ib(default=0) - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - if self.image_size % self.patch_size != 0: - raise ValueError() - - n_patch = self.image_size // self.patch_size - patch_proj = torch.empty( - (self.n_state, 3) + 2 * (self.patch_size,), dtype=torch.float32, device=self.device - ) - w_pos = torch.empty( - (1 + n_patch ** 2, self.n_state), dtype=torch.float32, device=self.device - ) - - with torch.no_grad(): - if self.n_timestep == 0: - pred_state = torch.empty((self.n_state,), dtype=torch.float32, device=self.device) - pred_state.normal_(std=1 / np.sqrt(self.n_state)) - self.pred_state = nn.Parameter(pred_state) - else: - w_t = torch.empty( - (self.n_timestep, self.n_state), dtype=torch.float32, device=self.device - ) - w_t.normal_(std=1 / np.sqrt(self.n_state)) - self.w_t = nn.Parameter(w_t) - - patch_proj.normal_(std=np.sqrt(2 / (self.n_state * self.patch_size ** 2))) - w_pos.normal_(std=1 / np.sqrt(self.n_state)) - - 
self.patch_proj = nn.Parameter(patch_proj) - self.w_pos = nn.Parameter(w_pos) - - self.channel_means = torch.tensor( - image_channel_means, dtype=torch.float32, device=self.device - )[None, :, None, None] - self.channel_stds = torch.tensor( - image_channel_stds, dtype=torch.float32, device=self.device - )[None, :, None, None] - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - - def forward(self, x: torch.Tensor, t: Optional[torch.Tensor] = None) -> torch.Tensor: - if len(x.shape) != 4: - raise ValueError("input should be 4d") - if x.shape[1] != 3: - raise ValueError("input should have 3 channels") - if not (x.shape[2] == self.image_size and x.shape[3] == self.image_size): - raise ValueError(f"input is not {self.image_size} x {self.image_size}") - - if (self.n_timestep == 0 and t is not None) or (self.n_timestep != 0 and t is None): - raise ValueError() - if self.n_timestep != 0: - assert t is not None - if len(t.shape) != 1: - raise ValueError() - if t.shape[0] != x.shape[0]: - raise ValueError() - - x = (x - self.channel_means) / self.channel_stds - x = F.conv2d(x, self.patch_proj, stride=self.patch_size) - x = x.reshape(x.shape[0], self.n_state, (self.image_size // self.patch_size) ** 2).permute( - 0, 2, 1 - ) - - sot = ( - self.pred_state[None, None].expand(x.shape[0], -1, -1) - if self.n_timestep == 0 - else F.embedding(cast(torch.Tensor, t), self.w_t)[:, None] - ) - x = torch.cat((sot, x), dim=1) + self.w_pos[None] - return self.ln(x) - - -@attr.s(eq=False, repr=False) -class AttentionResblock(nn.Module): - n_state: int = attr.ib() - n_resblocks: int = attr.ib() - attn_fn: AttentionInfo = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.n_head_state = self.n_state // self.attn_fn.n_heads - self.qk_scale = 1 / np.sqrt(self.n_head_state) - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f_q = Affine( - self.n_state, - self.n_state, - std=1 / math.sqrt(self.n_state), - use_bias=True, - bias_filter_fn=zero_key_bias_grad, - device=self.device, - ) - self.f_k = Affine( - self.n_state, - self.n_state, - std=1 / math.sqrt(self.n_state), - use_bias=False, - bias_filter_fn=zero_key_bias_grad, - device=self.device, - ) - self.f_v = Affine( - self.n_state, - self.n_state, - std=1 / math.sqrt(self.n_state), - use_bias=True, - bias_filter_fn=zero_key_bias_grad, - device=self.device, - ) - self.f_c = Affine( - self.n_state, - self.n_state, - use_bias=True, - std=1 / np.sqrt(self.n_state * self.n_resblocks ** 2), - device=self.device, - ) # XXX - - def forward(self, m: torch.Tensor) -> torch.Tensor: - n_context = m.shape[1] - n_query_pad = self.attn_fn.ctx_blks_q * self.attn_fn.block_size - n_context - n_key_pad = self.attn_fn.ctx_blks_k * self.attn_fn.block_size - n_context - assert n_query_pad >= 0 - assert n_key_pad >= 0 - - r = m - r = self.ln(r) - q, k, v = self.f_q(r), self.f_k(r), self.f_v(r) - - if n_query_pad != 0: - q = F.pad(q, (0, 0, 0, n_query_pad)) - - if n_key_pad != 0: - k = F.pad(k, (0, 0, 0, n_key_pad)) - v = F.pad(v, (0, 0, 0, n_key_pad)) - - q = q.view([q.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3)) - k = k.view([k.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3)) - v = v.view([v.shape[0], -1, self.attn_fn.n_heads, self.n_head_state]).permute((0, 2, 1, 3)) - w = torch.einsum( - "bhcd,bhkd->bhck", q * math.sqrt(self.qk_scale), k * math.sqrt(self.qk_scale) - ) - - if 
hasattr(self.attn_fn, "pytorch_attn_bias"): - bias = self.attn_fn.pytorch_attn_bias - assert len(bias.shape) in {2, 3} - - if len(bias.shape) == 2: - w = torch.softmax(w + self.attn_fn.pytorch_attn_bias[None, None], dim=-1) - elif len(bias.shape) == 3: - w = torch.softmax(w + self.attn_fn.pytorch_attn_bias[None], dim=-1) - else: - w = torch.softmax(w, dim=-1) - - r = torch.einsum("bhck,bhkd->bhcd", w, v) - r = r.permute((0, 2, 1, 3)).reshape((r.shape[0], -1, self.n_state)) - - if n_query_pad != 0: - r = r[:, :-n_query_pad] - - assert r.shape[1] == n_context - - r = self.f_c(r) - return m + r - - -@attr.s(eq=False, repr=False) -class FullyConnectedResblock(nn.Module): - """ - Not imported from other files because we retain Alec's original inits. - """ - - n_state: int = attr.ib() - n_resblocks: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f_1 = Affine( - self.n_state, - 4 * self.n_state, - use_bias=True, - std=np.sqrt(2 / (4 * self.n_state)), - device=self.device, - ) - self.f_2 = Affine( - 4 * self.n_state, - self.n_state, - use_bias=True, - std=1 / np.sqrt(self.n_state * self.n_resblocks ** 2), - device=self.device, - ) # XXX - - def forward(self, m: torch.Tensor) -> torch.Tensor: - r = m - r = self.ln(r) - - r = self.f_2(F.gelu(self.f_1(r))) - return m + r - - -@attr.s(eq=False, repr=False) -class TransformerBlock(nn.Module): - n_state: int = attr.ib() - n_resblocks: int = attr.ib() - attn_fn: AttentionInfo = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.f_attn = AttentionResblock( - self.n_state, - self.n_resblocks, - self.attn_fn, - self.device, - ) - self.f_mlp = FullyConnectedResblock(self.n_state, self.n_resblocks, self.device) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - return self.f_mlp(self.f_attn(x)) - - -@attr.s(eq=False, repr=False) -class TextFeatureExtractor(nn.Module): - n_state: int = attr.ib() - n_embd: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f = Affine(self.n_state, self.n_embd, use_bias=False, device=self.device) - - def forward( - self, text: torch.Tensor, text_len: torch.Tensor, return_probe_features: bool = False - ) -> torch.Tensor: - if len(text.shape) != 3: - raise ValueError("expected text to be 3d") - if len(text_len.shape) != 1: - raise ValueError("expected text length to be 1d") - if text.shape[0] != text_len.shape[0]: - raise ValueError("text and text_len have inconsistent batch dimensions") - - index = (text_len - 1)[:, None, None].expand(-1, 1, text.shape[2]) - x = torch.gather(text, dim=1, index=index) - assert list(x.shape) == [text.shape[0], 1, text.shape[2]] - - if return_probe_features: - return x[:, 0] - - x = self.ln(x) - return self.f(x[:, 0]) - - -@attr.s(eq=False, repr=False) -class ImageFeatureExtractor(nn.Module): - n_state: int = attr.ib() - n_embd: int = attr.ib() - device: torch.device = attr.ib(default=torch.device("cuda")) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.ln = LayerNorm(self.n_state, eps=1e-5, device=self.device) - self.f = Affine(self.n_state, self.n_embd, use_bias=False, device=self.device) - - def forward(self, x: torch.Tensor, 
return_probe_features: bool = False) -> torch.Tensor: - if return_probe_features: - return x[:, 0] - - x = self.ln(x[:, :1]) - return self.f(x[:, 0]) - - -@attr.s(eq=False, repr=False) -class TextEncoder(nn.Module): - n_bpe_vocab: int = attr.ib() - max_text_len: int = attr.ib() - n_embd: int = attr.ib() - n_head: int = attr.ib() - n_xf_blocks: int = attr.ib() - n_head_state: int = attr.ib(default=64) - device: torch.device = attr.ib(default=torch.device("cuda")) - block_size: int = attr.ib(init=False, default=32) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.n_state = self.n_head * self.n_head_state - n_rounded_context = self.block_size * int(math.ceil(self.max_text_len / self.block_size)) - n_pad = n_rounded_context - self.max_text_len - - args = ( - n_rounded_context, - n_rounded_context, - self.block_size, - self.n_head, - False, - n_pad, - n_pad, - ) - mask = DenseCausalAttentionMask(*args) - attn_fn = to_attention_info(mask) - - m = 1 - make_full_layout(mask).astype(np.float32) - m[m == 1] = -1e10 - attn_fn.pytorch_attn_bias = torch.from_numpy(m).to(self.device) - - blocks: List[Tuple[str, nn.Module]] = [ - ( - "input", - TextEmbedding( - self.n_bpe_vocab, self.max_text_len, self.n_state, device=self.device - ), - ) - ] - - for i in range(self.n_xf_blocks): - blocks.append( - ( - f"block_{i}", - TransformerBlock(self.n_state, 2 * self.n_xf_blocks, attn_fn, self.device), - ) - ) - - blocks.append( - ("output", TextFeatureExtractor(self.n_state, self.n_embd, device=self.device)) - ) - - self.blocks = nn.ModuleDict(OrderedDict(blocks)) - - def forward( - self, - text: torch.Tensor, - text_len: torch.Tensor, - return_probe_features: bool = False, - ) -> torch.Tensor: - - n_batch = text.shape[0] - h = self.blocks["input"](text) - - for i in range(self.n_xf_blocks): - h = self.blocks[f"block_{i}"](h) - - h = self.blocks["output"](h, text_len, return_probe_features=return_probe_features) - - assert list(h.shape) == [ - n_batch, - self.n_embd if not return_probe_features else self.n_state, - ] - return h - - -@attr.s(eq=False, repr=False) -class ImageEncoder(nn.Module): - image_size: int = attr.ib() - patch_size: int = attr.ib() - n_embd: int = attr.ib() - n_head: int = attr.ib() - n_xf_blocks: int = attr.ib() - n_head_state: int = attr.ib(default=64) - n_timestep: int = attr.ib(default=0) - device: torch.device = attr.ib(default=torch.device("cuda")) - block_size: int = attr.ib(init=False, default=32) - - def __attrs_post_init__(self) -> None: - super().__init__() - - self.n_state = self.n_head * self.n_head_state - self.n_context = 1 + (self.image_size // self.patch_size) ** 2 - n_rounded_context = self.block_size * int(math.ceil(self.n_context / self.block_size)) - n_pad = n_rounded_context - self.n_context - - args = ( - n_rounded_context, - n_rounded_context, - self.block_size, - self.n_head, - False, - n_pad, - n_pad, - ) - mask = DenseAttentionMask(*args) - attn_fn = to_attention_info(mask) - - m = 1 - make_full_layout(mask).astype(np.float32) - m[m == 1] = -1e10 - attn_fn.pytorch_attn_bias = torch.from_numpy(m).to(self.device) - - blocks: List[Tuple[str, nn.Module]] = [ - ( - "input", - ImageEmbedding( - self.image_size, - self.patch_size, - self.n_state, - n_timestep=self.n_timestep, - device=self.device, - ), - ) - ] - - for i in range(self.n_xf_blocks): - blocks.append( - ( - f"block_{i}", - TransformerBlock(self.n_state, 2 * self.n_xf_blocks, attn_fn, self.device), - ) - ) - - blocks.append(("output", ImageFeatureExtractor(self.n_state, self.n_embd, 
self.device))) - - self.blocks = nn.ModuleDict(OrderedDict(blocks)) - - def forward( - self, - image: torch.Tensor, - timesteps: Optional[torch.Tensor] = None, - return_probe_features: bool = False, - ) -> torch.Tensor: - n_batch = image.shape[0] - h = self.blocks["input"](image, t=timesteps) - - for i in range(self.n_xf_blocks): - h = self.blocks[f"block_{i}"](h) - - h = self.blocks["output"](h, return_probe_features=return_probe_features) - - assert list(h.shape) == [ - n_batch, - self.n_embd if not return_probe_features else self.n_state, - ] - - return h diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/silero_tts/style.css b/spaces/leogabraneth/text-generation-webui-main/extensions/silero_tts/style.css deleted file mode 100644 index 2ab7aefbbfca19982414f13a76dfdd4324793903..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/silero_tts/style.css +++ /dev/null @@ -1,8 +0,0 @@ -.SDAP .hires_opts input[type="number"] { - width: 6em !important; -} - -/* silero_tts preview */ -.form:has(> #silero_preview_text) { - min-width: 75% -} diff --git a/spaces/leurez/moss/src/utils/request/index.ts b/spaces/leurez/moss/src/utils/request/index.ts deleted file mode 100644 index d651bba8176ab79b48e5dc1b1bf4f062ce0c52be..0000000000000000000000000000000000000000 --- a/spaces/leurez/moss/src/utils/request/index.ts +++ /dev/null @@ -1,84 +0,0 @@ -import type { AxiosProgressEvent, AxiosResponse, GenericAbortSignal } from 'axios' -import request from './axios' -import { useAuthStore } from '@/store' - -export interface HttpOption { - url: string - data?: any - method?: string - headers?: any - onDownloadProgress?: (progressEvent: AxiosProgressEvent) => void - signal?: GenericAbortSignal - beforeRequest?: () => void - afterRequest?: () => void -} - -export interface Response { - data: T - message: string | null - status: string -} - -function http( - { url, data, method, headers, onDownloadProgress, signal, beforeRequest, afterRequest }: HttpOption, -) { - const successHandler = (res: AxiosResponse>) => { - const authStore = useAuthStore() - - if (res.data.status === 'Success' || typeof res.data === 'string') - return res.data - - if (res.data.status === 'Unauthorized') { - authStore.removeToken() - window.location.reload() - } - - return Promise.reject(res.data) - } - - const failHandler = (error: Response) => { - afterRequest?.() - throw new Error(error?.message || 'Error') - } - - beforeRequest?.() - - method = method || 'GET' - - const params = Object.assign(typeof data === 'function' ? data() : data ?? {}, {}) - - return method === 'GET' - ? 
request.get(url, { params, signal, onDownloadProgress }).then(successHandler, failHandler) - : request.post(url, params, { headers, signal, onDownloadProgress }).then(successHandler, failHandler) -} - -export function get( - { url, data, method = 'GET', onDownloadProgress, signal, beforeRequest, afterRequest }: HttpOption, -): Promise> { - return http({ - url, - method, - data, - onDownloadProgress, - signal, - beforeRequest, - afterRequest, - }) -} - -export function post( - { url, data, method = 'POST', headers, onDownloadProgress, signal, beforeRequest, afterRequest }: HttpOption, -): Promise> { - return http({ - url, - method, - data, - headers, - onDownloadProgress, - signal, - beforeRequest, - afterRequest, - }) -} - -export default post diff --git a/spaces/lightli/bingo-newbing/src/lib/bots/bing/utils.ts b/spaces/lightli/bingo-newbing/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Alcor USB Flash Drive Tools - Fix Fake USB Drives .rarl [BEST].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Alcor USB Flash Drive Tools - Fix Fake USB Drives .rarl [BEST].md deleted file mode 
100644 index 736b2b060ac1912bb9870de1a4cf3cdd639ea6af..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Alcor USB Flash Drive Tools - Fix Fake USB Drives .rarl [BEST].md +++ /dev/null @@ -1,58 +0,0 @@ - -

    How to Fix Fake USB Drives with Alcor USB Flash Drive Tools

    -

    USB flash drives are convenient and portable devices that allow us to store and transfer data easily. However, not all USB flash drives are reliable and trustworthy. Some of them are fake or counterfeit, meaning that they have less capacity or quality than they claim. These fake USB flash drives can cause data loss, corruption, or even damage to your computer.

    -

    Alcor USB Flash Drive Tools - Fix Fake USB Drives .rarl


Download Zip: https://bytlly.com/2uGx3R



    -

    Fortunately, there are some tools that can help you detect and fix fake USB flash drives. One of them is Alcor USB Flash Drive Tools, a software that can repair USB flash drives with Alcor controllers. Alcor controllers are common in many USB flash drives, especially those from Transcend, Kingston, SanDisk, and other brands.

    -

In this article, we will show you how to use Alcor USB Flash Drive Tools - Fix Fake USB Drives .rarl, a compressed archive that contains the software and its instructions, to repair fake drives. We will also explain how to identify fake USB flash drives and avoid buying them in the future.

    - -

    What is Alcor USB Flash Drive Tools?

    -

    Alcor USB Flash Drive Tools is a software that can repair USB flash drives with Alcor controllers. It can fix various problems such as:

    -
• Write protection
• Format error
• Capacity mismatch
• Bad sectors
• Corrupted firmware
    -

    Alcor USB Flash Drive Tools can also update the firmware of your USB flash drive, change its serial number and VID/PID, and create an autorun CD section.

    -

    -

    Alcor USB Flash Drive Tools is distributed as a .rarl file, which is a compressed file format that contains the software and instructions. You need to extract the .rarl file with a program like WinRAR or 7-Zip before you can use it.
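
If you prefer to script this step, the following is a minimal Python sketch for unpacking the archive. It assumes the downloaded .rarl file is an ordinary RAR archive with a renamed extension, that the third-party rarfile package (with an unrar backend) is installed, and it uses a placeholder file name and output folder.

```python
# Minimal extraction sketch.
# Assumptions: the .rarl file is a renamed RAR archive, and the third-party
# "rarfile" package plus an unrar backend are installed.
import rarfile

archive_path = "Alcor-USB-Flash-Drive-Tools-Fix-Fake-USB-Drives.rarl"  # placeholder name
with rarfile.RarFile(archive_path) as archive:
    print(archive.namelist())               # inspect the bundled software and instructions
    archive.extractall(path="alcor_tools")  # unpack into a folder next to the script
```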

    - -

    How to Use Alcor USB Flash Drive Tools to Fix Fake USB Drives?

    -

    To use Alcor USB Flash Drive Tools to fix fake USB drives, you need to follow these steps:

    -
1. Download Alcor USB Flash Drive Tools - Fix Fake USB Drives .rarl from a reliable source. You can find it on some websites like usb-fix.blogspot.com or trello.com. Be careful not to download any malicious files or viruses.
2. Extract the .rarl file with a program like WinRAR or 7-Zip. You will get a folder with the software and instructions.
3. Plug your USB flash drive into your computer. Make sure it is detected by your computer and has an assigned drive letter.
4. Run the software as an administrator. You will see a window with various options and information.
5. Select your USB flash drive from the drop-down menu. The software will scan your device and show its details.
6. Check if your USB flash drive has an Alcor controller. You can find this information in the Chip Part-Number section. If your device has an Alcor controller, it will show something like AU698x, AU69xx, FC8xxx, etc. If your device does not have an Alcor controller, the software will not work for you. (To double-check which device you have selected, you can also list the VID and PID of your attached USB devices, as in the sketch after this list.)
7. Check if your USB flash drive is fake or not. You can find this information in the Capacity section. If your device is fake, it will show a different capacity than what it claims. For example, if your device claims to be 16 GB but shows only 4 GB in the software, it is fake.
8. Select the appropriate option to fix your USB flash drive. Depending on the problem you have, you can choose one of these options:
   • Start: This option will format your device and restore its original capacity and performance.
   • Restore: This option will update the firmware of your device and fix any corrupted data.
   • Edit: This option will allow you to change the serial number and VID/PID of your device.
   • Create CD: This option will create an autorun CD section on your device.
9. Wait for the process to finish. The software will show you a progress bar and a message when it is done.
10. Eject your USB flash drive safely from your computer. You can now use it normally without any problems.
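
As a supplementary check, the sketch below lists the vendor and product IDs (VID/PID) of the USB devices attached to your computer. It is only an illustration: it assumes the third-party pyusb package and a libusb backend are installed, and it cannot read the Chip Part-Number itself, which is only reported inside Alcor USB Flash Drive Tools.

```python
# List VID/PID of attached USB devices.
# Assumption: the third-party "pyusb" package and a libusb backend are installed.
import usb.core
import usb.util

for dev in usb.core.find(find_all=True):
    try:
        product = usb.util.get_string(dev, dev.iProduct)  # may require elevated permissions
    except Exception:
        product = "unknown"
    print(f"VID=0x{dev.idVendor:04x}  PID=0x{dev.idProduct:04x}  product={product}")
```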
    - -

    How to Identify Fake USB Flash Drives?

    -

    To avoid buying fake USB flash drives in the future, you need to be careful and vigilant when shopping online or offline. Here are some tips to help you identify fake USB flash drives:

    -
• Check the price. If the price is too good to be true, it probably is. Fake USB flash drives are usually sold at very low prices compared to genuine ones.
• Check the brand. If the brand is unknown or suspicious, it might be fake. Fake USB flash drives often use generic names or copy popular brands like Transcend, Kingston, SanDisk, etc.
• Check the packaging. If the packaging is poor quality or has spelling errors, it might be fake. Fake USB flash drives often come in cheap plastic bags or boxes without any labels or logos.
• Check the appearance. If the appearance is different from what you expected or what the seller advertised, it might be fake. Fake USB flash drives often have different colors, shapes, sizes, or materials than genuine ones.
• Check the performance. If the performance is slow or unstable, it might be fake. Fake USB flash drives often have low transfer speeds, high error rates, or frequent crashes.
• Check the capacity. If the capacity is less than what you paid for or what the device claims, it might be fake. Fake USB flash drives often have less actual capacity than what they show on your computer or on their labels. A simple write-and-verify test, like the sketch after this list, can confirm the real capacity.
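
If you want to confirm the real capacity yourself, a write-and-verify test is the most reliable check; this is the same idea used by tools like H2testw and F3. The Python sketch below is only an illustration: the mount point and test size are placeholders, and the test file should only be written to a drive that contains no data you care about.

```python
# Write-and-verify capacity test (sketch; DRIVE_PATH and TEST_SIZE_MB are placeholders).
import os
import hashlib

DRIVE_PATH = "E:/"        # mount point of the USB drive under test
TEST_SIZE_MB = 1024       # write roughly the advertised capacity to be thorough
BLOCK_SIZE = 1024 * 1024  # 1 MiB per block

test_file = os.path.join(DRIVE_PATH, "capacity_test.bin")
expected = []

# Write deterministic pseudo-random blocks and record their hashes.
with open(test_file, "wb") as f:
    for i in range(TEST_SIZE_MB):
        block = hashlib.sha256(i.to_bytes(8, "little")).digest() * (BLOCK_SIZE // 32)
        expected.append(hashlib.sha256(block).hexdigest())
        f.write(block)
    f.flush()
    os.fsync(f.fileno())

# Read the blocks back; a fake drive usually corrupts data once writes wrap
# around its real (smaller) flash capacity. For a stricter test, unplug and
# replug the drive between the write and read phases to bypass the OS cache.
bad_blocks = 0
with open(test_file, "rb") as f:
    for digest in expected:
        block = f.read(BLOCK_SIZE)
        if hashlib.sha256(block).hexdigest() != digest:
            bad_blocks += 1

os.remove(test_file)
print(f"{bad_blocks} of {TEST_SIZE_MB} blocks failed verification")
```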

    Conclusion

    -

    Fake USB flash drives are a common problem that can cause data loss, corruption, or damage to your computer. However, you can use Alcor USB Flash Drive Tools to fix fake USB drives .rarl, a software that can repair USB flash drives with Alcor controllers. By following our guide above, you can download and use this software to fix various problems with your fake USB flash drive and restore its original capacity and performance.

    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Introduccion A La Psicologia Robert Feldman Pdf.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Introduccion A La Psicologia Robert Feldman Pdf.md deleted file mode 100644 index 7e9dbc6f163098206d3e3aa6b3a4d0a06cce2f49..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Introduccion A La Psicologia Robert Feldman Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Introduccion A La Psicologia Robert Feldman Pdf


Download Zip: https://bytlly.com/2uGyuW



- -Gale Researcher Guide for: After the Broken Home: Robert Lowell, Anne Sexton, and James Merrill ... Jeepers Creepers: Through the Eyes of Marty Feldman.
    -
    -
    -

    diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Karan Arjun Torrent Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Karan Arjun Torrent Download.md deleted file mode 100644 index 2e0dcae9b6d7b1dae4117fb0febdd437364f0b5c..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Karan Arjun Torrent Download.md +++ /dev/null @@ -1,8 +0,0 @@ - -


    -


    -

    Karan Arjun Torrent Download


    Download Zip ··· https://bytlly.com/2uGx75



    -

The torrent website leaks movies for free. Users of the website can download movies and also get Telegram links. Nowadays the use of torrent websites has been increasing. Users will find all the recently released films in one list and other Bollywood films in another, and there are several groups on the Moviesflix website. The features of the torrent website may attract users, but using such torrent websites puts you at risk.

    -

So, if you are searching for a Karan Arjun movie download on Filmyzap, you may be misled by fake websites. These sites claim to be the original source for movie downloads, but they are not. Once you open these sites, your device can be exposed and your files and other data can be acquired by hackers. So please make sure you do not visit these websites, and keep your device data safe.

    -
    -
    \ No newline at end of file diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/inference/__init__.py b/spaces/lllqqq/so-vits-svc-models-pcr/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py b/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py deleted file mode 100644 index 8e961183802ae29d19b0df4da6d0da4aaba66bfb..0000000000000000000000000000000000000000 --- a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/pipeline_stable_diffusion_controlnet_inpaint.py +++ /dev/null @@ -1,610 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import numpy as np -import PIL.Image -import torch -from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import * - -# https://github.com/mikonvergence/ControlNetInpaint - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> # !pip install opencv-python transformers accelerate - >>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler - >>> from diffusers.utils import load_image - >>> import numpy as np - >>> import torch - - >>> import cv2 - >>> from PIL import Image - >>> # download an image - >>> image = load_image( - ... "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" - ... ) - >>> image = np.array(image) - >>> mask_image = load_image( - ... "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" - ... ) - >>> mask_image = np.array(mask_image) - >>> # get canny image - >>> canny_image = cv2.Canny(image, 100, 200) - >>> canny_image = canny_image[:, :, None] - >>> canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2) - >>> canny_image = Image.fromarray(canny_image) - - >>> # load control net and stable diffusion v1-5 - >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - >>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( - ... "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16 - ... ) - - >>> # speed up diffusion process with faster scheduler and memory optimization - >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) - >>> # remove following line if xformers is not installed - >>> pipe.enable_xformers_memory_efficient_attention() - - >>> pipe.enable_model_cpu_offload() - - >>> # generate image - >>> generator = torch.manual_seed(0) - >>> image = pipe( - ... 
"futuristic-looking doggo", - ... num_inference_steps=20, - ... generator=generator, - ... image=image, - ... control_image=canny_image, - ... mask_image=mask_image - ... ).images[0] - ``` -""" - - -def prepare_mask_and_masked_image(image, mask): - """ - Prepares a pair (image, mask) to be consumed by the Stable Diffusion pipeline. This means that those inputs will be - converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for the - ``image`` and ``1`` for the ``mask``. - The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be - binarized (``mask > 0.5``) and cast to ``torch.float32`` too. - Args: - image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint. - It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width`` - ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``. - mask (_type_): The mask to apply to the image, i.e. regions to inpaint. - It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width`` - ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``. - Raises: - ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask - should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions. - TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not - (ot the other way around). - Returns: - tuple[torch.Tensor]: The pair (mask, masked_image) as ``torch.Tensor`` with 4 - dimensions: ``batch x channels x height x width``. - """ - if isinstance(image, torch.Tensor): - if not isinstance(mask, torch.Tensor): - raise TypeError( - f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not" - ) - - # Batch single image - if image.ndim == 3: - assert ( - image.shape[0] == 3 - ), "Image outside a batch should be of shape (3, H, W)" - image = image.unsqueeze(0) - - # Batch and add channel dim for single mask - if mask.ndim == 2: - mask = mask.unsqueeze(0).unsqueeze(0) - - # Batch single mask or add channel dim - if mask.ndim == 3: - # Single batched mask, no channel dim or single mask not batched but channel dim - if mask.shape[0] == 1: - mask = mask.unsqueeze(0) - - # Batched masks no channel dim - else: - mask = mask.unsqueeze(1) - - assert ( - image.ndim == 4 and mask.ndim == 4 - ), "Image and Mask must have 4 dimensions" - assert ( - image.shape[-2:] == mask.shape[-2:] - ), "Image and Mask must have the same spatial dimensions" - assert ( - image.shape[0] == mask.shape[0] - ), "Image and Mask must have the same batch size" - - # Check image is in [-1, 1] - if image.min() < -1 or image.max() > 1: - raise ValueError("Image should be in [-1, 1] range") - - # Check mask is in [0, 1] - if mask.min() < 0 or mask.max() > 1: - raise ValueError("Mask should be in [0, 1] range") - - # Binarize mask - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - - # Image as float32 - image = image.to(dtype=torch.float32) - elif isinstance(mask, torch.Tensor): - raise TypeError( - f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not" - ) - else: - # preprocess image - if isinstance(image, (PIL.Image.Image, np.ndarray)): - image = [image] - - if isinstance(image, list) and isinstance(image[0], PIL.Image.Image): - image = [np.array(i.convert("RGB"))[None, :] for i in image] - image = np.concatenate(image, axis=0) - elif isinstance(image, list) and isinstance(image[0], 
np.ndarray): - image = np.concatenate([i[None, :] for i in image], axis=0) - - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - # preprocess mask - if isinstance(mask, (PIL.Image.Image, np.ndarray)): - mask = [mask] - - if isinstance(mask, list) and isinstance(mask[0], PIL.Image.Image): - mask = np.concatenate( - [np.array(m.convert("L"))[None, None, :] for m in mask], axis=0 - ) - mask = mask.astype(np.float32) / 255.0 - elif isinstance(mask, list) and isinstance(mask[0], np.ndarray): - mask = np.concatenate([m[None, None, :] for m in mask], axis=0) - - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask) - - masked_image = image * (mask < 0.5) - - return mask, masked_image - - -class StableDiffusionControlNetInpaintPipeline( - StableDiffusionControlNetPipeline -): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion with ControlNet guidance. - - This model inherits from [`StableDiffusionControlNetPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - controlnet ([`ControlNetModel`]): - Provides additional conditioning to the unet during the denoising process - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - - def prepare_mask_latents( - self, - mask, - masked_image, - batch_size, - height, - width, - dtype, - device, - generator, - do_classifier_free_guidance, - ): - # resize the mask to latents shape as we concatenate the mask to the latents - # we do that before converting to dtype to avoid breaking in case we're using cpu_offload - # and half precision - mask = torch.nn.functional.interpolate( - mask, - size=( - height // self.vae_scale_factor, - width // self.vae_scale_factor, - ), - ) - mask = mask.to(device=device, dtype=dtype) - - masked_image = masked_image.to(device=device, dtype=dtype) - - # encode the mask image into latents space so we can concatenate it to the latents - if isinstance(generator, list): - masked_image_latents = [ - self.vae.encode(masked_image[i : i + 1]).latent_dist.sample( - generator=generator[i] - ) - for i in range(batch_size) - ] - masked_image_latents = torch.cat(masked_image_latents, dim=0) - else: - masked_image_latents = self.vae.encode( - masked_image - ).latent_dist.sample(generator=generator) - masked_image_latents = ( - self.vae.config.scaling_factor * masked_image_latents - ) - - # duplicate mask and masked_image_latents for each generation per prompt, using mps friendly method - if mask.shape[0] < batch_size: - if not batch_size % mask.shape[0] == 0: - raise ValueError( - "The passed mask and the required batch size don't match. Masks are supposed to be duplicated to" - f" a total batch size of {batch_size}, but {mask.shape[0]} masks were passed. Make sure the number" - " of masks that you pass is divisible by the total requested batch size." - ) - mask = mask.repeat(batch_size // mask.shape[0], 1, 1, 1) - if masked_image_latents.shape[0] < batch_size: - if not batch_size % masked_image_latents.shape[0] == 0: - raise ValueError( - "The passed images and the required batch size don't match. Images are supposed to be duplicated" - f" to a total batch size of {batch_size}, but {masked_image_latents.shape[0]} images were passed." - " Make sure the number of images that you pass is divisible by the total requested batch size." 
- ) - masked_image_latents = masked_image_latents.repeat( - batch_size // masked_image_latents.shape[0], 1, 1, 1 - ) - - mask = torch.cat([mask] * 2) if do_classifier_free_guidance else mask - masked_image_latents = ( - torch.cat([masked_image_latents] * 2) - if do_classifier_free_guidance - else masked_image_latents - ) - - # aligning device to prevent device errors when concating it with the latent model input - masked_image_latents = masked_image_latents.to( - device=device, dtype=dtype - ) - return mask, masked_image_latents - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[torch.FloatTensor, PIL.Image.Image] = None, - control_image: Union[ - torch.FloatTensor, - PIL.Image.Image, - List[torch.FloatTensor], - List[PIL.Image.Image], - ] = None, - mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[ - Union[torch.Generator, List[torch.Generator]] - ] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[ - Callable[[int, int, torch.FloatTensor], None] - ] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - controlnet_conditioning_scale: float = 1.0, - ): - r""" - Function invoked when calling the pipeline for generation. - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - control_image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]` or `List[PIL.Image.Image]`): - The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If - the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. PIL.Image.Image` can - also be accepted as an image. The control image is automatically resized to fit the output image. - mask_image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted - to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) - instead of 3, so the expected shape would be `(B, H, W, 1)`. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. 
- guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttnProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - controlnet_conditioning_scale (`float`, *optional*, defaults to 1.0): - The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added - to the residual in the original unet. 
- Examples: - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height, width = self._default_height_width(height, width, control_image) - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - control_image, - height, - width, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare image - control_image = self.prepare_image( - control_image, - width, - height, - batch_size * num_images_per_prompt, - num_images_per_prompt, - device, - self.controlnet.dtype, - ) - - if do_classifier_free_guidance: - control_image = torch.cat([control_image] * 2) - - # 5. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 6. Prepare latent variables - num_channels_latents = self.controlnet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # EXTRA: prepare mask latents - mask, masked_image = prepare_mask_and_masked_image(image, mask_image) - mask, masked_image_latents = self.prepare_mask_latents( - mask, - masked_image, - batch_size * num_images_per_prompt, - height, - width, - prompt_embeds.dtype, - device, - generator, - do_classifier_free_guidance, - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. 
Denoising loop - num_warmup_steps = ( - len(timesteps) - num_inference_steps * self.scheduler.order - ) - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = ( - torch.cat([latents] * 2) - if do_classifier_free_guidance - else latents - ) - latent_model_input = self.scheduler.scale_model_input( - latent_model_input, t - ) - - down_block_res_samples, mid_block_res_sample = self.controlnet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - controlnet_cond=control_image, - return_dict=False, - ) - - down_block_res_samples = [ - down_block_res_sample * controlnet_conditioning_scale - for down_block_res_sample in down_block_res_samples - ] - mid_block_res_sample *= controlnet_conditioning_scale - - # predict the noise residual - latent_model_input = torch.cat( - [latent_model_input, mask, masked_image_latents], dim=1 - ) - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - down_block_additional_residuals=down_block_res_samples, - mid_block_additional_residual=mid_block_res_sample, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * ( - noise_pred_text - noise_pred_uncond - ) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, t, latents, **extra_step_kwargs - ).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ( - (i + 1) > num_warmup_steps - and (i + 1) % self.scheduler.order == 0 - ): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # If we do sequential model offloading, let's offload unet and controlnet - # manually for max memory savings - if ( - hasattr(self, "final_offload_hook") - and self.final_offload_hook is not None - ): - self.unet.to("cpu") - self.controlnet.to("cpu") - torch.cuda.empty_cache() - - if output_type == "latent": - image = latents - has_nsfw_concept = None - elif output_type == "pil": - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker( - image, device, prompt_embeds.dtype - ) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - else: - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. 
Run safety checker - image, has_nsfw_concept = self.run_safety_checker( - image, device, prompt_embeds.dtype - ) - - # Offload last model to CPU - if ( - hasattr(self, "final_offload_hook") - and self.final_offload_hook is not None - ): - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput( - images=image, nsfw_content_detected=has_nsfw_concept - ) diff --git a/spaces/magicr/BuboGPT/imagebind/models/__init__.py b/spaces/magicr/BuboGPT/imagebind/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/__init__.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/__init__.py deleted file mode 100644 index cfb1e4d7bb221c429082bd389d9140e5b1cc07b0..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/archs/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -import importlib -from copy import deepcopy -from os import path as osp - -from basicsr.utils import get_root_logger, scandir -from basicsr.utils.registry import ARCH_REGISTRY - -__all__ = ['build_network'] - -# automatically scan and import arch modules for registry -# scan all the files under the 'archs' folder and collect files ending with -# '_arch.py' -arch_folder = osp.dirname(osp.abspath(__file__)) -arch_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(arch_folder) if v.endswith('_arch.py')] -# import all the arch modules -_arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames] - - -def build_network(opt): - opt = deepcopy(opt) - network_type = opt.pop('type') - net = ARCH_REGISTRY.get(network_type)(**opt) - logger = get_root_logger() - logger.info(f'Network [{net.__class__.__name__}] is created.') - return net diff --git a/spaces/manishjaiswal/07-GraphViz-PyDeck-Map-AIUIUX-Demo/README.md b/spaces/manishjaiswal/07-GraphViz-PyDeck-Map-AIUIUX-Demo/README.md deleted file mode 100644 index 78cd75a0ffac27f4e273da8ab7af55e9eab7cb96..0000000000000000000000000000000000000000 --- a/spaces/manishjaiswal/07-GraphViz-PyDeck-Map-AIUIUX-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 07 GraphViz PyDeck Map AIUIUX Demo -emoji: 👀 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/matthoffner/AudioCraft_Plus/docs/TRAINING.md b/spaces/matthoffner/AudioCraft_Plus/docs/TRAINING.md deleted file mode 100644 index 148de295f2ddfed2e4e893576bf31e1485038b8e..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/docs/TRAINING.md +++ /dev/null @@ -1,312 +0,0 @@ -# AudioCraft training pipelines - -AudioCraft training pipelines are built on top of PyTorch as our core deep learning library -and [Flashy](https://github.com/facebookresearch/flashy) as our training pipeline design library, -and [Dora](https://github.com/facebookresearch/dora) as our experiment manager. -AudioCraft training pipelines are designed to be research and experiment-friendly. - - -## Environment setup - -For the base installation, follow the instructions from the [README.md](../README.md). -Below are some additional instructions for setting up environment to train new models. 
-
-### Team and cluster configuration
-
-In order to support multiple teams and clusters, AudioCraft uses an environment configuration.
-The team configuration allows specifying cluster-specific configurations (e.g. SLURM configuration),
-or convenient mapping of paths between the supported environments.
-
-Each team can have a yaml file under the [configuration folder](../config). To select a team, set the
-`AUDIOCRAFT_TEAM` environment variable to a valid team name (e.g. `labs` or `default`):
-```shell
-conda env config vars set AUDIOCRAFT_TEAM=default
-```
-
-Alternatively, you can add it to your `.bashrc`:
-```shell
-export AUDIOCRAFT_TEAM=default
-```
-
-If not defined, the environment will default to the `default` team.
-
-The cluster is automatically detected, but it is also possible to override it by setting
-the `AUDIOCRAFT_CLUSTER` environment variable.
-
-Based on this team and cluster, the environment is then configured with:
-* The dora experiment outputs directory.
-* The available slurm partitions: categorized by global and team.
-* A shared reference directory: In order to facilitate sharing research models while remaining
-agnostic to the compute cluster used, we created the `//reference` symbol that can be used in
-YAML config to point to a defined reference folder containing shared checkpoints
-(e.g. baselines, models for evaluation...).
-
-**Important:** The default output dir for trained models and checkpoints is under `/tmp/`. This is suitable
-only for quick testing. If you are doing anything serious, you MUST edit the file `default.yaml` and
-properly set the `dora_dir` entries.
-
-#### Overriding environment configurations
-
-You can set the following environment variables to bypass the team's environment configuration:
-* `AUDIOCRAFT_CONFIG`: absolute path to a team config yaml file.
-* `AUDIOCRAFT_DORA_DIR`: absolute path to a custom dora directory.
-* `AUDIOCRAFT_REFERENCE_DIR`: absolute path to the shared reference directory.
-
-## Training pipelines
-
-Each task supported in AudioCraft has its own training pipeline and dedicated solver.
-Learn more about solvers and key designs around the AudioCraft training pipeline below.
-Please refer to the documentation of each task and model for specific information on a given task.
-
-
-### Solvers
-
-The core training component in AudioCraft is the solver. A solver holds the definition
-of how to solve a given task: it implements the training pipeline logic, combining the datasets,
-model, optimization criterion and components, and the full training loop. We refer the reader
-to [Flashy](https://github.com/facebookresearch/flashy) for core principles around solvers.
-
-AudioCraft proposes an initial solver, the `StandardSolver`, which is used as the base implementation
-for downstream solvers. This standard solver provides a nice base management of logging,
-checkpoint loading/saving, XP restoration, etc. on top of the base Flashy implementation.
-In AudioCraft, we made the assumption that all tasks follow the same set of stages:
-train, valid, evaluate and generate, each relying on a dedicated dataset.
-
-Each solver is responsible for defining the task to solve and the associated stages
-of the training loop in order to leave the full ownership of the training pipeline
-to the researchers. This includes loading the datasets, building the model and
-optimisation components, registering them and defining the execution of each stage. 
-To create a new solver for a given task, one should extend the StandardSolver -and define each stage of the training loop. One can further customise its own solver -starting from scratch instead of inheriting from the standard solver. - -```python -from . import base -from .. import optim - - -class MyNewSolver(base.StandardSolver): - - def __init__(self, cfg: omegaconf.DictConfig): - super().__init__(cfg) - # one can add custom attributes to the solver - self.criterion = torch.nn.L1Loss() - - def best_metric(self): - # here optionally specify which metric to use to keep track of best state - return 'loss' - - def build_model(self): - # here you can instantiate your models and optimization related objects - # this method will be called by the StandardSolver init method - self.model = ... - # the self.cfg attribute contains the raw configuration - self.optimizer = optim.build_optimizer(self.model.parameters(), self.cfg.optim) - # don't forget to register the states you'd like to include in your checkpoints! - self.register_stateful('model', 'optimizer') - # keep the model best state based on the best value achieved at validation for the given best_metric - self.register_best('model') - # if you want to add EMA around the model - self.register_ema('model') - - def build_dataloaders(self): - # here you can instantiate your dataloaders - # this method will be called by the StandardSolver init method - self.dataloaders = ... - - ... - - # For both train and valid stages, the StandardSolver relies on - # a share common_train_valid implementation that is in charge of - # accessing the appropriate loader, iterate over the data up to - # the specified number of updates_per_epoch, run the ``run_step`` - # function that you need to implement to specify the behavior - # and finally update the EMA and collect the metrics properly. - @abstractmethod - def run_step(self, idx: int, batch: tp.Any, metrics: dict): - """Perform one training or valid step on a given batch. - """ - ... # provide your implementation of the solver over a batch - - def train(self): - """Train stage. - """ - return self.common_train_valid('train') - - def valid(self): - """Valid stage. - """ - return self.common_train_valid('valid') - - @abstractmethod - def evaluate(self): - """Evaluate stage. - """ - ... # provide your implementation here! - - @abstractmethod - def generate(self): - """Generate stage. - """ - ... # provide your implementation here! -``` - -### About Epochs - -AudioCraft Solvers uses the concept of Epoch. One epoch doesn't necessarily mean one pass over the entire -dataset, but instead represent the smallest amount of computation that we want to work with before checkpointing. -Typically, we find that having an Epoch time around 30min is ideal both in terms of safety (checkpointing often enough) -and getting updates often enough. One Epoch is at least a `train` stage that lasts for `optim.updates_per_epoch` (2000 by default), -and a `valid` stage. You can control how long the valid stage takes with `dataset.valid.num_samples`. -Other stages (`evaluate`, `generate`) will only happen every X epochs, as given by `evaluate.every` and `generate.every`). - - -### Models - -In AudioCraft, a model is a container object that wraps one or more torch modules together -with potential processing logic to use in a solver. For example, a model would wrap an encoder module, -a quantisation bottleneck module, a decoder and some tensor processing logic. 
Each of the previous components
-can be considered a small « model unit » on its own, but the container model is a practical component
-to manipulate and train a set of modules together.
-
-### Datasets
-
-See the [dedicated documentation on datasets](./DATASETS.md).
-
-### Metrics
-
-See the [dedicated documentation on metrics](./METRICS.md).
-
-### Conditioners
-
-AudioCraft language models can be conditioned in various ways and the codebase offers a modular implementation
-of different conditioners that can potentially be combined.
-Learn more in the [dedicated documentation on conditioning](./CONDITIONING.md).
-
-### Configuration
-
-AudioCraft's configuration is defined in yaml files and the framework relies on
-[hydra](https://hydra.cc/docs/intro/) and [omegaconf](https://omegaconf.readthedocs.io/) to parse
-and manipulate the configuration through Dora.
-
-##### :warning: Important considerations around configurations
-
-Our configuration management relies on Hydra and the concept of group configs to structure
-and compose configurations. Updating the root default configuration files will then have
-an impact on all solvers and tasks.
-**One should never change the default configuration files. Instead, use Hydra config groups to store custom configurations.**
-Once this configuration is created and used for running experiments, you should not edit it anymore.
-
-Note that as we are using Dora as our experiment manager, all our experiment tracking is based on
-signatures computed from the delta between configurations.
-**One must therefore ensure backward compatibility of the configuration at all times.**
-See [Dora's README](https://github.com/facebookresearch/dora) and the
-[section below introducing Dora](#running-experiments-with-dora).
-
-##### Configuration structure
-
-The configuration is organized in config groups:
-* `conditioner`: default values for conditioning modules.
-* `dset`: contains all data source related information (paths to manifest files
-and metadata for a given dataset).
-* `model`: contains configuration for each model defined in AudioCraft and configurations
-for different variants of models.
-* `solver`: contains the default configuration for each solver as well as configuration
-for each solver task, combining all the above components.
-* `teams`: contains the cluster configuration per team. See environment setup for more details.
-
-The `config.yaml` file is the main configuration that composes the above groups
-and contains the default configuration for AudioCraft.
-
-##### Solver's core configuration structure
-
-The core configuration structure shared across solvers is available in `solvers/default.yaml`.
-
-##### Other configuration modules
-
-AudioCraft configuration contains the different setups we used for our research and publications.
-
-## Running experiments with Dora
-
-### Launching jobs
-
-Try launching jobs for different tasks locally with dora run:
-
-```shell
-# run compression task with lightweight encodec
-dora run solver=compression/debug
-```
-
-Most of the time, the jobs are launched through dora grids, for example:
-
-```shell
-# run compression task through debug grid
-dora grid compression.debug
-```
-
-Learn more about running experiments with Dora below.
-
-### A small introduction to Dora
-
-[Dora](https://github.com/facebookresearch/dora) is the experiment manager tool used in AudioCraft.
-Check out the README to learn how Dora works. 
Here is a quick summary of what to know:
-* An XP is a unique set of hyper-parameters with a given signature. The signature is a hash
-of those hyper-parameters. We always refer to an XP with its signature, e.g. 9357e12e. We will see
-later that one can retrieve the hyper-params and re-run it in a single command.
-* In fact, the hash is defined as a delta between the base config and the one obtained
-with the config overrides you passed from the command line. This means you must never change
-the `conf/**.yaml` files directly, except for editing things like paths. Changing the default values
-in the config files means the XP signature won't reflect that change, and wrong checkpoints might be reused.
-I know, this is annoying, but the reason is that otherwise, any change to the config file would mean
-that all XPs run so far would see their signature change.
-
-#### Dora commands
-
-```shell
-dora info -f 81de367c  # this will show the hyper-parameters used by a specific XP.
-                       # Be careful: some overrides might be present twice, and the right-most one
-                       # will give you the right value for it.
-
-dora run -d -f 81de367c  # run an XP with the hyper-parameters from XP 81de367c.
-                         # `-d` is for distributed, it will use all available GPUs.
-
-dora run -d -f 81de367c dataset.batch_size=32  # start from the config of XP 81de367c but change some hyper-params.
-                                               # This will give you a new XP with a new signature (e.g. 3fe9c332).
-
-dora info -f SIG -t  # will tail the log (if the XP has been scheduled).
-# if you need to access the logs of the process for rank > 0, in particular because a crash didn't happen in the main
-# process, then use `dora info -f SIG` to get the main log name (it ends with something like `/5037674_0_0_log.out`)
-# and the log for worker K can be accessed as `/5037674_0_{K}_log.out`.
-# This is only for scheduled jobs; for local distributed runs with `-d`, go into the XP folder
-# and look for `worker_{K}.log` logs.
-```
-
-An XP runs from a specific folder based on its signature, under the
-`//experiments/audiocraft/outputs/` folder.
-You can safely interrupt a training run and resume it; it will reuse any existing checkpoint,
-as it runs from the same folder. If you made some change to the code and need to ignore
-a previous checkpoint, you can use `dora run --clear [RUN ARGS]`.
-
-If you have a Slurm cluster, you can also use the dora grid command, e.g.
-
-```shell
-# run a dummy grid located at `audiocraft/grids/my_grid_folder/my_grid_name.py`
-dora grid my_grid_folder.my_grid_name
-# Running the following will simply display the grid and also initialize the Dora experiments database.
-# You can then simply refer to a config using its signature (e.g. as `dora run -f SIG`).
-dora grid my_grid_folder.my_grid_name --dry_run --init
-```
-
-Please refer to the [Dora documentation](https://github.com/facebookresearch/dora) for more information.
-
-
-#### Clearing up past experiments
-
-```shell
-# This will cancel all the XPs and delete their folder and checkpoints.
-# It will then reschedule them starting from scratch.
-dora grid my_grid_folder.my_grid_name --clear
-# The following will delete the folder and checkpoint for a single XP,
-# and then run it afresh. 
-dora run [-f BASE_SIG] [ARGS] --clear -``` diff --git a/spaces/maxmax20160403/vits_chinese/text/__init__.py b/spaces/maxmax20160403/vits_chinese/text/__init__.py deleted file mode 100644 index f1853227f795a4e7308ac8e9e2b0f2713c223dc9..0000000000000000000000000000000000000000 --- a/spaces/maxmax20160403/vits_chinese/text/__init__.py +++ /dev/null @@ -1,447 +0,0 @@ -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def cleaned_text_to_sequence(cleaned_text): - """Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - """ - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text.split()] - return sequence - - -def sequence_to_text(sequence): - """Converts a sequence of IDs back to a string""" - result = "" - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -pinyin_dict = { - "a": ("^", "a"), - "ai": ("^", "ai"), - "an": ("^", "an"), - "ang": ("^", "ang"), - "ao": ("^", "ao"), - "ba": ("b", "a"), - "bai": ("b", "ai"), - "ban": ("b", "an"), - "bang": ("b", "ang"), - "bao": ("b", "ao"), - "be": ("b", "e"), - "bei": ("b", "ei"), - "ben": ("b", "en"), - "beng": ("b", "eng"), - "bi": ("b", "i"), - "bian": ("b", "ian"), - "biao": ("b", "iao"), - "bie": ("b", "ie"), - "bin": ("b", "in"), - "bing": ("b", "ing"), - "bo": ("b", "o"), - "bu": ("b", "u"), - "ca": ("c", "a"), - "cai": ("c", "ai"), - "can": ("c", "an"), - "cang": ("c", "ang"), - "cao": ("c", "ao"), - "ce": ("c", "e"), - "cen": ("c", "en"), - "ceng": ("c", "eng"), - "cha": ("ch", "a"), - "chai": ("ch", "ai"), - "chan": ("ch", "an"), - "chang": ("ch", "ang"), - "chao": ("ch", "ao"), - "che": ("ch", "e"), - "chen": ("ch", "en"), - "cheng": ("ch", "eng"), - "chi": ("ch", "iii"), - "chong": ("ch", "ong"), - "chou": ("ch", "ou"), - "chu": ("ch", "u"), - "chua": ("ch", "ua"), - "chuai": ("ch", "uai"), - "chuan": ("ch", "uan"), - "chuang": ("ch", "uang"), - "chui": ("ch", "uei"), - "chun": ("ch", "uen"), - "chuo": ("ch", "uo"), - "ci": ("c", "ii"), - "cong": ("c", "ong"), - "cou": ("c", "ou"), - "cu": ("c", "u"), - "cuan": ("c", "uan"), - "cui": ("c", "uei"), - "cun": ("c", "uen"), - "cuo": ("c", "uo"), - "da": ("d", "a"), - "dai": ("d", "ai"), - "dan": ("d", "an"), - "dang": ("d", "ang"), - "dao": ("d", "ao"), - "de": ("d", "e"), - "dei": ("d", "ei"), - "den": ("d", "en"), - "deng": ("d", "eng"), - "di": ("d", "i"), - "dia": ("d", "ia"), - "dian": ("d", "ian"), - "diao": ("d", "iao"), - "die": ("d", "ie"), - "ding": ("d", "ing"), - "diu": ("d", "iou"), - "dong": ("d", "ong"), - "dou": ("d", "ou"), - "du": ("d", "u"), - "duan": ("d", "uan"), - "dui": ("d", "uei"), - "dun": ("d", "uen"), - "duo": ("d", "uo"), - "e": ("^", "e"), - "ei": ("^", "ei"), - "en": ("^", "en"), - "ng": ("^", "en"), - "eng": ("^", "eng"), - "er": ("^", "er"), - "fa": ("f", "a"), - "fan": ("f", "an"), - "fang": ("f", "ang"), - "fei": ("f", "ei"), - "fen": ("f", "en"), - "feng": ("f", "eng"), - "fo": ("f", "o"), - "fou": ("f", "ou"), - "fu": ("f", "u"), - "ga": ("g", "a"), - "gai": ("g", "ai"), - "gan": ("g", "an"), - "gang": ("g", "ang"), - "gao": ("g", "ao"), - "ge": ("g", "e"), - "gei": ("g", "ei"), - "gen": ("g", "en"), - "geng": ("g", "eng"), - "gong": ("g", "ong"), - "gou": ("g", "ou"), - "gu": ("g", "u"), - "gua": 
("g", "ua"), - "guai": ("g", "uai"), - "guan": ("g", "uan"), - "guang": ("g", "uang"), - "gui": ("g", "uei"), - "gun": ("g", "uen"), - "guo": ("g", "uo"), - "ha": ("h", "a"), - "hai": ("h", "ai"), - "han": ("h", "an"), - "hang": ("h", "ang"), - "hao": ("h", "ao"), - "he": ("h", "e"), - "hei": ("h", "ei"), - "hen": ("h", "en"), - "heng": ("h", "eng"), - "hong": ("h", "ong"), - "hou": ("h", "ou"), - "hu": ("h", "u"), - "hua": ("h", "ua"), - "huai": ("h", "uai"), - "huan": ("h", "uan"), - "huang": ("h", "uang"), - "hui": ("h", "uei"), - "hun": ("h", "uen"), - "huo": ("h", "uo"), - "ji": ("j", "i"), - "jia": ("j", "ia"), - "jian": ("j", "ian"), - "jiang": ("j", "iang"), - "jiao": ("j", "iao"), - "jie": ("j", "ie"), - "jin": ("j", "in"), - "jing": ("j", "ing"), - "jiong": ("j", "iong"), - "jiu": ("j", "iou"), - "ju": ("j", "v"), - "juan": ("j", "van"), - "jue": ("j", "ve"), - "jun": ("j", "vn"), - "ka": ("k", "a"), - "kai": ("k", "ai"), - "kan": ("k", "an"), - "kang": ("k", "ang"), - "kao": ("k", "ao"), - "ke": ("k", "e"), - "kei": ("k", "ei"), - "ken": ("k", "en"), - "keng": ("k", "eng"), - "kong": ("k", "ong"), - "kou": ("k", "ou"), - "ku": ("k", "u"), - "kua": ("k", "ua"), - "kuai": ("k", "uai"), - "kuan": ("k", "uan"), - "kuang": ("k", "uang"), - "kui": ("k", "uei"), - "kun": ("k", "uen"), - "kuo": ("k", "uo"), - "la": ("l", "a"), - "lai": ("l", "ai"), - "lan": ("l", "an"), - "lang": ("l", "ang"), - "lao": ("l", "ao"), - "le": ("l", "e"), - "lei": ("l", "ei"), - "leng": ("l", "eng"), - "li": ("l", "i"), - "lia": ("l", "ia"), - "lian": ("l", "ian"), - "liang": ("l", "iang"), - "liao": ("l", "iao"), - "lie": ("l", "ie"), - "lin": ("l", "in"), - "ling": ("l", "ing"), - "liu": ("l", "iou"), - "lo": ("l", "o"), - "long": ("l", "ong"), - "lou": ("l", "ou"), - "lu": ("l", "u"), - "lv": ("l", "v"), - "luan": ("l", "uan"), - "lve": ("l", "ve"), - "lue": ("l", "ve"), - "lun": ("l", "uen"), - "luo": ("l", "uo"), - "ma": ("m", "a"), - "mai": ("m", "ai"), - "man": ("m", "an"), - "mang": ("m", "ang"), - "mao": ("m", "ao"), - "me": ("m", "e"), - "mei": ("m", "ei"), - "men": ("m", "en"), - "meng": ("m", "eng"), - "mi": ("m", "i"), - "mian": ("m", "ian"), - "miao": ("m", "iao"), - "mie": ("m", "ie"), - "min": ("m", "in"), - "ming": ("m", "ing"), - "miu": ("m", "iou"), - "mo": ("m", "o"), - "mou": ("m", "ou"), - "mu": ("m", "u"), - "na": ("n", "a"), - "nai": ("n", "ai"), - "nan": ("n", "an"), - "nang": ("n", "ang"), - "nao": ("n", "ao"), - "ne": ("n", "e"), - "nei": ("n", "ei"), - "nen": ("n", "en"), - "neng": ("n", "eng"), - "ni": ("n", "i"), - "nia": ("n", "ia"), - "nian": ("n", "ian"), - "niang": ("n", "iang"), - "niao": ("n", "iao"), - "nie": ("n", "ie"), - "nin": ("n", "in"), - "ning": ("n", "ing"), - "niu": ("n", "iou"), - "nong": ("n", "ong"), - "nou": ("n", "ou"), - "nu": ("n", "u"), - "nv": ("n", "v"), - "nuan": ("n", "uan"), - "nve": ("n", "ve"), - "nue": ("n", "ve"), - "nuo": ("n", "uo"), - "o": ("^", "o"), - "ou": ("^", "ou"), - "pa": ("p", "a"), - "pai": ("p", "ai"), - "pan": ("p", "an"), - "pang": ("p", "ang"), - "pao": ("p", "ao"), - "pe": ("p", "e"), - "pei": ("p", "ei"), - "pen": ("p", "en"), - "peng": ("p", "eng"), - "pi": ("p", "i"), - "pian": ("p", "ian"), - "piao": ("p", "iao"), - "pie": ("p", "ie"), - "pin": ("p", "in"), - "ping": ("p", "ing"), - "po": ("p", "o"), - "pou": ("p", "ou"), - "pu": ("p", "u"), - "qi": ("q", "i"), - "qia": ("q", "ia"), - "qian": ("q", "ian"), - "qiang": ("q", "iang"), - "qiao": ("q", "iao"), - "qie": ("q", "ie"), - "qin": ("q", "in"), - "qing": ("q", "ing"), 
- "qiong": ("q", "iong"), - "qiu": ("q", "iou"), - "qu": ("q", "v"), - "quan": ("q", "van"), - "que": ("q", "ve"), - "qun": ("q", "vn"), - "ran": ("r", "an"), - "rang": ("r", "ang"), - "rao": ("r", "ao"), - "re": ("r", "e"), - "ren": ("r", "en"), - "reng": ("r", "eng"), - "ri": ("r", "iii"), - "rong": ("r", "ong"), - "rou": ("r", "ou"), - "ru": ("r", "u"), - "rua": ("r", "ua"), - "ruan": ("r", "uan"), - "rui": ("r", "uei"), - "run": ("r", "uen"), - "ruo": ("r", "uo"), - "sa": ("s", "a"), - "sai": ("s", "ai"), - "san": ("s", "an"), - "sang": ("s", "ang"), - "sao": ("s", "ao"), - "se": ("s", "e"), - "sen": ("s", "en"), - "seng": ("s", "eng"), - "sha": ("sh", "a"), - "shai": ("sh", "ai"), - "shan": ("sh", "an"), - "shang": ("sh", "ang"), - "shao": ("sh", "ao"), - "she": ("sh", "e"), - "shei": ("sh", "ei"), - "shen": ("sh", "en"), - "sheng": ("sh", "eng"), - "shi": ("sh", "iii"), - "shou": ("sh", "ou"), - "shu": ("sh", "u"), - "shua": ("sh", "ua"), - "shuai": ("sh", "uai"), - "shuan": ("sh", "uan"), - "shuang": ("sh", "uang"), - "shui": ("sh", "uei"), - "shun": ("sh", "uen"), - "shuo": ("sh", "uo"), - "si": ("s", "ii"), - "song": ("s", "ong"), - "sou": ("s", "ou"), - "su": ("s", "u"), - "suan": ("s", "uan"), - "sui": ("s", "uei"), - "sun": ("s", "uen"), - "suo": ("s", "uo"), - "ta": ("t", "a"), - "tai": ("t", "ai"), - "tan": ("t", "an"), - "tang": ("t", "ang"), - "tao": ("t", "ao"), - "te": ("t", "e"), - "tei": ("t", "ei"), - "teng": ("t", "eng"), - "ti": ("t", "i"), - "tian": ("t", "ian"), - "tiao": ("t", "iao"), - "tie": ("t", "ie"), - "ting": ("t", "ing"), - "tong": ("t", "ong"), - "tou": ("t", "ou"), - "tu": ("t", "u"), - "tuan": ("t", "uan"), - "tui": ("t", "uei"), - "tun": ("t", "uen"), - "tuo": ("t", "uo"), - "wa": ("^", "ua"), - "wai": ("^", "uai"), - "wan": ("^", "uan"), - "wang": ("^", "uang"), - "wei": ("^", "uei"), - "wen": ("^", "uen"), - "weng": ("^", "ueng"), - "wo": ("^", "uo"), - "wu": ("^", "u"), - "xi": ("x", "i"), - "xia": ("x", "ia"), - "xian": ("x", "ian"), - "xiang": ("x", "iang"), - "xiao": ("x", "iao"), - "xie": ("x", "ie"), - "xin": ("x", "in"), - "xing": ("x", "ing"), - "xiong": ("x", "iong"), - "xiu": ("x", "iou"), - "xu": ("x", "v"), - "xuan": ("x", "van"), - "xue": ("x", "ve"), - "xun": ("x", "vn"), - "ya": ("^", "ia"), - "yan": ("^", "ian"), - "yang": ("^", "iang"), - "yao": ("^", "iao"), - "ye": ("^", "ie"), - "yi": ("^", "i"), - "yin": ("^", "in"), - "ying": ("^", "ing"), - "yo": ("^", "iou"), - "yong": ("^", "iong"), - "you": ("^", "iou"), - "yu": ("^", "v"), - "yuan": ("^", "van"), - "yue": ("^", "ve"), - "yun": ("^", "vn"), - "za": ("z", "a"), - "zai": ("z", "ai"), - "zan": ("z", "an"), - "zang": ("z", "ang"), - "zao": ("z", "ao"), - "ze": ("z", "e"), - "zei": ("z", "ei"), - "zen": ("z", "en"), - "zeng": ("z", "eng"), - "zha": ("zh", "a"), - "zhai": ("zh", "ai"), - "zhan": ("zh", "an"), - "zhang": ("zh", "ang"), - "zhao": ("zh", "ao"), - "zhe": ("zh", "e"), - "zhei": ("zh", "ei"), - "zhen": ("zh", "en"), - "zheng": ("zh", "eng"), - "zhi": ("zh", "iii"), - "zhong": ("zh", "ong"), - "zhou": ("zh", "ou"), - "zhu": ("zh", "u"), - "zhua": ("zh", "ua"), - "zhuai": ("zh", "uai"), - "zhuan": ("zh", "uan"), - "zhuang": ("zh", "uang"), - "zhui": ("zh", "uei"), - "zhun": ("zh", "uen"), - "zhuo": ("zh", "uo"), - "zi": ("z", "ii"), - "zong": ("z", "ong"), - "zou": ("z", "ou"), - "zu": ("z", "u"), - "zuan": ("z", "uan"), - "zui": ("z", "uei"), - "zun": ("z", "uen"), - "zuo": ("z", "uo"), -} diff --git a/spaces/mediaparty2023/test-autotrain/Dockerfile 
b/spaces/mediaparty2023/test-autotrain/Dockerfile deleted file mode 100644 index a4c8b4f88ec3000f75b1413a72ba55e294692201..0000000000000000000000000000000000000000 --- a/spaces/mediaparty2023/test-autotrain/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/autotrain-advanced:latest -CMD autotrain setup && autotrain app --port 7860 diff --git a/spaces/merve/anonymization/public/measuring-fairness/index.html b/spaces/merve/anonymization/public/measuring-fairness/index.html deleted file mode 100644 index 4260ecaa54d3d68181d664c9f4c4ddb13d215577..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/measuring-fairness/index.html +++ /dev/null @@ -1,298 +0,0 @@ - - - - - - - - - - - - - - - - - - Measuring Fairness - - - - - - - - - - - - - - - -
    - -
    - -

    Measuring Fairness

    -
    There are multiple ways to measure accuracy. No matter how we build our model, accuracy across these measures will vary when applied to different groups of people.
    - - - - - -
    -
    -
    - - -
    -

    Measuring Fairness

    - -

    How do you make sure a model works equally well for different groups of people? It turns out that in many situations, this is harder than you might think. - -

    The problem is that there are different ways to measure the accuracy of a model, and often it's mathematically impossible for them all to be equal across groups. - -

    We'll illustrate how this happens by creating a (fake) medical model to screen these people for a disease. -

    - - -
    -

    Ground Truth

    - -

    About half of these people actually have the disease a; half of them don't b. -

    - - -
    -

    Model Predictions

    - -

    In a perfect world, only sick people would test positive for the disease and only healthy people would test negative. -

    - - -
    -

    Model Mistakes

    - -

    But models and tests aren't perfect. - -

    The model might make a mistake and mark a sick person as healthy c. - -

    Or the opposite: marking a healthy person as sick f. -

    - - -

    Never Miss the Disease...

    - -

    If there's a simple follow-up test, we could have the model aggressively call close cases so it rarely misses the disease. - -

    We can quantify this by measuring the percentage of sick people a who test positive g. - -

    -
    - - -
    -

    ...Or Avoid Overcalling?

    - -

    On the other hand, if there isn't a secondary test, or the treatment uses a drug with a limited supply, we might care more about the percentage of people with positive tests who are actually sick g . - -

    - -

    These issues and trade-offs in model optimization aren't new, but they're brought into focus when we have the ability to fine-tune exactly how aggressively disease is diagnosed. - -

    - - Try adjusting how aggressive the model is in diagnosing the disease -
    - - -
    -

    Subgroup Analysis

    - -

    Things get even more complicated when we check if the model treats different groups fairly.¹ - -

    Whatever we decide on in terms of trade-offs between these metrics, we'd probably like them to be roughly even across different groups of people. - -

    If we're trying to evenly allocate resources, having the model miss more cases in children than adults would be bad! ² -

    - - -
    -

    Base Rates

    - -

    If you look carefully, you'll see that the disease is more prevalent in children. That is, the "base rate" of the disease is different across groups. - -

    The fact that the base rates are different makes the situation surprisingly tricky. For one thing, even though the test catches the same percentage of sick adults and sick children, an adult who tests positive is less likely to have the disease than a child who tests positive. -
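    As an illustration with made-up numbers: suppose 40 out of 100 children have the disease but only 20 out of 100 adults do, and the model catches 80% of sick people and wrongly flags 20% of well people in both groups. Among children that gives 32 correct positives and 12 false alarms, so about 73% of flagged children are actually sick; among adults it gives 16 correct positives and 16 false alarms, so only 50% of flagged adults are. -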

    - - -
    -

    Imbalanced Metrics

    - -

    Why is there a disparity in diagnosing between children and adults? There is a higher proportion of well adults, so mistakes in the test will cause more well adults to be marked "positive" than well children (and similarly with mistaken negatives). - -


    -
    - -

    To fix this, we could have the model take age into account. - -

    -
    -
    - -
    -

    Try adjusting the slider to make the model grade adults less aggressively than children.
    - -
    -

    This allows us to align one metric. But now adults who have the disease are less likely to be diagnosed with it! - -

    -
    -
    - -

    No matter how you move the sliders, you won't be able to make both metrics fair at once. It turns out this is inevitable any time the base rates are different, and the test isn't perfect. - -

    There are multiple ways to define fairness mathematically. It usually isn't possible to satisfy all of them.³ -

    -
    - - -
    -
    -
    - - -

    Conclusion

    - -

    Thankfully, the notion of fairness you choose to satisfy will depend on the context of your model, so while it may not be possible to satisfy every definition of fairness, you can focus on the notions of fairness that make sense for your use case. - -

    Even if fairness along every dimension isn't possible, we shouldn't stop checking for bias. The Hidden Bias explorable outlines different ways human bias can feed into an ML model. - -

    More Reading

    - -

    In some contexts, setting different thresholds for different populations might not be acceptable. Can you make AI fairer than a judge? explores an algorithm that can send people to jail. - -

    There are lots of different metrics you might use to determine if an algorithm is fair. Attacking discrimination with smarter machine learning shows how several of them work. Using Fairness Indicators in conjunction with the What-If Tool and other fairness tools, you can test your own model against commonly used fairness metrics. - -

    Machine learning practitioners use words like “recall” to describe the percentage of sick people who test positive. Check out the PAIR Guidebook Glossary to learn how to talk to the people building the models. - -

    Appendix

    - -

    ¹ This essay uses very academic, mathematical standards for fairness that don't encompass everything we might include in the colloquial meaning of fairness. There's a gap between the technical descriptions of algorithms here and the social context that they're deployed in. - -

    ² Sometimes we might care more about different error modes in different populations. If treatment is riskier for children, we'd probably want the model to be less aggressive in diagnosing. - -

    ³The above example assumes the model sorts and scores people based on how likely it is that they are sick. With complete control over the model's exact rate of under- and over-diagnosing in both groups, it's actually possible to align both of the metrics we've discussed so far. Try tweaking the model below to get both of them to line up. - -

    Adding a third metric, the percentage of well people a who test negative e, makes perfect fairness impossible. Can you see why all three metrics won't align unless the base rate of the disease is the same in both populations? - -
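    One way to see it: write r for the share of a group that is sick. The share of positive tests that are correct then works out to r × (share of sick who test positive) divided by [r × (share of sick who test positive) + (1 − r) × (share of well who test positive)]. If the first and third metrics are held equal across groups, this middle metric still moves whenever r moves — so all three can only line up when the base rates match, or when the test never flags a well person at all. -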

    - -
    Drag ⁠— to adjust model accuracy and ⁠| to adjust the occurrence of disease
    -
    - -

    Credits

    - -

    Adam Pearce // May 2020 - -

    Thanks to Carey Radebaugh, Dan Nanas, David Weinberger, Emily Denton, Emily Reif, Fernanda Viégas, Hal Abelson, James Wexler, Kristen Olson, Lucas Dixon, Mahima Pushkarna, Martin Wattenberg, Michael Terry, Rebecca Salois, Timnit Gebru, Tulsee Doshi, Yannick Assogba, Yoni Halpern, Zan Armstrong, and my other colleagues at Google for their help with this piece. - -

    Silhouettes from ProPublica's Wee People. - -

    More Explorables

    - -

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/dataset-worldviews/public/private-and-fair/top-bot-digits.js b/spaces/merve/dataset-worldviews/public/private-and-fair/top-bot-digits.js deleted file mode 100644 index bc2f85ec8cb3b5544245f159aa62ff2fbffbcbb5..0000000000000000000000000000000000000000 --- a/spaces/merve/dataset-worldviews/public/private-and-fair/top-bot-digits.js +++ /dev/null @@ -1,66 +0,0 @@ - -!(async function(){ - await util.getFile(`cns-cache/mnist_train_raw_3.npy`) - var digitMetadata = await util.getFile('mnist_train.csv') - var {byLabel} = util.decorateDigitMetadata(digitMetadata) - - var sel = d3.select('.top-bot-digits').html('') - .at({role: 'graphics-document', 'aria-label': `The twenty-five MNIST 3 digits most and least senstive to higher and lower privacy. The digits most sensitive to higher privacy are much more poorly drawn than the onces least sensitive to higher privacy.`}) - - var digitSel = sel.append('div') - var buttonSel = sel.append('div.digit-button-container') - .appendMany('div.button', d3.range(10)) - .text(d => d) - .on('click', d => drawClass(byLabel[d])) - - drawClass(byLabel[3]) - - async function drawClass(digitClass){ - buttonSel.classed('active', d => d == digitClass.key) - await util.getFile(`cns-cache/mnist_train_raw_${digitClass.key}.npy`) - - var nRows = 5 - var nCols = 5 - - var bot = _.sortBy(digitClass, d => +d.priv_order).slice(0, nRows*nCols) - var top = _.sortBy(digitClass, d => -d.priv_order).slice(0, nRows*nCols) - - digitSel.html('').append('div') - .st({maxWidth: 640, margin: '0 auto'}) - .appendMany('div', [bot, top]) - .st({display: 'inline-block'}) - .each(drawDigitBlock) - - - function drawDigitBlock(digits, isBot){ - var s = 2 - - var sel = d3.select(this).append('div') - - var c = d3.conventions({ - sel, - width: s*29*nCols, - height: s*29*nRows, - layers: 'cs', - margin: {top: 30, bottom: 10, right: 10, left: 10} - }) - - var ctx = c.layers[0] - - digits.forEach((d, i) => { - util.drawDigit( - ctx, - +d.i, - s, - (i % nCols)*s*29, - Math.floor(i/nCols)*s*29 - ) - }) - - c.svg.append('text') - .text(isBot ? 
'Least sensitive to higher privacy' : 'Most sensitive to higher privacy') - .at({dy: '-.4em', textAnchor: 'middle', x: c.width/2, fontWeight: 600, fontSize: 14}) - } - } - -})() \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/style.css b/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/style.css deleted file mode 100644 index cd53a13bbedb3698afcfd9d8e01fa5295b215bfa..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/style.css +++ /dev/null @@ -1,98 +0,0 @@ -body{ - font-family: menlo, Consolas, 'Lucida Console', monospace; - margin: 10px; - margin-left: 20px; - width: 1130px; - background: #fff; - margin-top: 30px; -} -.container{ - margin-top: 30px; -} - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -.axis{ - opacity: .7; -} - -text{ - /*pointer-events: none;*/ - text-shadow: 0 1.5px 0 #fff, 1.5px 0 0 #fff, 0 -1.5px 0 #fff, -1.5px 0 0 #fff; -} - - -#graph > div{ - /*display: inline-block;*/ -} - -.active path{ - stroke: #f0f; - /*stroke-width: 2;*/ - opacity: 1; -} -.active text{ - fill: #f0f; - opacity: 1 !important; - font-size: 14px; - -} - -p{ - max-width: 650px; -} - - -.bg-tick{ - stroke: #eee; -} - -.tick{ - display: none; -} - -text.tiny{ - font-size: 9px; - font-family: monospace; -} - -circle.sentence.active{ - fill: #f0f; -} - -.axis-label{ - /*font-weight: 600;*/ - font-size: 12px; - color: #000; -} \ No newline at end of file diff --git a/spaces/mjdolan/Holiday-StyleGAN-NADA/op/fused_act.py b/spaces/mjdolan/Holiday-StyleGAN-NADA/op/fused_act.py deleted file mode 100644 index 8459d510d7b79684779dfe47f5b46d81c94b4a4d..0000000000000000000000000000000000000000 --- a/spaces/mjdolan/Holiday-StyleGAN-NADA/op/fused_act.py +++ /dev/null @@ -1,86 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - 
def forward(ctx, input, bias, negative_slope, scale):
-        empty = input.new_empty(0)
-        out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
-        ctx.save_for_backward(out)
-        ctx.negative_slope = negative_slope
-        ctx.scale = scale
-
-        return out
-
-    @staticmethod
-    def backward(ctx, grad_output):
-        out, = ctx.saved_tensors
-
-        grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
-            grad_output, out, ctx.negative_slope, ctx.scale
-        )
-
-        return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
-    def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
-        super().__init__()
-
-        self.bias = nn.Parameter(torch.zeros(channel))
-        self.negative_slope = negative_slope
-        self.scale = scale
-
-    def forward(self, input):
-        return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
-    return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
diff --git a/spaces/mmcquade11/codex-reuters-summarization/app.py b/spaces/mmcquade11/codex-reuters-summarization/app.py
deleted file mode 100644
index 9d4a023e11dd6ed33910600cd2ae3ea8f9391b1c..0000000000000000000000000000000000000000
--- a/spaces/mmcquade11/codex-reuters-summarization/app.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import gradio as gr
-import torch
-from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
-
-tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-reuters-summarization-34018133")
-model = AutoModelForSeq2SeqLM.from_pretrained("mmcquade11/autonlp-reuters-summarization-34018133")
-
-def summarize(text):
-    input_ids = torch.tensor(tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0)
-    summary_ids = model.generate(input_ids, num_beams=4, max_length=100, early_stopping=True)
-    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
-
-def summarize_text(text):
-    return summarize(text)
-
-iface = gr.Interface(summarize_text, "textbox", "label")
-if __name__ == "__main__":
-    iface.launch()
diff --git a/spaces/mmecheri/Rakuten_Streamlit/page_descriptions/data_preprocessing_txt.md b/spaces/mmecheri/Rakuten_Streamlit/page_descriptions/data_preprocessing_txt.md
deleted file mode 100644
index ba68710d9417a4957b5ba9e33f01a66e4f31f269..0000000000000000000000000000000000000000
--- a/spaces/mmecheri/Rakuten_Streamlit/page_descriptions/data_preprocessing_txt.md
+++ /dev/null
@@ -1,37 +0,0 @@
-#### Text Data
-
->**Variables**
->>- The "**designation**" column contains no missing values
->>- The "**description**" column gives more information about the products, but has a large share of missing values (35%)
-
-For the text data, we:
->>>- first used the "**designation**" column on its own
->>>- then **merged** the two variables "description" and "designation" into a single variable called "**text**"
-
->**Text cleaning** (a sketch of this step is shown right after this list)
->>>- Convert all words to lowercase
->>>- Remove accents and HTML tags
->>>- Instantiate and remove stopwords (French, English and German)
->>>- Remove words with fewer than two letters 
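As an illustration of the cleaning step above, here is a minimal sketch of what such a function could look like. This is not the project's actual code: the function name, the regex-based HTML stripping, and the use of NLTK's stopword lists are assumptions made for the example.

```python
import re
import unicodedata

from nltk.corpus import stopwords  # assumed source of the French/English/German stopword lists

# Stopwords for the three languages mentioned above (requires `nltk.download("stopwords")`).
STOP_WORDS = set(
    stopwords.words("french") + stopwords.words("english") + stopwords.words("german")
)


def clean_text(text: str) -> str:
    """Lowercase, strip accents and HTML tags, drop stopwords and words with fewer than two letters."""
    text = text.lower()
    # Remove HTML tags.
    text = re.sub(r"<[^>]+>", " ", text)
    # Strip accents.
    text = "".join(
        c for c in unicodedata.normalize("NFKD", text) if not unicodedata.combining(c)
    )
    # Keep alphabetic tokens that are not stopwords and have at least two letters.
    tokens = re.findall(r"[a-z]+", text)
    tokens = [t for t in tokens if t not in STOP_WORDS and len(t) >= 2]
    return " ".join(tokens)
```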
-
->**Vector representation**
->>- **Machine Learning**: each text observation is vectorized with the ***TfidfVectorizer*** class, with ***max_features*** set to 5000.
>>- **Deep Learning**: we split the text into words with the ***Tokenizer*** class from «tf.keras.preprocessing.text», with a maximum number of words (***num_words***) of 20,000. We then set the maximum sequence length to 200.
-
-
-#### Image Data
-
-> We used a data generator (**ImageDataGenerator**, from the tensorflow.keras.preprocessing.image module).
-The data generator brings the following advantages:
->>- Enriching our dataset
->>- Reducing overfitting
->>- Easing the constraints related to the lack of compute resources
-
-
-
-
-
-
-
diff --git a/spaces/mmlab-ntu/Segment-Any-RGBD/tools/search_thr_ensemble_w.sh b/spaces/mmlab-ntu/Segment-Any-RGBD/tools/search_thr_ensemble_w.sh
deleted file mode 100644
index efdbd72dd1a6a9da96868688b0fd5530e956498a..0000000000000000000000000000000000000000
--- a/spaces/mmlab-ntu/Segment-Any-RGBD/tools/search_thr_ensemble_w.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-for MASK_THR in 0.35 0.4 0.45
-do
-  for ENSEMBLE_WEIGHT in 0.6 0.65 0.7 0.75 0.8
-  do
-    python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml \
-    MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth DATASETS.TEST \(\"ade20k_sem_seg_val\"\) \
-    MODEL.CLIP_ADAPTER.CLIP_ENSEMBLE_WEIGHT $ENSEMBLE_WEIGHT MODEL.CLIP_ADAPTER.MASK_THR $MASK_THR
-  done
-done
-
-
diff --git a/spaces/monra/freegpt-webui/g4f/Provider/Providers/helpers/phind.py b/spaces/monra/freegpt-webui/g4f/Provider/Providers/helpers/phind.py
deleted file mode 100644
index 70525d51d849c43bd1cf29c7f9b18f22bff1e982..0000000000000000000000000000000000000000
--- a/spaces/monra/freegpt-webui/g4f/Provider/Providers/helpers/phind.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import sys
-import json
-import datetime
-import urllib.parse
-
-from curl_cffi import requests
-
-config = json.loads(sys.argv[1])
-prompt = config['messages'][-1]['content']
-
-skill = 'expert' if config['model'] == 'gpt-4' else 'intermediate'
-
-json_data = json.dumps({
-    'question': prompt,
-    'options': {
-        'skill': skill,
-        'date': datetime.datetime.now().strftime('%d/%m/%Y'),
-        'language': 'en',
-        'detailed': True,
-        'creative': True,
-        'customLinks': []}}, separators=(',', ':'))
-
-headers = {
-    'Content-Type': 'application/json',
-    'Pragma': 'no-cache',
-    'Accept': '*/*',
-    'Sec-Fetch-Site': 'same-origin',
-    'Accept-Language': 'en-GB,en;q=0.9',
-    'Cache-Control': 'no-cache',
-    'Sec-Fetch-Mode': 'cors',
-    'Content-Length': str(len(json_data)),
-    'Origin': 'https://www.phind.com',
-    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15',
-    'Referer': f'https://www.phind.com/search?q={urllib.parse.quote(prompt)}&source=searchbox',
-    'Connection': 'keep-alive',
-    'Host': 'www.phind.com',
-    'Sec-Fetch-Dest': 'empty'
-}
-
-
-def output(chunk):
-    try:
-        if b'PHIND_METADATA' in chunk:
-            return
-
-        if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n':
-            chunk = b'data: \n\r\n\r\n'
-
-        chunk = chunk.decode()
-
-        chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n')
-        chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n')
-        chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '')
-
-        print(chunk, flush=True, end = '')
-
-    except json.decoder.JSONDecodeError:
-        pass
-
-while True:
-    try:
-        response = requests.post('https://www.phind.com/api/infer/answer',
-                                 headers=headers, data=json_data, content_callback=output, timeout=999999, 
impersonate='safari15_5') - - exit(0) - - except Exception as e: - print('an error occured, retrying... |', e, flush=True) - continue \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/constrained_decoding/tok.py b/spaces/mshukor/UnIVAL/fairseq/examples/constrained_decoding/tok.py deleted file mode 100644 index b1f888a8c0d1b8ec7174859476cc3222456e0d2c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/constrained_decoding/tok.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python3 -# -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -import sacremoses - - -def main(args): - """Tokenizes, preserving tabs""" - mt = sacremoses.MosesTokenizer(lang=args.lang) - - def tok(s): - return mt.tokenize(s, return_str=True) - - for line in sys.stdin: - parts = list(map(tok, line.split("\t"))) - print(*parts, sep="\t", flush=True) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("--lang", "-l", default="en") - parser.add_argument("--penn", "-p", action="store_true") - parser.add_argument("--fields", "-f", help="fields to tokenize") - args = parser.parse_args() - - main(args) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/sparse_multihead_attention.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/sparse_multihead_attention.py deleted file mode 100644 index 3cbd9d6785886e319aab0601517e27df733b6f97..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/modules/sparse_multihead_attention.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch - -from .multihead_attention import MultiheadAttention - - -class SparseMultiheadAttention(MultiheadAttention): - """Sparse Multi-Headed Attention. - - "Generating Long Sequences with Sparse Transformers". Implements - fixed factorized self attention, where l=stride and c=expressivity. - A(1) includes all words in the stride window and A(2) takes a summary of c - words from the end of each stride window. - If is_bidirectional=False, we do not include any words past the current word, - as in the paper. 
- """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - stride=32, - expressivity=8, - is_bidirectional=True, - ): - - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - encoder_decoder_attention, - ) - - self.is_bidirectional = is_bidirectional - self.stride = stride - self.expressivity = expressivity - assert self.stride > 0 and self.stride >= self.expressivity - - # Used for Ai(2) calculations - beginning of [l-c, l] range - def compute_checkpoint(self, word_index): - if word_index % self.stride == 0 and word_index != 0: - checkpoint_index = word_index - self.expressivity - else: - checkpoint_index = ( - math.floor(word_index / self.stride) * self.stride - + self.stride - - self.expressivity - ) - return checkpoint_index - - # Computes Ai(2) - def compute_subset_summaries(self, absolute_max): - checkpoint_index = self.compute_checkpoint(0) - subset_two = set() - while checkpoint_index <= absolute_max - 1: - summary = set( - range( - checkpoint_index, - min(checkpoint_index + self.expressivity + 1, absolute_max), - ) - ) - subset_two = subset_two.union(summary) - checkpoint_index = self.compute_checkpoint(checkpoint_index + self.stride) - return subset_two - - # Sparse Transformer Fixed Attention Pattern: https://arxiv.org/pdf/1904.10509.pdf - def compute_fixed_attention_subset(self, word_index, tgt_len): - # +1s account for range function; [min, max) -> [min, max] - if not self.is_bidirectional: - absolute_max = word_index + 1 - else: - absolute_max = tgt_len - - # Subset 1 - whole window - rounded_index = ( - math.floor((word_index + self.stride) / self.stride) * self.stride - ) - if word_index % self.stride == 0 and word_index != 0: - subset_one = set( - range(word_index - self.stride, min(absolute_max, word_index + 1)) - ) - else: - subset_one = set( - range( - max(0, rounded_index - self.stride), - min(absolute_max, rounded_index + 1), - ) - ) - - # Subset 2 - summary per window - # If bidirectional, subset 2 is the same for every index - subset_two = set() - if not self.is_bidirectional: - subset_two = self.compute_subset_summaries(absolute_max) - - return subset_one.union(subset_two) - - # Compute sparse mask - if bidirectional, can pre-compute and store - def buffered_sparse_mask(self, tensor, tgt_len, src_len): - assert tgt_len > self.stride - sparse_mask = torch.empty((tgt_len, src_len)).float().fill_(float("-inf")) - - # If bidirectional, subset 2 is the same for every index - subset_summaries = set() - if self.is_bidirectional: - subset_summaries = self.compute_subset_summaries(tgt_len) - - for i in range(tgt_len): - fixed_attention_subset = self.compute_fixed_attention_subset(i, tgt_len) - fixed_attention_subset = fixed_attention_subset.union(subset_summaries) - included_word_indices = torch.LongTensor(list(fixed_attention_subset)) - sparse_mask[i].index_fill_(0, included_word_indices, 0) - return sparse_mask.type_as(tensor) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - sparse_mask = self.buffered_sparse_mask(attn_weights, tgt_len, src_len) - sparse_mask = sparse_mask.unsqueeze(0).expand( - bsz * self.num_heads, tgt_len, src_len - ) - attn_weights += sparse_mask diff --git a/spaces/mshukor/UnIVAL/run_scripts/caption/scaling_best/unival_caption_stage_1.sh 
b/spaces/mshukor/UnIVAL/run_scripts/caption/scaling_best/unival_caption_stage_1.sh deleted file mode 100644 index beb1c2332a978f107af000ff58cef5db25e690d9..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/run_scripts/caption/scaling_best/unival_caption_stage_1.sh +++ /dev/null @@ -1,205 +0,0 @@ - - -# Number of GPUs per GPU worker -export GPUS_PER_NODE=8 -# Number of GPU workers, for single-worker training, please set to 1 -export NUM_NODES=$SLURM_NNODES -# The ip address of the rank-0 worker, for single-worker training, please set to localhost -master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) -export MASTER_ADDR=$master_addr - -# The port for communication -export MASTER_PORT=12350 -# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0 -export RANK=$SLURM_NODEID - -echo "MASTER_ADDR: $MASTER_ADDR" -echo "RANK :$RANK" -echo "NUM_NODES :$NUM_NODES" -echo "GPUS_PER_NODE :$GPUS_PER_NODE" - -export MIOPEN_USER_DB_PATH=/lus/home/NAT/gda2204/mshukor/.config/miopen_${MASTER_ADDR}_${SLURM_PROCID}/ - -echo "MIOPEN_USER_DB_PATH :$MIOPEN_USER_DB_PATH" - -num_workers=0 - - - -exp_name=unival_caption_stage_1 - - - -ofa_dir=/lus/home/NAT/gda2204/mshukor/code/unival -base_data_dir=/lus/scratch/NAT/gda2204/SHARED/data -base_log_dir=/work/NAT/gda2204/mshukor/logs - -save_base_log_dir=/lus/scratch/NAT/gda2204/SHARED/logs -save_dir=${save_base_log_dir}/ofa/checkpoints/caption/${exp_name} - -log_dir=${save_dir} - -mkdir -p $log_dir $save_dir - -bpe_dir=${ofa_dir}/utils/BPE -user_dir=${ofa_dir}/ofa_module - - - -image_dir=${base_data_dir} - - -data_dir=${base_data_dir}/ofa/caption_data -# data=${data_dir}/caption_stage1_train.tsv,${data_dir}/caption_val.tsv - -# Note: If you have shuffled the data in advance, please uncomment the line below. 
-data=${data_dir}/caption_stage1_train_1.tsv,${data_dir}/caption_stage1_train_2.tsv,${data_dir}/caption_stage1_train_3.tsv,${data_dir}/caption_stage1_train_4.tsv,${data_dir}/caption_stage1_train_5.tsv,${data_dir}/caption_stage1_train_6.tsv,${data_dir}/caption_stage1_train_7.tsv,${data_dir}/caption_stage1_train_8.tsv,${data_dir}/caption_stage1_train_9.tsv,${data_dir}/caption_stage1_train_10.tsv,${data_dir}/caption_val.tsv - - -eval_cider_cached=${data_dir}/cider_cached_tokens/coco-valid-words.p - - -restore_file=${base_log_dir}/ofa/checkpoints/pretrain/unival_s2_hs/checkpoint1.pt - - -lr=1e-5 - - -selected_cols=0,4,2 - -task=caption -arch=unival_base -pretrained_model= - - -criterion=adjust_label_smoothed_encouraging_loss -label_smoothing=0.1 - -max_epoch=10 -warmup_ratio=0.06 -batch_size=16 -update_freq=1 -resnet_drop_path_rate=0.0 -encoder_drop_path_rate=0.1 -decoder_drop_path_rate=0.1 -dropout=0.1 -attention_dropout=0.0 -max_src_length=80 -max_tgt_length=20 -num_bins=1000 -# patch_image_size=480 -drop_worst_ratio=0.2 - - -### -image_encoder_name=timm_resnet #vit_base_patch16_224 timm_resnet resnet -patch_image_size=480 -resnet_type=resnet101 - -resnet_model_path=${base_log_dir}/pretrained_models/resnet101-5d3b4d8f.pth - -# video -video_encoder_name=all_resnext101 -patch_frame_size=384 -video_model_path=${base_log_dir}/pretrained_models/3dcnn/resnext-101-kinetics.pth #${base_log_dir}/pretrained_models/TimeSformer_divST_8x32_224_K600.pyth -num_frames=4 - -save_interval=1 -validate_interval_updates=2000 -save_interval_updates=0 - - -sample_patch_num='--sample-patch-num=784' # '' - -eval_args='--eval-args={"beam":5,"stop_on_max_len":true,"max_len_b":22,"no_repeat_ngram_size":3}' - - -drop_worst_ratio=0.05 # modified from 0.2 for el -drop_best_ratio=0.05 -drop_best_after=6000 -log_end=0.75 # for el -# log_end=1. 
# for el - -for max_epoch in {$max_epoch,}; do - echo "max_epoch "${max_epoch} - for warmup_ratio in {0.06,}; do - echo "warmup_ratio "${warmup_ratio} - for drop_worst_after in {6000,}; do - echo "drop_worst_after "${drop_worst_after} - - log_file=${log_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}".log" - save_path=${save_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after} - mkdir -p $save_path - - python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/train.py \ - $data \ - --selected-cols=${selected_cols} \ - --bpe-dir=${bpe_dir} \ - --user-dir=${user_dir} \ - --restore-file=${restore_file} \ - --save-dir=${save_path} \ - --task=${task} \ - --arch=${arch} \ - --criterion=${criterion} \ - --label-smoothing=${label_smoothing} \ - --batch-size=${batch_size} \ - --update-freq=${update_freq} \ - --encoder-normalize-before \ - --decoder-normalize-before \ - --share-decoder-input-output-embed \ - --share-all-embeddings \ - --layernorm-embedding \ - --patch-layernorm-embedding \ - --code-layernorm-embedding \ - --resnet-drop-path-rate=${resnet_drop_path_rate} \ - --encoder-drop-path-rate=${encoder_drop_path_rate} \ - --decoder-drop-path-rate=${decoder_drop_path_rate} \ - --dropout=${dropout} \ - --attention-dropout=${attention_dropout} \ - --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \ - --lr-scheduler=polynomial_decay --lr=${lr} \ - --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \ - --log-format=simple --log-interval=10 \ - --fixed-validation-seed=7 \ - --no-epoch-checkpoints --keep-best-checkpoints=1 \ - --save-interval=${save_interval} --validate-interval=1 \ - --save-interval-updates=${save_interval_updates} --validate-interval-updates=${validate_interval_updates} \ - --eval-cider \ - --eval-cider-cached-tokens=${eval_cider_cached} \ - --eval-args='{"beam":5,"max_len_b":16,"no_repeat_ngram_size":3}' \ - --best-checkpoint-metric=cider --maximize-best-checkpoint-metric \ - --max-src-length=${max_src_length} \ - --max-tgt-length=${max_tgt_length} \ - --find-unused-parameters \ - --freeze-encoder-embedding \ - --freeze-decoder-embedding \ - --add-type-embedding \ - --scale-attn \ - --scale-fc \ - --scale-heads \ - --disable-entangle \ - --num-bins=${num_bins} \ - --patch-image-size=${patch_image_size} \ - --drop-worst-ratio=${drop_worst_ratio} \ - --drop-worst-after=${drop_worst_after} \ - --fp16 \ - --fp16-scale-window=512 \ - --num-workers=0 \ - --image-encoder-name=${image_encoder_name} \ - --image-dir=${image_dir} \ - --video-encoder-name=${video_encoder_name} \ - --video-model-path=${video_model_path} \ - --patch-frame-size=${patch_frame_size} \ - ${sample_patch_num} \ - ${eval_args} \ - --reset-dataloader --reset-meters --reset-optimizer \ - --log-end ${log_end} --drop-best-ratio ${drop_best_ratio} --drop-best-after ${drop_best_after} - done - done -done \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/run_scripts/vqa/eval/eval_vqa_base_best_avg.sh b/spaces/mshukor/UnIVAL/run_scripts/vqa/eval/eval_vqa_base_best_avg.sh deleted file mode 100644 index 6d256b0c3e34c5b3dec79137dcf7674b687a2e8d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/run_scripts/vqa/eval/eval_vqa_base_best_avg.sh +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env bash - -# The port for communication. 
Note that if you want to run multiple tasks on the same machine, -# you need to specify different port numbers. -# The port for communication. Note that if you want to run multiple tasks on the same machine, -# you need to specify different port numbers. -# Number of GPUs per GPU worker -export GPUS_PER_NODE=8 -# Number of GPU workers, for single-worker training, please set to 1 -export NUM_NODES=$SLURM_NNODES -# The ip address of the rank-0 worker, for single-worker training, please set to localhost -master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) -export MASTER_ADDR=$master_addr - -# The port for communication -export MASTER_PORT=12350 -# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0 -export RANK=$SLURM_NODEID - -echo "MASTER_ADDR: $MASTER_ADDR" -echo "RANK :$RANK" -echo "NUM_NODES :$NUM_NODES" -echo "GPUS_PER_NODE :$GPUS_PER_NODE" - -export MIOPEN_USER_DB_PATH=/lus/home/NAT/gda2204/mshukor/.config/miopen_${MASTER_ADDR}_${SLURM_PROCID}/ - -echo "MIOPEN_USER_DB_PATH :$MIOPEN_USER_DB_PATH" - -num_workers=0 - - - - - -ofa_dir=/lus/home/NAT/gda2204/mshukor/code/unival -base_data_dir=/lus/scratch/NAT/gda2204/SHARED/data -base_log_dir=/work/NAT/gda2204/mshukor/logs - - - - -bpe_dir=${ofa_dir}/utils/BPE -user_dir=${ofa_dir}/ofa_module - - -data_dir=${base_data_dir}/ofa/vqa_data - -# val or test or fullval -split=fullval -read_from_img_path=True -image_dir=${base_data_dir} - -data=${data_dir}/vqa_${split}.tsv - -ans2label_file=${base_data_dir}/ofa/vqa_data/trainval_ans2label.pkl - - - -selected_cols=0,5,2,3,4 -valid_batch_size=40 - - - - - -for l in {0.00,0.20,0.40,0.60,0.80,1.00};do - - - - new_base_log_dir=/lus/scratch/NAT/gda2204/SHARED/logs - exp_name=eval_vqa_base_best_avg_postfuse_vqacap${l} - path=${new_base_log_dir}/ofa/pretrained_models/average_models/avg_postfuse_vqacap_l${l}.pt - - echo ${path} - result_path=${new_base_log_dir}/ofa/results/vqa/${exp_name}_${split} - mkdir ${result_path} - - - python3 -m torch.distributed.launch \ - --nnodes=${NUM_NODES} \ - --nproc_per_node=${GPUS_PER_NODE} \ - --master_port=${MASTER_PORT} \ - --node_rank=${RANK} \ - --master_addr=${MASTER_ADDR} \ - --use_env ${ofa_dir}/evaluate.py \ - ${data} \ - --path=${path} \ - --user-dir=${user_dir} \ - --task=vqa_gen \ - --batch-size=16 \ - --log-format=simple --log-interval=10 \ - --seed=7 \ - --gen-subset=${split} \ - --results-path=${result_path} \ - --fp16 \ - --beam-search-vqa-eval \ - --beam=5 \ - --unnormalized \ - --temperature=1.0 \ - --num-workers=0 \ - --model-overrides="{\"data\":\"${data}\",\"bpe_dir\":\"${bpe_dir}\",\"selected_cols\":\"${selected_cols}\",\"ans2label_file\":\"${ans2label_file}\",\"valid_batch_size\":\"${valid_batch_size}\"}" \ - --image-dir=${image_dir} \ - --read-from-img-path - # --ema-eval \ -done - - - \ No newline at end of file diff --git a/spaces/multimodalart/LoraTheExplorer4/README.md b/spaces/multimodalart/LoraTheExplorer4/README.md deleted file mode 100644 index 8e3026ccbb3545d2fbfa278a446495065d28776f..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/LoraTheExplorer4/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: LoRA the Explorer -emoji: 🔎 🖼️ -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 4.1.2 -app_file: app.py -pinned: false -license: mit -suggested_hardware: a10g-large -models: ['nerijs/pixel-art-xl', 'Pclanglais/TintinIA', 'ProomptEngineer/pe-balloon-diffusion-style', 'joachimsallstrom/aether-cloud-lora-for-sdxl', 
'ostris/crayon_style_lora_sdxl', 'jbilcke-hf/sdxl-zelda64', 'TheLastBen/Papercut_SDXL', 'fofr/sdxl-2004', 'joachimsallstrom/aether-ghost-lora-for-sdxl', 'artificialguybr/ColoringBookRedmond-V2', 'Norod78/SDXL-LofiGirl-Lora', 'ostris/embroidery_style_lora_sdxl', 'goofyai/3d_render_style_xl', 'ostris/watercolor_style_lora_sdxl', 'veryVANYA/ps1-graphics-sdxl-v2', 'TheLastBen/William_Eggleston_Style_SDXL', 'davizca87/c-a-g-coinmaker', 'goofyai/cyborg_style_xl', 'artificialguybr/ToyRedmond-ToyLoraForSDXL10', 'Fictiverse/Voxel_XL_Lora', 'minimaxir/sdxl-ugly-sonic-lora', 'nerijs/lego-brickheadz-xl', 'nerijs/lego-minifig-xl', 'Norod78/SDXL-jojoso_style-Lora', 'TheLastBen/Pikachu_SDXL', 'artificialguybr/LogoRedmond-LogoLoraForSDXL', 'Norod78/SDXL-StickerSheet-Lora', 'artificialguybr/LineAniRedmond-LinearMangaSDXL', 'TheLastBen/Josef_Koudelka_Style_SDXL', 'goofyai/Leonardo_Ai_Style_Illustration', 'Norod78/SDXL-simpstyle-Lora', 'artificialguybr/StoryBookRedmond', 'chillpixel/blacklight-makeup-sdxl-lora', 'ProomptEngineer/pe-neon-sign-style', 'ProomptEngineer/pe-lofi-hiphop-lofi-girl-concept', 'ProomptEngineer/pe-shitty-fanart', 'ProomptEngineer/pe-sandsculpter-style', 'ProomptEngineer/pe-shitty-medieval-paintings', 'ProomptEngineer/pe-courtroomsketch-style', 'ProomptEngineer/pe-funko-pop-diffusion-style', 'lordjia/lelo-lego-lora', 'KappaNeuro/dressed-animals', 'KappaNeuro/vintage-postage-stamps', 'KappaNeuro/video-installation', 'KappaNeuro/ukiyo-e-art', 'KappaNeuro/surreal-collage', 'KappaNeuro/stop-motion-animation', 'KappaNeuro/studio-ghibli-style', 'KappaNeuro/punk-collage', 'KappaNeuro/needlepoint', 'KappaNeuro/made-of-iridescent-foil', 'KappaNeuro/lascaux', 'KappaNeuro/color-palette', 'KappaNeuro/albumen-print', 'KappaNeuro/1987-action-figure-playset-packaging', 'Norod78/SDXL-VintageMagStyle-Lora', 'CiroN2022/road-sign', 'CiroN2022/mosaic-style', 'CiroN2022/cd-md-music', 'CiroN2022/hair-style', 'CiroN2022/overprint-effect', 'CiroN2022/toy-face', 'CiroN2022/ascii-art', 'artificialguybr/PixelArtRedmond', 'artificialguybr/StickersRedmond', 'artificialguybr/ClayAnimationRedmond', 'fofr/sdxl-vision-pro', 'joachimsallstrom/aether-glitch-lora-for-sdxl', 'artificialguybr/TshirtDesignRedmond-V2', 'ostris/ikea-instructions-lora-sdxl', 'ostris/super-cereal-sdxl-lora', 'jakedahn/sdxl-isometric-geology', 'artificialguybr/analogredmond-v2', 'stets/nintendo64_cartridge'] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/1_generate_masks_from_raw_images.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/1_generate_masks_from_raw_images.sh deleted file mode 100644 index 04b780e6d6a3bf46f32b76df12b4a35ea679501d..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/docker/1_generate_masks_from_raw_images.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/usr/bin/env bash - - -if (( $# < 3 )) -then - echo "Usage: $0 config_name input_images_dir image_mask_dataset_out_dir [other args to gen_mask_dataset.py]" - exit 1 -fi - -CURDIR="$(dirname $0)" -SRCDIR="$CURDIR/.." 
-SRCDIR="$(realpath $SRCDIR)" - -CONFIG_LOCAL_PATH="$(realpath $1)" -INPUT_LOCAL_DIR="$(realpath $2)" -OUTPUT_LOCAL_DIR="$(realpath $3)" -shift 3 - -mkdir -p "$OUTPUT_LOCAL_DIR" - -docker run \ - -v "$SRCDIR":/home/user/project \ - -v "$CONFIG_LOCAL_PATH":/data/config.yaml \ - -v "$INPUT_LOCAL_DIR":/data/input \ - -v "$OUTPUT_LOCAL_DIR":/data/output \ - -u $(id -u):$(id -g) \ - --name="lama-mask-gen" \ - --rm \ - windj007/lama \ - /home/user/project/bin/gen_mask_dataset.py \ - /data/config.yaml /data/input /data/output $@ diff --git a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/prt_render.py b/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/prt_render.py deleted file mode 100644 index 92c8a6257f776ab0c803a78a3af7c43a4333c3f9..0000000000000000000000000000000000000000 --- a/spaces/nasttam/Image-and-3D-Model-Creator/PIFu/lib/renderer/gl/prt_render.py +++ /dev/null @@ -1,350 +0,0 @@ -import numpy as np -import random - -from .framework import * -from .cam_render import CamRender - -class PRTRender(CamRender): - def __init__(self, width=1600, height=1200, name='PRT Renderer', uv_mode=False, ms_rate=1, egl=False): - program_files = ['prt.vs', 'prt.fs'] if not uv_mode else ['prt_uv.vs', 'prt_uv.fs'] - CamRender.__init__(self, width, height, name, program_files=program_files, color_size=8, ms_rate=ms_rate, egl=egl) - - # WARNING: this differs from vertex_buffer and vertex_data in Render - self.vert_buffer = {} - self.vert_data = {} - - self.norm_buffer = {} - self.norm_data = {} - - self.tan_buffer = {} - self.tan_data = {} - - self.btan_buffer = {} - self.btan_data = {} - - self.prt1_buffer = {} - self.prt1_data = {} - self.prt2_buffer = {} - self.prt2_data = {} - self.prt3_buffer = {} - self.prt3_data = {} - - self.uv_buffer = {} - self.uv_data = {} - - self.render_texture_mat = {} - - self.vertex_dim = {} - self.n_vertices = {} - - self.norm_mat_unif = glGetUniformLocation(self.program, 'NormMat') - self.normalize_matrix = np.eye(4) - - self.shcoeff_unif = glGetUniformLocation(self.program, 'SHCoeffs') - self.shcoeffs = np.zeros((9,3)) - self.shcoeffs[0,:] = 1.0 - #self.shcoeffs[1:,:] = np.random.rand(8,3) - - self.hasAlbedoUnif = glGetUniformLocation(self.program, 'hasAlbedoMap') - self.hasNormalUnif = glGetUniformLocation(self.program, 'hasNormalMap') - - self.analyticUnif = glGetUniformLocation(self.program, 'analytic') - self.analytic = False - - self.rot_mat_unif = glGetUniformLocation(self.program, 'RotMat') - self.rot_matrix = np.eye(3) - - def set_texture(self, mat_name, smplr_name, texture): - # texture_image: H x W x 3 - width = texture.shape[1] - height = texture.shape[0] - texture = np.flip(texture, 0) - img_data = np.fromstring(texture.tostring(), np.uint8) - - if mat_name not in self.render_texture_mat: - self.render_texture_mat[mat_name] = {} - if smplr_name in self.render_texture_mat[mat_name].keys(): - glDeleteTextures([self.render_texture_mat[mat_name][smplr_name]]) - del self.render_texture_mat[mat_name][smplr_name] - self.render_texture_mat[mat_name][smplr_name] = glGenTextures(1) - glActiveTexture(GL_TEXTURE0) - - glPixelStorei(GL_UNPACK_ALIGNMENT, 1) - glBindTexture(GL_TEXTURE_2D, self.render_texture_mat[mat_name][smplr_name]) - - glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, img_data) - - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 3) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE) - 
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR) - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR) - - glGenerateMipmap(GL_TEXTURE_2D) - - def set_albedo(self, texture_image, mat_name='all'): - self.set_texture(mat_name, 'AlbedoMap', texture_image) - - def set_normal_map(self, texture_image, mat_name='all'): - self.set_texture(mat_name, 'NormalMap', texture_image) - - def set_mesh(self, vertices, faces, norms, faces_nml, uvs, faces_uvs, prt, faces_prt, tans, bitans, mat_name='all'): - self.vert_data[mat_name] = vertices[faces.reshape([-1])] - self.n_vertices[mat_name] = self.vert_data[mat_name].shape[0] - self.vertex_dim[mat_name] = self.vert_data[mat_name].shape[1] - - if mat_name not in self.vert_buffer.keys(): - self.vert_buffer[mat_name] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.vert_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.vert_data[mat_name], GL_STATIC_DRAW) - - self.uv_data[mat_name] = uvs[faces_uvs.reshape([-1])] - if mat_name not in self.uv_buffer.keys(): - self.uv_buffer[mat_name] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.uv_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.uv_data[mat_name], GL_STATIC_DRAW) - - self.norm_data[mat_name] = norms[faces_nml.reshape([-1])] - if mat_name not in self.norm_buffer.keys(): - self.norm_buffer[mat_name] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.norm_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.norm_data[mat_name], GL_STATIC_DRAW) - - self.tan_data[mat_name] = tans[faces_nml.reshape([-1])] - if mat_name not in self.tan_buffer.keys(): - self.tan_buffer[mat_name] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.tan_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.tan_data[mat_name], GL_STATIC_DRAW) - - self.btan_data[mat_name] = bitans[faces_nml.reshape([-1])] - if mat_name not in self.btan_buffer.keys(): - self.btan_buffer[mat_name] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.btan_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.btan_data[mat_name], GL_STATIC_DRAW) - - self.prt1_data[mat_name] = prt[faces_prt.reshape([-1])][:,:3] - self.prt2_data[mat_name] = prt[faces_prt.reshape([-1])][:,3:6] - self.prt3_data[mat_name] = prt[faces_prt.reshape([-1])][:,6:] - - if mat_name not in self.prt1_buffer.keys(): - self.prt1_buffer[mat_name] = glGenBuffers(1) - if mat_name not in self.prt2_buffer.keys(): - self.prt2_buffer[mat_name] = glGenBuffers(1) - if mat_name not in self.prt3_buffer.keys(): - self.prt3_buffer[mat_name] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.prt1_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.prt1_data[mat_name], GL_STATIC_DRAW) - glBindBuffer(GL_ARRAY_BUFFER, self.prt2_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.prt2_data[mat_name], GL_STATIC_DRAW) - glBindBuffer(GL_ARRAY_BUFFER, self.prt3_buffer[mat_name]) - glBufferData(GL_ARRAY_BUFFER, self.prt3_data[mat_name], GL_STATIC_DRAW) - - glBindBuffer(GL_ARRAY_BUFFER, 0) - - def set_mesh_mtl(self, vertices, faces, norms, faces_nml, uvs, faces_uvs, tans, bitans, prt): - for key in faces: - self.vert_data[key] = vertices[faces[key].reshape([-1])] - self.n_vertices[key] = self.vert_data[key].shape[0] - self.vertex_dim[key] = self.vert_data[key].shape[1] - - if key not in self.vert_buffer.keys(): - self.vert_buffer[key] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.vert_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.vert_data[key], GL_STATIC_DRAW) - - self.uv_data[key] = 
uvs[faces_uvs[key].reshape([-1])] - if key not in self.uv_buffer.keys(): - self.uv_buffer[key] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.uv_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.uv_data[key], GL_STATIC_DRAW) - - self.norm_data[key] = norms[faces_nml[key].reshape([-1])] - if key not in self.norm_buffer.keys(): - self.norm_buffer[key] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.norm_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.norm_data[key], GL_STATIC_DRAW) - - self.tan_data[key] = tans[faces_nml[key].reshape([-1])] - if key not in self.tan_buffer.keys(): - self.tan_buffer[key] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.tan_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.tan_data[key], GL_STATIC_DRAW) - - self.btan_data[key] = bitans[faces_nml[key].reshape([-1])] - if key not in self.btan_buffer.keys(): - self.btan_buffer[key] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.btan_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.btan_data[key], GL_STATIC_DRAW) - - self.prt1_data[key] = prt[faces[key].reshape([-1])][:,:3] - self.prt2_data[key] = prt[faces[key].reshape([-1])][:,3:6] - self.prt3_data[key] = prt[faces[key].reshape([-1])][:,6:] - - if key not in self.prt1_buffer.keys(): - self.prt1_buffer[key] = glGenBuffers(1) - if key not in self.prt2_buffer.keys(): - self.prt2_buffer[key] = glGenBuffers(1) - if key not in self.prt3_buffer.keys(): - self.prt3_buffer[key] = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self.prt1_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.prt1_data[key], GL_STATIC_DRAW) - glBindBuffer(GL_ARRAY_BUFFER, self.prt2_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.prt2_data[key], GL_STATIC_DRAW) - glBindBuffer(GL_ARRAY_BUFFER, self.prt3_buffer[key]) - glBufferData(GL_ARRAY_BUFFER, self.prt3_data[key], GL_STATIC_DRAW) - - glBindBuffer(GL_ARRAY_BUFFER, 0) - - def cleanup(self): - - glBindBuffer(GL_ARRAY_BUFFER, 0) - for key in self.vert_data: - glDeleteBuffers(1, [self.vert_buffer[key]]) - glDeleteBuffers(1, [self.norm_buffer[key]]) - glDeleteBuffers(1, [self.uv_buffer[key]]) - - glDeleteBuffers(1, [self.tan_buffer[key]]) - glDeleteBuffers(1, [self.btan_buffer[key]]) - glDeleteBuffers(1, [self.prt1_buffer[key]]) - glDeleteBuffers(1, [self.prt2_buffer[key]]) - glDeleteBuffers(1, [self.prt3_buffer[key]]) - - glDeleteBuffers(1, []) - - for smplr in self.render_texture_mat[key]: - glDeleteTextures([self.render_texture_mat[key][smplr]]) - - self.vert_buffer = {} - self.vert_data = {} - - self.norm_buffer = {} - self.norm_data = {} - - self.tan_buffer = {} - self.tan_data = {} - - self.btan_buffer = {} - self.btan_data = {} - - self.prt1_buffer = {} - self.prt1_data = {} - - self.prt2_buffer = {} - self.prt2_data = {} - - self.prt3_buffer = {} - self.prt3_data = {} - - self.uv_buffer = {} - self.uv_data = {} - - self.render_texture_mat = {} - - self.vertex_dim = {} - self.n_vertices = {} - - def randomize_sh(self): - self.shcoeffs[0,:] = 0.8 - self.shcoeffs[1:,:] = 1.0*np.random.rand(8,3) - - def set_sh(self, sh): - self.shcoeffs = sh - - def set_norm_mat(self, scale, center): - N = np.eye(4) - N[:3, :3] = scale*np.eye(3) - N[:3, 3] = -scale*center - - self.normalize_matrix = N - - def draw(self): - self.draw_init() - - glDisable(GL_BLEND) - #glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) - glEnable(GL_MULTISAMPLE) - - glUseProgram(self.program) - glUniformMatrix4fv(self.norm_mat_unif, 1, GL_FALSE, self.normalize_matrix.transpose()) - glUniformMatrix4fv(self.model_mat_unif, 1, GL_FALSE, 
self.model_view_matrix.transpose()) - glUniformMatrix4fv(self.persp_mat_unif, 1, GL_FALSE, self.projection_matrix.transpose()) - - if 'AlbedoMap' in self.render_texture_mat['all']: - glUniform1ui(self.hasAlbedoUnif, GLuint(1)) - else: - glUniform1ui(self.hasAlbedoUnif, GLuint(0)) - - if 'NormalMap' in self.render_texture_mat['all']: - glUniform1ui(self.hasNormalUnif, GLuint(1)) - else: - glUniform1ui(self.hasNormalUnif, GLuint(0)) - - glUniform1ui(self.analyticUnif, GLuint(1) if self.analytic else GLuint(0)) - - glUniform3fv(self.shcoeff_unif, 9, self.shcoeffs) - - glUniformMatrix3fv(self.rot_mat_unif, 1, GL_FALSE, self.rot_matrix.transpose()) - - for mat in self.vert_buffer: - # Handle vertex buffer - glBindBuffer(GL_ARRAY_BUFFER, self.vert_buffer[mat]) - glEnableVertexAttribArray(0) - glVertexAttribPointer(0, self.vertex_dim[mat], GL_DOUBLE, GL_FALSE, 0, None) - - # Handle normal buffer - glBindBuffer(GL_ARRAY_BUFFER, self.norm_buffer[mat]) - glEnableVertexAttribArray(1) - glVertexAttribPointer(1, 3, GL_DOUBLE, GL_FALSE, 0, None) - - # Handle uv buffer - glBindBuffer(GL_ARRAY_BUFFER, self.uv_buffer[mat]) - glEnableVertexAttribArray(2) - glVertexAttribPointer(2, 2, GL_DOUBLE, GL_FALSE, 0, None) - - # Handle tan buffer - glBindBuffer(GL_ARRAY_BUFFER, self.tan_buffer[mat]) - glEnableVertexAttribArray(3) - glVertexAttribPointer(3, 3, GL_DOUBLE, GL_FALSE, 0, None) - - # Handle btan buffer - glBindBuffer(GL_ARRAY_BUFFER, self.btan_buffer[mat]) - glEnableVertexAttribArray(4) - glVertexAttribPointer(4, 3, GL_DOUBLE, GL_FALSE, 0, None) - - # Handle PTR buffer - glBindBuffer(GL_ARRAY_BUFFER, self.prt1_buffer[mat]) - glEnableVertexAttribArray(5) - glVertexAttribPointer(5, 3, GL_DOUBLE, GL_FALSE, 0, None) - - glBindBuffer(GL_ARRAY_BUFFER, self.prt2_buffer[mat]) - glEnableVertexAttribArray(6) - glVertexAttribPointer(6, 3, GL_DOUBLE, GL_FALSE, 0, None) - - glBindBuffer(GL_ARRAY_BUFFER, self.prt3_buffer[mat]) - glEnableVertexAttribArray(7) - glVertexAttribPointer(7, 3, GL_DOUBLE, GL_FALSE, 0, None) - - for i, smplr in enumerate(self.render_texture_mat[mat]): - glActiveTexture(GL_TEXTURE0 + i) - glBindTexture(GL_TEXTURE_2D, self.render_texture_mat[mat][smplr]) - glUniform1i(glGetUniformLocation(self.program, smplr), i) - - glDrawArrays(GL_TRIANGLES, 0, self.n_vertices[mat]) - - glDisableVertexAttribArray(7) - glDisableVertexAttribArray(6) - glDisableVertexAttribArray(5) - glDisableVertexAttribArray(4) - glDisableVertexAttribArray(3) - glDisableVertexAttribArray(2) - glDisableVertexAttribArray(1) - glDisableVertexAttribArray(0) - - glBindBuffer(GL_ARRAY_BUFFER, 0) - - glUseProgram(0) - - glDisable(GL_BLEND) - glDisable(GL_MULTISAMPLE) - - self.draw_end() diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip.md deleted file mode 100644 index 899e454bd74e55893fa7f7b194ffcdf0946a2bab..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip.md +++ /dev/null @@ -1,233 +0,0 @@ -
    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip: What You Need to Know

    -

    If you are looking for a powerful and versatile PDF editor and viewer, you might want to check out Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip. This is a software package that includes the latest version of Adobe Acrobat XI Pro, a multilingual feature that supports over 20 languages, and a patch that fixes some bugs and enhances the software's performance.

    -

    In this article, we will tell you everything you need to know about Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip, including what it is, how to download and install it, how to use it, and what benefits it offers. By the end of this article, you will be able to decide whether this software is right for you and how to get started with it.

    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip


    Download Zip ———>>> https://urlcod.com/2uIbG6



    -

    Introduction

    -

    Before we dive into the details of Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip, let's first introduce some basic concepts and terms that will help you understand what this software is all about.

    -

    What is Adobe Acrobat XI Pro?

    -

    Adobe Acrobat XI Pro is a software application that allows you to create, edit, convert, sign, and share PDF documents. PDF stands for Portable Document Format, which is a file format that preserves the layout, fonts, images, and graphics of any document, regardless of the application or platform used to create it.

    -

    With Adobe Acrobat XI Pro, you can do many things with PDF documents, such as:

    -
      -
    • Create PDFs from any application that prints, such as Microsoft Word, Excel, PowerPoint, or web browsers.
    • -
    • Edit PDFs by adding or deleting text, images, links, headers, footers, watermarks, backgrounds, etc.
    • -
    • Convert PDFs to other formats, such as Word, Excel, PowerPoint, HTML, JPEG, PNG, etc.
    • -
    • Sign PDFs electronically with your digital signature or certificate.
    • -
    • Share PDFs via email, cloud services, or social media.
    • -
    • Collaborate on PDFs with others by adding comments, annotations, stamps, or drawing tools.
    • -
    • Protect PDFs with passwords, encryption, redaction, or digital rights management (DRM).
    • -
    • Optimize PDFs for web, print, or mobile devices.
    • -
    • Organize PDFs by merging, splitting, rotating, cropping, or rearranging pages.
    • -
    • Create and fill out PDF forms with interactive fields and buttons.
    • -
    • Create and edit PDF portfolios that combine multiple files of different types into one PDF package.
    • -
    -

    As you can see, Adobe Acrobat XI Pro is a comprehensive and powerful tool that can handle any PDF task you can think of. It is compatible with Windows and Mac operating systems, and it has a user-friendly interface that makes it easy to navigate and use.

    -

    What is SadeemPC?

    -

    SadeemPC is a website that provides free downloads of various software applications, including Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip. SadeemPC is a trusted source for software downloads because it offers:

    -
      -
    • High-quality software that is tested and verified by the SadeemPC team.
    • -
    • Fast and secure download links that are hosted on reliable servers.
    • -
    • Easy and simple installation instructions that guide you through the process.
    • -
    • Patches, cracks, keys, or activators that enable you to use the software without any limitations or restrictions.
    • -
    • Regular updates and support that ensure the software's functionality and compatibility.
    • -
    -

    SadeemPC is a popular and reputable website that has thousands of satisfied users who have downloaded and used its software. You can visit its official website at https://sadeempc.com/ to browse its collection of software and find the one you need.

    -

    -

    What is the purpose of this article?

    -

    The purpose of this article is to provide you with a detailed and comprehensive guide on how to download, install, and use Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip. We will cover the following topics in this article:

    -
      -
    1. How to download and install Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip
    2. -
    3. How to use Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip
    4. -
    5. Benefits of Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip
    6. -
    -

    By the end of this article, you will have a clear understanding of what Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is, how to get it, and how to use it effectively. You will also learn about the advantages of using this software over other alternatives.

    -

    How to Download and Install Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip

    -

    In this section, we will show you how to download and install Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip on your computer. The process is simple and straightforward, but you need to follow some steps carefully to avoid any errors or issues.

    -

    Where to find the download link and how to verify its authenticity

    -

    The first step is to find the download link for Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip. You can find it on the SadeemPC website at https://sadeempc.com/adobe-acrobat-xi-pro-11-0-15-multilingual-incl-patch-sadeempc-zip/. This is the official and original link for the software package, so you can trust its authenticity and quality.

    -

    To verify the authenticity of the download link, you can check the following information:

    -
      -
    • The file name: It should be exactly "Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip". Any variation or modification in the file name could indicate a fake or corrupted file.
    • -
    • The file size: It should be about 745 MB. Any significant difference in the file size could indicate a missing or added component in the file.
    • -
    • The file hash: It should be "F9B8A8E9C6D7F4E1A5B7F2D6B4E5C1C6". This is a unique identifier that confirms the integrity and identity of the file. You can use a tool like HashTab (https://implbits.com/products/hashtab/) to calculate and compare the file hash.
    • -
    -

    If you find any discrepancy or inconsistency in the file name, size, or hash , you should not download the file and look for another source. If everything matches, you can proceed to download the file by clicking on the "Download Now" button on the SadeemPC website.

    -

    How to extract the zip file and run the setup.exe file

    -

    After you have downloaded the file, you need to extract it to access its contents. You can use any software that can handle zip files, such as WinRAR (https://www.win-rar.com/) or 7-Zip (https://www.7-zip.org/). To extract the file, follow these steps:

    -
      -
    1. Locate the file in your download folder or wherever you saved it.
    2. -
    3. Right-click on the file and select "Extract Here" or "Extract to Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip".
    4. -
    5. Wait for the extraction process to finish. You should see a new folder with the same name as the file.
    6. -
    7. Open the folder and double-click on the "setup.exe" file to start the installation process.
    8. -
    -

    The setup.exe file will launch the Adobe Acrobat XI Pro installer, which will guide you through the installation process. You can follow the default settings or customize them according to your preferences. The installation process may take some time, depending on your system specifications and internet speed.

    -

    How to apply the patch and activate the software

    -

    After you have installed Adobe Acrobat XI Pro, you need to apply the patch that is included in the zip file. The patch is a small program that modifies some files or codes in the software to bypass its activation or registration process. This way, you can use the software without any limitations or restrictions.

    -

    To apply the patch, follow these steps:

    -
      -
    1. Open the folder where you extracted the zip file and locate the "Patch" folder.
    2. -
    3. Open the "Patch" folder and double-click on the "Adobe Acrobat XI Pro 11.x (x32-x64) Multi Patch.exe" file.
    4. -
    5. A window will pop up asking you to select your language. Choose your preferred language and click "OK".
    6. -
    7. Another window will pop up asking you to select your installation directory. By default, it should be "C:\Program Files (x86)\Adobe\Acrobat 11.0\Acrobat". If you installed Adobe Acrobat XI Pro in a different location, browse and select it. Then click "OK".
    8. -
    9. The patch will start working and show you a progress bar. Wait for it to finish and close it.
    10. -
    -

    Congratulations! You have successfully applied the patch and activated Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip. You can now launch and use the software without any problems.

    -

    How to Use Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip

    -

    In this section, we will show you how to use Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip for various PDF tasks. We will cover some of the most common and useful features of the software, such as creating, editing, converting, and signing PDF documents, using the multilingual feature, and customizing the interface and preferences.

    -

    How to create, edit, convert, and sign PDF documents

    -

    One of the main functions of Adobe Acrobat XI Pro is to create, edit, convert, and sign PDF documents. You can do this in different ways, depending on your needs and preferences.

    -

    How to create PDF documents

    -

    You can create PDF documents from any application that prints, such as Microsoft Word, Excel, PowerPoint, or web browsers. To do this, follow these steps:

    -
      -
    1. Open the application and create or open the document that you want to convert to PDF.
    2. -
    3. Select "File" > "Print" or press "Ctrl + P" on your keyboard.
    4. -
    5. In the print dialog box, choose "Adobe PDF" as your printer and click "Print".
    6. -
    7. A window will pop up asking you to name and save your PDF file. Choose a location and a name for your file and click "Save".
    8. -
    9. Your PDF file will be created and opened in Adobe Acrobat XI Pro automatically.
    10. -
    -

    You can also create PDF documents from multiple files of different types by using the "Create PDF" tool in Adobe Acrobat XI Pro. To do this, follow these steps:

    -
      -
    1. Launch Adobe Acrobat XI Pro and select "File" > "Create" > "Combine Files into a Single PDF".
    2. -
    3. A window will pop up asking you to add the files that you want to combine. You can drag and drop the files or click on the "Add Files" button to browse and select them. You can add files of different types, such as Word, Excel, PowerPoint, JPEG, PNG, etc.
    4. -
    5. After you have added the files, you can rearrange them by dragging and dropping them or using the "Move Up" and "Move Down" buttons. You can also remove any file by selecting it and clicking on the "Remove" button.
    6. -
    7. When you are satisfied with the order of the files, click on the "Combine Files" button at the bottom right corner of the window.
    8. -
    9. Your PDF file will be created and opened in Adobe Acrobat XI Pro automatically.
    10. -
    -

    You can also create PDF documents from scanned paper documents by using the "Create PDF from Scanner" tool in Adobe Acrobat XI Pro. To do this, follow these steps:

    -
      -
    1. Launch Adobe Acrobat XI Pro and select "File" > "Create" > "PDF from Scanner".
    2. -
    3. A window will pop up asking you to choose your scanner and scanning options. You can select your scanner from the drop-down menu and adjust the settings such as color mode, resolution, paper size, etc.
    4. -
    5. Click on the "Scan" button to start scanning your document. You can scan multiple pages by clicking on the "Scan More Pages" button or finish scanning by clicking on the "Scan Is Complete" button.
    6. -
    7. Your PDF file will be created and opened in Adobe Acrobat XI Pro automatically.
    8. -
    -

    How to edit PDF documents

    -

    You can edit PDF documents by using the "Edit Text & Images" tool in Adobe Acrobat XI Pro. To do this, follow these steps:

    -
      -
    1. Open the PDF document that you want to edit in Adobe Acrobat XI Pro.
    2. -
    3. Select "Tools" > "Content Editing" > "Edit Text & Images".
    4. -
    5. A toolbar will appear at the top of the document with various editing options. You can click on any text or image in the document to select it and edit it.
    6. -
    7. To edit text, you can use the toolbar to change the font, size, color, alignment, or spacing of the text. You can also use your keyboard to type, delete, or copy and paste text.
    8. -
    9. To edit images, you can use the toolbar to crop, rotate, flip, or replace the image. You can also use your mouse to drag and resize the image.
    10. -
    11. When you are done editing, click on the "Save" button or press "Ctrl + S" on your keyboard to save your changes.
    12. -
    -

    How to convert PDF documents

    -

    You can convert PDF documents to other formats by using the "Export PDF" tool in Adobe Acrobat XI Pro. To do this, follow these steps:

    -
      -
    1. Open the PDF document that you want to convert in Adobe Acrobat XI Pro.
    2. -
    3. Select "File" > "Save As Other" > "Microsoft Word", "Microsoft Excel", "Microsoft PowerPoint", or any other format that you want.
    4. -
    5. A window will pop up asking you to name and save your converted file. Choose a location and a name for your file and click "Save".
    6. -
    7. Your converted file will be created and opened in the corresponding application automatically.
    8. -
    -

    How to sign PDF documents

    -

    You can sign PDF documents electronically by using the "Sign & Certify" tool in Adobe Acrobat XI Pro. To do this, follow these steps:

    -
      -
    1. Open the PDF document that you want to sign in Adobe Acrobat XI Pro.
    2. -
    3. Select "Tools" > "Sign & Certify" > "Place Signature".
    4. -
    5. A window will pop up asking you to choose how you want to place your signature. You can choose from four options: type your name, draw your signature, use an image of your signature, or use a certificate.
    6. -
    7. If you choose to type your name, you can select a font style and size for your signature. If you choose to draw your signature, you can use your mouse or a stylus to draw it on a blank area. If you choose to use an image of your signature , you can browse and select a file that contains your signature. If you choose to use a certificate, you can select one from your digital ID or create a new one.
    8. -
    9. After you have chosen your signature option, click on the "Accept" button.
    10. -
    11. A crosshair cursor will appear on the document. You can drag and position it where you want to place your signature. You can also resize your signature by dragging its corners.
    12. -
    13. When you are satisfied with the placement and size of your signature, double-click on it to finalize it.
    14. -
    15. Your signature will be applied to the document and a blue ribbon icon will appear at the top of the document, indicating that it is signed.
    16. -
    -

    How to use the multilingual feature and switch between languages

    -

    One of the unique features of Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is that it supports over 20 languages, including English, French, German, Spanish, Italian, Portuguese, Russian, Chinese, Japanese, Korean, Arabic, Hebrew, and more. This means that you can use the software in any language that you prefer or need.

    -

    To use the multilingual feature and switch between languages, follow these steps:

    -
      -
    1. Launch Adobe Acrobat XI Pro and select "Edit" > "Preferences".
    2. -
    3. A window will pop up with various preferences categories. Select "International" from the left panel.
    4. -
    5. In the "Application Language" section, choose "Choose at application startup" from the drop-down menu and click "OK".
    6. -
    7. Restart Adobe Acrobat XI Pro. A window will pop up asking you to select your language. Choose the language that you want to use and click "OK".
    8. -
    9. Your software will launch in the selected language. You can change the language anytime by repeating the steps above.
    10. -
    -

    The multilingual feature is very useful for users who work with PDF documents in different languages or who want to learn a new language. It allows you to customize the software according to your needs and preferences.

    -

    How to customize the interface and preferences

    -

    Another feature of Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is that it allows you to customize the interface and preferences of the software. You can change the appearance, behavior, and functionality of the software according to your liking.

    -

    To customize the interface and preferences, follow these steps:

    -
      -
    1. Launch Adobe Acrobat XI Pro and select "Edit" > "Preferences".
    2. -
    3. A window will pop up with various preferences categories. You can select any category from the left panel and adjust its settings from the right panel.
    4. -
    5. Some of the most common and useful preferences categories are:
    6. -
        -
      • "General": Here you can change the basic settings of the software, such as startup options, zoom level, page display mode, measurement units, etc.
      • -
      • "Documents": Here you can change the settings related to opening and saving documents, such as file associations, default location, auto-save interval, etc.
      • -
      • "Security": Here you can change the settings related to protecting and encrypting documents, such as passwords, certificates, signatures, etc.
      • -
      • "Commenting": Here you can change the settings related to adding and managing comments on documents, such as color, font, opacity, etc.
      • -
      • "Forms": Here you can change the settings related to creating and filling out forms on documents , such as field properties, auto-complete options, etc.
      • -
      -
    7. After you have changed the settings that you want, click on the "OK" button to save your changes.
    8. -
    -

    You can also customize the interface of the software by changing the layout, size, and position of the toolbars, panels, and windows. You can do this by dragging and dropping them or using the "View" menu to show or hide them.

    -

    Customizing the interface and preferences of Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip allows you to tailor the software to your personal or professional needs and preferences. It makes the software more user-friendly and efficient.

    -

    Benefits of Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip

    -

    In this section, we will highlight some of the benefits of using Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip over other PDF software or alternatives. We will focus on three main aspects: quality, functionality, and performance.

    -

    Quality

    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip offers high-quality PDF software that is reliable, secure, and compatible. You can trust that the software will:

    -
      -
    • Create PDF documents that preserve the original format, layout, and quality of any source document.
    • -
    • Edit PDF documents without losing any data, information, or quality.
    • -
    • Convert PDF documents to other formats without compromising the quality or accuracy of the content.
    • -
    • Sign PDF documents with digital signatures or certificates that are valid and verifiable.
    • -
    • Share PDF documents with others without worrying about compatibility or security issues.
    • -
    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is a software that is designed and developed by Adobe Systems, which is a leading company in the field of digital media and creative software. Adobe Systems has a reputation for producing high-quality software that meets the standards and expectations of its users.

    -

    Functionality

    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip offers a wide range of functionality that covers all aspects of PDF management and manipulation. You can use the software to:

    -
      -
    • Create PDF documents from any application that prints or from multiple files of different types.
    • -
    • Edit PDF documents by adding or deleting text, images, links, headers, footers, watermarks, backgrounds, etc.
    • -
    • Convert PDF documents to other formats, such as Word, Excel, PowerPoint, HTML, JPEG, PNG, etc.
    • -
    • Sign PDF documents electronically with your digital signature or certificate.
    • -
    • Share PDF documents via email, cloud services, or social media.
    • -
    • Collaborate on PDF documents with others by adding comments, annotations , stamps, or drawing tools.
    • -
    • Protect PDF documents with passwords, encryption, redaction, or digital rights management (DRM).
    • -
    • Optimize PDF documents for web, print, or mobile devices.
    • -
    • Organize PDF documents by merging, splitting, rotating, cropping, or rearranging pages.
    • -
    • Create and fill out PDF forms with interactive fields and buttons.
    • -
    • Create and edit PDF portfolios that combine multiple files of different types into one PDF package.
    • -
    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is a software that offers a complete and comprehensive solution for any PDF task you can think of. It has a user-friendly interface that makes it easy to access and use its features and tools.

    -

    Performance

    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip offers a fast and smooth performance that ensures the efficiency and productivity of your work. You can expect that the software will:

    -
      -
    • Run smoothly and stably on your computer without crashing or freezing.
    • -
    • Load and process PDF documents quickly and accurately without errors or glitches.
    • -
    • Save and export PDF documents in a timely manner without delays or interruptions.
    • -
    • Apply the patch and activate the software without any complications or issues.
    • -
    • Update and support the software regularly to ensure its functionality and compatibility.
    • -
    -

    Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is a software that is optimized and enhanced by the patch that is included in the zip file. The patch fixes some bugs and improves the software's performance and security.

    -

    Conclusion

    -

    In conclusion, Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is a software package that includes the latest version of Adobe Acrobat XI Pro, a multilingual feature that supports over 20 languages, and a patch that fixes some bugs and enhances the software's performance. It is a high-quality, functional, and performant PDF software that can handle any PDF task you can think of.

    -

    If you are looking for a powerful and versatile PDF editor and viewer, you might want to download and install Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip from the SadeemPC website. You can follow our guide on how to download, install, and use the software in this article. You will also learn about the benefits of using this software over other alternatives.

    -

    We hope that this article has been helpful and informative for you. Thank you for reading it and we hope you enjoy using Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip.

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip:

    -

    Q: Is Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip safe to download and use?

    -

    A: Yes, Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip is safe to download and use as long as you get it from the official and original link on the SadeemPC website. You should also verify the authenticity of the file by checking its name, size, and hash before downloading it. The patch that is included in the zip file is also safe to apply and use as it does not contain any viruses or malware.

    -

    Q: What are the system requirements for Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip?

    -

    A: The system requirements for Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip are as follows:

    | Operating System | Processor | Memory | Disk Space |
    | --- | --- | --- | --- |
    | Windows XP SP3 or later | 1.3 GHz or faster | 512 MB (1 GB recommended) | 1.85 GB |
    | Mac OS X v10.6.8 or later | Intel processor | 1 GB | 1.5 GB |
    -

    Q: How can I update Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip?

    -

    A: You can update Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip by visiting the SadeemPC website and looking for any new versions or updates of the software. You can also check for updates within the software by selecting "Help" > "Check for Updates". If there are any available updates, you can download and install them by following the instructions on the screen.

    -

    Q: How can I uninstall Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip?

    -

    A: You can uninstall Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip by using the "Add or Remove Programs" feature in Windows or the "Applications" folder in Mac OS X. To do this, follow these steps:

    -
      -
    1. Close Adobe Acrobat XI Pro if it is running.
    2. -
    3. For Windows, go to "Start" > "Control Panel" > "Add or Remove Programs". For Mac OS X, go to the "Applications" folder and drag the Adobe Acrobat XI Pro icon to the "Trash".
    4. -
    5. Find and select Adobe Acrobat XI Pro from the list of programs and click on the "Remove" or "Uninstall" button.
    6. -
    7. Follow the prompts to complete the uninstallation process.
    8. -
    -

    Q: Where can I find more information and support for Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip?

    -

    A: You can find more information and support for Adobe Acrobat XI Pro 11.0.15 Multilingual Incl Patch [SadeemPC].zip by visiting the following websites:

    -
      -
    • The SadeemPC website: https://sadeempc.com/ - Here you can find the download link, installation instructions, patch details, and other software downloads.
    • -
    • The Adobe website: https://www.adobe.com/products/acrobat.html - Here you can find the product overview, features, tutorials, FAQs, and customer service.
    • -
    • The Adobe community forum: https://community.adobe.com/t5/acrobat/bd-p/acrobat - Here you can find discussions, questions, answers, tips, and feedback from other users and experts.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Airslax 3.1 Full Iso.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Airslax 3.1 Full Iso.md deleted file mode 100644 index 636ec9d37d505a4ddab90515663ff0d3ebd923d6..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Airslax 3.1 Full Iso.md +++ /dev/null @@ -1,28 +0,0 @@ - -

    Airslax 3.1 Full Iso: A Powerful and Flexible Linux Distribution

    -

    Airslax is a Linux distribution based on Porteus that allows you to create a portable and customizable operating system that can run from a USB drive or a CD. Airslax 3.1 Full Iso is the latest version of this distribution, which includes many features and improvements.

    -

    Some of the features of Airslax 3.1 Full Iso are:

    -

    airslax 3.1 full iso


    Download File · https://urlcod.com/2uIav5



    -
      -
    • It supports both 32-bit and 64-bit architectures.
    • -
    • It has a user-friendly graphical interface with multiple themes and icons.
    • -
    • It has a wide range of applications for various tasks, such as web browsing, multimedia, office, security, networking, and more.
    • -
    • It has a powerful module system that allows you to add or remove components as you wish.
    • -
    • It has a built-in Wi-Fi hacking tool that can crack WEP and WPA passwords.
    • -
    • It has a fast boot time and low system requirements.
    • -
    -

    If you want to try Airslax 3.1 Full Iso, you can download it from the official website[^1^] or from other sources[^2^]. You will need to verify the md5sum of the iso file to ensure that it is not corrupted or tampered with. You can then burn it to a CD or write it to a USB drive using a tool like Rufus or UNetbootin. You can then boot your computer from the CD or USB drive and enjoy Airslax 3.1 Full Iso.

    Airslax 3.1 Full Iso is a versatile and flexible Linux distribution that can suit different needs and preferences. One of the main features of Airslax is its module system, which allows you to customize your operating system by adding or removing components as you wish. A module is a compressed file that contains a set of files and folders that are integrated into the system when it boots. You can create your own modules or download them from the official website or from other sources. You can also activate or deactivate modules on the fly without rebooting.

    -

    Another feature of Airslax is its graphical interface, which is based on LXDE, a lightweight and fast desktop environment. Airslax offers multiple themes and icons that you can choose from to change the look and feel of your system. You can also install other desktop environments or window managers if you prefer, such as KDE, XFCE, GNOME, Fluxbox, and more. You can also customize the panel, the menu, the wallpaper, the fonts, and other settings to your liking.

    -

    Airslax 3.1 Full Iso is a great choice for anyone who wants to have a portable and customizable Linux distribution that can run on any computer. It has many advantages over other Linux distributions, such as:

    -
      -
    • It is easy to use and configure.
    • -
    • It has a low memory footprint and high performance.
    • -
    • It has a wide range of applications for various purposes.
    • -
    • It has a powerful Wi-Fi hacking tool that can crack WEP and WPA passwords.
    • -
    • It has a module system that allows you to customize your system as you wish.
    • -
    -

    Airslax 3.1 Full Iso is also secure and reliable. It does not store any data on the host computer, so it does not leave any traces behind. It also has a firewall and an antivirus program that protect your system from malicious attacks. You can also encrypt your modules or your entire USB drive to prevent unauthorized access to your data.

    -

    If you want to install Airslax 3.1 Full Iso on your hard drive, you can do so by using the installer program that is included in the iso file. You will need to create a partition for Airslax and format it as ext4. You will then be able to boot Airslax from your hard drive and enjoy its features.

    -

    Airslax 3.1 Full Iso is a Linux distribution that offers you a portable and flexible operating system that can run from a USB drive or a CD. It has many features and advantages that make it a great choice for anyone who wants to have a powerful and customizable Linux system. You can download it from the official website or from other sources and try it for yourself.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen ((BETTER)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen ((BETTER)).md deleted file mode 100644 index beb31d8baa926352b94488daff69a21d892f559a..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen ((BETTER)).md +++ /dev/null @@ -1,45 +0,0 @@ -
    -

    PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen: How to Create Stunning DVD Slideshows with Ease

    -

If you are looking for powerful and easy-to-use software to create amazing DVD slideshows from your photos and videos, you should check out PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen. This is a cracked version of the popular Wondershare DVD Slideshow Builder Deluxe, which allows you to create professional-looking DVD slideshows with music, transitions, effects, and more.

    -

    PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen


    DOWNLOADhttps://urlcod.com/2uIaYa



    -

    With PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen, you can:

    -
      -
    • Drag and drop your photos and videos to the timeline and arrange them in any order you want.
    • -
    • Add your favorite songs as background music and sync them with the slideshow.
    • -
    • Choose from hundreds of transition styles and effects to make your slideshow more dynamic and attractive.
    • -
    • Add text, captions, titles, credits, and logos to personalize your slideshow.
    • -
    • Preview and edit your slideshow in real-time with the built-in DVD player.
    • -
    • Burn your slideshow to DVD discs or save them as ISO files or DVD folders.
    • -
    • Share your slideshow online via YouTube, Facebook, Vimeo, or other platforms.
    • -
    -

PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen is compatible with Windows XP, Vista, 7, 8, and 10. It supports common image formats such as JPG, PNG, BMP, GIF, and TIFF; video formats such as MP4, AVI, WMV, MOV, FLV, and MKV; and audio formats such as MP3, WAV, WMA, and M4A.

    -

    To download PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen, you need to follow these steps:

    -
      -
    1. Click on the link below to go to the download page.
    2. -
    3. Choose a download option from the list and click on it.
    4. -
    5. Wait for the download to complete and then extract the ZIP file.
    6. -
    7. Run the setup file and follow the instructions to install the software.
    8. -
    9. Run the keygen file and generate a serial number.
    10. -
    11. Enter the serial number when prompted and activate the software.
    12. -
    13. Enjoy creating stunning DVD slideshows with PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen!
    14. -
    -

    Download PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen Here

    - -

    If you are wondering why you should use PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen instead of the original software, here are some reasons:

    -

    -
      -
    • You can save money by using a cracked version of the software instead of paying for the license.
    • -
    • You can enjoy all the features and functions of the software without any limitations or restrictions.
    • -
    • You can use the software offline without needing an internet connection or a registration code.
    • -
    • You can update the software anytime without worrying about losing your activation status.
    • -
    -

    However, there are also some risks and disadvantages of using PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen, such as:

    -
      -
    • You may encounter viruses, malware, or spyware when downloading or installing the software from unknown sources.
    • -
    • You may violate the intellectual property rights of the software developer and face legal consequences.
    • -
    • You may not receive technical support or customer service from the software developer.
    • -
    • You may experience bugs, errors, or crashes when using the software due to compatibility issues or corrupted files.
    • -
    -

    Therefore, you should use PATCHED Wondershare DVD Slideshow Builder Deluxe 6.7.2 Keygen at your own risk and discretion. We are not responsible for any damages or losses that may occur as a result of using this software.

    81aa517590
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rambo1filmcompletenfrancaisenstreaming !!TOP!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rambo1filmcompletenfrancaisenstreaming !!TOP!!.md deleted file mode 100644 index 1f8c673b788e249f54243f2bd6b8d89dd533dee1..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Rambo1filmcompletenfrancaisenstreaming !!TOP!!.md +++ /dev/null @@ -1,15 +0,0 @@ - -

Rambo 1: First Blood, a cult film starring Sylvester Stallone

    -

Rambo 1: First Blood is an American action film directed by Ted Kotcheff and released in 1982. Adapted from the novel First Blood by David Morrell, it is the first installment in a series of films centered on the character of John Rambo, played by Sylvester Stallone.

    -

The film tells the story of John Rambo, a former Green Beret and Vietnam War hero, who finds himself hunted by the sheriff of a small town after being arrested for vagrancy. Rambo then uses his survival and combat skills to escape his pursuers and take revenge for the abuse he suffered.

    -

    Rambo1filmcompletenfrancaisenstreaming


    DOWNLOAD ->>> https://urlcod.com/2uIb9W



    -

Rambo 1: First Blood left its mark on the history of action cinema through its violence, its realism, and its political message. The film denounces the consequences of the Vietnam War for American veterans, who feel abandoned and rejected by society. It also shows the brutality and corruption of law enforcement officers, who abuse their power and violate human rights.

    -

Rambo 1: First Blood was a major commercial and critical success and launched Sylvester Stallone's career as an action movie star. The film also gave rise to a franchise of five films in total, the most recent being Rambo: Last Blood, released in 2019.

    -

If you are a Rambo fan, or if you want to discover this cult film, you can watch it in full streaming, in French, on the TON CINE-CLUB website, which also offers other action films with Sylvester Stallone.

    - -

Rambo 1: First Blood influenced many other action films, which borrowed its codes and themes. For example, Commando with Arnold Schwarzenegger, released in 1985, features a former elite soldier who must rescue his daughter after she is kidnapped by mercenaries. First Blood Part II with Sylvester Stallone, also released in 1985, is the direct sequel to Rambo 1: First Blood and shows Rambo returning to Vietnam to free American prisoners of war.

    -

Rambo 1: First Blood also sparked controversy and criticism, notably for its glorification of violence and its black-and-white morality. Some accused the film of promoting vigilantism and individualism by presenting Rambo as a lone avenger who takes justice into his own hands. Others criticized it for depicting law enforcement as enemies to be taken down and for giving its characters no nuance.

    -

    -

Rambo 1: First Blood remains a classic of the genre to this day and has marked several generations of viewers. It is considered one of Sylvester Stallone's best roles, as he gave the character real depth and humanity. The film is also famous for its original score composed by Jerry Goldsmith, who created the Rambo musical theme, titled It's a Long Road.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sipho Activation Bypassl.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sipho Activation Bypassl.md deleted file mode 100644 index 73ce83097356fa201baef44abf8bfd761b499a04..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Sipho Activation Bypassl.md +++ /dev/null @@ -1,33 +0,0 @@ -
    -

    How to Bypass Sipho Activation Lock on Your iPhone or iPad

    -

    Sipho Activation Lock is a security feature that prevents unauthorized access to your iOS device if it is lost or stolen. It requires you to enter your Apple ID and password to activate your device after a factory reset. However, sometimes you may forget your Apple ID or password, or you may buy a second-hand device that is still locked by the previous owner. In such cases, you need a tool to bypass Sipho Activation Lock and use your device normally.

    -

In this article, we will introduce you to one of the best tools for Sipho Activation Bypass: PassFab Activation Unlocker. This tool can help you remove Sipho Activation Lock without a password in just a few steps. It is compatible with iPhone 5s to iPhone X, iPad 5-7, iPad Air/Air 2, iPad Mini 2/3/4, and iPad Pro, running iOS 12.0 or later. It also supports jailbreaking your device for more features and customization.

    -

    Sipho Activation Bypassl


    Download Zip 🆗 https://urlcod.com/2uIbhO



    -

    How to Use PassFab Activation Unlocker to Bypass Sipho Activation Lock

    -

    Here are the steps to use PassFab Activation Unlocker to bypass Sipho Activation Lock on your iPhone or iPad:

    -
      -
    1. Download and install PassFab Activation Unlocker on your computer. Launch it and select "Remove Sipho Activation Lock" from the home screen. Click "Start".
    2. -
    3. Connect your device to your computer with a USB cable. Read and accept the agreement and click "Next".
    4. -
    5. The tool will automatically download the jailbreak package for your device. Follow the instructions on the screen and wait for the jailbreak process to complete.
    6. -
    7. Check your device information on the screen and make sure it is correct. Then click "Remove" to start removing Sipho Activation Lock.
    8. -
    9. Wait for a few minutes until the removal process is done. Your device will restart and you can set it up as a new one without Sipho Activation Lock.
    10. -
    -

    Note: After using PassFab Activation Unlocker to bypass Sipho Activation Lock, you will not be able to use any cellular or SIM card functions on your device. You will also not be able to log in with your original Apple ID or use any Apple services that require Apple ID login. You should back up your device data before using this tool as it may erase some of your data.

    -

    Conclusion

    -

    Sipho Activation Lock is a useful feature that protects your iOS device from unauthorized access. However, it can also cause trouble if you forget your Apple ID or password, or if you buy a second-hand device that is still locked by the previous owner. In such cases, you can use PassFab Activation Unlocker to bypass Sipho Activation Lock without password and use your device normally. This tool is easy to use and supports most iOS devices and versions. However, it also has some limitations and risks that you should be aware of before using it.

    - -

    What is Sipho Activation Lock and How Does It Work?

    -

Sipho Activation Lock is a feature introduced in iOS 7 and present in all later versions. It is designed to prevent anyone from using your iOS device if it is lost or stolen. It works by linking your device to your Apple ID and password. When you enable Find My on your device, Sipho Activation Lock is turned on automatically.

    -

    When Sipho Activation Lock is enabled, you need to enter your Apple ID and password to activate your device after a factory reset. This means that even if someone else has your device, they cannot use it without knowing your credentials. This also prevents them from erasing your device or turning off Find My.

    -

    Sipho Activation Lock also displays a custom message on your device's lock screen. You can use this message to provide your contact information or instructions for returning your device. You can also remotely erase your device data if you think it is compromised.

    -

    How to Avoid Sipho Activation Lock Issues

    -

    While Sipho Activation Lock is a helpful feature that protects your iOS device, it can also cause some issues if you are not careful. Here are some tips to avoid Sipho Activation Lock issues:

    -
      -
    • Remember your Apple ID and password. If you forget them, you can use the Apple ID account page or the Find My app to reset them.
    • -
    • Disable Sipho Activation Lock before selling or giving away your device. To do this, go to Settings > [your name] > Find My > Find My iPhone/iPad and turn off the switch. Then enter your Apple ID and password to confirm.
    • -
    • Check the status of Sipho Activation Lock before buying a second-hand device. To do this, go to the Check Coverage page on Apple's website and enter the device's serial number or IMEI. If the device has Sipho Activation Lock on, ask the seller to remove it before buying it.
    • -
    • Keep a backup of your device data. If you need to use a tool like PassFab Activation Unlocker to bypass Sipho Activation Lock, you may lose some of your data. Therefore, it is recommended to back up your data regularly using iCloud or iTunes.
    • -

    -

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/nielsr/imagegpt-completion/app.py b/spaces/nielsr/imagegpt-completion/app.py deleted file mode 100644 index 814d14a7d4353db02425de392e6aa665c1544879..0000000000000000000000000000000000000000 --- a/spaces/nielsr/imagegpt-completion/app.py +++ /dev/null @@ -1,77 +0,0 @@ -import os -os.system('pip install git+https://github.com/huggingface/transformers --upgrade') - -import gradio as gr -from transformers import ImageGPTFeatureExtractor, ImageGPTForCausalImageModeling -import torch -import numpy as np -import requests -from PIL import Image -import matplotlib.pyplot as plt - -feature_extractor = ImageGPTFeatureExtractor.from_pretrained("openai/imagegpt-medium") -model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-medium") -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model.to(device) - -# load image examples -urls = ['https://i.imgflip.com/4/4t0m5.jpg', - 'https://cdn.openai.com/image-gpt/completions/igpt-xl-miscellaneous-2-orig.png', - 'https://cdn.openai.com/image-gpt/completions/igpt-xl-miscellaneous-29-orig.png', - 'https://cdn.openai.com/image-gpt/completions/igpt-xl-openai-cooking-0-orig.png' - ] -for idx, url in enumerate(urls): - image = Image.open(requests.get(url, stream=True).raw) - image.save(f"image_{idx}.png") - -def process_image(image): - # prepare 7 images, shape (7, 1024) - batch_size = 7 - encoding = feature_extractor([image for _ in range(batch_size)], return_tensors="pt") - - # create primers - samples = encoding.input_ids.numpy() - n_px = feature_extractor.size - clusters = feature_extractor.clusters - n_px_crop = 16 - primers = samples.reshape(-1,n_px*n_px)[:,:n_px_crop*n_px] # crop top n_px_crop rows. These will be the conditioning tokens - - # get conditioned image (from first primer tensor), padded with black pixels to be 32x32 - primers_img = np.reshape(np.rint(127.5 * (clusters[primers[0]] + 1.0)), [n_px_crop,n_px, 3]).astype(np.uint8) - primers_img = np.pad(primers_img, pad_width=((0,16), (0,0), (0,0)), mode="constant") - - # generate (no beam search) - context = np.concatenate((np.full((batch_size, 1), model.config.vocab_size - 1), primers), axis=1) - context = torch.tensor(context).to(device) - output = model.generate(input_ids=context, max_length=n_px*n_px + 1, temperature=1.0, do_sample=True, top_k=40) - - # decode back to images (convert color cluster tokens back to pixels) - samples = output[:,1:].cpu().detach().numpy() - samples_img = [np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [n_px, n_px, 3]).astype(np.uint8) for s in samples] - - samples_img = [primers_img] + samples_img - - # stack images horizontally - row1 = np.hstack(samples_img[:4]) - row2 = np.hstack(samples_img[4:]) - result = np.vstack([row1, row2]) - - # return as PIL Image - completion = Image.fromarray(result) - - return completion - -title = "Interactive demo: ImageGPT" -description = "Demo for OpenAI's ImageGPT: Generative Pretraining from Pixels. To use it, simply upload an image or use the example image below and click 'submit'. Results will show up in a few seconds." -article = "

    ImageGPT: Generative Pretraining from Pixels | Official blog

    " -examples =[f"image_{idx}.png" for idx in range(len(urls))] - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="Model input + completions"), - title=title, - description=description, - article=article, - examples=examples, - enable_queue=True) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/__init__.py deleted file mode 100644 index 761a3d1c7afa049e9779ee9fc4d299e9aae38cad..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/layers/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList -from .deform_conv import DeformConv, ModulatedDeformConv -from .mask_ops import paste_masks_in_image -from .nms import batched_nms, batched_nms_rotated, nms, nms_rotated -from .roi_align import ROIAlign, roi_align -from .roi_align_rotated import ROIAlignRotated, roi_align_rotated -from .shape_spec import ShapeSpec -from .wrappers import ( - BatchNorm2d, - Conv2d, - ConvTranspose2d, - cat, - interpolate, - Linear, - nonzero_tuple, - cross_entropy, - empty_input_loss_func_wrapper, - shapes_to_tensor, - move_device_like, -) -from .blocks import CNNBlockBase, DepthwiseSeparableConv2d -from .aspp import ASPP -from .losses import ciou_loss, diou_loss - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/projects/README.md b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/projects/README.md deleted file mode 100644 index 95afe7ff8c8a9bd2f56621fcc3c1bdac11c256a9..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/projects/README.md +++ /dev/null @@ -1,2 +0,0 @@ - -Projects live in the [`projects` directory](../../projects) under the root of this repository, but not here. diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/cse_confidence.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/cse_confidence.py deleted file mode 100644 index 8220337cea8eb87bbdf74378079551259dcc37e2..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/predictors/cse_confidence.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Any -import torch -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.layers import ConvTranspose2d - -from densepose.modeling.confidence import DensePoseConfidenceModelConfig -from densepose.modeling.utils import initialize_module_params -from densepose.structures import decorate_cse_predictor_output_class_with_confidences - - -class DensePoseEmbeddingConfidencePredictorMixin: - """ - Predictor contains the last layers of a DensePose model that take DensePose head - outputs as an input and produce model outputs. Confidence predictor mixin is used - to generate confidences for coarse segmentation estimated by some - base predictor. 
Several assumptions need to hold for the base predictor: - 1) the `forward` method must return CSE DensePose head outputs, - tensor of shape [N, D, H, W] - 2) `interp2d` method must be defined to perform bilinear interpolation; - the same method is typically used for masks and confidences - Confidence predictor mixin provides confidence estimates, as described in: - N. Neverova et al., Correlated Uncertainty for Learning Dense Correspondences - from Noisy Labels, NeurIPS 2019 - A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020 - """ - - def __init__(self, cfg: CfgNode, input_channels: int): - """ - Initialize confidence predictor using configuration options. - - Args: - cfg (CfgNode): configuration options - input_channels (int): number of input channels - """ - # we rely on base predictor to call nn.Module.__init__ - super().__init__(cfg, input_channels) # pyre-ignore[19] - self.confidence_model_cfg = DensePoseConfidenceModelConfig.from_cfg(cfg) - self._initialize_confidence_estimation_layers(cfg, input_channels) - self._registry = {} - initialize_module_params(self) # pyre-ignore[6] - - def _initialize_confidence_estimation_layers(self, cfg: CfgNode, dim_in: int): - """ - Initialize confidence estimation layers based on configuration options - - Args: - cfg (CfgNode): configuration options - dim_in (int): number of input channels - """ - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - if self.confidence_model_cfg.segm_confidence.enabled: - self.coarse_segm_confidence_lowres = ConvTranspose2d( # pyre-ignore[16] - dim_in, 1, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - - def forward(self, head_outputs: torch.Tensor): - """ - Perform forward operation on head outputs used as inputs for the predictor. - Calls forward method from the base predictor and uses its outputs to compute - confidences. 
- - Args: - head_outputs (Tensor): head outputs used as predictor inputs - Return: - An instance of outputs with confidences, - see `decorate_cse_predictor_output_class_with_confidences` - """ - # assuming base class returns SIUV estimates in its first result - base_predictor_outputs = super().forward(head_outputs) # pyre-ignore[16] - - # create output instance by extending base predictor outputs: - output = self._create_output_instance(base_predictor_outputs) - - if self.confidence_model_cfg.segm_confidence.enabled: - # base predictor outputs are assumed to have `coarse_segm` attribute - # base predictor is assumed to define `interp2d` method for bilinear interpolation - output.coarse_segm_confidence = ( - F.softplus( - self.interp2d( # pyre-ignore[16] - self.coarse_segm_confidence_lowres(head_outputs) # pyre-ignore[16] - ) - ) - + self.confidence_model_cfg.segm_confidence.epsilon - ) - output.coarse_segm = base_predictor_outputs.coarse_segm * torch.repeat_interleave( - output.coarse_segm_confidence, base_predictor_outputs.coarse_segm.shape[1], dim=1 - ) - - return output - - def _create_output_instance(self, base_predictor_outputs: Any): - """ - Create an instance of predictor outputs by copying the outputs from the - base predictor and initializing confidence - - Args: - base_predictor_outputs: an instance of base predictor outputs - (the outputs type is assumed to be a dataclass) - Return: - An instance of outputs with confidences - """ - PredictorOutput = decorate_cse_predictor_output_class_with_confidences( - type(base_predictor_outputs) # pyre-ignore[6] - ) - # base_predictor_outputs is assumed to be a dataclass - # reassign all the fields from base_predictor_outputs (no deep copy!), add new fields - output = PredictorOutput( - **base_predictor_outputs.__dict__, - coarse_segm_confidence=None, - ) - return output diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_l_in21k_50ep.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_l_in21k_50ep.py deleted file mode 100644 index c64f0c18aea5dfe49fef028a6300ab1dc9f2537a..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/COCO/cascade_mask_rcnn_mvitv2_l_in21k_50ep.py +++ /dev/null @@ -1,22 +0,0 @@ -from .cascade_mask_rcnn_mvitv2_b_in21k_100ep import ( - dataloader, - lr_multiplier, - model, - train, - optimizer, -) - -model.backbone.bottom_up.embed_dim = 144 -model.backbone.bottom_up.depth = 48 -model.backbone.bottom_up.num_heads = 2 -model.backbone.bottom_up.last_block_indexes = (1, 7, 43, 47) -model.backbone.bottom_up.drop_path_rate = 0.5 - - -train.init_checkpoint = "detectron2://ImageNetPretrained/mvitv2/MViTv2_L_in21k.pyth" - -train.max_iter = train.max_iter // 2 # 100ep -> 50ep -lr_multiplier.scheduler.milestones = [ - milestone // 2 for milestone in lr_multiplier.scheduler.milestones -] -lr_multiplier.scheduler.num_updates = train.max_iter diff --git a/spaces/nomic-ai/openai_webgpt_comparisons/style.css b/spaces/nomic-ai/openai_webgpt_comparisons/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/openai_webgpt_comparisons/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - 
margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nsarrazin/agent-chat/README.md b/spaces/nsarrazin/agent-chat/README.md deleted file mode 100644 index 01870c674c4efe8e8535af61839de4a50d8ad527..0000000000000000000000000000000000000000 --- a/spaces/nsarrazin/agent-chat/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Agent Chat -emoji: 📈 -colorFrom: green -colorTo: green -sdk: docker -pinned: false -app_port: 3000 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fixed_types.h b/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fixed_types.h deleted file mode 100644 index 932f81a0f6e7769e1c69c78b92b2f97520f295ad..0000000000000000000000000000000000000000 --- a/spaces/ntt123/WaveGRU-Text-To-Speech/sparse_matmul/numerics/fixed_types.h +++ /dev/null @@ -1,139 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef LYRA_CODEC_SPARSE_MATMUL_NUMERICS_FIXED_TYPES_H_ -#define LYRA_CODEC_SPARSE_MATMUL_NUMERICS_FIXED_TYPES_H_ - -#include -#include -#include -#include -#include -#include - -#include "glog/logging.h" - -namespace csrblocksparse { - -// Useful for meta-programming and determining if a type is a fixed point type -class fixed_type {}; -class fixed16_type : fixed_type {}; -class fixed32_type : fixed_type {}; - -// Storage class for 16-bit fixed point values, not meant to be used directly -// for computation. Used for storage and converting to/from float32. -// N = 16 - 1 - |ExponentBits|. -// range = [-2^|ExponentBits|, 2^|ExponentBits|), increment = 2^-N. -template -class fixed16 : fixed16_type { - static_assert(ExponentBits >= 0 && ExponentBits < 16, - "ExponentBits must be in" - " the interval [0, 15]"); - - public: - static constexpr int kExponentBits = ExponentBits; - static constexpr int kMantissaBits = 16 - ExponentBits - 1; - - fixed16() = default; - explicit fixed16(float x) : val_(float_to_fixed16(x)) {} - explicit fixed16(int16_t x) : val_(x) {} - - explicit operator float() const { return fixed16_to_float(val_); } - - int raw_val() const { return val_; } - - private: - inline float fixed16_to_float(int16_t x) const { - return static_cast(x) / (1 << kMantissaBits); - } - - // Conversion clips to the representable range. 
- inline int16_t float_to_fixed16(float x) const { - float fval = std::round(x * static_cast(1 << kMantissaBits)); - const float max_bound = std::numeric_limits::max(); - const float min_bound = std::numeric_limits::min(); - auto val = - static_cast(std::max(std::min(fval, max_bound), min_bound)); - LOG_IF(INFO, fval > max_bound || fval < min_bound) - << "Conversion clipping: " << x << " to " << fixed16_to_float(val); - return val; - } - - int16_t val_; -}; - -// Storage class for 32-bit fixed point values, not meant to be used directly -// for computation. Used for storage and converting to/from float32. -// N = 32 - 1 - |ExponentBits|. -// range = [-2^|ExponentBits|, 2^|ExponentBits|), increment = 2^-N. -template -class fixed32 : fixed32_type { - static_assert(ExponentBits >= 0 && ExponentBits < 32, - "ExponentBits must be in" - " the interval [0, 31]"); - - public: - static constexpr int kExponentBits = ExponentBits; - static constexpr int kMantissaBits = 32 - ExponentBits - 1; - - fixed32() = default; - explicit fixed32(float x) : val_(float_to_fixed32(x)) {} - explicit fixed32(int32_t x) : val_(x) {} - - explicit operator float() const { return fixed32_to_float(val_); } - - int raw_val() const { return val_; } - - private: - inline float fixed32_to_float(int32_t x) const { - return static_cast(x) / (1LL << kMantissaBits); - } - - // Conversion clips to the representable range. - inline int32_t float_to_fixed32(float x) const { - float fval = std::round(x * static_cast(1LL << kMantissaBits)); - const int32_t max_bound = std::numeric_limits::max(); - const int32_t min_bound = std::numeric_limits::min(); - int32_t val = fval >= static_cast(max_bound) - ? max_bound - : (fval < static_cast(min_bound) - ? min_bound - : static_cast(fval)); - - LOG_IF(INFO, fval >= max_bound || fval < min_bound) - << "Conversion clipping: " << x << " to " << fixed32_to_float(val); - return val; - } - - int32_t val_; -}; - -template -struct IsFixed16Type - : std::integral_constant::value> {}; - -template -struct IsFixed32Type - : std::integral_constant::value> {}; - -template -struct IsFixedType : std::integral_constant::value || - IsFixed32Type::value> { -}; - -} // namespace csrblocksparse - -#endif // LYRA_CODEC_SPARSE_MATMUL_NUMERICS_FIXED_TYPES_H_ diff --git a/spaces/odettecantswim/rvc-mlbb/infer_pack/transforms.py b/spaces/odettecantswim/rvc-mlbb/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/rvc-mlbb/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - 
min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = 
cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/models/utils/loss.py b/spaces/oguzakif/video-object-remover/FGT_codes/FGT/models/utils/loss.py deleted file mode 100644 index fd2586d7cfd5e1dbc07dc9c0136f5499cb0dc7f0..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/FGT_codes/FGT/models/utils/loss.py +++ /dev/null @@ -1,474 +0,0 @@ -import os -import sys - -import torch -import torch.nn as nn -import numpy as np - - -class AlignLoss(nn.Module): - def __init__(self, reduction='mean'): - super().__init__() - self.loss_fn = nn.L1Loss(reduction=reduction) - - def forward(self, frames, masks, aligned_vs, aligned_rs): - """ - - :param frames: The original frames(GT) - :param masks: Original masks - :param aligned_vs: aligned visibility map from reference frame(List: B, C, T, H, W) - :param aligned_rs: aligned reference frames(List: B, C, T, H, W) - :return: - """ - try: - B, C, T, H, W = frames.shape - except ValueError: - frames = frames.unsqueeze(2) - masks = masks.unsqueeze(2) - B, C, T, H, W = frames.shape - loss = 0 - for i in range(T): - frame = frames[:, :, i] - mask = masks[:, :, i] - aligned_v = aligned_vs[i] - aligned_r = aligned_rs[i] - loss += self._singleFrameAlignLoss(frame, mask, aligned_v, aligned_r) - return loss - - def _singleFrameAlignLoss(self, targetFrame, targetMask, aligned_v, aligned_r): - """ - - :param targetFrame: 
targetFrame to be aligned-> B, C, H, W - :param targetMask: the mask of target frames - :param aligned_v: aligned visibility map from reference frame - :param aligned_r: aligned reference frame-> B, C, T, H, W - :return: - """ - targetVisibility = 1. - targetMask - targetVisibility = targetVisibility.unsqueeze(2) - targetFrame = targetFrame.unsqueeze(2) - visibility_map = targetVisibility * aligned_v - target_visibility = visibility_map * targetFrame - reference_visibility = visibility_map * aligned_r - loss = 0 - for i in range(aligned_r.shape[2]): - loss += self.loss_fn(target_visibility[:, :, i], reference_visibility[:, :, i]) - return loss - - -class HoleVisibleLoss(nn.Module): - def __init__(self, reduction='mean'): - super().__init__() - self.loss_fn = nn.L1Loss(reduction=reduction) - - def forward(self, outputs, masks, GTs, c_masks): - try: - B, C, T, H, W = outputs.shape - except ValueError: - outputs = outputs.unsqueeze(2) - masks = masks.unsqueeze(2) - GTs = GTs.unsqueeze(2) - c_masks = c_masks.unsqueeze(2) - B, C, T, H, W = outputs.shape - loss = 0 - for i in range(T): - loss += self._singleFrameHoleVisibleLoss(outputs[:, :, i], masks[:, :, i], c_masks[:, :, i], GTs[:, :, i]) - return loss - - def _singleFrameHoleVisibleLoss(self, targetFrame, targetMask, c_mask, GT): - return self.loss_fn(targetMask * c_mask * targetFrame, targetMask * c_mask * GT) - - -class HoleInvisibleLoss(nn.Module): - def __init__(self, reduction='mean'): - super().__init__() - self.loss_fn = nn.L1Loss(reduction=reduction) - - def forward(self, outputs, masks, GTs, c_masks): - try: - B, C, T, H, W = outputs.shape - except ValueError: - outputs = outputs.unsqueeze(2) - masks = masks.unsqueeze(2) - GTs = GTs.unsqueeze(2) - c_masks = c_masks.unsqueeze(2) - B, C, T, H, W = outputs.shape - loss = 0 - for i in range(T): - loss += self._singleFrameHoleInvisibleLoss(outputs[:, :, i], masks[:, :, i], c_masks[:, :, i], GTs[:, :, i]) - return loss - - def _singleFrameHoleInvisibleLoss(self, targetFrame, targetMask, c_mask, GT): - return self.loss_fn(targetMask * (1. - c_mask) * targetFrame, targetMask * (1. - c_mask) * GT) - - -class NonHoleLoss(nn.Module): - def __init__(self, reduction='mean'): - super().__init__() - self.loss_fn = nn.L1Loss(reduction=reduction) - - def forward(self, outputs, masks, GTs): - try: - B, C, T, H, W = outputs.shape - except ValueError: - outputs = outputs.unsqueeze(2) - masks = masks.unsqueeze(2) - GTs = GTs.unsqueeze(2) - B, C, T, H, W = outputs.shape - loss = 0 - for i in range(T): - loss += self._singleNonHoleLoss(outputs[:, :, i], masks[:, :, i], GTs[:, :, i]) - return loss - - def _singleNonHoleLoss(self, targetFrame, targetMask, GT): - return self.loss_fn((1. - targetMask) * targetFrame, (1. 
- targetMask) * GT) - - -class ReconLoss(nn.Module): - def __init__(self, reduction='mean', masked=False): - super().__init__() - self.loss_fn = nn.L1Loss(reduction=reduction) - self.masked = masked - - def forward(self, model_output, target, mask): - outputs = model_output - targets = target - if self.masked: - masks = mask - return self.loss_fn(outputs * masks, targets * masks) # L1 loss in masked region - else: - return self.loss_fn(outputs, targets) # L1 loss in the whole region - - -class VGGLoss(nn.Module): - def __init__(self, vgg): - super().__init__() - self.l1_loss = nn.L1Loss() - self.vgg = vgg - - def vgg_loss(self, output, target): - output_feature = self.vgg(output) - target_feature = self.vgg(target) - loss = ( - self.l1_loss(output_feature.relu2_2, target_feature.relu2_2) - + self.l1_loss(output_feature.relu3_3, target_feature.relu3_3) - + self.l1_loss(output_feature.relu4_3, target_feature.relu4_3) - ) - return loss - - def forward(self, data_input, model_output): - targets = data_input - outputs = model_output - mean_image_loss = self.vgg_loss(outputs, targets) - return mean_image_loss - - -class StyleLoss(nn.Module): - def __init__(self, vgg, original_channel_norm=True): - super().__init__() - self.l1_loss = nn.L1Loss() - self.vgg = vgg - self.original_channel_norm = original_channel_norm - - # From https://github.com/pytorch/tutorials/blob/master/advanced_source/neural_style_tutorial.py - def gram_matrix(self, input): - a, b, c, d = input.size() # a=batch size(=1) - # b=number of feature maps - # (c,d)=dimensions of a f. map (N=c*d) - - features = input.view(a * b, c * d) # resise F_XL into \hat F_XL - - G = torch.mm(features, features.t()) # compute the gram product - - # we 'normalize' the values of the gram matrix - # by dividing by the number of element in each feature maps. 
- return G.div(a * b * c * d) - - # Implement "Image Inpainting for Irregular Holes Using Partial Convolutions", Liu et al., 2018 - def style_loss(self, output, target): - output_features = self.vgg(output) - target_features = self.vgg(target) - layers = ['relu2_2', 'relu3_3', 'relu4_3'] # n_channel: 128 (=2 ** 7), 256 (=2 ** 8), 512 (=2 ** 9) - loss = 0 - for i, layer in enumerate(layers): - output_feature = getattr(output_features, layer) - target_feature = getattr(target_features, layer) - B, C_P, H, W = output_feature.shape - output_gram_matrix = self.gram_matrix(output_feature) - target_gram_matrix = self.gram_matrix(target_feature) - if self.original_channel_norm: - C_P_square_divider = 2 ** (i + 1) # original design (avoid too small loss) - else: - C_P_square_divider = C_P ** 2 - assert C_P == 128 * 2 ** i - loss += self.l1_loss(output_gram_matrix, target_gram_matrix) / C_P_square_divider - return loss - - def forward(self, data_input, model_output): - targets = data_input - outputs = model_output - mean_image_loss = self.style_loss(outputs, targets) - return mean_image_loss - - -class L1LossMaskedMean(nn.Module): - def __init__(self): - super().__init__() - self.l1 = nn.L1Loss(reduction='sum') - - def forward(self, x, y, mask): - masked = 1 - mask # 默认missing region的mask值为0,原有区域为1 - l1_sum = self.l1(x * masked, y * masked) - return l1_sum / torch.sum(masked) - - -class L2LossMaskedMean(nn.Module): - def __init__(self, reduction='sum'): - super().__init__() - self.l2 = nn.MSELoss(reduction=reduction) - - def forward(self, x, y, mask): - masked = 1 - mask - l2_sum = self.l2(x * masked, y * masked) - return l2_sum / torch.sum(masked) - - -class ImcompleteVideoReconLoss(nn.Module): - def __init__(self): - super().__init__() - self.loss_fn = L1LossMaskedMean() - - def forward(self, data_input, model_output): - imcomplete_video = model_output['imcomplete_video'] - targets = data_input['targets'] - down_sampled_targets = nn.functional.interpolate( - targets.transpose(1, 2), scale_factor=[1, 0.5, 0.5]) - - masks = data_input['masks'] - down_sampled_masks = nn.functional.interpolate( - masks.transpose(1, 2), scale_factor=[1, 0.5, 0.5]) - return self.loss_fn( - imcomplete_video, down_sampled_targets, - down_sampled_masks - ) - - -class CompleteFramesReconLoss(nn.Module): - def __init__(self): - super().__init__() - self.loss_fn = L1LossMaskedMean() - - def forward(self, data_input, model_output): - outputs = model_output['outputs'] - targets = data_input['targets'] - masks = data_input['masks'] - return self.loss_fn(outputs, targets, masks) - - -class AdversarialLoss(nn.Module): - r""" - Adversarial loss - https://arxiv.org/abs/1711.10337 - """ - - def __init__(self, type='nsgan', target_real_label=1.0, target_fake_label=0.0): - r""" - type = nsgan | lsgan | hinge - """ - super(AdversarialLoss, self).__init__() - self.type = type - self.register_buffer('real_label', torch.tensor(target_real_label)) - self.register_buffer('fake_label', torch.tensor(target_fake_label)) - - if type == 'nsgan': - self.criterion = nn.BCELoss() - elif type == 'lsgan': - self.criterion = nn.MSELoss() - elif type == 'hinge': - self.criterion = nn.ReLU() - - def __call__(self, outputs, is_real, is_disc=None): - if self.type == 'hinge': - if is_disc: - if is_real: - outputs = -outputs - return self.criterion(1 + outputs).mean() - else: - return (-outputs).mean() - else: - labels = (self.real_label if is_real else self.fake_label).expand_as( - outputs) - loss = self.criterion(outputs, labels) - return loss - - -# # 
From https://github.com/phoenix104104/fast_blind_video_consistency -# class TemporalWarpingLoss(nn.Module): -# def __init__(self, opts, flownet_checkpoint_path=None, alpha=50): -# super().__init__() -# self.loss_fn = L1LossMaskedMean() -# self.alpha = alpha -# self.opts = opts -# -# assert flownet_checkpoint_path is not None, "Flownet2 pretrained models must be provided" -# -# self.flownet_checkpoint_path = flownet_checkpoint_path -# raise NotImplementedError -# -# def get_flownet_checkpoint_path(self): -# return self.flownet_checkpoint_path -# -# def _flownetwrapper(self): -# Flownet = FlowNet2(self.opts, requires_grad=False) -# Flownet2_ckpt = torch.load(self.flownet_checkpoint_path) -# Flownet.load_state_dict(Flownet2_ckpt['state_dict']) -# Flownet.to(device) -# Flownet.exal() -# return Flownet -# -# def _setup(self): -# self.flownet = self._flownetwrapper() -# -# def _get_non_occlusuib_mask(self, targets, warped_targets): -# non_occlusion_masks = torch.exp( -# -self.alpha * torch.sum(targets[:, 1:] - warped_targets, dim=2).pow(2) -# ).unsqueeze(2) -# return non_occlusion_masks -# -# def _get_loss(self, outputs, warped_outputs, non_occlusion_masks, masks): -# return self.loss_fn( -# outputs[:, 1:] * non_occlusion_masks, -# warped_outputs * non_occlusion_masks, -# masks[:, 1:] -# ) -# -# def forward(self, data_input, model_output): -# if self.flownet is None: -# self._setup() -# -# targets = data_input['targets'].to(device) -# outputs = model_output['outputs'].to(device) -# flows = self.flownet.infer_video(targets).to(device) -# -# from utils.flow_utils import warp_optical_flow -# warped_targets = warp_optical_flow(targets[:, :-1], -flows).detach() -# warped_outputs = warp_optical_flow(outputs[:, :-1], -flows).detach() -# non_occlusion_masks = self._get_non_occlusion_mask(targets, warped_targets) -# -# # model_output is passed by name and dictionary is mutable -# # These values are sent to trainer for visualization -# model_output['warped_outputs'] = warped_outputs[0] -# model_output['warped_targets'] = warped_targets[0] -# model_output['non_occlusion_masks'] = non_occlusion_masks[0] -# from utils.flow_utils import flow_to_image -# flow_imgs = [] -# for flow in flows[0]: -# flow_img = flow_to_image(flow.cpu().permute(1, 2, 0).detach().numpy()).transpose(2, 0, 1) -# flow_imgs.append(torch.Tensor(flow_img)) -# model_output['flow_imgs'] = flow_imgs -# -# masks = data_input['masks'].to(device) -# return self._get_loss(outputs, warped_outputs, non_occlusion_masks, masks) -# -# -# class TemporalWarpingError(TemporalWarpingLoss): -# def __init__(self, flownet_checkpoint_path, alpha=50): -# super().__init__(flownet_checkpoint_path, alpha) -# self.loss_fn = L2LossMaskedMean(reduction='none') -# -# def _get_loss(self, outputs, warped_outputs, non_occlusion_masks, masks): -# # See https://arxiv.org/pdf/1808.00449.pdf 4.3 -# # The sum of non_occlusion_masks is different for each video, -# # So the batch dim is kept -# loss = self.loss_fn( -# outputs[:, 1:] * non_occlusion_masks, -# warped_outputs * non_occlusion_masks, -# masks[:, 1:] -# ).sum(1).sum(1).sum(1).sum(1) -# -# loss = loss / non_occlusion_masks.sum(1).sum(1).sum(1).sum(1) -# return loss.sum() - - -class ValidLoss(nn.Module): - def __init__(self): - super(ValidLoss, self).__init__() - self.loss_fn = nn.L1Loss(reduction='mean') - - def forward(self, model_output, target, mk): - outputs = model_output - targets = target - return self.loss_fn(outputs * (1 - mk), targets * (1 - mk)) # L1 loss in masked region - - - -class TVLoss(nn.Module): - 
def __init__(self): - super(TVLoss, self).__init__() - - def forward(self, mask_input, model_output): - # View 3D data as 2D - outputs = model_output - - if len(mask_input.shape) == 4: - mask_input = mask_input.unsqueeze(2) - if len(outputs.shape) == 4: - outputs = outputs.unsqueeze(2) - - outputs = outputs.permute((0, 2, 1, 3, 4)).contiguous() - masks = mask_input.permute((0, 2, 1, 3, 4)).contiguous() - - B, L, C, H, W = outputs.shape - x = outputs.view([B * L, C, H, W]) - - masks = masks.view([B * L, -1]) - mask_areas = masks.sum(dim=1) - - h_x = x.size()[2] - w_x = x.size()[3] - h_tv = torch.pow((x[:, :, 1:, :] - x[:, :, :h_x - 1, :]), 2).sum(1).sum(1).sum(1) # 差分是为了求梯度,本质上还是梯度平方和 - w_tv = torch.pow((x[:, :, :, 1:] - x[:, :, :, :w_x - 1]), 2).sum(1).sum(1).sum(1) - return ((h_tv + w_tv) / mask_areas).mean() - - -# for debug -def show_images(image, name): - import cv2 - import numpy as np - image = np.array(image) - image[image > 0.5] = 255. - image = image.transpose((1, 2, 0)) - cv2.imwrite(name, image) - - -if __name__ == '__main__': - # test align loss, - targetFrame = torch.ones(1, 3, 32, 32) - GT = torch.ones(1, 3, 32, 32) - GT += 1 - mask = torch.zeros(1, 1, 32, 32) - mask[:, :, 8:24, 8:24] = 1. - - # referenceFrames = torch.ones(1, 3, 4, 32, 32) - # referenceMasks = torch.zeros(1, 1, 4, 32, 32) - # referenceMasks[:, :, 0, 4:12, 4:12] = 1. - # referenceFrames[:, :, 0, 4:12, 4:12] = 2. - # referenceMasks[:, :, 1, 4:12, 20:28] = 1. - # referenceFrames[:, :, 1, 4:12, 20:28] = 2. - # referenceMasks[:, :, 2, 20:28, 4:12] = 1. - # referenceFrames[:, :, 2, 20:28, 4:12] = 2. - # referenceMasks[:, :, 3, 20:28, 20:28] = 1. - # referenceFrames[:, :, 3, 20:28, 20:28] = 2. - # - # aligned_v = referenceMasks - # aligned_v, referenceFrames = [aligned_v], [referenceFrames] - # - # result = AlignLoss()(targetFrame, mask, aligned_v, referenceFrames) - # print(result) - - c_mask = torch.zeros(1, 1, 32, 32) - c_mask[:, :, 8:16, 16:24] = 1. - result1 = HoleVisibleLoss()(targetFrame, mask, GT, c_mask) - result2 = HoleInvisibleLoss()(targetFrame, mask, GT, c_mask) - result3 = NonHoleLoss()(targetFrame, mask, GT) - print('vis: {}, invis: {}, gt: {}'.format(result1, result2, result3)) diff --git a/spaces/oguzakif/video-object-remover/SiamMask/data/coco/pycocotools/common/maskApi.c b/spaces/oguzakif/video-object-remover/SiamMask/data/coco/pycocotools/common/maskApi.c deleted file mode 100644 index 85e397918278126ce11f225dc109efbeb8a9394f..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/data/coco/pycocotools/common/maskApi.c +++ /dev/null @@ -1,230 +0,0 @@ -/************************************************************************** -* Microsoft COCO Toolbox. version 2.0 -* Data, paper, and tutorials available at: http://mscoco.org/ -* Code written by Piotr Dollar and Tsung-Yi Lin, 2015. -* Licensed under the Simplified BSD License [see coco/license.txt] -**************************************************************************/ -#include "maskApi.h" -#include -#include - -uint umin( uint a, uint b ) { return (ab) ? 
a : b; } - -void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts ) { - R->h=h; R->w=w; R->m=m; R->cnts=(m==0)?0:malloc(sizeof(uint)*m); - siz j; if(cnts) for(j=0; jcnts[j]=cnts[j]; -} - -void rleFree( RLE *R ) { - free(R->cnts); R->cnts=0; -} - -void rlesInit( RLE **R, siz n ) { - siz i; *R = (RLE*) malloc(sizeof(RLE)*n); - for(i=0; i0 ) { - c=umin(ca,cb); cc+=c; ct=0; - ca-=c; if(!ca && a0) { - crowd=iscrowd!=NULL && iscrowd[g]; - if(dt[d].h!=gt[g].h || dt[d].w!=gt[g].w) { o[g*m+d]=-1; continue; } - siz ka, kb, a, b; uint c, ca, cb, ct, i, u; int va, vb; - ca=dt[d].cnts[0]; ka=dt[d].m; va=vb=0; - cb=gt[g].cnts[0]; kb=gt[g].m; a=b=1; i=u=0; ct=1; - while( ct>0 ) { - c=umin(ca,cb); if(va||vb) { u+=c; if(va&&vb) i+=c; } ct=0; - ca-=c; if(!ca && athr) keep[j]=0; - } - } -} - -void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ) { - double h, w, i, u, ga, da; siz g, d; int crowd; - for( g=0; gthr) keep[j]=0; - } - } -} - -void rleToBbox( const RLE *R, BB bb, siz n ) { - siz i; for( i=0; id?1:c=dy && xs>xe) || (dxye); - if(flip) { t=xs; xs=xe; xe=t; t=ys; ys=ye; ye=t; } - s = dx>=dy ? (double)(ye-ys)/dx : (double)(xe-xs)/dy; - if(dx>=dy) for( d=0; d<=dx; d++ ) { - t=flip?dx-d:d; u[m]=t+xs; v[m]=(int)(ys+s*t+.5); m++; - } else for( d=0; d<=dy; d++ ) { - t=flip?dy-d:d; v[m]=t+ys; u[m]=(int)(xs+s*t+.5); m++; - } - } - /* get points along y-boundary and downsample */ - free(x); free(y); k=m; m=0; double xd, yd; - x=malloc(sizeof(int)*k); y=malloc(sizeof(int)*k); - for( j=1; jw-1 ) continue; - yd=(double)(v[j]h) yd=h; yd=ceil(yd); - x[m]=(int) xd; y[m]=(int) yd; m++; - } - /* compute rle encoding given y-boundary points */ - k=m; a=malloc(sizeof(uint)*(k+1)); - for( j=0; j0) b[m++]=a[j++]; else { - j++; if(jm, p=0; long x; int more; - char *s=malloc(sizeof(char)*m*6); - for( i=0; icnts[i]; if(i>2) x-=(long) R->cnts[i-2]; more=1; - while( more ) { - char c=x & 0x1f; x >>= 5; more=(c & 0x10) ? 
x!=-1 : x!=0; - if(more) c |= 0x20; c+=48; s[p++]=c; - } - } - s[p]=0; return s; -} - -void rleFrString( RLE *R, char *s, siz h, siz w ) { - siz m=0, p=0, k; long x; int more; uint *cnts; - while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0; - while( s[p] ) { - x=0; k=0; more=1; - while( more ) { - char c=s[p]-48; x |= (c & 0x1f) << 5*k; - more = c & 0x20; p++; k++; - if(!more && (c & 0x10)) x |= -1 << 5*k; - } - if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x; - } - rleInit(R,h,w,m,cnts); free(cnts); -} diff --git a/spaces/oliveiracwb/MBP/README.md b/spaces/oliveiracwb/MBP/README.md deleted file mode 100644 index f406925af77c14981f2057466589f9bdd380764b..0000000000000000000000000000000000000000 --- a/spaces/oliveiracwb/MBP/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MBP -emoji: 💩 -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/oliver2023/chatgpt-on-wechat/bot/bot.py b/spaces/oliver2023/chatgpt-on-wechat/bot/bot.py deleted file mode 100644 index fd56e50224098ed83723e392a5ee3e9854e7c820..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/bot/bot.py +++ /dev/null @@ -1,17 +0,0 @@ -""" -Auto-replay chat robot abstract class -""" - - -from bridge.context import Context -from bridge.reply import Reply - - -class Bot(object): - def reply(self, query, context : Context =None) -> Reply: - """ - bot auto-reply content - :param req: received message - :return: reply content - """ - raise NotImplementedError diff --git a/spaces/omlab/vlchecklist_demo/models/clip/engine.py b/spaces/omlab/vlchecklist_demo/models/clip/engine.py deleted file mode 100644 index c757e001af4797beca3c6b946d5ec811fa531dfe..0000000000000000000000000000000000000000 --- a/spaces/omlab/vlchecklist_demo/models/clip/engine.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -from models.model import Model -from utils.helpers import LRUCache, chunks -from config import EnvVar -import clip -import torch.nn.functional as F -from PIL import Image -import torch - -class CLIP(Model): - root_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)),"../../../") - MAX_CACHE = 20 - - def __init__(self): - self._models = LRUCache(self.MAX_CACHE) - self.batch_size = EnvVar.BATCH_SIZE - self.device = EnvVar.DEVICE - - def _load_model(self, model_id="ViT-B-16"): - if not self._models.has(model_id): - model, preprocess = clip.load(model_id) - self._models.put(model_id, [model, preprocess]) - return self._models.get(model_id) - - def _load_data(self, src_type, data): - pass - - def predict(self, model_id, - images, - texts, - ): - - model_list = self._load_model(model_id) - model = model_list[0] - preprocess = model_list[1] - # process images by batch - probs = [] - for i, chunk_i in enumerate(chunks(images, EnvVar.BATCH_SIZE)): - for j in range(len(chunk_i)): - image = preprocess(chunk_i[j]).unsqueeze(0) - # text format is [["there is a cat","there is a dog"],[...,...]...] 
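                # Each image is scored against its own list of candidate captions:
                # model(image, text) yields one logit per caption, and the softmax
                # over logits_per_image below turns those logits into per-caption
                # probabilities that are collected into `probs`.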
- text = clip.tokenize(texts[j]) - - with torch.no_grad(): - image_features = model.encode_image(image) - text_features = model.encode_text(text) - - logits_per_image, logits_per_text = model(image, text) - probs.extend(logits_per_image.softmax(dim=-1).cpu().numpy()) - - return probs - - - diff --git a/spaces/openbio/calculator/README.md b/spaces/openbio/calculator/README.md deleted file mode 100644 index a7c9baf8bbef6f04a51333aa5f804bf1b8bf118c..0000000000000000000000000000000000000000 --- a/spaces/openbio/calculator/README.md +++ /dev/null @@ -1,86 +0,0 @@ ---- -title: Ecometric calculator -emoji: 🌳 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -tags: [climatebase, biocredits, biodiversity, ecometrics] ---- - -# Bioscore calculator app - -This is a simple guide to help you set up and run a Gradio app. - -## Prerequisites - -- Python 3 installed on your system -- venv module for creating a virtual environment (usually comes with Python) - -## Installation - -Clone the repository to your local machine: -```bash -git clone https://github.com/your-username/gradio-app.git -cd gradio-app -``` - -Set up the service account credentials: -- Obtain a service account key file (in JSON format) with the necessary permissions to access any external services required by your Gradio app. -- Save the service account key file as `service_account.json` in the project directory. - -Create and activate a virtual environment: -```bash -python3 -m venv venv -source venv/bin/activate -``` - -Install the required Python packages: -```bash -pip3 install -r requirements.txt -``` - -## Run the App Locally - -To start the Gradio app, execute the following command: - -```bash -gradio app.py -``` - -The app will start running, and you should see output similar to the following: - -``` -Running on http://127.0.0.1:7860 -Open your web browser and visit http://127.0.0.1:7860 to access the Gradio app. -``` - - -## Deploy to Huggingface - -The app is hosted a Huggingface space, under the `hf` host and `main` branch. - -To push changes from main branch to Huggingfage, run: - -```bash -git push hf main -``` - -You'll see the app's response in `https://huggingface.co/spaces/openbio/calculator` - -❗Note: There's no dev nor staging environment, nor CI. Every push will immediately build and go live. - - -## Customization - -Feel free to modify the app.py file to customize the behavior and appearance of your Gradio app. You can add or remove input and output interfaces, change their appearance, or include additional functionality as per your requirements. - -## Feedback - -If you encounter any issues or have any questions or suggestions, please don't hesitate to open an issue on the GitHub repository. We appreciate your feedback and contributions! - -## License - -This project is licensed under the MIT License. 
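
For reference, a minimal sketch of the kind of `app.py` entry point that `gradio app.py` expects. The function, labels, and single text input/output below are placeholders, not the calculator's real interfaces:

```python
import gradio as gr

def compute_score(project_name: str) -> str:
    # Placeholder logic only; the real app computes ecometrics for a project.
    return f"Received project: {project_name}"

demo = gr.Interface(
    fn=compute_score,                        # callback run on each submission
    inputs=gr.Textbox(label="Project name"),
    outputs=gr.Textbox(label="Result"),
    title="Ecometric calculator (sketch)",
)

if __name__ == "__main__":
    demo.launch()  # serves on http://127.0.0.1:7860 by default
```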
diff --git a/spaces/osanseviero/SMILES_RDKit_Py3DMOL_FORK/app.py b/spaces/osanseviero/SMILES_RDKit_Py3DMOL_FORK/app.py deleted file mode 100644 index f202a89c632b0a9c69fc80f9f5413644027daa7a..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/SMILES_RDKit_Py3DMOL_FORK/app.py +++ /dev/null @@ -1,108 +0,0 @@ -import streamlit as st -import streamlit.components.v1 as components -import py3Dmol -from rdkit import Chem -from rdkit.Chem import Draw -from rdkit.Chem import AllChem - -st.title('SMILES + RDKit + Py3DMOL :smiley:') -def show(smi, style='stick'): - mol = Chem.MolFromSmiles(smi) - mol = Chem.AddHs(mol) - AllChem.EmbedMolecule(mol) - AllChem.MMFFOptimizeMolecule(mol, maxIters=200) - mblock = Chem.MolToMolBlock(mol) - - view = py3Dmol.view(width=400, height=400) - view.addModel(mblock, 'mol') - view.setStyle({style:{}}) - view.zoomTo() - view.show() - view.render() - t =view.js() - f = open('viz.html', 'w') - f.write(t.startjs) - f.write(t.endjs) - f.close() - -compound_smiles=st.text_input('SMILES please','CC') -m = Chem.MolFromSmiles(compound_smiles) - -Draw.MolToFile(m,'mol.png') - - -show(compound_smiles) -HtmlFile = open("viz.html", 'r', encoding='utf-8') -source_code = HtmlFile.read() -c1,c2=st.beta_columns(2) -with c1: - st.write('Molecule :coffee:') - st.image('mol.png') -with c2: - components.html(source_code, height = 400,width=400) - -################ Sidebar #################### -with st.sidebar.beta_expander('Rule One (Atoms and Bonds)'): - st.markdown(''' -## Atoms -|If |then | -|----|----| -| Non-aromatic atoms |Uper case letters | -| Aromatic atoms |lower case letters | -|Atomic symbols has more than one letter | The second is lower case | -## Bonds -| Bond type| Bond symbol | -|---|---| -|Simple | - | -|Double|=| -|Triple|#| -|Aromatic|*| -| Disconnected structures|. | -### Example: - CC 👉 There is a non-aromatic carbon attached to another non-aromatic carbon by a single bond. -🛑 A bond between two lower case atom symbols is *aromatic*. -''') - -with st.sidebar.beta_expander('Rule Two (Simple Chains)'): - st.markdown(''' - ## Simple chains - * Structures are hydrogen suppresed (Molecules represented without hydrogens) - * If enough bonds are not identified by the user, the system will assume that connections - are satisfied by hidrogens. - * The user can explicitly identify hydrogen bonds, but if so the interpreter will assume that all of them are fully identified. - Note: - - *Because SMILES allows entry of all elements in the periodic table, - and also utilizes hydrogen suppression, the user should be aware of chemicals with two letters - that could be misinterpreted by the computer. For example, 'Sc' could be interpreted as a **sulfur** - atom connected to an aromatic **carbon** by a single bond, or it could be the symbol for **scandium**. - The SMILES interpreter gives priority to the interpretation of a single bond connecting a sulfur atom and an aromatic carbon. - To identify scandium the user should enter [Sc]*. - ''') - -with st.sidebar.beta_expander('Rule Three (Branches)'): - st.markdown(''' - ## Branches - * A branch from a chain is specified by placing the SMILES symbol(s) for the branch between parenthesis. - * The string in parentheses is placed directly after the symbol for the atom to which it is connected. - * If it is connected by a double or triple bond, the bond symbol immediately follows the left parenthesis. 
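    ### Example:
     CC(=O)C 👉 A three-carbon chain where the branch in parentheses, (=O), attaches a double-bonded oxygen to the middle carbon, giving acetone.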
- ''') - -with st.sidebar.beta_expander('Rule Four (Rings)'): - st.markdown(''' - ## Rings - * SMILES allows a user to identify ring structures by using numbers to identify the opening and closing ring atom. - For example, in C1CCCCC1, the first carbon has a number '1' which connects by a single bond with the last carbon which also has a number '1'. - The resulting structure is cyclohexane. Chemicals that have multiple rings may be identified by using different numbers for each ring. - * If a double, single, or aromatic bond is used for the ring closure, the bond symbol is placed before the ring closure number. - ''') - -with st.sidebar.beta_expander('Rule Five (Charged atoms)'): - st.markdown(''' - ## Charged atoms - Charges on an atom can be used to override the knowledge regarding valence that is built into SMILES software. - The format for identifying a charged atom consists of the atom followed by brackets which enclose the charge on the atom. - The number of charges may be explicitly stated ({-1}) or not ({-}). - ''') -st.sidebar.markdown('Original Author: José Manuel Nápoles ([@napoles3d](https://twitter.com/napoles3D)). Find original app in https://share.streamlit.io/napoles-uach/st_smiles/main/smiles.py') -st.sidebar.write('Info about SMILES: https://archive.epa.gov/med/med_archive_03/web/html/smiles.html') \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/latent_diffusion_uncond.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/latent_diffusion_uncond.md deleted file mode 100644 index 8555d631d43c0626e93b31aa9e92081712452887..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/latent_diffusion_uncond.md +++ /dev/null @@ -1,35 +0,0 @@ - - -# Unconditional Latent Diffusion - -Unconditional Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. - -The abstract from the paper is: - -*By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. 
Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.* - -The original codebase can be found at [CompVis/latent-diffusion](https://github.com/CompVis/latent-diffusion). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## LDMPipeline -[[autodoc]] LDMPipeline - - all - - __call__ - -## ImagePipelineOutput -[[autodoc]] pipelines.ImagePipelineOutput diff --git a/spaces/parkyzh/bingo/src/pages/api/blob.ts b/spaces/parkyzh/bingo/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/perilli/tortoise-tts-v2/tortoise/utils/__init__.py b/spaces/perilli/tortoise-tts-v2/tortoise/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pharma-IA/PharmaWise_Prospecto_Megalabs_V2.10/app.py b/spaces/pharma-IA/PharmaWise_Prospecto_Megalabs_V2.10/app.py deleted file mode 100644 index 0ea30734174f1969a07f14ff01c2000279a25bf6..0000000000000000000000000000000000000000 --- a/spaces/pharma-IA/PharmaWise_Prospecto_Megalabs_V2.10/app.py +++ /dev/null @@ -1,60 +0,0 @@ - -"""PharmaWise_Entrenador Prospecto - - -Adaptado para Hugging Face -""" - -# Importacion de Librerias -import os -import openai -from llama_index import StorageContext, load_index_from_storage, LLMPredictor, ServiceContext -from langchain.chat_models import ChatOpenAI -from llama_index.tools import QueryEngineTool, ToolMetadata - -# Conectar Cuenta API de OpenAI -openai_api_key = os.environ.get('openai_key') -if openai_api_key: - os.environ["OPENAI_API_KEY"] = openai_api_key - openai.api_key = openai_api_key -else: - print("Error con la clave de acceso a OpenAI.") - - - -# Cargar entrenamiento y Modelo -exec(os.environ.get('storage_context')) - - - -import gradio as gr -from gradio import components -import textwrap - -prompt = 'responder en español 
como un asistente experto en medicina, dando una respuesta detallada y reflejando de forma fiel los datos disponibles. Considerar segun aplique la siguiente informacion adicional: Nombre del Medicamento: Acsodix. Principio Activo: Vortioxetina. Formas y Dosis: Acsodix 5, 10, y 20 mg (comprimidos recubiertos). Indicaciones: Tratamiento de episodios de depresión mayor en adultos. Contraindicaciones: Alergia a la Vortioxetina, toma de ciertos inhibidores de la monoaminooxidasa, enfermedad hepática grave, etc. Precauciones: Embarazo, lactancia, conducción, uso de otros medicamentos, etc. Efectos Adversos: Náuseas, diarrea, vómitos, mareo, prurito, etc. Instrucciones de Uso: Se puede tomar con o sin alimentos, no se aconseja la combinación con alcohol, etc. Presentación: Cajas conteniendo 30 comprimidos recubiertos en diferentes dosis. La dosis máxima de Acsodix para adultos menores a 65 años es de 20 mg al día. Se debe tener precaución cuando se traten pacientes mayores de 65 años de edad con dosis superiores a 10 mg de Vortioxetina una vez al día, ya que los datos son limitados. El medicamento Vortioxetina (Acsodix) no puede ni debe ser combinado con alcohol. Se debe tener precaución al combinar la Vortioxetina con anticoagulantes o antiagregantes plaquetarios orales como warfarina, dipiridamol, fenprocumón, ácido acetilsalicílico, debido al potencial aumento del riesgo de hemorragia. La información proporcionada no menciona específicamente los dolores de cabeza como un efecto secundario de tomar Acsodix. Informe a su médico si está tomando, ha tomado recientemente o podría tener que tomar cualquier otro medicamento, incluso los adquiridos sin receta. No se observaron cambios en los niveles de hormonas sexuales después de la administración conjunta de Vortioxetina con el anticonceptivo oral combinado (etinil estradiol 30 μg/ levonorgestrel 150 μg).Acsodix no está recomendado en niños y adolescentes menores de 18 años, debido a la falta de información en este grupo de edad. 
La pregunta a responder es la siguiente:' - - - - -# Función para procesar la entrada del usuario y generar la respuesta -def responder(pregunta): - pregunta = prompt + pregunta - respuesta = engine.query(pregunta) - #respuesta_formateada = "\n".join(textwrap.wrap(respuesta.response, width=1000)) # Ancho ajustado a 1000 - respuesta_formateada = respuesta - return respuesta_formateada - -# Definir la interfaz de usuario con Gradio -iface = gr.Interface(fn=responder, - inputs=components.Textbox(lines=2, placeholder='Escribe tu pregunta aquí...'), - outputs='text', - title='PharmaWise 3.5 - demo Prospecto Megalabs V2.10', - description='Realiza preguntas a tus datos y obtén respuestas en español.', - examples=[ - ['¿Se puede tomar con anticonceptivos?'], - ]) - -# Ejecutar la interfaz -iface.launch(debug=False, inline=False) - - diff --git a/spaces/picopi/openai-reverse-proxy/README.md b/spaces/picopi/openai-reverse-proxy/README.md deleted file mode 100644 index 4071a9dbd1b1f9e8116eaca6ff27190a40226cc3..0000000000000000000000000000000000000000 --- a/spaces/picopi/openai-reverse-proxy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Openai Reverse Proxy -emoji: 🚀 -colorFrom: green -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pierretassel/JobShopCPRL/MyDummyVecEnv.py b/spaces/pierretassel/JobShopCPRL/MyDummyVecEnv.py deleted file mode 100644 index d4a4cc20dd2fef2fac54d97283acf5b950f8b12e..0000000000000000000000000000000000000000 --- a/spaces/pierretassel/JobShopCPRL/MyDummyVecEnv.py +++ /dev/null @@ -1,124 +0,0 @@ -from collections import OrderedDict -from typing import Any, Callable, List, Optional, Sequence, Type, Union - -import gym -import numpy as np - -from stable_baselines3.common.vec_env.base_vec_env import VecEnv, VecEnvIndices, VecEnvObs, VecEnvStepReturn -from stable_baselines3.common.vec_env.util import dict_to_obs, obs_space_info - -import torch - - -class MyDummyVecEnv(VecEnv): - """ - Creates a simple vectorized wrapper for multiple environments, calling each environment in sequence on the current - Python process. This is useful for computationally simple environment such as ``cartpole-v1``, - as the overhead of multiprocess or multithread outweighs the environment computation time. - This can also be used for RL methods that - require a vectorized environment, but that you want a single environments to train with. 
- - :param env_fns: a list of functions - that return environments to vectorize - """ - - def __init__(self, env_fns: List[Callable[[], gym.Env]], device): - self.envs = [fn() for fn in env_fns] - env = self.envs[0] - VecEnv.__init__(self, len(env_fns), env.observation_space, env.action_space) - obs_space = env.observation_space - self.keys, shapes, dtypes = obs_space_info(obs_space) - self.device = device - - self.buf_obs = OrderedDict( - [(k, torch.zeros((self.num_envs,) + tuple(shapes[k]), dtype=torch.float, device=self.device)) for k in self.keys]) - self.buf_dones = np.zeros((self.num_envs,), dtype=bool) - self.buf_rews = np.zeros((self.num_envs,), dtype=np.float32) - self.buf_infos = [{} for _ in range(self.num_envs)] - self.actions = None - - def step_async(self, actions: np.ndarray) -> None: - self.actions = actions - - def step_wait(self) -> VecEnvStepReturn: - for env_idx in range(self.num_envs): - obs, self.buf_rews[env_idx], self.buf_dones[env_idx], self.buf_infos[env_idx] = self.envs[env_idx].step( - self.actions[env_idx] - ) - if self.buf_dones[env_idx]: - # save final observation where user can get it, then reset - self.buf_infos[env_idx]["terminal_observation"] = obs - obs = self.envs[env_idx].reset() - self._save_obs(env_idx, obs) - return (self._obs_from_buf(), self.buf_rews, self.buf_dones, self.buf_infos) - - def seed(self, seed: Optional[int] = None) -> List[Union[None, int]]: - seeds = list() - for idx, env in enumerate(self.envs): - seeds.append(env.seed(seed + idx)) - return seeds - - def reset(self) -> VecEnvObs: - for env_idx in range(self.num_envs): - obs = self.envs[env_idx].reset() - self._save_obs(env_idx, obs) - return self._obs_from_buf() - - def close(self) -> None: - for env in self.envs: - env.close() - - def get_images(self) -> Sequence[np.ndarray]: - return [env.render(mode="rgb_array") for env in self.envs] - - def render(self, mode: str = "human") -> Optional[np.ndarray]: - """ - Gym environment rendering. If there are multiple environments then - they are tiled together in one image via ``BaseVecEnv.render()``. - Otherwise (if ``self.num_envs == 1``), we pass the render call directly to the - underlying environment. - - Therefore, some arguments such as ``mode`` will have values that are valid - only when ``num_envs == 1``. - - :param mode: The rendering type. 
- """ - if self.num_envs == 1: - return self.envs[0].render(mode=mode) - else: - return super().render(mode=mode) - - def _save_obs(self, env_idx: int, obs: VecEnvObs) -> None: - for key in self.keys: - self.buf_obs[key][env_idx] = torch.from_numpy(obs[key]).to(self.device, non_blocking=True) - - def _obs_from_buf(self) -> VecEnvObs: - return dict_to_obs(self.observation_space, self.buf_obs) - - def get_attr(self, attr_name: str, indices: VecEnvIndices = None) -> List[Any]: - """Return attribute from vectorized environment (see base class).""" - target_envs = self._get_target_envs(indices) - return [getattr(env_i, attr_name) for env_i in target_envs] - - def set_attr(self, attr_name: str, value: Any, indices: VecEnvIndices = None) -> None: - """Set attribute inside vectorized environments (see base class).""" - target_envs = self._get_target_envs(indices) - for env_i in target_envs: - setattr(env_i, attr_name, value) - - def env_method(self, method_name: str, *method_args, indices: VecEnvIndices = None, **method_kwargs) -> List[Any]: - """Call instance methods of vectorized environments.""" - target_envs = self._get_target_envs(indices) - return [getattr(env_i, method_name)(*method_args, **method_kwargs) for env_i in target_envs] - - def env_is_wrapped(self, wrapper_class: Type[gym.Wrapper], indices: VecEnvIndices = None) -> List[bool]: - """Check if worker environments are wrapped with a given wrapper""" - target_envs = self._get_target_envs(indices) - # Import here to avoid a circular import - from stable_baselines3.common import env_util - - return [env_util.is_wrapped(env_i, wrapper_class) for env_i in target_envs] - - def _get_target_envs(self, indices: VecEnvIndices) -> List[gym.Env]: - indices = self._get_indices(indices) - return [self.envs[i] for i in indices] diff --git a/spaces/pikto/Elite-freegpt-webui/client/css/main.css b/spaces/pikto/Elite-freegpt-webui/client/css/main.css deleted file mode 100644 index 9b9b83be20c0b5792d8697116f328753d1ec6a02..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/client/css/main.css +++ /dev/null @@ -1,7 +0,0 @@ -.main-container { - display: flex; - padding: var(--section-gap); - height: 100vh; - justify-content: center; - box-sizing: border-box; -} diff --git a/spaces/pknez/face-swap-docker/chain_img_processor/image.py b/spaces/pknez/face-swap-docker/chain_img_processor/image.py deleted file mode 100644 index 868450f8dadf02646707eb86e1ffe8f688ca0eb2..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/chain_img_processor/image.py +++ /dev/null @@ -1,176 +0,0 @@ -from jaa import JaaCore -from roop.utilities import get_device - - -from typing import Any - -version = "4.0.0" - -class ChainImgProcessor(JaaCore): - - def __init__(self): - JaaCore.__init__(self) - - self.processors:dict = { - } - - self.processors_objects:dict[str,list[ChainImgPlugin]] = {} - - self.default_chain = "" - self.init_on_start = "" - - self.inited_processors = [] - - self.is_demo_row_render = False - - def process_plugin_manifest(self, modname, manifest): - # adding processors from plugin manifest - if "img_processor" in manifest: # process commands - for cmd in manifest["img_processor"].keys(): - self.processors[cmd] = manifest["img_processor"][cmd] - - return manifest - - def init_with_plugins(self): - self.init_plugins(["core"]) - self.display_init_info() - - #self.init_translator_engine(self.default_translator) - init_on_start_arr = self.init_on_start.split(",") - for proc_id in init_on_start_arr: - 
self.init_processor(proc_id) - - def run_chain(self, img, params:dict[str,Any] = None, chain:str = None, thread_index:int = 0): - if chain is None: - chain = self.default_chain - if params is None: - params = {} - params["_thread_index"] = thread_index - chain_ar = chain.split(",") - # init all not inited processors first - for proc_id in chain_ar: - if proc_id != "": - if not proc_id in self.inited_processors: - self.init_processor(proc_id) - - - - # run processing - if self.is_demo_row_render: - import cv2 - import numpy as np - height, width, channels = img.shape - img_blank = np.zeros((height+30, width*(1+len(chain_ar)), 3), dtype=np.uint8) - img_blank.fill(255) - - y = 30 - x = 0 - img_blank[y:y + height, x:x + width] = img - - # Set the font scale and thickness - font_scale = 1 - thickness = 2 - - # Set the font face to a monospace font - font_face = cv2.FONT_HERSHEY_SIMPLEX - - cv2.putText(img_blank, "original", (x+4, y-7), font_face, font_scale, (0, 0, 0), thickness) - - - i = 0 - for proc_id in chain_ar: - i += 1 - if proc_id != "": - #img = self.processors[proc_id][1](self, img, params) # params can be modified inside - y = 30 - img = self.processors_objects[proc_id][thread_index].process(img,params) - if self.is_demo_row_render: - x = width*i - img_blank[y:y + height, x:x + width] = img - cv2.putText(img_blank, proc_id, (x + 4, y - 7), font_face, font_scale, (0, 0, 0), thickness) - - if self.is_demo_row_render: - return img_blank, params - - return img, params - - # ---------------- init translation stuff ---------------- - def fill_processors_for_thread_chains(self, threads:int = 1, chain:str = None): - if chain is None: - chain = self.default_chain - - chain_ar = chain.split(",") - # init all not initialized processors first - for processor_id in chain_ar: - if processor_id != "": - if self.processors_objects.get(processor_id) is None: - self.processors_objects[processor_id] = [] - while len(self.processors_objects[processor_id]) < threads: - self.add_processor_to_list(processor_id) - - def add_processor_to_list(self, processor_id: str): - obj = self.processors[processor_id](self) - obj.init_plugin() - if self.processors_objects.get(processor_id) is None: - self.processors_objects[processor_id] = [] - self.processors_objects[processor_id].append(obj) - def init_processor(self, processor_id: str): - if processor_id == "": # blank line case - return - - if processor_id in self.inited_processors: - return - - try: - if self.verbose: - self.print_blue("TRY: init processor plugin '{0}'...".format(processor_id)) - self.add_processor_to_list(processor_id) - self.inited_processors.append(processor_id) - if self.verbose: - self.print_blue("SUCCESS: '{0}' initialized!".format(processor_id)) - - except Exception as e: - self.print_error("Error init processor plugin {0}...".format(processor_id), e) - - # ------------ formatting stuff ------------------- - def display_init_info(self): - if self.verbose: - print("ChainImgProcessor v{0}:".format(version)) - self.format_print_key_list("processors:", self.processors.keys()) - - def format_print_key_list(self, key:str, value:list): - print(key+": ".join(value)) - - def print_error(self,err_txt,e:Exception = None): - print(err_txt,"red") - # if e != None: - # cprint(e,"red") - import traceback - traceback.print_exc() - - def print_red(self,txt): - print(txt) - - def print_blue(self, txt): - print(txt) - -class ChainImgPlugin: - - device = 'cpu' - - def __init__(self, core: ChainImgProcessor): - self.core = core - self.device = get_device() - - 
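    # Concrete plugins are expected to override init_plugin() for one-time setup
    # and process() for the per-image work; ChainImgProcessor.run_chain() calls
    # process(img, params) on a separate plugin instance per thread for each
    # step of the chain.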
def init_plugin(self): # here you can init something. Called once - pass - def process(self, img, params:dict): # process img. Called multiple - return img - -_img_processor:ChainImgProcessor = None -def get_single_image_processor() -> ChainImgProcessor: - global _img_processor - if _img_processor is None: - _img_processor = ChainImgProcessor() - _img_processor.init_with_plugins() - return _img_processor \ No newline at end of file diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py deleted file mode 100644 index a8727ed8592533a009b6202be92f438d4152e793..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/svg.py +++ /dev/null @@ -1,188 +0,0 @@ -""" - pygments.formatters.svg - ~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for SVG output. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Comment -from pip._vendor.pygments.util import get_bool_opt, get_int_opt - -__all__ = ['SvgFormatter'] - - -def escape_html(text): - """Escape &, <, > as well as single and double quotes for HTML.""" - return text.replace('&', '&'). \ - replace('<', '<'). \ - replace('>', '>'). \ - replace('"', '"'). \ - replace("'", ''') - - -class2style = {} - -class SvgFormatter(Formatter): - """ - Format tokens as an SVG graphics file. This formatter is still experimental. - Each line of code is a ```` element with explicit ``x`` and ``y`` - coordinates containing ```` elements with the individual token styles. - - By default, this formatter outputs a full SVG document including doctype - declaration and the ```` root element. - - .. versionadded:: 0.9 - - Additional options accepted: - - `nowrap` - Don't wrap the SVG ```` elements in ```` elements and - don't add a XML declaration and a doctype. If true, the `fontfamily` - and `fontsize` options are ignored. Defaults to ``False``. - - `fontfamily` - The value to give the wrapping ```` element's ``font-family`` - attribute, defaults to ``"monospace"``. - - `fontsize` - The value to give the wrapping ```` element's ``font-size`` - attribute, defaults to ``"14px"``. - - `linenos` - If ``True``, add line numbers (default: ``False``). - - `linenostart` - The line number for the first line (default: ``1``). - - `linenostep` - If set to a number n > 1, only every nth line number is printed. - - `linenowidth` - Maximum width devoted to line numbers (default: ``3*ystep``, sufficient - for up to 4-digit line numbers. Increase width for longer code blocks). - - `xoffset` - Starting offset in X direction, defaults to ``0``. - - `yoffset` - Starting offset in Y direction, defaults to the font size if it is given - in pixels, or ``20`` else. (This is necessary since text coordinates - refer to the text baseline, not the top edge.) - - `ystep` - Offset to add to the Y coordinate for each subsequent line. This should - roughly be the text size plus 5. It defaults to that value if the text - size is given in pixels, or ``25`` else. - - `spacehack` - Convert spaces in the source to `` ``, which are non-breaking - spaces. 
SVG provides the ``xml:space`` attribute to control how - whitespace inside tags is handled, in theory, the ``preserve`` value - could be used to keep all whitespace as-is. However, many current SVG - viewers don't obey that rule, so this option is provided as a workaround - and defaults to ``True``. - """ - name = 'SVG' - aliases = ['svg'] - filenames = ['*.svg'] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.nowrap = get_bool_opt(options, 'nowrap', False) - self.fontfamily = options.get('fontfamily', 'monospace') - self.fontsize = options.get('fontsize', '14px') - self.xoffset = get_int_opt(options, 'xoffset', 0) - fs = self.fontsize.strip() - if fs.endswith('px'): fs = fs[:-2].strip() - try: - int_fs = int(fs) - except: - int_fs = 20 - self.yoffset = get_int_opt(options, 'yoffset', int_fs) - self.ystep = get_int_opt(options, 'ystep', int_fs + 5) - self.spacehack = get_bool_opt(options, 'spacehack', True) - self.linenos = get_bool_opt(options,'linenos',False) - self.linenostart = get_int_opt(options,'linenostart',1) - self.linenostep = get_int_opt(options,'linenostep',1) - self.linenowidth = get_int_opt(options,'linenowidth', 3*self.ystep) - self._stylecache = {} - - def format_unencoded(self, tokensource, outfile): - """ - Format ``tokensource``, an iterable of ``(tokentype, tokenstring)`` - tuples and write it into ``outfile``. - - For our implementation we put all lines in their own 'line group'. - """ - x = self.xoffset - y = self.yoffset - if not self.nowrap: - if self.encoding: - outfile.write('\n' % - self.encoding) - else: - outfile.write('\n') - outfile.write('\n') - outfile.write('\n') - outfile.write('\n' % - (self.fontfamily, self.fontsize)) - - counter = self.linenostart - counter_step = self.linenostep - counter_style = self._get_style(Comment) - line_x = x - - if self.linenos: - if counter % counter_step == 0: - outfile.write('%s' % - (x+self.linenowidth,y,counter_style,counter)) - line_x += self.linenowidth + self.ystep - counter += 1 - - outfile.write('' % (line_x, y)) - for ttype, value in tokensource: - style = self._get_style(ttype) - tspan = style and '' or '' - tspanend = tspan and '' or '' - value = escape_html(value) - if self.spacehack: - value = value.expandtabs().replace(' ', ' ') - parts = value.split('\n') - for part in parts[:-1]: - outfile.write(tspan + part + tspanend) - y += self.ystep - outfile.write('\n') - if self.linenos and counter % counter_step == 0: - outfile.write('%s' % - (x+self.linenowidth,y,counter_style,counter)) - - counter += 1 - outfile.write('' % (line_x,y)) - outfile.write(tspan + parts[-1] + tspanend) - outfile.write('') - - if not self.nowrap: - outfile.write('\n') - - def _get_style(self, tokentype): - if tokentype in self._stylecache: - return self._stylecache[tokentype] - otokentype = tokentype - while not self.style.styles_token(tokentype): - tokentype = tokentype.parent - value = self.style.style_for_token(tokentype) - result = '' - if value['color']: - result = ' fill="#' + value['color'] + '"' - if value['bold']: - result += ' font-weight="bold"' - if value['italic']: - result += ' font-style="italic"' - self._stylecache[otokentype] = result - return result diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/diagram/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/diagram/__init__.py deleted file mode 100644 index 83f9018ee9357bd193e91abbc66fe9f0e7075f2c..0000000000000000000000000000000000000000 --- 
a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/diagram/__init__.py +++ /dev/null @@ -1,656 +0,0 @@ -# mypy: ignore-errors -import railroad -from pip._vendor import pyparsing -import typing -from typing import ( - List, - NamedTuple, - Generic, - TypeVar, - Dict, - Callable, - Set, - Iterable, -) -from jinja2 import Template -from io import StringIO -import inspect - - -jinja2_template_source = """\ -{% if not embed %} - - - -{% endif %} - {% if not head %} - - {% else %} - {{ head | safe }} - {% endif %} -{% if not embed %} - - -{% endif %} -{{ body | safe }} -{% for diagram in diagrams %} -
-    <div class="railroad-group">
-        <h1 class="railroad-heading">{{ diagram.title }}</h1>
-        <div class="railroad-description">{{ diagram.text }}</div>
-        <div class="railroad-svg">
-            {{ diagram.svg }}
-        </div>
-    </div>
    -{% endfor %} -{% if not embed %} - - -{% endif %} -""" - -template = Template(jinja2_template_source) - -# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet -NamedDiagram = NamedTuple( - "NamedDiagram", - [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)], -) -""" -A simple structure for associating a name with a railroad diagram -""" - -T = TypeVar("T") - - -class EachItem(railroad.Group): - """ - Custom railroad item to compose a: - - Group containing a - - OneOrMore containing a - - Choice of the elements in the Each - with the group label indicating that all must be matched - """ - - all_label = "[ALL]" - - def __init__(self, *items): - choice_item = railroad.Choice(len(items) - 1, *items) - one_or_more_item = railroad.OneOrMore(item=choice_item) - super().__init__(one_or_more_item, label=self.all_label) - - -class AnnotatedItem(railroad.Group): - """ - Simple subclass of Group that creates an annotation label - """ - - def __init__(self, label: str, item): - super().__init__(item=item, label="[{}]".format(label) if label else label) - - -class EditablePartial(Generic[T]): - """ - Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been - constructed. - """ - - # We need this here because the railroad constructors actually transform the data, so can't be called until the - # entire tree is assembled - - def __init__(self, func: Callable[..., T], args: list, kwargs: dict): - self.func = func - self.args = args - self.kwargs = kwargs - - @classmethod - def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]": - """ - If you call this function in the same way that you would call the constructor, it will store the arguments - as you expect. For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3) - """ - return EditablePartial(func=func, args=list(args), kwargs=kwargs) - - @property - def name(self): - return self.kwargs["name"] - - def __call__(self) -> T: - """ - Evaluate the partial and return the result - """ - args = self.args.copy() - kwargs = self.kwargs.copy() - - # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g. 
- # args=['list', 'of', 'things']) - arg_spec = inspect.getfullargspec(self.func) - if arg_spec.varargs in self.kwargs: - args += kwargs.pop(arg_spec.varargs) - - return self.func(*args, **kwargs) - - -def railroad_to_html(diagrams: List[NamedDiagram], embed=False, **kwargs) -> str: - """ - Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams - :params kwargs: kwargs to be passed in to the template - """ - data = [] - for diagram in diagrams: - if diagram.diagram is None: - continue - io = StringIO() - try: - css = kwargs.get('css') - diagram.diagram.writeStandalone(io.write, css=css) - except AttributeError: - diagram.diagram.writeSvg(io.write) - title = diagram.name - if diagram.index == 0: - title += " (root)" - data.append({"title": title, "text": "", "svg": io.getvalue()}) - - return template.render(diagrams=data, embed=embed, **kwargs) - - -def resolve_partial(partial: "EditablePartial[T]") -> T: - """ - Recursively resolves a collection of Partials into whatever type they are - """ - if isinstance(partial, EditablePartial): - partial.args = resolve_partial(partial.args) - partial.kwargs = resolve_partial(partial.kwargs) - return partial() - elif isinstance(partial, list): - return [resolve_partial(x) for x in partial] - elif isinstance(partial, dict): - return {key: resolve_partial(x) for key, x in partial.items()} - else: - return partial - - -def to_railroad( - element: pyparsing.ParserElement, - diagram_kwargs: typing.Optional[dict] = None, - vertical: int = 3, - show_results_names: bool = False, - show_groups: bool = False, -) -> List[NamedDiagram]: - """ - Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram - creation if you want to access the Railroad tree before it is converted to HTML - :param element: base element of the parser being diagrammed - :param diagram_kwargs: kwargs to pass to the Diagram() constructor - :param vertical: (optional) - int - limit at which number of alternatives should be - shown vertically instead of horizontally - :param show_results_names - bool to indicate whether results name annotations should be - included in the diagram - :param show_groups - bool to indicate whether groups should be highlighted with an unlabeled - surrounding box - """ - # Convert the whole tree underneath the root - lookup = ConverterState(diagram_kwargs=diagram_kwargs or {}) - _to_diagram_element( - element, - lookup=lookup, - parent=None, - vertical=vertical, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - root_id = id(element) - # Convert the root if it hasn't been already - if root_id in lookup: - if not element.customName: - lookup[root_id].name = "" - lookup[root_id].mark_for_extraction(root_id, lookup, force=True) - - # Now that we're finished, we can convert from intermediate structures into Railroad elements - diags = list(lookup.diagrams.values()) - if len(diags) > 1: - # collapse out duplicate diags with the same name - seen = set() - deduped_diags = [] - for d in diags: - # don't extract SkipTo elements, they are uninformative as subdiagrams - if d.name == "...": - continue - if d.name is not None and d.name not in seen: - seen.add(d.name) - deduped_diags.append(d) - resolved = [resolve_partial(partial) for partial in deduped_diags] - else: - # special case - if just one diagram, always display it, even if - # it has no name - resolved = [resolve_partial(partial) for partial in diags] - return sorted(resolved, key=lambda diag: diag.index) - - 
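# Illustrative use of the two entry points above (``my_grammar`` is a placeholder
# for any pyparsing ParserElement):
#
#     diagrams = to_railroad(my_grammar, vertical=3, show_results_names=True)
#     html = railroad_to_html(diagrams)
#     with open("grammar_diagram.html", "w") as f:
#         f.write(html)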
-def _should_vertical( - specification: int, exprs: Iterable[pyparsing.ParserElement] -) -> bool: - """ - Returns true if we should return a vertical list of elements - """ - if specification is None: - return False - else: - return len(_visible_exprs(exprs)) >= specification - - -class ElementState: - """ - State recorded for an individual pyparsing Element - """ - - # Note: this should be a dataclass, but we have to support Python 3.5 - def __init__( - self, - element: pyparsing.ParserElement, - converted: EditablePartial, - parent: EditablePartial, - number: int, - name: str = None, - parent_index: typing.Optional[int] = None, - ): - #: The pyparsing element that this represents - self.element: pyparsing.ParserElement = element - #: The name of the element - self.name: typing.Optional[str] = name - #: The output Railroad element in an unconverted state - self.converted: EditablePartial = converted - #: The parent Railroad element, which we store so that we can extract this if it's duplicated - self.parent: EditablePartial = parent - #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram - self.number: int = number - #: The index of this inside its parent - self.parent_index: typing.Optional[int] = parent_index - #: If true, we should extract this out into a subdiagram - self.extract: bool = False - #: If true, all of this element's children have been filled out - self.complete: bool = False - - def mark_for_extraction( - self, el_id: int, state: "ConverterState", name: str = None, force: bool = False - ): - """ - Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram - :param el_id: id of the element - :param state: element/diagram state tracker - :param name: name to use for this element's text - :param force: If true, force extraction now, regardless of the state of this. Only useful for extracting the - root element when we know we're finished - """ - self.extract = True - - # Set the name - if not self.name: - if name: - # Allow forcing a custom name - self.name = name - elif self.element.customName: - self.name = self.element.customName - else: - self.name = "" - - # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children - # to be added - # Also, if this is just a string literal etc, don't bother extracting it - if force or (self.complete and _worth_extracting(self.element)): - state.extract_into_diagram(el_id) - - -class ConverterState: - """ - Stores some state that persists between recursions into the element tree - """ - - def __init__(self, diagram_kwargs: typing.Optional[dict] = None): - #: A dictionary mapping ParserElements to state relating to them - self._element_diagram_states: Dict[int, ElementState] = {} - #: A dictionary mapping ParserElement IDs to subdiagrams generated from them - self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {} - #: The index of the next unnamed element - self.unnamed_index: int = 1 - #: The index of the next element. 
This is used for sorting - self.index: int = 0 - #: Shared kwargs that are used to customize the construction of diagrams - self.diagram_kwargs: dict = diagram_kwargs or {} - self.extracted_diagram_names: Set[str] = set() - - def __setitem__(self, key: int, value: ElementState): - self._element_diagram_states[key] = value - - def __getitem__(self, key: int) -> ElementState: - return self._element_diagram_states[key] - - def __delitem__(self, key: int): - del self._element_diagram_states[key] - - def __contains__(self, key: int): - return key in self._element_diagram_states - - def generate_unnamed(self) -> int: - """ - Generate a number used in the name of an otherwise unnamed diagram - """ - self.unnamed_index += 1 - return self.unnamed_index - - def generate_index(self) -> int: - """ - Generate a number used to index a diagram - """ - self.index += 1 - return self.index - - def extract_into_diagram(self, el_id: int): - """ - Used when we encounter the same token twice in the same tree. When this - happens, we replace all instances of that token with a terminal, and - create a new subdiagram for the token - """ - position = self[el_id] - - # Replace the original definition of this element with a regular block - if position.parent: - ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name) - if "item" in position.parent.kwargs: - position.parent.kwargs["item"] = ret - elif "items" in position.parent.kwargs: - position.parent.kwargs["items"][position.parent_index] = ret - - # If the element we're extracting is a group, skip to its content but keep the title - if position.converted.func == railroad.Group: - content = position.converted.kwargs["item"] - else: - content = position.converted - - self.diagrams[el_id] = EditablePartial.from_call( - NamedDiagram, - name=position.name, - diagram=EditablePartial.from_call( - railroad.Diagram, content, **self.diagram_kwargs - ), - index=position.number, - ) - - del self[el_id] - - -def _worth_extracting(element: pyparsing.ParserElement) -> bool: - """ - Returns true if this element is worth having its own sub-diagram. 
Simply, if any of its children - themselves have children, then its complex enough to extract - """ - children = element.recurse() - return any(child.recurse() for child in children) - - -def _apply_diagram_item_enhancements(fn): - """ - decorator to ensure enhancements to a diagram item (such as results name annotations) - get applied on return from _to_diagram_element (we do this since there are several - returns in _to_diagram_element) - """ - - def _inner( - element: pyparsing.ParserElement, - parent: typing.Optional[EditablePartial], - lookup: ConverterState = None, - vertical: int = None, - index: int = 0, - name_hint: str = None, - show_results_names: bool = False, - show_groups: bool = False, - ) -> typing.Optional[EditablePartial]: - ret = fn( - element, - parent, - lookup, - vertical, - index, - name_hint, - show_results_names, - show_groups, - ) - - # apply annotation for results name, if present - if show_results_names and ret is not None: - element_results_name = element.resultsName - if element_results_name: - # add "*" to indicate if this is a "list all results" name - element_results_name += "" if element.modalResults else "*" - ret = EditablePartial.from_call( - railroad.Group, item=ret, label=element_results_name - ) - - return ret - - return _inner - - -def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]): - non_diagramming_exprs = ( - pyparsing.ParseElementEnhance, - pyparsing.PositionToken, - pyparsing.And._ErrorStop, - ) - return [ - e - for e in exprs - if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs)) - ] - - -@_apply_diagram_item_enhancements -def _to_diagram_element( - element: pyparsing.ParserElement, - parent: typing.Optional[EditablePartial], - lookup: ConverterState = None, - vertical: int = None, - index: int = 0, - name_hint: str = None, - show_results_names: bool = False, - show_groups: bool = False, -) -> typing.Optional[EditablePartial]: - """ - Recursively converts a PyParsing Element to a railroad Element - :param lookup: The shared converter state that keeps track of useful things - :param index: The index of this element within the parent - :param parent: The parent of this element in the output tree - :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default), - it sets the threshold of the number of items before we go vertical. 
If True, always go vertical, if False, never - do so - :param name_hint: If provided, this will override the generated name - :param show_results_names: bool flag indicating whether to add annotations for results names - :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed - :param show_groups: bool flag indicating whether to show groups using bounding box - """ - exprs = element.recurse() - name = name_hint or element.customName or element.__class__.__name__ - - # Python's id() is used to provide a unique identifier for elements - el_id = id(element) - - element_results_name = element.resultsName - - # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram - if not element.customName: - if isinstance( - element, - ( - # pyparsing.TokenConverter, - # pyparsing.Forward, - pyparsing.Located, - ), - ): - # However, if this element has a useful custom name, and its child does not, we can pass it on to the child - if exprs: - if not exprs[0].customName: - propagated_name = name - else: - propagated_name = None - - return _to_diagram_element( - element.expr, - parent=parent, - lookup=lookup, - vertical=vertical, - index=index, - name_hint=propagated_name, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - # If the element isn't worth extracting, we always treat it as the first time we say it - if _worth_extracting(element): - if el_id in lookup: - # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate, - # so we have to extract it into a new diagram. - looked_up = lookup[el_id] - looked_up.mark_for_extraction(el_id, lookup, name=name_hint) - ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name) - return ret - - elif el_id in lookup.diagrams: - # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we - # just put in a marker element that refers to the sub-diagram - ret = EditablePartial.from_call( - railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"] - ) - return ret - - # Recursively convert child elements - # Here we find the most relevant Railroad element for matching pyparsing Element - # We use ``items=[]`` here to hold the place for where the child elements will go once created - if isinstance(element, pyparsing.And): - # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat - # (all will have the same name, and resultsName) - if not exprs: - return None - if len(set((e.name, e.resultsName) for e in exprs)) == 1: - ret = EditablePartial.from_call( - railroad.OneOrMore, item="", repeat=str(len(exprs)) - ) - elif _should_vertical(vertical, exprs): - ret = EditablePartial.from_call(railroad.Stack, items=[]) - else: - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)): - if not exprs: - return None - if _should_vertical(vertical, exprs): - ret = EditablePartial.from_call(railroad.Choice, 0, items=[]) - else: - ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[]) - elif isinstance(element, pyparsing.Each): - if not exprs: - return None - ret = EditablePartial.from_call(EachItem, items=[]) - elif isinstance(element, pyparsing.NotAny): - ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="") - elif isinstance(element, pyparsing.FollowedBy): - ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", 
item="") - elif isinstance(element, pyparsing.PrecededBy): - ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="") - elif isinstance(element, pyparsing.Group): - if show_groups: - ret = EditablePartial.from_call(AnnotatedItem, label="", item="") - else: - ret = EditablePartial.from_call(railroad.Group, label="", item="") - elif isinstance(element, pyparsing.TokenConverter): - label = type(element).__name__.lower() - if label == "tokenconverter": - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - else: - ret = EditablePartial.from_call(AnnotatedItem, label=label, item="") - elif isinstance(element, pyparsing.Opt): - ret = EditablePartial.from_call(railroad.Optional, item="") - elif isinstance(element, pyparsing.OneOrMore): - ret = EditablePartial.from_call(railroad.OneOrMore, item="") - elif isinstance(element, pyparsing.ZeroOrMore): - ret = EditablePartial.from_call(railroad.ZeroOrMore, item="") - elif isinstance(element, pyparsing.Group): - ret = EditablePartial.from_call( - railroad.Group, item=None, label=element_results_name - ) - elif isinstance(element, pyparsing.Empty) and not element.customName: - # Skip unnamed "Empty" elements - ret = None - elif isinstance(element, pyparsing.ParseElementEnhance): - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - elif len(exprs) > 0 and not element_results_name: - ret = EditablePartial.from_call(railroad.Group, item="", label=name) - elif len(exprs) > 0: - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - else: - terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName) - ret = terminal - - if ret is None: - return - - # Indicate this element's position in the tree so we can extract it if necessary - lookup[el_id] = ElementState( - element=element, - converted=ret, - parent=parent, - parent_index=index, - number=lookup.generate_index(), - ) - if element.customName: - lookup[el_id].mark_for_extraction(el_id, lookup, element.customName) - - i = 0 - for expr in exprs: - # Add a placeholder index in case we have to extract the child before we even add it to the parent - if "items" in ret.kwargs: - ret.kwargs["items"].insert(i, None) - - item = _to_diagram_element( - expr, - parent=ret, - lookup=lookup, - vertical=vertical, - index=i, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - # Some elements don't need to be shown in the diagram - if item is not None: - if "item" in ret.kwargs: - ret.kwargs["item"] = item - elif "items" in ret.kwargs: - # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal - ret.kwargs["items"][i] = item - i += 1 - elif "items" in ret.kwargs: - # If we're supposed to skip this element, remove it from the parent - del ret.kwargs["items"][i] - - # If all this items children are none, skip this item - if ret and ( - ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0) - or ("item" in ret.kwargs and ret.kwargs["item"] is None) - ): - ret = EditablePartial.from_call(railroad.Terminal, name) - - # Mark this element as "complete", ie it has all of its children - if el_id in lookup: - lookup[el_id].complete = True - - if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete: - lookup.extract_into_diagram(el_id) - if ret is not None: - ret = EditablePartial.from_call( - railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"] - ) - - return ret diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/frontmatter-13eea6e4.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/frontmatter-13eea6e4.js deleted file mode 100644 index 81fe6ce108772024d1bdcf1a89eec89673a130ac..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/frontmatter-13eea6e4.js +++ /dev/null @@ -1,2 +0,0 @@ -import{s as m,f as s,c as i,p,t as a,S as l}from"./Index-9bf8add7.js";import{yaml as f}from"./yaml-95012b83.js";import"./index-50ad4c77.js";import"./svelte/svelte.js";import"./Button-8eeccca1.js";import"./Index-c74a8b7c.js";import"./Copy-1b5c0932.js";import"./Download-696bd40c.js";import"./BlockLabel-e3970ebb.js";import"./Empty-eeaba2d1.js";import"./Example-e03fb3b4.js";const n=/^---\s*$/m,v={defineNodes:[{name:"Frontmatter",block:!0},"FrontmatterMark"],props:[m({Frontmatter:[a.documentMeta,a.monospace],FrontmatterMark:a.processingInstruction}),s.add({Frontmatter:i,FrontmatterMark:()=>null})],wrap:p(t=>{const{parser:e}=l.define(f);return t.type.name==="Frontmatter"?{parser:e,overlay:[{from:t.from+4,to:t.to-4}]}:null}),parseBlock:[{name:"Frontmatter",before:"HorizontalRule",parse:(t,e)=>{let r;const o=new Array;if(t.lineStart===0&&n.test(e.text)){for(o.push(t.elt("FrontmatterMark",0,4));t.nextLine();)if(n.test(e.text)){r=t.lineStart+4;break}return r!==void 0&&(o.push(t.elt("FrontmatterMark",r-4,r)),t.addElement(t.elt("Frontmatter",0,r,o))),!0}return!1}}]};export{v as frontmatter}; -//# sourceMappingURL=frontmatter-13eea6e4.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/mpl_axes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/mpl_axes.py deleted file mode 100644 index 51c8748758cb6da3052f0e8b05ceba427d77a3f6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/axes_grid1/mpl_axes.py +++ /dev/null @@ -1,128 +0,0 @@ -import matplotlib.axes as maxes -from matplotlib.artist import Artist -from matplotlib.axis import XAxis, YAxis - - -class SimpleChainedObjects: - def __init__(self, objects): - self._objects = objects - - def __getattr__(self, k): - _a = SimpleChainedObjects([getattr(a, k) for a in self._objects]) - return _a - - def __call__(self, *args, **kwargs): - for m in self._objects: - m(*args, **kwargs) - - -class Axes(maxes.Axes): - - class AxisDict(dict): - def __init__(self, axes): - self.axes = axes - super().__init__() - - def __getitem__(self, k): - if isinstance(k, tuple): - r = SimpleChainedObjects( - # super() within a list comprehension needs explicit args. - [super(Axes.AxisDict, self).__getitem__(k1) for k1 in k]) - return r - elif isinstance(k, slice): - if k.start is None and k.stop is None and k.step is None: - return SimpleChainedObjects(list(self.values())) - else: - raise ValueError("Unsupported slice") - else: - return dict.__getitem__(self, k) - - def __call__(self, *v, **kwargs): - return maxes.Axes.axis(self.axes, *v, **kwargs) - - @property - def axis(self): - return self._axislines - - def clear(self): - # docstring inherited - super().clear() - # Init axis artists. 
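# [Editor's note] Hedged usage sketch, added by the editor and not part of the diffed file. The AxisDict / SimpleAxisArtist machinery in this module is what backs the ``ax.axis[...]`` accessor of mpl_toolkits.axes_grid1, roughly as follows:
#
#     import matplotlib.pyplot as plt
#     from mpl_toolkits.axes_grid1 import mpl_axes
#
#     fig = plt.figure()
#     ax = fig.add_subplot(axes_class=mpl_axes.Axes)
#     ax.axis["top", "right"].set_visible(False)   # chained call via SimpleChainedObjects
#     ax.axis["bottom"].toggle(ticklabels=False)   # keep the ticks, hide their labels
#     ax.axis["left"].label.set_text("y value")
#     plt.show()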
- self._axislines = self.AxisDict(self) - self._axislines.update( - bottom=SimpleAxisArtist(self.xaxis, 1, self.spines["bottom"]), - top=SimpleAxisArtist(self.xaxis, 2, self.spines["top"]), - left=SimpleAxisArtist(self.yaxis, 1, self.spines["left"]), - right=SimpleAxisArtist(self.yaxis, 2, self.spines["right"])) - - -class SimpleAxisArtist(Artist): - def __init__(self, axis, axisnum, spine): - self._axis = axis - self._axisnum = axisnum - self.line = spine - - if isinstance(axis, XAxis): - self._axis_direction = ["bottom", "top"][axisnum-1] - elif isinstance(axis, YAxis): - self._axis_direction = ["left", "right"][axisnum-1] - else: - raise ValueError( - f"axis must be instance of XAxis or YAxis, but got {axis}") - super().__init__() - - @property - def major_ticks(self): - tickline = "tick%dline" % self._axisnum - return SimpleChainedObjects([getattr(tick, tickline) - for tick in self._axis.get_major_ticks()]) - - @property - def major_ticklabels(self): - label = "label%d" % self._axisnum - return SimpleChainedObjects([getattr(tick, label) - for tick in self._axis.get_major_ticks()]) - - @property - def label(self): - return self._axis.label - - def set_visible(self, b): - self.toggle(all=b) - self.line.set_visible(b) - self._axis.set_visible(True) - super().set_visible(b) - - def set_label(self, txt): - self._axis.set_label_text(txt) - - def toggle(self, all=None, ticks=None, ticklabels=None, label=None): - - if all: - _ticks, _ticklabels, _label = True, True, True - elif all is not None: - _ticks, _ticklabels, _label = False, False, False - else: - _ticks, _ticklabels, _label = None, None, None - - if ticks is not None: - _ticks = ticks - if ticklabels is not None: - _ticklabels = ticklabels - if label is not None: - _label = label - - if _ticks is not None: - tickparam = {f"tick{self._axisnum}On": _ticks} - self._axis.set_tick_params(**tickparam) - if _ticklabels is not None: - tickparam = {f"label{self._axisnum}On": _ticklabels} - self._axis.set_tick_params(**tickparam) - - if _label is not None: - pos = self._axis.get_label_position() - if (pos == self._axis_direction) and not _label: - self._axis.label.set_visible(False) - elif _label: - self._axis.label.set_visible(True) - self._axis.set_label_position(self._axis_direction) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_dtype.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_dtype.py deleted file mode 100644 index ff50f5199a5c8c4d5b5b01c9335eeb6b1d06d6b2..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/_dtype.py +++ /dev/null @@ -1,369 +0,0 @@ -""" -A place for code to be called from the implementation of np.dtype - -String handling is much easier to do correctly in python. 
-""" -import numpy as np - - -_kind_to_stem = { - 'u': 'uint', - 'i': 'int', - 'c': 'complex', - 'f': 'float', - 'b': 'bool', - 'V': 'void', - 'O': 'object', - 'M': 'datetime', - 'm': 'timedelta', - 'S': 'bytes', - 'U': 'str', -} - - -def _kind_name(dtype): - try: - return _kind_to_stem[dtype.kind] - except KeyError as e: - raise RuntimeError( - "internal dtype error, unknown kind {!r}" - .format(dtype.kind) - ) from None - - -def __str__(dtype): - if dtype.fields is not None: - return _struct_str(dtype, include_align=True) - elif dtype.subdtype: - return _subarray_str(dtype) - elif issubclass(dtype.type, np.flexible) or not dtype.isnative: - return dtype.str - else: - return dtype.name - - -def __repr__(dtype): - arg_str = _construction_repr(dtype, include_align=False) - if dtype.isalignedstruct: - arg_str = arg_str + ", align=True" - return "dtype({})".format(arg_str) - - -def _unpack_field(dtype, offset, title=None): - """ - Helper function to normalize the items in dtype.fields. - - Call as: - - dtype, offset, title = _unpack_field(*dtype.fields[name]) - """ - return dtype, offset, title - - -def _isunsized(dtype): - # PyDataType_ISUNSIZED - return dtype.itemsize == 0 - - -def _construction_repr(dtype, include_align=False, short=False): - """ - Creates a string repr of the dtype, excluding the 'dtype()' part - surrounding the object. This object may be a string, a list, or - a dict depending on the nature of the dtype. This - is the object passed as the first parameter to the dtype - constructor, and if no additional constructor parameters are - given, will reproduce the exact memory layout. - - Parameters - ---------- - short : bool - If true, this creates a shorter repr using 'kind' and 'itemsize', instead - of the longer type name. - - include_align : bool - If true, this includes the 'align=True' parameter - inside the struct dtype construction dict when needed. Use this flag - if you want a proper repr string without the 'dtype()' part around it. - - If false, this does not preserve the - 'align=True' parameter or sticky NPY_ALIGNED_STRUCT flag for - struct arrays like the regular repr does, because the 'align' - flag is not part of first dtype constructor parameter. This - mode is intended for a full 'repr', where the 'align=True' is - provided as the second parameter. - """ - if dtype.fields is not None: - return _struct_str(dtype, include_align=include_align) - elif dtype.subdtype: - return _subarray_str(dtype) - else: - return _scalar_str(dtype, short=short) - - -def _scalar_str(dtype, short): - byteorder = _byte_order_str(dtype) - - if dtype.type == np.bool_: - if short: - return "'?'" - else: - return "'bool'" - - elif dtype.type == np.object_: - # The object reference may be different sizes on different - # platforms, so it should never include the itemsize here. 
- return "'O'" - - elif dtype.type == np.bytes_: - if _isunsized(dtype): - return "'S'" - else: - return "'S%d'" % dtype.itemsize - - elif dtype.type == np.str_: - if _isunsized(dtype): - return "'%sU'" % byteorder - else: - return "'%sU%d'" % (byteorder, dtype.itemsize / 4) - - # unlike the other types, subclasses of void are preserved - but - # historically the repr does not actually reveal the subclass - elif issubclass(dtype.type, np.void): - if _isunsized(dtype): - return "'V'" - else: - return "'V%d'" % dtype.itemsize - - elif dtype.type == np.datetime64: - return "'%sM8%s'" % (byteorder, _datetime_metadata_str(dtype)) - - elif dtype.type == np.timedelta64: - return "'%sm8%s'" % (byteorder, _datetime_metadata_str(dtype)) - - elif np.issubdtype(dtype, np.number): - # Short repr with endianness, like '' """ - # hack to obtain the native and swapped byte order characters - swapped = np.dtype(int).newbyteorder('S') - native = swapped.newbyteorder('S') - - byteorder = dtype.byteorder - if byteorder == '=': - return native.byteorder - if byteorder == 'S': - # TODO: this path can never be reached - return swapped.byteorder - elif byteorder == '|': - return '' - else: - return byteorder - - -def _datetime_metadata_str(dtype): - # TODO: this duplicates the C metastr_to_unicode functionality - unit, count = np.datetime_data(dtype) - if unit == 'generic': - return '' - elif count == 1: - return '[{}]'.format(unit) - else: - return '[{}{}]'.format(count, unit) - - -def _struct_dict_str(dtype, includealignedflag): - # unpack the fields dictionary into ls - names = dtype.names - fld_dtypes = [] - offsets = [] - titles = [] - for name in names: - fld_dtype, offset, title = _unpack_field(*dtype.fields[name]) - fld_dtypes.append(fld_dtype) - offsets.append(offset) - titles.append(title) - - # Build up a string to make the dictionary - - if np.core.arrayprint._get_legacy_print_mode() <= 121: - colon = ":" - fieldsep = "," - else: - colon = ": " - fieldsep = ", " - - # First, the names - ret = "{'names'%s[" % colon - ret += fieldsep.join(repr(name) for name in names) - - # Second, the formats - ret += "], 'formats'%s[" % colon - ret += fieldsep.join( - _construction_repr(fld_dtype, short=True) for fld_dtype in fld_dtypes) - - # Third, the offsets - ret += "], 'offsets'%s[" % colon - ret += fieldsep.join("%d" % offset for offset in offsets) - - # Fourth, the titles - if any(title is not None for title in titles): - ret += "], 'titles'%s[" % colon - ret += fieldsep.join(repr(title) for title in titles) - - # Fifth, the itemsize - ret += "], 'itemsize'%s%d" % (colon, dtype.itemsize) - - if (includealignedflag and dtype.isalignedstruct): - # Finally, the aligned flag - ret += ", 'aligned'%sTrue}" % colon - else: - ret += "}" - - return ret - - -def _aligned_offset(offset, alignment): - # round up offset: - return - (-offset // alignment) * alignment - - -def _is_packed(dtype): - """ - Checks whether the structured data type in 'dtype' - has a simple layout, where all the fields are in order, - and follow each other with no alignment padding. - - When this returns true, the dtype can be reconstructed - from a list of the field names and dtypes with no additional - dtype parameters. - - Duplicates the C `is_dtype_struct_simple_unaligned_layout` function. 
- """ - align = dtype.isalignedstruct - max_alignment = 1 - total_offset = 0 - for name in dtype.names: - fld_dtype, fld_offset, title = _unpack_field(*dtype.fields[name]) - - if align: - total_offset = _aligned_offset(total_offset, fld_dtype.alignment) - max_alignment = max(max_alignment, fld_dtype.alignment) - - if fld_offset != total_offset: - return False - total_offset += fld_dtype.itemsize - - if align: - total_offset = _aligned_offset(total_offset, max_alignment) - - if total_offset != dtype.itemsize: - return False - return True - - -def _struct_list_str(dtype): - items = [] - for name in dtype.names: - fld_dtype, fld_offset, title = _unpack_field(*dtype.fields[name]) - - item = "(" - if title is not None: - item += "({!r}, {!r}), ".format(title, name) - else: - item += "{!r}, ".format(name) - # Special case subarray handling here - if fld_dtype.subdtype is not None: - base, shape = fld_dtype.subdtype - item += "{}, {}".format( - _construction_repr(base, short=True), - shape - ) - else: - item += _construction_repr(fld_dtype, short=True) - - item += ")" - items.append(item) - - return "[" + ", ".join(items) + "]" - - -def _struct_str(dtype, include_align): - # The list str representation can't include the 'align=' flag, - # so if it is requested and the struct has the aligned flag set, - # we must use the dict str instead. - if not (include_align and dtype.isalignedstruct) and _is_packed(dtype): - sub = _struct_list_str(dtype) - - else: - sub = _struct_dict_str(dtype, include_align) - - # If the data type isn't the default, void, show it - if dtype.type != np.void: - return "({t.__module__}.{t.__name__}, {f})".format(t=dtype.type, f=sub) - else: - return sub - - -def _subarray_str(dtype): - base, shape = dtype.subdtype - return "({}, {})".format( - _construction_repr(base, short=True), - shape - ) - - -def _name_includes_bit_suffix(dtype): - if dtype.type == np.object_: - # pointer size varies by system, best to omit it - return False - elif dtype.type == np.bool_: - # implied - return False - elif dtype.type is None: - return True - elif np.issubdtype(dtype, np.flexible) and _isunsized(dtype): - # unspecified - return False - else: - return True - - -def _name_get(dtype): - # provides dtype.name.__get__, documented as returning a "bit name" - - if dtype.isbuiltin == 2: - # user dtypes don't promise to do anything special - return dtype.type.__name__ - - if dtype.kind == '\x00': - name = type(dtype).__name__ - elif issubclass(dtype.type, np.void): - # historically, void subclasses preserve their name, eg `record64` - name = dtype.type.__name__ - else: - name = _kind_name(dtype) - - # append bit counts - if _name_includes_bit_suffix(dtype): - name += "{}".format(dtype.itemsize * 8) - - # append metadata to datetimes - if dtype.type in (np.datetime64, np.timedelta64): - name += _datetime_metadata_str(dtype) - - return name diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_machar.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_machar.py deleted file mode 100644 index 3a66ec51fd5860a546b917af3b83d21ac55540ad..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_machar.py +++ /dev/null @@ -1,30 +0,0 @@ -""" -Test machar. Given recent changes to hardcode type data, we might want to get -rid of both MachAr and this test at some point. 
- -""" -from numpy.core._machar import MachAr -import numpy.core.numerictypes as ntypes -from numpy import errstate, array - - -class TestMachAr: - def _run_machar_highprec(self): - # Instantiate MachAr instance with high enough precision to cause - # underflow - try: - hiprec = ntypes.float96 - MachAr(lambda v: array(v, hiprec)) - except AttributeError: - # Fixme, this needs to raise a 'skip' exception. - "Skipping test: no ntypes.float96 available on this platform." - - def test_underlow(self): - # Regression test for #759: - # instantiating MachAr for dtype = np.float96 raises spurious warning. - with errstate(all='raise'): - try: - self._run_machar_highprec() - except FloatingPointError as e: - msg = "Caught %s exception, should not have been raised." % e - raise AssertionError(msg) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/return_logical/foo90.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/return_logical/foo90.f90 deleted file mode 100644 index a4526468e3719140f0ed7d50a5f3a31d78d1d2de..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/return_logical/foo90.f90 +++ /dev/null @@ -1,59 +0,0 @@ -module f90_return_logical - contains - function t0(value) - logical :: value - logical :: t0 - t0 = value - end function t0 - function t1(value) - logical(kind=1) :: value - logical(kind=1) :: t1 - t1 = value - end function t1 - function t2(value) - logical(kind=2) :: value - logical(kind=2) :: t2 - t2 = value - end function t2 - function t4(value) - logical(kind=4) :: value - logical(kind=4) :: t4 - t4 = value - end function t4 - function t8(value) - logical(kind=8) :: value - logical(kind=8) :: t8 - t8 = value - end function t8 - - subroutine s0(t0,value) - logical :: value - logical :: t0 -!f2py intent(out) t0 - t0 = value - end subroutine s0 - subroutine s1(t1,value) - logical(kind=1) :: value - logical(kind=1) :: t1 -!f2py intent(out) t1 - t1 = value - end subroutine s1 - subroutine s2(t2,value) - logical(kind=2) :: value - logical(kind=2) :: t2 -!f2py intent(out) t2 - t2 = value - end subroutine s2 - subroutine s4(t4,value) - logical(kind=4) :: value - logical(kind=4) :: t4 -!f2py intent(out) t4 - t4 = value - end subroutine s4 - subroutine s8(t8,value) - logical(kind=8) :: value - logical(kind=8) :: t8 -!f2py intent(out) t8 - t8 = value - end subroutine s8 -end module f90_return_logical diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_offsets_properties.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_offsets_properties.py deleted file mode 100644 index 1b4fa9292c4031c8c2acec0e1f34fd871bcb50bd..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_offsets_properties.py +++ /dev/null @@ -1,60 +0,0 @@ -""" -Behavioral based tests for offsets and date_range. - -This file is adapted from https://github.com/pandas-dev/pandas/pull/18761 - -which was more ambitious but less idiomatic in its use of Hypothesis. - -You may wish to consult the previous version for inspiration on further -tests, or when trying to pin down the bugs exposed by the tests below. 
-""" -from hypothesis import ( - assume, - given, -) -import pytest -import pytz - -import pandas as pd -from pandas._testing._hypothesis import ( - DATETIME_JAN_1_1900_OPTIONAL_TZ, - YQM_OFFSET, -) - -# ---------------------------------------------------------------- -# Offset-specific behaviour tests - - -@pytest.mark.arm_slow -@given(DATETIME_JAN_1_1900_OPTIONAL_TZ, YQM_OFFSET) -def test_on_offset_implementations(dt, offset): - assume(not offset.normalize) - # check that the class-specific implementations of is_on_offset match - # the general case definition: - # (dt + offset) - offset == dt - try: - compare = (dt + offset) - offset - except (pytz.NonExistentTimeError, pytz.AmbiguousTimeError): - # When dt + offset does not exist or is DST-ambiguous, assume(False) to - # indicate to hypothesis that this is not a valid test case - # DST-ambiguous example (GH41906): - # dt = datetime.datetime(1900, 1, 1, tzinfo=pytz.timezone('Africa/Kinshasa')) - # offset = MonthBegin(66) - assume(False) - - assert offset.is_on_offset(dt) == (compare == dt) - - -@given(YQM_OFFSET) -def test_shift_across_dst(offset): - # GH#18319 check that 1) timezone is correctly normalized and - # 2) that hour is not incorrectly changed by this normalization - assume(not offset.normalize) - - # Note that dti includes a transition across DST boundary - dti = pd.date_range( - start="2017-10-30 12:00:00", end="2017-11-06", freq="D", tz="US/Eastern" - ) - assert (dti.hour == 12).all() # we haven't screwed up yet - - res = dti + offset - assert (res.hour == 12).all() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/vcs/bazaar.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/vcs/bazaar.py deleted file mode 100644 index a7b16e2e0528b9852b517171f0afbd578104f13b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/vcs/bazaar.py +++ /dev/null @@ -1,101 +0,0 @@ -import logging -from typing import List, Optional, Tuple - -from pip._internal.utils.misc import HiddenText, display_path -from pip._internal.utils.subprocess import make_command -from pip._internal.utils.urls import path_to_url -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RevOptions, - VersionControl, - vcs, -) - -logger = logging.getLogger(__name__) - - -class Bazaar(VersionControl): - name = "bzr" - dirname = ".bzr" - repo_name = "branch" - schemes = ( - "bzr+http", - "bzr+https", - "bzr+ssh", - "bzr+sftp", - "bzr+ftp", - "bzr+lp", - "bzr+file", - ) - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return ["-r", rev] - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info( - "Checking out %s%s to %s", - url, - rev_display, - display_path(dest), - ) - if verbosity <= 0: - flag = "--quiet" - elif verbosity == 1: - flag = "" - else: - flag = f"-{'v'*verbosity}" - cmd_args = make_command("branch", flag, rev_options.to_args(), url, dest) - self.run_command(cmd_args) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - self.run_command(make_command("switch", url), cwd=dest) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - cmd_args = make_command("pull", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, 
Optional[str], AuthInfo]: - # hotfix the URL scheme after removing bzr+ from bzr+ssh:// readd it - url, rev, user_pass = super().get_url_rev_and_auth(url) - if url.startswith("ssh://"): - url = "bzr+" + url - return url, rev, user_pass - - @classmethod - def get_remote_url(cls, location: str) -> str: - urls = cls.run_command( - ["info"], show_stdout=False, stdout_only=True, cwd=location - ) - for line in urls.splitlines(): - line = line.strip() - for x in ("checkout of branch: ", "parent branch: "): - if line.startswith(x): - repo = line.split(x)[1] - if cls._is_local_repository(repo): - return path_to_url(repo) - return repo - raise RemoteNotFoundError - - @classmethod - def get_revision(cls, location: str) -> str: - revision = cls.run_command( - ["revno"], - show_stdout=False, - stdout_only=True, - cwd=location, - ) - return revision.splitlines()[-1] - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """Always assume the versions don't match""" - return False - - -vcs.register(Bazaar) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/compat.py deleted file mode 100644 index e594106956f4ed5f0c2394eb50a02c304e3ba167..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distlib/compat.py +++ /dev/null @@ -1,1122 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013-2017 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -from __future__ import absolute_import - -import os -import re -import sys - -try: - import ssl -except ImportError: # pragma: no cover - ssl = None - -if sys.version_info[0] < 3: # pragma: no cover - from StringIO import StringIO - string_types = basestring, - text_type = unicode - from types import FileType as file_type - import __builtin__ as builtins - import ConfigParser as configparser - from ._backport import shutil - from urlparse import urlparse, urlunparse, urljoin, urlsplit, urlunsplit - from urllib import (urlretrieve, quote as _quote, unquote, url2pathname, - pathname2url, ContentTooShortError, splittype) - - def quote(s): - if isinstance(s, unicode): - s = s.encode('utf-8') - return _quote(s) - - import urllib2 - from urllib2 import (Request, urlopen, URLError, HTTPError, - HTTPBasicAuthHandler, HTTPPasswordMgr, - HTTPHandler, HTTPRedirectHandler, - build_opener) - if ssl: - from urllib2 import HTTPSHandler - import httplib - import xmlrpclib - import Queue as queue - from HTMLParser import HTMLParser - import htmlentitydefs - raw_input = raw_input - from itertools import ifilter as filter - from itertools import ifilterfalse as filterfalse - - # Leaving this around for now, in case it needs resurrecting in some way - # _userprog = None - # def splituser(host): - # """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'.""" - # global _userprog - # if _userprog is None: - # import re - # _userprog = re.compile('^(.*)@(.*)$') - - # match = _userprog.match(host) - # if match: return match.group(1, 2) - # return None, host - -else: # pragma: no cover - from io import StringIO - string_types = str, - text_type = str - from io import TextIOWrapper as file_type - import builtins - import configparser - import shutil - from urllib.parse import (urlparse, urlunparse, urljoin, quote, - unquote, urlsplit, urlunsplit, 
splittype) - from urllib.request import (urlopen, urlretrieve, Request, url2pathname, - pathname2url, - HTTPBasicAuthHandler, HTTPPasswordMgr, - HTTPHandler, HTTPRedirectHandler, - build_opener) - if ssl: - from urllib.request import HTTPSHandler - from urllib.error import HTTPError, URLError, ContentTooShortError - import http.client as httplib - import urllib.request as urllib2 - import xmlrpc.client as xmlrpclib - import queue - from html.parser import HTMLParser - import html.entities as htmlentitydefs - raw_input = input - from itertools import filterfalse - filter = filter - - -try: - from ssl import match_hostname, CertificateError -except ImportError: # pragma: no cover - class CertificateError(ValueError): - pass - - - def _dnsname_match(dn, hostname, max_wildcards=1): - """Matching according to RFC 6125, section 6.4.3 - - http://tools.ietf.org/html/rfc6125#section-6.4.3 - """ - pats = [] - if not dn: - return False - - parts = dn.split('.') - leftmost, remainder = parts[0], parts[1:] - - wildcards = leftmost.count('*') - if wildcards > max_wildcards: - # Issue #17980: avoid denials of service by refusing more - # than one wildcard per fragment. A survey of established - # policy among SSL implementations showed it to be a - # reasonable choice. - raise CertificateError( - "too many wildcards in certificate DNS name: " + repr(dn)) - - # speed up common case w/o wildcards - if not wildcards: - return dn.lower() == hostname.lower() - - # RFC 6125, section 6.4.3, subitem 1. - # The client SHOULD NOT attempt to match a presented identifier in which - # the wildcard character comprises a label other than the left-most label. - if leftmost == '*': - # When '*' is a fragment by itself, it matches a non-empty dotless - # fragment. - pats.append('[^.]+') - elif leftmost.startswith('xn--') or hostname.startswith('xn--'): - # RFC 6125, section 6.4.3, subitem 3. - # The client SHOULD NOT attempt to match a presented identifier - # where the wildcard character is embedded within an A-label or - # U-label of an internationalized domain name. - pats.append(re.escape(leftmost)) - else: - # Otherwise, '*' matches any dotless string, e.g. www* - pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) - - # add the remaining fragments, ignore any wildcards - for frag in remainder: - pats.append(re.escape(frag)) - - pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) - return pat.match(hostname) - - - def match_hostname(cert, hostname): - """Verify that *cert* (in decoded format as returned by - SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 - rules are followed, but IP addresses are not accepted for *hostname*. - - CertificateError is raised on failure. On success, the function - returns nothing. - """ - if not cert: - raise ValueError("empty or no certificate, match_hostname needs a " - "SSL socket or SSL context with either " - "CERT_OPTIONAL or CERT_REQUIRED") - dnsnames = [] - san = cert.get('subjectAltName', ()) - for key, value in san: - if key == 'DNS': - if _dnsname_match(value, hostname): - return - dnsnames.append(value) - if not dnsnames: - # The subject is only checked when there is no dNSName entry - # in subjectAltName - for sub in cert.get('subject', ()): - for key, value in sub: - # XXX according to RFC 2818, the most specific Common Name - # must be used. 
- if key == 'commonName': - if _dnsname_match(value, hostname): - return - dnsnames.append(value) - if len(dnsnames) > 1: - raise CertificateError("hostname %r " - "doesn't match either of %s" - % (hostname, ', '.join(map(repr, dnsnames)))) - elif len(dnsnames) == 1: - raise CertificateError("hostname %r " - "doesn't match %r" - % (hostname, dnsnames[0])) - else: - raise CertificateError("no appropriate commonName or " - "subjectAltName fields were found") - - -try: - from types import SimpleNamespace as Container -except ImportError: # pragma: no cover - class Container(object): - """ - A generic container for when multiple values need to be returned - """ - def __init__(self, **kwargs): - self.__dict__.update(kwargs) - - -try: - from shutil import which -except ImportError: # pragma: no cover - # Implementation from Python 3.3 - def which(cmd, mode=os.F_OK | os.X_OK, path=None): - """Given a command, mode, and a PATH string, return the path which - conforms to the given mode on the PATH, or None if there is no such - file. - - `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result - of os.environ.get("PATH"), or can be overridden with a custom search - path. - - """ - # Check that a given file can be accessed with the correct mode. - # Additionally check that `file` is not a directory, as on Windows - # directories pass the os.access check. - def _access_check(fn, mode): - return (os.path.exists(fn) and os.access(fn, mode) - and not os.path.isdir(fn)) - - # If we're given a path with a directory part, look it up directly rather - # than referring to PATH directories. This includes checking relative to the - # current directory, e.g. ./script - if os.path.dirname(cmd): - if _access_check(cmd, mode): - return cmd - return None - - if path is None: - path = os.environ.get("PATH", os.defpath) - if not path: - return None - path = path.split(os.pathsep) - - if sys.platform == "win32": - # The current directory takes precedence on Windows. - if not os.curdir in path: - path.insert(0, os.curdir) - - # PATHEXT is necessary to check on Windows. - pathext = os.environ.get("PATHEXT", "").split(os.pathsep) - # See if the given file matches any of the expected path extensions. - # This will allow us to short circuit when given "python.exe". - # If it does match, only test that one, otherwise we have to try - # others. - if any(cmd.lower().endswith(ext.lower()) for ext in pathext): - files = [cmd] - else: - files = [cmd + ext for ext in pathext] - else: - # On other platforms you don't have things like PATHEXT to tell you - # what file suffixes are executable, so just pass on cmd as-is. 
- files = [cmd] - - seen = set() - for dir in path: - normdir = os.path.normcase(dir) - if not normdir in seen: - seen.add(normdir) - for thefile in files: - name = os.path.join(dir, thefile) - if _access_check(name, mode): - return name - return None - - -# ZipFile is a context manager in 2.7, but not in 2.6 - -from zipfile import ZipFile as BaseZipFile - -if hasattr(BaseZipFile, '__enter__'): # pragma: no cover - ZipFile = BaseZipFile -else: # pragma: no cover - from zipfile import ZipExtFile as BaseZipExtFile - - class ZipExtFile(BaseZipExtFile): - def __init__(self, base): - self.__dict__.update(base.__dict__) - - def __enter__(self): - return self - - def __exit__(self, *exc_info): - self.close() - # return None, so if an exception occurred, it will propagate - - class ZipFile(BaseZipFile): - def __enter__(self): - return self - - def __exit__(self, *exc_info): - self.close() - # return None, so if an exception occurred, it will propagate - - def open(self, *args, **kwargs): - base = BaseZipFile.open(self, *args, **kwargs) - return ZipExtFile(base) - -try: - from platform import python_implementation -except ImportError: # pragma: no cover - def python_implementation(): - """Return a string identifying the Python implementation.""" - if 'PyPy' in sys.version: - return 'PyPy' - if os.name == 'java': - return 'Jython' - if sys.version.startswith('IronPython'): - return 'IronPython' - return 'CPython' - -try: - import sysconfig -except ImportError: # pragma: no cover - from ._backport import sysconfig - -try: - callable = callable -except NameError: # pragma: no cover - from collections.abc import Callable - - def callable(obj): - return isinstance(obj, Callable) - - -try: - fsencode = os.fsencode - fsdecode = os.fsdecode -except AttributeError: # pragma: no cover - # Issue #99: on some systems (e.g. containerised), - # sys.getfilesystemencoding() returns None, and we need a real value, - # so fall back to utf-8. From the CPython 2.7 docs relating to Unix and - # sys.getfilesystemencoding(): the return value is "the user’s preference - # according to the result of nl_langinfo(CODESET), or None if the - # nl_langinfo(CODESET) failed." - _fsencoding = sys.getfilesystemencoding() or 'utf-8' - if _fsencoding == 'mbcs': - _fserrors = 'strict' - else: - _fserrors = 'surrogateescape' - - def fsencode(filename): - if isinstance(filename, bytes): - return filename - elif isinstance(filename, text_type): - return filename.encode(_fsencoding, _fserrors) - else: - raise TypeError("expect bytes or str, not %s" % - type(filename).__name__) - - def fsdecode(filename): - if isinstance(filename, text_type): - return filename - elif isinstance(filename, bytes): - return filename.decode(_fsencoding, _fserrors) - else: - raise TypeError("expect bytes or str, not %s" % - type(filename).__name__) - -try: - from tokenize import detect_encoding -except ImportError: # pragma: no cover - from codecs import BOM_UTF8, lookup - import re - - cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)") - - def _get_normal_name(orig_enc): - """Imitates get_normal_name in tokenizer.c.""" - # Only care about the first 12 characters. 
- enc = orig_enc[:12].lower().replace("_", "-") - if enc == "utf-8" or enc.startswith("utf-8-"): - return "utf-8" - if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \ - enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")): - return "iso-8859-1" - return orig_enc - - def detect_encoding(readline): - """ - The detect_encoding() function is used to detect the encoding that should - be used to decode a Python source file. It requires one argument, readline, - in the same way as the tokenize() generator. - - It will call readline a maximum of twice, and return the encoding used - (as a string) and a list of any lines (left as bytes) it has read in. - - It detects the encoding from the presence of a utf-8 bom or an encoding - cookie as specified in pep-0263. If both a bom and a cookie are present, - but disagree, a SyntaxError will be raised. If the encoding cookie is an - invalid charset, raise a SyntaxError. Note that if a utf-8 bom is found, - 'utf-8-sig' is returned. - - If no encoding is specified, then the default of 'utf-8' will be returned. - """ - try: - filename = readline.__self__.name - except AttributeError: - filename = None - bom_found = False - encoding = None - default = 'utf-8' - def read_or_stop(): - try: - return readline() - except StopIteration: - return b'' - - def find_cookie(line): - try: - # Decode as UTF-8. Either the line is an encoding declaration, - # in which case it should be pure ASCII, or it must be UTF-8 - # per default encoding. - line_string = line.decode('utf-8') - except UnicodeDecodeError: - msg = "invalid or missing encoding declaration" - if filename is not None: - msg = '{} for {!r}'.format(msg, filename) - raise SyntaxError(msg) - - matches = cookie_re.findall(line_string) - if not matches: - return None - encoding = _get_normal_name(matches[0]) - try: - codec = lookup(encoding) - except LookupError: - # This behaviour mimics the Python interpreter - if filename is None: - msg = "unknown encoding: " + encoding - else: - msg = "unknown encoding for {!r}: {}".format(filename, - encoding) - raise SyntaxError(msg) - - if bom_found: - if codec.name != 'utf-8': - # This behaviour mimics the Python interpreter - if filename is None: - msg = 'encoding problem: utf-8' - else: - msg = 'encoding problem for {!r}: utf-8'.format(filename) - raise SyntaxError(msg) - encoding += '-sig' - return encoding - - first = read_or_stop() - if first.startswith(BOM_UTF8): - bom_found = True - first = first[3:] - default = 'utf-8-sig' - if not first: - return default, [] - - encoding = find_cookie(first) - if encoding: - return encoding, [first] - - second = read_or_stop() - if not second: - return default, [first] - - encoding = find_cookie(second) - if encoding: - return encoding, [first, second] - - return default, [first, second] - -# For converting & <-> & etc. 
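# [Editor's note] Illustrative sketch, added by the editor and not part of the diffed file. The detect_encoding() backport above mirrors tokenize.detect_encoding(): given a readline callable it returns the declared source encoding plus the raw lines it consumed while looking for a PEP 263 cookie or a UTF-8 BOM.
import io

_src = b"# -*- coding: latin-1 -*-\nx = 1\n"
_enc, _consumed = detect_encoding(io.BytesIO(_src).readline)
# _enc == "iso-8859-1" ("latin-1" is normalised by _get_normal_name) and
# _consumed == [b"# -*- coding: latin-1 -*-\n"]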
-try: - from html import escape -except ImportError: - from cgi import escape -if sys.version_info[:2] < (3, 4): - unescape = HTMLParser().unescape -else: - from html import unescape - -try: - from collections import ChainMap -except ImportError: # pragma: no cover - from collections import MutableMapping - - try: - from reprlib import recursive_repr as _recursive_repr - except ImportError: - def _recursive_repr(fillvalue='...'): - ''' - Decorator to make a repr function return fillvalue for a recursive - call - ''' - - def decorating_function(user_function): - repr_running = set() - - def wrapper(self): - key = id(self), get_ident() - if key in repr_running: - return fillvalue - repr_running.add(key) - try: - result = user_function(self) - finally: - repr_running.discard(key) - return result - - # Can't use functools.wraps() here because of bootstrap issues - wrapper.__module__ = getattr(user_function, '__module__') - wrapper.__doc__ = getattr(user_function, '__doc__') - wrapper.__name__ = getattr(user_function, '__name__') - wrapper.__annotations__ = getattr(user_function, '__annotations__', {}) - return wrapper - - return decorating_function - - class ChainMap(MutableMapping): - ''' A ChainMap groups multiple dicts (or other mappings) together - to create a single, updateable view. - - The underlying mappings are stored in a list. That list is public and can - accessed or updated using the *maps* attribute. There is no other state. - - Lookups search the underlying mappings successively until a key is found. - In contrast, writes, updates, and deletions only operate on the first - mapping. - - ''' - - def __init__(self, *maps): - '''Initialize a ChainMap by setting *maps* to the given mappings. - If no mappings are provided, a single empty dictionary is used. - - ''' - self.maps = list(maps) or [{}] # always at least one map - - def __missing__(self, key): - raise KeyError(key) - - def __getitem__(self, key): - for mapping in self.maps: - try: - return mapping[key] # can't use 'key in mapping' with defaultdict - except KeyError: - pass - return self.__missing__(key) # support subclasses that define __missing__ - - def get(self, key, default=None): - return self[key] if key in self else default - - def __len__(self): - return len(set().union(*self.maps)) # reuses stored hash values if possible - - def __iter__(self): - return iter(set().union(*self.maps)) - - def __contains__(self, key): - return any(key in m for m in self.maps) - - def __bool__(self): - return any(self.maps) - - @_recursive_repr() - def __repr__(self): - return '{0.__class__.__name__}({1})'.format( - self, ', '.join(map(repr, self.maps))) - - @classmethod - def fromkeys(cls, iterable, *args): - 'Create a ChainMap with a single dict created from the iterable.' - return cls(dict.fromkeys(iterable, *args)) - - def copy(self): - 'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]' - return self.__class__(self.maps[0].copy(), *self.maps[1:]) - - __copy__ = copy - - def new_child(self): # like Django's Context.push() - 'New ChainMap with a new dict followed by all previous maps.' - return self.__class__({}, *self.maps) - - @property - def parents(self): # like Django's Context.pop() - 'New ChainMap from maps[1:].' 
- return self.__class__(*self.maps[1:]) - - def __setitem__(self, key, value): - self.maps[0][key] = value - - def __delitem__(self, key): - try: - del self.maps[0][key] - except KeyError: - raise KeyError('Key not found in the first mapping: {!r}'.format(key)) - - def popitem(self): - 'Remove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.' - try: - return self.maps[0].popitem() - except KeyError: - raise KeyError('No keys found in the first mapping.') - - def pop(self, key, *args): - 'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].' - try: - return self.maps[0].pop(key, *args) - except KeyError: - raise KeyError('Key not found in the first mapping: {!r}'.format(key)) - - def clear(self): - 'Clear maps[0], leaving maps[1:] intact.' - self.maps[0].clear() - -try: - from importlib.util import cache_from_source # Python >= 3.4 -except ImportError: # pragma: no cover - try: - from imp import cache_from_source - except ImportError: # pragma: no cover - def cache_from_source(path, debug_override=None): - assert path.endswith('.py') - if debug_override is None: - debug_override = __debug__ - if debug_override: - suffix = 'c' - else: - suffix = 'o' - return path + suffix - -try: - from collections import OrderedDict -except ImportError: # pragma: no cover -## {{{ http://code.activestate.com/recipes/576693/ (r9) -# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy. -# Passes Python2.7's test suite and incorporates all the latest updates. - try: - from thread import get_ident as _get_ident - except ImportError: - from dummy_thread import get_ident as _get_ident - - try: - from _abcoll import KeysView, ValuesView, ItemsView - except ImportError: - pass - - - class OrderedDict(dict): - 'Dictionary that remembers insertion order' - # An inherited dict maps keys to values. - # The inherited dict provides __getitem__, __len__, __contains__, and get. - # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. - - # The internal self.__map dictionary maps keys to links in a doubly linked list. - # The circular doubly linked list starts and ends with a sentinel element. - # The sentinel element never gets deleted (this simplifies the algorithm). - # Each link is stored as a list of length three: [PREV, NEXT, KEY]. - - def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. - - ''' - if len(args) > 1: - raise TypeError('expected at most 1 arguments, got %d' % len(args)) - try: - self.__root - except AttributeError: - self.__root = root = [] # sentinel node - root[:] = [root, root, None] - self.__map = {} - self.__update(*args, **kwds) - - def __setitem__(self, key, value, dict_setitem=dict.__setitem__): - 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. - if key not in self: - root = self.__root - last = root[0] - last[1] = root[0] = self.__map[key] = [last, root, key] - dict_setitem(self, key, value) - - def __delitem__(self, key, dict_delitem=dict.__delitem__): - 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. 
- dict_delitem(self, key) - link_prev, link_next, key = self.__map.pop(key) - link_prev[1] = link_next - link_next[0] = link_prev - - def __iter__(self): - 'od.__iter__() <==> iter(od)' - root = self.__root - curr = root[1] - while curr is not root: - yield curr[2] - curr = curr[1] - - def __reversed__(self): - 'od.__reversed__() <==> reversed(od)' - root = self.__root - curr = root[0] - while curr is not root: - yield curr[2] - curr = curr[0] - - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - root = self.__root - root[:] = [root, root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) - - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - root = self.__root - if last: - link = root[0] - link_prev = link[0] - link_prev[1] = root - root[0] = link_prev - else: - link = root[1] - link_next = link[1] - root[1] = link_next - link_next[0] = root - key = link[2] - del self.__map[key] - value = dict.pop(self, key) - return key, value - - # -- the following methods do not depend on the internal structure -- - - def keys(self): - 'od.keys() -> list of keys in od' - return list(self) - - def values(self): - 'od.values() -> list of values in od' - return [self[key] for key in self] - - def items(self): - 'od.items() -> list of (key, value) pairs in od' - return [(key, self[key]) for key in self] - - def iterkeys(self): - 'od.iterkeys() -> an iterator over the keys in od' - return iter(self) - - def itervalues(self): - 'od.itervalues -> an iterator over the values in od' - for k in self: - yield self[k] - - def iteritems(self): - 'od.iteritems -> an iterator over the (key, value) items in od' - for k in self: - yield (k, self[k]) - - def update(*args, **kwds): - '''od.update(E, **F) -> None. Update od from dict/iterable E and F. - - If E is a dict instance, does: for k in E: od[k] = E[k] - If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] - Or if E is an iterable of items, does: for k, v in E: od[k] = v - In either case, this is followed by: for k, v in F.items(): od[k] = v - - ''' - if len(args) > 2: - raise TypeError('update() takes at most 2 positional ' - 'arguments (%d given)' % (len(args),)) - elif not args: - raise TypeError('update() takes at least 1 argument (0 given)') - self = args[0] - # Make progressively weaker assumptions about "other" - other = () - if len(args) == 2: - other = args[1] - if isinstance(other, dict): - for key in other: - self[key] = other[key] - elif hasattr(other, 'keys'): - for key in other.keys(): - self[key] = other[key] - else: - for key, value in other: - self[key] = value - for key, value in kwds.items(): - self[key] = value - - __update = update # let subclasses override update without breaking __init__ - - __marker = object() - - def pop(self, key, default=__marker): - '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. - If key is not found, d is returned if given, otherwise KeyError is raised. 
- - ''' - if key in self: - result = self[key] - del self[key] - return result - if default is self.__marker: - raise KeyError(key) - return default - - def setdefault(self, key, default=None): - 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' - if key in self: - return self[key] - self[key] = default - return default - - def __repr__(self, _repr_running=None): - 'od.__repr__() <==> repr(od)' - if not _repr_running: _repr_running = {} - call_key = id(self), _get_ident() - if call_key in _repr_running: - return '...' - _repr_running[call_key] = 1 - try: - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - finally: - del _repr_running[call_key] - - def __reduce__(self): - 'Return state information for pickling' - items = [[k, self[k]] for k in self] - inst_dict = vars(self).copy() - for k in vars(OrderedDict()): - inst_dict.pop(k, None) - if inst_dict: - return (self.__class__, (items,), inst_dict) - return self.__class__, (items,) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. - - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and self.items() == other.items() - return dict.__eq__(self, other) - - def __ne__(self, other): - return not self == other - - # -- the following methods are only used in Python 2.7 -- - - def viewkeys(self): - "od.viewkeys() -> a set-like object providing a view on od's keys" - return KeysView(self) - - def viewvalues(self): - "od.viewvalues() -> an object providing a view on od's values" - return ValuesView(self) - - def viewitems(self): - "od.viewitems() -> a set-like object providing a view on od's items" - return ItemsView(self) - -try: - from logging.config import BaseConfigurator, valid_ident -except ImportError: # pragma: no cover - IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I) - - - def valid_ident(s): - m = IDENTIFIER.match(s) - if not m: - raise ValueError('Not a valid Python identifier: %r' % s) - return True - - - # The ConvertingXXX classes are wrappers around standard Python containers, - # and they serve to convert any suitable values in the container. The - # conversion converts base dicts, lists and tuples to their wrapped - # equivalents, whereas strings which match a conversion format are converted - # appropriately. - # - # Each wrapper should have a configurator attribute holding the actual - # configurator to use for conversion. 
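# [Editor's note] Hedged sketch, added by the editor and not part of the diffed file. These fallbacks mirror logging.config's converting containers: string values such as "ext://..." and "cfg://..." are resolved lazily on access, along these lines:
#
#     cfg = BaseConfigurator({"handlers": {"stream": "ext://sys.stderr"},
#                             "alias": "cfg://handlers.stream"})
#     cfg.config["handlers"]["stream"]       # -> the sys.stderr object (ext:// import)
#     cfg.convert("cfg://handlers.stream")   # -> the same object (cfg:// lookup)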
- - class ConvertingDict(dict): - """A converting dictionary wrapper.""" - - def __getitem__(self, key): - value = dict.__getitem__(self, key) - result = self.configurator.convert(value) - #If the converted value is different, save for next time - if value is not result: - self[key] = result - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - def get(self, key, default=None): - value = dict.get(self, key, default) - result = self.configurator.convert(value) - #If the converted value is different, save for next time - if value is not result: - self[key] = result - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - def pop(self, key, default=None): - value = dict.pop(self, key, default) - result = self.configurator.convert(value) - if value is not result: - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - class ConvertingList(list): - """A converting list wrapper.""" - def __getitem__(self, key): - value = list.__getitem__(self, key) - result = self.configurator.convert(value) - #If the converted value is different, save for next time - if value is not result: - self[key] = result - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - def pop(self, idx=-1): - value = list.pop(self, idx) - result = self.configurator.convert(value) - if value is not result: - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - return result - - class ConvertingTuple(tuple): - """A converting tuple wrapper.""" - def __getitem__(self, key): - value = tuple.__getitem__(self, key) - result = self.configurator.convert(value) - if value is not result: - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - class BaseConfigurator(object): - """ - The configurator base class which defines some useful defaults. - """ - - CONVERT_PATTERN = re.compile(r'^(?P[a-z]+)://(?P.*)$') - - WORD_PATTERN = re.compile(r'^\s*(\w+)\s*') - DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*') - INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*') - DIGIT_PATTERN = re.compile(r'^\d+$') - - value_converters = { - 'ext' : 'ext_convert', - 'cfg' : 'cfg_convert', - } - - # We might want to use a different one, e.g. importlib - importer = staticmethod(__import__) - - def __init__(self, config): - self.config = ConvertingDict(config) - self.config.configurator = self - - def resolve(self, s): - """ - Resolve strings to objects using standard import and attribute - syntax. - """ - name = s.split('.') - used = name.pop(0) - try: - found = self.importer(used) - for frag in name: - used += '.' 
+ frag - try: - found = getattr(found, frag) - except AttributeError: - self.importer(used) - found = getattr(found, frag) - return found - except ImportError: - e, tb = sys.exc_info()[1:] - v = ValueError('Cannot resolve %r: %s' % (s, e)) - v.__cause__, v.__traceback__ = e, tb - raise v - - def ext_convert(self, value): - """Default converter for the ext:// protocol.""" - return self.resolve(value) - - def cfg_convert(self, value): - """Default converter for the cfg:// protocol.""" - rest = value - m = self.WORD_PATTERN.match(rest) - if m is None: - raise ValueError("Unable to convert %r" % value) - else: - rest = rest[m.end():] - d = self.config[m.groups()[0]] - #print d, rest - while rest: - m = self.DOT_PATTERN.match(rest) - if m: - d = d[m.groups()[0]] - else: - m = self.INDEX_PATTERN.match(rest) - if m: - idx = m.groups()[0] - if not self.DIGIT_PATTERN.match(idx): - d = d[idx] - else: - try: - n = int(idx) # try as number first (most likely) - d = d[n] - except TypeError: - d = d[idx] - if m: - rest = rest[m.end():] - else: - raise ValueError('Unable to convert ' - '%r at %r' % (value, rest)) - #rest should be empty - return d - - def convert(self, value): - """ - Convert values to an appropriate type. dicts, lists and tuples are - replaced by their converting alternatives. Strings are checked to - see if they have a conversion format and are converted if they do. - """ - if not isinstance(value, ConvertingDict) and isinstance(value, dict): - value = ConvertingDict(value) - value.configurator = self - elif not isinstance(value, ConvertingList) and isinstance(value, list): - value = ConvertingList(value) - value.configurator = self - elif not isinstance(value, ConvertingTuple) and\ - isinstance(value, tuple): - value = ConvertingTuple(value) - value.configurator = self - elif isinstance(value, string_types): - m = self.CONVERT_PATTERN.match(value) - if m: - d = m.groupdict() - prefix = d['prefix'] - converter = self.value_converters.get(prefix, None) - if converter: - suffix = d['suffix'] - converter = getattr(self, converter) - value = converter(suffix) - return value - - def configure_custom(self, config): - """Configure an object with a user-supplied factory.""" - c = config.pop('()') - if not callable(c): - c = self.resolve(c) - props = config.pop('.', None) - # Check for valid identifiers - kwargs = dict([(k, config[k]) for k in config if valid_ident(k)]) - result = c(**kwargs) - if props: - for name, value in props.items(): - setattr(result, name, value) - return result - - def as_tuple(self, value): - """Utility function which converts lists to tuples.""" - if isinstance(value, list): - value = tuple(value) - return value diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/sandbox/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/sandbox/__init__.py deleted file mode 100644 index 0abda1cb427ed8f070a7f02e638f35191861013c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/toolz/sandbox/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .core import EqualityHashKey, unzip -from .parallel import fold diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/contrib/socks.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/contrib/socks.py deleted file mode 100644 index 5e552ddaed36d698bee9c086a590af3807ba1972..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/contrib/socks.py +++ /dev/null @@ -1,233 +0,0 @@ -""" -This module contains provisional support for SOCKS proxies from within -urllib3. This module supports SOCKS4, SOCKS4A (an extension of SOCKS4), and -SOCKS5. To enable its functionality, either install PySocks or install this -module with the ``socks`` extra. - -The SOCKS implementation supports the full range of urllib3 features. It also -supports the following SOCKS features: - -- SOCKS4A (``proxy_url='socks4a://...``) -- SOCKS4 (``proxy_url='socks4://...``) -- SOCKS5 with remote DNS (``proxy_url='socks5h://...``) -- SOCKS5 with local DNS (``proxy_url='socks5://...``) -- Usernames and passwords for the SOCKS proxy - -.. note:: - It is recommended to use ``socks5h://`` or ``socks4a://`` schemes in - your ``proxy_url`` to ensure that DNS resolution is done from the remote - server instead of client-side when connecting to a domain name. - -SOCKS4 supports IPv4 and domain names with the SOCKS4A extension. SOCKS5 -supports IPv4, IPv6, and domain names. - -When connecting to a SOCKS4 proxy the ``username`` portion of the ``proxy_url`` -will be sent as the ``userid`` section of the SOCKS request: - -.. code-block:: python - - proxy_url="socks4a://@proxy-host" - -When connecting to a SOCKS5 proxy the ``username`` and ``password`` portion -of the ``proxy_url`` will be sent as the username/password to authenticate -with the proxy: - -.. code-block:: python - - proxy_url="socks5h://:@proxy-host" - -""" - -from __future__ import annotations - -try: - import socks # type: ignore[import] -except ImportError: - import warnings - - from ..exceptions import DependencyWarning - - warnings.warn( - ( - "SOCKS support in urllib3 requires the installation of optional " - "dependencies: specifically, PySocks. For more information, see " - "https://urllib3.readthedocs.io/en/latest/contrib.html#socks-proxies" - ), - DependencyWarning, - ) - raise - -import typing -from socket import timeout as SocketTimeout - -from ..connection import HTTPConnection, HTTPSConnection -from ..connectionpool import HTTPConnectionPool, HTTPSConnectionPool -from ..exceptions import ConnectTimeoutError, NewConnectionError -from ..poolmanager import PoolManager -from ..util.url import parse_url - -try: - import ssl -except ImportError: - ssl = None # type: ignore[assignment] - -try: - from typing import TypedDict - - class _TYPE_SOCKS_OPTIONS(TypedDict): - socks_version: int - proxy_host: str | None - proxy_port: str | None - username: str | None - password: str | None - rdns: bool - -except ImportError: # Python 3.7 - _TYPE_SOCKS_OPTIONS = typing.Dict[str, typing.Any] # type: ignore[misc, assignment] - - -class SOCKSConnection(HTTPConnection): - """ - A plain-text HTTP connection that connects via a SOCKS proxy. - """ - - def __init__( - self, - _socks_options: _TYPE_SOCKS_OPTIONS, - *args: typing.Any, - **kwargs: typing.Any, - ) -> None: - self._socks_options = _socks_options - super().__init__(*args, **kwargs) - - def _new_conn(self) -> socks.socksocket: - """ - Establish a new connection via the SOCKS proxy. 
- """ - extra_kw: dict[str, typing.Any] = {} - if self.source_address: - extra_kw["source_address"] = self.source_address - - if self.socket_options: - extra_kw["socket_options"] = self.socket_options - - try: - conn = socks.create_connection( - (self.host, self.port), - proxy_type=self._socks_options["socks_version"], - proxy_addr=self._socks_options["proxy_host"], - proxy_port=self._socks_options["proxy_port"], - proxy_username=self._socks_options["username"], - proxy_password=self._socks_options["password"], - proxy_rdns=self._socks_options["rdns"], - timeout=self.timeout, - **extra_kw, - ) - - except SocketTimeout as e: - raise ConnectTimeoutError( - self, - f"Connection to {self.host} timed out. (connect timeout={self.timeout})", - ) from e - - except socks.ProxyError as e: - # This is fragile as hell, but it seems to be the only way to raise - # useful errors here. - if e.socket_err: - error = e.socket_err - if isinstance(error, SocketTimeout): - raise ConnectTimeoutError( - self, - f"Connection to {self.host} timed out. (connect timeout={self.timeout})", - ) from e - else: - # Adding `from e` messes with coverage somehow, so it's omitted. - # See #2386. - raise NewConnectionError( - self, f"Failed to establish a new connection: {error}" - ) - else: - raise NewConnectionError( - self, f"Failed to establish a new connection: {e}" - ) from e - - except OSError as e: # Defensive: PySocks should catch all these. - raise NewConnectionError( - self, f"Failed to establish a new connection: {e}" - ) from e - - return conn - - -# We don't need to duplicate the Verified/Unverified distinction from -# urllib3/connection.py here because the HTTPSConnection will already have been -# correctly set to either the Verified or Unverified form by that module. This -# means the SOCKSHTTPSConnection will automatically be the correct type. -class SOCKSHTTPSConnection(SOCKSConnection, HTTPSConnection): - pass - - -class SOCKSHTTPConnectionPool(HTTPConnectionPool): - ConnectionCls = SOCKSConnection - - -class SOCKSHTTPSConnectionPool(HTTPSConnectionPool): - ConnectionCls = SOCKSHTTPSConnection - - -class SOCKSProxyManager(PoolManager): - """ - A version of the urllib3 ProxyManager that routes connections via the - defined SOCKS proxy. 
- """ - - pool_classes_by_scheme = { - "http": SOCKSHTTPConnectionPool, - "https": SOCKSHTTPSConnectionPool, - } - - def __init__( - self, - proxy_url: str, - username: str | None = None, - password: str | None = None, - num_pools: int = 10, - headers: typing.Mapping[str, str] | None = None, - **connection_pool_kw: typing.Any, - ): - parsed = parse_url(proxy_url) - - if username is None and password is None and parsed.auth is not None: - split = parsed.auth.split(":") - if len(split) == 2: - username, password = split - if parsed.scheme == "socks5": - socks_version = socks.PROXY_TYPE_SOCKS5 - rdns = False - elif parsed.scheme == "socks5h": - socks_version = socks.PROXY_TYPE_SOCKS5 - rdns = True - elif parsed.scheme == "socks4": - socks_version = socks.PROXY_TYPE_SOCKS4 - rdns = False - elif parsed.scheme == "socks4a": - socks_version = socks.PROXY_TYPE_SOCKS4 - rdns = True - else: - raise ValueError(f"Unable to determine SOCKS version from {proxy_url}") - - self.proxy_url = proxy_url - - socks_options = { - "socks_version": socks_version, - "proxy_host": parsed.host, - "proxy_port": parsed.port, - "username": username, - "password": password, - "rdns": rdns, - } - connection_pool_kw["_socks_options"] = socks_options - - super().__init__(num_pools, headers, **connection_pool_kw) - - self.pool_classes_by_scheme = SOCKSProxyManager.pool_classes_by_scheme diff --git a/spaces/qinzhu/Claude100K-API/README.md b/spaces/qinzhu/Claude100K-API/README.md deleted file mode 100644 index d9f5ce3bd4cb3c8ae3892de1d3ce1e5ede6678d5..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/Claude100K-API/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Claude100K API -emoji: 💻 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/r3gm/RVC_HF/lib/infer_pack/attentions.py b/spaces/r3gm/RVC_HF/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def 
forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * 
rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/entry/start.bf5976cf.js b/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/entry/start.bf5976cf.js deleted file mode 100644 index c729a3b7d000d27558568949537fc540975ddd52..0000000000000000000000000000000000000000 --- a/spaces/radames/transformers-js-sveltekit-static-example-app/_app/immutable/entry/start.bf5976cf.js +++ /dev/null @@ -1,3 +0,0 @@ -import{o as De,t as ye}from"../chunks/scheduler.e108d1fd.js";import{S as He,a as Je,I as V,g as Ce,f as Ve,b as we,c as le,s as ee,i as _e,d as M,e as K,P as qe,h as We}from"../chunks/singletons.0131ad8a.js";function Xe(n,o){return n==="/"||o==="ignore"?n:o==="never"?n.endsWith("/")?n.slice(0,-1):n:o==="always"&&!n.endsWith("/")?n+"/":n}function Ze(n){return n.split("%25").map(decodeURI).join("%25")}function Qe(n){for(const o in n)n[o]=decodeURIComponent(n[o]);return n}const et=["href","pathname","search","searchParams","toString","toJSON"];function tt(n,o){const u=new URL(n);for(const s of et)Object.defineProperty(u,s,{get(){return o(),n[s]},enumerable:!0,configurable:!0});return nt(u),u}function nt(n){Object.defineProperty(n,"hash",{get(){throw new Error("Cannot access event.url.hash. 
Consider using `$page.url.hash` inside a component instead")}})}const at="/__data.json";function rt(n){return n.replace(/\/$/,"")+at}function ot(...n){let o=5381;for(const u of n)if(typeof u=="string"){let s=u.length;for(;s;)o=o*33^u.charCodeAt(--s)}else if(ArrayBuffer.isView(u)){const s=new Uint8Array(u.buffer,u.byteOffset,u.byteLength);let d=s.length;for(;d;)o=o*33^s[--d]}else throw new TypeError("value must be a string or TypedArray");return(o>>>0).toString(36)}const fe=window.fetch;window.fetch=(n,o)=>((n instanceof Request?n.method:(o==null?void 0:o.method)||"GET")!=="GET"&&ne.delete(Se(n)),fe(n,o));const ne=new Map;function it(n,o){const u=Se(n,o),s=document.querySelector(u);if(s!=null&&s.textContent){const{body:d,...f}=JSON.parse(s.textContent),S=s.getAttribute("data-ttl");return S&&ne.set(u,{body:d,init:f,ttl:1e3*Number(S)}),Promise.resolve(new Response(d,f))}return fe(n,o)}function st(n,o,u){if(ne.size>0){const s=Se(n,u),d=ne.get(s);if(d){if(performance.now(){const d=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(s);if(d)return o.push({name:d[1],matcher:d[2],optional:!1,rest:!0,chained:!0}),"(?:/(.*))?";const f=/^\[\[(\w+)(?:=(\w+))?\]\]$/.exec(s);if(f)return o.push({name:f[1],matcher:f[2],optional:!0,rest:!1,chained:!0}),"(?:/([^/]+))?";if(!s)return;const S=s.split(/\[(.+?)\](?!\])/);return"/"+S.map((y,w)=>{if(w%2){if(y.startsWith("x+"))return be(String.fromCharCode(parseInt(y.slice(2),16)));if(y.startsWith("u+"))return be(String.fromCharCode(...y.slice(2).split("-").map(U=>parseInt(U,16))));const h=ct.exec(y);if(!h)throw new Error(`Invalid param: ${y}. Params and matcher names can only have underscores and alphanumeric characters.`);const[,D,x,k,N]=h;return o.push({name:k,matcher:N,optional:!!D,rest:!!x,chained:x?w===1&&S[0]==="":!1}),x?"(.*?)":D?"([^/]*)?":"([^/]+?)"}return be(y)}).join("")}).join("")}/?$`),params:o}}function ft(n){return!/^\([^)]+\)$/.test(n)}function ut(n){return n.slice(1).split("/").filter(ft)}function dt(n,o,u){const s={},d=n.slice(1);let f=0;for(let S=0;Sw).join("/"),f=0),y===void 0){l.rest&&(s[l.name]="");continue}if(!l.matcher||u[l.matcher](y)){s[l.name]=y;const w=o[S+1],h=d[S+1];w&&!w.rest&&w.optional&&h&&l.chained&&(f=0);continue}if(l.optional&&l.chained){f++;continue}return}if(!f)return s}function be(n){return n.normalize().replace(/[[\]]/g,"\\$&").replace(/%/g,"%25").replace(/\//g,"%2[Ff]").replace(/\?/g,"%3[Ff]").replace(/#/g,"%23").replace(/[.*+?^${}()|\\]/g,"\\$&")}function pt({nodes:n,server_loads:o,dictionary:u,matchers:s}){const d=new Set(o);return Object.entries(u).map(([l,[y,w,h]])=>{const{pattern:D,params:x}=lt(l),k={id:l,exec:N=>{const U=D.exec(N);if(U)return dt(U,x,s)},errors:[1,...h||[]].map(N=>n[N]),layouts:[0,...w||[]].map(S),leaf:f(y)};return k.errors.length=k.layouts.length=Math.max(k.errors.length,k.layouts.length),k});function f(l){const y=l<0;return y&&(l=~l),[y,n[l]]}function S(l){return l===void 0?l:[d.has(l),n[l]]}}function Ke(n){try{return JSON.parse(sessionStorage[n])}catch{}}function Fe(n,o){const u=JSON.stringify(o);try{sessionStorage[n]=u}catch{}}const ht=-1,gt=-2,mt=-3,yt=-4,wt=-5,_t=-6;function bt(n,o){if(typeof n=="number")return d(n,!0);if(!Array.isArray(n)||n.length===0)throw new Error("Invalid input");const u=n,s=Array(u.length);function d(f,S=!1){if(f===ht)return;if(f===mt)return NaN;if(f===yt)return 1/0;if(f===wt)return-1/0;if(f===_t)return-0;if(S)throw new Error("Invalid input");if(f in s)return s[f];const l=u[f];if(!l||typeof l!="object")s[f]=l;else if(Array.isArray(l))if(typeof l[0]=="string"){const y=l[0],w=o==null?void 
0:o[y];if(w)return s[f]=w(d(l[1]));switch(y){case"Date":s[f]=new Date(l[1]);break;case"Set":const h=new Set;s[f]=h;for(let k=1;ko!=null)}const ze=new Set(["load","prerender","csr","ssr","trailingSlash","config"]);[...ze];const Et=new Set([...ze]);[...Et];async function St(n){var o;for(const u in n)if(typeof((o=n[u])==null?void 0:o.then)=="function")return Object.fromEntries(await Promise.all(Object.entries(n).map(async([s,d])=>[s,await d])));return n}class te{constructor(o,u){this.status=o,typeof u=="string"?this.body={message:u}:u?this.body=u:this.body={message:`Error: ${o}`}}toString(){return JSON.stringify(this.body)}}class Me{constructor(o,u){this.status=o,this.location=u}}const kt="x-sveltekit-invalidated",z=Ke(He)??{},Q=Ke(Je)??{};function ve(n){z[n]=ee()}function Rt(n,o){var $e;const u=pt(n),s=n.nodes[0],d=n.nodes[1];s(),d();const f=document.documentElement,S=[],l=[];let y=null;const w={before_navigate:[],after_navigate:[]};let h={branch:[],error:null,url:null},D=!1,x=!1,k=!0,N=!1,U=!1,B=!1,H=!1,q,j=($e=history.state)==null?void 0:$e[V];j||(j=Date.now(),history.replaceState({...history.state,[V]:j},"",location.href));const ue=z[j];ue&&(history.scrollRestoration="manual",scrollTo(ue.x,ue.y));let F,ae,Y;async function ke(){if(Y=Y||Promise.resolve(),await Y,!Y)return;Y=null;const e=new URL(location.href),i=X(e,!0);y=null;const t=ae={},r=i&&await he(i);if(t===ae&&r){if(r.type==="redirect")return re(new URL(r.location,e).href,{},[e.pathname],t);r.props.page!==void 0&&(F=r.props.page),q.$set(r.props)}}function Re(e){l.some(i=>i==null?void 0:i.snapshot)&&(Q[e]=l.map(i=>{var t;return(t=i==null?void 0:i.snapshot)==null?void 0:t.capture()}))}function Ae(e){var i;(i=Q[e])==null||i.forEach((t,r)=>{var a,c;(c=(a=l[r])==null?void 0:a.snapshot)==null||c.restore(t)})}function Ie(){ve(j),Fe(He,z),Re(j),Fe(Je,Q)}async function re(e,{noScroll:i=!1,replaceState:t=!1,keepFocus:r=!1,state:a={},invalidateAll:c=!1},p,v){return typeof e=="string"&&(e=new URL(e,Ce(document))),ce({url:e,scroll:i?ee():null,keepfocus:r,redirect_chain:p,details:{state:a,replaceState:t},nav_token:v,accepted:()=>{c&&(H=!0)},blocked:()=>{},type:"goto"})}async function Le(e){return y={id:e.id,promise:he(e).then(i=>(i.type==="loaded"&&i.state.error&&(y=null),i))},y.promise}async function oe(...e){const t=u.filter(r=>e.some(a=>r.exec(a))).map(r=>Promise.all([...r.layouts,r.leaf].map(a=>a==null?void 0:a[1]())));await Promise.all(t)}function Oe(e){var r;h=e.state;const i=document.querySelector("style[data-sveltekit]");i&&i.remove(),F=e.props.page,q=new n.root({target:o,props:{...e.props,stores:M,components:l},hydrate:!0}),Ae(j);const t={from:null,to:{params:h.params,route:{id:((r=h.route)==null?void 0:r.id)??null},url:new URL(location.href)},willUnload:!1,type:"enter"};w.after_navigate.forEach(a=>a(t)),x=!0}async function W({url:e,params:i,branch:t,status:r,error:a,route:c,form:p}){let v="never";for(const g of t)(g==null?void 0:g.slash)!==void 0&&(v=g.slash);e.pathname=Xe(e.pathname,v),e.search=e.search;const b={type:"loaded",state:{url:e,params:i,branch:t,error:a,route:c},props:{constructors:vt(t).map(g=>g.node.component)}};p!==void 0&&(b.props.form=p);let _={},R=!F,A=0;for(let g=0;g(v.params.add(P),m[P])}),data:(c==null?void 0:c.data)??null,url:tt(t,()=>{v.url=!0}),async fetch(m,P){let $;m instanceof Request?($=m.url,P={body:m.method==="GET"||m.method==="HEAD"?void 0:await 
m.blob(),cache:m.cache,credentials:m.credentials,headers:m.headers,integrity:m.integrity,keepalive:m.keepalive,method:m.method,mode:m.mode,redirect:m.redirect,referrer:m.referrer,referrerPolicy:m.referrerPolicy,signal:m.signal,...P}):$=m;const C=new URL($,t);return I(C.href),C.origin===t.origin&&($=C.href.slice(t.origin.length)),x?st($,C.href,P):it($,P)},setHeaders:()=>{},depends:I,parent(){return v.parent=!0,i()}};p=await b.universal.load.call(null,g)??null,p=p?await St(p):null}return{node:b,loader:e,server:c,universal:(R=b.universal)!=null&&R.load?{type:"data",data:p,uses:v}:null,data:p??(c==null?void 0:c.data)??null,slash:((A=b.universal)==null?void 0:A.trailingSlash)??(c==null?void 0:c.slash)}}function Ue(e,i,t,r,a){if(H)return!0;if(!r)return!1;if(r.parent&&e||r.route&&i||r.url&&t)return!0;for(const c of r.params)if(a[c]!==h.params[c])return!0;for(const c of r.dependencies)if(S.some(p=>p(new URL(c))))return!0;return!1}function pe(e,i){return(e==null?void 0:e.type)==="data"?e:(e==null?void 0:e.type)==="skip"?i??null:null}async function he({id:e,invalidating:i,url:t,params:r,route:a}){if((y==null?void 0:y.id)===e)return y.promise;const{errors:c,layouts:p,leaf:v}=a,b=[...p,v];c.forEach(E=>E==null?void 0:E().catch(()=>{})),b.forEach(E=>E==null?void 0:E[1]().catch(()=>{}));let _=null;const R=h.url?e!==h.url.pathname+h.url.search:!1,A=h.route?a.id!==h.route.id:!1;let I=!1;const g=b.map((E,O)=>{var J;const L=h.branch[O],T=!!(E!=null&&E[0])&&((L==null?void 0:L.loader)!==E[1]||Ue(I,A,R,(J=L.server)==null?void 0:J.uses,r));return T&&(I=!0),T});if(g.some(Boolean)){try{_=await Be(t,g)}catch(E){return ie({status:E instanceof te?E.status:500,error:await Z(E,{url:t,params:r,route:{id:a.id}}),url:t,route:a})}if(_.type==="redirect")return _}const m=_==null?void 0:_.nodes;let P=!1;const $=b.map(async(E,O)=>{var ge;if(!E)return;const L=h.branch[O],T=m==null?void 0:m[O];if((!T||T.type==="skip")&&E[1]===(L==null?void 0:L.loader)&&!Ue(P,A,R,(ge=L.universal)==null?void 0:ge.uses,r))return L;if(P=!0,(T==null?void 0:T.type)==="error")throw T;return de({loader:E[1],url:t,params:r,route:a,parent:async()=>{var Te;const je={};for(let me=0;me{});const C=[];for(let E=0;EPromise.resolve({}),server_data_node:pe(c)}),b={node:await d(),loader:d,universal:null,server:null,data:null};return await W({url:t,params:a,branch:[v,b],status:e,error:i,route:null})}function X(e,i){if(_e(e,K))return;const t=se(e);for(const r of u){const a=r.exec(t);if(a)return{id:e.pathname+e.search,invalidating:i,route:r,params:Qe(a),url:e}}}function se(e){return Ze(e.pathname.slice(K.length)||"/")}function xe({url:e,type:i,intent:t,delta:r}){var v,b;let a=!1;const c={from:{params:h.params,route:{id:((v=h.route)==null?void 0:v.id)??null},url:h.url},to:{params:(t==null?void 0:t.params)??null,route:{id:((b=t==null?void 0:t.route)==null?void 0:b.id)??null},url:e},willUnload:!t,type:i};r!==void 0&&(c.delta=r);const p={...c,cancel:()=>{a=!0}};return U||w.before_navigate.forEach(_=>_(p)),a?null:c}async function ce({url:e,scroll:i,keepfocus:t,redirect_chain:r,details:a,type:c,delta:p,nav_token:v={},accepted:b,blocked:_}){var $,C,E;const R=X(e,!1),A=xe({url:e,type:c,delta:p,intent:R});if(!A){_();return}const I=j;b(),U=!0,x&&M.navigating.set(A),ae=v;let g=R&&await he(R);if(!g){if(_e(e,K))return await G(e);g=await Ne(e,{id:null},await Z(new Error(`Not found: ${e.pathname}`),{url:e,params:{},route:{id:null}}),404)}if(e=(R==null?void 0:R.url)||e,ae!==v)return!1;if(g.type==="redirect")if(r.length>10||r.includes(e.pathname))g=await ie({status:500,error:await 
Z(new Error("Redirect loop"),{url:e,params:{},route:{id:null}}),url:e,route:{id:null}});else return re(new URL(g.location,e).href,{},[...r,e.pathname],v),!1;else(($=g.props.page)==null?void 0:$.status)>=400&&await M.updated.check()&&await G(e);if(S.length=0,H=!1,N=!0,ve(I),Re(I),(C=g.props.page)!=null&&C.url&&g.props.page.url.pathname!==e.pathname&&(e.pathname=(E=g.props.page)==null?void 0:E.url.pathname),a){const O=a.replaceState?0:1;if(a.state[V]=j+=O,history[a.replaceState?"replaceState":"pushState"](a.state,"",e),!a.replaceState){let L=j+1;for(;Q[L]||z[L];)delete Q[L],delete z[L],L+=1}}y=null,x?(h=g.state,g.props.page&&(g.props.page.url=e),q.$set(g.props)):Oe(g);const{activeElement:m}=document;if(await ye(),k){const O=e.hash&&document.getElementById(decodeURIComponent(e.hash.slice(1)));i?scrollTo(i.x,i.y):O?O.scrollIntoView():scrollTo(0,0)}const P=document.activeElement!==m&&document.activeElement!==document.body;!t&&!P&&Ee(),k=!0,g.props.page&&(F=g.props.page),U=!1,c==="popstate"&&Ae(j),w.after_navigate.forEach(O=>O(A)),M.navigating.set(null),N=!1}async function Ne(e,i,t,r){return e.origin===location.origin&&e.pathname===location.pathname&&!D?await ie({status:r,error:t,url:e,route:i}):await G(e)}function G(e){return location.href=e.href,new Promise(()=>{})}function Ye(){let e;f.addEventListener("mousemove",c=>{const p=c.target;clearTimeout(e),e=setTimeout(()=>{r(p,2)},20)});function i(c){r(c.composedPath()[0],1)}f.addEventListener("mousedown",i),f.addEventListener("touchstart",i,{passive:!0});const t=new IntersectionObserver(c=>{for(const p of c)p.isIntersecting&&(oe(se(new URL(p.target.href))),t.unobserve(p.target))},{threshold:0});function r(c,p){const v=Ve(c,f);if(!v)return;const{url:b,external:_,download:R}=we(v,K);if(_||R)return;const A=le(v);if(!A.reload)if(p<=A.preload_data){const I=X(b,!1);I&&Le(I)}else p<=A.preload_code&&oe(se(b))}function a(){t.disconnect();for(const c of f.querySelectorAll("a")){const{url:p,external:v,download:b}=we(c,K);if(v||b)continue;const _=le(c);_.reload||(_.preload_code===qe.viewport&&t.observe(c),_.preload_code===qe.eager&&oe(se(p)))}}w.after_navigate.push(a),a()}function Z(e,i){return e instanceof te?e.body:n.hooks.handleError({error:e,event:i})??{message:i.route.id!=null?"Internal Error":"Not Found"}}return{after_navigate:e=>{De(()=>(w.after_navigate.push(e),()=>{const i=w.after_navigate.indexOf(e);w.after_navigate.splice(i,1)}))},before_navigate:e=>{De(()=>(w.before_navigate.push(e),()=>{const i=w.before_navigate.indexOf(e);w.before_navigate.splice(i,1)}))},disable_scroll_handling:()=>{(N||!x)&&(k=!1)},goto:(e,i={})=>re(e,i,[]),invalidate:e=>{if(typeof e=="function")S.push(e);else{const{href:i}=new URL(e,location.href);S.push(t=>t.href===i)}return ke()},invalidate_all:()=>(H=!0,ke()),preload_data:async e=>{const i=new URL(e,Ce(document)),t=X(i,!1);if(!t)throw new Error(`Attempted to preload a URL that does not belong to this app: ${i}`);await Le(t)},preload_code:oe,apply_action:async e=>{if(e.type==="error"){const i=new URL(location.href),{branch:t,route:r}=h;if(!r)return;const a=await Pe(h.branch.length,t,r.errors);if(a){const c=await W({url:i,params:h.params,branch:t.slice(0,a.idx).concat(a.node),status:e.status??500,error:e.error,route:r});h=c.state,q.$set(c.props),ye().then(Ee)}}else e.type==="redirect"?re(e.location,{invalidateAll:!0},[]):(q.$set({form:null,page:{...F,form:e.data,status:e.status}}),await ye(),q.$set({form:e.data}),e.type==="success"&&Ee())},_start_router:()=>{var 
i;history.scrollRestoration="manual",addEventListener("beforeunload",t=>{var a;let r=!1;if(Ie(),!U){const c={from:{params:h.params,route:{id:((a=h.route)==null?void 0:a.id)??null},url:h.url},to:null,willUnload:!0,type:"leave",cancel:()=>r=!0};w.before_navigate.forEach(p=>p(c))}r?(t.preventDefault(),t.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{document.visibilityState==="hidden"&&Ie()}),(i=navigator.connection)!=null&&i.saveData||Ye(),f.addEventListener("click",t=>{var I;if(t.button||t.which!==1||t.metaKey||t.ctrlKey||t.shiftKey||t.altKey||t.defaultPrevented)return;const r=Ve(t.composedPath()[0],f);if(!r)return;const{url:a,external:c,target:p,download:v}=we(r,K);if(!a)return;if(p==="_parent"||p==="_top"){if(window.parent!==window)return}else if(p&&p!=="_self")return;const b=le(r);if(!(r instanceof SVGAElement)&&a.protocol!==location.protocol&&!(a.protocol==="https:"||a.protocol==="http:")||v)return;if(c||b.reload){xe({url:a,type:"link"})?U=!0:t.preventDefault();return}const[R,A]=a.href.split("#");if(A!==void 0&&R===location.href.split("#")[0]){if(h.url.hash===a.hash){t.preventDefault(),(I=r.ownerDocument.getElementById(A))==null||I.scrollIntoView();return}if(B=!0,ve(j),e(a),!b.replace_state)return;B=!1,t.preventDefault()}ce({url:a,scroll:b.noscroll?ee():null,keepfocus:b.keep_focus??!1,redirect_chain:[],details:{state:{},replaceState:b.replace_state??a.href===location.href},accepted:()=>t.preventDefault(),blocked:()=>t.preventDefault(),type:"link"})}),f.addEventListener("submit",t=>{if(t.defaultPrevented)return;const r=HTMLFormElement.prototype.cloneNode.call(t.target),a=t.submitter;if(((a==null?void 0:a.formMethod)||r.method)!=="get")return;const p=new URL((a==null?void 0:a.hasAttribute("formaction"))&&(a==null?void 0:a.formAction)||r.action);if(_e(p,K))return;const v=t.target,{keep_focus:b,noscroll:_,reload:R,replace_state:A}=le(v);if(R)return;t.preventDefault(),t.stopPropagation();const I=new FormData(v),g=a==null?void 0:a.getAttribute("name");g&&I.append(g,(a==null?void 0:a.getAttribute("value"))??""),p.search=new URLSearchParams(I).toString(),ce({url:p,scroll:_?ee():null,keepfocus:b??!1,redirect_chain:[],details:{state:{},replaceState:A??p.href===location.href},nav_token:{},accepted:()=>{},blocked:()=>{},type:"form"})}),addEventListener("popstate",async t=>{var r;if((r=t.state)!=null&&r[V]){if(t.state[V]===j)return;const a=z[t.state[V]];if(h.url.href.split("#")[0]===location.href.split("#")[0]){z[j]=ee(),j=t.state[V],scrollTo(a.x,a.y);return}const c=t.state[V]-j;await ce({url:new URL(location.href),scroll:a,keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{j=t.state[V]},blocked:()=>{history.go(-c)},type:"popstate",delta:c})}else if(!B){const a=new URL(location.href);e(a)}}),addEventListener("hashchange",()=>{B&&(B=!1,history.replaceState({...history.state,[V]:++j},"",location.href))});for(const t of document.querySelectorAll("link"))t.rel==="icon"&&(t.href=t.href);addEventListener("pageshow",t=>{t.persisted&&M.navigating.set(null)});function e(t){h.url=t,M.page.set({...F,url:t}),M.page.notify()}},_hydrate:async({status:e=200,error:i,node_ids:t,params:r,route:a,data:c,form:p})=>{D=!0;const v=new URL(location.href);({params:r={},route:a={id:null}}=X(v,!1)||{});let b;try{const _=t.map(async(I,g)=>{const m=c[g];return m!=null&&m.uses&&(m.uses=Ge(m.uses)),de({loader:n.nodes[I],url:v,params:r,route:a,parent:async()=>{const P={};for(let $=0;$I===a.id);if(A){const I=A.layouts;for(let g=0;gd?"1":"0").join(""));const s=await 
fe(u.href);if(!s.ok)throw new te(s.status,await s.json());return new Promise(async d=>{var h;const f=new Map,S=s.body.getReader(),l=new TextDecoder;function y(D){return bt(D,{Promise:x=>new Promise((k,N)=>{f.set(x,{fulfil:k,reject:N})})})}let w="";for(;;){const{done:D,value:x}=await S.read();if(D&&!w)break;for(w+=!x&&w?` -`:l.decode(x);;){const k=w.indexOf(` -`);if(k===-1)break;const N=JSON.parse(w.slice(0,k));if(w=w.slice(k+1),N.type==="redirect")return d(N);if(N.type==="data")(h=N.nodes)==null||h.forEach(U=>{(U==null?void 0:U.type)==="data"&&(U.uses=Ge(U.uses),U.data=y(U.data))}),d(N);else if(N.type==="chunk"){const{id:U,data:B,error:H}=N,q=f.get(U);f.delete(U),H?q.reject(y(H)):q.fulfil(y(B))}}}})}function Ge(n){return{dependencies:new Set((n==null?void 0:n.dependencies)??[]),params:new Set((n==null?void 0:n.params)??[]),parent:!!(n!=null&&n.parent),route:!!(n!=null&&n.route),url:!!(n!=null&&n.url)}}function Ee(){const n=document.querySelector("[autofocus]");if(n)n.focus();else{const o=document.body,u=o.getAttribute("tabindex");o.tabIndex=-1,o.focus({preventScroll:!0,focusVisible:!1}),u!==null?o.setAttribute("tabindex",u):o.removeAttribute("tabindex");const s=getSelection();if(s&&s.type!=="None"){const d=[];for(let f=0;f{if(s.rangeCount===d.length){for(let f=0;f -

    FabFilter Total Bundle (2018.3.05) free download

    -

    If you are looking for a comprehensive set of audio plugins that can help you produce professional-sounding music, you might have heard of FabFilter Total Bundle. This is a collection of 14 high-quality plugins that cover various aspects of audio production, such as equalization, compression, reverb, distortion, synthesis, and more. But how can you get FabFilter Total Bundle for free? Is it possible to download it without paying anything? And what are the consequences of doing so? In this article, we will answer these questions and give you some tips on how to get the best out of FabFilter Total Bundle.

    -

    What is FabFilter Total Bundle?

    -

    FabFilter Total Bundle is a bundle of 14 audio plugins that are designed for music production, mixing, mastering, and sound design. The plugins are compatible with most popular DAWs (digital audio workstations), such as Ableton Live, Logic Pro, Cubase, Pro Tools, FL Studio, and more. The plugins are also available in various formats, such as VST, VST3, AU, AAX, RTAS, and AudioSuite.

    -

    FabFilter Total Bundle (2018.3.05) free download


    Download ⇒⇒⇒ https://tinourl.com/2uKZvL



    -

    A collection of professional audio plugins

    -

    The plugins included in FabFilter Total Bundle are:

    -
      -
    • FabFilter Pro-Q 3: A versatile equalizer plugin that offers up to 24 bands of EQ, linear-phase and natural-phase modes, dynamic EQ, mid/side processing, spectrum analyzer, and more.
    • FabFilter Pro-C 2: A powerful compressor plugin that offers up to eight bands of compression, program-dependent attack and release curves, side-chain input, lookahead, oversampling, and more.
    • FabFilter Pro-R: A realistic reverb plugin that offers natural-sounding decay tails, spatial positioning, decay rate EQ, modulation, freeze mode, and more.
    • FabFilter Pro-L 2: A transparent limiter plugin that offers true peak limiting, loudness metering, advanced algorithms, oversampling, dithering, and more.
    • FabFilter Pro-MB: A flexible multiband compressor/expander plugin that offers up to six bands of dynamic processing, linear-phase and minimum-phase modes, intelligent auto-gain and auto-release functions, side-chain input, and more.
    • FabFilter Pro-DS: A smart de-esser plugin that offers single-vocal and allround modes, linear-phase and minimum-phase modes, adjustable threshold and range settings, side-chain input, and more.
    • FabFilter Pro-G: A versatile gate/expander plugin that offers up to six bands of gating/expansion, linear-phase and minimum-phase modes, customizable attack and release curves, side-chain input, and more.
    • FabFilter Saturn 2: A creative distortion/saturation plugin that offers up to six bands of distortion/saturation, 16 different distortion styles, dynamic modulation, mid/side processing, and more.
    • FabFilter Timeless 3: A vintage-inspired delay plugin that offers up to two delay lines, feedback effects, tape saturation, filtering, modulation, and more.
    • FabFilter Volcano 3: A powerful filter plugin that offers up to four multimode filters, filter panning, filter routing, modulation, and more.
    • FabFilter Twin 2: A versatile synthesizer plugin that offers three oscillators, two multimode filters, six envelopes, three LFOs, modulation matrix, arpeggiator/sequencer, and more.
    • FabFilter One: A simple but powerful synthesizer plugin that offers one oscillator, one filter, one envelope, one LFO, portamento/glide, and more.
    • FabFilter Simplon: A basic but effective filter plugin that offers two multimode filters, filter routing, envelope follower modulation, and more.
    • FabFilter Micro: A minimal but useful filter plugin that offers one multimode filter, envelope follower modulation, and more.
    -

    The features and benefits of FabFilter Total Bundle

    -

    Some of the common features and benefits of FabFilter Total Bundle are:

    -

    High-quality sound and user-friendly interface

    -

    All FabFilter plugins are designed with sound quality and usability in mind. They use advanced algorithms and high-quality oversampling to ensure transparent and accurate sound processing. They also have intuitive and attractive interfaces that make them easy to use and customize. You can resize the plugins' windows to fit your screen size and adjust the colors and fonts to suit your preference. You can also access various presets and tips to help you get started quickly.

    -

    Flexible modulation and side-chain options

    -

Many FabFilter plugins offer flexible modulation and side-chain options that allow you to create dynamic and expressive effects. You can use various sources, such as envelopes, LFOs, MIDI, or audio input, to modulate parameters such as gain, frequency, panning, or feedback. You can also use external audio signals to trigger or control the plugins' functions, such as compression, gating, or filtering. This way, you can create complex and creative soundscapes that respond to your music.

    -


    -

    Advanced filters and dynamics processing

    -

FabFilter plugins offer advanced filters and dynamics processing that give you precise control over your sound. You can use filter types such as low-pass, high-pass, band-pass, notch, or tilt to shape your sound, and dynamics modes such as peak, RMS, or envelope to adjust its level. You can also use features such as linear-phase or natural-phase processing, dynamic EQ, multiband compression/expansion, and mid/side or stereo processing to enhance your sound further.

    -

    Creative effects and synthesizers

    -

FabFilter plugins also offer creative effects and synthesizers that let you add character and movement to your sound. You can use effects such as distortion/saturation, delay/reverb, modulation, flanger, chorus, phaser, tremolo, vibrato, ring modulator, frequency shifter, pitch shifter, rotary speaker, gated reverb, ducking delay, granular delay, pitch-quantized delay, reverse delay, ping-pong delay, tape delay, diffusion network, diffuse delay, resonator, comb filter, formant filter, vowel filter, feedback, filter, wah-wah, auto-wah, envelope shaper, bit crusher, sample rate reducer, noise generator, limiter, dithering, and more to spice up your sound. You can also use synthesizer types such as subtractive/analog, wavetable, FM, AM, RM, additive, granular, and physical modeling to create your own sounds from scratch or to modify existing sounds.

    -

    How to download FabFilter Total Bundle for free?

    -

You might be wondering how you can download FabFilter Total Bundle for free, without paying anything. Well, there are ways to do that, but they come with risks and drawbacks that you should be aware of before you try them.

    -

    The risks and drawbacks of using cracked software

    -

One way to get FabFilter Total Bundle for free is to use cracked software. Cracked software is software that has been modified or hacked to bypass its license protection or activation mechanism, so that it can be used without paying for it or registering it with the developer. However, using cracked software is not only illegal but also dangerous, for several reasons:

    -

    Legal issues and copyright infringement

- and your workflow. Cracked software often has bugs, glitches, or errors that can affect its functionality, stability, or quality. It can also cause compatibility issues with your operating system, your DAW, or the other plugins you use, and it usually lacks the updates, patches, or developer support that would fix these issues or improve the software's performance and features.

    -

    The best way to get FabFilter Total Bundle legally and safely

    -

So how can you get FabFilter Total Bundle without risking your computer, your data, or your legal status? Well, there are better ways that are legal and safe. Here are some of them:

    -

Buy the official license from the FabFilter website

    -

The best way to get FabFilter Total Bundle is to buy the official license from the FabFilter website. This way, you can support the developers and their work and enjoy the full benefits of the software. You get the latest version with all the features and updates, plus technical support and customer service from FabFilter in case you have any issues or questions. You can also get discounts or offers from FabFilter if you buy other products from them or if you are a student or an educator. You can buy FabFilter Total Bundle for $999 USD from their website.

    -

    Download the trial version and test it out

    -

Another way to get FabFilter Total Bundle is to download the trial version and test it out. This way, you can try the software for free for 30 days and see whether you like it. You can use all the features and functions of the software without any limitations or restrictions, and you can save and export your projects with the trial version. However, after 30 days you will need to buy the license or uninstall the software. You can download the trial version of FabFilter Total Bundle from their website.

    -

    Use alternative free plugins that are similar to FabFilter

    -

A third option is to use alternative free plugins that are similar to the FabFilter ones. This way, you can save money and still get some of the features and functions of the FabFilter plugins. However, you might not get the same quality, usability, or compatibility, and you might also miss some of the unique or advanced features that FabFilter plugins offer. Here are some examples of free plugins that are similar to FabFilter plugins:

| FabFilter Plugin | Free Alternative Plugin |
| --- | --- |
| FabFilter Pro-Q 3 | TDR Nova |
| FabFilter Pro-C 2 | TDR Kotelnikov |
| FabFilter Pro-R | TAL-Reverb-4 |
| FabFilter Pro-L 2 | Limiter No6 |
| FabFilter Pro-MB | Xfer OTT |
| FabFilter Pro-DS | TDR DeEdger |
| FabFilter Pro-G | GVST GGate |
| FabFilter Saturn 2 | Klanghelm IVGI |
| FabFilter Timeless 3 | Valhalla Freq Echo |
| FabFilter Volcano 3 | TAL-Filter-2 |
| FabFilter Twin 2 | Helm Synth |
| FabFilter One | Synth1 VSTi/AUi Plugin Synthesizer (freeware) |
| FabFilter Simplon | u-he Podolski |
| FabFilter Micro | u-he TyrellN6 |

    You can find these free plugins online by searching their names on Google or other search engines.

    Conclusion

In conclusion, FabFilter Total Bundle is a great collection of audio plugins that can help you produce professional-sounding music. However, if you want to get it for free, you should be wary of cracked software, as it can cause legal, security, and performance issues. Instead, consider buying the official license from the FabFilter website, downloading the trial version and testing it out, or using alternative free plugins that are similar to FabFilter.

    FAQs

    • Q: How many plugins are included in FabFilter Total Bundle?
• A: FabFilter Total Bundle includes 14 audio plugins that cover various aspects of audio production, such as equalization, compression, reverb, distortion, synthesis, and more.
    • Q: How much does FabFilter Total Bundle cost?
    • A: FabFilter Total Bundle costs $999 USD if you buy it from their website. You can also get discounts or offers if you buy other products from them or if you are a student or an educator.
    • Q: How long is the trial period of FabFilter Total Bundle?
    • A: The trial period of FabFilter Total Bundle is 30 days. You can use all the features and functions of the software without any limitations or restrictions during this period.
    • Q: What are some of the advantages of using FabFilter plugins?
• A: Some of the advantages of using FabFilter plugins are high-quality sound and a user-friendly interface, flexible modulation and side-chain options, advanced filters and dynamics processing, and creative effects and synthesizers.
    • Q: What are some of the disadvantages of using cracked software?
• A: Some of the disadvantages of using cracked software are legal issues and copyright infringement, malware and virus infection, and poor performance and compatibility issues.
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raghuram13/extract_text_from_image/app.py b/spaces/raghuram13/extract_text_from_image/app.py deleted file mode 100644 index 612b6545d2b435d85199c675002e9639909d1631..0000000000000000000000000000000000000000 --- a/spaces/raghuram13/extract_text_from_image/app.py +++ /dev/null @@ -1,47 +0,0 @@ -import easyocr as ocr #OCR -import streamlit as st #Web App -from PIL import Image #Image Processing -import numpy as np #Image Processing - -#title -st.title("Easy OCR - Extract Text from Images") - -#subtitle -st.markdown("## Optical Character Recognition - Using `easyocr`, `streamlit` hosted on huggingfaces" ) - -#st.markdown("") - -#image uploader -image = st.file_uploader(label = "Upload your image here",type=['png','jpg','jpeg']) - - -@st.cache -def load_model(): - reader = ocr.Reader(['en'],model_storage_directory='.') - return reader - -reader = load_model() #load model - -if image is not None: - - input_image = Image.open(image) #read image - st.image(input_image) #display image - - with st.spinner("AI is at Work! "): - - - result = reader.readtext(np.array(input_image)) - - result_text = [] #empty list for results - - - for text in result: - result_text.append(text[1]) - - st.write(result_text) - #st.success("Here you go!") - st.balloons() -else: - st.write("Upload an Image") - -st.caption("Made by Raghuramvarma") diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/constants.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/constants.d.ts deleted file mode 100644 index 208020dcbab4ebcd7955b2abcb7ae49185f5976e..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/ts4.8/constants.d.ts +++ /dev/null @@ -1,18 +0,0 @@ -/** @deprecated since v6.3.0 - use constants property exposed by the relevant module instead. */ -declare module 'constants' { - import { constants as osConstants, SignalConstants } from 'node:os'; - import { constants as cryptoConstants } from 'node:crypto'; - import { constants as fsConstants } from 'node:fs'; - - const exp: typeof osConstants.errno & - typeof osConstants.priority & - SignalConstants & - typeof cryptoConstants & - typeof fsConstants; - export = exp; -} - -declare module 'node:constants' { - import constants = require('constants'); - export = constants; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Flash Pro CS6 Keygen PASSWORD.txt.rar.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Flash Pro CS6 Keygen PASSWORD.txt.rar.md deleted file mode 100644 index 443a205d8f51d76a668a853053c42c55c2f6187c..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Flash Pro CS6 Keygen PASSWORD.txt.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe flash pro CS6 Keygen PASSWORD.txt.rar


    Download ->->->-> https://urlgoal.com/2uCKLC



    -
    - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bank Soal Seni Budaya Sma Semester 1 Kelas X.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bank Soal Seni Budaya Sma Semester 1 Kelas X.md deleted file mode 100644 index 6a1c18b93e337e641c0d9ece09449455bd511630..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bank Soal Seni Budaya Sma Semester 1 Kelas X.md +++ /dev/null @@ -1,66 +0,0 @@ - -

    Bank Soal Seni Budaya SMA Semester 1 Kelas X

    -

    Arts and culture (seni budaya) is one of the subjects that interests high-school students and sharpens their creativity. In arts and culture, students learn about various forms of visual art, music, dance, and theatre. To test students' understanding of and skills in arts and culture, teachers can set challenging and varied questions.

    -

    bank soal seni budaya sma semester 1 kelas x


    Download ✏ ✏ ✏ https://urlgoal.com/2uCMPD



    -

    The following is a question bank for high-school (SMA) arts and culture, semester 1, grade X, which can be used as practice material or a reference. The questions are quoted from various online sources, such as passinggrade.co.id, materibelajar.co.id, myedisi.com, kumparan.com, and websiteedukasi.com. They consist of multiple-choice and essay questions and come with an answer key.

    -

    Soal Pilihan Ganda

    -
      -
    1. Teknik melukis dengan menggunakan cat air sebagai medium dan kertas sebagai permukaan disebut....
      -a. akrilik
      -b. cat air
      -c. lukisan batik
      -d. lukisan mozaik
      -e. lukisan gratis
      -Jawaban: B
    2. -
    3. Tekstur dan cahaya gelap terang adalah unsur bentuk dalam....
      -a. drama
      -b. tari
      -c. seni musik
      -d. seni teater
      -e. seni rupa
      -Jawaban: E
    4. -
    5. Perhatikan gambar berikut ini.
      -Gambar lukisan abstrak
      -Gambar di atas merupakan contoh wujud hasil karya seni rupa....
      -a. representatif (nyata)
      -b. dekoratif
      -c. ekspresif
      -d. nonrepresentatif (abstrak)
      -e. mengukir
      -Jawaban: D
    6. -
    7. Kita dapat menikmati seni sebagai....
      -a. perasaan
      -b. keluaran
      -c. estetika
      -d. pikiran
      -e. naluri
      -Jawaban: A
    8. -
    9. Proses dalam penciptaan seni memiliki keunikan, kecuali....
      -a. individual
      -b. universal
      -c. ekspresif
      -d. selalu sama
      -e. unik
      -Jawaban: D
    10. - - - -
    - -

    Soal Essay

    - -
      - -
    1. Jelaskan pengertian karya seni rupa dua dimensi dan tiga dimensi serta berikan contohnya masing-masing!

      - -Jawaban:
      - -Karya seni rupa dua dimensi adalah karya seni rupa yang memiliki ukuran panjang dan lebar, tetapi tidak memiliki ketebalan atau kedalaman. Karya seni rupa dua dimensi hanya dapat dilihat dari satu sisi saja. Contoh karya seni rupa dua dimensi adalah lukisan, gambar, foto, grafis, kaligrafi, dan sebagainya.
      - -Karya seni rupa tiga dimensi adalah karya seni rupa yang memiliki ukuran panjang, lebar, dan tinggi atau kedalaman. Karya seni rupa tiga dimensi memiliki volume dan massa, serta dapat dilihat dari berbagai sisi. Contoh karya seni rupa tiga dimensi adalah patung, keramik, arsitektur, instalasi, dan sebagainya.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Farming Simulator 2011 Mods Romania Download Torent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Farming Simulator 2011 Mods Romania Download Torent.md deleted file mode 100644 index c4080fe5b4fcc7087750502219ed8fa5fb1e5d16..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Farming Simulator 2011 Mods Romania Download Torent.md +++ /dev/null @@ -1,42 +0,0 @@ -

      farming simulator 2011 mods romania download torent


      Downloadhttps://urlgoal.com/2uCJrp



      -
      -You can visit here to find everything related to Farming Simulator 2019. The Farming Simulator is the ultimate farming experience and the online community of Farming Simulator is constantly growing. You can join the community and share your experiences with other players. We are committed to offering the best experience possible and your feedback helps us to improve. - -Download Farming Simulator 2019 Demo - -Farming Simulator 2019 Release - -Two beta weekends have been announced and in between these weekends, the final version of Farming Simulator 19 has been released. Everyone can download and play the game for free, no need to register or buy a copy. The game is free of charge as a digital download via Steam, Uplay and various platforms. - -About Farming Simulator 2019 - -Farming Simulator 19 is a true farming simulation where the player is given a broad spectrum of tasks to complete. With some of the biggest trends in farming on the market at the moment, Farming Simulator 2019 makes it easier than ever for you to live your dream of being a farmer. Whether you want to make a living from the land or spend your time in the game to get to know the people in your area. Whether you dream of riding the combine or driving the semi-trailer – it’s up to you. - -Game Features - -Rural/farming simulation - -Over 27,000 combinations to unlock more than 1,500 activities - -More than 140 unique machines including tractors, combines, semi-trailers, harvesters, and all kinds of utility vehicles - -Realistic farming tasks with a multitude of tasks - -You can choose between a season type, a crop type and a production plan for each and every task - -Events in Farming Simulator 19 - -The community events will be some of the most thrilling and entertaining event in Farming Simulator 2019. With daily events, regular and exciting competitions, you will always have something to keep you busy in the game. - -Your farming community - -Farming Simulator 2019 will provide you with a great community with an endless amount of things to do and you can meet a wide variety of people from all over the world, you can make friends and have a good time with the people in your area. Share experiences and go on amazing adventures together. - -Haul more than 1,200 products to your farm - -Whether you haul a car, a water tanker, a semi-trailer, a wagon or a truck you will find endless possibilities to explore. - -About Farming Simulator 4fefd39f24
      -
      -
      -

      diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fenologia Del Frijol.pdf ((FULL)).md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fenologia Del Frijol.pdf ((FULL)).md deleted file mode 100644 index 38db9ddf5cd24954d47da1bde6884c4c04090f7f..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fenologia Del Frijol.pdf ((FULL)).md +++ /dev/null @@ -1,34 +0,0 @@ -

      Fenologia Del Frijol.pdf


      Download Zip →→→ https://urlgoal.com/2uCKlj



      - -.pdf - -Category:Mexican cuisine - -Category:Aymara cuisine - -Category:Foods containing coconutThe state of Wyoming’s new governor has an almost religious faith in the power of the free market. And that’s a good thing. - -Sally Jewell is “a highly respected businesswoman who understands the importance of free markets in the United States,” according to the governor’s website. - -That means that she will be advancing a progressive agenda in some respects, and the free market in others. - -Jewell’s first day in office, January 9, was marked by a promise to pursue universal preschool and a requirement that the state use natural gas more heavily than coal. - -And, as evidenced by her appointment to lead the troubled Wyoming Office of the State Engineer, Jewell seems to be thinking more like a Republican than a Democrat. - -Yet she’s the first Democrat to have won Wyoming in decades. She told me that she might even be the first Democrat to serve as Wyoming’s governor since 1933, when Franklin D. Roosevelt was in office. - -While it’s true that FDR came to office at the height of the Great Depression, he was also making his second attempt to reform the economy after he failed to pass the first New Deal. - -On that first attempt, he signed into law the Glass-Steagall Act, which prevented banks from speculating in stocks and commercial real estate. He also began the world-class Tennessee Valley Authority, which gave birth to the conservation movement. - -FDR passed the National Industrial Recovery Act, which was designed to control unemployment by creating new organizations that would design and sell standardized products. Roosevelt then signed the Railroad Labor Act, which regulated labor unions to protect the rights of workers. - -And, the National Labor Relations Act made it illegal for companies to refuse to bargain with unions that represent their workers. It also gave workers the right to strike. - -Of course, Roosevelt’s goals weren’t accomplished overnight. Labor unions were slow to realize the benefits of the National Labor Relations Act. In the end, FDR’s most ambitious plans — including the 1935 Social Security Act, which is still with us today — wouldn’t pass until he was out of office. - -So while there’s little question that FDR’s presidency has an important place in the history of modern liberalism, his second term isn’ 4fefd39f24
      -
      -
      -

      diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/serve/frontend/__init__.py b/spaces/riccorl/relik-entity-linking/relik/inference/serve/frontend/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/riccorl/relik-entity-linking/relik/inference/serve/frontend/style.css b/spaces/riccorl/relik-entity-linking/relik/inference/serve/frontend/style.css deleted file mode 100644 index 31f0d182cfd9b2636d5db5cbd0e7a1339ed5d1c3..0000000000000000000000000000000000000000 --- a/spaces/riccorl/relik-entity-linking/relik/inference/serve/frontend/style.css +++ /dev/null @@ -1,33 +0,0 @@ -/* Sidebar */ -.eczjsme11 { - background-color: #802433; -} - -.st-emotion-cache-10oheav h2 { - color: white; -} - -.st-emotion-cache-10oheav li { - color: white; -} - -/* Main */ -a:link { - text-decoration: none; - color: white; -} - -a:visited { - text-decoration: none; - color: white; -} - -a:hover { - text-decoration: none; - color: rgba(255, 255, 255, 0.871); -} - -a:active { - text-decoration: none; - color: white; -} \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/base_sampler.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/base_sampler.py deleted file mode 100644 index bd15c7c643bdf52a39fd2f35e8d26a64de813b4b..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/samplers/base_sampler.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - -import torch - -from .sampling_result import SamplingResult - - -class BaseSampler(metaclass=ABCMeta): - """Base class of samplers.""" - - def __init__(self, - num, - pos_fraction, - neg_pos_ub=-1, - add_gt_as_proposals=True, - **kwargs): - self.num = num - self.pos_fraction = pos_fraction - self.neg_pos_ub = neg_pos_ub - self.add_gt_as_proposals = add_gt_as_proposals - self.pos_sampler = self - self.neg_sampler = self - - @abstractmethod - def _sample_pos(self, assign_result, num_expected, **kwargs): - """Sample positive samples.""" - pass - - @abstractmethod - def _sample_neg(self, assign_result, num_expected, **kwargs): - """Sample negative samples.""" - pass - - def sample(self, - assign_result, - bboxes, - gt_bboxes, - gt_labels=None, - **kwargs): - """Sample positive and negative bboxes. - - This is a simple implementation of bbox sampling given candidates, - assigning results and ground truth bboxes. - - Args: - assign_result (:obj:`AssignResult`): Bbox assigning results. - bboxes (Tensor): Boxes to be sampled from. - gt_bboxes (Tensor): Ground truth bboxes. - gt_labels (Tensor, optional): Class labels of ground truth bboxes. - - Returns: - :obj:`SamplingResult`: Sampling result. 
- - Example: - >>> from mmdet.core.bbox import RandomSampler - >>> from mmdet.core.bbox import AssignResult - >>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes - >>> rng = ensure_rng(None) - >>> assign_result = AssignResult.random(rng=rng) - >>> bboxes = random_boxes(assign_result.num_preds, rng=rng) - >>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng) - >>> gt_labels = None - >>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1, - >>> add_gt_as_proposals=False) - >>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels) - """ - if len(bboxes.shape) < 2: - bboxes = bboxes[None, :] - - bboxes = bboxes[:, :4] - - gt_flags = bboxes.new_zeros((bboxes.shape[0], ), dtype=torch.uint8) - if self.add_gt_as_proposals and len(gt_bboxes) > 0: - if gt_labels is None: - raise ValueError( - 'gt_labels must be given when add_gt_as_proposals is True') - bboxes = torch.cat([gt_bboxes, bboxes], dim=0) - assign_result.add_gt_(gt_labels) - gt_ones = bboxes.new_ones(gt_bboxes.shape[0], dtype=torch.uint8) - gt_flags = torch.cat([gt_ones, gt_flags]) - - num_expected_pos = int(self.num * self.pos_fraction) - pos_inds = self.pos_sampler._sample_pos( - assign_result, num_expected_pos, bboxes=bboxes, **kwargs) - # We found that sampled indices have duplicated items occasionally. - # (may be a bug of PyTorch) - pos_inds = pos_inds.unique() - num_sampled_pos = pos_inds.numel() - num_expected_neg = self.num - num_sampled_pos - if self.neg_pos_ub >= 0: - _pos = max(1, num_sampled_pos) - neg_upper_bound = int(self.neg_pos_ub * _pos) - if num_expected_neg > neg_upper_bound: - num_expected_neg = neg_upper_bound - neg_inds = self.neg_sampler._sample_neg( - assign_result, num_expected_neg, bboxes=bboxes, **kwargs) - neg_inds = neg_inds.unique() - - sampling_result = SamplingResult(pos_inds, neg_inds, bboxes, gt_bboxes, - assign_result, gt_flags) - return sampling_result diff --git a/spaces/rorallitri/biomedical-language-models/logs/Discover Sayo Lamang A Romantic Filipino Song with Sheet Music for Various Ensembles.md b/spaces/rorallitri/biomedical-language-models/logs/Discover Sayo Lamang A Romantic Filipino Song with Sheet Music for Various Ensembles.md deleted file mode 100644 index 21259b28e1ddc98b50da3b35c7713f19adab9a6b..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Discover Sayo Lamang A Romantic Filipino Song with Sheet Music for Various Ensembles.md +++ /dev/null @@ -1,5 +0,0 @@ - -

      da L.. II.. International Spring Symposium on Health Sciences (6th: 1986: Washington, ... us to go a long way in understanding lipoprotein metabolism, we have almost ... The gel was treated for fluorography and exposed to x-ray films (Tarugi et ..
      118. cp341 modbus without dongle crack

      -

      cp341 modbus without dongle crack


      Download · https://tinurll.com/2uzoIi



      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Ray Kurzweil The Singularity Is Near Pdf 23 and Learn How to Transcend Biology.md b/spaces/rorallitri/biomedical-language-models/logs/Download Ray Kurzweil The Singularity Is Near Pdf 23 and Learn How to Transcend Biology.md deleted file mode 100644 index ae205982adc7b0d2186484550991943a23489ac8..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Ray Kurzweil The Singularity Is Near Pdf 23 and Learn How to Transcend Biology.md +++ /dev/null @@ -1,18 +0,0 @@ -
      -

      By 2045, artificial intelligence (AI) has reached a level of development that is beginning to reshape human society and culture in profound ways. This year marks the date of the so-called technological singularity postulated by futurist Ray Kurzweil.* Although Kurzweil tended to be overly optimistic in a number of specific future predictions,** his basic premise of exponential growth in technology proved to be accurate.
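
      Kurzweil's "exponential growth" premise is easiest to appreciate with a quick back-of-the-envelope calculation. The sketch below uses made-up round numbers (a 2005 baseline and an assumed 1.5-year doubling time for price-performance), not figures from the book, purely to show how a fixed doubling period compounds by 2045.

```python
# Illustrative only: how a fixed doubling period compounds over four decades.
# The 1.5-year doubling time and the 2005 baseline are assumed round numbers,
# not figures taken from Kurzweil's book.
def growth_factor(years_elapsed, doubling_time_years):
    return 2.0 ** (years_elapsed / doubling_time_years)

baseline_year = 2005
for year in (2015, 2025, 2035, 2045):
    factor = growth_factor(year - baseline_year, doubling_time_years=1.5)
    print(f"{year}: ~{factor:,.0f}x the {baseline_year} level")
```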

      -

      Ray Kurzweil The Singularity Is Near Pdf 23


      DOWNLOAD 🆗 https://tinurll.com/2uzmvj



      -

      In the past, limited processing power meant that robots would often spend minutes identifying an object or situation and working out the required interaction. By 2045, however, these calculations can be performed in near real time, enabling a much more human-like response. Although a few technological hurdles remain, this is close to what many would consider artificial general intelligence (AGI).*

      -

      In South Africa's Kruger national park, a major conservation area, nearly 60% of the species under its protection have been lost. In the same region, 35% of proteaceae flowering plants have disappeared including the country's national flower, the King Protea.*

      -

      In the Arctic, nearly 70% of polar bears have disappeared due to the shrinking of summer ice caused by global warming. By 2080 they will disappear from Greenland entirely, and from the northern Canadian coast, leaving only dwindling numbers in the interior Arctic archipelago.

      -

      On a more cheery note, one could ask, not when the richest and most technologically advanced nations reach the singularity, but when will everybody have access to (e.g.) clean water? I recently became aware of one of your colleagues teaches a very interesting course on technology aimed at the poorest people:

      -

      -

      Yes and yes. My own prediction is that by most measures, there will not be nearly as much technological change in the 21st century as there was in the 20th. I see many aspects of civilization as approaching the right-hand-side of a sigmoid (in the best case) or bell curve (in the worst case).

      -

      Many of the goals of transhumanism and the expected results of a technological singularity can be achieved without AI or diamondoid molecular nanotechnology. It just requires more organization and effort, and there are other technologies being worked on now to make it happen. [Up several comments is the reference to my article on a mundane singularity, which is how to make it happen without diamondoid mechanosynthesis, without much nanotech beyond what is already working, and without AI.]

      -

      Maybe the ultimate computer that could come close to resolving P=NP, and could also simulate a human brain, would be a parallel computer with a growing number of processors, like cells multiplying in the human body. If enough processors could run a Monte Carlo, Las Vegas, or genetic algorithm to approximate a solution to SAT or something else in the NP class, it could be a good candidate for an intelligent being that could solve at least a subset of NP. Maybe we humans use randomness plus a lot of computing power to make discoveries in science, for example.

      -

      Lattice Quantum Chromodynamics (LQCD) was a promising field in the late 20th and early 21st centuries. This allowed researchers to simulate objects and processes in near-perfect detail, using resolutions based on the fundamental physical laws. By the 2010s, for example, individual proton masses could be determined at error margins close to one percent. During the 2020s, exascale computing helped to further refine the nuclear forces and uncover exotic "new physics" beyond the Standard Model.

      -

      The first person to use the concept of a "singularity" in the technological context was John von Neumann.[5] Stanislaw Ulam reports a 1958 discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[6] Subsequent authors have echoed this viewpoint.[3][7]

      -

      The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity is Near, predicting singularity by 2045.[7]

      -

      Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[9][10] The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

      -

      Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made was that the artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones, as was observed in previously developed human technologies.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (veer Zaara Full Movie Hd 1080p Free ) [UPD].md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (veer Zaara Full Movie Hd 1080p Free ) [UPD].md deleted file mode 100644 index ba4ff1f6dbed457e0d0066887a888fd3b211cae7..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (veer Zaara Full Movie Hd 1080p Free ) [UPD].md +++ /dev/null @@ -1,6 +0,0 @@ -

      HD Online Player (veer zaara full movie hd 1080p free )


      Download Ziphttps://tinurll.com/2uznVj



      - -Free Veer Zaara Full Movie Hd 1080p Dailymotion download. Full Hindi Movie 2018 Watch Online Golmaal Again Raees Judwaa 2 Toilet: Ek Prem Katha Tubelight Kaabil Badrinath Ki ... HD Online Player Veer Zaara 2004 Hindi 720p BRRip. 1fdad05405
      -
      -
      -

      diff --git a/spaces/rorallitri/biomedical-language-models/logs/Komban Tamil Movie Songs Download Starmusiq Song The Ultimate Collection of Komban Tracks.md b/spaces/rorallitri/biomedical-language-models/logs/Komban Tamil Movie Songs Download Starmusiq Song The Ultimate Collection of Komban Tracks.md deleted file mode 100644 index 55f17209f106c9b8142b80141b5f79ef65ca08d5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Komban Tamil Movie Songs Download Starmusiq Song The Ultimate Collection of Komban Tracks.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Komban Tamil Movie Songs Download Starmusiq Song


      Download File »»» https://tinurll.com/2uzlZy



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/sarulab-speech/UTMOS-demo/score.py b/spaces/sarulab-speech/UTMOS-demo/score.py deleted file mode 100644 index 762d45ffa8a396d0265eb9a30a110839d887a1f2..0000000000000000000000000000000000000000 --- a/spaces/sarulab-speech/UTMOS-demo/score.py +++ /dev/null @@ -1,122 +0,0 @@ - - -import lightning_module -import torch -import torchaudio -import unittest - -class Score: - """Predicting score for each audio clip.""" - - def __init__( - self, - ckpt_path: str = "epoch=3-step=7459.ckpt", - input_sample_rate: int = 16000, - device: str = "cpu"): - """ - Args: - ckpt_path: path to pretrained checkpoint of UTMOS strong learner. - input_sample_rate: sampling rate of input audio tensor. The input audio tensor - is automatically downsampled to 16kHz. - """ - print(f"Using device: {device}") - self.device = device - self.model = lightning_module.BaselineLightningModule.load_from_checkpoint( - ckpt_path).eval().to(device) - self.in_sr = input_sample_rate - self.resampler = torchaudio.transforms.Resample( - orig_freq=input_sample_rate, - new_freq=16000, - resampling_method="sinc_interpolation", - lowpass_filter_width=6, - dtype=torch.float32, - ).to(device) - - def score(self, wavs: torch.tensor) -> torch.tensor: - """ - Args: - wavs: audio waveform to be evaluated. When len(wavs) == 1 or 2, - the model processes the input as a single audio clip. The model - performs batch processing when len(wavs) == 3. - """ - if len(wavs.shape) == 1: - out_wavs = wavs.unsqueeze(0).unsqueeze(0) - elif len(wavs.shape) == 2: - out_wavs = wavs.unsqueeze(0) - elif len(wavs.shape) == 3: - out_wavs = wavs - else: - raise ValueError('Dimension of input tensor needs to be <= 3.') - if self.in_sr != 16000: - out_wavs = self.resampler(out_wavs) - bs = out_wavs.shape[0] - batch = { - 'wav': out_wavs, - 'domains': torch.zeros(bs, dtype=torch.int).to(self.device), - 'judge_id': torch.ones(bs, dtype=torch.int).to(self.device)*288 - } - with torch.no_grad(): - output = self.model(batch) - - return output.mean(dim=1).squeeze(1).cpu().detach().numpy()*2 + 3 - - -class TestFunc(unittest.TestCase): - """Test class.""" - - def test_1dim_0(self): - scorer = Score(input_sample_rate=16000) - seq_len = 10000 - inp_audio = torch.ones(seq_len) - pred = scorer.score(inp_audio) - self.assertGreaterEqual(pred, 0.) - self.assertLessEqual(pred, 5.) - - def test_1dim_1(self): - scorer = Score(input_sample_rate=24000) - seq_len = 10000 - inp_audio = torch.ones(seq_len) - pred = scorer.score(inp_audio) - self.assertGreaterEqual(pred, 0.) - self.assertLessEqual(pred, 5.) - - def test_2dim_0(self): - scorer = Score(input_sample_rate=16000) - seq_len = 10000 - inp_audio = torch.ones(1, seq_len) - pred = scorer.score(inp_audio) - self.assertGreaterEqual(pred, 0.) - self.assertLessEqual(pred, 5.) - - def test_2dim_1(self): - scorer = Score(input_sample_rate=24000) - seq_len = 10000 - inp_audio = torch.ones(1, seq_len) - pred = scorer.score(inp_audio) - print(pred) - print(pred.shape) - self.assertGreaterEqual(pred, 0.) - self.assertLessEqual(pred, 5.) - - def test_3dim_0(self): - scorer = Score(input_sample_rate=16000) - seq_len = 10000 - batch = 8 - inp_audio = torch.ones(batch, 1, seq_len) - pred = scorer.score(inp_audio) - for p in pred: - self.assertGreaterEqual(p, 0.) - self.assertLessEqual(p, 5.) - - def test_3dim_1(self): - scorer = Score(input_sample_rate=24000) - seq_len = 10000 - batch = 8 - inp_audio = torch.ones(batch, 1, seq_len) - pred = scorer.score(inp_audio) - for p in pred: - self.assertGreaterEqual(p, 0.) 
- self.assertLessEqual(p, 5.) - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Download 720p Maidan-E-Jung Movi !FULL!.md b/spaces/scedlatioru/img-to-music/example/Download 720p Maidan-E-Jung Movi !FULL!.md deleted file mode 100644 index 2eb4b7a4944b279f4ab3d8171711bec24dd74bc6..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download 720p Maidan-E-Jung Movi !FULL!.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Download 720p Maidan-E-Jung Movi


      Downloadhttps://gohhs.com/2uEAw7



      -
      -Do Premee (HD) - Hindi Full Movie - Rishi Kapoor | Moushumi ... house, ... Maidan-E-Jung kannada full movie 3gp download Dastak-A deadly ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Godzilla 2014 Movie In Hindi Mp4 VERIFIED.md b/spaces/scedlatioru/img-to-music/example/Godzilla 2014 Movie In Hindi Mp4 VERIFIED.md deleted file mode 100644 index a86cae74c997527ed57ef4fb853f159758979010..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Godzilla 2014 Movie In Hindi Mp4 VERIFIED.md +++ /dev/null @@ -1,27 +0,0 @@ -
      -

      Godzilla 2014 Movie In Hindi Mp4: A Review

      -

      Godzilla 2014 Movie In Hindi Mp4 is a dubbed version of the American monster film Godzilla, directed by Gareth Edwards and starring Aaron Taylor-Johnson, Bryan Cranston, Elizabeth Olsen, and Ken Watanabe. The film is a reboot of the Godzilla franchise and the first film in the MonsterVerse, a shared cinematic universe featuring Godzilla and other giant creatures.

      -

      Godzilla 2014 Movie In Hindi Mp4


      Download Zip ····· https://gohhs.com/2uEz79



      -

      In this article, we will review Godzilla 2014 Movie In Hindi Mp4 and discuss its plot, characters, visual effects, and reception. We will also provide some links where you can watch or download Godzilla 2014 Movie In Hindi Mp4 online.

      -

      Plot

      -

      The plot of Godzilla 2014 Movie In Hindi Mp4 follows the story of Ford Brody, a US Navy explosive ordnance disposal officer who gets involved in a global crisis caused by the awakening of two ancient parasitic monsters known as MUTOs (Massive Unidentified Terrestrial Organisms). The MUTOs feed on nuclear energy and wreak havoc across the Pacific Rim, while being pursued by Godzilla, an ancient alpha predator who is the natural balance to their existence.

      -

      Ford joins forces with his father Joe, a former nuclear engineer who lost his wife in a mysterious accident at a Japanese nuclear plant in 1999, and Dr. Ishiro Serizawa, a scientist who works for a secret organization called Monarch that studies Godzilla and other giant creatures. Together, they try to stop the MUTOs from reaching their mating ground in San Francisco and unleashing a nuclear catastrophe.

      -

      Characters

      -

      The main characters of Godzilla 2014 Movie In Hindi Mp4 are:

      -
        -
      • Ford Brody (Aaron Taylor-Johnson): The protagonist of the film, a US Navy officer who tries to reunite with his family and stop the MUTOs.
      • -
      • Joe Brody (Bryan Cranston): Ford's father, a former nuclear engineer who is obsessed with finding out the truth behind his wife's death and the origin of the MUTOs.
      • -
      • Elle Brody (Elizabeth Olsen): Ford's wife, a nurse who works at a San Francisco hospital and cares for their son Sam.
      • -
      • Dr. Ishiro Serizawa (Ken Watanabe): A scientist who works for Monarch and has been studying Godzilla for years. He believes that Godzilla is the key to restoring the natural order.
      • -
      • Godzilla: The titular monster of the film, a prehistoric creature who emerges from the ocean to fight the MUTOs and restore balance to the world.
      • -
      -

      Visual Effects

      -

      The visual effects of Godzilla 2014 Movie In Hindi Mp4 are impressive and realistic. The film uses a combination of computer-generated imagery (CGI) and practical effects to create the monsters and their destruction. The film also uses sound design and cinematography to create a sense of scale and suspense.

      -

      -

      The design of Godzilla is based on the original Japanese version of the character, but with some modern updates. Godzilla stands at 355 feet tall and weighs 90,000 tons. He has charcoal-gray scales, gills, dorsal plates that glow blue when he unleashes his atomic breath, and a long tail that he uses as a weapon. He also has expressive eyes and facial features that convey his emotions and intelligence.

      -

      The design of the MUTOs is inspired by various insects and arachnids. They have black exoskeletons, red eyes, bioluminescent markings, and wings. The male MUTO is smaller and faster than the female MUTO, while the female MUTO is larger and stronger than the male MUTO. They communicate through echolocation and electromagnetic pulses.

      -

      Reception

      -

      Godzilla 2014 Movie In Hindi Mp4 received mixed to positive reviews from critics and audiences. The film was praised for its visual effects, sound design, action sequences, and homage to the original Godzilla films. However, the film was also criticized for its slow pace, thin plot, underdeveloped characters, and lack of screen time for Godzilla.

      -

      The film was a box office success, grossing over $529 million worldwide against a budget of $160 million. It was also nominated for several awards, including Best Visual Effects at the Academy Awards. The film spawned two sequels

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/KMSpico 11.1.9 Portable Serial Key.md b/spaces/scedlatioru/img-to-music/example/KMSpico 11.1.9 Portable Serial Key.md deleted file mode 100644 index 00ea4772e68b7f8b130ff38db89f246d1e6ca0d8..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/KMSpico 11.1.9 Portable Serial Key.md +++ /dev/null @@ -1,11 +0,0 @@ -
      -

      hi, i want to hack microsoft office 2012
      i want to generate, when i run it will install the office in trial mode for 30 days and make a legit license key so that i cannot take the product again.
      is it possible, i tried keylist but it is not working.
      thanks in advance.

      -

      KMSpico 11.1.9 Portable Serial Key


      Download Ziphttps://gohhs.com/2uEyMd



      -

      i have 2 questions about kmspico:
      what is the difference between the office activator and the kmspico?
      is kmspico "safe" to use on a already licensed copy of office? or will it alter the file structure or something?
      thanks!

      regards,
      paul

      -

      hi guys,
      i have a question: i am using kmspico to activate office 2013, and all of a sudden, the game message returned me to the original office 2013 license key, the one i gave to microsoft. however, the "office 2013" string on the oem window as well as the "new" button still there.
      is there something i missed?
      thanks in advance.

      -

      hello,
      i am wondering if i could use kmspico to activate my 2015 windows edition office?
      will it possible? and if so i am wondering if there would be any hidden side effects in my computer such as system files getting changed?
      thanks in advance!

      -

      -

      this tool is available in multiple languages such as english, french, german, spanish,.. for example, after activating using this tool we will be able to remove that annoying watermark, and we get a genuine license that lasts for the rest of life. however, the best part of kmspico is that we also get the ota update which not all the activation tools provide.

      -

      hello. windows 7ulti oem was installed on my macbook running as a virtual machine with parallel 10. i took over the mac from a friend with the software preinstalled. somehow he had the windows7 reinstalled & it now became deactivated after 30days. the original key couldnt be recognized. can i use kmspico to re-activate my windows 7ultimate oem as virtual machine will it alter the system of my mac in the process
      thanks.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/sdhsdhk/bingo111/src/lib/hooks/chat-history.ts b/spaces/sdhsdhk/bingo111/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/sdhsdhk/bingosjj/src/lib/bots/bing/types.ts b/spaces/sdhsdhk/bingosjj/src/lib/bots/bing/types.ts deleted file mode 100644 index 02cd5e8b01e3529642d28dc1539bf958f4ac420b..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingosjj/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,259 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: 
ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export 
interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/seduerr/text_analytics/text_analytics/pipes/causal_connectives_tagger.py b/spaces/seduerr/text_analytics/text_analytics/pipes/causal_connectives_tagger.py deleted file mode 100644 index 411e9906170687b09419df6dc4162c0e9e160fb2..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/pipes/causal_connectives_tagger.py +++ /dev/null @@ -1,34 +0,0 @@ -from spacy.matcher import PhraseMatcher -from spacy.tokens import Doc -from spacy.tokens import Span -from spacy.util import filter_spans -from spacy.language import Language - -from text_analytics.constants import ACCEPTED_LANGUAGES - -causal_connectives_getter = lambda doc: [doc[span['start']:span['end']] for span in doc._.causal_connectives_span_indices] - -Doc.set_extension('causal_connectives_span_indices', force=False, default=[]) -Doc.set_extension('causal_connectives', force=False, getter=causal_connectives_getter) - -@Language.factory('causal connective tagger') -class CausalConnectivesTagger: - def __init__(self, name, nlp, language) -> None: - self._language = language - self._matcher = PhraseMatcher(nlp.vocab, attr='LOWER') - self.causal_connectives = [] - if language == 'en': - self.causal_connectives = ['to repeat, briefly', 'finally', 'therefore', 'with this in mind', 'in conclusion', 'because of this', 'because of', 'as a consequence', 'to this end', 'on the score of', 'then', 'because', 'so', 'later', 'hence', 'in short', 'for this reason', 'thus', 'so much that', 'accordingly', 'for', 'so then', 'as I have said', 'therefore', 'in summary', 'on the whole', 'consequently', 'for this purpose', 'since', 'as a result', 'to sum up', 'so that', 'as you can see'] - else: - pass - for con in self.causal_connectives: - self._matcher.add(con, None, nlp(con)) - - def __call__(self, doc: Doc) -> Doc: - matches = self._matcher(doc) - causal_connectives_spans = [doc[start:end] for _, start, end in matches] - doc._.causal_connectives_span_indices = [{'start': span.start, - 'end': span.end, - 'label': span.label} - for span in filter_spans(causal_connectives_spans)] - return doc \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet2/asr/encoder/conformer_encoder.py b/spaces/segments-tobias/conex/espnet2/asr/encoder/conformer_encoder.py deleted file mode 100644 index 2c9608301196763ac89dc04dffc7c3ca2cae4a5d..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/asr/encoder/conformer_encoder.py +++ /dev/null @@ -1,304 +0,0 @@ -# Copyright 2020 
Tomoki Hayashi -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Conformer encoder definition.""" - -from typing import Optional -from typing import Tuple - -import logging -import torch - -from typeguard import check_argument_types - -from espnet.nets.pytorch_backend.conformer.convolution import ConvolutionModule -from espnet.nets.pytorch_backend.conformer.encoder_layer import EncoderLayer -from espnet.nets.pytorch_backend.nets_utils import get_activation -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask -from espnet.nets.pytorch_backend.transformer.attention import ( - MultiHeadedAttention, # noqa: H301 - RelPositionMultiHeadedAttention, # noqa: H301 - LegacyRelPositionMultiHeadedAttention, # noqa: H301 -) -from espnet.nets.pytorch_backend.transformer.embedding import ( - PositionalEncoding, # noqa: H301 - ScaledPositionalEncoding, # noqa: H301 - RelPositionalEncoding, # noqa: H301 - LegacyRelPositionalEncoding, # noqa: H301 -) -from espnet.nets.pytorch_backend.transformer.layer_norm import LayerNorm -from espnet.nets.pytorch_backend.transformer.multi_layer_conv import Conv1dLinear -from espnet.nets.pytorch_backend.transformer.multi_layer_conv import MultiLayeredConv1d -from espnet.nets.pytorch_backend.transformer.positionwise_feed_forward import ( - PositionwiseFeedForward, # noqa: H301 -) -from espnet.nets.pytorch_backend.transformer.repeat import repeat -from espnet.nets.pytorch_backend.transformer.subsampling import check_short_utt -from espnet.nets.pytorch_backend.transformer.subsampling import Conv2dSubsampling -from espnet.nets.pytorch_backend.transformer.subsampling import Conv2dSubsampling6 -from espnet.nets.pytorch_backend.transformer.subsampling import Conv2dSubsampling8 -from espnet.nets.pytorch_backend.transformer.subsampling import TooShortUttError -from espnet2.asr.encoder.abs_encoder import AbsEncoder - - -class ConformerEncoder(AbsEncoder): - """Conformer encoder module. - - Args: - input_size (int): Input dimension. - output_size (int): Dimention of attention. - attention_heads (int): The number of heads of multi head attention. - linear_units (int): The number of units of position-wise feed forward. - num_blocks (int): The number of decoder blocks. - dropout_rate (float): Dropout rate. - attention_dropout_rate (float): Dropout rate in attention. - positional_dropout_rate (float): Dropout rate after adding positional encoding. - input_layer (Union[str, torch.nn.Module]): Input layer type. - normalize_before (bool): Whether to use layer_norm before the first block. - concat_after (bool): Whether to concat attention layer's input and output. - If True, additional linear will be applied. - i.e. x -> x + linear(concat(x, att(x))) - If False, no additional linear will be applied. i.e. x -> x + att(x) - positionwise_layer_type (str): "linear", "conv1d", or "conv1d-linear". - positionwise_conv_kernel_size (int): Kernel size of positionwise conv1d layer. - rel_pos_type (str): Whether to use the latest relative positional encoding or - the legacy one. The legacy relative positional encoding will be deprecated - in the future. More Details can be found in - https://github.com/espnet/espnet/pull/2816. - encoder_pos_enc_layer_type (str): Encoder positional encoding layer type. - encoder_attn_layer_type (str): Encoder attention layer type. - activation_type (str): Encoder activation function type. - macaron_style (bool): Whether to use macaron style for positionwise layer. - use_cnn_module (bool): Whether to use convolution module. 
- zero_triu (bool): Whether to zero the upper triangular part of attention matrix. - cnn_module_kernel (int): Kernerl size of convolution module. - padding_idx (int): Padding idx for input_layer=embed. - - """ - - def __init__( - self, - input_size: int, - output_size: int = 256, - attention_heads: int = 4, - linear_units: int = 2048, - num_blocks: int = 6, - dropout_rate: float = 0.1, - positional_dropout_rate: float = 0.1, - attention_dropout_rate: float = 0.0, - input_layer: str = "conv2d", - normalize_before: bool = True, - concat_after: bool = False, - positionwise_layer_type: str = "linear", - positionwise_conv_kernel_size: int = 3, - macaron_style: bool = False, - rel_pos_type: str = "legacy", - pos_enc_layer_type: str = "rel_pos", - selfattention_layer_type: str = "rel_selfattn", - activation_type: str = "swish", - use_cnn_module: bool = True, - zero_triu: bool = False, - cnn_module_kernel: int = 31, - padding_idx: int = -1, - ): - assert check_argument_types() - super().__init__() - self._output_size = output_size - - if rel_pos_type == "legacy": - if pos_enc_layer_type == "rel_pos": - pos_enc_layer_type = "legacy_rel_pos" - if selfattention_layer_type == "rel_selfattn": - selfattention_layer_type = "legacy_rel_selfattn" - elif rel_pos_type == "latest": - assert selfattention_layer_type != "legacy_rel_selfattn" - assert pos_enc_layer_type != "legacy_rel_pos" - else: - raise ValueError("unknown rel_pos_type: " + rel_pos_type) - - activation = get_activation(activation_type) - if pos_enc_layer_type == "abs_pos": - pos_enc_class = PositionalEncoding - elif pos_enc_layer_type == "scaled_abs_pos": - pos_enc_class = ScaledPositionalEncoding - elif pos_enc_layer_type == "rel_pos": - assert selfattention_layer_type == "rel_selfattn" - pos_enc_class = RelPositionalEncoding - elif pos_enc_layer_type == "legacy_rel_pos": - assert selfattention_layer_type == "legacy_rel_selfattn" - pos_enc_class = LegacyRelPositionalEncoding - logging.warning( - "Using legacy_rel_pos and it will be deprecated in the future." 
- ) - else: - raise ValueError("unknown pos_enc_layer: " + pos_enc_layer_type) - - if input_layer == "linear": - self.embed = torch.nn.Sequential( - torch.nn.Linear(input_size, output_size), - torch.nn.LayerNorm(output_size), - torch.nn.Dropout(dropout_rate), - pos_enc_class(output_size, positional_dropout_rate), - ) - elif input_layer == "conv2d": - self.embed = Conv2dSubsampling( - input_size, - output_size, - dropout_rate, - pos_enc_class(output_size, positional_dropout_rate), - ) - elif input_layer == "conv2d6": - self.embed = Conv2dSubsampling6( - input_size, - output_size, - dropout_rate, - pos_enc_class(output_size, positional_dropout_rate), - ) - elif input_layer == "conv2d8": - self.embed = Conv2dSubsampling8( - input_size, - output_size, - dropout_rate, - pos_enc_class(output_size, positional_dropout_rate), - ) - elif input_layer == "embed": - self.embed = torch.nn.Sequential( - torch.nn.Embedding(input_size, output_size, padding_idx=padding_idx), - pos_enc_class(output_size, positional_dropout_rate), - ) - elif isinstance(input_layer, torch.nn.Module): - self.embed = torch.nn.Sequential( - input_layer, - pos_enc_class(output_size, positional_dropout_rate), - ) - elif input_layer is None: - self.embed = torch.nn.Sequential( - pos_enc_class(output_size, positional_dropout_rate) - ) - else: - raise ValueError("unknown input_layer: " + input_layer) - self.normalize_before = normalize_before - if positionwise_layer_type == "linear": - positionwise_layer = PositionwiseFeedForward - positionwise_layer_args = ( - output_size, - linear_units, - dropout_rate, - activation, - ) - elif positionwise_layer_type == "conv1d": - positionwise_layer = MultiLayeredConv1d - positionwise_layer_args = ( - output_size, - linear_units, - positionwise_conv_kernel_size, - dropout_rate, - ) - elif positionwise_layer_type == "conv1d-linear": - positionwise_layer = Conv1dLinear - positionwise_layer_args = ( - output_size, - linear_units, - positionwise_conv_kernel_size, - dropout_rate, - ) - else: - raise NotImplementedError("Support only linear or conv1d.") - - if selfattention_layer_type == "selfattn": - encoder_selfattn_layer = MultiHeadedAttention - encoder_selfattn_layer_args = ( - attention_heads, - output_size, - attention_dropout_rate, - ) - elif selfattention_layer_type == "legacy_rel_selfattn": - assert pos_enc_layer_type == "legacy_rel_pos" - encoder_selfattn_layer = LegacyRelPositionMultiHeadedAttention - encoder_selfattn_layer_args = ( - attention_heads, - output_size, - attention_dropout_rate, - ) - logging.warning( - "Using legacy_rel_selfattn and it will be deprecated in the future." 
- ) - elif selfattention_layer_type == "rel_selfattn": - assert pos_enc_layer_type == "rel_pos" - encoder_selfattn_layer = RelPositionMultiHeadedAttention - encoder_selfattn_layer_args = ( - attention_heads, - output_size, - attention_dropout_rate, - zero_triu, - ) - else: - raise ValueError("unknown encoder_attn_layer: " + selfattention_layer_type) - - convolution_layer = ConvolutionModule - convolution_layer_args = (output_size, cnn_module_kernel, activation) - - self.encoders = repeat( - num_blocks, - lambda lnum: EncoderLayer( - output_size, - encoder_selfattn_layer(*encoder_selfattn_layer_args), - positionwise_layer(*positionwise_layer_args), - positionwise_layer(*positionwise_layer_args) if macaron_style else None, - convolution_layer(*convolution_layer_args) if use_cnn_module else None, - dropout_rate, - normalize_before, - concat_after, - ), - ) - if self.normalize_before: - self.after_norm = LayerNorm(output_size) - - def output_size(self) -> int: - return self._output_size - - def forward( - self, - xs_pad: torch.Tensor, - ilens: torch.Tensor, - prev_states: torch.Tensor = None, - ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]: - """Calculate forward propagation. - - Args: - xs_pad (torch.Tensor): Input tensor (#batch, L, input_size). - ilens (torch.Tensor): Input length (#batch). - prev_states (torch.Tensor): Not to be used now. - - Returns: - torch.Tensor: Output tensor (#batch, L, output_size). - torch.Tensor: Output length (#batch). - torch.Tensor: Not to be used now. - - """ - masks = (~make_pad_mask(ilens)[:, None, :]).to(xs_pad.device) - - if ( - isinstance(self.embed, Conv2dSubsampling) - or isinstance(self.embed, Conv2dSubsampling6) - or isinstance(self.embed, Conv2dSubsampling8) - ): - short_status, limit_size = check_short_utt(self.embed, xs_pad.size(1)) - if short_status: - raise TooShortUttError( - f"has {xs_pad.size(1)} frames and is too short for subsampling " - + f"(it needs more than {limit_size} frames), return empty results", - xs_pad.size(1), - limit_size, - ) - xs_pad, masks = self.embed(xs_pad, masks) - else: - xs_pad = self.embed(xs_pad) - xs_pad, masks = self.encoders(xs_pad, masks) - if isinstance(xs_pad, tuple): - xs_pad = xs_pad[0] - if self.normalize_before: - xs_pad = self.after_norm(xs_pad) - - olens = masks.squeeze(1).sum(1) - return xs_pad, olens, None diff --git a/spaces/segments-tobias/conex/espnet2/layers/label_aggregation.py b/spaces/segments-tobias/conex/espnet2/layers/label_aggregation.py deleted file mode 100644 index 2070a888a84849ce28a877a122fb415c245c42b6..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/layers/label_aggregation.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch -from typeguard import check_argument_types -from typing import Optional -from typing import Tuple - -from espnet.nets.pytorch_backend.nets_utils import make_pad_mask - - -class LabelAggregate(torch.nn.Module): - def __init__( - self, - win_length: int = 512, - hop_length: int = 128, - center: bool = True, - ): - assert check_argument_types() - super().__init__() - - self.win_length = win_length - self.hop_length = hop_length - self.center = center - - def extra_repr(self): - return ( - f"win_length={self.win_length}, " - f"hop_length={self.hop_length}, " - f"center={self.center}, " - ) - - def forward( - self, input: torch.Tensor, ilens: torch.Tensor = None - ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]: - """LabelAggregate forward function. 
- - Args: - input: (Batch, Nsamples, Label_dim) - ilens: (Batch) - Returns: - output: (Batch, Frames, Label_dim) - - """ - bs = input.size(0) - max_length = input.size(1) - label_dim = input.size(2) - - # NOTE(jiatong): - # The default behaviour of label aggregation is compatible with - # torch.stft about framing and padding. - - # Step1: center padding - if self.center: - pad = self.win_length // 2 - max_length = max_length + 2 * pad - input = torch.nn.functional.pad(input, (0, 0, pad, pad), "constant", 0) - nframe = (max_length - self.win_length) // self.hop_length + 1 - - # Step2: framing - output = input.as_strided( - (bs, nframe, self.win_length, label_dim), - (max_length * label_dim, self.hop_length * label_dim, label_dim, 1), - ) - - # Step3: aggregate label - output = torch.gt(output.sum(dim=2, keepdim=False), self.win_length // 2) - output = output.float() - - # Step4: process lengths - if ilens is not None: - if self.center: - pad = self.win_length // 2 - ilens = ilens + 2 * pad - - olens = (ilens - self.win_length) // self.hop_length + 1 - output.masked_fill_(make_pad_mask(olens, output, 1), 0.0) - else: - olens = None - - return output, olens diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/csrc/vision.cpp b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/csrc/vision.cpp deleted file mode 100644 index c1f2c50c82909bbd5492c163d634af77a3ba1781..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/csrc/vision.cpp +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -#include "MsDeformAttn/ms_deform_attn.h" - -namespace groundingdino { - -#ifdef WITH_CUDA -extern int get_cudart_version(); -#endif - -std::string get_cuda_version() { -#ifdef WITH_CUDA - std::ostringstream oss; - - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." << (v / 10 % 100); - if (v % 10 != 0) { - oss << "." << (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else - return std::string("not available"); -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." 
- << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward"); - m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward"); -} - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/shashi141/MyGenAIChatBot/README.md b/spaces/shashi141/MyGenAIChatBot/README.md deleted file mode 100644 index 743f7c20f27636bd420fd171e0c012a2f107c76b..0000000000000000000000000000000000000000 --- a/spaces/shashi141/MyGenAIChatBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MyGenAIChatBot -emoji: 🏃 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shi-labs/OneFormer/oneformer/utils/events.py b/spaces/shi-labs/OneFormer/oneformer/utils/events.py deleted file mode 100644 index d1d27ac6ecef656f1aa86649ceacb54470765821..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/utils/events.py +++ /dev/null @@ -1,120 +0,0 @@ -import os -import wandb -from detectron2.utils import comm -from detectron2.utils.events import EventWriter, get_event_storage - - -def setup_wandb(cfg, args): - if comm.is_main_process(): - init_args = { - k.lower(): v - for k, v in cfg.WANDB.items() - if isinstance(k, str) and k not in ["config"] - } - # only include most related part to avoid too big table - # TODO: add configurable params to select which part of `cfg` should be saved in config - if "config_exclude_keys" in init_args: - init_args["config"] = cfg - init_args["config"]["cfg_file"] = args.config_file - else: - init_args["config"] = { - "model": cfg.MODEL, - "solver": cfg.SOLVER, - "cfg_file": args.config_file, - } - if ("name" not in init_args) or (init_args["name"] is None): - init_args["name"] = os.path.basename(args.config_file) - else: - init_args["name"] = init_args["name"] + '_' + os.path.basename(args.config_file) - wandb.init(**init_args) - - -class BaseRule(object): - def __call__(self, target): - return target - - -class IsIn(BaseRule): - def __init__(self, keyword: str): - self.keyword = keyword - - def __call__(self, target): - return self.keyword in target - - -class Prefix(BaseRule): - def __init__(self, keyword: str): - self.keyword = keyword - - def __call__(self, target): - return "/".join([self.keyword, target]) - - -class WandbWriter(EventWriter): - """ - Write all scalars to a tensorboard file. - """ - - def __init__(self): - """ - Args: - log_dir (str): the directory to save the output events - kwargs: other arguments passed to `torch.utils.tensorboard.SummaryWriter(...)` - """ - self._last_write = -1 - self._group_rules = [ - (IsIn("/"), BaseRule()), - (IsIn("loss"), Prefix("train")), - ] - - def write(self): - - storage = get_event_storage() - - def _group_name(scalar_name): - for (rule, op) in self._group_rules: - if rule(scalar_name): - return op(scalar_name) - return scalar_name - - stats = { - _group_name(name): scalars[0] - for name, scalars in storage.latest().items() - if scalars[1] > self._last_write - } - if len(stats) > 0: - self._last_write = max([v[1] for k, v in storage.latest().items()]) - - # storage.put_{image,histogram} is only meant to be used by - # tensorboard writer. So we access its internal fields directly from here. 
- if len(storage._vis_data) >= 1: - stats["image"] = [ - wandb.Image(img, caption=img_name) - for img_name, img, step_num in storage._vis_data - ] - # Storage stores all image data and rely on this writer to clear them. - # As a result it assumes only one writer will use its image data. - # An alternative design is to let storage store limited recent - # data (e.g. only the most recent image) that all writers can access. - # In that case a writer may not see all image data if its period is long. - storage.clear_images() - - if len(storage._histograms) >= 1: - - def create_bar(tag, bucket_limits, bucket_counts, **kwargs): - data = [ - [label, val] for (label, val) in zip(bucket_limits, bucket_counts) - ] - table = wandb.Table(data=data, columns=["label", "value"]) - return wandb.plot.bar(table, "label", "value", title=tag) - - stats["hist"] = [create_bar(**params) for params in storage._histograms] - - storage.clear_histograms() - - if len(stats) == 0: - return - wandb.log(stats, step=storage.iter) - - def close(self): - wandb.finish() \ No newline at end of file diff --git a/spaces/shimizukawa/python-no-senpai/app.py b/spaces/shimizukawa/python-no-senpai/app.py deleted file mode 100644 index 2741523e1e804578224db95d5eedd2eb488ff049..0000000000000000000000000000000000000000 --- a/spaces/shimizukawa/python-no-senpai/app.py +++ /dev/null @@ -1,282 +0,0 @@ -from datetime import datetime -from time import time -from typing import Iterable - -import streamlit as st -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline -from langchain.llms import HuggingFacePipeline -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import Qdrant -from qdrant_client import QdrantClient -from qdrant_client.http.models import Filter, FieldCondition, MatchValue, Range -from langchain.chains import RetrievalQA -from openai.error import InvalidRequestError -from langchain.chat_models import ChatOpenAI - -from config import DB_CONFIG, INDEX_NAMES -from models import BaseModel - - -@st.cache_resource -def load_embeddings(): - model_name = "intfloat/multilingual-e5-large" - model_kwargs = {"device": "cuda:0" if torch.cuda.is_available() else "cpu"} - encode_kwargs = {"normalize_embeddings": False} - embeddings = HuggingFaceEmbeddings( - model_name=model_name, - model_kwargs=model_kwargs, - encode_kwargs=encode_kwargs, - ) - return embeddings - - -@st.cache_resource -def llm_model(model="gpt-3.5-turbo", temperature=0.2): - llm = ChatOpenAI(model=model, temperature=temperature) - return llm - - -@st.cache_resource -def load_vicuna_model(): - if torch.cuda.is_available(): - model_name = "lmsys/vicuna-13b-v1.5" - tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) - model = AutoModelForCausalLM.from_pretrained( - model_name, - load_in_8bit=True, - torch_dtype=torch.float16, - device_map="auto", - ) - return tokenizer, model - else: - return None, None - - -EMBEDDINGS = load_embeddings() -LLM = llm_model() -VICUNA_TOKENIZER, VICUNA_MODEL = load_vicuna_model() - - -@st.cache_resource -def _get_vicuna_llm(temperature=0.2) -> HuggingFacePipeline | None: - if VICUNA_MODEL is not None: - pipe = pipeline( - "text-generation", - model=VICUNA_MODEL, - tokenizer=VICUNA_TOKENIZER, - max_new_tokens=1024, - temperature=temperature, - ) - llm = HuggingFacePipeline(pipeline=pipe) - else: - llm = None - return llm - - -VICUNA_LLM = _get_vicuna_llm() - - -def make_index_filter_obj(index_list: list[str]): - should = [] - for index in index_list: - 
should.append( - FieldCondition( - key="metadata.index", match=MatchValue(value=index) - ) - ) - filter = Filter(should=should) - return filter - - -def make_filter_obj(options: list[dict[str]]): - # print(options) - must = [] - for option in options: - if "value" in option: - must.append( - FieldCondition( - key=option["key"], match=MatchValue(value=option["value"]) - ) - ) - elif "range" in option: - range_ = option["range"] - must.append( - FieldCondition( - key=option["key"], - range=Range( - gt=range_.get("gt"), - gte=range_.get("gte"), - lt=range_.get("lt"), - lte=range_.get("lte"), - ), - ) - ) - filter = Filter(must=must) - return filter - - -def get_similay(query: str, filter: Filter): - db_url, db_api_key, db_collection_name = DB_CONFIG - client = QdrantClient(url=db_url, api_key=db_api_key) - db = Qdrant( - client=client, collection_name=db_collection_name, embeddings=EMBEDDINGS - ) - qdocs = db.similarity_search_with_score( - query, - k=20, - filter=filter, - ) - return qdocs - - -def get_retrieval_qa(filter: Filter, llm): - db_url, db_api_key, db_collection_name = DB_CONFIG - client = QdrantClient(url=db_url, api_key=db_api_key) - db = Qdrant( - client=client, collection_name=db_collection_name, embeddings=EMBEDDINGS - ) - retriever = db.as_retriever( - search_kwargs={ - "filter": filter, - } - ) - result = RetrievalQA.from_chain_type( - llm=llm, - chain_type="stuff", - retriever=retriever, - return_source_documents=True, - ) - return result - - -def _get_related_url(metadata) -> Iterable[str]: - urls = set() - for m in metadata: - url = m["url"] - if url in urls: - continue - urls.add(url) - ctime = datetime.fromtimestamp(m["ctime"]) - # print(m) - yield f'

      URL: {url} (created: {ctime:%Y-%m-%d})

      ' - - -def _get_query_str_filter( - query: str, - index_list: list[str], -) -> tuple[str, Filter]: - # options = [{"key": "metadata.index", "value": index_list[0]}] - # filter = make_filter_obj(options=options) - - filter = make_index_filter_obj(index_list) - return query, filter - - -def run_qa( - llm, - query: str, - index_list: list[str], -) -> tuple[str, str]: - now = time() - query_str, filter = _get_query_str_filter(query, index_list) - qa = get_retrieval_qa(filter, llm) - try: - result = qa(query_str) - except InvalidRequestError as e: - return "回答が見つかりませんでした。別な質問をしてみてください", str(e) - else: - metadata = [s.metadata for s in result["source_documents"]] - sec_html = f"

      実行時間: {(time() - now):.2f}秒

      " - html = "
      " + sec_html + "\n".join(_get_related_url(metadata)) + "
      " - return result["result"], html - - -def run_search( - query: str, - index_list: list[str], -) -> Iterable[tuple[BaseModel, float, str]]: - query_str, filter = _get_query_str_filter(query, index_list) - qdocs = get_similay(query_str, filter) - for qdoc, score in qdocs: - text = qdoc.page_content - metadata = qdoc.metadata - # print(metadata) - data = BaseModel( - index=metadata.get("index"), - id=metadata.get("id"), - title=metadata.get("title"), - ctime=metadata.get("ctime"), - user=metadata.get("user"), - url=metadata.get("url"), - type=metadata.get("type"), - ) - yield data, score, text - - -with st.form("my_form"): - st.title("Document Search") - query = st.text_area(label="query") - index_list = st.multiselect( - label="index", - options=INDEX_NAMES, - default=INDEX_NAMES, - placeholder="Select index", - ) - - submit_col1, submit_col2 = st.columns(2) - searched = submit_col2.form_submit_button("Search") - if not index_list: - st.error("Please select at least one index.") - if searched and index_list: - st.divider() - st.header("Search Results") - st.divider() - with st.spinner("Searching..."): - results = run_search(query, index_list) - for doc, score, text in results: - title = doc.title - url = doc.url - id_ = doc.id - score = round(score, 3) - ctime = datetime.fromtimestamp(doc.ctime) - user = doc.user - with st.container(): - st.subheader(title) - st.write(url) - st.write(text) - st.write("score:", score, "Date:", ctime.date(), "User:", user) - st.divider() - qa_searched = submit_col1.form_submit_button("Q&A by OpenAI") - if qa_searched and index_list: - st.divider() - st.header("Answer by OpenAI GPT-3") - st.divider() - with st.spinner("Thinking..."): - results = run_qa( - LLM, - query, - index_list, - ) - answer, html = results - with st.container(): - st.write(answer) - st.markdown(html, unsafe_allow_html=True) - st.divider() - if torch.cuda.is_available() and index_list: - qa_searched_vicuna = submit_col1.form_submit_button("Answer by Vicuna") - if qa_searched_vicuna: - st.divider() - st.header("Answer by Vicuna-13b-v1.5") - st.divider() - with st.spinner("Thinking..."): - results = run_qa( - VICUNA_LLM, - query, - index_list, - ) - answer, html = results - with st.container(): - st.write(answer) - st.markdown(html, unsafe_allow_html=True) - st.divider() diff --git a/spaces/shivammehta25/Diff-TTSG/app.py b/spaces/shivammehta25/Diff-TTSG/app.py deleted file mode 100644 index cfdb01dd5296dc010d66120ac33f068ca27fcd08..0000000000000000000000000000000000000000 --- a/spaces/shivammehta25/Diff-TTSG/app.py +++ /dev/null @@ -1,270 +0,0 @@ -import argparse -import datetime as dt -import warnings -from pathlib import Path - -import ffmpeg -import gradio as gr -import IPython.display as ipd -import joblib as jl -import numpy as np -import soundfile as sf -import torch -from tqdm.auto import tqdm - -from diff_ttsg.hifigan.config import v1 -from diff_ttsg.hifigan.denoiser import Denoiser -from diff_ttsg.hifigan.env import AttrDict -from diff_ttsg.hifigan.models import Generator as HiFiGAN -from diff_ttsg.models.diff_ttsg import Diff_TTSG -from diff_ttsg.text import cmudict, sequence_to_text, text_to_sequence -from diff_ttsg.text.symbols import symbols -from diff_ttsg.utils.model import denormalize -from diff_ttsg.utils.utils import intersperse, plot_tensor -from pymo.preprocessing import MocapParameterizer -from pymo.viz_tools import render_mp4 -from pymo.writers import BVHWriter - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -DIFF_TTSG_CHECKPOINT = 
"diff_ttsg_checkpoint.ckpt" -HIFIGAN_CHECKPOINT = "g_02500000" -MOTION_PIPELINE = "diff_ttsg/resources/data_pipe.expmap_86.1328125fps.sav" -CMU_DICT_PATH = "diff_ttsg/resources/cmu_dictionary" - -OUTPUT_FOLDER = "synth_output" - -# Model loading tools -def load_model(checkpoint_path): - model = Diff_TTSG.load_from_checkpoint(checkpoint_path, map_location=device) - model.eval() - return model - -# Vocoder loading tools -def load_vocoder(checkpoint_path): - h = AttrDict(v1) - hifigan = HiFiGAN(h).to(device) - hifigan.load_state_dict(torch.load(checkpoint_path, map_location=device)['generator']) - _ = hifigan.eval() - hifigan.remove_weight_norm() - return hifigan - -# Setup text preprocessing -cmu = cmudict.CMUDict(CMU_DICT_PATH) -def process_text(text: str): - x = torch.LongTensor(intersperse(text_to_sequence(text, dictionary=cmu), len(symbols))).to(device)[None] - x_lengths = torch.LongTensor([x.shape[-1]]).to(device) - x_phones = sequence_to_text(x.squeeze(0).tolist()) - return { - 'x_orig': text, - 'x': x, - 'x_lengths': x_lengths, - 'x_phones': x_phones - } - -# Setup motion visualisation -motion_pipeline = jl.load(MOTION_PIPELINE) -bvh_writer = BVHWriter() -mocap_params = MocapParameterizer("position") - - - -## Load models - -model = load_model(DIFF_TTSG_CHECKPOINT) -vocoder = load_vocoder(HIFIGAN_CHECKPOINT) -denoiser = Denoiser(vocoder, mode='zeros') - - -# Synthesis functions - -@torch.inference_mode() -def synthesise(text, mel_timestep, motion_timestep, length_scale, mel_temp, motion_temp): - - ## Number of timesteps to run the reverse denoising process - n_timesteps = { - 'mel': mel_timestep, - 'motion': motion_timestep, - } - - ## Sampling temperature - temperature = { - 'mel': mel_temp, - 'motion': motion_temp - } - text_processed = process_text(text) - t = dt.datetime.now() - output = model.synthesise( - text_processed['x'], - text_processed['x_lengths'], - n_timesteps=n_timesteps, - temperature=temperature, - stoc=False, - spk=None, - length_scale=length_scale - ) - - t = (dt.datetime.now() - t).total_seconds() - print(f'RTF: {t * 22050 / (output["mel"].shape[-1] * 256)}') - - output.update(text_processed) # merge everything to one dict - return output - -@torch.inference_mode() -def to_waveform(mel, vocoder): - audio = vocoder(mel).clamp(-1, 1) - audio = denoiser(audio.squeeze(0)).cpu().squeeze() - return audio - - -def to_bvh(motion): - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - return motion_pipeline.inverse_transform([motion.cpu().squeeze(0).T]) - - -def save_to_folder(filename: str, output: dict, folder: str): - folder = Path(folder) - folder.mkdir(exist_ok=True, parents=True) - np.save(folder / f'{filename}', output['mel'].cpu().numpy()) - sf.write(folder / f'{filename}.wav', output['waveform'], 22050, 'PCM_24') - with open(folder / f'{filename}.bvh', 'w') as f: - bvh_writer.write(output['bvh'], f) - - -def to_stick_video(filename, bvh, folder): - folder = Path(folder) - folder.mkdir(exist_ok=True, parents=True) - - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - X_pos = mocap_params.fit_transform([bvh]) - print(f"rendering {filename} ...") - render_mp4(X_pos[0], folder / f'{filename}.mp4', axis_scale=200) - - -def combine_audio_video(filename: str, folder: str): - print("Combining audio and video") - folder = Path(folder) - folder.mkdir(exist_ok=True, parents=True) - - input_video = ffmpeg.input(str(folder / f'{filename}.mp4')) - input_audio = ffmpeg.input(str(folder / f'{filename}.wav')) - output_filename = folder / 
f'{filename}_audio.mp4' - ffmpeg.concat(input_video, input_audio, v=1, a=1).output(str(output_filename)).run(overwrite_output=True) - print(f"Final output with audio: {output_filename}") - - -def run(text, output, mel_timestep, motion_timestep, length_scale, mel_temp, motion_temp): - print("Running synthesis") - output = synthesise(text, mel_timestep, motion_timestep, length_scale, mel_temp, motion_temp) - output['waveform'] = to_waveform(output['mel'], vocoder) - output['bvh'] = to_bvh(output['motion'])[0] - save_to_folder('temp', output, OUTPUT_FOLDER) - return ( - output, - output['x_phones'], - plot_tensor(output['mel'].squeeze().cpu().numpy()), - plot_tensor(output['motion'].squeeze().cpu().numpy()), - str(Path(OUTPUT_FOLDER) / f'temp.wav'), - gr.update(interactive=True) - ) - -def visualize_it(output): - to_stick_video('temp', output['bvh'], OUTPUT_FOLDER) - combine_audio_video('temp', OUTPUT_FOLDER) - return str(Path(OUTPUT_FOLDER) / 'temp_audio.mp4') - - - - -with gr.Blocks() as demo: - - output = gr.State(value=None) - - with gr.Box(): - with gr.Row(): - gr.Markdown("# Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis") - with gr.Row(): - gr.Markdown("### Read more about it at: [https://shivammehta25.github.io/Diff-TTSG/](https://shivammehta25.github.io/Diff-TTSG/)") - - with gr.Row(): - gr.Markdown("# Text Input") - with gr.Row(): - gr.Markdown("Enter , to insert pause and ; for breathing pause.") - with gr.Row(): - gr.Markdown("It is recommended to give spaces between punctuations and words.") - with gr.Row(): - text = gr.Textbox(label="Text Input") - with gr.Row(): - examples = gr.Examples(examples=[ - "Hello world ! This is a demo of Diff T T S G .", - "And the train stopped, The door opened. I got out first, then Jack Kane got out, Ronan got out, Louise got out.", - ], inputs=[text]) - - with gr.Box(): - with gr.Row(): - gr.Markdown("### Hyper parameters") - with gr.Row(): - mel_timestep = gr.Slider(label="Number of timesteps (mel)", minimum=0, maximum=1000, step=1, value=50, interactive=True) - motion_timestep = gr.Slider(label="Number of timesteps (motion)", minimum=0, maximum=1000, step=1, value=500, interactive=True) - length_scale = gr.Slider(label="Length scale (Speaking rate)", minimum=0.01, maximum=3.0, step=0.05, value=1.15, interactive=True) - mel_temp = gr.Slider(label="Sampling temperature (mel)", minimum=0.01, maximum=5.0, step=0.05, value=1.3, interactive=True) - motion_temp = gr.Slider(label="Sampling temperature (motion)", minimum=0.01, maximum=5.0, step=0.05, value=1.5, interactive=True) - - synth_btn = gr.Button("Synthesise") - - with gr.Box(): - with gr.Row(): - gr.Markdown("### Phonetised text") - with gr.Row(): - phonetised_text = gr.Textbox(label="Phonetised text", interactive=False) - - with gr.Box(): - with gr.Row(): - mel_spectrogram = gr.Image(interactive=False, label="Mel spectrogram") - motion_representation = gr.Image(interactive=False, label="Motion representation") - - with gr.Row(): - audio = gr.Audio(interactive=False, label="Audio") - - with gr.Box(): - with gr.Row(): - gr.Markdown("### Generate stick figure visualisation") - with gr.Row(): - gr.Markdown("(This will take a while)") - with gr.Row(): - visualize = gr.Button("Visualize", interactive=False) - - with gr.Row(): - video = gr.Video(label="Video", interactive=False) - - synth_btn.click( - fn=run, - inputs=[ - text, - output, - mel_timestep, - motion_timestep, - length_scale, - mel_temp, - motion_temp - ], - outputs=[ - output, - phonetised_text, - 
mel_spectrogram, - motion_representation, - audio, - # video, - visualize - ], api_name="diff_ttsg") - - visualize.click( - fn=visualize_it, - inputs=[output], - outputs=[video], - ) - -demo.queue(1) -demo.launch() \ No newline at end of file diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/weights/README.md b/spaces/shiwan10000/CodeFormer/CodeFormer/weights/README.md deleted file mode 100644 index 67ad334bd672eeb9f82813cd54e8885331bbb2f2..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/weights/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Weights - -Put the downloaded pre-trained models to this folder. \ No newline at end of file diff --git a/spaces/shuhulhandoo/face-swap/main_video.py b/spaces/shuhulhandoo/face-swap/main_video.py deleted file mode 100644 index 107afad4a0ac6ecc83b51ff828246ababde7507b..0000000000000000000000000000000000000000 --- a/spaces/shuhulhandoo/face-swap/main_video.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import cv2 -import logging -import argparse - -from face_detection import select_face -from face_swap import face_swap - - -class VideoHandler(object): - def __init__(self, video_path=0, img_path=None, args=None): - self.src_points, self.src_shape, self.src_face = select_face(cv2.imread(img_path)) - if self.src_points is None: - print('No face detected in the source image !!!') - exit(-1) - self.args = args - self.video = cv2.VideoCapture(video_path) - self.writer = cv2.VideoWriter(args.save_path, cv2.VideoWriter_fourcc(*'MJPG'), self.video.get(cv2.CAP_PROP_FPS), - (int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH)), int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT)))) - - def start(self): - while self.video.isOpened(): - if cv2.waitKey(1) & 0xFF == ord('q'): - break - - _, dst_img = self.video.read() - dst_points, dst_shape, dst_face = select_face(dst_img, choose=False) - if dst_points is not None: - dst_img = face_swap(self.src_face, dst_face, self.src_points, dst_points, dst_shape, dst_img, self.args, 68) - self.writer.write(dst_img) - if self.args.show: - cv2.imshow("Video", dst_img) - - self.video.release() - self.writer.release() - cv2.destroyAllWindows() - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, - format="%(levelname)s:%(lineno)d:%(message)s") - - parser = argparse.ArgumentParser(description='FaceSwap Video') - parser.add_argument('--src_img', required=True, - help='Path for source image') - parser.add_argument('--video_path', default=0, - help='Path for video') - parser.add_argument('--warp_2d', default=False, action='store_true', help='2d or 3d warp') - parser.add_argument('--correct_color', default=False, action='store_true', help='Correct color') - parser.add_argument('--show', default=False, action='store_true', help='Show') - parser.add_argument('--save_path', required=True, help='Path for storing output video') - args = parser.parse_args() - - dir_path = os.path.dirname(args.save_path) - if not os.path.isdir(dir_path): - os.makedirs(dir_path) - - VideoHandler(args.video_path, args.src_img, args).start() diff --git a/spaces/simonduerr/rosettafold2/app.py b/spaces/simonduerr/rosettafold2/app.py deleted file mode 100644 index e6a36ac953037a114a38c4351e0b7bc3e28a5ca9..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/rosettafold2/app.py +++ /dev/null @@ -1,662 +0,0 @@ -import os, time, sys - - -if not os.path.isfile("RF2_apr23.pt"): - # send param download into background - os.system( - "(apt-get install aria2; aria2c -q -x 16 
https://colabfold.steineggerlab.workers.dev/RF2_apr23.pt) &" - ) - -if not os.path.isdir("RoseTTAFold2"): - print("install RoseTTAFold2") - os.system("git clone https://github.com/sokrypton/RoseTTAFold2.git") - print(os.listdir("RoseTTAFold2")) - os.system( - "cd RoseTTAFold2/SE3Transformer; pip -q install --no-cache-dir -r requirements.txt; pip -q install ." - ) - os.system( - "wget https://raw.githubusercontent.com/sokrypton/ColabFold/beta/colabfold/mmseqs/api.py" - ) - - # install hhsuite - print("install hhsuite") - os.makedirs("hhsuite", exist_ok=True) - os.system( - f"curl -fsSL https://github.com/soedinglab/hh-suite/releases/download/v3.3.0/hhsuite-3.3.0-SSE2-Linux.tar.gz | tar xz -C hhsuite/" - ) - print(os.listdir("hhsuite")) - - -if os.path.isfile(f"RF2_apr23.pt.aria2"): - print("downloading RoseTTAFold2 params") - while os.path.isfile(f"RF2_apr23.pt.aria2"): - time.sleep(5) - -os.environ["DGLBACKEND"] = "pytorch" -sys.path.append("RoseTTAFold2/network") -if "hhsuite" not in os.environ["PATH"]: - os.environ["PATH"] += ":hhsuite/bin:hhsuite/scripts" - -import matplotlib.pyplot as plt -import numpy as np -from parsers import parse_a3m -from api import run_mmseqs2 -import torch -from string import ascii_uppercase, ascii_lowercase -import hashlib, re, os -import random - -from Bio.PDB import * - - -def get_hash(x): - return hashlib.sha1(x.encode()).hexdigest() - - -alphabet_list = list(ascii_uppercase + ascii_lowercase) -from collections import OrderedDict, Counter - -import gradio as gr - -if not "pred" in dir(): - from predict import Predictor - - print("compile RoseTTAFold2") - model_params = "RF2_apr23.pt" - if torch.cuda.is_available(): - pred = Predictor(model_params, torch.device("cuda:0")) - else: - print("WARNING: using CPU") - pred = Predictor(model_params, torch.device("cpu")) - - -def get_unique_sequences(seq_list): - unique_seqs = list(OrderedDict.fromkeys(seq_list)) - return unique_seqs - - -def get_msa(seq, jobname, cov=50, id=90, max_msa=2048, mode="unpaired_paired"): - assert mode in ["unpaired", "paired", "unpaired_paired"] - seqs = [seq] if isinstance(seq, str) else seq - - # collapse homooligomeric sequences - counts = Counter(seqs) - u_seqs = list(counts.keys()) - u_nums = list(counts.values()) - - # expand homooligomeric sequences - first_seq = "/".join(sum([[x] * n for x, n in zip(u_seqs, u_nums)], [])) - msa = [first_seq] - - path = os.path.join(jobname, "msa") - os.makedirs(path, exist_ok=True) - if mode in ["paired", "unpaired_paired"] and len(u_seqs) > 1: - print("getting paired MSA") - out_paired = run_mmseqs2(u_seqs, f"{path}/", use_pairing=True) - headers, sequences = [], [] - for a3m_lines in out_paired: - n = -1 - for line in a3m_lines.split("\n"): - if len(line) > 0: - if line.startswith(">"): - n += 1 - if len(headers) < (n + 1): - headers.append([]) - sequences.append([]) - headers[n].append(line) - else: - sequences[n].append(line) - # filter MSA - with open(f"{path}/paired_in.a3m", "w") as handle: - for n, sequence in enumerate(sequences): - handle.write(f">n{n}\n{''.join(sequence)}\n") - os.system( - f"hhfilter -i {path}/paired_in.a3m -id {id} -cov {cov} -o {path}/paired_out.a3m" - ) - with open(f"{path}/paired_out.a3m", "r") as handle: - for line in handle: - if line.startswith(">"): - n = int(line[2:]) - xs = sequences[n] - # expand homooligomeric sequences - xs = ["/".join([x] * num) for x, num in zip(xs, u_nums)] - msa.append("/".join(xs)) - - if len(msa) < max_msa and ( - mode in ["unpaired", "unpaired_paired"] or len(u_seqs) == 1 - ): - 
print("getting unpaired MSA") - out = run_mmseqs2(u_seqs, f"{path}/") - Ls = [len(seq) for seq in u_seqs] - sub_idx = [] - sub_msa = [] - sub_msa_num = 0 - for n, a3m_lines in enumerate(out): - sub_msa.append([]) - with open(f"{path}/in_{n}.a3m", "w") as handle: - handle.write(a3m_lines) - # filter - os.system( - f"hhfilter -i {path}/in_{n}.a3m -id {id} -cov {cov} -o {path}/out_{n}.a3m" - ) - with open(f"{path}/out_{n}.a3m", "r") as handle: - for line in handle: - if not line.startswith(">"): - xs = ["-" * l for l in Ls] - xs[n] = line.rstrip() - # expand homooligomeric sequences - xs = ["/".join([x] * num) for x, num in zip(xs, u_nums)] - sub_msa[-1].append("/".join(xs)) - sub_msa_num += 1 - sub_idx.append(list(range(len(sub_msa[-1])))) - - while len(msa) < max_msa and sub_msa_num > 0: - for n in range(len(sub_idx)): - if len(sub_idx[n]) > 0: - msa.append(sub_msa[n][sub_idx[n].pop(0)]) - sub_msa_num -= 1 - if len(msa) == max_msa: - break - - with open(f"{jobname}/msa.a3m", "w") as handle: - for n, sequence in enumerate(msa): - handle.write(f">n{n}\n{sequence}\n") - - -from Bio.PDB.PDBExceptions import PDBConstructionWarning -import warnings -from Bio.PDB import * -import numpy as np - - -def add_plddt_to_cif(best_plddts, best_plddt, best_seed, jobname): - pdb_parser = PDBParser() - warnings.filterwarnings("ignore", category=PDBConstructionWarning) - structure = pdb_parser.get_structure( - "pdb", f"{jobname}/rf2_seed{best_seed}_00_pred.pdb" - ) - io = MMCIFIO() - io.set_structure(structure) - io.save(f"{jobname}/rf2_seed{best_seed}_00_pred.cif") - plddt_cif = f"""# -loop_ -_ma_qa_metric.id -_ma_qa_metric.mode -_ma_qa_metric.name -_ma_qa_metric.software_group_id -_ma_qa_metric.type -1 global pLDDT 1 pLDDT -2 local pLDDT 1 pLDDT -# -_ma_qa_metric_global.metric_id 1 -_ma_qa_metric_global.metric_value {best_plddt:.3f} -_ma_qa_metric_global.model_id 1 -_ma_qa_metric_global.ordinal_id 1 -# -loop_ -_ma_qa_metric_local.label_asym_id -_ma_qa_metric_local.label_comp_id -_ma_qa_metric_local.label_seq_id -_ma_qa_metric_local.metric_id -_ma_qa_metric_local.metric_value -_ma_qa_metric_local.model_id -_ma_qa_metric_local.ordinal_id""" - - for chain in structure[0]: - for i, residue in enumerate(chain): - plddt_cif += f"\n{chain.id} {residue.resname} {residue.id[1]} 2 {best_plddts[i]*100:.2f} 1 {residue.id[1]}" - plddt_cif += "\n#" - with open(f"{jobname}/rf2_seed{best_seed}_00_pred.cif", "a") as f: - f.write(plddt_cif) - - -def predict( - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, - mode="web", -): - if os.path.exists("/home/user/app"): # crude check if on spaces - if len(sequence) > 600: - raise gr.Error( - f"Your sequence is too long ({len(sequence)}). " - "Please use the full version of RoseTTAfold2 directly from GitHub." 
- ) - random_seed = int(random_seed) - num_models = int(num_models) - max_msa = int(max_msa) - num_recycles = int(num_recycles) - order = int(order) - - max_extra_msa = max_msa * 8 - print("sequence", sequence) - sequence = re.sub("[^A-Z:]", "", sequence.replace("/", ":").upper()) - sequence = re.sub(":+", ":", sequence) - sequence = re.sub("^[:]+", "", sequence) - sequence = re.sub("[:]+$", "", sequence) - print("sequence", sequence) - if sym in ["X", "C"]: - copies = int(order) - elif sym in ["D"]: - copies = int(order) * 2 - else: - copies = {"T": 12, "O": 24, "I": 60}[sym] - order = "" - symm = sym + str(order) - - sequences = sequence.replace(":", "/").split("/") - if collapse_identical: - u_sequences = get_unique_sequences(sequences) - else: - u_sequences = sequences - sequences = sum([u_sequences] * copies, []) - lengths = [len(s) for s in sequences] - - # TODO - subcrop = 1000 if sum(lengths) > 1400 else -1 - - sequence = "/".join(sequences) - jobname = jobname + "_" + symm + "_" + get_hash(sequence)[:5] - - print(f"jobname: {jobname}") - print(f"lengths: {lengths}") - print("final_sequence", u_sequences) - os.makedirs(jobname, exist_ok=True) - if msa_method == "mmseqs2": - get_msa(u_sequences, jobname, mode=pair_mode, max_msa=max_extra_msa) - - elif msa_method == "single_sequence": - u_sequence = "/".join(u_sequences) - with open(f"{jobname}/msa.a3m", "w") as a3m: - a3m.write(f">{jobname}\n{u_sequence}\n") - # elif msa_method == "custom_a3m": - # print("upload custom a3m") - # # msa_dict = files.upload() - # lines = msa_dict[list(msa_dict.keys())[0]].decode().splitlines() - # a3m_lines = [] - # for line in lines: - # line = line.replace("\x00", "") - # if len(line) > 0 and not line.startswith("#"): - # a3m_lines.append(line) - # with open(f"{jobname}/msa.a3m", "w") as a3m: - # a3m.write("\n".join(a3m_lines)) - - best_plddt = None - best_seed = None - for seed in range(int(random_seed), int(random_seed) + int(num_models)): - torch.manual_seed(seed) - random.seed(seed) - np.random.seed(seed) - npz = f"{jobname}/rf2_seed{seed}_00.npz" - mlm = 0.15 if use_mlm else 0 - print("MLM", mlm, use_mlm) - pred.predict( - inputs=[f"{jobname}/msa.a3m"], - out_prefix=f"{jobname}/rf2_seed{seed}", - symm=symm, - ffdb=None, # TODO (templates), - n_recycles=num_recycles, - msa_mask=0.15 if use_mlm else 0, - msa_concat_mode=msa_concat_mode, - nseqs=max_msa, - nseqs_full=max_extra_msa, - subcrop=subcrop, - is_training=use_dropout, - ) - plddt = np.load(npz)["lddt"].mean() - if best_plddt is None or plddt > best_plddt: - best_plddt = plddt - best_plddts = np.load(npz)["lddt"] - best_seed = seed - - if mode == "web": - # Mol* only displays AlphaFold plDDT if they are in a cif. 
- pdb_parser = PDBParser() - mmcif_parser = MMCIFParser() - - plddt_cif = add_plddt_to_cif(best_plddts, best_plddt, best_seed, jobname) - - return f"{jobname}/rf2_seed{best_seed}_00_pred.cif" - else: - # for api just return a pdb file - return f"{jobname}/rf2_seed{best_seed}_00_pred.pdb" - - -def predict_api( - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, -): - filename = predict( - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, - mode="api", - ) - with open(f"{filename}") as fp: - return fp.read() - - -def molecule(input_pdb, public_link): - print(input_pdb) - print(public_link + "/file=" + input_pdb) - link = public_link + "/file=" + input_pdb - x = ( - """ - - - - - PDBe Molstar - Helper functions - - - - - - - -
      - -
      - -
      - - -""" - ) - - return f"""""" - - -def predict_web( - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, -): - if os.path.exists("/home/user/app"): - public_link = "https://simonduerr-rosettafold2.hf.space" - else: - public_link = "http://localhost:7860" - - filename = predict( - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, - mode="web", - ) - - return molecule(filename, public_link) - - -with gr.Blocks() as rosettafold: - gr.Markdown("# RoseTTAFold2") - gr.Markdown( - """If using please cite: [manuscript](https://www.biorxiv.org/content/10.1101/2023.05.24.542179v1) -
      Heavily based on [RoseTTAFold2 ColabFold notebook](https://colab.research.google.com/github/sokrypton/ColabFold/blob/main/RoseTTAFold2.ipynb)""" - ) - with gr.Accordion("How to use in PyMol", open=False): - gr.HTML( - """os.system('wget https://huggingface.co/spaces/simonduerr/rosettafold2/raw/main/rosettafold_pymol.py')
      -run rosettafold_pymol.py
      -rosettafold2 sequence, jobname, [sym, order, msa_concat_mode, msa_method, pair_mode, collapse_identical, num_recycles, use_mlm, use_dropout, max_msa, random_seed, num_models]
      -color_plddt jobname
      -""" - ) - sequence = gr.Textbox( - label="sequence", - value="PIAQIHILEGRSDEQKETLIREVSEAISRSLDAPLTSVRVIITEMAKGHFGIGGELASK", - ) - jobname = gr.Textbox(label="jobname", value="test") - - with gr.Accordion("Additional settings", open=False): - sym = gr.Textbox(label="sym", value="X") - order = gr.Slider(label="order", value=1, step=1, minimum=1, maximum=12) - msa_concat_mode = gr.Dropdown( - label="msa_concat_mode", - value="default", - choices=["diag", "repeat", "default"], - ) - - msa_method = gr.Dropdown( - label="msa_method", - value="single_sequence", - choices=[ - "mmseqs2", - "single_sequence", - ], # dont allow custom a3m for now , "custom_a3m" - ) - pair_mode = gr.Dropdown( - label="pair_mode", - value="unpaired_paired", - choices=["unpaired_paired", "paired", "unpaired"], - ) - - num_recycles = gr.Dropdown( - label="num_recycles", value="6", choices=["0", "1", "3", "6", "12", "24"] - ) - - use_mlm = gr.Checkbox(label="use_mlm", value=False) - use_dropout = gr.Checkbox(label="use_dropout", value=False) - collapse_identical = gr.Checkbox(label="collapse_identical", value=False) - max_msa = gr.Dropdown( - choices=["16", "32", "64", "128", "256", "512"], - value="16", - label="max_msa", - ) - random_seed = gr.Textbox(label="random_seed", value=0) - num_models = gr.Dropdown( - label="num_models", value="1", choices=["1", "2", "4", "8", "16", "32"] - ) - - btn = gr.Button("Run", visible=False) - btn_web = gr.Button("Run") - - output_plain = gr.HTML() - output = gr.HTML() - - btn.click( - fn=predict_api, - inputs=[ - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, - ], - outputs=output_plain, - api_name="rosettafold2", - ) - btn_web.click( - fn=predict_web, - inputs=[ - sequence, - jobname, - sym, - order, - msa_concat_mode, - msa_method, - pair_mode, - collapse_identical, - num_recycles, - use_mlm, - use_dropout, - max_msa, - random_seed, - num_models, - ], - outputs=output, - ) - - -rosettafold.launch() diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Ayyo Saami MP3 Download - Windy Goonatillake SANUKAs Hit Song.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Ayyo Saami MP3 Download - Windy Goonatillake SANUKAs Hit Song.md deleted file mode 100644 index 8b042f15bc542940c4e421150a1897357c2f4afe..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Ayyo Saami MP3 Download - Windy Goonatillake SANUKAs Hit Song.md +++ /dev/null @@ -1,94 +0,0 @@ -
      -

      Ayyo Sami Song MP3 Download: A Viral Hit by Windy Goonatillake

      -

      If you are a fan of Tamil music, you might have heard of a catchy song called Ayyo Sami. This song has been making waves on social media and YouTube, thanks to its upbeat melody, humorous lyrics, and energetic performance by Windy Goonatillake. In this article, we will tell you everything you need to know about this viral hit, including what it means, who sings it, and how to download it as an MP3 file.

      -




      -

      What is Ayyo Sami Song?

      -

      Ayyo Sami is a Tamil song that was released in September 2022 by Windy Goonatillake, a Sri Lankan singer and songwriter. The song is a fusion of pop, folk, and rap genres, with a catchy chorus that repeats the phrase "Ayyo Sami" (which means "Oh God" in Tamil) several times. The song is about a woman who is fed up with her cheating boyfriend and decides to dump him.

      -

      The meaning and origin of the song

      -

      The lyrics of the song were written by Pottuvil Asmin, a Tamil poet and rapper from Sri Lanka. He said that he was inspired by the stories of his female friends who had faced similar situations with their partners. He wanted to write a song that would empower women and make them laugh at the same time. He also said that he chose the name "Sami" for the boyfriend because it is a common name in Tamil Nadu, where he has many fans.

      -

      The popularity and reception of the song

      -

      The song became an instant hit after it was uploaded on YouTube by Windy Goonatillake's official channel. It has received over 18 million views and 345 thousand likes as of June 2023. The song also gained popularity on other platforms like TikTok, Instagram, and Facebook, where many users created videos and memes using the song. The song has been praised for its catchy tune, witty lyrics, and Windy's lively performance. Many listeners have also appreciated the song's message of female empowerment and self-respect.

      -

      Who is Windy Goonatillake?

      -

      Windy Goonatillake is a Sri Lankan singer, songwriter, and actress who rose to fame after participating in the reality show "Sirasa Superstar" in 2016. She is the daughter of Rookantha Goonatillake and Chandralekha Perera, two renowned singers in Sri Lanka. She has a degree in music from the University of Visual and Performing Arts in Colombo.

      -

      Her background and career

      -

      Windy started singing at a young age, following in the footsteps of her parents. She also learned to play various instruments like piano, guitar, violin, and flute. She participated in several musical competitions and shows before joining "Sirasa Superstar", where she impressed the judges and audience with her versatile voice and style. She finished as the runner-up in the show, losing to Sangeeth Wickramasinghe, who later became her mentor and friend.

      -

      Her collaboration with Sanuka Wickramasinghe

      -

      Sanuka Wickramasinghe is a Sri Lankan singer, composer, and producer who is known for his fusion of pop, rock, and folk music. He is also the brother of Sangeeth Wickramasinghe, Windy's rival-turned-friend from "Sirasa Superstar ". He has collaborated with Windy on several songs, including Ayyo Sami, which he composed and produced. He also featured in the music video of the song, playing the role of the cheating boyfriend. The duo has a great chemistry and friendship, and they often perform together on stage and online.

      -

      Her other songs and projects

      -

      Windy has released many other songs besides Ayyo Sami, such as "Saragaye", "Raththarane", "Sihinayakda Me", and "Oba Mage Wenawanam". She has also sung for several movies and TV shows, such as "Dharmayuddhaya", "Sikuru Hathe", and "Hiru Star". She has also acted in some of them, such as "Sikuru Hathe" and "Thaala". She is currently working on her debut album, which is expected to be released soon.

      -

      How to download Ayyo Sami Song MP3?

      -

      If you love Ayyo Sami song and want to listen to it offline, you might be wondering how to download it as an MP3 file. There are several ways to do that, but not all of them are legal or safe. Here are some of the pros and cons of different methods of downloading Ayyo Sami song MP3.

      -

      The official sources and platforms

      -

      The best way to download Ayyo Sami song MP3 is to use the official sources and platforms that have the permission and license from the artist and the producer. These include:

      -

      [Ayyo Saami Songs Download - Free Online Songs @ JioSaavn](^1^): Ayyo Saami is a Tamil album released in 2022. There is one song in Ayyo Saami. The song was composed by talented musicians such as Windy Goonatillake and SANUKA. Listen to all of Ayyo Saami online on JioSaavn[^1^].
      -ayyo saami mp3 song free download
      -ayyo saami tamil song download
      -windy goonatillake and sanuka ayyo saami
      -ayyo saami lyrics
      -ayyo saami video song

      -
        -
      • iTunes: You can buy and download the song for $0.99 from iTunes, which is compatible with Apple devices and Windows computers.
      • -
      • Spotify: You can stream and download the song for free if you have a Spotify account, which is available on Android, iOS, Windows, Mac, and web browsers. However, you will need a premium subscription to listen offline without ads.
      • -
      • YouTube Music: You can stream and download the song for free if you have a YouTube account, which is accessible on Android, iOS, Windows, Mac, and web browsers. However, you will need a premium subscription to listen offline without ads.
      • -
      • Amazon Music: You can buy and download the song for $0.99 from Amazon Music, which works on Android, iOS, Windows, Mac, Fire TV, Echo devices, and web browsers.
      • -
      -

      The advantages of using these official sources and platforms are:

      -
        -
      • You can support the artist and the producer financially and morally.
      • -
      • You can enjoy the high-quality sound and original version of the song.
      • -
      • You can avoid any legal issues or penalties for piracy or copyright infringement.
      • -
      • You can protect your device from viruses or malware that might come with illegal downloads.
      • -

      The benefits and drawbacks of downloading MP3 files

      -

      Another way to download Ayyo Sami song MP3 is to use some online tools or software that can convert YouTube videos or other audio files into MP3 format. These include:

      -
        -
      • ytmp3.cc: You can paste the URL of the YouTube video of the song and click on "Convert" to download the MP3 file.
      • -
      • mp3juices.cc: You can search for the song by its name or artist and click on "Download" to get the MP3 file.
      • -
      • freemake.com: You can download and install this software on your Windows computer and use it to convert any video or audio file into MP3 format.
      • -
      -

      The benefits of using these online tools or software are:

      -
        -
      • You can download the song for free without paying any fees or subscriptions.
      • -
      • You can save the song on your device and listen to it offline without internet connection.
      • -
      • You can transfer the song to other devices or share it with others easily.
      • -
      -

      However, there are also some drawbacks of using these online tools or software, such as:

      -
        -
      • You might violate the rights and interests of the artist and the producer, who might lose their income and recognition.
      • -
      • You might face some legal consequences or fines for downloading or distributing unauthorized content.
      • -
      • You might compromise the quality and authenticity of the song, which might have been altered or edited by the online tools or software.
      • -
      • You might expose your device to potential risks of viruses or malware that might harm your data or system.
      • -
      -

      The alternatives and options for streaming the song

      -

      A third way to enjoy Ayyo Sami song is to stream it online without downloading it as an MP3 file. There are many platforms and websites that offer this service, such as:

      -
        -
      • YouTube: You can watch the official music video of the song on Windy Goonatillake's channel, which has subtitles in English and Tamil. You can also find many other versions and covers of the song by different artists and fans.
      • -
      • SoundCloud: You can listen to the original audio track of the song on Windy Goonatillake's profile, which also has a link to her Instagram account. You can also follow her and leave comments on her songs.
      • -
      • Gaana: You can stream the song on this Indian music app, which has a large collection of Tamil songs and playlists. You can also create your own playlist and share it with others.
      • -
      -

      The advantages of streaming the song online are:

      -
        -
      • You can access the song anytime and anywhere with an internet connection.
      • -
      • You can explore different versions and interpretations of the song by various artists and fans.
      • -
      • You can interact with other listeners and fans of the song through comments, likes, shares, etc.
      • -
      -

      However, there are also some disadvantages of streaming the song online, such as:

      -
        -
      • You might need a stable and fast internet connection to avoid buffering or interruptions.
      • -
      • You might consume a lot of data or bandwidth if you stream the song frequently or for a long time.
      • -
      • You might not be able to listen to the song offline if you don't have an internet connection or a premium subscription.
      • -
      -

      Conclusion

      -

      Ayyo Sami is a Tamil song that has become a viral hit among music lovers around the world. It is sung by Windy Goonatillake, a Sri Lankan singer who collaborated with Sanuka Wickramasinghe, a Sri Lankan composer and producer. The song is about a woman who dumps her cheating boyfriend with a catchy chorus that says "Ayyo Sami" (Oh God). The song has a humorous tone and a message of female empowerment. It was written by Pottuvil Asmin, a Tamil poet and rapper from Sri Lanka.

      -

      If you want to download Ayyo Sami song MP3, you have several options to choose from. You can use the official sources and platforms that have the license and permission from the artist and the producer, such as iTunes, Spotify, YouTube Music, and Amazon Music. This way, you can support them financially and morally, enjoy the high-quality sound and original version of the song, avoid any legal issues or penalties, and protect your device from viruses or malware. Alternatively, you can use some online tools or software that can convert YouTube videos or other audio files into MP3 format, such as ytmp3.cc, mp3ju ices.cc, and freemake.com. This way, you can download the song for free without paying any fees or subscriptions, save the song on your device and listen to it offline without internet connection, and transfer the song to other devices or share it with others easily. However, you might violate the rights and interests of the artist and the producer, face some legal consequences or fines, compromise the quality and authenticity of the song, and expose your device to potential risks of viruses or malware. Lastly, you can stream the song online without downloading it as an MP3 file, using platforms and websites like YouTube, SoundCloud, and Gaana. This way, you can access the song anytime and anywhere with an internet connection, explore different versions and interpretations of the song by various artists and fans, and interact with other listeners and fans of the song through comments, likes, shares, etc. However, you might need a stable and fast internet connection to avoid buffering or interruptions, consume a lot of data or bandwidth if you stream the song frequently or for a long time, and not be able to listen to the song offline if you don't have an internet connection or a premium subscription.

      -

      So, what are you waiting for? Go ahead and enjoy Ayyo Sami song in whichever way you prefer. And don't forget to share your thoughts and feedback with us in the comments section below. We would love to hear from you!

      -

      FAQs

      -

      Here are some of the frequently asked questions about Ayyo Sami song:

      -
        -
      1. What does Ayyo Sami mean?
        Ayyo Sami is a Tamil phrase that means "Oh God". It is used as an expression of surprise, shock, frustration, or annoyance.
      2. -
      3. Who wrote Ayyo Sami song?
        Ayyo Sami song was written by Pottuvil Asmin, a Tamil poet and rapper from Sri Lanka. He was inspired by the stories of his female friends who had faced cheating partners.
      4. -
      5. Who composed and produced Ayyo Sami song?
        Ayyo Sami song was composed and produced by Sanuka Wickramasinghe, a Sri Lankan singer, composer, and producer. He is also the brother of Sangeeth Wickramasinghe, who was Windy Goonatillake's rival-turned-friend from "Sirasa Superstar".
      6. -
      7. Who sang Ayyo Sami song?
        Ayyo Sami song was sung by Windy Goonatillake, a Sri Lankan singer, songwriter, and actress. She is the daughter of Rookantha Goonatillake and Chandralekha Perera, two renowned singers in Sri Lanka.
      8. -
      9. Where can I watch Ayyo Sami song video?
        You can watch Ayyo Sami song video on YouTube on Windy Goonatillake's official channel. The video has subtitles in English and Tamil. You can also find many other versions and covers of the song by different artists and fans on YouTube.
      10. -

      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Craftsman 4 APK - The Best Game for Creative Builders.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Craftsman 4 APK - The Best Game for Creative Builders.md deleted file mode 100644 index 0f24e8dcafc9b9a519f758921ddc6ae1849c846d..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Craftsman 4 APK - The Best Game for Creative Builders.md +++ /dev/null @@ -1,96 +0,0 @@ -
      -

      Craftsman Beta 4 APK: A Creative Sandbox Game for Android

      -

      If you are looking for a game that lets you unleash your creativity and imagination, then you might want to check out Craftsman Beta 4 APK. This is a sandbox game that allows you to build and craft anything you want in a 3D block world. You can play it alone or with your friends, and explore different game modes and servers. In this article, we will tell you everything you need to know about Craftsman Beta 4 APK, including what it is, how to download and install it, why you should play it, and some tips and tricks for playing it.

      -

      What is Craftsman Beta 4 APK?

      -

      Craftsman Beta 4 APK is a sandbox game that is inspired by Minecraft, but has its own unique features and style. It is developed by Slbzer, an independent developer who has released several versions of the game on various platforms. The game is currently in beta testing, which means that it is not fully finished and may have some bugs and glitches. However, it also means that the game is constantly updated with new features and content, so you can always expect something new and exciting.

      -




      -

      A brief introduction to the game and its features

      -

      The game offers you a variety of gameplay options, depending on your preference and mood. You can choose from different game modes, such as survival mode, creative mode, and multiplayer mode. In survival mode, you have to gather resources, craft items, and fend off enemies to survive. In creative mode, you have unlimited resources and can build anything you want without any restrictions. In multiplayer mode, you can join or create servers with other players and cooperate or compete with them.

      -

      The game also has stunning graphics and realistic sound effects that make the game more immersive and enjoyable. You can customize your character's appearance, choose from different skins, and even change the weather and time of day. The game has a simple and intuitive user interface that makes it easy to navigate and control. You can also use various tools and materials to create different structures and items, such as wood, stone, metal, glass, etc.

      -

      How to download and install the game on your device

      -

      If you want to play Craftsman Beta 4 APK on your Android device, you will need to download the APK file from a reliable source. An APK file is an application package file that contains all the data and code needed to run an app on your device. You can find many websites that offer free downloads of Craftsman Beta 4 APK, such as [text](^1^), [text](^2^), or [text](^3^). However, be careful not to download any malicious or fake files that may harm your device or steal your personal information.

      -

          Once you have downloaded the APK file, you will need to enable the installation of apps from unknown sources on your device. This is a security feature that prevents unauthorized apps from being installed on your device without your permission. To do this, go to your device's settings, open the security options, and toggle on the permission to install apps from unknown sources.
    

      After you have enabled the installation of apps from unknown sources, you can follow these steps to download and install Craftsman Beta 4 APK on your device:

      -
        -
          1. Open your browser and go to one of the websites that offer the APK file, such as [text](^5^), [text](^6^), or [text](^7^).
          2. Tap on the download button and wait for the file to be downloaded.
          3. Once the download is complete, tap on the notification or go to your downloads folder and tap on the file.
          4. Tap on install and wait for the installation process to finish.
          5. Tap on open and enjoy playing Craftsman Beta 4 APK on your device.
    
      -
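          If you would rather sideload from a computer, the same install can be done over USB with Android Debug Bridge (adb). The snippet below is only a minimal sketch, not part of the game or its official instructions: it assumes adb is installed on your computer, USB debugging is enabled and authorized on the phone, and the file name craftsman_beta_4.apk is a placeholder for whatever file you actually downloaded.

    ```python
    import subprocess

    # Placeholder name for the downloaded file -- adjust to your actual path.
    APK_PATH = "craftsman_beta_4.apk"

    # Confirm the phone is connected and USB debugging has been authorized.
    subprocess.run(["adb", "devices"], check=True)

    # Install the APK; the -r flag reinstalls over an older build without losing data.
    subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
    ```

          If either command fails, check the USB cable and the debugging prompt on the phone before retrying; the on-device steps above work just as well without a computer.
    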

      Why should you play Craftsman Beta 4 APK?

      -

      Craftsman Beta 4 APK is not just another clone of Minecraft. It is a game that has its own charm and appeal, and offers you many reasons to play it. Here are some of them:

      -

      The benefits of playing a sandbox game

      -

      A sandbox game is a game that gives you the freedom to create and explore without any predefined goals or rules. You can do whatever you want, whenever you want, and however you want. This can be very relaxing and satisfying, as you can express yourself and your creativity in any way you like. You can also learn new skills, such as problem-solving, spatial awareness, and logical thinking, as you experiment with different possibilities and outcomes.

      -

      The fun and challenges of building and crafting

      -

      One of the main features of Craftsman Beta 4 APK is the ability to build and craft anything you want in a 3D block world. You can use various tools and materials to create different structures and items, such as houses, castles, bridges, vehicles, weapons, furniture, etc. You can also decorate your creations with different colors, textures, and patterns. You can challenge yourself by building complex and intricate designs, or just have fun by making simple and silly ones. The only limit is your imagination.

      -

      The multiplayer mode and online servers

      -

      If you want to share your creations with other players, or see what they have made, you can join or create servers with Craftsman Beta 4 APK. You can play with your friends or strangers from all over the world, and cooperate or compete with them. You can chat with them, trade with them, fight with them, or help them. You can also join different servers with different themes, rules, and game modes, such as survival, creative, adventure, parkour, etc. You can have a lot of fun and make new friends by playing online.

      -

      craftsman 4 apk download latest version
      -craftsman 4 apk for android tv and tablet
      -craftsman 4 apk free download for pc windows
      -craftsman 4 apk mod unlimited resources
      -craftsman 4 apk offline mode
      -craftsman 4 apk update new features
      -craftsman 4 apk review and rating
      -craftsman 4 apk gameplay and tips
      -craftsman 4 apk how to install and play
      -craftsman 4 apk best alternative games
      -craftsman 4 apk compatible devices and requirements
      -craftsman 4 apk bugs and fixes
      -craftsman 4 apk cheats and hacks
      -craftsman 4 apk multiplayer and online mode
      -craftsman 4 apk custom skins and maps
      -craftsman 4 apk sandbox and survival mode
      -craftsman 4 apk creative and building mode
      -craftsman 4 apk adventure and exploration mode
      -craftsman 4 apk simulation and strategy mode
      -craftsman 4 apk fun and relaxing mode
      -craftsman 4 apk challenges and achievements
      -craftsman 4 apk support and feedback
      -craftsman 4 apk privacy policy and terms of service
      -craftsman 4 apk developer and publisher information
      -craftsman 4 apk fan community and forum
      -craftsman 4 apk guide and tutorial
      -craftsman 4 apk comparison with other versions
      -craftsman 4 apk pros and cons
      -craftsman 4 apk frequently asked questions
      -craftsman 4 apk news and updates

      -

      Tips and tricks for playing Craftsman Beta 4 APK

      -

      If you want to improve your gameplay and enjoy Craftsman Beta 4 APK more, here are some tips and tricks that you can use:

      -

      How to use the tools and materials effectively

      -

      To build and craft anything in Craftsman Beta 4 APK, you will need to use different tools and materials. Some of the basic tools are a pickaxe, an axe, a shovel, a sword, a bow, etc. Some of the basic materials are wood, stone, dirt, sand, gravel, etc. You can use these tools and materials to mine blocks, cut trees, dig holes, fight enemies, etc. You can also craft more advanced tools and materials by combining different items in a crafting table. For example, you can craft iron ingots from iron ore blocks using a furnace. You can then use iron ingots to craft iron tools or armor.

      -

      How to survive and thrive in different game modes

      -

          In Craftsman Beta 4 APK, you can choose from different game modes depending on your preference and mood.
    

      In survival mode, you have to gather resources, craft items, and fend off enemies to survive. You also have to manage your health and hunger bars, which can be affected by various factors, such as food, damage, poison, etc. To survive in this mode, you should always have some food and water with you, as well as some weapons and armor. You should also build a shelter where you can sleep and store your items. You can also farm crops and animals for food and materials. You should also be careful of the night time, when more dangerous enemies will spawn.

      -

      In creative mode, you have unlimited resources and can build anything you want without any restrictions. You can also fly and teleport in this mode. To thrive in this mode, you should use your imagination and creativity to create amazing structures and items. You can also experiment with different blocks and items to see what they do and how they interact. You can also use commands and cheats to modify the game settings and features.

      -

      In multiplayer mode, you can join or create servers with other players and cooperate or compete with them. To thrive in this mode, you should communicate and cooperate with your teammates, especially if you are playing in survival or adventure mode. You should also respect the rules and etiquette of the server you are playing on, and avoid griefing or trolling other players. You should also be friendly and helpful to other players, and make new friends.

      -

      How to join and create servers with friends

      -

      If you want to play with your friends online, you can join or create servers with Craftsman Beta 4 APK. To join a server, you can either use the server browser to find a server that suits your preferences, or enter the IP address of a server that you know or have been invited to. To create a server, you can either use the built-in server creator to set up a server on your device, or use a third-party hosting service to rent a server online. You can then invite your friends to join your server by sharing the IP address with them.

      -

      Conclusion

      -

      Craftsman Beta 4 APK is a creative sandbox game for Android that lets you build and craft anything you want in a 3D block world. You can play it alone or with your friends, and explore different game modes and servers. The game has stunning graphics and realistic sound effects that make the game more immersive and enjoyable. The game is also constantly updated with new features and content, so you can always expect something new and exciting.

      -

      If you are interested in playing Craftsman Beta 4 APK, you can download the APK file from one of the websites that offer it for free, such as [text], [text], or [text]. You will need to enable the installation of apps from unknown sources on your device before installing the game. You can then enjoy playing Craftsman Beta 4 APK on your device.

      -

      We hope that this article has helped you learn more about Craftsman Beta 4 APK, and that you will have fun playing it. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      -

      What are the differences between Craftsman Beta 4 APK and Minecraft?

      -

      Craftsman Beta 4 APK is inspired by Minecraft, but it is not an exact copy of it. It has its own unique features and style that make it different from Minecraft. Some of the differences are:

      -
        -
          • Craftsman Beta 4 APK has more realistic graphics and sound effects than Minecraft.
          • Craftsman Beta 4 APK has more tools and materials than Minecraft.
          • Craftsman Beta 4 APK has more game modes than Minecraft.
          • Craftsman Beta 4 APK is free to download and play, while Minecraft requires a purchase.
    
      -

      Is Craftsman Beta 4 APK safe and legal to download?

      -

      Craftsman Beta 4 APK is safe and legal to download as long as you download it from a reliable source. However, since it is not an official app from Google Play Store, it may not be compatible with some devices or may have some bugs and glitches. You should also be careful not to download any malicious or fake files that may harm your device or steal your personal information.

      -

      How can I update Craftsman Beta 4 APK to the latest version?

      -

          To update Craftsman Beta 4 APK to the latest version, you will need to download the latest APK file from one of the websites that offer it for free, such as [text], [text], or [text]. You will then need to uninstall the previous version of the game and install the new version. You can also check for updates from within the game by going to the settings menu and tapping on the update button.
    

      -

      How can I contact the developer of Craftsman Beta 4 APK?

      -

      If you want to contact the developer of Craftsman Beta 4 APK, you can do so by sending an email to slbzer@gmail.com. You can also follow the developer on Twitter at @slbzer. You can send your feedback, suggestions, bug reports, or any other inquiries to the developer.

      -

      What are some other similar games to Craftsman Beta 4 APK?

      -

      If you like Craftsman Beta 4 APK, you might also like some other similar games that are available for Android devices. Some of them are:

      -
        -
          • Minecraft: The original sandbox game that inspired Craftsman Beta 4 APK. You can build and craft anything you want in a pixelated world, and play with millions of other players online.
          • Terraria: A sandbox game that combines building, crafting, exploration, and combat. You can dig, fight, and build in a 2D world with different biomes, enemies, and items.
          • Roblox: A platform that lets you create and play games of various genres and styles. You can also join and create servers with other players and customize your avatar.
          • Block Craft 3D: A sandbox game that lets you build and design your own city with different blocks and items. You can also visit other players' cities and rate them.
    

    
      -
      -
      \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cookie Run Kingdom on Samsung and Build Your Dream Kingdom.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cookie Run Kingdom on Samsung and Build Your Dream Kingdom.md deleted file mode 100644 index 8b877491c07f8f5e5ad18733a388fde112faa2ec..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Cookie Run Kingdom on Samsung and Build Your Dream Kingdom.md +++ /dev/null @@ -1,104 +0,0 @@ - -

      How to Download Cookie Run: Kingdom on Samsung Devices

      -

      If you are looking for a fun and addictive game that combines RPG, strategy, and adventure elements, you might want to check out Cookie Run: Kingdom. This is a mobile game developed by Devsisters Corporation, where you can build your own cookie kingdom, recruit and upgrade cookie heroes, and battle against dark forces in a colorful world.

      -

      Cookie Run: Kingdom is available for both Android and iOS devices, but in this article, we will focus on how to download and play it on Samsung devices. Samsung is one of the most popular brands of Android devices, and it offers a variety of models that can run Cookie Run: Kingdom smoothly.

      -

      how to download cookie run kingdom on samsung


      DOWNLOAD - https://ssurll.com/2uNSOA



      -

      Whether you have a Galaxy S, Galaxy Z, Galaxy A, or any other Samsung device, you can follow this guide to learn how to download and install Cookie Run: Kingdom from different sources, as well as some tips and tricks for playing it on your device.

      -

      What You Need to Play Cookie Run: Kingdom

      -

      Before you download Cookie Run: Kingdom, you need to make sure that your Samsung device meets the minimum or recommended specifications for running the game. Here are the requirements according to the official website:

          | Specification | Minimum | Recommended |
          | --- | --- | --- |
          | OS | Android 5.0 or higher | Android 8.0 or higher |
          | CPU | Snapdragon 625 or higher | Snapdragon 835 or higher |
          | RAM | 2 GB or higher | 4 GB or higher |
          | Storage | 3 GB or higher | 5 GB or higher |
          | Network | Wi-Fi or 4G | Wi-Fi or 5G |
    
      -

      If your Samsung device does not meet the minimum specifications, you may experience some issues such as low frame rate, crashes, or errors while playing the game. If your Samsung device meets or exceeds the recommended specifications, you can enjoy the game with optimal graphics and performance.
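          If you have Android Debug Bridge (adb) set up on a computer, you can read some of these values straight from a connected phone instead of digging through menus. The snippet below is only an illustrative sketch and not an official Devsisters tool: it assumes adb and USB debugging are already working, and it only checks the OS and RAM rows of the table above, not the CPU or storage.

    ```python
    import subprocess

    def adb_shell(command: str) -> str:
        # Run a command in the phone's shell over adb and return its trimmed output.
        result = subprocess.run(
            ["adb", "shell", command], capture_output=True, text=True, check=True
        )
        return result.stdout.strip()

    # Android release, e.g. "8.0" or "13" -- compare against the OS row above.
    android_release = adb_shell("getprop ro.build.version.release")

    # Total memory in kB, parsed from a line like "MemTotal: 3767840 kB".
    mem_total_kb = int(adb_shell("grep MemTotal /proc/meminfo").split()[1])

    print(f"Android version: {android_release}")
    print(f"RAM: {mem_total_kb / 1024 / 1024:.1f} GB")
    print(f"Meets the 2 GB minimum: {mem_total_kb >= 2 * 1024 * 1024}")
    ```
    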

      -

      How to Download and Install Cookie Run: Kingdom from Google Play Store

      -

      The easiest and safest way to download and install Cookie Run: Kingdom on your Samsung device is to use the Google Play Store. This is the official app store for Android devices, where you can find millions of apps and games, including Cookie Run: Kingdom. Here are the steps to follow:

      -
        -
          1. Open the Google Play Store app on your Samsung device. You can find it on your home screen or app drawer.
          2. Search for Cookie Run: Kingdom in the search bar. You can also use voice search by tapping on the microphone icon.
          3. Tap on the game icon that appears in the search results. It should have a blue background with a cookie crown logo.
          4. Tap on Install to start downloading and installing the game on your device. You may need to accept some permissions and terms of service before proceeding.
          5. Wait for the game to download and install on your device. You can check the progress on the notification bar or the Google Play Store app.
          6. Tap on Open to launch the game and enjoy. You may need to sign in with your Google account or create a new one to access the game features.
    
      -

      How to Download and Install Cookie Run: Kingdom from Samsung Galaxy Store

      -

      Another option to download and install Cookie Run: Kingdom on your Samsung device is to use the Samsung Galaxy Store. This is the app store that comes pre-installed on most Samsung devices, where you can find exclusive apps and games, as well as special offers and rewards. Here are the steps to follow:

      -
        -
          1. Open the Samsung Galaxy Store app on your Samsung device. You can find it on your home screen or app drawer.
          2. Search for Cookie Run: Kingdom in the search bar. You can also use voice search by tapping on the microphone icon.
          3. Tap on the game icon that appears in the search results. It should have a blue background with a cookie crown logo.
          4. Tap on Install to start downloading and installing the game on your device. You may need to accept some permissions and terms of service before proceeding.
          5. Wait for the game to download and install on your device. You can check the progress on the notification bar or the Samsung Galaxy Store app.
          6. Tap on Open to launch the game and enjoy. You may need to sign in with your Samsung account or create a new one to access the game features.
    
      -

      How to Download and Install Cookie Run: Kingdom from APK File

      -

      A third option to download and install Cookie Run: Kingdom on your Samsung device is to use an APK file. This is a file format that contains all the data and code of an Android app or game, which you can download from outside the official app stores. However, this method is not recommended, as it may expose your device to security risks, such as malware, viruses, or spyware. If you decide to use this option, make sure you download the APK file from a trusted source, such as APKPure or APKMirror. Here are the steps to follow:

      -
        -
          1. Download the APK file of Cookie Run: Kingdom from a trusted source. You can use your browser or a file manager app to do this.
          2. Enable Unknown Sources in your device settings to allow installation of apps from outside the official stores. To do this, go to Settings > Security > Unknown Sources and toggle it on.
          3. Locate the APK file in your device storage and tap on it. You can use a file manager app or a notification to do this.
          4. Follow the instructions on the screen to install the game. You may need to accept some permissions and terms of service before proceeding.
          5. Tap on Open to launch the game and enjoy. You may need to sign in with your Google account or create a new one to access the game features.
    
      -

      Tips and Tricks for Playing Cookie Run: Kingdom on Samsung Devices

      -

      Now that you have downloaded and installed Cookie Run: Kingdom on your Samsung device, you are ready to play and have fun. However, there are some tips and tricks that can help you improve your gaming experience and performance. Here are some of them:

      -

          Performance
    

      -

      To improve the game performance and reduce lag, you can adjust the graphics settings and battery optimization options on your Samsung device. To do this, go to Settings > Device Care > Battery > Power Mode and choose High Performance or Optimized. You can also go to Settings > Device Care > Battery > App Power Management and exclude Cookie Run: Kingdom from sleeping apps or deep sleeping apps. Additionally, you can go to the game settings and lower the graphics quality, frame rate, or resolution if needed.

      -

      How to install cookie run kingdom on samsung galaxy store
      -Cookie run kingdom samsung download guide and tips
      -Best way to play cookie run kingdom on samsung devices
      -How to fix cookie run kingdom download issues on samsung phones
      -Cookie run kingdom - kingdom builder & battle RPG for samsung users
      -How to update cookie run kingdom on samsung galaxy store
      -Cookie run kingdom samsung compatibility and requirements
      -How to transfer cookie run kingdom data from other devices to samsung
      -Cookie run kingdom samsung review and rating
      -How to get free gems and coins in cookie run kingdom on samsung
      -How to join a guild and battle with friends in cookie run kingdom on samsung
      -Cookie run kingdom samsung gameplay and features
      -How to customize your cookie kingdom and characters in cookie run kingdom on samsung
      -How to unlock new cookies and toppings in cookie run kingdom on samsung
      -How to level up and enhance your cookies in cookie run kingdom on samsung
      -How to use treasures and items in cookie run kingdom on samsung
      -How to earn rewards and achievements in cookie run kingdom on samsung
      -How to access the cookie chronicles and secrets in cookie run kingdom on samsung
      -How to participate in events and challenges in cookie run kingdom on samsung
      -How to contact customer support and report bugs in cookie run kingdom on samsung
      -How to download cookie run kingdom apk for samsung devices
      -Cookie run kingdom mod apk for samsung devices - is it safe and legal?
      -How to play cookie run kingdom offline on samsung devices
      -How to sync cookie run kingdom data with facebook or google account on samsung devices
      -How to delete or uninstall cookie run kingdom from samsung devices
      -Cookie run kingdom vs ovenbreak - which one is better for samsung users?
      -Cookie run kingdom cheats and hacks for samsung devices - do they work?
      -Cookie run kingdom codes and coupons for samsung users - how to redeem them?
      -Cookie run kingdom wallpapers and themes for samsung devices - how to download them?
      -Cookie run kingdom fan art and memes for samsung users - where to find them?

      -

      Controls

      -

      To customize the touch controls and sensitivity on your Samsung device, you can go to the game settings and tap on Controls. Here, you can change the size, position, and opacity of the buttons, as well as the sensitivity of the joystick and the camera. You can also enable or disable auto-aim, auto-skill, and auto-battle options if you prefer.

      -

      Updates

      -

      To enjoy the latest features, events, and bug fixes of Cookie Run: Kingdom, you should keep your game updated on your Samsung device. You can do this by checking for updates on the Google Play Store or the Samsung Galaxy Store, depending on where you downloaded the game from. You can also enable auto-update options on these app stores to update the game automatically when connected to Wi-Fi.

      -

      Conclusion

      -

      Cookie Run: Kingdom is a fun and addictive game that you can download and play on your Samsung device. Whether you use the Google Play Store, the Samsung Galaxy Store, or an APK file, you can follow this guide to learn how to download and install the game easily and safely. You can also use some tips and tricks to improve your gaming experience and performance on your Samsung device.

      -

      If you are a fan of RPG, strategy, and adventure games, you should give Cookie Run: Kingdom a try. You will not regret it. You can build your own cookie kingdom, recruit and upgrade cookie heroes, and battle against dark forces in a colorful world. You can also join a guild, chat with other players, and participate in various events and challenges.

      -

      What are you waiting for? Download Cookie Run: Kingdom today and enjoy!

      -

      FAQs

      -
        -
          1. Q: How much does Cookie Run: Kingdom cost?
             A: Cookie Run: Kingdom is free to download and play, but it offers in-app purchases for some items and features.
          2. Q: Is Cookie Run: Kingdom compatible with other Android devices?
             A: Yes, Cookie Run: Kingdom is compatible with most Android devices that meet the minimum specifications. However, some devices may not support the game or may experience some issues.
          3. Q: Is Cookie Run: Kingdom available for iOS devices?
             A: Yes, Cookie Run: Kingdom is also available for iOS devices. You can download it from the App Store.
          4. Q: How can I contact the developer of Cookie Run: Kingdom?
             A: You can contact the developer of Cookie Run: Kingdom by sending an email to support@cookierun.com or by visiting their official website.
          5. Q: How can I learn more about Cookie Run: Kingdom?
             A: You can learn more about Cookie Run: Kingdom by visiting their official website, their Facebook page, their Twitter account, or their YouTube channel.

          Sources: https://cookierun.fandom.com/wiki/Cookie_Run:_Kingdom, https://kingdom.cookierun.com/en, https://www.facebook.com/CookieRun, https://twitter.com/CookieRun, https://www.youtube.com/channel/UCkZ0YjyvXg9w6e0w8ZtLx9g
    

    
      -
      -
      \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/tests/quantization/test_vq.py b/spaces/simsantonioii/MusicGen-Continuation/tests/quantization/test_vq.py deleted file mode 100644 index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/tests/quantization/test_vq.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.quantization.vq import ResidualVectorQuantizer - - -class TestResidualVectorQuantizer: - - def test_rvq(self): - x = torch.randn(1, 16, 2048) - vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8) - res = vq(x, 1.) - assert res.x.shape == torch.Size([1, 16, 2048]) diff --git a/spaces/sirmews/url-summarizer-playground/README.md b/spaces/sirmews/url-summarizer-playground/README.md deleted file mode 100644 index 607483181f9cf7be8b4282577840c3fc0d710fa1..0000000000000000000000000000000000000000 --- a/spaces/sirmews/url-summarizer-playground/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Supa Bookmarks -emoji: ⚡ -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -duplicated_from: sirmews/supa-bookmarks ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/__init__.py b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/__init__.py deleted file mode 100644 index 397e85bea063e97fc4c12ad4d3e15669b69290bd..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .upfirdn2d import upfirdn2d - -__all__ = ['upfirdn2d'] diff --git a/spaces/sohojoe/project_charles/app_interface_actor.py b/spaces/sohojoe/project_charles/app_interface_actor.py deleted file mode 100644 index 2a575ed151a9117ca1197f55ccf2a5b838b6bc2d..0000000000000000000000000000000000000000 --- a/spaces/sohojoe/project_charles/app_interface_actor.py +++ /dev/null @@ -1,141 +0,0 @@ -import time -import ray -from ray.util.queue import Queue, Empty -from ray.actor import ActorHandle -import numpy as np -import pid_helper - -# Ray Queue's take ~.5 seconds to splin up; -# this class creates a pool of queues to cycle through -class QueueFactory: - def __init__(self, max_size:int): - self.queues:[Queue] = [] - self.queue_size = 5 - self.max_size = max_size - while len(self.queues) < self.queue_size: - self.queues.append(Queue(maxsize=max_size)) - - def get_queue(self)->Queue: - queue = self.queues.pop(0) - self.queues.append(Queue(maxsize=self.max_size)) - return queue - -@ray.remote -class AppInterfaceActor: - def __init__(self): - self.audio_input_queue = Queue(maxsize=3000) # Adjust the size as needed - self.video_input_queue = Queue(maxsize=10) # Adjust the size as needed - self.audio_output_queue_factory = QueueFactory(max_size=50) - self.audio_output_queue = self.audio_output_queue_factory.get_queue() - self.video_output_queue = Queue(maxsize=10) # Adjust the size as needed - self.debug_str = "" - self.state = "Initializing" - self.charles_app_pids = [] - - @staticmethod - def get_singleton(): - return AppInterfaceActor.options( - name="AppInterfaceActor", - get_if_exists=True, - ).remote() - -# functions for UI 
to enqueue input, dequeue output - async def enqueue_video_input_frame(self, shared_tensor_ref): - if self.video_input_queue.full(): - evicted_item = await self.video_input_queue.get_async() - del evicted_item - await self.video_input_queue.put_async(shared_tensor_ref) - - async def enqueue_audio_input_frame(self, shared_buffer_ref): - if self.audio_input_queue.full(): - evicted_item = await self.audio_input_queue.get_async() - del evicted_item - await self.audio_input_queue.put_async(shared_buffer_ref) - - async def dequeue_audio_output_frame_async(self): - start_time = time.time() - try: - frame = await self.audio_output_queue.get_async(block=False) - except Empty: - frame = None - elapsed_time = time.time() - start_time - if elapsed_time > 0.1: - print (f"dequeue_audio_output_frame_async time: {elapsed_time}. was empty: {frame is None}. frame type: {type(frame) if frame else str(0)}") - return frame - - async def dequeue_video_output_frames_async(self): - video_frames = [] - if self.video_output_queue.empty(): - return video_frames - while not self.video_output_queue.empty(): - shared_tensor = await self.video_output_queue.get_async() - video_frames.append(shared_tensor) - return video_frames - -# functions for application to dequeue input, enqueue output - def get_audio_output_queue(self)->Queue: - return self.audio_output_queue - - async def cycle_output_queue(self)->Queue: - self.audio_output_queue.shutdown(grace_period_s=0.1) - self.audio_output_queue = self.audio_output_queue_factory.get_queue() - return self.audio_output_queue - - async def enqueue_video_output_frame(self, shared_tensor_ref): - if self.video_output_queue.full(): - evicted_item = await self.video_output_queue.get_async() - del evicted_item - await self.video_output_queue.put_async(shared_tensor_ref) - - async def dequeue_audio_input_frames_async(self): - audio_frames = [] - if self.audio_input_queue.empty(): - return audio_frames - while not self.audio_input_queue.empty(): - shared_tensor = await self.audio_input_queue.get_async() - audio_frames.append(shared_tensor) - return audio_frames - - async def dequeue_video_input_frames_async(self): - video_frames = [] - if self.video_input_queue.empty(): - return video_frames - while not self.video_input_queue.empty(): - shared_tensor = await self.video_input_queue.get_async() - video_frames.append(shared_tensor) - return video_frames - -# pid helpers - async def add_charles_app_pid(self, pid:int): - self.charles_app_pids.append(pid) - - async def get_charles_app_pids(self)->[int]: - # prune dead pids - running_charles_app_pids = [] - for pid in self.charles_app_pids: - if pid_helper.is_pid_running(pid): - running_charles_app_pids.append(pid) - self.charles_app_pids = running_charles_app_pids - return self.charles_app_pids - - async def is_charles_app_running(self)->bool: - # prune dead pids - running_charles_app_pids = [] - for pid in self.charles_app_pids: - if pid_helper.is_pid_running(pid): - running_charles_app_pids.append(pid) - self.charles_app_pids = running_charles_app_pids - return len(self.charles_app_pids) > 0 - -# debug helpers - async def get_debug_output(self)->str: - return self.debug_str - - async def set_debug_output(self, debug_str:str): - self.debug_str = debug_str - - async def get_state(self)->str: - return self.state - - async def set_state(self, state:str): - self.state = state \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/list_dataset.py 
b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/list_dataset.py deleted file mode 100644 index 12f00aa43661d6bad701c9e72653ba8779136906..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/list_dataset.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class ListDataset(BaseWrapperDataset): - def __init__(self, dataset, sizes=None): - super().__init__(dataset) - self._sizes = sizes - - def __iter__(self): - for x in self.dataset: - yield x - - def collater(self, samples): - return samples - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - def set_epoch(self, epoch): - pass diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py deleted file mode 100644 index 7f30dd98bb19b7bc414790787053efb231855129..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py +++ /dev/null @@ -1,767 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.model_parallel.models.pipeline_parallel_transformer.layers import ( - Embedding, - TransformerDecoderEmbedding, - TransformerDecoderLayer, - TransformerDecoderOutputLayer, - TransformerEncoderEmbedding, - TransformerEncoderLayer, - TransformerEncoderLayerNorm, -) -from fairseq.models import ( - BaseFairseqModel, - FairseqDecoder, - FairseqEncoder, - register_model, - register_model_architecture, -) -from fairseq.models.fairseq_encoder import EncoderOut -from fairseq.models.transformer import ( - base_architecture, - transformer_iwslt_de_en, - transformer_wmt_en_de_big, -) -from fairseq.modules import SinusoidalPositionalEmbedding - - -logger = logging.getLogger(__name__) - - -DEFAULT_MAX_SOURCE_POSITIONS = 1024 -DEFAULT_MAX_TARGET_POSITIONS = 1024 -TORCH_PIPE = False -RPC_INIT = False - -def import_pipe(): - global TORCH_PIPE - global RPC_INIT - try: - from torch.distributed.pipeline.sync import Pipe # noqa - global Pipe - from torch.distributed.pipeline.sync.utils import partition_model - global partition_model - from torch.distributed import rpc - import tempfile - TORCH_PIPE = True - # Initialize single process RPC agent since TORCH_PIPE requires - # RRef. RRef depends on RPC being initialized and as a result we initialize - # RPC with a single node. 
- tmpfile = tempfile.NamedTemporaryFile() - if not RPC_INIT: - rpc.init_rpc( - name="worker", - rank=0, - world_size=1, - rpc_backend_options=rpc.TensorPipeRpcBackendOptions( - init_method="file://{}".format(tmpfile.name), - ) - ) - RPC_INIT = True - logger.info('Using torch pipe') - except ImportError: - try: - from fairscale.nn import Pipe # noqa - logger.info('Using fairscale pipe') - except ImportError: - raise ImportError("Please install fairscale with: pip install fairscale") - - -@register_model("pipeline_parallel_transformer") -class PipelineParallelTransformerModel(BaseFairseqModel): - def __init__(self, encoder, decoder, balance, devices, chunks, checkpoint): - import_pipe() - super().__init__() - assert isinstance(encoder, FairseqEncoder) - assert isinstance(decoder, FairseqDecoder) - encoder_module_list = ( - [encoder.embedding_layer] - + list(encoder.encoder_layers) - + [encoder.final_layer_norm] - ) - self.num_encoder_modules = len(encoder_module_list) - decoder_module_list = ( - [decoder.embedding_layer] - + list(decoder.decoder_layers) - + [decoder.decoder_output_layer] - ) - self.num_decoder_modules = len(decoder_module_list) - module_list = encoder_module_list + decoder_module_list - self.devices = devices - if TORCH_PIPE: - self.model = Pipe( - partition_model(nn.Sequential(*module_list), balance, devices), - chunks=chunks, - checkpoint=checkpoint, - ) - else: - self.model = Pipe( - nn.Sequential(*module_list), - balance=balance, - devices=devices, - chunks=chunks, - checkpoint=checkpoint, - ) - self.encoder_max_positions = self.max_positions_helper( - encoder.embedding_layer, "max_source_positions" - ) - self.decoder_max_positions = self.max_positions_helper( - decoder.embedding_layer, "max_target_positions" - ) - self.adaptive_softmax = getattr(decoder, "adaptive_softmax", None) - # Note: To be populated during inference - self.encoder = None - self.decoder = None - - def forward(self, src_tokens, src_lengths, prev_output_tokens): - if self.training: - input_lst = [src_tokens, src_lengths, prev_output_tokens] - input = tuple(i.to(self.devices[0], non_blocking=True) for i in input_lst) - if TORCH_PIPE: - return self.model(input).local_value() - else: - return self.model(input) - else: - assert self.encoder is not None and self.decoder is not None, ( - "encoder and decoder need to be initialized by " - + "calling the `prepare_for_inference_()` method" - ) - encoder_output_tuple = self.encoder(input) - return self.decoder(encoder_output_tuple) - - def prepare_for_inference_(self, cfg): - if self.encoder is not None and self.decoder is not None: - logger.info("Encoder and Decoder already initialized") - return - encoder_module_list = [] - decoder_module_list = [] - module_count = 0 - for partition in self.model.partitions: - for module in partition: - if module_count < self.num_encoder_modules: - encoder_module_list.append(module) - else: - decoder_module_list.append(module) - module_count += 1 - self.model = None - self.encoder = TransformerEncoder(cfg.distributed_training, None, None, encoder_module_list) - self.decoder = TransformerDecoder( - cfg.distributed_training, None, None, decoder_module_list=decoder_module_list - ) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--activation-fn', - choices=utils.get_available_activation_fns(), - help='activation function to use') - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - 
parser.add_argument('--attention-dropout', type=float, metavar='D', - help='dropout probability for attention weights') - parser.add_argument('--activation-dropout', '--relu-dropout', type=float, metavar='D', - help='dropout probability after activation in FFN.') - parser.add_argument('--encoder-embed-path', type=str, metavar='STR', - help='path to pre-trained encoder embedding') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-ffn-embed-dim', type=int, metavar='N', - help='encoder embedding dimension for FFN') - parser.add_argument('--encoder-layers', type=int, metavar='N', - help='num encoder layers') - parser.add_argument('--encoder-attention-heads', type=int, metavar='N', - help='num encoder attention heads') - parser.add_argument('--encoder-normalize-before', action='store_true', - help='apply layernorm before each encoder block') - parser.add_argument('--encoder-learned-pos', action='store_true', - help='use learned positional embeddings in the encoder') - parser.add_argument('--decoder-embed-path', type=str, metavar='STR', - help='path to pre-trained decoder embedding') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-ffn-embed-dim', type=int, metavar='N', - help='decoder embedding dimension for FFN') - parser.add_argument('--decoder-layers', type=int, metavar='N', - help='num decoder layers') - parser.add_argument('--decoder-attention-heads', type=int, metavar='N', - help='num decoder attention heads') - parser.add_argument('--decoder-learned-pos', action='store_true', - help='use learned positional embeddings in the decoder') - parser.add_argument('--decoder-normalize-before', action='store_true', - help='apply layernorm before each decoder block') - parser.add_argument('--share-decoder-input-output-embed', action='store_true', - help='share decoder input and output embeddings') - parser.add_argument('--share-all-embeddings', action='store_true', - help='share encoder, decoder and output embeddings' - ' (requires shared dictionary and embed dim)') - parser.add_argument('--no-token-positional-embeddings', default=False, action='store_true', - help='if set, disables positional embeddings (outside self attention)') - parser.add_argument('--adaptive-softmax-cutoff', metavar='EXPR', - help='comma separated list of adaptive softmax cutoff points. 
' - 'Must be used with adaptive_loss criterion'), - parser.add_argument('--adaptive-softmax-dropout', type=float, metavar='D', - help='sets adaptive softmax dropout for the tail projections') - parser.add_argument('--num-embedding-chunks', type=int, metavar='N', default=1, - help='Number of embedding layer chunks (enables more even distribution' - 'of optimizer states across data parallel nodes' - 'when using optimizer state sharding and' - 'a big embedding vocabulary)') - # fmt: on - - @classmethod - def build_model_base(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not hasattr(args, "max_source_positions"): - args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - if not hasattr(args, "max_target_positions"): - args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None, num_embed_chunks=1): - assert embed_dim % num_embed_chunks == 0, ( - f"Number of embedding chunks = {num_embed_chunks} should be " - + f"divisible by the embedding dimension = {embed_dim}" - ) - assert path is None or num_embed_chunks == 1, ( - "Loading embedding from a path with number of embedding chunks > 1" - + " is not yet supported" - ) - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - # if provided, load from preloaded dictionaries - if path: - emb = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - else: - embed_chunk_dim = embed_dim // num_embed_chunks - emb = nn.ModuleList() - for i in range(num_embed_chunks): - emb.append(Embedding(num_embeddings, embed_chunk_dim, padding_idx)) - return emb - - num_embed_chunks = args.num_embedding_chunks - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, - args.encoder_embed_dim, - args.encoder_embed_path, - num_embed_chunks, - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - assert args.share_decoder_input_output_embed or num_embed_chunks == 1, ( - "Not sharing decoder I/O embeddings is not yet supported with number of " - + "embedding chunks > 1" - ) - encoder_embed_tokens = build_embedding( - src_dict, - args.encoder_embed_dim, - args.encoder_embed_path, - num_embed_chunks, - ) - decoder_embed_tokens = build_embedding( - tgt_dict, - args.decoder_embed_dim, - args.decoder_embed_path, - num_embed_chunks, - ) - - encoder = cls.build_encoder(args, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(args, tgt_dict, decoder_embed_tokens) - return (encoder, decoder) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return TransformerDecoder(args, tgt_dict, embed_tokens) - - @classmethod - def build_model(cls, args, task): - encoder, decoder = 
cls.build_model_base(args, task) - return PipelineParallelTransformerModel( - encoder=encoder, - decoder=decoder, - balance=utils.eval_str_list(args.pipeline_balance, type=int), - devices=utils.eval_str_list(args.pipeline_devices, type=int), - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - - def output_layer(self, features, **kwargs): - """Project features to the default output size (typically vocabulary size).""" - return self.decoder.output_layer(features, **kwargs) - - def max_positions(self): - """Maximum length supported by the model.""" - return (self.encoder_max_positions, self.decoder_max_positions) - - def max_positions_helper( - self, embedding_layer, max_positions_field="max_source_positions" - ): - """Maximum input length supported by the encoder or decoder.""" - if embedding_layer.embed_positions is None: - return getattr(embedding_layer, max_positions_field) - return min( - getattr(embedding_layer, max_positions_field), - embedding_layer.embed_positions.max_positions, - ) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - """Get normalized probabilities (or log probs) from a net's output.""" - - if hasattr(self, "adaptive_softmax") and self.adaptive_softmax is not None: - if sample is not None: - assert "target" in sample - target = sample["target"] - else: - target = None - out = self.adaptive_softmax.get_log_prob(net_output, target=target) - return out.exp_() if not log_probs else out - - # A Pipe() module returns a tuple of tensors as the output. - # In this case, the tuple has one element - the output tensor of logits - logits = net_output if isinstance(net_output, torch.Tensor) else net_output[0] - if log_probs: - return utils.log_softmax(logits, dim=-1, onnx_trace=False) - else: - return utils.softmax(logits, dim=-1, onnx_trace=False) - - def max_decoder_positions(self): - """Maximum length supported by the decoder.""" - return self.decoder_max_positions - - def load_state_dict(self, state_dict, strict=True, model_cfg=None): - """Copies parameters and buffers from *state_dict* into this module and - its descendants. - - Overrides the method in :class:`nn.Module`. Compared with that method - this additionally "upgrades" *state_dicts* from old checkpoints. 
- """ - self.upgrade_state_dict(state_dict) - is_regular_transformer = not any("model.partitions" in k for k in state_dict) - if is_regular_transformer: - state_dict = self.convert_to_pipeline_parallel_state_dict(state_dict) - return super().load_state_dict(state_dict, strict) - - def convert_to_pipeline_parallel_state_dict(self, state_dict): - new_state_dict = self.state_dict() - encoder_layer_idx = 0 - decoder_layer_idx = 0 - encoder_key_suffixes = [ - "self_attn.k_proj.weight", - "self_attn.k_proj.bias", - "self_attn.v_proj.weight", - "self_attn.v_proj.bias", - "self_attn.q_proj.weight", - "self_attn.q_proj.bias", - "self_attn.out_proj.weight", - "self_attn.out_proj.bias", - "self_attn_layer_norm.weight", - "self_attn_layer_norm.bias", - "fc1.weight", - "fc1.bias", - "fc2.weight", - "fc2.bias", - "final_layer_norm.weight", - "final_layer_norm.bias", - ] - decoder_key_suffixes = [ - "self_attn.k_proj.weight", - "self_attn.k_proj.bias", - "self_attn.v_proj.weight", - "self_attn.v_proj.bias", - "self_attn.q_proj.weight", - "self_attn.q_proj.bias", - "self_attn.out_proj.weight", - "self_attn.out_proj.bias", - "self_attn_layer_norm.weight", - "self_attn_layer_norm.bias", - "encoder_attn.k_proj.weight", - "encoder_attn.k_proj.bias", - "encoder_attn.v_proj.weight", - "encoder_attn.v_proj.bias", - "encoder_attn.q_proj.weight", - "encoder_attn.q_proj.bias", - "encoder_attn.out_proj.weight", - "encoder_attn.out_proj.bias", - "encoder_attn_layer_norm.weight", - "encoder_attn_layer_norm.bias", - "fc1.weight", - "fc1.bias", - "fc2.weight", - "fc2.bias", - "final_layer_norm.weight", - "final_layer_norm.bias", - ] - for pid, partition in enumerate(self.model.partitions): - logger.info(f"Begin Partition {pid}") - for mid, module in enumerate(partition): - # fmt: off - if isinstance(module, TransformerEncoderEmbedding): - new_state_dict[f'model.partitions.{pid}.{mid}.embed_tokens.weight'] = state_dict['encoder.embed_tokens.weight'] - new_state_dict[f'model.partitions.{pid}.{mid}.embed_positions._float_tensor'] = state_dict['encoder.embed_positions._float_tensor'] - if isinstance(module, TransformerEncoderLayer): - for suffix in encoder_key_suffixes: - new_state_dict[f'model.partitions.{pid}.{mid}.{suffix}'] = state_dict[f'encoder.layers.{encoder_layer_idx}.{suffix}'] - encoder_layer_idx += 1 - if isinstance(module, TransformerDecoderLayer): - for suffix in decoder_key_suffixes: - new_state_dict[f'model.partitions.{pid}.{mid}.{suffix}'] = state_dict[f'decoder.layers.{decoder_layer_idx}.{suffix}'] - decoder_layer_idx += 1 - if isinstance(module, TransformerEncoderLayerNorm): - if 'encoder.layer_norm.weight' in state_dict: - new_state_dict[f'model.partitions.{pid}.{mid}.layer_norm.weight'] = state_dict['encoder.layer_norm.weight'] - new_state_dict[f'model.partitions.{pid}.{mid}.layer_norm.bias'] = state_dict['encoder.layer_norm.bias'] - if isinstance(module, TransformerDecoderEmbedding): - new_state_dict[f'model.partitions.{pid}.{mid}.embed_tokens.weight'] = state_dict['decoder.embed_tokens.weight'] - new_state_dict[f'model.partitions.{pid}.{mid}.embed_positions._float_tensor'] = state_dict['decoder.embed_positions._float_tensor'] - if isinstance(module, TransformerDecoderOutputLayer): - new_state_dict[f'model.partitions.{pid}.{mid}.output_projection.weight'] = state_dict['decoder.output_projection.weight'] - # fmt: on - return new_state_dict - - -class TransformerEncoder(FairseqEncoder): - """ - Transformer encoder consisting of *args.encoder_layers* layers. 
Each layer - is a :class:`TransformerEncoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens, encoder_module_list=None): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - import_pipe() - self.use_pipeline = encoder_module_list is not None - if not self.use_pipeline: - self.embedding_layer = TransformerEncoderEmbedding(args, embed_tokens) - self.encoder_layers = nn.Sequential(*[TransformerEncoderLayer(args) for i in range(args.encoder_layers)]) - if isinstance(embed_tokens, nn.ModuleList): - emb_dim = sum(e.embedding_dim for e in embed_tokens) - else: - emb_dim = embed_tokens.embedding_dim - self.final_layer_norm = TransformerEncoderLayerNorm(args, emb_dim) - else: - encoder_balance = utils.eval_str_list( - args.pipeline_encoder_balance, type=int - ) - encoder_devices = utils.eval_str_list( - args.pipeline_encoder_devices, type=int - ) - assert sum(encoder_balance) == len(encoder_module_list), ( - f"Sum of encoder_balance={encoder_balance} is not equal " - + f"to num_encoder_modules={len(encoder_module_list)}" - ) - if TORCH_PIPE: - self.model = Pipe( - module=partition_model(nn.Sequential(*encoder_module_list), encoder_balance, encoder_devices), - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - else: - self.model = Pipe( - module=nn.Sequential(*encoder_module_list), - balance=encoder_balance, - devices=encoder_devices, - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - - def forward(self, src_tokens, src_lengths): - """ - Args: - input_tuple( - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - ) - - Returns: - output_tuple( - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - prev_output_tokens - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. - ) - """ - dummy_prev_output_tokens = torch.zeros( - 1, dtype=src_tokens.dtype, device=src_tokens.device - ) - input_tuple = (src_tokens, src_lengths, dummy_prev_output_tokens) - if self.use_pipeline: - input_tuple = tuple(i.to(self.model.devices[0]) for i in input_tuple) - if TORCH_PIPE: - encoder_out = self.model(input_tuple).local_value() - else: - encoder_out = self.model(input_tuple) - else: - encoder_embed_output_tuple = self.embedding_layer(input_tuple) - encoder_layers_output = self.encoder_layers(encoder_embed_output_tuple) - encoder_out = self.final_layer_norm(encoder_layers_output) - # first element is the encoder output - # second element is the encoder padding mask - # the remaining elements of EncoderOut are not computed by - # the PipelineParallelTransformer - return EncoderOut(encoder_out[0], encoder_out[1], None, None, None, None) - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. 
- - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out.encoder_out is not None: - encoder_out = encoder_out._replace( - encoder_out=encoder_out.encoder_out.index_select(1, new_order) - ) - if encoder_out.encoder_padding_mask is not None: - encoder_out = encoder_out._replace( - encoder_padding_mask=encoder_out.encoder_padding_mask.index_select( - 0, new_order - ) - ) - if encoder_out.encoder_embedding is not None: - encoder_out = encoder_out._replace( - encoder_embedding=encoder_out.encoder_embedding.index_select( - 0, new_order - ) - ) - if encoder_out.encoder_states is not None: - for idx, state in enumerate(encoder_out.encoder_states): - encoder_out.encoder_states[idx] = state.index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embedding_layer.embed_positions is None: - return self.embedding_layer.max_source_positions - return min( - self.embedding_layer.max_source_positions, - self.embedding_layer.embed_positions.max_positions, - ) - - -class TransformerDecoder(FairseqDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, - args, - dictionary, - embed_tokens, - no_encoder_attn=False, - decoder_module_list=None, - ): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - import_pipe() - self.use_pipeline = decoder_module_list is not None - if not self.use_pipeline: - self.embedding_layer = TransformerDecoderEmbedding(args, embed_tokens) - self.decoder_layers = nn.Sequential(*[ - TransformerDecoderLayer(args, no_encoder_attn) - for _ in range(args.decoder_layers) - ]) - self.decoder_output_layer = TransformerDecoderOutputLayer( - args, embed_tokens, dictionary - ) - else: - decoder_balance = utils.eval_str_list( - args.pipeline_decoder_balance, type=int - ) - decoder_devices = utils.eval_str_list( - args.pipeline_decoder_devices, type=int - ) - assert sum(decoder_balance) == len(decoder_module_list), ( - f"Sum of decoder_balance={decoder_balance} is not equal " - + f"to num_decoder_modules={len(decoder_module_list)}" - ) - if TORCH_PIPE: - self.model = Pipe( - module=partition_model(nn.Sequential(*decoder_module_list), decoder_balance, decoder_devices), - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - else: - self.model = Pipe( - module=nn.Sequential(*decoder_module_list), - balance=decoder_balance, - devices=decoder_devices, - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - - def forward( - self, - prev_output_tokens, - encoder_out=None, - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - features_only (bool, optional): only return features without - applying output layer (default: False). 
- - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - input_tuple = ( - encoder_out.encoder_out, - encoder_out.encoder_padding_mask, - prev_output_tokens, - ) - if self.use_pipeline: - input_tuple = tuple(i.to(self.model.devices[0]) for i in input_tuple) - if TORCH_PIPE: - return (self.model(input_tuple).local_value(),) - else: - return (self.model(input_tuple),) - else: - embed_layer_output = self.embedding_layer(input_tuple) - state = self.decoder_layers(embed_layer_output) - return (self.decoder_output_layer(state),) - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_out) - else: - return features - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embedding_layer.embed_positions is None: - return self.embedding_layer.max_target_positions - return min( - self.embedding_layer.max_target_positions, - self.embedding_layer.embed_positions.max_positions, - ) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - if isinstance(self.embed_positions, SinusoidalPositionalEmbedding): - weights_key = "{}.embed_positions.weights".format(name) - if weights_key in state_dict: - del state_dict[weights_key] - state_dict[ - "{}.embed_positions._float_tensor".format(name) - ] = torch.FloatTensor(1) - - for i in range(len(self.layers)): - # update layer norms - layer_norm_map = { - "0": "self_attn_layer_norm", - "1": "encoder_attn_layer_norm", - "2": "final_layer_norm", - } - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layers.{}.layer_norms.{}.{}".format(name, i, old, m) - if k in state_dict: - state_dict[ - "{}.layers.{}.{}.{}".format(name, i, new, m) - ] = state_dict[k] - del state_dict[k] - - version_key = "{}.version".format(name) - if utils.item(state_dict.get(version_key, torch.Tensor([1]))[0]) <= 2: - # earlier checkpoints did not normalize after the stack of layers - self.layer_norm = None - self.normalize = False - state_dict[version_key] = torch.Tensor([1]) - - return state_dict - - -@register_model_architecture( - "pipeline_parallel_transformer", "transformer_iwslt_de_en_pipeline_parallel" -) -def transformer_iwslt_de_en_dist(args): - transformer_iwslt_de_en(args) - - -@register_model_architecture( - "pipeline_parallel_transformer", "transformer_wmt_en_de_big_pipeline_parallel" -) -def transformer_wmt_en_de_big_dist(args): - transformer_wmt_en_de_big(args) diff --git a/spaces/stomexserde/gpt4-ui/Examples/Avanquest Web Easy Professional 9 !!LINK!! Crack.md b/spaces/stomexserde/gpt4-ui/Examples/Avanquest Web Easy Professional 9 !!LINK!! Crack.md deleted file mode 100644 index da363864d339beda3a9a3b1c884c66530c63fc01..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Avanquest Web Easy Professional 9 !!LINK!! 
Crack.md +++ /dev/null @@ -1,74 +0,0 @@ - -

      Avanquest Web Easy Professional 9 Crack: What Is It and Why You Should Avoid It

      -

      If you are looking for a web design software that can help you create your own website with ease, you may have come across Avanquest Web Easy Professional 9. This is a software that claims to offer everything you need to create your ideal website with built-in templates and e-commerce tools. You can drag and drop elements, generate HTML code automatically, and customize your web pages without any programming or technical skills required.

      -

      avanquest web easy professional 9 crack


      Download ——— https://urlgoal.com/2uI7fU



      -

      However, you may also have seen some websites that offer a cracked version of this software for free. Cracked software is software that has been modified or hacked to bypass the license key or activation code that is required to use it legally. By using cracked software, you are essentially stealing the software from its developers and violating their intellectual property rights.

      -

      While it may sound tempting to get paid software for free, using cracked software is not only illegal but also risky. In this article, we will explain the dangers of using cracked software, the benefits of using legitimate software, and the best alternatives to the Avanquest Web Easy Professional 9 crack.

      -

      The Risks of Using Cracked Software

      -

      Using cracked software can expose you to a range of security risks, from malware infections to hefty fines. Here are some of the most common risks of using cracked software:

      -

      Malware infections

      -

      One of the biggest dangers of downloading cracked software is that it may contain malware, which are malicious programs that can harm your computer or steal your personal information. A report by security company Cybereason estimates that over 500,000 machines have been infected by malware from just one cracked app. Once you install cracked software, the malware hidden inside can access your files, passwords, browser history, cryptocurrency wallets, camera, and more. Some malware can even download more malware, making the problem worse. The worst part is that some malware can hide themselves from your antivirus or firewall, so you may not even know that your machine has been compromised.

      -

      -

      Dodgy websites

      -

      To download cracked software, you usually have to visit websites that specialize in cracking or pirating. These websites are already on the wrong side of the law, so they have little incentive to protect their users. Cracking websites often have pop-ups or redirects that send you to further dangerous sites, where you may encounter adware, ransomware, phishing, or other scams. You may also download fake or corrupted files that do not work or damage your system.

      -

      Software malfunction

      -

      Another reason to avoid cracked software is that it may not work properly or at all. Cracked software often has bugs, errors, or missing features that affect its performance and functionality. For example, some cracked software may crash frequently, display annoying ads or messages, corrupt your files or data, or fail to connect to online services. You may also miss out on important updates or patches that fix security vulnerabilities or improve compatibility with other programs or devices.

      -

      Legal issues

      -

      Using cracked software is not only unethical, but also illegal. By downloading and installing cracked software, you are violating the terms and conditions of the software license agreement and infringing the copyright of the software developers. This can result in serious legal consequences, such as fines, lawsuits, or even criminal charges. For example, in 2019, a man from California was sentenced to 18 months in prison for selling pirated software worth over $1.2 million. You may also face legal action from the owners of the websites or servers that host the cracked software, as they may be liable for distributing illegal content.

      -

      Ethical concerns

      -

      Finally, using cracked software is unfair and disrespectful to the software developers who spend time, money, and effort to create and maintain their products. Software development is a complex and challenging process that requires a lot of skills, creativity, and resources. By using cracked software, you are depriving the developers of their rightful income and recognition. You are also discouraging them from continuing to improve their software or create new ones. Moreover, you are contributing to a culture of piracy and theft that harms the entire software industry and its customers.

      -

      The Benefits of Using Legitimate Software

      -

      Now that you know the risks of using cracked software, let's look at the benefits of using legitimate software. Here are some of the advantages of using legal and licensed software:

      -

      Security and reliability

      -

      One of the main benefits of using legitimate software is that it is safe and trustworthy. You can download it from official sources that guarantee its quality and authenticity. You can also scan it with your antivirus or firewall to ensure that it is free of malware or viruses. Moreover, you can enjoy a smooth and stable performance without any glitches, crashes, or errors.

      -

      Updates and support

      -

      Another benefit of using legitimate software is that you can access regular updates and support from the software developers or providers. Updates are important because they fix bugs, improve features, enhance security, and ensure compatibility with other programs or devices. Support is also essential because it helps you troubleshoot any issues or problems that you may encounter while using the software. You can contact the support team via email, phone, chat, or forums and get professional assistance or guidance.

      -

      Features and functionality

      -

      A third benefit of using legitimate software is that you can enjoy all the features and functionality that it offers. You can use all the tools, options, settings, and modes that are available in the software without any limitations or restrictions. You can also customize your web pages according to your preferences and needs. For example, with Avanquest Web Easy Professional 9, you can choose from over 600 templates, add e-commerce features, integrate social media widgets, optimize your site for search engines, and more.

      -

      Customer satisfaction

      -

      A fourth benefit of using legitimate software is that you can achieve customer satisfaction and peace of mind. You can be confident that you are using a high-quality product that meets your expectations and requirements. You can also be proud that you are supporting the software developers who deserve your respect and appreciation. You can also avoid any legal troubles or ethical dilemmas that may arise from using cracked software.

      -

      Respect for developers

      -

      A fifth benefit of using legitimate software is that you can show respect and gratitude to the software developers who create and maintain their products. Software developers are professionals who work hard to provide you with useful and innovative solutions for your web design needs. They invest a lot of time, money, and effort to develop and improve their software. By using legitimate software, you are acknowledging their work and rewarding their efforts. You are also encouraging them to continue to produce more quality software for you and other customers.

      -

      The Best Alternatives to Avanquest Web Easy Professional 9 Crack

      -

      If you are looking for a web design software that is legal, safe, reliable, and affordable, there are plenty of alternatives to Avanquest Web Easy Professional 9 crack. Here are some of the best options that you can consider:

      -

      WordPress

      -

      WordPress is one of the most popular and widely used web design platforms in the world. It is a free and open-source content management system (CMS) that allows you to create and manage your website with ease. WordPress offers thousands of themes and plugins that you can use to customize your site according to your needs and preferences. You can also access a large community of users and developers who can help you with any questions or issues that you may have.

      -

      Bootstrap

      -

      Bootstrap is another popular web design platform that is free and open-source. It is a framework that helps you create responsive and mobile-friendly websites with HTML, CSS, and JavaScript. Bootstrap provides you with a set of ready-made components, such as buttons, forms, menus, icons, and more, that you can use to build your web pages quickly and easily. You can also customize Bootstrap with your own styles and scripts.
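      To make the idea of ready-made components concrete, here is a minimal, illustrative Bootstrap page (a sketch added for this article, not a quote from Bootstrap's documentation; the CDN URL and version number are examples, so check the official documentation for current ones):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Bootstrap's stylesheet loaded from a CDN; the version shown is only an example -->
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css">
  </head>
  <body>
    <div class="container my-4">
      <!-- Ready-made component classes: a styled button and an alert box -->
      <button type="button" class="btn btn-primary">Get started</button>
      <div class="alert alert-info mt-3">The layout and styling here come entirely from Bootstrap's CSS classes.</div>
    </div>
  </body>
</html>
```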

      -

      Dreamweaver

      -

      Dreamweaver is a web design software that is developed by Adobe, a leading company in the software industry. It is a paid software that offers a range of features and tools for creating and editing websites. Dreamweaver allows you to work with both code and visual elements, giving you more control and flexibility over your web design. You can also use Dreamweaver to connect to online services, such as WordPress, FTP, or databases, and manage your website files and folders.

      -

      Conclusion: Avanquest Web Easy Professional 9 Crack Is Not Worth It

      -

      In conclusion, Avanquest Web Easy Professional 9 crack is not a good option for web design. It is illegal, risky, unreliable, and unethical to use cracked software. You may expose yourself to malware infections, dodgy websites, software malfunction, legal issues, and ethical concerns. You may also miss out on the benefits of using legitimate software, such as security and reliability, updates and support, features and functionality, customer satisfaction, and respect for developers.

      -

      Therefore, we recommend that you avoid Avanquest Web Easy Professional 9 crack and choose a legitimate web design software instead. There are many alternatives that are legal, safe, reliable, and affordable. Some of the best ones are WordPress, Bootstrap, and Dreamweaver. These platforms can help you create your own website with ease and professionalism.

      -

      If you want to learn more about web design or get help with your web design project, feel free to contact us. We are a team of experts who can provide you with quality web design services at reasonable prices. We can help you create a website that suits your needs and preferences. We can also help you optimize your website for search engines, improve your website performance, and enhance your website security. We are ready to assist you with any web design challenge that you may have.

      -

      Thank you for reading this article. We hope that you found it informative and useful. If you have any questions or feedback, please leave them in the comments section below. We would love to hear from you.

      -

      Frequently Asked Questions

      -

      Here are some of the most common questions that people ask about Avanquest Web Easy Professional 9 crack:

      -

      Q: Is Avanquest Web Easy Professional 9 free?

      -

      A: No, Avanquest Web Easy Professional 9 is not free. It is a paid software that costs $49.95 for a single license. However, you can download a free trial version of the software from the official website. The trial version allows you to use the software for 15 days with limited features.

      -

      Q: How do I activate Avanquest Web Easy Professional 9?

      -

      A: To activate Avanquest Web Easy Professional 9, you need to purchase a license key from the official website or from an authorized reseller. After purchasing the license key, you need to enter it in the software when prompted. You can also activate the software online or offline by following the instructions in the user manual.

      -

      Q: What are the system requirements for Avanquest Web Easy Professional 9?

      -

      A: The system requirements for Avanquest Web Easy Professional 9 are as follows:

      Operating System: Windows XP/Vista/7/8/10
      Processor: Pentium III or higher
      Memory: 256 MB RAM or higher
      Disk Space: 150 MB free hard disk space
      Display: 800 x 600 resolution or higher
      Internet Connection: Required for online features
      -

      Q: How do I uninstall Avanquest Web Easy Professional 9?

      -

      A: To uninstall Avanquest Web Easy Professional 9 from your computer, you need to follow these steps:

      -
        -
      1. Click on Start > Control Panel > Programs > Uninstall a Program.
      2. Select Avanquest Web Easy Professional 9 from the list of programs and click on Uninstall.
      3. Follow the instructions on the screen to complete the uninstallation process.
      4. Restart your computer to remove any leftover files or registry entries.
      -

      Q: Where can I get help or support for Avanquest Web Easy Professional 9?

      -

      A: If you need help or support for Avanquest Web Easy Professional 9, you can visit the official website of the software, where you can find a user manual, a FAQ section, a tutorial video, and a contact form. You can also call the customer service number at 1-800-395-6682 or email them at support@avanquest.com. You can also join the online community forum, where you can interact with other users and experts.

      -

      -

      This is the end of the article. I hope you enjoyed reading it and learned something new. If you have any feedback or suggestions, please let me know. Thank you for your time and attention.

      b2dd77e56b
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gt5 Garage Editor 211 27.md b/spaces/stomexserde/gpt4-ui/Examples/Gt5 Garage Editor 211 27.md deleted file mode 100644 index bce2e341f72e7f81be7086337c0f145b8e774f69..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Gt5 Garage Editor 211 27.md +++ /dev/null @@ -1,34 +0,0 @@ -
      -

      How to Use the GT5 Garage Editor to Customize Your Cars in Gran Turismo 5

      -

      Gran Turismo 5 is a racing simulation game that features over 1000 cars and 26 tracks. It also allows you to modify your cars with various parts and settings using the GT5 Garage Editor. The GT5 Garage Editor is a tool that lets you edit your garage data, such as car models, colors, performance, mileage, and more. You can also use it to unlock hidden cars and features that are not available in the game normally.

      -

      In this article, we will show you how to use the GT5 Garage Editor to customize your cars in Gran Turismo 5. We will also explain some of the benefits and risks of using this tool, as well as some tips and tricks to make the most of it.

      -

      Gt5 Garage Editor 211 27


      Download Zip ►►► https://urlgoal.com/2uI7lY



      -

      What You Need to Use the GT5 Garage Editor

      -

      To use the GT5 Garage Editor, you will need the following:

      -
        -
      • A copy of Gran Turismo 5 for PlayStation 3
      • A USB flash drive with at least 4 GB of free space
      • A computer with internet access and a program that can extract ZIP files
      • The latest version of the GT5 Garage Editor, which you can download from this video [^1^]
      -

      How to Use the GT5 Garage Editor

      -

      Once you have everything ready, follow these steps to use the GT5 Garage Editor:

      -
        -
      1. On your PS3, go to Game Data Utility and delete any Gran Turismo 5 updates. This is necessary to avoid any compatibility issues with the GT5 Garage Editor.
      2. On your PS3, go to Saved Data Utility and copy your Gran Turismo 5 save data to your USB flash drive.
      3. On your computer, extract the GT5 Garage Editor ZIP file and run the program.
      4. On your computer, open your USB flash drive and locate your Gran Turismo 5 save data. It should be named "BCES00569-GAME-000" or something similar.
      5. On your computer, drag and drop your Gran Turismo 5 save data into the GT5 Garage Editor window.
      6. On your computer, use the GT5 Garage Editor to edit your garage data as you wish. You can change car models, colors, performance, mileage, etc. You can also unlock hidden cars and features by clicking on the "Unlock" tab.
      7. On your computer, click on "Save" to save your changes.
      8. On your computer, copy your modified Gran Turismo 5 save data back to your USB flash drive.
      9. On your PS3, go to Saved Data Utility and copy your modified Gran Turismo 5 save data from your USB flash drive to your PS3.
      10. On your PS3, launch Gran Turismo 5 and enjoy your customized cars!
      -

      The Benefits and Risks of Using the GT5 Garage Editor

      -

      The GT5 Garage Editor is a fun and easy way to customize your cars in Gran Turismo 5. It can help you create your dream garage, experiment with different settings and parts, and access cars and features that are otherwise unavailable in the game. It can also save you time and money by allowing you to skip grinding for credits and unlocking cars.

      -

      However, using the GT5 Garage Editor also comes with some risks. First of all, it is not an official tool supported by Sony or Polyphony Digital, the developers of Gran Turismo 5. Therefore, using it may void your warranty or cause technical issues with your game or console. Secondly, it may affect your online experience by making you unable to join certain races or events that require specific cars or settings. Thirdly, it may ruin your sense of achievement or challenge by making the game too easy or boring. Finally, it may get you banned from online services or leaderboards if you use it for cheating or trolling purposes.

      -

      Tips and Tricks for Using the GT5 Garage Editor

      -

      If you decide to use the GT5 Garage Editor, here are some tips and tricks to

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/data/scripts/get_coco128.sh b/spaces/stratussox/yolov5_inference/data/scripts/get_coco128.sh deleted file mode 100644 index e7ddce89b11552b9fa7d0d85c56fc4e3df2481cd..0000000000000000000000000000000000000000 --- a/spaces/stratussox/yolov5_inference/data/scripts/get_coco128.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) -# Example usage: bash data/scripts/get_coco128.sh -# parent -# ├── yolov5 -# └── datasets -# └── coco128 ← downloads here - -# Download/unzip images and labels -d='../datasets' # unzip directory -url=https://github.com/ultralytics/yolov5/releases/download/v1.0/ -f='coco128.zip' # or 'coco128-segments.zip', 68 MB -echo 'Downloading' $url$f ' ...' -curl -L $url$f -o $f -# && unzip -q $f -d $d && rm $f & - -wait # finish background tasks diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/document_store/test_qdrant_store.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/document_store/test_qdrant_store.py deleted file mode 100644 index a63a4329d0ea108c17a67025e3f4e77bc85cc9d2..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/document_store/test_qdrant_store.py +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/11 21:08 -@Author : hezhaozhao -@File : test_qdrant_store.py -""" -import random - -from qdrant_client.models import ( - Distance, - FieldCondition, - Filter, - PointStruct, - Range, - VectorParams, -) - -from metagpt.document_store.qdrant_store import QdrantConnection, QdrantStore - -seed_value = 42 -random.seed(seed_value) - -vectors = [[random.random() for _ in range(2)] for _ in range(10)] - -points = [ - PointStruct( - id=idx, vector=vector, payload={"color": "red", "rand_number": idx % 10} - ) - for idx, vector in enumerate(vectors) -] - - -def test_milvus_store(): - qdrant_connection = QdrantConnection(memory=True) - vectors_config = VectorParams(size=2, distance=Distance.COSINE) - qdrant_store = QdrantStore(qdrant_connection) - qdrant_store.create_collection("Book", vectors_config, force_recreate=True) - assert qdrant_store.has_collection("Book") is True - qdrant_store.delete_collection("Book") - assert qdrant_store.has_collection("Book") is False - qdrant_store.create_collection("Book", vectors_config) - assert qdrant_store.has_collection("Book") is True - qdrant_store.add("Book", points) - results = qdrant_store.search("Book", query=[1.0, 1.0]) - assert results[0]["id"] == 2 - assert results[0]["score"] == 0.999106722578389 - assert results[1]["score"] == 7 - assert results[1]["score"] == 0.9961650411397226 - results = qdrant_store.search("Book", query=[1.0, 1.0], return_vector=True) - assert results[0]["id"] == 2 - assert results[0]["score"] == 0.999106722578389 - assert results[0]["vector"] == [0.7363563179969788, 0.6765939593315125] - assert results[1]["score"] == 7 - assert results[1]["score"] == 0.9961650411397226 - assert results[1]["vector"] == [0.7662628889083862, 0.6425272226333618] - results = qdrant_store.search( - "Book", - query=[1.0, 1.0], - query_filter=Filter( - must=[FieldCondition(key="rand_number", range=Range(gte=8))] - ), - ) - assert results[0]["id"] == 8 - assert results[0]["score"] == 0.9100373450784073 - assert results[1]["id"] == 9 - assert results[1]["score"] == 0.7127610621127889 - results = qdrant_store.search( - "Book", - 
query=[1.0, 1.0], - query_filter=Filter( - must=[FieldCondition(key="rand_number", range=Range(gte=8))] - ), - return_vector=True, - ) - assert results[0]["vector"] == [0.35037919878959656, 0.9366079568862915] - assert results[1]["vector"] == [0.9999677538871765, 0.00802854634821415] diff --git a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/LDSR/preload.py b/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/LDSR/preload.py deleted file mode 100644 index cfd478d545ed12ef74e73fa40b6defe0156859da..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/extensions-builtin/LDSR/preload.py +++ /dev/null @@ -1,6 +0,0 @@ -import os -from modules import paths - - -def preload(parser): - parser.add_argument("--ldsr-models-path", type=str, help="Path to directory with LDSR model file(s).", default=os.path.join(paths.models_path, 'LDSR')) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (Zebra Card Studio Serial Full Versio).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (Zebra Card Studio Serial Full Versio).md deleted file mode 100644 index 60336ac72e10a79c259f33b1533aef876c63b8b2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (Zebra Card Studio Serial Full Versio).md +++ /dev/null @@ -1,6 +0,0 @@ -

      HD Online Player (Zebra Card Studio Serial Full Versio)


      Download Ziphttps://cinurl.com/2uEXTW



      - -relliance online john beshara my liesence wajar diberi kuasa estelles no ... cs 29z30 hsq name of the kning for all that you ve done lyrics th110 afterdawn freeware ... in car dvd player the business operation to nabh jvc dvd vhs hd up conversion ... egypte ida tarball beginner lead guitar uscg captain keygen for pdf2word v3 0 ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Spore CD Key Generator.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Spore CD Key Generator.md deleted file mode 100644 index a3ab4ab6cd2784470af2bfd2c68786c31ebbd995..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Spore CD Key Generator.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Spore CD Key Generator


      DOWNLOAD ……… https://cinurl.com/2uEYWU



      - -Buy Spore Creepy and Cute - Parts Pack - Steam Gift CD KEY at the cheapest prices. Activate the ... For this product you will receive a link to an external platform instead of a regular CD Key. ... Game Character Hub PE: DS Generator Parts EU ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Advanced System Optimizer 3 Keygen !NEW!.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Advanced System Optimizer 3 Keygen !NEW!.md deleted file mode 100644 index 3644393ab1e28f783b8c06db938e47cb65c8205b..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Advanced System Optimizer 3 Keygen !NEW!.md +++ /dev/null @@ -1,131 +0,0 @@ - -

      Advanced System Optimizer 3 Keygen: How to Download and Activate the Best PC Optimizer

      -

      If you are looking for a software that can tune and optimize your system performance, you might want to try Advanced System Optimizer 3. This is a package of software that includes more than 30 tools for improving and speeding up your system. With this software, you can clean your hard drive from unnecessary files and debris, clean and organize your system registry, delete sensitive data, backup and restore your system files, and more. In this article, we will show you how to download and activate Advanced System Optimizer 3 using a keygen.

      -

      Advanced System Optimizer 3 Keygen


      Download Filehttps://urluss.com/2uCF3b



      - -

      What is Advanced System Optimizer 3?

      -

      Advanced System Optimizer 3 is a software that can help you optimize your system performance and make it run faster and smoother. It has many features such as:

      -
        -
      • Removing empty and unnecessary files from your system
      • Removing incorrect entries in the system registry
      • Managing startup programs
      • Restoring and cleaning up RAM
      • Defragmenting and optimizing the system registry
      • The ability to carefully configure Windows settings
      • Viewing detailed information about your system
      • Getting detailed information about the desired files and folders
      • Maintaining system security
      • A tool for securely deleting files and folders
      • A built-in organizer of media files
      • Backing up and restoring system files
      • Protection of confidential data
      • Removing programs using the built-in uninstaller
      • Tracking and cleaning up unused duplicate files in memory
      • Customizing various icons, and much more.
      - -

      How to download Advanced System Optimizer 3?

      -

      To download Advanced System Optimizer 3, you need to visit the official website of the software: https://www.systweak.com/advanced-system-optimizer. There you can find the download link for the software. You can also find some information about the software features, screenshots, testimonials, and FAQs. You can download the software for free as a trial version for 15 days. After that, you need to activate the software using a valid serial number or a keygen.

      - -

      How to activate Advanced System Optimizer 3 using a keygen?

      -

      A keygen is a tool that can generate serial numbers or activation codes for various software products. You can use a keygen to activate Advanced System Optimizer 3 without paying for it. However, this is illegal and risky, as it may contain viruses or malware that can harm your system or steal your personal information. We do not recommend using a keygen to activate Advanced System Optimizer 3 or any other software product. If you want to use Advanced System Optimizer 3 legally and safely, you should buy a license from the official website of the software or from an authorized reseller. However, if you still want to use a keygen to activate Advanced System Optimizer 3, here are the steps you need to follow:

      -

      -
        -
      1. Download a keygen for Advanced System Optimizer 3 from a reliable source (you may need to use a VPN or a proxy to access some sources).
      2. If necessary, unzip the file using a tool like WinZip or 7-Zip (Windows) or StuffIt (Mac).
      3. If necessary, disconnect from the internet (or block the application's internet access via firewall).
      4. Run the keygen file (.exe or .dmg) as administrator (Windows) or double-click (Mac) to start the keygen.
      5. Select Advanced System Optimizer 3 from the list of products in the keygen.
      6. Click on the Generate button to generate a serial number or an activation code for Advanced System Optimizer 3.
      7. Copy the serial number or the activation code from the keygen.
      8. Run Advanced System Optimizer 3 on your system.
      9. If prompted, enter the serial number or the activation code from the keygen.
      10. Click on the Activate button to activate Advanced System Optimizer 3.
      - -

      Conclusion

      -

      Advanced System Optimizer 3 is a software that can help you optimize your system performance and make it run faster and smoother. It has many features such as removing unnecessary files and registry entries, managing startup programs, restoring and cleaning up RAM, defragmenting and optimizing the system registry, backing up and restoring system files, protecting confidential data, and more. To use Advanced System Optimizer 3, you need to download it from the official website of the software and activate it using a valid serial number or a keygen. However, using a keygen is illegal and risky, as it may contain viruses or malware that can harm your system or steal your personal information. We do not recommend using a keygen to activate Advanced System Optimizer 3 or any other software product. If you want to use Advanced System Optimizer 3 legally and safely, you should buy a license from the official website of the software or from an authorized reseller.

      -

      How to use Advanced System Optimizer 3 effectively?

      -

      To use Advanced System Optimizer 3 effectively, you need to have some basic knowledge of how to use each tool and feature in the software. You also need to have some common sense and caution when performing some actions that may affect your system stability or security. Here are some tips on how to use Advanced System Optimizer 3 effectively:

      -
        -
      • Scan your system regularly. You can use the Smart PC Care feature to scan your system for various issues such as junk files, registry errors, spyware, outdated drivers, disk fragmentation, and more. You can also use the individual tools to scan and fix specific issues. You can schedule the scans to run automatically at a convenient time.
      • -
      • Backup your system files and data. You can use the Backup Manager feature to backup your important system files and data such as documents, photos, videos, music, emails, etc. You can also use the System Protector feature to backup your system registry and restore it in case of any damage. You can store your backups on your hard drive, external drive, or cloud storage.
      • -
      • Optimize your system performance and speed. You can use the PC Fixer feature to optimize your system settings and improve your system performance and speed. You can also use the Memory Optimizer feature to free up and optimize your RAM and boost your system speed. You can also use the Game Optimizer feature to enhance your gaming experience by optimizing your system resources for gaming.
      • -
      • Protect your system privacy and security. You can use the Privacy Protector feature to protect your system privacy and security by deleting sensitive data such as cookies, history, cache, and traces of your network activity. You can also use the Secure Delete feature to securely delete files and folders that you don't want to recover. You can also use the Secure Encryptor feature to encrypt and decrypt your files and folders with a password.
      • -
      • Customize your system appearance and behavior. You can use the System Cleaner feature to customize your system appearance and behavior by changing various icons, wallpapers, screensavers, sounds, etc. You can also use the Disk Explorer feature to manage your disk space and organize your files and folders.
      • -
      - -

      FAQs

      -

      Here are some frequently asked questions about Advanced System Optimizer 3:

      - -

      What are the system requirements for Advanced System Optimizer 3?

      -

      The system requirements for Advanced System Optimizer 3 are:

      -
        -
      • Operating System: Windows XP/Vista/7/8/10
      • Processor: Pentium or compatible system with at least 800 MHz processor
      • Memory: Minimum 256 MB RAM (512 MB recommended)
      • Disk Space: 100 MB free hard disk space
      • Internet Connection: Required for activation and updates
      - -

      Is Advanced System Optimizer 3 safe to use?

      -

      Advanced System Optimizer 3 is safe to use if you download it from the official website of the software or from an authorized reseller. It does not contain any viruses or malware that can harm your system or steal your personal information. However, you should be careful when using some tools or features that may affect your system stability or security, such as deleting registry entries, encrypting files, etc. You should always backup your system files and data before performing any actions that may cause damage or loss.

      - -

      How do I uninstall Advanced System Optimizer 3?

      -

      To uninstall Advanced System Optimizer 3 from your system, you need to follow these steps:

      -
        -
      1. Close Advanced System Optimizer 3 if it is running.
      2. Go to Start > Control Panel > Programs > Uninstall a Program (Windows Vista/7/8/10) or Start > Control Panel > Add or Remove Programs (Windows XP).
      3. Select Advanced System Optimizer 3 from the list of programs and click on Uninstall/Change (Windows Vista/7/8/10) or Remove (Windows XP).
      4. Follow the instructions on the screen to complete the uninstallation process.
      5. If prompted, restart your system.
      - -

      Conclusion

      -

      In conclusion, Advanced System Optimizer 3 is a software that can help you optimize your system performance and make it run faster and smoother. It has many features such as removing unnecessary files and registry entries, managing startup programs, restoring and cleaning up RAM, defragmenting and optimizing the system registry, backing up and restoring system files, protecting confidential data, and more. To use Advanced System Optimizer 3, you need to download it from the official website of the software and activate it using a valid serial number or a keygen. However, using a keygen is illegal and risky, as it may contain viruses or malware that can harm your system or steal your personal information. We do not recommend using a keygen to activate Advanced System Optimizer 3 or any other software product. If you want to use Advanced System Optimizer 3 legally and safely, you should buy a license from the official website of the software or from an authorized reseller.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/resnet.py deleted file mode 100644 index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/resnet.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn -import torch.utils.checkpoint as cp - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - super(BasicBlock, self).__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - """Bottleneck block. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x): - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block, - inplanes, - planes, - blocks, - stride=1, - dilation=1, - style='pytorch', - with_cp=False): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - with_cp=False): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(ResNet, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/tbvl/Fake_Face_Detection/utils/basicblocks.py b/spaces/tbvl/Fake_Face_Detection/utils/basicblocks.py deleted file mode 100644 index d7c640fe9f6b576df28fb062d50d3bf67148de06..0000000000000000000000000000000000000000 --- a/spaces/tbvl/Fake_Face_Detection/utils/basicblocks.py +++ /dev/null @@ -1,32 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as 
F - - -BatchNorm2d = nn.BatchNorm2d - -def conv3x3(in_planes, out_planes, stride = 1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size = 3, stride = stride, - padding = 1, bias = False) - -def conv1x1(in_planes, out_planes, stride = 1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size = 1, stride = stride, - padding = 0, bias = False) - -class BasicBlock(nn.Module): - def __init__(self, inplanes, outplanes, stride = 1): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(inplanes, outplanes, stride) - self.bn1 = BatchNorm2d(outplanes) - self.relu = nn.ReLU(inplace = True) - self.conv2 = conv3x3(outplanes, outplanes, 2*stride) - - def forward(self, x): - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - out = self.conv2(out) - - return out \ No newline at end of file diff --git a/spaces/tcapelle/calculadora_impuestos/impuestos.py b/spaces/tcapelle/calculadora_impuestos/impuestos.py deleted file mode 100644 index 84fd094e6fa0ffbff2f1019a215727a03ba27501..0000000000000000000000000000000000000000 --- a/spaces/tcapelle/calculadora_impuestos/impuestos.py +++ /dev/null @@ -1,95 +0,0 @@ -import pandas as pd -import plotly.express as px - -# Valores de Junio 2022 -TRAMOS = { - 777000: 0, - 1727000: 0.04, - 2878000: 0.08, - 4029000: 0.135, - 5180000: 0.23, - 6906000: 0.304, - 17842000: 0.35, - 99999999: 0.4, -} - -TRAMOS_REFORMA = { - 777000: 0, - 1727000: 0.04, - 2878000: 0.08, - 4030000: 0.135, - 5242320: 0.26, - 6331000: 0.35, - 8057000: 0.40, - 99999999: 0.43, -} - - -def descomponer_en_tramos(sueldo_bruto, tramos=TRAMOS): - """ - Descompone un sueldo bruto en tramos de impuesto - """ - descomp = [] - impuestos = [] - tramo_anterior = 0 - for tramo, descuento in tramos.items(): - delta = min(sueldo_bruto, tramo) - tramo_anterior - if delta > 0: - descomp.append(delta) - impuestos.append(int(delta * descuento)) - tramo_anterior = tramo - return descomp, impuestos - - -def get_table(sueldo_bruto, tramos=TRAMOS): - """ - Tabla de Impuestos por tramo - """ - _tramos = [0] + list(tramos.keys()) - tasas = tramos.values() - data = list( - zip( - _tramos[:-1], - _tramos[1:], - tasas, - *descomponer_en_tramos(sueldo_bruto, tramos), - ) - ) - df = pd.DataFrame( - data=data, - columns=["Desde", "Hasta", "Tasa", "Monto", "Impuesto"], - ) - style = df.style.format( - { - "Desde": "{:,d}", - "Hasta": "{:,d}", - "Tasa": "{:.2f}", - "Monto": "{:,d}", - "Impuesto": "{:,d}", - }, - decimal=",", - thousands=".", - ) - return df, style - - -salarios = [ - 500_000, - 750_000, -] + [1_000_000 * i for i in range(20)] - - -def get_curve(descuentos): - def beneficios(s): - return max(s - descuentos, 0) - - DF_CURVA = pd.DataFrame(columns=["actual", "reforma"], index=salarios) - DF_CURVA["actual"] = [sum(descomponer_en_tramos(s, TRAMOS)[1]) for s in salarios] - DF_CURVA["reforma"] = [ - sum(descomponer_en_tramos(beneficios(s), TRAMOS_REFORMA)[1]) for s in salarios - ] - return px.line( - DF_CURVA, - title="Impuesto con respecto al salario", - labels={"value": "Impuesto a pagar", "index": "Renta mensual"}, - ) diff --git a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (Le Seigneur Des Anneaux Trilogie Ver).md b/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (Le Seigneur Des Anneaux Trilogie Ver).md deleted file mode 100644 index 27efe0ffb8e340af56cfecc1fbb777c6c8c11f9a..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (Le Seigneur Des Anneaux 
Trilogie Ver).md +++ /dev/null @@ -1,50 +0,0 @@ -

      HD Online Player (Le Seigneur Des Anneaux Trilogie Ver)


      Download Zip →→→ https://bytlly.com/2uGkRZ



      - -Download Mp3 Le Seigneur Des Anneaux Trilogie Ver. Last.Identification of the human proteome using 2D-DIGE. - -It is widely recognized that a large amount of proteins are present in the human body and that their expression changes under physiological and pathological conditions. Thus, proteomics may be an important tool to identify proteins that are important for the function and/or malfunction of certain organs. In this study, we investigated whether 2-dimensional difference gel electrophoresis (2D-DIGE) can be used to identify the human proteome. We performed 2D-DIGE to compare the proteins from adult human kidney, cortex and medulla, and we identified a total of 115 different proteins. We identified a subset of these proteins using mass spectrometry, and these proteins were found to have functions in metabolism, transport and secretion, cytoskeleton, and proteolysis.Q: - -How to change raster calculator's output type from String to Integer in QGIS? - -How can i convert the output type from String to Integer, in QGIS using the raster calculator. - -How do i set it up so that the output is integers instead of strings? - -A: - -You could create an expression that will transform a String to an Integer. - -In QGIS this is done by the following: - -Copy and paste the expression below in the raster calculator's expression box: - -TO_CASE("your_string","String","Integer") - -Finally, set the output type to Integer. - -Add this to the expression: - -CASE (value, "String", "Integer", "Integer", "Integer", "Integer", "Integer") - -Q: - -Trying to save a document in Firebase - -This is my code for saving a document: - -func savePlace(place: Place){ - - ref.child(place.Id).child("text").setValue(place.PlaceName) { (err) in - - if let err = err - - print("Error saving place \(err)") - - return - - else { - - print("Place has been saved") 4fefd39f24
      -
      -
      -

      diff --git a/spaces/thelou1s/yamnet_test/python/util/apnea_util.py b/spaces/thelou1s/yamnet_test/python/util/apnea_util.py deleted file mode 100644 index 2bd27f7a56386ed9dede26d7d953367f3212fdfa..0000000000000000000000000000000000000000 --- a/spaces/thelou1s/yamnet_test/python/util/apnea_util.py +++ /dev/null @@ -1,20 +0,0 @@ -from python.util.time_util import int_to_min_sec - - -def calc_apnea(idx, top_n, score, name): - result = '' - - # print(' calc_apnea, idx, top_n, score, name ', int_to_min_sec(idx), top_n, '%.2f' % score, name) - which_sec = idx - start_sec = -1 - - if name == 'Snoring': - result = (' idx, top_n, score, name: ' + int_to_min_sec(idx) + ', ' + str(top_n) + ', ' + ( - '%.2f' % score) + ', ' + name) - - # if name == 'Snoring': - # if start_sec == 0: start_sec = which_sec - # else: - # if snore_sec == 60: start_sec = 0 - - return result diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Google SketchUp Pro 2017 v20.5.2658 (x86x64) Serial Key Troubleshooting and Support.md b/spaces/tialenAdioni/chat-gpt-api/logs/Google SketchUp Pro 2017 v20.5.2658 (x86x64) Serial Key Troubleshooting and Support.md deleted file mode 100644 index 5175763ae5831ee973aadf6886b09b82a53fc325..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Google SketchUp Pro 2017 v20.5.2658 (x86x64) Serial Key Troubleshooting and Support.md +++ /dev/null @@ -1,85 +0,0 @@ - -

      How to Install and Authorize Google SketchUp Pro 2017 v20.5.2658 (x86x64) Serial Key

      - -

      If you are looking for a powerful and easy-to-use 3D modeling software, you might want to try Google SketchUp Pro 2017. This software lets you create and edit 2D and 3D models with a patented "Push and Pull" method. You can use it for various projects, such as architecture, interior design, landscaping, video game design, and 3D printing.

      - -

      However, before you can start using Google SketchUp Pro 2017, you need to install it on your computer and authorize it with a serial key. A serial key is a unique code that verifies that you have purchased a legitimate copy of the software. Without a serial key, you will not be able to use all the features of Google SketchUp Pro 2017.

      -

      Google SketchUp Pro 2017 v20.5.2658 (x86x64) Serial Key


      Download Filehttps://urlcod.com/2uK36r



      - -

In this article, we will show you how to install Google SketchUp Pro 2017 v20.5.2658 (x86x64) and authorize it with your serial key in a few simple steps.

      - -

      Step 1: Download Google SketchUp Pro 2017

      - -

      The first step is to download Google SketchUp Pro 2017 from the official website or from an authorized reseller. You can choose between the x86 (32-bit) or x64 (64-bit) version depending on your operating system. The file size is about 200 MB.

      -


      - -

      Once you have downloaded the file, double-click on it to run the installer. Follow the instructions on the screen to complete the installation process. You can choose the destination folder, the language, and the components you want to install.

      - -

      Step 2: Open Google SketchUp Pro 2017

      - -

      After the installation is finished, you can open Google SketchUp Pro 2017 by clicking on its icon on your desktop or in your start menu. You will see a welcome screen that gives you some options to start a new project, open an existing one, or learn more about the software.

      - -

      If this is your first time using Google SketchUp Pro 2017, you will also see a dialog box that asks you to enter your serial number and authorization code. These are the codes that you received when you purchased the software from the website or from a reseller. They are usually sent to your email address.

      - -

      Step 3: Enter Your Serial Number and Authorization Code

      - -

      To authorize Google SketchUp Pro 2017, you need to enter your serial number and authorization code in the dialog box. The serial number is a 15-digit code that starts with "SU". The authorization code is a 20-digit code that starts with "AC". You can copy and paste them from your email or type them manually.

      - -

      Make sure that you enter the codes correctly and that they match the version of Google SketchUp Pro 2017 that you have installed. If you enter the wrong codes or try to use them for a different version, you will get an error message and you will not be able to authorize the software.

      - -

      After you have entered the codes, click on "Add License" to complete the authorization process. You will see a confirmation message that says "Thank you for choosing SketchUp!". You can now close the dialog box and start using Google SketchUp Pro 2017 with all its features.

      - -

      Conclusion

      - -

In this article, we have shown you how to install Google SketchUp Pro 2017 v20.5.2658 (x86x64) and authorize it with your serial key in a few simple steps. By following these steps, you can enjoy using this powerful and easy-to-use 3D modeling software for your projects.

      - -

If you have any questions or problems with the installation or authorization process, you can contact the SketchUp support team or visit their online help center. They will be happy to assist you with any issues.

      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Jamvox 3 Crack A Review of the Pros and Cons of JamVOX III Software.md b/spaces/tialenAdioni/chat-gpt-api/logs/Jamvox 3 Crack A Review of the Pros and Cons of JamVOX III Software.md deleted file mode 100644 index 4f1e76cbbb3edfa486879aaf3bc1d5498d2d2347..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Jamvox 3 Crack A Review of the Pros and Cons of JamVOX III Software.md +++ /dev/null @@ -1,116 +0,0 @@ -
      -

      Jamvox 3 Crack: How to Install and Use the Ultimate Guitar Software

      -

Jamvox 3 is a stand-alone application that lets you play guitar along with any song, learn new skills, record your performance, and apply various effects. It has a high-quality audio interface, a powerful guitar processor, and a music player that can cancel or isolate any track or instrument from any song. It also has a user-friendly interface, a full-screen mode, and a movie recording feature that lets you capture video of your guitar playing.

      -

      Jamvox 3 Crack


      Download ✦✦✦ https://urlcod.com/2uK1nw



      -

Jamvox 3 is not free software. You need to buy it from the official website or from authorized dealers. However, if you want to try Jamvox 3 before buying it, or if you can't afford it but still want to enjoy its features, you can use Jamvox 3 Crack, a tool that bypasses the activation process and unlocks all the features of Jamvox 3. With Jamvox 3 Crack, you can install and use Jamvox 3 on any computer without any limitations or restrictions.

      -

      How to Download Jamvox 3 Crack?

      -

      If you want to download Jamvox 3 Crack, you need to follow these steps:

      -
        -
      1. Find a reliable source that offers Jamvox 3 Crack for free. There are many websites that claim to provide Jamvox 3 Crack, but some of them might contain viruses or malware that can harm your computer. You can use this link as an example: http://adf.ly/1gLEEf
      2. -
      3. Download the file that contains Jamvox 3 Crack. The file should be compressed in a zip or rar format. You will need a program like WinRAR or 7-Zip to extract it.
      4. -
      5. Extract the file and you will get two files: JamVOX.exe and JamVOX.dll. These are the files that you need to crack Jamvox 3.
      6. -
      -

      How to Install Jamvox 3 Crack?

      -

      If you want to install Jamvox 3 Crack, you need to follow these steps:

      -
        -
      1. Install Jamvox 3 from the official website or from a CD-ROM. Do not run it yet.
      2. -
      3. Copy and paste JamVOX.exe and JamVOX.dll into the installation folder of Jamvox 3. The installation folder is usually located in C:\Program Files\VOX\JamVOX or C:\Program Files (x86)\VOX\JamVOX. Replace the original files if asked.
      4. -
      5. Run JamVOX.exe as administrator and enjoy Jamvox 3 Crack.
      6. -
      -

      What are the Benefits of Using Jamvox 3 Crack?

      -

      Jamvox 3 Crack has many benefits over the original version of Jamvox 3, such as:

      -
        -
      • You can save money by not buying Jamvox 3.
      • -
      • You can test Jamvox 3 before deciding whether to buy it or not.
      • -
      • You can use Jamvox 3 on any computer without any activation or registration issues.
      • -
      • You can access all the features and functions of Jamvox 3 without any limitations or restrictions.
      • -
      -

      What are the Risks of Using Jamvox 3 Crack?

      -

Jamvox 3 Crack is not legal or ethical software. There are some risks and drawbacks of using Jamvox 3 Crack, such as:

      -

      -
        -
      • You might violate the terms and conditions of Jamvox 3 and face legal consequences.
      • -
      • You might not get any updates or support from the developers of Jamvox 3.
      • -
      • You might encounter some bugs or errors that might affect your performance or experience.
      • -
      • You might compromise the quality or security of your computer by downloading malicious files or programs along with Jamvox 3 Crack.
      • -
      -

      Therefore, you should use Jamvox 3 Crack at your own risk and discretion. We do not endorse or promote any illegal or unethical activities related to Jamvox 3 Crack.

      -

      Conclusion

      -

Jamvox 3 is a great piece of software for guitarists who want to play along with any song, learn new skills, record their performance, and apply various effects. However, it is not free software, and it requires activation and registration to use. If you want to use Jamvox 3 without paying for it, you can try Jamvox 3 Crack, which bypasses the activation process and unlocks all the features of Jamvox 3. That said, you should be aware of the risks and drawbacks of using Jamvox 3 Crack and use it at your own risk and discretion.

      -

      What are the Features of Jamvox 3?

      -

      Jamvox 3 is a stand-alone software that has many features that make it a great choice for guitarists of all levels and styles. Some of the main features of Jamvox 3 are:

      -
        -
      • 19 Amp Models: Jamvox 3 provides a wide range of amp models that emulate the sound of vintage and modern amps, such as the VOX AC30, Marshall JCM800, Mesa Boogie Rectifier, and more. You can tweak the amp settings to your liking, or use the presets that match the original amps.
      • -
      • 12 Speaker Cabinet Models: Jamvox 3 also offers 12 speaker cabinet models that reproduce the characteristics of different speakers and cabinets, such as the VOX Blue, Celestion Greenback, Jensen C12N, and more. You can mix and match any amp and cabinet model to create your own custom sound.
      • -
      • 57 Effect Models: Jamvox 3 has a comprehensive collection of effect models that cover all the essential effects for guitar playing, such as compressor, wah, chorus, flanger, phaser, delay, reverb, noise reduction, and more. You can use up to eight effects at once, and arrange them in any order you want.
      • -
      • Virtual Valve Reactor: Jamvox 3 uses a technology called Virtual Valve Reactor that simulates the circuitry and behavior of real tube amps. This technology delivers a realistic and dynamic tone that responds to your playing nuances.
      • -
      • GXT III Enhanced Guitar XTraktion: Jamvox 3 has a unique feature called GXT III that allows you to cancel or isolate any track or instrument from any song. This way, you can play along with the backing track or solo over it. You can also adjust the pitch and tempo of the song to suit your preference.
      • -
      • Performance Interface: Jamvox 3 has a user-friendly and intuitive interface that lets you access all the functions of Jamvox 3 with ease. You can also switch to the full-screen mode, which transforms your computer into your own personalized performance studio.
      • -
      • Movie Recording Feature: Jamvox 3 has a feature that enables you to capture video of your own guitar playing using your computer's webcam. You can review your own performance with the aim of improving your skills, or upload your performance to a video-sharing site so that guitarists around the world can watch it.
      • -
      -

      How to Use Jamvox 3?

      -

      Jamvox 3 is easy to use and has a drag-and-drop interface that allows you to quickly create your own custom rig. You can also use the presets that are provided for each amp, cabinet, and effect model. To use Jamvox 3, you need to follow these steps:

      -
        -
      1. Connect your guitar to your computer using a VOX USB audio interface or an ASIO compliant audio interface.
      2. -
      3. Launch Jamvox 3 and select your audio device and settings.
      4. -
      5. Choose an amp model and a speaker cabinet model from the list.
      6. -
      7. Add any effects you want from the list.
      8. -
      9. Adjust the settings of each component to your liking.
      10. -
      11. Load a song from your computer or from an online source.
      12. -
      13. Use GXT III to cancel or isolate any track or instrument from the song.
      14. -
      15. Play along with the song or solo over it.
      16. -
      17. Record your performance using the movie recording feature.
      18. -
      -

      Conclusion

      -

Jamvox 3 is powerful and versatile software that lets you play guitar along with any song, learn new skills, record your performance, and apply various effects, and it has many features that make it an ideal choice for guitarists of all levels and styles. However, Jamvox 3 is not free software and requires activation and registration to use. If you want to use Jamvox 3 without paying for it, you can try Jamvox 3 Crack, which bypasses the activation process and unlocks all the features of Jamvox 3. Keep in mind the risks and drawbacks of using Jamvox 3 Crack, and use it at your own risk and discretion.

      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Call of Duty Mobile APK Versi Terbaru - Nikmati Grafis HD dan Mode Multiplayer.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Call of Duty Mobile APK Versi Terbaru - Nikmati Grafis HD dan Mode Multiplayer.md deleted file mode 100644 index 236327130c35721af65f94559ca6b105c7eee7fe..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Call of Duty Mobile APK Versi Terbaru - Nikmati Grafis HD dan Mode Multiplayer.md +++ /dev/null @@ -1,131 +0,0 @@ - -

Download Call of Duty Mobile APK Versi Terbaru (Latest Version)

      -

      If you are a fan of first-person shooter games, you must have heard of Call of Duty, one of the most popular and successful franchises in the gaming industry. But did you know that you can also enjoy this thrilling and immersive game on your Android device? Yes, you heard it right. Call of Duty Mobile is a free-to-play mobile game that brings the best of the Call of Duty series to your smartphone. In this article, we will tell you everything you need to know about Call of Duty Mobile, how to download the latest version of the APK file, and why you should play this amazing game.

      -

      download call of duty mobile apk versi terbaru


      Download Zip >>> https://bltlly.com/2uOpWl



      -

      What is Call of Duty Mobile?

      -

      Call of Duty Mobile is a mobile game developed by TiMi Studios and published by Activision in collaboration with Tencent Games. It was released globally in October 2019 and has since become one of the most downloaded and played mobile games in the world. Call of Duty Mobile offers a variety of game modes, maps, weapons, characters, and customization options that will keep you hooked for hours. You can play solo or team up with your friends in online multiplayer matches, compete in ranked or casual modes, or join the battle royale mode where 100 players fight for survival. You can also play the zombies mode where you have to fend off waves of undead creatures. Whether you are a casual or hardcore gamer, Call of Duty Mobile has something for everyone.

      -

      Features of Call of Duty Mobile

      -

      Call of Duty Mobile is not just a simple port of the console or PC versions. It is a fully-fledged mobile game that has its own unique features and content. Here are some of the main features that make Call of Duty Mobile stand out from other mobile games:

      -

      Multiplayer mode

      -

      The multiplayer mode is the core of Call of Duty Mobile. You can choose from different modes such as Team Deathmatch, Domination, Search and Destroy, Hardpoint, Free for All, and more. You can also play on iconic maps from the Call of Duty series such as Nuketown, Crash, Hijacked, Crossfire, and more. You can use a variety of weapons such as assault rifles, sniper rifles, shotguns, pistols, grenades, and more. You can also customize your loadout with different attachments, perks, skills, and operators. You can earn XP and rank up as you play and unlock new items and rewards.

      -

      Battle royale mode

      -

      The battle royale mode is another popular feature of Call of Duty Mobile. You can join a solo, duo, or squad match where 100 players parachute onto a large map and fight for survival. You can loot weapons, armor, ammo, health kits, vehicles, and other items from buildings, crates, or air drops. You can also use special classes such as Scout, Medic, Ninja, Defender, Mechanic, Clown, and more. Each class has its own unique ability and passive skill that can give you an edge in combat. You have to stay within the safe zone as it shrinks over time and avoid the deadly gas. The last player or team standing wins the match.

      -

      -

      Zombies mode

      -

      The zombies mode is a fan-favorite feature that was added to Call of Duty Mobile in November 2019. You can play solo or co-op with up to four players in this mode where you have to survive against hordes of zombies in different maps such as Shi No Numa, Nacht Der Untoten, Tranzit, and more. You can use different weapons such as ray guns, wonder weapons, traps, turrets, and more to kill the zombies. You can also upgrade your weapons with Pack a-Punch, buy perks, and revive your teammates. You can also complete different objectives and challenges to earn rewards and unlock secrets. The zombies mode is a fun and challenging way to test your skills and teamwork.

      -

      Customization and rewards

      -

      Call of Duty Mobile also offers a lot of customization and rewards options for players. You can personalize your character with different outfits, skins, emotes, and accessories. You can also customize your weapons with different camos, stickers, charms, and attachments. You can earn these items by playing the game, completing missions, participating in events, or buying them with in-game currency or real money. You can also collect various badges, medals, trophies, and achievements as you play and show off your progress and stats to your friends.

      -

How to download the latest version of the Call of Duty Mobile APK?

      -

If you are interested in playing Call of Duty Mobile on your Android device, you might be wondering how to download the latest version of the APK file. APK stands for Android Package Kit, a file format that contains the installation package of an Android app. By downloading the APK file, you can install the app on your device without using the Google Play Store. This can be useful if you want to access the app before it is officially released in your region, or if you want to avoid any restrictions or errors that might occur on the Play Store. However, you should also be careful when downloading APK files from unknown sources, as they might contain malware or viruses that can harm your device. Here are the steps to download and install the latest version of the Call of Duty Mobile APK safely and easily:

      -

      Requirements and compatibility

      -

      Before you download the APK file, you should make sure that your device meets the minimum requirements and is compatible with Call of Duty Mobile. According to the official website, these are the minimum requirements for Android devices:

      -
        -
      • OS: Android 5.1 or higher
      • -
      • RAM: 2 GB or more
      • -
      • CPU: Dual core 1.2 GHz or higher
      • -
      • Storage: 2 GB or more
      • -
      -

      However, these are just the minimum requirements and they might not guarantee a smooth and optimal gaming experience. For a better performance, you should have a device with higher specifications such as:

      -
        -
      • OS: Android 9 or higher
      • -
      • RAM: 4 GB or more
      • -
      • CPU: Octa core 2.0 GHz or higher
      • -
      • Storage: 4 GB or more
      • -
      -

      You should also make sure that your device has a stable internet connection, as Call of Duty Mobile is an online game that requires constant data transfer.
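If your phone is already connected to a computer, you can read these numbers directly off the device instead of digging through the settings menus. The short Python sketch below is only a minimal illustration, not an official compatibility checker: it assumes adb (Android Debug Bridge) is installed on the computer and USB debugging is enabled on the phone, and the exact `df` column layout can vary between devices.

```python
# Minimal sketch: read a few device specs over adb before installing the game.
# Assumptions: adb is on PATH and the phone is connected with USB debugging enabled.
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its trimmed stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

def main() -> None:
    release = adb("shell", "getprop", "ro.build.version.release")  # e.g. "13" or "5.1.1"
    version = tuple(int(p) for p in release.split(".")[:2] if p.isdigit()) or (0,)

    # MemTotal is the first line of /proc/meminfo, reported in kB.
    mem_total_kb = int(adb("shell", "cat", "/proc/meminfo").splitlines()[0].split()[1])

    # Last line of `df /data`; the "Available" column is in 1K blocks on most devices.
    available_kb = int(adb("shell", "df", "/data").splitlines()[-1].split()[3])

    print(f"Android version : {release}")
    print(f"Total RAM       : {mem_total_kb / 1024 ** 2:.1f} GB")
    print(f"Free storage    : {available_kb / 1024 ** 2:.1f} GB")
    print("Meets the minimums above (Android 5.1, 2 GB RAM, 2 GB free):",
          version >= (5, 1) and mem_total_kb >= 2 * 1024 ** 2 and available_kb >= 2 * 1024 ** 2)

if __name__ == "__main__":
    main()
```

Even if this check passes, the recommended specifications above are a much safer target for smooth play.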

      -

      Steps to download and install

      -

Once you have checked your device's compatibility, you can follow these steps to download and install the latest version of the Call of Duty Mobile APK (if you prefer to check and install the file from a computer over USB, see the sketch after this list):

      -
        -
1. Go to a trusted and reliable website that offers the latest version of the Call of Duty Mobile APK. For example, you can use this link to download the latest version of the APK file (version 1.0.28) as of June 2023.
      2. -
      3. Tap on the download button and wait for the file to be downloaded on your device. The file size is about 90 MB, so it might take some time depending on your internet speed.
      4. -
      5. Once the file is downloaded, locate it on your device's file manager and tap on it to start the installation process. You might need to enable the option to install apps from unknown sources on your device's settings if you haven't done so before.
      6. -
      7. Follow the instructions on the screen and grant the necessary permissions for the app to run properly.
      8. -
      9. After the installation is complete, you can launch the app from your home screen or app drawer.
      10. -
      11. The app will then download additional data files (about 2 GB) that are required for the game to function. This might take some time depending on your internet speed.
      12. -
      13. Once the data files are downloaded, you can log in with your Activision account or create a new one if you don't have one already.
      14. -
      15. You can then choose your region, language, name, and avatar for your profile.
      16. -
      17. You can then start playing Call of Duty Mobile on your Android device and enjoy its features and modes.
      18. -
      -
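As the first step of the list above says, the APK should only come from a source you trust. For reference, here is a hedged sketch of the computer-based route mentioned in the list: it compares the downloaded file against a SHA-256 checksum and then sideloads the APK over USB with adb. The file name and expected hash are placeholders, not real values, so substitute a checksum actually published by the site you downloaded from (if it provides one); the sketch also assumes adb is installed and USB debugging is enabled, and it is an illustration of the idea rather than an official installer.

```python
# Sketch: verify a downloaded APK's SHA-256, then sideload it over USB with adb.
# APK_PATH and EXPECTED_SHA256 are placeholders -- substitute your own values.
import hashlib
import subprocess
import sys

APK_PATH = "codm-latest.apk"                    # hypothetical local file name
EXPECTED_SHA256 = "paste-a-checksum-you-trust"  # placeholder, not a real hash

def sha256_of(path: str) -> str:
    """Hash the file in 1 MB chunks so a large APK never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> None:
    actual = sha256_of(APK_PATH)
    print("SHA-256 of download:", actual)
    if actual.lower() != EXPECTED_SHA256.lower():
        sys.exit("Checksum mismatch (or no trusted checksum configured) -- do not install this file.")

    # `adb install -r` replaces an already-installed copy while keeping its data.
    subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
    print("Installed. The game still downloads its additional data files on first launch.")

if __name__ == "__main__":
    main()
```

Checking the hash before installing is the cheapest way to catch a corrupted or tampered download; if the site offers no checksum at all, that is itself a reason to be cautious.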

      Tips and tricks to optimize your gaming experience

      -

      To make sure that you have a smooth and enjoyable gaming experience with Call of Duty Mobile on your Android device, here are some tips and tricks that you can follow:

      -
        -
      • Adjust the graphics settings according to your device's capabilities and preferences. You can choose from low, medium, high, or very high graphics quality and frame rate. You can also enable or disable features such as anti-aliasing, ragdoll, bloom, depth of field, and real-time shadows. You can find these options in the settings menu under the graphics tab.
      • -
      • Use headphones or earphones to enhance the sound quality and immersion of the game. You can also adjust the sound settings according to your preferences. You can choose from different sound modes such as low, medium, high, or max volume. You can also enable or disable features such as voice chat, music, sound effects, and 3D audio. You can find these options in the settings menu under the audio tab.
      • -
      • Customize the controls according to your comfort and convenience. You can choose from different control modes such as simple, advanced, or custom. You can also adjust the sensitivity, layout, size, and opacity of the buttons and joysticks. You can also enable or disable features such as aim assist, auto fire, auto sprint, gyroscope, and quick run. You can find these options in the settings menu under the controls tab.
      • -
      • Join a clan or create your own to connect with other players and enjoy various benefits. You can chat with your clan members, invite them to play with you, participate in clan wars, complete clan tasks, and earn clan points and rewards. You can also customize your clan name, logo, tag, and description. You can find these options in the social menu under the clan tab.
      • -
      • Check out the events and seasons to keep up with the latest updates and content of the game. You can join different events such as seasonal challenges, featured events, daily missions, lucky draws, crates, and more. You can also follow the seasonal calendar to know what's coming next and what's available now. You can earn various rewards such as weapons, skins, characters, credits, CP, and more by participating in these events and seasons. You can find these options in the main menu under the events and seasons tabs.
      • -
      -

      Why should you play Call of Duty Mobile?

      -

      Call of Duty Mobile is not just another mobile game. It is a game that offers a lot of advantages and benefits for players who love first-person shooter games. Here are some of the reasons why you should play Call of Duty Mobile:

      -

      High-quality graphics and sound

      -

      Call of Duty Mobile delivers a stunning and realistic visual and audio experience that rivals any console or PC game. The game uses the Unreal Engine 4 technology to create lifelike graphics and animations that will make you feel like you are in the middle of a warzone. The game also features high-quality sound effects and music that will enhance your immersion and adrenaline. You can hear every gunshot, explosion, footstep, and voice clearly and distinctly.

      -

      Intuitive and customizable controls

      -

      Call of Duty Mobile is designed to be easy and comfortable to play on any Android device. The game offers intuitive and customizable controls that will suit any preference and style. You can choose from different control modes such as simple, advanced, or custom. You can also adjust the sensitivity, layout, size, and opacity of the buttons and joysticks. You can also enable or disable features such as aim assist, auto fire, auto sprint, gyroscope, and quick run. You can also use voice chat and gestures to communicate with your teammates and opponents. You can find the best control settings that work for you and enjoy a smooth and responsive gaming experience.

      -

      Chat and social features

      -

      Call of Duty Mobile is not just a game, it is also a community. You can chat and socialize with other players from around the world using the chat and social features of the game. You can send and receive messages, voice notes, stickers, and emojis to your friends and contacts. You can also join or create clans, groups, or rooms to connect with like-minded players. You can also add friends, follow players, send gifts, invite players, and block or report players. You can also check your profile, stats, leaderboards, and achievements to see how you rank among other players.

      -

      Frequent updates and events

      -

      Call of Duty Mobile is a game that is constantly evolving and improving. The game receives frequent updates and events that add new content and features to the game. You can enjoy new modes, maps, weapons, characters, skins, and more every season. You can also join different events such as seasonal challenges, featured events, daily missions, lucky draws, crates, and more to earn various rewards and prizes. You can also follow the news and announcements of the game to stay updated on what's new and what's coming next.

      -

      Conclusion

      -

Call of Duty Mobile is a game that you should not miss if you love first-person shooter games. It offers high-quality graphics and sound, intuitive and customizable controls, chat and social features, and frequent updates and events. It lets you play solo or team up with your friends in online multiplayer matches, compete in ranked or casual modes, or join the battle royale mode where 100 players fight for survival. It also lets you play the zombies mode, where you have to fend off waves of undead creatures, customize your character and weapons with different outfits, skins, camos, stickers, charms, and attachments, and earn XP and rank up as you play to unlock new items and rewards.

      -

      If you want to play Call of Duty Mobile on your Android device, you can download the latest version of the APK file from a trusted and reliable website. You can follow the steps we have provided in this article to download and install the APK file safely and easily. You can also follow the tips and tricks we have provided in this article to optimize your gaming experience.

      -

Call of Duty Mobile is a game that will keep you hooked for hours with its thrilling and immersive gameplay. Download the latest version of the Call of Duty Mobile APK today and join the millions of players who are already enjoying this amazing game.

      -

      FAQs

      -

      Here are some of the frequently asked questions about Call of Duty Mobile:

      -
        -
      • Is Call of Duty Mobile free to play?
      • -
      • Yes, Call of Duty Mobile is free to play. You can download and play the game without paying anything. However, the game also offers optional in-game purchases such as CP (Call of Duty Points), credits, crates, bundles, passes, and more that can enhance your gameplay or unlock premium items.
      • -
      • Is Call of Duty Mobile safe to play?
      • -
      • Yes, Call of Duty Mobile is safe to play. The game is developed by reputable companies such as TiMi Studios, Activision, and Tencent Games. The game is also verified and certified by Google Play Protect, which is a security feature that scans and verifies apps for malware and viruses. However, you should also be careful when downloading APK files from unknown sources, as they might contain harmful or malicious content. You should only download APK files from trusted and reliable websites that offer the latest version of the APK file.
      • -
      • How can I play Call of Duty Mobile on PC?
      • -
      • If you want to play Call of Duty Mobile on PC, you can use an Android emulator such as Gameloop, Bluestacks, NoxPlayer, or LDPlayer. An Android emulator is a software that allows you to run Android apps and games on your PC. You can download and install an Android emulator on your PC and then download and install Call of Duty Mobile from the emulator's app store or website. You can then launch the game from the emulator and enjoy playing it on a bigger screen with a keyboard and mouse.
      • -
      • How can I contact the customer support of Call of Duty Mobile?
      • -
      • If you have any issues, questions, or feedback about Call of Duty Mobile, you can contact the customer support of the game through different channels. You can use the in-game support feature that allows you to submit a ticket or chat with a live agent. You can also visit the official website of the game and use the online support form or the live chat option. You can also follow the official social media accounts of the game such as Facebook, Twitter, Instagram, YouTube, Reddit, Discord, and more and send them a message or comment.
      • -
      • How can I get more CP (Call of Duty Points) in Call of Duty Mobile?
      • -
      • CP (Call of Duty Points) is the premium currency of Call of Duty Mobile that can be used to buy various items and features in the game such as crates, bundles, passes, skins, characters, weapons, and more. You can get more CP in Call of Duty Mobile by doing the following:
      • -
          -
        • Buy CP with real money using your credit card, debit card, PayPal, Google Play balance, or other payment methods.
        • -
        • Earn CP by completing certain tasks or offers from third-party partners such as surveys, videos, apps, games, and more.
        • -
        • Get CP as a reward by participating in certain events or seasons such as lucky draws, crates, passes, challenges, and more.
        • -
        -

      -
      -
      \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Adjustment Program Epson Me 340.md b/spaces/tioseFevbu/cartoon-converter/scripts/Adjustment Program Epson Me 340.md deleted file mode 100644 index 29bb76cd1fa5288c9a1a797f114d4f05fdcb7b41..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Adjustment Program Epson Me 340.md +++ /dev/null @@ -1,138 +0,0 @@ - -

      How to Use EPSON Adjustment Program to Reset Your Printer

      -

      If you own an EPSON printer, you might have encountered some errors or problems that prevent you from printing properly. One of the most common issues is error E-11, which indicates that your waste ink pad is full and needs to be replaced. However, replacing the waste ink pad can be costly and time-consuming. Fortunately, there is a way to solve this problem without spending a dime or wasting your time. You can use a software tool called EPSON Adjustment Program to reset your waste ink pad counter and make your printer work again.

      -

      adjustment program epson me 340


      DOWNLOADhttps://urlcod.com/2uHwBK



      -

In this article, we will explain what EPSON Adjustment Program is, when you might need it, how to download and install it, and how to use it to reset the waste ink pad counter of your EPSON ME 340 and perform other maintenance tasks such as head cleaning, ink charge, and nozzle check.

      What is EPSON Adjustment Program?

      -

      EPSON Adjustment Program is a software tool that allows you to reset the waste ink pad counter of your EPSON printer and perform other maintenance tasks such as head cleaning, ink charge, nozzle check, etc. It is designed to help you solve some of the common problems that might occur with your printer and improve its performance and quality.

      -

      Some of the features of EPSON Adjustment Program are:

      -
        -
      • Waste ink pad counter reset: This feature allows you to reset the counter that tracks the amount of ink that is used for cleaning the print head and prevents the printer from overflowing with waste ink.
      • -
      • Head cleaning: This feature allows you to clean the print head and remove any clogged or dried ink that might affect the print quality.
      • -
      • Ink charge: This feature allows you to charge the ink cartridges and prevent any ink leakage or air bubbles that might cause printing errors.
      • -
      • Nozzle check: This feature allows you to check the condition of the nozzles and see if they are working properly.
      • -
      • Maintenance: This feature allows you to perform other maintenance tasks such as head angular adjustment, paper feed test, EEPROM dump, etc.
      • -
      -

      Some of the benefits of using EPSON Adjustment Program are:

      -

      -
        -
      • Saving money: By resetting the waste ink pad counter, you can avoid replacing the waste ink pads or buying a new printer, which can be costly and time-consuming.
      • -
      • Extending printer life: By using EPSON Adjustment Program regularly, you can keep your printer in good condition and extend its life span.
      • -
      • Solving errors: By using EPSON Adjustment Program, you can fix some of the errors that might prevent you from printing properly, such as error E-11, paper jam, ink cartridge error, etc.
      • -

        When Do You Need to Use EPSON Adjustment Program?

        -

        One of the most common situations when you need to use EPSON Adjustment Program is when you encounter error E-11 on your printer. This error means that your waste ink pad is full and needs to be replaced. The waste ink pad is a sponge-like component that absorbs the excess ink that is used for cleaning the print head. Over time, the waste ink pad becomes saturated and can no longer hold any more ink. When this happens, the printer will display an error message and stop working to prevent any ink leakage or damage to the printer.

        -

        To solve this problem, you can use EPSON Adjustment Program to reset the waste ink pad counter and make your printer work again. However, you should also replace the waste ink pads or install an external waste tank to avoid any future problems. You can find instructions on how to do this online or contact a professional service center.

        -

        Some other situations when you might need to use EPSON Adjustment Program are:

        -
          -
        • Paper jam: This error means that there is a paper stuck in the printer or the paper feed mechanism is not working properly. You can use EPSON Adjustment Program to perform a paper feed test and see if the problem is solved.
        • -
        • Ink cartridge error: This error means that there is a problem with your ink cartridges, such as low ink level, incompatible cartridge, or faulty chip. You can use EPSON Adjustment Program to perform an ink charge and see if the problem is solved.
        • -
        • Print quality issues: If you notice that your print quality is poor, such as blurry, faded, or streaky prints, you can use EPSON Adjustment Program to perform a head cleaning and a nozzle check and see if the problem is solved.
        • -
        -

        How to Download and Install EPSON Adjustment Program?

        -

        Before you can use EPSON Adjustment Program to reset your printer or perform other functions, you need to download and install it on your computer. However, you should be aware that different models of EPSON printers have different versions of EPSON Adjustment Program. Therefore, you need to check your printer model and download the correct version of EPSON Adjustment Program for your printer.

        -

        To check your printer model, you can look at the label on the back or bottom of your printer. You should see a model number that starts with "ME" followed by some digits. For example, if you have an EPSON ME 340 printer, your model number is ME 340.

        -

        Once you know your printer model, you can download EPSON Adjustment Program from a reliable source. You can use the following link to download EPSON Adjustment Program for your printer model:

        -

        Download EPSON Adjustment Program for ME 340

        -

        After you download EPSON Adjustment Program, you need to install it on your computer. To do this, follow these steps:

        -
          -
        1. Extract the zip file that contains EPSON Adjustment Program to a folder on your computer.
        2. -
        3. Open the folder and double-click on the file named "AdjProg.exe".
        4. -
        5. A window will pop up asking for a password. Enter "epsonsn" (without quotation marks) and click OK.
        6. -
        7. A new window will open with the EPSON Adjustment Program interface. Click on "Select" to choose your printer model and port.
        8. -
        9. A list of available printers will appear. Select your printer model and port and click OK.
        10. -
        11. You have successfully installed EPSON Adjustment Program on your computer. You can now use it to reset your printer or perform other functions.
        12. -

          How to Use EPSON Adjustment Program to Reset Your Printer?

          -

          After you have installed EPSON Adjustment Program on your computer, you can use it to reset your printer or perform other functions. However, before you use EPSON Adjustment Program, you need to do some preparation to ensure that the process goes smoothly and safely. Here are some things you need to do before using EPSON Adjustment Program:

          -
            -
          • Turn off your antivirus software or firewall temporarily. Some antivirus software or firewall might interfere with EPSON Adjustment Program and cause errors or failures. You can turn them back on after you finish using EPSON Adjustment Program.
          • -
          • Connect your printer to your computer with a USB cable. Make sure that the connection is stable and secure. Do not use a wireless or network connection as it might cause communication problems.
          • -
          • Turn on your printer and make sure that it is in a ready state. Do not perform any printing tasks or operations while using EPSON Adjustment Program.
          • -
          • Make a backup of your printer settings and data. Using EPSON Adjustment Program might change some of your printer settings and data. You can use the "EEPROM Data Copy" function in EPSON Adjustment Program to make a backup of your printer settings and data and restore them later if needed.
          • -
          -

          Once you have done the preparation, you can use EPSON Adjustment Program to reset your printer or perform other functions. To do this, follow these steps:

          -
            -
          1. Open EPSON Adjustment Program on your computer by double-clicking on the file named "AdjProg.exe".
          2. -
          3. A window will pop up asking for a password. Enter "epsonsn" (without quotation marks) and click OK.
          4. -
          5. A new window will open with the EPSON Adjustment Program interface. Click on "Select" to choose your printer model and port.
          6. -
          7. A list of available printers will appear. Select your printer model and port and click OK.
          8. -
          9. A new window will open with the main menu of EPSON Adjustment Program. Click on "Particular adjustment mode".
          10. -
          11. A list of available functions will appear. Select "Waste ink pad counter" and click OK.
          12. -
          13. A new window will open with the waste ink pad counter settings. Check the boxes next to "Main pad counter" and "Platen pad counter" and click on "Check".
          14. -
          15. The current values of the waste ink pad counters will be displayed. If they are close to or exceed 100%, it means that your waste ink pads are full and need to be reset.
          16. -
          17. Click on "Initialization" to reset the waste ink pad counters to zero.
          18. -
          19. A message will appear asking you to confirm the reset. Click OK.
          20. -
          21. A message will appear asking you to turn off your printer. Click OK and turn off your printer.
          22. -
          23. Wait for a few seconds and then turn on your printer again.
          24. -
          25. You have successfully reset your printer using EPSON Adjustment Program. You can close EPSON Adjustment Program and turn on your antivirus software or firewall again.
          26. -
          -

          How to Use EPSON Adjustment Program for Other Purposes?

          -

          Besides resetting your printer, you can also use EPSON Adjustment Program for other purposes such as head cleaning, ink charge, nozzle check, etc. These functions can help you improve your print quality and prevent any printing errors. Here are some of the functions that you can use and how to use them:

          -

          Head Cleaning

          -

          Head cleaning is a function that allows you to clean your print head and remove any clogged or dried ink that might affect your print quality. If you notice that your prints are blurry, faded, or streaky, you can use this function to fix the problem. To use this function, follow these steps:

          -
            -
          1. Open EPSON Adjustment Program on your computer by double-clicking on the file named "AdjProg.exe".
          2. A window will pop up asking for a password. Enter "epsonsn" (without quotation marks) and click OK.
          3. A new window will open with the EPSON Adjustment Program interface. Click on "Select" to choose your printer model and port.
          4. A list of available printers will appear. Select your printer model and port and click OK.
          5. A new window will open with the main menu of EPSON Adjustment Program. Click on "Particular adjustment mode".
          6. A list of available functions will appear. Select "Head cleaning" and click OK.
          7. A new window will open with the head cleaning settings. Click on "Start" to begin the head cleaning process.
          8. A message will appear asking you to wait while the printer performs the head cleaning. Do not turn off your printer or computer during this process.
          9. When the head cleaning is done, a message will appear asking you to print a nozzle check pattern. Click OK and follow the instructions on the screen to print a nozzle check pattern.
          10. Check the nozzle check pattern and see if there are any gaps or missing lines in the printed pattern. If there are, repeat the head cleaning process until the nozzle check pattern is clear and complete.
          11. You have successfully used EPSON Adjustment Program to clean your print head. You can close EPSON Adjustment Program and print a test page or a document to see if your print quality has improved.
          -

          Ink Charge

          -

          Ink charge is a function that charges ink into the print head, which helps clear air bubbles and prevent ink leakage that might cause printing errors. If you notice that your prints are inconsistent, blotchy, or have white spots, you can use this function to fix the problem. To use this function, follow these steps:

          -
            -
          1. Open EPSON Adjustment Program on your computer by double-clicking on the file named "AdjProg.exe".
          2. A window will pop up asking for a password. Enter "epsonsn" (without quotation marks) and click OK.
          3. A new window will open with the EPSON Adjustment Program interface. Click on "Select" to choose your printer model and port.
          4. A list of available printers will appear. Select your printer model and port and click OK.
          5. A new window will open with the main menu of EPSON Adjustment Program. Click on "Particular adjustment mode".
          6. A list of available functions will appear. Select "Ink charge" and click OK.
          7. A new window will open with the ink charge settings. Click on "Start" to begin the ink charge process.
          8. A message will appear asking you to wait while the printer performs the ink charge. Do not turn off your printer or computer during this process.
          9. When the ink charge is done, a message will appear confirming that the ink charge is complete. Click OK.
          10. You have successfully used EPSON Adjustment Program to charge your ink system. You can close EPSON Adjustment Program and print a test page or a document to see if your print quality has improved.

            Nozzle Check

            -

            Nozzle check is a function that allows you to check the condition of the nozzles and see if they are working properly. The nozzles are the tiny holes that spray ink onto the paper. If the nozzles are clogged or damaged, your print quality will suffer. You can use this function to print a nozzle check pattern and see if there are any gaps or missing lines in the printed pattern. To use this function, follow these steps:

            -
              -
            1. Open EPSON Adjustment Program on your computer by double-clicking on the file named "AdjProg.exe".
            2. A window will pop up asking for a password. Enter "epsonsn" (without quotation marks) and click OK.
            3. A new window will open with the EPSON Adjustment Program interface. Click on "Select" to choose your printer model and port.
            4. A list of available printers will appear. Select your printer model and port and click OK.
            5. A new window will open with the main menu of EPSON Adjustment Program. Click on "Particular adjustment mode".
            6. A list of available functions will appear. Select "Nozzle check" and click OK.
            7. A new window will open with the nozzle check settings. Click on "Print" to print a nozzle check pattern.
            8. A message will appear asking you to load a sheet of plain paper into your printer. Do so and click OK.
            9. Your printer will print a nozzle check pattern that consists of four colored grids. Each grid represents a different color: black, cyan, magenta, and yellow.
            10. Check the nozzle check pattern and see if there are any gaps or missing lines in the printed grids. If there are, it means that some of the nozzles are clogged or damaged and need to be cleaned or replaced.
            11. You have successfully used EPSON Adjustment Program to check your nozzles. You can close EPSON Adjustment Program and perform a head cleaning or contact a service center if needed.
            -

            Maintenance

            -

            Maintenance is a function that allows you to perform other maintenance tasks such as head angular adjustment, the paper feed test, and an EEPROM dump. These tasks can help you fine-tune your printer's settings and data and optimize its performance and print quality. To use this function, follow these steps:

            -
              -
            1. Open EPSON Adjustment Program on your computer by double-clicking on the file named "AdjProg.exe".
            2. A window will pop up asking for a password. Enter "epsonsn" (without quotation marks) and click OK.
            3. A new window will open with the EPSON Adjustment Program interface. Click on "Select" to choose your printer model and port.
            4. A list of available printers will appear. Select your printer model and port and click OK.
            5. A new window will open with the main menu of EPSON Adjustment Program. Click on "Particular adjustment mode".
            6. A list of available functions will appear. Select "Maintenance" and click OK.
            7. A new window will open with the maintenance settings. Choose the task that you want to perform from the drop-down menu and click on "Execute".
            8. Follow the instructions on the screen to complete the task.
            9. You have successfully used EPSON Adjustment Program to perform a maintenance task. You can close EPSON Adjustment Program and print a test page or a document to see if your printer works normally.

              Here are some of the frequently asked questions (FAQs) about EPSON Adjustment Program:

              -
                -
              1. What is the difference between EPSON Adjustment Program and EPSON Resetter?

                EPSON Adjustment Program and EPSON Resetter are two names for the same software tool that allows you to reset the waste ink pad counter of your EPSON printer and perform other maintenance tasks. They are both unofficial tools that are not endorsed or supported by EPSON. You can use either name to refer to the same tool.

                -
              2. Is EPSON Adjustment Program safe to use?

                EPSON Adjustment Program is generally safe to use if you download it from a reliable source and follow the instructions carefully. However, be aware of its risks and limitations: using it might void your warranty, change your printer settings and data, or cause errors or failures. After resetting the counter, you should replace the waste ink pads or install an external waste tank to avoid ink leakage or damage to your printer, and you should use the program only when necessary rather than abusing it.

                -
              3. Where can I find EPSON Adjustment Program for my printer model?

                You can find EPSON Adjustment Program for your printer model by searching online or using the link provided in this article. However, you should be careful about the source and quality of the download. Some websites might offer fake or malicious downloads that might harm your computer or printer. You should only download EPSON Adjustment Program from a trusted and verified source.

                -
              4. How often should I use EPSON Adjustment Program?

                You should use EPSON Adjustment Program only when you encounter a problem that requires you to reset your printer or perform other functions. For example, if you see an error message that indicates that your waste ink pad is full, you can use EPSON Adjustment Program to reset the counter and make your printer work again. However, you should not use EPSON Adjustment Program too frequently or unnecessarily as it might cause more problems or damage to your printer. You should also use EPSON Adjustment Program as a last resort after trying other solutions such as troubleshooting, cleaning, or contacting a service center.

                -
              5. How can I contact EPSON if I need help or support?

                If you need help or support with your EPSON printer, you can contact EPSON directly through their official website, phone number, email, or social media. You can find their contact information on their website or on your printer manual. You can also visit their online support page to find answers to common questions, download drivers and manuals, request a repair, or register your product.

                -

              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cs16noflashscriptdownload [PORTABLE].md b/spaces/tioseFevbu/cartoon-converter/scripts/Cs16noflashscriptdownload [PORTABLE].md deleted file mode 100644 index 0341c5e573c58929d1dff59ae8c4b7836051e805..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Cs16noflashscriptdownload [PORTABLE].md +++ /dev/null @@ -1,103 +0,0 @@ -
              -

              How to Download and Use a No Flash Script for CS 1.6

              -

              Counter-Strike 1.6, or CS 1.6 for short, is one of the most popular and influential first-person shooter games ever made. Released in 2003, it has attracted millions of players around the world who enjoy its fast-paced, team-based, and realistic gameplay. One of the key elements of CS 1.6 is the use of various weapons and equipment, such as grenades, that can give an advantage or disadvantage to the players.

              -

              One of the most common and effective grenades in CS 1.6 is the flashbang grenade, or simply flashbang. A flashbang grenade is a less-lethal explosive device that produces a blinding flash of light and an extremely loud bang when detonated. It is used to temporarily disorient and confuse the enemy's senses, making them vulnerable to attack or escape.

              -

              cs16noflashscriptdownload


              Download File ✏ ✏ ✏ https://urlcod.com/2uHvwa



              -

              However, some players may find the flashbang grenade annoying or unfair, especially when they are on the receiving end of it. They may feel that the flashbang grenade disrupts their vision and hearing too much, making them unable to play properly or enjoy the game. That is why some players resort to using a no flash script, which is a program that disables or reduces the effect of the flashbang grenade on their screen.

              -

              A no flash script is a type of cheat or hack that modifies the game files or settings to prevent or minimize the flash effect caused by the flashbang grenade. By using a no flash script, a player can avoid being blinded or deafened by the flashbang grenade, giving them an edge over their opponents who are still affected by it.

              -

              In this article, we will show you how to download and install a no flash script for CS 1.6, as well as how to use it in the game. We will also discuss the risks and benefits of using a no flash script, as well as the ethical and legal implications of doing so.

              -

              How to Download and Install a No Flash Script for CS 1.6

              -

              Before you decide to download and install a no flash script for CS 1.6, you should be aware of the potential risks and benefits of doing so.

              -

              The Risks and Benefits of Using a No Flash Script

              -

              The main benefit of using a no flash script is that it can improve your gameplay experience by eliminating or reducing the annoying and disruptive effect of the flashbang grenade on your screen. You can see better, hear better, and react faster than your enemies who are still affected by the flashbang grenade. You can also avoid being killed or captured by your enemies who use the flashbang grenade as a tactic to surprise or overwhelm you.

              -

              However, there are also some risks involved in using a no flash script. First of all, using a no flash script is considered cheating or hacking by many players and servers who value fair play and respect for the game rules. If you are caught using a no flash script, you may face consequences such as being banned from servers, reported to authorities, or shamed by other players. You may also lose your reputation and credibility as a player who plays honestly and skillfully.

              -


              Secondly, using a no flash script may also affect your own gameplay enjoyment and satisfaction. You may feel bored or unchallenged by the game, as you have an unfair advantage over your enemies. You may also miss out on the thrill and excitement of overcoming the flashbang grenade effect, which is part of the challenge and fun of CS 1.6. You may also lose respect for yourself as a player who relies on cheating or hacking to win, rather than on your own skills and abilities.

              -

              Therefore, you should weigh the pros and cons of using a no flash script carefully before you decide to download and install one. You should also respect the preferences and opinions of other players and servers who may not approve of using a no flash script. You should also be prepared to face the possible consequences of using a no flash script, such as being banned or reported.

              -

              -

              The Sources and Types of No Flash Scripts Available Online

              -

              If you still want to download and install a no flash script for CS 1.6, you will need to find a reliable and safe source online. There are many websites and forums that offer various types of no flash scripts for CS 1.6, but not all of them are trustworthy or legitimate. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Some of them may also provide outdated, ineffective, or incompatible no flash scripts that can cause errors or crashes in your game.

              -

              Therefore, you should be careful and selective when choosing a source for your no flash script. You should do some research and read some reviews before you download anything from a website or forum. You should also scan the downloaded file with an antivirus program before you open or run it. You should also backup your game files and settings before you install any no flash script, in case something goes wrong or you want to uninstall it later.

              -

              There are different types of no flash scripts available online, depending on how they work and what they do. Some of the most common types are:

              -
                -
              • No Flash Script: This is the simplest and most basic type of no flash script. It simply disables or removes the flash effect from your screen completely, making it invisible. You will not see any white or yellow light when a flashbang grenade explodes near you.
              • Reduced Flash Script: This type of no flash script reduces the intensity and duration of the flash effect on your screen, making it less noticeable and annoying. You will still see some light when a flashbang grenade explodes near you, but it will be dimmer and shorter than normal.
              • Transparent Flash Script: This type of no flash script makes the flash effect transparent or semi-transparent on your screen, allowing you to see through it partially or fully. You will still see some light when a flashbang grenade explodes near you, but it will not block your vision completely.
              • Colored Flash Script: This type of no flash script changes the color of the flash effect on your screen, making it easier to distinguish from the background or other objects. You will still see some light when a flashbang grenade explodes near you, but it will be a different color than normal, such as green or blue.
              -

              You can choose the type of no flash script that suits your preference and needs best. However, you should also consider the compatibility and performance of the no flash script with your game version and system specifications. Some no flash scripts may not work properly or at all with certain game updates or patches. Some no flash scripts may also cause lag or stuttering in your game, especially if your computer is not powerful enough to handle them.

              -

              The Steps to Download and Install a No Flash Script for CS 1.6

              -

              Once you have found a reliable and safe source for your no flash script, you can follow these general steps to download and install it for CS 1.6:

              -
                -
              1. Download the no flash script file: Click on the download link or button provided by the website or forum where you found the no flash script. Save the file to a location that you can easily access later, such as your desktop or downloads folder.
              2. Extract the no flash script file: If the downloaded file is compressed or archived in a format such as .zip or .rar, you will need to extract it using a program such as WinRAR or 7-Zip. Right-click on the file and select "Extract Here" or "Extract to" from the menu. A new folder containing the extracted files will appear in the same location as the original file.
              3. Copy or move the no flash script file: Locate the extracted folder and open it. Inside, you should find one or more files with the extension .cfg, which are the configuration files that contain the no flash script commands and settings. Copy or move these files to the folder where your CS 1.6 game is installed, usually in C:\Program Files\Steam\steamapps\common\Half-Life\cstrike.
              4. Run the no flash script file: Launch your CS 1.6 game and join a server or start a match. Open the console by pressing the tilde (~) key on your keyboard. Type "exec" followed by the name of the no flash script file that you copied or moved to the game folder, such as "exec noflash.cfg". Press Enter to run the no flash script file and activate the no flash script in your game.
              -

              Congratulations, you have successfully downloaded and installed a no flash script for CS 1.6. You can now enjoy playing the game without being affected by the flashbang grenade effect.

              -

              How to Use a No Flash Script in CS 1.6

              -

              Now that you have a no flash script installed in your CS 1.6 game, you may want to know how to use it effectively and efficiently. Here are some tips and tricks to help you get the most out of your no flash script:

              -

              The Commands and Settings to Activate and Deactivate a No Flash Script

              -

              As mentioned earlier, you can activate a no flash script by running the no flash script file in the console using the "exec" command. However, you may not want to use the no flash script all the time, especially if you are playing on servers or matches that do not allow it or if you want to challenge yourself without it. In that case, you may want to deactivate the no flash script temporarily or permanently.

              -

              To deactivate a no flash script temporarily, you can use the "unbind" command in the console. This command will remove the association between a key and a command, meaning that pressing that key will not execute that command anymore. For example, if you have bound the F1 key to run the no flash script file using the "bind" command, such as "bind F1 exec noflash.cfg", you can unbind it by typing "unbind F1" in the console. This will prevent the F1 key from running the no flash script file until you bind it again.

              -

              To deactivate a no flash script permanently, you can delete or move the no flash script file from your game folder, or edit it to remove or comment out the commands and settings that disable or reduce the flash effect. You can also use the "default" command in the console to restore your game settings to their original values.
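
              As a rough sketch of the console commands described above, the lines below show one way to wire up loading and unloading such a script. The file name noflash.cfg is the one used in this article's examples and is assumed to sit in your cstrike folder; the F1 key is an arbitrary choice, and config.cfg is the standard file where CS 1.6 saves your usual settings.

              // minimal sketch — noflash.cfg is an assumed file name, F1 an arbitrary key
              // load the no flash script when F1 is pressed
              bind F1 "exec noflash.cfg"
              // remove the binding when you no longer want it
              unbind F1
              // re-run your saved settings file to put your usual values back
              exec config.cfg

              You can type these lines directly into the console, or keep them in a small .cfg file of their own and exec that once.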

              -

              The Tips and Tricks to Optimize the Performance of a No Flash Script

              -

              If you want to use a no flash script without causing any problems or issues in your game, you should follow these tips and tricks to optimize its performance:

              -
                -
              • Choose a suitable type of no flash script: As mentioned earlier, there are different types of no flash scripts available online, such as no flash, reduced flash, transparent flash, and colored flash scripts. You should choose one that suits your preference and needs best, as well as one that is compatible and effective with your game version and system specifications.
              • Test and adjust your no flash script: Before you use a no flash script in an actual match or server, you should test it in a practice mode or offline mode first. This will allow you to check if it works properly or not, as well as if it causes any errors or crashes in your game. You can also adjust your no flash script settings, such as the intensity or duration of the flash effect, by editing the configuration file or using commands in the console.
              • Use a toggle key for your no flash script: Instead of running your no flash script file every time you want to activate it, you can use a toggle key that enables or disables it with one press. You can do this with the "alias" command in the console, which lets you give a custom name to a command or a sequence of commands. For example, typing alias noflash "exec noflash.cfg" in the console creates an alias called "noflash" that runs the no flash script file; you can then bind that alias, or a pair of aliases that switch between on and off states, to a key such as F1 (see the sketch after this list). This way, you can easily switch between using and not using the no flash script with one key.
              • Be discreet and respectful when using your no flash script: Even if you have a no flash script installed and activated in your game, you should not flaunt it or abuse it in front of other players or servers who may not appreciate it or allow it. You should be discreet and respectful when using your no flash script, and avoid doing anything that may draw attention or suspicion to yourself, such as bragging, taunting, or killing your enemies too easily or frequently. You should also respect the rules and wishes of the servers or matches that you join, and disable or uninstall your no flash script if they ask you to or if they detect it.
              -
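
              The alias-based toggle mentioned in the list above can be wired up roughly as follows. This is only a sketch under a couple of assumptions: noflash.cfg is the script you downloaded, and flash_default.cfg is a second file, assumed here, that you create yourself to restore your normal settings.

              // toggle sketch — noflash.cfg and flash_default.cfg are assumed file names
              // each alias loads one config and re-points noflash_toggle at the other
              alias noflash_on "exec noflash.cfg; alias noflash_toggle noflash_off"
              alias noflash_off "exec flash_default.cfg; alias noflash_toggle noflash_on"
              // start in the "off" state, so the first press enables the script
              alias noflash_toggle "noflash_on"
              bind F1 "noflash_toggle"

              Each press of F1 runs one alias and re-points noflash_toggle at the other, which is the usual way to emulate an on/off toggle in the CS 1.6 console.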

              The Ethical and Legal Implications of Using a No Flash Script in CS 1.6

              -

              Finally, you should also be aware of the ethical and legal implications of using a no flash script in CS 1.6. As mentioned earlier, using a no flash script is considered cheating or hacking by many players and servers who value fair play and respect for the game rules. By using a no flash script, you are gaining an unfair advantage over your enemies who are still affected by the flashbang grenade effect, and you are also altering the game files or settings without the permission or consent of the game developers or publishers.

              -

              Therefore, using a no flash script may violate the terms of service or end-user license agreement of CS 1.6, as well as the laws or regulations of your country or region regarding intellectual property rights, online gaming, or cybercrime. If you are caught using a no flash script, you may face legal consequences such as fines, lawsuits, or criminal charges, depending on the severity and frequency of your offense.

              -

              Moreover, using a no flash script may also go against your own moral principles or values as a gamer and a person. You may feel guilty or ashamed of using a no flash script, as you are cheating yourself and others out of a fair and fun gaming experience. You may also lose your integrity and honor as a gamer and a person, as you are breaking the trust and respect that other players and servers have for you.

              -

              Therefore, you should think carefully and critically about whether using a no flash script is worth it or not, both for yourself and for others. You should also consider the possible alternatives or solutions to your problem or issue with the flashbang grenade effect, such as practicing more, changing your strategy, adjusting your settings, or playing on different servers or modes.

              -

              Conclusion

              -

              In conclusion, a no flash script is a program that disables or reduces the effect of the flashbang grenade on your screen in CS 1.6. It can improve your gameplay experience by eliminating or reducing the annoying and disruptive effect of the flashbang grenade on your screen. However, it can also cause problems or issues in your game, such as being banned, reported, shamed, bored, unchallenged, or unsatisfied. It can also violate the terms of service or end-user license agreement of CS 1.6, as well as the laws or regulations of your country or region regarding intellectual property rights, online gaming, or cybercrime. It can also go against your own moral principles or values as a gamer and a person.

              -

              Therefore, you should weigh the pros and cons of using a no flash script carefully before you decide to download and install one. You should also respect the preferences and opinions of other players and servers who may not approve of using a no flash script. You should also be prepared to face the possible consequences of using a no flash script, such as being banned or reported.

              -

              If you still want to download and install a no flash script for CS 1.6, you should follow these general steps:

              -
                -
              1. Find a reliable and safe source online that offers various types of no flash scripts for CS 1.6.
              2. Download the no flash script file from the source and save it to a location that you can easily access later.
              3. Extract the no flash script file if it is compressed or archived in a format such as .zip or .rar.
              4. Copy or move the no flash script file to the folder where your CS 1.6 game is installed.
              5. Run the no flash script file in the console using the "exec" command.
              -

              You can also follow these tips and tricks to use your no flash script effectively and efficiently:

              -
                -
              • Choose a suitable type of no flash script that suits your preference and needs best, as well as one that is compatible and effective with your game version and system specifications.
              • Test and adjust your no flash script in a practice mode or offline mode before you use it in an actual match or server.
              • Use a toggle key for your no flash script that will enable or disable it with one press.
              • Be discreet and respectful when using your no flash script, and avoid doing anything that may draw attention or suspicion to yourself.
              -

              We hope that this article has helped you understand how to download and use a no flash script for CS 1.6, as well as the risks and benefits of doing so. We also hope that you have enjoyed reading this article and learned something new from it. If you have any questions, comments, or feedback, please feel free to share them with us in the comment section below. We would love to hear from you and answer your queries. Thank you for reading and happy gaming!

              -

              FAQs

              -

              Here are some of the most frequently asked questions and answers about no flash scripts in CS 1.6:

              -

              Q: What is the difference between a no flash script and a no smoke script?

              -

              A: A no flash script is a program that disables or reduces the effect of the flashbang grenade on your screen, while a no smoke script is a program that disables or reduces the effect of the smoke grenade on your screen. A smoke grenade is another type of less-lethal explosive device that produces a thick cloud of smoke when detonated. It is used to create a visual cover or distraction for the players, making it harder for them to see or be seen by their enemies.

              -

              Q: Is using a no flash script illegal or immoral?

              -

              A: Using a no flash script may be illegal or immoral, depending on the terms of service or end-user license agreement of CS 1.6, as well as the laws or regulations of your country or region regarding intellectual property rights, online gaming, or cybercrime. It may also be immoral, depending on your own moral principles or values as a gamer and a person. You should think carefully and critically about whether using a no flash script is worth it or not, both for yourself and for others.

              -

              Q: How can I detect or prevent other players from using a no flash script?

              -

              A: There are some ways to detect or prevent other players from using a no flash script, such as:

              -
                -
              • Using an anti-cheat program: Some servers or matches may use an anti-cheat program, such as VAC (Valve Anti-Cheat), that can detect and ban players who use cheats or hacks, such as no flash scripts, in their games. You can join these servers or matches if you want to play with other players who do not use cheats or hacks.
              • Using a spectator mode: Some servers or matches may allow you to use a spectator mode, which lets you watch other players' perspectives without interfering with their gameplay. You can use this mode to observe other players' behavior and reactions when they are exposed to the flashbang grenade effect. If they do not seem to be affected by it at all, they may be using a no flash script.
              • Using your own judgment: Sometimes, you can tell if other players are using a no flash script by using your own judgment and common sense. For example, if they seem to be too good or too lucky at avoiding or countering the flashbang grenade effect, they may be using a no flash script.
              -

              Q: How can I improve my skills without using a no flash script?

              -

              A: There are some ways to improve your skills without using a no flash script, such as:

              -
                -
              • Practicing more: The best way to improve your skills is to practice more. You can play more matches or servers with different levels of difficulty and competition. You can also play with different weapons and equipment, such as grenades, that can challenge you and teach you new strategies and tactics.
              • Changing your strategy: Another way to improve your skills is to change your strategy. You can try different approaches and methods to deal with the flashbang grenade effect, such as avoiding it, countering it, or using it to your advantage. You can also learn from other players who are better than you at handling the flashbang grenade effect, and copy or adapt their techniques and tricks.
              • Adjusting your settings: A final way to improve your skills is to adjust your settings. You can tweak your game settings, such as the brightness, contrast, gamma, or sound, to make the flashbang grenade effect less severe or more tolerable (a small console example follows this list). You can also adjust your personal settings, such as your posture, position, or environment, to make yourself more comfortable or focused when playing the game.
              -
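
              For the settings-adjustment tip above, here is a small example of legitimate console tweaks. The values are arbitrary and only illustrative; brightness, gamma, and volume are standard CS 1.6 console variables, while contrast is normally something you adjust on your monitor or in your graphics driver rather than in the game.

              // example values only — tune them to taste
              brightness 2
              gamma 2.5
              volume 0.5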

              Q: What are some alternatives or substitutes for a no flash script?

              -

              A: If you do not want to use a no flash script, but still want to avoid or reduce the flashbang grenade effect, you can try some alternatives or substitutes, such as:

              -
                -
              • Using night vision goggles: Night vision goggles are an item that you can buy and use in CS 1.6. They allow you to see better in dark or low-light conditions by enhancing the ambient light. They can also help you see better when you are affected by the flashbang grenade effect, as they reduce the brightness and contrast of the flash effect on your screen. However, they also have some drawbacks, such as making your screen green and noisy, consuming battery power, and making a sound when you turn them on or off.
              • Using a shield: A shield is another item that you can buy and use in CS 1.6. It is a large metal plate that you can hold in front of you to protect yourself from bullets and explosions. It can also block the flashbang grenade effect from reaching your eyes, as long as you are facing the direction of the flashbang grenade when it explodes. However, it also has some limitations, such as slowing you down, restricting your movement and vision, and preventing you from using other weapons or items.
              • Using a mod or plugin: A mod or plugin is a program that adds or changes some features or functions of the game. Some mods or plugins may offer some options or settings that can affect the flashbang grenade effect in different ways, such as changing its color, shape, size, or duration. However, not all mods or plugins are compatible or allowed with CS 1.6, and some of them may also be considered cheating or hacking by other players or servers.
              -

              These are some of the possible alternatives or substitutes for a no flash script that you can try in CS 1.6. However, none of them are perfect or guaranteed to work for everyone or every situation. You should experiment and find out what works best for you and your game.

              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Death Run Multiplayer Game [BETTER].md b/spaces/tioseFevbu/cartoon-converter/scripts/Death Run Multiplayer Game [BETTER].md deleted file mode 100644 index 910464b035813f2b3a65f344f4ab199ff08e5600..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Death Run Multiplayer Game [BETTER].md +++ /dev/null @@ -1,25 +0,0 @@ -
              -

              Death Run: A Thrilling Multiplayer Game for the Brave

              -

              Do you love adrenaline-pumping games that challenge your skills and nerves? Do you enjoy competing with other players online in a fast-paced and deadly environment? If you answered yes, then you might want to check out Death Run, a multiplayer game that will test your survival instincts and reflexes.

              -

              Death Run is a game where you have to run through a series of obstacles and traps while trying to reach the finish line before your opponents. The game features various maps with different themes and difficulties, such as a haunted mansion, a pirate ship, a space station, and more. Each map has its own hazards and surprises that can kill you instantly, such as spikes, lasers, saws, fireballs, and more. You have to be quick and smart to avoid them and make it to the end.

              -

              Death Run Multiplayer Game


              Download 🔗 https://urlcod.com/2uHv5A



              -

              But that's not all. You also have to deal with the other players who are trying to stop you from winning. You can either play as a runner or a killer. As a runner, you have to dodge the traps and reach the finish line. As a killer, you have to activate the traps and kill the runners. You can switch roles every round and see who is the best at both.

              -

              Death Run is a game that will keep you on the edge of your seat and make your heart race. It is a game that requires skill, strategy, and luck. It is a game that will make you scream, laugh, and rage. It is a game that you will love to play with your friends or strangers online.

              -

              If you are ready for the ultimate challenge, then download Death Run today and join the fun. But be warned: this game is not for the faint of heart. Only the brave can survive Death Run.

              -

              - -

              Death Run is not just a game. It is a community of players who share a passion for thrill and excitement. You can chat with other players, make friends, join clans, and participate in tournaments and events. You can also customize your character with different outfits, accessories, and weapons. You can even create your own maps and share them with others.

              -

              Death Run is a game that never gets old. It is constantly updated with new features, maps, and modes. You can always find something new and exciting to try. You can also challenge yourself with different achievements and leaderboards. You can show off your skills and rank among the best players in the world.

              -

              Death Run is a game that you will never regret playing. It is a game that will make you feel alive. It is a game that will make you addicted. It is a game that will make you a Death Runner.

              - -

              Death Run is not just a game for fun. It is also a game for learning. You can learn a lot from playing Death Run, such as:

              -
                -
              • How to think fast and make quick decisions.
              • How to cooperate and communicate with other players.
              • How to adapt and improvise in different situations.
              • How to deal with failure and frustration.
              • How to overcome fear and challenge yourself.
              -

              Death Run is a game that will make you smarter, stronger, and braver. It is a game that will make you a better person. It is a game that will make you a Death Runner.

              -
              -
              \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/palette.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/palette.py deleted file mode 100644 index fa0c4dd40381addf5b42fae4228b6d8fef03abd9..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/palette.py +++ /dev/null @@ -1,100 +0,0 @@ -from math import sqrt -from functools import lru_cache -from typing import Sequence, Tuple, TYPE_CHECKING - -from .color_triplet import ColorTriplet - -if TYPE_CHECKING: - from pip._vendor.rich.table import Table - - -class Palette: - """A palette of available colors.""" - - def __init__(self, colors: Sequence[Tuple[int, int, int]]): - self._colors = colors - - def __getitem__(self, number: int) -> ColorTriplet: - return ColorTriplet(*self._colors[number]) - - def __rich__(self) -> "Table": - from pip._vendor.rich.color import Color - from pip._vendor.rich.style import Style - from pip._vendor.rich.text import Text - from pip._vendor.rich.table import Table - - table = Table( - "index", - "RGB", - "Color", - title="Palette", - caption=f"{len(self._colors)} colors", - highlight=True, - caption_justify="right", - ) - for index, color in enumerate(self._colors): - table.add_row( - str(index), - repr(color), - Text(" " * 16, style=Style(bgcolor=Color.from_rgb(*color))), - ) - return table - - # This is somewhat inefficient and needs caching - @lru_cache(maxsize=1024) - def match(self, color: Tuple[int, int, int]) -> int: - """Find a color from a palette that most closely matches a given color. - - Args: - color (Tuple[int, int, int]): RGB components in range 0 > 255. - - Returns: - int: Index of closes matching color. 
- """ - red1, green1, blue1 = color - _sqrt = sqrt - get_color = self._colors.__getitem__ - - def get_color_distance(index: int) -> float: - """Get the distance to a color.""" - red2, green2, blue2 = get_color(index) - red_mean = (red1 + red2) // 2 - red = red1 - red2 - green = green1 - green2 - blue = blue1 - blue2 - return _sqrt( - (((512 + red_mean) * red * red) >> 8) - + 4 * green * green - + (((767 - red_mean) * blue * blue) >> 8) - ) - - min_index = min(range(len(self._colors)), key=get_color_distance) - return min_index - - -if __name__ == "__main__": # pragma: no cover - import colorsys - from typing import Iterable - from pip._vendor.rich.color import Color - from pip._vendor.rich.console import Console, ConsoleOptions - from pip._vendor.rich.segment import Segment - from pip._vendor.rich.style import Style - - class ColorBox: - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> Iterable[Segment]: - height = console.size.height - 3 - for y in range(0, height): - for x in range(options.max_width): - h = x / options.max_width - l = y / (height + 1) - r1, g1, b1 = colorsys.hls_to_rgb(h, l, 1.0) - r2, g2, b2 = colorsys.hls_to_rgb(h, l + (1 / height / 2), 1.0) - bgcolor = Color.from_rgb(r1 * 255, g1 * 255, b1 * 255) - color = Color.from_rgb(r2 * 255, g2 * 255, b2 * 255) - yield Segment("▄", Style(color=color, bgcolor=bgcolor)) - yield Segment.line() - - console = Console() - console.print(ColorBox()) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/styled.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/styled.py deleted file mode 100644 index 91cd0db31c14e30d4c1e2e9f36382b7a5e022870..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/styled.py +++ /dev/null @@ -1,42 +0,0 @@ -from typing import TYPE_CHECKING - -from .measure import Measurement -from .segment import Segment -from .style import StyleType - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult, RenderableType - - -class Styled: - """Apply a style to a renderable. - - Args: - renderable (RenderableType): Any renderable. - style (StyleType): A style to apply across the entire renderable. 
- """ - - def __init__(self, renderable: "RenderableType", style: "StyleType") -> None: - self.renderable = renderable - self.style = style - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - style = console.get_style(self.style) - rendered_segments = console.render(self.renderable, options) - segments = Segment.apply_style(rendered_segments, style) - return segments - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> Measurement: - return Measurement.get(console, options, self.renderable) - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich import print - from pip._vendor.rich.panel import Panel - - panel = Styled(Panel("hello"), "on blue") - print(panel) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/msvc9compiler.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/msvc9compiler.py deleted file mode 100644 index 225f1a2f52da774b36049e0ec38a7fe90d1c3473..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/msvc9compiler.py +++ /dev/null @@ -1,820 +0,0 @@ -"""distutils.msvc9compiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for the Microsoft Visual Studio 2008. - -The module is compatible with VS 2005 and VS 2008. You can find legacy support -for older versions of VS in distutils.msvccompiler. -""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) -# ported to VS2005 and VS 2008 by Christian Heimes - -import os -import subprocess -import sys -import re - -from distutils.errors import ( - DistutilsExecError, - DistutilsPlatformError, - CompileError, - LibError, - LinkError, -) -from distutils.ccompiler import CCompiler, gen_lib_options -from distutils import log -from distutils.util import get_platform - -import winreg - -RegOpenKeyEx = winreg.OpenKeyEx -RegEnumKey = winreg.EnumKey -RegEnumValue = winreg.EnumValue -RegError = winreg.error - -HKEYS = ( - winreg.HKEY_USERS, - winreg.HKEY_CURRENT_USER, - winreg.HKEY_LOCAL_MACHINE, - winreg.HKEY_CLASSES_ROOT, -) - -NATIVE_WIN64 = sys.platform == 'win32' and sys.maxsize > 2**32 -if NATIVE_WIN64: - # Visual C++ is a 32-bit application, so we need to look in - # the corresponding registry branch, if we're running a - # 64-bit Python on Win64 - VS_BASE = r"Software\Wow6432Node\Microsoft\VisualStudio\%0.1f" - WINSDK_BASE = r"Software\Wow6432Node\Microsoft\Microsoft SDKs\Windows" - NET_BASE = r"Software\Wow6432Node\Microsoft\.NETFramework" -else: - VS_BASE = r"Software\Microsoft\VisualStudio\%0.1f" - WINSDK_BASE = r"Software\Microsoft\Microsoft SDKs\Windows" - NET_BASE = r"Software\Microsoft\.NETFramework" - -# A map keyed by get_platform() return values to values accepted by -# 'vcvarsall.bat'. Note a cross-compile may combine these (eg, 'x86_amd64' is -# the param to cross-compile on x86 targeting amd64.) 
-PLAT_TO_VCVARS = { - 'win32': 'x86', - 'win-amd64': 'amd64', -} - - -class Reg: - """Helper class to read values from the registry""" - - def get_value(cls, path, key): - for base in HKEYS: - d = cls.read_values(base, path) - if d and key in d: - return d[key] - raise KeyError(key) - - get_value = classmethod(get_value) - - def read_keys(cls, base, key): - """Return list of registry keys.""" - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - L = [] - i = 0 - while True: - try: - k = RegEnumKey(handle, i) - except RegError: - break - L.append(k) - i += 1 - return L - - read_keys = classmethod(read_keys) - - def read_values(cls, base, key): - """Return dict of registry keys and values. - - All names are converted to lowercase. - """ - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - d = {} - i = 0 - while True: - try: - name, value, type = RegEnumValue(handle, i) - except RegError: - break - name = name.lower() - d[cls.convert_mbcs(name)] = cls.convert_mbcs(value) - i += 1 - return d - - read_values = classmethod(read_values) - - def convert_mbcs(s): - dec = getattr(s, "decode", None) - if dec is not None: - try: - s = dec("mbcs") - except UnicodeError: - pass - return s - - convert_mbcs = staticmethod(convert_mbcs) - - -class MacroExpander: - def __init__(self, version): - self.macros = {} - self.vsbase = VS_BASE % version - self.load_macros(version) - - def set_macro(self, macro, path, key): - self.macros["$(%s)" % macro] = Reg.get_value(path, key) - - def load_macros(self, version): - self.set_macro("VCInstallDir", self.vsbase + r"\Setup\VC", "productdir") - self.set_macro("VSInstallDir", self.vsbase + r"\Setup\VS", "productdir") - self.set_macro("FrameworkDir", NET_BASE, "installroot") - try: - if version >= 8.0: - self.set_macro("FrameworkSDKDir", NET_BASE, "sdkinstallrootv2.0") - else: - raise KeyError("sdkinstallrootv2.0") - except KeyError: - raise DistutilsPlatformError( - """Python was built with Visual Studio 2008; -extensions must be built with a compiler than can generate compatible binaries. -Visual Studio 2008 was not found on this system. If you have Cygwin installed, -you can try compiling with MingW32, by passing "-c mingw32" to setup.py.""" - ) - - if version >= 9.0: - self.set_macro("FrameworkVersion", self.vsbase, "clr version") - self.set_macro("WindowsSdkDir", WINSDK_BASE, "currentinstallfolder") - else: - p = r"Software\Microsoft\NET Framework Setup\Product" - for base in HKEYS: - try: - h = RegOpenKeyEx(base, p) - except RegError: - continue - key = RegEnumKey(h, 0) - d = Reg.get_value(base, r"%s\%s" % (p, key)) - self.macros["$(FrameworkVersion)"] = d["version"] - - def sub(self, s): - for k, v in self.macros.items(): - s = s.replace(k, v) - return s - - -def get_build_version(): - """Return the version of MSVC that was used to build Python. - - For Python 2.3 and up, the version number is included in - sys.version. For earlier versions, assume the compiler is MSVC 6. - """ - prefix = "MSC v." 
- i = sys.version.find(prefix) - if i == -1: - return 6 - i = i + len(prefix) - s, rest = sys.version[i:].split(" ", 1) - majorVersion = int(s[:-2]) - 6 - if majorVersion >= 13: - # v13 was skipped and should be v14 - majorVersion += 1 - minorVersion = int(s[2:3]) / 10.0 - # I don't think paths are affected by minor version in version 6 - if majorVersion == 6: - minorVersion = 0 - if majorVersion >= 6: - return majorVersion + minorVersion - # else we don't know what version of the compiler this is - return None - - -def normalize_and_reduce_paths(paths): - """Return a list of normalized paths with duplicates removed. - - The current order of paths is maintained. - """ - # Paths are normalized so things like: /a and /a/ aren't both preserved. - reduced_paths = [] - for p in paths: - np = os.path.normpath(p) - # XXX(nnorwitz): O(n**2), if reduced_paths gets long perhaps use a set. - if np not in reduced_paths: - reduced_paths.append(np) - return reduced_paths - - -def removeDuplicates(variable): - """Remove duplicate values of an environment variable.""" - oldList = variable.split(os.pathsep) - newList = [] - for i in oldList: - if i not in newList: - newList.append(i) - newVariable = os.pathsep.join(newList) - return newVariable - - -def find_vcvarsall(version): - """Find the vcvarsall.bat file - - At first it tries to find the productdir of VS 2008 in the registry. If - that fails it falls back to the VS90COMNTOOLS env var. - """ - vsbase = VS_BASE % version - try: - productdir = Reg.get_value(r"%s\Setup\VC" % vsbase, "productdir") - except KeyError: - log.debug("Unable to find productdir in registry") - productdir = None - - if not productdir or not os.path.isdir(productdir): - toolskey = "VS%0.f0COMNTOOLS" % version - toolsdir = os.environ.get(toolskey, None) - - if toolsdir and os.path.isdir(toolsdir): - productdir = os.path.join(toolsdir, os.pardir, os.pardir, "VC") - productdir = os.path.abspath(productdir) - if not os.path.isdir(productdir): - log.debug("%s is not a valid directory" % productdir) - return None - else: - log.debug("Env var %s is not set or invalid" % toolskey) - if not productdir: - log.debug("No productdir found") - return None - vcvarsall = os.path.join(productdir, "vcvarsall.bat") - if os.path.isfile(vcvarsall): - return vcvarsall - log.debug("Unable to find vcvarsall.bat") - return None - - -def query_vcvarsall(version, arch="x86"): - """Launch vcvarsall.bat and read the settings from its environment""" - vcvarsall = find_vcvarsall(version) - interesting = {"include", "lib", "libpath", "path"} - result = {} - - if vcvarsall is None: - raise DistutilsPlatformError("Unable to find vcvarsall.bat") - log.debug("Calling 'vcvarsall.bat %s' (version=%s)", arch, version) - popen = subprocess.Popen( - '"%s" %s & set' % (vcvarsall, arch), - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - ) - try: - stdout, stderr = popen.communicate() - if popen.wait() != 0: - raise DistutilsPlatformError(stderr.decode("mbcs")) - - stdout = stdout.decode("mbcs") - for line in stdout.split("\n"): - line = Reg.convert_mbcs(line) - if '=' not in line: - continue - line = line.strip() - key, value = line.split('=', 1) - key = key.lower() - if key in interesting: - if value.endswith(os.pathsep): - value = value[:-1] - result[key] = removeDuplicates(value) - - finally: - popen.stdout.close() - popen.stderr.close() - - if len(result) != len(interesting): - raise ValueError(str(list(result.keys()))) - - return result - - -# More globals -VERSION = get_build_version() -# MACROS = 
MacroExpander(VERSION) - - -class MSVCCompiler(CCompiler): - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. - src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - self.__version = VERSION - self.__root = r"Software\Microsoft\VisualStudio" - # self.__macros = MACROS - self.__paths = [] - # target platform (.plat_name is consistent with 'bdist') - self.plat_name = None - self.__arch = None # deprecated name - self.initialized = False - - def initialize(self, plat_name=None): - # multi-init means we would need to check platform same each time... - assert not self.initialized, "don't init multiple times" - if self.__version < 8.0: - raise DistutilsPlatformError( - "VC %0.1f is not supported by this module" % self.__version - ) - if plat_name is None: - plat_name = get_platform() - # sanity check for platforms to prevent obscure errors later. - ok_plats = 'win32', 'win-amd64' - if plat_name not in ok_plats: - raise DistutilsPlatformError("--plat-name must be one of %s" % (ok_plats,)) - - if ( - "DISTUTILS_USE_SDK" in os.environ - and "MSSdk" in os.environ - and self.find_exe("cl.exe") - ): - # Assume that the SDK set up everything alright; don't try to be - # smarter - self.cc = "cl.exe" - self.linker = "link.exe" - self.lib = "lib.exe" - self.rc = "rc.exe" - self.mc = "mc.exe" - else: - # On x86, 'vcvars32.bat amd64' creates an env that doesn't work; - # to cross compile, you use 'x86_amd64'. - # On AMD64, 'vcvars32.bat amd64' is a native build env; to cross - # compile use 'x86' (ie, it runs the x86 compiler directly) - if plat_name == get_platform() or plat_name == 'win32': - # native build or cross-compile to win32 - plat_spec = PLAT_TO_VCVARS[plat_name] - else: - # cross compile from win32 -> some 64bit - plat_spec = ( - PLAT_TO_VCVARS[get_platform()] + '_' + PLAT_TO_VCVARS[plat_name] - ) - - vc_env = query_vcvarsall(VERSION, plat_spec) - - self.__paths = vc_env['path'].split(os.pathsep) - os.environ['lib'] = vc_env['lib'] - os.environ['include'] = vc_env['include'] - - if len(self.__paths) == 0: - raise DistutilsPlatformError( - "Python was built with %s, " - "and extensions need to be built with the same " - "version of the compiler, but it isn't installed." 
% self.__product - ) - - self.cc = self.find_exe("cl.exe") - self.linker = self.find_exe("link.exe") - self.lib = self.find_exe("lib.exe") - self.rc = self.find_exe("rc.exe") # resource compiler - self.mc = self.find_exe("mc.exe") # message compiler - # self.set_path_env_var('lib') - # self.set_path_env_var('include') - - # extend the MSVC path with the current path - try: - for p in os.environ['path'].split(';'): - self.__paths.append(p) - except KeyError: - pass - self.__paths = normalize_and_reduce_paths(self.__paths) - os.environ['path'] = ";".join(self.__paths) - - self.preprocess_options = None - if self.__arch == "x86": - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/Z7', - '/D_DEBUG', - ] - else: - # Win64 - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GS-', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/GS-', - '/Z7', - '/D_DEBUG', - ] - - self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO'] - if self.__version >= 7: - self.ldflags_shared_debug = ['/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG'] - self.ldflags_static = ['/nologo'] - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - def object_filenames(self, source_filenames, strip_dir=0, output_dir=''): - # Copied from ccompiler.py, extended to return .res as 'object'-file - # for .rc input file - if output_dir is None: - output_dir = '' - obj_names = [] - for src_name in source_filenames: - (base, ext) = os.path.splitext(src_name) - base = os.path.splitdrive(base)[1] # Chop off the drive - base = base[os.path.isabs(base) :] # If abs, chop off leading / - if ext not in self.src_extensions: - # Better to raise an exception instead of silently continuing - # and later complain about sources and targets having - # different lengths - raise CompileError("Don't know how to compile %s" % src_name) - if strip_dir: - base = os.path.basename(base) - if ext in self._rc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - elif ext in self._mc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - else: - obj_names.append(os.path.join(output_dir, base + self.obj_extension)) - return obj_names - - def compile( - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - - if not self.initialized: - self.initialize() - compile_info = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = "/fo" + obj - try: - self.spawn([self.rc] + pp_opts + [output_opt] + [input_opt]) - except DistutilsExecError as msg: - raise 
CompileError(msg) - continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. - # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. - h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc] + ['-h', h_dir, '-r', rc_dir] + [src]) - base, _ = os.path.splitext(os.path.basename(src)) - rc_file = os.path.join(rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc] + ["/fo" + obj] + [rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError("Don't know how to compile %s to %s" % (src, obj)) - - output_opt = "/Fo" + obj - try: - self.spawn( - [self.cc] - + compile_opts - + pp_opts - + [input_opt, output_opt] - + extra_postargs - ) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? - try: - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - (libraries, library_dirs, runtime_library_dirs) = fixed_args - - if runtime_library_dirs: - self.warn( - "I don't know what to do with 'runtime_library_dirs': " - + str(runtime_library_dirs) - ) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - if target_desc == CCompiler.EXECUTABLE: - if debug: - ldflags = self.ldflags_shared_debug[1:] - else: - ldflags = self.ldflags_shared[1:] - else: - if debug: - ldflags = self.ldflags_shared_debug - else: - ldflags = self.ldflags_shared - - export_opts = [] - for sym in export_symbols or []: - export_opts.append("/EXPORT:" + sym) - - ld_args = ( - ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename] - ) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! Make sure they are generated in the temporary build - # directory. Since they have different names for debug and release - # builds, they can go into the same directory. 
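        # The temp directory is inferred below from the location of the first
        # object file; the /IMPLIB: option then redirects the generated import
        # library into that same directory.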
- build_temp = os.path.dirname(objects[0]) - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename) - ) - implib_file = os.path.join(build_temp, self.library_filename(dll_name)) - ld_args.append('/IMPLIB:' + implib_file) - - self.manifest_setup_ldargs(output_filename, build_temp, ld_args) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - self.mkpath(os.path.dirname(output_filename)) - try: - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - - # embed the manifest - # XXX - this is somewhat fragile - if mt.exe fails, distutils - # will still consider the DLL up-to-date, but it will not have a - # manifest. Maybe we should link to a temp file? OTOH, that - # implies a build environment error that shouldn't go undetected. - mfinfo = self.manifest_get_embed_info(target_desc, ld_args) - if mfinfo is not None: - mffilename, mfid = mfinfo - out_arg = '-outputresource:%s;%s' % (output_filename, mfid) - try: - self.spawn(['mt.exe', '-nologo', '-manifest', mffilename, out_arg]) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def manifest_setup_ldargs(self, output_filename, build_temp, ld_args): - # If we need a manifest at all, an embedded manifest is recommended. - # See MSDN article titled - # "How to: Embed a Manifest Inside a C/C++ Application" - # (currently at http://msdn2.microsoft.com/en-us/library/ms235591(VS.80).aspx) - # Ask the linker to generate the manifest in the temp dir, so - # we can check it, and possibly embed it, later. - temp_manifest = os.path.join( - build_temp, os.path.basename(output_filename) + ".manifest" - ) - ld_args.append('/MANIFESTFILE:' + temp_manifest) - - def manifest_get_embed_info(self, target_desc, ld_args): - # If a manifest should be embedded, return a tuple of - # (manifest_filename, resource_id). Returns None if no manifest - # should be embedded. See http://bugs.python.org/issue7833 for why - # we want to avoid any manifest for extension modules if we can) - for arg in ld_args: - if arg.startswith("/MANIFESTFILE:"): - temp_manifest = arg.split(":", 1)[1] - break - else: - # no /MANIFESTFILE so nothing to do. - return None - if target_desc == CCompiler.EXECUTABLE: - # by default, executables always get the manifest with the - # CRT referenced. - mfid = 1 - else: - # Extension modules try and avoid any manifest if possible. - mfid = 2 - temp_manifest = self._remove_visual_c_ref(temp_manifest) - if temp_manifest is None: - return None - return temp_manifest, mfid - - def _remove_visual_c_ref(self, manifest_file): - try: - # Remove references to the Visual C runtime, so they will - # fall through to the Visual C dependency of Python.exe. - # This way, when installed for a restricted user (e.g. - # runtimes are not in WinSxS folder, but in Python's own - # folder), the runtimes do not need to be in every folder - # with .pyd's. - # Returns either the filename of the modified manifest or - # None if no manifest should be embedded. - manifest_f = open(manifest_file) - try: - manifest_buf = manifest_f.read() - finally: - manifest_f.close() - pattern = re.compile( - r"""|)""", - re.DOTALL, - ) - manifest_buf = re.sub(pattern, "", manifest_buf) - pattern = r"\s*" - manifest_buf = re.sub(pattern, "", manifest_buf) - # Now see if any other assemblies are referenced - if not, we - # don't want a manifest embedded. 
- pattern = re.compile( - r"""|)""", - re.DOTALL, - ) - if re.search(pattern, manifest_buf) is None: - return None - - manifest_f = open(manifest_file, 'w') - try: - manifest_f.write(manifest_buf) - return manifest_file - finally: - manifest_f.close() - except OSError: - pass - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "/LIBPATH:" + dir - - def runtime_library_dir_option(self, dir): - raise DistutilsPlatformError( - "don't know how to set runtime library search path for MSVC++" - ) - - def library_option(self, lib): - return self.library_filename(lib) - - def find_library_file(self, dirs, lib, debug=0): - # Prefer a debugging library if found (and requested), but deal - # with it if we don't have one. - if debug: - try_names = [lib + "_d", lib] - else: - try_names = [lib] - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename(name)) - if os.path.exists(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None - - # Helper methods for using the MSVC registry settings - - def find_exe(self, exe): - """Return path to an MSVC executable program. - - Tries to find the program in several places: first, one of the - MSVC program search paths from the registry; next, the directories - in the PATH environment variable. If any of those work, return an - absolute path that is known to exist. If none of them work, just - return the original program name, 'exe'. - """ - for p in self.__paths: - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - # didn't find it; try existing path - for p in os.environ['Path'].split(';'): - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - - return exe diff --git a/spaces/tommy24/image/app.py b/spaces/tommy24/image/app.py deleted file mode 100644 index be2b881a456fd5de35bf04f0e2243e84c10e6bcc..0000000000000000000000000000000000000000 --- a/spaces/tommy24/image/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import gradio as gr -import requests -import base64 -import json -import os - -print("this works") - -def function(paramater): - test = os.environ.get("test") - url = f"https://graph.facebook.com/v15.0/{paramater}" - payload = { - 'Authorization': 'Bearer ' + test - } - response = requests.get(url, headers=payload) - response = json.loads(response.text) - image = requests.get(response["url"], headers=payload) - image = image.content - # image = base64.b64encode(image).decode('utf-8') - - # response = requests.post("https://tommy24-this-is-indeed-cool.hf.space/run/predict", json={ - # "data": [ - # f"data:image/png;base64,{image}==", - # ]}).json() - - # data = response["data"][0] - return image -iface = gr.Interface(fn=function, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/evaluation/totaltext/e2e/rrc_evaluation_funcs.py b/spaces/tomofi/MaskTextSpotterV3-OCR/evaluation/totaltext/e2e/rrc_evaluation_funcs.py deleted file mode 100644 index 069bce697a59503f78752b4b3963be970c8b813b..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MaskTextSpotterV3-OCR/evaluation/totaltext/e2e/rrc_evaluation_funcs.py +++ /dev/null @@ -1,369 +0,0 @@ -#!/usr/bin/env python2 -#encoding: UTF-8 -import json -import sys;sys.path.append('./') -import zipfile -import re -import sys -import os -import codecs -import 
importlib -try: - from StringIO import StringIO -except ImportError: - from io import StringIO - -def print_help(): - sys.stdout.write('Usage: python %s.py -g= -s= [-o= -p=]' %sys.argv[0]) - sys.exit(2) - - -def load_zip_file_keys(file,fileNameRegExp=''): - """ - Returns an array with the entries of the ZIP file that match with the regular expression. - The key's are the names or the file or the capturing group definied in the fileNameRegExp - """ - try: - archive=zipfile.ZipFile(file, mode='r', allowZip64=True) - except : - raise Exception('Error loading the ZIP archive.') - - pairs = [] - - for name in archive.namelist(): - addFile = True - keyName = name - if fileNameRegExp!="": - m = re.match(fileNameRegExp,name) - if m == None: - addFile = False - else: - if len(m.groups())>0: - keyName = m.group(1) - - if addFile: - pairs.append( keyName ) - - return pairs - - -def load_zip_file(file,fileNameRegExp='',allEntries=False): - """ - Returns an array with the contents (filtered by fileNameRegExp) of a ZIP file. - The key's are the names or the file or the capturing group definied in the fileNameRegExp - allEntries validates that all entries in the ZIP file pass the fileNameRegExp - """ - try: - archive=zipfile.ZipFile(file, mode='r', allowZip64=True) - except : - raise Exception('Error loading the ZIP archive') - - pairs = [] - for name in archive.namelist(): - addFile = True - keyName = name - if fileNameRegExp!="": - m = re.match(fileNameRegExp,name) - if m == None: - addFile = False - else: - if len(m.groups())>0: - keyName = m.group(1) - - if addFile: - pairs.append( [ keyName , archive.read(name)] ) - else: - if allEntries: - raise Exception('ZIP entry not valid: %s' %name) - - return dict(pairs) - -def decode_utf8(raw): - """ - Returns a Unicode object on success, or None on failure - """ - try: - raw = codecs.decode(raw,'utf-8', 'replace') - #extracts BOM if exists - raw = raw.encode('utf8') - if raw.startswith(codecs.BOM_UTF8): - raw = raw.replace(codecs.BOM_UTF8, '', 1) - return raw.decode('utf-8') - except: - return None - -def validate_lines_in_file(fileName,file_contents,CRLF=True,LTRB=True,withTranscription=False,withConfidence=False,imWidth=0,imHeight=0): - """ - This function validates that all lines of the file calling the Line validation function for each line - """ - utf8File = decode_utf8(file_contents) - if (utf8File is None) : - raise Exception("The file %s is not UTF-8" %fileName) - - lines = utf8File.split( "\r\n" if CRLF else "\n" ) - for line in lines: - line = line.replace("\r","").replace("\n","") - if(line != ""): - try: - validate_tl_line(line,LTRB,withTranscription,withConfidence,imWidth,imHeight) - except Exception as e: - raise Exception(("Line in sample not valid. Sample: %s Line: %s Error: %s" %(fileName,line,str(e))).encode('utf-8', 'replace')) - - - -def validate_tl_line(line,LTRB=True,withTranscription=True,withConfidence=True,imWidth=0,imHeight=0): - """ - Validate the format of the line. If the line is not valid an exception will be raised. - If maxWidth and maxHeight are specified, all points must be inside the imgage bounds. - Posible values are: - LTRB=True: xmin,ymin,xmax,ymax[,confidence][,transcription] - LTRB=False: x1,y1,x2,y2,x3,y3,x4,y4[,confidence][,transcription] - """ - get_tl_line_values(line,LTRB,withTranscription,withConfidence,imWidth,imHeight) - - -def get_tl_line_values(line,LTRB=True,withTranscription=False,withConfidence=False,imWidth=0,imHeight=0): - """ - Validate the format of the line. 
If the line is not valid an exception will be raised. - If maxWidth and maxHeight are specified, all points must be inside the imgage bounds. - Posible values are: - LTRB=True: xmin,ymin,xmax,ymax[,confidence][,transcription] - LTRB=False: x1,y1,x2,y2,x3,y3,x4,y4[,confidence][,transcription] - Returns values from a textline. Points , [Confidences], [Transcriptions] - """ - confidence = 0.0 - transcription = ""; - points = [] - - numPoints = 4; - - if LTRB: - - numPoints = 4; - - if withTranscription and withConfidence: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-1].?[0-9]*)\s*,(.*)$',line) - if m == None : - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-1].?[0-9]*)\s*,(.*)$',line) - raise Exception("Format incorrect. Should be: xmin,ymin,xmax,ymax,confidence,transcription") - elif withConfidence: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-1].?[0-9]*)\s*$',line) - if m == None : - raise Exception("Format incorrect. Should be: xmin,ymin,xmax,ymax,confidence") - elif withTranscription: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*,(.*)$',line) - if m == None : - raise Exception("Format incorrect. Should be: xmin,ymin,xmax,ymax,transcription") - else: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-9]+)\s*,\s*([0-9]+)\s*,?\s*$',line) - if m == None : - raise Exception("Format incorrect. Should be: xmin,ymin,xmax,ymax") - - xmin = int(m.group(1)) - ymin = int(m.group(2)) - xmax = int(m.group(3)) - ymax = int(m.group(4)) - if(xmax0 and imHeight>0): - validate_point_inside_bounds(xmin,ymin,imWidth,imHeight); - validate_point_inside_bounds(xmax,ymax,imWidth,imHeight); - - else: - - numPoints = 8; - - if withTranscription and withConfidence: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-1].?[0-9]*)\s*,(.*)$',line) - if m == None : - raise Exception("Format incorrect. Should be: x1,y1,x2,y2,x3,y3,x4,y4,confidence,transcription") - elif withConfidence: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*([0-1].?[0-9]*)\s*$',line) - if m == None : - raise Exception("Format incorrect. Should be: x1,y1,x2,y2,x3,y3,x4,y4,confidence") - elif withTranscription: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,(.*)$',line) - if m == None : - raise Exception("Format incorrect. Should be: x1,y1,x2,y2,x3,y3,x4,y4,transcription") - else: - m = re.match(r'^\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*,\s*(-?[0-9]+)\s*$',line) - if m == None : - raise Exception("Format incorrect. 
Should be: x1,y1,x2,y2,x3,y3,x4,y4") - - points = [ float(m.group(i)) for i in range(1, (numPoints+1) ) ] - - validate_clockwise_points(points) - - if (imWidth>0 and imHeight>0): - validate_point_inside_bounds(points[0],points[1],imWidth,imHeight); - validate_point_inside_bounds(points[2],points[3],imWidth,imHeight); - validate_point_inside_bounds(points[4],points[5],imWidth,imHeight); - validate_point_inside_bounds(points[6],points[7],imWidth,imHeight); - - - if withConfidence: - try: - confidence = float(m.group(numPoints+1)) - except ValueError: - raise Exception("Confidence value must be a float") - - if withTranscription: - posTranscription = numPoints + (2 if withConfidence else 1) - transcription = m.group(posTranscription) - m2 = re.match(r'^\s*\"(.*)\"\s*$',transcription) - if m2 != None : #Transcription with double quotes, we extract the value and replace escaped characters - transcription = m2.group(1).replace("\\\\", "\\").replace("\\\"", "\"") - - return points,confidence,transcription - - -def validate_point_inside_bounds(x,y,imWidth,imHeight): - if(x<0 or x>imWidth): - raise Exception("X value (%s) not valid. Image dimensions: (%s,%s)" %(xmin,imWidth,imHeight)) - if(y<0 or y>imHeight): - raise Exception("Y value (%s) not valid. Image dimensions: (%s,%s) Sample: %s Line:%s" %(ymin,imWidth,imHeight)) - -def validate_clockwise_points(points): - """ - Validates that the points that the 4 points that dlimite a polygon are in clockwise order. - """ - - if len(points) != 8: - raise Exception("Points list not valid." + str(len(points))) - - point = [ - [int(points[0]) , int(points[1])], - [int(points[2]) , int(points[3])], - [int(points[4]) , int(points[5])], - [int(points[6]) , int(points[7])] - ] - edge = [ - ( point[1][0] - point[0][0])*( point[1][1] + point[0][1]), - ( point[2][0] - point[1][0])*( point[2][1] + point[1][1]), - ( point[3][0] - point[2][0])*( point[3][1] + point[2][1]), - ( point[0][0] - point[3][0])*( point[0][1] + point[3][1]) - ] - - summatory = edge[0] + edge[1] + edge[2] + edge[3]; - if summatory>0: - raise Exception("Points are not clockwise. The coordinates of bounding quadrilaterals have to be given in clockwise order. Regarding the correct interpretation of 'clockwise' remember that the image coordinate system used is the standard one, with the image origin at the upper left, the X axis extending to the right and Y axis extending downwards.") - -def get_tl_line_values_from_file_contents(content,CRLF=True,LTRB=True,withTranscription=False,withConfidence=False,imWidth=0,imHeight=0,sort_by_confidences=True): - """ - Returns all points, confindences and transcriptions of a file in lists. 
Valid line formats: - xmin,ymin,xmax,ymax,[confidence],[transcription] - x1,y1,x2,y2,x3,y3,x4,y4,[confidence],[transcription] - """ - pointsList = [] - transcriptionsList = [] - confidencesList = [] - - lines = content.split( "\r\n" if CRLF else "\n" ) - for line in lines: - line = line.replace("\r","").replace("\n","") - if(line != "") : - points, confidence, transcription = get_tl_line_values(line,LTRB,withTranscription,withConfidence,imWidth,imHeight); - pointsList.append(points) - transcriptionsList.append(transcription) - confidencesList.append(confidence) - - if withConfidence and len(confidencesList)>0 and sort_by_confidences: - import numpy as np - sorted_ind = np.argsort(-np.array(confidencesList)) - confidencesList = [confidencesList[i] for i in sorted_ind] - pointsList = [pointsList[i] for i in sorted_ind] - transcriptionsList = [transcriptionsList[i] for i in sorted_ind] - - return pointsList,confidencesList,transcriptionsList - -def main_evaluation(p,default_evaluation_params_fn,validate_data_fn,evaluate_method_fn,show_result=True,per_sample=True): - """ - This process validates a method, evaluates it and if it succed generates a ZIP file with a JSON entry for each sample. - Params: - p: Dictionary of parmeters with the GT/submission locations. If None is passed, the parameters send by the system are used. - default_evaluation_params_fn: points to a function that returns a dictionary with the default parameters used for the evaluation - validate_data_fn: points to a method that validates the corrct format of the submission - evaluate_method_fn: points to a function that evaluated the submission and return a Dictionary with the results - """ - - if (p == None): - p = dict([s[1:].split('=') for s in sys.argv[1:]]) - if(len(sys.argv)<3): - print_help() - - evalParams = default_evaluation_params_fn() - if 'p' in p.keys(): - evalParams.update( p['p'] if isinstance(p['p'], dict) else json.loads(p['p'][1:-1]) ) - - resDict={'calculated':True,'Message':'','method':'{}','per_sample':'{}'} - try: - validate_data_fn(p['g'], p['s'], evalParams) - evalData = evaluate_method_fn(p['g'], p['s'], evalParams) - resDict.update(evalData) - - except Exception as e: - resDict['Message']= str(e) - resDict['calculated']=False - - if 'o' in p: - if not os.path.exists(p['o']): - os.makedirs(p['o']) - - resultsOutputname = p['o'] + '/results.zip' - outZip = zipfile.ZipFile(resultsOutputname, mode='w', allowZip64=True) - - del resDict['per_sample'] - if 'output_items' in resDict.keys(): - del resDict['output_items'] - - outZip.writestr('method.json',json.dumps(resDict)) - - if not resDict['calculated']: - if show_result: - sys.stderr.write('Error!\n'+ resDict['Message']+'\n\n') - if 'o' in p: - outZip.close() - return resDict - - if 'o' in p: - if per_sample == True: - for k,v in evalData['per_sample'].items(): - outZip.writestr( k + '.json',json.dumps(v)) - - if 'output_items' in evalData.keys(): - for k, v in evalData['output_items'].items(): - outZip.writestr( k,v) - - outZip.close() - - if show_result: - sys.stdout.write("Calculated!") - sys.stdout.write(json.dumps(resDict['method'])) - - return resDict - - -def main_validation(default_evaluation_params_fn,validate_data_fn): - """ - This process validates a method - Params: - default_evaluation_params_fn: points to a function that returns a dictionary with the default parameters used for the evaluation - validate_data_fn: points to a method that validates the corrct format of the submission - """ - try: - p = dict([s[1:].split('=') for s in 
sys.argv[1:]]) - evalParams = default_evaluation_params_fn() - if 'p' in p.keys(): - evalParams.update( p['p'] if isinstance(p['p'], dict) else json.loads(p['p'][1:-1]) ) - - validate_data_fn(p['g'], p['s'], evalParams) - print('SUCCESS') - sys.exit(0) - except Exception as e: - print(str(e)) - sys.exit(101) \ No newline at end of file diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index d0016d1f1df4534ae27de95c4f7ec9976b3ab6d0..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/utils/res_layer.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/utils/res_layer.py deleted file mode 100644 index 825880d74c4720fcc77fcbf723259c5f86e119fa..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,189 +0,0 @@ -from mmcv.cnn import build_conv_layer, build_norm_layer -from mmcv.runner import BaseModule, Sequential -from torch import nn as nn - - -class ResLayer(Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. 
Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(BaseModule): - """Simplified version of original basic residual block. This is used in - `SCNet `_. - - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None, - init_fg=None): - super(SimplifiedBasicBlock, self).__init__(init_fg) - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' 
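        # Two 3x3 convolutions follow; norm layers are attached only when a
        # norm_cfg is supplied, otherwise the convolutions carry a bias instead.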
- self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/spaces/tonyassi/image-segmentation/README.md b/spaces/tonyassi/image-segmentation/README.md deleted file mode 100644 index d0f7798553fb41e294fdeba7d5685fb07b769ce4..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/image-segmentation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image Segmentation -emoji: 👚 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.48.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/triple-t/ttt-space/static/_app/immutable/components/pages/_page.svelte-033df9bc.js b/spaces/triple-t/ttt-space/static/_app/immutable/components/pages/_page.svelte-033df9bc.js deleted file mode 100644 index a1d1cfc59e07e5c96605765cc9dbe28d9d8c597e..0000000000000000000000000000000000000000 --- a/spaces/triple-t/ttt-space/static/_app/immutable/components/pages/_page.svelte-033df9bc.js +++ /dev/null @@ -1 +0,0 @@ -import{S as V,i as j,s as G,k as m,q as $,a as k,l as p,m as g,r as q,h,c as x,n as u,b as v,G as _,B as w,I as H,o as M,e as I,J as y,u as z}from"../../chunks/index-b346583a.js";function E(f,a,e){const s=f.slice();return s[2]=a[e],s}function D(f){let a,e,s=f[2].data.prompt+"",c,n,o,i,l;return{c(){a=m("div"),e=m("h1"),c=$(s),n=k(),o=m("img"),l=k(),this.h()},l(r){a=p(r,"DIV",{});var t=g(a);e=p(t,"H1",{class:!0});var d=g(e);c=q(d,s),d.forEach(h),n=x(t),o=p(t,"IMG",{loading:!0,src:!0,class:!0}),l=x(t),t.forEach(h),this.h()},h(){u(e,"class","text-black dark:text-white font-semibold p-2 min-h-[8ch] text-center"),u(o,"loading","lazy"),y(o.src,i=f[2].data.images[0])||u(o,"src",i),u(o,"class","rounded-3xl")},m(r,t){v(r,a,t),_(a,e),_(e,c),_(a,n),_(a,o),_(a,l)},p(r,t){t&1&&s!==(s=r[2].data.prompt+"")&&z(c,s),t&1&&!y(o.src,i=r[2].data.images[0])&&u(o,"src",i)},d(r){r&&h(a)}}}function S(f){let 
a,e=f[2].data.images.length>0&&D(f);return{c(){e&&e.c(),a=I()},l(s){e&&e.l(s),a=I()},m(s,c){e&&e.m(s,c),v(s,a,c)},p(s,c){s[2].data.images.length>0?e?e.p(s,c):(e=D(s),e.c(),e.m(a.parentNode,a)):e&&(e.d(1),e=null)},d(s){e&&e.d(s),s&&h(a)}}}function B(f){let a,e,s,c,n,o=f[0],i=[];for(let l=0;ln.json()).then(n=>{e(0,s=n)})}return M(()=>{c();const n=window.setInterval(c,2e3);return()=>{clearInterval(n)}}),[s]}class P extends V{constructor(a){super(),j(this,a,J,B,G,{})}}export{P as default}; diff --git a/spaces/trttung1610/musicgen/tests/modules/test_lstm.py b/spaces/trttung1610/musicgen/tests/modules/test_lstm.py deleted file mode 100644 index 1248964c8191e19f27661f0974bef9cc967eb015..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/tests/modules/test_lstm.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random -import torch - -from audiocraft.modules.lstm import StreamableLSTM - - -class TestStreamableLSTM: - - def test_lstm(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=False) - x = torch.randn(B, C, T) - y = lstm(x) - - print(y.shape) - assert y.shape == torch.Size([B, C, T]) - - def test_lstm_skip(self): - B, C, T = 4, 2, random.randint(1, 100) - - lstm = StreamableLSTM(C, 3, skip=True) - x = torch.randn(B, C, T) - y = lstm(x) - - assert y.shape == torch.Size([B, C, T]) diff --git a/spaces/ulysses115/Nogizaka46-so/models.py b/spaces/ulysses115/Nogizaka46-so/models.py deleted file mode 100644 index 13278d680493970f5a670cf3fc955a6e9b7ab1d5..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/Nogizaka46-so/models.py +++ /dev/null @@ -1,420 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = 
hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, noice_scale=1): - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs) * noice_scale) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 
41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels , 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if (spk_emb is not None): - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = 
self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - - def forward(self, c, f0, uv, spec, g=None, c_lengths=None, spec_lengths=None): - g = self.emb_g(g).transpose(1,2) - # ssl prenet - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - # f0 predict - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) 
/ 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - - # encoder - z_ptemp, m_p, logs_p, _ = self.enc_p(x, x_mask, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - # flow - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # nsf decoder - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q), pred_lf0, norm_lf0, lf0 - - def infer(self, c, f0, uv, g=None, noice_scale=0.35, predict_f0=False): - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2) - - if predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) / 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), noice_scale=noice_scale) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/models/clipseg.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/models/clipseg.py deleted file mode 100644 index a4640b34bbd1ca68a32114471d5585734c4af2fc..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/models/clipseg.py +++ /dev/null @@ -1,552 +0,0 @@ -import math -from os.path import basename, dirname, join, isfile -import torch -from torch import nn -from torch.nn import functional as nnf -from torch.nn.modules.activation import ReLU - - -def precompute_clip_vectors(): - - from trails.initialization import init_dataset - lvis = init_dataset('LVIS_OneShot3', split='train', mask='text_label', image_size=224, aug=1, normalize=True, - reduce_factor=None, add_bar=False, negative_prob=0.5) - - all_names = list(lvis.category_names.values()) - - import clip - from models.clip_prompts import imagenet_templates - clip_model = clip.load("ViT-B/32", device='cuda', jit=False)[0] - prompt_vectors = {} - for name in all_names[:100]: - with torch.no_grad(): - conditionals = [t.format(name).replace('_', ' ') for t in imagenet_templates] - text_tokens = clip.tokenize(conditionals).cuda() - cond = clip_model.encode_text(text_tokens).cpu() - - for cond, vec in zip(conditionals, cond): - prompt_vectors[cond] = vec.cpu() - - import pickle - - pickle.dump(prompt_vectors, open('precomputed_prompt_vectors.pickle', 'wb')) - - -def get_prompt_list(prompt): - if prompt == 'plain': - return ['{}'] - elif prompt == 'fixed': - return ['a photo of a {}.'] - elif prompt == 'shuffle': - return ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.'] - elif prompt == 'shuffle+': - return ['a photo of a {}.', 'a photograph of a {}.', 'an image of a {}.', '{}.', - 'a cropped photo of a {}.', 'a good photo of a {}.', 'a photo of one {}.', - 'a bad photo of a {}.', 'a photo of the {}.'] - elif prompt == 'shuffle_clip': - from models.clip_prompts import imagenet_templates - return 
imagenet_templates - else: - raise ValueError('Invalid value for prompt') - - -def forward_multihead_attention(x, b, with_aff=False, attn_mask=None): - """ - Simplified version of multihead attention (taken from torch source code but without tons of if clauses). - The mlp and layer norm come from CLIP. - x: input. - b: multihead attention module. - """ - - x_ = b.ln_1(x) - q, k, v = nnf.linear(x_, b.attn.in_proj_weight, b.attn.in_proj_bias).chunk(3, dim=-1) - tgt_len, bsz, embed_dim = q.size() - - head_dim = embed_dim // b.attn.num_heads - scaling = float(head_dim) ** -0.5 - - q = q.contiguous().view(tgt_len, bsz * b.attn.num_heads, b.attn.head_dim).transpose(0, 1) - k = k.contiguous().view(-1, bsz * b.attn.num_heads, b.attn.head_dim).transpose(0, 1) - v = v.contiguous().view(-1, bsz * b.attn.num_heads, b.attn.head_dim).transpose(0, 1) - - q = q * scaling - - attn_output_weights = torch.bmm(q, k.transpose(1, 2)) # n_heads * batch_size, tokens^2, tokens^2 - if attn_mask is not None: - - - attn_mask_type, attn_mask = attn_mask - n_heads = attn_output_weights.size(0) // attn_mask.size(0) - attn_mask = attn_mask.repeat(n_heads, 1) - - if attn_mask_type == 'cls_token': - # the mask only affects similarities compared to the readout-token. - attn_output_weights[:, 0, 1:] = attn_output_weights[:, 0, 1:] * attn_mask[None,...] - # attn_output_weights[:, 0, 0] = 0*attn_output_weights[:, 0, 0] - - if attn_mask_type == 'all': - # print(attn_output_weights.shape, attn_mask[:, None].shape) - attn_output_weights[:, 1:, 1:] = attn_output_weights[:, 1:, 1:] * attn_mask[:, None] - - - attn_output_weights = torch.softmax(attn_output_weights, dim=-1) - - attn_output = torch.bmm(attn_output_weights, v) - attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn_output = b.attn.out_proj(attn_output) - - x = x + attn_output - x = x + b.mlp(b.ln_2(x)) - - if with_aff: - return x, attn_output_weights - else: - return x - - -class CLIPDenseBase(nn.Module): - - def __init__(self, version, reduce_cond, reduce_dim, prompt, n_tokens): - super().__init__() - - import clip - - # prec = torch.FloatTensor - self.clip_model, _ = clip.load(version, device='cpu', jit=False) - self.model = self.clip_model.visual - - # if not None, scale conv weights such that we obtain n_tokens. 
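        # visual_forward later interpolates conv1's weights to a matching stride,
        # so the resulting patch grid has roughly n_tokens tokens per side.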
- self.n_tokens = n_tokens - - for p in self.clip_model.parameters(): - p.requires_grad_(False) - - # conditional - if reduce_cond is not None: - self.reduce_cond = nn.Linear(512, reduce_cond) - for p in self.reduce_cond.parameters(): - p.requires_grad_(False) - else: - self.reduce_cond = None - - self.film_mul = nn.Linear(512 if reduce_cond is None else reduce_cond, reduce_dim) - self.film_add = nn.Linear(512 if reduce_cond is None else reduce_cond, reduce_dim) - - self.reduce = nn.Linear(768, reduce_dim) - - self.prompt_list = get_prompt_list(prompt) - - # precomputed prompts - import pickle - if isfile('precomputed_prompt_vectors.pickle'): - precomp = pickle.load(open('precomputed_prompt_vectors.pickle', 'rb')) - self.precomputed_prompts = {k: torch.from_numpy(v) for k, v in precomp.items()} - else: - self.precomputed_prompts = dict() - - def rescaled_pos_emb(self, new_size): - assert len(new_size) == 2 - - a = self.model.positional_embedding[1:].T.view(1, 768, *self.token_shape) - b = nnf.interpolate(a, new_size, mode='bicubic', align_corners=False).squeeze(0).view(768, new_size[0]*new_size[1]).T - return torch.cat([self.model.positional_embedding[:1], b]) - - def visual_forward(self, x_inp, extract_layers=(), skip=False, mask=None): - - - with torch.no_grad(): - - inp_size = x_inp.shape[2:] - - if self.n_tokens is not None: - stride2 = x_inp.shape[2] // self.n_tokens - conv_weight2 = nnf.interpolate(self.model.conv1.weight, (stride2, stride2), mode='bilinear', align_corners=True) - x = nnf.conv2d(x_inp, conv_weight2, bias=self.model.conv1.bias, stride=stride2, dilation=self.model.conv1.dilation) - else: - x = self.model.conv1(x_inp) # shape = [*, width, grid, grid] - - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - - x = torch.cat([self.model.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - - standard_n_tokens = 50 if self.model.conv1.kernel_size[0] == 32 else 197 - - if x.shape[1] != standard_n_tokens: - new_shape = int(math.sqrt(x.shape[1]-1)) - x = x + self.rescaled_pos_emb((new_shape, new_shape)).to(x.dtype)[None,:,:] - else: - x = x + self.model.positional_embedding.to(x.dtype) - - x = self.model.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - - activations, affinities = [], [] - for i, res_block in enumerate(self.model.transformer.resblocks): - - if mask is not None: - mask_layer, mask_type, mask_tensor = mask - if mask_layer == i or mask_layer == 'all': - # import ipdb; ipdb.set_trace() - size = int(math.sqrt(x.shape[0] - 1)) - - attn_mask = (mask_type, nnf.interpolate(mask_tensor.unsqueeze(1).float(), (size, size)).view(mask_tensor.shape[0], size * size)) - - else: - attn_mask = None - else: - attn_mask = None - - x, aff_per_head = forward_multihead_attention(x, res_block, with_aff=True, attn_mask=attn_mask) - - if i in extract_layers: - affinities += [aff_per_head] - - #if self.n_tokens is not None: - # activations += [nnf.interpolate(x, inp_size, mode='bilinear', align_corners=True)] - #else: - activations += [x] - - if len(extract_layers) > 0 and i == max(extract_layers) and skip: - print('early skip') - break - - x = x.permute(1, 0, 2) # LND -> NLD - x = self.model.ln_post(x[:, 0, :]) - - if self.model.proj is not None: - x = x @ self.model.proj - - return x, activations, affinities - - def sample_prompts(self, words, prompt_list=None): - - prompt_list = prompt_list if prompt_list is 
not None else self.prompt_list - - prompt_indices = torch.multinomial(torch.ones(len(prompt_list)), len(words), replacement=True) - prompts = [prompt_list[i] for i in prompt_indices] - return [promt.format(w) for promt, w in zip(prompts, words)] - - def get_cond_vec(self, conditional, batch_size): - # compute conditional from a single string - if conditional is not None and type(conditional) == str: - cond = self.compute_conditional(conditional) - cond = cond.repeat(batch_size, 1) - - # compute conditional from string list/tuple - elif conditional is not None and type(conditional) in {list, tuple} and type(conditional[0]) == str: - assert len(conditional) == batch_size - cond = self.compute_conditional(conditional) - - # use conditional directly - elif conditional is not None and type(conditional) == torch.Tensor and conditional.ndim == 2: - cond = conditional - - # compute conditional from image - elif conditional is not None and type(conditional) == torch.Tensor: - with torch.no_grad(): - cond, _, _ = self.visual_forward(conditional) - else: - raise ValueError('invalid conditional') - return cond - - def compute_conditional(self, conditional): - import clip - - dev = next(self.parameters()).device - - if type(conditional) in {list, tuple}: - text_tokens = clip.tokenize(conditional).to(dev) - cond = self.clip_model.encode_text(text_tokens) - else: - if conditional in self.precomputed_prompts: - cond = self.precomputed_prompts[conditional].float().to(dev) - else: - text_tokens = clip.tokenize([conditional]).to(dev) - cond = self.clip_model.encode_text(text_tokens)[0] - - if self.shift_vector is not None: - return cond + self.shift_vector - else: - return cond - - -def clip_load_untrained(version): - assert version == 'ViT-B/16' - from clip.model import CLIP - from clip.clip import _MODELS, _download - model = torch.jit.load(_download(_MODELS['ViT-B/16'])).eval() - state_dict = model.state_dict() - - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_resolution = vision_patch_size * grid_size - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - return CLIP(embed_dim, image_resolution, vision_layers, vision_width, vision_patch_size, - context_length, vocab_size, transformer_width, transformer_heads, transformer_layers) - - -class CLIPDensePredT(CLIPDenseBase): - - def __init__(self, version='ViT-B/32', extract_layers=(3, 6, 9), cond_layer=0, reduce_dim=128, n_heads=4, prompt='fixed', - extra_blocks=0, reduce_cond=None, fix_shift=False, - learn_trans_conv_only=False, limit_to_clip_only=False, upsample=False, - add_calibration=False, rev_activations=False, trans_conv=None, n_tokens=None): - - super().__init__(version, reduce_cond, reduce_dim, prompt, n_tokens) - # device = 'cpu' - - self.extract_layers = extract_layers - self.cond_layer = cond_layer - self.limit_to_clip_only = limit_to_clip_only - self.process_cond = None - self.rev_activations = rev_activations - - depth = 
len(extract_layers) - - if add_calibration: - self.calibration_conds = 1 - - self.upsample_proj = nn.Conv2d(reduce_dim, 1, kernel_size=1) if upsample else None - - self.add_activation1 = True - - self.version = version - - self.token_shape = {'ViT-B/32': (7, 7), 'ViT-B/16': (14, 14)}[version] - - if fix_shift: - # self.shift_vector = nn.Parameter(torch.load(join(dirname(basename(__file__)), 'clip_text_shift_vector.pth')), requires_grad=False) - self.shift_vector = nn.Parameter(torch.load(join(dirname(basename(__file__)), 'shift_text_to_vis.pth')), requires_grad=False) - # self.shift_vector = nn.Parameter(-1*torch.load(join(dirname(basename(__file__)), 'shift2.pth')), requires_grad=False) - else: - self.shift_vector = None - - if trans_conv is None: - trans_conv_ks = {'ViT-B/32': (32, 32), 'ViT-B/16': (16, 16)}[version] - else: - # explicitly define transposed conv kernel size - trans_conv_ks = (trans_conv, trans_conv) - - self.trans_conv = nn.ConvTranspose2d(reduce_dim, 1, trans_conv_ks, stride=trans_conv_ks) - - assert len(self.extract_layers) == depth - - self.reduces = nn.ModuleList([nn.Linear(768, reduce_dim) for _ in range(depth)]) - self.blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=reduce_dim, nhead=n_heads) for _ in range(len(self.extract_layers))]) - self.extra_blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=reduce_dim, nhead=n_heads) for _ in range(extra_blocks)]) - - # refinement and trans conv - - if learn_trans_conv_only: - for p in self.parameters(): - p.requires_grad_(False) - - for p in self.trans_conv.parameters(): - p.requires_grad_(True) - - self.prompt_list = get_prompt_list(prompt) - - - def forward(self, inp_image, conditional=None, return_features=False, mask=None): - - assert type(return_features) == bool - - inp_image = inp_image.to(self.model.positional_embedding.device) - - if mask is not None: - raise ValueError('mask not supported') - - # x_inp = normalize(inp_image) - x_inp = inp_image - - bs, dev = inp_image.shape[0], x_inp.device - - cond = self.get_cond_vec(conditional, bs) - - visual_q, activations, _ = self.visual_forward(x_inp, extract_layers=[0] + list(self.extract_layers)) - - activation1 = activations[0] - activations = activations[1:] - - _activations = activations[::-1] if not self.rev_activations else activations - - a = None - for i, (activation, block, reduce) in enumerate(zip(_activations, self.blocks, self.reduces)): - - if a is not None: - a = reduce(activation) + a - else: - a = reduce(activation) - - if i == self.cond_layer: - if self.reduce_cond is not None: - cond = self.reduce_cond(cond) - - a = self.film_mul(cond) * a + self.film_add(cond) - - a = block(a) - - for block in self.extra_blocks: - a = a + block(a) - - a = a[1:].permute(1, 2, 0) # rm cls token and -> BS, Feats, Tokens - - size = int(math.sqrt(a.shape[2])) - - a = a.view(bs, a.shape[1], size, size) - - a = self.trans_conv(a) - - if self.n_tokens is not None: - a = nnf.interpolate(a, x_inp.shape[2:], mode='bilinear', align_corners=True) - - if self.upsample_proj is not None: - a = self.upsample_proj(a) - a = nnf.interpolate(a, x_inp.shape[2:], mode='bilinear') - - if return_features: - return a, visual_q, cond, [activation1] + activations - else: - return a, - - - -class CLIPDensePredTMasked(CLIPDensePredT): - - def __init__(self, version='ViT-B/32', extract_layers=(3, 6, 9), cond_layer=0, reduce_dim=128, n_heads=4, - prompt='fixed', extra_blocks=0, reduce_cond=None, fix_shift=False, learn_trans_conv_only=False, - refine=None, limit_to_clip_only=False, 
upsample=False, add_calibration=False, n_tokens=None): - - super().__init__(version=version, extract_layers=extract_layers, cond_layer=cond_layer, reduce_dim=reduce_dim, - n_heads=n_heads, prompt=prompt, extra_blocks=extra_blocks, reduce_cond=reduce_cond, - fix_shift=fix_shift, learn_trans_conv_only=learn_trans_conv_only, - limit_to_clip_only=limit_to_clip_only, upsample=upsample, add_calibration=add_calibration, - n_tokens=n_tokens) - - def visual_forward_masked(self, img_s, seg_s): - return super().visual_forward(img_s, mask=('all', 'cls_token', seg_s)) - - def forward(self, img_q, cond_or_img_s, seg_s=None, return_features=False): - - if seg_s is None: - cond = cond_or_img_s - else: - img_s = cond_or_img_s - - with torch.no_grad(): - cond, _, _ = self.visual_forward_masked(img_s, seg_s) - - return super().forward(img_q, cond, return_features=return_features) - - - -class CLIPDenseBaseline(CLIPDenseBase): - - def __init__(self, version='ViT-B/32', cond_layer=0, - extract_layer=9, reduce_dim=128, reduce2_dim=None, prompt='fixed', - reduce_cond=None, limit_to_clip_only=False, n_tokens=None): - - super().__init__(version, reduce_cond, reduce_dim, prompt, n_tokens) - device = 'cpu' - - # self.cond_layer = cond_layer - self.extract_layer = extract_layer - self.limit_to_clip_only = limit_to_clip_only - self.shift_vector = None - - self.token_shape = {'ViT-B/32': (7, 7), 'ViT-B/16': (14, 14)}[version] - - assert reduce2_dim is not None - - self.reduce2 = nn.Sequential( - nn.Linear(reduce_dim, reduce2_dim), - nn.ReLU(), - nn.Linear(reduce2_dim, reduce_dim) - ) - - trans_conv_ks = {'ViT-B/32': (32, 32), 'ViT-B/16': (16, 16)}[version] - self.trans_conv = nn.ConvTranspose2d(reduce_dim, 1, trans_conv_ks, stride=trans_conv_ks) - - - def forward(self, inp_image, conditional=None, return_features=False): - - inp_image = inp_image.to(self.model.positional_embedding.device) - - # x_inp = normalize(inp_image) - x_inp = inp_image - - bs, dev = inp_image.shape[0], x_inp.device - - cond = self.get_cond_vec(conditional, bs) - - visual_q, activations, affinities = self.visual_forward(x_inp, extract_layers=[self.extract_layer]) - - a = activations[0] - a = self.reduce(a) - a = self.film_mul(cond) * a + self.film_add(cond) - - if self.reduce2 is not None: - a = self.reduce2(a) - - # the original model would execute a transformer block here - - a = a[1:].permute(1, 2, 0) # rm cls token and -> BS, Feats, Tokens - - size = int(math.sqrt(a.shape[2])) - - a = a.view(bs, a.shape[1], size, size) - a = self.trans_conv(a) - - if return_features: - return a, visual_q, cond, activations - else: - return a, - - -class CLIPSegMultiLabel(nn.Module): - - def __init__(self, model) -> None: - super().__init__() - - from third_party.JoEm.data_loader import get_seen_idx, get_unseen_idx, VOC - - self.pascal_classes = VOC - - from models.clipseg import CLIPDensePredT - from general_utils import load_model - # self.clipseg = load_model('rd64-vit16-neg0.2-phrasecut', strict=False) - self.clipseg = load_model(model, strict=False) - - self.clipseg.eval() - - def forward(self, x): - - bs = x.shape[0] - out = torch.ones(21, bs, 352, 352).to(x.device) * -10 - - for class_id, class_name in enumerate(self.pascal_classes): - - fac = 3 if class_name == 'background' else 1 - - with torch.no_grad(): - pred = torch.sigmoid(self.clipseg(x, class_name)[0][:,0]) * fac - - out[class_id] += pred - - - out = out.permute(1, 0, 2, 3) - - return out - - # construct output tensor - \ No newline at end of file diff --git 
a/spaces/vg055/demo_analisis_de_sentimientos_textos_turisticos_mx_polarity/app.py b/spaces/vg055/demo_analisis_de_sentimientos_textos_turisticos_mx_polarity/app.py deleted file mode 100644 index 282989fc8f769b4c3fc5a32929a6a6e0dcc6d4c6..0000000000000000000000000000000000000000 --- a/spaces/vg055/demo_analisis_de_sentimientos_textos_turisticos_mx_polarity/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr - -examples = [["Pagamos un precio completo para una visita mínima.Hay un recorrido muy pequeño: no es possible salir del recorrido y ir alrededor de los monumentos como se puede hacer a Palenque o Teatihuacan o muchos otros sitios pero el peor es que no se puede ver el Tajin chico ni tampoco ĺa Gran Greca sin hablar del museo... solo se puede ver el Tajin viejo y malo no se justifica eso es un puro robo y un falta de respecto del visitante y lo repito : pagamos el precio completo.!!!"], - ["Se reservó la habitación “deluxe”, por una tarifa similar a un hotel de 5 estrellas, la habitación deja mucho que desear, huele a humedad, las ventanas no estaban limpias, estaban manchadas de grasa (posiblemente de huéspedes anteriores), la cabecera de la cama está caída, el baño se ve viejo, con sarro en sus llaves, incluso hay una dos goteras en el techo, se desconoce de donde proviene el agua, o si es agua residual, la cama es algo dura, lo unico bueno es la proximidad al centro de Monterrey, si pueden evítenlo, por este precio encuentran mejores."], - ["El lugar tiene una vista nocturna muy linda, sin embargo creo que el precio no corresponde al servicio ofertado. Lineas de espera para subir y bajar a veces muy largas, más un recorrido muy corto..."], - ["La comida es muy buena, desde las entradas hasta los postres son una delicia, la atención de los meseros es muy buena y el sitio es muy cómodo; sin embargo los precios son excesivos para lo que ofrecen."], - ["Hermoso lugar para admirar las obras de Botero.Me encantó,porque el lugar es aseado,ordenado y cuenta con baño y tienda para el público que lo visita."] - ] - -gr.Interface.load("huggingface/vg055/roberta-base-bne-finetuned-analisis-sentimiento-textos-turisticos-mx-polaridad", examples=examples).launch(); \ No newline at end of file diff --git a/spaces/visakh7843/Sheet_Music_Generator/midiToabc.py b/spaces/visakh7843/Sheet_Music_Generator/midiToabc.py deleted file mode 100644 index 4a893c471f134364b623f5246d57b0fedd63b7d9..0000000000000000000000000000000000000000 --- a/spaces/visakh7843/Sheet_Music_Generator/midiToabc.py +++ /dev/null @@ -1,30 +0,0 @@ -#Conversion script for midi to abc files. 
-import os -import subprocess -def abs_paths(dir): - for dir_path,_,filenames in os.walk(dir): - for f in filenames: - yield os.path.abspath(os.path.join(dir_path, f)) - -for file in abs_paths("data"): - # print(file) - # os.makedirs("n-grams/data/converted/") - filename=file.split("/")[-1] - # print(filename) - newfile = filename.split(".")[0] - newfile = 'converted/'+newfile+'.abc' - # os.system('midi2abc -f filename') - print(file) - print(newfile) - f = open(newfile,'w',encoding="utf8") - temp = subprocess.Popen(['midi2abc', '-f', file,">>",newfile],stdout = f)#subprocess.PIPE) - # get the output as a stringi - output = str(temp.communicate()) - - - # f = open(newfile,"w",encoding="utf8") - # f.write(output) - - # store the output in the list - - \ No newline at end of file diff --git a/spaces/vishnun/SnapCode/README.md b/spaces/vishnun/SnapCode/README.md deleted file mode 100644 index 8d446e9209c71771fe410bc07c512f86e0101339..0000000000000000000000000000000000000000 --- a/spaces/vishnun/SnapCode/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SnapCode -emoji: 👁 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vyurchenko/l3m/app.py b/spaces/vyurchenko/l3m/app.py deleted file mode 100644 index c78a021dc5506c3d6f9a92874d941102da52ed99..0000000000000000000000000000000000000000 --- a/spaces/vyurchenko/l3m/app.py +++ /dev/null @@ -1,78 +0,0 @@ -# import gradio as gr - -# def greet(name): -# return "Hello " + name + "!!" - -# iface.launch() - - -import gradio as gr -import random -import time -import os -import openai -from datetime import datetime -import json - -openai.api_key = '' - -def send_openai_query(query): - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "user", "content": query} - ] , - # prompt=query, - temperature=0, - max_tokens=500, - top_p=1.0, - frequency_penalty=0.0, - presence_penalty=0.0, - # stop=["\n"] - ) - return response['choices'][0]['message']['content'] - - - -with gr.Blocks() as demo: - gptkey = gr.Textbox(placeholder='input your chatGPT key', show_label=False) - chatbot = gr.Chatbot(elem_id="chatbot", show_label=False).style(height=300) - msg = gr.Textbox(show_label=False, placeholder='Input your query to chatGPT') - clear = gr.Button("Clear") - - def user(user_message, gptkey, history): - if openai.api_key == '': - openai.api_key = gptkey.strip(' \r\n') - # print(f'FROM USER=<{openai.api_key}>') - return "", "Key accepted", history + [[user_message, '']] - - def bot(history): - # print("HIST=", history) - query = history[-1][0] - # print(f'QUERY=<{query}>') - try: - result = send_openai_query(query) - except Exception as e: - result = 'Что-то пошло не так на стороне ChatGPT. 
Попробуйте повторить запрос' - # print(f'RESULT=<{result}>') - now = datetime.now() - dt_string = now.strftime("%d/%m/%Y %H:%M:%S") - d = {'time': dt_string, - 'query': query, - 'result': result} - d_json = json.dumps(d, ensure_ascii=False) - with open('logs/results.ndjson', 'a') as f: - f.write(d_json + '\r\n') - history[-1][1] = '' - for character in result: - history[-1][1] += character - time.sleep(0.02) - yield history - - msg.submit(user, [msg, gptkey, chatbot], [msg, gptkey, chatbot], queue=False).then( - bot, chatbot, chatbot - ) - clear.click(lambda: None, None, chatbot, queue=False) - -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/wallezen/so-vits-svc/onnxexport/model_onnx.py b/spaces/wallezen/so-vits-svc/onnxexport/model_onnx.py deleted file mode 100644 index e28bae95ec1e53aa05d06fc784ff86d55f228d60..0000000000000000000000000000000000000000 --- a/spaces/wallezen/so-vits-svc/onnxexport/model_onnx.py +++ /dev/null @@ -1,335 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - 
n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, z=None): - x = x + self.f0_emb(f0).transpose(1, 2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + z * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder 
= attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if spk_emb is not None: - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - self.predict_f0 = False - - def forward(self, c, f0, mel2ph, uv, noise=None, g=None): - - decoder_inp = F.pad(c, [0, 0, 1, 0]) - mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]]) - c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2) # [B, T, H] - - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2) - - if self.predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) 
/ 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/wanglettes/zw_chatgpt_01/README.md b/spaces/wanglettes/zw_chatgpt_01/README.md deleted file mode 100644 index ee3e876edd937fce02cfee94c7d9dc213bfc3f4c..0000000000000000000000000000000000000000 --- a/spaces/wanglettes/zw_chatgpt_01/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Streaming Chat With Gpt-3.5-turbo Using Langchain Sorta -emoji: 📚 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: lukestanley/streaming_chat_with_gpt-3.5-turbo_using_langchain_sorta ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/weide/ChuanhuChatGPT2/modules/overwrites.py b/spaces/weide/ChuanhuChatGPT2/modules/overwrites.py deleted file mode 100644 index 
bfcd4d01b7d7bec1184a8d09113933bca860530b..0000000000000000000000000000000000000000 --- a/spaces/weide/ChuanhuChatGPT2/modules/overwrites.py +++ /dev/null @@ -1,56 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - user, bot = y[-1] - if not detect_converted_mark(user): - user = convert_asis(user) - if not detect_converted_mark(bot): - bot = convert_mdtext(bot) - y[-1] = (user, bot) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/search_engine_meilisearch.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/search_engine_meilisearch.py deleted file mode 100644 index 24f0fe08e77ab74607547deb5b84e85145059e30..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/search_engine_meilisearch.py +++ /dev/null @@ -1,44 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/22 21:33 -@Author : alexanderwu -@File : search_engine_meilisearch.py -""" - -from typing import List - -import meilisearch -from meilisearch.index import Index - - -class DataSource: - def __init__(self, name: str, url: str): - self.name = name - self.url = url - - -class MeilisearchEngine: - def __init__(self, url, token): - self.client = meilisearch.Client(url, token) - self._index: Index = None - - def set_index(self, index): - self._index = index - - def add_documents(self, data_source: DataSource, documents: List[dict]): - index_name = f"{data_source.name}_index" - if index_name not in self.client.get_indexes(): - self.client.create_index(uid=index_name, options={'primaryKey': 'id'}) - index = self.client.get_index(index_name) - index.add_documents(documents) - self.set_index(index) - - def search(self, query): - try: - search_results = self._index.search(query) - return search_results['hits'] - except Exception as e: - # 处理MeiliSearch API错误 - print(f"MeiliSearch API错误: {e}") - return [] diff --git 
a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/prepro/meanprocess.py b/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/prepro/meanprocess.py deleted file mode 100644 index ded0ca98470dd402f25f8b28de11689da42c348c..0000000000000000000000000000000000000000 --- a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/prepro/meanprocess.py +++ /dev/null @@ -1,18 +0,0 @@ -import cv2 -import numpy as np - -class MeanProcess: - def __init__(self, R_Mean, G_Mean, B_Mean): - self.R_Mean= R_Mean - self.G_Mean= G_Mean - self.B_Mean= B_Mean - - def preprocess(self, image): - image= np.float32(image) - B,G,R= cv2.split(image) - - B-= self.B_Mean - G-= self.G_Mean - R-= self.R_Mean - - return cv2.merge([B, G, R]) \ No newline at end of file diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/models/__init__.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/wwwwwwww2/bingo/src/components/chat-panel.tsx b/spaces/wwwwwwww2/bingo/src/components/chat-panel.tsx deleted file mode 100644 index 56b2112bd75ba08134383871177851fa2e3f43a4..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input, setFocus]) - - return ( -
              { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
              - chat -