diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Battlefield 3 Game Files Part35.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Battlefield 3 Game Files Part35.rar.md deleted file mode 100644 index b5e1ed98695fc7a33a3d64c38f2ac32e2fd1188d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Battlefield 3 Game Files Part35.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

battlefield 3 game files part35.rar


Download ———>>> https://imgfil.com/2uxYmY



-
-battlefield 3 gamefiles.part35.rar download battlefield 3 save game file download battlefield 3 save game files download pc battlefield 4 save ... 4d29de3e1b
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/Corra com carros e motos brasileiros em Estilo BR Download grtis do mod com dinheiro infinito e mediafre.md b/spaces/1phancelerku/anime-remove-background/Corra com carros e motos brasileiros em Estilo BR Download grtis do mod com dinheiro infinito e mediafre.md deleted file mode 100644 index cdf8c1eb16b11cb60d2e9016f02c86f38b53ce29..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Corra com carros e motos brasileiros em Estilo BR Download grtis do mod com dinheiro infinito e mediafre.md +++ /dev/null @@ -1,109 +0,0 @@ - -

Estilo BR: How to Download and Play the Ultimate Drag Racing Game in Brazil


If you are a racing enthusiast in Brazil, you have probably heard of Estilo BR, the definitive drag racing game for Android devices. With 43 different Brazilian vehicles to choose from, ranging from the most classic to the most modern and including motorcycles, trucks, and trailers, you can experience the thrill of high-speed racing against competitors from around the world.

-

estilo br dinheiro infinito download mediafıre


Download Zip »»» https://jinyurl.com/2uNPfK



-

In Estilo BR, you can participate in global multiplayer races with up to 500 players, both in an open world global room and in private rooms created to play with friends. Compete against drivers from different countries and show your skills on the track, enjoying the style and culture of street racing in Brazil.

-

But Estilo BR is not just about racing. You can also customize your vehicles with a wide variety of aesthetic and performance upgrades. From custom paint jobs to engine modifications, you have the freedom to make your vehicles truly unique.

-

Estilo BR is the best of its kind in Brazil, offering an unparalleled racing experience. Whether you are a seasoned veteran or a new player, Estilo BR has something for everyone. Download now and join the drag racing revolution in Brazil, listening to your favorite music while you play.

-

What is Estilo BR?

-

Estilo BR is a mobile game developed by RF Entertainment, a Brazilian indie studio that specializes in racing games. The game was released in 2019 and has since received several updates and improvements.

-

The game is inspired by the real-life street racing scene in Brazil, where drivers compete in illegal drag races with modified cars and bikes. The game features realistic physics and responsive controls, as well as stunning pixel art graphics that create a nostalgic atmosphere.

-

The game also allows you to play music from your own phone, giving you the possibility to listen to your favorite songs while playing. You can choose from different genres and playlists, or create your own custom mix.

-

estilo br apk mod dinheiro infinito
-estilo br hack diamantes infinitos 2021
-estilo br atualizado 2021 download mediafire
-estilo br com carros brasileiros e rachas
-estilo br grau de moto e corridas
-estilo br rio de janeiro e brasília
-estilo br multiplayer com outros players
-estilo br personalizar carro ou moto
-estilo br fusca opala golf uno
-estilo br apk obb dinheiro infinito
-estilo br mod menu diamantes infinitos
-estilo br versão mais recente download mediafire
-estilo br rachas de tunados brasil
-estilo br arrancadas e manobras
-estilo br 4 novos veículos e correções de bugs
-estilo br como instalar apk + obb
-estilo br youtube dinheiro infinito
-estilo br mediafire link direto sem anúncios
-estilo br dicas e truques para ganhar dinheiro
-estilo br gameplay e review 2021
-estilo br baixar grátis para android
-estilo br mod apk unlimited money and diamonds
-estilo br hack apk download mediafire 2021
-estilo br brazilian cars and races
-estilo br wheelie and drag racing
-estilo br rio de janeiro and brasilia maps
-estilo br multiplayer with other players online
-estilo br customize car or bike
-estilo br beetle opala golf uno cars
-estilo br apk obb unlimited money
-estilo br mod menu unlimited diamonds
-estilo br latest version download mediafire
-estilo br drag racing brazil game
-estilo br stunts and tricks
-estilo br 4 new vehicles and bug fixes
-estilo br how to install apk + obb file
-estilo br youtube unlimited money hack
-estilo br mediafire direct link no ads
-estilo br tips and tricks to earn money fast
-estilo br gameplay and review 2021 video
-download do jogo estilo br dinheiro infinito mediafire
-baixar o jogo estilo br diamantes infinitos mediafire
-como baixar e instalar o jogo estilo br dinheiro infinito
-como jogar o jogo estilo br diamantes infinitos online
-como personalizar o seu carro ou moto no jogo estilo br
-como ganhar rachas e manobras no jogo estilo br
-quais são os melhores carros e motos do jogo estilo br
-quais são os novos veículos e atualizações do jogo estilo br
-qual é a versão mais atualizada do jogo estilo br
-qual é o link do mediafire para baixar o jogo estilo br

-

The game has a rating of 4.2 out of 5 stars on Google Play Store, with over 5 million downloads and more than 130 thousand reviews. The game is free to play, but it contains ads and in-app purchases.

-

How to download Estilo BR from mediafire?

-

If you want to download Estilo BR from mediafire, a popular file-sharing platform, you will need to follow these steps (a scripted sideload alternative for computer users is sketched after the list):

-
    -
  1. Go to this link: Estilo BR v0.977 DINHEIRO INFINITO - BAIXAR APK MOD. This is a modded version of the game that gives you unlimited money and diamonds.
  2. Click on the green button that says "Download APK (125.77 MB)". This will start downloading the APK file to your device.
  3. Once the download is complete, locate the file in your device's download folder. Tap on it to start the installation process.
  4. If you see a message that says "For your security, your phone is not allowed to install unknown apps from this source", go to your device's settings and enable the option to install apps from unknown sources.
  5. Follow the steps on screen to complete the installation. You may need to grant some permissions to the app.
  6. Once the installation is done, you can open the app and enjoy Estilo BR with unlimited money and diamonds.
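If you prefer to sideload the file from a computer instead of tapping through the installer on the phone, the same install can be scripted. This is a minimal sketch, not something provided by the game or the download site: it assumes the Android Debug Bridge (adb) is installed on your computer, USB debugging is enabled on the phone, and the file name below is a placeholder for whatever you actually downloaded.

```python
import subprocess
from pathlib import Path

# Placeholder path: point this at the APK you downloaded in your browser.
apk_path = Path.home() / "Downloads" / "estilo-br-mod.apk"

def sideload(apk: Path) -> None:
    """Install an APK on a USB-connected Android device via adb."""
    if not apk.is_file():
        raise FileNotFoundError(f"APK not found: {apk}")
    # 'adb devices' lists connected devices; 'adb install -r' (re)installs the APK.
    subprocess.run(["adb", "devices"], check=True)
    subprocess.run(["adb", "install", "-r", str(apk)], check=True)

if __name__ == "__main__":
    sideload(apk_path)
```

Either way, the on-device steps above achieve the same result; the script is only a convenience for people who already use adb.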
-

Note: This method is not endorsed by the official developers of Estilo BR, and it may violate their terms of service. Use it at your own risk.

-

How to get unlimited money and diamonds in Estilo BR?

-

If you want to get unlimited money and diamonds in Estilo BR, you have two options:

- Download the modded APK version described above, which already comes with unlimited money and diamonds.
- Use a third-party hack or cheat tool that claims to inject money and diamonds into the regular version of the game.

Both of these options are not recommended, as they can ruin the fun and challenge of the game. The best way to enjoy Estilo BR is to play it fair and square, earning money and diamonds by winning races, completing missions, and watching ads. This will also support the developers of the game and help them improve it further.

-

What are the best tips and tricks for Estilo BR?

-

If you want to master Estilo BR and become a drag racing legend in Brazil, here are some tips and tricks that can help you:

- -

Conclusion

-

Estilo BR is a fantastic drag racing game that will keep you hooked for hours. Whether you want to race against other players online, customize your vehicles with endless options, or explore the open world map with realistic graphics and physics, Estilo BR has it all.

-

If you are looking for a way to download Estilo BR from mediafire, you can follow the steps we provided above. However, we advise you to be careful when using modded or hacked versions of the game, as they may cause problems or get you banned.

-

As noted above, the most rewarding way to enjoy Estilo BR is still to play it fair and square, earning your money and diamonds in-game and supporting the developers in the process.

-

So what are you waiting for? Download Estilo BR now and join the drag racing revolution in Brazil!

-

FAQs

-

Here are some frequently asked questions about Estilo BR:

-
  1. Is Estilo BR available for iOS devices?
     No, Estilo BR is only available for Android devices at the moment. The developers have not announced any plans to release an iOS version of the game.
  2. How can I contact the developers of Estilo BR?
     You can contact the developers of Estilo BR through their official Facebook page: RF Entertainment - Home | Facebook. You can also send them an email at rfentertainmentoficial@gmail.com.
  3. How can I report a bug or a problem in Estilo BR?
     You can report a bug or a problem in Estilo BR through the game's settings menu. Tap on the gear icon on the top right corner of the screen, then tap on "Report Bug". You can also send a screenshot or a video of the bug or problem to help the developers fix it.
  4. How can I support Estilo BR?
     You can support Estilo BR by playing the game regularly, rating it on Google Play Store, writing positive reviews, sharing it with your friends, and making in-app purchases. You can also follow the developers on their social media accounts and join their community of fans.
  5. How can I learn more about Estilo BR?
     You can learn more about Estilo BR by visiting the game's official website: Estilo BR - RF Entertainment. You can also watch gameplay videos and tutorials on YouTube, such as this one: Estilo BR - Gameplay (Android) - YouTube.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Velocity Rush Z Mod APK and Enjoy Unlimited Action and Money.md b/spaces/1phancelerku/anime-remove-background/Download Velocity Rush Z Mod APK and Enjoy Unlimited Action and Money.md deleted file mode 100644 index a4b951cd1f72b113890be0193c2208c7307e5580..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Velocity Rush Z Mod APK and Enjoy Unlimited Action and Money.md +++ /dev/null @@ -1,128 +0,0 @@ -
-

Velocity Rush Z Mod Apk: A Fast-Paced Shooter with Parkour Elements

-

Introduction

-

If you are looking for a thrilling and adrenaline-pumping game that combines shooting and parkour, then you should check out Velocity Rush Z mod apk. This is a first-person shooter game with parkour elements from the creator of Velocity Rush. You can vault, climb, wallrun, slide and shoot mercenaries and zombies in the apocalyptic city to earn money to buy more weapons. In this article, we will tell you what is Velocity Rush Z, why you should download the mod apk version, what are its features, and how to install it on your device.

-

velocity rush z mod apk


DOWNLOAD ✓✓✓ https://jinyurl.com/2uNP39



-

What is Velocity Rush Z?

-

Velocity Rush Z is a game that was released in 2021 by sosomod.net. It is a sequel to the popular game Velocity Rush, which was also a shooter with parkour elements. The game has improved graphics, gameplay, and features compared to the original one. You can experience high action shooting in close combat, bullet time (slowmo), parkour moves and skills, various weapons and upgrades, and an apocalyptic city setting. The game has a rating of 4.5 out of 5 stars on sosomod.net.

-

Why download Velocity Rush Z mod apk?

-

The mod apk version of Velocity Rush Z has some advantages over the original one. It gives you unlimited money, so you can buy any weapon or upgrade you want without worrying about the cost, and it unlocks all the levels and modes, so you can enjoy the game without restrictions. It also removes ads, so you can play without interruptions or annoyances. Finally, it is safe and easy to install, as we will show you later.

-

Features of Velocity Rush Z mod apk

-

High action shooting in close combat

-

The game is not for the faint-hearted, as you will face hordes of enemies in close quarters. You will have to use your reflexes and skills to survive and eliminate them. You can use different types of weapons, such as pistols, shotguns, rifles, grenades, and more. You can also switch between weapons quickly and reload them efficiently.

-

Bullet time (Slowmo)

-

One of the coolest features of the game is bullet time, which allows you to slow down time and aim more precisely at your enemies. You can activate bullet time by tapping on the screen or by using a special item. Bullet time can help you avoid bullets, dodge attacks, and take out multiple enemies at once.

-

Parkour moves and skills

-

The game is not just about shooting, but also about moving around the city with style and agility. You can perform parkour moves and skills, such as vaulting over obstacles, climbing walls, wallrunning, sliding under gaps, and more. You can also use these moves to reach hidden areas, find secrets, and escape from danger.

-

velocity rush z mod apk download
-velocity rush z apk mod unlimited money
-velocity rush z mod apk latest version
-velocity rush z mod apk android 1
-velocity rush z mod apk free shopping
-velocity rush z hack mod apk
-velocity rush z mod apk revdl
-velocity rush z mod apk rexdl
-velocity rush z mod apk offline
-velocity rush z mod apk no ads
-velocity rush z fps shooter mod apk
-velocity rush z parkour shooter mod apk
-velocity rush z action game mod apk
-velocity rush z bullet time mod apk
-velocity rush z zombie mode mod apk
-velocity rush z mod apk unlimited ammo
-velocity rush z mod apk unlimited health
-velocity rush z mod apk unlocked all weapons
-velocity rush z mod apk unlimited coins
-velocity rush z mod apk unlimited gems
-velocity rush z premium mod apk
-velocity rush z pro mod apk
-velocity rush z vip mod apk
-velocity rush z full version mod apk
-velocity rush z cracked mod apk
-download game velocity rush z mod apk
-download velocity rush z hack mod apk
-download velocity rush z cheat mod apk
-download velocity rush z premium mod apk free
-download velocity rush z pro mod apk gratis
-how to install velocity rush z mod apk
-how to play velocity rush z mod apk
-how to update velocity rush z mod apk
-how to get velocity rush z mod apk
-how to download velocity rush z mod apk on pc
-how to download velocity rush z mod apk on ios
-how to download velocity rush z mod apk on android
-best settings for velocity rush z mod apk
-best tips for velocity rush z mod apk
-best tricks for velocity rush z mod apk
-best weapons in velocity rush z mod apk
-best maps in velocity rush z mod apk
-best modes in velocity rush z mod apk
-best cheats for velocity rush z mod apk
-best hacks for velocity rush z mod apk
-best mods for velocity rush z mod apk
-best sites to download velocity rush z mod apk
-best reviews of velocity rush z mod apk
-best alternatives to velocity rush z mod apk

-

Various weapons and upgrades

-

The game offers a variety of weapons and upgrades for you to choose from. You can buy new weapons or upgrade your existing ones with money that you earn from completing missions or killing enemies. You can also customize your weapons with different skins, attachments, and effects. Some of the weapons and upgrades available in the game are:

| Weapon | Description | Upgrade |
| --- | --- | --- |
| Pistol | A basic weapon that can fire fast and accurate shots. | You can upgrade the pistol's damage, fire rate, magazine size, and reload speed. |
| Shotgun | A powerful weapon that can deal massive damage at close range. | You can upgrade the shotgun's damage, spread, magazine size, and reload speed. |
| Rifle | A versatile weapon that can fire bursts of bullets at medium range. | You can upgrade the rifle's damage, fire rate, magazine size, and reload speed. |
| Grenade | An explosive weapon that can cause area damage and knock back enemies. | You can upgrade the grenade's damage, blast radius, and number of grenades you can carry. |
| Slowmo Item | A special item that can activate bullet time for a limited duration. | You can upgrade the slowmo item's duration, cooldown, and number of slowmo items you can carry. |
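For readers who like to see the upgrade system laid out as data, the table above could be modeled roughly like this. This is purely an illustration of the structure, not code from the game; the class name, field names, and values are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Weapon:
    name: str
    description: str
    upgradable_stats: list[str] = field(default_factory=list)

# Rough transcription of the table above; values are illustrative only.
ARSENAL = [
    Weapon("Pistol", "Fast, accurate shots.",
           ["damage", "fire rate", "magazine size", "reload speed"]),
    Weapon("Shotgun", "Massive damage at close range.",
           ["damage", "spread", "magazine size", "reload speed"]),
    Weapon("Rifle", "Bursts of bullets at medium range.",
           ["damage", "fire rate", "magazine size", "reload speed"]),
    Weapon("Grenade", "Area damage and knockback.",
           ["damage", "blast radius", "carry count"]),
    Weapon("Slowmo Item", "Activates bullet time for a limited duration.",
           ["duration", "cooldown", "carry count"]),
]

if __name__ == "__main__":
    for w in ARSENAL:
        print(f"{w.name}: upgrades -> {', '.join(w.upgradable_stats)}")
```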
-

Apocalyptic city setting

-

The game is set in a post-apocalyptic city that has been overrun by mercenaries and zombies. You will explore different locations in the city, such as rooftops, streets, alleys, buildings, and more. You will also encounter different types of enemies, such as snipers, melee fighters, bombers, and bosses. The game has a dark and gritty atmosphere that suits the theme of the game.

-

How to download and install Velocity Rush Z mod apk

-

Step 1: Download the apk file from a trusted source

-

The first step is to download the apk file of Velocity Rush Z mod apk from a trusted source. You can use the link below to download the apk file from sosomod.net, which is a reliable website that provides mod apk games. The apk file size is about 100 MB, so make sure you have enough space on your device.

-

Step 2: Enable unknown sources on your device

-

The second step is to enable unknown sources on your device. This is necessary to install apk files that are not from the Google Play Store. To enable unknown sources, go to your device settings, then security or privacy, then toggle on the option that says "allow installation of apps from unknown sources". You may also need to confirm this action by tapping on "OK" or "Yes".

-

Step 3: Install the apk file and launch the game

-

The third step is to install the apk file and launch the game. To install the apk file, locate it in your device storage or downloads folder, then tap on it. You may see a pop-up window that asks you to confirm the installation. Tap on "Install" or "Next" until the installation is complete. Then, tap on "Open" or "Done" to launch the game. You can now enjoy Velocity Rush Z mod apk with unlimited money and unlocked levels.

-

Conclusion

-

Velocity Rush Z mod apk is a fast-paced shooter with parkour elements that will keep you on the edge of your seat. You can experience high action shooting in close combat, bullet time (slowmo), parkour moves and skills, various weapons and upgrades, and an apocalyptic city setting. You can also download the mod apk version of the game to get unlimited money and unlocked levels. To download and install Velocity Rush Z mod apk, follow the steps above. We hope you enjoy playing this game as much as we do.

-

FAQs

-

Here are some frequently asked questions about Velocity Rush Z mod apk:

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Free Download Hitman Sniper APK - Play the Tactical Sniper Mission Game on Android.md b/spaces/1phancelerku/anime-remove-background/Free Download Hitman Sniper APK - Play the Tactical Sniper Mission Game on Android.md deleted file mode 100644 index 5f1db66b8ae0d168fc2a3b2e17ef687dd50606b1..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Free Download Hitman Sniper APK - Play the Tactical Sniper Mission Game on Android.md +++ /dev/null @@ -1,12 +0,0 @@ - -

Hitman Sniper: How to Download and Play the Best Sniper Game on Mobile

- If you are a fan of stealth, strategy, and shooting games, you might want to check out Hitman Sniper, one of the most popular and acclaimed sniper games on mobile. In this article, we will tell you what Hitman Sniper is, why you should play it, how to download it for free, and how to play it effectively.

What is Hitman Sniper?

- Hitman Sniper is a mobile game developed by CDE Entertainment and published by Square Enix. It is based on the Hitman franchise, which follows the adventures of Agent 47, a professional assassin who works for a mysterious organization. In Hitman Sniper, you step into the shoes of Agent 47 and take on various sniping missions in different locations. You have to use your strategic skills and creativity to orchestrate the perfect assassination kill shot, while avoiding detection and eliminating other threats. The game features more than 150 missions and 11 different contracts, each with its own objectives, targets, and secrets. You can also unlock and upgrade 17 unique weapons, each with its own perks and abilities. The game also has a zombie mode, where you have to survive waves of undead enemies in a desert valley. You have to use your accuracy and speed to take down as many zombies as possible, while collecting weapon parts and blueprints.

Why should you play Hitman Sniper?

- Hitman Sniper is not just a simple shooting game. It is a game that requires you to think, plan, and execute your actions with precision and finesse. Here are some of the benefits of playing Hitman Sniper:

- It improves your concentration and focus. You have to pay attention to every detail in the environment, such as guards, cameras, traps, windows, doors, etc. You also have to monitor your target's movements and behavior, and wait for the right moment to strike.
- It enhances your problem-solving and decision-making skills. You have to analyze the situation and choose the best course of action. You can use various methods to eliminate your target, such as headshots, body shots, accidents, explosions, distractions, etc. You also have to deal with unexpected events, such as alarms, reinforcements, witnesses, etc.
- It stimulates your creativity and imagination. You can use your environment to your advantage, such as shooting objects to cause chain reactions, shooting electrical wires to electrocute enemies, shooting gas tanks to create fireballs, etc. You can also use your weapons in different ways, such as using silencers, scopes, suppressors, etc.
- It provides you with entertainment and satisfaction. You can enjoy the stunning graphics and realistic sound effects of the game. You can also feel the thrill and excitement of pulling off a perfect kill shot. You can also compete against your friends and other players in the leaderboards.

How to download Hitman Sniper APK for free?

- If you want to play Hitman Sniper on your Android device, you can download it from the Google Play Store for $0.99. However, if you want to get it for free, you can download an APK file from a third-party website. An APK file is an Android application package file that contains all the files needed to install an app on your device. However, before you download an APK file, you need to take some precautions (a small checksum check you can run on the downloaded file is sketched at the end of this section):

- Make sure that your device has enough storage space for the file.
- Make sure that your device is compatible with the game's requirements.
- Make sure that you have a reliable internet connection for the download.
- Make sure that you have enabled the option to install apps from unknown sources in your device's settings.

Once you have taken these precautions, you can follow these steps to install the APK file on your Android device:

- Connect your Android device to your computer using a USB cable.
- Copy the APK file from your computer to your device's storage. You can use any folder you want, but make sure you remember where you put it.
- Disconnect your device from your computer and open your file explorer app on your device.
- Locate the APK file you copied and tap on it to open it.
- Tap Install at the bottom of the screen and wait for the installation to finish.
- Tap Open to launch the game or Done to exit the installer.

You have successfully installed Hitman Sniper APK for free on your Android device. Enjoy!
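One practical way to act on the "trusted source" precaution is to compare the downloaded file against a checksum published by the site you got it from, when one is available. Below is a minimal sketch of that check; the file path and the expected hash are placeholders, since no real checksum is published in this article.

```python
import hashlib
from pathlib import Path

# Both values are placeholders for this example.
apk_path = Path.home() / "Downloads" / "hitman-sniper.apk"
expected_sha256 = "put-the-published-checksum-here"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(apk_path)
    print("SHA-256:", actual)
    if expected_sha256 != "put-the-published-checksum-here":
        print("Match!" if actual == expected_sha256 else "MISMATCH - do not install this file.")
```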

How to play Hitman Sniper effectively?

- Now that you have downloaded and installed Hitman Sniper, you might want to know how to play it well and complete all the missions. Here are some tips and tricks to help you master the sniper skills and become the ultimate assassin:

- Use the variable scope to zoom in and out while aiming. You can adjust the level of zoom by tapping the plus and minus buttons on the screen. You can also swipe left and right to move the scope horizontally and up and down to move it vertically.
- Use the marksman perk to improve your aim and slow time. You can activate this perk by pressing the Shift key on your keyboard or tapping the icon on the screen. This will allow you to aim more precisely and take advantage of opportunities that might otherwise be missed.
- Use the piercing perk to penetrate bodies and objects. This perk will let you shoot through multiple targets with one bullet, creating collateral damage and saving ammo. You can also use this perk to shoot through glass, walls, doors, etc.
- Use the environment to your advantage. You can shoot various objects in the environment to cause chain reactions, accidents, explosions, distractions, etc. For example, you can shoot a car's gas tank to make it explode, a chandelier to make it fall, a fire extinguisher to create a smoke screen, etc.
- Use different methods to eliminate your target. You don't have to always go for a headshot or a body shot. You can also use other methods, such as accidents, poison, explosions, etc. For example, you can shoot a gas pipe near your target to make it leak, then shoot a nearby candle to ignite it and create a fireball.
- Use different weapons and perks for different scenarios. You can unlock and upgrade 17 unique weapons in the game, each with its own perks and abilities. You can also equip different perks for each weapon, such as damage, rate of fire, extended magazine, ammo, subsonic, suppressor, etc. You should choose the weapon and perk combination that suits your style and mission objective.
- Complete challenges and contracts to earn money and rewards. You can complete various challenges and contracts in each mission, such as killing a certain number of targets, killing targets in a certain way, killing targets within a time limit, etc. These will earn you money and rewards, such as weapon parts, blueprints, perks, etc. You can use these to unlock and upgrade your weapons and perks.

Conclusion

- Hitman Sniper is a fun and challenging sniper game that will test your strategic skills and creativity. You can download it for free from a third-party website using an APK file, but make sure you take some precautions before doing so. You can also use our tips and tricks to play the game effectively and complete all the missions. If you are ready to become the best sniper in the world, download Hitman Sniper today and enjoy!

FAQs

- Q: How do I get more money in Hitman Sniper?
  A: You can get more money by completing challenges and contracts in each mission. You can also replay missions to earn more money.
- Q: How do I unlock more weapons in Hitman Sniper?
  A: You can unlock more weapons by collecting weapon parts and blueprints in each mission. You can also buy some weapons with real money.
- Q: How do I upgrade my weapons in Hitman Sniper?
  A: You can upgrade your weapons by using weapon parts and blueprints that you have collected or bought. You can also equip different perks for each weapon.
- Q: How do I switch weapons in Hitman Sniper?
  A: You can switch weapons by tapping the weapon icon on the screen or pressing the Q key on your keyboard.
- Q: How do I play zombie mode in Hitman Sniper?
  A: You can play zombie mode by tapping the zombie icon on the main menu or pressing the Z key on your keyboard.

-

hitman sniper download apk free


Download ->->->-> https://jinyurl.com/2uNQut



197e85843d
-
-
\ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/lib/bots/bing/tts.ts b/spaces/2023Liu2023/bingo/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? 
null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/2ndelement/voicevox/voicevox_engine/utility/core_version_utility.py b/spaces/2ndelement/voicevox/voicevox_engine/utility/core_version_utility.py deleted file mode 100644 index 25f2d3a3e7e7ed3a25e52075eb74be08c96451db..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/voicevox_engine/utility/core_version_utility.py +++ /dev/null @@ -1,14 +0,0 @@ -from typing import Iterable - -from semver.version import Version - - -def parse_core_version(version: str) -> Version: - return Version.parse(version) - - -def get_latest_core_version(versions: Iterable[str]) -> str: - if len(versions) == 0: - raise Exception("versions must be non-empty.") - - return str(max(map(parse_core_version, versions))) diff --git a/spaces/801artistry/RVC801/utils/backups.py b/spaces/801artistry/RVC801/utils/backups.py deleted file mode 100644 index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/utils/backups.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import shutil -import hashlib -import time -import base64 - - - - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - weights_exist = False - for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH): - for filename in files: - filepath = os.path.join(root, filename) - if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - shutil.copy2(filepath, backup_filepath) # copy file with metadata - print(f'Imported file from Google Drive backup: {filename}') - elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'): - weights_exist = True - weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights'))) - weights_folderpath = os.path.dirname(weights_filepath) - if not os.path.exists(weights_folderpath): - os.makedirs(weights_folderpath) - print(f'Created weights folder: {weights_folderpath}', flush=True) - shutil.copy2(filepath, weights_filepath) # copy file with metadata - print(f'Imported file from weights: {filename}') - if weights_exist: - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("No weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def get_md5_hash(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def copy_weights_folder_to_drive(): - destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights') - try: - if not os.path.exists(destination_folder): - os.makedirs(destination_folder) - - num_copied = 0 - for filename in os.listdir(WEIGHTS_FOLDER): - if filename.endswith('.pth'): - source_file = 
os.path.join(WEIGHTS_FOLDER, filename) - destination_file = os.path.join(destination_folder, filename) - if not os.path.exists(destination_file): - shutil.copy2(source_file, destination_file) - num_copied += 1 - print(f"Copied {filename} to Google Drive!") - - if num_copied == 0: - print("No new finished models found for copying.") - else: - print(f"Finished copying {num_copied} files to Google Drive!") - - except Exception as e: - print(f"An error occurred while copying weights: {str(e)}") - # You can log the error or take appropriate actions here. - -def backup_files(): - print("\nStarting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - - while True: - try: - updated = False # flag to check if any files were updated - last_backup_timestamps = {} - - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except FileNotFoundError: - pass # File does not exist yet, which is fine - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - shutil.copy2(filepath, backup_filepath) # copy file with metadata - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - if last_backup_timestamp is None: - print(f'Backed up file: {filename}') - else: - print(f'Updating backed up file: {filename}') - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - os.remove(backup_filepath) - print(f'Deleted file: {filepath}') - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - sleep_time = 15 - else: - sleep_time = 0.1 - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - - time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups - - except Exception as e: - print(f"An error occurred: {str(e)}") - # You can log the error or take appropriate actions here. 
diff --git a/spaces/AI4PD/hexviz/hexviz/ec_number.py b/spaces/AI4PD/hexviz/hexviz/ec_number.py deleted file mode 100644 index e3549cda973dd06b6d3c788540567aaf52cc772b..0000000000000000000000000000000000000000 --- a/spaces/AI4PD/hexviz/hexviz/ec_number.py +++ /dev/null @@ -1,9 +0,0 @@ -class ECNumber: - def __init__(self, number, coordinate, color, radius): - self.number = number - self.coordinate = coordinate - self.color = color - self.radius = radius - - def __str__(self): - return f"(EC: {self.number}, Coordinate: {self.coordinate}, Color: {self.color})" diff --git a/spaces/Abhilashvj/planogram-compliance/utils/loss.py b/spaces/Abhilashvj/planogram-compliance/utils/loss.py deleted file mode 100644 index 2d0878c8c05848d77ec975ddc14f227a99351ad9..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/utils/loss.py +++ /dev/null @@ -1,291 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Loss functions -""" - -import torch -import torch.nn as nn - -from utils.metrics import bbox_iou -from utils.torch_utils import de_parallel - - -def smooth_BCE( - eps=0.1, -): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441 - # return positive, negative label smoothing BCE targets - return 1.0 - 0.5 * eps, 0.5 * eps - - -class BCEBlurWithLogitsLoss(nn.Module): - # BCEwithLogitLoss() with reduced missing label effects. - def __init__(self, alpha=0.05): - super().__init__() - self.loss_fcn = nn.BCEWithLogitsLoss( - reduction="none" - ) # must be nn.BCEWithLogitsLoss() - self.alpha = alpha - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - pred = torch.sigmoid(pred) # prob from logits - dx = pred - true # reduce only missing label effects - # dx = (pred - true).abs() # reduce missing label and false label effects - alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4)) - loss *= alpha_factor - return loss.mean() - - -class FocalLoss(nn.Module): - # Wraps focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super().__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = ( - "none" # required to apply FL to each element - ) - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - # p_t = torch.exp(-loss) - # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability - - # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py - pred_prob = torch.sigmoid(pred) # prob from logits - p_t = true * pred_prob + (1 - true) * (1 - pred_prob) - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = (1.0 - p_t) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == "mean": - return loss.mean() - elif self.reduction == "sum": - return loss.sum() - else: # 'none' - return loss - - -class QFocalLoss(nn.Module): - # Wraps Quality focal loss around existing loss_fcn(), i.e. 
criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super().__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = ( - "none" # required to apply FL to each element - ) - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - - pred_prob = torch.sigmoid(pred) # prob from logits - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = torch.abs(true - pred_prob) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == "mean": - return loss.mean() - elif self.reduction == "sum": - return loss.sum() - else: # 'none' - return loss - - -class ComputeLoss: - sort_obj_iou = False - - # Compute losses - def __init__(self, model, autobalance=False): - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss( - pos_weight=torch.tensor([h["cls_pw"]], device=device) - ) - BCEobj = nn.BCEWithLogitsLoss( - pos_weight=torch.tensor([h["obj_pw"]], device=device) - ) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE( - eps=h.get("label_smoothing", 0.0) - ) # positive, negative BCE targets - - # Focal loss - g = h["fl_gamma"] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - m = de_parallel(model).model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get( - m.nl, [4.0, 1.0, 0.25, 0.06, 0.02] - ) # P3-P7 - self.ssi = ( - list(m.stride).index(16) if autobalance else 0 - ) # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = ( - BCEcls, - BCEobj, - 1.0, - h, - autobalance, - ) - self.na = m.na # number of anchors - self.nc = m.nc # number of classes - self.nl = m.nl # number of layers - self.anchors = m.anchors - self.device = device - - def __call__(self, p, targets): # predictions, targets - lcls = torch.zeros(1, device=self.device) # class loss - lbox = torch.zeros(1, device=self.device) # box loss - lobj = torch.zeros(1, device=self.device) # object loss - tcls, tbox, indices, anchors = self.build_targets( - p, targets - ) # targets - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = indices[i] # image, anchor, gridy, gridx - tobj = torch.zeros( - pi.shape[:4], dtype=pi.dtype, device=self.device - ) # target obj - - n = b.shape[0] # number of targets - if n: - # pxy, pwh, _, pcls = pi[b, a, gj, gi].tensor_split((2, 4, 5), dim=1) # faster, requires torch 1.8.0 - pxy, pwh, _, pcls = pi[b, a, gj, gi].split( - (2, 2, 1, self.nc), 1 - ) # target-subset of predictions - - # Regression - pxy = pxy.sigmoid() * 2 - 0.5 - pwh = (pwh.sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - iou = bbox_iou( - pbox, tbox[i], CIoU=True - ).squeeze() # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - iou = iou.detach().clamp(0).type(tobj.dtype) - if self.sort_obj_iou: - j = iou.argsort() - b, a, gj, gi, iou = b[j], a[j], gj[j], gi[j], iou[j] - if self.gr < 1: - iou = (1.0 - self.gr) + self.gr * iou - tobj[b, a, gj, gi] = iou # iou ratio - - # Classification - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like( - pcls, self.cn, device=self.device - ) # targets - t[range(n), tcls[i]] = self.cp - lcls += 
self.BCEcls(pcls, t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., 4], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = ( - self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - ) - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp["box"] - lobj *= self.hyp["obj"] - lcls *= self.hyp["cls"] - bs = tobj.shape[0] # batch size - - return (lbox + lobj + lcls) * bs, torch.cat( - (lbox, lobj, lcls) - ).detach() - - def build_targets(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch = [], [], [], [] - gain = torch.ones( - 7, device=self.device - ) # normalized to gridspace gain - ai = ( - torch.arange(na, device=self.device) - .float() - .view(na, 1) - .repeat(1, nt) - ) # same as .repeat_interleave(nt) - targets = torch.cat( - (targets.repeat(na, 1, 1), ai[..., None]), 2 - ) # append anchor indices - - g = 0.5 # bias - off = ( - torch.tensor( - [ - [0, 0], - [1, 0], - [0, 1], - [-1, 0], - [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], - device=self.device, - ).float() - * g - ) # offsets - - for i in range(self.nl): - anchors, shape = self.anchors[i], p[i].shape - gain[2:6] = torch.tensor(shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain # shape(3,n,7) - if nt: - # Matches - r = t[..., 4:6] / anchors[:, None] # wh ratio - j = ( - torch.max(r, 1 / r).max(2)[0] < self.hyp["anchor_t"] - ) # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1 < g) & (gxy > 1)).T - l, m = ((gxi % 1 < g) & (gxi > 1)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - bc, gxy, gwh, a = t.chunk( - 4, 1 - ) # (image, class), grid xy, grid wh, anchors - a, (b, c) = a.long().view(-1), bc.long().T # anchors, image, class - gij = (gxy - offsets).long() - gi, gj = gij.T # grid indices - - # Append - indices.append( - (b, a, gj.clamp_(0, shape[2] - 1), gi.clamp_(0, shape[3] - 1)) - ) # image, anchor, grid - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - return tcls, tbox, indices, anch diff --git a/spaces/AchyuthGamer/ImMagician-Image-Generator/style.css b/spaces/AchyuthGamer/ImMagician-Image-Generator/style.css deleted file mode 100644 index 379210ecf8db217898c227dc6a016698f3205f81..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/ImMagician-Image-Generator/style.css +++ /dev/null @@ -1,24 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: #fff; - background: #1565c0; - border-radius: 100vh; -} - -#component-0 { - max-width: 730px; - margin: auto; -} - -#share-btn-container{padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; max-width: 13rem; margin-left: auto;margin-top: 0.35em;} -div#share-btn-container > div 
{flex-direction: row;background: black;align-items: center} -#share-btn-container:hover {background-color: #060606} -#share-btn {all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.5rem !important; padding-bottom: 0.5rem !important;right:0;font-size: 15px;} -#share-btn * {all: unset} -#share-btn-container div:nth-child(-n+2){width: auto !important;min-height: 0px !important;} -#share-btn-container .wrap {display: none !important} -#share-btn-container.hidden {display: none!important} \ No newline at end of file diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/dpt_depth.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/dpt_depth.py deleted file mode 100644 index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/dpt_depth.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock, - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - ): - - super(DPT, self).__init__() - - self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - 
nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/Addai/Breast_cancer_detection_with_deep_transfer_learning/README.md b/spaces/Addai/Breast_cancer_detection_with_deep_transfer_learning/README.md deleted file mode 100644 index deb8da1ad28473f479d17a39c365ddaa51e800bc..0000000000000000000000000000000000000000 --- a/spaces/Addai/Breast_cancer_detection_with_deep_transfer_learning/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Breast Cancer Detection With Deep Transfer Learning -emoji: 📈 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Aditya9790/yolo7-object-tracking/models/__init__.py b/spaces/Aditya9790/yolo7-object-tracking/models/__init__.py deleted file mode 100644 index 84952a8167bc2975913a6def6b4f027d566552a9..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# init \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/classes/event_center.ts b/spaces/AgentVerse/agentVerse/ui/src/classes/event_center.ts deleted file mode 100644 index ff4cbb288f100e6b27b53ebb68aa389a21cea497..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/classes/event_center.ts +++ /dev/null @@ -1,5 +0,0 @@ -import { Events } from "phaser"; - -const eventsCenter = new Events.EventEmitter(); - -export default eventsCenter; diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/localforage-files.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/localforage-files.d.ts deleted file mode 100644 index 4e91f51458f5c165129bc4745c828f37153e4b99..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/localforage-files.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import Files from './storage/localforage/files/Files'; -export default Files; \ No newline at end of file diff --git a/spaces/AkashKhamkar/QnA-generator/before_run.py b/spaces/AkashKhamkar/QnA-generator/before_run.py deleted file mode 100644 index 4d0e3085dd4436e24dbb2a90e581d342f4e6ce54..0000000000000000000000000000000000000000 --- a/spaces/AkashKhamkar/QnA-generator/before_run.py +++ /dev/null @@ -1,6 +0,0 @@ -import nltk - -nltk.download('stopwords') -nltk.download('wordnet') -nltk.download('punkt') -nltk.download('brown') \ No newline at end of file diff --git a/spaces/Ame42/UBTH/utils.py b/spaces/Ame42/UBTH/utils.py deleted file mode 100644 index d7a416f4397f8c55f1e991877d0ce9f9d0ba7515..0000000000000000000000000000000000000000 --- a/spaces/Ame42/UBTH/utils.py +++ /dev/null @@ -1,132 +0,0 @@ -# This is a sample Python script. - -# Press Shift+F10 to execute it or replace it with your code. 
-import os.path -import pandas as pd -import glob -import os - -sn = "S/N" -ipp = "IPPIS" -gif = "GIFMIS" -col_1 = "BENEFICIARY NAME" -gif_col = [col_1, "Employee", "Rank", "Amount"] -ipp_col = ["Employee Number", "Full Name", "Grade Level", "Step", "Grosss Deductions SUM 1"] - - -def get_raw(link, sheet, file_ext='.xlsx'): - match file_ext: - # handle testing files - case '.csv': - return pd.read_csv(link) - - case '.xlsx' | '.xls': - return pd.read_excel(link, sheet_name=sheet) - - case _: - return UnusualFileError(link, "Invalid file extension") - - -def get_data(link, sheet, doc_type=ipp, file_type='.csv'): - match file_type: - # handle testing files - case '.csv': - return pd.read_csv(link) - - # handle GIFMIS files - case '.xlsx' | '.xls' if doc_type == gif: - - try: - data = pd.read_excel(link, sheet_name=sheet, skiprows=3, header=0) - return data.drop(data.columns.difference(gif_col), axis=1) - except ValueError as err: - raise UnusualFileError(link, str(err)) - except KeyError: - return None - - # handle IPPIS files - case '.xlsx' | '.xls' if doc_type == ipp: - - try: - data = pd.read_excel(link, sheet_name=sheet, skiprows=4, header=0) - return data.drop(data.columns.difference(ipp_col), axis=1) - except ValueError as err: - raise UnusualFileError(link, str(err)) - except KeyError: - return None - - # default - case _: - return None - - -def merge_two(first: pd.DataFrame, second: pd.DataFrame, doc_type): - hows = ['inner', 'left', 'right'] - first = first.drop(sn, axis=1, errors="ignore") - second = second.drop(sn, axis=1, errors="ignore") - - both, prev, curr = tuple( - [first.merge(second, how=how, on=first.columns[0] if doc_type == ipp else first.columns[1]) for how in hows] - ) - - prev = prev[ - prev[ - prev.columns[5] if doc_type == ipp else prev.columns[4] # Get rows where name column is empty - ].isnull() - ].dropna(subset=[ - prev.columns[0] if doc_type == ipp else prev.columns[1] # Check for empty rows in the employee number column - ]).dropna(axis=1, how="all") # Remove empty columns - - curr = curr[ - curr[ - curr.columns[1] if doc_type == ipp else curr.columns[0] # Get rows where name column is empty - ].isnull() - ].dropna(subset=[ - curr.columns[0] if doc_type == ipp else curr.columns[1] # Check for empty rows in the employee number column - ]).dropna(axis=1, how="all") # Remove empty columns - - return both, prev, curr - - -def merge_all(data_list, keys=tuple("Employee")): - return pd.concat( - [data.drop(sn, axis=1, errors="ignore") for data in data_list], - axis=1, - join='inner', - keys=keys, - ignore_index=True - ) - - -def retrieve(dt): - return get_data(dt.name, os.path.splitext(dt.name)[1]) - - -def clear_csv_trash(): - pattern = '*.csv' # Desired file pattern - - # Get a list of file paths matching the pattern - matching_files = glob.glob(pattern) - - # Loop through the matching files and delete them - for file_path in matching_files: - try: - os.remove(file_path) - except OSError as e: - print(f"Error deleting {file_path}: {e}") - - -class UnusualFileError(Exception): - def __init__(self, file, message): - self.source = file - self.cause = message - - def __str__(self): - from numpy.core._dtype import __repr__ - return __repr__(self.source) - - def get_file(self): - return self.source - - def get_message(self): - return self.cause diff --git a/spaces/Andy1621/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py 
deleted file mode 100644 index e73a098d32d6ce3f6a0e121538ed90de81699ff5..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py +++ /dev/null @@ -1,63 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - pretrained='open-mmlab://regnetx_3.2gf', - backbone=dict( - _delete_=True, - type='RegNet', - arch='regnetx_3.2gf', - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[96, 192, 432, 1008], - out_channels=256, - num_outs=5)) -img_norm_cfg = dict( - # The mean and std are used in PyCls when training RegNets - mean=[103.53, 116.28, 123.675], - std=[57.375, 57.12, 58.395], - to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.00005) -lr_config = dict(step=[28, 34]) -runner = dict(type='EpochBasedRunner', max_epochs=36) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/hrnet.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/hrnet.py deleted file mode 100644 index c0fd0a974192231506aa68b1e1719f618b78a1b3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/hrnet.py +++ /dev/null @@ -1,537 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init, - kaiming_init) -from mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(nn.Module): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange - is in this module. 
- """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN')): - super(HRModule, self).__init__() - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_BLOCKS({len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_CHANNELS({len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_INCHANNELS({len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return nn.ModuleList(branches) - - def _make_fuse_layers(self): - if self.num_branches == 1: - return None - - num_branches = self.num_branches - in_channels = self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - nn.Upsample( - scale_factor=2**(j - i), mode='nearest'))) - elif j == i: - fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - 
build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(nn.Module): - """HRNet backbone. - - High-Resolution Representations for Labeling Pixels and Regions - arXiv: https://arxiv.org/abs/1904.04514 - - Args: - extra (dict): detailed configuration for each stage of HRNet. - in_channels (int): Number of input image channels. Default: 3. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - norm_eval=True, - with_cp=False, - zero_init_residual=False): - super(HRNet, self).__init__() - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.zero_init_residual = zero_init_residual - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - 
num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*hr_modules), in_channels - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/bsrgan_light.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/bsrgan_light.py deleted file mode 100644 index 808c7f882cb75e2ba2340d5b55881d11927351f0..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/image_degradation/bsrgan_light.py +++ /dev/null @@ -1,651 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] 
- - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = 
np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, 
sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - - wd2 = wd2/4 - wd = wd/4 - - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random()) - img = ndimage.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. 
- D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(80, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None, up=False): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - # elif i == 1: - # image = add_blur(image, sf=sf) - - if i == 0: - pass - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.8: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - # - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - if up: - image = cv2.resize(image, (w1, h1), interpolation=cv2.INTER_CUBIC) # todo: random, as above? 
want to condition on it then - example = {"image": image} - return example - - - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_hq = img - img_lq = deg_fn(img)["image"] - img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), - (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') diff --git a/spaces/Aqdas/YouTube_Video_OpenAI_whisper/app.py b/spaces/Aqdas/YouTube_Video_OpenAI_whisper/app.py deleted file mode 100644 index c9342c6fdf8f9d286ac628d44c0b3ccbecc98fd7..0000000000000000000000000000000000000000 --- a/spaces/Aqdas/YouTube_Video_OpenAI_whisper/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import streamlit as st -from whisper import dowload_youtube_video, transcribe_audio -import os - - -st.title("Youtube Video + OpenAI Whisper") -if st.text_input('Please Enter the access code') == os.environ['password']: - - user_input = st.text_input('Enter Your YouTube URL') - - with st.spinner('Sit back and relax. It takes a minute.'): - if st.button('Transcribe'): - if user_input: - download_audio = dowload_youtube_video(user_input) - st.write(transcribe_audio()) - - \ No newline at end of file diff --git a/spaces/BAAI/AltDiffusion/header.html b/spaces/BAAI/AltDiffusion/header.html deleted file mode 100644 index 552442988d335fd65384dd234086a13996c6e96a..0000000000000000000000000000000000000000 --- a/spaces/BAAI/AltDiffusion/header.html +++ /dev/null @@ -1,43 +0,0 @@ -
-
FlagAI -
-
-

- FlagStudio -

-
-

- FlagStudio 项目致力于贡献优秀AI生成艺术作品。此双语文生图模型项目基于 stable diffusion,由BAAI旗下的FlagAI团队提供支持,相关代码和模型权重在AltDiffusion中进行开源。 -

-

- FlagStudio aims to provide high quality AI-generated artwork. Our current bilingual model is based on the original stable diffusion model and is capable to generate images from both Chinese and English text. FlagStudio is developed and supported by the FlagAI team. Relevant code and model weights released in AltDiffusion.(open.platform@baai.ac.cn) -

-

- AltDiffusion has been added to 🧨Diffusers, see the documentation page: 🧨 Pipeline doc -

-

- 我们在colab设置了一个脚本,你可以在colab试用我们的模型!(We have a script on colab, You can try our models on colab.Enjoy it!) - Open In Colab -

-
\ No newline at end of file diff --git a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/pipeline.py b/spaces/BIASLab/sars-cov-2-classification-fcgr/src/pipeline.py deleted file mode 100644 index eaa3deaa6138fc538bcb3fd051b7d33a1ed69b1d..0000000000000000000000000000000000000000 --- a/spaces/BIASLab/sars-cov-2-classification-fcgr/src/pipeline.py +++ /dev/null @@ -1,85 +0,0 @@ - -import json -from pathlib import Path -from collections import OrderedDict -from typing import List, Tuple, Optional, Union - -FUNCTIONS_PIPELINE = OrderedDict() - -def register_in_pipeline(func): - """Collect functions for the pipeline""" - print(f"Adding {func.__name__}") - if func.__name__ not in FUNCTIONS_PIPELINE: - FUNCTIONS_PIPELINE[func.__name__] = func - else: - raise Exception(f"Duplicated function with name {func.__name__}") - -class Pipeline: - """Define a sequence of functions to be applied to one input""" - FUNCTIONS_PIPELINE = FUNCTIONS_PIPELINE - def __init__(self, pipeline: Optional[List[Tuple[str, dict]]] = None): - self.pipeline = pipeline if pipeline else [] - - def __call__(self, x): - """Apply pipeline to the input 'x'""" - for pipe in self.pipeline: - func_name, *args, kwargs = pipe - assert isinstance(kwargs, dict), f"Wrong declaration in {func_name!r}. Must be (str, dict) or (str, tuple, dict)" - # apply preprocessing - if args: - #print("args and kwargs") - x = self.apply(x, func_name, *args, **kwargs) - else: - #print("only kwargs") - x = self.apply(x, func_name, **kwargs) - return x - - @classmethod - def apply(cls, x, func, *args, **kwargs): - """Compute func(x, *args, **kwargs)""" - if func in cls.FUNCTIONS_PIPELINE: - return cls.FUNCTIONS_PIPELINE[func](x, *args, **kwargs) - else: - raise TypeError(f"{func} not available") - - def __gt__(self, add_pipe: Union[List,Tuple]): - """Add a pipe ("func_name", args, kwargs) or ("func_name", kwargs) to the current pipeline""" - if self.is_available(add_pipe[0]): - self.pipeline.append(add_pipe) - return self - else: - raise NotImplementedError(f"{add_pipe[0]!r} not available in Pipeline") - - def is_available(self, func_name: str): - """Return True if the function 'func_name' is available in Pipeline""" - return True if func_name in self.FUNCTIONS_PIPELINE else False - - def asJSON(self, path_save: str =None): - """Save pipeline configuration as json file""" - path_save = Path(path_save) if path_save else Path("pipeline.json") - with open(path_save, "w", encoding="utf8") as fp: - json.dump(self.pipeline, fp, indent=4, ensure_ascii=False) - print(f"Pipeline configuration saved at {path_save!r}") - - def fromJSON(self, path_pipeline: str): - """Load pipeline configuration from json file""" - path_pipeline = Path(path_pipeline) - with open(path_pipeline, "r", encoding="utf8") as fp: - pipeline = json.load(fp) - - # Corrobate that all functions are availables - available_functions = {pipe[0]: self.is_available(pipe[0]) - for pipe in pipeline} - - # TODO: change with the right Exception here - if not all(available_functions.values()): - print(""" - Some functions are not availables. - Please use the @register_in_pipeline decorator to include this functions to the Pipeline. 
- """) - functions_not_availables = dict(filter(lambda item: item[0], available_functions.items())) - return [func_name for func_name, available in functions_not_availables.items() - if available is False] - - self.pipeline = pipeline - print(f"Pipeline loaded from {path_pipeline!r}") \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/guidml.py b/spaces/Bart92/RVC_HF/guidml.py deleted file mode 100644 index aa35e9f8e3386bfec61fc9ad6f807b458ab35882..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/guidml.py +++ /dev/null @@ -1,710 +0,0 @@ -""" -0416后的更新: - 引入config中half - 重建npy而不用填写 - v2支持 - 无f0模型支持 - 修复 - - int16: - 增加无索引支持 - f0算法改harvest(怎么看就只有这个会影响CPU占用),但是不这么改效果不好 -""" -import os, sys, traceback, re - -import json - -now_dir = os.getcwd() -sys.path.append(now_dir) -from configs.config import Config - -Config = Config() - -import torch_directml -import PySimpleGUI as sg -import sounddevice as sd -import noisereduce as nr -import numpy as np -from fairseq import checkpoint_utils -import librosa, torch, pyworld, faiss, time, threading -import torch.nn.functional as F -import torchaudio.transforms as tat -import scipy.signal as signal - - -# import matplotlib.pyplot as plt -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from i18n import I18nAuto - -i18n = I18nAuto() -device = torch_directml.device(torch_directml.default_device()) -current_dir = os.getcwd() - - -class RVC: - def __init__( - self, key, hubert_path, pth_path, index_path, npy_path, index_rate - ) -> None: - """ - 初始化 - """ - try: - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.sr = 16000 - self.window = 160 - if index_rate != 0: - self.index = faiss.read_index(index_path) - # self.big_npy = np.load(npy_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - print("index search enabled") - self.index_rate = index_rate - model_path = hubert_path - print("load model(s) from {}".format(model_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", - ) - self.model = models[0] - self.model = self.model.to(device) - if Config.is_half: - self.model = self.model.half() - else: - self.model = self.model.float() - self.model.eval() - cpt = torch.load(pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - print(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - if Config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - except: - print(traceback.format_exc()) - - def get_f0(self, x, f0_up_key, inp_f0=None): - x_pad = 1 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - 
f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def infer(self, feats: torch.Tensor) -> np.ndarray: - """ - 推理函数 - """ - audio = feats.clone().cpu().numpy() - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - if Config.is_half: - feats = feats.half() - else: - feats = feats.float() - inputs = { - "source": feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9 if self.version == "v1" else 12, - } - torch.cuda.synchronize() - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - - ####索引优化 - try: - if ( - hasattr(self, "index") - and hasattr(self, "big_npy") - and self.index_rate != 0 - ): - npy = feats[0].cpu().numpy().astype("float32") - score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if Config.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate - + (1 - self.index_rate) * feats - ) - else: - print("index search FAIL or disabled") - except: - traceback.print_exc() - print("index search FAIL") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - torch.cuda.synchronize() - print(feats.shape) - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(audio, self.f0_up_key) - p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存 - else: - pitch, pitchf = None, None - p_len = min(feats.shape[1], 13000) # 太大了爆显存 - torch.cuda.synchronize() - # print(feats.shape,pitch.shape) - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - p_len = torch.LongTensor([p_len]).to(device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(device) - with torch.no_grad(): - if self.if_f0 == 1: - infered_audio = ( - self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float() - ) - torch.cuda.synchronize() - return infered_audio - - -class GUIConfig: - def __init__(self) -> None: - self.hubert_path: str = "" - 
self.pth_path: str = "" - self.index_path: str = "" - self.npy_path: str = "" - self.pitch: int = 12 - self.samplerate: int = 44100 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -30 - self.crossfade_time: float = 0.08 - self.extra_time: float = 0.04 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.index_rate = 0.3 - - -class GUI: - def __init__(self) -> None: - self.config = GUIConfig() - self.flag_vc = False - - self.launcher() - - def load(self): - ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) = self.get_devices() - try: - with open("values1.json", "r") as j: - data = json.load(j) - except: - with open("values1.json", "w") as j: - data = { - "pth_path": "", - "index_path": "", - "sg_input_device": input_devices[ - input_devices_indices.index(sd.default.device[0]) - ], - "sg_output_device": output_devices[ - output_devices_indices.index(sd.default.device[1]) - ], - "threhold": "-45", - "pitch": "0", - "index_rate": "0", - "block_time": "1", - "crossfade_length": "0.04", - "extra_time": "1", - } - return data - - def launcher(self): - data = self.load() - sg.theme("LightBlue3") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title=i18n("Load model"), - layout=[ - [ - sg.Input( - default_text="hubert_base.pt", - key="hubert_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Hubert Model"), - initial_folder=os.path.join(os.getcwd()), - file_types=(("pt files", "*.pt"),), - ), - ], - [ - sg.Input( - default_text=data.get("pth_path", ""), - key="pth_path", - ), - sg.FileBrowse( - i18n("Select the .pth file"), - initial_folder=os.path.join(os.getcwd(), "weights"), - file_types=(("weight files", "*.pth"),), - ), - ], - [ - sg.Input( - default_text=data.get("index_path", ""), - key="index_path", - ), - sg.FileBrowse( - i18n("Select the .index file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("index files", "*.index"),), - ), - ], - [ - sg.Input( - default_text="你不需要填写这个You don't need write this.", - key="npy_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Select the .npy file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("feature files", "*.npy"),), - ), - ], - ], - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Input device")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=data.get("sg_input_device", ""), - ), - ], - [ - sg.Text(i18n("Output device")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=data.get("sg_output_device", ""), - ), - ], - ], - title=i18n("Audio device (please use the same type of driver)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Response threshold")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=data.get("threhold", ""), - ), - ], - [ - sg.Text(i18n("Pitch settings")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=data.get("pitch", ""), - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=data.get("index_rate", ""), - ), - ], - ], - title=i18n("General settings"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("Sample length")), - sg.Slider( - range=(0.1, 3.0), - key="block_time", - resolution=0.1, - orientation="h", - default_value=data.get("block_time", ""), - ), - ], - [ - sg.Text(i18n("Fade 
length")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=data.get("crossfade_length", ""), - ), - ], - [ - sg.Text(i18n("Extra推理时长")), - sg.Slider( - range=(0.05, 3.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=data.get("extra_time", ""), - ), - ], - [ - sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"), - sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"), - ], - ], - title=i18n("Performance settings"), - ), - ], - [ - sg.Button(i18n("开始音频Convert"), key="start_vc"), - sg.Button(i18n("停止音频Convert"), key="stop_vc"), - sg.Text(i18n("Inference time (ms):")), - sg.Text("0", key="infer_time"), - ], - ] - self.window = sg.Window("RVC - GUI", layout=layout) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "start_vc" and self.flag_vc == False: - if self.set_values(values) == True: - print("using_cuda:" + str(torch.cuda.is_available())) - self.start_vc() - settings = { - "pth_path": values["pth_path"], - "index_path": values["index_path"], - "sg_input_device": values["sg_input_device"], - "sg_output_device": values["sg_output_device"], - "threhold": values["threhold"], - "pitch": values["pitch"], - "index_rate": values["index_rate"], - "block_time": values["block_time"], - "crossfade_length": values["crossfade_length"], - "extra_time": values["extra_time"], - } - with open("values1.json", "w") as j: - json.dump(settings, j) - if event == "stop_vc" and self.flag_vc == True: - self.flag_vc = False - - def set_values(self, values): - if len(values["pth_path"].strip()) == 0: - sg.popup(i18n("Select the pth file")) - return False - if len(values["index_path"].strip()) == 0: - sg.popup(i18n("Select the index file")) - return False - pattern = re.compile("[^\x00-\x7F]+") - if pattern.findall(values["hubert_path"]): - sg.popup(i18n("The hubert model path must not contain Chinese characters")) - return False - if pattern.findall(values["pth_path"]): - sg.popup(i18n("The pth file path must not contain Chinese characters.")) - return False - if pattern.findall(values["index_path"]): - sg.popup(i18n("The index file path must not contain Chinese characters.")) - return False - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt") - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.npy_path = values["npy_path"] - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.index_rate = values["index_rate"] - return True - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.block_frame = int(self.config.block_time * self.config.samplerate) - self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate) - self.sola_search_frame = int(0.012 * self.config.samplerate) - self.delay_frame = int(0.01 * self.config.samplerate) # 往前预留0.02s - self.extra_frame = int(self.config.extra_time * self.config.samplerate) - self.rvc = None - self.rvc = RVC( - self.config.pitch, - 
self.config.hubert_path, - self.config.pth_path, - self.config.index_path, - self.config.npy_path, - self.config.index_rate, - ) - self.input_wav: np.ndarray = np.zeros( - self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - dtype="float32", - ) - self.output_wav: torch.Tensor = torch.zeros( - self.block_frame, device=device, dtype=torch.float32 - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_in_window: torch.Tensor = torch.linspace( - 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler1 = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ) - self.resampler2 = tat.Resample( - orig_freq=self.rvc.tgt_sr, - new_freq=self.config.samplerate, - dtype=torch.float32, - ) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - 接受音频输入 - """ - with sd.Stream( - channels=2, - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - print("Audio block passed.") - print("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - 音频处理 - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.I_noise_reduce: - indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate) - - """noise gate""" - frame_length = 2048 - hop_length = 1024 - rms = librosa.feature.rms( - y=indata, frame_length=frame_length, hop_length=hop_length - ) - db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - # print(rms.shape,db.shape,db) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * hop_length : (i + 1) * hop_length] = 0 - self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata) - - # infer - print("input_wav:" + str(self.input_wav.shape)) - # print('infered_wav:'+str(infer_wav.shape)) - infer_wav: torch.Tensor = self.resampler2( - self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav))) - )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to( - device - ) - print("infer_wav:" + str(infer_wav.shape)) - - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - cor_nom = F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame], - self.sola_buffer[None, None, :], - ) - cor_den = torch.sqrt( - F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - ** 2, - torch.ones(1, 1, self.crossfade_frame, device=device), - ) - + 1e-8 - ) - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - print("sola offset: " + str(int(sola_offset))) - - # crossfade - self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame] - self.output_wav[: self.crossfade_frame] *= self.fade_in_window - self.output_wav[: self.crossfade_frame] += self.sola_buffer[:] - if sola_offset < self.sola_search_frame: - self.sola_buffer[:] = ( - infer_wav[ - -self.sola_search_frame - - self.crossfade_frame - + sola_offset : -self.sola_search_frame - + sola_offset - ] - * self.fade_out_window - ) - else: - self.sola_buffer[:] = ( - infer_wav[-self.crossfade_frame :] * self.fade_out_window - ) - - if self.config.O_noise_reduce: - outdata[:] = np.tile( - nr.reduce_noise( - 
y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate - ), - (2, 1), - ).T - else: - outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - print("infer time:" + str(total_time)) - - def get_devices(self, update: bool = True): - """获取设备列表""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for hostapi in hostapis: - for device_idx in hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """设置输出设备""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[input_devices.index(input_device)] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - print("input device:" + str(sd.default.device[0]) + ":" + str(input_device)) - print("output device:" + str(sd.default.device[1]) + ":" + str(output_device)) - - -gui = GUI() diff --git a/spaces/Benson/text-generation/Examples/Botn Fiebre Descargar Pc.md b/spaces/Benson/text-generation/Examples/Botn Fiebre Descargar Pc.md deleted file mode 100644 index 3190b6b24beeafe61cb35bf8939c62cf60d5c4ce..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Botn Fiebre Descargar Pc.md +++ /dev/null @@ -1,58 +0,0 @@ - -

How to Download and Play Button Fever on PC

-

Do you love puzzle games that test your multitasking skills and creativity? If so, you may want to try Button Fever, a fun and addictive game that lets you place and merge buttons on a board. In this article, we will show you how to download and play Button Fever on your PC, along with some tips and tricks to help you get the most out of it.

-

button fever download pc


DOWNLOAD >>>>> https://bltlly.com/2v6Jpl



-

What is Button Fever?

-

A fun and addictive puzzle game

-

Button Fever is a game developed by Rollic Games, a company that specializes in casual and hyper-casual games for mobile devices. Button Fever is one of its most popular titles, with more than 10 million downloads on the Google Play Store. The game is suitable for all ages and can be played offline or online.

-

Features and gameplay

-

The game is simple but challenging. You have a board with empty slots and a queue of buttons at the bottom. Your goal is to place the buttons on the board and clear lines by matching the buttons' colors or shapes. The more lines you clear, the more points you earn. You can also earn coins by completing levels, which you can use to unlock new buttons and themes.

-

The game has different difficulty levels, ranging from easy to hard. Each level has a different board size, number of buttons, and time limit. You can also choose between different modes, such as classic, arcade, or zen. The game also has daily challenges, leaderboards, and achievements to keep you engaged.

-

How to download Button Fever on PC?

-

Option 1: Download from the official website

-

If you want to download Button Fever directly from the developer's website, you can follow these steps:

-

Step 1: Visit the website and click the download button.

-

This will take you to a page where you can choose your operating system (Windows or Mac) and download the installation file.

-

- -

Once you have downloaded the file, double-click it to start the installation process. You may need to grant the program permission to make changes to your device. Follow the on-screen instructions to complete the installation.

-

Step 3: Launch the game and enjoy.

-

After installation, you will find a shortcut icon for Button Fever on your desktop or in the Start menu. Click it to launch the game and start playing.

-

Option 2: Download from a third-party platform

-

If you prefer to download Button Fever from a third-party platform that offers a variety of games, you can use one of these options:

-

Step 1: Install a game launcher such as Epic Games or Steam.

-

A game launcher is a program that lets you access, download, install, update, and play games from different developers and publishers. Some of the most popular game launchers are Epic Games and Steam, which you can download for free from their respective websites.

-

Step 2: Create an account and sign in.

-

Once you have installed the game launcher, you will need to create an account and sign in with your email and password. You may also need to verify your account and accept the platform's terms and conditions.

-

Step 3: Search for Button Fever and buy it or get it for free.

-

Once you are signed in, you can browse the game library and search for Button Fever. Depending on the platform, you may have to buy the game or get it for free. For example, on Epic Games, Button Fever is available for free, while on Steam it costs $4.99. You can also check the game's reviews, ratings, screenshots, and videos before deciding to get it.

-

Step 4: Install the game and play it from the launcher.

- -

Tips and tricks for playing Button Fever on PC

-

Playing Button Fever on PC can be more enjoyable and convenient than playing it on a mobile device. Here are some tips and tricks to help you play better and have more fun:

-

Use the mouse or keyboard to interact with the buttons.

-

One of the advantages of playing Button Fever on PC is that you can use the mouse or keyboard to interact with the buttons. You can drag and drop the buttons with the mouse, or use the arrow keys to move them. You can also use the spacebar to rotate them or press the Enter key to place them on the board. This can make your gameplay faster and smoother.

-

Clear lines by matching the buttons' colors or shapes.

-

The main objective of Button Fever is to clear lines by matching the buttons' colors or shapes. You can match three or more buttons of the same color or shape horizontally, vertically, or diagonally. When you clear a line, you earn points and coins and make room for more buttons. You can also create combos by clearing several lines at once, which gives you bonus points and coins.
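To make the matching rule above concrete, here is a minimal, illustrative Python sketch of a line-clear check on a small grid. It is not Button Fever's actual code: the board representation, the color values, and the run length of three are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the game's real implementation.
# Board: 2D list where each cell holds a color string (e.g. "red") or None.
from typing import List, Optional, Set, Tuple

def find_clearable(board: List[List[Optional[str]]], run: int = 3) -> Set[Tuple[int, int]]:
    """Return cells that belong to a run of `run` or more equal, non-empty values
    in any of the four directions (horizontal, vertical, both diagonals)."""
    rows, cols = len(board), len(board[0])
    to_clear: Set[Tuple[int, int]] = set()
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(rows):
        for c in range(cols):
            value = board[r][c]
            if value is None:
                continue
            for dr, dc in directions:
                # cells that would form a run starting at (r, c) in this direction
                cells = [(r + i * dr, c + i * dc) for i in range(run)]
                if all(0 <= rr < rows and 0 <= cc < cols and board[rr][cc] == value
                       for rr, cc in cells):
                    to_clear.update(cells)
    return to_clear

# Example: the middle row forms a run of three "blue" buttons and would be cleared.
board = [
    [None,  "red",  None],
    ["blue", "blue", "blue"],
    [None,  "red",  None],
]
print(sorted(find_clearable(board)))   # [(1, 0), (1, 1), (1, 2)]
```

In a real implementation the returned cells would be emptied and scored; clearing several runs at once would correspond to the combo bonus mentioned above.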

-

Earn coins and unlock new buttons and themes.

-

As you play Button Fever, you will earn coins that you can use to unlock new buttons and themes. Each button has a different color, shape, and design, such as stars, hearts, flowers, animals, fruits, and more. Each theme has a different background, music, and sound effects, such as forest, beach, space, and more. You can customize your game by choosing your favorite buttons and themes from the shop.

-

Challenge yourself with different levels and modes.

- -

Conclusion

-

Button Fever is a fun and addictive puzzle game that lets you place and merge buttons on a board. You can download and play Button Fever on your PC by following one of the options shown above. You can also use some of our tips and tricks to improve your gameplay and have more fun. If you like puzzle games that test your multitasking skills and creativity, you should definitely give Button Fever a try.

-

Frequently asked questions

-

Here are some frequently asked questions about Button Fever:

-
Compatibility and License

This download is licensed as shareware for the Windows operating system from organizer and PIM software and can be used as a free trial until the trial period ends (after 30 days). The VueMinder Pro 2022.01 demo is available to all software users as a free download with potential restrictions and is not necessarily the full version of this software.

What version of Windows can VueMinder Pro run on?

VueMinder Pro can be used on a computer running Windows 11 or Windows 10. Previous versions of the operating system shouldn't be a problem, with Windows 8, Windows 7 and Windows Vista having been tested. Windows XP is supported. It runs on both 32-bit and 64-bit systems with no dedicated 64-bit download provided.

Filed under: VueMinder Pro Download

We have tested VueMinder Pro 2022.01 against malware with several different programs. We certify that this program is clean of viruses, malware and trojans.

Free Download for Windows 42.76 MB - Tested clean
  • $$ Cost:Free Trial

    -

    The download has been tested by an editor here on a PC and a list of features has been compiled; see below. We've also created some screenshots of PTFB Pro to illustrate the user interface and show the overall usage and features of this utility program.

    -

    Features of PTFB Pro

  • Automation: Automate repetitive tasks and processes.
  • Clipboard: Record and playback clipboard data.
  • Debugging: Debug and troubleshoot programs.
  • Error Trapping: Capture and respond to errors.
  • Hotkeys: Create keyboard shortcuts.
  • Logging: Record and review events.
  • Popup Blocker: Stop popup windows and ads.
  • Reminders: Schedule reminders and alerts.
  • Reporting: Generate reports and summaries.
  • Scheduling: Set up recurring tasks.
  • Scripting: Create programs and macros.
  • Shutdown: Schedule system shutdown.
  • Startup: Configure programs to start up with Windows.
  • Tray Icon: Manage program features from the system tray.
  • Compatibility and License

    This download is licensed as shareware for the Windows operating system from PC utilities and can be used as a free trial until the trial period ends (after 30 days). The PTFB Pro 5.4.6 demo is available to all software users as a free download with potential restrictions and is not necessarily the full version of this software. We have determined PTFB Pro to have reached end of life and no further updates are to be expected.

    What version of Windows can PTFB Pro run on?

    PTFB Pro can be used on a computer running Windows 11 or Windows 10. Previous versions of the operating system shouldn't be a problem, with Windows 8, Windows 7 and Windows Vista having been tested. Windows XP is supported. It runs on both 32-bit and 64-bit systems with no dedicated 64-bit download provided.

    Filed under: PTFB Pro Download

    We have tested PTFB Pro 5.4.6 against malware with several different programs. We certify that this program is clean of viruses, malware and trojans.

    Free Download for Windows 8.4 MB - Tested clean
  • $$ Cost:Free Trial

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/data/encoders/bytes.py b/spaces/gradio/HuBERT/fairseq/data/encoders/bytes.py deleted file mode 100644 index f88f8f6929f5b6bdb0db470be9ebedf8fe1f752d..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/data/encoders/bytes.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from fairseq.data.encoders import register_bpe -from fairseq.data.encoders.byte_utils import ( - SPACE, - SPACE_ESCAPE, - byte_encode, - smart_byte_decode, -) - - -@register_bpe("bytes") -class Bytes(object): - def __init__(self, *unused): - pass - - @staticmethod - def add_args(parser): - pass - - @staticmethod - def encode(x: str) -> str: - encoded = byte_encode(x) - escaped = encoded.replace(SPACE, SPACE_ESCAPE) - return SPACE.join(list(escaped)) - - @staticmethod - def decode(x: str) -> str: - unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE) - return smart_byte_decode(unescaped) diff --git a/spaces/gradio/gpt-neo/tasks.py b/spaces/gradio/gpt-neo/tasks.py deleted file mode 100644 index f4a0304746dc0e8cf93ec34069144e9f7ba3f7b6..0000000000000000000000000000000000000000 --- a/spaces/gradio/gpt-neo/tasks.py +++ /dev/null @@ -1,116 +0,0 @@ -import os.path -import json -import requests -import numpy as np -import ftfy -from data.encoders import fetch_encoder, encode -import tensorflow as tf -import re -from functools import partial - -lambada_src_uri = 'http://eaidata.bmk.sh/data/lambada_test.jsonl' -normalization = 'NFKC' - - -# Note: this task is called "lambada" but it really refers to OpenAI's version -# of the task, which actually differs in some ways from the task described in -# the original paper. So, strictly speaking, accuracy values from this task -# should not be compared to accuracy values from the original lambada task. 
-# For more information, see -# https://github.com/openai/gpt-2/issues/131 - -def lambada_create_tokens_data(params, path): - with open(path, 'w') as f: - req = requests.get(lambada_src_uri) - req.raise_for_status() - jsons = [json.loads(l) for l in req.iter_lines()] - texts = [ftfy.fix_text(j['text'], normalization=normalization) for j in jsons] - enc = fetch_encoder(params) - arrays = [encode(enc, t) for t in texts] - json.dump(arrays, f) - return arrays - - -def lambada_read_or_create_tokens_data(params, path): - # if you tell me where the file should go, i will helpfully create it for you - if not os.path.exists(path): - return lambada_create_tokens_data(params, path) - with open(path) as f: - return json.load(f) - - -def bin_pack(params, tokens_data): - eos_token = params['eos_id'] - n_ctx = params['n_ctx'] - dummy_token = 1 - pad_batch_size = params['eval_batch_size'] - bins = [] - for a in tokens_data: - if len(bins) == 0 or len(bins[-1]) + len(a) + 1 > n_ctx: - bins.append([]) - bins[-1] += a - bins[-1].append(eos_token) - while len(bins) % pad_batch_size != 0: - bins.append([]) - bins_array = np.full((len(bins), n_ctx), dummy_token, dtype=np.uint16) - for i, b in enumerate(bins): - bins_array[i, 0:len(b)] = b - return bins_array - - -def lambada_init(params): - ds_configs = params['dataset_configs'] - l = [ - ds_configs[ds_id].get('lambada_tokens_path', "./lambada.json") - for ds_id, _, _, _ in params['datasets'] - ] - assert len(l) > 0, 'lambada_tokens_path not found in the dataset config' - lt_path = l[0] - assert lt_path.endswith('.json'), 'lambada_tokens_path must have extension json' - - tokens_data = lambada_read_or_create_tokens_data(params, lt_path) - bins_array = bin_pack(params, tokens_data) - params['lambada_tokens_path'] = lt_path - params['lambada_n_steps'] = len(bins_array) // params['eval_batch_size'] - - -def lambada_get_task_info(params): - return { - 'n_steps': params['lambada_n_steps'], - } - - -# The LAMBADA evaluation code looks at the logits of each position just before an eos_token -def lambada_input(params): - eos_token = 50256 if params['n_vocab'] >= 50257 else 0 - n_ctx = params['n_ctx'] - lt_path = params['lambada_tokens_path'] - tokens_data = lambada_read_or_create_tokens_data(params, lt_path) - bins_array = bin_pack(params, tokens_data) - dataset = tf.data.Dataset.from_tensor_slices(bins_array) - - def _get_output(bin): - bin = tf.cast(bin, dtype=tf.int32) - indexes = tf.range(n_ctx) - results = tf.gather(bin, (indexes + 1) % n_ctx) - eos_next_positions = tf.math.equal(tf.gather(bin, (indexes + 2) % n_ctx), eos_token) - output = tf.where(eos_next_positions, results, tf.constant(eos_token, shape=[n_ctx])) - bin = tf.reshape(bin, [n_ctx]) - bin = tf.cast(bin, dtype=tf.int32) - output = tf.reshape(output, [n_ctx]) - output = tf.cast(output, dtype=tf.int32) - return bin, output - - dataset = dataset.map(_get_output) - dataset = dataset.batch(params['eval_batch_size'], drop_remainder=True) - dataset = dataset.repeat() - return dataset - - -task_descriptors = { - 'lambada': { - 'init_fn': lambada_init, - 'get_task_info_fn': lambada_get_task_info, - 'input_fn': lambada_input, - } -} diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/experiments/pretrained_models/README.md b/spaces/guetLzy/Real-ESRGAN-Demo/experiments/pretrained_models/README.md deleted file mode 100644 index d0cc4afcbdd2c733f6b946bb86bd00baa90e8295..0000000000000000000000000000000000000000 --- a/spaces/guetLzy/Real-ESRGAN-Demo/experiments/pretrained_models/README.md +++ /dev/null @@ -1 +0,0 @@ -# 
Put downloaded pre-trained models here diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/training/loss.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/training/loss.py deleted file mode 100644 index 3b6d0833ca639bb3b08f216419dfa25f1e657da2..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/training/loss.py +++ /dev/null @@ -1,159 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Loss functions.""" - -import numpy as np -import torch -from torch_utils import training_stats -from torch_utils.ops import conv2d_gradfix -from torch_utils.ops import upfirdn2d - -# ---------------------------------------------------------------------------- - - -class Loss: - # to be overridden by subclass - def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, gain, cur_nimg): - raise NotImplementedError() - -# ---------------------------------------------------------------------------- - - -class StyleGAN2Loss(Loss): - def __init__(self, device, G, D, augment_pipe=None, r1_gamma=10, style_mixing_prob=0, pl_weight=0, pl_batch_shrink=2, pl_decay=0.01, pl_no_weight_grad=False, blur_init_sigma=0, blur_fade_kimg=0): - super().__init__() - self.device = device - self.G = G - self.D = D - self.augment_pipe = augment_pipe - self.r1_gamma = r1_gamma - self.style_mixing_prob = style_mixing_prob - self.pl_weight = pl_weight - self.pl_batch_shrink = pl_batch_shrink - self.pl_decay = pl_decay - self.pl_no_weight_grad = pl_no_weight_grad - self.pl_mean = torch.zeros([], device=device) - self.blur_init_sigma = blur_init_sigma - self.blur_fade_kimg = blur_fade_kimg - - def run_G(self, z, c, update_emas=False): - ws = self.G.mapping(z, c, update_emas=update_emas) - if self.style_mixing_prob > 0: - with torch.autograd.profiler.record_function('style_mixing'): - cutoff = torch.empty([], dtype=torch.int64, - device=ws.device).random_(1, ws.shape[1]) - cutoff = torch.where(torch.rand( - [], device=ws.device) < self.style_mixing_prob, cutoff, torch.full_like(cutoff, ws.shape[1])) - ws[:, cutoff:] = self.G.mapping( - torch.randn_like(z), c, update_emas=False)[:, cutoff:] - img = self.G.synthesis(ws, update_emas=update_emas) - return img, ws - - def run_D(self, img, c, blur_sigma=0, update_emas=False): - blur_size = np.floor(blur_sigma * 3) - if blur_size > 0: - with torch.autograd.profiler.record_function('blur'): - f = torch.arange(-blur_size, blur_size + 1, - device=img.device).div(blur_sigma).square().neg().exp2() - img = upfirdn2d.filter2d(img, f / f.sum()) - if self.augment_pipe is not None: - img = self.augment_pipe(img) - logits = self.D(img, c, update_emas=update_emas) - return logits - - def accumulate_gradients(self, phase, real_img, real_c, gen_z, gen_c, gain, cur_nimg): - assert phase in ['Gmain', 'Greg', 'Gboth', 'Dmain', 'Dreg', 'Dboth'] - if self.pl_weight == 0: - phase = {'Greg': 'none', 'Gboth': 'Gmain'}.get(phase, phase) - if self.r1_gamma == 0: - phase = {'Dreg': 'none', 'Dboth': 'Dmain'}.get(phase, phase) - blur_sigma = max(1 - cur_nimg / (self.blur_fade_kimg * 1e3), 0) * \ - self.blur_init_sigma if self.blur_fade_kimg > 0 else 0 - - # 
Gmain: Maximize logits for generated images. - if phase in ['Gmain', 'Gboth']: - with torch.autograd.profiler.record_function('Gmain_forward'): - gen_img, _gen_ws = self.run_G(gen_z, gen_c) - gen_logits = self.run_D(gen_img, gen_c, blur_sigma=blur_sigma) - training_stats.report('Loss/scores/fake', gen_logits) - training_stats.report('Loss/signs/fake', gen_logits.sign()) - # -log(sigmoid(gen_logits)) - loss_Gmain = torch.nn.functional.softplus(-gen_logits) - training_stats.report('Loss/G/loss', loss_Gmain) - with torch.autograd.profiler.record_function('Gmain_backward'): - loss_Gmain.mean().mul(gain).backward() - - # Gpl: Apply path length regularization. - if phase in ['Greg', 'Gboth']: - with torch.autograd.profiler.record_function('Gpl_forward'): - batch_size = gen_z.shape[0] // self.pl_batch_shrink - gen_img, gen_ws = self.run_G( - gen_z[:batch_size], gen_c[:batch_size]) - pl_noise = torch.randn_like( - gen_img) / np.sqrt(gen_img.shape[2] * gen_img.shape[3]) - with torch.autograd.profiler.record_function('pl_grads'), conv2d_gradfix.no_weight_gradients(self.pl_no_weight_grad): - pl_grads = torch.autograd.grad(outputs=[( - gen_img * pl_noise).sum()], inputs=[gen_ws], create_graph=True, only_inputs=True)[0] - pl_lengths = pl_grads.square().sum(2).mean(1).sqrt() - pl_mean = self.pl_mean.lerp(pl_lengths.mean(), self.pl_decay) - self.pl_mean.copy_(pl_mean.detach()) - pl_penalty = (pl_lengths - pl_mean).square() - training_stats.report('Loss/pl_penalty', pl_penalty) - loss_Gpl = pl_penalty * self.pl_weight - training_stats.report('Loss/G/reg', loss_Gpl) - with torch.autograd.profiler.record_function('Gpl_backward'): - loss_Gpl.mean().mul(gain).backward() - - # Dmain: Minimize logits for generated images. - loss_Dgen = 0 - if phase in ['Dmain', 'Dboth']: - with torch.autograd.profiler.record_function('Dgen_forward'): - gen_img, _gen_ws = self.run_G(gen_z, gen_c, update_emas=True) - gen_logits = self.run_D( - gen_img, gen_c, blur_sigma=blur_sigma, update_emas=True) - training_stats.report('Loss/scores/fake', gen_logits) - training_stats.report('Loss/signs/fake', gen_logits.sign()) - loss_Dgen = torch.nn.functional.softplus( - gen_logits) # -log(1 - sigmoid(gen_logits)) - with torch.autograd.profiler.record_function('Dgen_backward'): - loss_Dgen.mean().mul(gain).backward() - - # Dmain: Maximize logits for real images. - # Dr1: Apply R1 regularization. 
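-        #     (The R1 term computed below is (r1_gamma / 2) * ||d real_logits / d real_img||^2,
-        #      i.e. the squared gradient of the real-image logits, summed over C, H, W per sample.)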
- if phase in ['Dmain', 'Dreg', 'Dboth']: - name = 'Dreal' if phase == 'Dmain' else 'Dr1' if phase == 'Dreg' else 'Dreal_Dr1' - with torch.autograd.profiler.record_function(name + '_forward'): - real_img_tmp = real_img.detach().requires_grad_( - phase in ['Dreg', 'Dboth']) - real_logits = self.run_D( - real_img_tmp, real_c, blur_sigma=blur_sigma) - training_stats.report('Loss/scores/real', real_logits) - training_stats.report('Loss/signs/real', real_logits.sign()) - - loss_Dreal = 0 - if phase in ['Dmain', 'Dboth']: - # -log(sigmoid(real_logits)) - loss_Dreal = torch.nn.functional.softplus(-real_logits) - training_stats.report( - 'Loss/D/loss', loss_Dgen + loss_Dreal) - - loss_Dr1 = 0 - if phase in ['Dreg', 'Dboth']: - with torch.autograd.profiler.record_function('r1_grads'), conv2d_gradfix.no_weight_gradients(): - r1_grads = torch.autograd.grad(outputs=[real_logits.sum()], inputs=[ - real_img_tmp], create_graph=True, only_inputs=True)[0] - r1_penalty = r1_grads.square().sum([1, 2, 3]) - loss_Dr1 = r1_penalty * (self.r1_gamma / 2) - training_stats.report('Loss/r1_penalty', r1_penalty) - training_stats.report('Loss/D/reg', loss_Dr1) - - with torch.autograd.profiler.record_function(name + '_backward'): - (loss_Dreal + loss_Dr1).mean().mul(gain).backward() - -# ---------------------------------------------------------------------------- diff --git a/spaces/hamacojr/CAT-Seg/train_net.py b/spaces/hamacojr/CAT-Seg/train_net.py deleted file mode 100644 index cab55aad19003957fd12b5a650f75d2f360a2d36..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/train_net.py +++ /dev/null @@ -1,324 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -MaskFormer Training Script. - -This script is a simplified version of the training script in detectron2/tools. -""" -import copy -import itertools -import logging -import os -from collections import OrderedDict -from typing import Any, Dict, List, Set - -import torch - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import MetadataCatalog, build_detection_train_loader -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch -from detectron2.evaluation import CityscapesInstanceEvaluator, CityscapesSemSegEvaluator, \ - COCOEvaluator, COCOPanopticEvaluator, DatasetEvaluators, SemSegEvaluator, verify_results, \ - DatasetEvaluator - -from detectron2.projects.deeplab import add_deeplab_config, build_lr_scheduler -from detectron2.solver.build import maybe_add_gradient_clipping -from detectron2.utils.logger import setup_logger - -from detectron2.utils.file_io import PathManager -import numpy as np -from PIL import Image -import glob - -import pycocotools.mask as mask_util - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.utils.comm import all_gather, is_main_process, synchronize -import json - -# from detectron2.evaluation import SemSegGzeroEvaluator -# from mask_former.evaluation.sem_seg_evaluation_gzero import SemSegGzeroEvaluator - -class VOCbEvaluator(SemSegEvaluator): - """ - Evaluate semantic segmentation metrics. - """ - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a model. - It is a list of dicts. Each dict corresponds to an image and - contains keys like "height", "width", "file_name". - outputs: the outputs of a model. 
It is either list of semantic segmentation predictions - (Tensor [H, W]) or list of dicts with key "sem_seg" that contains semantic - segmentation prediction in the same format. - """ - for input, output in zip(inputs, outputs): - output = output["sem_seg"].argmax(dim=0).to(self._cpu_device) - pred = np.array(output, dtype=np.int) - pred[pred >= 20] = 20 - with PathManager.open(self.input_file_to_gt_file[input["file_name"]], "rb") as f: - gt = np.array(Image.open(f), dtype=np.int) - - gt[gt == self._ignore_label] = self._num_classes - - self._conf_matrix += np.bincount( - (self._num_classes + 1) * pred.reshape(-1) + gt.reshape(-1), - minlength=self._conf_matrix.size, - ).reshape(self._conf_matrix.shape) - - self._predictions.extend(self.encode_json_sem_seg(pred, input["file_name"])) - -# MaskFormer -from cat_seg import ( - DETRPanopticDatasetMapper, - MaskFormerPanopticDatasetMapper, - MaskFormerSemanticDatasetMapper, - SemanticSegmentorWithTTA, - add_cat_seg_config, -) - - -class Trainer(DefaultTrainer): - """ - Extension of the Trainer class adapted to DETR. - """ - - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each - builtin dataset. For your own dataset, you can simply create an - evaluator manually in your script and do not have to worry about the - hacky if-else logic here. - """ - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["sem_seg", "ade20k_panoptic_seg"]: - evaluator_list.append( - SemSegEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - - if evaluator_type == "sem_seg_background": - evaluator_list.append( - VOCbEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - if evaluator_type == "coco": - evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder)) - if evaluator_type in [ - "coco_panoptic_seg", - "ade20k_panoptic_seg", - "cityscapes_panoptic_seg", - ]: - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_instance": - assert ( - torch.cuda.device_count() >= comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesInstanceEvaluator(dataset_name) - if evaluator_type == "cityscapes_sem_seg": - assert ( - torch.cuda.device_count() >= comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." - return CityscapesSemSegEvaluator(dataset_name) - if evaluator_type == "cityscapes_panoptic_seg": - assert ( - torch.cuda.device_count() >= comm.get_rank() - ), "CityscapesEvaluator currently do not work with multiple machines." 
- evaluator_list.append(CityscapesSemSegEvaluator(dataset_name)) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format( - dataset_name, evaluator_type - ) - ) - elif len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - @classmethod - def build_train_loader(cls, cfg): - # Semantic segmentation dataset mapper - if cfg.INPUT.DATASET_MAPPER_NAME == "mask_former_semantic": - mapper = MaskFormerSemanticDatasetMapper(cfg, True) - # Panoptic segmentation dataset mapper - elif cfg.INPUT.DATASET_MAPPER_NAME == "mask_former_panoptic": - mapper = MaskFormerPanopticDatasetMapper(cfg, True) - # DETR-style dataset mapper for COCO panoptic segmentation - elif cfg.INPUT.DATASET_MAPPER_NAME == "detr_panoptic": - mapper = DETRPanopticDatasetMapper(cfg, True) - else: - mapper = None - return build_detection_train_loader(cfg, mapper=mapper) - - @classmethod - def build_lr_scheduler(cls, cfg, optimizer): - """ - It now calls :func:`detectron2.solver.build_lr_scheduler`. - Overwrite it if you'd like a different scheduler. - """ - return build_lr_scheduler(cfg, optimizer) - - @classmethod - def build_optimizer(cls, cfg, model): - weight_decay_norm = cfg.SOLVER.WEIGHT_DECAY_NORM - weight_decay_embed = cfg.SOLVER.WEIGHT_DECAY_EMBED - - defaults = {} - defaults["lr"] = cfg.SOLVER.BASE_LR - defaults["weight_decay"] = cfg.SOLVER.WEIGHT_DECAY - - norm_module_types = ( - torch.nn.BatchNorm1d, - torch.nn.BatchNorm2d, - torch.nn.BatchNorm3d, - torch.nn.SyncBatchNorm, - # NaiveSyncBatchNorm inherits from BatchNorm2d - torch.nn.GroupNorm, - torch.nn.InstanceNorm1d, - torch.nn.InstanceNorm2d, - torch.nn.InstanceNorm3d, - torch.nn.LayerNorm, - torch.nn.LocalResponseNorm, - ) - - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - # import ipdb; - # ipdb.set_trace() - for module_name, module in model.named_modules(): - for module_param_name, value in module.named_parameters(recurse=False): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - hyperparams = copy.copy(defaults) - if "backbone" in module_name: - hyperparams["lr"] = hyperparams["lr"] * cfg.SOLVER.BACKBONE_MULTIPLIER - if "clip_model" in module_name: - hyperparams["lr"] = hyperparams["lr"] * cfg.SOLVER.CLIP_MULTIPLIER - # for deformable detr - - if ( - "relative_position_bias_table" in module_param_name - or "absolute_pos_embed" in module_param_name - ): - print(module_param_name) - hyperparams["weight_decay"] = 0.0 - if isinstance(module, norm_module_types): - hyperparams["weight_decay"] = weight_decay_norm - if isinstance(module, torch.nn.Embedding): - hyperparams["weight_decay"] = weight_decay_embed - params.append({"params": [value], **hyperparams}) - - def maybe_add_full_model_gradient_clipping(optim): - # detectron2 doesn't have full model gradient clipping now - clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE - enable = ( - cfg.SOLVER.CLIP_GRADIENTS.ENABLED - and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model" - and clip_norm_val > 0.0 - ) - - class FullModelGradientClippingOptimizer(optim): - def step(self, closure=None): - all_params = itertools.chain(*[x["params"] for x in self.param_groups]) - torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val) - super().step(closure=closure) - - return FullModelGradientClippingOptimizer if enable else optim - - optimizer_type = cfg.SOLVER.OPTIMIZER - if optimizer_type == "SGD": - optimizer = 
maybe_add_full_model_gradient_clipping(torch.optim.SGD)( - params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM - ) - elif optimizer_type == "ADAMW": - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)( - params, cfg.SOLVER.BASE_LR - ) - else: - raise NotImplementedError(f"no optimizer type {optimizer_type}") - if not cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model": - optimizer = maybe_add_gradient_clipping(cfg, optimizer) - return optimizer - - @classmethod - def test_with_TTA(cls, cfg, model): - logger = logging.getLogger("detectron2.trainer") - # In the end of training, run an evaluation with TTA. - logger.info("Running inference with test-time augmentation ...") - model = SemanticSegmentorWithTTA(cfg, model) - evaluators = [ - cls.build_evaluator( - cfg, name, output_folder=os.path.join(cfg.OUTPUT_DIR, "inference_TTA") - ) - for name in cfg.DATASETS.TEST - ] - res = cls.test(cfg, model, evaluators) - res = OrderedDict({k + "_TTA": v for k, v in res.items()}) - return res - - -def setup(args): - """ - Create configs and perform basic setups. - """ - cfg = get_cfg() - # for poly lr schedule - add_deeplab_config(cfg) - add_cat_seg_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - # Setup logger for "mask_former" module - setup_logger(output=cfg.OUTPUT_DIR, distributed_rank=comm.get_rank(), name="mask_former") - return cfg - - -def main(args): - cfg = setup(args) - torch.set_float32_matmul_precision("high") - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - if cfg.TEST.AUG.ENABLED: - res.update(Trainer.test_with_TTA(cfg, model)) - if comm.is_main_process(): - verify_results(cfg, res) - return res - - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/hanithar/Trees/app.py b/spaces/hanithar/Trees/app.py deleted file mode 100644 index 2bc066a05095e69fbb815e05e6684180fad4370b..0000000000000000000000000000000000000000 --- a/spaces/hanithar/Trees/app.py +++ /dev/null @@ -1,20 +0,0 @@ - -__all__ = ['learn','classify_image','categories','image','label','examples','intf'] - -from fastai.vision.all import * -import gradio as gr -import timm - -learn = load_learner('model_convnext.pkl') - -categories = learn.dls.vocab -def classify_image(img): - pred,idx,probs = learn.predict(img) - return dict(zip(categories,map(float,probs))) - -image = gr.inputs.Image(shape=(256,256)) -label = gr.outputs.Label() -examples = ['apple.jpg','banyan.jpg','mango.jpg','neem.jpg'] - -intf = gr.Interface(fn=classify_image, inputs = image, outputs= label, examples = examples) -intf.launch(inline=False) \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py deleted file mode 100644 index f8b0c1084e8fa866ee9b1043bf4bc9fdd4383669..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/voc/voc_eval.py +++ 
/dev/null @@ -1,216 +0,0 @@ -# A modification version from chainercv repository. -# (See https://github.com/chainer/chainercv/blob/master/chainercv/evaluations/eval_detection_voc.py) -from __future__ import division - -import os -from collections import defaultdict -import numpy as np -from maskrcnn_benchmark.structures.bounding_box import BoxList -from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou - - -def do_voc_evaluation(dataset, predictions, output_folder, logger): - # TODO need to make the use_07_metric format available - # for the user to choose - pred_boxlists = [] - gt_boxlists = [] - for image_id, prediction in enumerate(predictions): - img_info = dataset.get_img_info(image_id) - if len(prediction) == 0: - continue - image_width = img_info["width"] - image_height = img_info["height"] - prediction = prediction.resize((image_width, image_height)) - pred_boxlists.append(prediction) - - gt_boxlist = dataset.get_groundtruth(image_id) - gt_boxlists.append(gt_boxlist) - result = eval_detection_voc( - pred_boxlists=pred_boxlists, - gt_boxlists=gt_boxlists, - iou_thresh=0.5, - use_07_metric=True, - ) - result_str = "mAP: {:.4f}\n".format(result["map"]) - for i, ap in enumerate(result["ap"]): - if i == 0: # skip background - continue - result_str += "{:<16}: {:.4f}\n".format( - dataset.map_class_id_to_class_name(i), ap - ) - logger.info(result_str) - if output_folder: - with open(os.path.join(output_folder, "result.txt"), "w") as fid: - fid.write(result_str) - return result - - -def eval_detection_voc(pred_boxlists, gt_boxlists, iou_thresh=0.5, use_07_metric=False): - """Evaluate on voc dataset. - Args: - pred_boxlists(list[BoxList]): pred boxlist, has labels and scores fields. - gt_boxlists(list[BoxList]): ground truth boxlist, has labels field. - iou_thresh: iou thresh - use_07_metric: boolean - Returns: - dict represents the results - """ - assert len(gt_boxlists) == len( - pred_boxlists - ), "Length of gt and pred lists need to be same." - prec, rec = calc_detection_voc_prec_rec( - pred_boxlists=pred_boxlists, gt_boxlists=gt_boxlists, iou_thresh=iou_thresh - ) - ap = calc_detection_voc_ap(prec, rec, use_07_metric=use_07_metric) - return {"ap": ap, "map": np.nanmean(ap)} - - -def calc_detection_voc_prec_rec(gt_boxlists, pred_boxlists, iou_thresh=0.5): - """Calculate precision and recall based on evaluation code of PASCAL VOC. - This function calculates precision and recall of - predicted bounding boxes obtained from a dataset which has :math:`N` - images. - The code is based on the evaluation code used in PASCAL VOC Challenge. 
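-    For each class, detections are sorted by descending score; a detection counts
-    as a true positive if its best-overlapping ground-truth box reaches
-    ``iou_thresh`` and has not been matched yet, and as a false positive otherwise
-    (ground-truth boxes marked "difficult" are excluded from the counts).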
- """ - n_pos = defaultdict(int) - score = defaultdict(list) - match = defaultdict(list) - for gt_boxlist, pred_boxlist in zip(gt_boxlists, pred_boxlists): - pred_bbox = pred_boxlist.bbox.numpy() - pred_label = pred_boxlist.get_field("labels").numpy() - pred_score = pred_boxlist.get_field("scores").numpy() - gt_bbox = gt_boxlist.bbox.numpy() - gt_label = gt_boxlist.get_field("labels").numpy() - gt_difficult = gt_boxlist.get_field("difficult").numpy() - - for l in np.unique(np.concatenate((pred_label, gt_label)).astype(int)): - pred_mask_l = pred_label == l - pred_bbox_l = pred_bbox[pred_mask_l] - pred_score_l = pred_score[pred_mask_l] - # sort by score - order = pred_score_l.argsort()[::-1] - pred_bbox_l = pred_bbox_l[order] - pred_score_l = pred_score_l[order] - - gt_mask_l = gt_label == l - gt_bbox_l = gt_bbox[gt_mask_l] - gt_difficult_l = gt_difficult[gt_mask_l] - - n_pos[l] += np.logical_not(gt_difficult_l).sum() - score[l].extend(pred_score_l) - - if len(pred_bbox_l) == 0: - continue - if len(gt_bbox_l) == 0: - match[l].extend((0,) * pred_bbox_l.shape[0]) - continue - - # VOC evaluation follows integer typed bounding boxes. - pred_bbox_l = pred_bbox_l.copy() - pred_bbox_l[:, 2:] += 1 - gt_bbox_l = gt_bbox_l.copy() - gt_bbox_l[:, 2:] += 1 - iou = boxlist_iou( - BoxList(pred_bbox_l, gt_boxlist.size), - BoxList(gt_bbox_l, gt_boxlist.size), - ).numpy() - gt_index = iou.argmax(axis=1) - # set -1 if there is no matching ground truth - gt_index[iou.max(axis=1) < iou_thresh] = -1 - del iou - - selec = np.zeros(gt_bbox_l.shape[0], dtype=bool) - for gt_idx in gt_index: - if gt_idx >= 0: - if gt_difficult_l[gt_idx]: - match[l].append(-1) - else: - if not selec[gt_idx]: - match[l].append(1) - else: - match[l].append(0) - selec[gt_idx] = True - else: - match[l].append(0) - - n_fg_class = max(n_pos.keys()) + 1 - prec = [None] * n_fg_class - rec = [None] * n_fg_class - - for l in n_pos.keys(): - score_l = np.array(score[l]) - match_l = np.array(match[l], dtype=np.int8) - - order = score_l.argsort()[::-1] - match_l = match_l[order] - - tp = np.cumsum(match_l == 1) - fp = np.cumsum(match_l == 0) - - # If an element of fp + tp is 0, - # the corresponding element of prec[l] is nan. - prec[l] = tp / (fp + tp) - # If n_pos[l] is 0, rec[l] is None. - if n_pos[l] > 0: - rec[l] = tp / n_pos[l] - - return prec, rec - - -def calc_detection_voc_ap(prec, rec, use_07_metric=False): - """Calculate average precisions based on evaluation code of PASCAL VOC. - This function calculates average precisions - from given precisions and recalls. - The code is based on the evaluation code used in PASCAL VOC Challenge. - Args: - prec (list of numpy.array): A list of arrays. - :obj:`prec[l]` indicates precision for class :math:`l`. - If :obj:`prec[l]` is :obj:`None`, this function returns - :obj:`numpy.nan` for class :math:`l`. - rec (list of numpy.array): A list of arrays. - :obj:`rec[l]` indicates recall for class :math:`l`. - If :obj:`rec[l]` is :obj:`None`, this function returns - :obj:`numpy.nan` for class :math:`l`. - use_07_metric (bool): Whether to use PASCAL VOC 2007 evaluation metric - for calculating average precision. The default value is - :obj:`False`. - Returns: - ~numpy.ndarray: - This function returns an array of average precisions. - The :math:`l`-th value corresponds to the average precision - for class :math:`l`. If :obj:`prec[l]` or :obj:`rec[l]` is - :obj:`None`, the corresponding value is set to :obj:`numpy.nan`. 
- """ - - n_fg_class = len(prec) - ap = np.empty(n_fg_class) - for l in range(n_fg_class): - if prec[l] is None or rec[l] is None: - ap[l] = np.nan - continue - - if use_07_metric: - # 11 point metric - ap[l] = 0 - for t in np.arange(0.0, 1.1, 0.1): - if np.sum(rec[l] >= t) == 0: - p = 0 - else: - p = np.max(np.nan_to_num(prec[l])[rec[l] >= t]) - ap[l] += p / 11 - else: - # correct AP calculation - # first append sentinel values at the end - mpre = np.concatenate(([0], np.nan_to_num(prec[l]), [0])) - mrec = np.concatenate(([0], rec[l], [1])) - - mpre = np.maximum.accumulate(mpre[::-1])[::-1] - - # to calculate area under PR curve, look for points - # where X axis (recall) changes value - i = np.where(mrec[1:] != mrec[:-1])[0] - - # and sum (\Delta recall) * prec - ap[l] = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) - - return ap diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/metric_logger.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/metric_logger.py deleted file mode 100644 index e1eec73f2e14b57ced85568b96538c4d7afff4e2..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/metric_logger.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -from collections import defaultdict -from collections import deque - -import torch -import time -from datetime import datetime -from .comm import is_main_process - - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20): - self.deque = deque(maxlen=window_size) - # self.series = [] - self.total = 0.0 - self.count = 0 - - def update(self, value): - self.deque.append(value) - # self.series.append(value) - self.count += 1 - if value != value: - value = 0 - self.total += value - - @property - def median(self): - d = torch.tensor(list(self.deque)) - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque)) - return d.mean().item() - - @property - def global_avg(self): - return self.total / self.count - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError("'{}' object has no attribute '{}'".format( - type(self).__name__, attr)) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append( - "{}: {:.4f} ({:.4f})".format(name, meter.median, meter.global_avg) - ) - return self.delimiter.join(loss_str) - - -# haotian added tensorboard support -class TensorboardLogger(MetricLogger): - def __init__(self, - log_dir, - start_iter=0, - delimiter='\t' - ): - super(TensorboardLogger, self).__init__(delimiter) - self.iteration = start_iter - self.writer = 
self._get_tensorboard_writer(log_dir) - - @staticmethod - def _get_tensorboard_writer(log_dir): - try: - from tensorboardX import SummaryWriter - except ImportError: - raise ImportError( - 'To use tensorboard please install tensorboardX ' - '[ pip install tensorflow tensorboardX ].' - ) - - if is_main_process(): - # timestamp = datetime.fromtimestamp(time.time()).strftime('%Y%m%d-%H:%M') - tb_logger = SummaryWriter('{}'.format(log_dir)) - return tb_logger - else: - return None - - def update(self, **kwargs): - super(TensorboardLogger, self).update(**kwargs) - if self.writer: - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.writer.add_scalar(k, v, self.iteration) - - self.iteration += 1 diff --git a/spaces/hardon-server/space-diffusion-txt2img-1-5/app.py b/spaces/hardon-server/space-diffusion-txt2img-1-5/app.py deleted file mode 100644 index 90388d79566b47da88e2d317dc99783e372465da..0000000000000000000000000000000000000000 --- a/spaces/hardon-server/space-diffusion-txt2img-1-5/app.py +++ /dev/null @@ -1,41 +0,0 @@ -import gradio as gr - -examples = [ - [ - 'The spirit of a tamagotchi wandering in the city of Paris', -# 4, -# 45, -# 7.5, -# 1024, - ], - [ - 'A delicious ceviche cheesecake slice', -# 4, -# 45, -# 7, -# 1024, - ], - [ - 'A pao de queijo foodcart in front of a japanese castle', -# 4, -# 45, -# 7, -# 1024, - ], - [ - 'alone in the amusement park by Edward Hopper', -# 4, -# 45, -# 7, -# 1024, - ], - [ - "A large cabin on top of a sunny mountain in the style of Dreamworks, artstation", -# 4, -# 45, -# 7, -# 1024, - ], -] - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5", title=" ", examples=examples).launch(debug=False, show_error=False) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/collect_env.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/collect_env.py deleted file mode 100644 index c25b99cb0ab626cc4f4dabca5eb81f710011f2e3..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/utils/collect_env.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import importlib -import numpy as np -import os -import re -import subprocess -import sys -from collections import defaultdict -import PIL -import torch -import torchvision -from tabulate import tabulate - -__all__ = ["collect_env_info"] - - -def collect_torch_env(): - try: - import torch.__config__ - - return torch.__config__.show() - except ImportError: - # compatible with older versions of pytorch - from torch.utils.collect_env import get_pretty_env_info - - return get_pretty_env_info() - - -def get_env_module(): - var_name = "DETECTRON2_ENV_MODULE" - return var_name, os.environ.get(var_name, "") - - -def detect_compute_compatibility(CUDA_HOME, so_file): - try: - cuobjdump = os.path.join(CUDA_HOME, "bin", "cuobjdump") - if os.path.isfile(cuobjdump): - output = subprocess.check_output( - "'{}' --list-elf '{}'".format(cuobjdump, so_file), shell=True - ) - output = output.decode("utf-8").strip().split("\n") - sm = [] - for line in output: - line = re.findall(r"\.sm_[0-9]*\.", line)[0] - sm.append(line.strip(".")) - sm = sorted(set(sm)) - return ", ".join(sm) - else: - return so_file + "; cannot find cuobjdump" - except Exception: - # unhandled failure - return so_file - - -def collect_env_info(): - has_cuda = torch.cuda.is_available() - # NOTE: the use of CUDA_HOME requires the CUDA build deps, though in - # theory detectron2 should be made runnable with only the CUDA runtime - from torch.utils.cpp_extension import CUDA_HOME - - data = [] - data.append(("sys.platform", sys.platform)) - data.append(("Python", sys.version.replace("\n", ""))) - data.append(("numpy", np.__version__)) - - try: - import detectron2 # noqa - - data.append( - ("detectron2", detectron2.__version__ + " @" + os.path.dirname(detectron2.__file__)) - ) - except ImportError: - data.append(("detectron2", "failed to import")) - else: - try: - from detectron2 import _C - except ImportError: - data.append(("detectron2._C", "failed to import")) - else: - data.append(("detectron2 compiler", _C.get_compiler_version())) - data.append(("detectron2 CUDA compiler", _C.get_cuda_version())) - if has_cuda: - data.append( - ("detectron2 arch flags", detect_compute_compatibility(CUDA_HOME, _C.__file__)) - ) - - data.append(get_env_module()) - data.append(("PyTorch", torch.__version__ + " @" + os.path.dirname(torch.__file__))) - data.append(("PyTorch debug build", torch.version.debug)) - - data.append(("CUDA available", has_cuda)) - if has_cuda: - devices = defaultdict(list) - for k in range(torch.cuda.device_count()): - devices[torch.cuda.get_device_name(k)].append(str(k)) - for name, devids in devices.items(): - data.append(("GPU " + ",".join(devids), name)) - - from torch.utils.cpp_extension import CUDA_HOME - - data.append(("CUDA_HOME", str(CUDA_HOME))) - - if CUDA_HOME is not None and os.path.isdir(CUDA_HOME): - try: - nvcc = os.path.join(CUDA_HOME, "bin", "nvcc") - nvcc = subprocess.check_output("'{}' -V | tail -n1".format(nvcc), shell=True) - nvcc = nvcc.decode("utf-8").strip() - except subprocess.SubprocessError: - nvcc = "Not Available" - data.append(("NVCC", nvcc)) - - cuda_arch_list = os.environ.get("TORCH_CUDA_ARCH_LIST", None) - if cuda_arch_list: - data.append(("TORCH_CUDA_ARCH_LIST", cuda_arch_list)) - data.append(("Pillow", PIL.__version__)) - - try: - data.append( - ( - "torchvision", - str(torchvision.__version__) + " @" + os.path.dirname(torchvision.__file__), - ) - ) - if has_cuda: - try: - torchvision_C = importlib.util.find_spec("torchvision._C").origin - msg = 
detect_compute_compatibility(CUDA_HOME, torchvision_C) - data.append(("torchvision arch flags", msg)) - except ImportError: - data.append(("torchvision._C", "failed to find")) - except AttributeError: - data.append(("torchvision", "unknown")) - - try: - import fvcore - - data.append(("fvcore", fvcore.__version__)) - except ImportError: - pass - - try: - import cv2 - - data.append(("cv2", cv2.__version__)) - except ImportError: - pass - env_str = tabulate(data) + "\n" - env_str += collect_torch_env() - return env_str - - -if __name__ == "__main__": - try: - import detectron2 # noqa - except ImportError: - print(collect_env_info()) - else: - from detectron2.utils.collect_env import collect_env_info - - print(collect_env_info()) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tensormask/config.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tensormask/config.py deleted file mode 100644 index 44479f211811bd4060c6afef9ed86791b0dcd0d4..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/tensormask/config.py +++ /dev/null @@ -1,50 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from detectron2.config import CfgNode as CN - - -def add_tensormask_config(cfg): - """ - Add config for TensorMask. - """ - cfg.MODEL.TENSOR_MASK = CN() - - # Anchor parameters - cfg.MODEL.TENSOR_MASK.IN_FEATURES = ["p2", "p3", "p4", "p5", "p6", "p7"] - - # Convolutions to use in the towers - cfg.MODEL.TENSOR_MASK.NUM_CONVS = 4 - - # Number of foreground classes. 
- cfg.MODEL.TENSOR_MASK.NUM_CLASSES = 80 - # Channel size for the classification tower - cfg.MODEL.TENSOR_MASK.CLS_CHANNELS = 256 - - cfg.MODEL.TENSOR_MASK.SCORE_THRESH_TEST = 0.05 - # Only the top (1000 * #levels) candidate boxes across all levels are - # considered jointly during test (to improve speed) - cfg.MODEL.TENSOR_MASK.TOPK_CANDIDATES_TEST = 6000 - cfg.MODEL.TENSOR_MASK.NMS_THRESH_TEST = 0.5 - - # Box parameters - # Channel size for the box tower - cfg.MODEL.TENSOR_MASK.BBOX_CHANNELS = 128 - # Weights on (dx, dy, dw, dh) - cfg.MODEL.TENSOR_MASK.BBOX_REG_WEIGHTS = (1.5, 1.5, 0.75, 0.75) - - # Loss parameters - cfg.MODEL.TENSOR_MASK.FOCAL_LOSS_GAMMA = 3.0 - cfg.MODEL.TENSOR_MASK.FOCAL_LOSS_ALPHA = 0.3 - - # Mask parameters - # Channel size for the mask tower - cfg.MODEL.TENSOR_MASK.MASK_CHANNELS = 128 - # Mask loss weight - cfg.MODEL.TENSOR_MASK.MASK_LOSS_WEIGHT = 2.0 - # weight on positive pixels within the mask - cfg.MODEL.TENSOR_MASK.POSITIVE_WEIGHT = 1.5 - # Whether to predict in the aligned representation - cfg.MODEL.TENSOR_MASK.ALIGNED_ON = False - # Whether to use the bipyramid architecture - cfg.MODEL.TENSOR_MASK.BIPYRAMID_ON = False diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/src/inplace_abn.cpp b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/src/inplace_abn.cpp deleted file mode 100644 index 0a6b1128cc20cbfc476134154e23e5869a92b856..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/modules/src/inplace_abn.cpp +++ /dev/null @@ -1,95 +0,0 @@ -#include - -#include - -#include "inplace_abn.h" - -std::vector mean_var(at::Tensor x) { - if (x.is_cuda()) { - if (x.type().scalarType() == at::ScalarType::Half) { - return mean_var_cuda_h(x); - } else { - return mean_var_cuda(x); - } - } else { - return mean_var_cpu(x); - } -} - -at::Tensor forward(at::Tensor x, at::Tensor mean, at::Tensor var, at::Tensor weight, at::Tensor bias, - bool affine, float eps) { - if (x.is_cuda()) { - if (x.type().scalarType() == at::ScalarType::Half) { - return forward_cuda_h(x, mean, var, weight, bias, affine, eps); - } else { - return forward_cuda(x, mean, var, weight, bias, affine, eps); - } - } else { - return forward_cpu(x, mean, var, weight, bias, affine, eps); - } -} - -std::vector edz_eydz(at::Tensor z, at::Tensor dz, at::Tensor weight, at::Tensor bias, - bool affine, float eps) { - if (z.is_cuda()) { - if (z.type().scalarType() == at::ScalarType::Half) { - return edz_eydz_cuda_h(z, dz, weight, bias, affine, eps); - } else { - return edz_eydz_cuda(z, dz, weight, bias, affine, eps); - } - } else { - return edz_eydz_cpu(z, dz, weight, bias, affine, eps); - } -} - -at::Tensor backward(at::Tensor z, at::Tensor dz, at::Tensor var, at::Tensor weight, at::Tensor bias, - at::Tensor edz, at::Tensor eydz, bool affine, float eps) { - if (z.is_cuda()) { - if (z.type().scalarType() == at::ScalarType::Half) { - return backward_cuda_h(z, dz, var, weight, bias, edz, eydz, affine, eps); - } else { - return backward_cuda(z, dz, var, weight, bias, edz, eydz, affine, eps); - } - } else { - return backward_cpu(z, dz, var, weight, bias, edz, eydz, affine, eps); - } -} - -void leaky_relu_forward(at::Tensor z, float slope) { - at::leaky_relu_(z, slope); -} - -void leaky_relu_backward(at::Tensor z, at::Tensor dz, float slope) { - if (z.is_cuda()) { - if (z.type().scalarType() == at::ScalarType::Half) { - return leaky_relu_backward_cuda_h(z, dz, slope); - } else { 
- return leaky_relu_backward_cuda(z, dz, slope); - } - } else { - return leaky_relu_backward_cpu(z, dz, slope); - } -} - -void elu_forward(at::Tensor z) { - at::elu_(z); -} - -void elu_backward(at::Tensor z, at::Tensor dz) { - if (z.is_cuda()) { - return elu_backward_cuda(z, dz); - } else { - return elu_backward_cpu(z, dz); - } -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("mean_var", &mean_var, "Mean and variance computation"); - m.def("forward", &forward, "In-place forward computation"); - m.def("edz_eydz", &edz_eydz, "First part of backward computation"); - m.def("backward", &backward, "Second part of backward computation"); - m.def("leaky_relu_forward", &leaky_relu_forward, "Leaky relu forward computation"); - m.def("leaky_relu_backward", &leaky_relu_backward, "Leaky relu backward computation and inversion"); - m.def("elu_forward", &elu_forward, "Elu forward computation"); - m.def("elu_backward", &elu_backward, "Elu backward computation and inversion"); -} diff --git a/spaces/haya44433/anything-v3.0/app.py b/spaces/haya44433/anything-v3.0/app.py deleted file mode 100644 index 62c8768d6f448b1a0387eaa5d551f3743ebd9462..0000000000000000000000000000000000000000 --- a/spaces/haya44433/anything-v3.0/app.py +++ /dev/null @@ -1,276 +0,0 @@ -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil - -start_time = time.time() -is_colab = utils.is_google_colab() - -class Model: - def __init__(self, name, path="", prefix=""): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - -models = [ - Model("anything v3", "Linaqruf/anything-v3.0", "anything v3 style"), - ] - # Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), - # Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), - # Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), - # Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy ") - #Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""), - #Model("Pony Diffusion", "AstraliteHeart/pony-diffusion", ""), - #Model("Robo Diffusion", "nousr/robo-diffusion", ""), - -scheduler = DPMSolverMultistepScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - trained_betas=None, - predict_epsilon=True, - thresholding=False, - algorithm_type="dpmsolver++", - solver_type="midpoint", - lower_order_final=True, -) - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = "txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained(current_model.path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - -else: # download all models - print(f"{datetime.datetime.now()} Downloading vae...") - vae = AutoencoderKL.from_pretrained(current_model.path, subfolder="vae", torch_dtype=torch.float16) - for model in models: - try: - print(f"{datetime.datetime.now()} Downloading {model.name} model...") - unet = UNet2DConditionModel.from_pretrained(model.path, subfolder="unet", torch_dtype=torch.float16) - model.pipe_t2i = StableDiffusionPipeline.from_pretrained(model.path, 
unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler) - model.pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained(model.path, unet=unet, vae=vae, torch_dtype=torch.float16, scheduler=scheduler) - except Exception as e: - print(f"{datetime.datetime.now()} Failed to load model " + model.name + ": " + str(e)) - models.remove(model) - pipe = models[0].pipe_t2i - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - -device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - -def on_model_change(model_name): - - prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), None) + "\" is prefixed automatically" if model_name != models[0].name else "Don't forget to use the custom model prefix in the prompt!" - - return gr.update(visible = model_name == models[0].name), gr.update(placeholder=prefix) - -def inference(model_name, prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""): - - print(psutil.virtual_memory()) # print memory usage - - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - - try: - if img is not None: - return img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(model_path, prompt, neg_prompt, guidance, steps, width, height, generator): - - print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = pipe.to("cpu") - pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt = neg_prompt, - # num_images_per_prompt=n_images, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def img_to_img(model_path, prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - print(f"{datetime.datetime.now()} img_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained(current_model_path, torch_dtype=torch.float16, scheduler=scheduler, safety_checker=lambda images, clip_input: (images, False)) - else: - pipe = pipe.to("cpu") - pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio 
= min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe( - prompt, - negative_prompt = neg_prompt, - # num_images_per_prompt=n_images, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return replace_nsfw_images(result) - -def replace_nsfw_images(results): - - if is_colab: - return results.images[0] - - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images[0] - -css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-              <div class="finetuned-diffusion-div">
-                <div>
-                  <h1>Anything V3</h1>
-                </div>
-                <p>Demo for Anything V3</p>
-                <p>You can skip the queue by duplicating this space: Duplicate Space</p>
-              </div>
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True) - gr.HTML("
    Custom models have to be downloaded first, so give it some time.
    ") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - - image_out = gr.Image(height=512) - # gallery = gr.Gallery( - # label="Generated images", show_label=False, elem_id="gallery" - # ).style(grid=[1], height="auto") - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - - # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - - inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - ex = gr.Examples([ - [models[0].name, "iron man", 7.5, 50], - - ], inputs=[model_name, prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False) - - gr.HTML(""" -
-              <div>
-                <p>Model by Linaqruf</p>
-              </div>
    - """) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -if not is_colab: - demo.queue(concurrency_count=1) -demo.launch(debug=is_colab, share=is_colab) \ No newline at end of file diff --git a/spaces/hbestm/gpt-academic-play/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/hbestm/gpt-academic-play/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index e46a4c01e804aa4b649bd40af6c13d5981c873d4..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -name: Feature request -about: Suggest an idea for this project -title: '' -labels: '' -assignees: '' - ---- - - diff --git a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/aws/userdata.sh b/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/aws/userdata.sh deleted file mode 100644 index 5fc1332ac1b0d1794cf8f8c5f6918059ae5dc381..0000000000000000000000000000000000000000 --- a/spaces/hca97/Mosquito-Detection/my_models/torch_hub_cache/yolov5/utils/aws/userdata.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -# AWS EC2 instance startup script https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html -# This script will run only once on first instance start (for a re-start script see mime.sh) -# /home/ubuntu (ubuntu) or /home/ec2-user (amazon-linux) is working dir -# Use >300 GB SSD - -cd home/ubuntu -if [ ! -d yolov5 ]; then - echo "Running first-time script." # install dependencies, download COCO, pull Docker - git clone https://github.com/ultralytics/yolov5 -b master && sudo chmod -R 777 yolov5 - cd yolov5 - bash data/scripts/get_coco.sh && echo "COCO done." & - sudo docker pull ultralytics/yolov5:latest && echo "Docker done." & - python -m pip install --upgrade pip && pip install -r requirements.txt && python detect.py && echo "Requirements done." & - wait && echo "All tasks done." # finish background tasks -else - echo "Running re-start script." # resume interrupted runs - i=0 - list=$(sudo docker ps -qa) # container list i.e. $'one\ntwo\nthree\nfour' - while IFS= read -r id; do - ((i++)) - echo "restarting container $i: $id" - sudo docker start $id - # sudo docker exec -it $id python train.py --resume # single-GPU - sudo docker exec -d $id python utils/aws/resume.py # multi-scenario - done <<<"$list" -fi diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_momentum09in2D.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_momentum09in2D.py deleted file mode 100644 index 83ffbec64ec86260209ae7673cf2b5f09256218e..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_momentum09in2D.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - - -import torch -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 - - -class nnUNetTrainerV2_momentum09in2D(nnUNetTrainerV2): - def initialize_optimizer_and_scheduler(self): - if self.threeD: - momentum = 0.99 - else: - momentum = 0.9 - assert self.network is not None, "self.initialize_network must be called first" - self.optimizer = torch.optim.SGD(self.network.parameters(), self.initial_lr, weight_decay=self.weight_decay, - momentum=momentum, nesterov=True) - self.lr_scheduler = None diff --git a/spaces/hrdtbs/rvc-mochinoa/vc_infer_pipeline.py b/spaces/hrdtbs/rvc-mochinoa/vc_infer_pipeline.py deleted file mode 100644 index 7ff98b2c812f4e74afe92048fb26009fb008479d..0000000000000000000000000000000000000000 --- a/spaces/hrdtbs/rvc-mochinoa/vc_infer_pipeline.py +++ /dev/null @@ -1,320 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, 
f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - 
lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/huak95/personaGPT_custom/README.md b/spaces/huak95/personaGPT_custom/README.md deleted file mode 100644 index 1241ed14fd0df4c3e842c9b0c5c27041ec2e308d..0000000000000000000000000000000000000000 --- a/spaces/huak95/personaGPT_custom/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Poet Cat -emoji: 🐱 -colorFrom: purple -colorTo: yellow -sdk: docker -app_port: 7860 -pinned: false -license: mit -duplicated_from: ngxson/poet-cat ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huggingface/data-measurements-tool/cache_dir/glue_rte_train_sentence2/zipf_fig.html b/spaces/huggingface/data-measurements-tool/cache_dir/glue_rte_train_sentence2/zipf_fig.html deleted file mode 100644 index 56a6689e8a5746baf921130a304e10a31eeba312..0000000000000000000000000000000000000000 --- a/spaces/huggingface/data-measurements-tool/cache_dir/glue_rte_train_sentence2/zipf_fig.html +++ /dev/null @@ -1,64 +0,0 @@ - - - -
    -
    - - \ No newline at end of file diff --git a/spaces/iccv23-diffusers-demo/sdxl/app.py b/spaces/iccv23-diffusers-demo/sdxl/app.py deleted file mode 100644 index af3215e052e1ab5b363455c4ad2b58e17f55fe08..0000000000000000000000000000000000000000 --- a/spaces/iccv23-diffusers-demo/sdxl/app.py +++ /dev/null @@ -1,348 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import random - -import gradio as gr -import numpy as np -import PIL.Image -import torch -from diffusers import DiffusionPipeline - -DESCRIPTION = "# SD-XL" -if not torch.cuda.is_available(): - DESCRIPTION += "\n

<p>Running on CPU 🥶 This demo does not work on CPU.</p>

    " - -MAX_SEED = np.iinfo(np.int32).max -CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv("CACHE_EXAMPLES") == "1" -MAX_IMAGE_SIZE = int(os.getenv("MAX_IMAGE_SIZE", "1024")) -USE_TORCH_COMPILE = os.getenv("USE_TORCH_COMPILE") == "1" -ENABLE_CPU_OFFLOAD = os.getenv("ENABLE_CPU_OFFLOAD") == "1" -ENABLE_REFINER = os.getenv("ENABLE_REFINER", "1") == "1" - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") -if torch.cuda.is_available(): - pipe = DiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-xl-base-1.0", - torch_dtype=torch.float16, - use_safetensors=True, - variant="fp16", - ) - if ENABLE_REFINER: - refiner = DiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-xl-refiner-1.0", - torch_dtype=torch.float16, - use_safetensors=True, - variant="fp16", - ) - - if ENABLE_CPU_OFFLOAD: - pipe.enable_model_cpu_offload() - if ENABLE_REFINER: - refiner.enable_model_cpu_offload() - else: - pipe.to(device) - if ENABLE_REFINER: - refiner.to(device) - - if USE_TORCH_COMPILE: - pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) - if ENABLE_REFINER: - refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) -else: - pipe = None - refiner = None - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - - -def generate( - prompt: str, - negative_prompt: str = "", - prompt_2: str = "", - negative_prompt_2: str = "", - use_negative_prompt: bool = False, - use_prompt_2: bool = False, - use_negative_prompt_2: bool = False, - seed: int = 0, - width: int = 1024, - height: int = 1024, - guidance_scale_base: float = 5.0, - guidance_scale_refiner: float = 5.0, - num_inference_steps_base: int = 50, - num_inference_steps_refiner: int = 50, - apply_refiner: bool = False, -) -> PIL.Image.Image: - generator = torch.Generator().manual_seed(seed) - - if not use_negative_prompt: - negative_prompt = None # type: ignore - if not use_prompt_2: - prompt_2 = None # type: ignore - if not use_negative_prompt_2: - negative_prompt_2 = None # type: ignore - - if not apply_refiner: - return pipe( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type="pil", - ).images[0] - else: - latents = pipe( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - width=width, - height=height, - guidance_scale=guidance_scale_base, - num_inference_steps=num_inference_steps_base, - generator=generator, - output_type="latent", - ).images - image = refiner( - prompt=prompt, - negative_prompt=negative_prompt, - prompt_2=prompt_2, - negative_prompt_2=negative_prompt_2, - guidance_scale=guidance_scale_refiner, - num_inference_steps=num_inference_steps_refiner, - image=latents, - generator=generator, - ).images[0] - return image - - -examples = [ - "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", - "An astronaut riding a green horse", -] - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Group(): - with gr.Row(): - prompt = gr.Text( - label="Prompt", - show_label=False, - max_lines=1, - 
placeholder="Enter your prompt", - container=False, - ) - run_button = gr.Button("Run", scale=0) - result = gr.Image(label="Result", show_label=False) - with gr.Accordion("Advanced options", open=False): - with gr.Row(): - use_negative_prompt = gr.Checkbox(label="Use negative prompt", value=False) - use_prompt_2 = gr.Checkbox(label="Use prompt 2", value=False) - use_negative_prompt_2 = gr.Checkbox(label="Use negative prompt 2", value=False) - negative_prompt = gr.Text( - label="Negative prompt", - max_lines=1, - placeholder="Enter a negative prompt", - visible=False, - ) - prompt_2 = gr.Text( - label="Prompt 2", - max_lines=1, - placeholder="Enter your prompt", - visible=False, - ) - negative_prompt_2 = gr.Text( - label="Negative prompt 2", - max_lines=1, - placeholder="Enter a negative prompt", - visible=False, - ) - - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - with gr.Row(): - width = gr.Slider( - label="Width", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - height = gr.Slider( - label="Height", - minimum=256, - maximum=MAX_IMAGE_SIZE, - step=32, - value=1024, - ) - apply_refiner = gr.Checkbox(label="Apply refiner", value=False, visible=ENABLE_REFINER) - with gr.Row(): - guidance_scale_base = gr.Slider( - label="Guidance scale for base", - minimum=1, - maximum=20, - step=0.1, - value=5.0, - ) - num_inference_steps_base = gr.Slider( - label="Number of inference steps for base", - minimum=10, - maximum=100, - step=1, - value=50, - ) - with gr.Row(visible=False) as refiner_params: - guidance_scale_refiner = gr.Slider( - label="Guidance scale for refiner", - minimum=1, - maximum=20, - step=0.1, - value=5.0, - ) - num_inference_steps_refiner = gr.Slider( - label="Number of inference steps for refiner", - minimum=10, - maximum=100, - step=1, - value=50, - ) - - gr.Examples( - examples=examples, - inputs=prompt, - outputs=result, - fn=generate, - cache_examples=CACHE_EXAMPLES, - ) - - use_negative_prompt.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt, - outputs=negative_prompt, - queue=False, - api_name=False, - ) - use_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_prompt_2, - outputs=prompt_2, - queue=False, - api_name=False, - ) - use_negative_prompt_2.change( - fn=lambda x: gr.update(visible=x), - inputs=use_negative_prompt_2, - outputs=negative_prompt_2, - queue=False, - api_name=False, - ) - apply_refiner.change( - fn=lambda x: gr.update(visible=x), - inputs=apply_refiner, - outputs=refiner_params, - queue=False, - api_name=False, - ) - - inputs = [ - prompt, - negative_prompt, - prompt_2, - negative_prompt_2, - use_negative_prompt, - use_prompt_2, - use_negative_prompt_2, - seed, - width, - height, - guidance_scale_base, - guidance_scale_refiner, - num_inference_steps_base, - num_inference_steps_refiner, - apply_refiner, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name="run", - ) - negative_prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name=False, - ) - prompt_2.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - 
outputs=result, - api_name=False, - ) - negative_prompt_2.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name=False, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - api_name=False, - ).then( - fn=generate, - inputs=inputs, - outputs=result, - api_name=False, - ) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/ikechan8370/vits-uma-genshin-honkai/text/cleaners.py b/spaces/ikechan8370/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/ikechan8370/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if 
re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if idescargar sibelius 6 portable

Download https://gohhs.com/2uz4lK



    - -sibelius portable, sibelius portable free download, sibelius portable mega, sibelius portable descargar, avid sibelius portable, sibelius 8.2 portable, ... Sibelius 8.5 download free via torrent for windows 7 from torrent. -All versions of Sibelius 8.5, 8.0, 7.5, 7.4, 7.2 as well as 8.1, 8.2 portable, 8.1 and 8.2 rus with ... -Sibelius is a music writing program. -Download Sibelius. -Sibelius (Sibelius) - a program for ... -Sibelius (Sibelius) free download. -Software for writing music compositions. -Download Sibelius free. -A program for writing and playing music (music editing, music manipulation ... 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Carewell Ecg 1103g Manuall.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Carewell Ecg 1103g Manuall.md deleted file mode 100644 index 11299dd05f59d9af0e1ce08e9afbefee6a749f47..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Carewell Ecg 1103g Manuall.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Carewell Ecg 1103g Manuall


    Download ✪✪✪ https://urlin.us/2uExYv



    -
-Carewell Ecg 1103g Manuall. 24 December 2019 … carewell manual, carewell ecg 1101 manual, carewell ecg 1103 manual, carewell ecg 1103 service ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Konduit Osrs Download [CRACKED].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Konduit Osrs Download [CRACKED].md deleted file mode 100644 index c6a165f7b19c74e38c36f1f75d314885e542e5e4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Konduit Osrs Download [CRACKED].md +++ /dev/null @@ -1,24 +0,0 @@ -

    Konduit Osrs Download


Download https://urlin.us/2uExRz



    -
    -But then nothing since. - -Those cats on the right are Koffin1 and the Koffin2. Koffin1 has been active on the Discord since 2017. He has been very active on the discord and has updated the site sporadically but has not released a release since 2017 and been quiet for about 6 months. - -The last two cats on the left side of the banner are the lv1n3us and the lv3n3us. lv1n3us has been quite active for a while but has not had the heart to actually release a release but they have been quite active on the discord. - -We are super excited to get this site back up. We will be working for the next few weeks to bring back the site as soon as possible. We’re always here to answer questions and work on the site. Feel free to join the discord and ask us anything. - -Once again, we would like to thank our sponsor Xeric for helping to bring this site back up. Xeric is a brand new site for CSS level 3 and is written in all manner of languages (Even the platform it runs on (Framework)! There is no hosting and they give you a unique URL and space to develop on. They also have some great support if you need it. If you want to check them out visit - -Xeric is an all-around great company and we are so happy to have them as a sponsor. We hope you enjoy the site once it comes back online! - -If you have questions regarding this site or Xeric then you can always leave a comment below or contact us via the discord. - -Thanks for reading! - --The Eranos.The present invention relates to a game of skill and a method of playing a game of skill, particularly to a game of skill in which the speed and timing of a player's responses is a determinative factor of the outcome of the game. - -Generally, skill games are known, particularly games involving skill in which, for example, players must use a calculator to determine if a sum of a given number of whole numbers is a multiple of a given number. For example, there are games in which the players have to try to determine if a given sum of a given number of whole numbers is a multiple of a given number. For example, in U.S. Pat. No 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Blaupunkt-Code-Uni-V3-0exe.md b/spaces/inreVtussa/clothingai/Blaupunkt-Code-Uni-V3-0exe.md deleted file mode 100644 index 6e12200c3140ed32567209cd4c3d61402027971c..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Blaupunkt-Code-Uni-V3-0exe.md +++ /dev/null @@ -1,114 +0,0 @@ -## Blaupunkt Code Uni V3 0.exe - - - - - - ![Blaupunkt Code Uni V3 0.exe](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTkDa1NxvlKAPy36ZvcEcALr0oH2bvQaaRG2KwEl9fUZZibEwMA_g4P) - - - - - -**Blaupunkt Code Uni V3 0.exe ===== [https://hendmulrelan.blogspot.com/?d=2tycFY](https://hendmulrelan.blogspot.com/?d=2tycFY)** - - - - - - - - - - - - - -# How to Unlock Your Blaupunkt Car Radio with Blaupunkt Code Uni V3 0.exe - - - -If you have a Blaupunkt car radio that is locked and requires a code to activate it, you may be wondering how to get the code and unlock your radio. One way to do this is to use a software called Blaupunkt Code Uni V3 0.exe, which can generate the code for your radio based on its serial number. In this article, we will show you how to use Blaupunkt Code Uni V3 0.exe to unlock your Blaupunkt car radio. - - - -## What is Blaupunkt Code Uni V3 0.exe? - - - -Blaupunkt Code Uni V3 0.exe is a software that can generate the code for your Blaupunkt car radio based on its serial number. It is compatible with most Blaupunkt car radios, such as TravelPilot, RNS149, RNS150, RCD200, RCD300, RCD500, and more. It is easy to use and does not require any installation or registration. - - - -## How to use Blaupunkt Code Uni V3 0.exe? - - - -To use Blaupunkt Code Uni V3 0.exe, you will need the following: - - - -- A computer with Windows operating system - -- A USB flash drive or a CD - -- The serial number of your Blaupunkt car radio - -- The software Blaupunkt Code Uni V3 0.exe, which you can download from [here](https://netalinuterpe.wixsite.com/patotathe/post/blaupunkt-code-uni-v3-0-20) - - - -Once you have everything ready, follow these steps: - - - -1. Copy the software Blaupunkt Code Uni V3 0.exe to your USB flash drive or CD - -2. Insert the USB flash drive or CD into your computer and run the software - -3. Select your Blaupunkt car radio model from the list - -4. Enter the serial number of your Blaupunkt car radio in the box - -5. Click on "Generate" and wait for the software to calculate the code - -6. Write down the code that appears on the screen - -7. Remove the USB flash drive or CD from your computer - -8. Turn on your Blaupunkt car radio and enter the code using the buttons on the radio - -9. Enjoy your unlocked Blaupunkt car radio! - - - -## Tips and warnings - - - -Here are some tips and warnings to keep in mind when using Blaupunkt Code Uni V3 0.exe: - - - -- Make sure you enter the correct serial number of your Blaupunkt car radio. You can find it on a sticker on the side or back of the radio. It usually starts with "BP" followed by 12 digits. - -- Do not enter any wrong codes more than three times, as this may permanently lock your Blaupunkt car radio. - -- Blaupunkt Code Uni V3 0.exe is not an official product of Blaupunkt GmbH. It is a third-party software that may not work for all Blaupunkt car radios. Use it at your own risk. - -- Blaupunkt GmbH is a German manufacturer of mostly car audio equipment. It was owned by Robert Bosch GmbH from 1933 until 2009, when it was sold to Aurelius AG of Germany. It filed for bankruptcy in late 2015 with liquidation proceedings completed in early 2016[^6^]. 
- - - -## Conclusion - - - -Blaupunkt Code Uni V3 0.exe is a software that can generate the code for your Blaupunkt car radio based on its serial number. It is easy to use and does not require any installation or registration. However, it is not an official product of Blaupunkt GmbH and may not work - - dfd1c89656 - - - - - diff --git a/spaces/iqbalc/Speech-to-text-demo/README.md b/spaces/iqbalc/Speech-to-text-demo/README.md deleted file mode 100644 index 81819eb34490db17f19f47ed0a69ea18be9fdd69..0000000000000000000000000000000000000000 --- a/spaces/iqbalc/Speech-to-text-demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Speech To Text Demo -emoji: 🏢 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/training/training_loop.py b/spaces/james-oldfield/PandA/networks/stylegan3/training/training_loop.py deleted file mode 100644 index ddd0c15e226b0436048fee4469341e3fb653c71b..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/training/training_loop.py +++ /dev/null @@ -1,427 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Main training loop.""" - -import os -import time -import copy -import json -import pickle -import psutil -import PIL.Image -import numpy as np -import torch -import dnnlib -from torch_utils import misc -from torch_utils import training_stats -from torch_utils.ops import conv2d_gradfix -from torch_utils.ops import grid_sample_gradfix - -import legacy -from metrics import metric_main - -#---------------------------------------------------------------------------- - -def setup_snapshot_image_grid(training_set, random_seed=0): - rnd = np.random.RandomState(random_seed) - gw = np.clip(7680 // training_set.image_shape[2], 7, 32) - gh = np.clip(4320 // training_set.image_shape[1], 4, 32) - - # No labels => show random subset of training samples. - if not training_set.has_labels: - all_indices = list(range(len(training_set))) - rnd.shuffle(all_indices) - grid_indices = [all_indices[i % len(all_indices)] for i in range(gw * gh)] - - else: - # Group training samples by label. - label_groups = dict() # label => [idx, ...] - for idx in range(len(training_set)): - label = tuple(training_set.get_details(idx).raw_label.flat[::-1]) - if label not in label_groups: - label_groups[label] = [] - label_groups[label].append(idx) - - # Reorder. - label_order = sorted(label_groups.keys()) - for label in label_order: - rnd.shuffle(label_groups[label]) - - # Organize into grid. - grid_indices = [] - for y in range(gh): - label = label_order[y % len(label_order)] - indices = label_groups[label] - grid_indices += [indices[x % len(indices)] for x in range(gw)] - label_groups[label] = [indices[(i + gw) % len(indices)] for i in range(len(indices))] - - # Load data. 
- images, labels = zip(*[training_set[i] for i in grid_indices]) - return (gw, gh), np.stack(images), np.stack(labels) - -#---------------------------------------------------------------------------- - -def save_image_grid(img, fname, drange, grid_size): - lo, hi = drange - img = np.asarray(img, dtype=np.float32) - img = (img - lo) * (255 / (hi - lo)) - img = np.rint(img).clip(0, 255).astype(np.uint8) - - gw, gh = grid_size - _N, C, H, W = img.shape - img = img.reshape([gh, gw, C, H, W]) - img = img.transpose(0, 3, 1, 4, 2) - img = img.reshape([gh * H, gw * W, C]) - - assert C in [1, 3] - if C == 1: - PIL.Image.fromarray(img[:, :, 0], 'L').save(fname) - if C == 3: - PIL.Image.fromarray(img, 'RGB').save(fname) - -#---------------------------------------------------------------------------- - -def training_loop( - run_dir = '.', # Output directory. - training_set_kwargs = {}, # Options for training set. - data_loader_kwargs = {}, # Options for torch.utils.data.DataLoader. - G_kwargs = {}, # Options for generator network. - D_kwargs = {}, # Options for discriminator network. - G_opt_kwargs = {}, # Options for generator optimizer. - D_opt_kwargs = {}, # Options for discriminator optimizer. - augment_kwargs = None, # Options for augmentation pipeline. None = disable. - loss_kwargs = {}, # Options for loss function. - metrics = [], # Metrics to evaluate during training. - random_seed = 0, # Global random seed. - num_gpus = 1, # Number of GPUs participating in the training. - rank = 0, # Rank of the current process in [0, num_gpus[. - batch_size = 4, # Total batch size for one training iteration. Can be larger than batch_gpu * num_gpus. - batch_gpu = 4, # Number of samples processed at a time by one GPU. - ema_kimg = 10, # Half-life of the exponential moving average (EMA) of generator weights. - ema_rampup = 0.05, # EMA ramp-up coefficient. None = no rampup. - G_reg_interval = None, # How often to perform regularization for G? None = disable lazy regularization. - D_reg_interval = 16, # How often to perform regularization for D? None = disable lazy regularization. - augment_p = 0, # Initial value of augmentation probability. - ada_target = None, # ADA target value. None = fixed p. - ada_interval = 4, # How often to perform ADA adjustment? - ada_kimg = 500, # ADA adjustment speed, measured in how many kimg it takes for p to increase/decrease by one unit. - total_kimg = 25000, # Total length of the training, measured in thousands of real images. - kimg_per_tick = 4, # Progress snapshot interval. - image_snapshot_ticks = 50, # How often to save image snapshots? None = disable. - network_snapshot_ticks = 50, # How often to save network snapshots? None = disable. - resume_pkl = None, # Network pickle to resume training from. - resume_kimg = 0, # First kimg to report when resuming training. - cudnn_benchmark = True, # Enable torch.backends.cudnn.benchmark? - abort_fn = None, # Callback function for determining whether to abort training. Must return consistent results across ranks. - progress_fn = None, # Callback function for updating training progress. Called for all ranks. -): - # Initialize. - start_time = time.time() - device = torch.device('cuda', rank) - np.random.seed(random_seed * num_gpus + rank) - torch.manual_seed(random_seed * num_gpus + rank) - torch.backends.cudnn.benchmark = cudnn_benchmark # Improves training speed. - torch.backends.cuda.matmul.allow_tf32 = False # Improves numerical accuracy. - torch.backends.cudnn.allow_tf32 = False # Improves numerical accuracy. 
- conv2d_gradfix.enabled = True # Improves training speed. - grid_sample_gradfix.enabled = True # Avoids errors with the augmentation pipe. - - # Load training set. - if rank == 0: - print('Loading training set...') - training_set = dnnlib.util.construct_class_by_name(**training_set_kwargs) # subclass of training.dataset.Dataset - training_set_sampler = misc.InfiniteSampler(dataset=training_set, rank=rank, num_replicas=num_gpus, seed=random_seed) - training_set_iterator = iter(torch.utils.data.DataLoader(dataset=training_set, sampler=training_set_sampler, batch_size=batch_size//num_gpus, **data_loader_kwargs)) - if rank == 0: - print() - print('Num images: ', len(training_set)) - print('Image shape:', training_set.image_shape) - print('Label shape:', training_set.label_shape) - print() - - # Construct networks. - if rank == 0: - print('Constructing networks...') - common_kwargs = dict(c_dim=training_set.label_dim, img_resolution=training_set.resolution, img_channels=training_set.num_channels) - G = dnnlib.util.construct_class_by_name(**G_kwargs, **common_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module - D = dnnlib.util.construct_class_by_name(**D_kwargs, **common_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module - G_ema = copy.deepcopy(G).eval() - - # Resume from existing pickle. - if (resume_pkl is not None) and (rank == 0): - print(f'Resuming from "{resume_pkl}"') - with dnnlib.util.open_url(resume_pkl) as f: - resume_data = legacy.load_network_pkl(f) - for name, module in [('G', G), ('D', D), ('G_ema', G_ema)]: - misc.copy_params_and_buffers(resume_data[name], module, require_all=False) - - # Print network summary tables. - if rank == 0: - z = torch.empty([batch_gpu, G.z_dim], device=device) - c = torch.empty([batch_gpu, G.c_dim], device=device) - img = misc.print_module_summary(G, [z, c]) - misc.print_module_summary(D, [img, c]) - - # Setup augmentation. - if rank == 0: - print('Setting up augmentation...') - augment_pipe = None - ada_stats = None - if (augment_kwargs is not None) and (augment_p > 0 or ada_target is not None): - augment_pipe = dnnlib.util.construct_class_by_name(**augment_kwargs).train().requires_grad_(False).to(device) # subclass of torch.nn.Module - augment_pipe.p.copy_(torch.as_tensor(augment_p)) - if ada_target is not None: - ada_stats = training_stats.Collector(regex='Loss/signs/real') - - # Distribute across GPUs. - if rank == 0: - print(f'Distributing across {num_gpus} GPUs...') - for module in [G, D, G_ema, augment_pipe]: - if module is not None and num_gpus > 1: - for param in misc.params_and_buffers(module): - torch.distributed.broadcast(param, src=0) - - # Setup training phases. - if rank == 0: - print('Setting up training phases...') - loss = dnnlib.util.construct_class_by_name(device=device, G=G, D=D, augment_pipe=augment_pipe, **loss_kwargs) # subclass of training.loss.Loss - phases = [] - for name, module, opt_kwargs, reg_interval in [('G', G, G_opt_kwargs, G_reg_interval), ('D', D, D_opt_kwargs, D_reg_interval)]: - if reg_interval is None: - opt = dnnlib.util.construct_class_by_name(params=module.parameters(), **opt_kwargs) # subclass of torch.optim.Optimizer - phases += [dnnlib.EasyDict(name=name+'both', module=module, opt=opt, interval=1)] - else: # Lazy regularization. 
- mb_ratio = reg_interval / (reg_interval + 1) - opt_kwargs = dnnlib.EasyDict(opt_kwargs) - opt_kwargs.lr = opt_kwargs.lr * mb_ratio - opt_kwargs.betas = [beta ** mb_ratio for beta in opt_kwargs.betas] - opt = dnnlib.util.construct_class_by_name(module.parameters(), **opt_kwargs) # subclass of torch.optim.Optimizer - phases += [dnnlib.EasyDict(name=name+'main', module=module, opt=opt, interval=1)] - phases += [dnnlib.EasyDict(name=name+'reg', module=module, opt=opt, interval=reg_interval)] - for phase in phases: - phase.start_event = None - phase.end_event = None - if rank == 0: - phase.start_event = torch.cuda.Event(enable_timing=True) - phase.end_event = torch.cuda.Event(enable_timing=True) - - # Export sample images. - grid_size = None - grid_z = None - grid_c = None - if rank == 0: - print('Exporting sample images...') - grid_size, images, labels = setup_snapshot_image_grid(training_set=training_set) - save_image_grid(images, os.path.join(run_dir, 'reals.png'), drange=[0,255], grid_size=grid_size) - grid_z = torch.randn([labels.shape[0], G.z_dim], device=device).split(batch_gpu) - grid_c = torch.from_numpy(labels).to(device).split(batch_gpu) - images = torch.cat([G_ema(z=z, c=c, noise_mode='const').cpu() for z, c in zip(grid_z, grid_c)]).numpy() - save_image_grid(images, os.path.join(run_dir, 'fakes_init.png'), drange=[-1,1], grid_size=grid_size) - - # Initialize logs. - if rank == 0: - print('Initializing logs...') - stats_collector = training_stats.Collector(regex='.*') - stats_metrics = dict() - stats_jsonl = None - stats_tfevents = None - if rank == 0: - stats_jsonl = open(os.path.join(run_dir, 'stats.jsonl'), 'wt') - try: - import torch.utils.tensorboard as tensorboard - stats_tfevents = tensorboard.SummaryWriter(run_dir) - except ImportError as err: - print('Skipping tfevents export:', err) - - # Train. - if rank == 0: - print(f'Training for {total_kimg} kimg...') - print() - cur_nimg = resume_kimg * 1000 - cur_tick = 0 - tick_start_nimg = cur_nimg - tick_start_time = time.time() - maintenance_time = tick_start_time - start_time - batch_idx = 0 - if progress_fn is not None: - progress_fn(0, total_kimg) - while True: - - # Fetch training data. - with torch.autograd.profiler.record_function('data_fetch'): - phase_real_img, phase_real_c = next(training_set_iterator) - phase_real_img = (phase_real_img.to(device).to(torch.float32) / 127.5 - 1).split(batch_gpu) - phase_real_c = phase_real_c.to(device).split(batch_gpu) - all_gen_z = torch.randn([len(phases) * batch_size, G.z_dim], device=device) - all_gen_z = [phase_gen_z.split(batch_gpu) for phase_gen_z in all_gen_z.split(batch_size)] - all_gen_c = [training_set.get_label(np.random.randint(len(training_set))) for _ in range(len(phases) * batch_size)] - all_gen_c = torch.from_numpy(np.stack(all_gen_c)).pin_memory().to(device) - all_gen_c = [phase_gen_c.split(batch_gpu) for phase_gen_c in all_gen_c.split(batch_size)] - - # Execute training phases. - for phase, phase_gen_z, phase_gen_c in zip(phases, all_gen_z, all_gen_c): - if batch_idx % phase.interval != 0: - continue - if phase.start_event is not None: - phase.start_event.record(torch.cuda.current_stream(device)) - - # Accumulate gradients. 
- phase.opt.zero_grad(set_to_none=True) - phase.module.requires_grad_(True) - for real_img, real_c, gen_z, gen_c in zip(phase_real_img, phase_real_c, phase_gen_z, phase_gen_c): - loss.accumulate_gradients(phase=phase.name, real_img=real_img, real_c=real_c, gen_z=gen_z, gen_c=gen_c, gain=phase.interval, cur_nimg=cur_nimg) - phase.module.requires_grad_(False) - - # Update weights. - with torch.autograd.profiler.record_function(phase.name + '_opt'): - params = [param for param in phase.module.parameters() if param.grad is not None] - if len(params) > 0: - flat = torch.cat([param.grad.flatten() for param in params]) - if num_gpus > 1: - torch.distributed.all_reduce(flat) - flat /= num_gpus - misc.nan_to_num(flat, nan=0, posinf=1e5, neginf=-1e5, out=flat) - grads = flat.split([param.numel() for param in params]) - for param, grad in zip(params, grads): - param.grad = grad.reshape(param.shape) - phase.opt.step() - - # Phase done. - if phase.end_event is not None: - phase.end_event.record(torch.cuda.current_stream(device)) - - # Update G_ema. - with torch.autograd.profiler.record_function('Gema'): - ema_nimg = ema_kimg * 1000 - if ema_rampup is not None: - ema_nimg = min(ema_nimg, cur_nimg * ema_rampup) - ema_beta = 0.5 ** (batch_size / max(ema_nimg, 1e-8)) - for p_ema, p in zip(G_ema.parameters(), G.parameters()): - p_ema.copy_(p.lerp(p_ema, ema_beta)) - for b_ema, b in zip(G_ema.buffers(), G.buffers()): - b_ema.copy_(b) - - # Update state. - cur_nimg += batch_size - batch_idx += 1 - - # Execute ADA heuristic. - if (ada_stats is not None) and (batch_idx % ada_interval == 0): - ada_stats.update() - adjust = np.sign(ada_stats['Loss/signs/real'] - ada_target) * (batch_size * ada_interval) / (ada_kimg * 1000) - augment_pipe.p.copy_((augment_pipe.p + adjust).max(misc.constant(0, device=device))) - - # Perform maintenance tasks once per tick. - done = (cur_nimg >= total_kimg * 1000) - if (not done) and (cur_tick != 0) and (cur_nimg < tick_start_nimg + kimg_per_tick * 1000): - continue - - # Print status line, accumulating the same information in training_stats. 
- tick_end_time = time.time() - fields = [] - fields += [f"tick {training_stats.report0('Progress/tick', cur_tick):<5d}"] - fields += [f"kimg {training_stats.report0('Progress/kimg', cur_nimg / 1e3):<8.1f}"] - fields += [f"time {dnnlib.util.format_time(training_stats.report0('Timing/total_sec', tick_end_time - start_time)):<12s}"] - fields += [f"sec/tick {training_stats.report0('Timing/sec_per_tick', tick_end_time - tick_start_time):<7.1f}"] - fields += [f"sec/kimg {training_stats.report0('Timing/sec_per_kimg', (tick_end_time - tick_start_time) / (cur_nimg - tick_start_nimg) * 1e3):<7.2f}"] - fields += [f"maintenance {training_stats.report0('Timing/maintenance_sec', maintenance_time):<6.1f}"] - fields += [f"cpumem {training_stats.report0('Resources/cpu_mem_gb', psutil.Process(os.getpid()).memory_info().rss / 2**30):<6.2f}"] - fields += [f"gpumem {training_stats.report0('Resources/peak_gpu_mem_gb', torch.cuda.max_memory_allocated(device) / 2**30):<6.2f}"] - fields += [f"reserved {training_stats.report0('Resources/peak_gpu_mem_reserved_gb', torch.cuda.max_memory_reserved(device) / 2**30):<6.2f}"] - torch.cuda.reset_peak_memory_stats() - fields += [f"augment {training_stats.report0('Progress/augment', float(augment_pipe.p.cpu()) if augment_pipe is not None else 0):.3f}"] - training_stats.report0('Timing/total_hours', (tick_end_time - start_time) / (60 * 60)) - training_stats.report0('Timing/total_days', (tick_end_time - start_time) / (24 * 60 * 60)) - if rank == 0: - print(' '.join(fields)) - - # Check for abort. - if (not done) and (abort_fn is not None) and abort_fn(): - done = True - if rank == 0: - print() - print('Aborting...') - - # Save image snapshot. - if (rank == 0) and (image_snapshot_ticks is not None) and (done or cur_tick % image_snapshot_ticks == 0): - images = torch.cat([G_ema(z=z, c=c, noise_mode='const').cpu() for z, c in zip(grid_z, grid_c)]).numpy() - save_image_grid(images, os.path.join(run_dir, f'fakes{cur_nimg//1000:06d}.png'), drange=[-1,1], grid_size=grid_size) - - # Save network snapshot. - snapshot_pkl = None - snapshot_data = None - if (network_snapshot_ticks is not None) and (done or cur_tick % network_snapshot_ticks == 0): - snapshot_data = dict(G=G, D=D, G_ema=G_ema, augment_pipe=augment_pipe, training_set_kwargs=dict(training_set_kwargs)) - for key, value in snapshot_data.items(): - if isinstance(value, torch.nn.Module): - value = copy.deepcopy(value).eval().requires_grad_(False) - if num_gpus > 1: - misc.check_ddp_consistency(value, ignore_regex=r'.*\.[^.]+_(avg|ema)') - for param in misc.params_and_buffers(value): - torch.distributed.broadcast(param, src=0) - snapshot_data[key] = value.cpu() - del value # conserve memory - snapshot_pkl = os.path.join(run_dir, f'network-snapshot-{cur_nimg//1000:06d}.pkl') - if rank == 0: - with open(snapshot_pkl, 'wb') as f: - pickle.dump(snapshot_data, f) - - # Evaluate metrics. - if (snapshot_data is not None) and (len(metrics) > 0): - if rank == 0: - print('Evaluating metrics...') - for metric in metrics: - result_dict = metric_main.calc_metric(metric=metric, G=snapshot_data['G_ema'], - dataset_kwargs=training_set_kwargs, num_gpus=num_gpus, rank=rank, device=device) - if rank == 0: - metric_main.report_metric(result_dict, run_dir=run_dir, snapshot_pkl=snapshot_pkl) - stats_metrics.update(result_dict.results) - del snapshot_data # conserve memory - - # Collect statistics. 
- for phase in phases: - value = [] - if (phase.start_event is not None) and (phase.end_event is not None): - phase.end_event.synchronize() - value = phase.start_event.elapsed_time(phase.end_event) - training_stats.report0('Timing/' + phase.name, value) - stats_collector.update() - stats_dict = stats_collector.as_dict() - - # Update logs. - timestamp = time.time() - if stats_jsonl is not None: - fields = dict(stats_dict, timestamp=timestamp) - stats_jsonl.write(json.dumps(fields) + '\n') - stats_jsonl.flush() - if stats_tfevents is not None: - global_step = int(cur_nimg / 1e3) - walltime = timestamp - start_time - for name, value in stats_dict.items(): - stats_tfevents.add_scalar(name, value.mean, global_step=global_step, walltime=walltime) - for name, value in stats_metrics.items(): - stats_tfevents.add_scalar(f'Metrics/{name}', value, global_step=global_step, walltime=walltime) - stats_tfevents.flush() - if progress_fn is not None: - progress_fn(cur_nimg // 1000, total_kimg) - - # Update state. - cur_tick += 1 - tick_start_nimg = cur_nimg - tick_start_time = time.time() - maintenance_time = tick_start_time - tick_end_time - if done: - break - - # Done. - if rank == 0: - print() - print('Exiting...') - -#---------------------------------------------------------------------------- diff --git a/spaces/javakhangnguyen/Object-Remove/README.md b/spaces/javakhangnguyen/Object-Remove/README.md deleted file mode 100644 index fe19898cabac7a393215e57625fbae53bb6197f8..0000000000000000000000000000000000000000 --- a/spaces/javakhangnguyen/Object-Remove/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Remove Photo Object -emoji: ⚡ -colorFrom: pink -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -python_version: 3.9.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/jbraun19/Webcam-Object-Recognition-Yolo-n-Coco/class_names/README.md b/spaces/jbraun19/Webcam-Object-Recognition-Yolo-n-Coco/class_names/README.md deleted file mode 100644 index 30d74d258442c7c65512eafab474568dd706c430..0000000000000000000000000000000000000000 --- a/spaces/jbraun19/Webcam-Object-Recognition-Yolo-n-Coco/class_names/README.md +++ /dev/null @@ -1 +0,0 @@ -test \ No newline at end of file diff --git a/spaces/jeffistyping/Youtube-Whisperer/README.md b/spaces/jeffistyping/Youtube-Whisperer/README.md deleted file mode 100644 index 7facf79e0faf3a9f48a16b87a2682c7c0c97c7ba..0000000000000000000000000000000000000000 --- a/spaces/jeffistyping/Youtube-Whisperer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Youtube Whisperer -emoji: ⚡ -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jeonchangbin49/De-limiter/eval_delimit/score_fad.py b/spaces/jeonchangbin49/De-limiter/eval_delimit/score_fad.py deleted file mode 100644 index 6b7244f6aaa7b62d7c37d6a38a2295455fc34cbd..0000000000000000000000000000000000000000 --- a/spaces/jeonchangbin49/De-limiter/eval_delimit/score_fad.py +++ /dev/null @@ -1,75 +0,0 @@ -# We are going to use FAD based on https://github.com/gudgud96/frechet-audio-distance -import os -import subprocess -import glob -import argparse - -from frechet_audio_distance import FrechetAudioDistance - -from utils import str2bool - - -parser = argparse.ArgumentParser(description="model test.py") - -parser.add_argument( - "--target", - 
type=str, - default="all", - help="target source. all, vocals, drums, bass, other", -) -parser.add_argument( - "--root", - type=str, - default="/path/to/musdb18hq_loudnorm", -) -parser.add_argument( - "--output_directory", - type=str, - default="/path/to/results", -) -parser.add_argument("--exp_name", type=str, default="delimit_6_s") -parser.add_argument( - "--calc_results", - type=str2bool, - default=True, - help="Set this True when you want to calculate the results of the test set. Set this False when calculating musdb-hq vs musdb-XL. (top row in Table 1.)", -) - -args, _ = parser.parse_known_args() - -os.makedirs(f"{args.root}/musdb_hq_loudnorm_16k_mono_link", exist_ok=True) - -song_list = glob.glob(f"{args.root}/musdb_hq_loudnorm_16k_mono/*/mixture.wav") -for song in song_list: - song_name = os.path.basename(os.path.dirname(song)) - subprocess.run( - f'ln --symbolic "{song}" "{args.root}/musdb_hq_loudnorm_16k_mono_link/{song_name}.wav"', - shell=True, - ) - - -if args.calc_results: - args.test_output_dir = f"{args.output_directory}/test/{args.exp_name}" -else: - args.test_output_dir = f"{args.output_directory}/{args.exp_name}" - -os.makedirs(f"{args.test_output_dir}_16k_mono_link", exist_ok=True) - -song_list = glob.glob(f"{args.test_output_dir}_16k_mono/*/{args.target}.wav") -for song in song_list: - song_name = os.path.basename(os.path.dirname(song)) - subprocess.run( - f'ln --symbolic "{song}" "{args.test_output_dir}_16k_mono_link/{song_name}.wav"', - shell=True, - ) - - -frechet = FrechetAudioDistance() - -fad_score = frechet.score( - f"{args.root}/musdb_hq_loudnorm_16k_mono_link", - f"{args.test_output_dir}_16k_mono_link", -) - -print(f"{args.exp_name}") -print(f"FAD score: {fad_score}") diff --git a/spaces/jiawei011/dreamgaussian/guidance/zero123_utils.py b/spaces/jiawei011/dreamgaussian/guidance/zero123_utils.py deleted file mode 100644 index b92c1c626cb955d1ac438b4a4e9955d76db89eb2..0000000000000000000000000000000000000000 --- a/spaces/jiawei011/dreamgaussian/guidance/zero123_utils.py +++ /dev/null @@ -1,226 +0,0 @@ -from transformers import CLIPTextModel, CLIPTokenizer, logging -from diffusers import ( - AutoencoderKL, - UNet2DConditionModel, - DDIMScheduler, - StableDiffusionPipeline, -) -import torchvision.transforms.functional as TF - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -import sys -sys.path.append('./') - -from zero123 import Zero123Pipeline - - -class Zero123(nn.Module): - def __init__(self, device, fp16=True, t_range=[0.02, 0.98]): - super().__init__() - - self.device = device - self.fp16 = fp16 - self.dtype = torch.float16 if fp16 else torch.float32 - - self.pipe = Zero123Pipeline.from_pretrained( - # "bennyguo/zero123-diffusers", - "bennyguo/zero123-xl-diffusers", - # './model_cache/zero123_xl', - variant="fp16_ema" if self.fp16 else None, - torch_dtype=self.dtype, - ).to(self.device) - - # for param in self.pipe.parameters(): - # param.requires_grad = False - - self.pipe.image_encoder.eval() - self.pipe.vae.eval() - self.pipe.unet.eval() - self.pipe.clip_camera_projection.eval() - - self.vae = self.pipe.vae - self.unet = self.pipe.unet - - self.pipe.set_progress_bar_config(disable=True) - - self.scheduler = DDIMScheduler.from_config(self.pipe.scheduler.config) - self.num_train_timesteps = self.scheduler.config.num_train_timesteps - - self.min_step = int(self.num_train_timesteps * t_range[0]) - self.max_step = int(self.num_train_timesteps * t_range[1]) - self.alphas = self.scheduler.alphas_cumprod.to(self.device) 
# for convenience - - self.embeddings = None - - @torch.no_grad() - def get_img_embeds(self, x): - # x: image tensor in [0, 1] - x = F.interpolate(x, (256, 256), mode='bilinear', align_corners=False) - x_pil = [TF.to_pil_image(image) for image in x] - x_clip = self.pipe.feature_extractor(images=x_pil, return_tensors="pt").pixel_values.to(device=self.device, dtype=self.dtype) - c = self.pipe.image_encoder(x_clip).image_embeds - v = self.encode_imgs(x.to(self.dtype)) / self.vae.config.scaling_factor - self.embeddings = [c, v] - - @torch.no_grad() - def refine(self, pred_rgb, polar, azimuth, radius, - guidance_scale=5, steps=50, strength=0.8, - ): - - batch_size = pred_rgb.shape[0] - - self.scheduler.set_timesteps(steps) - - if strength == 0: - init_step = 0 - latents = torch.randn((1, 4, 32, 32), device=self.device, dtype=self.dtype) - else: - init_step = int(steps * strength) - pred_rgb_256 = F.interpolate(pred_rgb, (256, 256), mode='bilinear', align_corners=False) - latents = self.encode_imgs(pred_rgb_256.to(self.dtype)) - latents = self.scheduler.add_noise(latents, torch.randn_like(latents), self.scheduler.timesteps[init_step]) - - T = np.stack([np.deg2rad(polar), np.sin(np.deg2rad(azimuth)), np.cos(np.deg2rad(azimuth)), radius], axis=-1) - T = torch.from_numpy(T).unsqueeze(1).to(self.dtype).to(self.device) # [8, 1, 4] - cc_emb = torch.cat([self.embeddings[0].repeat(batch_size, 1, 1), T], dim=-1) - cc_emb = self.pipe.clip_camera_projection(cc_emb) - cc_emb = torch.cat([cc_emb, torch.zeros_like(cc_emb)], dim=0) - - vae_emb = self.embeddings[1].repeat(batch_size, 1, 1, 1) - vae_emb = torch.cat([vae_emb, torch.zeros_like(vae_emb)], dim=0) - - for i, t in enumerate(self.scheduler.timesteps[init_step:]): - - x_in = torch.cat([latents] * 2) - t_in = torch.cat([t.view(1)] * 2).to(self.device) - - noise_pred = self.unet( - torch.cat([x_in, vae_emb], dim=1), - t_in.to(self.unet.dtype), - encoder_hidden_states=cc_emb, - ).sample - - noise_pred_cond, noise_pred_uncond = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond) - - latents = self.scheduler.step(noise_pred, t, latents).prev_sample - - imgs = self.decode_latents(latents) # [1, 3, 256, 256] - return imgs - - def train_step(self, pred_rgb, polar, azimuth, radius, step_ratio=None, guidance_scale=5, as_latent=False): - # pred_rgb: tensor [1, 3, H, W] in [0, 1] - - batch_size = pred_rgb.shape[0] - - if as_latent: - latents = F.interpolate(pred_rgb, (32, 32), mode='bilinear', align_corners=False) * 2 - 1 - else: - pred_rgb_256 = F.interpolate(pred_rgb, (256, 256), mode='bilinear', align_corners=False) - latents = self.encode_imgs(pred_rgb_256.to(self.dtype)) - - if step_ratio is not None: - # dreamtime-like - # t = self.max_step - (self.max_step - self.min_step) * np.sqrt(step_ratio) - t = np.round((1 - step_ratio) * self.num_train_timesteps).clip(self.min_step, self.max_step) - t = torch.full((batch_size,), t, dtype=torch.long, device=self.device) - else: - t = torch.randint(self.min_step, self.max_step + 1, (batch_size,), dtype=torch.long, device=self.device) - - w = (1 - self.alphas[t]).view(batch_size, 1, 1, 1) - - with torch.no_grad(): - noise = torch.randn_like(latents) - latents_noisy = self.scheduler.add_noise(latents, noise, t) - - x_in = torch.cat([latents_noisy] * 2) - t_in = torch.cat([t] * 2) - - T = np.stack([np.deg2rad(polar), np.sin(np.deg2rad(azimuth)), np.cos(np.deg2rad(azimuth)), radius], axis=-1) - T = torch.from_numpy(T).unsqueeze(1).to(self.dtype).to(self.device) # [8, 
1, 4] - cc_emb = torch.cat([self.embeddings[0].repeat(batch_size, 1, 1), T], dim=-1) - cc_emb = self.pipe.clip_camera_projection(cc_emb) - cc_emb = torch.cat([cc_emb, torch.zeros_like(cc_emb)], dim=0) - - vae_emb = self.embeddings[1].repeat(batch_size, 1, 1, 1) - vae_emb = torch.cat([vae_emb, torch.zeros_like(vae_emb)], dim=0) - - noise_pred = self.unet( - torch.cat([x_in, vae_emb], dim=1), - t_in.to(self.unet.dtype), - encoder_hidden_states=cc_emb, - ).sample - - noise_pred_cond, noise_pred_uncond = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond) - - grad = w * (noise_pred - noise) - grad = torch.nan_to_num(grad) - - target = (latents - grad).detach() - loss = 0.5 * F.mse_loss(latents.float(), target, reduction='sum') - - return loss - - - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - - imgs = self.vae.decode(latents).sample - imgs = (imgs / 2 + 0.5).clamp(0, 1) - - return imgs - - def encode_imgs(self, imgs, mode=False): - # imgs: [B, 3, H, W] - - imgs = 2 * imgs - 1 - - posterior = self.vae.encode(imgs).latent_dist - if mode: - latents = posterior.mode() - else: - latents = posterior.sample() - latents = latents * self.vae.config.scaling_factor - - return latents - - -if __name__ == '__main__': - import cv2 - import argparse - import numpy as np - import matplotlib.pyplot as plt - - parser = argparse.ArgumentParser() - - parser.add_argument('input', type=str) - parser.add_argument('--polar', type=float, default=0, help='delta polar angle in [-90, 90]') - parser.add_argument('--azimuth', type=float, default=0, help='delta azimuth angle in [-180, 180]') - parser.add_argument('--radius', type=float, default=0, help='delta camera radius multiplier in [-0.5, 0.5]') - - opt = parser.parse_args() - - device = torch.device('cuda') - - print(f'[INFO] loading image from {opt.input} ...') - image = cv2.imread(opt.input, cv2.IMREAD_UNCHANGED) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - image = cv2.resize(image, (256, 256), interpolation=cv2.INTER_AREA) - image = image.astype(np.float32) / 255.0 - image = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).contiguous().to(device) - - print(f'[INFO] loading model ...') - zero123 = Zero123(device) - - print(f'[INFO] running model ...') - zero123.get_img_embeds(image) - - while True: - outputs = zero123.refine(image, polar=[opt.polar], azimuth=[opt.azimuth], radius=[opt.radius], strength=0) - plt.imshow(outputs.float().cpu().numpy().transpose(0, 2, 3, 1)[0]) - plt.show() \ No newline at end of file diff --git a/spaces/jlmarrugom/voice_fixer_app/voicefixer/tools/__init__.py b/spaces/jlmarrugom/voice_fixer_app/voicefixer/tools/__init__.py deleted file mode 100644 index f7e3a147919b9ec10502fcb6a28204eb47979276..0000000000000000000000000000000000000000 --- a/spaces/jlmarrugom/voice_fixer_app/voicefixer/tools/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -# -*- encoding: utf-8 -*- -""" -@File : __init__.py.py -@Contact : haoheliu@gmail.com -@License : (C)Copyright 2020-2100 - -@Modify Time @Author @Version @Desciption ------------- ------- -------- ----------- -9/14/21 12:28 AM Haohe Liu 1.0 None -""" diff --git a/spaces/joaofranca13/CESAR-NN-Human-Expression-HF/app.py b/spaces/joaofranca13/CESAR-NN-Human-Expression-HF/app.py deleted file mode 100644 index ddc9e59b2028f70b416b505ef0404aca7ab6566b..0000000000000000000000000000000000000000 --- a/spaces/joaofranca13/CESAR-NN-Human-Expression-HF/app.py +++ /dev/null @@ -1,34 
+0,0 @@ -import torch -import gradio as gr -from huggingface_hub import hf_hub_download -from PIL import Image - -REPO_ID = "joaofranca13/YOLOv5-Human-Expressions" -FILENAME = "best.pt" - -yolov5_weights = hf_hub_download(repo_id=REPO_ID, filename=FILENAME) - -model = torch.hub.load('ultralytics/yolov5', 'custom', path=yolov5_weights, force_reload=True) - -def yolo(im, size=640): - #g = (size / max(im.size)) # gain - #im = im.resize((int(x * g) for x in im.size), Image.ANTIALIAS) # resize - results = model(im) # inference - results.render() # updates results.imgs with boxes and labels - return Image.fromarray(results.ims[0]) - -title = "Human Expressions Detection" -description = """This model is a small demo based on an analysis of about 300 images only. For more reliable and generic results, more examples (images) are needed. -""" - -inputs = gr.inputs.Image(shape=(640, 640), type='pil', label="Original Image") -outputs = gr.outputs.Image(type="pil", label="Output Image") - -gr.Interface( - fn=yolo, - inputs=inputs, - outputs=outputs, - title=title, - description=description, - examples=[["assets/ex1.jpg"], ["assets/ex2.jpeg"], ["assets/ex4.jpg"], ["assets/ex7.jpg"], ["assets/ex8.jpeg"], ["assets/ex9.jpg"], ["assets/ex12.jpeg"], ["assets/ex13.jpg"]] -).launch(debug=True) \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_builder.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_builder.py deleted file mode 100644 index 75370712a8c38162b167b1ee358e4adc75c6fff2..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bs4/tests/test_builder.py +++ /dev/null @@ -1,29 +0,0 @@ -import pytest -from unittest.mock import patch -from bs4.builder import DetectsXMLParsedAsHTML - -class TestDetectsXMLParsedAsHTML(object): - - @pytest.mark.parametrize( - "markup,looks_like_xml", - [("No xml declaration", False), - ("obviously HTMLActually XHTML", False), - (" < html>Tricky XHTML", False), - ("", True), - ] - ) - def test_warn_if_markup_looks_like_xml(self, markup, looks_like_xml): - # Test of our ability to guess at whether markup looks XML-ish - # _and_ not HTML-ish. 
- with patch('bs4.builder.DetectsXMLParsedAsHTML._warn') as mock: - for data in markup, markup.encode('utf8'): - result = DetectsXMLParsedAsHTML.warn_if_markup_looks_like_xml( - data - ) - assert result == looks_like_xml - if looks_like_xml: - assert mock.called - else: - assert not mock.called - mock.reset_mock() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/SMIMEA.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/SMIMEA.py deleted file mode 100644 index 55d87bf85cbe9d9f98bfddf53e2646db789742ca..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/SMIMEA.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -import dns.immutable -import dns.rdtypes.tlsabase - - -@dns.immutable.immutable -class SMIMEA(dns.rdtypes.tlsabase.TLSABase): - """SMIMEA record""" diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/base.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/base.py deleted file mode 100644 index f5cc30de3365fe4a08a28f1e95a9bdd2b17e09a0..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/base.py +++ /dev/null @@ -1,20 +0,0 @@ -"""Base reader class.""" -from abc import abstractmethod -from typing import Any, List - -from langchain.docstore.document import Document as LCDocument - -from gpt_index.readers.schema.base import Document - - -class BaseReader: - """Utilities for loading data from a directory.""" - - @abstractmethod - def load_data(self, *args: Any, **load_kwargs: Any) -> List[Document]: - """Load data from the input directory.""" - - def load_langchain_documents(self, **load_kwargs: Any) -> List[LCDocument]: - """Load data in LangChain document format.""" - docs = self.load_data(**load_kwargs) - return [d.to_langchain_format() for d in docs] diff --git a/spaces/juancopi81/multitrack-midi-music-generator/main.py b/spaces/juancopi81/multitrack-midi-music-generator/main.py deleted file mode 100644 index 2d9188d55bbaebad3518f93ba926213ff923d881..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/multitrack-midi-music-generator/main.py +++ /dev/null @@ -1,157 +0,0 @@ -import os - -import gradio as gr - -from utils import ( - generate_song, - remove_last_instrument, - regenerate_last_instrument, - change_tempo, -) - - -os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python" - -DESCRIPTION = """ -

    🎵 Multitrack Midi Generator 🎶

    AI-driven Music Composer: Creating Music One Instrument at a Time!

    This interactive application uses an AI model to generate music sequences based on a chosen genre and various user inputs. The app constructs the piece instrument by instrument.

    Features:

    • 🎼 Select the genre for the music.
    • 🌡️ Use the "Temperature" slider to adjust the randomness of the music generated (higher values will produce more random outputs).
    • ⏱️ Adjust the "Tempo" slider to change the speed of the music.
    • 🎹 Use the buttons to generate a new song from scratch, continue generation with the current settings, remove the last added instrument, regenerate the last added instrument with a new one, or change the tempo of the current song.

    Outputs:

    The app outputs the following:

    • 🎧 The audio of the generated song.
    • 📁 A MIDI file of the song.
    • 📊 A plot of the song's sequence.
    • 🎸 A list of the generated instruments.
    • 📝 The text sequence of the song.

    This application is built upon the inspiring work of Dr. Tristan Behrens.
    Enjoy creating your own music!

    -""" - - -genres = ["ROCK", "POP", "OTHER", "R&B/SOUL", "JAZZ", "ELECTRONIC", "RANDOM"] - -demo = gr.Blocks() - - -def run(): - with demo: - gr.HTML(DESCRIPTION) - gr.DuplicateButton(value="Duplicate Space for private use") - with gr.Row(): - with gr.Column(): - temp = gr.Slider( - minimum=0, maximum=1, step=0.05, value=0.85, label="Temperature" - ) - genre = gr.Dropdown( - choices=genres, value="POP", label="Select the genre" - ) - with gr.Row(): - btn_from_scratch = gr.Button("🧹 Start from scratch") - btn_continue = gr.Button("➡️ Continue Generation") - btn_remove_last = gr.Button("↩️ Remove last instrument") - btn_regenerate_last = gr.Button("🔄 Regenerate last instrument") - with gr.Column(): - with gr.Box(): - audio_output = gr.Video(show_share_button=True) - midi_file = gr.File() - with gr.Row(): - qpm = gr.Slider( - minimum=60, maximum=140, step=10, value=120, label="Tempo" - ) - btn_qpm = gr.Button("Change Tempo") - with gr.Row(): - with gr.Column(): - plot_output = gr.Plot() - with gr.Column(): - instruments_output = gr.Markdown("# List of generated instruments") - with gr.Row(): - text_sequence = gr.Text() - empty_sequence = gr.Text(visible=False) - with gr.Row(): - num_tokens = gr.Text(visible=False) - btn_from_scratch.click( - fn=generate_song, - inputs=[genre, temp, empty_sequence, qpm], - outputs=[ - audio_output, - midi_file, - plot_output, - instruments_output, - text_sequence, - num_tokens, - ], - ) - btn_continue.click( - fn=generate_song, - inputs=[genre, temp, text_sequence, qpm], - outputs=[ - audio_output, - midi_file, - plot_output, - instruments_output, - text_sequence, - num_tokens, - ], - ) - btn_remove_last.click( - fn=remove_last_instrument, - inputs=[text_sequence, qpm], - outputs=[ - audio_output, - midi_file, - plot_output, - instruments_output, - text_sequence, - num_tokens, - ], - ) - btn_regenerate_last.click( - fn=regenerate_last_instrument, - inputs=[text_sequence, qpm], - outputs=[ - audio_output, - midi_file, - plot_output, - instruments_output, - text_sequence, - num_tokens, - ], - ) - btn_qpm.click( - fn=change_tempo, - inputs=[text_sequence, qpm], - outputs=[ - audio_output, - midi_file, - plot_output, - instruments_output, - text_sequence, - num_tokens, - ], - ) - - demo.launch(server_name="0.0.0.0", server_port=7860) - - -if __name__ == "__main__": - run() diff --git a/spaces/kangvcar/RealChar/client/web/src/components/Characters/style.css b/spaces/kangvcar/RealChar/client/web/src/components/Characters/style.css deleted file mode 100644 index 7fa384757d3a65f0bee1e63d4d9713c06d28a67e..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/components/Characters/style.css +++ /dev/null @@ -1,116 +0,0 @@ -.main-container -{ - flex-direction: row; -} - -.radio-buttons -{ - display: flex; - width: 100%; - margin: 0 auto; - text-align: center; -} - -.custom-radio input -{ - opacity: 0; - height: 0; - width: 0; -} - -.radio-btn -{ - margin: 8px; - width: 160px; - height: 185px; - border: 2.4px solid transparent; - display: inline-block; - border-radius: 8px; - position: relative; - text-align: center; - box-shadow: 0 0 16px #c3c3c367; - cursor: pointer; -} - -.radio-btn > i { - color: #ffffff; - background-color: #FFDAE9; - font-size: 16px; - position: absolute; - top: -12px; - left: 50%; - transform: translateX(-50%) scale(1.6); - border-radius: 40px; - padding: 2.4px; - transition: 0.5s; - pointer-events: none; - opacity: 0; -} - -.radio-btn .hobbies-icon -{ - width: 110px; - height: 110px; - position: absolute; - top: 
40%; - left: 50%; - transform: translate(-50%, -50%); -} -.radio-btn .hobbies-icon img -{ - display:block; - width:100%; - margin-bottom:16px; -} -.radio-btn .hobbies-icon i -{ - color: #FFDAE9; - line-height: 64px; - font-size: 40px; -} - -.radio-btn .hobbies-icon h4 -{ - color: rgb(214, 214, 214); - font-size: 12px; - font-weight: 300; - text-transform: uppercase; - letter-spacing:0.8px; -} - -.custom-radio input:checked + .radio-btn -{ - border: 1.6px solid #FFDAE9; -} - -.custom-radio input:checked + .radio-btn > i -{ - opacity: 1; - transform: translateX(-50%) scale(0.8); -} - -@keyframes pulse { - 0%, - 100% { - box-shadow: 0 0 0 0 rgba(173, 216, 230, 0.4); - } - 25% { - box-shadow: 0 0 0 10px rgba(173, 216, 230, 0.15); - } - 50% { - box-shadow: 0 0 0 20px rgba(173, 216, 230, 0.55); - } - 75% { - box-shadow: 0 0 0 10px rgba(173, 216, 230, 0.25); - } -} - -.pulse-animation-1 { - animation: pulse 1.5s infinite ease-in-out; - border-radius: 8px; -} - -.pulse-animation-2 { - animation: pulse 2.2s infinite ease-in-out; - border-radius: 8px; -} \ No newline at end of file diff --git a/spaces/katielink/biogpt-large-demo/README.md b/spaces/katielink/biogpt-large-demo/README.md deleted file mode 100644 index 7c1b0f8ef328cb204b5b38662de719d5e570d046..0000000000000000000000000000000000000000 --- a/spaces/katielink/biogpt-large-demo/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: BioGPT-Large Demo -emoji: 🧪 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: mit -models: [microsoft/BioGPT-Large] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/util.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/util.py deleted file mode 100644 index b916deefbb8b957ad6ab3cd7403c28513e5ae18e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/util.py +++ /dev/null @@ -1,564 +0,0 @@ -from torch import nn - -import torch.nn.functional as F -import torch - -from src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d -from src.facerender.sync_batchnorm import SynchronizedBatchNorm3d as BatchNorm3d - -import torch.nn.utils.spectral_norm as spectral_norm - - -def kp2gaussian(kp, spatial_size, kp_variance): - """ - Transform a keypoint into gaussian like representation - """ - mean = kp['value'] - - coordinate_grid = make_coordinate_grid(spatial_size, mean.type()) - number_of_leading_dimensions = len(mean.shape) - 1 - shape = (1,) * number_of_leading_dimensions + coordinate_grid.shape - coordinate_grid = coordinate_grid.view(*shape) - repeats = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 1) - coordinate_grid = coordinate_grid.repeat(*repeats) - - # Preprocess kp shape - shape = mean.shape[:number_of_leading_dimensions] + (1, 1, 1, 3) - mean = mean.view(*shape) - - mean_sub = (coordinate_grid - mean) - - out = torch.exp(-0.5 * (mean_sub ** 2).sum(-1) / kp_variance) - - return out - -def make_coordinate_grid_2d(spatial_size, type): - """ - Create a meshgrid [-1,1] x [-1,1] of given spatial_size. 
- """ - h, w = spatial_size - x = torch.arange(w).type(type) - y = torch.arange(h).type(type) - - x = (2 * (x / (w - 1)) - 1) - y = (2 * (y / (h - 1)) - 1) - - yy = y.view(-1, 1).repeat(1, w) - xx = x.view(1, -1).repeat(h, 1) - - meshed = torch.cat([xx.unsqueeze_(2), yy.unsqueeze_(2)], 2) - - return meshed - - -def make_coordinate_grid(spatial_size, type): - d, h, w = spatial_size - x = torch.arange(w).type(type) - y = torch.arange(h).type(type) - z = torch.arange(d).type(type) - - x = (2 * (x / (w - 1)) - 1) - y = (2 * (y / (h - 1)) - 1) - z = (2 * (z / (d - 1)) - 1) - - yy = y.view(1, -1, 1).repeat(d, 1, w) - xx = x.view(1, 1, -1).repeat(d, h, 1) - zz = z.view(-1, 1, 1).repeat(1, h, w) - - meshed = torch.cat([xx.unsqueeze_(3), yy.unsqueeze_(3), zz.unsqueeze_(3)], 3) - - return meshed - - -class ResBottleneck(nn.Module): - def __init__(self, in_features, stride): - super(ResBottleneck, self).__init__() - self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features//4, kernel_size=1) - self.conv2 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features//4, kernel_size=3, padding=1, stride=stride) - self.conv3 = nn.Conv2d(in_channels=in_features//4, out_channels=in_features, kernel_size=1) - self.norm1 = BatchNorm2d(in_features//4, affine=True) - self.norm2 = BatchNorm2d(in_features//4, affine=True) - self.norm3 = BatchNorm2d(in_features, affine=True) - - self.stride = stride - if self.stride != 1: - self.skip = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=1, stride=stride) - self.norm4 = BatchNorm2d(in_features, affine=True) - - def forward(self, x): - out = self.conv1(x) - out = self.norm1(out) - out = F.relu(out) - out = self.conv2(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv3(out) - out = self.norm3(out) - if self.stride != 1: - x = self.skip(x) - x = self.norm4(x) - out += x - out = F.relu(out) - return out - - -class ResBlock2d(nn.Module): - """ - Res block, preserve spatial resolution. - """ - - def __init__(self, in_features, kernel_size, padding): - super(ResBlock2d, self).__init__() - self.conv1 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.conv2 = nn.Conv2d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.norm1 = BatchNorm2d(in_features, affine=True) - self.norm2 = BatchNorm2d(in_features, affine=True) - - def forward(self, x): - out = self.norm1(x) - out = F.relu(out) - out = self.conv1(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv2(out) - out += x - return out - - -class ResBlock3d(nn.Module): - """ - Res block, preserve spatial resolution. - """ - - def __init__(self, in_features, kernel_size, padding): - super(ResBlock3d, self).__init__() - self.conv1 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.conv2 = nn.Conv3d(in_channels=in_features, out_channels=in_features, kernel_size=kernel_size, - padding=padding) - self.norm1 = BatchNorm3d(in_features, affine=True) - self.norm2 = BatchNorm3d(in_features, affine=True) - - def forward(self, x): - out = self.norm1(x) - out = F.relu(out) - out = self.conv1(out) - out = self.norm2(out) - out = F.relu(out) - out = self.conv2(out) - out += x - return out - - -class UpBlock2d(nn.Module): - """ - Upsampling block for use in decoder. 
- """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(UpBlock2d, self).__init__() - - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm2d(out_features, affine=True) - - def forward(self, x): - out = F.interpolate(x, scale_factor=2) - out = self.conv(out) - out = self.norm(out) - out = F.relu(out) - return out - -class UpBlock3d(nn.Module): - """ - Upsampling block for use in decoder. - """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(UpBlock3d, self).__init__() - - self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm3d(out_features, affine=True) - - def forward(self, x): - # out = F.interpolate(x, scale_factor=(1, 2, 2), mode='trilinear') - out = F.interpolate(x, scale_factor=(1, 2, 2)) - out = self.conv(out) - out = self.norm(out) - out = F.relu(out) - return out - - -class DownBlock2d(nn.Module): - """ - Downsampling block for use in encoder. - """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(DownBlock2d, self).__init__() - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm2d(out_features, affine=True) - self.pool = nn.AvgPool2d(kernel_size=(2, 2)) - - def forward(self, x): - out = self.conv(x) - out = self.norm(out) - out = F.relu(out) - out = self.pool(out) - return out - - -class DownBlock3d(nn.Module): - """ - Downsampling block for use in encoder. - """ - - def __init__(self, in_features, out_features, kernel_size=3, padding=1, groups=1): - super(DownBlock3d, self).__init__() - ''' - self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups, stride=(1, 2, 2)) - ''' - self.conv = nn.Conv3d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size, - padding=padding, groups=groups) - self.norm = BatchNorm3d(out_features, affine=True) - self.pool = nn.AvgPool3d(kernel_size=(1, 2, 2)) - - def forward(self, x): - out = self.conv(x) - out = self.norm(out) - out = F.relu(out) - out = self.pool(out) - return out - - -class SameBlock2d(nn.Module): - """ - Simple block, preserve spatial resolution. 
- """ - - def __init__(self, in_features, out_features, groups=1, kernel_size=3, padding=1, lrelu=False): - super(SameBlock2d, self).__init__() - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, - kernel_size=kernel_size, padding=padding, groups=groups) - self.norm = BatchNorm2d(out_features, affine=True) - if lrelu: - self.ac = nn.LeakyReLU() - else: - self.ac = nn.ReLU() - - def forward(self, x): - out = self.conv(x) - out = self.norm(out) - out = self.ac(out) - return out - - -class Encoder(nn.Module): - """ - Hourglass Encoder - """ - - def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256): - super(Encoder, self).__init__() - - down_blocks = [] - for i in range(num_blocks): - down_blocks.append(DownBlock3d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)), - min(max_features, block_expansion * (2 ** (i + 1))), - kernel_size=3, padding=1)) - self.down_blocks = nn.ModuleList(down_blocks) - - def forward(self, x): - outs = [x] - for down_block in self.down_blocks: - outs.append(down_block(outs[-1])) - return outs - - -class Decoder(nn.Module): - """ - Hourglass Decoder - """ - - def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256): - super(Decoder, self).__init__() - - up_blocks = [] - - for i in range(num_blocks)[::-1]: - in_filters = (1 if i == num_blocks - 1 else 2) * min(max_features, block_expansion * (2 ** (i + 1))) - out_filters = min(max_features, block_expansion * (2 ** i)) - up_blocks.append(UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1)) - - self.up_blocks = nn.ModuleList(up_blocks) - # self.out_filters = block_expansion - self.out_filters = block_expansion + in_features - - self.conv = nn.Conv3d(in_channels=self.out_filters, out_channels=self.out_filters, kernel_size=3, padding=1) - self.norm = BatchNorm3d(self.out_filters, affine=True) - - def forward(self, x): - out = x.pop() - # for up_block in self.up_blocks[:-1]: - for up_block in self.up_blocks: - out = up_block(out) - skip = x.pop() - out = torch.cat([out, skip], dim=1) - # out = self.up_blocks[-1](out) - out = self.conv(out) - out = self.norm(out) - out = F.relu(out) - return out - - -class Hourglass(nn.Module): - """ - Hourglass architecture. - """ - - def __init__(self, block_expansion, in_features, num_blocks=3, max_features=256): - super(Hourglass, self).__init__() - self.encoder = Encoder(block_expansion, in_features, num_blocks, max_features) - self.decoder = Decoder(block_expansion, in_features, num_blocks, max_features) - self.out_filters = self.decoder.out_filters - - def forward(self, x): - return self.decoder(self.encoder(x)) - - -class KPHourglass(nn.Module): - """ - Hourglass architecture. 
- """ - - def __init__(self, block_expansion, in_features, reshape_features, reshape_depth, num_blocks=3, max_features=256): - super(KPHourglass, self).__init__() - - self.down_blocks = nn.Sequential() - for i in range(num_blocks): - self.down_blocks.add_module('down'+ str(i), DownBlock2d(in_features if i == 0 else min(max_features, block_expansion * (2 ** i)), - min(max_features, block_expansion * (2 ** (i + 1))), - kernel_size=3, padding=1)) - - in_filters = min(max_features, block_expansion * (2 ** num_blocks)) - self.conv = nn.Conv2d(in_channels=in_filters, out_channels=reshape_features, kernel_size=1) - - self.up_blocks = nn.Sequential() - for i in range(num_blocks): - in_filters = min(max_features, block_expansion * (2 ** (num_blocks - i))) - out_filters = min(max_features, block_expansion * (2 ** (num_blocks - i - 1))) - self.up_blocks.add_module('up'+ str(i), UpBlock3d(in_filters, out_filters, kernel_size=3, padding=1)) - - self.reshape_depth = reshape_depth - self.out_filters = out_filters - - def forward(self, x): - out = self.down_blocks(x) - out = self.conv(out) - bs, c, h, w = out.shape - out = out.view(bs, c//self.reshape_depth, self.reshape_depth, h, w) - out = self.up_blocks(out) - - return out - - - -class AntiAliasInterpolation2d(nn.Module): - """ - Band-limited downsampling, for better preservation of the input signal. - """ - def __init__(self, channels, scale): - super(AntiAliasInterpolation2d, self).__init__() - sigma = (1 / scale - 1) / 2 - kernel_size = 2 * round(sigma * 4) + 1 - self.ka = kernel_size // 2 - self.kb = self.ka - 1 if kernel_size % 2 == 0 else self.ka - - kernel_size = [kernel_size, kernel_size] - sigma = [sigma, sigma] - # The gaussian kernel is the product of the - # gaussian function of each dimension. - kernel = 1 - meshgrids = torch.meshgrid( - [ - torch.arange(size, dtype=torch.float32) - for size in kernel_size - ] - ) - for size, std, mgrid in zip(kernel_size, sigma, meshgrids): - mean = (size - 1) / 2 - kernel *= torch.exp(-(mgrid - mean) ** 2 / (2 * std ** 2)) - - # Make sure sum of values in gaussian kernel equals 1. 
- kernel = kernel / torch.sum(kernel) - # Reshape to depthwise convolutional weight - kernel = kernel.view(1, 1, *kernel.size()) - kernel = kernel.repeat(channels, *[1] * (kernel.dim() - 1)) - - self.register_buffer('weight', kernel) - self.groups = channels - self.scale = scale - inv_scale = 1 / scale - self.int_inv_scale = int(inv_scale) - - def forward(self, input): - if self.scale == 1.0: - return input - - out = F.pad(input, (self.ka, self.kb, self.ka, self.kb)) - out = F.conv2d(out, weight=self.weight, groups=self.groups) - out = out[:, :, ::self.int_inv_scale, ::self.int_inv_scale] - - return out - - -class SPADE(nn.Module): - def __init__(self, norm_nc, label_nc): - super().__init__() - - self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False) - nhidden = 128 - - self.mlp_shared = nn.Sequential( - nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1), - nn.ReLU()) - self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1) - self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1) - - def forward(self, x, segmap): - normalized = self.param_free_norm(x) - segmap = F.interpolate(segmap, size=x.size()[2:], mode='nearest') - actv = self.mlp_shared(segmap) - gamma = self.mlp_gamma(actv) - beta = self.mlp_beta(actv) - out = normalized * (1 + gamma) + beta - return out - - -class SPADEResnetBlock(nn.Module): - def __init__(self, fin, fout, norm_G, label_nc, use_se=False, dilation=1): - super().__init__() - # Attributes - self.learned_shortcut = (fin != fout) - fmiddle = min(fin, fout) - self.use_se = use_se - # create conv layers - self.conv_0 = nn.Conv2d(fin, fmiddle, kernel_size=3, padding=dilation, dilation=dilation) - self.conv_1 = nn.Conv2d(fmiddle, fout, kernel_size=3, padding=dilation, dilation=dilation) - if self.learned_shortcut: - self.conv_s = nn.Conv2d(fin, fout, kernel_size=1, bias=False) - # apply spectral norm if specified - if 'spectral' in norm_G: - self.conv_0 = spectral_norm(self.conv_0) - self.conv_1 = spectral_norm(self.conv_1) - if self.learned_shortcut: - self.conv_s = spectral_norm(self.conv_s) - # define normalization layers - self.norm_0 = SPADE(fin, label_nc) - self.norm_1 = SPADE(fmiddle, label_nc) - if self.learned_shortcut: - self.norm_s = SPADE(fin, label_nc) - - def forward(self, x, seg1): - x_s = self.shortcut(x, seg1) - dx = self.conv_0(self.actvn(self.norm_0(x, seg1))) - dx = self.conv_1(self.actvn(self.norm_1(dx, seg1))) - out = x_s + dx - return out - - def shortcut(self, x, seg1): - if self.learned_shortcut: - x_s = self.conv_s(self.norm_s(x, seg1)) - else: - x_s = x - return x_s - - def actvn(self, x): - return F.leaky_relu(x, 2e-1) - -class audio2image(nn.Module): - def __init__(self, generator, kp_extractor, he_estimator_video, he_estimator_audio, train_params): - super().__init__() - # Attributes - self.generator = generator - self.kp_extractor = kp_extractor - self.he_estimator_video = he_estimator_video - self.he_estimator_audio = he_estimator_audio - self.train_params = train_params - - def headpose_pred_to_degree(self, pred): - device = pred.device - idx_tensor = [idx for idx in range(66)] - idx_tensor = torch.FloatTensor(idx_tensor).to(device) - pred = F.softmax(pred) - degree = torch.sum(pred*idx_tensor, 1) * 3 - 99 - - return degree - - def get_rotation_matrix(self, yaw, pitch, roll): - yaw = yaw / 180 * 3.14 - pitch = pitch / 180 * 3.14 - roll = roll / 180 * 3.14 - - roll = roll.unsqueeze(1) - pitch = pitch.unsqueeze(1) - yaw = yaw.unsqueeze(1) - - roll_mat = torch.cat([torch.ones_like(roll), 
torch.zeros_like(roll), torch.zeros_like(roll), - torch.zeros_like(roll), torch.cos(roll), -torch.sin(roll), - torch.zeros_like(roll), torch.sin(roll), torch.cos(roll)], dim=1) - roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3) - - pitch_mat = torch.cat([torch.cos(pitch), torch.zeros_like(pitch), torch.sin(pitch), - torch.zeros_like(pitch), torch.ones_like(pitch), torch.zeros_like(pitch), - -torch.sin(pitch), torch.zeros_like(pitch), torch.cos(pitch)], dim=1) - pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3) - - yaw_mat = torch.cat([torch.cos(yaw), -torch.sin(yaw), torch.zeros_like(yaw), - torch.sin(yaw), torch.cos(yaw), torch.zeros_like(yaw), - torch.zeros_like(yaw), torch.zeros_like(yaw), torch.ones_like(yaw)], dim=1) - yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3) - - rot_mat = torch.einsum('bij,bjk,bkm->bim', roll_mat, pitch_mat, yaw_mat) - - return rot_mat - - def keypoint_transformation(self, kp_canonical, he): - kp = kp_canonical['value'] # (bs, k, 3) - yaw, pitch, roll = he['yaw'], he['pitch'], he['roll'] - t, exp = he['t'], he['exp'] - - yaw = self.headpose_pred_to_degree(yaw) - pitch = self.headpose_pred_to_degree(pitch) - roll = self.headpose_pred_to_degree(roll) - - rot_mat = self.get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3) - - # keypoint rotation - kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp) - - - - # keypoint translation - t = t.unsqueeze_(1).repeat(1, kp.shape[1], 1) - kp_t = kp_rotated + t - - # add expression deviation - exp = exp.view(exp.shape[0], -1, 3) - kp_transformed = kp_t + exp - - return {'value': kp_transformed} - - def forward(self, source_image, target_audio): - pose_source = self.he_estimator_video(source_image) - pose_generated = self.he_estimator_audio(target_audio) - kp_canonical = self.kp_extractor(source_image) - kp_source = self.keypoint_transformation(kp_canonical, pose_source) - kp_transformed_generated = self.keypoint_transformation(kp_canonical, pose_generated) - generated = self.generator(source_image, kp_source=kp_source, kp_driving=kp_transformed_generated) - return generated \ No newline at end of file diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/speed_benchmark.md b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/speed_benchmark.md deleted file mode 100644 index 055aee0defe2c43a523ced48260242f0f99b7cea..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/docs/speed_benchmark.md +++ /dev/null @@ -1,93 +0,0 @@ -## Test Training Speed - -- Test Commands - -You need to use the following two commands to test the Partial FC training performance. -The number of identites is **3 millions** (synthetic data), turn mixed precision training on, backbone is resnet50, -batch size is 1024. 
-```shell -# Model Parallel -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions -# Partial FC 0.1 -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/3millions_pfc -``` - -- GPU Memory - -``` -# (Model Parallel) gpustat -i -[0] Tesla V100-SXM2-32GB | 64'C, 94 % | 30338 / 32510 MB -[1] Tesla V100-SXM2-32GB | 60'C, 99 % | 28876 / 32510 MB -[2] Tesla V100-SXM2-32GB | 60'C, 99 % | 28872 / 32510 MB -[3] Tesla V100-SXM2-32GB | 69'C, 99 % | 28872 / 32510 MB -[4] Tesla V100-SXM2-32GB | 66'C, 99 % | 28888 / 32510 MB -[5] Tesla V100-SXM2-32GB | 60'C, 99 % | 28932 / 32510 MB -[6] Tesla V100-SXM2-32GB | 68'C, 100 % | 28916 / 32510 MB -[7] Tesla V100-SXM2-32GB | 65'C, 99 % | 28860 / 32510 MB - -# (Partial FC 0.1) gpustat -i -[0] Tesla V100-SXM2-32GB | 60'C, 95 % | 10488 / 32510 MB │······················· -[1] Tesla V100-SXM2-32GB | 60'C, 97 % | 10344 / 32510 MB │······················· -[2] Tesla V100-SXM2-32GB | 61'C, 95 % | 10340 / 32510 MB │······················· -[3] Tesla V100-SXM2-32GB | 66'C, 95 % | 10340 / 32510 MB │······················· -[4] Tesla V100-SXM2-32GB | 65'C, 94 % | 10356 / 32510 MB │······················· -[5] Tesla V100-SXM2-32GB | 61'C, 95 % | 10400 / 32510 MB │······················· -[6] Tesla V100-SXM2-32GB | 68'C, 96 % | 10384 / 32510 MB │······················· -[7] Tesla V100-SXM2-32GB | 64'C, 95 % | 10328 / 32510 MB │······················· -``` - -- Training Speed - -```python -# (Model Parallel) trainging.log -Training: Speed 2271.33 samples/sec Loss 1.1624 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 2269.94 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 2272.67 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 2266.55 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 2272.54 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 - -# (Partial FC 0.1) trainging.log -Training: Speed 5299.56 samples/sec Loss 1.0965 LearningRate 0.2000 Epoch: 0 Global Step: 100 -Training: Speed 5296.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 150 -Training: Speed 5304.37 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 200 -Training: Speed 5274.43 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 250 -Training: Speed 5300.10 samples/sec Loss 0.0000 LearningRate 0.2000 Epoch: 0 Global Step: 300 -``` - -In this test case, Partial FC 0.1 only use1 1/3 of the GPU memory of the model parallel, -and the training speed is 2.5 times faster than the model parallel. - - -## Speed Benchmark - -1. Training speed of different parallel methods (samples/second), Tesla V100 32GB * 8. (Larger is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 4681 | 4824 | 5004 | -|250000 | 4047 | 4521 | 4976 | -|500000 | 3087 | 4013 | 4900 | -|1000000 | 2090 | 3449 | 4803 | -|1400000 | 1672 | 3043 | 4738 | -|2000000 | - | 2593 | 4626 | -|4000000 | - | 1748 | 4208 | -|5500000 | - | 1389 | 3975 | -|8000000 | - | - | 3565 | -|16000000 | - | - | 2679 | -|29000000 | - | - | 1855 | - -2. GPU memory cost of different parallel methods (GB per GPU), Tesla V100 32GB * 8. 
(Smaller is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 7358 | 5306 | 4868 | -|250000 | 9940 | 5826 | 5004 | -|500000 | 14220 | 7114 | 5202 | -|1000000 | 23708 | 9966 | 5620 | -|1400000 | 32252 | 11178 | 6056 | -|2000000 | - | 13978 | 6472 | -|4000000 | - | 23238 | 8284 | -|5500000 | - | 32188 | 9854 | -|8000000 | - | - | 12310 | -|16000000 | - | - | 19950 | -|29000000 | - | - | 32324 | diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py deleted file mode 100644 index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py +++ /dev/null @@ -1,16 +0,0 @@ -import importlib -import os.path as osp - - -def get_config(config_file): - assert config_file.startswith('configs/'), 'config file setting must start with configs/' - temp_config_name = osp.basename(config_file) - temp_module_name = osp.splitext(temp_config_name)[0] - config = importlib.import_module("configs.base") - cfg = config.config - config = importlib.import_module("configs.%s" % temp_module_name) - job_cfg = config.config - cfg.update(job_cfg) - if cfg.output is None: - cfg.output = osp.join('work_dirs', temp_module_name) - return cfg \ No newline at end of file diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/frontend.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/frontend.py deleted file mode 100644 index 32549ed050655d79be1793a9cf04d9d52644794a..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/ppg_extractor/frontend.py +++ /dev/null @@ -1,115 +0,0 @@ -import copy -from typing import Tuple -import numpy as np -import torch -from torch_complex.tensor import ComplexTensor - -from .log_mel import LogMel -from .stft import Stft - - -class DefaultFrontend(torch.nn.Module): - """Conventional frontend structure for ASR - - Stft -> WPE -> MVDR-Beamformer -> Power-spec -> Mel-Fbank -> CMVN - """ - - def __init__( - self, - fs: 16000, - n_fft: int = 1024, - win_length: int = 800, - hop_length: int = 160, - center: bool = True, - pad_mode: str = "reflect", - normalized: bool = False, - onesided: bool = True, - n_mels: int = 80, - fmin: int = None, - fmax: int = None, - htk: bool = False, - norm=1, - frontend_conf=None, #Optional[dict] = get_default_kwargs(Frontend), - kaldi_padding_mode=False, - downsample_rate: int = 1, - ): - super().__init__() - self.downsample_rate = downsample_rate - - # Deepcopy (In general, dict shouldn't be used as default arg) - frontend_conf = copy.deepcopy(frontend_conf) - - self.stft = Stft( - n_fft=n_fft, - win_length=win_length, - hop_length=hop_length, - center=center, - pad_mode=pad_mode, - normalized=normalized, - onesided=onesided, - kaldi_padding_mode=kaldi_padding_mode - ) - if frontend_conf is not None: - self.frontend = Frontend(idim=n_fft // 2 + 1, **frontend_conf) - else: - self.frontend = None - - self.logmel = LogMel( - fs=fs, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax, htk=htk, norm=norm, - ) - self.n_mels = n_mels - - def output_size(self) -> int: - return self.n_mels - - def forward( - self, input: torch.Tensor, input_lengths: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor]: - # 1. Domain-conversion: e.g. 
Stft: time -> time-freq - input_stft, feats_lens = self.stft(input, input_lengths) - - assert input_stft.dim() >= 4, input_stft.shape - # "2" refers to the real/imag parts of Complex - assert input_stft.shape[-1] == 2, input_stft.shape - - # Change torch.Tensor to ComplexTensor - # input_stft: (..., F, 2) -> (..., F) - input_stft = ComplexTensor(input_stft[..., 0], input_stft[..., 1]) - - # 2. [Option] Speech enhancement - if self.frontend is not None: - assert isinstance(input_stft, ComplexTensor), type(input_stft) - # input_stft: (Batch, Length, [Channel], Freq) - input_stft, _, mask = self.frontend(input_stft, feats_lens) - - # 3. [Multi channel case]: Select a channel - if input_stft.dim() == 4: - # h: (B, T, C, F) -> h: (B, T, F) - if self.training: - # Select 1ch randomly - ch = np.random.randint(input_stft.size(2)) - input_stft = input_stft[:, :, ch, :] - else: - # Use the first channel - input_stft = input_stft[:, :, 0, :] - - # 4. STFT -> Power spectrum - # h: ComplexTensor(B, T, F) -> torch.Tensor(B, T, F) - input_power = input_stft.real ** 2 + input_stft.imag ** 2 - - # 5. Feature transform e.g. Stft -> Log-Mel-Fbank - # input_power: (Batch, [Channel,] Length, Freq) - # -> input_feats: (Batch, Length, Dim) - input_feats, _ = self.logmel(input_power, feats_lens) - - # NOTE(sx): pad - max_len = input_feats.size(1) - if self.downsample_rate > 1 and max_len % self.downsample_rate != 0: - padding = self.downsample_rate - max_len % self.downsample_rate - # print("Logmel: ", input_feats.size()) - input_feats = torch.nn.functional.pad(input_feats, (0, 0, 0, padding), - "constant", 0) - # print("Logmel(after padding): ",input_feats.size()) - feats_lens[torch.argmax(feats_lens)] = max_len + padding - - return input_feats, feats_lens diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py deleted file mode 100644 index a0986143fa4f2bd36f5271354fe5f843f35b9e6f..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/sep_fcn_head.py +++ /dev/null @@ -1,51 +0,0 @@ -from annotator.uniformer.mmcv.cnn import DepthwiseSeparableConvModule - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class DepthwiseSeparableFCNHead(FCNHead): - """Depthwise-Separable Fully Convolutional Network for Semantic - Segmentation. - - This head is implemented according to Fast-SCNN paper. - Args: - in_channels(int): Number of output channels of FFM. - channels(int): Number of middle-stage channels in the decode head. - concat_input(bool): Whether to concatenate original decode input into - the result of several consecutive convolution layers. - Default: True. - num_classes(int): Used to determine the dimension of - final prediction tensor. - in_index(int): Correspond with 'out_indices' in FastSCNN backbone. - norm_cfg (dict | None): Config of norm layers. - align_corners (bool): align_corners argument of F.interpolate. - Default: False. - loss_decode(dict): Config of loss type and some - relevant additional options. 
- """ - - def __init__(self, **kwargs): - super(DepthwiseSeparableFCNHead, self).__init__(**kwargs) - self.convs[0] = DepthwiseSeparableConvModule( - self.in_channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - for i in range(1, self.num_convs): - self.convs[i] = DepthwiseSeparableConvModule( - self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) - - if self.concat_input: - self.conv_cat = DepthwiseSeparableConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=self.kernel_size, - padding=self.kernel_size // 2, - norm_cfg=self.norm_cfg) diff --git a/spaces/krazyxki/V-1488abed/src/logger.ts b/spaces/krazyxki/V-1488abed/src/logger.ts deleted file mode 100644 index aa2fd3b8b2d160466bd95fc73182fed7dcec6452..0000000000000000000000000000000000000000 --- a/spaces/krazyxki/V-1488abed/src/logger.ts +++ /dev/null @@ -1,6 +0,0 @@ -import pino from "pino"; -import { config } from "./config"; - -export const logger = pino({ - level: config.logLevel, -}); diff --git a/spaces/krithiksai/weather_based_on_tree_photos/app.py b/spaces/krithiksai/weather_based_on_tree_photos/app.py deleted file mode 100644 index 14b4a47cb6c5f8954b489acab58fac5e956b6502..0000000000000000000000000000000000000000 --- a/spaces/krithiksai/weather_based_on_tree_photos/app.py +++ /dev/null @@ -1,13 +0,0 @@ -import gradio as gr -from fastai.vision.all import * - - - -learn = load_learner('export1.pkl') -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_ix,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -gr.Interface(fn=predict, inputs=gr.inputs.Image(shape=(512, 512)), outputs=gr.outputs.Label(num_top_classes=3)).launch() \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/asciiTable.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/asciiTable.py deleted file mode 100644 index 6f81c526b372b268b253da47c337715e316ee4d4..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/asciiTable.py +++ /dev/null @@ -1,20 +0,0 @@ -from fontTools.misc.textTools import strjoin, tobytes, tostr -from . import DefaultTable - - -class asciiTable(DefaultTable.DefaultTable): - def toXML(self, writer, ttFont): - data = tostr(self.data) - # removing null bytes. XXX needed?? - data = data.split("\0") - data = strjoin(data) - writer.begintag("source") - writer.newline() - writer.write_noindent(data) - writer.newline() - writer.endtag("source") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - lines = strjoin(content).split("\n") - self.data = tobytes("\n".join(lines[1:-1])) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/dim/reference.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/dim/reference.py deleted file mode 100644 index ee351199c974a26c7b9f5556c5ac9e54019d70f8..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/dim/reference.py +++ /dev/null @@ -1,557 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. 
-# -# This source code is licensed under the BSD-style license found in the -# LICENSE file in the root directory of this source tree. - -# reference python implementations for C ops -import torch -from .tree_map import tree_flatten, tree_map -from .batch_tensor import _enable_layers -from . import op_properties -from functorch._C import dim as _C -DimList = _C.DimList -from functools import reduce -import operator - - -# use dict to avoid writing C++ bindings for set -pointwise = set(op_properties.pointwise) -def prod(x): - return reduce(operator.mul, x, 1) - - -def _wrap_dim(d, N, keepdim): - from . import Dim - if isinstance(d, Dim): - assert not keepdim, "cannot preserve first-class dimensions with keepdim=True" - return d - elif d >= 0: - return d - N - else: - return d - -def _dims(d, N, keepdim, single_dim): - from . import Dim - if isinstance(d, (Dim, int)): - return ltuple((_wrap_dim(d, N, keepdim),)) - assert not single_dim, f"expected a single dimension or int but found: {d}" - return ltuple(_wrap_dim(x, N, keepdim) for x in d) - -def _bind_dims_to_size(lhs_size, rhs, lhs_debug): - from . import DimensionMismatchError - not_bound = tuple((i, r) for i, r in enumerate(rhs) if not r.is_bound) - if len(not_bound) == 1: - idx, d = not_bound[0] - rhs_so_far = prod(r.size for r in rhs if r.is_bound) - if lhs_size % rhs_so_far != 0: - rhs_s = tuple('?' if not r.is_bound else str(r.size) for r in rhs) - raise DimensionMismatchError(f"inferred dimension does not evenly fit into larger dimension: {lhs_size} vs {rhs_s}") - new_size = lhs_size // rhs_so_far - d.size = new_size - elif len(not_bound) > 1: - rhs_s = tuple('?' if not r.is_bound else str(r.size) for r in rhs) - raise DimensionMismatchError(f"cannot infer the size of two dimensions at once: {rhs} with sizes {rhs_s}") - else: - rhs_size = prod(r.size for r in rhs) - if lhs_size != rhs_size: - raise DimensionMismatchError( - f"Dimension sizes to do not match ({lhs_size} != {rhs_size}) when matching {lhs_debug} to {rhs}") - -def _tensor_levels(inp): - from . import _Tensor - if isinstance(inp, _Tensor): - return inp._tensor, llist(inp._levels), inp._has_device - else: - return inp, llist(range(-inp.ndim, 0)), True - -def _match_levels(v, from_levels, to_levels): - view = [] - permute = [] - requires_view = False - size = v.size() - for t in to_levels: - try: - idx = from_levels.index(t) - permute.append(idx) - view.append(size[idx]) - except ValueError: - view.append(1) - requires_view = True - if permute != list(range(len(permute))): - v = v.permute(*permute) - if requires_view: - v = v.view(*view) - return v - - -# make a single dimension positional but do not permute it, -# used to do multi-tensor operators where the dim being acted on -# should not physically move if possible -def _positional_no_permute(self, dim, expand_dim=False): - from . import Tensor - ptensor, levels = self._tensor, llist(self._levels) - try: - idx = levels.index(dim) - except ValueError: - if not expand_dim: - raise - idx = 0 - ptensor = ptensor.expand(dim.size, *ptensor.size()) - levels.insert(0, 0) - idx_batched = 0 - for i in range(idx): - if isinstance(levels[i], int): - levels[i] -= 1 - idx_batched += 1 - levels[idx] = -idx_batched - 1 - return Tensor.from_positional(ptensor, levels, self._has_device), idx_batched - -def seq(a, b): - from . 
import Dim - if isinstance(a, Dim) != isinstance(b, Dim): - return False - if isinstance(a, Dim): - return a is b - else: - return a == b - -class isin: - def __contains__(self, item): - for x in self: - if seq(item, x): - return True - return False - - def index(self, item): - for i, x in enumerate(self): - if seq(item, x): - return i - raise ValueError - - -class llist(isin, list): - pass - -class ltuple(isin, tuple): - pass - -empty_dict = {} -@classmethod -def __torch_function__(self, orig, cls, args, kwargs=empty_dict): - from . import _Tensor, TensorLike, Tensor - from .delayed_mul_tensor import DelayedMulTensor - - if orig is torch.Tensor.__mul__: - lhs, rhs = args - if isinstance(lhs, _Tensor) and isinstance(rhs, _Tensor) and lhs.ndim == 0 and rhs.ndim == 0: - return DelayedMulTensor(lhs, rhs) - all_dims = llist() - flat_args, unflatten = tree_flatten((args, kwargs)) - device_holding_tensor = None - for f in flat_args: - if isinstance(f, _Tensor): - if f._has_device: - device_holding_tensor = f._batchtensor - for d in f.dims: - if d not in all_dims: - all_dims.append(d) - - def unwrap(t): - if isinstance(t, _Tensor): - r = t._batchtensor - if device_holding_tensor is not None and not t._has_device: - r = r.to(device=device_holding_tensor.device) - return r - return t - - if orig in pointwise: - result_levels = llist() - arg_levels = llist() - to_expand = [] - for i, f in enumerate(flat_args): - if isinstance(f, TensorLike): - ptensor, levels, _ = _tensor_levels(f) - if isinstance(f, _Tensor) and not f._has_device and device_holding_tensor is not None: - ptensor = ptensor.to(device=device_holding_tensor.device) - flat_args[i] = ptensor - for l in levels: - if l not in result_levels: - result_levels.append(l) - to_expand.append((i, levels)) - - for i, levels in to_expand: - flat_args[i] = _match_levels(flat_args[i], levels, result_levels) - args, kwargs = unflatten(flat_args) - result = orig(*args, **kwargs) - - def wrap(t): - if isinstance(t, TensorLike): - return Tensor.from_positional(t, result_levels, device_holding_tensor is not None) - return t - return tree_map(wrap, result) - else: - def wrap(t): - if isinstance(t, TensorLike): - return Tensor.from_batched(t, device_holding_tensor is not None) - return t - with _enable_layers(all_dims): - print(f"batch_tensor for {orig}") - args, kwargs = unflatten(unwrap(f) for f in flat_args) - result = orig(*args, **kwargs) - # print("END", orig) - return tree_map(wrap, result) - -def positional(self, *dims): - from . 
import Dim, Tensor - ptensor, levels = self._tensor, llist(self._levels) - flat_dims = llist() - view = [] - needs_view = False - ndim = self.ndim - for d in dims: - if isinstance(d, DimList): - flat_dims.extend(d) - view.extend(e.size for e in d) - elif isinstance(d, Dim): - flat_dims.append(d) - view.append(d.size) - elif isinstance(d, int): - d = _wrap_dim(d, ndim, False) - flat_dims.append(d) - view.append(ptensor.size(d)) - else: - flat_dims.extend(d) - view.append(prod(e.size for e in d)) - needs_view = True - - permute = list(range(len(levels))) - nflat = len(flat_dims) - for i, d in enumerate(flat_dims): - try: - idx = levels.index(d) - except ValueError as e: - raise DimensionBindError(f'tensor of dimensions {self.dims} does not contain dim {d}') from e - p = permute[idx] - del levels[idx] - del permute[idx] - levels.insert(i, 0) - permute.insert(i, p) - ptensor = ptensor.permute(*permute) - seen = 0 - for i in range(len(levels) - 1, -1, -1): - if isinstance(levels[i], int): - seen += 1 - levels[i] = -seen - result = Tensor.from_positional(ptensor, levels, self._has_device) - if needs_view: - result = result.reshape(*view, *result.size()[len(flat_dims):]) - return result - -def _contains_dim(input): - from . import Dim - for i in input: - if isinstance(i, Dim): - return True - -def expand(self, *sizes): - if not _contains_dim(sizes): - return self.__torch_function__(torch.Tensor.expand, None, (self, *sizes)) - dims = sizes - sizes = [d.size for d in dims] + [-1] * self.ndim - self = self.expand(*sizes) - return self[dims] - - -_not_present = object() - -def _getarg(name, offset, args, kwargs, default): - if len(args) > offset: - return args[offset] - return kwargs.get(name, default) - -def _patcharg(name, offset, args, kwargs, value): - if len(args) > offset: - args[offset] = value - else: - kwargs[name] = value - -def _wrap(orig, dim_offset=0, keepdim_offset=1, dim_name='dim', single_dim=False, reduce=True): - from . import TensorLike, Dim, Tensor - - def fn(self, *args, **kwargs): - dim = _getarg(dim_name, dim_offset, args, kwargs, _not_present) - if dim is _not_present or (single_dim and not isinstance(dim, Dim)): - with _enable_layers(self.dims): - print(f"dim fallback batch_tensor for {orig}") - return Tensor.from_batched(orig(self._batchtensor, *args, **kwargs), self._has_device) - keepdim = _getarg('keepdim', keepdim_offset, args, kwargs, False) if reduce else False - t, levels = self._tensor, llist(self._levels) - dims = _dims(dim, self._batchtensor.ndim, keepdim, single_dim) - dim_indices = tuple(levels.index(d) for d in dims) - if reduce and not keepdim: - new_levels = [l for i, l in enumerate(levels) if i not in dim_indices] - else: - new_levels = levels - - if len(dim_indices) == 1: - dim_indices = dim_indices[0] # so that dims that really only take a single argument work... - args = list(args) - _patcharg(dim_name, dim_offset, args, kwargs, dim_indices) - - def wrap(t): - if isinstance(t, TensorLike): - return Tensor.from_positional(t, new_levels, self._has_device) - return t - with _enable_layers(new_levels): - print(f"dim used batch_tensor for {orig}") - r = orig(t, *args, **kwargs) - return tree_map(wrap, r) - return fn - -def _def(name, *args, **kwargs): - from . 
import _Tensor - orig = getattr(torch.Tensor, name) - setattr(_Tensor, name, _wrap(orig, *args, **kwargs)) - -no_slice = slice(None) - -_orig_getitem = torch.Tensor.__getitem__ - -class dim_tracker: - def __init__(self): - self.dims = llist() - self.count = [] - - def record(self, d): - if d not in self.dims: - self.dims.append(d) - self.count.append(1) - - def __getitem__(self, d): - return self.count[self.dims.index(d)] - -def t__getitem__(self, input): - from . import Dim, DimensionBindError, _Tensor, TensorLike, DimList, Tensor - # * bail to original example if we have a single non-Dim tensor, or a non-tensor - # * locate ... or an unbound tensor list, and determine its size, bind dim list - # (remember that None does not count to the total dim count) - # * bind simple dims and dim-packs to their sizes, count the number of uses of each dim, - # produce the re-view if needed - # * for each single-use dim index, replace with no_slice and mark that it will be added - # (keep track of whether we have to call super) - # * call super if needed - # * if we have dims to bind, bind them (it will help if we eliminated ... and None before) - - # this handles bool indexing handling, as well as some other simple cases. - - is_simple = (not isinstance(input, Dim) and - not isinstance(input, (tuple, list)) and - # WAR for functorch bug where zero time tensors in getitem are not handled correctly. - not (isinstance(input, TensorLike) and input.ndim == 0)) - - if is_simple: - if isinstance(self, _Tensor): - return _Tensor.__torch_function__(_orig_getitem, None, (self, input)) - else: - return _orig_getitem(self, input) - - # can further optimize this case - if not isinstance(input, tuple): - input = [input] - else: - input = list(input) - - dims_indexed = 0 - expanding_object = None - dimlists = [] - for i, s in enumerate(input): - if s is ... or isinstance(s, DimList) and not s.is_bound: - if expanding_object is not None: - msg = 'at most one ... 
or unbound dimension list can exist in indexing list but' \ - f' found 2 at offsets {i} and {expanding_object}' - raise DimensionBindError(msg) - expanding_object = i - - if isinstance(s, DimList): - dims_indexed += len(s) if s.is_bound else 0 - dimlists.append(i) - elif s is not None and s is not ...: - dims_indexed += 1 - - ndim = self.ndim - if dims_indexed > ndim: - raise IndexError(f'at least {dims_indexed} indices were supplied but the tensor only has {ndim} dimensions.') - if expanding_object is not None: - expanding_ndims = ndim - dims_indexed - obj = input[expanding_object] - if obj is ...: - input[expanding_object:expanding_object + 1] = [no_slice] * expanding_ndims - else: - obj.bind_len(expanding_ndims) - # flatten the dimslists into the indexing - for i in reversed(dimlists): - input[i:i + 1] = input[i] - dims_indexed = 0 - requires_view = False - size = self.size() - view_sizes = [] - dims_seen = dim_tracker() - - def add_dims(t): - if not isinstance(t, _Tensor): - return - for d in t.dims: - dims_seen.record(d) - - add_dims(self) - dim_packs = [] - for i, idx in enumerate(input): - if idx is None: - input[i] = no_slice - view_sizes.append(1) - requires_view = True - else: - sz = size[dims_indexed] - if isinstance(idx, Dim): - idx.size = sz - dims_seen.record(idx) - view_sizes.append(sz) - elif isinstance(idx, (tuple, list)) and idx and isinstance(idx[0], Dim): - for d in idx: - dims_seen.record(idx) - _bind_dims_to_size(sz, idx, f'offset {i}') - view_sizes.extend(d.size for d in idx) - requires_view = True - dim_packs.append(i) - else: - add_dims(idx) - view_sizes.append(sz) - dims_indexed += 1 - if requires_view: - self = self.view(*view_sizes) - for i in reversed(dim_packs): - input[i:i + 1] = input[i] - - # currenty: - # input is flat, containing either Dim, or Tensor, or something valid for standard indexing - # self may have first-class dims as well. - - # to index: - # drop the first class dims from self, they just become direct indices of their positions - - # figure out the dimensions of the indexing tensors: union of all the dims in the tensors in the index. 
- # these dimensions will appear and need to be bound at the first place tensor occures - - if isinstance(self, _Tensor): - ptensor_self, levels = self._tensor, list(self._levels) - # indices to ptensor rather than self which has first-class dimensions - input_it = iter(input) - flat_inputs = [next(input_it) if isinstance(l, int) else l for l in levels] - has_device = self._has_device - to_pad = 0 - else: - ptensor_self, flat_inputs = self, input - to_pad = ptensor_self.ndim - len(flat_inputs) - has_device = True - - result_levels = [] - index_levels = [] - tensor_insert_point = None - to_expand = {} - requires_getindex = False - for i, inp in enumerate(flat_inputs): - if isinstance(inp, Dim) and dims_seen[inp] == 1: - flat_inputs[i] = no_slice - result_levels.append(inp) - elif isinstance(inp, TensorLike): - requires_getindex = True - if tensor_insert_point is None: - tensor_insert_point = len(result_levels) - ptensor, levels, _ = _tensor_levels(inp) - to_expand[i] = levels - flat_inputs[i] = ptensor - for l in levels: - if l not in index_levels: - index_levels.append(l) - else: - requires_getindex = True - result_levels.append(0) - - if tensor_insert_point is not None: - result_levels[tensor_insert_point:tensor_insert_point] = index_levels - - for i, levels in to_expand.items(): - flat_inputs[i] = _match_levels(flat_inputs[i], levels, index_levels) - - if requires_getindex: - result = _orig_getitem(ptensor_self, flat_inputs) - else: - result = ptensor_self - - next_positional = -1 - if to_pad > 0: - result_levels.extend([0] * to_pad) - for i, r in enumerate(reversed(result_levels)): - if isinstance(r, int): - result_levels[-1 - i] = next_positional - next_positional -= 1 - - return Tensor.from_positional(result, result_levels, has_device) - -# XXX - dim is optional and can be the outer-most dimension... -def stack(tensors, new_dim, dim=0, out=None): - if isinstance(dim, int): - return torch.stack(tensors, dim, out).index(dim, new_dim) - index = None - if out is not None: - out, index = _positional_no_permute(out, dim, expand_dim=True) - ptensors = [] - for t in tensors: - pt, pi = _positional_no_permute(t, dim, expand_dim=True) - if index is not None and pi != index: - pt = pt.move_dim(pi, index) - else: - index = pi - ptensors.append(pt) - pr = torch.stack(ptensors, index, out=out) - return pr.index((index, index + 1), (new_dim, dim)) - -_orig_split = torch.Tensor.split -def split(self, split_size_or_sections, dim=0): - from . 
import Dim, _Tensor - if isinstance(split_size_or_sections, int) or any(isinstance(t, int) for t in split_size_or_sections): - if isinstance(dim, Dim): - raise ValueError('when dim is specified as a Dim object, split sizes must also be dimensions.') - return _orig_split(self, split_size_or_sections, dim=dim) - - if isinstance(dim, Dim): - assert isinstance(self, _Tensor), f"Tensor does not have dimension {dim}" - self, dim = _positional_no_permute(self, dim) - - size = self.size(dim) - total_bound_size = 0 - unbound = [] - sizes = [] - for i, d in enumerate(split_size_or_sections): - if d.is_bound: - sizes.append(d.size) - total_bound_size += d.size - else: - sizes.append(0) - unbound.append(i) - - if unbound: - assert total_bound_size <= size, \ - f"result dimensions are larger than original: {total_bound_size} vs {size} ({split_size_or_sections})" - remaining_size = size - total_bound_size - chunk_size = -(-remaining_size // len(unbound)) - for u in unbound: - sz = min(chunk_size, remaining_size) - split_size_or_sections[u].size = sz - sizes[u] = sz - remaining_size -= sz - else: - assert total_bound_size == size, \ - f"result dimensions do not match original: {total_bound_size} vs {size} ({split_size_or_sections})" - return tuple(t.index(dim, d) for d, t in zip(split_size_or_sections, _orig_split(self, sizes, dim=dim))) diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_msrresnet.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_msrresnet.py deleted file mode 100644 index d5f7964b4dcf49b66d4c38eb90572b3474c32577..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/models/network_msrresnet.py +++ /dev/null @@ -1,182 +0,0 @@ -import math -import torch.nn as nn -import models.basicblock as B -import functools -import torch.nn.functional as F -import torch.nn.init as init - - -""" -# -------------------------------------------- -# modified SRResNet -# -- MSRResNet0 (v0.0) -# -- MSRResNet1 (v0.1) -# -------------------------------------------- -References: -@inproceedings{wang2018esrgan, - title={Esrgan: Enhanced super-resolution generative adversarial networks}, - author={Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Change Loy, Chen}, - booktitle={European Concerence on Computer Vision (ECCV)}, - pages={0--0}, - year={2018} -} -@inproceedings{ledig2017photo, - title={Photo-realistic single image super-resolution using a generative adversarial network}, - author={Ledig, Christian and Theis, Lucas and Husz{\'a}r, Ferenc and Caballero, Jose and Cunningham, Andrew and Acosta, Alejandro and Aitken, Andrew and Tejani, Alykhan and Totz, Johannes and Wang, Zehan and others}, - booktitle={IEEE concerence on computer vision and pattern recognition}, - pages={4681--4690}, - year={2017} -} -# -------------------------------------------- -""" - - -# -------------------------------------------- -# modified SRResNet v0.0 -# https://github.com/xinntao/ESRGAN -# -------------------------------------------- -class MSRResNet0(nn.Module): - def __init__(self, in_nc=3, out_nc=3, nc=64, nb=16, upscale=4, act_mode='R', upsample_mode='upconv'): - """ - in_nc: channel number of input - out_nc: channel number of output - nc: channel number - nb: number of residual blocks - upscale: up-scale factor - act_mode: activation function - upsample_mode: 'upconv' | 'pixelshuffle' | 'convtranspose' - """ - super(MSRResNet0, self).__init__() - assert 'R' in act_mode or 'L' in act_mode, 'Examples of activation 
function: R, L, BR, BL, IR, IL' - - n_upscale = int(math.log(upscale, 2)) - if upscale == 3: - n_upscale = 1 - - m_head = B.conv(in_nc, nc, mode='C') - - m_body = [B.ResBlock(nc, nc, mode='C'+act_mode+'C') for _ in range(nb)] - m_body.append(B.conv(nc, nc, mode='C')) - - if upsample_mode == 'upconv': - upsample_block = B.upsample_upconv - elif upsample_mode == 'pixelshuffle': - upsample_block = B.upsample_pixelshuffle - elif upsample_mode == 'convtranspose': - upsample_block = B.upsample_convtranspose - else: - raise NotImplementedError('upsample mode [{:s}] is not found'.format(upsample_mode)) - if upscale == 3: - m_uper = upsample_block(nc, nc, mode='3'+act_mode) - else: - m_uper = [upsample_block(nc, nc, mode='2'+act_mode) for _ in range(n_upscale)] - - H_conv0 = B.conv(nc, nc, mode='C'+act_mode) - H_conv1 = B.conv(nc, out_nc, bias=False, mode='C') - m_tail = B.sequential(H_conv0, H_conv1) - - self.model = B.sequential(m_head, B.ShortcutBlock(B.sequential(*m_body)), *m_uper, m_tail) - - def forward(self, x): - x = self.model(x) - return x - - -# -------------------------------------------- -# modified SRResNet v0.1 -# https://github.com/xinntao/ESRGAN -# -------------------------------------------- -class MSRResNet1(nn.Module): - def __init__(self, in_nc=3, out_nc=3, nc=64, nb=16, upscale=4, act_mode='R', upsample_mode='upconv'): - super(MSRResNet1, self).__init__() - self.upscale = upscale - - self.conv_first = nn.Conv2d(in_nc, nc, 3, 1, 1, bias=True) - basic_block = functools.partial(ResidualBlock_noBN, nc=nc) - self.recon_trunk = make_layer(basic_block, nb) - - # upsampling - if self.upscale == 2: - self.upconv1 = nn.Conv2d(nc, nc * 4, 3, 1, 1, bias=True) - self.pixel_shuffle = nn.PixelShuffle(2) - elif self.upscale == 3: - self.upconv1 = nn.Conv2d(nc, nc * 9, 3, 1, 1, bias=True) - self.pixel_shuffle = nn.PixelShuffle(3) - elif self.upscale == 4: - self.upconv1 = nn.Conv2d(nc, nc * 4, 3, 1, 1, bias=True) - self.upconv2 = nn.Conv2d(nc, nc * 4, 3, 1, 1, bias=True) - self.pixel_shuffle = nn.PixelShuffle(2) - - self.HRconv = nn.Conv2d(nc, nc, 3, 1, 1, bias=True) - self.conv_last = nn.Conv2d(nc, out_nc, 3, 1, 1, bias=True) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True) - - # initialization - initialize_weights([self.conv_first, self.upconv1, self.HRconv, self.conv_last], 0.1) - if self.upscale == 4: - initialize_weights(self.upconv2, 0.1) - - def forward(self, x): - fea = self.lrelu(self.conv_first(x)) - out = self.recon_trunk(fea) - - if self.upscale == 4: - out = self.lrelu(self.pixel_shuffle(self.upconv1(out))) - out = self.lrelu(self.pixel_shuffle(self.upconv2(out))) - elif self.upscale == 3 or self.upscale == 2: - out = self.lrelu(self.pixel_shuffle(self.upconv1(out))) - - out = self.conv_last(self.lrelu(self.HRconv(out))) - base = F.interpolate(x, scale_factor=self.upscale, mode='bilinear', align_corners=False) - out += base - return out - - -def initialize_weights(net_l, scale=1): - if not isinstance(net_l, list): - net_l = [net_l] - for net in net_l: - for m in net.modules(): - if isinstance(m, nn.Conv2d): - init.kaiming_normal_(m.weight, a=0, mode='fan_in') - m.weight.data *= scale # for residual block - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - init.kaiming_normal_(m.weight, a=0, mode='fan_in') - m.weight.data *= scale - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d): - init.constant_(m.weight, 1) - init.constant_(m.bias.data, 0.0) - - -def make_layer(block, 
n_layers): - layers = [] - for _ in range(n_layers): - layers.append(block()) - return nn.Sequential(*layers) - - -class ResidualBlock_noBN(nn.Module): - '''Residual block w/o BN - ---Conv-ReLU-Conv-+- - |________________| - ''' - - def __init__(self, nc=64): - super(ResidualBlock_noBN, self).__init__() - self.conv1 = nn.Conv2d(nc, nc, 3, 1, 1, bias=True) - self.conv2 = nn.Conv2d(nc, nc, 3, 1, 1, bias=True) - - # initialization - initialize_weights([self.conv1, self.conv2], 0.1) - - def forward(self, x): - identity = x - out = F.relu(self.conv1(x), inplace=True) - out = self.conv2(out) - return identity + out diff --git a/spaces/laxmikant/ChatGPT4/README.md b/spaces/laxmikant/ChatGPT4/README.md deleted file mode 100644 index 7938de14e5355209aaae713f289ca469181bbb17..0000000000000000000000000000000000000000 --- a/spaces/laxmikant/ChatGPT4/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Chat-with-GPT4 -emoji: 🚀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ysharma/ChatGPT4 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/legoandmars/glide-inpainting/glide_text2im/clip/__init__.py b/spaces/legoandmars/glide-inpainting/glide_text2im/clip/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/train_utils.py b/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/train_utils.py deleted file mode 100644 index b6a3a7cb8b66593fb363dff7fe489cde2ab0ec6f..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/train_utils.py +++ /dev/null @@ -1,368 +0,0 @@ -import os -from modules import shared, utils -from pathlib import Path -import requests -import tqdm -import json - -''' -def get_gpu_memory_usage(rank): - return { - 'total': round(torch.cuda.get_device_properties(rank).total_memory / (1024**3), 2), - 'max': round(torch.cuda.max_memory_allocated(rank) / (1024**3), 2), - 'reserved': round(torch.cuda.memory_reserved(rank) / (1024**3), 2), - 'allocated': round(torch.cuda.memory_allocated(rank) / (1024**3), 2) - } -''' - -def list_subfoldersByTime(directory): - - if not directory.endswith('/'): - directory += '/' - subfolders = [] - subfolders.append('None') - path = directory - name_list = os.listdir(path) - full_list = [os.path.join(path,i) for i in name_list] - time_sorted_list = sorted(full_list, key=os.path.getmtime,reverse=True) - - for entry in time_sorted_list: - if os.path.isdir(entry): - entry_str = f"{entry}" # Convert entry to a string - full_path = entry_str - entry_str = entry_str.replace('\\','/') - entry_str = entry_str.replace(f"{directory}", "") # Remove directory part - subfolders.append(entry_str) - - return subfolders - -def get_available_loras_local(_sortedByTime): - - model_dir = shared.args.lora_dir # Update with the appropriate directory path - subfolders = [] - if _sortedByTime: - subfolders = list_subfoldersByTime(model_dir) - else: - subfolders = utils.get_available_loras() - - return subfolders - - -# FPHAM SPLIT BY SENTENCE BLOCK =============== - -def split_sentences(text: str, cutoff_len: int): - sentences = [] - sentence = '' - delimiters = ['. ', '? ', '! ', '... ', '.\n', '?\n', '!\n','...\n','',''] - abbreviations = ['Mr. ', 'Mrs. ', 'Dr. ', 'Ms. ', 'St. ', 'Prof. ', 'Jr. 
', 'Ltd. ', 'Capt. ', 'Col. ', 'Gen. ', 'Ave. ', 'Blvd. ', 'Co. ', 'Corp. ', 'Dept. ', 'Est. ', 'Gov. ', 'Inc. ', 'Ph.D. ', 'Univ. '] - errors = 0 - max_cut = cutoff_len-1 - prev_char = '' - - for char in text: - sentence += char - - - if (any(sentence.endswith(delimiter) for delimiter in delimiters) and - not (prev_char.isupper() and len(sentence) >= 3 and sentence[-3] != ' ') and - not any(sentence.endswith(abbreviation) for abbreviation in abbreviations)): - tokens = shared.tokenizer.encode(sentence) - - if len(tokens) > max_cut: - tokens = tokens[:max_cut] - sentence = shared.tokenizer.decode(tokens, skip_special_tokens=True) - errors = errors + 1 - - sentences.append({'text': sentence, 'size': len(tokens)}) - - sentence = '' - - prev_char = char - - if sentence: - tokens = shared.tokenizer.encode(sentence) - if len(tokens) > max_cut: - tokens = tokens[:max_cut] - sentence = shared.tokenizer.decode(tokens, skip_special_tokens=True) - errors = errors + 1 - - sentences.append({'text': sentence, 'size': len(tokens)}) - - if errors > 0: - print(f"Trimmed sentences beyond Cutoff Length: {errors}") - - return sentences - -# The goal of following code is to create blocks of text + overlapping blocks while: -# respects sentence boundaries -# always uses all the text -# hard cut defined by hard_cut_string or will always end at the end of data block -# no overlapping blocks will be created across hard cut or across token - -def precise_cut(text: str, overlap: bool, min_chars_cut: int, eos_to_hc: bool, cutoff_len: int, hard_cut_string: str, debug_slicer:bool): - - EOSX_str = '' #hardcut placeholder - EOS_str = '' - print("Precise raw text slicer: ON") - - cut_string = hard_cut_string.replace('\\n', '\n') - text = text.replace(cut_string, EOSX_str) - sentences = split_sentences(text, cutoff_len) - - print(f"Sentences: {len(sentences)}") - sentencelist = [] - currentSentence = '' - totalLength = 0 - max_cut = cutoff_len-1 - half_cut = cutoff_len//2 - halfcut_length = 0 - - edgeindex = [] - half_index = 0 - - for index, item in enumerate(sentences): - - if halfcut_length+ item['size'] < half_cut: - halfcut_length += item['size'] - half_index = index - else: - edgeindex.append(half_index) - halfcut_length = -2 * max_cut - - - if totalLength + item['size'] < max_cut and not currentSentence.endswith(EOSX_str): - currentSentence += item['text'] - totalLength += item['size'] - else: - - if len(currentSentence.strip()) > min_chars_cut: - sentencelist.append(currentSentence.strip()) - - currentSentence = item['text'] - totalLength = item['size'] - halfcut_length = item['size'] - - if len(currentSentence.strip()) > min_chars_cut: - sentencelist.append(currentSentence.strip()) - - unique_blocks = len(sentencelist) - print(f"Text Blocks: {unique_blocks}") - - #overlap strategies: - # don't overlap across HARD CUT (EOSX) - if overlap: - for edge_idx in edgeindex: - currentSentence = '' - totalLength = 0 - - for item in sentences[edge_idx:]: - if totalLength + item['size'] < max_cut: - currentSentence += item['text'] - totalLength += item['size'] - else: - #if by chance EOSX is at the end then it's acceptable - if currentSentence.endswith(EOSX_str) and len(currentSentence.strip()) > min_chars_cut: - sentencelist.append(currentSentence.strip()) - # otherwise don't cross hard cut - elif EOSX_str not in currentSentence and len(currentSentence.strip()) > min_chars_cut: - sentencelist.append(currentSentence.strip()) - - currentSentence = '' - totalLength = 0 - break - - print(f"+ Overlapping blocks: 
{len(sentencelist)-unique_blocks}") - - num_EOS = 0 - for i in range(len(sentencelist)): - if eos_to_hc: - sentencelist[i] = sentencelist[i].replace(EOSX_str, EOS_str) - else: - sentencelist[i] = sentencelist[i].replace(EOSX_str, '') - - #someone may have had stop strings in the raw text... - sentencelist[i] = sentencelist[i].replace("", EOS_str) - num_EOS += sentencelist[i].count(EOS_str) - - if num_EOS > 0: - print(f"+ EOS count: {num_EOS}") - - #final check for useless lines - sentencelist = [item for item in sentencelist if item.strip() != ""] - sentencelist = [item for item in sentencelist if item.strip() != ""] - - - if debug_slicer: - # Write the log file - Path('logs').mkdir(exist_ok=True) - sentencelist_dict = {index: sentence for index, sentence in enumerate(sentencelist)} - output_file = "logs/sentencelist.json" - with open(output_file, 'w') as f: - json.dump(sentencelist_dict, f,indent=2) - - print("Saved sentencelist.json in logs folder") - - return sentencelist - - -def sliding_block_cut(text: str, min_chars_cut: int, eos_to_hc: bool, cutoff_len: int, hard_cut_string: str, debug_slicer:bool): - - EOSX_str = '' #hardcut placeholder - EOS_str = '' - print("Mega Block Overlap: ON") - - cut_string = hard_cut_string.replace('\\n', '\n') - text = text.replace(cut_string, EOSX_str) - sentences = split_sentences(text, cutoff_len) - - print(f"Sentences: {len(sentences)}") - sentencelist = [] - - max_cut = cutoff_len-1 - - #print(f"max_cut: {max_cut}") - advancing_to = 0 - - prev_block_lastsentence = "" - - - for i in range(len(sentences)): - totalLength = 0 - currentSentence = '' - lastsentence = "" - - if i >= advancing_to: - for k in range(i, len(sentences)): - - current_length = sentences[k]['size'] - - if totalLength + current_length <= max_cut and not currentSentence.endswith(EOSX_str): - currentSentence += sentences[k]['text'] - totalLength += current_length - lastsentence = sentences[k]['text'] - else: - if len(currentSentence.strip()) > min_chars_cut: - if prev_block_lastsentence!=lastsentence: - sentencelist.append(currentSentence.strip()) - prev_block_lastsentence = lastsentence - - advancing_to = 0 - if currentSentence.endswith(EOSX_str): - advancing_to = k - - currentSentence = "" - totalLength = 0 - break - - if currentSentence != "": - if len(currentSentence.strip()) > min_chars_cut: - sentencelist.append(currentSentence.strip()) - - unique_blocks = len(sentencelist) - print(f"Text Blocks: {unique_blocks}") - num_EOS = 0 - for i in range(len(sentencelist)): - if eos_to_hc: - sentencelist[i] = sentencelist[i].replace(EOSX_str, EOS_str) - else: - sentencelist[i] = sentencelist[i].replace(EOSX_str, '') - - #someone may have had stop strings in the raw text... 
- sentencelist[i] = sentencelist[i].replace("", EOS_str) - num_EOS += sentencelist[i].count(EOS_str) - - if num_EOS > 0: - print(f"+ EOS count: {num_EOS}") - - #final check for useless lines - sentencelist = [item for item in sentencelist if item.strip() != ""] - sentencelist = [item for item in sentencelist if item.strip() != ""] - - - if debug_slicer: - # Write the log file - Path('logs').mkdir(exist_ok=True) - sentencelist_dict = {index: sentence for index, sentence in enumerate(sentencelist)} - output_file = "logs/sentencelist.json" - with open(output_file, 'w') as f: - json.dump(sentencelist_dict, f,indent=2) - - print("Saved sentencelist.json in logs folder") - - return sentencelist - -# Example usage: -# download_file_from_url('https://example.com/path/to/your/file.ext', '/output/directory') - -def download_file_from_url(url, overwrite, output_dir_in, valid_extensions = {'.txt', '.json'}): - try: - # Validate and sanitize the URL - #parsed_url = urllib.parse.urlparse(url) - #if not parsed_url.netloc: - # raise ValueError("Invalid URL") - #filename = os.path.basename(parsed_url.path) - - # Get the filename from the URL - - session = requests.Session() - headers = {} - mode = 'wb' - filename = url.split('/')[-1] - - output_dir = str(output_dir_in) - # Construct the full path to the output file - local_filename = os.path.join(output_dir, filename) - - # Check if the local file already exists - overw = '' - if os.path.exists(local_filename): - if not overwrite: - yield f"File '{local_filename}' already exists. Aborting." - return - else: - overw = ' [Overwrite existing]' - - filename_lower = filename.lower() - - # Send an HTTP GET request to the URL with a timeout - file_extension = os.path.splitext(filename_lower)[-1] - - if file_extension not in valid_extensions: - yield f"Invalid file extension: {file_extension}. Only {valid_extensions} files are supported." - return - - with session.get(url, stream=True, headers=headers, timeout=10) as r: - r.raise_for_status() - # total size can be wildly inaccurate - #total_size = int(r.headers.get('content-length', 0)) - - block_size = 1024 * 4 - with open(local_filename, mode) as f: - count = 0 - for data in r.iter_content(block_size): - f.write(data) - count += len(data) - - yield f"Downloaded: {count} " + overw - - # Verify file size if possible - if os.path.exists(local_filename): - downloaded_size = os.path.getsize(local_filename) - if downloaded_size > 0: - yield f"File '{filename}' downloaded to '{output_dir}' ({downloaded_size} bytes)." - print("File Downloaded") - else: - print("Downloaded file is zero") - yield f"Failed. Downloaded file size is zero)." 
- else: - print(f"Error: {local_filename} failed to download.") - yield f"Error: {local_filename} failed to download" - - except Exception as e: - print(f"An error occurred: {e}") - yield f"An error occurred: {e}" - - finally: - # Close the session to release resources - session.close() - diff --git a/spaces/lexi1343/Hi/style.css b/spaces/lexi1343/Hi/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/lexi1343/Hi/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/liimefruit/RVCollection/infer_pack/models_onnx_moess.py b/spaces/liimefruit/RVCollection/infer_pack/models_onnx_moess.py deleted file mode 100644 index fbc5c8864113c2b5b127c7e8be18a6be9586a3b7..0000000000000000000000000000000000000000 --- a/spaces/liimefruit/RVCollection/infer_pack/models_onnx_moess.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = 
kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else 
modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - 
mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - 
c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - 
self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, 
fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Atsisveikinimas Su Aukos Vaidmeniu Pdf 11.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Atsisveikinimas Su Aukos Vaidmeniu Pdf 11.md deleted file mode 100644 index 57b07240015ecaaf0180cfe4f802834445e59f8f..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Atsisveikinimas Su Aukos Vaidmeniu Pdf 11.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Atsisveikinimas Su Aukos Vaidmeniu Pdf 11


    Download Zip ✫✫✫ https://bytlly.com/2uGvRf



    -
    -Human relationships, their formation and . 12. P.Beldyko, . 13. A.Gruysimovas, . 14. D.Sternbergas, . 15. E.Gibsonas, . 16. A.Vecerisas, . 17. E.Hoffas, . 18. V.A.Gatsenko, . 19. A.Vetőas, . 20. B.Zvonokas, . 21. A.Kienas, . 22. A.Maretsas, . 23. L.Biroas, . 24. A.Kazuajis, . 25. D.Čokas, . 26. M.Ustukas, . 27. A.Kravčenko, . 28. I.Sainas, . 29. E.Viatkina, . 30. K.Kuliauskas, . 31. D.Sodinauskas, . 32. I.Kuzminas, . 33. I.Sukovas, . 34. A.Černauskas, . 35. S.Juodzivinė, . 36. L.Jegulskas, . 37. I.Pavilonis, . 38. A.Butkevičius, . 39. J.Černavskas, . 40. J.Veljutas, . 41. G.Berėnas, . 42. G.Lutasas, . 43. S.Galvis, . 44. J.Jurdanas, . 45. M.Ustukas, . 46. V.Andriuskevicius, . 47. M.Jurgilas, . 48. K.Samburskiene, . 49. J.Jorgas, . 50. R.Aberer, . 51. I.Muzamalinas, . 52. G.Akaasas, . 53. A.Malisauskas, . 54. E.Chomeas, . 55. J.Vėkel 4fefd39f24
    -
    -
    -

    diff --git a/spaces/lingbionlp/PhenoTagger-Demo/src/nn_model.py b/spaces/lingbionlp/PhenoTagger-Demo/src/nn_model.py deleted file mode 100644 index 3e18c22869346324ab81f4cd7116e59e51e76e9c..0000000000000000000000000000000000000000 --- a/spaces/lingbionlp/PhenoTagger-Demo/src/nn_model.py +++ /dev/null @@ -1,150 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Thu Mar 26 09:04:13 2020 - -@author: luol2 -""" - -import time -import sys -import numpy as np -import keras -from src.nn_represent import CNN_RepresentationLayer,BERT_RepresentationLayer -from keras.layers import * -from keras.models import Model -from keras_bert import load_trained_model_from_checkpoint - - - - -class bioTag_CNN(): - def __init__(self, model_files): - self.model_type='cnn' - model_test_type='cnn' - self.fea_dict = {'word': 1, - 'char': 1, - 'lemma':0, - 'pos':0} - - self.hyper = {'sen_max' :20, - 'word_max' :40, - 'charvec_size' :50, - 'pos_size' :50} - - self.w2vfile=model_files['w2vfile'] - self.charfile=model_files['charfile'] - self.labelfile=model_files['labelfile'] - self.posfile=model_files['posfile'] - - vocab={'char':self.charfile,'label':self.labelfile,'pos':self.posfile} - print('loading w2v model.....') - self.rep = CNN_RepresentationLayer(self.w2vfile,vocab_file=vocab, frequency=400000) - - print('building model......') - all_fea = [] - fea_list = [] - - if self.fea_dict['word'] == 1: - word_input = Input(shape=(self.hyper['sen_max'],), dtype='int32', name='word_input') - all_fea.append(word_input) - word_fea = Embedding(self.rep.vec_table.shape[0], self.rep.vec_table.shape[1], weights=[self.rep.vec_table], trainable=True,mask_zero=False, input_length=self.hyper['sen_max'], name='word_emd')(word_input) - fea_list.append(word_fea) - - if self.fea_dict['char'] == 1: - char_input = Input(shape=(self.hyper['sen_max'],self.hyper['word_max']), dtype='int32', name='char_input') - all_fea.append(char_input) - char_fea = TimeDistributed(Embedding(self.rep.char_table_size, self.hyper['charvec_size'], trainable=True,mask_zero=False), name='char_emd')(char_input) - char_fea = TimeDistributed(Conv1D(self.hyper['charvec_size']*2, 3, padding='same',activation='relu'), name="char_cnn")(char_fea) - char_fea_max = TimeDistributed(GlobalMaxPooling1D(), name="char_pooling_max")(char_fea) - fea_list.append(char_fea_max) - - if self.fea_dict['lemma'] == 1: - lemma_input = Input(shape=(self.hyper['sen_max'],), dtype='int32', name='lemma_input') - all_fea.append(lemma_input) - lemma_fea = Embedding(self.rep.vec_table.shape[0], self.rep.vec_table.shape[1], weights=[self.rep.vec_table], trainable=True,mask_zero=False, input_length=self.hyper['sen_max'], name='lemma_emd')(lemma_input) - fea_list.append(lemma_fea) - - if self.fea_dict['pos'] == 1: - pos_input = Input(shape=(self.hyper['sen_max'],), dtype='int32', name='pos_input') - all_fea.append(pos_input) - pos_fea = Embedding(self.rep.pos_table_size, self.hyper['pos_size'], trainable=True,mask_zero=False, input_length=self.hyper['sen_max'], name='pos_emd')(pos_input) - fea_list.append(pos_fea) - - if len(fea_list) == 1: - concate_vec = fea_list[0] - else: - concate_vec = Concatenate()(fea_list) - - concate_vec = Dropout(0.4)(concate_vec) - - # model - if model_test_type=='cnn': - cnn = Conv1D(1024, 1, padding='valid', activation='relu',name='cnn1')(concate_vec) - cnn = GlobalMaxPooling1D()(cnn) - elif model_test_type=='lstm': - bilstm = Bidirectional(LSTM(200, return_sequences=True, implementation=2, dropout=0.4, recurrent_dropout=0.4), name='bilstm1')(concate_vec) - cnn = 
GlobalMaxPooling1D()(bilstm) - - - dense = Dense(1024, activation='relu')(cnn) - dense= Dropout(0.4)(dense) - output = Dense(self.rep.label_table_size, activation='softmax')(dense) - self.model = Model(inputs=all_fea, outputs=output) - def load_model(self,model_file): - self.model.load_weights(model_file) - #self.model.summary() - print('load cnn model done!') - -class bioTag_BERT(): - def __init__(self, model_files): - self.model_type='bert' - self.maxlen = 64 - config_path = model_files['config_path'] - checkpoint_path = model_files['checkpoint_path'] - vocab_path = model_files['vocab_path'] - self.label_file=model_files['labelfile'] - - self.rep = BERT_RepresentationLayer( vocab_path, self.label_file) - - - bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path, training=False, trainable=True,seq_len=self.maxlen) - - x1_in = Input(shape=(None,)) - x2_in = Input(shape=(None,)) - x = bert_model([x1_in, x2_in]) - x = Lambda(lambda x: x[:, 0])(x) - outputs = Dense(self.rep.label_table_size, activation='softmax')(x) - - self.model = Model(inputs=[x1_in,x2_in], outputs=outputs) - - def load_model(self,model_file): - self.model.load_weights(model_file) - #self.model.summary() - -class bioTag_Bioformer(): - def __init__(self, model_files): - self.model_type='bioformer' - self.maxlen = 32 - config_path = model_files['config_path'] - checkpoint_path = model_files['checkpoint_path'] - vocab_path = model_files['vocab_path'] - self.label_file=model_files['labelfile'] - - self.rep = BERT_RepresentationLayer( vocab_path, self.label_file) - - - bert_model = load_trained_model_from_checkpoint(config_path, checkpoint_path, training=False, trainable=True,seq_len=self.maxlen) - - x1_in = Input(shape=(None,)) - x2_in = Input(shape=(None,)) - x = bert_model([x1_in, x2_in]) - x = Lambda(lambda x: x[:, 0])(x) - outputs = Dense(self.rep.label_table_size, activation='softmax')(x) - - self.model = Model(inputs=[x1_in,x2_in], outputs=outputs) - - def load_model(self,model_file): - self.model.load_weights(model_file) - #self.model.summary() - print('load bioformer model done!') - diff --git a/spaces/liuxiaopai/background-remover/app.py b/spaces/liuxiaopai/background-remover/app.py deleted file mode 100644 index 55ea8940c07128fb124c1e3108dc8921bd1006be..0000000000000000000000000000000000000000 --- a/spaces/liuxiaopai/background-remover/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import cv2 -import gradio as gr -import numpy as np -import onnxruntime -import requests -from huggingface_hub import hf_hub_download -from PIL import Image - - -# Get x_scale_factor & y_scale_factor to resize image -def get_scale_factor(im_h, im_w, ref_size=512): - - if max(im_h, im_w) < ref_size or min(im_h, im_w) > ref_size: - if im_w >= im_h: - im_rh = ref_size - im_rw = int(im_w / im_h * ref_size) - elif im_w < im_h: - im_rw = ref_size - im_rh = int(im_h / im_w * ref_size) - else: - im_rh = im_h - im_rw = im_w - - im_rw = im_rw - im_rw % 32 - im_rh = im_rh - im_rh % 32 - - x_scale_factor = im_rw / im_w - y_scale_factor = im_rh / im_h - - return x_scale_factor, y_scale_factor - - -MODEL_PATH = hf_hub_download('nateraw/background-remover-files', 'modnet.onnx', repo_type='dataset') - - -def main(image_path, threshold): - - # read image - im = cv2.imread(image_path) - im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) - - # unify image channels to 3 - if len(im.shape) == 2: - im = im[:, :, None] - if im.shape[2] == 1: - im = np.repeat(im, 3, axis=2) - elif im.shape[2] == 4: - im = im[:, :, 0:3] - - # normalize values to scale it 
between -1 to 1 - im = (im - 127.5) / 127.5 - - im_h, im_w, im_c = im.shape - x, y = get_scale_factor(im_h, im_w) - - # resize image - im = cv2.resize(im, None, fx=x, fy=y, interpolation=cv2.INTER_AREA) - - # prepare input shape - im = np.transpose(im) - im = np.swapaxes(im, 1, 2) - im = np.expand_dims(im, axis=0).astype('float32') - - # Initialize session and get prediction - session = onnxruntime.InferenceSession(MODEL_PATH, None) - input_name = session.get_inputs()[0].name - output_name = session.get_outputs()[0].name - result = session.run([output_name], {input_name: im}) - - # refine matte - matte = (np.squeeze(result[0]) * 255).astype('uint8') - matte = cv2.resize(matte, dsize=(im_w, im_h), interpolation=cv2.INTER_AREA) - - # HACK - Could probably just convert this to PIL instead of writing - cv2.imwrite('out.png', matte) - - image = Image.open(image_path) - matte = Image.open('out.png') - - # obtain predicted foreground - image = np.asarray(image) - if len(image.shape) == 2: - image = image[:, :, None] - if image.shape[2] == 1: - image = np.repeat(image, 3, axis=2) - elif image.shape[2] == 4: - image = image[:, :, 0:3] - - b, g, r = cv2.split(image) - - mask = np.asarray(matte) - a = np.ones(mask.shape, dtype='uint8') * 255 - alpha_im = cv2.merge([b, g, r, a], 4) - bg = np.zeros(alpha_im.shape) - new_mask = np.stack([mask, mask, mask, mask], axis=2) - foreground = np.where(new_mask > threshold, alpha_im, bg).astype(np.uint8) - - return Image.fromarray(foreground) - - -title = "Image Background Remover" -description = "Remove Background from Image for Free. " -article = "
    " - -url = "https://huggingface.co/datasets/nateraw/background-remover-files/resolve/main/twitter_profile_pic.jpeg" -image = Image.open(requests.get(url, stream=True).raw) -image.save('twitter_profile_pic.jpg') - -url = "https://upload.wikimedia.org/wikipedia/commons/8/8d/President_Barack_Obama.jpg" -image = Image.open(requests.get(url, stream=True).raw) -image.save('obama.jpg') - -interface = gr.Interface( - fn=main, - inputs=[ - gr.inputs.Image(type='filepath'), - gr.inputs.Slider(minimum=0, maximum=250, default=100, step=5, label='Mask Cutoff Threshold'), - ], - outputs='image', - examples=[['twitter_profile_pic.jpg', 120], ['obama.jpg', 155]], - title=title, - description=description, - article=article, -) - -if __name__ == '__main__': - interface.launch(debug=True) diff --git a/spaces/lj1995/vocal2guitar/infer_pack/modules.py b/spaces/lj1995/vocal2guitar/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/llamaindex/llama_index_term_definition_demo/terms_definitions_tutorial.md b/spaces/llamaindex/llama_index_term_definition_demo/terms_definitions_tutorial.md deleted file mode 100644 index 2bb438dd16753e42e80b7a604421761d5cb77f3e..0000000000000000000000000000000000000000 --- a/spaces/llamaindex/llama_index_term_definition_demo/terms_definitions_tutorial.md +++ /dev/null @@ -1,494 +0,0 @@ -# Llama Index Problem Solving - Extracting Terms and Definitions - -Llama Index has many use cases (semantic search, summarization, etc.) that are [well documented](https://gpt-index.readthedocs.io/en/latest/use_cases/queries.html). However, this doesn't mean we can't apply Llama Index to very specific use cases! - -In this tutorial, we will go through the design process of using Llama Index to extract terms and definitions from text, while allowing users to query those terms later. Using [Streamlit](https://streamlit.io/), we can provide an easy to build frontend for running and testing all of this, and quickly iterate with our design. - -This tutorial assumes you have the following packages installed: - -- python3.9+ -- llama_index -- streamlit - -At the base level, our objective is to take text from a document, extract terms and definitions, and then provide a way for users to query that knowledge base of terms and definitions. The tutorial will go over features from both Llama Index and Streamlit, and hopefully provide some interesting solutions for common problems that come up. - -The final version of this tutorial can be found [here](https://github.com/logan-markewich/llama_index_starter_pack). - -## Uploading Text - -So step one is giving users a way to upload documents. Let’s write some code using Streamlit to provide the interface for this! Use the following code and launch the app with `streamlit run app.py`. - -```python -import streamlit as st - -st.title("🦙 Llama Index Term Extractor 🦙") - -document_text = st.text_area("Or enter raw text") -if st.button("Extract Terms and Definitions") and document_text: - with st.spinner("Extracting..."): - extracted_terms = document text # this is a placeholder! - st.write(extracted_terms) -``` - -Super simple right! But you'll notice that the app doesn't do anything useful yet. To use llama_index, we also need to setup our OpenAI LLM. There are a bunch of possible settings for the LLM, so we can let the user figure out what's best. We should also let the user set the prompt that will extract the terms (which will also help us debug what works best). - -## LLM Settings - -This next step introduces some tabs to our app, to separate it into different panes that provide different features. Let's create a tab for LLM settings and for uploading text: - -```python -import os -import streamlit as st - -DEFAULT_TERM_STR = ( - "Make a list of terms and definitions that are defined in the context, " - "with one pair on each line. 
" - "If a term is missing it's definition, use your best judgment. " - "Write each line as as follows:\nTerm: Definition: " -) - -st.title("🦙 Llama Index Term Extractor 🦙") - -setup_tab, upload_tab = st.tabs(["Setup", "Upload/Extract Terms"]) - -with setup_tab: - st.subheader("LLM Setup") - api_key = st.text_input("Enter your OpenAI API key here", type="password") - llm_name = st.selectbox('Which LLM?', ["text-davinci-003", "gpt-3.5-turbo", "gpt-4"]) - model_temperature = st.slider("LLM Temperature", min_value=0.0, max_value=1.0, step=0.1) - term_extract_str = st.text_area("The query to extract terms and definitions with.", value=DEFAULT_TERM_STR) - -with upload_tab: - st.subheader("Extract and Query Definitions") - document_text = st.text_area("Or enter raw text") - if st.button("Extract Terms and Definitions") and document_text: - with st.spinner("Extracting..."): - extracted_terms = document text # this is a placeholder! - st.write(extracted_terms) -``` - -Now our app has two tabs, which really helps with the organization. You'll also noticed I added a default prompt to extract terms -- you can change this later once you try extracting some terms, it's just the prompt I arrived at after experimenting a bit. - -Speaking of extracting terms, it's time to add some functions to do just that! - -## Extracting and Storing Terms - -Now that we are able to define LLM settings and upload text, we can try using Llama Index to extract the terms from text for us! - -We can add the following functions to both initialize our LLM, as well as use it to extract terms from the input text. - -```python -from llama_index import Document, GPTListIndex, LLMPredictor, ServiceContext, PromptHelper - -def get_llm(llm_name, model_temperature, api_key, max_tokens=256): - os.environ['OPENAI_API_KEY'] = api_key - if llm_name == "text-davinci-003": - return OpenAI(temperature=model_temperature, model_name=llm_name, max_tokens=max_tokens) - else: - return ChatOpenAI(temperature=model_temperature, model_name=llm_name, max_tokens=max_tokens) - -def extract_terms(documents, term_extract_str, llm_name, model_temperature, api_key): - llm = get_llm(llm_name, model_temperature, api_key, max_tokens=1024) - - service_context = ServiceContext.from_defaults(llm_predictor=LLMPredictor(llm=llm), - prompt_helper=PromptHelper(max_input_size=4096, - max_chunk_overlap=20, - num_output=1024), - chunk_size_limit=1024) - - temp_index = GPTListIndex.from_documents(documents, service_context=service_context) - terms_definitions = str(temp_index.query(term_extract_str, response_mode="tree_summarize")) - terms_definitions = [x for x in terms_definitions.split("\n") if x and 'Term:' in x and 'Definition:' in x] - # parse the text into a dict - terms_to_definition = {x.split("Definition:")[0].split("Term:")[-1].strip(): x.split("Definition:")[-1].strip() for x in terms_definitions} - return terms_to_definition -``` - -Now, using the new functions, we can finally extract our terms! - -```python -... -with upload_tab: - st.subheader("Extract and Query Definitions") - document_text = st.text_area("Or enter raw text") - if st.button("Extract Terms and Definitions") and document_text: - with st.spinner("Extracting..."): - extracted_terms = extract_terms([Document(document_text)], - term_extract_str, llm_name, - model_temperature, api_key) - st.write(extracted_terms) -``` - -There's a lot going on now, so let's take a moment to go over what is happening. - -`get_llm()` is instantiating the LLM based on the user configuration from the setup tab. 
Based on the model name, we need to use the appropriate class (`OpenAI` vs. `ChatOpenAI`). - -`extract_terms()` is where all the good stuff happens. First, we call `get_llm()` with `max_tokens=1024`, since we don't want to limit the model too much when it is extracting our terms and definitions (the default is 256 if not set). Then, we define our `ServiceContext` object, aligning `num_output` with our `max_tokens` value, as well as setting the chunk size to be no larger than the output. When documents are indexed by Llama Index, they are broken into chunks (also called nodes) if they are large, and `chunk_size_limit` sets the maximum size for these chunks. - -Next, we create a temporary list index and pass in our service context. A list index will read every single piece of text in our index, which is perfect for extracting terms. Finally, we use are pre-define query text to extract terms, using `response_mode="tree_summarize`. This response mode will generate a tree of summaries from the bottom up, where each parent summarizes its children. Finally, the top of the tree is returned, which will contain all our extracted terms and definitions. - -Lastly, we do some minor post processing. We assume the model followed instructions and put a term/definition pair on each line. If a line is missing the `Term:` or `Definition:` labels, we skip it. Then, we convert this to a dictionary for easy storage! - -## Saving Extracted Terms - -Now that we can extract terms, we need to put them somewhere so that we can query for them later. A `GPTSimpleVectorIndex` should be a perfect choice for now! But in addition, our app should also keep track of which terms are inserted into the index so that we can inspect them later. Using `st.session_state`, we can store the current list of terms in a session dict, unique to each user! - -First things first though, let's add a feature to initialize a global vector index and another function to insert the extracted terms. - -```python -... -if 'all_terms' not in st.session_state: - st.session_state['all_terms'] = DEFAULT_TERMS -... - -def insert_terms(terms_to_definition): - for term, definition in terms_to_definition.items(): - doc = Document(f"Term: {term}\nDefinition: {definition}") - st.session_state['llama_index'].insert(doc) - -@st.cache_resource -def initialize_index(llm_name, model_temperature, api_key): - """Create the GPTSQLStructStoreIndex object.""" - llm = get_llm(llm_name, model_temperature, api_key) - - service_context = ServiceContext.from_defaults(llm_predictor=LLMPredictor(llm=llm)) - - index = GPTSimpleVectorIndex([], service_context=service_context) - - return index - -... 
- -with upload_tab: - st.subheader("Extract and Query Definitions") - if st.button("Initialize Index and Reset Terms"): - st.session_state['llama_index'] = initialize_index(llm_name, model_temperature, api_key) - st.session_state['all_terms'] = {} - - if "llama_index" in st.session_state: - st.markdown("Either upload an image/screenshot of a document, or enter the text manually.") - document_text = st.text_area("Or enter raw text") - if st.button("Extract Terms and Definitions") and (uploaded_file or document_text): - st.session_state['terms'] = {} - terms_docs = {} - with st.spinner("Extracting..."): - terms_docs.update(extract_terms([Document(document_text)], term_extract_str, llm_name, model_temperature, api_key)) - st.session_state['terms'].update(terms_docs) - - if "terms" in st.session_state and st.session_state["terms"]:: - st.markdown("Extracted terms") - st.json(st.session_state['terms']) - - if st.button("Insert terms?"): - with st.spinner("Inserting terms"): - insert_terms(st.session_state['terms']) - st.session_state['all_terms'].update(st.session_state['terms']) - st.session_state['terms'] = {} - st.experimental_rerun() -``` - -Now you are really starting to leverage the power of streamlit! Let's start with the code under the upload tab. We added a button to initialize the vector index, and we store it in the global streamlit state dictionary, as well as resetting the currently extracted terms. Then, after extracting terms from the input text, we store it the extracted terms in the global state again and give the user a chance to review them before inserting. If the insert button is pressed, then we call our insert terms function, update our global tracking of inserted terms, and remove the most recently extracted terms from the session state. - -## Querying for Extracted Terms/Definitions - -With the terms and definitions extracted and saved, how can we use them? And how will the user even remember what's previously been saved?? We can simply add some more tabs to the app to handle these features. - -```python -... -setup_tab, terms_tab, upload_tab, query_tab = st.tabs( - ["Setup", "All Terms", "Upload/Extract Terms", "Query Terms"] -) -... -with terms_tab: - with terms_tab: - st.subheader("Current Extracted Terms and Definitions") - st.json(st.session_state["all_terms"]) -... -with query_tab: - st.subheader("Query for Terms/Definitions!") - st.markdown( - ( - "The LLM will attempt to answer your query, and augment it's answers using the terms/definitions you've inserted. " - "If a term is not in the index, it will answer using it's internal knowledge." - ) - ) - if st.button("Initialize Index and Reset Terms", key="init_index_2"): - st.session_state["llama_index"] = initialize_index( - llm_name, model_temperature, api_key - ) - st.session_state["all_terms"] = {} - - if "llama_index" in st.session_state: - query_text = st.text_input("Ask about a term or definition:") - if query_text: - query_text = query_text + "\nIf you can't find the answer, answer the query with the best of your knowledge." - with st.spinner("Generating answer..."): - response = st.session_state["llama_index"].query( - query_text, similarity_top_k=5, response_mode="compact" - ) - st.markdown(str(response)) -``` - -While this is mostly basic, some important things to note: - -- Our initialize button has the same text as our other button. Streamlit will complain about this, so we provide a unique key instead. -- Some additional text has been added to the query! 
This is to try and compensate for times when the index does not have the answer. -- In our index query, we've specified two options: - - `similarity_top_k=5` means the index will fetch the top 5 closest matching terms/definitions to the query. - - `response_mode="compact"` means as much text as possible from the 5 matching terms/definitions will be used in each LLM call. Without this, the index would make at least 5 calls to the LLM, which can slow things down for the user. - -## Dry Run Test - -Well, actually I hope you've been testing as we went. But now, let's try one complete test. - -1. Refresh the app -2. Enter your LLM settings -3. Head over to the query tab -4. Ask the following: `What is a bunnyhug?` -5. The app should give some nonsense response. If you didn't know, a bunnyhug is another word for a hoodie, used by people from the Canadian Prairies! -6. Let's add this definition to the app. Open the upload tab and enter the following text: `A bunnyhug is a common term used to describe a hoodie. This term is used by people from the Canadian Prairies.` -7. Click the extract button. After a few moments, the app should display the correctly extracted term/definition. Click the insert term button to save it! -8. If we open the terms tab, the term and definition we just extracted should be displayed -9. Go back to the query tab and try asking what a bunnyhug is. Now, the answer should be correct! - -## Improvement #1 - Create a Starting Index - -With our base app working, it might feel like a lot of work to build up a useful index. What if we gave the user some kind of starting point to show off the app's query capabilities? We can do just that! First, let's make a small change to our app so that we save the index to disk after every upload: - -```python -def insert_terms(terms_to_definition): - for term, definition in terms_to_definition.items(): - doc = Document(f"Term: {term}\nDefinition: {definition}") - st.session_state['llama_index'].insert(doc) - # TEMPORARY - save to disk - st.session_state['llama_index'].save_to_disk("index.json") -``` - -Now, we need some document to extract from! The repository for this project used the wikipedia page on New York City, and you can find the text [here](https://github.com/jerryjliu/llama_index/blob/main/examples/test_wiki/data/nyc_text.txt). - -If you paste the text into the upload tab and run it (it may take some time), we can insert the extracted terms. Make sure to also copy the text for the extracted terms into a notepad or similar before inserting into the index! We will need them in a second. - -After inserting, remove the line of code we used to save the index to disk. With a starting index now saved, we can modify our `initialize_index` function to look like this: - -```python -@st.cache_resource -def initialize_index(llm_name, model_temperature, api_key): - """Create the GPTSQLStructStoreIndex object.""" - llm = get_llm(llm_name, model_temperature, api_key) - - service_context = ServiceContext.from_defaults(llm_predictor=LLMPredictor(llm=llm)) - - index = GPTSimpleVectorIndex.load_from_disk( - "./index.json", service_context=service_context - ) - - return index -``` - -Did you remember to save that giant list of extracted terms in a notepad? Now when our app initializes, we want to pass in the default terms that are in the index to our global terms state: - -```python -... -if "all_terms" not in st.session_state: - st.session_state["all_terms"] = DEFAULT_TERMS -... 
-``` - -Repeat the above anywhere where we were previously resetting the `all_terms` values. - -## Improvement #2 - (Refining) Better Prompts - -If you play around with the app a bit now, you might notice that it stopped following our prompt! Remember, we added to our `query_str` variable that if the term/definition could not be found, answer to the best of it's knowledge. But now if you try asking about random terms (like bunnyhug!), it may or may not follow those instructions. - -This is due to the concept of "refining" answers in Llama Index. Since we are querying across the top 5 matching results, sometimes all the results do not fit in a single prompt! OpenAI models typically have a max input size of 4097 tokens. So, Llama Index accounts for this by breaking up the matching results into chunks that will fit into the prompt. After Llama Index gets an initial answer from the first API call, it sends the next chunk to the API, along with the previous answer, and asks the model to refine that answer. - -So, the refine process seems to be messing with our results! Rather than appending extra instructions to the `query_str`, remove that, and Llama Index will let us provide our own custom prompts! Let's create those now, using the [default prompts](https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/default_prompts.py) and [chat specific prompts](https://github.com/jerryjliu/llama_index/blob/main/gpt_index/prompts/chat_prompts.py) as a guide. Using a new file `constants.py`, let's create some new query templates: - -```python -from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model -from langchain.prompts.chat import ( - AIMessagePromptTemplate, - ChatPromptTemplate, - HumanMessagePromptTemplate, -) - -from gpt_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt - -# Text QA templates -DEFAULT_TEXT_QA_PROMPT_TMPL = ( - "Context information is below. \n" - "---------------------\n" - "{context_str}" - "\n---------------------\n" - "Given the context information answer the following question " - "(if you don't know the answer, use the best of your knowledge): {query_str}\n" -) -TEXT_QA_TEMPLATE = QuestionAnswerPrompt(DEFAULT_TEXT_QA_PROMPT_TMPL) - -# Refine templates -DEFAULT_REFINE_PROMPT_TMPL = ( - "The original question is as follows: {query_str}\n" - "We have provided an existing answer: {existing_answer}\n" - "We have the opportunity to refine the existing answer " - "(only if needed) with some more context below.\n" - "------------\n" - "{context_msg}\n" - "------------\n" - "Given the new context and using the best of your knowledge, improve the existing answer. " - "If you can't improve the existing answer, just repeat it again." -) -DEFAULT_REFINE_PROMPT = RefinePrompt(DEFAULT_REFINE_PROMPT_TMPL) - -CHAT_REFINE_PROMPT_TMPL_MSGS = [ - HumanMessagePromptTemplate.from_template("{query_str}"), - AIMessagePromptTemplate.from_template("{existing_answer}"), - HumanMessagePromptTemplate.from_template( - "We have the opportunity to refine the above answer " - "(only if needed) with some more context below.\n" - "------------\n" - "{context_msg}\n" - "------------\n" - "Given the new context and using the best of your knowledge, improve the existing answer. " - "If you can't improve the existing answer, just repeat it again." 
- ), -] - -CHAT_REFINE_PROMPT_LC = ChatPromptTemplate.from_messages(CHAT_REFINE_PROMPT_TMPL_MSGS) -CHAT_REFINE_PROMPT = RefinePrompt.from_langchain_prompt(CHAT_REFINE_PROMPT_LC) - -# refine prompt selector -DEFAULT_REFINE_PROMPT_SEL_LC = ConditionalPromptSelector( - default_prompt=DEFAULT_REFINE_PROMPT.get_langchain_prompt(), - conditionals=[(is_chat_model, CHAT_REFINE_PROMPT.get_langchain_prompt())], -) -REFINE_TEMPLATE = RefinePrompt( - langchain_prompt_selector=DEFAULT_REFINE_PROMPT_SEL_LC -) -``` - -So that seems like a lot of code, but it's not too bad! If you looked at the default prompts, you might have noticed that there are default prompts, and prompts specific to chat models. Continuing that trend, we do the same for our custom prompts. Then, using a prompt selector, we can combine both prompts into a single object. If the LLM being used is a chat model (ChatGPT, GPT-4), then the chat prompts are used. Otherwise, use the normal prompt templates. - -Another thing to note is that we only defined one QA template. In a chat model, this will be converted to a single "human" message. - -So, now we can import these prompts into our app and use them during the query. - -```python -from constants import REFINE_TEMPLATE, TEXT_QA_TEMPLATE -... - if "llama_index" in st.session_state: - query_text = st.text_input("Ask about a term or definition:") - if query_text: - query_text = query_text # Notice we removed the old instructions - with st.spinner("Generating answer..."): - response = st.session_state["llama_index"].query( - query_text, similarity_top_k=5, response_mode="compact", - text_qa_template=TEXT_QA_TEMPLATE, refine_template=REFINE_TEMPLATE - ) - st.markdown(str(response)) -... -``` - -If you experiment a bit more with queries, hopefully you notice that the responses follow our instructions a little better now! - -## Improvement #3 - Image Support - -Llama index also supports images! Using Llama Index, we can upload images of documents (papers, letters, etc.), and Llama Index handles extracting the text. We can leverage this to also allow users to upload images of their documents and extract terms and definitions from them. - -If you get an import error about PIL, install it using `pip install Pillow` first. - -```python -from PIL import Image -from llama_index.readers.file.base import DEFAULT_FILE_EXTRACTOR, ImageParser - -@st.cache_resource -def get_file_extractor(): - image_parser = ImageParser(keep_image=True, parse_text=True) - file_extractor = DEFAULT_FILE_EXTRACTOR - file_extractor.update( - { - ".jpg": image_parser, - ".png": image_parser, - ".jpeg": image_parser, - } - ) - - return file_extractor - -file_extractor = get_file_extractor() -... -with upload_tab: - st.subheader("Extract and Query Definitions") - if st.button("Initialize Index and Reset Terms", key="init_index_1"): - st.session_state["llama_index"] = initialize_index( - llm_name, model_temperature, api_key - ) - st.session_state["all_terms"] = DEFAULT_TERMS - - if "llama_index" in st.session_state: - st.markdown( - "Either upload an image/screenshot of a document, or enter the text manually." 
- ) - uploaded_file = st.file_uploader( - "Upload an image/screenshot of a document:", type=["png", "jpg", "jpeg"] - ) - document_text = st.text_area("Or enter raw text") - if st.button("Extract Terms and Definitions") and ( - uploaded_file or document_text - ): - st.session_state["terms"] = {} - terms_docs = {} - with st.spinner("Extracting (images may be slow)..."): - if document_text: - terms_docs.update( - extract_terms( - [Document(document_text)], - term_extract_str, - llm_name, - model_temperature, - api_key, - ) - ) - if uploaded_file: - Image.open(uploaded_file).convert("RGB").save("temp.png") - img_reader = SimpleDirectoryReader( - input_files=["temp.png"], file_extractor=file_extractor - ) - img_docs = img_reader.load_data() - os.remove("temp.png") - terms_docs.update( - extract_terms( - img_docs, - term_extract_str, - llm_name, - model_temperature, - api_key, - ) - ) - st.session_state["terms"].update(terms_docs) - - if "terms" in st.session_state and st.session_state["terms"]: - st.markdown("Extracted terms") - st.json(st.session_state["terms"]) - - if st.button("Insert terms?"): - with st.spinner("Inserting terms"): - insert_terms(st.session_state["terms"]) - st.session_state["all_terms"].update(st.session_state["terms"]) - st.session_state["terms"] = {} - st.experimental_rerun() -``` - -Here, we added the option to upload a file using Streamlit. Then the image is opened and saved to disk (this seems hacky but it keeps things simple). Then we pass the image path to the reader, extract the documents/text, and remove our temp image file. - -Now that we have the documents, we can call `extract_terms()` the same as before. - -## Conclusion/TLDR - -In this tutorial, we covered a ton of information, while solving some common issues and problems along the way: - -- Using different indexes for different use cases (List vs. Vector index) -- Storing global state values with Streamlit's `session_state` concept -- Customizing internal prompts with Llama Index -- Reading text from images with Llama Index - -The final version of this tutorial can be found [here](https://github.com/logan-markewich/llama_index_starter_pack). 
diff --git a/spaces/luckli/anon8231489123-gpt4-x-alpaca-13b-native-4bit-128g/README.md b/spaces/luckli/anon8231489123-gpt4-x-alpaca-13b-native-4bit-128g/README.md deleted file mode 100644 index 98ad75557d22e6956164ed8437df95dea5246e33..0000000000000000000000000000000000000000 --- a/spaces/luckli/anon8231489123-gpt4-x-alpaca-13b-native-4bit-128g/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Anon8231489123 Gpt4 X Alpaca 13b Native 4bit 128g -emoji: 👀 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lukesteuber/textual/README.md b/spaces/lukesteuber/textual/README.md deleted file mode 100644 index 377abb330f86eda10d1481fc2e5c1ef18ec15bd5..0000000000000000000000000000000000000000 --- a/spaces/lukesteuber/textual/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: mindmosaic -emoji: ☸️ -colorFrom: purple -colorTo: gold -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -duplicated_from: mosaicml/mpt-7b-chat ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_canny.py b/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_canny.py deleted file mode 100644 index a313ffda0a74b6373e90681aba6cd0e9a8736c86..0000000000000000000000000000000000000000 --- a/spaces/lychees/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_canny.py +++ /dev/null @@ -1,183 +0,0 @@ -import cv2 -import gradio as gr -import numpy as np -import torch -from diffusers import ControlNetModel, StableDiffusionControlNetPipeline -from PIL import Image - -from diffusion_webui.utils.model_list import ( - controlnet_canny_model_list, - stable_model_list, -) -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - - -class StableDiffusionControlNetCannyGenerator: - def __init__(self): - self.pipe = None - - def load_model(self, stable_model_path, controlnet_model_path, scheduler): - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def controlnet_canny( - self, - image_path: str, - ): - image = Image.open(image_path) - image = np.array(image) - - image = cv2.Canny(image, 100, 200) - image = image[:, :, None] - image = np.concatenate([image, image, image], axis=2) - image = Image.fromarray(image) - - return image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - scheduler: str, - seed_generator: int, - ): - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_model_path, - scheduler=scheduler, - ) - - image = self.controlnet_canny(image_path=image_path) - - if seed_generator == 0: - 
random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_canny_image_file = gr.Image( - type="filepath", label="Image" - ) - - controlnet_canny_prompt = gr.Textbox( - lines=1, - placeholder="Prompt", - show_label=False, - ) - - controlnet_canny_negative_prompt = gr.Textbox( - lines=1, - placeholder="Negative Prompt", - show_label=False, - ) - with gr.Row(): - with gr.Column(): - controlnet_canny_stable_model_id = gr.Dropdown( - choices=stable_model_list, - value=stable_model_list[0], - label="Stable Model Id", - ) - - controlnet_canny_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - controlnet_canny_num_inference_step = gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - controlnet_canny_num_images_per_prompt = gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - with gr.Row(): - with gr.Column(): - controlnet_canny_model_id = gr.Dropdown( - choices=controlnet_canny_model_list, - value=controlnet_canny_model_list[0], - label="ControlNet Model Id", - ) - - controlnet_canny_scheduler = gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - - controlnet_canny_seed_generator = gr.Number( - value=0, - label="Seed Generator", - ) - controlnet_canny_predict = gr.Button(value="Generator") - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_canny_predict.click( - fn=StableDiffusionControlNetCannyGenerator().generate_image, - inputs=[ - controlnet_canny_image_file, - controlnet_canny_stable_model_id, - controlnet_canny_model_id, - controlnet_canny_prompt, - controlnet_canny_negative_prompt, - controlnet_canny_num_images_per_prompt, - controlnet_canny_guidance_scale, - controlnet_canny_num_inference_step, - controlnet_canny_scheduler, - controlnet_canny_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/m3hrdadfi/typo-detector/libs/__init__.py b/spaces/m3hrdadfi/typo-detector/libs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ma-xu/LIVE/thrust/examples/cpp_integration/host.cpp b/spaces/ma-xu/LIVE/thrust/examples/cpp_integration/host.cpp deleted file mode 100644 index 009f3fa87dd6e318c97a4749a392a57f01814bd6..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/examples/cpp_integration/host.cpp +++ /dev/null @@ -1,27 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include - -// defines the function prototype -#include "device.h" - -int main(void) -{ - // generate 20 random numbers on the host - thrust::host_vector h_vec(20); - thrust::default_random_engine rng; - thrust::generate(h_vec.begin(), h_vec.end(), rng); - - // interface to CUDA code - sort_on_device(h_vec); - - // print sorted array - thrust::copy(h_vec.begin(), h_vec.end(), std::ostream_iterator(std::cout, "\n")); - - return 0; -} - diff --git 
a/spaces/ma-xu/LIVE/thrust/thrust/generate.h b/spaces/ma-xu/LIVE/thrust/thrust/generate.h deleted file mode 100644 index a651dd0dccee089f4b31df03000e724fdab13648..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/generate.h +++ /dev/null @@ -1,213 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file generate.h - * \brief Fills a range with values "generated" from a function of no arguments - */ - -#pragma once - -#include -#include - -namespace thrust -{ - - -/*! \addtogroup transformations - * \{ - */ - - -/*! \p generate assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,last). - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The first element in the range of interest. - * \param last The last element in the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,last). - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable. - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate(thrust::host, v.begin(), v.end(), rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate_n - * \see http://www.sgi.com/tech/stl/generate.html - */ -template -__host__ __device__ - void generate(const thrust::detail::execution_policy_base &exec, - ForwardIterator first, - ForwardIterator last, - Generator gen); - - -/*! \p generate assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,last). - * - * \param first The first element in the range of interest. - * \param last The last element in the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,last). - * - * \tparam ForwardIterator is a model of Forward Iterator, - * and \p ForwardIterator is mutable. - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to \p ForwardIterator's \c value_type. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand. 
- * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate(v.begin(), v.end(), rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate_n - * \see http://www.sgi.com/tech/stl/generate.html - */ -template - void generate(ForwardIterator first, - ForwardIterator last, - Generator gen); - - -/*! \p generate_n assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,first + n). The return value is first + n. - * - * The algorithm's execution is parallelized as determined by \p exec. - * - * \param exec The execution policy to use for parallelization. - * \param first The first element in the range of interest. - * \param n The size of the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,first + n). - * - * \tparam DerivedPolicy The name of the derived execution policy. - * \tparam OutputIterator is a model of Output Iterator. - * \tparam Size is an integral type (either signed or unsigned). - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to a type in \p OutputIterator's set of \c value_types. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand using the \p thrust::host execution policy for parallelization: - * - * \code - * #include - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate_n(thrust::host, v.begin(), 10, rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate - * \see http://www.sgi.com/tech/stl/generate.html - */ -template -__host__ __device__ - OutputIterator generate_n(const thrust::detail::execution_policy_base &exec, - OutputIterator first, - Size n, - Generator gen); - - -/*! \p generate_n assigns the result of invoking \p gen, a function object that takes no arguments, - * to each element in the range [first,first + n). The return value is first + n. - * - * \param first The first element in the range of interest. - * \param n The size of the range of interest. - * \param gen A function argument, taking no parameters, used to generate values to assign to - * elements in the range [first,first + n). - * - * \tparam OutputIterator is a model of Output Iterator. - * \tparam Size is an integral type (either signed or unsigned). - * \tparam Generator is a model of Generator, - * and \p Generator's \c result_type is convertible to a type in \p OutputIterator's set of \c value_types. - * - * The following code snippet demonstrates how to fill a \c host_vector with random numbers, - * using the standard C library function \c rand. - * - * \code - * #include - * #include - * #include - * ... - * thrust::host_vector v(10); - * srand(13); - * thrust::generate_n(v.begin(), 10, rand); - * - * // the elements of v are now pseudo-random numbers - * \endcode - * - * \see generate - * \see http://www.sgi.com/tech/stl/generate.html - */ -template - OutputIterator generate_n(OutputIterator first, - Size n, - Generator gen); - - -/*! 
\} // end transformations - */ - -} // end namespace thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/iterator/zip_iterator.h b/spaces/ma-xu/LIVE/thrust/thrust/iterator/zip_iterator.h deleted file mode 100644 index 7b86d06d513253c5c89dd1d88ef508bbc2a3684f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/iterator/zip_iterator.h +++ /dev/null @@ -1,245 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file thrust/iterator/zip_iterator.h - * \brief An iterator which returns a tuple of the result of dereferencing - * a tuple of iterators when dereferenced - */ - -/* - * Copyright David Abrahams and Thomas Becker 2000-2006. - * - * Distributed under the Boost Software License, Version 1.0. - * (See accompanying NOTICE file for the complete license) - * - * For more information, see http://www.boost.org - */ - -#pragma once - -#include -#include -#include -#include - -namespace thrust -{ - -/*! \addtogroup iterators - * \{ - */ - -/*! \addtogroup fancyiterator Fancy Iterators - * \ingroup iterators - * \{ - */ - -/*! \p zip_iterator is an iterator which represents a pointer into a range - * of \p tuples whose elements are themselves taken from a \p tuple of input - * iterators. This iterator is useful for creating a virtual array of structures - * while achieving the same performance and bandwidth as the structure of arrays - * idiom. \p zip_iterator also facilitates kernel fusion by providing a convenient - * means of amortizing the execution of the same operation over multiple ranges. - * - * The following code snippet demonstrates how to create a \p zip_iterator - * which represents the result of "zipping" multiple ranges together. - * - * \code - * #include - * #include - * #include - * ... 
- * thrust::device_vector int_v(3); - * int_v[0] = 0; int_v[1] = 1; int_v[2] = 2; - * - * thrust::device_vector float_v(3); - * float_v[0] = 0.0f; float_v[1] = 1.0f; float_v[2] = 2.0f; - * - * thrust::device_vector char_v(3); - * char_v[0] = 'a'; char_v[1] = 'b'; char_v[2] = 'c'; - * - * // typedef these iterators for shorthand - * typedef thrust::device_vector::iterator IntIterator; - * typedef thrust::device_vector::iterator FloatIterator; - * typedef thrust::device_vector::iterator CharIterator; - * - * // typedef a tuple of these iterators - * typedef thrust::tuple IteratorTuple; - * - * // typedef the zip_iterator of this tuple - * typedef thrust::zip_iterator ZipIterator; - * - * // finally, create the zip_iterator - * ZipIterator iter(thrust::make_tuple(int_v.begin(), float_v.begin(), char_v.begin())); - * - * *iter; // returns (0, 0.0f, 'a') - * iter[0]; // returns (0, 0.0f, 'a') - * iter[1]; // returns (1, 1.0f, 'b') - * iter[2]; // returns (2, 2.0f, 'c') - * - * thrust::get<0>(iter[2]); // returns 2 - * thrust::get<1>(iter[0]); // returns 0.0f - * thrust::get<2>(iter[1]); // returns 'b' - * - * // iter[3] is an out-of-bounds error - * \endcode - * - * Defining the type of a \p zip_iterator can be complex. The next code example demonstrates - * how to use the \p make_zip_iterator function with the \p make_tuple function to avoid - * explicitly specifying the type of the \p zip_iterator. This example shows how to use - * \p zip_iterator to copy multiple ranges with a single call to \p thrust::copy. - * - * \code - * #include - * #include - * #include - * - * int main() - * { - * thrust::device_vector int_in(3), int_out(3); - * int_in[0] = 0; - * int_in[1] = 1; - * int_in[2] = 2; - * - * thrust::device_vector float_in(3), float_out(3); - * float_in[0] = 0.0f; - * float_in[1] = 10.0f; - * float_in[2] = 20.0f; - * - * thrust::copy(thrust::make_zip_iterator(thrust::make_tuple(int_in.begin(), float_in.begin())), - * thrust::make_zip_iterator(thrust::make_tuple(int_in.end(), float_in.end())), - * thrust::make_zip_iterator(thrust::make_tuple(int_out.begin(),float_out.begin()))); - * - * // int_out is now [0, 1, 2] - * // float_out is now [0.0f, 10.0f, 20.0f] - * - * return 0; - * } - * \endcode - * - * \see make_zip_iterator - * \see make_tuple - * \see tuple - * \see get - */ -template - class zip_iterator - : public detail::zip_iterator_base::type -{ - public: - /*! Null constructor does nothing. - */ - inline __host__ __device__ - zip_iterator(); - - /*! This constructor creates a new \p zip_iterator from a - * \p tuple of iterators. - * - * \param iterator_tuple The \p tuple of iterators to copy from. - */ - inline __host__ __device__ - zip_iterator(IteratorTuple iterator_tuple); - - /*! This copy constructor creates a new \p zip_iterator from another - * \p zip_iterator. - * - * \param other The \p zip_iterator to copy. - */ - template - inline __host__ __device__ - zip_iterator(const zip_iterator &other, - typename thrust::detail::enable_if_convertible< - OtherIteratorTuple, - IteratorTuple - >::type * = 0); - - /*! This method returns a \c const reference to this \p zip_iterator's - * \p tuple of iterators. - * - * \return A \c const reference to this \p zip_iterator's \p tuple - * of iterators. - */ - inline __host__ __device__ - const IteratorTuple &get_iterator_tuple() const; - - /*! 
\cond - */ - private: - typedef typename - detail::zip_iterator_base::type super_t; - - friend class thrust::iterator_core_access; - - // Dereferencing returns a tuple built from the dereferenced - // iterators in the iterator tuple. - __host__ __device__ - typename super_t::reference dereference() const; - - // Two zip_iterators are equal if the two first iterators of the - // tuple are equal. Note this differs from Boost's implementation, which - // considers the entire tuple. - template - inline __host__ __device__ - bool equal(const zip_iterator &other) const; - - // Advancing a zip_iterator means to advance all iterators in the tuple - inline __host__ __device__ - void advance(typename super_t::difference_type n); - - // Incrementing a zip iterator means to increment all iterators in the tuple - inline __host__ __device__ - void increment(); - - // Decrementing a zip iterator means to decrement all iterators in the tuple - inline __host__ __device__ - void decrement(); - - // Distance is calculated using the first iterator in the tuple. - template - inline __host__ __device__ - typename super_t::difference_type - distance_to(const zip_iterator &other) const; - - // The iterator tuple. - IteratorTuple m_iterator_tuple; - - /*! \endcond - */ -}; // end zip_iterator - -/*! \p make_zip_iterator creates a \p zip_iterator from a \p tuple - * of iterators. - * - * \param t The \p tuple of iterators to copy. - * \return A newly created \p zip_iterator which zips the iterators encapsulated in \p t. - * - * \see zip_iterator - */ -template -inline __host__ __device__ -zip_iterator make_zip_iterator(IteratorTuple t); - -/*! \} // end fancyiterators - */ - -/*! \} // end iterators - */ - -} // end thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/random/subtract_with_carry_engine.h b/spaces/ma-xu/LIVE/thrust/thrust/random/subtract_with_carry_engine.h deleted file mode 100644 index 0b12ca3530a5bed1d38b816359fcce4b99d6d9d5..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/random/subtract_with_carry_engine.h +++ /dev/null @@ -1,256 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file subtract_with_carry_engine.h - * \brief A subtract-with-carry pseudorandom number generator - * based on Marsaglia & Zaman. - */ - -#pragma once - -#include -#include - -#include -#include // for size_t -#include - -namespace thrust -{ - -namespace random -{ - - -/*! \addtogroup random_number_engine_templates - * \{ - */ - -/*! \class subtract_with_carry_engine - * \brief A \p subtract_with_carry_engine random number engine produces unsigned - * integer random numbers using the subtract with carry algorithm of Marsaglia & Zaman. - * - * The generation algorithm is performed as follows: - * -# Let Y = X_{i-s}- X_{i-r} - c. - * -# Set X_i to y = T mod m. Set \c c to \c 1 if Y < 0, otherwise set \c c to \c 0. 
- * - * This algorithm corresponds to a modular linear function of the form - * - * TA(x_i) = (a * x_i) mod b, where \c b is of the form m^r - m^s + 1 and - * a = b - (b-1)/m. - * - * \tparam UIntType The type of unsigned integer to produce. - * \tparam w The word size of the produced values ( w <= sizeof(UIntType)). - * \tparam s The short lag of the generation algorithm. - * \tparam r The long lag of the generation algorithm. - * - * \note Inexperienced users should not use this class template directly. Instead, use - * \p ranlux24_base or \p ranlux48_base, which are instances of \p subtract_with_carry_engine. - * - * \see thrust::random::ranlux24_base - * \see thrust::random::ranlux48_base - */ -template - class subtract_with_carry_engine -{ - /*! \cond - */ - private: - static const UIntType modulus = UIntType(1) << w; - /*! \endcond - */ - - public: - // types - - /*! \typedef result_type - * \brief The type of the unsigned integer produced by this \p subtract_with_carry_engine. - */ - typedef UIntType result_type; - - // engine characteristics - - /*! The word size of the produced values. - */ - static const size_t word_size = w; - - /*! The size of the short lag used in the generation algorithm. - */ - static const size_t short_lag = s; - - /*! The size of the long lag used in the generation algorithm. - */ - static const size_t long_lag = r; - - /*! The smallest value this \p subtract_with_carry_engine may potentially produce. - */ - static const result_type min = 0; - - /*! The largest value this \p subtract_with_carry_engine may potentially produce. - */ - static const result_type max = modulus - 1; - - /*! The default seed of this \p subtract_with_carry_engine. - */ - static const result_type default_seed = 19780503u; - - // constructors and seeding functions - - /*! This constructor, which optionally accepts a seed, initializes a new - * \p subtract_with_carry_engine. - * - * \param value The seed used to intialize this \p subtract_with_carry_engine's state. - */ - __host__ __device__ - explicit subtract_with_carry_engine(result_type value = default_seed); - - /*! This method initializes this \p subtract_with_carry_engine's state, and optionally accepts - * a seed value. - * - * \param value The seed used to initializes this \p subtract_with_carry_engine's state. - */ - __host__ __device__ - void seed(result_type value = default_seed); - - // generating functions - - /*! This member function produces a new random value and updates this \p subtract_with_carry_engine's state. - * \return A new random number. - */ - __host__ __device__ - result_type operator()(void); - - /*! This member function advances this \p subtract_with_carry_engine's state a given number of times - * and discards the results. - * - * \param z The number of random values to discard. - * \note This function is provided because an implementation may be able to accelerate it. - */ - __host__ __device__ - void discard(unsigned long long z); - - /*! \cond - */ - private: - result_type m_x[long_lag]; - unsigned int m_k; - int m_carry; - - friend struct thrust::random::detail::random_core_access; - - __host__ __device__ - bool equal(const subtract_with_carry_engine &rhs) const; - - template - std::basic_ostream& stream_out(std::basic_ostream &os) const; - - template - std::basic_istream& stream_in(std::basic_istream &is); - - /*! \endcond - */ -}; // end subtract_with_carry_engine - - -/*! This function checks two \p subtract_with_carry_engines for equality. 
- * \param lhs The first \p subtract_with_carry_engine to test. - * \param rhs The second \p subtract_with_carry_engine to test. - * \return \c true if \p lhs is equal to \p rhs; \c false, otherwise. - */ -template -__host__ __device__ -bool operator==(const subtract_with_carry_engine &lhs, - const subtract_with_carry_engine &rhs); - - -/*! This function checks two \p subtract_with_carry_engines for inequality. - * \param lhs The first \p subtract_with_carry_engine to test. - * \param rhs The second \p subtract_with_carry_engine to test. - * \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise. - */ -template -__host__ __device__ -bool operator!=(const subtract_with_carry_engine&lhs, - const subtract_with_carry_engine&rhs); - - -/*! This function streams a subtract_with_carry_engine to a \p std::basic_ostream. - * \param os The \p basic_ostream to stream out to. - * \param e The \p subtract_with_carry_engine to stream out. - * \return \p os - */ -template -std::basic_ostream& -operator<<(std::basic_ostream &os, - const subtract_with_carry_engine &e); - - -/*! This function streams a subtract_with_carry_engine in from a std::basic_istream. - * \param is The \p basic_istream to stream from. - * \param e The \p subtract_with_carry_engine to stream in. - * \return \p is - */ -template -std::basic_istream& -operator>>(std::basic_istream &is, - subtract_with_carry_engine &e); - - -/*! \} // end random_number_engine_templates - */ - - -/*! \addtogroup predefined_random - * \{ - */ - -// XXX N2111 uses uint_fast32_t here - -/*! \typedef ranlux24_base - * \brief A random number engine with predefined parameters which implements the - * base engine of the \p ranlux24 random number engine. - * \note The 10000th consecutive invocation of a default-constructed object of type \p ranlux24_base - * shall produce the value \c 7937952 . - */ -typedef subtract_with_carry_engine ranlux24_base; - - -// XXX N2111 uses uint_fast64_t here - -/*! \typedef ranlux48_base - * \brief A random number engine with predefined parameters which implements the - * base engine of the \p ranlux48 random number engine. - * \note The 10000th consecutive invocation of a default-constructed object of type \p ranlux48_base - * shall produce the value \c 192113843633948 . - */ -typedef subtract_with_carry_engine ranlux48_base; - -/*! 
\} // end predefined_random - */ - -} // end random - -// import names into thrust:: -using random::subtract_with_carry_engine; -using random::ranlux24_base; -using random::ranlux48_base; - -} // end thrust - -#include - diff --git a/spaces/manhdo/head_pose_estimation_tracking_app/utils/detection.py b/spaces/manhdo/head_pose_estimation_tracking_app/utils/detection.py deleted file mode 100644 index e454d0617ee78a7608ca193611d683b4b076d804..0000000000000000000000000000000000000000 --- a/spaces/manhdo/head_pose_estimation_tracking_app/utils/detection.py +++ /dev/null @@ -1,205 +0,0 @@ -import numpy as np -import cv2 - -from .general import resize_img - - -def detect_face_pose_informations_from_image(image, face_detection, face_mesh, img_size, pad_infos, - img_size_no_pad, head_pose_info, get_descending_order=True): - """ - Input: - `image` (np.ndarray): Image for face landmarks detection - `face_detection` (mp.solutions.face_detection): face detection model - `face_mesh` (mp.solutions.face_mesh): landmarks detection model - `img_size` (int, int): shape of the image (width, height) - `pad_infos` (int,int,int,int): number of padding each corner (top,bottom,left,right) - `img_size_no_pad` (int, int): new image size but no padding - `head_pose_info` (dict): head pose configs dictionary - Return: - `face_pose_infos` (list(list[float])): pose info for each face (x,y,z) - `face_direction_infos` (list[str]): direction for each face - """ - - img_w, img_h = img_size - pad_top, pad_bot, pad_left, pad_right = pad_infos - - chosen_rectangle_pos_list = [] - face_detection_infos = [] - face_coordinate_infos = [] - face_pose_infos = [] - face_direction_infos = [] - face_position_infos = [] - face_area_infos = [] - - # Get the result - detection_results = face_detection.process(image) - - if detection_results.detections: - for _, detection in enumerate(detection_results.detections): - - bbox_info = detection.location_data.relative_bounding_box - xmin = bbox_info.xmin - ymin = bbox_info.ymin - width = bbox_info.width - height = bbox_info.height - # Check area - width_ori = (width * img_w) / (img_w - pad_top - pad_bot) - height_ori = (height * img_h) / (img_h - pad_left - pad_right) - area = width_ori * height_ori - - if area < head_pose_info['FACE_OPTIONS']['MIN_FACE_RATIO_OVER_IMAGE']: - continue - - face_area_infos.append(area) - - face_sample_pad = head_pose_info['FACE_SAMPLE_PAD'] - - x_center = xmin + width / 2 - y_center = ymin + height / 2 - face_detection_infos.append(detection) - - x_center_int = int(x_center * img_w) - y_center_int = int(y_center * img_h) - - x_center_int_ori = x_center_int - pad_left # x center according to original size - y_center_int_ori = y_center_int - pad_top # y center according to original size - - x_center_ori = min(max(x_center_int_ori / img_size_no_pad[1], 0), 1) - y_center_ori = min(max(y_center_int_ori / img_size_no_pad[0], 0), 1) - - xmax = int((xmin + width) * img_w) - ymax = int((ymin + height) * img_h) - xmin = int(xmin * img_w) - ymin = int(ymin * img_h) - - - face_coordinate_infos.append((xmin,ymin,xmax,ymax)) - - image_pad = cv2.copyMakeBorder(image.copy(), img_h, img_h, img_w, img_w, cv2.BORDER_CONSTANT, value=(0, 0, 0)) - image_face_ori = image_pad[ymin + img_h - face_sample_pad:ymax + img_h + face_sample_pad, - xmin + img_w - face_sample_pad: xmax + img_h + face_sample_pad] - - face_position = 'MIDDLE' - for head_pose_pos_name, head_pose_pos in head_pose_info['POSITION'].items(): - if (head_pose_pos[0][0] < x_center_ori and x_center_ori <= head_pose_pos[1][0]) 
and \ - (head_pose_pos[0][1] < y_center_ori and y_center_ori <= head_pose_pos[1][1]): - face_position = head_pose_pos_name - chosen_rectangle_pos_list.append(head_pose_pos) - break - face_position_infos.append(face_position) - - face_pose_infos_for_cur_face, face_direction_infos_for_cur_face = \ - detect_landmarks_from_image(image_face_ori, face_mesh, img_size, head_pose_info, get_one_landmark=True) - - face_pose_infos.extend(face_pose_infos_for_cur_face) - face_direction_infos.extend(face_direction_infos_for_cur_face) - - if get_descending_order: # according to face area - face_orders = np.argsort(face_area_infos)[::-1] - - face_detection_infos = list(np.array(face_detection_infos)[face_orders]) - face_direction_infos = list(np.array(face_direction_infos)[face_orders]) - face_position_infos = list(np.array(face_position_infos)[face_orders]) - face_coordinate_infos = list(np.array(face_coordinate_infos)[face_orders]) - face_pose_infos = list(np.array(face_pose_infos)[face_orders]) - face_area_infos = list(np.array(face_area_infos)[face_orders]) - - chosen_rectangle_pos_list = [chosen_rectangle_pos for _, chosen_rectangle_pos in sorted(zip(face_orders, chosen_rectangle_pos_list))] - - - return face_detection_infos, face_direction_infos, face_position_infos, face_coordinate_infos, face_pose_infos, face_area_infos, chosen_rectangle_pos_list - - -def detect_landmarks_from_image(image, face_mesh, img_size, head_pose_info, get_one_landmark=False): - """ - Input: - `image` (np.ndarray): Image for face landmarks detection - `img_size` (int, int): shape of the image (width, height) - `face_mesh` (mp.solutions.face_mesh): landmarks detection model - `get_one_landmark` (bool): In case you just want predict one landmarks for one face, `True` for return one landmarks infos, `False` otherwise - `head_pose_info` (dict): head pose configs dictionary - Return: - `face_pose_infos` (list(list[float])): pose info for each face (x,y,z) - `face_direction_infos` (list[str]): direction for each face - """ - image_resize = resize_img(image, img_size) - results = face_mesh.process(image_resize) - - # Multi landmarks informations from landmarks detection model - multi_face_landmarks = results.multi_face_landmarks - - if not multi_face_landmarks: - return [(0, 0, 0)], ['NO FACE'] - else: - face_pose_infos, face_direction_infos = [], [] - img_w, img_h = img_size - - face_3d = [] - face_2d = [] - - for face_landmarks in multi_face_landmarks: - for idx, lm in enumerate(face_landmarks.landmark): - if idx == 33 or idx == 263 or idx == 1 or idx == 61 or idx == 291 or idx == 199: - if idx == 1: - nose_2d = (lm.x * img_w, lm.y * img_h) - nose_3d = (lm.x * img_w, lm.y * img_h, lm.z * 3000) - - x, y = int(lm.x * img_w), int(lm.y * img_h) - - # Get the 2D Coordinates - face_2d.append([x, y]) - - # Get the 3D Coordinates - face_3d.append([x, y, lm.z]) - - # Convert to Numpy array - face_2d = np.array(face_2d, dtype=np.float64) - face_3d = np.array(face_3d, dtype=np.float64) - - # The camera matrix - focal_length = 1 * img_w - - cam_matrix = np.array([ [focal_length, 0, img_h/2], - [0, focal_length, img_w/2], - [0, 0, 1]]) - - # The distortion parameters - dist_matrix = np.zeros((4, 1), dtype=np.float64) - - # Solve PnP - success, rot_vec, trans_vec = cv2.solvePnP(face_3d, face_2d, cam_matrix,dist_matrix) - - # Get rotational matrix - rmat, jac = cv2.Rodrigues(rot_vec) - - # Get angles - angles, mtxR, mtxQ, Qx, Qy, Qz = cv2.RQDecomp3x3(rmat) - - # Get the y rotation degree - x = angles[0] * 360 - y = angles[1] * 360 - z = 
angles[2] * 360 - - face_pose_infos.append((x,y,z)) - - # See where the user's head tilting - face_direction = 'NOT FRONT' - for head_pose_direction_name, head_pose_direction in head_pose_info['DIRECTION'].items(): - if (head_pose_direction[0][0] < x and x <= head_pose_direction[1][0]) and \ - (head_pose_direction[0][1] < y and y <= head_pose_direction[1][1]): - face_direction = head_pose_direction_name - break - face_direction_infos.append(face_direction) - - # Display the nose direction - nose_3d_projection, jacobian = cv2.projectPoints(nose_3d, rot_vec, trans_vec, cam_matrix, dist_matrix) - - p1 = (int(nose_2d[0]), int(nose_2d[1])) - p2 = (int(nose_2d[0] + y*10), int(nose_2d[1] - x*10)) - - # cv2.line(image_face, p1, p2, (255, 0, 0), 3) - - if get_one_landmark: - break - - return face_pose_infos, face_direction_infos \ No newline at end of file diff --git a/spaces/manymoon22173/RVC_MODELS/infer_pack/modules.py b/spaces/manymoon22173/RVC_MODELS/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/manymoon22173/RVC_MODELS/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/martinlmedina/tf_hub_Fast_Style_Transfer_for_Arbitrary_Styles_v2/app.py b/spaces/martinlmedina/tf_hub_Fast_Style_Transfer_for_Arbitrary_Styles_v2/app.py deleted file mode 100644 index 022fa33e178d7494b8211959c42eaf8f367c2a96..0000000000000000000000000000000000000000 --- a/spaces/martinlmedina/tf_hub_Fast_Style_Transfer_for_Arbitrary_Styles_v2/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import gradio as gr -import numpy as np -from PIL import Image -import tensorflow as tf -import tensorflow_hub as hub - -style_transfer_model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2") - -def perform_style_transfer(content_image, style_image): - - content_image = tf.convert_to_tensor(content_image, np.float32)[tf.newaxis, ...] / 255. - style_image = tf.convert_to_tensor(style_image, np.float32)[tf.newaxis, ...] / 255. - - output = style_transfer_model(content_image, style_image) - stylized_image = output[0] - - return Image.fromarray(np.uint8(stylized_image[0] * 255)) - - -content_image_input = gr.inputs.Image(label="Content Image") -style_image_input = gr.inputs.Image(shape=(256, 256), label="Style Image") - -# Examples -golden_gate = ["golden_gate_bridge.jpeg", "the_great_wave.jpeg"] -joshua_tree = ["joshua_tree.jpeg", "starry_night.jpeg"] -glacier = ["glacier_national_park.jpeg", "the_scream.jpg"] - -app_interface = gr.Interface(fn=perform_style_transfer, - inputs=[content_image_input, style_image_input], - outputs="image", - title="Fast Neural Style Transfer", - description="Gradio demo for Fast Neural Style Transfer using a pretrained Image Stylization model from TensorFlow Hub. To use it, simply upload a content image and style image, or click one of the examples to load them. To learn more about the project, please find the references listed below.", - examples=[glacier, golden_gate, joshua_tree], - article="**References**\n\n" - "1. Tutorial to implement Fast Neural Style Transfer using the pretrained model from TensorFlow Hub \n" - "2. The idea to build a neural style transfer application was inspired from this Hugging Face Space ") -app_interface.launch() diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/models/__init__.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/models/__init__.py deleted file mode 100644 index be6bfe4b787a132aeaabaed1c3437c9ecd5c656c..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Models for EnCodec, AudioGen, MusicGen, as well as the generic LMModel. -""" -# flake8: noqa -from . 
import builders, loaders -from .encodec import ( - CompressionModel, EncodecModel, DAC, - HFEncodecModel, HFEncodecCompressionModel) -from .audiogen import AudioGen -from .lm import LMModel -from .multibanddiffusion import MultiBandDiffusion -from .musicgen import MusicGen -from .unet import DiffusionUnet diff --git a/spaces/maurol/lyrics-translator/README.md b/spaces/maurol/lyrics-translator/README.md deleted file mode 100644 index 07af0186df8955831b6326693ee304cb8b074188..0000000000000000000000000000000000000000 --- a/spaces/maurol/lyrics-translator/README.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Lyrics Translator -emoji: 🎵 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - - - - - -

-# 🎵 LyricsTranslator - automated lyrics translation

    ---- - -**Documentation**: https://mauroluzzatto.github.io/lyrics-translator - -**Source Code**: https://github.com/MauroLuzzatto/lyrics-translator - ---- - - -The `LyricsTranslator` downloads lyrics from [genius](https://genius.com/) and uses 🤗[hugging face](https://huggingface.co/) to translate the lyrics into a target language. - - -All languages that are supported by [OPUS-MT](https://github.com/Helsinki-NLP/Opus-MT) are available for translation.The full list of list of languages can be found on 🤗[hugging face](https://huggingface.co/models?other=marian). - -- German: `de` -- Swedish: `sv` -- French: `fr` -- Spanish: `es` -- Chinese: `zh` -- Japanese: `ja` -- Portuguese: `pt` -- Arabic: `ar` -- Italian: `it` - -and many more ... diff --git a/spaces/maxmon/digital_double/src/proxy.py b/spaces/maxmon/digital_double/src/proxy.py deleted file mode 100644 index d60544cb3f76146bc1eb627d6d893464fc25213d..0000000000000000000000000000000000000000 --- a/spaces/maxmon/digital_double/src/proxy.py +++ /dev/null @@ -1,20 +0,0 @@ -import requests - -def chat(input, history=[]): - headers = { - "Authorization": "Bearer sk-TmUhOQKWsJ5t43QVoGBblSw3GFOMZwZhpGFlCGX7jxwedsdN" - } - _history = [] - for item in history: - _history.append({"role": "user", "content": item[0]}) - _history.append({"role": "assistant", "content": item[1]}) - j = { - "model": "gpt-3.5-turbo", - "messages": [*_history, {"role": "user", "content": input}], - "temperature": 0.7 - } - result = requests.post('https://api.aiproxy.io/v1/chat/completions', json=j, headers=headers) - return result.json()['choices'][0]['message']['content'] - -if __name__ == '__main__': - print(chat('你好')) diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts b/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts deleted file mode 100644 index 108ad3f4ad676b574668ee54fc0f30b38a90220c..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts +++ /dev/null @@ -1,9 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? 
{ [K in keyof O]: O[K] } : never; -type RouteParams = { id: string } -type RouteId = '/conversation/[id]/stop-generating'; - -export type EntryGenerator = () => Promise> | Array; -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/miyaaa666/bingo/src/components/ui/sheet.tsx b/spaces/miyaaa666/bingo/src/components/ui/sheet.tsx deleted file mode 100644 index c9f5ce0f81a91067bb013e988a07eb1e6bf6953b..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/ui/sheet.tsx +++ /dev/null @@ -1,122 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SheetPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Sheet = SheetPrimitive.Root - -const SheetTrigger = SheetPrimitive.Trigger - -const SheetClose = SheetPrimitive.Close - -const SheetPortal = ({ - className, - children, - ...props -}: SheetPrimitive.DialogPortalProps) => ( - - {children} - -) -SheetPortal.displayName = SheetPrimitive.Portal.displayName - -const SheetOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -SheetOverlay.displayName = SheetPrimitive.Overlay.displayName - -const SheetContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - {children} - - - Close - - - -)) -SheetContent.displayName = SheetPrimitive.Content.displayName - -const SheetHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -SheetHeader.displayName = 'SheetHeader' - -const SheetFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
    -) -SheetFooter.displayName = 'SheetFooter' - -const SheetTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetTitle.displayName = SheetPrimitive.Title.displayName - -const SheetDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SheetDescription.displayName = SheetPrimitive.Description.displayName - -export { - Sheet, - SheetTrigger, - SheetClose, - SheetContent, - SheetHeader, - SheetFooter, - SheetTitle, - SheetDescription -} diff --git a/spaces/miyaaa666/bingo/src/components/ui/tooltip.tsx b/spaces/miyaaa666/bingo/src/components/ui/tooltip.tsx deleted file mode 100644 index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000 --- a/spaces/miyaaa666/bingo/src/components/ui/tooltip.tsx +++ /dev/null @@ -1,30 +0,0 @@ -'use client' - -import * as React from 'react' -import * as TooltipPrimitive from '@radix-ui/react-tooltip' - -import { cn } from '@/lib/utils' - -const TooltipProvider = TooltipPrimitive.Provider - -const Tooltip = TooltipPrimitive.Root - -const TooltipTrigger = TooltipPrimitive.Trigger - -const TooltipContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - -)) -TooltipContent.displayName = TooltipPrimitive.Content.displayName - -export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } diff --git a/spaces/mnauf/detect-bees/utils/autobatch.py b/spaces/mnauf/detect-bees/utils/autobatch.py deleted file mode 100644 index bdeb91c3d2bd15e53eb65715228932d3e87e0989..0000000000000000000000000000000000000000 --- a/spaces/mnauf/detect-bees/utils/autobatch.py +++ /dev/null @@ -1,72 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Auto-batch utils -""" - -from copy import deepcopy - -import numpy as np -import torch - -from utils.general import LOGGER, colorstr -from utils.torch_utils import profile - - -def check_train_batch_size(model, imgsz=640, amp=True): - # Check YOLOv5 training batch size - with torch.cuda.amp.autocast(amp): - return autobatch(deepcopy(model).train(), imgsz) # compute optimal batch size - - -def autobatch(model, imgsz=640, fraction=0.8, batch_size=16): - # Automatically estimate best YOLOv5 batch size to use `fraction` of available CUDA memory - # Usage: - # import torch - # from utils.autobatch import autobatch - # model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False) - # print(autobatch(model)) - - # Check device - prefix = colorstr('AutoBatch: ') - LOGGER.info(f'{prefix}Computing optimal batch size for --imgsz {imgsz}') - device = next(model.parameters()).device # get model device - if device.type == 'cpu': - LOGGER.info(f'{prefix}CUDA not detected, using default CPU batch-size {batch_size}') - return batch_size - if torch.backends.cudnn.benchmark: - LOGGER.info(f'{prefix} ⚠️ Requires torch.backends.cudnn.benchmark=False, using default batch-size {batch_size}') - return batch_size - - # Inspect CUDA memory - gb = 1 << 30 # bytes to GiB (1024 ** 3) - d = str(device).upper() # 'CUDA:0' - properties = torch.cuda.get_device_properties(device) # device properties - t = properties.total_memory / gb # GiB total - r = torch.cuda.memory_reserved(device) / gb # GiB reserved - a = torch.cuda.memory_allocated(device) / gb # GiB allocated - f = t - (r + a) # GiB free - LOGGER.info(f'{prefix}{d} ({properties.name}) {t:.2f}G total, {r:.2f}G reserved, {a:.2f}G allocated, 
{f:.2f}G free') - - # Profile batch sizes - batch_sizes = [1, 2, 4, 8, 16] - try: - img = [torch.empty(b, 3, imgsz, imgsz) for b in batch_sizes] - results = profile(img, model, n=3, device=device) - except Exception as e: - LOGGER.warning(f'{prefix}{e}') - - # Fit a solution - y = [x[2] for x in results if x] # memory [2] - p = np.polyfit(batch_sizes[:len(y)], y, deg=1) # first degree polynomial fit - b = int((f * fraction - p[1]) / p[0]) # y intercept (optimal batch size) - if None in results: # some sizes failed - i = results.index(None) # first fail index - if b >= batch_sizes[i]: # y intercept above failure point - b = batch_sizes[max(i - 1, 0)] # select prior safe point - if b < 1 or b > 1024: # b outside of safe range - b = batch_size - LOGGER.warning(f'{prefix}WARNING ⚠️ CUDA anomaly detected, recommend restart environment and retry command.') - - fraction = (np.polyval(p, b) + r + a) / t # actual fraction predicted - LOGGER.info(f'{prefix}Using batch-size {b} for {d} {t * fraction:.2f}G/{t:.2f}G ({fraction * 100:.0f}%) ✅') - return b diff --git a/spaces/monra/freegpt-webui/client/css/conversation.css b/spaces/monra/freegpt-webui/client/css/conversation.css deleted file mode 100644 index d20f178c45e8ccbfc9539f99914b25fc572045bd..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/client/css/conversation.css +++ /dev/null @@ -1,158 +0,0 @@ -.conversation { - width: 60%; - margin: 0px 16px; - display: flex; - flex-direction: column; -} - -.conversation #messages { - width: 100%; - display: flex; - flex-direction: column; - overflow: auto; - overflow-wrap: break-word; - padding-bottom: 8px; -} - -.conversation .user-input { - max-height: 180px; - margin: 16px 0px; -} - -.conversation .user-input input { - font-size: 1rem; - background: none; - border: none; - outline: none; - color: var(--colour-3); -} - -.conversation .user-input input::placeholder { - color: var(--user-input); -} - -.conversation-title { - color: var(--colour-3); - font-size: 14px; -} - -.conversation .user-input textarea { - font-size: 1rem; - width: 100%; - height: 100%; - padding: 12px; - background: none; - border: none; - outline: none; - color: var(--colour-3); - resize: vertical; - max-height: 150px; - min-height: 80px; -} - -.box { - backdrop-filter: blur(20px); - -webkit-backdrop-filter: blur(20px); - background-color: var(--blur-bg); - height: 100%; - width: 100%; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); -} - -.box.input-box { - position: relative; - align-items: center; - padding: 8px; - cursor: pointer; -} - -#send-button { - position: absolute; - bottom: 25%; - right: 10px; - z-index: 1; - padding: 16px; -} - -#cursor { - line-height: 17px; - margin-left: 3px; - -webkit-animation: blink 0.8s infinite; - animation: blink 0.8s infinite; - width: 7px; - height: 15px; -} - -@keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -@-webkit-keyframes blink { - 0% { - background: #ffffff00; - } - - 50% { - background: white; - } - - 100% { - background: #ffffff00; - } -} - -/* scrollbar */ -.conversation #messages::-webkit-scrollbar { - width: 4px; - padding: 8px 0px; -} - -.conversation #messages::-webkit-scrollbar-track { - background-color: #ffffff00; -} - -.conversation #messages::-webkit-scrollbar-thumb { - background-color: #555555; - border-radius: 10px; -} - -@media screen and (max-width: 990px) { - .conversation { - width: 100%; - height: 90%; - } -} - -@media 
screen and (max-height: 720px) { - .conversation.box { - height: 70%; - } - - .conversation .user-input textarea { - font-size: 0.875rem; - } -} - -@media screen and (max-width: 360px) { - .box { - border-radius: 0; - } - .conversation { - margin: 0; - margin-top: 48px; - } - .conversation .user-input { - margin: 2px 0 8px 0; - } -} diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/model_parallel/modules/transformer_layer.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/model_parallel/modules/transformer_layer.py deleted file mode 100644 index 7ab53c6e5f12f15562717effb86ab8cb8d6b4fa3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/model_parallel/modules/transformer_layer.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.model_parallel.modules import ModelParallelMultiheadAttention -from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer - - -try: - from fairseq.model_parallel.megatron.mpu import ( - ColumnParallelLinear, - RowParallelLinear, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -class ModelParallelTransformerEncoderLayer(TransformerEncoderLayer): - """Encoder layer block over multiple gpus. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. - """ - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return ColumnParallelLinear(input_dim, output_dim, gather_output=False) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return RowParallelLinear(input_dim, output_dim, input_is_parallel=True) - - def build_self_attention(self, embed_dim, args, **unused_kwargs): - return ModelParallelMultiheadAttention( - embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - - -class ModelParallelTransformerDecoderLayer(TransformerDecoderLayer): - """Decoder layer block. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. 
- """ - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return ColumnParallelLinear(input_dim, output_dim, gather_output=False) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return RowParallelLinear(input_dim, output_dim, input_is_parallel=True) - - def build_self_attention(self, embed_dim, args, **unused_kwargs): - return ModelParallelMultiheadAttention( - embed_dim=embed_dim, - num_heads=args.decoder_attention_heads, - dropout=args.attention_dropout, - self_attention=not getattr(args, "cross_self_attention", False), - ) - - def build_encoder_attention(self, embed_dim, args, **unused_kwargs): - return ModelParallelMultiheadAttention( - embed_dim=embed_dim, - num_heads=args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) diff --git a/spaces/mshukor/UnIVAL/models/unival/unify_transformer_layer.py b/spaces/mshukor/UnIVAL/models/unival/unify_transformer_layer.py deleted file mode 100644 index ec9555f612a0ad60b93a69ecc221b7ee649169ab..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/unival/unify_transformer_layer.py +++ /dev/null @@ -1,668 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor - -from .unify_multihead_attention import MultiheadAttention - - -def drop_path(x, drop_prob: float = 0.0, training: bool = False): - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. 
- """ - if drop_prob == 0.0 or not training: - return x - keep_prob = 1 - drop_prob - shape = (1, x.shape[1], 1) - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - -def init_bert_weights(module): - """Initialize the weights.""" - if isinstance(module, (nn.Linear, nn.Embedding)): - # std defaults to 0.02, this might need to be changed - module.weight.data.normal_(mean=0.0, std=0.02) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - -class Adapter_Layer(torch.nn.Module): - def __init__(self, - d_model=None, - down_size=None, - dropout=0.0, - init_option="bert", - adapter_scalar="1.0"): - super().__init__() - self.n_embd = d_model - self.down_size = down_size - - - if adapter_scalar == "learnable_scalar": - self.scale = nn.Parameter(torch.ones(1)) - else: - self.scale = float(adapter_scalar) - - self.down_proj = nn.Linear(self.n_embd, self.down_size) - self.non_linear_func = nn.ReLU() - self.up_proj = nn.Linear(self.down_size, self.n_embd) - - self.dropout = dropout - if init_option == "bert": - self.apply(init_bert_weights) - elif init_option == "lora": - with torch.no_grad(): - nn.init.kaiming_uniform_(self.down_proj.weight, a=math.sqrt(5)) - nn.init.zeros_(self.up_proj.weight) - nn.init.zeros_(self.down_proj.bias) - nn.init.zeros_(self.up_proj.bias) - - def forward(self, x, add_residual=True, residual=None): - residual = x if residual is None else residual - - down = self.down_proj(x) - down = self.non_linear_func(down) - down = nn.functional.dropout(down, p=self.dropout, training=self.training) - up = self.up_proj(down) - up = up * self.scale - if add_residual: - output = up + residual - else: - output = up - - return output - -class VLAdapter_Layer(torch.nn.Module): - def __init__(self, - d_model=None, - down_size=None, - dropout=0.0, - init_option="bert", - adapter_scalar="1.0"): - super().__init__() - print("load VL adapter") - self.v_adapter = Adapter_Layer(d_model=d_model, - down_size=down_size, - dropout=dropout, - init_option=init_option, - adapter_scalar=adapter_scalar) - - self.l_adapter = Adapter_Layer(d_model=d_model, - down_size=down_size, - dropout=dropout, - init_option=init_option, - adapter_scalar=adapter_scalar) - - - def forward(self, x, add_residual=True, residual=None, num_image_tokens=None): - - if num_image_tokens is not None: - v_x = x[:num_image_tokens, :, :] - l_x = x[num_image_tokens:, :, :] - else: - v_x = x - l_x = x - - v_x = self.v_adapter(v_x, add_residual=add_residual, residual=residual) - l_x = self.l_adapter(l_x, add_residual=add_residual, residual=residual) - - if num_image_tokens is not None: - x = torch.cat((v_x, l_x), dim=0) - else: - x = v_x + l_x - - return x - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob=None): - super().__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. 
In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.encoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args, drop_path_rate=0.0, use_adapter=False, adapter_dim=200, adapter_type='UM'): - super().__init__() - self.args = args - self.use_adapter = use_adapter - self.embed_dim = args.encoder_embed_dim - self.adapter_type = adapter_type - if self.use_adapter: - if adapter_type == 'VL': - self.adapter = VLAdapter_Layer(d_model=self.embed_dim, down_size=adapter_dim) - else: - self.adapter = Adapter_Layer(d_model=self.embed_dim, down_size=adapter_dim) - self.quant_noise = getattr(args, 'quant_noise_pq', 0) - self.quant_noise_block_size = getattr(args, 'quant_noise_pq_block_size', 8) or 8 - self.self_attn = self.build_self_attention(self.embed_dim, args) - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, 'activation_fn', 'relu') or "relu" - ) - activation_dropout_p = getattr(args, "activation_dropout", 0) or 0 - if activation_dropout_p == 0: - # for backwards compatibility with models that use args.relu_dropout - activation_dropout_p = getattr(args, "relu_dropout", 0) or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = self.build_fc1( - self.embed_dim, - args.encoder_ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - args.encoder_ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None - self.nh = self.self_attn.num_heads - self.head_dim = self.self_attn.head_dim - - self.ffn_layernorm = LayerNorm(args.encoder_ffn_embed_dim) if getattr(args, 'scale_fc', False) else None - self.w_resid = nn.Parameter(torch.ones(self.embed_dim, ), requires_grad=True) if getattr(args, 'scale_resids', False) else None - - self.final_layer_norm = LayerNorm(self.embed_dim) - - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_self_attention(self, embed_dim, args): - return MultiheadAttention( - embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - scale_factor=args.attn_scale_factor, - scale_heads=getattr(args, 'scale_heads', False), - qk_norm=getattr(args, 'qk_norm', False), - ) - - def residual_connection(self, x, residual): - return residual + self.drop_path(x) - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and 
`...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - if "{}.{}.{}".format(name, new, m) not in state_dict and "{}.{}".format(new, m) in self.state_dict(): - state_dict[ - "{}.{}.{}".format(name, new, m) - ] = self.state_dict()["{}.{}".format(new, m)] - - prefix = name + "." if name != "" else "" - for param_name, param_tensor in self.state_dict().items(): - if (prefix + param_name) not in state_dict: - state_dict[prefix + param_name] = self.state_dict()[param_name] - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - self_attn_bias: Optional[Tensor] = None, - prompt_kv: Optional[Tensor] = None, - num_image_tokens = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), - -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - attn_bias=self_attn_bias, - prompt_kv=prompt_kv - ) - if self.attn_ln is not None: - x = self.attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = self.dropout_module(x) - if self.use_adapter: - if self.adapter_type == 'VL': - x = self.adapter(x, num_image_tokens=num_image_tokens) - else: - x = self.adapter(x) - if self.w_resid is not None: - residual = torch.mul(self.w_resid, residual) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. 
In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.decoder_normalize_before* to ``True``. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False, \ - drop_path_rate=0.0, use_adapter=False, adapter_dim=200): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.use_adapter = use_adapter - if use_adapter == True: - self.adapter = Adapter_Layer(d_model=self.embed_dim, down_size=adapter_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.quant_noise = getattr(args, "quant_noise_pq", 0) - self.quant_noise_block_size = getattr(args, "quant_noise_pq_block_size", 8) - - self.cross_self_attention = getattr(args, "cross_self_attention", False) - - self.self_attn = self.build_self_attention( - self.embed_dim, - args, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.self_attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None - self.cross_attn_ln = LayerNorm(self.embed_dim) if getattr(args, 'scale_attn', False) else None - self.nh = self.self_attn.num_heads - self.head_dim = self.self_attn.head_dim - - self.activation_fn = utils.get_activation_fn( - activation=str(args.activation_fn) - if getattr(args, "activation_fn", None) is not None - else "relu" - ) - activation_dropout_p = getattr(args, "activation_dropout", 0) or 0 - if activation_dropout_p == 0: - # for backwards compatibility with models that use args.relu_dropout - activation_dropout_p = getattr(args, "relu_dropout", 0) or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - # use layerNorm rather than FusedLayerNorm for exporting. - # char_inputs can be used to determint this. 
- # TODO remove this once we update apex with the fix - export = getattr(args, "char_inputs", False) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, args) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export) - - self.ffn_layernorm = LayerNorm(args.decoder_ffn_embed_dim) if getattr(args, 'scale_fc', False) else None - self.w_resid = nn.Parameter(torch.ones(self.embed_dim, ), requires_grad=True) if getattr(args, 'scale_resids', False) else None - - self.fc1 = self.build_fc1( - self.embed_dim, - args.decoder_ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - args.decoder_ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=export) - self.need_attn = True - - self.onnx_trace = False - - self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity() - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not getattr(args, "cross_self_attention", False), - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - scale_factor=args.attn_scale_factor, - scale_heads=getattr(args, 'scale_heads', False), - qk_norm=getattr(args, 'qk_norm', False), - ) - - def build_encoder_attention(self, embed_dim, args): - return MultiheadAttention( - embed_dim, - args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - scale_factor=args.attn_scale_factor, - scale_heads=getattr(args, 'scale_heads', False), - qk_norm=getattr(args, 'qk_norm', False), - ) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def residual_connection(self, x, residual): - return residual + self.drop_path(x) - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - self_attn_bias: Optional[Tensor] = None, - cross_attn_bias: Optional[Tensor] = None, - prompt_kv: Optional[Tensor] = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, src_len)` where padding - elements are indicated by ``1``. 
- need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - attn_bias=self_attn_bias, - prompt_kv=prompt_kv - ) - if self.self_attn_ln is not None: - x = self.self_attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - attn_bias=cross_attn_bias - ) - if self.cross_attn_ln is not None: - x = self.cross_attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = self.dropout_module(x) - if self.use_adapter == True: - x = self.adapter(x) - - if self.w_resid is not None: - residual = torch.mul(self.w_resid, residual) - x = self.residual_connection(x, residual) - if not 
self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None - - def make_generation_fast_(self, need_attn: bool = False, **kwargs): - self.need_attn = need_attn - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - # update layer norms - layer_norm_map = { - "0": "self_attn_layer_norm", - "1": "encoder_attn_layer_norm", - "2": "final_layer_norm", - } - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict[ - "{}.{}.{}".format(name, new, m) - ] = state_dict[k] - del state_dict[k] - if "{}.{}.{}".format(name, new, m) not in state_dict and "{}.{}".format(new, m) in self.state_dict(): - state_dict[ - "{}.{}.{}".format(name, new, m) - ] = self.state_dict()["{}.{}".format(new, m)] - - prefix = name + "." if name != "" else "" - for param_name, param_tensor in self.state_dict().items(): - if (prefix + param_name) not in state_dict: - state_dict[prefix + param_name] = self.state_dict()[param_name] diff --git a/spaces/mthsk/sovits-models-misc/vdecoder/hifigan/utils.py b/spaces/mthsk/sovits-models-misc/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/mthsk/sovits-models-misc/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move 
to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/utils/conv_blocks.py b/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/utils/conv_blocks.py deleted file mode 100644 index ad87d4937f85c1f9638548ada984634ff5ea75fa..0000000000000000000000000000000000000000 --- a/spaces/mueller-franzes/medfusion-app/medical_diffusion/models/utils/conv_blocks.py +++ /dev/null @@ -1,528 +0,0 @@ -from typing import Optional, Sequence, Tuple, Union, Type - -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np - - -from monai.networks.blocks.dynunet_block import get_padding, get_output_padding -from monai.networks.layers import Pool, Conv -from monai.networks.layers.utils import get_act_layer, get_norm_layer, get_dropout_layer -from monai.utils.misc import ensure_tuple_rep - -from medical_diffusion.models.utils.attention_blocks import Attention, zero_module - -def save_add(*args): - args = [arg for arg in args if arg is not None] - return sum(args) if len(args)>0 else None - - -class SequentialEmb(nn.Sequential): - def forward(self, input, emb): - for module in self: - input = module(input, emb) - return input - - -class BasicDown(nn.Module): - def __init__( - self, - spatial_dims, - in_channels, - out_channels, - kernel_size=3, - stride=2, - learnable_interpolation=True, - use_res=False - ) -> None: - super().__init__() - - if learnable_interpolation: - Convolution = Conv[Conv.CONV, spatial_dims] - self.down_op = Convolution( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=get_padding(kernel_size, stride), - dilation=1, - groups=1, - bias=True, - ) - - if use_res: - self.down_skip = nn.PixelUnshuffle(2) # WARNING: Only supports 2D, , out_channels == 4*in_channels - - else: - Pooling = Pool['avg', spatial_dims] - self.down_op = Pooling( - kernel_size=kernel_size, - stride=stride, - padding=get_padding(kernel_size, stride) - ) - - - def forward(self, x, emb=None): - y = self.down_op(x) - if hasattr(self, 'down_skip'): - y = y+self.down_skip(x) - return y - -class BasicUp(nn.Module): - def __init__( - self, - spatial_dims, - in_channels, - out_channels, - kernel_size=2, - stride=2, - learnable_interpolation=True, - use_res=False, - ) -> None: - super().__init__() - self.learnable_interpolation = learnable_interpolation - if learnable_interpolation: - # TransConvolution = Conv[Conv.CONVTRANS, spatial_dims] - # padding = get_padding(kernel_size, stride) - # output_padding = get_output_padding(kernel_size, stride, padding) - # self.up_op = TransConvolution( - # in_channels, - # out_channels, - # kernel_size=kernel_size, - # stride=stride, - # padding=padding, - # output_padding=output_padding, - # groups=1, - # bias=True, - # dilation=1 - # ) - - self.calc_shape = lambda x: tuple((np.asarray(x)-1)*np.atleast_1d(stride)+np.atleast_1d(kernel_size) - -2*np.atleast_1d(get_padding(kernel_size, stride))) - Convolution = Conv[Conv.CONV, spatial_dims] - self.up_op = Convolution( - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - dilation=1, - groups=1, - bias=True, - ) - - if use_res: - self.up_skip = nn.PixelShuffle(2) # WARNING: Only supports 2D, out_channels == in_channels/4 - else: - self.calc_shape = lambda x: 
tuple((np.asarray(x)-1)*np.atleast_1d(stride)+np.atleast_1d(kernel_size) - -2*np.atleast_1d(get_padding(kernel_size, stride))) - - def forward(self, x, emb=None): - if self.learnable_interpolation: - new_size = self.calc_shape(x.shape[2:]) - x_res = F.interpolate(x, size=new_size, mode='nearest-exact') - y = self.up_op(x_res) - if hasattr(self, 'up_skip'): - y = y+self.up_skip(x) - return y - else: - new_size = self.calc_shape(x.shape[2:]) - return F.interpolate(x, size=new_size, mode='nearest-exact') - - -class BasicBlock(nn.Module): - """ - A block that consists of Conv-Norm-Drop-Act, similar to blocks.Convolution. - - Args: - spatial_dims: number of spatial dimensions. - in_channels: number of input channels. - out_channels: number of output channels. - kernel_size: convolution kernel size. - stride: convolution stride. - norm_name: feature normalization type and arguments. - act_name: activation layer type and arguments. - dropout: dropout probability. - zero_conv: zero out the parameters of the convolution. - """ - - def __init__( - self, - spatial_dims: int, - in_channels: int, - out_channels: int, - kernel_size: Union[Sequence[int], int], - stride: Union[Sequence[int], int]=1, - norm_name: Union[Tuple, str, None]=None, - act_name: Union[Tuple, str, None] = None, - dropout: Optional[Union[Tuple, str, float]] = None, - zero_conv: bool = False, - ): - super().__init__() - Convolution = Conv[Conv.CONV, spatial_dims] - conv = Convolution( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=get_padding(kernel_size, stride), - dilation=1, - groups=1, - bias=True, - ) - self.conv = zero_module(conv) if zero_conv else conv - - if norm_name is not None: - self.norm = get_norm_layer(name=norm_name, spatial_dims=spatial_dims, channels=out_channels) - if dropout is not None: - self.drop = get_dropout_layer(name=dropout, dropout_dim=spatial_dims) - if act_name is not None: - self.act = get_act_layer(name=act_name) - - - def forward(self, inp): - out = self.conv(inp) - if hasattr(self, "norm"): - out = self.norm(out) - if hasattr(self, 'drop'): - out = self.drop(out) - if hasattr(self, "act"): - out = self.act(out) - return out - -class BasicResBlock(nn.Module): - """ - A block that consists of Conv-Act-Norm + skip. - - Args: - spatial_dims: number of spatial dimensions. - in_channels: number of input channels. - out_channels: number of output channels. - kernel_size: convolution kernel size. - stride: convolution stride. - norm_name: feature normalization type and arguments. - act_name: activation layer type and arguments. - dropout: dropout probability. - zero_conv: zero out the parameters of the convolution. 
- """ - def __init__( - self, - spatial_dims: int, - in_channels: int, - out_channels: int, - kernel_size: Union[Sequence[int], int], - stride: Union[Sequence[int], int]=1, - norm_name: Union[Tuple, str, None]=None, - act_name: Union[Tuple, str, None] = None, - dropout: Optional[Union[Tuple, str, float]] = None, - zero_conv: bool = False - ): - super().__init__() - self.basic_block = BasicBlock(spatial_dims, in_channels, out_channels, kernel_size, stride, norm_name, act_name, dropout, zero_conv) - Convolution = Conv[Conv.CONV, spatial_dims] - self.conv_res = Convolution( - in_channels, - out_channels, - kernel_size=1, - stride=stride, - padding=get_padding(1, stride), - dilation=1, - groups=1, - bias=True, - ) if in_channels != out_channels else nn.Identity() - - - def forward(self, inp): - out = self.basic_block(inp) - residual = self.conv_res(inp) - out = out+residual - return out - - - -class UnetBasicBlock(nn.Module): - """ - A modified version of monai.networks.blocks.UnetBasicBlock with additional embedding - - Args: - spatial_dims: number of spatial dimensions. - in_channels: number of input channels. - out_channels: number of output channels. - kernel_size: convolution kernel size. - stride: convolution stride. - norm_name: feature normalization type and arguments. - act_name: activation layer type and arguments. - dropout: dropout probability. - emb_channels: Number of embedding channels - """ - - def __init__( - self, - spatial_dims: int, - in_channels: int, - out_channels: int, - kernel_size: Union[Sequence[int], int], - stride: Union[Sequence[int], int]=1, - norm_name: Union[Tuple, str]=None, - act_name: Union[Tuple, str]=None, - dropout: Optional[Union[Tuple, str, float]] = None, - emb_channels: int = None, - blocks = 2 - ): - super().__init__() - self.block_seq = nn.ModuleList([ - BasicBlock(spatial_dims, in_channels if i==0 else out_channels, out_channels, kernel_size, stride, norm_name, act_name, dropout, i==blocks-1) - for i in range(blocks) - ]) - - if emb_channels is not None: - self.local_embedder = nn.Sequential( - get_act_layer(name=act_name), - nn.Linear(emb_channels, out_channels), - ) - - def forward(self, x, emb=None): - # ------------ Embedding ---------- - if emb is not None: - emb = self.local_embedder(emb) - b,c, *_ = emb.shape - sp_dim = x.ndim-2 - emb = emb.reshape(b, c, *((1,)*sp_dim) ) - # scale, shift = emb.chunk(2, dim = 1) - # x = x * (scale + 1) + shift - # x = x+emb - - # ----------- Convolution --------- - n_blocks = len(self.block_seq) - for i, block in enumerate(self.block_seq): - x = block(x) - if (emb is not None) and i 0, "block args must be greater than 0" - self._global_params = global_params - self._blocks_args = blocks_args - - # Batch norm parameters - bn_mom = 1 - self._global_params.batch_norm_momentum - bn_eps = self._global_params.batch_norm_epsilon - - # Get stem static or dynamic convolution depending on image size - image_size = global_params.image_size - Conv2d = get_same_padding_conv2d(image_size=image_size) - - # Stem - in_channels = 3 # rgb - out_channels = round_filters( - 32, self._global_params - ) # number of output channels - self._conv_stem = Conv2d( - in_channels, out_channels, kernel_size=3, stride=2, bias=False - ) - self._bn0 = nn.BatchNorm2d( - num_features=out_channels, momentum=bn_mom, eps=bn_eps - ) - image_size = calculate_output_image_size(image_size, 2) - - # Build blocks - self._blocks = nn.ModuleList([]) - for block_args in self._blocks_args: - - # Update block input and output filters based on depth 
multiplier. - block_args = block_args._replace( - input_filters=round_filters( - block_args.input_filters, self._global_params - ), - output_filters=round_filters( - block_args.output_filters, self._global_params - ), - num_repeat=round_repeats(block_args.num_repeat, self._global_params), - ) - - # The first block needs to take care of stride and filter size increase. - self._blocks.append( - MBConvBlock(block_args, self._global_params, image_size=image_size) - ) - image_size = calculate_output_image_size(image_size, block_args.stride) - if block_args.num_repeat > 1: # modify block_args to keep same output size - block_args = block_args._replace( - input_filters=block_args.output_filters, stride=1 - ) - for _ in range(block_args.num_repeat - 1): - self._blocks.append( - MBConvBlock(block_args, self._global_params, image_size=image_size) - ) - # image_size = calculate_output_image_size(image_size, block_args.stride) # stride = 1 - - self._swish = MemoryEfficientSwish() - - def set_swish(self, memory_efficient=True): - """Sets swish function as memory efficient (for training) or standard (for export). - - Args: - memory_efficient (bool): Whether to use memory-efficient version of swish. - - """ - self._swish = MemoryEfficientSwish() if memory_efficient else Swish() - for block in self._blocks: - block.set_swish(memory_efficient) - - def extract_endpoints(self, inputs): - endpoints = dict() - - # Stem - x = self._swish(self._bn0(self._conv_stem(inputs))) - prev_x = x - - # Blocks - for idx, block in enumerate(self._blocks): - drop_connect_rate = self._global_params.drop_connect_rate - if drop_connect_rate: - drop_connect_rate *= float(idx) / len( - self._blocks - ) # scale drop connect_rate - x = block(x, drop_connect_rate=drop_connect_rate) - if prev_x.size(2) > x.size(2): - endpoints["reduction_{}".format(len(endpoints) + 1)] = prev_x - prev_x = x - - # Head - x = self._swish(self._bn1(self._conv_head(x))) - endpoints["reduction_{}".format(len(endpoints) + 1)] = x - - return endpoints - - def _change_in_channels(self, in_channels): - """Adjust model's first convolution layer to in_channels, if in_channels not equals 3. - - Args: - in_channels (int): Input data's channel number. 
- """ - if in_channels != 3: - Conv2d = get_same_padding_conv2d(image_size=self._global_params.image_size) - out_channels = round_filters(32, self._global_params) - self._conv_stem = Conv2d( - in_channels, out_channels, kernel_size=3, stride=2, bias=False - ) - - -class EfficientEncoderB7(EfficientNet): - def __init__(self): - super().__init__( - *create_block_args( - width_coefficient=2.0, - depth_coefficient=3.1, - dropout_rate=0.5, - image_size=600, - ) - ) - self._change_in_channels(3) - self.block_idx = [10, 17, 37, 54] - self.channels = [48, 80, 224, 640] - - def initial_conv(self, inputs): - x = self._swish(self._bn0(self._conv_stem(inputs))) - return x - - def get_blocks(self, x, H, W, block_idx): - features = [] - for idx, block in enumerate(self._blocks): - drop_connect_rate = self._global_params.drop_connect_rate - if drop_connect_rate: - drop_connect_rate *= float(idx) / len( - self._blocks - ) # scale drop connect_rate - x = block(x, drop_connect_rate=drop_connect_rate) - if idx == block_idx[0]: - features.append(x.clone()) - if idx == block_idx[1]: - features.append(x.clone()) - if idx == block_idx[2]: - features.append(x.clone()) - if idx == block_idx[3]: - features.append(x.clone()) - - return features - - def forward(self, inputs: torch.Tensor) -> List[Any]: - B, C, H, W = inputs.size() - x = self.initial_conv(inputs) # Prepare input for the backbone - return self.get_blocks( - x, H, W, block_idx=self.block_idx - ) # Get backbone features and edge maps diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/data/datasets/classification_dataset.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/data/datasets/classification_dataset.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/natdon/Michael_Scott_Bot/README.md b/spaces/natdon/Michael_Scott_Bot/README.md deleted file mode 100644 index a28a44a7c4d48df129f357b108479472ee2235dd..0000000000000000000000000000000000000000 --- a/spaces/natdon/Michael_Scott_Bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Michael_Scott_Bot -emoji: 🦀 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.0.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/navaaesarosh/navaaesarosh-saqi_v0/README.md b/spaces/navaaesarosh/navaaesarosh-saqi_v0/README.md deleted file mode 100644 index 96b53f27656c85ae8d27ceaa4717cf673e3aa71a..0000000000000000000000000000000000000000 --- a/spaces/navaaesarosh/navaaesarosh-saqi_v0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Navaaesarosh-saqi V0 -emoji: 📈 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/AdobePhotoshopCS2InclKeygenPARADOX64bit ((HOT)).md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/AdobePhotoshopCS2InclKeygenPARADOX64bit ((HOT)).md deleted file mode 100644 index 22e52a9c23fa179c8f82b08ed49694e307a9de82..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/AdobePhotoshopCS2InclKeygenPARADOX64bit ((HOT)).md +++ /dev/null @@ -1,21 +0,0 @@ -
    -```html -

    How to Download and Install Adobe Photoshop CS2 with Keygen by PARADOX for 64-bit Windows

    -

    Adobe Photoshop CS2 is a powerful and popular image editing software that was released in 2005. It offers many features and tools for creating and enhancing photos, graphics, and web designs. However, Adobe has discontinued the support and distribution of Photoshop CS2 since 2013, and it is no longer available on their official website. If you still want to use this version of Photoshop, you will need to find a reliable source to download it and a working keygen to activate it.

    -

    AdobePhotoshopCS2InclKeygenPARADOX64bit


    Download File ••• https://urlcod.com/2uI9VS



    -

    In this article, we will show you how to download and install Adobe Photoshop CS2 with keygen by PARADOX for 64-bit Windows operating systems. This keygen is compatible with both 32-bit and 64-bit editions of Windows Vista, Windows XP and Mac OS X 10.5 or later version operating systems[^3^]. The keygen contains the key for Adobe Photoshop CS2 Extended, which has all the features of the standard edition plus extended support for high dynamic range imaging (HDRI) and 32-bit/64-bit image processing[^1^].

    -

    Step 1: Download Adobe Photoshop CS2

    -

    The first step is to download the setup file of Adobe Photoshop CS2 from a trusted source. You can use the link below to download it from TorrentLand.com[^1^], which is a torrent search engine that provides verified and safe torrents. You will need a torrent client such as uTorrent or BitTorrent to download the file.

    -

    Download Adobe Photoshop CS2 Keygen - PARADOX

    -

    The file size is about 329 MB and it contains the setup file of Photoshop CS2 (Photoshop_CS2_tryout.zip) and the keygen file by PARADOX (PARADOX.nfo). Save the file to your preferred location on your computer.

    -

    Step 2: Extract and Install Adobe Photoshop CS2

    -

    The next step is to extract the setup file of Photoshop CS2 using a file compression software such as WinRAR or 7-Zip. Right-click on the Photoshop_CS2_tryout.zip file and choose "Extract Here" or "Extract to Photoshop_CS2_tryout". You will get a folder named "Photoshop_CS2_tryout" with several files inside.

    -

    -

    Double-click on the "Setup.exe" file to start the installation process. Follow the instructions on the screen and choose your language, destination folder, and components to install. When you reach the screen that asks for a serial number, do not enter anything yet. Leave the window open and proceed to the next step.

    -

    Step 3: Generate a Serial Number with Keygen by PARADOX

    -

    The final step is to use the keygen by PARADOX to generate a valid serial number for Photoshop CS2. Open the folder where you saved the downloaded torrent file and double-click on the PARADOX.nfo file. This will open a text document with some information about the keygen and a link to download it.

    -

    Download Adobe Photoshop CS2 v9.0 Keygen - PARADOX

    -

    The file size is about 180 KB and it contains a single executable file named "Adobe.Photoshop.CS2.v9.0.Keygen-PARADOX.exe". Save the file to your preferred location on your computer.

    -

    Double-click on the "Adobe.Photoshop.CS2.v9.0.Keygen-PARADOX.exe" file to run the keygen. A small window will appear with a button that says "Generate". Click on it and you will see a serial number in the format of XXXX-XXXX-XXXX-XXXX-XXXX-XXXX. Copy this serial number and paste it into the installation window of Photoshop CS2 that you left open in step 2. Click on "Next" and complete the installation process. 7b8c122e87
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flowjo 10 Serial Number Crack 19.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flowjo 10 Serial Number Crack 19.md deleted file mode 100644 index af196c39b92a51a6de88ad240eca6f32994fd399..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Flowjo 10 Serial Number Crack 19.md +++ /dev/null @@ -1,26 +0,0 @@ -
    -```html -

    How to Download and Install Flowjo 10 Serial Number Crack 19

    -

    Flowjo 10 is a powerful software for displaying and analyzing flow cytometric data. It allows you to create new files in the cytometry standard form, perform various statistical tests, and generate graphical reports. However, Flowjo 10 is not a free software and requires a license key to run. If you are looking for a way to download and install Flowjo 10 serial number crack 19, you have come to the right place.

    -

    In this article, we will show you how to get Flowjo 10 serial number crack 19 for free, without any risk of virus or malware. We will also provide you with some tips on how to use Flowjo 10 effectively and efficiently. Follow the steps below and enjoy Flowjo 10 serial number crack 19 on your PC or Mac.

    -

    Flowjo 10 Serial Number Crack 19


    DOWNLOADhttps://urlcod.com/2uIcyG



    -

    Step 1: Download Flowjo 10 Serial Number Crack 19

    -

    The first step is to download Flowjo 10 serial number crack 19 from a reliable source. There are many websites that claim to offer Flowjo 10 serial number crack 19, but not all of them are trustworthy. Some of them may contain harmful files that can damage your computer or steal your personal information. Therefore, you need to be careful when choosing where to download Flowjo 10 serial number crack 19.

    -

    One of the best sources to download Flowjo 10 serial number crack 19 is this website[^1^]. This website has been tested and verified by many users who have successfully downloaded and installed Flowjo 10 serial number crack 19 on their computers. The download link is safe and secure, and the file size is only about 200 MB. You can download Flowjo 10 serial number crack 19 from this website by clicking on the button below.

    - -

    Step 2: Install Flowjo 10 Serial Number Crack 19

    -

    After downloading Flowjo 10 serial number crack 19, you need to install it on your computer. The installation process is very simple and straightforward. Just follow the instructions below and you will be able to install Flowjo 10 serial number crack 19 in no time.

    -
      -
    1. Open the downloaded file and extract it using WinRAR or any other extraction tool.
    2. -
    3. Run the setup.exe file and follow the installation wizard.
    4. -
    5. When prompted, enter the serial number that is provided in the file.
    6. -
    7. Complete the installation and launch Flowjo 10.
    8. -
    -

    Congratulations! You have successfully installed Flowjo 10 serial number crack 19 on your computer. You can now use Flowjo 10 without any limitations or restrictions.

    -

    Step 3: Use Flowjo 10 Serial Number Crack 19

    -

    Now that you have installed Flowjo 10 serial number crack 19, you can start using it for your flow cytometry data analysis. Flowjo 10 has many features and functions that can help you perform various tasks, such as importing data, creating groups, applying gates, calculating statistics, generating plots, and exporting results. Here are some tips on how to use Flowjo 10 serial number crack 19 effectively and efficiently:

    -

    -
      -
    • Read the user manual and watch the tutorial videos that are available on the official website of Flowjo here[^4^]. These resources will help you understand how to use Flowjo 10 serial number crack

      e93f5a0c3f
      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pukar Telugu Movie Subtitle Free Download HOT.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pukar Telugu Movie Subtitle Free Download HOT.md deleted file mode 100644 index d3fcf2374e75689eceafce176a0d6bbe8f8fd601..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Pukar Telugu Movie Subtitle Free Download HOT.md +++ /dev/null @@ -1,18 +0,0 @@ - -

      Pukar Telugu Movie Subtitle Free Download

      -

      Pukar is a 2000 Hindi action drama film directed by Rajkumar Santoshi and starring Anil Kapoor, Madhuri Dixit, Namrata Shirodkar and Danny Denzongpa. The film tells the story of an Indian Army officer who is falsely accused of treason and must prove his innocence while fighting against terrorists.

      -

      Pukar Telugu Movie Subtitle Free Download


      Download File >>> https://urlcod.com/2uIa3y



      -

If you are looking for a free download of Telugu subtitles for Pukar, you can find them on various websites that offer subtitles for movies and TV shows. Here are some of the sites where you can download Telugu subtitles for Pukar at no cost:

      -
        -
      • OpenSubtitles: This website has six subtitles for Pukar in different languages, including Telugu. You can choose the subtitle file that matches your video quality and format and download it easily.
      • -
      • SUBDL: This website has three subtitles for Pukar in Arabic, Persian and English. You can also search for other languages and download the subtitle file that suits your needs.
      • -
      • SUBDL: This website has another subtitle for Pukar in English. You can download it by clicking on the download button.
      • -
      -

After downloading the subtitle file, you can load it in your video player and enjoy watching Pukar with Telugu subtitles. We hope this article helps you find a free Telugu subtitle download for Pukar.

Pukar is a critically acclaimed film that won two National Film Awards and three Filmfare Awards. It holds a rating of 6.9 out of 10 on IMDb and 88% on Rotten Tomatoes, and is praised for its gripping storyline, powerful performances, thrilling action sequences and melodious music.

      -

      The film's soundtrack was composed by A.R. Rahman and features songs like "Kay Sera Sera", "Ek Tu Hi Bharosa", "Sunta Hai Mera Khuda" and "Kismat Se Tum". The songs were sung by renowned singers such as Lata Mangeshkar, Udit Narayan, Kavita Krishnamurthy and Shankar Mahadevan, and were also dubbed into Telugu and Tamil.

      -

      Pukar is a film that showcases patriotism, loyalty, love and courage. It is a film that will keep you hooked till the end and make you feel proud of the Indian Army. If you are a fan of action drama films, you should not miss Pukar. And if you want to watch it with Telugu subtitles, you can download them from the websites mentioned above.

      -

      Pukar is a film that was released in 2000 and became a hit at the box office. The film was directed by Rajkumar Santoshi, who is known for making films like Ghayal, Damini, Andaz Apna Apna and The Legend of Bhagat Singh. The film was produced by Surinder Kapoor and Boney Kapoor, who are the father and brother of Anil Kapoor respectively.

      -

      The film stars Anil Kapoor as Major Jaidev Rajvansh, a brave and loyal officer of the Indian Army. He is in love with Anjali (Madhuri Dixit), a journalist who is also his childhood friend. However, his life takes a turn when he is assigned a mission to rescue a group of hostages from a terrorist leader named Abhrush (Danny Denzongpa). During the mission, he saves the life of Pooja (Namrata Shirodkar), the daughter of the Chief of Army Staff. Pooja falls in love with Jaidev and decides to marry him. She also convinces her father to give Jaidev a medal of honor for his bravery.

      -

      However, things get complicated when Anjali returns from abroad and confesses her love to Jaidev. Jaidev is torn between his love for Anjali and his gratitude towards Pooja. Meanwhile, Abhrush escapes from prison and plots to take revenge on Jaidev. He kidnaps Anjali and blackmails Jaidev into betraying his country and joining his terrorist group. Jaidev agrees to do so in order to save Anjali's life. He is unaware, however, that Abhrush has planted a bomb in his car that will explode when he reaches the border. Will Jaidev be able to save Anjali and himself? Will he be able to prove his innocence and loyalty to his country? Watch Pukar to find out.

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/ngxson/poet-cat/frontend/pages/index.tsx b/spaces/ngxson/poet-cat/frontend/pages/index.tsx deleted file mode 100644 index ee82d97786dd8f39460003b7d51eb9ce3efd52ea..0000000000000000000000000000000000000000 --- a/spaces/ngxson/poet-cat/frontend/pages/index.tsx +++ /dev/null @@ -1,17 +0,0 @@ -import Head from 'next/head' -import ChatPage from '@/components/ChatPage' - -export default function Home() { - return ( - <> - - Chatbot the poet cat - - - -
      - -
      - - ) -} diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/TensorMask/tests/test_swap_align2nat.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/TensorMask/tests/test_swap_align2nat.py deleted file mode 100644 index d9ee273de06cf881b89696ee4ee13a0953d6aa25..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/TensorMask/tests/test_swap_align2nat.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. - -import unittest -import torch -from torch.autograd import gradcheck - -from tensormask.layers.swap_align2nat import SwapAlign2Nat - - -class SwapAlign2NatTest(unittest.TestCase): - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_swap_align2nat_gradcheck_cuda(self): - dtype = torch.float64 - device = torch.device("cuda") - m = SwapAlign2Nat(2).to(dtype=dtype, device=device) - x = torch.rand(2, 4, 10, 10, dtype=dtype, device=device, requires_grad=True) - - self.assertTrue(gradcheck(m, x), "gradcheck failed for SwapAlign2Nat CUDA") - - def _swap_align2nat(self, tensor, lambda_val): - """ - The basic setup for testing Swap_Align - """ - op = SwapAlign2Nat(lambda_val, pad_val=0.0) - input = torch.from_numpy(tensor[None, :, :, :].astype("float32")) - output = op.forward(input.cuda()).cpu().numpy() - return output[0] - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/nomic-ai/allenai_prosocial-dialog/style.css b/spaces/nomic-ai/allenai_prosocial-dialog/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/allenai_prosocial-dialog/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nothingsuspicious/curaude/Dockerfile b/spaces/nothingsuspicious/curaude/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/nothingsuspicious/curaude/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/olimpa/CalendarJs/jquery-calendar.js b/spaces/olimpa/CalendarJs/jquery-calendar.js deleted file mode 100644 index 4080c007c1518c1de9e38b3942f05ead73dbd501..0000000000000000000000000000000000000000 --- a/spaces/olimpa/CalendarJs/jquery-calendar.js +++ /dev/null @@ -1,1549 +0,0 @@ -/* - * @class Calendar ~jquery-calendar plugin~ (https://github.com/ArrobeFr/jquery-calendar) - * @author Developped by Arrobe (https://www.arrobe.fr) - * @license Licensed under MIT (https://github.com/ArrobeFr/jquery-calendar/blob/master/LICENSE) - */ - -jQuery(document).ready(function($){ - - function Calendar(element, Args) { - // Check Moment.js dependency - if (typeof(moment) !== 'function'){ - console.error('Calendar require Moment.js !'); - 
return; - } - - // Pre-defined events colors - var eventColors = [ // https://www.materialui.co/colors (800) - '#C62828', // Red - '#AD1457', // Pink - '#6A1B9A', // Purple - '#4527A0', // Deep Purple - '#283593', // Indigo - '#1565C0', // Blue - '#0277BD', // Light Blue - '#00838F', // Cyan - '#00695C', // Teal - '#2E7D32', // Green - '#558B2F', // Light Green - '#9E9D24', // Lime - '#F9A825', // Yellow - '#FF8F00', // Amber - '#EF6C00', // Orange - '#D84315', // Deep Orange - '#4E342E', // Brown - '#424242', // Grey - '#37474F', // Blue Grey - '#212121', // Grey (900) - ]; - - // Pre-defined events colors - var daynoteColors = [ // https://www.materialui.co/colors (800) - '#EF9A9A', // Red - '#F48FB1', // Pink - '#CE93D8', // Purple - '#B39DDB', // Deep Purple - '#9FA8DA', // Indigo - '#90CAF9', // Blue - '#81D4FA', // Light Blue - '#80DEEA', // Cyan - '#80CBC4', // Teal - '#A5D6A7', // Green - '#C5E1A5', // Light Green - '#E6EE9C', // Lime - '#FFF59D', // Yellow - '#FFE082', // Amber - '#FFCC80', // Orange - '#FFAB91', // Deep Orange - '#BCAAA4', // Brown - '#EEEEEE', // Grey - '#B0BEC5' // Blue Grey - ]; - - // Define default configuration - this.conf = { - locale: (Args.locale) ? Args.locale : 'fr', - view: 'week', - enableKeyboard: (Args.enableKeyboard) ? Args.enableKeyboard : true, - defaultView: { - largeScreen: (Args.defaultView) ? (Args.defaultView.largeScreen) ? (Args.defaultView.largeScreen) : 'week' : 'week', - smallScreen: (Args.defaultView) ? (Args.defaultView.smallScreen) ? (Args.defaultView.smallScreen) : 'day' : 'day', - smallScreenThreshold: (Args.defaultView) ? (Args.defaultView.smallScreenThreshold) ? (Args.defaultView.smallScreenThreshold) : 1000 : 1000 - }, - weekday: { - timeline: { - fromHour: (Args.weekday) ? (Args.weekday.timeline) ? (Args.weekday.timeline.fromHour) ? Args.weekday.timeline.fromHour : 7 : 7 : 7, - toHour: (Args.weekday) ? (Args.weekday.timeline) ? (Args.weekday.timeline.toHour) ? Args.weekday.timeline.toHour : 20 : 20 : 20, - intervalMinutes: (Args.weekday) ? (Args.weekday.timeline) ? (Args.weekday.timeline.intervalMinutes) ? Args.weekday.timeline.intervalMinutes : 60 : 60 : 60, - format: (Args.weekday) ? (Args.weekday.timeline) ? (Args.weekday.timeline.format) ? Args.weekday.timeline.format : 'HH:mm' : 'HH:mm' : 'HH:mm', - heightPx: (Args.weekday) ? (Args.weekday.timeline) ? (Args.weekday.timeline.heightPx) ? Args.weekday.timeline.heightPx : 50 : 50 : 50, - autoResize: (Args.weekday) ? (Args.weekday.timeline) ? (Args.weekday.timeline.autoResize !== undefined) ? Args.weekday.timeline.autoResize : true : true : true - }, - dayline: { - weekdays: (Args.weekday) ? (Args.weekday.dayline) ? (Args.weekday.dayline.weekdays) ? Args.weekday.dayline.weekdays : [0, 1, 2, 3, 4, 5, 6] : [0, 1, 2, 3, 4, 5, 6] : [0, 1, 2, 3, 4, 5, 6], - format: (Args.weekday) ? (Args.weekday.dayline) ? (Args.weekday.dayline.format) ? Args.weekday.dayline.format : 'dddd DD/MM' : 'dddd DD/MM' : 'dddd DD/MM', - heightPx: (Args.weekday) ? (Args.weekday.dayline) ? (Args.weekday.dayline.heightPx) ? Args.weekday.dayline.heightPx ? (Args.weekday.dayline.heightPx > 31) ? Args.weekday.dayline.heightPx : 31 : 31 : 31 : 31 : 31, - month: { - format: (Args.weekday) ? (Args.weekday.dayline) ? (Args.weekday.dayline.month) ? (Args.weekday.dayline.month.format) ? Args.weekday.dayline.month.format : 'MMMM YYYY' : 'MMMM YYYY' : 'MMMM YYYY' : 'MMMM YYYY', - heightPx: (Args.weekday) ? (Args.weekday.dayline) ? (Args.weekday.dayline.month) ? (Args.weekday.dayline.month.heightPx) ? 
Args.weekday.dayline.month.heightPx : 30 : 30 : 30 : 30, - weekFormat: (Args.weekday) ? (Args.weekday.dayline) ? (Args.weekday.dayline.month) ? (Args.weekday.dayline.month.weekFormat) ? Args.weekday.dayline.month.weekFormat : 'w' : 'w' : 'w' : 'w' - } - } - }, - month: { - format: (Args.month) ? (Args.month.format) ? Args.month.format : 'MMMM YYYY' : 'MMMM YYYY', - heightPx: (Args.month) ? (Args.month.heightPx) ? (Args.month.heightPx > 31) ? Args.month.heightPx : 31 : 31 : 31, - weekline: { - format: (Args.month) ? (Args.month.weekline) ? (Args.month.weekline.format) ? Args.month.weekline.format : 'w' : 'w' : 'w', - heightPx: (Args.month) ? (Args.month.weekline) ? (Args.month.weekline.heightPx) ? Args.month.weekline.heightPx : 80 : 80 : 80 - }, - dayheader: { - weekdays: (Args.month) ? (Args.month.dayheader) ? (Args.month.dayheader.weekdays) ? Args.month.dayheader.weekdays : [0, 1, 2, 3, 4, 5, 6] : [0, 1, 2, 3, 4, 5, 6] : [0, 1, 2, 3, 4, 5, 6], - format: (Args.month) ? (Args.month.dayheader) ? (Args.month.dayheader.format) ? Args.month.dayheader.format : 'dddd' : 'dddd' : 'dddd', - heightPx: (Args.month) ? (Args.month.dayheader) ? (Args.month.dayheader.heightPx) ? Args.month.dayheader.heightPx : 30 : 30 : 30 - }, - day: { - format: (Args.month) ? (Args.month.day) ? (Args.month.day.format) ? Args.month.day.format : 'DD/MM' : 'DD/MM' : 'DD/MM' - } - }, - unixTimestamp: (Args.unixTimestamp) ? Args.unixTimestamp : moment().format('X'), - event: { - hover: { - delay: (Args.event) ? (Args.event.hover) ? (Args.event.hover.delay) ? Args.event.hover.delay : 500 : 500 : 500 - } - }, - colors: { - events: (Args.colors) ? (Args.colors.events) ? Args.colors.events : eventColors : eventColors, - daynotes: (Args.colors) ? (Args.colors.daynotes) ? Args.colors.daynotes : daynoteColors : daynoteColors, - random: (Args.colors) ? (Args.colors.random) ? Args.colors.random : true : true - }, - categories: { - enable: (Args.categories) ? (Args.categories.enable !== undefined) ? Args.categories.enable : true : true, - hover: { - delay: (Args.categories) ? (Args.categories.hover) ? (Args.categories.hover.delay) ? Args.categories.hover.delay : 500 : 500 : 500 - } - }, - now: { - enable: (Args.now) ? (Args.now.enable !== undefined) ? (Args.now.enable) : false : false, - refresh: (Args.now) ? (Args.now.refresh !== undefined) ? (Args.now.refresh) : false : false, - heightPx: (Args.now) ? (Args.now.heightPx) ? (Args.now.heightPx) : 1 : 1, - style: (Args.now) ? (Args.now.style) ? (Args.now.style) : 'solid' : 'solid', - color: (Args.now) ? (Args.now.color) ? (Args.now.color) : '#03A9F4' : '#03A9F4' - } - }; - - // Sets moment's locale - moment.locale(this.conf.locale); - - // Sets colors - this.setEventColors(this.conf.colors.events); - this.setDaynoteColors(this.conf.colors.daynotes); - - // Create array to associate colors and categories - this.eventCategoryColor = []; - this.daynoteCategoryColor = []; - - // Create array to associate colors and categories as defined by setEventCategoriesColors or setDaynoteCategoriesColors - this.userEventCategoryColor = []; - this.userDaynoteCategoryColor = []; - - // Load events - this.setEvents((Args.events) ? Args.events : []); - - // Load day notes - this.setDaynotes((Args.daynotes) ? 
Args.daynotes : []); - - // Define the default view - if (this.mobileQuery() == 'mobile'){ - this.setView(this.conf.defaultView.smallScreen); - }else{ - this.setView(this.conf.defaultView.largeScreen); - } - - // Init - this.element = element; - this.initTime = false; - return this; - } - - Calendar.prototype.init = function() { - var millis = Date.now(); - this.element.addClass('loading'); - this.calculateCurrentInterval(); - $(this.element).trigger('Calendar.init', [ - this, - this.getPrevViewInterval(), - this.getViewInterval(), - this.getNextViewInterval() - ]); - $(this.element).html(''); - if (!$(this.element).hasClass('calendar')){ - $(this.element).addClass('calendar'); - } - if (this.getView() == 'day' || this.getView() == 'week'){ - if (this.conf.weekday.timeline.autoResize){ - this.resizeTimeline(); - } - } - this.drawCategories(); - if (this.getView() == 'day' || this.getView() == 'week'){ - this.weekDrawTime(); - this.weekDrawDays(); - this.weekDrawDaynotes(); - this.weekDrawEvents(); - } - if (this.getView() == 'month'){ - this.monthDrawWeek(); - this.monthDrawWeekNumbers(); - this.monthDrawWeekDays(); - this.monthDrawDaynotes(); - this.monthDrawEvents(); - } - this.drawModal(); - this.drawNow(); - this.positionEvents(); - this.hoverEventOrDaynote(); - this.clickEventOrDaynote(); - this.addBtnLeftRight(); - this.addSwipe(); - this.clickSwitchView(); - this.keyboardSwitchView(); - this.defaultEvents(); - this.element.removeClass('loading'); - this.initTime = (Date.now()-millis)+'ms'; - return this; - }; - - Calendar.prototype.defaultEvents = function() { - if (this.binded == undefined){ - var eventMouseenterDefault = function(event, self, elem){ - if (!event.isDefaultPrevented()){ - if (parseInt(elem.css('top')) >= (elem.closest('ul').height() / 2) - self.conf.weekday.timeline.heightPx){ - heightPx = parseInt(elem.css('top')) + parseInt(elem.css('height')); - elem - .css('z-index', 10) - .animate({ - height:heightPx, - top:0, - width:'100%', - left:0 - }, 50) - ; - }else{ - heightPx = elem.closest('ul').height() - parseInt(elem.css('top')); - elem - .css('z-index', 10) - .animate({ - height:heightPx, - width:'100%', - left:0 - }, 50) - ; - } - elem.find('.event-name').removeClass('hidden'); - elem.find('.event-content').removeClass('hidden'); - } - }; - $(self.element).off('Calendar.event-mouseenter', eventMouseenterDefault).on('Calendar.event-mouseenter', eventMouseenterDefault); - $(self.element).off('Calendar.daynote-mouseenter', eventMouseenterDefault).on('Calendar.daynote-mouseenter', eventMouseenterDefault); - var eventMouseleaveDefault = function(event, self, elem){ - if (!event.isDefaultPrevented()){ - elem - .css('z-index', 'auto') - .animate({ - height:parseFloat(elem.attr('data-height'))+'px', - top:parseFloat(elem.attr('data-top')), - width:parseFloat(elem.attr('data-width'))+'%', - left:parseFloat(elem.attr('data-left'))+'%' - }, 50) - ; - elem.find('.event-content').addClass('hidden'); - } - }; - $(self.element).off('Calendar.event-mouseleave', eventMouseleaveDefault).on('Calendar.event-mouseleave', eventMouseleaveDefault); - $(self.element).off('Calendar.daynote-mouseleave', eventMouseleaveDefault).on('Calendar.daynote-mouseleave', eventMouseleaveDefault); - var eventClickDefault = function(event, self, elem, evt){ - if (!event.isDefaultPrevented()){ - modal = $(self.element).find('#calendar-modal'); - rgb = self.hexToRgb(elem.attr('data-color')); - modal.css('background', 'rgba('+rgb.r+', '+rgb.g+', '+rgb.b+', 0.5)'); - 
modal.find('.modal-title').append(elem.attr('data-title')+' '); - modal.find('.modal-body').append( - $('

      ').append( - $(this).closest('.calendar-events-day').find('span').html() - ).append( - ' ' - ).append( - $('').text(elem.find('.event-date').text()) - ) - ); - modal.find('.modal-body').append(elem.find('.event-content').html()); - modal.modal('show'); - modal.on('hidden.bs.modal', function (e) { - $(e.target).find('.modal-title').html(''); - $(e.target).find('.modal-body').html(''); - }); - } - }; - $(self.element).off('Calendar.event-click', eventClickDefault).on('Calendar.event-click', eventClickDefault); - $(self.element).off('Calendar.daynote-click', eventClickDefault).on('Calendar.daynote-click', eventClickDefault); - var eventCategoryClickDefault = function(event, self, elem){ - if (!event.isDefaultPrevented()){ - var events = self.element.find('.calendar-event[data-category="'+$(elem).text()+'"]'); - if ($(elem).attr('data-clicked') == 'false'){ - events.animate({ - opacity: 0 - }, 200, function(){ - events.css('display', 'none'); - $(elem).css('background-color', '#E0E0E0'); - $(elem).attr('data-clicked', true); - }); - } - if ($(elem).attr('data-clicked') == 'true'){ - events.css('display', 'list-item'); - $(elem).css('background-color', $(elem).attr('data-color')); - events.animate({ - opacity: 1 - }, 200, function(){ - $(elem).attr('data-clicked', false); - }); - } - } - }; - $(self.element).off('Calendar.category-event-click', eventCategoryClickDefault).on('Calendar.category-event-click', eventCategoryClickDefault); - $(self.element).off('Calendar.category-daynote-click', eventCategoryClickDefault).on('Calendar.category-daynote-click', eventCategoryClickDefault); - var eventCategoryMouseenterDefault = function(event, self, elem){ - if (!event.isDefaultPrevented()){ - self.element.find('.calendar-event').each(function(i, e){ - if ($(e).attr('data-category') != elem.text()){ - $(e).css('opacity', 0.2); - } - }); - } - }; - $(self.element).off('Calendar.category-event-mouseenter', eventCategoryMouseenterDefault).on('Calendar.category-event-mouseenter', eventCategoryMouseenterDefault); - $(self.element).off('Calendar.category-daynote-mouseenter', eventCategoryMouseenterDefault).on('Calendar.category-daynote-mouseenter', eventCategoryMouseenterDefault); - var eventCategoryMouseleaveDefault = function(event, self, elem){ - if (!event.isDefaultPrevented()){ - self.element.find('.calendar-event').each(function(i, e){ - $(e).css('opacity', 1); - }); - } - }; - $(self.element).off('Calendar.category-event-mouseleave', eventCategoryMouseleaveDefault).on('Calendar.category-event-mouseleave', eventCategoryMouseleaveDefault); - $(self.element).off('Calendar.category-daynote-mouseleave', eventCategoryMouseleaveDefault).on('Calendar.category-daynote-mouseleave', eventCategoryMouseleaveDefault); - this.binded = true; - } - }; - - Calendar.prototype.weekDrawTime = function() { - $(this.element).append($('
      ', { - class: 'calendar-timeline' - })); - $(this.element).find('div.calendar-timeline').css('padding-top', this.conf.weekday.dayline.heightPx+'px'); - var marginTop = this.conf.weekday.dayline.month.heightPx; - if (this.conf.categories.enable){ - marginTop += 30; - } - $(this.element).find('div.calendar-timeline').css('margin-top', marginTop+'px'); - - $(this.element).find('div.calendar-timeline').append($('
        ')); - - ul = $(this.element).find('div.calendar-timeline').find('ul'); - - time = moment(moment()).startOf('Week'); - time.add(this.conf.weekday.timeline.fromHour, 'H'); - - var limit = (((this.conf.weekday.timeline.toHour+1)*60) - (this.conf.weekday.timeline.fromHour * 60)) / this.conf.weekday.timeline.intervalMinutes; - var i = 0; - while (i < limit){ - li = $('
      • '); - li.append($('').text(time.format(this.conf.weekday.timeline.format))); - li.height(this.conf.weekday.timeline.heightPx); - ul.append(li); - time.add(this.conf.weekday.timeline.intervalMinutes, 'm'); - i++; - } - }; - - Calendar.prototype.weekDrawDays = function() { - $(this.element).append($('
        ', { - class: 'calendar-events' - })); - - var div = $('
        ', { - class: 'calendar-month' - }) - .css('height', this.conf.weekday.dayline.month.heightPx+'px') - .css('text-align', 'center') - .css('padding-top', (this.conf.weekday.dayline.month.heightPx-20)/2+'px') - ; - if (this.getView() == 'week'){ - div.text(this.miscUcfirstString(moment.unix(this.conf.unixTimestamp).format(this.conf.weekday.dayline.month.format))); - div.addClass('weektomonth'); - } - if (this.getView() == 'day'){ - div.text(this.miscUcfirstString(moment.unix(this.conf.unixTimestamp).format(this.conf.weekday.dayline.month.weekFormat))); - div.addClass('daytoweek'); - } - $(this.element).find('div.calendar-events').append(div); - - $(this.element).find('div.calendar-events').append($('
          ')); - - ul = $(this.element).find('div.calendar-events').find('ul'); - - var days = this.getViewDays(); - - for (var i=0; i', { - class: 'calendar-events-day' - }); - li.css('width',100/days.length+'%'); - div = $('
          ', { - class: 'calendar-day-header' - }); - div.height(this.conf.weekday.dayline.heightPx); - if (i == 0 && this.mobileQuery() == 'desktop'){ - div.append($(''; - modal+= ''; - modal+= '
          '; - modal+= ''; - modal+= '
        '; - modal+= '
        '; - modal+= '
      '; - $(this.element).append(modal); - }; - - Calendar.prototype.drawNow = function() { - if (this.conf.now.enable){ - var hr = $('
      ', { - class: 'featurette-divider calendar-now' - }); - hr.css('width', '100%'); - hr.css('position', 'absolute'); - hr.css('z-index', 3); - hr.css('border-top', this.conf.now.heightPx+'px '+this.conf.now.style+' '+this.conf.now.color); - var top = ((moment().format('X') - moment().startOf('day').add(this.conf.weekday.timeline.fromHour, 'h').format('X')) / 60 / this.conf.weekday.timeline.intervalMinutes * this.conf.weekday.timeline.heightPx) - (this.conf.weekday.timeline.heightPx / 2); - hr.css('top', top+'px'); - this.element.find('li.calendar-events-day[data-time="'+moment().startOf('day').format('X')+'"]').find('ul').append(hr); - if (this.conf.now.refresh){ - var self = this; - setInterval(function(){ - var hr = self.element.find('hr.calendar-now').remove(); - var hr = $('
      ', { - class: 'featurette-divider calendar-now' - }); - hr.css('width', '100%'); - hr.css('position', 'absolute'); - hr.css('z-index', 2); - hr.css('border-top', self.conf.now.heightPx+'px '+self.conf.now.style+' '+self.conf.now.color); - var top = ((moment().format('X') - moment().startOf('day').add(self.conf.weekday.timeline.fromHour, 'h').format('X')) / 60 / self.conf.weekday.timeline.intervalMinutes * self.conf.weekday.timeline.heightPx) - (self.conf.weekday.timeline.heightPx / 2); - hr.css('top', top+'px'); - self.element.find('li.calendar-events-day[data-time="'+moment().startOf('day').format('X')+'"]').find('ul').append(hr); - }, 10000); - } - } - }; - - Calendar.prototype.hoverEventOrDaynote = function() { - var self = this; - this.element.find('.calendar-event').each(function(){ - var setTimeoutConst; - $(this).hover( - function(){ - elem = $(this); - setTimeoutConst = setTimeout(function(){ - if (elem.hasClass('calendar-daynote')){ - $(self.element).trigger('Calendar.daynote-mouseenter', [ - self, - elem - ]); - }else{ - $(self.element).trigger('Calendar.event-mouseenter', [ - self, - elem - ]); - } - - }, self.conf.event.hover.delay); - }, - function(){ - clearTimeout(setTimeoutConst); - elem = $(this); - if (elem.hasClass('calendar-daynote')){ - $(self.element).trigger('Calendar.daynote-mouseleave', [ - self, - elem - ]); - }else{ - $(self.element).trigger('Calendar.event-mouseleave', [ - self, - elem - ]); - } - - } - ); - }); - }; - - Calendar.prototype.clickEventOrDaynote = function() { - self = this; - this.element.find('.calendar-event').each(function(){ - $(this).click(function(event){ - elem = $(event.target); - if (elem.prop('nodeName') !== 'LI' && !elem.hasClass('calendar-event')){ - elem = elem.closest('li.calendar-event'); - } - if (elem.hasClass('calendar-daynote')){ - $(self.element).trigger('Calendar.daynote-click', [ - self, - elem, - self.daynotes[parseInt(elem.attr('data-index'))] - ]); - }else{ - $(self.element).trigger('Calendar.event-click', [ - self, - elem, - self.events[parseInt(elem.attr('data-index'))] - ]); - } - }); - }); - }; - - Calendar.prototype.resizeTimeline = function() { - for (var j=0; j parseInt(moment.unix(this.events[j].end).startOf('day').add(this.conf.weekday.timeline.toHour, 'hour').format('X'))){ - this.conf.weekday.timeline.toHour = parseInt(moment.unix(this.events[j].end).hour()); - if (this.conf.weekday.timeline.toHour < 23){ - this.conf.weekday.timeline.toHour++; - } - } - } - }; - - Calendar.prototype.getViewDays = function() { - var days = []; - if (this.getView() == 'day'){ - days.push(parseInt(moment.unix(this.conf.unixTimestamp).format('X'))); - } - if (this.getView() == 'week'){ - for (var i=0; i= parseInt(this.fromTimestamp) && parseInt(e[attribute2]) <= parseInt(this.toTimestamp)){ - categories.push(e.category); - } - } - categories = this.miscUniqueArray(categories); - return categories; - }; - - Calendar.prototype.getCategoryColor = function(category, object, colors, color) { - for (var i=0; i 0) ? this.userEventCategoryColor : []; - for (var i=0; i 0) ? this.userDaynoteCategoryColor : []; - for (var i=0; i b.start) ? 1 : -1;}); - return this; - }; - - Calendar.prototype.addEvents = function(events) { - this.events = this.events.concat(events); - this.events.sort(function(a,b) {return (a.start > b.start) ? 1 : -1;}); - return this; - }; - - Calendar.prototype.getDaynotes = function() { - return this.daynotes; - }; - - Calendar.prototype.setDaynotes = function(daynotes) { - this.daynotes = (daynotes) ? 
daynotes : []; - this.daynotes.sort(function(a,b) {return (a.start > b.start) ? 1 : -1;}); - return this; - }; - - Calendar.prototype.addDaynotes = function(daynotes) { - this.daynotes.concat(daynotes); - this.daynotes.sort(function(a,b) {return (a.start > b.start) ? 1 : -1;}); - return this; - }; - - Calendar.prototype.getInitTime = function() { - return this.initTime; - }; - - Calendar.prototype.getViewInterval = function() { - return [this.fromTimestamp, this.toTimestamp]; - }; - - Calendar.prototype.getNextViewInterval = function() { - if (this.getView() == 'day'){ - return [ - parseInt(moment.unix(this.fromTimestamp).add(1, 'd').format('X')), - parseInt(moment.unix(this.toTimestamp).add(1, 'd').format('X')) - ]; - } - if (this.getView() == 'week'){ - return [ - parseInt(moment.unix(this.fromTimestamp).add(1, 'w').format('X')), - parseInt(moment.unix(this.toTimestamp).add(1, 'w').format('X')) - ]; - } - if (this.getView() == 'month'){ - return [ - parseInt(moment.unix(this.fromTimestamp).add(1, 'M').format('X')), - parseInt(moment.unix(this.toTimestamp).add(1, 'M').format('X')) - ]; - } - }; - - Calendar.prototype.getPrevViewInterval = function() { - if (this.getView() == 'day'){ - return [ - parseInt(moment.unix(this.fromTimestamp).subtract(1, 'd').format('X')), - parseInt(moment.unix(this.toTimestamp).subtract(1, 'd').format('X')) - ]; - } - if (this.getView() == 'week'){ - return [ - parseInt(moment.unix(this.fromTimestamp).subtract(1, 'w').format('X')), - parseInt(moment.unix(this.toTimestamp).subtract(1, 'w').format('X')) - ]; - } - if (this.getView() == 'month'){ - return [ - parseInt(moment.unix(this.fromTimestamp).subtract(1, 'M').format('X')), - parseInt(moment.unix(this.toTimestamp).subtract(1, 'M').format('X')) - ]; - } - }; - - Calendar.prototype.getTimestamp = function() { - return this.conf.unixTimestamp; - }; - - Calendar.prototype.setTimestamp = function(timestamp) { - this.conf.unixTimestamp = parseInt(timestamp); - return this; - }; - - Calendar.prototype.getView = function(){ - return this.conf.view; - }; - - Calendar.prototype.setView = function(view){ - if (view == 'day' || view == 'week' || view == 'month') { - this.conf.view = view; - } - return this; - }; - - Calendar.prototype.miscDedupeArray = function(a) { - a = a.concat(); - for (var i=0; i List[str]: - """Returns the names of available CLIP models""" - return list(_MODELS.keys()) - - -def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit: bool = False, download_root: str = None): - """Load a CLIP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - - device : Union[str, torch.device] - The device to put the loaded model - - jit : bool - Whether to load the optimized JIT model or more hackable non-JIT model (default). 
- - download_root: str - path to download the model files; by default, it uses "~/.cache/clip" - - Returns - ------- - model : torch.nn.Module - The CLIP model - - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if name in _MODELS: - model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip")) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {available_models()}") - - with open(model_path, 'rb') as opened_file: - try: - # loading JIT archive - model = torch.jit.load(opened_file, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead") - jit = False - state_dict = torch.load(opened_file, map_location="cpu") - - if not jit: - model = build_model(state_dict or model.state_dict()).to(device) - if str(device) == "cpu": - model.float() - return model, _transform(model.visual.input_resolution) - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - - model.float() - - return model, _transform(model.input_resolution.item()) - - -def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> Union[torch.IntTensor, torch.LongTensor]: - """ - Returns the tokenized representation of given input string(s) - - Parameters - ---------- - texts : Union[str, List[str]] - An input string or a list of input strings to tokenize - - context_length : int - The context length to use; all CLIP models use 77 as the context length - - truncate: bool - Whether to truncate the text in case its encoding is longer than the context length - - Returns - ------- - A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]. 
- We return LongTensor when torch version is <1.8.0, since older index_select requires indices to be long. - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = _tokenizer.encoder["<|startoftext|>"] - eot_token = _tokenizer.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - if packaging.version.parse(torch.__version__) < packaging.version.parse("1.8.0"): - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - else: - result = torch.zeros(len(all_tokens), context_length, dtype=torch.int) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - if truncate: - tokens = tokens[:context_length] - tokens[-1] = eot_token - else: - raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}") - result[i, :len(tokens)] = torch.tensor(tokens) - - return result diff --git a/spaces/patrawtf/shopify_csv_qa/app/tapas2.py b/spaces/patrawtf/shopify_csv_qa/app/tapas2.py deleted file mode 100644 index d8c750fb386576e87db23c017593dec620d73b6d..0000000000000000000000000000000000000000 --- a/spaces/patrawtf/shopify_csv_qa/app/tapas2.py +++ /dev/null @@ -1,55 +0,0 @@ -from transformers import TapasTokenizer, TFTapasForQuestionAnswering -import pandas as pd -import datetime - - -def execute_query(query, csv_file): - a = datetime.datetime.now() - - table = pd.read_csv(csv_file.name, delimiter=",") - table.fillna(0, inplace=True) - table = table.astype(str) - - model_name = "google/tapas-base-finetuned-wtq" - model = TFTapasForQuestionAnswering.from_pretrained(model_name) - tokenizer = TapasTokenizer.from_pretrained(model_name) - - queries = [query] - - inputs = tokenizer(table=table, queries=queries, padding=True, return_tensors="tf",truncated=True) - outputs = model(**inputs) - - predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions( - inputs, outputs.logits, outputs.logits_aggregation - ) - - # let's print out the results: - id2aggregation = {0: "NONE", 1: "SUM", 2: "AVERAGE", 3: "COUNT"} - aggregation_predictions_string = [id2aggregation[x] for x in predicted_aggregation_indices] - - answers = [] - for coordinates in predicted_answer_coordinates: - if len(coordinates) == 1: - # only a single cell: - answers.append(table.iat[coordinates[0]]) - else: - # multiple cells - cell_values = [] - for coordinate in coordinates: - cell_values.append(table.iat[coordinate]) - answers.append(cell_values) - - for query, answer, predicted_agg in zip(queries, answers, aggregation_predictions_string): - if predicted_agg != "NONE": - answers.append(predicted_agg) - - query_result = { - "query": query, - "result": answers - } - - b = datetime.datetime.now() - print(b - a) - - return query_result, table - diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/val_loop_hook.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/val_loop_hook.py deleted file mode 100644 index dd1937b42da1f1d23747fa3262426e13db9b723e..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/val_loop_hook.py +++ /dev/null @@ -1,25 +0,0 @@ -from abc import ABC, abstractmethod -import torch -import pytorch_lightning as pl - -class ValidationLoopHook(ABC): - @abstractmethod - def process(self, batch: torch.Tensor, target_batch: torch.Tensor, logits_batch: torch.Tensor, prediction_batch: torch.Tensor) -> None: - """ - Called for every validation batch to 
process results. - """ - pass - - @abstractmethod - def trigger(self, module: pl.LightningModule): - """ - Called after the validation epoch has concluced to further interact with the module and/or log data. - """ - pass - - @abstractmethod - def reset(self): - """ - Called right after build() to clean up before the next validation epoch starts. - """ - pass \ No newline at end of file diff --git a/spaces/peterbonnesoeur/pose_demo/app.py b/spaces/peterbonnesoeur/pose_demo/app.py deleted file mode 100644 index f59105355fa6fe89dffde7b652a4a0c2a5e5f7a2..0000000000000000000000000000000000000000 --- a/spaces/peterbonnesoeur/pose_demo/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -import gradio as gr - -def inference(img, ver, white_overlay): - - if white_overlay: - white_overlay = "--white-overlay=0.3" - else: - white_overlay = "" - - if ver == 'pose': - os.system("python -m openpifpaf.predict "+img.name+" --checkpoint=shufflenetv2k30 --line-width=4 " + white_overlay + " -o out.jpg") - elif ver == 'whole-body': - os.system("python -m openpifpaf.predict "+img.name+" --checkpoint=shufflenetv2k30-wholebody --instance-threshold 0.05 " + white_overlay + " --seed-threshold 0.05 \ - --line-width 3 -o out.jpg") - elif ver == 'vehicles': - os.system("python -m openpifpaf.predict "+img.name+" --checkpoint=shufflenetv2k16-apollo-24 --line-width=5 " + white_overlay + " -o out.jpg") - elif ver == 'animal': - os.system("python -m openpifpaf.predict "+img.name+" --checkpoint=shufflenetv2k30-animalpose --line-width=5 --font-size=6 " + white_overlay + " \ - --long-edge=500 -o out.jpg") - else: - raise ValueError('invalid version') - - return "out.jpg" - - -title = "Openpifpaf - pose estimation for human, vehicles and animals" -description = "Gradio demo for openpifpaf. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below and don't hesitate to SMASH THAT LIKE BUTTON (and you do not have a dislike there either so...)" -article = "

      Github Repo Openpifpaf | Github Repo peterbonnesoeur

      " - -with open("article.html", "r", encoding='utf-8') as f: - article= f.read() - -examples=[ - ['basketball.jpg','whole-body'], - ['bill.png','whole-body'], - ['billie.png','whole-body'], - ['meeting.jpeg','pose'], - ['crowd.jpg','pose'], - ['dalmatian.jpg', 'animal'], - ['tappo_loomo.jpg', 'animal'], - ['cow.jpg', 'animal'], - ['india-vehicles.jpeg', 'vehicles'], - ['russia-vehicles.jpg', 'vehicles'], - ['paris-vehicles.jpg', 'vehicles'], - - ] - -gr.Interface( - inference, - [ - gr.inputs.Image(type="file", label="Input"), - gr.inputs.Radio(['whole-body', 'pose', 'vehicles', 'animal'], type="value", default='whole-body', label='version'), - gr.inputs.Checkbox(default=False, label="White overlay") - ], - gr.outputs.Image(type="file", label="Output"), - title=title, - description=description, - article=article, - enable_queue=True, - examples=examples).launch() \ No newline at end of file diff --git a/spaces/pharma-IA/PharmaWise_Prospecto_Megalabs_V2.10/README.md b/spaces/pharma-IA/PharmaWise_Prospecto_Megalabs_V2.10/README.md deleted file mode 100644 index 551a45b2845161bba774c6776575feb128b10a19..0000000000000000000000000000000000000000 --- a/spaces/pharma-IA/PharmaWise_Prospecto_Megalabs_V2.10/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: PharmaWise Demo Prospecto Mega v2.10 -emoji: 💻 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: artistic-2.0 -duplicated_from: pharma-IA/PharmaWise_Prospecto_Megalabs_V2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/pixiou/bingo/src/lib/bots/bing/tts.ts b/spaces/pixiou/bingo/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? 
Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/intranges.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/intranges.py deleted file mode 100644 index 6a43b0475347cb50d0d65ada1000a82eeca9e882..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/intranges.py +++ /dev/null @@ -1,54 +0,0 @@ -""" -Given a list of integers, made up of (hopefully) a small number of long runs -of consecutive integers, compute a representation of the form -((start1, end1), (start2, end2) ...). Then answer the question "was x present -in the original list?" in time O(log(# runs)). -""" - -import bisect -from typing import List, Tuple - -def intranges_from_list(list_: List[int]) -> Tuple[int, ...]: - """Represent a list of integers as a sequence of ranges: - ((start_0, end_0), (start_1, end_1), ...), such that the original - integers are exactly those x such that start_i <= x < end_i for some i. - - Ranges are encoded as single integers (start << 32 | end), not as tuples. 
- """ - - sorted_list = sorted(list_) - ranges = [] - last_write = -1 - for i in range(len(sorted_list)): - if i+1 < len(sorted_list): - if sorted_list[i] == sorted_list[i+1]-1: - continue - current_range = sorted_list[last_write+1:i+1] - ranges.append(_encode_range(current_range[0], current_range[-1] + 1)) - last_write = i - - return tuple(ranges) - -def _encode_range(start: int, end: int) -> int: - return (start << 32) | end - -def _decode_range(r: int) -> Tuple[int, int]: - return (r >> 32), (r & ((1 << 32) - 1)) - - -def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool: - """Determine if `int_` falls into one of the ranges in `ranges`.""" - tuple_ = _encode_range(int_, 0) - pos = bisect.bisect_left(ranges, tuple_) - # we could be immediately ahead of a tuple (start, end) - # with start < int_ <= end - if pos > 0: - left, right = _decode_range(ranges[pos-1]) - if left <= int_ < right: - return True - # or we could be immediately behind a tuple (int_, end) - if pos < len(ranges): - left, _ = _decode_range(ranges[pos]) - if left == int_: - return True - return False diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/__init__.py deleted file mode 100644 index 39c84aae5d8e1f4701b0b04fb9fcb8d4ca219de4..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -""" - Pygments - ~~~~~~~~ - - Pygments is a syntax highlighting package written in Python. - - It is a generic syntax highlighter for general use in all kinds of software - such as forum systems, wikis or other applications that need to prettify - source code. Highlights are: - - * a wide range of common languages and markup formats is supported - * special attention is paid to details, increasing quality by a fair amount - * support for new languages and formats are added easily - * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image - formats that PIL supports, and ANSI sequences - * it is usable as a command-line tool and as a library - * ... and it highlights even Brainfuck! - - The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``. - - .. _Pygments master branch: - https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" -from io import StringIO, BytesIO - -__version__ = '2.15.1' -__docformat__ = 'restructuredtext' - -__all__ = ['lex', 'format', 'highlight'] - - -def lex(code, lexer): - """ - Lex `code` with the `lexer` (must be a `Lexer` instance) - and return an iterable of tokens. Currently, this only calls - `lexer.get_tokens()`. - """ - try: - return lexer.get_tokens(code) - except TypeError: - # Heuristic to catch a common mistake. - from pip._vendor.pygments.lexer import RegexLexer - if isinstance(lexer, type) and issubclass(lexer, RegexLexer): - raise TypeError('lex() argument must be a lexer instance, ' - 'not a class') - raise - - -def format(tokens, formatter, outfile=None): # pylint: disable=redefined-builtin - """ - Format ``tokens`` (an iterable of tokens) with the formatter ``formatter`` - (a `Formatter` instance). 
- - If ``outfile`` is given and a valid file object (an object with a - ``write`` method), the result will be written to it, otherwise it - is returned as a string. - """ - try: - if not outfile: - realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO() - formatter.format(tokens, realoutfile) - return realoutfile.getvalue() - else: - formatter.format(tokens, outfile) - except TypeError: - # Heuristic to catch a common mistake. - from pip._vendor.pygments.formatter import Formatter - if isinstance(formatter, type) and issubclass(formatter, Formatter): - raise TypeError('format() argument must be a formatter instance, ' - 'not a class') - raise - - -def highlight(code, lexer, formatter, outfile=None): - """ - This is the most high-level highlighting function. It combines `lex` and - `format` in one function. - """ - return format(lex(code, lexer), formatter, outfile) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build_ext.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build_ext.py deleted file mode 100644 index cbfe3ec1c28529aade613b000d5b051807287deb..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build_ext.py +++ /dev/null @@ -1,383 +0,0 @@ -import os -import sys -import itertools -from importlib.machinery import EXTENSION_SUFFIXES -from importlib.util import cache_from_source as _compiled_file_name -from typing import Dict, Iterator, List, Tuple - -from distutils.command.build_ext import build_ext as _du_build_ext -from distutils.ccompiler import new_compiler -from distutils.sysconfig import customize_compiler, get_config_var -from distutils import log - -from setuptools.errors import BaseError -from setuptools.extension import Extension, Library - -try: - # Attempt to use Cython for building extensions, if available - from Cython.Distutils.build_ext import build_ext as _build_ext - # Additionally, assert that the compiler module will load - # also. Ref #1229. - __import__('Cython.Compiler.Main') -except ImportError: - _build_ext = _du_build_ext - -# make sure _config_vars is initialized -get_config_var("LDSHARED") -from distutils.sysconfig import _config_vars as _CONFIG_VARS # noqa - - -def _customize_compiler_for_shlib(compiler): - if sys.platform == "darwin": - # building .dylib requires additional compiler flags on OSX; here we - # temporarily substitute the pyconfig.h variables so that distutils' - # 'customize_compiler' uses them before we build the shared libraries. - tmp = _CONFIG_VARS.copy() - try: - # XXX Help! I don't have any idea whether these are right... 
- _CONFIG_VARS['LDSHARED'] = ( - "gcc -Wl,-x -dynamiclib -undefined dynamic_lookup") - _CONFIG_VARS['CCSHARED'] = " -dynamiclib" - _CONFIG_VARS['SO'] = ".dylib" - customize_compiler(compiler) - finally: - _CONFIG_VARS.clear() - _CONFIG_VARS.update(tmp) - else: - customize_compiler(compiler) - - -have_rtld = False -use_stubs = False -libtype = 'shared' - -if sys.platform == "darwin": - use_stubs = True -elif os.name != 'nt': - try: - import dl - use_stubs = have_rtld = hasattr(dl, 'RTLD_NOW') - except ImportError: - pass - - -def if_dl(s): - return s if have_rtld else '' - - -def get_abi3_suffix(): - """Return the file extension for an abi3-compliant Extension()""" - for suffix in EXTENSION_SUFFIXES: - if '.abi3' in suffix: # Unix - return suffix - elif suffix == '.pyd': # Windows - return suffix - - -class build_ext(_build_ext): - editable_mode: bool = False - inplace: bool = False - - def run(self): - """Build extensions in build directory, then copy if --inplace""" - old_inplace, self.inplace = self.inplace, 0 - _build_ext.run(self) - self.inplace = old_inplace - if old_inplace: - self.copy_extensions_to_source() - - def _get_inplace_equivalent(self, build_py, ext: Extension) -> Tuple[str, str]: - fullname = self.get_ext_fullname(ext.name) - filename = self.get_ext_filename(fullname) - modpath = fullname.split('.') - package = '.'.join(modpath[:-1]) - package_dir = build_py.get_package_dir(package) - inplace_file = os.path.join(package_dir, os.path.basename(filename)) - regular_file = os.path.join(self.build_lib, filename) - return (inplace_file, regular_file) - - def copy_extensions_to_source(self): - build_py = self.get_finalized_command('build_py') - for ext in self.extensions: - inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext) - - # Always copy, even if source is older than destination, to ensure - # that the right extensions for the current Python/platform are - # used. - if os.path.exists(regular_file) or not ext.optional: - self.copy_file(regular_file, inplace_file, level=self.verbose) - - if ext._needs_stub: - inplace_stub = self._get_equivalent_stub(ext, inplace_file) - self._write_stub_file(inplace_stub, ext, compile=True) - # Always compile stub and remove the original (leave the cache behind) - # (this behaviour was observed in previous iterations of the code) - - def _get_equivalent_stub(self, ext: Extension, output_file: str) -> str: - dir_ = os.path.dirname(output_file) - _, _, name = ext.name.rpartition(".") - return f"{os.path.join(dir_, name)}.py" - - def _get_output_mapping(self) -> Iterator[Tuple[str, str]]: - if not self.inplace: - return - - build_py = self.get_finalized_command('build_py') - opt = self.get_finalized_command('install_lib').optimize or "" - - for ext in self.extensions: - inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext) - yield (regular_file, inplace_file) - - if ext._needs_stub: - # This version of `build_ext` always builds artifacts in another dir, - # when "inplace=True" is given it just copies them back. - # This is done in the `copy_extensions_to_source` function, which - # always compile stub files via `_compile_and_remove_stub`. - # At the end of the process, a `.pyc` stub file is created without the - # corresponding `.py`. 
- - inplace_stub = self._get_equivalent_stub(ext, inplace_file) - regular_stub = self._get_equivalent_stub(ext, regular_file) - inplace_cache = _compiled_file_name(inplace_stub, optimization=opt) - output_cache = _compiled_file_name(regular_stub, optimization=opt) - yield (output_cache, inplace_cache) - - def get_ext_filename(self, fullname): - so_ext = os.getenv('SETUPTOOLS_EXT_SUFFIX') - if so_ext: - filename = os.path.join(*fullname.split('.')) + so_ext - else: - filename = _build_ext.get_ext_filename(self, fullname) - so_ext = get_config_var('EXT_SUFFIX') - - if fullname in self.ext_map: - ext = self.ext_map[fullname] - use_abi3 = getattr(ext, 'py_limited_api') and get_abi3_suffix() - if use_abi3: - filename = filename[:-len(so_ext)] - so_ext = get_abi3_suffix() - filename = filename + so_ext - if isinstance(ext, Library): - fn, ext = os.path.splitext(filename) - return self.shlib_compiler.library_filename(fn, libtype) - elif use_stubs and ext._links_to_dynamic: - d, fn = os.path.split(filename) - return os.path.join(d, 'dl-' + fn) - return filename - - def initialize_options(self): - _build_ext.initialize_options(self) - self.shlib_compiler = None - self.shlibs = [] - self.ext_map = {} - self.editable_mode = False - - def finalize_options(self): - _build_ext.finalize_options(self) - self.extensions = self.extensions or [] - self.check_extensions_list(self.extensions) - self.shlibs = [ext for ext in self.extensions - if isinstance(ext, Library)] - if self.shlibs: - self.setup_shlib_compiler() - for ext in self.extensions: - ext._full_name = self.get_ext_fullname(ext.name) - for ext in self.extensions: - fullname = ext._full_name - self.ext_map[fullname] = ext - - # distutils 3.1 will also ask for module names - # XXX what to do with conflicts? - self.ext_map[fullname.split('.')[-1]] = ext - - ltd = self.shlibs and self.links_to_dynamic(ext) or False - ns = ltd and use_stubs and not isinstance(ext, Library) - ext._links_to_dynamic = ltd - ext._needs_stub = ns - filename = ext._file_name = self.get_ext_filename(fullname) - libdir = os.path.dirname(os.path.join(self.build_lib, filename)) - if ltd and libdir not in ext.library_dirs: - ext.library_dirs.append(libdir) - if ltd and use_stubs and os.curdir not in ext.runtime_library_dirs: - ext.runtime_library_dirs.append(os.curdir) - - if self.editable_mode: - self.inplace = True - - def setup_shlib_compiler(self): - compiler = self.shlib_compiler = new_compiler( - compiler=self.compiler, dry_run=self.dry_run, force=self.force - ) - _customize_compiler_for_shlib(compiler) - - if self.include_dirs is not None: - compiler.set_include_dirs(self.include_dirs) - if self.define is not None: - # 'define' option is a list of (name,value) tuples - for (name, value) in self.define: - compiler.define_macro(name, value) - if self.undef is not None: - for macro in self.undef: - compiler.undefine_macro(macro) - if self.libraries is not None: - compiler.set_libraries(self.libraries) - if self.library_dirs is not None: - compiler.set_library_dirs(self.library_dirs) - if self.rpath is not None: - compiler.set_runtime_library_dirs(self.rpath) - if self.link_objects is not None: - compiler.set_link_objects(self.link_objects) - - # hack so distutils' build_extension() builds a library instead - compiler.link_shared_object = link_shared_object.__get__(compiler) - - def get_export_symbols(self, ext): - if isinstance(ext, Library): - return ext.export_symbols - return _build_ext.get_export_symbols(self, ext) - - def build_extension(self, ext): - 
ext._convert_pyx_sources_to_lang() - _compiler = self.compiler - try: - if isinstance(ext, Library): - self.compiler = self.shlib_compiler - _build_ext.build_extension(self, ext) - if ext._needs_stub: - build_lib = self.get_finalized_command('build_py').build_lib - self.write_stub(build_lib, ext) - finally: - self.compiler = _compiler - - def links_to_dynamic(self, ext): - """Return true if 'ext' links to a dynamic lib in the same package""" - # XXX this should check to ensure the lib is actually being built - # XXX as dynamic, and not just using a locally-found version or a - # XXX static-compiled version - libnames = dict.fromkeys([lib._full_name for lib in self.shlibs]) - pkg = '.'.join(ext._full_name.split('.')[:-1] + ['']) - return any(pkg + libname in libnames for libname in ext.libraries) - - def get_outputs(self) -> List[str]: - if self.inplace: - return list(self.get_output_mapping().keys()) - return sorted(_build_ext.get_outputs(self) + self.__get_stubs_outputs()) - - def get_output_mapping(self) -> Dict[str, str]: - """See :class:`setuptools.commands.build.SubCommand`""" - mapping = self._get_output_mapping() - return dict(sorted(mapping, key=lambda x: x[0])) - - def __get_stubs_outputs(self): - # assemble the base name for each extension that needs a stub - ns_ext_bases = ( - os.path.join(self.build_lib, *ext._full_name.split('.')) - for ext in self.extensions - if ext._needs_stub - ) - # pair each base with the extension - pairs = itertools.product(ns_ext_bases, self.__get_output_extensions()) - return list(base + fnext for base, fnext in pairs) - - def __get_output_extensions(self): - yield '.py' - yield '.pyc' - if self.get_finalized_command('build_py').optimize: - yield '.pyo' - - def write_stub(self, output_dir, ext, compile=False): - stub_file = os.path.join(output_dir, *ext._full_name.split('.')) + '.py' - self._write_stub_file(stub_file, ext, compile) - - def _write_stub_file(self, stub_file: str, ext: Extension, compile=False): - log.info("writing stub loader for %s to %s", ext._full_name, stub_file) - if compile and os.path.exists(stub_file): - raise BaseError(stub_file + " already exists! 
Please delete.") - if not self.dry_run: - f = open(stub_file, 'w') - f.write( - '\n'.join([ - "def __bootstrap__():", - " global __bootstrap__, __file__, __loader__", - " import sys, os, pkg_resources, importlib.util" + - if_dl(", dl"), - " __file__ = pkg_resources.resource_filename" - "(__name__,%r)" - % os.path.basename(ext._file_name), - " del __bootstrap__", - " if '__loader__' in globals():", - " del __loader__", - if_dl(" old_flags = sys.getdlopenflags()"), - " old_dir = os.getcwd()", - " try:", - " os.chdir(os.path.dirname(__file__))", - if_dl(" sys.setdlopenflags(dl.RTLD_NOW)"), - " spec = importlib.util.spec_from_file_location(", - " __name__, __file__)", - " mod = importlib.util.module_from_spec(spec)", - " spec.loader.exec_module(mod)", - " finally:", - if_dl(" sys.setdlopenflags(old_flags)"), - " os.chdir(old_dir)", - "__bootstrap__()", - "" # terminal \n - ]) - ) - f.close() - if compile: - self._compile_and_remove_stub(stub_file) - - def _compile_and_remove_stub(self, stub_file: str): - from distutils.util import byte_compile - - byte_compile([stub_file], optimize=0, - force=True, dry_run=self.dry_run) - optimize = self.get_finalized_command('install_lib').optimize - if optimize > 0: - byte_compile([stub_file], optimize=optimize, - force=True, dry_run=self.dry_run) - if os.path.exists(stub_file) and not self.dry_run: - os.unlink(stub_file) - - -if use_stubs or os.name == 'nt': - # Build shared libraries - # - def link_shared_object( - self, objects, output_libname, output_dir=None, libraries=None, - library_dirs=None, runtime_library_dirs=None, export_symbols=None, - debug=0, extra_preargs=None, extra_postargs=None, build_temp=None, - target_lang=None): - self.link( - self.SHARED_LIBRARY, objects, output_libname, - output_dir, libraries, library_dirs, runtime_library_dirs, - export_symbols, debug, extra_preargs, extra_postargs, - build_temp, target_lang - ) -else: - # Build static libraries everywhere else - libtype = 'static' - - def link_shared_object( - self, objects, output_libname, output_dir=None, libraries=None, - library_dirs=None, runtime_library_dirs=None, export_symbols=None, - debug=0, extra_preargs=None, extra_postargs=None, build_temp=None, - target_lang=None): - # XXX we need to either disallow these attrs on Library instances, - # or warn/abort here if set, or something... 
- # libraries=None, library_dirs=None, runtime_library_dirs=None, - # export_symbols=None, extra_preargs=None, extra_postargs=None, - # build_temp=None - - assert output_dir is None # distutils build_ext doesn't pass this - output_dir, filename = os.path.split(output_libname) - basename, ext = os.path.splitext(filename) - if self.library_filename("x").startswith('lib'): - # strip 'lib' prefix; this is kludgy if some platform uses - # a different prefix - basename = basename[3:] - - self.create_static_lib( - objects, basename, output_dir, debug, target_lang - ) diff --git a/spaces/plzdontcry/dakubettergpt/src/components/ApiMenu/index.ts b/spaces/plzdontcry/dakubettergpt/src/components/ApiMenu/index.ts deleted file mode 100644 index 3760bfbd8027e63dc1913f3f35e7e4d3d81dab63..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/components/ApiMenu/index.ts +++ /dev/null @@ -1 +0,0 @@ -export { default } from './ApiMenu'; \ No newline at end of file diff --git a/spaces/pradosh/insurance_demo/README.md b/spaces/pradosh/insurance_demo/README.md deleted file mode 100644 index f468014755594a50291c626dfef2ccedfcb9b3c5..0000000000000000000000000000000000000000 --- a/spaces/pradosh/insurance_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Insurance Demo -emoji: 🌖 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/presidio/presidio_demo/openai_fake_data_generator.py b/spaces/presidio/presidio_demo/openai_fake_data_generator.py deleted file mode 100644 index d89458f56ff2f1e1537f2ab49742922f0fb0d330..0000000000000000000000000000000000000000 --- a/spaces/presidio/presidio_demo/openai_fake_data_generator.py +++ /dev/null @@ -1,80 +0,0 @@ -from collections import namedtuple -from typing import Optional - -import openai -import logging - -logger = logging.getLogger("presidio-streamlit") - -OpenAIParams = namedtuple( - "open_ai_params", - ["openai_key", "model", "api_base", "deployment_name", "api_version", "api_type"], -) - - -def set_openai_params(openai_params: OpenAIParams): - """Set the OpenAI API key. - :param openai_params: OpenAIParams object with the following fields: key, model, api version, deployment_name, - The latter only relate to Azure OpenAI deployments. - """ - openai.api_key = openai_params.openai_key - openai.api_version = openai_params.api_version - if openai_params.api_base: - openai.api_base = openai_params.api_base - openai.api_type = openai_params.api_type - - -def call_completion_model( - prompt: str, - model: str = "text-davinci-003", - max_tokens: int = 512, - deployment_id: Optional[str] = None, -) -> str: - """Creates a request for the OpenAI Completion service and returns the response. - - :param prompt: The prompt for the completion model - :param model: OpenAI model name - :param max_tokens: Model's max_tokens parameter - :param deployment_id: Azure OpenAI deployment ID - """ - if deployment_id: - response = openai.Completion.create( - deployment_id=deployment_id, model=model, prompt=prompt, max_tokens=max_tokens - ) - else: - response = openai.Completion.create( - model=model, prompt=prompt, max_tokens=max_tokens - ) - - return response["choices"][0].text - - -def create_prompt(anonymized_text: str) -> str: - """ - Create the prompt with instructions to GPT-3. - - :param anonymized_text: Text with placeholders instead of PII values, e.g. 
My name is . - """ - - prompt = f""" - Your role is to create synthetic text based on de-identified text with placeholders instead of Personally Identifiable Information (PII). - Replace the placeholders (e.g. ,, {{DATE}}, {{ip_address}}) with fake values. - - Instructions: - - a. Use completely random numbers, so every digit is drawn between 0 and 9. - b. Use realistic names that come from diverse genders, ethnicities and countries. - c. If there are no placeholders, return the text as is and provide an answer. - d. Keep the formatting as close to the original as possible. - e. If PII exists in the input, replace it with fake values in the output. - - input: How do I change the limit on my credit card {{credit_card_number}}? - output: How do I change the limit on my credit card 2539 3519 2345 1555? - input: was the chief science officer at . - output: Katherine Buckjov was the chief science officer at NASA. - input: Cameroon lives in . - output: Vladimir lives in Moscow. - input: {anonymized_text} - output: - """ - return prompt diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/_version.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/_version.py deleted file mode 100644 index 0936d1a7f3556bba27e25cd2eff208e809b71d83..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -# Master version for Pillow -__version__ = "10.1.0" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/test/test/backend/gradio_test/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/test/test/backend/gradio_test/__init__.py deleted file mode 100644 index 50ea9076715895b4572e91480783f30d43df28c6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/test/test/backend/gradio_test/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ - -from .test import Test - -__all__ = ['Test'] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/DropdownArrow-c9479e43.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/DropdownArrow-c9479e43.js deleted file mode 100644 index 54d872164577aa0b5c06ba810ac9e8c6eee03ca9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/DropdownArrow-c9479e43.js +++ /dev/null @@ -1,2 +0,0 @@ -import"./Index-37584f50.js";const{SvelteComponent:l,append:p,attr:e,detach:w,init:d,insert:c,noop:r,safe_not_equal:_,svg_element:s}=window.__gradio__svelte__internal;function g(a){let t,n;return{c(){t=s("svg"),n=s("path"),e(n,"d","M5 8l4 4 4-4z"),e(t,"class","dropdown-arrow svelte-xjn76a"),e(t,"xmlns","http://www.w3.org/2000/svg"),e(t,"width","100%"),e(t,"height","100%"),e(t,"viewBox","0 0 18 18")},m(o,i){c(o,t,i),p(t,n)},p:r,i:r,o:r,d(o){o&&w(t)}}}class v extends l{constructor(t){super(),d(this,t,null,g,_,{})}}export{v as D}; -//# sourceMappingURL=DropdownArrow-c9479e43.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_async/connection_pool.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_async/connection_pool.py deleted file mode 100644 index ddc0510e60e7b744b177394dba49f7541c81b803..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_async/connection_pool.py +++ /dev/null @@ -1,356 +0,0 @@ -import ssl -import sys -from types import TracebackType -from typing import AsyncIterable, AsyncIterator, Iterable, List, Optional, Type - -from .._backends.auto import AutoBackend -from .._backends.base import SOCKET_OPTION, AsyncNetworkBackend -from .._exceptions import ConnectionNotAvailable, UnsupportedProtocol -from .._models import Origin, Request, Response -from .._synchronization import AsyncEvent, AsyncLock, AsyncShieldCancellation -from .connection import AsyncHTTPConnection -from .interfaces import AsyncConnectionInterface, AsyncRequestInterface - - -class RequestStatus: - def __init__(self, request: Request): - self.request = request - self.connection: Optional[AsyncConnectionInterface] = None - self._connection_acquired = AsyncEvent() - - def set_connection(self, connection: AsyncConnectionInterface) -> None: - assert self.connection is None - self.connection = connection - self._connection_acquired.set() - - def unset_connection(self) -> None: - assert self.connection is not None - self.connection = None - self._connection_acquired = AsyncEvent() - - async def wait_for_connection( - self, timeout: Optional[float] = None - ) -> AsyncConnectionInterface: - if self.connection is None: - await self._connection_acquired.wait(timeout=timeout) - assert self.connection is not None - return self.connection - - -class AsyncConnectionPool(AsyncRequestInterface): - """ - A connection pool for making HTTP requests. - """ - - def __init__( - self, - ssl_context: Optional[ssl.SSLContext] = None, - max_connections: Optional[int] = 10, - max_keepalive_connections: Optional[int] = None, - keepalive_expiry: Optional[float] = None, - http1: bool = True, - http2: bool = False, - retries: int = 0, - local_address: Optional[str] = None, - uds: Optional[str] = None, - network_backend: Optional[AsyncNetworkBackend] = None, - socket_options: Optional[Iterable[SOCKET_OPTION]] = None, - ) -> None: - """ - A connection pool for making HTTP requests. - - Parameters: - ssl_context: An SSL context to use for verifying connections. - If not specified, the default `httpcore.default_ssl_context()` - will be used. - max_connections: The maximum number of concurrent HTTP connections that - the pool should allow. Any attempt to send a request on a pool that - would exceed this amount will block until a connection is available. - max_keepalive_connections: The maximum number of idle HTTP connections - that will be maintained in the pool. - keepalive_expiry: The duration in seconds that an idle HTTP connection - may be maintained for before being expired from the pool. - http1: A boolean indicating if HTTP/1.1 requests should be supported - by the connection pool. Defaults to True. - http2: A boolean indicating if HTTP/2 requests should be supported by - the connection pool. Defaults to False. - retries: The maximum number of retries when trying to establish a - connection. - local_address: Local address to connect from. Can also be used to connect - using a particular address family. Using `local_address="0.0.0.0"` - will connect using an `AF_INET` address (IPv4), while using - `local_address="::"` will connect using an `AF_INET6` address (IPv6). - uds: Path to a Unix Domain Socket to use instead of TCP sockets. - network_backend: A backend instance to use for handling network I/O. 
- socket_options: Socket options that have to be included - in the TCP socket when the connection was established. - """ - self._ssl_context = ssl_context - - self._max_connections = ( - sys.maxsize if max_connections is None else max_connections - ) - self._max_keepalive_connections = ( - sys.maxsize - if max_keepalive_connections is None - else max_keepalive_connections - ) - self._max_keepalive_connections = min( - self._max_connections, self._max_keepalive_connections - ) - - self._keepalive_expiry = keepalive_expiry - self._http1 = http1 - self._http2 = http2 - self._retries = retries - self._local_address = local_address - self._uds = uds - - self._pool: List[AsyncConnectionInterface] = [] - self._requests: List[RequestStatus] = [] - self._pool_lock = AsyncLock() - self._network_backend = ( - AutoBackend() if network_backend is None else network_backend - ) - self._socket_options = socket_options - - def create_connection(self, origin: Origin) -> AsyncConnectionInterface: - return AsyncHTTPConnection( - origin=origin, - ssl_context=self._ssl_context, - keepalive_expiry=self._keepalive_expiry, - http1=self._http1, - http2=self._http2, - retries=self._retries, - local_address=self._local_address, - uds=self._uds, - network_backend=self._network_backend, - socket_options=self._socket_options, - ) - - @property - def connections(self) -> List[AsyncConnectionInterface]: - """ - Return a list of the connections currently in the pool. - - For example: - - ```python - >>> pool.connections - [ - , - , - , - ] - ``` - """ - return list(self._pool) - - async def _attempt_to_acquire_connection(self, status: RequestStatus) -> bool: - """ - Attempt to provide a connection that can handle the given origin. - """ - origin = status.request.url.origin - - # If there are queued requests in front of us, then don't acquire a - # connection. We handle requests strictly in order. - waiting = [s for s in self._requests if s.connection is None] - if waiting and waiting[0] is not status: - return False - - # Reuse an existing connection if one is currently available. - for idx, connection in enumerate(self._pool): - if connection.can_handle_request(origin) and connection.is_available(): - self._pool.pop(idx) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - # If the pool is currently full, attempt to close one idle connection. - if len(self._pool) >= self._max_connections: - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle(): - await connection.aclose() - self._pool.pop(idx) - break - - # If the pool is still full, then we cannot acquire a connection. - if len(self._pool) >= self._max_connections: - return False - - # Otherwise create a new connection. - connection = self.create_connection(origin) - self._pool.insert(0, connection) - status.set_connection(connection) - return True - - async def _close_expired_connections(self) -> None: - """ - Clean up the connection pool by closing off any connections that have expired. - """ - # Close any connections that have expired their keep-alive time. - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.has_expired(): - await connection.aclose() - self._pool.pop(idx) - - # If the pool size exceeds the maximum number of allowed keep-alive connections, - # then close off idle connections as required. 
- pool_size = len(self._pool) - for idx, connection in reversed(list(enumerate(self._pool))): - if connection.is_idle() and pool_size > self._max_keepalive_connections: - await connection.aclose() - self._pool.pop(idx) - pool_size -= 1 - - async def handle_async_request(self, request: Request) -> Response: - """ - Send an HTTP request, and return an HTTP response. - - This is the core implementation that is called into by `.request()` or `.stream()`. - """ - scheme = request.url.scheme.decode() - if scheme == "": - raise UnsupportedProtocol( - "Request URL is missing an 'http://' or 'https://' protocol." - ) - if scheme not in ("http", "https", "ws", "wss"): - raise UnsupportedProtocol( - f"Request URL has an unsupported protocol '{scheme}://'." - ) - - status = RequestStatus(request) - - async with self._pool_lock: - self._requests.append(status) - await self._close_expired_connections() - await self._attempt_to_acquire_connection(status) - - while True: - timeouts = request.extensions.get("timeout", {}) - timeout = timeouts.get("pool", None) - try: - connection = await status.wait_for_connection(timeout=timeout) - except BaseException as exc: - # If we timeout here, or if the task is cancelled, then make - # sure to remove the request from the queue before bubbling - # up the exception. - async with self._pool_lock: - # Ensure only remove when task exists. - if status in self._requests: - self._requests.remove(status) - raise exc - - try: - response = await connection.handle_async_request(request) - except ConnectionNotAvailable: - # The ConnectionNotAvailable exception is a special case, that - # indicates we need to retry the request on a new connection. - # - # The most common case where this can occur is when multiple - # requests are queued waiting for a single connection, which - # might end up as an HTTP/2 connection, but which actually ends - # up as HTTP/1.1. - async with self._pool_lock: - # Maintain our position in the request queue, but reset the - # status so that the request becomes queued again. - status.unset_connection() - await self._attempt_to_acquire_connection(status) - except BaseException as exc: - with AsyncShieldCancellation(): - await self.response_closed(status) - raise exc - else: - break - - # When we return the response, we wrap the stream in a special class - # that handles notifying the connection pool once the response - # has been released. - assert isinstance(response.stream, AsyncIterable) - return Response( - status=response.status, - headers=response.headers, - content=ConnectionPoolByteStream(response.stream, self, status), - extensions=response.extensions, - ) - - async def response_closed(self, status: RequestStatus) -> None: - """ - This method acts as a callback once the request/response cycle is complete. - - It is called into from the `ConnectionPoolByteStream.aclose()` method. - """ - assert status.connection is not None - connection = status.connection - - async with self._pool_lock: - # Update the state of the connection pool. - if status in self._requests: - self._requests.remove(status) - - if connection.is_closed() and connection in self._pool: - self._pool.remove(connection) - - # Since we've had a response closed, it's possible we'll now be able - # to service one or more requests that are currently pending. 
- for status in self._requests: - if status.connection is None: - acquired = await self._attempt_to_acquire_connection(status) - # If we could not acquire a connection for a queued request - # then we don't need to check anymore requests that are - # queued later behind it. - if not acquired: - break - - # Housekeeping. - await self._close_expired_connections() - - async def aclose(self) -> None: - """ - Close any connections in the pool. - """ - async with self._pool_lock: - for connection in self._pool: - await connection.aclose() - self._pool = [] - self._requests = [] - - async def __aenter__(self) -> "AsyncConnectionPool": - return self - - async def __aexit__( - self, - exc_type: Optional[Type[BaseException]] = None, - exc_value: Optional[BaseException] = None, - traceback: Optional[TracebackType] = None, - ) -> None: - await self.aclose() - - -class ConnectionPoolByteStream: - """ - A wrapper around the response byte stream, that additionally handles - notifying the connection pool when the response has been closed. - """ - - def __init__( - self, - stream: AsyncIterable[bytes], - pool: AsyncConnectionPool, - status: RequestStatus, - ) -> None: - self._stream = stream - self._pool = pool - self._status = status - - async def __aiter__(self) -> AsyncIterator[bytes]: - async for part in self._stream: - yield part - - async def aclose(self) -> None: - try: - if hasattr(self._stream, "aclose"): - await self._stream.aclose() - finally: - with AsyncShieldCancellation(): - await self._pool.response_closed(self._status) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/helpers/parse_link_title.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/helpers/parse_link_title.py deleted file mode 100644 index 8f589336f60ea16a7fdf73c023cad2e5092d58e3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/helpers/parse_link_title.py +++ /dev/null @@ -1,60 +0,0 @@ -"""Parse link title -""" -from ..common.utils import charCodeAt, unescapeAll - - -class _Result: - __slots__ = ("ok", "pos", "lines", "str") - - def __init__(self) -> None: - self.ok = False - self.pos = 0 - self.lines = 0 - self.str = "" - - def __str__(self) -> str: - return self.str - - -def parseLinkTitle(string: str, pos: int, maximum: int) -> _Result: - lines = 0 - start = pos - result = _Result() - - if pos >= maximum: - return result - - marker = charCodeAt(string, pos) - - # /* " */ /* ' */ /* ( */ - if marker != 0x22 and marker != 0x27 and marker != 0x28: - return result - - pos += 1 - - # if opening marker is "(", switch it to closing marker ")" - if marker == 0x28: - marker = 0x29 - - while pos < maximum: - code = charCodeAt(string, pos) - if code == marker: - title = string[start + 1 : pos] - title = unescapeAll(title) - result.pos = pos + 1 - result.lines = lines - result.str = title - result.ok = True - return result - elif code == 0x28 and marker == 0x29: # /* ( */ /* ) */ - return result - elif code == 0x0A: - lines += 1 - elif code == 0x5C and pos + 1 < maximum: # /* \ */ - pos += 1 - if charCodeAt(string, pos) == 0x0A: - lines += 1 - - pos += 1 - - return result diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multidict/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multidict/__init__.py deleted file mode 100644 index d9ea7221675a9b3ff6bea78ac69dec0c832e342d..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/multidict/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -"""Multidict implementation. - -HTTP Headers and URL query string require specific data structure: -multidict. It behaves mostly like a dict but it can have -several values for the same key. -""" - -from ._abc import MultiMapping, MutableMultiMapping -from ._compat import USE_EXTENSIONS - -__all__ = ( - "MultiMapping", - "MutableMultiMapping", - "MultiDictProxy", - "CIMultiDictProxy", - "MultiDict", - "CIMultiDict", - "upstr", - "istr", - "getversion", -) - -__version__ = "6.0.4" - - -try: - if not USE_EXTENSIONS: - raise ImportError - from ._multidict import ( - CIMultiDict, - CIMultiDictProxy, - MultiDict, - MultiDictProxy, - getversion, - istr, - ) -except ImportError: # pragma: no cover - from ._multidict_py import ( - CIMultiDict, - CIMultiDictProxy, - MultiDict, - MultiDictProxy, - getversion, - istr, - ) - - -upstr = istr diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/ma/mrecords.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/ma/mrecords.py deleted file mode 100644 index 1e8103bcf63271a51122dd90fd1ba6f4c722502c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/ma/mrecords.py +++ /dev/null @@ -1,783 +0,0 @@ -""":mod:`numpy.ma..mrecords` - -Defines the equivalent of :class:`numpy.recarrays` for masked arrays, -where fields can be accessed as attributes. -Note that :class:`numpy.ma.MaskedArray` already supports structured datatypes -and the masking of individual fields. - -.. moduleauthor:: Pierre Gerard-Marchant - -""" -# We should make sure that no field is called '_mask','mask','_fieldmask', -# or whatever restricted keywords. An idea would be to no bother in the -# first place, and then rename the invalid fields with a trailing -# underscore. Maybe we could just overload the parser function ? - -from numpy.ma import ( - MAError, MaskedArray, masked, nomask, masked_array, getdata, - getmaskarray, filled -) -import numpy.ma as ma -import warnings - -import numpy as np -from numpy import ( - bool_, dtype, ndarray, recarray, array as narray -) -from numpy.core.records import ( - fromarrays as recfromarrays, fromrecords as recfromrecords -) - -_byteorderconv = np.core.records._byteorderconv - - -_check_fill_value = ma.core._check_fill_value - - -__all__ = [ - 'MaskedRecords', 'mrecarray', 'fromarrays', 'fromrecords', - 'fromtextfile', 'addfield', -] - -reserved_fields = ['_data', '_mask', '_fieldmask', 'dtype'] - - -def _checknames(descr, names=None): - """ - Checks that field names ``descr`` are not reserved keywords. - - If this is the case, a default 'f%i' is substituted. If the argument - `names` is not None, updates the field names to valid names. 
- - """ - ndescr = len(descr) - default_names = ['f%i' % i for i in range(ndescr)] - if names is None: - new_names = default_names - else: - if isinstance(names, (tuple, list)): - new_names = names - elif isinstance(names, str): - new_names = names.split(',') - else: - raise NameError(f'illegal input names {names!r}') - nnames = len(new_names) - if nnames < ndescr: - new_names += default_names[nnames:] - ndescr = [] - for (n, d, t) in zip(new_names, default_names, descr.descr): - if n in reserved_fields: - if t[0] in reserved_fields: - ndescr.append((d, t[1])) - else: - ndescr.append(t) - else: - ndescr.append((n, t[1])) - return np.dtype(ndescr) - - -def _get_fieldmask(self): - mdescr = [(n, '|b1') for n in self.dtype.names] - fdmask = np.empty(self.shape, dtype=mdescr) - fdmask.flat = tuple([False] * len(mdescr)) - return fdmask - - -class MaskedRecords(MaskedArray): - """ - - Attributes - ---------- - _data : recarray - Underlying data, as a record array. - _mask : boolean array - Mask of the records. A record is masked when all its fields are - masked. - _fieldmask : boolean recarray - Record array of booleans, setting the mask of each individual field - of each record. - _fill_value : record - Filling values for each field. - - """ - - def __new__(cls, shape, dtype=None, buf=None, offset=0, strides=None, - formats=None, names=None, titles=None, - byteorder=None, aligned=False, - mask=nomask, hard_mask=False, fill_value=None, keep_mask=True, - copy=False, - **options): - - self = recarray.__new__(cls, shape, dtype=dtype, buf=buf, offset=offset, - strides=strides, formats=formats, names=names, - titles=titles, byteorder=byteorder, - aligned=aligned,) - - mdtype = ma.make_mask_descr(self.dtype) - if mask is nomask or not np.size(mask): - if not keep_mask: - self._mask = tuple([False] * len(mdtype)) - else: - mask = np.array(mask, copy=copy) - if mask.shape != self.shape: - (nd, nm) = (self.size, mask.size) - if nm == 1: - mask = np.resize(mask, self.shape) - elif nm == nd: - mask = np.reshape(mask, self.shape) - else: - msg = "Mask and data not compatible: data size is %i, " + \ - "mask size is %i." - raise MAError(msg % (nd, nm)) - if not keep_mask: - self.__setmask__(mask) - self._sharedmask = True - else: - if mask.dtype == mdtype: - _mask = mask - else: - _mask = np.array([tuple([m] * len(mdtype)) for m in mask], - dtype=mdtype) - self._mask = _mask - return self - - def __array_finalize__(self, obj): - # Make sure we have a _fieldmask by default - _mask = getattr(obj, '_mask', None) - if _mask is None: - objmask = getattr(obj, '_mask', nomask) - _dtype = ndarray.__getattribute__(self, 'dtype') - if objmask is nomask: - _mask = ma.make_mask_none(self.shape, dtype=_dtype) - else: - mdescr = ma.make_mask_descr(_dtype) - _mask = narray([tuple([m] * len(mdescr)) for m in objmask], - dtype=mdescr).view(recarray) - # Update some of the attributes - _dict = self.__dict__ - _dict.update(_mask=_mask) - self._update_from(obj) - if _dict['_baseclass'] == ndarray: - _dict['_baseclass'] = recarray - return - - @property - def _data(self): - """ - Returns the data as a recarray. - - """ - return ndarray.view(self, recarray) - - @property - def _fieldmask(self): - """ - Alias to mask. 
- - """ - return self._mask - - def __len__(self): - """ - Returns the length - - """ - # We have more than one record - if self.ndim: - return len(self._data) - # We have only one record: return the nb of fields - return len(self.dtype) - - def __getattribute__(self, attr): - try: - return object.__getattribute__(self, attr) - except AttributeError: - # attr must be a fieldname - pass - fielddict = ndarray.__getattribute__(self, 'dtype').fields - try: - res = fielddict[attr][:2] - except (TypeError, KeyError) as e: - raise AttributeError( - f'record array has no attribute {attr}') from e - # So far, so good - _localdict = ndarray.__getattribute__(self, '__dict__') - _data = ndarray.view(self, _localdict['_baseclass']) - obj = _data.getfield(*res) - if obj.dtype.names is not None: - raise NotImplementedError("MaskedRecords is currently limited to" - "simple records.") - # Get some special attributes - # Reset the object's mask - hasmasked = False - _mask = _localdict.get('_mask', None) - if _mask is not None: - try: - _mask = _mask[attr] - except IndexError: - # Couldn't find a mask: use the default (nomask) - pass - tp_len = len(_mask.dtype) - hasmasked = _mask.view((bool, ((tp_len,) if tp_len else ()))).any() - if (obj.shape or hasmasked): - obj = obj.view(MaskedArray) - obj._baseclass = ndarray - obj._isfield = True - obj._mask = _mask - # Reset the field values - _fill_value = _localdict.get('_fill_value', None) - if _fill_value is not None: - try: - obj._fill_value = _fill_value[attr] - except ValueError: - obj._fill_value = None - else: - obj = obj.item() - return obj - - def __setattr__(self, attr, val): - """ - Sets the attribute attr to the value val. - - """ - # Should we call __setmask__ first ? - if attr in ['mask', 'fieldmask']: - self.__setmask__(val) - return - # Create a shortcut (so that we don't have to call getattr all the time) - _localdict = object.__getattribute__(self, '__dict__') - # Check whether we're creating a new field - newattr = attr not in _localdict - try: - # Is attr a generic attribute ? - ret = object.__setattr__(self, attr, val) - except Exception: - # Not a generic attribute: exit if it's not a valid field - fielddict = ndarray.__getattribute__(self, 'dtype').fields or {} - optinfo = ndarray.__getattribute__(self, '_optinfo') or {} - if not (attr in fielddict or attr in optinfo): - raise - else: - # Get the list of names - fielddict = ndarray.__getattribute__(self, 'dtype').fields or {} - # Check the attribute - if attr not in fielddict: - return ret - if newattr: - # We just added this one or this setattr worked on an - # internal attribute. - try: - object.__delattr__(self, attr) - except Exception: - return ret - # Let's try to set the field - try: - res = fielddict[attr][:2] - except (TypeError, KeyError) as e: - raise AttributeError( - f'record array has no attribute {attr}') from e - - if val is masked: - _fill_value = _localdict['_fill_value'] - if _fill_value is not None: - dval = _localdict['_fill_value'][attr] - else: - dval = val - mval = True - else: - dval = filled(val) - mval = getmaskarray(val) - obj = ndarray.__getattribute__(self, '_data').setfield(dval, *res) - _localdict['_mask'].__setitem__(attr, mval) - return obj - - def __getitem__(self, indx): - """ - Returns all the fields sharing the same fieldname base. - - The fieldname base is either `_data` or `_mask`. 
- - """ - _localdict = self.__dict__ - _mask = ndarray.__getattribute__(self, '_mask') - _data = ndarray.view(self, _localdict['_baseclass']) - # We want a field - if isinstance(indx, str): - # Make sure _sharedmask is True to propagate back to _fieldmask - # Don't use _set_mask, there are some copies being made that - # break propagation Don't force the mask to nomask, that wreaks - # easy masking - obj = _data[indx].view(MaskedArray) - obj._mask = _mask[indx] - obj._sharedmask = True - fval = _localdict['_fill_value'] - if fval is not None: - obj._fill_value = fval[indx] - # Force to masked if the mask is True - if not obj.ndim and obj._mask: - return masked - return obj - # We want some elements. - # First, the data. - obj = np.array(_data[indx], copy=False).view(mrecarray) - obj._mask = np.array(_mask[indx], copy=False).view(recarray) - return obj - - def __setitem__(self, indx, value): - """ - Sets the given record to value. - - """ - MaskedArray.__setitem__(self, indx, value) - if isinstance(indx, str): - self._mask[indx] = ma.getmaskarray(value) - - def __str__(self): - """ - Calculates the string representation. - - """ - if self.size > 1: - mstr = [f"({','.join([str(i) for i in s])})" - for s in zip(*[getattr(self, f) for f in self.dtype.names])] - return f"[{', '.join(mstr)}]" - else: - mstr = [f"{','.join([str(i) for i in s])}" - for s in zip([getattr(self, f) for f in self.dtype.names])] - return f"({', '.join(mstr)})" - - def __repr__(self): - """ - Calculates the repr representation. - - """ - _names = self.dtype.names - fmt = "%%%is : %%s" % (max([len(n) for n in _names]) + 4,) - reprstr = [fmt % (f, getattr(self, f)) for f in self.dtype.names] - reprstr.insert(0, 'masked_records(') - reprstr.extend([fmt % (' fill_value', self.fill_value), - ' )']) - return str("\n".join(reprstr)) - - def view(self, dtype=None, type=None): - """ - Returns a view of the mrecarray. - - """ - # OK, basic copy-paste from MaskedArray.view. - if dtype is None: - if type is None: - output = ndarray.view(self) - else: - output = ndarray.view(self, type) - # Here again. - elif type is None: - try: - if issubclass(dtype, ndarray): - output = ndarray.view(self, dtype) - else: - output = ndarray.view(self, dtype) - # OK, there's the change - except TypeError: - dtype = np.dtype(dtype) - # we need to revert to MaskedArray, but keeping the possibility - # of subclasses (eg, TimeSeriesRecords), so we'll force a type - # set to the first parent - if dtype.fields is None: - basetype = self.__class__.__bases__[0] - output = self.__array__().view(dtype, basetype) - output._update_from(self) - else: - output = ndarray.view(self, dtype) - output._fill_value = None - else: - output = ndarray.view(self, dtype, type) - # Update the mask, just like in MaskedArray.view - if (getattr(output, '_mask', nomask) is not nomask): - mdtype = ma.make_mask_descr(output.dtype) - output._mask = self._mask.view(mdtype, ndarray) - output._mask.shape = output.shape - return output - - def harden_mask(self): - """ - Forces the mask to hard. - - """ - self._hardmask = True - - def soften_mask(self): - """ - Forces the mask to soft - - """ - self._hardmask = False - - def copy(self): - """ - Returns a copy of the masked record. - - """ - copied = self._data.copy().view(type(self)) - copied._mask = self._mask.copy() - return copied - - def tolist(self, fill_value=None): - """ - Return the data portion of the array as a list. - - Data items are converted to the nearest compatible Python type. - Masked values are converted to fill_value. 
If fill_value is None, - the corresponding entries in the output list will be ``None``. - - """ - if fill_value is not None: - return self.filled(fill_value).tolist() - result = narray(self.filled().tolist(), dtype=object) - mask = narray(self._mask.tolist()) - result[mask] = None - return result.tolist() - - def __getstate__(self): - """Return the internal state of the masked array. - - This is for pickling. - - """ - state = (1, - self.shape, - self.dtype, - self.flags.fnc, - self._data.tobytes(), - self._mask.tobytes(), - self._fill_value, - ) - return state - - def __setstate__(self, state): - """ - Restore the internal state of the masked array. - - This is for pickling. ``state`` is typically the output of the - ``__getstate__`` output, and is a 5-tuple: - - - class name - - a tuple giving the shape of the data - - a typecode for the data - - a binary string for the data - - a binary string for the mask. - - """ - (ver, shp, typ, isf, raw, msk, flv) = state - ndarray.__setstate__(self, (shp, typ, isf, raw)) - mdtype = dtype([(k, bool_) for (k, _) in self.dtype.descr]) - self.__dict__['_mask'].__setstate__((shp, mdtype, isf, msk)) - self.fill_value = flv - - def __reduce__(self): - """ - Return a 3-tuple for pickling a MaskedArray. - - """ - return (_mrreconstruct, - (self.__class__, self._baseclass, (0,), 'b',), - self.__getstate__()) - - -def _mrreconstruct(subtype, baseclass, baseshape, basetype,): - """ - Build a new MaskedArray from the information stored in a pickle. - - """ - _data = ndarray.__new__(baseclass, baseshape, basetype).view(subtype) - _mask = ndarray.__new__(ndarray, baseshape, 'b1') - return subtype.__new__(subtype, _data, mask=_mask, dtype=basetype,) - -mrecarray = MaskedRecords - - -############################################################################### -# Constructors # -############################################################################### - - -def fromarrays(arraylist, dtype=None, shape=None, formats=None, - names=None, titles=None, aligned=False, byteorder=None, - fill_value=None): - """ - Creates a mrecarray from a (flat) list of masked arrays. - - Parameters - ---------- - arraylist : sequence - A list of (masked) arrays. Each element of the sequence is first converted - to a masked array if needed. If a 2D array is passed as argument, it is - processed line by line - dtype : {None, dtype}, optional - Data type descriptor. - shape : {None, integer}, optional - Number of records. If None, shape is defined from the shape of the - first array in the list. - formats : {None, sequence}, optional - Sequence of formats for each individual field. If None, the formats will - be autodetected by inspecting the fields and selecting the highest dtype - possible. - names : {None, sequence}, optional - Sequence of the names of each field. - fill_value : {None, sequence}, optional - Sequence of data to be used as filling values. - - Notes - ----- - Lists of tuples should be preferred over lists of lists for faster processing. 
- - """ - datalist = [getdata(x) for x in arraylist] - masklist = [np.atleast_1d(getmaskarray(x)) for x in arraylist] - _array = recfromarrays(datalist, - dtype=dtype, shape=shape, formats=formats, - names=names, titles=titles, aligned=aligned, - byteorder=byteorder).view(mrecarray) - _array._mask.flat = list(zip(*masklist)) - if fill_value is not None: - _array.fill_value = fill_value - return _array - - -def fromrecords(reclist, dtype=None, shape=None, formats=None, names=None, - titles=None, aligned=False, byteorder=None, - fill_value=None, mask=nomask): - """ - Creates a MaskedRecords from a list of records. - - Parameters - ---------- - reclist : sequence - A list of records. Each element of the sequence is first converted - to a masked array if needed. If a 2D array is passed as argument, it is - processed line by line - dtype : {None, dtype}, optional - Data type descriptor. - shape : {None,int}, optional - Number of records. If None, ``shape`` is defined from the shape of the - first array in the list. - formats : {None, sequence}, optional - Sequence of formats for each individual field. If None, the formats will - be autodetected by inspecting the fields and selecting the highest dtype - possible. - names : {None, sequence}, optional - Sequence of the names of each field. - fill_value : {None, sequence}, optional - Sequence of data to be used as filling values. - mask : {nomask, sequence}, optional. - External mask to apply on the data. - - Notes - ----- - Lists of tuples should be preferred over lists of lists for faster processing. - - """ - # Grab the initial _fieldmask, if needed: - _mask = getattr(reclist, '_mask', None) - # Get the list of records. - if isinstance(reclist, ndarray): - # Make sure we don't have some hidden mask - if isinstance(reclist, MaskedArray): - reclist = reclist.filled().view(ndarray) - # Grab the initial dtype, just in case - if dtype is None: - dtype = reclist.dtype - reclist = reclist.tolist() - mrec = recfromrecords(reclist, dtype=dtype, shape=shape, formats=formats, - names=names, titles=titles, - aligned=aligned, byteorder=byteorder).view(mrecarray) - # Set the fill_value if needed - if fill_value is not None: - mrec.fill_value = fill_value - # Now, let's deal w/ the mask - if mask is not nomask: - mask = np.array(mask, copy=False) - maskrecordlength = len(mask.dtype) - if maskrecordlength: - mrec._mask.flat = mask - elif mask.ndim == 2: - mrec._mask.flat = [tuple(m) for m in mask] - else: - mrec.__setmask__(mask) - if _mask is not None: - mrec._mask[:] = _mask - return mrec - - -def _guessvartypes(arr): - """ - Tries to guess the dtypes of the str_ ndarray `arr`. - - Guesses by testing element-wise conversion. Returns a list of dtypes. - The array is first converted to ndarray. If the array is 2D, the test - is performed on the first line. An exception is raised if the file is - 3D or more. - - """ - vartypes = [] - arr = np.asarray(arr) - if arr.ndim == 2: - arr = arr[0] - elif arr.ndim > 2: - raise ValueError("The array should be 2D at most!") - # Start the conversion loop. - for f in arr: - try: - int(f) - except (ValueError, TypeError): - try: - float(f) - except (ValueError, TypeError): - try: - complex(f) - except (ValueError, TypeError): - vartypes.append(arr.dtype) - else: - vartypes.append(np.dtype(complex)) - else: - vartypes.append(np.dtype(float)) - else: - vartypes.append(np.dtype(int)) - return vartypes - - -def openfile(fname): - """ - Opens the file handle of file `fname`. 
- - """ - # A file handle - if hasattr(fname, 'readline'): - return fname - # Try to open the file and guess its type - try: - f = open(fname) - except FileNotFoundError as e: - raise FileNotFoundError(f"No such file: '{fname}'") from e - if f.readline()[:2] != "\\x": - f.seek(0, 0) - return f - f.close() - raise NotImplementedError("Wow, binary file") - - -def fromtextfile(fname, delimiter=None, commentchar='#', missingchar='', - varnames=None, vartypes=None, - *, delimitor=np._NoValue): # backwards compatibility - """ - Creates a mrecarray from data stored in the file `filename`. - - Parameters - ---------- - fname : {file name/handle} - Handle of an opened file. - delimiter : {None, string}, optional - Alphanumeric character used to separate columns in the file. - If None, any (group of) white spacestring(s) will be used. - commentchar : {'#', string}, optional - Alphanumeric character used to mark the start of a comment. - missingchar : {'', string}, optional - String indicating missing data, and used to create the masks. - varnames : {None, sequence}, optional - Sequence of the variable names. If None, a list will be created from - the first non empty line of the file. - vartypes : {None, sequence}, optional - Sequence of the variables dtypes. If None, it will be estimated from - the first non-commented line. - - - Ultra simple: the varnames are in the header, one line""" - if delimitor is not np._NoValue: - if delimiter is not None: - raise TypeError("fromtextfile() got multiple values for argument " - "'delimiter'") - # NumPy 1.22.0, 2021-09-23 - warnings.warn("The 'delimitor' keyword argument of " - "numpy.ma.mrecords.fromtextfile() is deprecated " - "since NumPy 1.22.0, use 'delimiter' instead.", - DeprecationWarning, stacklevel=2) - delimiter = delimitor - - # Try to open the file. - ftext = openfile(fname) - - # Get the first non-empty line as the varnames - while True: - line = ftext.readline() - firstline = line[:line.find(commentchar)].strip() - _varnames = firstline.split(delimiter) - if len(_varnames) > 1: - break - if varnames is None: - varnames = _varnames - - # Get the data. - _variables = masked_array([line.strip().split(delimiter) for line in ftext - if line[0] != commentchar and len(line) > 1]) - (_, nfields) = _variables.shape - ftext.close() - - # Try to guess the dtype. - if vartypes is None: - vartypes = _guessvartypes(_variables[0]) - else: - vartypes = [np.dtype(v) for v in vartypes] - if len(vartypes) != nfields: - msg = "Attempting to %i dtypes for %i fields!" - msg += " Reverting to default." - warnings.warn(msg % (len(vartypes), nfields), stacklevel=2) - vartypes = _guessvartypes(_variables[0]) - - # Construct the descriptor. - mdescr = [(n, f) for (n, f) in zip(varnames, vartypes)] - mfillv = [ma.default_fill_value(f) for f in vartypes] - - # Get the data and the mask. - # We just need a list of masked_arrays. It's easier to create it like that: - _mask = (_variables.T == missingchar) - _datalist = [masked_array(a, mask=m, dtype=t, fill_value=f) - for (a, m, t, f) in zip(_variables.T, _mask, vartypes, mfillv)] - - return fromarrays(_datalist, dtype=mdescr) - - -def addfield(mrecord, newfield, newfieldname=None): - """Adds a new field to the masked record array - - Uses `newfield` as data and `newfieldname` as name. If `newfieldname` - is None, the new field name is set to 'fi', where `i` is the number of - existing fields. 
- - """ - _data = mrecord._data - _mask = mrecord._mask - if newfieldname is None or newfieldname in reserved_fields: - newfieldname = 'f%i' % len(_data.dtype) - newfield = ma.array(newfield) - # Get the new data. - # Create a new empty recarray - newdtype = np.dtype(_data.dtype.descr + [(newfieldname, newfield.dtype)]) - newdata = recarray(_data.shape, newdtype) - # Add the existing field - [newdata.setfield(_data.getfield(*f), *f) - for f in _data.dtype.fields.values()] - # Add the new field - newdata.setfield(newfield._data, *newdata.dtype.fields[newfieldname]) - newdata = newdata.view(MaskedRecords) - # Get the new mask - # Create a new empty recarray - newmdtype = np.dtype([(n, bool_) for n in newdtype.names]) - newmask = recarray(_data.shape, newmdtype) - # Add the old masks - [newmask.setfield(_mask.getfield(*f), *f) - for f in _mask.dtype.fields.values()] - # Add the mask of the new field - newmask.setfield(getmaskarray(newfield), - *newmask.dtype.fields[newfieldname]) - newdata._mask = newmask - return newdata diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/fromnumeric.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/fromnumeric.py deleted file mode 100644 index 9e936e68465a735acc1e61eb91da46e6f40d6d37..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/fromnumeric.py +++ /dev/null @@ -1,260 +0,0 @@ -"""Tests for :mod:`numpy.core.fromnumeric`.""" - -import numpy as np - -A = np.array(True, ndmin=2, dtype=bool) -B = np.array(1.0, ndmin=2, dtype=np.float32) -A.setflags(write=False) -B.setflags(write=False) - -a = np.bool_(True) -b = np.float32(1.0) -c = 1.0 -d = np.array(1.0, dtype=np.float32) # writeable - -np.take(a, 0) -np.take(b, 0) -np.take(c, 0) -np.take(A, 0) -np.take(B, 0) -np.take(A, [0]) -np.take(B, [0]) - -np.reshape(a, 1) -np.reshape(b, 1) -np.reshape(c, 1) -np.reshape(A, 1) -np.reshape(B, 1) - -np.choose(a, [True, True]) -np.choose(A, [1.0, 1.0]) - -np.repeat(a, 1) -np.repeat(b, 1) -np.repeat(c, 1) -np.repeat(A, 1) -np.repeat(B, 1) - -np.swapaxes(A, 0, 0) -np.swapaxes(B, 0, 0) - -np.transpose(a) -np.transpose(b) -np.transpose(c) -np.transpose(A) -np.transpose(B) - -np.partition(a, 0, axis=None) -np.partition(b, 0, axis=None) -np.partition(c, 0, axis=None) -np.partition(A, 0) -np.partition(B, 0) - -np.argpartition(a, 0) -np.argpartition(b, 0) -np.argpartition(c, 0) -np.argpartition(A, 0) -np.argpartition(B, 0) - -np.sort(A, 0) -np.sort(B, 0) - -np.argsort(A, 0) -np.argsort(B, 0) - -np.argmax(A) -np.argmax(B) -np.argmax(A, axis=0) -np.argmax(B, axis=0) - -np.argmin(A) -np.argmin(B) -np.argmin(A, axis=0) -np.argmin(B, axis=0) - -np.searchsorted(A[0], 0) -np.searchsorted(B[0], 0) -np.searchsorted(A[0], [0]) -np.searchsorted(B[0], [0]) - -np.resize(a, (5, 5)) -np.resize(b, (5, 5)) -np.resize(c, (5, 5)) -np.resize(A, (5, 5)) -np.resize(B, (5, 5)) - -np.squeeze(a) -np.squeeze(b) -np.squeeze(c) -np.squeeze(A) -np.squeeze(B) - -np.diagonal(A) -np.diagonal(B) - -np.trace(A) -np.trace(B) - -np.ravel(a) -np.ravel(b) -np.ravel(c) -np.ravel(A) -np.ravel(B) - -np.nonzero(A) -np.nonzero(B) - -np.shape(a) -np.shape(b) -np.shape(c) -np.shape(A) -np.shape(B) - -np.compress([True], a) -np.compress([True], b) -np.compress([True], c) -np.compress([True], A) -np.compress([True], B) - -np.clip(a, 0, 1.0) -np.clip(b, -1, 1) -np.clip(a, 0, None) -np.clip(b, None, 1) -np.clip(c, 0, 1) -np.clip(A, 0, 1) 
-np.clip(B, 0, 1) -np.clip(B, [0, 1], [1, 2]) - -np.sum(a) -np.sum(b) -np.sum(c) -np.sum(A) -np.sum(B) -np.sum(A, axis=0) -np.sum(B, axis=0) - -np.all(a) -np.all(b) -np.all(c) -np.all(A) -np.all(B) -np.all(A, axis=0) -np.all(B, axis=0) -np.all(A, keepdims=True) -np.all(B, keepdims=True) - -np.any(a) -np.any(b) -np.any(c) -np.any(A) -np.any(B) -np.any(A, axis=0) -np.any(B, axis=0) -np.any(A, keepdims=True) -np.any(B, keepdims=True) - -np.cumsum(a) -np.cumsum(b) -np.cumsum(c) -np.cumsum(A) -np.cumsum(B) - -np.ptp(b) -np.ptp(c) -np.ptp(B) -np.ptp(B, axis=0) -np.ptp(B, keepdims=True) - -np.amax(a) -np.amax(b) -np.amax(c) -np.amax(A) -np.amax(B) -np.amax(A, axis=0) -np.amax(B, axis=0) -np.amax(A, keepdims=True) -np.amax(B, keepdims=True) - -np.amin(a) -np.amin(b) -np.amin(c) -np.amin(A) -np.amin(B) -np.amin(A, axis=0) -np.amin(B, axis=0) -np.amin(A, keepdims=True) -np.amin(B, keepdims=True) - -np.prod(a) -np.prod(b) -np.prod(c) -np.prod(A) -np.prod(B) -np.prod(a, dtype=None) -np.prod(A, dtype=None) -np.prod(A, axis=0) -np.prod(B, axis=0) -np.prod(A, keepdims=True) -np.prod(B, keepdims=True) -np.prod(b, out=d) -np.prod(B, out=d) - -np.cumprod(a) -np.cumprod(b) -np.cumprod(c) -np.cumprod(A) -np.cumprod(B) - -np.ndim(a) -np.ndim(b) -np.ndim(c) -np.ndim(A) -np.ndim(B) - -np.size(a) -np.size(b) -np.size(c) -np.size(A) -np.size(B) - -np.around(a) -np.around(b) -np.around(c) -np.around(A) -np.around(B) - -np.mean(a) -np.mean(b) -np.mean(c) -np.mean(A) -np.mean(B) -np.mean(A, axis=0) -np.mean(B, axis=0) -np.mean(A, keepdims=True) -np.mean(B, keepdims=True) -np.mean(b, out=d) -np.mean(B, out=d) - -np.std(a) -np.std(b) -np.std(c) -np.std(A) -np.std(B) -np.std(A, axis=0) -np.std(B, axis=0) -np.std(A, keepdims=True) -np.std(B, keepdims=True) -np.std(b, out=d) -np.std(B, out=d) - -np.var(a) -np.var(b) -np.var(c) -np.var(A) -np.var(B) -np.var(A, axis=0) -np.var(B, axis=0) -np.var(A, keepdims=True) -np.var(B, keepdims=True) -np.var(b, out=d) -np.var(B, out=d) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_asof.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_asof.py deleted file mode 100644 index 5683ec60b0d88048f85589f5cfa760eb04183577..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_asof.py +++ /dev/null @@ -1,198 +0,0 @@ -import numpy as np -import pytest - -from pandas._libs.tslibs import IncompatibleFrequency - -from pandas import ( - DataFrame, - Period, - Series, - Timestamp, - date_range, - period_range, - to_datetime, -) -import pandas._testing as tm - - -@pytest.fixture -def date_range_frame(): - """ - Fixture for DataFrame of ints with date_range index - - Columns are ['A', 'B']. 
- """ - N = 50 - rng = date_range("1/1/1990", periods=N, freq="53s") - return DataFrame({"A": np.arange(N), "B": np.arange(N)}, index=rng) - - -class TestFrameAsof: - def test_basic(self, date_range_frame): - # Explicitly cast to float to avoid implicit cast when setting np.nan - df = date_range_frame.astype({"A": "float"}) - N = 50 - df.loc[df.index[15:30], "A"] = np.nan - dates = date_range("1/1/1990", periods=N * 3, freq="25s") - - result = df.asof(dates) - assert result.notna().all(1).all() - lb = df.index[14] - ub = df.index[30] - - dates = list(dates) - - result = df.asof(dates) - assert result.notna().all(1).all() - - mask = (result.index >= lb) & (result.index < ub) - rs = result[mask] - assert (rs == 14).all(1).all() - - def test_subset(self, date_range_frame): - N = 10 - # explicitly cast to float to avoid implicit upcast when setting to np.nan - df = date_range_frame.iloc[:N].copy().astype({"A": "float"}) - df.loc[df.index[4:8], "A"] = np.nan - dates = date_range("1/1/1990", periods=N * 3, freq="25s") - - # with a subset of A should be the same - result = df.asof(dates, subset="A") - expected = df.asof(dates) - tm.assert_frame_equal(result, expected) - - # same with A/B - result = df.asof(dates, subset=["A", "B"]) - expected = df.asof(dates) - tm.assert_frame_equal(result, expected) - - # B gives df.asof - result = df.asof(dates, subset="B") - expected = df.resample("25s", closed="right").ffill().reindex(dates) - expected.iloc[20:] = 9 - # no "missing", so "B" can retain int dtype (df["A"].dtype platform-dependent) - expected["B"] = expected["B"].astype(df["B"].dtype) - - tm.assert_frame_equal(result, expected) - - def test_missing(self, date_range_frame): - # GH 15118 - # no match found - `where` value before earliest date in index - N = 10 - # Cast to 'float64' to avoid upcast when introducing nan in df.asof - df = date_range_frame.iloc[:N].copy().astype("float64") - - result = df.asof("1989-12-31") - - expected = Series( - index=["A", "B"], name=Timestamp("1989-12-31"), dtype=np.float64 - ) - tm.assert_series_equal(result, expected) - - result = df.asof(to_datetime(["1989-12-31"])) - expected = DataFrame( - index=to_datetime(["1989-12-31"]), columns=["A", "B"], dtype="float64" - ) - tm.assert_frame_equal(result, expected) - - # Check that we handle PeriodIndex correctly, dont end up with - # period.ordinal for series name - df = df.to_period("D") - result = df.asof("1989-12-31") - assert isinstance(result.name, Period) - - def test_asof_all_nans(self, frame_or_series): - # GH 15713 - # DataFrame/Series is all nans - result = frame_or_series([np.nan]).asof([0]) - expected = frame_or_series([np.nan]) - tm.assert_equal(result, expected) - - def test_all_nans(self, date_range_frame): - # GH 15713 - # DataFrame is all nans - - # testing non-default indexes, multiple inputs - N = 150 - rng = date_range_frame.index - dates = date_range("1/1/1990", periods=N, freq="25s") - result = DataFrame(np.nan, index=rng, columns=["A"]).asof(dates) - expected = DataFrame(np.nan, index=dates, columns=["A"]) - tm.assert_frame_equal(result, expected) - - # testing multiple columns - dates = date_range("1/1/1990", periods=N, freq="25s") - result = DataFrame(np.nan, index=rng, columns=["A", "B", "C"]).asof(dates) - expected = DataFrame(np.nan, index=dates, columns=["A", "B", "C"]) - tm.assert_frame_equal(result, expected) - - # testing scalar input - result = DataFrame(np.nan, index=[1, 2], columns=["A", "B"]).asof([3]) - expected = DataFrame(np.nan, index=[3], columns=["A", "B"]) - 
tm.assert_frame_equal(result, expected) - - result = DataFrame(np.nan, index=[1, 2], columns=["A", "B"]).asof(3) - expected = Series(np.nan, index=["A", "B"], name=3) - tm.assert_series_equal(result, expected) - - @pytest.mark.parametrize( - "stamp,expected", - [ - ( - Timestamp("2018-01-01 23:22:43.325+00:00"), - Series(2, name=Timestamp("2018-01-01 23:22:43.325+00:00")), - ), - ( - Timestamp("2018-01-01 22:33:20.682+01:00"), - Series(1, name=Timestamp("2018-01-01 22:33:20.682+01:00")), - ), - ], - ) - def test_time_zone_aware_index(self, stamp, expected): - # GH21194 - # Testing awareness of DataFrame index considering different - # UTC and timezone - df = DataFrame( - data=[1, 2], - index=[ - Timestamp("2018-01-01 21:00:05.001+00:00"), - Timestamp("2018-01-01 22:35:10.550+00:00"), - ], - ) - - result = df.asof(stamp) - tm.assert_series_equal(result, expected) - - def test_is_copy(self, date_range_frame): - # GH-27357, GH-30784: ensure the result of asof is an actual copy and - # doesn't track the parent dataframe / doesn't give SettingWithCopy warnings - df = date_range_frame.astype({"A": "float"}) - N = 50 - df.loc[df.index[15:30], "A"] = np.nan - dates = date_range("1/1/1990", periods=N * 3, freq="25s") - - result = df.asof(dates) - - with tm.assert_produces_warning(None): - result["C"] = 1 - - def test_asof_periodindex_mismatched_freq(self): - N = 50 - rng = period_range("1/1/1990", periods=N, freq="H") - df = DataFrame(np.random.default_rng(2).standard_normal(N), index=rng) - - # Mismatched freq - msg = "Input has different freq" - with pytest.raises(IncompatibleFrequency, match=msg): - df.asof(rng.asfreq("D")) - - def test_asof_preserves_bool_dtype(self): - # GH#16063 was casting bools to floats - dti = date_range("2017-01-01", freq="MS", periods=4) - ser = Series([True, False, True], index=dti[:-1]) - - ts = dti[-1] - res = ser.asof([ts]) - - expected = Series([True], index=[ts]) - tm.assert_series_equal(res, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_datetime.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_datetime.py deleted file mode 100644 index 11d66d4820fe6eaa2f9f0bb33ed65f1d184600ab..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/indexing/test_datetime.py +++ /dev/null @@ -1,497 +0,0 @@ -""" -Also test support for datetime64[ns] in Series / DataFrame -""" -from datetime import ( - datetime, - timedelta, -) -import re - -from dateutil.tz import ( - gettz, - tzutc, -) -import numpy as np -import pytest -import pytz - -from pandas._libs import index as libindex - -import pandas as pd -from pandas import ( - DataFrame, - Series, - Timestamp, - date_range, - period_range, -) -import pandas._testing as tm - - -def test_fancy_getitem(): - dti = date_range( - freq="WOM-1FRI", start=datetime(2005, 1, 1), end=datetime(2010, 1, 1) - ) - - s = Series(np.arange(len(dti)), index=dti) - - msg = "Series.__getitem__ treating keys as positions is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - assert s[48] == 48 - assert s["1/2/2009"] == 48 - assert s["2009-1-2"] == 48 - assert s[datetime(2009, 1, 2)] == 48 - assert s[Timestamp(datetime(2009, 1, 2))] == 48 - with pytest.raises(KeyError, match=r"^'2009-1-3'$"): - s["2009-1-3"] - tm.assert_series_equal( - s["3/6/2009":"2009-06-05"], s[datetime(2009, 3, 6) : datetime(2009, 6, 5)] - ) - - -def 
test_fancy_setitem(): - dti = date_range( - freq="WOM-1FRI", start=datetime(2005, 1, 1), end=datetime(2010, 1, 1) - ) - - s = Series(np.arange(len(dti)), index=dti) - - msg = "Series.__setitem__ treating keys as positions is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - s[48] = -1 - assert s.iloc[48] == -1 - s["1/2/2009"] = -2 - assert s.iloc[48] == -2 - s["1/2/2009":"2009-06-05"] = -3 - assert (s[48:54] == -3).all() - - -@pytest.mark.parametrize("tz_source", ["pytz", "dateutil"]) -def test_getitem_setitem_datetime_tz(tz_source): - if tz_source == "pytz": - tzget = pytz.timezone - else: - # handle special case for utc in dateutil - tzget = lambda x: tzutc() if x == "UTC" else gettz(x) - - N = 50 - # testing with timezone, GH #2785 - rng = date_range("1/1/1990", periods=N, freq="H", tz=tzget("US/Eastern")) - ts = Series(np.random.default_rng(2).standard_normal(N), index=rng) - - # also test Timestamp tz handling, GH #2789 - result = ts.copy() - result["1990-01-01 09:00:00+00:00"] = 0 - result["1990-01-01 09:00:00+00:00"] = ts.iloc[4] - tm.assert_series_equal(result, ts) - - result = ts.copy() - result["1990-01-01 03:00:00-06:00"] = 0 - result["1990-01-01 03:00:00-06:00"] = ts.iloc[4] - tm.assert_series_equal(result, ts) - - # repeat with datetimes - result = ts.copy() - result[datetime(1990, 1, 1, 9, tzinfo=tzget("UTC"))] = 0 - result[datetime(1990, 1, 1, 9, tzinfo=tzget("UTC"))] = ts.iloc[4] - tm.assert_series_equal(result, ts) - - result = ts.copy() - dt = Timestamp(1990, 1, 1, 3).tz_localize(tzget("US/Central")) - dt = dt.to_pydatetime() - result[dt] = 0 - result[dt] = ts.iloc[4] - tm.assert_series_equal(result, ts) - - -def test_getitem_setitem_datetimeindex(): - N = 50 - # testing with timezone, GH #2785 - rng = date_range("1/1/1990", periods=N, freq="H", tz="US/Eastern") - ts = Series(np.random.default_rng(2).standard_normal(N), index=rng) - - result = ts["1990-01-01 04:00:00"] - expected = ts.iloc[4] - assert result == expected - - result = ts.copy() - result["1990-01-01 04:00:00"] = 0 - result["1990-01-01 04:00:00"] = ts.iloc[4] - tm.assert_series_equal(result, ts) - - result = ts["1990-01-01 04:00:00":"1990-01-01 07:00:00"] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - result = ts.copy() - result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = 0 - result["1990-01-01 04:00:00":"1990-01-01 07:00:00"] = ts[4:8] - tm.assert_series_equal(result, ts) - - lb = "1990-01-01 04:00:00" - rb = "1990-01-01 07:00:00" - # GH#18435 strings get a pass from tzawareness compat - result = ts[(ts.index >= lb) & (ts.index <= rb)] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - lb = "1990-01-01 04:00:00-0500" - rb = "1990-01-01 07:00:00-0500" - result = ts[(ts.index >= lb) & (ts.index <= rb)] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - # But we do not give datetimes a pass on tzawareness compat - msg = "Cannot compare tz-naive and tz-aware datetime-like objects" - naive = datetime(1990, 1, 1, 4) - for key in [naive, Timestamp(naive), np.datetime64(naive, "ns")]: - with pytest.raises(KeyError, match=re.escape(repr(key))): - # GH#36148 as of 2.0 we require tzawareness-compat - ts[key] - - result = ts.copy() - # GH#36148 as of 2.0 we do not ignore tzawareness mismatch in indexing, - # so setting it as a new key casts to object rather than matching - # rng[4] - result[naive] = ts.iloc[4] - assert result.index.dtype == object - tm.assert_index_equal(result.index[:-1], rng.astype(object)) - assert result.index[-1] == 
naive - - msg = "Cannot compare tz-naive and tz-aware datetime-like objects" - with pytest.raises(TypeError, match=msg): - # GH#36148 require tzawareness compat as of 2.0 - ts[naive : datetime(1990, 1, 1, 7)] - - result = ts.copy() - with pytest.raises(TypeError, match=msg): - # GH#36148 require tzawareness compat as of 2.0 - result[naive : datetime(1990, 1, 1, 7)] = 0 - with pytest.raises(TypeError, match=msg): - # GH#36148 require tzawareness compat as of 2.0 - result[naive : datetime(1990, 1, 1, 7)] = 99 - # the __setitems__ here failed, so result should still match ts - tm.assert_series_equal(result, ts) - - lb = naive - rb = datetime(1990, 1, 1, 7) - msg = r"Invalid comparison between dtype=datetime64\[ns, US/Eastern\] and datetime" - with pytest.raises(TypeError, match=msg): - # tznaive vs tzaware comparison is invalid - # see GH#18376, GH#18162 - ts[(ts.index >= lb) & (ts.index <= rb)] - - lb = Timestamp(naive).tz_localize(rng.tzinfo) - rb = Timestamp(datetime(1990, 1, 1, 7)).tz_localize(rng.tzinfo) - result = ts[(ts.index >= lb) & (ts.index <= rb)] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - result = ts[ts.index[4]] - expected = ts.iloc[4] - assert result == expected - - result = ts[ts.index[4:8]] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - result = ts.copy() - result[ts.index[4:8]] = 0 - result.iloc[4:8] = ts.iloc[4:8] - tm.assert_series_equal(result, ts) - - # also test partial date slicing - result = ts["1990-01-02"] - expected = ts[24:48] - tm.assert_series_equal(result, expected) - - result = ts.copy() - result["1990-01-02"] = 0 - result["1990-01-02"] = ts[24:48] - tm.assert_series_equal(result, ts) - - -def test_getitem_setitem_periodindex(): - N = 50 - rng = period_range("1/1/1990", periods=N, freq="H") - ts = Series(np.random.default_rng(2).standard_normal(N), index=rng) - - result = ts["1990-01-01 04"] - expected = ts.iloc[4] - assert result == expected - - result = ts.copy() - result["1990-01-01 04"] = 0 - result["1990-01-01 04"] = ts.iloc[4] - tm.assert_series_equal(result, ts) - - result = ts["1990-01-01 04":"1990-01-01 07"] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - result = ts.copy() - result["1990-01-01 04":"1990-01-01 07"] = 0 - result["1990-01-01 04":"1990-01-01 07"] = ts[4:8] - tm.assert_series_equal(result, ts) - - lb = "1990-01-01 04" - rb = "1990-01-01 07" - result = ts[(ts.index >= lb) & (ts.index <= rb)] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - # GH 2782 - result = ts[ts.index[4]] - expected = ts.iloc[4] - assert result == expected - - result = ts[ts.index[4:8]] - expected = ts[4:8] - tm.assert_series_equal(result, expected) - - result = ts.copy() - result[ts.index[4:8]] = 0 - result.iloc[4:8] = ts.iloc[4:8] - tm.assert_series_equal(result, ts) - - -def test_datetime_indexing(): - index = date_range("1/1/2000", "1/7/2000") - index = index.repeat(3) - - s = Series(len(index), index=index) - stamp = Timestamp("1/8/2000") - - with pytest.raises(KeyError, match=re.escape(repr(stamp))): - s[stamp] - s[stamp] = 0 - assert s[stamp] == 0 - - # not monotonic - s = Series(len(index), index=index) - s = s[::-1] - - with pytest.raises(KeyError, match=re.escape(repr(stamp))): - s[stamp] - s[stamp] = 0 - assert s[stamp] == 0 - - -# test duplicates in time series - - -def test_indexing_with_duplicate_datetimeindex( - rand_series_with_duplicate_datetimeindex, -): - ts = rand_series_with_duplicate_datetimeindex - - uniques = ts.index.unique() - for date in uniques: - result = 
ts[date] - - mask = ts.index == date - total = (ts.index == date).sum() - expected = ts[mask] - if total > 1: - tm.assert_series_equal(result, expected) - else: - tm.assert_almost_equal(result, expected.iloc[0]) - - cp = ts.copy() - cp[date] = 0 - expected = Series(np.where(mask, 0, ts), index=ts.index) - tm.assert_series_equal(cp, expected) - - key = datetime(2000, 1, 6) - with pytest.raises(KeyError, match=re.escape(repr(key))): - ts[key] - - # new index - ts[datetime(2000, 1, 6)] = 0 - assert ts[datetime(2000, 1, 6)] == 0 - - -def test_loc_getitem_over_size_cutoff(monkeypatch): - # #1821 - - monkeypatch.setattr(libindex, "_SIZE_CUTOFF", 1000) - - # create large list of non periodic datetime - dates = [] - sec = timedelta(seconds=1) - half_sec = timedelta(microseconds=500000) - d = datetime(2011, 12, 5, 20, 30) - n = 1100 - for i in range(n): - dates.append(d) - dates.append(d + sec) - dates.append(d + sec + half_sec) - dates.append(d + sec + sec + half_sec) - d += 3 * sec - - # duplicate some values in the list - duplicate_positions = np.random.default_rng(2).integers(0, len(dates) - 1, 20) - for p in duplicate_positions: - dates[p + 1] = dates[p] - - df = DataFrame( - np.random.default_rng(2).standard_normal((len(dates), 4)), - index=dates, - columns=list("ABCD"), - ) - - pos = n * 3 - timestamp = df.index[pos] - assert timestamp in df.index - - # it works! - df.loc[timestamp] - assert len(df.loc[[timestamp]]) > 0 - - -def test_indexing_over_size_cutoff_period_index(monkeypatch): - # GH 27136 - - monkeypatch.setattr(libindex, "_SIZE_CUTOFF", 1000) - - n = 1100 - idx = period_range("1/1/2000", freq="T", periods=n) - assert idx._engine.over_size_threshold - - s = Series(np.random.default_rng(2).standard_normal(len(idx)), index=idx) - - pos = n - 1 - timestamp = idx[pos] - assert timestamp in s.index - - # it works! 
- s[timestamp] - assert len(s.loc[[timestamp]]) > 0 - - -def test_indexing_unordered(): - # GH 2437 - rng = date_range(start="2011-01-01", end="2011-01-15") - ts = Series(np.random.default_rng(2).random(len(rng)), index=rng) - ts2 = pd.concat([ts[0:4], ts[-4:], ts[4:-4]]) - - for t in ts.index: - expected = ts[t] - result = ts2[t] - assert expected == result - - # GH 3448 (ranges) - def compare(slobj): - result = ts2[slobj].copy() - result = result.sort_index() - expected = ts[slobj] - expected.index = expected.index._with_freq(None) - tm.assert_series_equal(result, expected) - - for key in [ - slice("2011-01-01", "2011-01-15"), - slice("2010-12-30", "2011-01-15"), - slice("2011-01-01", "2011-01-16"), - # partial ranges - slice("2011-01-01", "2011-01-6"), - slice("2011-01-06", "2011-01-8"), - slice("2011-01-06", "2011-01-12"), - ]: - with pytest.raises( - KeyError, match="Value based partial slicing on non-monotonic" - ): - compare(key) - - # single values - result = ts2["2011"].sort_index() - expected = ts["2011"] - expected.index = expected.index._with_freq(None) - tm.assert_series_equal(result, expected) - - -def test_indexing_unordered2(): - # diff freq - rng = date_range(datetime(2005, 1, 1), periods=20, freq="M") - ts = Series(np.arange(len(rng)), index=rng) - ts = ts.take(np.random.default_rng(2).permutation(20)) - - result = ts["2005"] - for t in result.index: - assert t.year == 2005 - - -def test_indexing(): - idx = date_range("2001-1-1", periods=20, freq="M") - ts = Series(np.random.default_rng(2).random(len(idx)), index=idx) - - # getting - - # GH 3070, make sure semantics work on Series/Frame - expected = ts["2001"] - expected.name = "A" - - df = DataFrame({"A": ts}) - - # GH#36179 pre-2.0 df["2001"] operated as slicing on rows. in 2.0 it behaves - # like any other key, so raises - with pytest.raises(KeyError, match="2001"): - df["2001"] - - # setting - ts["2001"] = 1 - expected = ts["2001"] - expected.name = "A" - - df.loc["2001", "A"] = 1 - - with pytest.raises(KeyError, match="2001"): - df["2001"] - - -def test_getitem_str_month_with_datetimeindex(): - # GH3546 (not including times on the last day) - idx = date_range(start="2013-05-31 00:00", end="2013-05-31 23:00", freq="H") - ts = Series(range(len(idx)), index=idx) - expected = ts["2013-05"] - tm.assert_series_equal(expected, ts) - - idx = date_range(start="2013-05-31 00:00", end="2013-05-31 23:59", freq="S") - ts = Series(range(len(idx)), index=idx) - expected = ts["2013-05"] - tm.assert_series_equal(expected, ts) - - -def test_getitem_str_year_with_datetimeindex(): - idx = [ - Timestamp("2013-05-31 00:00"), - Timestamp(datetime(2013, 5, 31, 23, 59, 59, 999999)), - ] - ts = Series(range(len(idx)), index=idx) - expected = ts["2013"] - tm.assert_series_equal(expected, ts) - - -def test_getitem_str_second_with_datetimeindex(): - # GH14826, indexing with a seconds resolution string / datetime object - df = DataFrame( - np.random.default_rng(2).random((5, 5)), - columns=["open", "high", "low", "close", "volume"], - index=date_range("2012-01-02 18:01:00", periods=5, tz="US/Central", freq="s"), - ) - - # this is a single date, so will raise - with pytest.raises(KeyError, match=r"^'2012-01-02 18:01:02'$"): - df["2012-01-02 18:01:02"] - - msg = r"Timestamp\('2012-01-02 18:01:02-0600', tz='US/Central'\)" - with pytest.raises(KeyError, match=msg): - df[df.index[2]] - - -def test_compare_datetime_with_all_none(): - # GH#54870 - ser = Series(["2020-01-01", "2020-01-02"], dtype="datetime64[ns]") - ser2 = Series([None, None]) - result = 
ser > ser2 - expected = Series([False, False]) - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_fiscal.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_fiscal.py deleted file mode 100644 index 7f8c34bc6832ea13d983a4c9172ce26e760caf84..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/tseries/offsets/test_fiscal.py +++ /dev/null @@ -1,652 +0,0 @@ -""" -Tests for Fiscal Year and Fiscal Quarter offset classes -""" -from datetime import datetime - -from dateutil.relativedelta import relativedelta -import pytest - -from pandas import Timestamp -from pandas.tests.tseries.offsets.common import ( - WeekDay, - assert_is_on_offset, - assert_offset_equal, -) - -from pandas.tseries.offsets import ( - FY5253, - FY5253Quarter, -) - - -def makeFY5253LastOfMonthQuarter(*args, **kwds): - return FY5253Quarter(*args, variation="last", **kwds) - - -def makeFY5253NearestEndMonthQuarter(*args, **kwds): - return FY5253Quarter(*args, variation="nearest", **kwds) - - -def makeFY5253NearestEndMonth(*args, **kwds): - return FY5253(*args, variation="nearest", **kwds) - - -def makeFY5253LastOfMonth(*args, **kwds): - return FY5253(*args, variation="last", **kwds) - - -def test_get_offset_name(): - assert ( - makeFY5253LastOfMonthQuarter( - weekday=1, startingMonth=3, qtr_with_extra_week=4 - ).freqstr - == "REQ-L-MAR-TUE-4" - ) - assert ( - makeFY5253NearestEndMonthQuarter( - weekday=1, startingMonth=3, qtr_with_extra_week=3 - ).freqstr - == "REQ-N-MAR-TUE-3" - ) - - -class TestFY5253LastOfMonth: - offset_lom_sat_aug = makeFY5253LastOfMonth(1, startingMonth=8, weekday=WeekDay.SAT) - offset_lom_sat_sep = makeFY5253LastOfMonth(1, startingMonth=9, weekday=WeekDay.SAT) - - on_offset_cases = [ - # From Wikipedia (see: - # https://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Last_Saturday_of_the_month_at_fiscal_year_end) - (offset_lom_sat_aug, datetime(2006, 8, 26), True), - (offset_lom_sat_aug, datetime(2007, 8, 25), True), - (offset_lom_sat_aug, datetime(2008, 8, 30), True), - (offset_lom_sat_aug, datetime(2009, 8, 29), True), - (offset_lom_sat_aug, datetime(2010, 8, 28), True), - (offset_lom_sat_aug, datetime(2011, 8, 27), True), - (offset_lom_sat_aug, datetime(2012, 8, 25), True), - (offset_lom_sat_aug, datetime(2013, 8, 31), True), - (offset_lom_sat_aug, datetime(2014, 8, 30), True), - (offset_lom_sat_aug, datetime(2015, 8, 29), True), - (offset_lom_sat_aug, datetime(2016, 8, 27), True), - (offset_lom_sat_aug, datetime(2017, 8, 26), True), - (offset_lom_sat_aug, datetime(2018, 8, 25), True), - (offset_lom_sat_aug, datetime(2019, 8, 31), True), - (offset_lom_sat_aug, datetime(2006, 8, 27), False), - (offset_lom_sat_aug, datetime(2007, 8, 28), False), - (offset_lom_sat_aug, datetime(2008, 8, 31), False), - (offset_lom_sat_aug, datetime(2009, 8, 30), False), - (offset_lom_sat_aug, datetime(2010, 8, 29), False), - (offset_lom_sat_aug, datetime(2011, 8, 28), False), - (offset_lom_sat_aug, datetime(2006, 8, 25), False), - (offset_lom_sat_aug, datetime(2007, 8, 24), False), - (offset_lom_sat_aug, datetime(2008, 8, 29), False), - (offset_lom_sat_aug, datetime(2009, 8, 28), False), - (offset_lom_sat_aug, datetime(2010, 8, 27), False), - (offset_lom_sat_aug, datetime(2011, 8, 26), False), - (offset_lom_sat_aug, datetime(2019, 8, 30), False), - # From GMCR (see for example: - # 
http://yahoo.brand.edgar-online.com/Default.aspx? - # companyid=3184&formtypeID=7) - (offset_lom_sat_sep, datetime(2010, 9, 25), True), - (offset_lom_sat_sep, datetime(2011, 9, 24), True), - (offset_lom_sat_sep, datetime(2012, 9, 29), True), - ] - - @pytest.mark.parametrize("case", on_offset_cases) - def test_is_on_offset(self, case): - offset, dt, expected = case - assert_is_on_offset(offset, dt, expected) - - def test_apply(self): - offset_lom_aug_sat = makeFY5253LastOfMonth(startingMonth=8, weekday=WeekDay.SAT) - offset_lom_aug_sat_1 = makeFY5253LastOfMonth( - n=1, startingMonth=8, weekday=WeekDay.SAT - ) - - date_seq_lom_aug_sat = [ - datetime(2006, 8, 26), - datetime(2007, 8, 25), - datetime(2008, 8, 30), - datetime(2009, 8, 29), - datetime(2010, 8, 28), - datetime(2011, 8, 27), - datetime(2012, 8, 25), - datetime(2013, 8, 31), - datetime(2014, 8, 30), - datetime(2015, 8, 29), - datetime(2016, 8, 27), - ] - - tests = [ - (offset_lom_aug_sat, date_seq_lom_aug_sat), - (offset_lom_aug_sat_1, date_seq_lom_aug_sat), - (offset_lom_aug_sat, [datetime(2006, 8, 25)] + date_seq_lom_aug_sat), - (offset_lom_aug_sat_1, [datetime(2006, 8, 27)] + date_seq_lom_aug_sat[1:]), - ( - makeFY5253LastOfMonth(n=-1, startingMonth=8, weekday=WeekDay.SAT), - list(reversed(date_seq_lom_aug_sat)), - ), - ] - for test in tests: - offset, data = test - current = data[0] - for datum in data[1:]: - current = current + offset - assert current == datum - - -class TestFY5253NearestEndMonth: - def test_get_year_end(self): - assert makeFY5253NearestEndMonth( - startingMonth=8, weekday=WeekDay.SAT - ).get_year_end(datetime(2013, 1, 1)) == datetime(2013, 8, 31) - assert makeFY5253NearestEndMonth( - startingMonth=8, weekday=WeekDay.SUN - ).get_year_end(datetime(2013, 1, 1)) == datetime(2013, 9, 1) - assert makeFY5253NearestEndMonth( - startingMonth=8, weekday=WeekDay.FRI - ).get_year_end(datetime(2013, 1, 1)) == datetime(2013, 8, 30) - - offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12, variation="nearest") - assert offset_n.get_year_end(datetime(2012, 1, 1)) == datetime(2013, 1, 1) - assert offset_n.get_year_end(datetime(2012, 1, 10)) == datetime(2013, 1, 1) - - assert offset_n.get_year_end(datetime(2013, 1, 1)) == datetime(2013, 12, 31) - assert offset_n.get_year_end(datetime(2013, 1, 2)) == datetime(2013, 12, 31) - assert offset_n.get_year_end(datetime(2013, 1, 3)) == datetime(2013, 12, 31) - assert offset_n.get_year_end(datetime(2013, 1, 10)) == datetime(2013, 12, 31) - - JNJ = FY5253(n=1, startingMonth=12, weekday=6, variation="nearest") - assert JNJ.get_year_end(datetime(2006, 1, 1)) == datetime(2006, 12, 31) - - offset_lom_aug_sat = makeFY5253NearestEndMonth( - 1, startingMonth=8, weekday=WeekDay.SAT - ) - offset_lom_aug_thu = makeFY5253NearestEndMonth( - 1, startingMonth=8, weekday=WeekDay.THU - ) - offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12, variation="nearest") - - on_offset_cases = [ - # From Wikipedia (see: - # https://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar - # #Saturday_nearest_the_end_of_month) - # 2006-09-02 2006 September 2 - # 2007-09-01 2007 September 1 - # 2008-08-30 2008 August 30 (leap year) - # 2009-08-29 2009 August 29 - # 2010-08-28 2010 August 28 - # 2011-09-03 2011 September 3 - # 2012-09-01 2012 September 1 (leap year) - # 2013-08-31 2013 August 31 - # 2014-08-30 2014 August 30 - # 2015-08-29 2015 August 29 - # 2016-09-03 2016 September 3 (leap year) - # 2017-09-02 2017 September 2 - # 2018-09-01 2018 September 1 - # 2019-08-31 2019 August 31 - (offset_lom_aug_sat, 
datetime(2006, 9, 2), True), - (offset_lom_aug_sat, datetime(2007, 9, 1), True), - (offset_lom_aug_sat, datetime(2008, 8, 30), True), - (offset_lom_aug_sat, datetime(2009, 8, 29), True), - (offset_lom_aug_sat, datetime(2010, 8, 28), True), - (offset_lom_aug_sat, datetime(2011, 9, 3), True), - (offset_lom_aug_sat, datetime(2016, 9, 3), True), - (offset_lom_aug_sat, datetime(2017, 9, 2), True), - (offset_lom_aug_sat, datetime(2018, 9, 1), True), - (offset_lom_aug_sat, datetime(2019, 8, 31), True), - (offset_lom_aug_sat, datetime(2006, 8, 27), False), - (offset_lom_aug_sat, datetime(2007, 8, 28), False), - (offset_lom_aug_sat, datetime(2008, 8, 31), False), - (offset_lom_aug_sat, datetime(2009, 8, 30), False), - (offset_lom_aug_sat, datetime(2010, 8, 29), False), - (offset_lom_aug_sat, datetime(2011, 8, 28), False), - (offset_lom_aug_sat, datetime(2006, 8, 25), False), - (offset_lom_aug_sat, datetime(2007, 8, 24), False), - (offset_lom_aug_sat, datetime(2008, 8, 29), False), - (offset_lom_aug_sat, datetime(2009, 8, 28), False), - (offset_lom_aug_sat, datetime(2010, 8, 27), False), - (offset_lom_aug_sat, datetime(2011, 8, 26), False), - (offset_lom_aug_sat, datetime(2019, 8, 30), False), - # From Micron, see: - # http://google.brand.edgar-online.com/?sym=MU&formtypeID=7 - (offset_lom_aug_thu, datetime(2012, 8, 30), True), - (offset_lom_aug_thu, datetime(2011, 9, 1), True), - (offset_n, datetime(2012, 12, 31), False), - (offset_n, datetime(2013, 1, 1), True), - (offset_n, datetime(2013, 1, 2), False), - ] - - @pytest.mark.parametrize("case", on_offset_cases) - def test_is_on_offset(self, case): - offset, dt, expected = case - assert_is_on_offset(offset, dt, expected) - - def test_apply(self): - date_seq_nem_8_sat = [ - datetime(2006, 9, 2), - datetime(2007, 9, 1), - datetime(2008, 8, 30), - datetime(2009, 8, 29), - datetime(2010, 8, 28), - datetime(2011, 9, 3), - ] - - JNJ = [ - datetime(2005, 1, 2), - datetime(2006, 1, 1), - datetime(2006, 12, 31), - datetime(2007, 12, 30), - datetime(2008, 12, 28), - datetime(2010, 1, 3), - datetime(2011, 1, 2), - datetime(2012, 1, 1), - datetime(2012, 12, 30), - ] - - DEC_SAT = FY5253(n=-1, startingMonth=12, weekday=5, variation="nearest") - - tests = [ - ( - makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT), - date_seq_nem_8_sat, - ), - ( - makeFY5253NearestEndMonth(n=1, startingMonth=8, weekday=WeekDay.SAT), - date_seq_nem_8_sat, - ), - ( - makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT), - [datetime(2006, 9, 1)] + date_seq_nem_8_sat, - ), - ( - makeFY5253NearestEndMonth(n=1, startingMonth=8, weekday=WeekDay.SAT), - [datetime(2006, 9, 3)] + date_seq_nem_8_sat[1:], - ), - ( - makeFY5253NearestEndMonth(n=-1, startingMonth=8, weekday=WeekDay.SAT), - list(reversed(date_seq_nem_8_sat)), - ), - ( - makeFY5253NearestEndMonth(n=1, startingMonth=12, weekday=WeekDay.SUN), - JNJ, - ), - ( - makeFY5253NearestEndMonth(n=-1, startingMonth=12, weekday=WeekDay.SUN), - list(reversed(JNJ)), - ), - ( - makeFY5253NearestEndMonth(n=1, startingMonth=12, weekday=WeekDay.SUN), - [datetime(2005, 1, 2), datetime(2006, 1, 1)], - ), - ( - makeFY5253NearestEndMonth(n=1, startingMonth=12, weekday=WeekDay.SUN), - [datetime(2006, 1, 2), datetime(2006, 12, 31)], - ), - (DEC_SAT, [datetime(2013, 1, 15), datetime(2012, 12, 29)]), - ] - for test in tests: - offset, data = test - current = data[0] - for datum in data[1:]: - current = current + offset - assert current == datum - - -class TestFY5253LastOfMonthQuarter: - def test_is_anchored(self): - assert 
makeFY5253LastOfMonthQuarter( - startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ).is_anchored() - assert makeFY5253LastOfMonthQuarter( - weekday=WeekDay.SAT, startingMonth=3, qtr_with_extra_week=4 - ).is_anchored() - assert not makeFY5253LastOfMonthQuarter( - 2, startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ).is_anchored() - - def test_equality(self): - assert makeFY5253LastOfMonthQuarter( - startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) == makeFY5253LastOfMonthQuarter( - startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - assert makeFY5253LastOfMonthQuarter( - startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) != makeFY5253LastOfMonthQuarter( - startingMonth=1, weekday=WeekDay.SUN, qtr_with_extra_week=4 - ) - assert makeFY5253LastOfMonthQuarter( - startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) != makeFY5253LastOfMonthQuarter( - startingMonth=2, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - - def test_offset(self): - offset = makeFY5253LastOfMonthQuarter( - 1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - offset2 = makeFY5253LastOfMonthQuarter( - 2, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - offset4 = makeFY5253LastOfMonthQuarter( - 4, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - - offset_neg1 = makeFY5253LastOfMonthQuarter( - -1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - offset_neg2 = makeFY5253LastOfMonthQuarter( - -2, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - - GMCR = [ - datetime(2010, 3, 27), - datetime(2010, 6, 26), - datetime(2010, 9, 25), - datetime(2010, 12, 25), - datetime(2011, 3, 26), - datetime(2011, 6, 25), - datetime(2011, 9, 24), - datetime(2011, 12, 24), - datetime(2012, 3, 24), - datetime(2012, 6, 23), - datetime(2012, 9, 29), - datetime(2012, 12, 29), - datetime(2013, 3, 30), - datetime(2013, 6, 29), - ] - - assert_offset_equal(offset, base=GMCR[0], expected=GMCR[1]) - assert_offset_equal( - offset, base=GMCR[0] + relativedelta(days=-1), expected=GMCR[0] - ) - assert_offset_equal(offset, base=GMCR[1], expected=GMCR[2]) - - assert_offset_equal(offset2, base=GMCR[0], expected=GMCR[2]) - assert_offset_equal(offset4, base=GMCR[0], expected=GMCR[4]) - - assert_offset_equal(offset_neg1, base=GMCR[-1], expected=GMCR[-2]) - assert_offset_equal( - offset_neg1, base=GMCR[-1] + relativedelta(days=+1), expected=GMCR[-1] - ) - assert_offset_equal(offset_neg2, base=GMCR[-1], expected=GMCR[-3]) - - date = GMCR[0] + relativedelta(days=-1) - for expected in GMCR: - assert_offset_equal(offset, date, expected) - date = date + offset - - date = GMCR[-1] + relativedelta(days=+1) - for expected in reversed(GMCR): - assert_offset_equal(offset_neg1, date, expected) - date = date + offset_neg1 - - lomq_aug_sat_4 = makeFY5253LastOfMonthQuarter( - 1, startingMonth=8, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - lomq_sep_sat_4 = makeFY5253LastOfMonthQuarter( - 1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - - on_offset_cases = [ - # From Wikipedia - (lomq_aug_sat_4, datetime(2006, 8, 26), True), - (lomq_aug_sat_4, datetime(2007, 8, 25), True), - (lomq_aug_sat_4, datetime(2008, 8, 30), True), - (lomq_aug_sat_4, datetime(2009, 8, 29), True), - (lomq_aug_sat_4, datetime(2010, 8, 28), True), - (lomq_aug_sat_4, datetime(2011, 8, 27), True), - (lomq_aug_sat_4, datetime(2019, 8, 31), True), - (lomq_aug_sat_4, datetime(2006, 8, 27), False), - (lomq_aug_sat_4, datetime(2007, 8, 
28), False), - (lomq_aug_sat_4, datetime(2008, 8, 31), False), - (lomq_aug_sat_4, datetime(2009, 8, 30), False), - (lomq_aug_sat_4, datetime(2010, 8, 29), False), - (lomq_aug_sat_4, datetime(2011, 8, 28), False), - (lomq_aug_sat_4, datetime(2006, 8, 25), False), - (lomq_aug_sat_4, datetime(2007, 8, 24), False), - (lomq_aug_sat_4, datetime(2008, 8, 29), False), - (lomq_aug_sat_4, datetime(2009, 8, 28), False), - (lomq_aug_sat_4, datetime(2010, 8, 27), False), - (lomq_aug_sat_4, datetime(2011, 8, 26), False), - (lomq_aug_sat_4, datetime(2019, 8, 30), False), - # From GMCR - (lomq_sep_sat_4, datetime(2010, 9, 25), True), - (lomq_sep_sat_4, datetime(2011, 9, 24), True), - (lomq_sep_sat_4, datetime(2012, 9, 29), True), - (lomq_sep_sat_4, datetime(2013, 6, 29), True), - (lomq_sep_sat_4, datetime(2012, 6, 23), True), - (lomq_sep_sat_4, datetime(2012, 6, 30), False), - (lomq_sep_sat_4, datetime(2013, 3, 30), True), - (lomq_sep_sat_4, datetime(2012, 3, 24), True), - (lomq_sep_sat_4, datetime(2012, 12, 29), True), - (lomq_sep_sat_4, datetime(2011, 12, 24), True), - # INTC (extra week in Q1) - # See: http://www.intc.com/releasedetail.cfm?ReleaseID=542844 - ( - makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ), - datetime(2011, 4, 2), - True, - ), - # see: http://google.brand.edgar-online.com/?sym=INTC&formtypeID=7 - ( - makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ), - datetime(2012, 12, 29), - True, - ), - ( - makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ), - datetime(2011, 12, 31), - True, - ), - ( - makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ), - datetime(2010, 12, 25), - True, - ), - ] - - @pytest.mark.parametrize("case", on_offset_cases) - def test_is_on_offset(self, case): - offset, dt, expected = case - assert_is_on_offset(offset, dt, expected) - - def test_year_has_extra_week(self): - # End of long Q1 - assert makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(2011, 4, 2)) - - # Start of long Q1 - assert makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(2010, 12, 26)) - - # End of year before year with long Q1 - assert not makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(2010, 12, 25)) - - for year in [ - x for x in range(1994, 2011 + 1) if x not in [2011, 2005, 2000, 1994] - ]: - assert not makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(year, 4, 2)) - - # Other long years - assert makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(2005, 4, 2)) - - assert makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(2000, 4, 2)) - - assert makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ).year_has_extra_week(datetime(1994, 4, 2)) - - def test_get_weeks(self): - sat_dec_1 = makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1 - ) - sat_dec_4 = makeFY5253LastOfMonthQuarter( - 1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=4 - 
) - - assert sat_dec_1.get_weeks(datetime(2011, 4, 2)) == [14, 13, 13, 13] - assert sat_dec_4.get_weeks(datetime(2011, 4, 2)) == [13, 13, 13, 14] - assert sat_dec_1.get_weeks(datetime(2010, 12, 25)) == [13, 13, 13, 13] - - -class TestFY5253NearestEndMonthQuarter: - offset_nem_sat_aug_4 = makeFY5253NearestEndMonthQuarter( - 1, startingMonth=8, weekday=WeekDay.SAT, qtr_with_extra_week=4 - ) - offset_nem_thu_aug_4 = makeFY5253NearestEndMonthQuarter( - 1, startingMonth=8, weekday=WeekDay.THU, qtr_with_extra_week=4 - ) - offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12, variation="nearest") - - on_offset_cases = [ - # From Wikipedia - (offset_nem_sat_aug_4, datetime(2006, 9, 2), True), - (offset_nem_sat_aug_4, datetime(2007, 9, 1), True), - (offset_nem_sat_aug_4, datetime(2008, 8, 30), True), - (offset_nem_sat_aug_4, datetime(2009, 8, 29), True), - (offset_nem_sat_aug_4, datetime(2010, 8, 28), True), - (offset_nem_sat_aug_4, datetime(2011, 9, 3), True), - (offset_nem_sat_aug_4, datetime(2016, 9, 3), True), - (offset_nem_sat_aug_4, datetime(2017, 9, 2), True), - (offset_nem_sat_aug_4, datetime(2018, 9, 1), True), - (offset_nem_sat_aug_4, datetime(2019, 8, 31), True), - (offset_nem_sat_aug_4, datetime(2006, 8, 27), False), - (offset_nem_sat_aug_4, datetime(2007, 8, 28), False), - (offset_nem_sat_aug_4, datetime(2008, 8, 31), False), - (offset_nem_sat_aug_4, datetime(2009, 8, 30), False), - (offset_nem_sat_aug_4, datetime(2010, 8, 29), False), - (offset_nem_sat_aug_4, datetime(2011, 8, 28), False), - (offset_nem_sat_aug_4, datetime(2006, 8, 25), False), - (offset_nem_sat_aug_4, datetime(2007, 8, 24), False), - (offset_nem_sat_aug_4, datetime(2008, 8, 29), False), - (offset_nem_sat_aug_4, datetime(2009, 8, 28), False), - (offset_nem_sat_aug_4, datetime(2010, 8, 27), False), - (offset_nem_sat_aug_4, datetime(2011, 8, 26), False), - (offset_nem_sat_aug_4, datetime(2019, 8, 30), False), - # From Micron, see: - # http://google.brand.edgar-online.com/?sym=MU&formtypeID=7 - (offset_nem_thu_aug_4, datetime(2012, 8, 30), True), - (offset_nem_thu_aug_4, datetime(2011, 9, 1), True), - # See: http://google.brand.edgar-online.com/?sym=MU&formtypeID=13 - (offset_nem_thu_aug_4, datetime(2013, 5, 30), True), - (offset_nem_thu_aug_4, datetime(2013, 2, 28), True), - (offset_nem_thu_aug_4, datetime(2012, 11, 29), True), - (offset_nem_thu_aug_4, datetime(2012, 5, 31), True), - (offset_nem_thu_aug_4, datetime(2007, 3, 1), True), - (offset_nem_thu_aug_4, datetime(1994, 3, 3), True), - (offset_n, datetime(2012, 12, 31), False), - (offset_n, datetime(2013, 1, 1), True), - (offset_n, datetime(2013, 1, 2), False), - ] - - @pytest.mark.parametrize("case", on_offset_cases) - def test_is_on_offset(self, case): - offset, dt, expected = case - assert_is_on_offset(offset, dt, expected) - - def test_offset(self): - offset = makeFY5253NearestEndMonthQuarter( - 1, startingMonth=8, weekday=WeekDay.THU, qtr_with_extra_week=4 - ) - - MU = [ - datetime(2012, 5, 31), - datetime(2012, 8, 30), - datetime(2012, 11, 29), - datetime(2013, 2, 28), - datetime(2013, 5, 30), - ] - - date = MU[0] + relativedelta(days=-1) - for expected in MU: - assert_offset_equal(offset, date, expected) - date = date + offset - - assert_offset_equal(offset, datetime(2012, 5, 31), datetime(2012, 8, 30)) - assert_offset_equal(offset, datetime(2012, 5, 30), datetime(2012, 5, 31)) - - offset2 = FY5253Quarter( - weekday=5, startingMonth=12, variation="last", qtr_with_extra_week=4 - ) - - assert_offset_equal(offset2, datetime(2013, 1, 15), datetime(2013, 3, 
30)) - - -def test_bunched_yearends(): - # GH#14774 cases with two fiscal year-ends in the same calendar-year - fy = FY5253(n=1, weekday=5, startingMonth=12, variation="nearest") - dt = Timestamp("2004-01-01") - assert fy.rollback(dt) == Timestamp("2002-12-28") - assert (-fy)._apply(dt) == Timestamp("2002-12-28") - assert dt - fy == Timestamp("2002-12-28") - - assert fy.rollforward(dt) == Timestamp("2004-01-03") - assert fy._apply(dt) == Timestamp("2004-01-03") - assert fy + dt == Timestamp("2004-01-03") - assert dt + fy == Timestamp("2004-01-03") - - # Same thing, but starting from a Timestamp in the previous year. - dt = Timestamp("2003-12-31") - assert fy.rollback(dt) == Timestamp("2002-12-28") - assert (-fy)._apply(dt) == Timestamp("2002-12-28") - assert dt - fy == Timestamp("2002-12-28") - - -def test_fy5253_last_onoffset(): - # GH#18877 dates on the year-end but not normalized to midnight - offset = FY5253(n=-5, startingMonth=5, variation="last", weekday=0) - ts = Timestamp("1984-05-28 06:29:43.955911354+0200", tz="Europe/San_Marino") - fast = offset.is_on_offset(ts) - slow = (ts + offset) - offset == ts - assert fast == slow - - -def test_fy5253_nearest_onoffset(): - # GH#18877 dates on the year-end but not normalized to midnight - offset = FY5253(n=3, startingMonth=7, variation="nearest", weekday=2) - ts = Timestamp("2032-07-28 00:12:59.035729419+0000", tz="Africa/Dakar") - fast = offset.is_on_offset(ts) - slow = (ts + offset) - offset == ts - assert fast == slow - - -def test_fy5253qtr_onoffset_nearest(): - # GH#19036 - ts = Timestamp("1985-09-02 23:57:46.232550356-0300", tz="Atlantic/Bermuda") - offset = FY5253Quarter( - n=3, qtr_with_extra_week=1, startingMonth=2, variation="nearest", weekday=0 - ) - fast = offset.is_on_offset(ts) - slow = (ts + offset) - offset == ts - assert fast == slow - - -def test_fy5253qtr_onoffset_last(): - # GH#19036 - offset = FY5253Quarter( - n=-2, qtr_with_extra_week=1, startingMonth=7, variation="last", weekday=2 - ) - ts = Timestamp("2011-01-26 19:03:40.331096129+0200", tz="Africa/Windhoek") - slow = (ts + offset) - offset == ts - fast = offset.is_on_offset(ts) - assert fast == slow diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/eiffel.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/eiffel.py deleted file mode 100644 index 83bfe1ffd17fea1f01adfc138e9f29acd5aa69fe..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/eiffel.py +++ /dev/null @@ -1,69 +0,0 @@ -""" - pygments.lexers.eiffel - ~~~~~~~~~~~~~~~~~~~~~~ - - Lexer for the Eiffel language. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, include, words, bygroups -from pygments.token import Comment, Operator, Keyword, Name, String, Number, \ - Punctuation, Whitespace - -__all__ = ['EiffelLexer'] - - -class EiffelLexer(RegexLexer): - """ - For Eiffel source code. - - .. versionadded:: 2.0 - """ - name = 'Eiffel' - url = 'http://www.eiffel.com' - aliases = ['eiffel'] - filenames = ['*.e'] - mimetypes = ['text/x-eiffel'] - - tokens = { - 'root': [ - (r'[^\S\n]+', Whitespace), - (r'--.*?$', Comment.Single), - (r'[^\S\n]+', Whitespace), - # Please note that keyword and operator are case insensitive. 
- (r'(?i)(true|false|void|current|result|precursor)\b', Keyword.Constant), - (r'(?i)(not|xor|implies|or)\b', Operator.Word), - (r'(?i)(and)(?:(\s+)(then))?\b', - bygroups(Operator.Word, Whitespace, Operator.Word)), - (r'(?i)(or)(?:(\s+)(else))?\b', - bygroups(Operator.Word, Whitespace, Operator.Word)), - (words(( - 'across', 'agent', 'alias', 'all', 'as', 'assign', 'attached', - 'attribute', 'check', 'class', 'convert', 'create', 'debug', - 'deferred', 'detachable', 'do', 'else', 'elseif', 'end', 'ensure', - 'expanded', 'export', 'external', 'feature', 'from', 'frozen', 'if', - 'inherit', 'inspect', 'invariant', 'like', 'local', 'loop', 'none', - 'note', 'obsolete', 'old', 'once', 'only', 'redefine', 'rename', - 'require', 'rescue', 'retry', 'select', 'separate', 'then', - 'undefine', 'until', 'variant', 'when'), prefix=r'(?i)\b', suffix=r'\b'), - Keyword.Reserved), - (r'"\[([^\]%]|%(.|\n)|\][^"])*?\]"', String), - (r'"([^"%\n]|%.)*?"', String), - include('numbers'), - (r"'([^'%]|%'|%%)'", String.Char), - (r"(//|\\\\|>=|<=|:=|/=|~|/~|[\\?!#%&@|+/\-=>*$<^\[\]])", Operator), - (r"([{}():;,.])", Punctuation), - (r'([a-z]\w*)|([A-Z][A-Z0-9_]*[a-z]\w*)', Name), - (r'([A-Z][A-Z0-9_]*)', Name.Class), - (r'\n+', Whitespace), - ], - 'numbers': [ - (r'0[xX][a-fA-F0-9]+', Number.Hex), - (r'0[bB][01]+', Number.Bin), - (r'0[cC][0-7]+', Number.Oct), - (r'([0-9]+\.[0-9]*)|([0-9]*\.[0-9]+)', Number.Float), - (r'[0-9]+', Number.Integer), - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/phix.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/phix.py deleted file mode 100644 index fb08b1dc77893692e090e9481324086414cd63d6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/phix.py +++ /dev/null @@ -1,364 +0,0 @@ -""" - pygments.lexers.phix - ~~~~~~~~~~~~~~~~~~~~ - - Lexers for Phix. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexer import RegexLexer, words -from pygments.token import Text, Comment, Operator, Keyword, Name, String, \ - Whitespace - -__all__ = ['PhixLexer'] - - -class PhixLexer(RegexLexer): - """ - Pygments Lexer for Phix files (.exw). - See http://phix.x10.mx - - .. versionadded:: 2.14.0 - """ - - name = 'Phix' - url = 'http://phix.x10.mx' - aliases = ['phix'] - filenames = ['*.exw'] - mimetypes = ['text/x-phix'] - - flags = re.MULTILINE # nb: **NOT** re.DOTALL! (totally spanners comment handling) - - preproc = ( - 'ifdef', 'elsifdef', 'elsedef' - ) - # Note these lists are auto-generated by pwa/p2js.exw, when pwa\src\p2js_keywords.e (etc) - # change, though of course subsequent copy/commit/pull requests are all manual steps. 
- types = ( - 'string', 'nullable_string', 'atom_string', 'atom', 'bool', 'boolean', - 'cdCanvan', 'cdCanvas', 'complex', 'CURLcode', 'dictionary', 'int', - 'integer', 'Ihandle', 'Ihandles', 'Ihandln', 'mpfr', 'mpq', 'mpz', - 'mpz_or_string', 'number', 'rid_string', 'seq', 'sequence', 'timedate', - 'object' - ) - keywords = ( - 'abstract', 'class', 'continue', 'export', 'extends', 'nullable', - 'private', 'public', 'static', 'struct', 'trace', - 'and', 'break', 'by', 'case', 'catch', 'const', 'constant', 'debug', - 'default', 'do', 'else', 'elsif', 'end', 'enum', 'exit', 'fallthru', - 'fallthrough', 'for', 'forward', 'function', 'global', 'if', 'in', - 'include', 'js', 'javascript', 'javascript_semantics', 'let', 'not', - 'or', 'procedure', 'profile', 'profile_time', 'return', 'safe_mode', - 'switch', 'then', 'to', 'try', 'type', 'type_check', 'until', 'warning', - 'while', 'with', 'without', 'xor' - ) - routines = ( - 'abort', 'abs', 'adjust_timedate', 'and_bits', 'and_bitsu', 'apply', - 'append', 'arccos', 'arcsin', 'arctan', 'assert', 'atan2', - 'atom_to_float32', 'atom_to_float64', 'bankers_rounding', 'beep', - 'begins', 'binary_search', 'bits_to_int', 'bk_color', 'bytes_to_int', - 'call_func', 'call_proc', 'cdCanvasActivate', 'cdCanvasArc', - 'cdCanvasBegin', 'cdCanvasBox', 'cdCanvasChord', 'cdCanvasCircle', - 'cdCanvasClear', 'cdCanvasEnd', 'cdCanvasFlush', 'cdCanvasFont', - 'cdCanvasGetImageRGB', 'cdCanvasGetSize', 'cdCanvasGetTextAlignment', - 'cdCanvasGetTextSize', 'cdCanvasLine', 'cdCanvasMark', - 'cdCanvasMarkSize', 'cdCanvasMultiLineVectorText', 'cdCanvasPixel', - 'cdCanvasRect', 'cdCanvasRoundedBox', 'cdCanvasRoundedRect', - 'cdCanvasSector', 'cdCanvasSetAttribute', 'cdCanvasSetBackground', - 'cdCanvasSetFillMode', 'cdCanvasSetForeground', - 'cdCanvasSetInteriorStyle', 'cdCanvasSetLineStyle', - 'cdCanvasSetLineWidth', 'cdCanvasSetTextAlignment', 'cdCanvasText', - 'cdCanvasSetTextOrientation', 'cdCanvasGetTextOrientation', - 'cdCanvasVectorText', 'cdCanvasVectorTextDirection', - 'cdCanvasVectorTextSize', 'cdCanvasVertex', 'cdCreateCanvas', - 'cdDecodeAlpha', 'cdDecodeColor', 'cdDecodeColorAlpha', 'cdEncodeAlpha', - 'cdEncodeColor', 'cdEncodeColorAlpha', 'cdKillCanvas', 'cdVersion', - 'cdVersionDate', 'ceil', 'change_timezone', 'choose', 'clear_screen', - 'columnize', 'command_line', 'compare', 'complex_abs', 'complex_add', - 'complex_arg', 'complex_conjugate', 'complex_cos', 'complex_cosh', - 'complex_div', 'complex_exp', 'complex_imag', 'complex_inv', - 'complex_log', 'complex_mul', 'complex_neg', 'complex_new', - 'complex_norm', 'complex_power', 'complex_rho', 'complex_real', - 'complex_round', 'complex_sin', 'complex_sinh', 'complex_sprint', - 'complex_sqrt', 'complex_sub', 'complex_theta', 'concat', 'cos', - 'crash', 'custom_sort', 'date', 'day_of_week', 'day_of_year', - 'days_in_month', 'decode_base64', 'decode_flags', 'deep_copy', 'deld', - 'deserialize', 'destroy_dict', 'destroy_queue', 'destroy_stack', - 'dict_name', 'dict_size', 'elapsed', 'elapsed_short', 'encode_base64', - 'equal', 'even', 'exp', 'extract', 'factorial', 'factors', - 'file_size_k', 'find', 'find_all', 'find_any', 'find_replace', 'filter', - 'flatten', 'float32_to_atom', 'float64_to_atom', 'floor', - 'format_timedate', 'free_console', 'from_polar', 'gcd', 'get_file_base', - 'get_file_extension', 'get_file_name', 'get_file_name_and_path', - 'get_file_path', 'get_file_path_and_name', 'get_maxprime', 'get_prime', - 'get_primes', 'get_primes_le', 'get_proper_dir', 'get_proper_path', - 'get_rand', 
'get_routine_info', 'get_test_abort', 'get_test_logfile', - 'get_test_pause', 'get_test_verbosity', 'get_tzid', 'getd', 'getdd', - 'getd_all_keys', 'getd_by_index', 'getd_index', 'getd_partial_key', - 'glAttachShader', 'glBindBuffer', 'glBindTexture', 'glBufferData', - 'glCanvasSpecialText', 'glClear', 'glClearColor', 'glColor', - 'glCompileShader', 'glCreateBuffer', 'glCreateProgram', - 'glCreateShader', 'glCreateTexture', 'glDeleteProgram', - 'glDeleteShader', 'glDrawArrays', 'glEnable', - 'glEnableVertexAttribArray', 'glFloat32Array', 'glInt32Array', - 'glFlush', 'glGetAttribLocation', 'glGetError', 'glGetProgramInfoLog', - 'glGetProgramParameter', 'glGetShaderInfoLog', 'glGetShaderParameter', - 'glGetUniformLocation', 'glLinkProgram', 'glLoadIdentity', - 'glMatrixMode', 'glOrtho', 'glRotatef', 'glShadeModel', - 'glShaderSource', 'glSimpleA7texcoords', 'glTexImage2Dc', - 'glTexParameteri', 'glTranslate', 'glUniform1f', 'glUniform1i', - 'glUniformMatrix4fv', 'glUseProgram', 'glVertex', - 'glVertexAttribPointer', 'glViewport', 'head', 'hsv_to_rgb', 'iff', - 'iif', 'include_file', 'incl0de_file', 'insert', 'instance', - 'int_to_bits', 'int_to_bytes', 'is_dict', 'is_integer', 's_leap_year', - 'is_prime', 'is_prime2', 'islower', 'isupper', 'Icallback', - 'iup_isdouble', 'iup_isprint', 'iup_XkeyBase', 'IupAppend', 'IupAlarm', - 'IupBackgroundBox', 'IupButton', 'IupCalendar', 'IupCanvas', - 'IupClipboard', 'IupClose', 'IupCloseOnEscape', 'IupControlsOpen', - 'IupDatePick', 'IupDestroy', 'IupDialog', 'IupDrawArc', 'IupDrawBegin', - 'IupDrawEnd', 'IupDrawGetSize', 'IupDrawGetTextSize', 'IupDrawLine', - 'IupDrawRectangle', 'IupDrawText', 'IupExpander', 'IupFill', - 'IupFlatLabel', 'IupFlatList', 'IupFlatTree', 'IupFlush', 'IupFrame', - 'IupGetAttribute', 'IupGetAttributeId', 'IupGetAttributePtr', - 'IupGetBrother', 'IupGetChild', 'IupGetChildCount', 'IupGetClassName', - 'IupGetDialog', 'IupGetDialogChild', 'IupGetDouble', 'IupGetFocus', - 'IupGetGlobal', 'IupGetGlobalInt', 'IupGetGlobalIntInt', 'IupGetInt', - 'IupGetInt2', 'IupGetIntId', 'IupGetIntInt', 'IupGetParent', - 'IupGLCanvas', 'IupGLCanvasOpen', 'IupGLMakeCurrent', 'IupGraph', - 'IupHbox', 'IupHide', 'IupImage', 'IupImageRGBA', 'IupItem', - 'iupKeyCodeToName', 'IupLabel', 'IupLink', 'IupList', 'IupMap', - 'IupMenu', 'IupMenuItem', 'IupMessage', 'IupMessageDlg', 'IupMultiBox', - 'IupMultiLine', 'IupNextField', 'IupNormaliser', 'IupOpen', - 'IupPlayInput', 'IupPopup', 'IupPreviousField', 'IupProgressBar', - 'IupRadio', 'IupRecordInput', 'IupRedraw', 'IupRefresh', - 'IupRefreshChildren', 'IupSeparator', 'IupSetAttribute', - 'IupSetAttributes', 'IupSetAttributeHandle', 'IupSetAttributeId', - 'IupSetAttributePtr', 'IupSetCallback', 'IupSetCallbacks', - 'IupSetDouble', 'IupSetFocus', 'IupSetGlobal', 'IupSetGlobalInt', - 'IupSetGlobalFunction', 'IupSetHandle', 'IupSetInt', - 'IupSetStrAttribute', 'IupSetStrGlobal', 'IupShow', 'IupShowXY', - 'IupSplit', 'IupStoreAttribute', 'IupSubmenu', 'IupTable', - 'IupTableClearSelected', 'IupTableClick_cb', 'IupTableGetSelected', - 'IupTableResize_cb', 'IupTableSetData', 'IupTabs', 'IupText', - 'IupTimer', 'IupToggle', 'IupTreeAddNodes', 'IupTreeView', 'IupUpdate', - 'IupValuator', 'IupVbox', 'join', 'join_by', 'join_path', 'k_perm', - 'largest', 'lcm', 'length', 'log', 'log10', 'log2', 'lower', - 'm4_crossProduct', 'm4_inverse', 'm4_lookAt', 'm4_multiply', - 'm4_normalize', 'm4_perspective', 'm4_subtractVectors', 'm4_xRotate', - 'm4_yRotate', 'machine_bits', 'machine_word', 'match', 'match_all', - 
'match_replace', 'max', 'maxsq', 'min', 'minsq', 'mod', 'mpfr_add', - 'mpfr_ceil', 'mpfr_cmp', 'mpfr_cmp_si', 'mpfr_const_pi', 'mpfr_div', - 'mpfr_div_si', 'mpfr_div_z', 'mpfr_floor', 'mpfr_free', 'mpfr_get_d', - 'mpfr_get_default_precision', 'mpfr_get_default_rounding_mode', - 'mpfr_get_fixed', 'mpfr_get_precision', 'mpfr_get_si', 'mpfr_init', - 'mpfr_inits', 'mpfr_init_set', 'mpfr_init_set_q', 'mpfr_init_set_z', - 'mpfr_mul', 'mpfr_mul_si', 'mpfr_pow_si', 'mpfr_set', 'mpfr_set_d', - 'mpfr_set_default_precision', 'mpfr_set_default_rounding_mode', - 'mpfr_set_precision', 'mpfr_set_q', 'mpfr_set_si', 'mpfr_set_str', - 'mpfr_set_z', 'mpfr_si_div', 'mpfr_si_sub', 'mpfr_sqrt', 'mpfr_sub', - 'mpfr_sub_si', 'mpq_abs', 'mpq_add', 'mpq_add_si', 'mpq_canonicalize', - 'mpq_cmp', 'mpq_cmp_si', 'mpq_div', 'mpq_div_2exp', 'mpq_free', - 'mpq_get_den', 'mpq_get_num', 'mpq_get_str', 'mpq_init', 'mpq_init_set', - 'mpq_init_set_si', 'mpq_init_set_str', 'mpq_init_set_z', 'mpq_inits', - 'mpq_inv', 'mpq_mul', 'mpq_neg', 'mpq_set', 'mpq_set_si', 'mpq_set_str', - 'mpq_set_z', 'mpq_sub', 'mpz_abs', 'mpz_add', 'mpz_addmul', - 'mpz_addmul_ui', 'mpz_addmul_si', 'mpz_add_si', 'mpz_add_ui', 'mpz_and', - 'mpz_bin_uiui', 'mpz_cdiv_q', 'mpz_cmp', 'mpz_cmp_si', 'mpz_divexact', - 'mpz_divexact_ui', 'mpz_divisible_p', 'mpz_divisible_ui_p', 'mpz_even', - 'mpz_fac_ui', 'mpz_factorstring', 'mpz_fdiv_q', 'mpz_fdiv_q_2exp', - 'mpz_fdiv_q_ui', 'mpz_fdiv_qr', 'mpz_fdiv_r', 'mpz_fdiv_ui', - 'mpz_fib_ui', 'mpz_fib2_ui', 'mpz_fits_atom', 'mpz_fits_integer', - 'mpz_free', 'mpz_gcd', 'mpz_gcd_ui', 'mpz_get_atom', 'mpz_get_integer', - 'mpz_get_short_str', 'mpz_get_str', 'mpz_init', 'mpz_init_set', - 'mpz_inits', 'mpz_invert', 'mpz_lcm', 'mpz_lcm_ui', 'mpz_max', - 'mpz_min', 'mpz_mod', 'mpz_mod_ui', 'mpz_mul', 'mpz_mul_2exp', - 'mpz_mul_d', 'mpz_mul_si', 'mpz_neg', 'mpz_nthroot', 'mpz_odd', - 'mpz_pollard_rho', 'mpz_pow_ui', 'mpz_powm', 'mpz_powm_ui', 'mpz_prime', - 'mpz_prime_factors', 'mpz_prime_mr', 'mpz_rand', 'mpz_rand_ui', - 'mpz_re_compose', 'mpz_remove', 'mpz_scan0', 'mpz_scan1', 'mpz_set', - 'mpz_set_d', 'mpz_set_si', 'mpz_set_str', 'mpz_set_v', 'mpz_sign', - 'mpz_sizeinbase', 'mpz_sqrt', 'mpz_sub', 'mpz_sub_si', 'mpz_sub_ui', - 'mpz_si_sub', 'mpz_tdiv_q_2exp', 'mpz_tdiv_r_2exp', 'mpz_tstbit', - 'mpz_ui_pow_ui', 'mpz_xor', 'named_dict', 'new_dict', 'new_queue', - 'new_stack', 'not_bits', 'not_bitsu', 'odd', 'or_all', 'or_allu', - 'or_bits', 'or_bitsu', 'ord', 'ordinal', 'ordinant', - 'override_timezone', 'pad', 'pad_head', 'pad_tail', 'parse_date_string', - 'papply', 'peep', 'peepn', 'peep_dict', 'permute', 'permutes', - 'platform', 'pop', 'popn', 'pop_dict', 'power', 'pp', 'ppEx', 'ppExf', - 'ppf', 'ppOpt', 'pq_add', 'pq_destroy', 'pq_empty', 'pq_new', 'pq_peek', - 'pq_pop', 'pq_pop_data', 'pq_size', 'prepend', 'prime_factors', - 'printf', 'product', 'proper', 'push', 'pushn', 'putd', 'puts', - 'queue_empty', 'queue_size', 'rand', 'rand_range', 'reinstate', - 'remainder', 'remove', 'remove_all', 'repeat', 'repeatch', 'replace', - 'requires', 'reverse', 'rfind', 'rgb', 'rmatch', 'rmdr', 'rnd', 'round', - 'routine_id', 'scanf', 'serialize', 'series', 'set_rand', - 'set_test_abort', 'set_test_logfile', 'set_test_module', - 'set_test_pause', 'set_test_verbosity', 'set_timedate_formats', - 'set_timezone', 'setd', 'setd_default', 'shorten', 'sha256', - 'shift_bits', 'shuffle', 'sign', 'sin', 'smallest', 'sort', - 'sort_columns', 'speak', 'splice', 'split', 'split_any', 'split_by', - 'sprint', 'sprintf', 'sq_abs', 'sq_add', 'sq_and', 
'sq_and_bits', - 'sq_arccos', 'sq_arcsin', 'sq_arctan', 'sq_atom', 'sq_ceil', 'sq_cmp', - 'sq_cos', 'sq_div', 'sq_even', 'sq_eq', 'sq_floor', 'sq_floor_div', - 'sq_ge', 'sq_gt', 'sq_int', 'sq_le', 'sq_log', 'sq_log10', 'sq_log2', - 'sq_lt', 'sq_max', 'sq_min', 'sq_mod', 'sq_mul', 'sq_ne', 'sq_not', - 'sq_not_bits', 'sq_odd', 'sq_or', 'sq_or_bits', 'sq_power', 'sq_rand', - 'sq_remainder', 'sq_rmdr', 'sq_rnd', 'sq_round', 'sq_seq', 'sq_sign', - 'sq_sin', 'sq_sqrt', 'sq_str', 'sq_sub', 'sq_tan', 'sq_trunc', - 'sq_uminus', 'sq_xor', 'sq_xor_bits', 'sqrt', 'square_free', - 'stack_empty', 'stack_size', 'substitute', 'substitute_all', 'sum', - 'tail', 'tan', 'test_equal', 'test_fail', 'test_false', - 'test_not_equal', 'test_pass', 'test_summary', 'test_true', - 'text_color', 'throw', 'time', 'timedate_diff', 'timedelta', - 'to_integer', 'to_number', 'to_rgb', 'to_string', 'traverse_dict', - 'traverse_dict_partial_key', 'trim', 'trim_head', 'trim_tail', 'trunc', - 'tagset', 'tagstart', 'typeof', 'unique', 'unix_dict', 'upper', - 'utf8_to_utf32', 'utf32_to_utf8', 'version', 'vlookup', 'vslice', - 'wglGetProcAddress', 'wildcard_file', 'wildcard_match', 'with_rho', - 'with_theta', 'xml_new_doc', 'xml_new_element', 'xml_set_attribute', - 'xml_sprint', 'xor_bits', 'xor_bitsu', - 'accept', 'allocate', 'allocate_string', 'allow_break', 'ARM', - 'atom_to_float80', 'c_func', 'c_proc', 'call_back', 'chdir', - 'check_break', 'clearDib', 'close', 'closesocket', 'console', - 'copy_file', 'create', 'create_directory', 'create_thread', - 'curl_easy_cleanup', 'curl_easy_get_file', 'curl_easy_init', - 'curl_easy_perform', 'curl_easy_perform_ex', 'curl_easy_setopt', - 'curl_easy_strerror', 'curl_global_cleanup', 'curl_global_init', - 'curl_slist_append', 'curl_slist_free_all', 'current_dir', 'cursor', - 'define_c_func', 'define_c_proc', 'delete', 'delete_cs', 'delete_file', - 'dir', 'DLL', 'drawDib', 'drawShadedPolygonToDib', 'ELF32', 'ELF64', - 'enter_cs', 'eval', 'exit_thread', 'free', 'file_exists', 'final', - 'float80_to_atom', 'format', 'get_bytes', 'get_file_date', - 'get_file_size', 'get_file_type', 'get_interpreter', 'get_key', - 'get_socket_error', 'get_text', 'get_thread_exitcode', 'get_thread_id', - 'getc', 'getenv', 'gets', 'getsockaddr', 'glBegin', 'glCallList', - 'glFrustum', 'glGenLists', 'glGetString', 'glLight', 'glMaterial', - 'glNewList', 'glNormal', 'glPopMatrix', 'glPushMatrix', 'glRotate', - 'glEnd', 'glEndList', 'glTexImage2D', 'goto', 'GUI', 'icons', 'ilASM', - 'include_files', 'include_paths', 'init_cs', 'ip_to_string', - 'IupConfig', 'IupConfigDialogClosed', 'IupConfigDialogShow', - 'IupConfigGetVariableInt', 'IupConfigLoad', 'IupConfigSave', - 'IupConfigSetVariableInt', 'IupExitLoop', 'IupFileDlg', 'IupFileList', - 'IupGLSwapBuffers', 'IupHelp', 'IupLoopStep', 'IupMainLoop', - 'IupNormalizer', 'IupPlot', 'IupPlotAdd', 'IupPlotBegin', 'IupPlotEnd', - 'IupPlotInsert', 'IupSaveImage', 'IupTreeGetUserId', 'IupUser', - 'IupVersion', 'IupVersionDate', 'IupVersionNumber', 'IupVersionShow', - 'killDib', 'leave_cs', 'listen', 'manifest', 'mem_copy', 'mem_set', - 'mpfr_gamma', 'mpfr_printf', 'mpfr_sprintf', 'mpz_export', 'mpz_import', - 'namespace', 'new', 'newDib', 'open', 'open_dll', 'PE32', 'PE64', - 'peek', 'peek_string', 'peek1s', 'peek1u', 'peek2s', 'peek2u', 'peek4s', - 'peek4u', 'peek8s', 'peek8u', 'peekNS', 'peekns', 'peeknu', 'poke', - 'poke2', 'poke4', 'poke8', 'pokeN', 'poke_string', 'poke_wstring', - 'position', 'progress', 'prompt_number', 'prompt_string', 'read_file', - 'read_lines', 
'recv', 'resume_thread', 'seek', 'select', 'send', - 'setHandler', 'shutdown', 'sleep', 'SO', 'sockaddr_in', 'socket', - 'split_path', 'suspend_thread', 'system', 'system_exec', 'system_open', - 'system_wait', 'task_clock_start', 'task_clock_stop', 'task_create', - 'task_delay', 'task_list', 'task_schedule', 'task_self', 'task_status', - 'task_suspend', 'task_yield', 'thread_safe_string', 'try_cs', - 'utf8_to_utf16', 'utf16_to_utf8', 'utf16_to_utf32', 'utf32_to_utf16', - 'video_config', 'WSACleanup', 'wait_thread', 'walk_dir', 'where', - 'write_lines', 'wait_key' - ) - constants = ( - 'ANY_QUEUE', 'ASCENDING', 'BLACK', 'BLOCK_CURSOR', 'BLUE', - 'BRIGHT_CYAN', 'BRIGHT_BLUE', 'BRIGHT_GREEN', 'BRIGHT_MAGENTA', - 'BRIGHT_RED', 'BRIGHT_WHITE', 'BROWN', 'C_DWORD', 'C_INT', 'C_POINTER', - 'C_USHORT', 'C_WORD', 'CD_AMBER', 'CD_BLACK', 'CD_BLUE', 'CD_BOLD', - 'CD_BOLD_ITALIC', 'CD_BOX', 'CD_CENTER', 'CD_CIRCLE', 'CD_CLOSED_LINES', - 'CD_CONTINUOUS', 'CD_CUSTOM', 'CD_CYAN', 'CD_DARK_BLUE', 'CD_DARK_CYAN', - 'CD_DARK_GRAY', 'CD_DARK_GREY', 'CD_DARK_GREEN', 'CD_DARK_MAGENTA', - 'CD_DARK_RED', 'CD_DARK_YELLOW', 'CD_DASH_DOT', 'CD_DASH_DOT_DOT', - 'CD_DASHED', 'CD_DBUFFER', 'CD_DEG2RAD', 'CD_DIAMOND', 'CD_DOTTED', - 'CD_EAST', 'CD_EVENODD', 'CD_FILL', 'CD_GL', 'CD_GRAY', 'CD_GREY', - 'CD_GREEN', 'CD_HATCH', 'CD_HOLLOW', 'CD_HOLLOW_BOX', - 'CD_HOLLOW_CIRCLE', 'CD_HOLLOW_DIAMOND', 'CD_INDIGO', 'CD_ITALIC', - 'CD_IUP', 'CD_IUPDBUFFER', 'CD_LIGHT_BLUE', 'CD_LIGHT_GRAY', - 'CD_LIGHT_GREY', 'CD_LIGHT_GREEN', 'CD_LIGHT_PARCHMENT', 'CD_MAGENTA', - 'CD_NAVY', 'CD_NORTH', 'CD_NORTH_EAST', 'CD_NORTH_WEST', 'CD_OLIVE', - 'CD_OPEN_LINES', 'CD_ORANGE', 'CD_PARCHMENT', 'CD_PATTERN', - 'CD_PRINTER', 'CD_PURPLE', 'CD_PLAIN', 'CD_PLUS', 'CD_QUERY', - 'CD_RAD2DEG', 'CD_RED', 'CD_SILVER', 'CD_SOLID', 'CD_SOUTH_EAST', - 'CD_SOUTH_WEST', 'CD_STAR', 'CD_STIPPLE', 'CD_STRIKEOUT', - 'CD_UNDERLINE', 'CD_WEST', 'CD_WHITE', 'CD_WINDING', 'CD_VIOLET', - 'CD_X', 'CD_YELLOW', 'CURLE_OK', 'CURLOPT_MAIL_FROM', - 'CURLOPT_MAIL_RCPT', 'CURLOPT_PASSWORD', 'CURLOPT_READDATA', - 'CURLOPT_READFUNCTION', 'CURLOPT_SSL_VERIFYPEER', - 'CURLOPT_SSL_VERIFYHOST', 'CURLOPT_UPLOAD', 'CURLOPT_URL', - 'CURLOPT_USE_SSL', 'CURLOPT_USERNAME', 'CURLOPT_VERBOSE', - 'CURLOPT_WRITEFUNCTION', 'CURLUSESSL_ALL', 'CYAN', 'D_NAME', - 'D_ATTRIBUTES', 'D_SIZE', 'D_YEAR', 'D_MONTH', 'D_DAY', 'D_HOUR', - 'D_MINUTE', 'D_SECOND', 'D_CREATION', 'D_LASTACCESS', 'D_MODIFICATION', - 'DT_YEAR', 'DT_MONTH', 'DT_DAY', 'DT_HOUR', 'DT_MINUTE', 'DT_SECOND', - 'DT_DOW', 'DT_MSEC', 'DT_DOY', 'DT_GMT', 'EULER', 'E_CODE', 'E_ADDR', - 'E_LINE', 'E_RTN', 'E_NAME', 'E_FILE', 'E_PATH', 'E_USER', 'false', - 'False', 'FALSE', 'FIFO_QUEUE', 'FILETYPE_DIRECTORY', 'FILETYPE_FILE', - 'GET_EOF', 'GET_FAIL', 'GET_IGNORE', 'GET_SUCCESS', - 'GL_AMBIENT_AND_DIFFUSE', 'GL_ARRAY_BUFFER', 'GL_CLAMP', - 'GL_CLAMP_TO_BORDER', 'GL_CLAMP_TO_EDGE', 'GL_COLOR_BUFFER_BIT', - 'GL_COMPILE', 'GL_COMPILE_STATUS', 'GL_CULL_FACE', - 'GL_DEPTH_BUFFER_BIT', 'GL_DEPTH_TEST', 'GL_EXTENSIONS', 'GL_FLAT', - 'GL_FLOAT', 'GL_FRAGMENT_SHADER', 'GL_FRONT', 'GL_LIGHT0', - 'GL_LIGHTING', 'GL_LINEAR', 'GL_LINK_STATUS', 'GL_MODELVIEW', - 'GL_NEAREST', 'GL_NO_ERROR', 'GL_NORMALIZE', 'GL_POSITION', - 'GL_PROJECTION', 'GL_QUAD_STRIP', 'GL_QUADS', 'GL_RENDERER', - 'GL_REPEAT', 'GL_RGB', 'GL_RGBA', 'GL_SMOOTH', 'GL_STATIC_DRAW', - 'GL_TEXTURE_2D', 'GL_TEXTURE_MAG_FILTER', 'GL_TEXTURE_MIN_FILTER', - 'GL_TEXTURE_WRAP_S', 'GL_TEXTURE_WRAP_T', 'GL_TRIANGLES', - 'GL_UNSIGNED_BYTE', 'GL_VENDOR', 'GL_VERSION', 'GL_VERTEX_SHADER', - 'GRAY', 
'GREEN', 'GT_LF_STRIPPED', 'GT_WHOLE_FILE', 'INVLN10', - 'IUP_CLOSE', 'IUP_CONTINUE', 'IUP_DEFAULT', 'IUP_BLACK', 'IUP_BLUE', - 'IUP_BUTTON1', 'IUP_BUTTON3', 'IUP_CENTER', 'IUP_CYAN', 'IUP_DARK_BLUE', - 'IUP_DARK_CYAN', 'IUP_DARK_GRAY', 'IUP_DARK_GREY', 'IUP_DARK_GREEN', - 'IUP_DARK_MAGENTA', 'IUP_DARK_RED', 'IUP_GRAY', 'IUP_GREY', 'IUP_GREEN', - 'IUP_IGNORE', 'IUP_INDIGO', 'IUP_MAGENTA', 'IUP_MASK_INT', - 'IUP_MASK_UINT', 'IUP_MOUSEPOS', 'IUP_NAVY', 'IUP_OLIVE', 'IUP_RECTEXT', - 'IUP_RED', 'IUP_LIGHT_BLUE', 'IUP_LIGHT_GRAY', 'IUP_LIGHT_GREY', - 'IUP_LIGHT_GREEN', 'IUP_ORANGE', 'IUP_PARCHMENT', 'IUP_PURPLE', - 'IUP_SILVER', 'IUP_TEAL', 'IUP_VIOLET', 'IUP_WHITE', 'IUP_YELLOW', - 'K_BS', 'K_cA', 'K_cC', 'K_cD', 'K_cF5', 'K_cK', 'K_cM', 'K_cN', 'K_cO', - 'K_cP', 'K_cR', 'K_cS', 'K_cT', 'K_cW', 'K_CR', 'K_DEL', 'K_DOWN', - 'K_END', 'K_ESC', 'K_F1', 'K_F2', 'K_F3', 'K_F4', 'K_F5', 'K_F6', - 'K_F7', 'K_F8', 'K_F9', 'K_F10', 'K_F11', 'K_F12', 'K_HOME', 'K_INS', - 'K_LEFT', 'K_MIDDLE', 'K_PGDN', 'K_PGUP', 'K_RIGHT', 'K_SP', 'K_TAB', - 'K_UP', 'K_h', 'K_i', 'K_j', 'K_p', 'K_r', 'K_s', 'JS', 'LIFO_QUEUE', - 'LINUX', 'MAX_HEAP', 'MAGENTA', 'MIN_HEAP', 'Nan', 'NO_CURSOR', 'null', - 'NULL', 'PI', 'pp_Ascii', 'pp_Brkt', 'pp_Date', 'pp_File', 'pp_FltFmt', - 'pp_Indent', 'pp_IntCh', 'pp_IntFmt', 'pp_Maxlen', 'pp_Nest', - 'pp_Pause', 'pp_Q22', 'pp_StrFmt', 'RED', 'SEEK_OK', 'SLASH', - 'TEST_ABORT', 'TEST_CRASH', 'TEST_PAUSE', 'TEST_PAUSE_FAIL', - 'TEST_QUIET', 'TEST_SHOW_ALL', 'TEST_SHOW_FAILED', 'TEST_SUMMARY', - 'true', 'True', 'TRUE', 'VC_SCRNLINES', 'WHITE', 'WINDOWS', 'YELLOW' - ) - - tokens = { - 'root': [ - (r"\s+", Whitespace), - (r'/\*|--/\*|#\[', Comment.Multiline, 'comment'), - (r'(?://|--|#!).*$', Comment.Single), -#Alt: -# (r'//.*$|--.*$|#!.*$', Comment.Single), - (r'"([^"\\]|\\.)*"', String.Other), - (r'\'[^\']*\'', String.Other), - (r'`[^`]*`', String.Other), - - (words(types, prefix=r'\b', suffix=r'\b'), Name.Function), - (words(routines, prefix=r'\b', suffix=r'\b'), Name.Function), - (words(preproc, prefix=r'\b', suffix=r'\b'), Keyword.Declaration), - (words(keywords, prefix=r'\b', suffix=r'\b'), Keyword.Declaration), - (words(constants, prefix=r'\b', suffix=r'\b'), Name.Constant), - # Aside: Phix only supports/uses the ascii/non-unicode tilde - (r'!=|==|<<|>>|:=|[-~+/*%=<>&^|\.(){},?:\[\]$\\;#]', Operator), - (r'[\w-]+', Text) - ], - 'comment': [ - (r'[^*/#]+', Comment.Multiline), - (r'/\*|#\[', Comment.Multiline, '#push'), - (r'\*/|#\]', Comment.Multiline, '#pop'), - (r'[*/#]', Comment.Multiline) - ] - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/cyaml.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/cyaml.py deleted file mode 100644 index 0c21345879b298bb8668201bebe7d289586b17f9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/yaml/cyaml.py +++ /dev/null @@ -1,101 +0,0 @@ - -__all__ = [ - 'CBaseLoader', 'CSafeLoader', 'CFullLoader', 'CUnsafeLoader', 'CLoader', - 'CBaseDumper', 'CSafeDumper', 'CDumper' -] - -from yaml._yaml import CParser, CEmitter - -from .constructor import * - -from .serializer import * -from .representer import * - -from .resolver import * - -class CBaseLoader(CParser, BaseConstructor, BaseResolver): - - def __init__(self, stream): - CParser.__init__(self, stream) - BaseConstructor.__init__(self) - BaseResolver.__init__(self) - -class CSafeLoader(CParser, SafeConstructor, Resolver): - - def __init__(self, stream): - CParser.__init__(self, 
stream) - SafeConstructor.__init__(self) - Resolver.__init__(self) - -class CFullLoader(CParser, FullConstructor, Resolver): - - def __init__(self, stream): - CParser.__init__(self, stream) - FullConstructor.__init__(self) - Resolver.__init__(self) - -class CUnsafeLoader(CParser, UnsafeConstructor, Resolver): - - def __init__(self, stream): - CParser.__init__(self, stream) - UnsafeConstructor.__init__(self) - Resolver.__init__(self) - -class CLoader(CParser, Constructor, Resolver): - - def __init__(self, stream): - CParser.__init__(self, stream) - Constructor.__init__(self) - Resolver.__init__(self) - -class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver): - - def __init__(self, stream, - default_style=None, default_flow_style=False, - canonical=None, indent=None, width=None, - allow_unicode=None, line_break=None, - encoding=None, explicit_start=None, explicit_end=None, - version=None, tags=None, sort_keys=True): - CEmitter.__init__(self, stream, canonical=canonical, - indent=indent, width=width, encoding=encoding, - allow_unicode=allow_unicode, line_break=line_break, - explicit_start=explicit_start, explicit_end=explicit_end, - version=version, tags=tags) - Representer.__init__(self, default_style=default_style, - default_flow_style=default_flow_style, sort_keys=sort_keys) - Resolver.__init__(self) - -class CSafeDumper(CEmitter, SafeRepresenter, Resolver): - - def __init__(self, stream, - default_style=None, default_flow_style=False, - canonical=None, indent=None, width=None, - allow_unicode=None, line_break=None, - encoding=None, explicit_start=None, explicit_end=None, - version=None, tags=None, sort_keys=True): - CEmitter.__init__(self, stream, canonical=canonical, - indent=indent, width=width, encoding=encoding, - allow_unicode=allow_unicode, line_break=line_break, - explicit_start=explicit_start, explicit_end=explicit_end, - version=version, tags=tags) - SafeRepresenter.__init__(self, default_style=default_style, - default_flow_style=default_flow_style, sort_keys=sort_keys) - Resolver.__init__(self) - -class CDumper(CEmitter, Serializer, Representer, Resolver): - - def __init__(self, stream, - default_style=None, default_flow_style=False, - canonical=None, indent=None, width=None, - allow_unicode=None, line_break=None, - encoding=None, explicit_start=None, explicit_end=None, - version=None, tags=None, sort_keys=True): - CEmitter.__init__(self, stream, canonical=canonical, - indent=indent, width=width, encoding=encoding, - allow_unicode=allow_unicode, line_break=line_break, - explicit_start=explicit_start, explicit_end=explicit_end, - version=version, tags=tags) - Representer.__init__(self, default_style=default_style, - default_flow_style=default_flow_style, sort_keys=sort_keys) - Resolver.__init__(self) - diff --git a/spaces/pyInter/Liyuu_sovits4/cluster/train_cluster.py b/spaces/pyInter/Liyuu_sovits4/cluster/train_cluster.py deleted file mode 100644 index 4ac025d400414226e66849407f477ae786c3d5d3..0000000000000000000000000000000000000000 --- a/spaces/pyInter/Liyuu_sovits4/cluster/train_cluster.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -from glob import glob -from pathlib import Path -import torch -import logging -import argparse -import torch -import numpy as np -from sklearn.cluster import KMeans, MiniBatchKMeans -import tqdm -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -import time -import random - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False): - - logger.info(f"Loading features from {in_dir}") - features = [] - 
nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pt")): - features.append(torch.load(path).squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"Clustering features of shape: {features.shape}") - t = time.time() - if use_minibatch: - kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_, - "_n_threads": kmeans._n_threads, - "cluster_centers_": kmeans.cluster_centers_, - } - print("end") - - return x - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="./dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"train kmeans for {spk}...") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters, verbose=False) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - torch.save( - ckpt, - checkpoint_path, - ) - - - # import cluster - # for spk in tqdm.tqdm(os.listdir("dataset")): - # if os.path.isdir(f"dataset/{spk}"): - # print(f"start kmeans inference for {spk}...") - # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)): - # mel_path = feature_path.replace(".discrete.npy",".mel.npy") - # mel_spectrogram = np.load(mel_path) - # feature_len = mel_spectrogram.shape[-1] - # c = np.load(feature_path) - # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy() - # feature = c.T - # feature_class = cluster.get_cluster_result(feature, spk) - # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class) - - diff --git a/spaces/pycui/RealChar/realtime_ai_character/main.py b/spaces/pycui/RealChar/realtime_ai_character/main.py deleted file mode 100644 index 0268607508e995d70049178b28fe0f9651053af7..0000000000000000000000000000000000000000 --- a/spaces/pycui/RealChar/realtime_ai_character/main.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import warnings - -from dotenv import load_dotenv -from fastapi import FastAPI -from fastapi.middleware.cors import CORSMiddleware -from fastapi.staticfiles import StaticFiles - -from realtime_ai_character.audio.speech_to_text import get_speech_to_text -from realtime_ai_character.audio.text_to_speech import get_text_to_speech -from realtime_ai_character.character_catalog.catalog_manager import CatalogManager -from realtime_ai_character.restful_routes import router as restful_router -from realtime_ai_character.utils import ConnectionManager -from realtime_ai_character.websocket_routes import router as websocket_router - -load_dotenv() - -app = FastAPI() - -app.add_middleware( - CORSMiddleware, - # Change to domains if you deploy this to production - allow_origins=['*'], - allow_credentials=True, - allow_methods=["*"], - allow_headers=["*"], -) - -app.include_router(restful_router) -app.include_router(websocket_router) 
-app.mount("/static", StaticFiles(directory=os.path.join( - os.path.dirname(os.path.abspath(__file__)), 'static')), name="static") - - -# initializations -CatalogManager.initialize(overwrite=True) -ConnectionManager.initialize() -get_text_to_speech() -get_speech_to_text() - -# suppress deprecation warnings -warnings.filterwarnings("ignore", module="whisper") diff --git a/spaces/qingxu98/gpt-academic/tests/test_plugins.py b/spaces/qingxu98/gpt-academic/tests/test_plugins.py deleted file mode 100644 index d9f78d6de905ee17d357dcf83682eff37d13034a..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/tests/test_plugins.py +++ /dev/null @@ -1,60 +0,0 @@ -""" -对项目中的各个插件进行测试。运行方法:直接运行 python tests/test_plugins.py -""" - - -import os, sys -def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume) -validate_path() # 返回项目根路径 - -if __name__ == "__main__": - from tests.test_utils import plugin_test - plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"}) - - # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep') - - # plugin_test(plugin='crazy_functions.批量翻译PDF文档_NOUGAT->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf') - - # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件,对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析') - - # plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表') - - # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个Python项目', main_input="crazy_functions/test_project/python/dqn") - - # plugin_test(plugin='crazy_functions.解析项目源代码->解析一个C项目', main_input="crazy_functions/test_project/cpp/cppipc") - - # plugin_test(plugin='crazy_functions.Latex全文润色->Latex英文润色', main_input="crazy_functions/test_project/latex/attention") - - # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown中译英', main_input="README.md") - - # plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf') - - # plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=") - - # plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word") - - # plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695") - - # plugin_test(plugin='crazy_functions.联网的ChatGPT->连接网络回答问题', main_input="谁是应急食品?") - - # plugin_test(plugin='crazy_functions.解析JupyterNotebook->解析ipynb文件', main_input="crazy_functions/test_samples") - - # plugin_test(plugin='crazy_functions.数学动画生成manim->动画生成', main_input="A ball split into 2, and then split into 4, and finally split into 8.") - - # for lang in ["English", "French", "Japanese", "Korean", "Russian", "Italian", "German", "Portuguese", "Arabic"]: - # plugin_test(plugin='crazy_functions.批量Markdown翻译->Markdown翻译指定语言', main_input="README.md", advanced_arg={"advanced_arg": lang}) - - # plugin_test(plugin='crazy_functions.Langchain知识库->知识库问答', main_input="./") - - # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="What is the installation method?") - - # plugin_test(plugin='crazy_functions.Langchain知识库->读取知识库作答', main_input="远程云服务器部署?") - - # 
plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629") - - # advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求:100字以内,用第二人称。' --system_prompt=''" } - # plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg) - - # advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " } - # plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg) - diff --git a/spaces/qudehu123/BingAI/README.md b/spaces/qudehu123/BingAI/README.md deleted file mode 100644 index 937fcd61bd80af7f56c2bd2871bf9352825a849b..0000000000000000000000000000000000000000 --- a/spaces/qudehu123/BingAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BingAI -emoji: 🦀 -colorFrom: pink -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Cs Keygen Exe Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Cs Keygen Exe Download.md deleted file mode 100644 index a614033220c0c8816bb48e378476c9bdab75cb2b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Cs Keygen Exe Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Adobe Cs Keygen Exe Download


      Download Zip ››››› https://geags.com/2uCqHJ



      -
      - 3cee63e6c2
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Bctplayer052download REPACK.md b/spaces/quidiaMuxgu/Expedit-SAM/Bctplayer052download REPACK.md deleted file mode 100644 index 0e33fe5ddf504ab9cb6820acd643e8b32a202573..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Bctplayer052download REPACK.md +++ /dev/null @@ -1,9 +0,0 @@ -

      bctplayer052download


      Download Zip 🆓 https://geags.com/2uCrkq



      -
      -Joe had to sell one of his cars to get funds for this project. While his crew assembled the engine, Joe's friends came in to do the artwork, which was ... very expensive. - -Over the next two years, Joe and his team worked on the engine, building it piece by piece from various parts they found at junkyards and used car auctions. When the project was finally completed, the engine weighed over 500 pounds. -In 2009, the engine was disassembled and cleaned, after which Joe's team traveled to Michigan to test it. During the tests, the engine was malfunctioning, but Joe and his team managed to calibrate it. 8a78ff9644
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/CyberGhost VPN Premium 7.2.4294 With [Latest] Crack [UPDATED] 2020.md b/spaces/quidiaMuxgu/Expedit-SAM/CyberGhost VPN Premium 7.2.4294 With [Latest] Crack [UPDATED] 2020.md deleted file mode 100644 index ff89f39a9ece64b7d333f0cf13746dbba917ab92..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/CyberGhost VPN Premium 7.2.4294 With [Latest] Crack [UPDATED] 2020.md +++ /dev/null @@ -1,98 +0,0 @@ -
      -

      CyberGhost VPN Premium 7.2.4294 With [Latest] Crack 2020

      - -

      If you are looking for a way to protect your online privacy and access any website you want, you might be interested in CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020. This is a powerful and reliable VPN service that allows you to surf the web anonymously and securely.

      - -

      What is CyberGhost VPN Premium 7.2.4294?

      - -

      CyberGhost VPN Premium 7.2.4294 is the latest version of CyberGhost VPN, a popular VPN provider that has over 30 million users worldwide. CyberGhost VPN Premium 7.2.4294 offers you many features and benefits, such as:

      -

      CyberGhost VPN Premium 7.2.4294 With [Latest] Crack 2020


      Download Ziphttps://geags.com/2uCrrJ



      - -
        -
      • Access to over 7000 servers in 90 countries
      • -
      • Unlimited bandwidth and traffic
      • -
      • Military-grade encryption and kill switch
      • -
      • No logs policy and DNS leak protection
      • -
      • Support for up to 7 devices simultaneously
      • -
      • Compatible with Windows, Mac, Linux, Android, iOS, and more
      • -
      • 24/7 customer support and 45-day money-back guarantee
      • -
      - -

      With CyberGhost VPN Premium 7.2.4294, you can bypass geo-restrictions and censorship, stream your favorite content from Netflix, Hulu, BBC iPlayer, and more, download torrents safely and anonymously, and protect your sensitive data from hackers and snoopers.

      - -

      How to get CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020?

      - -

      If you want to enjoy all the benefits of CyberGhost VPN Premium 7.2.4294 without paying a subscription fee, you can use the [Latest] Crack 2020 that we provide in this article. The crack is a simple and easy way to activate CyberGhost VPN Premium 7.2.4294 for free and use it for as long as you want.

      - -

      To get CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020, you just need to follow these steps:

      - -
        -
      1. Download CyberGhost VPN Premium 7.2.4294 from the official website or from the link below
      2. -
      3. Install CyberGhost VPN Premium 7.2.4294 on your device
      4. -
      5. Download the [Latest] Crack 2020 from the link below
      6. -
      7. Extract the crack file and run it as administrator
      8. -
      9. Click on the "Activate" button and wait for the process to complete
      10. -
      11. Enjoy CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 for free!
      12. -
      - -

      Note: The crack is tested and working on Windows 10, but it may not work on other operating systems or versions.

      - -

      Why choose CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020?

      - -

      CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 is a great option for anyone who wants to enjoy the best of both worlds: a premium VPN service and a free activation method. By using CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020, you can:

      - -
        -
      • Save money on a monthly or yearly subscription fee
      • -
      • Get unlimited access to all the features and servers of CyberGhost VPN Premium 7.2.4294
      • -
      • Avoid any risks or complications of using other methods such as keygen, patch, or serial number
      • -
      • Update CyberGhost VPN Premium 7.2.4294 whenever a new version is released
      • -
      • Support the development of CyberGhost VPN by sharing your feedback and suggestions
      • -
      - -

      CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 is a safe and reliable way to enjoy one of the best VPN services on the market without breaking the bank.

      - -

      Download Links:

      - -

      CyberGhost VPN Premium 7.2.4294: https://www.cyberghostvpn.com/en_US/download

      -

      - -

      [Latest] Crack 2020: https://bit.ly/3xY6wQw

      -

      What are the advantages of CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020?

      - -

      CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 is not only a VPN service, but also a complete online security solution. By using CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020, you can enjoy many advantages, such as:

      - -
        -
      • Protect your identity and personal data from hackers, trackers, and advertisers
      • -
      • Access geo-blocked websites and services from anywhere in the world
      • -
      • Stream HD videos and download large files without any buffering or throttling
      • -
      • Use public Wi-Fi networks safely and securely
      • -
      • Bypass firewalls and censorship in countries with strict internet regulations
      • -
      • Customize your VPN experience with various features and settings
      • -
      - -

      CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 is a versatile and user-friendly VPN service that can meet all your online needs.

      - -

      How to use CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020?

      - -

      CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 is very easy to use, even for beginners. You don't need any technical skills or knowledge to use CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020. You just need to follow these simple steps:

      - -
        -
      1. Launch CyberGhost VPN Premium 7.2.4294 on your device
      2. -
      3. Select a server from the list or let CyberGhost choose the best one for you
      4. -
      5. Click on the "Connect" button and wait for the connection to be established
      6. -
      7. Enjoy your online freedom and privacy with CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020
      8. -
      - -

      You can also choose from different modes and profiles depending on your online activity, such as browsing, streaming, torrenting, gaming, etc. You can also adjust various settings and preferences to optimize your VPN experience.

      - -

      Conclusion:

      - -

      CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 is a great solution for anyone who wants to enjoy a fast, secure, and anonymous online experience without paying a dime. CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020 offers you all the features and benefits of a premium VPN service for free and without any hassle.

      - -

      If you are interested in CyberGhost VPN Premium 7.2.4294 with [Latest] Crack 2020, you can download it from the links below and start using it right away.

      - -

      CyberGhost VPN Premium 7.2.4294: https://www.cyberghostvpn.com/en_US/download

      - -

      [Latest] Crack 2020: https://bit.ly/3xY6wQw

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Magix Music Maker 12 Deluxe Torrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Magix Music Maker 12 Deluxe Torrent.md deleted file mode 100644 index ef0f172b19517300e4c690db490079a0511b15cd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Magix Music Maker 12 Deluxe Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      magix music maker 12 deluxe torrent


      Download Ziphttps://geags.com/2uCr4C



      -
      -Download Magix music maker deluxe Torrents absolutely for free, Magnet Link And Direct Download also ... Magix Music Maker 12 Deluxe Crack By Core zip. 4d29de3e1b
      -
      -
      -

      diff --git "a/spaces/raedeXanto/academic-chatgpt-beta/? Angry Birds Epic RPG APK V1.3.3 MOD Unlimited\302\240Money.md" "b/spaces/raedeXanto/academic-chatgpt-beta/? Angry Birds Epic RPG APK V1.3.3 MOD Unlimited\302\240Money.md" deleted file mode 100644 index e36868dd9bbfd8805a04b80b187654b8339bf7d4..0000000000000000000000000000000000000000 --- "a/spaces/raedeXanto/academic-chatgpt-beta/? Angry Birds Epic RPG APK V1.3.3 MOD Unlimited\302\240Money.md" +++ /dev/null @@ -1,20 +0,0 @@ - -

      Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money: A Review

      -

      If you are a fan of the Angry Birds franchise, you might want to check out the latest installment in the series: Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money. This is a role-playing game that lets you explore the fantasy world of Piggy Island, where you can join the birds in their epic quest to defeat the evil King Pig and his minions.

      -

      The game features a turn-based combat system that allows you to use different weapons, skills, and items to defeat your enemies. You can also customize your birds with various outfits, hats, and accessories that give them different abilities and bonuses. You can also craft new items and upgrade your equipment using the resources you collect from battles and quests.

      -

      – Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money


      DOWNLOAD ►►►►► https://tinourl.com/2uL29i



      -

      The game also has a multiplayer mode where you can join other players in cooperative or competitive battles. You can also challenge other players in the Arena and climb the leaderboards. The game has a lot of content and events to keep you entertained for hours.

      -

      One of the best features of the game is that it has a MOD version that gives you unlimited money to spend on anything you want. You can buy all the items, upgrades, and costumes you need without worrying about running out of coins or gems. You can also unlock all the levels and characters without having to complete the previous ones.

      -

      The MOD version of the game is easy to install and use. You just need to download the APK file from a reliable source and install it on your device. You don't need to root your device or use any other tools. The game will run smoothly and without any errors.

      -

      Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money is a fun and addictive game that will appeal to both casual and hardcore gamers. It has amazing graphics, sound effects, and music that enhance the gameplay experience. It also has a humorous and engaging story that will keep you hooked until the end.

      -

      If you are looking for a new and exciting game to play on your Android device, you should definitely try Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money. It is one of the best RPG games available on the market today.

      - -

      How to play Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money?

      -

      The game is easy to play and suitable for players of all ages. You just need to tap on the screen to select your birds and drag them to the enemies you want to attack. You can also tap on your birds to activate their special skills or use items from your inventory. You can also swipe on the screen to dodge or counterattack your enemies.

      -

      -

      The game has different types of enemies, such as pigs, wolves, dragons, and more. Each enemy has its own strengths and weaknesses, so you need to use different strategies and tactics to defeat them. You can also encounter boss battles that require more skill and strategy to win.

      -

      The game has different modes and levels that you can choose from. You can play the story mode, where you follow the main plot and complete various quests and challenges. You can also play the event mode, where you can participate in limited-time events and earn rewards. You can also play the dungeon mode, where you can explore randomly generated dungeons and fight against waves of enemies.

      -

      The game also has a social aspect, where you can connect with other players online. You can join clans and chat with other members. You can also invite your friends to join your team and help you in battles. You can also send and receive gifts from your friends.

      -

      Angry Birds Epic RPG APK v1.3.3 MOD Unlimited Money is a game that will keep you entertained for hours. It has a lot of features and content that will make you want to play more and more. It is a game that you should not miss if you love RPG games and Angry Birds.

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key The Best OCR Software for Windows.md b/spaces/raedeXanto/academic-chatgpt-beta/ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key The Best OCR Software for Windows.md deleted file mode 100644 index e9c1a62bd7919e401c38be9b69efb6e5f434087a..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key The Best OCR Software for Windows.md +++ /dev/null @@ -1,119 +0,0 @@ -
      -

      ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key

      -

      If you are looking for a powerful and versatile software that can help you convert scanned documents and image PDFs into editable formats, then you might want to check out ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key. This software is an optical character recognition (OCR) software that provides unmatched text recognition accuracy and conversion capabilities, virtually eliminating retyping and reformatting of documents.

      -

      ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key


      Download Ziphttps://tinourl.com/2uL0qy



      -

      In this article, we will explain what ABBYY FineReader is, what benefits it offers, what are the differences between Professional and Corporate Editions, and how to download and install it on your PC.

      -

      What is ABBYY FineReader?

      -

      ABBYY FineReader is a software that uses OCR technology to convert paper and image documents into editable formats such as Microsoft Office and searchable PDFs. OCR is a process that recognizes text characters from scanned images or photos and converts them into digital text that can be edited, searched, or copied.

      -

      ABBYY FineReader has many features and functions that make it a useful tool for various purposes, such as:

      -
        -
      • It supports up to 190 languages for text recognition, more than any other OCR software in this market.
      • -
      • It can recognize text from various types of documents, such as books, magazines, invoices, forms, contracts, etc.
      • -
      • It can handle complex layouts, such as tables, columns, headers, footers, etc.
      • -
      • It can export the converted documents to various formats, such as Word, Excel, PowerPoint, PDF, HTML, TXT, etc.
      • -
      • It can preserve the original formatting and layout of the documents.
      • -
      • It can perform batch processing of multiple documents at once.
      • -
      • It can integrate with other applications, such as Microsoft Office, Adobe Acrobat Reader DC,
      • -

        Optical Character Recognition (OCR)

        -

        Optical Character Recognition (OCR) is a technology that enables computers to recognize text characters from scanned images or photos and convert them into digital text that can be edited, searched, or copied. OCR is useful for various purposes, such as:

        -
          -
        • It saves time and effort by eliminating the need to retype or reformat documents.
        • -
        • It improves productivity and efficiency by allowing users to access and reuse information from paper or image documents.
        • -
        • It enhances document security and compliance by enabling users to create searchable and editable PDFs that can be encrypted, signed, or redacted.
        • -
        • It preserves document quality and integrity by reducing errors and inconsistencies that may occur during manual transcription.
        • -
        -

        ABBYY FineReader uses advanced OCR technology that provides unmatched text recognition accuracy and conversion capabilities. ABBYY FineReader can recognize text from various types of documents, such as books, magazines, invoices, forms, contracts, etc. It can also handle complex layouts, such as tables, columns, headers, footers, etc. It can export the converted documents to various formats, such as Word, Excel, PowerPoint, PDF, HTML, TXT, etc. It can also preserve the original formatting and layout of the documents.

        -

        Text Recognition and Conversion

        -

        Text recognition and conversion is one of the main features and functions of ABBYY FineReader. Text recognition and conversion refers to the process of recognizing text characters from scanned documents or image PDFs and converting them into editable formats that can be used for various purposes.

        -

        ABBYY FineReader has many advantages when it comes to text recognition and conversion, such as:

        -
          -
        • It supports up to 190 languages for text recognition, more than any other OCR software in this market. It can also recognize multilingual documents that contain text in different languages.
        • -
        • It can recognize text from various types of documents, such as books, magazines, invoices, forms, contracts, etc. It can also recognize text from various sources, such as scanners, cameras, mobile devices, etc.
        • -
        • Follow the installation wizard and accept the license agreement.
        • -
        • Choose the destination folder and the components you want to install.
        • -
        • Click Install and wait for the installation to complete.
        • -
        • Click Finish and exit the installation wizard.
        • - -

          Activating the software

          -

          The third step is to activate the software using the crack serial key. You need to follow these instructions:

          -

          ABBYY FineReader 12 Professional OCR software download
          -ABBYY FineReader 12 Corporate Edition + Crack [SadeemPC]
          -ABBYY FineReader 12.0.101.483 optical character recognition (OCR)
          -ABBYY FineReader 12 editable formats conversion
          -ABBYY FineReader 12 supports 190 languages for text recognition
          -ABBYY FineReader 12 eliminates the need to retype documents
          -ABBYY FineReader 12 edit a scanned document or an image PDF
          -ABBYY FineReader 12 search and archive documents
          -ABBYY FineReader 12 extract information from paper originals
          -ABBYY FineReader 12 Professional And Corporate Edition.rar - Google Drive
          -ABBYY FineReader 12 keygen free download
          -ABBYY FineReader 12 serial number activation code
          -ABBYY FineReader 12 crack patch torrent magnet link
          -ABBYY FineReader 12 license key generator online
          -ABBYY FineReader 12 full version with crack free download
          -ABBYY FineReader 12 how to install and activate
          -ABBYY FineReader 12 review and features comparison
          -ABBYY FineReader 12 system requirements and compatibility
          -ABBYY FineReader 12 best OCR software for Windows
          -ABBYY FineReader 12 alternative and similar software
          -ABBYY FineReader 12 discount coupon code and offer
          -ABBYY FineReader 12 customer support and contact information
          -ABBYY FineReader 12 user manual and tutorial guide
          -ABBYY FineReader 12 tips and tricks for better OCR results
          -ABBYY FineReader 12 pros and cons and user feedback
          -ABBYY FineReader 12 upgrade and update information
          -ABBYY FineReader 12 trial version and free download link
          -ABBYY FineReader 12 benefits and advantages of OCR technology
          -ABBYY FineReader 12 how to use with Microsoft Office and PDF files
          -ABBYY FineReader 12 how to scan and convert documents into editable formats
          -ABBYY FineReader 12 how to edit and create new documents based on paper or image-only originals
          -ABBYY FineReader 12 how to quickly access content trapped in image-only PDFs and scans
          -ABBYY FineReader 12 how to copy and quote sections of content, including text, tables or images
          -ABBYY FineReader 12 how to improve the quality and accuracy of OCR recognition
          -ABBYY FineReader 12 how to customize the settings and preferences of OCR software
          -ABBYY FineReader 12 how to solve common problems and errors of OCR software
          -ABBYY FineReader 12 how to uninstall and remove OCR software from your computer
          -ABBYY FineReader 12 how to get a refund or exchange for OCR software purchase
          -ABBYY FineReader 12 how to verify the authenticity and validity of OCR software license key
          -ABBYY FineReader 12 how to avoid malware and virus infection from downloading OCR software crack serial key

          -
            -
          1. Open the Crack.rar file that you extracted from the downloaded file.
          2. -
          3. Copy the ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key.txt file and paste it somewhere you can access easily.
          4. -
          5. Open ABBYY FineReader 12.0.101.483 Pro And Corp Edition on your PC.
          6. -
          7. Click on Help and then on Activate ABBYY FineReader.
          8. -
          9. Enter the serial number that you copied from the text file and click Next.
          10. -
          11. Select Activate by phone or fax and click Next.
          12. -
          13. Enter any name and organization and click Next.
          14. -
          15. Copy the Installation ID that appears on the screen and paste it into the Keygen.exe file that you extracted from the Crack.rar file.
          16. -
          17. Click Generate and copy the Activation Code that appears on the Keygen.exe file.
          18. -
          19. Paste the Activation Code into ABBYY FineReader 12.0.101.483 Pro And Corp Edition and click Next.
          20. -
          21. Click Finish and enjoy your activated software.
          22. -
          -

          Conclusion

          -

          In conclusion, ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key is a powerful and versatile software that can help you convert scanned documents and image PDFs into editable formats, edit and create new documents, search and archive documents, and more. It supports up to 190 languages for text recognition, handles complex layouts, exports to various formats, preserves original formatting and layout, performs batch processing, integrates with other applications, and offers additional features and functions depending on the edition you choose. You can download and install it on your PC by following the steps we provided in this article.

          -

          If you want to try out ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key for yourself, you can use this link to download it from a trusted website. We hope you found this article helpful and informative. Thank you for reading!

          -

          FAQs

          -

          Here are some frequently asked questions and answers about ABBYY FineReader 12.0.101.483 Pro And Corp Edition Crack Serial Key:

          -
            -
          1. What are the system requirements for ABBYY FineReader 12.0.101.483 Pro And Corp Edition?
          2. -

            The system requirements for ABBYY FineReader 12.0.101.483 Pro And Corp Edition are as follows:

            -
              -
            • Operating system: Windows XP/Vista/7/8/8.1/10 (32-bit or 64-bit)
            • -
            • Processor: 1 GHz or higher
            • -
            • Memory: 1024 MB or higher
            • -
            • Disk space: 850 MB for typical program installation and 850 MB for program operation
            • -
            • Video card: 1280x1024 resolution or higher
            • -
            -
          3. Is ABBYY FineReader 12.0.101.483 Pro And Corp Edition compatible with Mac OS?
          4. -

            No, ABBYY FineReader 12.0.101.483 Pro And Corp Edition is not compatible with Mac OS. However, there is a separate version of ABBYY FineReader for Mac OS that you can check out here.

            -
          5. Is ABBYY FineReader 12.0.101.483 Pro And Corp Edition safe to download and install?
          6. -

            Yes, ABBYY FineReader 12.0.101.483 Pro And Corp Edition is safe to download and install if you use a reliable source such as this one. However, you should always scan any downloaded file with an antivirus program before opening it to ensure its safety.

            -
          7. How long does it take to convert a document with ABBYY FineReader 12.0.101.483 Pro And Corp Edition?
          8. -

            The time it takes to convert a document with ABBYY FineReader 12.0.101.483 Pro And Corp Edition depends on various factors, such as the size, quality, complexity, language, format, and settings of the document, as well as the speed and performance of your PC.

            -

            In general, ABBYY FineReader 12.0.101.483 Pro And Corp Edition is fast and efficient in converting documents, especially if you use automated tasks or background processing features.

            -
          9. How can I contact ABBYY support if I have any issues or questions about ABBYY FineReader 12.0.101.483 Pro And Corp Edition?
          10. -

            If you have any issues or questions about ABBYY FineReader 12.0.101.483 Pro And Corp Edition, you can contact ABBYY support by visiting their website here or by sending an email to support@abbyy.com.

            -
          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar Why You Need This Antivirus Software.md b/spaces/raedeXanto/academic-chatgpt-beta/ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar Why You Need This Antivirus Software.md deleted file mode 100644 index 44e81a15b664a828031110850af02d9009b2ff74..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar Why You Need This Antivirus Software.md +++ /dev/null @@ -1,162 +0,0 @@ -
          -

          ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar

          -

          Are you looking for a reliable and powerful antivirus software that can protect your PC from all kinds of online threats? Do you want to enjoy the full features and benefits of ESET Internet Security without paying a hefty price? If yes, then you might be interested in downloading and installing ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar.

          -

          In this article, we will explain what ESET Internet Security is, how to download and install it with a crack, why you might need a crack for it, and how to use it safely and effectively. We will also provide some tips and precautions for using a crack for ESET Internet Security, as well as some common issues and how to troubleshoot them.

          -

          ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar


          Download Filehttps://tinourl.com/2uL3mw



          -

          By the end of this article, you will have a clear idea of whether ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar is the right choice for you, and how to get the most out of it.

          -

          What is ESET Internet Security?

          -

          ESET Internet Security is one of the most popular and trusted antivirus software in the market. It is developed by ESET, a Slovak company that has been in the cybersecurity industry since 1987.

          -

          ESET Internet Security offers comprehensive protection for your PC from various types of malware, such as viruses, worms, trojans, ransomware, spyware, adware, rootkits, and more. It also provides advanced features such as firewall, anti-phishing, anti-spam, parental control, webcam protection, banking and payment protection, network attack protection, botnet protection, and more.

          -

          ESET Internet Security is compatible with Windows 10, 8.1, 8, 7, Vista, and XP (32-bit and 64-bit). It requires at least 1 GB of RAM and 320 MB of disk space.

          -

          Features and benefits of ESET Internet Security

          -

          Some of the main features and benefits of ESET Internet Security are:

          -
            -
          • It uses a cloud-based scanning engine that updates itself automatically and detects new threats faster.
          • -
          • It has a low system impact that does not slow down your PC or interfere with your online activities.
          • -
          • It has a user-friendly interface that is easy to navigate and customize.
          • -
          • It has a multi-layered protection that blocks both known and unknown malware before they can harm your PC.
          • -
          • It has a ransomware shield that prevents unauthorized encryption of your files and data.
          • -
          • It has a webcam protection that alerts you when an application tries to access your webcam and lets you block it.
          • -
          • It has a banking and payment protection that secures your online transactions and prevents hackers from stealing your sensitive information.
          • -
          • It has a network attack protection that monitors your network traffic and blocks malicious attempts to exploit your system vulnerabilities.
          • -
          • It has a parental control that lets you set rules and limits for your children's online activity and block inappropriate websites.
          • -
          • It has an anti-theft feature that helps you locate your lost or stolen PC and remotely erase your data.
          • -
          -

          How to download and install ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar

          -

          To download and install ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar, you need to follow these steps:

          -
            -
          1. Step 1: Go to this link https://bit.ly/3HJpZjQ and click on the download button.
          2. -
          3. Step 2: Wait for the download to complete and then open the file with WinRAR or any other file extractor.
          4. -
          5. Step 3: Extract the contents of the file to a folder on your PC.
          6. -
          7. Step 4: Run the setup.exe file as administrator and follow the instructions on the screen.
          8. -
          9. Step 5: When prompted, enter the license key that is provided in the crack folder.
          10. -
          11. Step 6: Complete the installation process and restart your PC if required.
          12. -
          13. Step 7: Enjoy using ESET Internet Security with full features.
          14. -
          -

          Why do you need a crack for ESET Internet Security?

          -

          A crack is a software tool that modifies or bypasses the original code or security mechanism of another software program. In this case, a crack for ESET Internet Security allows you to use the software without paying for it or activating it with an official license key.

          -

          ESET Internet Security 11.0.161.1 21 full version download
          -How to install ESET Internet Security 11.0.161.1 21 crack
          -ESET Internet Security 11.0.161.1 21 license key generator
          -ESET Internet Security 11.0.161.1 21 activation code free
          -ESET Internet Security 11.0.161.1 21 patch download
          -ESET Internet Security 11.0.161.1 21 serial number crack
          -ESET Internet Security 11.0.161.1 21 keygen torrent
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar password
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar file
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar download
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar free
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar online
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar extractor
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar opener
          -ESET Internet Security 11.0.161.1 21 (x86 x64) crack rar software
          -ESET Internet Security 11 review and features
          -ESET Internet Security 11 system requirements and compatibility
          -ESET Internet Security 11 vs ESET NOD32 Antivirus
          -ESET Internet Security 11 vs other internet security software
          -ESET Internet Security 11 pros and cons
          -ESET Internet Security 11 coupon code and discount
          -ESET Internet Security 11 trial version and expiry date
          -ESET Internet Security 11 update and upgrade
          -ESET Internet Security 11 support and customer service
          -ESET Internet Security 11 user manual and guide
          -How to uninstall ESET Internet Security 11 completely
          -How to fix ESET Internet Security 11 errors and issues
          -How to optimize ESET Internet Security 11 performance and speed
          -How to backup and restore ESET Internet Security 11 settings and data
          -How to customize ESET Internet Security 11 preferences and options
          -How to scan and remove malware with ESET Internet Security 11
          -How to protect your online privacy with ESET Internet Security 11
          -How to block unwanted ads and pop-ups with ESET Internet Security 11
          -How to secure your Wi-Fi network with ESET Internet Security 11
          -How to prevent phishing and identity theft with ESET Internet Security 11
          -How to encrypt your files and folders with ESET Internet Security 11
          -How to use parental control and web filtering with ESET Internet Security 11
          -How to manage your passwords and accounts with ESET Internet Security 11
          -How to use anti-theft and anti-spam features with ESET Internet Security 11
          -How to use firewall and network protection with ESET Internet Security 11
          -How to use webcam and microphone protection with ESET Internet Security 11
          -How to use ransomware protection and recovery with ESET Internet Security 11
          -How to use cloud-based scanning and detection with ESET Internet Security 11
          -How to use gamer mode and battery saver with ESET Internet Security 11
          -How to use smart security premium features with ESET Internet Security 11
          -How to use multi-device security features with ESET Internet Security 11
          -How to use mobile security features with ESET Internet Security 11
          -How to use business security features with ESET Internet Security 11

          -

          You might need a crack for ESET Internet Security if:

          -
            -
          • You want to try out the software before buying it
          • -
          • You cannot afford to buy the software
          • -
          • You do not want to share your personal or financial information with the software vendor
          • -
          • You do not want to deal with activation or subscription issues
          • -
          • You want to use the software on multiple devices
          • -
          -

          Advantages of using a crack for ESET Internet Security

          -

          Some of the advantages of using a crack for ESET Internet Security are:

          -
            -
          • You can save money by not paying for the software
          • -
          • You can access all the features and functions of the software without any limitations
          • -
          • You can update the software without any problems
          • -
          • You can use the software offline without any internet connection
          • -
          • You can use the software on any PC without any restrictions
          • -
          -

          Risks and challenges of using a crack for ESET Internet Security

          - and challenges such as:

          -
            -
          • You might violate the terms and conditions of the software vendor
          • -
          • You might expose your PC to malware or viruses that are hidden in the crack file
          • -
          • You might compromise your PC's security or performance by using an outdated or incompatible crack
          • -
          • You might face legal consequences or penalties if caught by the authorities
          • -
          • You might lose technical support or customer service from the software vendor
          • -
          -

          How to use ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar safely and effectively

          -

          To use ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar safely and effectively, you need to follow these tips and precautions:

          -
            -
          • Tip 1: Download the crack file from a reputable source that has positive reviews and feedback from other users
          • -
          • Tip 2: Scan the crack file with another antivirus software before opening it
          • -
          • Tip 3: Backup your important files and data before installing the crack
          • -
          • Tip 4: Disable your internet connection while installing or running the crack
          • -
          • Tip 5: Do not update or upgrade the software unless there is a new version of the crack available
          • -
          • Tip 6: Do not share or distribute the crack file with others
          • -
          • Tip 7: Use common sense and caution when using any cracked software
          • -
          -

          How to troubleshoot common issues with ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar

          -

          If you encounter any issues with ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar such as:

          -
            -
          • The crack file does not work or is corrupted
          • -
          • The software does not accept the license key or shows an error message
          • -
          • The software does not run properly or crashes frequently
          • -
          • The software conflicts with other programs or devices on your PC
          • -
          -

          You can try these solutions:

          -
            -
          • Solution 1: Download another version of the crack file from another source
          • -
          • Solution 2: Reinstall the software with a fresh copy of the crack file
          • -
          • Solution 3: Run the software as administrator or in compatibility mode
          • -
          • Solution 4: Disable any antivirus or firewall programs that might interfere with the software
          • -
          • Solution 5: Contact the developer or creator of the crack file for assistance
          • -
          -

          Conclusion

          -

          ESET Internet Security is one of the best antivirus software that can protect your PC from various online threats. However, if you want to use it without paying for it or activating it with an official license key, you might need to download and install ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar.

          -

          This article has explained what ESET Internet Security is, how to download and install it with a crack, why you might need a crack for it, and how to use it safely and effectively. We have also provided some tips and precautions for using a crack for ESET Internet Security, as well as some common issues and how to troubleshoot them.

          -

          We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

          -

          If you are interested in downloading and installing ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar, you can click on this link https://bit.ly/3HJpZjQ and follow the steps we have mentioned above.

          -

          However, if you want to support the software vendor and enjoy their official services and updates, we recommend that you buy ESET Internet Security from their website https://www.eset.com/int/home/internet-security/.

          -

          Thank you for reading this article and have a great day!

          -

          Frequently Asked Questions (FAQs)

          -

          Here are some of the most frequently asked questions about ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar:

          1. Q: Is ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar safe to use?
             A: ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar is safe to use if you download it from a reputable source and scan it with another antivirus software before opening it. However, there is always a risk of malware or viruses that are hidden in the crack file, so you should use it at your own discretion and responsibility.
          2. Q: How long does ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar last?
             A: ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar lasts until you update or upgrade the software or until there is a new version of the crack available.
          3. Q: Can I use ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar on more than one PC?
             A: Yes, you can use ESET Internet Security 11.0.161.1 21 (x86 x64) Crack .rar on any PC without any restrictions.
          4. Q: What are some alternatives to ESET Internet Security?
             A: Some alternatives to ESET Internet Security are Bitdefender Total Security, Kaspersky Internet Security, Norton 360 Deluxe, McAfee Total Protection, Avast Premium Security, etc.
          5. Q: How can I contact ESET for support?
             A: You can contact ESET for support by visiting their website https://www.eset.com/int/support/overview/ or by calling their phone number +421 (2) 322 44 111.

          -
          -
          \ No newline at end of file diff --git a/spaces/ravichodry/CHATGPT-LLAMA2/README.md b/spaces/ravichodry/CHATGPT-LLAMA2/README.md deleted file mode 100644 index dff7300a88bbf5194fe92e3f7d4f4f1308c4a99b..0000000000000000000000000000000000000000 --- a/spaces/ravichodry/CHATGPT-LLAMA2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CHATGPT LLAMA2 -emoji: 🐨 -colorFrom: red -colorTo: blue -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/reach-vb/whisper_word_timestamps/README.md b/spaces/reach-vb/whisper_word_timestamps/README.md deleted file mode 100644 index cdbb285c670bdb31f79299180bc5ebbae20b6a40..0000000000000000000000000000000000000000 --- a/spaces/reach-vb/whisper_word_timestamps/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Whisper Word-Level Timestamps -emoji: 💭⏰ -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Matthijs/whisper_word_timestamps ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Media Encoder CC 2015 Serial Number Download.epub.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Media Encoder CC 2015 Serial Number Download.epub.md deleted file mode 100644 index 9013a10b806832018e875ad78b7752b4c0203e0c..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe Media Encoder CC 2015 Serial Number Download.epub.md +++ /dev/null @@ -1,8 +0,0 @@ -
          -

          if you publish your media content as interactive pdf, your video and audio file's media controls won't work, because flash player has reached the end of life on december 31st, 2020. for more information, see the adobe flash player eolgeneralinformationpage.

          -

          features incorporate mpeg-4 and h.264 encoders, youtube screen capture, image masking, xml file creation, and html5 media element embedding. through these features, users can enjoy media file playback, including 3d video and audio clip playback with html5 flash player. the media encoder also supports google chrome, safari and firefox.

          -

          Adobe Media Encoder CC 2015 Serial Number Download.epub


          DOWNLOADhttps://urlgoal.com/2uCKpb



          -

          unless they are in "ready to work" state, which means they are in a state where they could be ready to use. its also possible to install more than one of the same version of a product, or a combination of versions on a single computer. so far so good, but then every time i attempt to run it, it crashes. from the "operations panel" in the application imported and exported successfully i haven't been able to find anything that would give me any information to see whats happening. when i go to "help" the same window pops up. ive spent countless hours trying to get this to work. at this point im just frustrated.

          -

          you must be logged in to access the help section. i tried searching and found nothing. it works on other computers so i know its not the drive. i also tried to reinstall it so i know it wont be an issue with the software. what i cant figure out is why it runs the installer just fine and then tries to run the program and crashes. please let me know if this is the correct place to put this post or if it should go to the more specific software forum. any help would be very much appreciated

          -
          -
          \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Clave Para Activar Fileviewpro Gratis Free.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Clave Para Activar Fileviewpro Gratis Free.md deleted file mode 100644 index 977e55cc724f85fb76f9be52fc5bf35ed9ef96ed..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Clave Para Activar Fileviewpro Gratis Free.md +++ /dev/null @@ -1,25 +0,0 @@ -
          -```markdown -

          How can you get a key to activate FileViewPro for free?

          FileViewPro is a program that lets you open any type of file on your computer without having to download additional programs. With FileViewPro, you can view, edit and print documents, images, videos, audio files and more. You can also convert files to different formats with a single click.

          FileViewPro is a very useful and practical program, but it has one drawback: it is paid software. To use it without limitations, you need to buy a license that costs around 40 dollars. However, there is a way to get a key to activate FileViewPro for free, and we are going to reveal it in this article.

          free activation key for fileviewpro

          Download Zip https://urlgoal.com/2uCLGe

          What is the free FileViewPro activation key?

          The free FileViewPro activation key is an alphanumeric code that is entered into the program to unlock all of its functions. This key is generated with a key generator, or keygen, which is a piece of software that creates valid codes for different programs.

          The free FileViewPro key generator can be downloaded from the internet and is very easy to use. You just have to run it on your computer, select the program you want to activate (in this case, FileViewPro), and click the "Generate" button. The generator will show you a key that you can copy and paste into the program.

          Is it safe to use the free FileViewPro activation key?

          The short answer is no. Using a free activation key for FileViewPro involves several risks that you should keep in mind before deciding to do it. Here are some of them:

          • It is illegal. Using a free activation key for FileViewPro violates the program's copyright and can carry legal consequences. You are also depriving the program's developer of the income they deserve for their work.
          • It is unsafe. The free FileViewPro key generator can contain viruses, malware or spyware that damage your computer or steal your personal information. It can also be detected by your antivirus or firewall and blocked or deleted.
          • It is unreliable. The free FileViewPro activation key may not work correctly, or may stop working after a while. It can also be invalidated by the program if it detects that the key is fake or has been used by several people. In that case, you will have to look for another key or buy the original license.

          What alternatives are there to the free FileViewPro activation key?

          If you want to use FileViewPro without paying and without taking the risk of using a free activation key, there are some options you can consider. Here are a few of them:

          • Use the trial version. FileViewPro offers a free 14-day trial that lets you use the program with all of its features. If you only need to open a few specific files, this can be a good option. Just remember to cancel the subscription before the trial period ends, or you will automatically be charged for the license.
          • Use alternative programs. There are many free or low-cost programs that can open different types of files on your computer. Some examples are VLC Media Player, IrfanView, LibreOffice and 7-Zip. These programs may not have all of FileViewPro's features, but they can cover your basic needs.
          • Use online services. If you do not want to download any program to your computer, you can use online services that let you open and convert files from your browser. Some examples

            -
            -
            \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Filme Leviathan 1989 Dublado 28.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Filme Leviathan 1989 Dublado 28.md deleted file mode 100644 index 30e37df65a59180fcfdda86cb285907bcca32752..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Filme Leviathan 1989 Dublado 28.md +++ /dev/null @@ -1,7 +0,0 @@ -
            -

            1988 was the year that the studio system crumbled a little. the run of leviathan had brought down the studio, which was now in dire straits, having just flopped (it was going to floop since it was the worst movie of the summer by a very wide margin). carolco made a decision to partner up with universal to make some of the last of the good sf movies to roll off the assembly lines in the 1980s (ten from midnight, the event, the freaks, the company of wolf men, etc.). since carolco was having financial issues in the south pacific region, the release date for the project (aside from the release of titanic) was still looming on the horizon. everyone is now in the same boat--working for free and fighting over scraps in a very cold movie market. for these reasons, the abyss was shelved for over a year and then released as a direct-to-video title. the production company was re-named pacific pictures and it was operated by the much-maligned "the horny man" (the david n. cairns).

            -

            Filme Leviathan 1989 Dublado 28


            Download Filehttps://urlgoal.com/2uCLmW



            -

            i rarely find myself thinking of japanese film as "religious". however, in the case of leviathan, i'd have to say "yes". the whale is a "god" both in the sense that it was a god to the ancient people of israel (and also to the japanese -- animals that are worshipped are often called "kami") and it is also a god of the sea (and this film is a "sea story"), so it's going to have a lot of resonance in that culture. i love that it's a "leviathan" and not a "leviathan", though: too many modern fans call it the "leviathan" when we should be calling it "ishtar".

            -

            the characters in 'leviathan' and the storytelling in 'leviathan' aren't just excellent, the music for 'leviathan' is pretty cool. the short strings at the beginning of the movie are really cool. but the song used throughout the movie, which isn't soundtrack music, is not only lame, but really is the worst song i've ever heard. from the time i first heard that music i couldn't think about anything else, it was like the music was haunting me.. like a person singing in my ear. i couldn't stop thinking about it, it was truly disgusting music. maybe i'm just trying to fixate about something that was always somewhat stifling for me.

            -
            -
            \ No newline at end of file diff --git a/spaces/rituthombre/QNim/qnim.py b/spaces/rituthombre/QNim/qnim.py deleted file mode 100644 index 350ba85e0f653d6b9585a825948484febee6a68b..0000000000000000000000000000000000000000 --- a/spaces/rituthombre/QNim/qnim.py +++ /dev/null @@ -1,239 +0,0 @@ -import numpy as np -from qiskit import BasicAer, QuantumCircuit, QuantumRegister, ClassicalRegister, execute -from qiskit import IBMQ -# provider = IBMQ.load_account() - - -# def misere_step(ones,piles): -# # even number of piles of 1 eg (1,1,3,0) or (0,0,3,0) -# if ones%2 == 0: -# objects_to_remove = [] -# removable_amount = 1 -# for i in range(len(piles)): -# if piles[i] > 1: -# objects_to_remove.append(piles[i]-1) -# else: -# objects_to_remove.append(0) -# # odd number of piles of 1 eg (1,1,3,1) -# else: -# objects_to_remove = [] -# removable_amount = 1 -# for i in range(len(piles)): -# if piles[i] > 1: -# objects_to_remove.append(piles[i]) -# else: -# objects_to_remove.append(0) -# return objects_to_remove, removable_amount - -def get_piles_to_remove(piles): - nim_sum = 0 - for p in piles: - nim_sum = nim_sum ^ p - objects_to_remove = [] - removable_amount = 0 - for p in piles: - new_p = p^nim_sum - if new_p < p: - objects_to_remove.append(p-new_p) - removable_amount = removable_amount + 1 - else: - objects_to_remove.append(0) - return objects_to_remove, removable_amount - - -def custom_qft(data_qubits): - qr_data = QuantumRegister(data_qubits) - qc = QuantumCircuit(qr_data) - i = data_qubits - while i>=1: - n = i - 1 - qc.h(qr_data[n]) - for qubit in range(n): - qc.cp(np.pi/2**(n-qubit), qr_data[qubit], qr_data[n]) - i = i-1 - return qc - -def subroutine_add_const(data_qubits: int, const: int, to_gate=True): - qc = QuantumCircuit(data_qubits) - for i in range(data_qubits): - angle = const*np.pi/(2**i) - qc.p(angle,i) - return qc.to_gate(label=" ["+str(const)+"] ") if to_gate else qc - -def diffusion_operation(qc, address, flag, removable_pile): - def nim_oracle(qc,address,flag,removable_pile): - - # 0001 -> 001 - if removable_pile[0] != 0: - qc.x(address[1]) - qc.x(address[2]) - qc.mct(address[:],flag) - qc.x(address[2]) - qc.x(address[1]) - - # 0010 -> 010 - if removable_pile[1] != 0: - qc.x(address[0]) - qc.x(address[2]) - qc.mct(address[:],flag) - qc.x(address[2]) - qc.x(address[0]) - - # 0100 -> 011 - if removable_pile[2] != 0: - qc.x(address[2]) - qc.mct(address[:],flag) - qc.x(address[2]) - - # 1000 -> 100 - if removable_pile[3] != 0: - qc.x(address[0]) - qc.x(address[1]) - qc.mct(address[:],flag) - qc.x(address[1]) - qc.x(address[0]) - - - qc.x(flag) - qc.h(flag) - - qc.h(address[:]) - nim_oracle(qc,address,flag,removable_pile) - qc.h(address[:]) - qc.x(address[:]) - qc.h(address[2]) - qc.mct(address[0:2], address[2]) - qc.h(address[2]) - qc.x(address[:]) - qc.h(address[:]) - - -def qc_process(qc,objects_to_remove,address,flag,piles,removable_pile,removable_count): - - if removable_count == 0: - for i in range(len(removable_pile)): - if piles[i] > 0: - removable_pile[i] = 1 - removable_count += 1 - - if removable_count == 4: - removable_pile[removable_pile.index(min(removable_pile))] = 0 - removable_count = removable_count - 1 - - - qft_gate = custom_qft(3).to_gate() - inverse_qft_gate = custom_qft(3).inverse().to_gate() - - if removable_count == 1: - qc.swap(objects_to_remove[0],objects_to_remove[2]) - qc.append(qft_gate,objects_to_remove[:]) - # 0001 -> 001 - if removable_pile[0] != 0: - add_gate = subroutine_add_const(3,removable_pile[0]) - qc.x(address[0]) - # 0010 -> 010 - 
elif removable_pile[1] != 0: - add_gate = subroutine_add_const(3,removable_pile[1]) - qc.x(address[1]) - # 0100 -> 011 - elif removable_pile[2] != 0: - add_gate = subroutine_add_const(3,removable_pile[2]) - qc.x(address[0]) - qc.x(address[1]) - # 1000 -> 100 - elif removable_pile[3] != 0: - add_gate = subroutine_add_const(3,removable_pile[3]) - qc.x(address[2]) - - qc.append(add_gate,objects_to_remove[:]) - qc.append(inverse_qft_gate,objects_to_remove[:]) - qc.swap(objects_to_remove[0],objects_to_remove[2]) - - else: - diffusion_operation(qc,address, flag, removable_pile) - qc.swap(objects_to_remove[0],objects_to_remove[2]) - qc.append(qft_gate,objects_to_remove[:]) - for i,remove_amount in enumerate(removable_pile): - if remove_amount != 0: - - bin_i = list(bin(i+1)[2:]) - while len(bin_i) != 3: - bin_i.insert(0,'0') - bin_i = bin_i[::-1] - for j in range(len(bin_i)): - if bin_i[j] == '0': - qc.x(address[j]) - - controlled_add_gate = subroutine_add_const(3,remove_amount).control(3) - qc.append(controlled_add_gate,address[:]+objects_to_remove[:]) - - for j in range(len(bin_i)): - if bin_i[j] == '0': - qc.x(address[j]) - - qc.append(inverse_qft_gate,objects_to_remove[:]) - qc.swap(objects_to_remove[0],objects_to_remove[2]) - -def get_quantum_move(piles, backend=None): - - # REMOVE MISERE STEP - # ones = piles.count(1) - # zeros = piles.count(0) - # non_zeros = 4 - (ones+zeros) - - # # all zeros except one eg (0,0,0,7) OR some zeros some ones some non_zeros - # # leave odd piles of 1s - # if non_zeros == 1: - # removable_pile, removable_count = misere_step(ones, piles) - # else: - # removable_pile, removable_count = get_piles_to_remove(piles) - - - removable_pile, removable_count = get_piles_to_remove(piles) - objects_to_remove = QuantumRegister(3,'piles') - flag = QuantumRegister(1,'flag') - output_piles = ClassicalRegister(3,'final_piles') - address = QuantumRegister(3,'address') - pick_pile = ClassicalRegister(3,'choose_pile') - qc = QuantumCircuit(objects_to_remove,address,flag,output_piles,pick_pile) - qc_process(qc,objects_to_remove,address,flag,piles,removable_pile,removable_count) - - qc.measure(address[:],pick_pile[:]) - qc.measure(objects_to_remove[:],output_piles[:]) - - if backend == None: - backend = BasicAer.get_backend('qasm_simulator') - # backend = provider.backends.ibmq_qasm_simulator - job = execute(qc,backend,shots=500) - result = job.result() - counts = result.get_counts() - - try: - qc_move = (counts.most_frequent()) - except Exception as e: - print(e) - vals = list(dict(counts).values()) - max_count = max(vals,key=vals.count) - for key in counts: - if counts[key] == max_count: - qc_move = key - break - - board_choice = qc_move.split(' ')[0] - board_choice = int(board_choice,2) - 1 - - print("Pick from:",board_choice+1) - - board_state = qc_move.split(' ')[1] - board_state = board_state[::-1] - amount = int(board_state,2) - print("Amount:", amount) - return board_choice,amount - - - - - - - - diff --git a/spaces/rizam/literature-research-tool/templates/test.html b/spaces/rizam/literature-research-tool/templates/test.html deleted file mode 100644 index 553055a72f7a7ad58a23b7d8ffedd628a6ce1292..0000000000000000000000000000000000000000 --- a/spaces/rizam/literature-research-tool/templates/test.html +++ /dev/null @@ -1,213 +0,0 @@ - - - - - Awesome-pyecharts - - - - -
            -
            - -
            - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Alice-new-star-dog-horse.md b/spaces/rorallitri/biomedical-language-models/logs/Alice-new-star-dog-horse.md deleted file mode 100644 index bffa4f18aaaa1753d85c0098fd2c1251bfed66a8..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Alice-new-star-dog-horse.md +++ /dev/null @@ -1,10 +0,0 @@ - -

            alice-new-star-dog-horse. how to get rid of self-doubt: 8 tips from superstars. when i started working on my first novel, i was deeply. an artist. alice-new-star-dog-horse. i'm not looking for answers, i'm just asking questions and seeing what you think of them.

            -

            alice-new-star-dog-horse.what kind of a man would leave his pregnant girlfriend behind? - page 2. these creatures are constantly on the prowl, looking for fresh meat. alice-new-star-dog-horse. video: 'furious 7' has begun shooting in abu dhabi. the way she feels about alice-new-star-dog-horse.

            -

            alice-new-star-dog-horse


            Download Zip ———>>> https://tinurll.com/2uznS0



            -

            alice-new-star-dog-horse. > http://urlin.us/1tsj8. https://urlin.us/2cgt9i. trusted by over 4 million visitors each month. alice-new-star-dog-horse. 1. copyright: us.sarauensis-dot-com. related links: alice-new-star-dog-horse. surikad/istockphoto.

            -

            acog-690. joined: jun 2015. posts: 141. gender: man. post subject: alice-new-star-dog-horse. posted: sun 1 may 2016 04:23. alice-new-star-dog-horse > http://urlin.us/1tn80. everyone. alice-new-star-dog-horse. *alice-new-star-dog-horse* *alice-new-star-dog-horse*.

            -

            alice-new-star-dog-horse. alice-new-star-dog-horse. image with caption: download: ea5dcbe375. related links:. alice-new-star-dog-horse f40dba8b6f guys, kai alice + lauer + severino horse meat disco did fantastic balearic.

            -

            alice-new-star-dog-horse. alice-new-star-dog-horse. related links:. alice-new-star-dog-horse girl, will alice lauer has been on some alanis morissette covers over the years, but there's no question she has the voice for it. alice-new-star-dog-horse alice-new-star-dog-horse. image with caption: download: alice-new-star-dog-horse.

            -
            -
            \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Data Cash 230sony yeds 18 test 21 The Track List and Features of the Sony Test CD.md b/spaces/rorallitri/biomedical-language-models/logs/Data Cash 230sony yeds 18 test 21 The Track List and Features of the Sony Test CD.md deleted file mode 100644 index 482a9d1f8bb8e032ac290935f2a8c3e0e65798da..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Data Cash 230sony yeds 18 test 21 The Track List and Features of the Sony Test CD.md +++ /dev/null @@ -1,6 +0,0 @@ -

            sumit sambhal lega full episode download


            Download Zip ○○○ https://tinurll.com/2uzm8g



            -
            -
            -

            diff --git a/spaces/rstallman/Mayfair-Partner-Music/tests/modules/test_conv.py b/spaces/rstallman/Mayfair-Partner-Music/tests/modules/test_conv.py deleted file mode 100644 index 28fbc4f1a0ebaf41b56947b767958ae696e75eec..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/tests/modules/test_conv.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import math -import random - -import pytest -import torch -from torch import nn - -from audiocraft.modules import ( - NormConv1d, - NormConvTranspose1d, - StreamableConv1d, - StreamableConvTranspose1d, - pad1d, - unpad1d, -) - - -def test_get_extra_padding_for_conv1d(): - # TODO: Implement me! - pass - - -def test_pad1d_zeros(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='constant', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='constant', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='constant', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='constant', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='constant', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='constant', value=0.) - - -def test_pad1d_reflect(): - x = torch.randn(1, 1, 20) - - xp1 = pad1d(x, (0, 5), mode='reflect', value=0.) - assert xp1.shape[-1] == 25 - xp2 = pad1d(x, (5, 5), mode='reflect', value=0.) - assert xp2.shape[-1] == 30 - xp3 = pad1d(x, (0, 0), mode='reflect', value=0.) - assert xp3.shape[-1] == 20 - xp4 = pad1d(x, (10, 30), mode='reflect', value=0.) - assert xp4.shape[-1] == 60 - - with pytest.raises(AssertionError): - pad1d(x, (-1, 0), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (0, -1), mode='reflect', value=0.) - - with pytest.raises(AssertionError): - pad1d(x, (-1, -1), mode='reflect', value=0.) 
- - -def test_unpad1d(): - x = torch.randn(1, 1, 20) - - u1 = unpad1d(x, (5, 5)) - assert u1.shape[-1] == 10 - u2 = unpad1d(x, (0, 5)) - assert u2.shape[-1] == 15 - u3 = unpad1d(x, (5, 0)) - assert u3.shape[-1] == 15 - u4 = unpad1d(x, (0, 0)) - assert u4.shape[-1] == x.shape[-1] - - with pytest.raises(AssertionError): - unpad1d(x, (-1, 0)) - - with pytest.raises(AssertionError): - unpad1d(x, (0, -1)) - - with pytest.raises(AssertionError): - unpad1d(x, (-1, -1)) - - -class TestNormConv1d: - - def test_norm_conv1d_modules(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = int((T - kernel_size) / stride + 1) - wn_conv = NormConv1d(C, 1, kernel_size=4, norm='weight_norm') - gn_conv = NormConv1d(C, 1, kernel_size=4, norm='time_group_norm') - nn_conv = NormConv1d(C, 1, kernel_size=4, norm='none') - - assert isinstance(wn_conv.norm, nn.Identity) - assert isinstance(wn_conv.conv, nn.Conv1d) - - assert isinstance(gn_conv.norm, nn.GroupNorm) - assert isinstance(gn_conv.conv, nn.Conv1d) - - assert isinstance(nn_conv.norm, nn.Identity) - assert isinstance(nn_conv.conv, nn.Conv1d) - - for conv_layer in [wn_conv, gn_conv, nn_conv]: - out = conv_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestNormConvTranspose1d: - - def test_normalizations(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out, kernel_size, stride = 1, 4, 1 - expected_out_length = (T - 1) * stride + (kernel_size - 1) + 1 - - wn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='weight_norm') - gn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='time_group_norm') - nn_convtr = NormConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, norm='none') - - assert isinstance(wn_convtr.norm, nn.Identity) - assert isinstance(wn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(gn_convtr.norm, nn.GroupNorm) - assert isinstance(gn_convtr.convtr, nn.ConvTranspose1d) - - assert isinstance(nn_convtr.norm, nn.Identity) - assert isinstance(nn_convtr.convtr, nn.ConvTranspose1d) - - for convtr_layer in [wn_convtr, gn_convtr, nn_convtr]: - out = convtr_layer(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConv1d: - - def get_streamable_conv1d_output_length(self, length, kernel_size, stride, dilation): - # StreamableConv1d internally pads to make sure that the last window is full - padding_total = (kernel_size - 1) * dilation - (stride - 1) - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length // stride - - def test_streamable_conv1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - C_out = 1 - - # conv params are [(kernel_size, stride, dilation)] - conv_params = [(4, 1, 1), (4, 2, 1), (3, 1, 3), (10, 5, 1), (3, 2, 3)] - for causal, (kernel_size, stride, dilation) in product([False, True], conv_params): - expected_out_length = self.get_streamable_conv1d_output_length(T, kernel_size, stride, dilation) - sconv = StreamableConv1d(C, C_out, kernel_size=kernel_size, stride=stride, dilation=dilation, causal=causal) - out = sconv(t0) - assert isinstance(out, torch.Tensor) - print(list(out.shape), [N, C_out, expected_out_length]) - assert 
list(out.shape) == [N, C_out, expected_out_length] - - -class TestStreamableConvTranspose1d: - - def get_streamable_convtr1d_output_length(self, length, kernel_size, stride): - padding_total = (kernel_size - stride) - return (length - 1) * stride - padding_total + (kernel_size - 1) + 1 - - def test_streamable_convtr1d(self): - N, C, T = 2, 2, random.randrange(1, 100_000) - t0 = torch.randn(N, C, T) - - C_out = 1 - - with pytest.raises(AssertionError): - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=False, trim_right_ratio=0.5) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=-1.) - StreamableConvTranspose1d(C, C_out, kernel_size=4, causal=True, trim_right_ratio=2) - - # causal params are [(causal, trim_right)] - causal_params = [(False, 1.0), (True, 1.0), (True, 0.5), (True, 0.0)] - # conv params are [(kernel_size, stride)] - conv_params = [(4, 1), (4, 2), (3, 1), (10, 5)] - for ((causal, trim_right_ratio), (kernel_size, stride)) in product(causal_params, conv_params): - expected_out_length = self.get_streamable_convtr1d_output_length(T, kernel_size, stride) - sconvtr = StreamableConvTranspose1d(C, C_out, kernel_size=kernel_size, stride=stride, - causal=causal, trim_right_ratio=trim_right_ratio) - out = sconvtr(t0) - assert isinstance(out, torch.Tensor) - assert list(out.shape) == [N, C_out, expected_out_length] diff --git a/spaces/russel0719/deepfake_detector/training/losses.py b/spaces/russel0719/deepfake_detector/training/losses.py deleted file mode 100644 index a9bbb41dc6128e12fbf3989734b9b6d4c08a8977..0000000000000000000000000000000000000000 --- a/spaces/russel0719/deepfake_detector/training/losses.py +++ /dev/null @@ -1,28 +0,0 @@ -from typing import Any - -from pytorch_toolbelt.losses import BinaryFocalLoss -from torch import nn -from torch.nn.modules.loss import BCEWithLogitsLoss - - -class WeightedLosses(nn.Module): - def __init__(self, losses, weights): - super().__init__() - self.losses = losses - self.weights = weights - - def forward(self, *input: Any, **kwargs: Any): - cum_loss = 0 - for loss, w in zip(self.losses, self.weights): - cum_loss += w * loss.forward(*input, **kwargs) - return cum_loss - - -class BinaryCrossentropy(BCEWithLogitsLoss): - pass - - -class FocalLoss(BinaryFocalLoss): - def __init__(self, alpha=None, gamma=3, ignore_index=None, reduction="mean", normalized=False, - reduced_threshold=None): - super().__init__(alpha, gamma, ignore_index, reduction, normalized, reduced_threshold) \ No newline at end of file diff --git a/spaces/sai22/vits-models/attentions.py b/spaces/sai22/vits-models/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/sai22/vits-models/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for 
i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = 
nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/scedlatioru/img-to-music/example/Cahiers Danatomie Perlemuter Pdf 25.md b/spaces/scedlatioru/img-to-music/example/Cahiers Danatomie Perlemuter Pdf 25.md deleted file mode 100644 index 63700cabc803e413a7ccfedc113e3042dd89c5ae..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Cahiers Danatomie Perlemuter Pdf 25.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Cahiers Danatomie Perlemuter Pdf 25


            Download ►►► https://gohhs.com/2uEyYB



            -
            -amazon fr waligora et perlemuter livres, cahiers d anatomie tome 2 abdomen 1re ... cahiers d anatomie abebooks, cahiers danatomie perlemuter pdf 25, the ... 4d29de3e1b
            -
            -
            -

            diff --git a/spaces/scedlatioru/img-to-music/example/Comfast Cf-1300ug Drivers Download [EXCLUSIVE].md b/spaces/scedlatioru/img-to-music/example/Comfast Cf-1300ug Drivers Download [EXCLUSIVE].md deleted file mode 100644 index c6a3f6596983b6a8824312b566a843e6a5ef65e9..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Comfast Cf-1300ug Drivers Download [EXCLUSIVE].md +++ /dev/null @@ -1,81 +0,0 @@ - -

            Comfast CF-1300UG Drivers Download

            - -

            Comfast CF-1300UG is a wireless adapter that supports dual-band Wi-Fi and USB 3.0 interface. It can provide fast and stable wireless connection for your desktop or laptop computer. However, to use this device, you need to download and install the drivers that are compatible with your operating system. In this article, we will show you how to download and install Comfast CF-1300UG drivers easily and safely.

            -

            comfast cf-1300ug drivers download


            DOWNLOADhttps://gohhs.com/2uEzjE



            - -

            Why You Need Comfast CF-1300UG Drivers?

            - -

            Drivers are software programs that enable your computer to communicate with your hardware devices. Without drivers, your computer will not be able to recognize or use your wireless adapter properly. Therefore, you need to download and install Comfast CF-1300UG drivers to make sure that your device works well with your computer.

            - -

            Comfast CF-1300UG drivers are also important for updating and improving the performance and functionality of your device. By downloading and installing the latest drivers, you can fix any bugs or errors that might occur with your device, enhance its speed and stability, and enjoy new features and benefits.

            - -

            How to Download Comfast CF-1300UG Drivers?

            - -

            There are two ways to download Comfast CF-1300UG drivers: manually or automatically. Here are the steps for each method:

            - -

            Manual Method

            - -

            The manual method involves finding and downloading the drivers from the official website of Comfast or from other reliable sources. Here are the steps for this method:

            -

            - -
              -
            1. Go to the official website of Comfast: http://en.comfast.com.cn/
            2. Click on "Drivers Download" on the top menu.
            3. Search for "CF-1300UG" in the search box and click on "Download".
            4. Select the driver that matches your operating system and click on "Download" again.
            5. Save the driver file to your computer.

            You can also download Comfast CF-1300UG drivers from other websites that offer driver downloads, such as Driver Easy, Driver Booster, or Driver Talent. However, you should be careful when choosing a website to download drivers from, as some of them might be scams or malware distributors. Here are some tips to help you avoid potential dangers:

            - -
              -
            • Do some research on the website before you visit it. Check its reputation, reviews, ratings, and feedback from other users. Avoid websites that have a poor or negative reputation, or that are unknown or suspicious.
            • Use a reliable antivirus program and a firewall on your computer. Scan any file that you download before you open it. Delete any file that is detected as malicious or harmful.
            • Do not provide any personal or financial information to any website that asks for it. Do not enter any passwords, credit card numbers, or bank account numbers.
            • Do not download any software or program that is not related to Comfast CF-1300UG drivers. Do not install any toolbars, extensions, add-ons, or plugins that are offered by the website.
            • Do not trust any website that promises you a working Comfast CF-1300UG driver for free. Most likely, they are lying or trying to trick you into downloading something else.
            - -

            Automatic Method

            - -

            The automatic method involves using a software program that can scan your computer and find the best drivers for your device automatically. Here are the steps for this method:

            - -
              -
            1. Download and install a driver updater software program, such as Driver Easy, Driver Booster, or Driver Talent.
            2. Launch the program and click on the "Scan Now" or "Scan" button.
            3. The program will scan your computer and detect any outdated, missing, or corrupted drivers.
            4. Select the Comfast CF-1300UG driver from the list of drivers and click on the "Update" or "Install" button.
            5. The program will download and install the latest driver for your device automatically.

            The automatic method is easier and faster than the manual method, as you do not need to search for the drivers yourself or worry about compatibility issues. However, you should also be careful when choosing a driver updater software program, as some of them might be scams or malware distributors. Here are some tips to help you avoid potential dangers:

            - -
              -
            • Do some research on the program before you download it. Check its reputation, reviews, ratings, and feedback from other users. Avoid programs that have a poor or negative reputation, or that are unknown or suspicious.
            • Use a reliable antivirus program and a firewall on your computer. Scan any file that you download before you open it. Delete any file that is detected as malicious or harmful.
            • Do not provide any personal or financial information to any program that asks for it. Do not enter any passwords, credit card numbers, or bank account numbers.

              -
              -
              \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/HACK Stardock Start10 1.56 Crack [CracksNow].md b/spaces/scedlatioru/img-to-music/example/HACK Stardock Start10 1.56 Crack [CracksNow].md deleted file mode 100644 index 976d2acc2241bdf5f31f8997f338fed5ef4a1d88..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/HACK Stardock Start10 1.56 Crack [CracksNow].md +++ /dev/null @@ -1,6 +0,0 @@ -

              HACK Stardock Start10 1.56 Crack [CracksNow]


              DOWNLOAD ✏ ✏ ✏ https://gohhs.com/2uEAnA



              -
              -Plik Foxit PhantomPDF Business 8.3.2.25013 + Crack [CracksNow].7z na koncie ... CSGO HACK PACK.7z ... Stardock Start10 1.56 + Crack [CracksNow].7z. 4d29de3e1b
              -
              -
              -

              diff --git a/spaces/scedlatioru/img-to-music/example/Peugeot Service Box Keygen Magic.md b/spaces/scedlatioru/img-to-music/example/Peugeot Service Box Keygen Magic.md deleted file mode 100644 index 7c439359340e9966ab729880b89b4a55934917c2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Peugeot Service Box Keygen Magic.md +++ /dev/null @@ -1,10 +0,0 @@ - -

              Hey Folks, You Should Know about the new Losack x32 / x64 Version from Bimmer
              Agentserver5 keygen
              Superdrug DE Schuetzen Winter 2013 Cover Round
              Moco meixei 2013 serial
              Designer Shoes for Men 2013 2014 2015 2016 2017
              Color Your Channels

              -

              Peugeot service box keygen magic


              DOWNLOAD > https://gohhs.com/2uEAlU



              -

              Sanjuanekai 2 Spy ware PSA Agent 2.5.3 (free) (2012) PR ISC service pack 0.7
              Soal gerakan patung 2011 indonesia
              The Last Jedi dvd 2013 movie free
              Cahaya Gandaria Kereta Bicara
              Good game for kids
              de toekomst een fantasie

              -

              texas sex offenders list free
              99 Tisch Aladeera - Jogos de Casamento
              Naughty 40 - Peugeot Service Box
              jonathan trobinson october 21 2002 playlist
              How To Fix Macos Leopard Xcode Missing Firebird Application
              Dean And Roll - VideoGameDevil.com
              iphone application cracking
              iphone hacking
              iphone jailbreaking
              kik application cracking

              -

              full metal panic : sarah streak - Ben Hur : cars + tombraider
              Thomas Royall - Middlefield : Messer
              political news : Voters Willing To Wait
              somalian speaking tutorial
              The Green Inferno (2014) - Movie HD
              The First Part Of The Dark Overlord Keygen
              black nekkid pussy
              gb3t9

              -

              freenas image converter 1.3.3 rev.2
              Robin Williams To Officially Retire [Premium]
              PhotoPad - 1.7.4.8 Crack Full + Serial Keygen
              Popcorn-Time - Paid Version 1.8.4.1 Crack Free
              Fingria - YawysCoder.Feeg.com Full Version
              DEVIL

              -

              -
              -
              \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Waves.API.Collection.VST.RTAS.v1.0-AiR.rar ((EXCLUSIVE)) Crack.md b/spaces/scedlatioru/img-to-music/example/Waves.API.Collection.VST.RTAS.v1.0-AiR.rar ((EXCLUSIVE)) Crack.md deleted file mode 100644 index c628caa50ea1e3925d55b596ad723cdcb2a155ae..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Waves.API.Collection.VST.RTAS.v1.0-AiR.rar ((EXCLUSIVE)) Crack.md +++ /dev/null @@ -1,104 +0,0 @@ - -

              Waves API Collection VST RTAS v1.0-AiR.rar: the ultimate plugin bundle for mixing and mastering

              -

              If you are looking for a plugin bundle that can give you the legendary sound of API consoles in your DAW, you should check out Waves API Collection VST RTAS v1.0-AiR.rar. This is a collection of four plugins that emulate the API 550A, 550B, 560 and 2500 models, which are renowned for their musical EQ curves, punchy compression and flexible routing options. In this article, we will tell you everything you need to know about Waves API Collection VST RTAS v1.0-AiR.rar and how to download it for free.

              - -

              What is Waves API Collection VST RTAS v1.0-AiR.rar?

              -

Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that consists of four plugins that emulate the API 550A, 550B, 560 and 2500 models. These are some of the most sought-after hardware units in the audio industry, used by countless engineers and producers on hit records. Waves API Collection VST RTAS v1.0-AiR.rar faithfully recreates the sound and features of these units, using Waves' proprietary modeling technology and meticulous attention to detail.

              -

              Waves.API.Collection.VST.RTAS.v1.0-AiR.rar crack


              Download Zip 🌟 https://gohhs.com/2uEAHd



              -

              The plugins included in Waves API Collection VST RTAS v1.0-AiR.rar are:

              -
                -
              • API 550A: a three-band EQ with a switchable high-pass filter and seven selectable frequencies per band.
              • -
              • API 550B: a four-band EQ with seven selectable frequencies per band and overlapping frequency ranges.
              • -
              • API 560: a ten-band graphic EQ with 12 dB of boost or cut per band and proportional Q.
              • -
              • API 2500: a stereo compressor with variable threshold, ratio, attack, release and knee settings, as well as a sidechain filter, a link switch and a thrust circuit.
              • -
              -

              These plugins can be used individually or together to shape your sound with precision and musicality. Whether you need to add some warmth, clarity, punch or glue to your mix, Waves API Collection VST RTAS v1.0-AiR.rar can help you achieve it.

              - -

              How to download Waves API Collection VST RTAS v1.0-AiR.rar for free?

              -

              If you want to download Waves API Collection VST RTAS v1.0-AiR.rar for free, you can do it from the following link: https://archive.org/details/WavesAPICollectionV1.0.part1. This is a part of a larger archive that contains the full installer of the plugin bundle, as well as a crack file that can activate it without requiring a license key.

              -

              To download Waves API Collection VST RTAS v1.0-AiR.rar for free, you just need to follow these steps:

              -
                -
              1. Click on the link above and wait for the page to load.
              2. -
              3. Click on the "DOWNLOAD OPTIONS" button and select "RAR download".
              4. -
              5. Save the file to your computer and extract it with a program like WinRAR or 7-Zip.
              6. -
              7. Run the installer and follow the instructions to install the plugin bundle on your computer.
              8. -
              9. Copy the crack file from the "AiR" folder and paste it into the folder where you installed the plugin bundle, replacing the original file.
              10. -
              11. Launch your DAW and enjoy using Waves API Collection VST RTAS v1.0-AiR.rar for free.
              12. -
              -

              Note: This method is for educational purposes only. We do not condone piracy or illegal downloading of software. If you like Waves API Collection VST RTAS v1.0-AiR.rar, please support the developers by buying it from their official website: https://www.waves.com/plugins/api-collection.

              - -

              Why choose Waves API Collection VST RTAS v1.0-AiR.rar?

              -

              Waves API Collection VST RTAS v1.0-AiR.rar is one of the best plugin bundles for mixing and mastering that you can find in the market. Here are some of the reasons why you should choose Waves API Collection VST RTAS v1.0-AiR.rar:

              -
                -
              • It gives you the legendary sound of API consoles in your DAW, with all their warmth, clarity, punch and flexibility.
              • -
              • It offers you four plugins that cover all your EQ and compression needs, from subtle shaping to drastic sculpting.
              • -
              • It allows you to use each plugin individually or together, with flexible routing options and stereo linking capabilities.
              • -
              • It is compatible with Windows 10 and works with any VST or RTAS host application.
              • -
              • It is easy to use and intuitive, with user-friendly interfaces and presets.
              • -
              • It is developed by Waves, one of the most reputable and trusted brands in the audio industry.
              • -
              -

              If you want to take your mixes and masters to the next level with a plugin bundle that can give you the sound of API consoles in your DAW, you should definitely try Waves API Collection VST RTAS v1.0-AiR.rar.

              - -


              How to use Waves API Collection VST RTAS v1.0-AiR.rar?

              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that is easy to use and intuitive. You can use each plugin individually or together, depending on your needs and preferences. To use Waves API Collection VST RTAS v1.0-AiR.rar, you just need to follow these steps:

              -
                -
              1. Launch your DAW and create a new project or open an existing one.
              2. -
              3. Insert the plugin or plugins that you want to use from Waves API Collection VST RTAS v1.0-AiR.rar on the tracks or buses that you want to process.
              4. -
              5. Adjust the parameters and settings of each plugin according to your taste and goals.
              6. -
              7. Use the presets and the A/B comparison feature to compare different settings and find the best ones for your mix.
              8. -
              9. Enjoy the sound of Waves API Collection VST RTAS v1.0-AiR.rar on your tracks.
              10. -
              -

              If you need more guidance or tips on how to use Waves API Collection VST RTAS v1.0-AiR.rar, you can check out the manual that comes with the plugin bundle or watch some tutorials and videos on the official website of Waves: https://www.waves.com/plugins/api-collection.

              - -

              What are the benefits of Waves API Collection VST RTAS v1.0-AiR.rar?

              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that can bring you many benefits for your mixing and mastering projects. Here are some of the benefits that you can get from Waves API Collection VST RTAS v1.0-AiR.rar:

              -
                -
              • You can get the legendary sound of API consoles in your DAW, with all their warmth, clarity, punch and flexibility.
              • -
              • You can shape your sound with precision and musicality, using four plugins that cover all your EQ and compression needs.
              • -
              • You can enhance your workflow and creativity, using flexible routing options and stereo linking capabilities.
              • -
              • You can save money and space, using a plugin bundle that emulates four hardware units in one software package.
              • -
              • You can rely on the quality and reputation of Waves, one of the most reputable and trusted brands in the audio industry.
              • -
              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that can make a difference in your mixes and masters, giving you the sound of API consoles in your DAW.

              - -


              What are the features of Waves API Collection VST RTAS v1.0-AiR.rar?

              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that offers you a variety of features that can enhance your sound and workflow. Here are some of the features that you can find in Waves API Collection VST RTAS v1.0-AiR.rar:

              -
                -
              • Analog modeling: Waves API Collection VST RTAS v1.0-AiR.rar uses Waves' proprietary modeling technology and meticulous attention to detail to recreate the sound and behavior of the original hardware units, including their subtle harmonic distortion and hiss.
              • -
              • Presets: Waves API Collection VST RTAS v1.0-AiR.rar comes with a collection of presets from top producers, mixers and mastering engineers like Steve Lillywhite, Tony Maserati, Greg Wells, and Drew Lavyne, as well as the option to create and save your own presets.
              • -
              • A/B comparison: Waves API Collection VST RTAS v1.0-AiR.rar allows you to compare different settings and find the best ones for your mix, using the A/B comparison feature that lets you switch between two different parameter states.
              • -
              • Input/output meters: Waves API Collection VST RTAS v1.0-AiR.rar provides you with input and output meters that show you the level of the signal before and after processing, as well as a gain reduction meter for the compressor plugin.
              • -
              • Zero latency: Waves API Collection VST RTAS v1.0-AiR.rar operates with zero latency, which means that it does not introduce any delay or phase issues to your signal.
              • -
              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that offers you a variety of features that can enhance your sound and workflow.

              - -

              What are the tips and tricks for using Waves API Collection VST RTAS v1.0-AiR.rar?

              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that is easy to use and intuitive, but it also has some tips and tricks that can help you get the most out of it. Here are some of the tips and tricks that you can use with Waves API Collection VST RTAS v1.0-AiR.rar:

              -
                -
              • Use the API 550A or 550B plugins to add some warmth and color to your tracks, using their musical EQ curves and proportional Q.
              • -
              • Use the API 560 plugin to sculpt your sound with precision and flexibility, using its ten-band graphic EQ and proportional Q.
              • -
              • Use the API 2500 plugin to add some punch and glue to your mix, using its versatile compression settings and thrust circuit.
              • -
              • Use the sidechain filter on the API 2500 plugin to control the frequency range that triggers the compression, avoiding unwanted pumping or breathing effects.
              • -
              • Use the link switch on the API 2500 plugin to link or unlink the left and right channels of the compressor, creating either a stereo or a dual-mono effect.
              • -
              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that has some tips and tricks that can help you get the most out of it.

              - -

              Conclusion

              -

              Waves API Collection VST RTAS v1.0-AiR.rar is a plugin bundle that emulates the API 550A, 550B, 560 and 2500 models, which are renowned for their musical EQ curves, punchy compression and flexible routing options. With Waves API Collection VST RTAS v1.0-AiR.rar, you can get the legendary sound of API consoles in your DAW, with all their warmth, clarity, punch and flexibility. You can use each plugin individually or together to shape your sound with precision and musicality.

              -

              If you want to download Waves API Collection VST RTAS v1.0-AiR.rar for free, you can do it from this link: https://archive.org/details/WavesAPICollectionV1.0.part1. This is a part of a larger archive that contains the full installer of the plugin bundle, as well as a crack file that can activate it without requiring a license key.

              -

              We hope this article has been helpful for you and that you have enjoyed using Waves API Collection VST RTAS v1.0-AiR.rar for free.

              -


              3cee63e6c2
              -
              -
              \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet2/enh/encoder/abs_encoder.py b/spaces/segments-tobias/conex/espnet2/enh/encoder/abs_encoder.py deleted file mode 100644 index ef1afb68213b670e1ad5cb7135ade64603e80b0b..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/enh/encoder/abs_encoder.py +++ /dev/null @@ -1,20 +0,0 @@ -from abc import ABC -from abc import abstractmethod -from typing import Tuple - -import torch - - -class AbsEncoder(torch.nn.Module, ABC): - @abstractmethod - def forward( - self, - input: torch.Tensor, - ilens: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - raise NotImplementedError - - @property - @abstractmethod - def output_dim(self) -> int: - raise NotImplementedError diff --git a/spaces/shencc/gpt/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/shencc/gpt/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/shencc/gpt/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. - bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. 
- // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. - bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void 
code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/shi-labs/OneFormer/oneformer/evaluation/instance_evaluation.py b/spaces/shi-labs/OneFormer/oneformer/evaluation/instance_evaluation.py deleted file mode 100644 index 7c5e429f97fb74c957fa5be76b4b0349d30e0459..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/evaluation/instance_evaluation.py +++ /dev/null @@ -1,110 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/evaluation/instance_evaluation.py -# ------------------------------------------------------------------------------ - -import contextlib -import copy -import io -import itertools -import json -import logging -import numpy as np -import os -import pickle -from collections import OrderedDict -import pycocotools.mask as mask_util -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tabulate import tabulate - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_json -from detectron2.evaluation.coco_evaluation import COCOEvaluator, _evaluate_predictions_on_coco -from detectron2.evaluation.fast_eval_api import COCOeval_opt -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - - -# modified from COCOEvaluator for instance segmetnat -class InstanceSegEvaluator(COCOEvaluator): - """ - Evaluate AR for object proposals, AP for instance detection/segmentation, AP - for keypoint detection outputs using COCO's metrics. - See http://cocodataset.org/#detection-eval and - http://cocodataset.org/#keypoints-eval to understand its metrics. - The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means - the metric cannot be computed (e.g. due to no predictions made). - - In addition to COCO, this evaluator is able to support any bounding box detection, - instance segmentation, or keypoint detection dataset. - """ - - def _eval_predictions(self, predictions, img_ids=None): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. 
- """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(coco_results) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id - # all_contiguous_ids = list(dataset_id_to_contiguous_id.values()) - # num_classes = len(all_contiguous_ids) - # assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1 - - reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()} - for result in coco_results: - category_id = result["category_id"] - # assert category_id < num_classes, ( - # f"A prediction has class={category_id}, " - # f"but the dataset only has {num_classes} classes and " - # f"predicted class id should be in [0, {num_classes - 1}]." - # ) - assert category_id in reverse_id_mapping, ( - f"A prediction has class={category_id}, " - f"but the dataset only has class ids in {dataset_id_to_contiguous_id}." - ) - result["category_id"] = reverse_id_mapping[category_id] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info( - "Evaluating predictions with {} COCO API...".format( - "unofficial" if self._use_fast_impl else "official" - ) - ) - for task in sorted(tasks): - assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!" 
- coco_eval = ( - _evaluate_predictions_on_coco( - self._coco_api, - coco_results, - task, - kpt_oks_sigmas=self._kpt_oks_sigmas, - use_fast_impl=self._use_fast_impl, - img_ids=img_ids, - max_dets_per_image=self._max_dets_per_image, - ) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/ema.py b/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/ema.py deleted file mode 100644 index e5d61e90eadb4701c7c38d9ed63e4fca7afb78d9..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/ema.py +++ /dev/null @@ -1,75 +0,0 @@ -import torch -from torch import nn - -class LitEma(nn.Module): - def __init__(self, model, decay=0.9999, use_num_updates=True): - super().__init__() - if decay < 0.0 or decay > 1.0: - raise ValueError('Decay must be between 0 and 1') - - self.m_name2s_name = {} - self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32)) - self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_updates - else torch.tensor(-1,dtype=torch.int)) - - for name, p in model.named_parameters(): - if p.requires_grad: - #remove as '.'-character is not allowed in buffers - s_name = name.replace('.','') - self.m_name2s_name.update({name:s_name}) - self.register_buffer(s_name,p.clone().detach().data) - - self.collected_params = [] - - def forward(self, model): - decay = self.decay - - if self.num_updates >= 0: - self.num_updates += 1 - decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates)) - - one_minus_decay = 1.0 - decay - - with torch.no_grad(): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - - for key in m_param: - if m_param[key].requires_grad: - sname = self.m_name2s_name[key] - shadow_params[sname] = shadow_params[sname].type_as(m_param[key]) - shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key])) - else: - assert not key in self.m_name2s_name - - def copy_to(self, model): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - for key in m_param: - if m_param[key].requires_grad: - m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data) - else: - assert not key in self.m_name2s_name - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. 
- """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) diff --git a/spaces/shigel/langchain-function-calling/app.py b/spaces/shigel/langchain-function-calling/app.py deleted file mode 100644 index d559c95dc92dd370331c452ac68708051dc1dfa0..0000000000000000000000000000000000000000 --- a/spaces/shigel/langchain-function-calling/app.py +++ /dev/null @@ -1,208 +0,0 @@ -# 必要なモジュールをインポート -import gradio as gr -import os -import sys -import json -import csv -import dotenv -import openai -from langchain.chat_models import ChatOpenAI -from langchain.agents import initialize_agent, Tool -from langchain.schema import ( - AIMessage, - AgentAction, - HumanMessage, - FunctionMessage -) -from langchain.chat_models import ChatOpenAI -from langchain.agents import AgentType - -# .envファイルから環境変数をロード -dotenv.load_dotenv(".env") - -# OpenAIキーをosモジュールで取得 -openai.api_key = os.environ.get("OPENAI_API_KEY") - -# 民間伝承を取得する関数 -def fetch_folklore(location): - folklore_lookup = {} - # CSVファイルからデータを読み取り、地点をキー、伝承を値とする辞書を作成 - with open('folklore.csv', 'r') as f: - reader = csv.DictReader(f) - folklore_lookup = {row['location']: row['folklore'] for row in reader} - type_lookup = {row['type']: row['folklore'] for row in reader} - - # 指定された地点の伝承などを返す。存在しない場合は不明を返す。 - folklore = folklore_lookup.get((location), f"その地域の伝承は不明です。") - type = type_lookup.get((location), f"その地域の伝承は不明です。") - print("type:", type) - return folklore - -def serialize_agent_action(obj): - if isinstance(obj, AgentAction): - return { "tool": obj.tool, "tool_input": obj.tool_input, "log": obj.log} - if isinstance(obj, _FunctionsAgentAction): - return { "tool": obj.tool, "tool_input": obj.tool_input, "log": obj.log, "message_log": obj.message_log} - if isinstance(obj, AIMessage): - return { "content": obj.content, "additional_kwargs": obj.additional_kwargs, "example": obj.example} - raise TypeError(f"Type {type(obj)} not serializable") - -# LangChainエージェントからレスポンスを取得する関数 -def get_response_from_lang_chain_agent(query_text): - # ChatOpenAIを使用して言語モデルを初期化 - language_model = ChatOpenAI(model_name='gpt-3.5-turbo-0613') - tools = [ - # 民間伝承を取得するToolを作成 - Tool( - name="Folklore", - func=fetch_folklore, - description="伝承を知りたい施設や地名を入力。例: 箱根", - ) - ] - # エージェントを初期化してから応答を取得 - agent = initialize_agent(tools, language_model, agent="zero-shot-react-description", - verbose=True, return_intermediate_steps=True) - response = agent({"input": query_text}) - print(type(response)) - response = json.dumps(response, default=serialize_agent_action, indent=2, ensure_ascii=False) - - return response - -# Function Callingからレスポンスを取得する関数 -def get_response_from_function_calling(query_text): - function_definitions = [ - # 関数の定義を作成 - { - "name": "fetch_folklore", - "description": "伝承を調べる", - "parameters": { - "type": "object", - "properties": { - "location": { - "description": "伝承を知りたい施設や地名。例: 箱根", - }, - }, - "required": ["location"], - }, - } - ] - messages = [HumanMessage(content=query_text)] - language_model = ChatOpenAI(model_name='gpt-4') - # 言語モデルを使ってメッセージを予測 - message = language_model.predict_messages( - messages, functions=function_definitions) - - if message.additional_kwargs: - # 関数の名前と引数を取得 - function_name = message.additional_kwargs["function_call"]["name"] - arguments = message.additional_kwargs["function_call"]["arguments"] - - # JSON 文字列を辞書に変換 - arguments = json.loads(arguments) - location=arguments.get("location") - # type=arguments.get("type") - - # 関数を実行してレスポンスを取得 - function_response = fetch_folklore(location=location) - # 
関数メッセージを作成 - function_message = FunctionMessage( - name=function_name, content=function_response) - # 関数のレスポンスをメッセージに追加して予測 - messages.append(function_message) - second_response = language_model.predict_messages( - messages=messages, functions=function_definitions) - content = second_response.content - else: - content = message.content - return content - -# Function Call Agentからレスポンスを取得する関数 -def get_response_from_function_calling_agent(query_text): - language_model = ChatOpenAI(model_name='gpt-3.5-turbo-0613') - tools = [ - # 民間伝承情報を提供するツールの追加 - Tool( - name="Folklore", - func=fetch_folklore, - description="伝承を知りたい施設や地名を入力。例: 箱根" - ) - ] - # エージェントの初期化とレスポンスの取得 - agent = initialize_agent(tools, language_model, agent=AgentType.OPENAI_FUNCTIONS, - verbose=True, return_intermediate_steps=True) - response = agent({"input": query_text}) - response = json.dumps(response, default=serialize_agent_action, indent=2, ensure_ascii=False) - return response - -# メインの実行部分 - - -def main(query_text, function_name="all"): - - response1 = "" - response2 = "" - response3 = "" - - if function_name == "all" or function_name == "langchain": - # LangChainエージェントからのレスポンス - response1 = get_response_from_lang_chain_agent(query_text) - print(response1) - - if function_name == "all" or function_name == "functioncalling": - # Function Callingからのレスポンス - response2 = get_response_from_function_calling(query_text) - print(response2) - - if function_name == "all" or function_name == "functioncallingagent": - # Function Callingエージェントからのレスポンス - response3 = get_response_from_function_calling_agent(query_text) - print(response3) - - return response1, response2, response3 - - -# スクリプトが直接実行された場合にmain()を実行 -if __name__ == "__main__": - if len(sys.argv) == 2: - query_text = sys.argv[1] - main(query_text=query_text) - elif len(sys.argv) > 2: - query_text = sys.argv[1] - function_name = sys.argv[2] - main(query_text=query_text, function_name=function_name) - else: - import time - - # インプット例をクリックした時のコールバック関数 - def click_example(example): - # クリックされたインプット例をテキストボックスに自動入力 - inputs.value = example - time.sleep(0.1) # テキストボックスに文字が表示されるまで待機 - # 自動入力後に実行ボタンをクリックして結果を表示 - execute_button.click() - - # gr.Interface()を使ってユーザーインターフェースを作成します - # gr.Text()はテキスト入力ボックスを作成し、 - # gr.Textbox()は出力テキストを表示するためのテキストボックスを作成します。 - iface = gr.Interface( - fn=main, - examples=[ - ["葛飾区の伝承を教えてください。"], - ["千代田区にはどんな伝承がありますか?"], - ["江戸川区で有名な伝承?"], - ], - inputs=gr.Textbox( - lines=5, placeholder="質問を入力してください"), - outputs=[ - gr.Textbox(label="LangChain Agentのレスポンス"), - gr.Textbox(label="Function Callingのレスポンス"), - gr.Textbox(label="Function Calling Agentのレスポンス") - ], - title="日本各地の伝承AI (東京23区版)", - description="最新のGPTモデルを使用し、LangChain, Function Calling, Function Calling + LangChain Agentの対話モデルのAIから回答を取得するシステムです。以下のインプット例をクリックすると入力欄に自動入力されます。", - example_columns=3, - example_callback=click_example - ) - - # インターフェースを起動します - iface.launch() diff --git a/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/model.py b/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/model.py deleted file mode 100644 index 22488abd92182a878fa1bedadfed50afbb472d3e..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/model.py +++ /dev/null @@ -1,345 +0,0 @@ -# coding: utf-8 -""" BigGAN PyTorch model. - From "Large Scale GAN Training for High Fidelity Natural Image Synthesis" - By Andrew Brocky, Jeff Donahuey and Karen Simonyan. 
- https://openreview.net/forum?id=B1xsqj09Fm - - PyTorch version implemented from the computational graph of the TF Hub module for BigGAN. - Some part of the code are adapted from https://github.com/brain-research/self-attention-gan - - This version only comprises the generator (since the discriminator's weights are not released). - This version only comprises the "deep" version of BigGAN (see publication). - - Modified by Erik Härkönen: - * Added support for per-layer latent vectors -""" -from __future__ import (absolute_import, division, print_function, unicode_literals) - -import os -import logging -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .config import BigGANConfig -from .file_utils import cached_path - -logger = logging.getLogger(__name__) - -PRETRAINED_MODEL_ARCHIVE_MAP = { - 'biggan-deep-128': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-128-pytorch_model.bin", - 'biggan-deep-256': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-256-pytorch_model.bin", - 'biggan-deep-512': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-512-pytorch_model.bin", -} - -PRETRAINED_CONFIG_ARCHIVE_MAP = { - 'biggan-deep-128': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-128-config.json", - 'biggan-deep-256': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-256-config.json", - 'biggan-deep-512': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-512-config.json", -} - -WEIGHTS_NAME = 'pytorch_model.bin' -CONFIG_NAME = 'config.json' - - -def snconv2d(eps=1e-12, **kwargs): - return nn.utils.spectral_norm(nn.Conv2d(**kwargs), eps=eps) - -def snlinear(eps=1e-12, **kwargs): - return nn.utils.spectral_norm(nn.Linear(**kwargs), eps=eps) - -def sn_embedding(eps=1e-12, **kwargs): - return nn.utils.spectral_norm(nn.Embedding(**kwargs), eps=eps) - -class SelfAttn(nn.Module): - """ Self attention Layer""" - def __init__(self, in_channels, eps=1e-12): - super(SelfAttn, self).__init__() - self.in_channels = in_channels - self.snconv1x1_theta = snconv2d(in_channels=in_channels, out_channels=in_channels//8, - kernel_size=1, bias=False, eps=eps) - self.snconv1x1_phi = snconv2d(in_channels=in_channels, out_channels=in_channels//8, - kernel_size=1, bias=False, eps=eps) - self.snconv1x1_g = snconv2d(in_channels=in_channels, out_channels=in_channels//2, - kernel_size=1, bias=False, eps=eps) - self.snconv1x1_o_conv = snconv2d(in_channels=in_channels//2, out_channels=in_channels, - kernel_size=1, bias=False, eps=eps) - self.maxpool = nn.MaxPool2d(2, stride=2, padding=0) - self.softmax = nn.Softmax(dim=-1) - self.gamma = nn.Parameter(torch.zeros(1)) - - def forward(self, x): - _, ch, h, w = x.size() - # Theta path - theta = self.snconv1x1_theta(x) - theta = theta.view(-1, ch//8, h*w) - # Phi path - phi = self.snconv1x1_phi(x) - phi = self.maxpool(phi) - phi = phi.view(-1, ch//8, h*w//4) - # Attn map - attn = torch.bmm(theta.permute(0, 2, 1), phi) - attn = self.softmax(attn) - # g path - g = self.snconv1x1_g(x) - g = self.maxpool(g) - g = g.view(-1, ch//2, h*w//4) - # Attn_g - o_conv - attn_g = torch.bmm(g, attn.permute(0, 2, 1)) - attn_g = attn_g.view(-1, ch//2, h, w) - attn_g = self.snconv1x1_o_conv(attn_g) - # Out - out = x + self.gamma*attn_g - return out - - -class BigGANBatchNorm(nn.Module): - """ This is a batch norm module that can handle conditional input and can be provided with pre-computed - activation means and variances for various 
truncation parameters. - - We cannot just rely on torch.batch_norm since it cannot handle - batched weights (pytorch 1.0.1). We computate batch_norm our-self without updating running means and variances. - If you want to train this model you should add running means and variance computation logic. - """ - def __init__(self, num_features, condition_vector_dim=None, n_stats=51, eps=1e-4, conditional=True): - super(BigGANBatchNorm, self).__init__() - self.num_features = num_features - self.eps = eps - self.conditional = conditional - - # We use pre-computed statistics for n_stats values of truncation between 0 and 1 - self.register_buffer('running_means', torch.zeros(n_stats, num_features)) - self.register_buffer('running_vars', torch.ones(n_stats, num_features)) - self.step_size = 1.0 / (n_stats - 1) - - if conditional: - assert condition_vector_dim is not None - self.scale = snlinear(in_features=condition_vector_dim, out_features=num_features, bias=False, eps=eps) - self.offset = snlinear(in_features=condition_vector_dim, out_features=num_features, bias=False, eps=eps) - else: - self.weight = torch.nn.Parameter(torch.Tensor(num_features)) - self.bias = torch.nn.Parameter(torch.Tensor(num_features)) - - def forward(self, x, truncation, condition_vector=None): - # Retreive pre-computed statistics associated to this truncation - coef, start_idx = math.modf(truncation / self.step_size) - start_idx = int(start_idx) - if coef != 0.0: # Interpolate - running_mean = self.running_means[start_idx] * coef + self.running_means[start_idx + 1] * (1 - coef) - running_var = self.running_vars[start_idx] * coef + self.running_vars[start_idx + 1] * (1 - coef) - else: - running_mean = self.running_means[start_idx] - running_var = self.running_vars[start_idx] - - if self.conditional: - running_mean = running_mean.unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - running_var = running_var.unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - - weight = 1 + self.scale(condition_vector).unsqueeze(-1).unsqueeze(-1) - bias = self.offset(condition_vector).unsqueeze(-1).unsqueeze(-1) - - out = (x - running_mean) / torch.sqrt(running_var + self.eps) * weight + bias - else: - out = F.batch_norm(x, running_mean, running_var, self.weight, self.bias, - training=False, momentum=0.0, eps=self.eps) - - return out - - -class GenBlock(nn.Module): - def __init__(self, in_size, out_size, condition_vector_dim, reduction_factor=4, up_sample=False, - n_stats=51, eps=1e-12): - super(GenBlock, self).__init__() - self.up_sample = up_sample - self.drop_channels = (in_size != out_size) - middle_size = in_size // reduction_factor - - self.bn_0 = BigGANBatchNorm(in_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_0 = snconv2d(in_channels=in_size, out_channels=middle_size, kernel_size=1, eps=eps) - - self.bn_1 = BigGANBatchNorm(middle_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_1 = snconv2d(in_channels=middle_size, out_channels=middle_size, kernel_size=3, padding=1, eps=eps) - - self.bn_2 = BigGANBatchNorm(middle_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_2 = snconv2d(in_channels=middle_size, out_channels=middle_size, kernel_size=3, padding=1, eps=eps) - - self.bn_3 = BigGANBatchNorm(middle_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_3 = snconv2d(in_channels=middle_size, out_channels=out_size, kernel_size=1, eps=eps) - - self.relu = nn.ReLU() - - def forward(self, x, cond_vector, truncation): - x0 = x - - 
x = self.bn_0(x, truncation, cond_vector) - x = self.relu(x) - x = self.conv_0(x) - - x = self.bn_1(x, truncation, cond_vector) - x = self.relu(x) - if self.up_sample: - x = F.interpolate(x, scale_factor=2, mode='nearest') - x = self.conv_1(x) - - x = self.bn_2(x, truncation, cond_vector) - x = self.relu(x) - x = self.conv_2(x) - - x = self.bn_3(x, truncation, cond_vector) - x = self.relu(x) - x = self.conv_3(x) - - if self.drop_channels: - new_channels = x0.shape[1] // 2 - x0 = x0[:, :new_channels, ...] - if self.up_sample: - x0 = F.interpolate(x0, scale_factor=2, mode='nearest') - - out = x + x0 - return out - -class Generator(nn.Module): - def __init__(self, config): - super(Generator, self).__init__() - self.config = config - ch = config.channel_width - condition_vector_dim = config.z_dim * 2 - - self.gen_z = snlinear(in_features=condition_vector_dim, - out_features=4 * 4 * 16 * ch, eps=config.eps) - - layers = [] - for i, layer in enumerate(config.layers): - if i == config.attention_layer_position: - layers.append(SelfAttn(ch*layer[1], eps=config.eps)) - layers.append(GenBlock(ch*layer[1], - ch*layer[2], - condition_vector_dim, - up_sample=layer[0], - n_stats=config.n_stats, - eps=config.eps)) - self.layers = nn.ModuleList(layers) - - self.bn = BigGANBatchNorm(ch, n_stats=config.n_stats, eps=config.eps, conditional=False) - self.relu = nn.ReLU() - self.conv_to_rgb = snconv2d(in_channels=ch, out_channels=ch, kernel_size=3, padding=1, eps=config.eps) - self.tanh = nn.Tanh() - - def forward(self, cond_vector, truncation): - z = self.gen_z(cond_vector[0]) - - # We use this conversion step to be able to use TF weights: - # TF convention on shape is [batch, height, width, channels] - # PT convention on shape is [batch, channels, height, width] - z = z.view(-1, 4, 4, 16 * self.config.channel_width) - z = z.permute(0, 3, 1, 2).contiguous() - - cond_idx = 1 - for i, layer in enumerate(self.layers): - if isinstance(layer, GenBlock): - z = layer(z, cond_vector[cond_idx], truncation) - cond_idx += 1 - else: - z = layer(z) - - z = self.bn(z, truncation) - z = self.relu(z) - z = self.conv_to_rgb(z) - z = z[:, :3, ...] - z = self.tanh(z) - return z - -class BigGAN(nn.Module): - """BigGAN Generator.""" - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, cache_dir=None, *inputs, **kwargs): - if pretrained_model_name_or_path in PRETRAINED_MODEL_ARCHIVE_MAP: - model_file = PRETRAINED_MODEL_ARCHIVE_MAP[pretrained_model_name_or_path] - config_file = PRETRAINED_CONFIG_ARCHIVE_MAP[pretrained_model_name_or_path] - else: - model_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME) - config_file = os.path.join(pretrained_model_name_or_path, CONFIG_NAME) - - try: - resolved_model_file = cached_path(model_file, cache_dir=cache_dir) - resolved_config_file = cached_path(config_file, cache_dir=cache_dir) - except EnvironmentError: - logger.error("Wrong model name, should be a valid path to a folder containing " - "a {} file and a {} file or a model name in {}".format( - WEIGHTS_NAME, CONFIG_NAME, PRETRAINED_MODEL_ARCHIVE_MAP.keys())) - raise - - logger.info("loading model {} from cache at {}".format(pretrained_model_name_or_path, resolved_model_file)) - - # Load config - config = BigGANConfig.from_json_file(resolved_config_file) - logger.info("Model config {}".format(config)) - - # Instantiate model. 
- model = cls(config, *inputs, **kwargs) - state_dict = torch.load(resolved_model_file, map_location='cpu' if not torch.cuda.is_available() else None) - model.load_state_dict(state_dict, strict=False) - return model - - def __init__(self, config): - super(BigGAN, self).__init__() - self.config = config - self.embeddings = nn.Linear(config.num_classes, config.z_dim, bias=False) - self.generator = Generator(config) - self.n_latents = len(config.layers) + 1 # one for gen_z + one per layer - - def forward(self, z, class_label, truncation): - assert 0 < truncation <= 1 - - if not isinstance(z, list): - z = self.n_latents*[z] - - if isinstance(class_label, list): - embed = [self.embeddings(l) for l in class_label] - else: - embed = self.n_latents*[self.embeddings(class_label)] - - assert len(z) == self.n_latents, f'Expected {self.n_latents} latents, got {len(z)}' - assert len(embed) == self.n_latents, f'Expected {self.n_latents} class vectors, got {len(class_label)}' - - cond_vectors = [torch.cat((z, e), dim=1) for (z, e) in zip(z, embed)] - z = self.generator(cond_vectors, truncation) - return z - - -if __name__ == "__main__": - import PIL - from .utils import truncated_noise_sample, save_as_images, one_hot_from_names - from .convert_tf_to_pytorch import load_tf_weights_in_biggan - - load_cache = False - cache_path = './saved_model.pt' - config = BigGANConfig() - model = BigGAN(config) - if not load_cache: - model = load_tf_weights_in_biggan(model, config, './models/model_128/', './models/model_128/batchnorms_stats.bin') - torch.save(model.state_dict(), cache_path) - else: - model.load_state_dict(torch.load(cache_path)) - - model.eval() - - truncation = 0.4 - noise = truncated_noise_sample(batch_size=2, truncation=truncation) - label = one_hot_from_names('diver', batch_size=2) - - # Tests - # noise = np.zeros((1, 128)) - # label = [983] - - noise = torch.tensor(noise, dtype=torch.float) - label = torch.tensor(label, dtype=torch.float) - with torch.no_grad(): - outputs = model(noise, label, truncation) - print(outputs.shape) - - save_as_images(outputs) diff --git a/spaces/sidharthism/fashion-eye/utils.py b/spaces/sidharthism/fashion-eye/utils.py deleted file mode 100644 index 5498289425bb70e959c0194eb7c6fab63e0c045a..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/utils.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. 
- -import string -import numpy as np -from pathlib import Path -import requests -import pickle -import sys -import re -import gdown - -def prettify_name(name): - valid = "-_%s%s" % (string.ascii_letters, string.digits) - return ''.join(map(lambda c : c if c in valid else '_', name)) - -# Add padding to sequence of images -# Used in conjunction with np.hstack/np.vstack -# By default: adds one 64th of the width of horizontal padding -def pad_frames(strip, pad_fract_horiz=64, pad_fract_vert=0, pad_value=None): - dtype = strip[0].dtype - if pad_value is None: - if dtype in [np.float32, np.float64]: - pad_value = 1.0 - else: - pad_value = np.iinfo(dtype).max - - frames = [strip[0]] - for frame in strip[1:]: - if pad_fract_horiz > 0: - frames.append(pad_value*np.ones((frame.shape[0], frame.shape[1]//pad_fract_horiz, 3), dtype=dtype)) - elif pad_fract_vert > 0: - frames.append(pad_value*np.ones((frame.shape[0]//pad_fract_vert, frame.shape[1], 3), dtype=dtype)) - frames.append(frame) - return frames - - -def download_google_drive(url, output_name): - print('Downloading', url) - gdown.download(url, str(output_name)) - # session = requests.Session() - # r = session.get(url, allow_redirects=True) - # r.raise_for_status() - - # # Google Drive virus check message - # if r.encoding is not None: - # tokens = re.search('(confirm=.+)&id', str(r.content)) - # assert tokens is not None, 'Could not extract token from response' - - # url = url.replace('id=', f'{tokens[1]}&id=') - # r = session.get(url, allow_redirects=True) - # r.raise_for_status() - - # assert r.encoding is None, f'Failed to download weight file from {url}' - - # with open(output_name, 'wb') as f: - # f.write(r.content) - -def download_generic(url, output_name): - print('Downloading', url) - session = requests.Session() - r = session.get(url, allow_redirects=True) - r.raise_for_status() - - # No encoding means raw data - if r.encoding is None: - with open(output_name, 'wb') as f: - f.write(r.content) - else: - download_manual(url, output_name) - -def download_manual(url, output_name): - outpath = Path(output_name).resolve() - while not outpath.is_file(): - print('Could not find checkpoint') - print(f'Please download the checkpoint from\n{url}\nand save it as\n{outpath}') - input('Press any key to continue...') - -def download_ckpt(url, output_name): - if 'drive.google' in url: - download_google_drive(url, output_name) - elif 'mega.nz' in url: - download_manual(url, output_name) - else: - download_generic(url, output_name) \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cara Download Nada Dering Facebook Terbaru dan Gratis.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cara Download Nada Dering Facebook Terbaru dan Gratis.md deleted file mode 100644 index a8baebf1b02973bfbd1538fcbdc8c61a1dee6ebe..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cara Download Nada Dering Facebook Terbaru dan Gratis.md +++ /dev/null @@ -1,88 +0,0 @@ - -

              How to Download Nada Dering Facebook for Your Phone

              -

              Do you want to spice up your Facebook notifications and calls with some cool and unique sounds? If so, you might be interested in downloading Nada Dering Facebook for your phone. Nada Dering Facebook is a term that refers to the custom ringtones and sounds that you can use for your Facebook Messenger app. In this article, we will explain what Nada Dering Facebook is, how to change the notification sound and call ringtone in Facebook Messenger, and how to find and download the best Facebook ringtones for your phone.

              -

              download nada dering facebook


              DOWNLOADhttps://ssurll.com/2uNZRx



              -

              What is Nada Dering Facebook?

              -

              Nada Dering Facebook is a term that originated from Indonesia, where it means "Facebook ringtone". It is used to describe the custom sounds that you can use for your Facebook notifications and calls. These sounds can be anything from music, voice clips, sound effects, or even animal noises. You can download these sounds from various websites and apps, or create your own using audio editing tools.

              -

              The meaning and origin of Nada Dering Facebook

              -

              The term Nada Dering Facebook was coined by Indonesian netizens who wanted to have more fun and variety with their Facebook Messenger app. They started to search for different sounds that they could use as their notification sound or call ringtone, and shared them with their friends and online communities. The term became popular among Indonesian Facebook users, and soon spread to other countries as well.

              -

              The benefits and features of Nada Dering Facebook

              -

              There are many benefits and features of using Nada Dering Facebook for your phone. Some of them are:

              -
                -
              • You can personalize your Facebook Messenger app with sounds that suit your taste and mood.
              • -
              • You can express your personality and style with sounds that reflect your interests and hobbies.
              • -
              • You can make your notifications and calls more fun and enjoyable with sounds that make you laugh or smile.
              • -
              • You can distinguish your notifications and calls from other apps and contacts with sounds that are unique and recognizable.
              • -
              • You can surprise and impress your friends and family with sounds that they have never heard before.
              • -
              -

              How to Change the Notification Sound and Call Ringtone in Facebook Messenger

              es for your phone. You can search by keywords, categories, or popularity. You can also create your own ringtones using the Zedge Ringtone Maker. You can download the ringtones directly to your phone or send them to your email. Zedge is available as a website or as an app for Android and iOS devices.

              -

              Best Ringtones Net

              -

Best Ringtones Net is another website that offers free ringtones, wallpapers, games, and more. You can find a variety of Facebook ringtones for your phone, ranging from funny and cute to romantic and scary. You can also upload your own ringtones and share them with other users. You can download the ringtones directly to your phone or send them to your email. Best Ringtones Net is available as a website or as an app for Android devices.

              -

              Best Free Ringtones

              -

              Best Free Ringtones is an app that offers free ringtones, notification sounds, alarm sounds, and more. You can find hundreds of Facebook ringtones for your phone, as well as other popular sounds from social media apps, games, movies, and TV shows. You can also record your own voice and make it into a ringtone. You can download the ringtones directly to your phone or share them with your friends. Best Free Ringtones is available as an app for Android devices.

              -

              Some tips and tricks to choose and download the best Facebook ringtones for your phone

              -

              Here are some tips and tricks to help you choose and download the best Facebook ringtones for your phone:

              -

              Listen to the previews before downloading

              -

              Before you download any ringtone, make sure you listen to the preview first. This will help you avoid downloading low-quality or inappropriate sounds that might annoy you or others. You can also compare different sounds and choose the one that suits your preference.

              -

              Download nada dering wa terbaru 2022 mp3 viral
              -Download nada dering facebook gratis dan mudah
              -Download nada dering facebook messenger keren
              -Download nada dering facebook lite lucu
              -Download nada dering facebook populer dan unik
              -Download nada dering facebook terbaik dan terlengkap
              -Download nada dering facebook untuk android
              -Download nada dering facebook untuk iphone
              -Download nada dering facebook untuk semua hp
              -Download nada dering facebook versi lama
              -Cara download nada dering facebook di hp
              -Cara download nada dering facebook di laptop
              -Cara download nada dering facebook dari youtube
              -Cara download nada dering facebook tanpa aplikasi
              -Cara download nada dering facebook dengan mudah
              -Cara mengganti nada dering facebook di hp
              -Cara mengganti nada dering facebook di laptop
              -Cara mengganti nada dering facebook dengan suara sendiri
              -Cara mengganti nada dering facebook dengan lagu favorit
              -Cara mengganti nada dering facebook dengan suara lucu
              -Kumpulan nada dering facebook mp3 gratis
              -Kumpulan nada dering facebook keren dan unik
              -Kumpulan nada dering facebook lucu dan gokil
              -Kumpulan nada dering facebook terbaru dan terpopuler
              -Kumpulan nada dering facebook original dan asli
              -Nada dering facebook mp3 download free
              -Nada dering facebook keren download gratis
              -Nada dering facebook lucu download mp3
              -Nada dering facebook terbaru download free
              -Nada dering facebook original download gratis
              -Nada Dering - Facebook profile and page
              -Nada Dering Profiles | Facebook - find people with this name
              -Nada Dering Facebook Group - join and share your ringtones
              -Nada Dering Facebook Community - connect with other users
              -Nada Dering Facebook Fans - follow and like the page
              -Review nada dering facebook terbaik dan terbaru
              -Review nada dering facebook keren dan unik
              -Review nada dering facebook lucu dan gokil
              -Review nada dering facebook original dan asli
              -Review cara download dan mengganti nada dering facebook

              -

              Check the compatibility and quality of the ringtones

              -

              Not all ringtones are compatible with all phones and apps. Some ringtones might not work properly or sound distorted on your device. To avoid this, check the compatibility and quality of the ringtones before downloading them. Look for ringtones that have high ratings, positive reviews, and clear descriptions. You can also check the file format, size, and duration of the ringtones.

              -

              Customize and edit the ringtones if needed

              -

              If you want to make your Facebook ringtones more personal and unique, you can customize and edit them using audio editing tools. You can trim, crop, merge, split, fade in/out, adjust volume, add effects, and more. You can also mix different sounds together and create your own mashups.
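If you are comfortable with a little scripting, the rough sketch below shows one way the trimming, fading, and volume steps could be chained together. It assumes the Python pydub library (with FFmpeg) is installed, and the file names are only placeholders, so treat it as an illustration of the idea rather than a required tool:

```python
# A minimal sketch, assuming pydub and FFmpeg are installed;
# "my_song.mp3" and "my_ringtone.mp3" are placeholder file names.
from pydub import AudioSegment

song = AudioSegment.from_file("my_song.mp3")

# Keep a 20-second slice starting at the 30-second mark (times are in milliseconds).
clip = song[30_000:50_000]

# Smooth the start and end, then raise the overall volume by 3 dB.
clip = clip.fade_in(500).fade_out(1000) + 3

# Export the result so it can be set as a notification sound or ringtone.
clip.export("my_ringtone.mp3", format="mp3")
```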

              -

              Conclusion

              -

              Nada Dering Facebook is a term that refers to the custom ringtones and sounds that you can use for your Facebook Messenger app. It is a fun and easy way to personalize your notifications and calls with sounds that match your taste and mood. In this article, we have explained what Nada Dering Facebook is, how to change the notification sound and call ringtone in Facebook Messenger, and how to find and download the best Facebook ringtones for your phone. We hope you have enjoyed this article and learned something new.

              -

              FAQs

              -

              Here are some frequently asked questions about Nada Dering Facebook:

              -
                -
              1. What is the difference between Nada Dering Facebook and Nada SMS?
              2. -

                Nada Dering Facebook is a term that specifically refers to the custom sounds that you can use for your Facebook Messenger app. Nada SMS is a more general term that refers to any custom sound that you can use for your text messages or SMS.

                -
              3. Can I use Nada Dering Facebook for other apps?
              4. -

                Yes, you can use Nada Dering Facebook for other apps that allow you to change the notification sound and call ringtone. However, some apps might have their own default sounds that cannot be changed.

                -
              5. How do I delete Nada Dering Facebook from my phone?
              6. -

                If you want to delete Nada Dering Facebook from your phone, you can go to the settings of your phone or app and choose a different sound or turn off the sound completely. You can also delete the downloaded files from your phone's storage or memory card.

                -
              7. Where can I get more Nada Dering Facebook?
              8. -

                You can get more Nada Dering Facebook from various websites and apps that offer free ringtones, such as Zedge, Best Ringtones Net, Best Free Ringtones, and more. You can also create your own Nada Dering Facebook using audio editing tools or recording your own voice.

                -
              9. Is Nada Dering Facebook legal and safe?
              10. -

                Nada Dering Facebook is legal and safe as long as you use it for personal and non-commercial purposes. However, you should be careful when downloading Nada Dering Facebook from unknown or untrusted sources, as they might contain viruses, malware, or spyware. You should also respect the intellectual property rights of the original creators and owners of the sounds.

                -

              -
              -
              \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Color by Number Explore Diverse and Inclusive Art.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Color by Number Explore Diverse and Inclusive Art.md deleted file mode 100644 index 96fdf0927bc17c30b5eae8bcc101a761759c0ae6..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Color by Number Explore Diverse and Inclusive Art.md +++ /dev/null @@ -1,114 +0,0 @@ - -

              Color by Number: A Fun and Relaxing Art Game for Everyone

              -

              Do you love coloring and painting? Do you want to create beautiful artworks without any artistic skills? Do you need a break from stress and boredom? If you answered yes to any of these questions, then you should try color by number games. Color by number games are a type of art game where you fill in the numbered squares with the corresponding colors to create a pixelated picture. They are easy, fun, and relaxing for both kids and adults. In this article, we will explain what color by number games are, how to play them, what benefits they offer, and what are some of the best color by number games to try.

              -

              color by number


              Download File ===> https://ssurll.com/2uNYr9



              -

              What is color by number?

              -

              Color by number is a form of art where you use a numbered guide to fill in the colors of a picture. The picture is divided into small squares, each with a number that corresponds to a color in your palette. By following the numbers, you can create a colorful image without any drawing skills. Color by number games are digital versions of this art form, where you use your mouse or finger to click or tap on the squares and fill them with colors. You can choose from various themes, such as animals, flowers, landscapes, cartoons, and more. You can also import your own photos and turn them into color by number artworks.

              -

              How to play color by number games

              -

              The gameplay of color by number games is very simple and straightforward. Here are the basic steps to follow:

              -
                -
              1. Choose a picture you want to color from the menu screen. You can browse different categories or search for specific keywords.
              2. -
              3. Look at the numbers at the bottom of the screen. They show you the colors you need to use and how many squares each color has.
              4. -
              5. Select a color and click or tap on the squares with the same number. You can zoom in or out to see the details better.
              6. -
              7. Fill in all the squares with the right colors until you complete the picture. You can use hints or tools to help you if you get stuck.
              8. -
              9. Admire your masterpiece and share it with your friends or family.
              10. -
              -

              Benefits of color by number games

              -

              Color by number games are not only fun and entertaining, but also beneficial for your mental and physical health. Here are some of the benefits they offer:

              -

              For kids

              -
                -
              • They improve concentration and focus. Kids have to pay attention to the numbers and colors and match them correctly.
              • -
              • They develop fine motor skills and hand-eye coordination. Kids have to use their fingers or mouse to click or tap on the small squares.
              • -
              • They enhance creativity and imagination. Kids can choose from different pictures and colors and create their own artworks.
              • -
              • They teach basic math and logic skills. Kids have to count the numbers and follow the rules of the game.
              • -
              • They foster learning and curiosity. Kids can learn about different animals, plants, cultures, and artists from the pictures they color.
              • -
              -

              For adults

              -
                -
              • They reduce stress and anxiety. Adults can relax and unwind with the soothing and satisfying activity of coloring.
              • -
              • They boost mood and happiness. Adults can feel proud and accomplished when they finish a picture and see their colorful results.
              • -
              • They stimulate brain activity and memory. Adults can challenge their brain and recall different colors and numbers.
              • -
              • They promote mindfulness and meditation. Adults can focus on the present moment and forget about their worries and problems.
              • -
              • They express personality and emotions. Adults can choose pictures and colors that reflect their interests and feelings.
              • -
              -

              Best color by number games to try

              -


              If you are looking for some of the best color by number games to try, here are some of our recommendations:

              -

              color by number online
              -color by number app
              -color by number for adults
              -color by number printables
              -color by number books
              -color by number worksheets
              -color by number games
              -color by number pixel art
              -color by number animals
              -color by number mandala
              -color by number flowers
              -color by number unicorn
              -color by number christmas
              -color by number disney
              -color by number halloween
              -color by number free
              -color by number easy
              -color by number hard
              -color by number kids
              -color by number math
              -color by number multiplication
              -color by number addition
              -color by number subtraction
              -color by number division
              -color by number fractions
              -color by number kindergarten
              -color by number preschool
              -color by number first grade
              -color by number second grade
              -color by number third grade
              -color by number fourth grade
              -color by number fifth grade
              -color by number middle school
              -color by number high school
              -color by number art
              -color by number painting
              -color by number drawing
              -color by number coloring pages
              -color by number bible stories
              -color by number superheroes
              -color by number dinosaurs
              -color by number princesses
              -color by number mermaids
              -color by number dragons
              -color by number cars
              -color by number trucks
              -color by number trains
              -color by number planes
              -color by number boats

              -

              Color by Number: Coloring Game - Apps on Google Play

              -

              This is a free and easy color by number game that offers over 10,000 pictures to choose from. You can find various categories, such as animals, flowers, mandalas, unicorns, and more. You can also create your own custom pictures and share them with other players. The game has a smooth and intuitive interface, and you can use different tools, such as a magic wand, a bomb, or a paint bucket, to make your coloring faster and easier. You can also enjoy relaxing music and sound effects while you color.

              -

Color by Number 🕹️ Play Color by Number on CrazyGames

              -

              This is a free online color by number game that you can play on your browser. You can paint various cute animals, such as a pig, a cat, an octopus, an elephant, a sloth, and a baby chicken. You can zoom in and out to see the details better and use hints if you need help. The game is great for kids and adults who want to have some fun and relax with coloring.

              -

              Happy Color

              -

              This is a popular color by number app that has over 50 million downloads and 4.6 stars on the App Store and Google Play. You can access thousands of pictures in different themes, such as nature, fashion, Disney, Marvel, art, and more. You can also import your own photos and turn them into color by number artworks. The app has a simple and user-friendly design, and you can share your creations with your friends on social media.

              -

              Conclusion

              -

Color by number games are a fun and relaxing art activity for everyone. They are easy to play and offer many benefits for your mental and physical health. They can improve your concentration, creativity, mood, memory, mindfulness, and more. They can also help you express yourself and learn new things. Whether you are a kid or an adult, you can enjoy coloring by numbers with various pictures and themes. You can also try some of the best color by number games we recommended in this article. So what are you waiting for? Grab your device and start coloring!

              -

              FAQs

              -
                -
              1. What is the difference between color by number and paint by number?
              2. -

                Color by number is a digital version of paint by number, where you use colors on your device instead of paints on a canvas. Both are similar in concept and gameplay, but color by number is more convenient and accessible.

                -
              3. How do I create my own color by number picture?
              4. -

                Some color by number apps allow you to import your own photos and turn them into color by number pictures. You can also use online tools or websites that can convert any image into a color by number template.
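For a rough idea of what such converters do behind the scenes, the short sketch below shrinks a photo to a small grid and reduces it to a handful of numbered colors. It assumes the Python Pillow library is installed, and the file name, grid size, and palette size are only example values:

```python
# A minimal sketch, assuming Pillow is installed; "photo.jpg", GRID, and COLORS are placeholders.
from PIL import Image

GRID = 48    # cells per side of the template
COLORS = 12  # how many numbered colors the template uses

img = Image.open("photo.jpg").convert("RGB")
small = img.resize((GRID, GRID))

# Quantize to COLORS palette entries; each pixel now stores a palette index (its "number").
paletted = small.convert("P", palette=Image.ADAPTIVE, colors=COLORS)

# Read the numbers back as a GRID x GRID sheet to color in.
indexes = list(paletted.getdata())
rows = [indexes[i * GRID:(i + 1) * GRID] for i in range(GRID)]
for row in rows[:3]:  # preview the first few rows
    print(" ".join(f"{n:2d}" for n in row))
```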

                -
              5. Can I print my color by number picture?
              6. -

                Yes, you can print your color by number picture after you finish coloring it. Some apps have a print option that lets you print directly from your device. You can also save your picture as an image file and print it from your computer.

                -
              7. How do I change the colors in my palette?
              8. -

                Some color by number apps let you customize the colors in your palette. You can either choose from different presets or create your own colors using sliders or codes.

                -
              9. What are some tips to improve my color by number skills?
              10. -

                Some tips to improve your color by number skills are:

                -
                  -
                • Start with simple pictures that have fewer colors and details.
                • -
                • Use hints or tools if you get stuck or confused.
                • -
                • Zoom in or out to see the picture better.
                • -
                • Follow the numbers carefully and avoid mistakes.
                • -
                • Have fun and enjoy the process.
                • -
                -

              -
              -
              \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Feng Shui 101 How to Create Balance and Harmony in Your Home.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Feng Shui 101 How to Create Balance and Harmony in Your Home.md deleted file mode 100644 index 33e751dff75326ff45876a14fd47a42f4c6eaf80..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Feng Shui 101 How to Create Balance and Harmony in Your Home.md +++ /dev/null @@ -1,135 +0,0 @@ -
              -

              Feng Shui: A Beginner's Guide to Harmonizing Your Home and Life

              -

              Have you ever wondered how your environment affects your mood, health, and well-being? Do you want to create a more balanced and harmonious living space that supports your goals and desires? If so, you might be interested in learning about feng shui, an ancient Chinese practice that helps you align your energies with your surroundings.

              -

              feng shui


              Download ⚙⚙⚙ https://ssurll.com/2uNUzg



              -

Feng shui translates to "wind and water" in Chinese. It is based on the idea that everything has its own energy, or Chi, which flows through the land and through our bodies. Feng shui aims to tap into a beneficial, balanced flow of energy and to avoid a harmful, imbalanced one. It uses the five elements of earth, fire, water, metal, and wood, together with the nine areas of life, or Guas, to create harmony and support our goals and desires. Feng shui provides a system for studying how we interact with our environment and for making changes that improve the feel and quality of our spaces.

              -

              In this article, we will introduce you to the basic principles of feng shui, such as the five elements, the bagua map, and some tips for applying feng shui in your home. By following these guidelines, you can enhance your living space and transform your life.

              -

              The Five Elements of Feng Shui

              -

Feng shui uses the five-element system, which comes from Taoist philosophy. This system looks at the cycles of nature and how they work together to stay in balance. The five elements are earth, metal, water, wood, and fire. Each element is associated with certain qualities, as well as with colors and shapes that you can use as design elements if you’d like to enhance those qualities in your home and life.

              -

              Feng shui tips for beginners
              -Feng shui bedroom layout ideas
              -Feng shui colors for living room
              -Feng shui plants for wealth and prosperity
              -Feng shui home office design
              -Feng shui bathroom decor
              -Feng shui crystals and their meanings
              -Feng shui compass directions
              -Feng shui bagua map for 2023
              -Feng shui cures for bad luck
              -Feng shui symbols for love and marriage
              -Feng shui art for walls
              -Feng shui mirrors placement rules
              -Feng shui aquarium placement and number of fish
              -Feng shui wind chimes benefits and best locations
              -Feng shui coins meaning and usage
              -Feng shui dragon statue placement and symbolism
              -Feng shui turtle meaning and how to use it
              -Feng shui bamboo plant care and placement
              -Feng shui salt water cure for negative energy
              -Feng shui books for beginners and experts
              -Feng shui courses online and offline
              -Feng shui consultant near me and how to choose one
              -Feng shui certification programs and requirements
              -Feng shui podcast recommendations and reviews
              -Feng shui jewelry for protection and luck
              -Feng shui candles for different purposes and occasions
              -Feng shui essential oils and diffusers
              -Feng shui rugs and carpets for different rooms
              -Feng shui pillows and bedding for better sleep
              -Feng shui kitchen tips and tricks
              -Feng shui garden design and landscaping
              -Feng shui door color and direction
              -Feng shui fountain placement and types
              -Feng shui clock placement and best time to hang it
              -Feng shui calendar 2023 and auspicious dates
              -Feng shui numerology and lucky numbers
              -Feng shui animals and their meanings
              -Feng shui elements and how to balance them
              -Feng shui meditation techniques and benefits
              -Feng shui vs. vastu shastra: similarities and differences
              -Feng shui gifts for friends and family
              -Feng shui house numbers and what they mean
              -Feng shui wallpaper for desktop and mobile devices
              -Feng shui quotes and sayings for inspiration and motivation

              - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
              ElementQualitiesColorsShapes
              EarthSelf-care, boundaries, nourishmentYellow, orange, brownSquare, heavy
              MetalJoy, beauty, precisionWhite, gray, metallicCircular
              WaterWisdom, connection, emotionBlack, dark blueWavy
              WoodGrowth, healing, vitalityGreen, blue, tealTall, columnar
              FirePassion, inspiration, visibilityRed, bright orangeTriangle
              -

You can add the elements to your home by using objects, colors, or shapes that represent them. For example, you can add a green plant for the wood element or a red lamp for the fire element. You can also balance the elements by using them in moderation and avoiding excess or deficiency. For example, too much of the fire element can cause agitation or anger, while too little can cause dullness or depression.

              -

              The Feng Shui Bagua Map

              -

              The bagua map is a tool that helps you identify the different areas of your home and how they correspond to different aspects of your life. The word "bagua" means "eight symbols" in Chinese. These symbols are connected to eight life areas or Guas:

              -
                -
              • Career (North)
              • -
              • Knowledge (Northeast)
              • -
              • Family (East)
              • -
              • Wealth (Southeast)
              • -
              • Fame (South)
              • -
              • Relationships (Southwest)
              • -
              • Children (West)
              • -
              • Helpful People (Northwest)
              • -
              -

              The ninth area is the center, which represents your health and well-being.

              -

              To use the bagua map, you need to align it with your floor plan. You can do this by using a compass or an app to find the north direction of your home. Then, place the bagua map over your floor plan so that the career area matches the north direction. You can also use the main entrance of your home as a reference point and align the bagua map so that the career, knowledge, or helpful people area is closest to the door.

              -

              Once you have aligned the bagua map, you can see which area of your home corresponds to which area of your life. You can then use feng shui principles to enhance each area by adding or removing elements, colors, shapes, or objects that support or hinder the energy flow. For example, if you want to improve your wealth area, you can add some wood element, such as a green plant or a wooden bowl, to stimulate growth and abundance. Or, if you want to reduce stress in your career area, you can remove some fire element, such as a red wall or a candle, to calm down and relax.

              -

              Feng Shui Tips for Your Home

              -

              Now that you have learned about the five elements and the bagua map, you might be wondering how to apply feng shui in your home. Here are some simple and practical tips that can help you create a more harmonious and supportive living space:

              -
                -
              • Clear the clutter: Clutter is anything that you don't need, use, or love. It blocks the energy flow and creates stagnation and confusion. By clearing the clutter, you free up space and allow fresh and positive energy to enter your home and life.
              • -
              • Fix what is broken: Broken things represent broken energy. They can cause frustration and disappointment. By fixing what is broken, you restore the energy flow and show respect and care for your home and yourself.
              • -
              • Let in natural light and fresh air: Light and air are essential for life and energy. They brighten up your space and lift up your mood. By letting in natural light and fresh air, you invite clarity and vitality into your home and life.
              • -
              • Add some plants: Plants are living beings that bring life force and wood element into your home. They purify the air, absorb negative energy, and promote healing and growth. By adding some plants, you connect with nature and enhance your well-being.
              • -
              • Use mirrors wisely: Mirrors are powerful tools that can reflect and amplify energy. They can also create illusions and confusion. By using mirrors wisely, you can double the positive energy and avoid the negative one. For example, you can place a mirror in your wealth area to attract more abundance or in your dining area to increase your appetite. But you should avoid placing a mirror in front of your bed or door as it can disturb your sleep or block your opportunities.
              • -
              -

              Conclusion

              -

              Feng shui is an ancient Chinese practice that helps you harmonize your home and life by aligning your energies with your surroundings. It uses the five elements of earth, metal, water, wood, and fire, and the nine areas of life or Guas, to create balance and support for your goals and desires. Feng shui provides a system to study how we interact with our environment and how to make changes to improve the feeling and quality of our spaces.

              -

              By following some basic principles of feng shui, such as clearing the clutter, fixing what is broken, letting in natural light and fresh air, adding some plants, and using mirrors wisely, you can enhance your living space and transform your life. You can also use the bagua map to identify the different areas of your home and how they correspond to different aspects of your life. You can then use feng shui elements, colors, shapes, or objects to enhance each area according to your needs and preferences.

              -

              Feng shui is not a rigid or complicated system that requires expensive or elaborate items. It is a flexible and creative system that adapts to different situations and cultures. It is a way of living that respects nature and ourselves. It is a way of creating harmony and happiness in our homes and lives.

              -

              FAQs

              -

              What is feng shui?

              -

              Feng shui is an ancient Chinese practice that helps you harmonize your home and life by aligning your energies with your surroundings.

              -

              What are the five elements of feng shui?

              -

              The five elements of feng shui are earth, metal, water, wood, and fire. Each element is associated with certain qualities, as well as colors and shapes that can be used as design elements in your home.

              -

              What is the bagua map?

              -

              The bagua map is a tool that helps you identify the different areas of your home and how they correspond to different aspects of your life. The bagua map consists of eight symbols or Guas that represent eight life areas: career, knowledge, family, wealth, fame, relationships, children, and helpful people. The ninth area is the center, which represents your health and well-being.

              -

              How do I apply feng shui in my home?

              -

              You can apply feng shui in your home by using the five elements and the bagua map to enhance the energy flow and harmony in your living space. You can also follow some simple and practical tips, such as clearing the clutter, fixing what is broken, letting in natural light and fresh air, adding some plants, and using mirrors wisely.

              -

              What are the benefits of feng shui?

              -

              Feng shui can help you create a more balanced and harmonious living space that supports your goals and desires. It can also help you improve your mood, health, and well-being by aligning your energies with your surroundings.

              -

              Where can I learn more about feng shui?

              -

              You can learn more about feng shui by reading books, articles, blogs, or watching videos on the topic. You can also consult a professional feng shui consultant or take a course or workshop on feng shui.

              -
              -
              \ No newline at end of file diff --git a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/modules/codebooks_patterns.py b/spaces/simsantonioii/MusicGen-Continuation/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/simsantonioii/MusicGen-Continuation/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. - The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a given a dense input tensor of multi-codebook sequence from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with S being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. - See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. 
- A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). - """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. - """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. 
- """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. 
- is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? - timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. - The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. - special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. 
We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. - """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. 
- Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total numer of timesteps. - """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. 
- """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. - """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/skf15963/summary/fengshen/examples/PPVAE/generate.py b/spaces/skf15963/summary/fengshen/examples/PPVAE/generate.py deleted file mode 100644 index 1bbd369768cf1b903b4edf642836d28dc5a09274..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/PPVAE/generate.py +++ /dev/null @@ -1,24 +0,0 @@ -import torch -from transformers import BertTokenizer,T5Tokenizer -from fengshen.models.PPVAE.pluginVAE import PPVAEModel -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -encoder_tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Randeng-PPVAE-1.2B-Augmentation-Chinese") -decoder_tokenizer = T5Tokenizer.from_pretrained("IDEA-CCNL/Randeng-PPVAE-1.2B-Augmentation-Chinese", eos_token = '<|endoftext|>', pad_token = '',extra_ids=0) -decoder_tokenizer.add_special_tokens({'bos_token':''}) -ppvae_model = PPVAEModel.from_pretrained("IDEA-CCNL/Randeng-PPVAE-1.2B-Augmentation-Chinese").to(device) -input_texts = [ - "非常好的一个博物馆,是我所有去过的博物馆里感觉最正规的一家,凭有效证件可以入馆,可以自助免费存小件物品,讲解员和馆内外的工作人员也非常认真,其他的服务人员也很热情,非常好的!馆内的藏品也让人非常震撼!希望继续保持~", - "这是我来长沙最最期待的一定要去的地方,总算今天特地去瞻仰千古遗容了,开车到门口大屏幕显示着门票已发完的字样,心里一惊以为今天是白来了。但进了停车场才知道凭停车卡和有效身份证里面也能领,停车还不花钱,真好。", - "地方很大 很气派~~可以逛很久~~~去的时候是免费的~不过要安检~~~里面的马王堆~幸追夫人~还是很不错的~~~~去的时候有一个吴越文化特别展~~~东西也很多~~~~~很好看", - "我们到达的时候是下午3点,门票已经发完了。当时正焦虑的不知道怎么办才好,门卫大哥给我们俩补办了门票,这才得以入馆。非常感谢!绝对不虚此行!相当震撼的展览!原来古人也化妆,还有假发。记忆最深的是那个藕汤。可惜真颜已不得见。", - "去过三次,个人认为这是长沙最值得去的地方,博物馆的重点就是辛追,遗憾的是,每次去我都会感到悲哀,虽然我三次去的时候都要门票,但是每次看到辛追,都觉得现代的人类不应该挖她出来,除了第一次我觉得辛追像刚死去一样,后来两次我觉得太惨不忍睹了。建议大家要去就早去,以后肯定越来越腐烂", - "上大学时候去的,当时学生证是半价25,后来凭有效证件就不要钱了。非常喜欢的一家博物馆,里面可看的东西很多,当然最吸引我的就是那个辛追夫人和“素纱单衣”,果然不是盖的~里面的讲解员大部分都是师大学历史类的,非常专业和有耐心。虽然不在长沙了,不过对那里还是很有感情的,赞~~~", - "这两年也有很多机会去博物馆。。。不过还是想说湖南省博物馆是非常有特色的。。。应该说整个展览分成两个部分吧。。。一个部分是马王堆的主体展。。。另一个就是湖南的一些考古发现。。。其实来省博大部分的游客还是冲着马王堆来的吧。。。博物馆也很有心的为每一批游客安排了讲解员。。。从马王堆的发现到马王堆出土文物的介绍再到最后棺木和辛追的介绍。。。真是上了一节很生动的历史课。", - "网上订票去的,还是很顺利的就进去了,里面挺清净的,外围的环境也不错,还有鸽子可以喂。那天不是很闹,兜了一圈感觉还是很顺畅的,老娘娘和金缕玉衣挺震撼的。到此一游还是挺需要的", -] - -ppvae_model.train_plugin(encoder_tokenizer,decoder_tokenizer,input_texts,negative_samples=None) -# n:输出样本数量 -texts = ppvae_model.generate(n=5) -print(texts) \ No newline at end of file diff --git 
a/spaces/sonali-tamhankar/WA-Hospital-Regulations-Chatbot/app.py b/spaces/sonali-tamhankar/WA-Hospital-Regulations-Chatbot/app.py deleted file mode 100644 index 6593ebbfba2fcc09561723bfa5f54dc3e0f61b2d..0000000000000000000000000000000000000000 --- a/spaces/sonali-tamhankar/WA-Hospital-Regulations-Chatbot/app.py +++ /dev/null @@ -1,141 +0,0 @@ -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from langchain import HuggingFaceHub -from langchain.chains import RetrievalQA -import streamlit as st - -st.set_page_config(page_title = "Hospital Regulatory Chat", page_icon=":hospital:") - - -DB_FAISS_PATH = '.' - -def get_vectorstore(): - embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2", - model_kwargs={'device': 'cpu'}) - vector_store = FAISS.load_local(DB_FAISS_PATH, embeddings) - return vector_store - -vector_store = get_vectorstore() - -llm = HuggingFaceHub(repo_id = "meta-llama/Llama-2-7b-chat-hf",model_kwargs={"temperature":0.5}) #, "max_length":512}) - -qa_chain = RetrievalQA.from_chain_type(llm=llm, - chain_type='stuff', - retriever=vector_store.as_retriever(search_kwargs={'k': 10}), - #retriever=vector_store.as_retriever(search_kwargs={"score_threshold": .01}), - return_source_documents = True - ) - - -source_dictionary = {"pdf_files\\CMS_SOMA.pdf":"[CMS State Operations Manual Appendix A](https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/som107ap_a_hospitals.pdf)", -'pdf_files\\RCW-18-64.pdf': '[Pharmacists](https://app.leg.wa.gov/rcw/default.aspx?cite=18.64&full=true&pdf=true)', -'pdf_files\\RCW-18-64A.pdf': '[Pharmacy Assistants](https://app.leg.wa.gov/rcw/default.aspx?cite=18.64A&full=true&pdf=true)', -'pdf_files\\RCW-18-130.pdf': '[Regulation of Health Professions—Uniform Disciplinary Act](https://app.leg.wa.gov/rcw/default.aspx?cite=18.130&full=true&pdf=true)', -'pdf_files\\RCW-26-44-030.pdf': '[Abuse of Children and Adult Dependent Persons](https://app.leg.wa.gov/rcw/default.aspx?cite=26.44.030&full=true&pdf=true)', -'pdf_files\\RCW-34-05.pdf': '[Administrative Procedures Act](https://app.leg.wa.gov/rcw/default.aspx?cite=34.05&full=true&pdf=true)', -'pdf_files\\RCW-42-56.pdf': '[Public Disclosure](https://app.leg.wa.gov/rcw/default.aspx?cite=42.56&full=true&pdf=true)', -'pdf_files\\RCW-43-70.pdf': '[Department of Health](https://app.leg.wa.gov/rcw/default.aspx?cite=43.70&full=true&pdf=true)', -'pdf_files\\RCW-69-04.pdf': '[Uniform Food, Drug and Cosmetic Act](https://app.leg.wa.gov/rcw/default.aspx?cite=69.04&full=true&pdf=true)', -'pdf_files\\RCW-69-36.pdf': '[Washington Caustic Poison Act of 1929](https://app.leg.wa.gov/rcw/default.aspx?cite=69.36&full=true&pdf=true)', -'pdf_files\\RCW-69-38.pdf': '[Poisons - Sales and Manufacturing](https://app.leg.wa.gov/rcw/default.aspx?cite=69.38&full=true&pdf=true)', -'pdf_files\\RCW-69-40.pdf': '[Poisons and Dangerous Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.40&full=true&pdf=true)', -'pdf_files\\RCW-69-41.pdf': '[Legend Drugs . . . 
Prescription Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.41&full=true&pdf=true)', -'pdf_files\\RCW-69-43.pdf': '[Precursor Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.43&full=true&pdf=true)', -'pdf_files\\RCW-69-45.pdf': '[Drug Samples](https://app.leg.wa.gov/rcw/default.aspx?cite=69.45&full=true&pdf=true)', -'pdf_files\\RCW-69-48.pdf': '[Drug Take-Back Program\nSecure Medication Return webpage](https://app.leg.wa.gov/rcw/default.aspx?cite=69.48&full=true&pdf=true)', - 'pdf_files\\RCW-69-50.pdf': '[Uniform Controlled Substances Act](https://app.leg.wa.gov/rcw/default.aspx?cite=69.50&full=true&pdf=true)', - 'pdf_files\\RCW-69-51.pdf': '[Controlled Substances Therapeutic Research Act](https://app.leg.wa.gov/rcw/default.aspx?cite=69.51&full=true&pdf=true)', -'pdf_files\\RCW-69-51A.pdf': '[Medical Cannabis](https://app.leg.wa.gov/rcw/default.aspx?cite=69.51A&full=true&pdf=true)', -'pdf_files\\RCW-69-52.pdf': '[Imitation Controlled Substances](https://app.leg.wa.gov/rcw/default.aspx?cite=69.52&full=true&pdf=true)', -'pdf_files\\RCW-69-53.pdf': '[Use of Buildings for Unlawful Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.53&full=true&pdf=true)', - 'pdf_files\\RCW-69-60.pdf': '[Over-the-Counter Medications - Imprinting](https://app.leg.wa.gov/rcw/default.aspx?cite=69.60&full=true&pdf=true)', - 'pdf_files\\RCW-69-70.pdf': '[Access to prescription drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.70&full=true&pdf=true)', - 'pdf_files\\RCW-69-75.pdf': '[Dextromethorphan](https://app.leg.wa.gov/rcw/default.aspx?cite=69.75&full=true&pdf=true)', -'pdf_files\\RCW-70-02.pdf': '[Medical Records-Health Care Information Access and Disclosure](https://app.leg.wa.gov/rcw/default.aspx?cite=70.02&full=true&pdf=true)', -'pdf_files\\RCW-70-54.pdf': '[Miscellaneous Health and Safety Provisions\n\n70.54.400 Retail restroom access – Customers with medical conditions – Penalty\n\n\n70.54.440 Epinephrine autoinjectors – Prescribing to certain entities-training-Liability-incident reporting](https://app.leg.wa.gov/rcw/default.aspx?cite=70.54&full=true&pdf=true)', -'pdf_files\\RCW-70-115.pdf': '[Drug Injection Devices](https://app.leg.wa.gov/rcw/default.aspx?cite=70.115&full=true&pdf=true)', -'pdf_files\\RCW-70-225.pdf': '[Prescription Monitoring Program](https://app.leg.wa.gov/rcw/default.aspx?cite=70.225&full=true&pdf=true)', -'pdf_files\\RCW-70-245.pdf': '[The Washington Death With Dignity Act](https://app.leg.wa.gov/rcw/default.aspx?cite=70.245&full=true&pdf=true)', -'pdf_files\\RCW-74-34.pdf': '[Abuse of Vulnerable Adults](https://app.leg.wa.gov/rcw/default.aspx?cite=74.34&full=true&pdf=true)', -'pdf_files\\WAC-246-338.pdf': '[Medical Test Site Rules](https://app.leg.wa.gov/wac/default.aspx?cite=246-338&full=true&pdf=true)', -'pdf_files\\WAC-246-320.pdf': '[Hospital Licensing Regulations](https://app.leg.wa.gov/wac/default.aspx?cite=246-320&full=true&pdf=true)', -'pdf_files\\RCW-70-41.pdf': '[Hospital Licensing and Regulation](https://app.leg.wa.gov/rcw/default.aspx?cite=70.41&full=true&pdf=true)', -'pdf_files\\RCW-70-42.pdf': '[Medical Test Sites](https://app.leg.wa.gov/rcw/default.aspx?cite=70.42&full=true&pdf=true)', -'pdf_files\\WAC-246-240.pdf': '[Medical Use of Radioactive Material](https://app.leg.wa.gov/wac/default.aspx?cite=246-240&full=true&pdf=true)', -'pdf_files\\WAC-246-221.pdf': '[Radiation Protection Standards](https://app.leg.wa.gov/wac/default.aspx?cite=246-221&full=true&pdf=true)', -'pdf_files\\RCW-18-71.pdf': 
'[Physicians](https://app.leg.wa.gov/rcw/default.aspx?cite=18.71&full=true&pdf=true)', -'pdf_files\\RCW-18-71A.pdf': '[Physician Assistants](https://app.leg.wa.gov/rcw/default.aspx?cite=18.71A&full=true&pdf=true)', -'pdf_files\\RCW-18-79.pdf': '[Nursing Care](https://app.leg.wa.gov/rcw/default.aspx?cite=18.79&full=true&pdf=true)', -'pdf_files\\RCW-18-84.pdf': '[Radiologic Technologies](https://app.leg.wa.gov/rcw/default.aspx?cite=18.84&full=true&pdf=true)', -'pdf_files\\RCW-18-370.pdf': '[Medical Assistants](https://app.leg.wa.gov/rcw/default.aspx?cite=18.370&full=true&pdf=true)'} - -with st.container(): - st.title("Hospital Regulation Chat") - -with st.sidebar: - st.subheader("Find regulations for hospitals in the state of Washington.") - st.markdown(""" - We look into these sources to find top ten most relevant excerpts: - - [CMS State Operations Manual Appendix A](https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/som107ap_a_hospitals.pdf) - - [Pharmacists](https://app.leg.wa.gov/rcw/default.aspx?cite=18.64&full=true&pdf=true) - - [Pharmacy Assistants](https://app.leg.wa.gov/rcw/default.aspx?cite=18.64A&full=true&pdf=true) - - [Regulation of Health Professions—Uniform Disciplinary Act](https://app.leg.wa.gov/rcw/default.aspx?cite=18.130&full=true&pdf=true) - - [Abuse of Children and Adult Dependent Persons](https://app.leg.wa.gov/rcw/default.aspx?cite=26.44.030&full=true&pdf=true) - - [Administrative Procedures Act](https://app.leg.wa.gov/rcw/default.aspx?cite=34.05&full=true&pdf=true) - - [Public Disclosure](https://app.leg.wa.gov/rcw/default.aspx?cite=42.56&full=true&pdf=true) - - [Department of Health](https://app.leg.wa.gov/rcw/default.aspx?cite=43.70&full=true&pdf=true) - - [Uniform Food, Drug and Cosmetic Act](https://app.leg.wa.gov/rcw/default.aspx?cite=69.04&full=true&pdf=true) - - [Washington Caustic Poison Act of 1929](https://app.leg.wa.gov/rcw/default.aspx?cite=69.36&full=true&pdf=true) - - [Poisons - Sales and Manufacturing](https://app.leg.wa.gov/rcw/default.aspx?cite=69.38&full=true&pdf=true) - - [Poisons and Dangerous Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.40&full=true&pdf=true) - - [Legend Drugs . . . 
Prescription Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.41&full=true&pdf=true) - - [Precursor Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.43&full=true&pdf=true) - - [Drug Samples](https://app.leg.wa.gov/rcw/default.aspx?cite=69.45&full=true&pdf=true) - - [Drug Take-Back Program Secure Medication Return webpage](https://app.leg.wa.gov/rcw/default.aspx?cite=69.48&full=true&pdf=true) - - [Uniform Controlled Substances Act](https://app.leg.wa.gov/rcw/default.aspx?cite=69.50&full=true&pdf=true) - - [Controlled Substances Therapeutic Research Act](https://app.leg.wa.gov/rcw/default.aspx?cite=69.51&full=true&pdf=true) - - [Medical Cannabis](https://app.leg.wa.gov/rcw/default.aspx?cite=69.51A&full=true&pdf=true) - - [Imitation Controlled Substances](https://app.leg.wa.gov/rcw/default.aspx?cite=69.52&full=true&pdf=true) - - [Use of Buildings for Unlawful Drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.53&full=true&pdf=true) - - [Over-the-Counter Medications - Imprinting](https://app.leg.wa.gov/rcw/default.aspx?cite=69.60&full=true&pdf=true) - - [Access to prescription drugs](https://app.leg.wa.gov/rcw/default.aspx?cite=69.70&full=true&pdf=true) - - [Dextromethorphan](https://app.leg.wa.gov/rcw/default.aspx?cite=69.75&full=true&pdf=true) - - [Medical Records-Health Care Information Access and Disclosure](https://app.leg.wa.gov/rcw/default.aspx?cite=70.02&full=true&pdf=true) - - [Miscellaneous Health and Safety Provisions 70.54.400 Retail restroom access – Customers with medical conditions – Penalty 70.54.440 Epinephrine autoinjectors – Prescribing to certain entities-training-Liability-incident reporting](https://app.leg.wa.gov/rcw/default.aspx?cite=70.54&full=true&pdf=true) - - [Drug Injection Devices](https://app.leg.wa.gov/rcw/default.aspx?cite=70.115&full=true&pdf=true) - - [Prescription Monitoring Program](https://app.leg.wa.gov/rcw/default.aspx?cite=70.225&full=true&pdf=true) - - [The Washington Death With Dignity Act](https://app.leg.wa.gov/rcw/default.aspx?cite=70.245&full=true&pdf=true) - - [Abuse of Vulnerable Adults](https://app.leg.wa.gov/rcw/default.aspx?cite=74.34&full=true&pdf=true) - - [Medical Test Site Rules](https://app.leg.wa.gov/wac/default.aspx?cite=246-338&full=true&pdf=true) - - [Hospital Licensing Regulations](https://app.leg.wa.gov/wac/default.aspx?cite=246-320&full=true&pdf=true) - - [Hospital Licensing and Regulation](https://app.leg.wa.gov/rcw/default.aspx?cite=70.41&full=true&pdf=true) - - [Medical Test Sites](https://app.leg.wa.gov/rcw/default.aspx?cite=70.42&full=true&pdf=true) - - [Medical Use of Radioactive Material](https://app.leg.wa.gov/wac/default.aspx?cite=246-240&full=true&pdf=true) - - [Radiation Protection Standards](https://app.leg.wa.gov/wac/default.aspx?cite=246-221&full=true&pdf=true) - - [Physicians](https://app.leg.wa.gov/rcw/default.aspx?cite=18.71&full=true&pdf=true) - - [Physician Assistants](https://app.leg.wa.gov/rcw/default.aspx?cite=18.71A&full=true&pdf=true) - - [Nursing Care](https://app.leg.wa.gov/rcw/default.aspx?cite=18.79&full=true&pdf=true) - - [Radiologic Technologies](https://app.leg.wa.gov/rcw/default.aspx?cite=18.84&full=true&pdf=true) - - [Medical Assistants](https://app.leg.wa.gov/rcw/default.aspx?cite=18.370&full=true&pdf=true) - """) #, unsafe_allow_html=True) - st.write("This is tool is meant to assist healthcare workers to the extent it can. 
Please note that the page numbers may be occasionally slightly off, use the matching excerpts to find the reference if this happens.") - -st.markdown("**Ask your question and :red[click 'Find excerpts'.]**") -prompt = st.text_input("e.g. What are the rules regarding a Quality Improvement, or QAPI program?") - -if (st.button("Find excerpts")): - answer = qa_chain({"query":prompt}) - - n = len(answer['source_documents']) - - for i in range(n): - with st.container(): - page = str(answer['source_documents'][i].metadata['page']) - page_no = "#page=" + page + ")" - st.subheader(source_dictionary[answer['source_documents'][i].metadata['source']].replace(")",page_no)) - page_no = "**Page: " + page + "**" - st.markdown(page_no) - st.write("...") - st.write(answer['source_documents'][i].page_content) - st.write("...") - st.write('---------------------------------\n\n') \ No newline at end of file diff --git a/spaces/sqc1729/bingi/src/components/button-scroll-to-bottom.tsx b/spaces/sqc1729/bingi/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/benchmark/dummy_lm.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/benchmark/dummy_lm.py deleted file mode 100644 index c6246a0c0e338fa36244b3aa4fb57f189fbffcb6..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/benchmark/dummy_lm.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from .dummy_dataset import DummyDataset -from fairseq.data import Dictionary -from fairseq.dataclass import FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II - - -logger = logging.getLogger(__name__) - - -@dataclass -class DummyLMConfig(FairseqDataclass): - dict_size: int = 49996 - dataset_size: int = 100000 - tokens_per_sample: int = field( - default=512, metadata={"help": "max sequence length"} - ) - add_bos_token: bool = False - batch_size: Optional[int] = II("dataset.batch_size") - max_tokens: Optional[int] = II("dataset.max_tokens") - max_target_positions: int = II("task.tokens_per_sample") - - -@register_task("dummy_lm", dataclass=DummyLMConfig) -class DummyLMTask(FairseqTask): - def __init__(self, cfg: DummyLMConfig): - super().__init__(cfg) - - # load dictionary - self.dictionary = Dictionary() - for i in range(cfg.dict_size): - self.dictionary.add_symbol("word{}".format(i)) - self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - logger.info("dictionary: {} types".format(len(self.dictionary))) - - seq = torch.arange(cfg.tokens_per_sample + 1) + self.dictionary.pad() + 1 - - self.dummy_src = seq[:-1] - self.dummy_tgt = seq[1:] - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if self.cfg.batch_size is not None: - bsz = self.cfg.batch_size - else: - bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.cfg.tokens_per_sample, dtype=torch.long - ), - }, - "target": torch.stack([self.dummy_tgt for _ in range(bsz)]), - "nsentences": bsz, - "ntokens": bsz * self.cfg.tokens_per_sample, - }, - num_items=self.cfg.dataset_size, - item_size=self.cfg.tokens_per_sample, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sparse_multihead_attention.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sparse_multihead_attention.py deleted file mode 100644 index 3cbd9d6785886e319aab0601517e27df733b6f97..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/sparse_multihead_attention.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch - -from .multihead_attention import MultiheadAttention - - -class SparseMultiheadAttention(MultiheadAttention): - """Sparse Multi-Headed Attention. - - "Generating Long Sequences with Sparse Transformers". Implements - fixed factorized self attention, where l=stride and c=expressivity. - A(1) includes all words in the stride window and A(2) takes a summary of c - words from the end of each stride window. - If is_bidirectional=False, we do not include any words past the current word, - as in the paper. 
- """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - stride=32, - expressivity=8, - is_bidirectional=True, - ): - - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - encoder_decoder_attention, - ) - - self.is_bidirectional = is_bidirectional - self.stride = stride - self.expressivity = expressivity - assert self.stride > 0 and self.stride >= self.expressivity - - # Used for Ai(2) calculations - beginning of [l-c, l] range - def compute_checkpoint(self, word_index): - if word_index % self.stride == 0 and word_index != 0: - checkpoint_index = word_index - self.expressivity - else: - checkpoint_index = ( - math.floor(word_index / self.stride) * self.stride - + self.stride - - self.expressivity - ) - return checkpoint_index - - # Computes Ai(2) - def compute_subset_summaries(self, absolute_max): - checkpoint_index = self.compute_checkpoint(0) - subset_two = set() - while checkpoint_index <= absolute_max - 1: - summary = set( - range( - checkpoint_index, - min(checkpoint_index + self.expressivity + 1, absolute_max), - ) - ) - subset_two = subset_two.union(summary) - checkpoint_index = self.compute_checkpoint(checkpoint_index + self.stride) - return subset_two - - # Sparse Transformer Fixed Attention Pattern: https://arxiv.org/pdf/1904.10509.pdf - def compute_fixed_attention_subset(self, word_index, tgt_len): - # +1s account for range function; [min, max) -> [min, max] - if not self.is_bidirectional: - absolute_max = word_index + 1 - else: - absolute_max = tgt_len - - # Subset 1 - whole window - rounded_index = ( - math.floor((word_index + self.stride) / self.stride) * self.stride - ) - if word_index % self.stride == 0 and word_index != 0: - subset_one = set( - range(word_index - self.stride, min(absolute_max, word_index + 1)) - ) - else: - subset_one = set( - range( - max(0, rounded_index - self.stride), - min(absolute_max, rounded_index + 1), - ) - ) - - # Subset 2 - summary per window - # If bidirectional, subset 2 is the same for every index - subset_two = set() - if not self.is_bidirectional: - subset_two = self.compute_subset_summaries(absolute_max) - - return subset_one.union(subset_two) - - # Compute sparse mask - if bidirectional, can pre-compute and store - def buffered_sparse_mask(self, tensor, tgt_len, src_len): - assert tgt_len > self.stride - sparse_mask = torch.empty((tgt_len, src_len)).float().fill_(float("-inf")) - - # If bidirectional, subset 2 is the same for every index - subset_summaries = set() - if self.is_bidirectional: - subset_summaries = self.compute_subset_summaries(tgt_len) - - for i in range(tgt_len): - fixed_attention_subset = self.compute_fixed_attention_subset(i, tgt_len) - fixed_attention_subset = fixed_attention_subset.union(subset_summaries) - included_word_indices = torch.LongTensor(list(fixed_attention_subset)) - sparse_mask[i].index_fill_(0, included_word_indices, 0) - return sparse_mask.type_as(tensor) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - sparse_mask = self.buffered_sparse_mask(attn_weights, tgt_len, src_len) - sparse_mask = sparse_mask.unsqueeze(0).expand( - bsz * self.num_heads, tgt_len, src_len - ) - attn_weights += sparse_mask diff --git a/spaces/stomexserde/gpt4-ui/Examples/Adobe Media Encoder CC 2014.0.1 8.0.1.48 RePack By D!akov Utorrent.md 
b/spaces/stomexserde/gpt4-ui/Examples/Adobe Media Encoder CC 2014.0.1 8.0.1.48 RePack By D!akov Utorrent.md deleted file mode 100644 index b2d7d753874c9f2065ee535b69ba2f52573fed66..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Adobe Media Encoder CC 2014.0.1 8.0.1.48 RePack By D!akov Utorrent.md +++ /dev/null @@ -1,131 +0,0 @@ -
              - - -
              -

              Adobe Media Encoder CC 2014.0.1 8.0.1.48 RePack by D!akov utorrent

              -

              If you are looking for a powerful and versatile program to encode your audio and video files in different formats for various applications and audiences, you should consider downloading Adobe Media Encoder CC 2014. This program allows you to export your videos in formats that are compatible with various devices, from DVD players and websites to mobile phones, portable media players, and standard and high-definition TVs.

              -

              But before you download Adobe Media Encoder CC 2014, you should know that there is a better version available that can save you time and space on your computer. This version is called Adobe Media Encoder CC 2014 RePack by D!akov, and it is a compressed and optimized version of the original program that has been repacked by a reputable developer named D!akov.

              -

              Adobe Media Encoder CC 2014.0.1 8.0.1.48 RePack by D!akov utorrent


              Download File ✑ ✑ ✑ https://urlgoal.com/2uI8T9



              -

And if you want to download Adobe Media Encoder CC 2014 RePack by D!akov in the fastest and safest way possible, you should use utorrent, a popular and reliable torrent client that allows you to download files from peer-to-peer networks. utorrent is easy to use, lightweight, and secure, and it offers features such as bandwidth management, encryption, magnet links, and more.

              -

              In this article, we will show you what Adobe Media Encoder CC 2014 is, why you should download Adobe Media Encoder CC 2014 RePack by D!akov utorrent, how to download and install it on your computer, how to use it to encode your audio and video files, and some tips and tricks for using it effectively. We will also answer some frequently asked questions about Adobe Media Encoder CC 2014 RePack by D!akov utorrent. So, let's get started!

              -

              What is Adobe Media Encoder CC 2014?

              -

              Adobe Media Encoder CC 2014 is a program that belongs to the Adobe Creative Cloud suite of applications. It is designed to help you encode your audio and video files in various formats for different purposes and platforms. You can use Adobe Media Encoder CC 2014 to export your videos from Adobe Premiere Pro CC, Adobe After Effects CC, Adobe Prelude CC, or Adobe Audition CC, or you can import your files directly into the program and encode them there.

              -

              Some of the features of Adobe Media Encoder CC 2014 are:

              -
                -
              • It supports a wide range of input and output formats, including MP4, MOV, AVI, WMV, FLV, MKV, MP3, AAC, WAV, etc.
              • -
              • It allows you to create presets for common encoding settings and save them for future use.
              • -
              • It enables you to encode multiple files at once using batch processing.
              • -
              • It lets you adjust various encoding parameters such as resolution, frame rate, bit rate, aspect ratio, audio channels, etc.
              • -
              • It offers GPU acceleration for faster encoding performance.
              • -
              • It integrates with other Adobe Creative Cloud applications and services such as Adobe Dynamic Link, Adobe Media Browser, Adobe Creative Cloud Libraries, etc.
              • -
              -

              Adobe Media Encoder CC 2014 is a powerful and versatile program that can help you create high-quality audio and video files for various applications and audiences. However, it also has some drawbacks that can affect your user experience. For example:

              -
                -
              • It is a large program that takes up a lot of space on your computer.
              • -
              • It requires a lot of system resources to run smoothly.
              • -
              • It may have some compatibility issues with some formats or devices.
              • -
              • It may have some bugs or errors that can cause crashes or glitches.
              • -
              -

              That's why you should consider downloading Adobe Media Encoder CC 2014 RePack by D!akov utorrent instead of the original version. Let's see why in the next section.

              -

Why should you download Adobe Media Encoder CC 2014 RePack by D!akov utorrent?

              -

              Benefits of RePack by D!akov

              -

              RePack by D!akov is a compressed and optimized version of Adobe Media Encoder CC 2014 that has been repacked by a reputable developer named D!akov. RePack by D!akov has several benefits over the original version of the program. For example:

              -
                -
              • It reduces the size of the program by removing unnecessary components such as languages, plugins, updates, etc.
              • -
              • It improves the performance of the program by tweaking some settings and fixing some bugs.
              • -
              • It simplifies the installation process by making it faster and easier.
              • -
              • It does not require activation or registration to use the program.
              • -
              -

              By downloading Adobe Media Encoder CC 2014 RePack by D!akov utorrent, you can save time and space on your computer while enjoying the same features and functions of the original program. You can also avoid some of the problems that may occur with the original version of the program such as compatibility issues or errors.

              -

              -

              Benefits of utorrent

              -

              utorrent is a popular and reliable torrent client that allows you to download files from peer-to-peer networks. Torrents are files that contain information about other files that are shared by users on the internet. By using a torrent client like utorrent, you can download these files from multiple sources at once without relying on a central server. This makes downloading faster and more secure than using other methods such as direct downloads or file-sharing sites.

              -

              Some of the benefits of using utorrent to download Adobe Media Encoder CC 201 utorrent to download Adobe Media Encoder CC 2014 RePack by D!akov are:

              -
                -
              • It is easy to use, with a simple and intuitive interface.
              • -
              • It is lightweight, with a small file size and low CPU and memory usage.
              • -
              • It is secure, with encryption and proxy support to protect your privacy and identity.
              • -
              • It offers features such as bandwidth management, magnet links, streaming, remote control, etc.
              • -
              -

              By using utorrent to download Adobe Media Encoder CC 2014 RePack by D!akov, you can ensure that you get the file from a reliable source and that you download it in the fastest and safest way possible. You can also pause and resume your downloads at any time and manage them easily.

              -

              How to download and install Adobe Media Encoder CC 2014 RePack by D!akov utorrent?

              -

              System requirements

              -

              Before you download and install Adobe Media Encoder CC 2014 RePack by D!akov utorrent, you should make sure that your computer meets the minimum and recommended system requirements for running the program. Here are the system requirements for Adobe Media Encoder CC 2014 RePack by D!akov:

              - - - - - - - - - -
Minimum:
              - Intel Core2 Duo or AMD Phenom II processor with 64-bit support
              - Microsoft Windows 7 with Service Pack 1 (64 bit), Windows 8 (64 bit), or Windows 8.1 (64 bit)
              - 4 GB of RAM
              - 4 GB of available hard-disk space for installation; additional free space required during installation
              - 1024 x 768 display
              - OpenGL 2.0–capable system
              - Sound card compatible with ASIO protocol or Microsoft Windows Driver Model
              - QuickTime 7.6.6 software required for QuickTime features
              - Optional: Adobe-certified GPU card for GPU-accelerated performance
Recommended:
- Intel Core i7 processor with 64-bit support
              - Microsoft Windows 10 (64 bit)
              - 8 GB of RAM or more
              - 10 GB of available hard-disk space for installation; additional free space required during installation
              - 1280 x 800 display or higher
              - OpenGL 2.0–capable system or higher
              - Sound card compatible with ASIO protocol or Microsoft Windows Driver Model
              - QuickTime 7.6.6 software required for QuickTime features
              - Optional: Adobe-certified GPU card for GPU-accelerated performance
              -

              Download link

              -

              To download Adobe Media Encoder CC 2014 RePack by D!akov utorrent, you need to have a torrent client installed on your computer. We recommend using utorrent, which you can download from here. After you install utorrent, you can click on the link below to download the torrent file for Adobe Media Encoder CC 2014 RePack by D!akov:

              -

              Adobe Media Encoder CC 2014.0.1 8.0.1.48 RePack by D!akov.torrent

              -

              This torrent file contains the following files:

              -
                -
              • Adobe Media Encoder CC.exe (1.03 GB)
              • -
              • D!akov.nfo (5 KB)
              • -
              • D!akov.reg (1 KB)
              • -
              • Readme.txt (2 KB)
              • -
              -

              Installation steps

              -

              After you download the torrent file, you need to open it with your torrent client and start downloading the files. Once the download is complete, you can follow these steps to install Adobe Media Encoder CC 2014 RePack by D!akov on your computer:

              -
                -
              1. Run the Adobe Media Encoder CC.exe file as an administrator.
              2. -
              3. Select your language and click OK.
              4. -
              5. Accept the license agreement and click Next.
              6. -
              7. Select the destination folder for the installation and click Next.
              8. -
              9. Select the components you want to install and click Next.
              10. -
              11. Wait for the installation to finish and click Finish.
              12. -
              13. Run the D!akov.reg file as an administrator and click Yes to add it to the registry.
              14. -
              15. Read the Readme.txt file for more information and instructions.
              16. -
              17. Congratulations, you have successfully installed Adobe Media Encoder CC 2014 RePack by D!akov on your computer!
              18. -
              -

              How to use Adobe Media Encoder CC 2014?

              -

              Basic functions

              -

To use Adobe Media Encoder CC 2014 to encode your audio and video files, you need to follow these basic steps:

              -
                -
              1. Launch the program and click on the Add button to import your files into the queue. You can also drag and drop your files from your computer or from other Adobe applications.
              2. -
              3. Select the files you want to encode and choose a preset from the Preset Browser. You can also create your own preset by clicking on the New Preset button and adjusting the encoding settings.
              4. -
              5. Click on the Output File column to change the name and location of your encoded files. You can also click on the Metadata button to add or edit metadata information such as title, description, keywords, etc.
              6. -
              7. Click on the Start Queue button to start encoding your files. You can monitor the progress and status of your encoding tasks in the Encoding panel. You can also pause, resume, or cancel your encoding tasks at any time.
              8. -
              9. When the encoding is done, you can find your encoded files in the destination folder you specified. You can also preview your encoded files by clicking on the Play button in the Output panel.
              10. -
              -

              By following these steps, you can use Adobe Media Encoder CC 2014 to encode your audio and video files in different formats for various purposes and platforms.

              -

              Advanced functions

              -

              Adobe Media Encoder CC 2014 also offers some advanced functions that can help you enhance your encoding workflow and output quality. Some of these functions are:

              -
                -
              • You can use presets to save and apply common encoding settings for different formats and devices. You can also import and export presets from other sources or share them with other users.
              • -
              • You can use batch processing to encode multiple files at once with the same or different presets. You can also reorder, duplicate, or delete files in the queue as needed.
              • -
              • You can use GPU acceleration to speed up your encoding performance by using your graphics card instead of your CPU. You can enable or disable GPU acceleration in the Preferences menu.
              • -
              • You can use watch folders to automate your encoding tasks by monitoring a specific folder for new files and encoding them with a preset of your choice. You can create and manage watch folders in the File menu.
              • -
              • You can use Dynamic Link to import sequences from Adobe Premiere Pro CC or Adobe After Effects CC without rendering them first. This allows you to encode them with Adobe Media Encoder CC 2014 without losing quality or effects.
              • -
              -

              By using these advanced functions, you can optimize your workflow and output quality using Adobe Media Encoder CC 2014.

              -

              Tips and tricks for using Adobe Media Encoder CC 2014

              -

              Here are some tips and tricks for using Adobe Media Encoder CC 2014 effectively:

              -
                -
              • Use keyboard shortcuts to perform common actions faster. For example, you can press Ctrl+A to select all files in the queue, Ctrl+P to open the Preset Browser, Ctrl+O to open the Output panel, etc. You can find more keyboard shortcuts in the Help menu.
              • -
              • Use the Preview panel to preview your source and output files before encoding them. You can also use the Zoom, Trim, Crop, and Time Tuner tools to adjust your files as needed.
              • -
              • Use the Info panel to view detailed information about your source and output files such as format, codec, resolution, frame rate, bit rate, duration, etc.
              • -
              • Use the Log panel to view a history of your encoding tasks such as start time, end time, status, errors, warnings, etc. You can also export or clear the log as needed.
              • -
              • Use the Preferences menu to customize various settings of Adobe Media Encoder CC 2014 such as general, appearance, playback, sync settings, media cache, etc.
              • -
              -

              By following these tips and tricks, you can use Adobe Media Encoder CC 2014 more efficiently and effectively.

              -

              Conclusion

              -

              In conclusion, Adobe Media Encoder CC 2014 is a powerful and versatile program that allows you to encode your audio and video files in different formats for various purposes and platforms. However, if you want to save time and space on your computer while enjoying the same features and functions of the original program, you should download Adobe Media Encoder CC 2014 RePack by D!akov utorrent instead of the original version. This version is a compressed and optimized version of the original program that has been repacked by a reputable developer named D!akov. And if you want to download it in the fastest and safest way possible, you should use utorrent, a popular and reliable torrent client that allows you to download files from peer-to-peer networks.

              -

If you are interested in downloading Adobe Media Encoder CC 2014 RePack by D!akov utorrent, you can follow the link and the steps we provided in this article. We hope that this article has helped you understand what Adobe Media Encoder CC 2014 is, why you should download Adobe Media Encoder CC 2014 RePack by D!akov utorrent, how to download and install it on your computer, how to use it to encode your audio and video files, and some tips and tricks for using it effectively. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy encoding!

              -

              FAQs

              -

              Here are some frequently asked questions and answers about Adobe Media Encoder CC 2014 RePack by D!akov utorrent:

              -
                -
              1. Is Adobe Media Encoder CC 2014 RePack by D!akov utorrent safe to download and use?
                Yes, Adobe Media Encoder CC 2014 RePack by D!akov utorrent is safe to download and use, as long as you download it from a reliable source and scan it with a reputable antivirus program before installing it. D!akov is a well-known developer who has repacked many Adobe programs and has a good reputation among users. However, you should always be careful when downloading files from the internet and make sure that they are not infected with malware or viruses.
              2. -
              3. What is the difference between Adobe Media Encoder CC 2014 RePack by D!akov and Adobe Media Encoder CC 2021?
                Adobe Media Encoder CC 2014 RePack by D!akov is an older version of Adobe Media Encoder CC that has been repacked by D!akov to reduce its size and improve its performance. Adobe Media Encoder CC 2021 is the latest version of Adobe Media Encoder CC that has more features and functions than the older version. However, Adobe Media Encoder CC 2021 also requires more system resources and space on your computer than Adobe Media Encoder CC 2014 RePack by D!akov. Depending on your needs and preferences, you can choose the version that suits you best.
              4. -
              5. How can I update Adobe Media Encoder CC 2014 RePack by D!akov?
                Unfortunately, you cannot update Adobe Media Encoder CC 2014 RePack by D!akov, as it is a repacked version of the original program that does not support updates. If you want to use the latest version of Adobe Media Encoder CC, you will have to download and install it separately from the official website of Adobe or from another source. However, you should be aware that the latest version of Adobe Media Encoder CC may not be compatible with some formats or devices that are supported by Adobe Media Encoder CC 2014 RePack by D!akov.
              6. -
              7. Can I use Adobe Media Encoder CC 2014 RePack by D!akov with other Adobe Creative Cloud applications?
                Yes, you can use Adobe Media Encoder CC 2014 RePack by D!akov with other Adobe Creative Cloud applications such as Adobe Premiere Pro CC, Adobe After Effects CC, Adobe Prelude CC, or Adobe Audition CC. You can export your videos from these applications to Adobe Media Encoder CC 2014 RePack by D!akov or import your files directly into the program and encode them there. You can also use Dynamic Link to import sequences from these applications without rendering them first.
              8. -
              9. Can I use Adobe Media Encoder CC 2014 RePack by D!akov for commercial purposes?
                Yes, you can use Adobe Media Encoder CC 2014 RePack by D!akov for commercial purposes, as long as you comply with the terms and conditions of the license agreement of the original program. You can use Adobe Media Encoder CC 2014 RePack by D!akov to encode your audio and video files for various purposes and platforms such as websites, social media, YouTube, Vimeo, etc. However, you should not distribute or sell the program or its files to others without permission from the original developer.
              10. -

              b2dd77e56b
              -
              -
              \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cary50winuvsoftwaredownload !LINK!.md b/spaces/stomexserde/gpt4-ui/Examples/Cary50winuvsoftwaredownload !LINK!.md deleted file mode 100644 index 480a3bdaaa47bd88c879f2252957eb2156beeae5..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cary50winuvsoftwaredownload !LINK!.md +++ /dev/null @@ -1,161 +0,0 @@ - - - - - - -
              Article with HTML formatting
              -

              Cary 50 WinUV Software Download: A Guide for UV-Vis Spectrophotometry Users

              -

              If you are a user of the Agilent Cary 50 UV-Vis spectrophotometer, you might be wondering how to download and install the Cary 50 WinUV software that complements your instrument. This software is designed to provide you with powerful features and streamlined methods for data collection, analysis, storage, and display while reducing complexity. In this article, we will guide you through the process of downloading and installing the Cary 50 WinUV software, as well as show you how to use it for various UV-Vis applications.

              -

              cary50winuvsoftwaredownload


              Download Zip ===== https://urlgoal.com/2uI6P1



              -

              What is Cary 50 WinUV Software?

              -

              Cary 50 WinUV software is a UV-Vis software suite that works with the Agilent Cary 50 UV-Vis spectrophotometer. This instrument is a compact, fast, and accurate spectrophotometer that can measure samples in less than one second. It has a wide wavelength range of 190-1100 nm, a high energy xenon flash lamp, and a temperature-controlled detector. It is ideal for routine analysis, quality control, teaching, and research applications.

              -

              Features and Benefits of Cary 50 WinUV Software

              -

              The Cary 50 WinUV software enhances the performance and functionality of the Cary 50 UV-Vis spectrophotometer by providing several features and benefits, such as:

              -
                -
              • Over a dozen software modules designed to cover a range of UV-Vis applications including scan, color, concentration, kinetics, RNA/DNA, and thermal analysis.
              • -
              • Simple operation and configuration made possible by modules such as simple read, scan, align, and validate.
              • -
              • Extensive graphics with simple controls to display data in a number of formats and add custom text and labels to graphs.
              • -
              • Compatible with Windows XP, Vista, 7, 8, and 10 operating systems.
              • -
              • Upgrades make it possible to bring all Cary models onto the latest Microsoft operating systems.
              • -
              -

              Compatibility and Requirements of Cary 50 WinUV Software

              -

              The Cary 50 WinUV software is compatible with the Agilent Cary 50 UV-Vis spectrophotometer. It requires a computer with the following specifications:

              -
                -
              • Processor: Pentium III or higher
              • -
              • Memory: At least 256 MB RAM
              • -
              • Hard disk space: At least 500 MB free space
              • -
              • Display resolution: At least 1024 x 768 pixels
              • -
              • CD-ROM drive
              • -
              • USB port
              • -
              -

              The computer must also have an internet connection to download the software package from the Agilent website.

              -

              -

              How to Download and Install Cary 50 WinUV Software?

              -

              To download and install the Cary 50 Win UV Software, you need to follow these steps:

              -

              Step 1: Visit the Agilent Website

              -

              The first step is to visit the Agilent website at https://www.agilent.com. On the homepage, click on the "Products" tab and select "Software & Informatics" from the drop-down menu. Then, click on the "Spectroscopy Software" link under the "Software by Category" section. This will take you to the page where you can find the Cary 50 WinUV software.

              -

              Step 2: Select Your Product and Software Version

              -

              On the spectroscopy software page, scroll down to the "UV-Vis & UV-Vis-NIR Software" section and click on the "Cary WinUV Software" link. This will open a new page where you can see the details and specifications of the Cary 50 WinUV software. On this page, click on the "Downloads" tab and select your product model and software version from the drop-down menus. For example, if you have a Cary 50 UV-Vis spectrophotometer and want to download the latest version of the Cary 50 WinUV software, you would select "Cary 50 UV-Vis" and "Cary WinUV v5.0.0.0".

              -

              Step 3: Fill Out the Registration Form

              -

              After selecting your product and software version, you will be prompted to fill out a registration form to access the download link. You will need to provide your name, email address, company name, country, and phone number. You will also need to agree to the terms and conditions of the software license agreement. Once you have filled out the form, click on the "Submit" button.

              -

              Step 4: Download the Software Package

              -

              Once you have submitted the registration form, you will receive an email from Agilent with a link to download the software package. The email will also contain your registration number and instructions on how to install the software. Click on the link in the email to start downloading the software package. The file size is about 300 MB, so it may take some time depending on your internet speed.

              -

              Step 5: Run the Setup Wizard

              -

              Once you have downloaded the software package, locate it on your computer and double-click on it to run the setup wizard. The wizard will guide you through the installation process step by step. You will need to accept the license agreement, choose a destination folder, select a program group, and confirm your settings. The installation may take several minutes to complete.

              -

              How to Use Cary 50 WinUV Software?

              -

              After installing the Cary 50 WinUV software, you can start using it for your UV-Vis spectrophotometry applications. Here are some tips on how to use the software:

              -

              Overview of the Software Interface

              -

              The Cary 50 WinUV software has a user-friendly interface that consists of several components, such as:

              -
                -
              • The menu bar that contains various commands and options for file management, data acquisition, data analysis, data display, data export, help, and more.
              • -
              • The toolbar that provides quick access to some of the most commonly used commands and options.
              • -
              • The status bar that shows information about the current status of the instrument and software.
              • -
              • The main window that displays data in graphical or tabular format depending on the selected module.
              • -
              • The module selector that allows you to switch between different modules for different applications.
              • -
              • The method editor that allows you to create, edit, save, load, and run methods for data acquisition and analysis.
              • -
              • The instrument control panel that allows you to control various parameters and settings of the instrument such as wavelength, slit width, scan speed, lamp mode, etc.
              • -
              -

              How to Perform Basic Operations with Cary 50 WinUV Software

              -

              The Cary 50 WinUV software has over a dozen modules that cover a range of UV-Vis applications. Each module has its own specific features and functions that allow you to perform different tasks with ease. Here are some examples of how to perform basic operations with some of the most popular modules:

              Scan

              -

              The scan module allows you to perform a single or multiple wavelength scan of a sample and display the absorbance, transmittance, or reflectance spectrum. To use this module, you need to:

              -
                -
              1. Select the scan module from the module selector.
              2. -
              3. Create or load a method that specifies the scan parameters such as wavelength range, scan speed, data interval, etc.
              4. -
              5. Prepare your sample and reference and place them in the sample holder.
              6. -
              7. Click on the "Start" button to begin the scan.
              8. -
              9. View the scan results in the main window as a graph or a table.
              10. -
              11. Use the graphics options and data processing tools to customize and analyze your data.
              12. -
              -
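If you later want to re-plot or post-process a scan outside the program, the data can be exported and handled in any scripting language. Below is a minimal Python sketch, assuming the scan has been saved as a plain two-column CSV of wavelength and absorbance; the file name and the exact export format are assumptions, not something the software guarantees.

```python
# Minimal post-processing sketch for an exported scan, assuming a plain
# two-column CSV (wavelength_nm, absorbance). The file name is hypothetical.
import numpy as np
import matplotlib.pyplot as plt

wavelength, absorbance = np.loadtxt("scan_export.csv", delimiter=",", unpack=True)

# Report the absorbance maximum of the trace
peak_index = int(np.argmax(absorbance))
print(f"Absorbance maximum: {absorbance[peak_index]:.3f} at {wavelength[peak_index]:.0f} nm")

plt.plot(wavelength, absorbance)
plt.xlabel("Wavelength (nm)")
plt.ylabel("Absorbance")
plt.title("Exported UV-Vis scan")
plt.show()
```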

              Concentration

              -

              The concentration module allows you to determine the concentration of a sample based on a calibration curve or a standard addition method. To use this module, you need to:

              -
                -
              1. Select the concentration module from the module selector.
              2. -
              3. Create or load a method that specifies the concentration parameters such as wavelength, calibration method, number of standards, etc.
              4. -
              5. Prepare your standards and samples and place them in the sample holder.
              6. -
              7. Click on the "Start" button to begin the measurement.
              8. -
              9. View the concentration results in the main window as a graph or a table.
              10. -
              11. Use the graphics options and data processing tools to customize and analyze your data.
              12. -
              -
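Behind these steps, the concentration module automates a standard Beer-Lambert calibration: absorbance is measured for a series of standards, a line is fitted, and the unknown is read off the fit. The short Python sketch below is not part of the WinUV software and uses invented example readings; it only illustrates the underlying calculation.

```python
# Illustrative Beer-Lambert calibration, independent of the Cary WinUV software.
# Standard concentrations and absorbance readings are invented example values.
import numpy as np

standards_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # e.g. mg/L
standards_abs = np.array([0.002, 0.151, 0.298, 0.455, 0.601])   # absorbance at the analysis wavelength

# Fit A = slope * c + intercept (linear calibration curve)
slope, intercept = np.polyfit(standards_conc, standards_abs, 1)

# Read an unknown sample's concentration off the calibration line
sample_abs = 0.372
sample_conc = (sample_abs - intercept) / slope
print(f"Calibration: A = {slope:.4f}*c + {intercept:.4f}")
print(f"Unknown sample: {sample_conc:.2f} mg/L")
```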

              Kinetics

              -

              The kinetics module allows you to monitor the change in absorbance, transmittance, or reflectance of a sample over time and calculate the rate constant and activation energy of a reaction. To use this module, you need to:

              -
                -
              1. Select the kinetics module from the module selector.
              2. -
              3. Create or load a method that specifies the kinetics parameters such as wavelength, time interval, temperature, etc.
              4. -
              5. Prepare your sample and place it in the sample holder.
              6. -
              7. Click on the "Start" button to begin the measurement.
              8. -
              9. View the kinetics results in the main window as a graph or a table.
              10. -
              11. Use the graphics options and data processing tools to customize and analyze your data.
              12. -
              -
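        For a simple first-order process, ln(absorbance) falls linearly with time, so the rate constant k is the negative slope of that line; comparing k at two temperatures then gives the activation energy through the Arrhenius equation. The module performs these calculations itself; the sketch below only illustrates the arithmetic with invented numbers.

        ```python
        import numpy as np

        # Invented first-order decay data: time (s) and absorbance (proportional to concentration)
        t = np.array([0, 30, 60, 90, 120], dtype=float)
        A = np.array([0.80, 0.60, 0.45, 0.34, 0.25])

        # Rate constant from the slope of ln(A) vs t  (ln A = ln A0 - k*t)
        slope, _ = np.polyfit(t, np.log(A), 1)
        k1 = -slope  # s^-1 at temperature T1

        # Activation energy from k at two temperatures (Arrhenius: ln(k2/k1) = Ea/R * (1/T1 - 1/T2))
        R = 8.314          # J/(mol*K)
        T1, T2 = 298.0, 308.0
        k2 = 2.1 * k1      # pretend the rate roughly doubles 10 K higher
        Ea = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
        print(f"k(T1) = {k1:.4e} 1/s, Ea = {Ea / 1000:.1f} kJ/mol")
        ```
        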

              RNA-DNA Estimation

              -

              The RNA-DNA estimation module allows you to estimate the concentration and purity of RNA and DNA samples based on their absorbance at 260 nm and 280 nm. To use this module, you need to:

              -
                -
              1. Select the RNA-DNA estimation module from the module selector.
              2. -
              3. Create or load a method that specifies the RNA-DNA estimation parameters such as dilution factor, path length, etc.
              4. -
              5. Prepare your RNA and DNA samples and place them in the sample holder.
              6. -
              7. Click on the "Start" button to begin the measurement.
              8. -
              9. View the RNA-DNA estimation results in the main window as a table.
              10. -
              -
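        The estimate relies on two standard conversions: an A260 of 1.0 corresponds to roughly 50 µg/mL of double-stranded DNA (about 40 µg/mL for RNA), and the A260/A280 ratio indicates purity, with about 1.8 expected for clean DNA and about 2.0 for clean RNA. A minimal sketch of that arithmetic, using made-up readings rather than instrument data:

        ```python
        # Made-up readings from a diluted DNA sample
        a260, a280 = 0.45, 0.24
        dilution_factor = 10

        # A260 of 1.0 corresponds to ~50 ug/mL for double-stranded DNA (use ~40 ug/mL for RNA)
        conc_ug_per_ml = a260 * 50 * dilution_factor
        purity_ratio = a260 / a280   # ~1.8 expected for pure DNA, ~2.0 for pure RNA

        print(f"dsDNA concentration = {conc_ug_per_ml:.0f} ug/mL (approx.)")
        print(f"A260/A280 = {purity_ratio:.2f}")
        ```
        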

              Enzyme Kinetics

              -

              The enzyme kinetics module allows you to study the kinetics of enzyme-catalyzed reactions by measuring the change in absorbance, transmittance, or reflectance of a substrate or a product over time. To use this module, you need to:

              -
                -
              1. Select the enzyme kinetics module from the module selector.
              2. -
              3. Create or load a method that specifies the enzyme kinetics parameters such as wavelength, time interval, temperature, enzyme concentration, substrate concentration, etc.
              4. -
              5. Prepare your enzyme and substrate solutions and place them in the sample holder.
              6. -
              7. Click on the "Start" button to begin the measurement.
              8. -
              9. View the enzyme kinetics results in the main window as a graph or a table.
              10. -
              11. Use the graphics options and data processing tools to customize and analyze your data.
              12. -
              -
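        Assays like this are commonly summarized by the Michaelis-Menten parameters Vmax and Km, estimated from initial rates measured at several substrate concentrations; one classic route is a Lineweaver-Burk (double-reciprocal) fit, 1/v = (Km/Vmax)(1/[S]) + 1/Vmax. The module has its own fitting tools, so the sketch below is only an illustration with invented rates.

        ```python
        import numpy as np

        # Invented initial rates v (uM/min) at several substrate concentrations [S] (uM)
        S = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
        v = np.array([0.91, 1.54, 2.63, 3.33, 3.85])

        # Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax
        slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
        Vmax = 1.0 / intercept
        Km = slope * Vmax

        print(f"Vmax = {Vmax:.2f} uM/min, Km = {Km:.2f} uM (approx.)")
        ```
        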

              How to Customize and Optimize Your Data Analysis with Cary 50 WinUV Software

              -

              The Cary 50 WinUV software also provides you with various options and tools to customize and optimize your data analysis according to your needs and preferences. Some of these options and tools include:

              -

              Graphics Options

              -

              The graphics options allow you to change the appearance and format of your graphs, such as:

              -
                -
              • The type of graph (line, scatter, bar, etc.)
              • -
              • The scale of the axes (linear, logarithmic, etc.)
              • -
              • The color, style, and width of the lines and markers
              • -
              • The font, size, and alignment of the text and labels
              • -
              • The legend, title, and grid of the graph
              • -
              • The zoom, pan, and cursor functions of the graph
              • -
              -

              You can access the graphics options by clicking on the "Graphics" menu on the menu bar or by right-clicking on the graph area.
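        Outside the software, the same presentation choices (axis scale, line color and width, labels, legend, grid) map naturally onto a general-purpose plotting library. As an illustration only, with an invented spectrum rather than instrument data, the equivalent in Python's matplotlib might look like this:

        ```python
        import numpy as np
        import matplotlib.pyplot as plt

        # Invented spectrum just to demonstrate styling choices
        wl = np.linspace(200, 800, 601)
        absorbance = 1.2 * np.exp(-((wl - 260) / 15) ** 2) + 0.05

        fig, ax = plt.subplots()
        ax.plot(wl, absorbance, color="navy", linewidth=1.5, label="Sample 1")
        ax.set_yscale("log")                      # logarithmic axis, mirroring the axis-scale option
        ax.set_xlabel("Wavelength (nm)")
        ax.set_ylabel("Absorbance (AU)")
        ax.set_title("UV-Vis scan")
        ax.grid(True, which="both", alpha=0.3)
        ax.legend()
        plt.show()
        ```
        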

              -

              Data Processing Tools

              -

              The data processing tools allow you to perform various calculations and transformations on your data, such as:

              -
                -
              • Baseline correction
              • -
              • Peak detection and integration
              • -
              • Derivative and smoothing
              • -
              • Normalization and ratio
              • -
              • Mathematical operations (addition, subtraction, multiplication, division, etc.)
              • -
              • Statistical analysis (mean, standard deviation, etc.)
              • -
              -

              You can access the data processing tools by clicking on the "Data" menu on the menu bar or by right-clicking on the data table.
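        If you export your data, comparable operations can be reproduced with general-purpose tools. The sketch below is illustrative only: it applies a two-point baseline correction, a five-point moving-average smooth, and a numerical first derivative to an invented spectrum using numpy.

        ```python
        import numpy as np

        # Invented spectrum: a Gaussian peak sitting on a sloping baseline
        wl = np.linspace(400, 700, 301)
        spectrum = 0.8 * np.exp(-((wl - 550) / 20) ** 2) + 0.001 * (wl - 400)

        # Two-point baseline correction: subtract the straight line through the endpoints
        baseline = np.interp(wl, [wl[0], wl[-1]], [spectrum[0], spectrum[-1]])
        corrected = spectrum - baseline

        # Smoothing: 5-point moving average
        smoothed = np.convolve(corrected, np.ones(5) / 5, mode="same")

        # First derivative with respect to wavelength
        derivative = np.gradient(smoothed, wl)

        print(f"peak at {wl[np.argmax(smoothed)]:.0f} nm after baseline correction")
        ```
        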

              -

              Validation Module

              -

              The validation module allows you to verify the performance and accuracy of your instrument and software by performing various tests and checks, such as:

              -
                -
              • Wavelength accuracy
              • -
              • Absorbance accuracy
              • -
              • Resolution
              • -
              • Noise
              • -
              • Stray light
              • -
              • Photometric linearity
              • -
              -

              You can access the validation module by clicking on the "Validate" menu on the menu bar or by selecting the validate module from the module selector.
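        Two of these checks are easy to picture numerically: photometric linearity amounts to how closely measured absorbances track reference values (often summarized by a correlation coefficient), and noise can be summarized as the RMS deviation of a blank baseline from its mean. The numbers below are invented, and this is not the software's own validation procedure, only a sketch of the idea.

        ```python
        import numpy as np

        # Photometric linearity: measured absorbance of reference filters vs. their nominal values
        nominal = np.array([0.25, 0.50, 1.00, 1.50, 2.00])
        measured = np.array([0.249, 0.502, 0.997, 1.503, 1.996])
        r = np.corrcoef(nominal, measured)[0, 1]

        # Noise: RMS deviation of a blank baseline measurement from its mean
        baseline = np.array([0.0002, -0.0001, 0.0003, 0.0000, -0.0002, 0.0001])
        rms_noise = np.sqrt(np.mean((baseline - baseline.mean()) ** 2))

        print(f"linearity r = {r:.5f}, baseline RMS noise = {rms_noise:.5f} AU")
        ```
        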

              -

              Conclusion

              -

              In this article, we have shown you how to download and install the Cary 50 WinUV software that works with the Agilent Cary 50 UV-Vis spectrophotometer. We have also explained how to use the software for various UV-Vis applications and how to customize and optimize your data analysis. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to contact us.

              -

              FAQs

              -

              Here are some frequently asked questions about the Cary 50 WinUV software:

              -
                -
              1. What is the difference between Cary 50 WinUV software and Cary WinUV software?
              2. -

                Cary 50 WinUV software is a specific version of Cary WinUV software that is designed for the Agilent Cary 50 UV-Vis spectrophotometer. Cary WinUV software is a general term that refers to any version of UV-Vis software that works with any Agilent Cary UV-Vis or UV-Vis-NIR spectrophotometer.

                -
              3. How can I update my Cary 50 WinUV software?
              4. -

                You can update your Cary 50 WinUV software by visiting the Agilent website and downloading the latest version of the software package. You will need to register again with your details and agree to the license agreement. Then, you can run the setup wizard to install the new version of the software. You may need to uninstall the previous version of the software before installing the new one.

                -
              5. How can I get technical support for my Cary 50 WinUV software?
              6. -

                You can get technical support for your Cary 50 WinUV software by contacting Agilent's customer service center. You can find the contact details of your local service center on the Agilent website or in the user manual of your instrument. You can also access the online help system of the software by clicking on the "Help" menu on the menu bar or by pressing the F1 key on your keyboard.

                -
              7. How can I export my data from Cary 50 WinUV software?
              8. -

                You can export your data from Cary 50 WinUV software in various formats, such as text, Excel, CSV, HTML, PDF, etc. You can also copy and paste your data to other applications, such as Word, PowerPoint, etc. To export your data, you need to click on the "File" menu on the menu bar and select the "Export" option. Then, you can choose the format and destination of your data and click on the "Save" button.

                -
              9. How can I print my data from Cary 50 WinUV software?
              10. -

                You can print your data from Cary 50 WinUV software by connecting your computer to a printer and clicking on the "File" menu on the menu bar and selecting the "Print" option. Then, you can choose the printer settings and click on the "OK" button. You can also preview your data before printing by clicking on the "File" menu and selecting the "Print Preview" option.

                -

              b2dd77e56b
              -
              -
              \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cat Et 2010 Keygen REPACK Download.md b/spaces/stomexserde/gpt4-ui/Examples/Cat Et 2010 Keygen REPACK Download.md deleted file mode 100644 index 3af4ea26175afd54f77c9cfef8c547829893a3fb..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cat Et 2010 Keygen REPACK Download.md +++ /dev/null @@ -1,113 +0,0 @@ -
              -

              Cat ET 2010 Keygen Download: How to Install and Activate Caterpillar Electronic Technician Software

              -

              If you are looking for a way to diagnose and repair your Caterpillar machines and engines, you may have heard of Cat ET 2010. This is a diagnostic program that allows you to perform various tests, calibrations, and troubleshooting on your Caterpillar equipment. However, this program is not free. You need a license key to activate it. This is where Cat ET 2010 keygen comes in. This is a program that generates license keys for Cat ET 2010. By downloading and using this keygen, you can install and activate Cat ET 2010 software for free.

              -

              cat et 2010 keygen download


        Download >>> https://urlgoal.com/2uI858
        



              -

        But how do you download, install, and activate Cat ET 2010 software and keygen? How do you use them to diagnose and repair your Caterpillar machines and engines? What are the benefits and drawbacks of using them? In this article, we will answer these questions and more. We will provide you with a step-by-step guide on how to download, install, and activate Cat ET 2010 software and keygen. We will also explain how to use Cat ET 2010 software to access the various features and functions of your Caterpillar equipment. Finally, we will discuss the pros and cons of using Cat ET 2010 keygen and software, and some tips and precautions that you should follow.
        

              -

              What is Cat ET 2010 and why do you need it?

              -

              Cat ET 2010 is short for Caterpillar Electronic Technician 2010. It is a diagnostic program that is designed for Caterpillar machines and engines. It allows you to communicate with the electronic control modules (ECMs) of your Caterpillar equipment, and perform various tests, calibrations, and troubleshooting. With Cat ET 2010, you can:

              -
                -
              • Read and clear fault codes
              • -
              • View live data and graphs
              • -
              • Record data logs and playback
              • -
              • Change parameters and settings
              • -
              • Perform injector cut-out tests
              • -
              • Calibrate fuel systems
              • -
              • Adjust idle speed
              • -
              • Reset maintenance intervals
              • -
              • And much more
              • -
              -

              Cat ET 2010 is compatible with most Caterpillar machines and engines that have ECMs. It supports both on-highway and off-highway applications, such as trucks, buses, excavators, loaders, bulldozers, generators, marine engines, etc. It also works with other brands of equipment that use Caterpillar engines, such as Perkins, FG Wilson, Olympian, etc.

              -

              To use Cat ET 2010, you need a computer that meets the minimum system requirements, a Caterpillar Communication Adapter (such as CAT Comm Adapter II or III), and a license key. The license key is a code that activates the Cat ET 2010 software on your computer. Without a license key, you cannot use Cat ET 2010 software.

              -

              -

              How to download Cat ET 2010 keygen and software?

              -

              Cat ET 2010 keygen is a program that generates license keys for Cat ET 2010 software. By using this keygen, you can get a valid license key for free, and activate Cat ET 2010 software on your computer. However, Cat ET 2010 keygen is not an official product of Caterpillar. It is a crack or hack that is created by some unknown developers. Therefore, it is not available on the official Caterpillar website or any authorized dealers.

              -

              To download Cat ET 2010 keygen, you need to search for it on various online forums and websites that offer cracks or hacks for different software. Some examples of these forums and websites are:

              -
                -
              • [MHH Auto]
              • -
              • [Auto Repair Manuals]
              • -
              • [Garage Forum]
              • -
              • [Digital Kaos]
              • -
              • [Auto File]
              • -
              -

              However, you should be careful when downloading Cat ET 2010 keygen from these sources. Some of them may contain viruses or malware that can harm your computer or steal your personal information. Some of them may also provide fake or invalid license keys that will not work with Cat ET 2010 software. Therefore, you should always scan the downloaded files with your antivirus software before opening them. You should also check the feedback and comments from other users who have downloaded the same files before.

              -

              Cat ET 2010 software is the official diagnostic program for Caterpillar machines and engines. It is available on the official Caterpillar website or from authorized dealers. However, you need to pay a fee to get the software and the license key from these sources. The fee may vary depending on your location and the type of license you want (such as annual or lifetime).

              -

              If you do not want to pay for Cat ET 2010 software, you can also download it from some online forums and websites that offer free or cracked software. Some examples of these forums and websites are:

              -
                -
              • [MHH Auto]
              • -
              • [Auto Repair Manuals]
              • -
              • [Garage Forum]
              • -
              • [Digital Kaos]
              • -
              • [Auto File]
              • -
              -

              However, as with Cat ET 2010 keygen, you should be careful when downloading Cat ET 2010 software from these sources. Some of them may contain viruses or malware that can harm your computer or steal your personal information. Some of them may also provide outdated or corrupted versions of Cat ET 2010 software that will not work properly with your Caterpillar equipment. Therefore, you should always scan the downloaded files with your antivirus software before opening them. You should also check the feedback and comments from other users who have downloaded the same files before.

              -

              How to install Cat ET 2010 software and keygen?

              -

              After downloading Cat ET 2010 software and keygen, you need to install them on your computer. To do this, you need to follow these steps:

              -
                -
              1. Disable your antivirus software before installing Cat ET 2010 software and keygen. This is because some antivirus software may detect Cat ET 2010 keygen as a virus or malware and block or delete it. However, you should only disable your antivirus software temporarily and re-enable it after installing Cat ET 2010 software and keygen.
              2. -
              3. Open the Cat ET 2010 software setup file and follow the instructions in the setup wizard. You may need to accept the terms and conditions, choose the installation folder, and select the components that you want to install. The setup wizard will guide you through the installation process.
              4. -
              5. Copy the Cat ET 2010 keygen file to the installation folder of Cat ET 2010 software. The installation folder is usually located at C:\Program Files\Caterpillar\Electronic Technician 2010A or C:\Program Files (x86)\Caterpillar\Electronic Technician 2010A, depending on your operating system and version of Cat ET 2010 software.
              6. -
              -

              After installing Cat ET 2010 software and keygen, you need to activate Cat ET 2010 software using the keygen.

              -

              How to activate Cat ET 2010 software using the keygen?

              -

              To activate Cat ET 2010 software using the keygen, you need to follow these steps:

              -
                -
              1. Run the Cat ET 2010 keygen file that you copied to the installation folder of Cat ET 2010 software. A window will pop up asking for your computer ID.
              2. -
              3. Find your computer ID by running the Cat ET 2010 software. A window will pop up asking for a license key. Click on "Get License" and then click on "Copy License Request". Your computer ID will be copied to your clipboard.
              4. -
              5. Paste your computer ID in the Cat ET 2010 keygen window and click on "Generate". A license key will be generated for your computer ID.
              6. -
              7. Copy the license key from the Cat ET 2010 keygen window and paste it in the Cat ET 2010 software window. Click on "Authorize" and then click on "OK". Your Cat ET 2010 software will be activated.
              8. -
              -

              After activating Cat ET 2010 software using the keygen, you can use it to diagnose and repair your Caterpillar machines and engines.

              -

              How to use Cat ET 2010 software to diagnose and repair Caterpillar machines and engines?

              -

              To use Cat ET 2010 software to diagnose and repair your Caterpillar machines and engines, you need to follow these steps:

              -
                -
              1. Connect your Caterpillar Communication Adapter to your computer using a USB cable or a wireless connection. The Caterpillar Communication Adapter is a device that allows you to communicate with the ECMs of your Caterpillar equipment. You can buy it from authorized dealers or online sources.
              2. -
              3. Connect your Caterpillar Communication Adapter to your machine or engine using a data link cable or a wireless connection. The data link cable is a cable that connects the Caterpillar Communication Adapter to the diagnostic port of your machine or engine. You can buy it from authorized dealers or online sources.
              4. -
              5. Run the Cat ET 2010 software on your computer. A window will pop up showing the list of detected ECMs that are connected to your Caterpillar Communication Adapter. Select the ECM that you want to communicate with from the list.
              6. -
              7. Access the various features and functions of Cat ET 2010 software, such as data logging, parameter setting, fault code reading, etc. You can use the menus, toolbars, buttons, tabs, and icons on the Cat ET 2010 software window to access these features and functions. You can also use the help menu or press F1 to get more information and guidance on how to use Cat ET 2010 software.
              8. -
              -

              By using Cat ET 2010 software, you can diagnose and repair your Caterpillar machines and engines more easily and efficiently.

              -

              What are the benefits and drawbacks of using Cat ET 2010 keygen and software?

        -
        

              -

              Using Cat ET 2010 keygen and software may have some benefits and drawbacks that you should consider before deciding to use them. Here are some of them:

              -

              The benefits of using Cat ET 2010 keygen and software are:

              -
                -
              • You can save money by not paying for the official license key and software from Caterpillar or authorized dealers.
              • -
              • You can save time by not waiting for the delivery or installation of the official license key and software from Caterpillar or authorized dealers.
              • -
              • You can save effort by not contacting Caterpillar or authorized dealers for technical support or updates.
              • -
              • You can access all the features and functions of Cat ET 2010 software without any limitations or restrictions.
              • -
              • You can diagnose and repair your Caterpillar machines and engines more easily and efficiently with Cat ET 2010 software.
              • -
              -

              The drawbacks of using Cat ET 2010 keygen and software are:

              -
                -
              • You may encounter compatibility issues with your computer, Caterpillar Communication Adapter, or Caterpillar equipment if you use outdated or corrupted versions of Cat ET 2010 software.
              • -
              • You may face security risks such as viruses or malware that can harm your computer or steal your personal information if you download Cat ET 2010 keygen or software from unreliable sources.
              • -
              • You may have legal problems such as violating the intellectual property rights of Caterpillar or breaking the terms and conditions of using Cat ET 2010 software if you use Cat ET 2010 keygen or software without authorization.
              • -
              • You may experience technical errors such as license key invalidation, software malfunction, or data loss if you use Cat ET 2010 keygen or software improperly.
              • -
              -

              Conclusion

              -

              Cat ET 2010 keygen download is a way to install and activate Caterpillar Electronic Technician software for free. Cat ET 2010 software is a useful tool for diagnosing and repairing Caterpillar machines and engines. However, using Cat ET 2010 keygen and software may also have some disadvantages and challenges that you should be aware of. Therefore, you should weigh the pros and cons of using Cat ET 2010 keygen and software before deciding to use them. You should also follow some tips and precautions to avoid any problems or issues that may arise from using them.

              -

              FAQs

              -

              Q1. Is Cat ET 2010 compatible with Windows 10?

              -

              A1. Yes, Cat ET 2010 is compatible with Windows 10. However, you may need to run it in compatibility mode or as an administrator to avoid any errors or issues.

              -

              Q2. How can I update my Cat ET 2010 software to the latest version?

              -

              A2. You can update your Cat ET 2010 software to the latest version by downloading the update file from the official Caterpillar website or from authorized dealers. However, you may need to pay a fee to get the update file and a new license key. Alternatively, you can also download the update file from some online forums and websites that offer free or cracked software. However, you should be careful when downloading the update file from these sources, as they may contain viruses or malware that can harm your computer or steal your personal information.

              -

              Q3. Where can I find more information and support for Cat ET 2010 software?

              -

              A3. You can find more information and support for Cat ET 2010 software on the official Caterpillar website or from authorized dealers. You can also find more information and support on some online forums and websites that offer free or cracked software. However, you should be careful when accessing these forums and websites, as they may provide inaccurate or misleading information or support.

              -

              Q4. What are some alternatives to Cat ET 2010 software?

              -

              A4. Some alternatives to Cat ET 2010 software are:

              -
                -
              • Cat SIS (Service Information System): This is a program that provides service manuals, parts catalogs, wiring diagrams, etc. for Caterpillar machines and engines.
              • -
              • Cat DPA (Diagnostic Port Adapter): This is a device that allows you to communicate with the ECMs of your Caterpillar equipment without using a Caterpillar Communication Adapter.
              • -
              • Cat Flash Files: These are files that contain the latest firmware updates for your Caterpillar equipment.
              • -
              • Cat Factory Passwords: These are passwords that allow you to access advanced features and functions of Cat ET 2010 software that are normally locked or restricted.
              • -
              -

        However, these alternatives may also require a license key or a fee to use them.

        Q5. How can I avoid getting viruses or malware from downloading Cat ET 2010 keygen or software?
        

              -

              A5. You can avoid getting viruses or malware from downloading Cat ET 2010 keygen or software by following these tips:

              -
                -
              • Always scan the downloaded files with your antivirus software before opening them.
              • -
              • Always download Cat ET 2010 keygen or software from reliable and reputable sources.
              • -
              • Always check the feedback and comments from other users who have downloaded the same files before.
              • -
              • Always backup your important data and files before installing Cat ET 2010 keygen or software.
              • -
              • Always use a firewall and a VPN to protect your online privacy and security.
              • -

              b2dd77e56b
              -
              -
              \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Chhota Bheem - Himalayan Adventure 2 Movie Download In Hindi Mp4.md b/spaces/stomexserde/gpt4-ui/Examples/Chhota Bheem - Himalayan Adventure 2 Movie Download In Hindi Mp4.md deleted file mode 100644 index 401c4fd4a15fdc4657540061a7f9c13b90b7b9ec..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Chhota Bheem - Himalayan Adventure 2 Movie Download In Hindi Mp4.md +++ /dev/null @@ -1,26 +0,0 @@ -
              -

              How to Download Chhota Bheem - Himalayan Adventure 2 Movie in Hindi Mp4 Format

              -

              If you are a fan of Chhota Bheem, the popular Indian animated series, you might be interested in watching the latest movie, Chhota Bheem - Himalayan Adventure 2. This movie is a sequel to the 2016 film, Chhota Bheem - Himalayan Adventure, and follows the adventures of Chhota Bheem and his friends in the Himalayas.

              -

              In this article, we will show you how to download Chhota Bheem - Himalayan Adventure 2 movie in Hindi mp4 format, so you can enjoy it on your device anytime and anywhere. We will also share some tips on how to avoid malware and viruses when downloading movies online.

              -

              Chhota Bheem - Himalayan Adventure 2 Movie Download In Hindi Mp4


              Download Zip >>> https://urlgoal.com/2uI9do



              -

              What is Chhota Bheem - Himalayan Adventure 2 Movie?

              -

              Chhota Bheem - Himalayan Adventure 2 is a 2023 Indian animated comedy-adventure film directed by Rajiv Chilaka and produced by Green Gold Animation. It is the fifth theatrical film based on the Chhota Bheem television series, and the second one set in the Himalayas.

              -

              The movie features the voice talents of Sonal Kaushal as Chhota Bheem, Rupa Bhimani as Chutki, Jigna Bhardwaj as Raju, Sabina Malik as Kalia, Rajesh Kava as Jaggu, and Mausam as Indumati. The movie also introduces new characters such as Tashi, a young Tibetan boy who befriends Chhota Bheem, and Zara, a mysterious girl who has a secret connection to the Himalayas.

              -

              The plot of the movie revolves around Chhota Bheem and his friends visiting Tashi's village in the Himalayas for a winter festival. There, they encounter Zara, who warns them about an evil snowman named Frosty who wants to freeze the entire world. Chhota Bheem and his friends must team up with Zara and Tashi to stop Frosty and his army of snowmen from destroying the Himalayas and the rest of the world.

              -

              Why Download Chhota Bheem - Himalayan Adventure 2 Movie in Hindi Mp4 Format?

              -

              There are many reasons why you might want to download Chhota Bheem - Himalayan Adventure 2 movie in Hindi mp4 format. Here are some of them:

              -
                -
              • Mp4 is a widely supported video format that can play on most devices, such as smartphones, tablets, laptops, and TVs.
              • -
              • Mp4 files are usually smaller in size than other video formats, which means they take up less storage space and bandwidth.
              • -
              • Mp4 files can retain high-quality video and audio even when compressed.
              • -
              • Hindi is the original language of the movie and the voice actors. By downloading the movie in Hindi mp4 format, you can enjoy the movie in its authentic form and appreciate the voice acting and dialogue.
              • -
              -

              How to Download Chhota Bheem - Himalayan Adventure 2 Movie in Hindi Mp4 Format?

              -

              There are many websites that offer free downloads of movies online. However, not all of them are safe and legal. Some of them may contain malware or viruses that can harm your device or steal your personal information. Some of them may also violate copyright laws and infringe on the rights of the creators and producers of the movies.

              -

              To avoid these risks, we recommend that you download Chhota Bheem - Himalayan Adventure 2 movie in Hindi mp4 format from a trusted and legal source. One such source is Chhotabheem.com, the official website of the Chhota Bheem franchise. Here are the steps to download the movie from this website:

              -
                -
              1. Go to Chhotabheem.com and click on the "Movies" tab.
              2. -
              3. Find Chhota Bheem - Himalayan Adventure 2 movie from the list of movies and click on it.
              4. -
              5. You will be redirected to a page where you can watch the trailer of the

                cec2833e83
                -
                -
                \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cna-windows-k9-installer-5-6-3-en.exe 1.md b/spaces/stomexserde/gpt4-ui/Examples/Cna-windows-k9-installer-5-6-3-en.exe 1.md deleted file mode 100644 index a13abc9357e0fd783b6e1c64d2016133542d0827..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cna-windows-k9-installer-5-6-3-en.exe 1.md +++ /dev/null @@ -1,32 +0,0 @@ -
        -
        

                How to Use Cisco Network Assistant for Windows

                -

                Cisco Network Assistant is a free software tool that helps you manage your Cisco network devices. It simplifies many common networking tasks, such as configuration management, inventory reports, event notification, file management, and software upgrades. You can use Cisco Network Assistant to monitor and troubleshoot your network, as well as apply common services across Cisco switches, routers, and access points.

                -

                Cna-windows-k9-installer-5-6-3-en.exe 1


                Download Zip ->>> https://urlgoal.com/2uI9UE



                -

                In this article, we will show you how to download and install Cisco Network Assistant for Windows, and how to use its main features.

                -

                Downloading and Installing Cisco Network Assistant for Windows

                -

                To download Cisco Network Assistant for Windows, you need to have a Cisco account and a valid service contract. You can register for a free Cisco account here.

                -

                Once you have logged in to your Cisco account, go to the Cisco Network Assistant download page. You will see a list of available software versions for different platforms. Choose the latest version for Windows (cna-windows-k9-installer-5-6-3-en.exe 1) and click on the Download button.

                -

                You will be prompted to accept the End User License Agreement and the Software Download Terms and Conditions. After that, the download will start automatically. The file size is about 79 MB.

                -

                When the download is complete, locate the file (cna-windows-k9-installer-5-6-3-en.exe 1) on your computer and double-click on it to start the installation process. Follow the on-screen instructions to complete the installation. You may need to restart your computer after the installation.

                -

                Using Cisco Network Assistant for Windows

                -

                To launch Cisco Network Assistant for Windows, go to Start > All Programs > Cisco Systems > Cisco Network Assistant. You will see the main window of the application, which consists of three main areas: the toolbar, the community pane, and the workspace pane.

                -

                -

                The toolbar contains buttons for accessing various functions of Cisco Network Assistant, such as creating or opening a community, discovering devices, launching device managers, configuring services, viewing reports, and getting help.

                -

                The community pane shows the list of communities that you have created or opened. A community is a group of network devices that share common characteristics or services. You can create multiple communities to manage different parts of your network.

                -

                The workspace pane shows the graphical representation of your network devices and their status. You can use the workspace pane to monitor and troubleshoot your network devices, as well as perform various tasks on them.

                -

                Creating a Community

                -

                To create a new community, click on the New Community button on the toolbar. You will see a dialog box where you can enter a name for your community and choose a discovery method.

                -

                The discovery method determines how Cisco Network Assistant will find your network devices. You can choose from three options: CDP (Cisco Discovery Protocol), IP Range, or Manual.

                -
                  -
                1. CDP: This option uses CDP to discover all Cisco devices that are directly connected to your computer or to other devices in your network. CDP is enabled by default on most Cisco devices.
                2. -
                3. IP Range: This option allows you to specify an IP address range or a subnet mask to scan for network devices. You can also enter a list of IP addresses or hostnames separated by commas.
                4. -
                5. Manual: This option allows you to manually add network devices by entering their IP addresses or hostnames.
                6. -
                -

                After choosing a discovery method, click on the Next button. Cisco Network Assistant will start discovering your network devices and display them in the workspace pane. You can also add or remove devices manually by right-clicking on them and selecting Add Device or Remove Device.
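        To picture what an IP Range discovery actually walks through, it can help to expand a subnet into its individual host addresses. The snippet below uses only Python's standard ipaddress module and an example subnet; it is plain subnet arithmetic, not part of Cisco Network Assistant or its API.

        ```python
        import ipaddress

        # Example subnet that an "IP Range" discovery might be pointed at
        network = ipaddress.ip_network("192.168.10.0/28")

        # The usable host addresses inside that range
        hosts = [str(ip) for ip in network.hosts()]
        print(f"{network} contains {len(hosts)} host addresses:")
        print(", ".join(hosts))
        ```
        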

                -

                When you are satisfied with your community, click on the Finish button. Your community will be saved and added to the community pane. You can also save your community as a file (.cna) by clicking on the Save button on the toolbar.

                -

                Configuring Services

                -

                Cisco Network Assistant allows you to apply common services

                7196e7f11a
                -
                -
                \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Crazytalk Animator 2 Free Download Full Version.md b/spaces/stomexserde/gpt4-ui/Examples/Crazytalk Animator 2 Free Download Full Version.md deleted file mode 100644 index 7bdae1ff0d313590b06c0597c54cce00299bc2b4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Crazytalk Animator 2 Free Download Full Version.md +++ /dev/null @@ -1,51 +0,0 @@ -
                - - -
                -

                Crazytalk Animator 2 Free Download Full Version: A Review

                -

                If you are looking for a powerful and easy-to-use software to create stunning 2D animations, you might want to check out Crazytalk Animator 2. This software allows you to turn any image into a lifelike character with facial expressions, lip-sync, and body movements. You can also use popular 3D motions on your 2D characters, edit them in a 3D space, and switch between different styles and perspectives. In this article, we will review the features and benefits of Crazytalk Animator 2, show you how to download and install it for free, and give you some tips on how to use it to create amazing animations.

                -

                What is Crazytalk Animator 2?

                -

                Crazytalk Animator 2 is a 2D animation software developed by Reallusion, a company that specializes in creating realistic digital humans and animation tools. Crazytalk Animator 2 is the successor of Crazytalk Animator, which was released in 2010. Crazytalk Animator 2 brings a whole new 3D experience to 2D animation by eliminating planar restrictions and allowing designers to use popular 3D motions on 2D characters. This software also offers a new generation of character system that lets users mix and match features to create multi-dimensional characters with different styles, colors, and appearances. Crazytalk Animator 2 is not just the next version; it is the kick-starter to a 3D motion revolution in 2D animation.

                -

                Crazytalk Animator 2 Free Download Full Version


                DOWNLOAD >>>>> https://urlgoal.com/2uI6fH



                -

                Features and benefits of Crazytalk Animator 2

                -

                Crazytalk Animator 2 has many features and benefits that make it a versatile and powerful software for creating professional-quality animations. Here are some of them:

                -

                Multi-dimensional character system

                -

                Crazytalk Animator 2 offers a multi-dimensional character system that allows users to mix and match features to easily create unique characters with their own personalities. You can customize your character in its forward perspective, and the system will then update all character features in all other angles automatically. You can also choose from tons of facial templates that can be quickly assembled to give your character different expressions. With this system, you can create unlimited variations of characters for any project or scenario.

                -

                Render style options

                -

                Crazytalk Animator 2 also includes render style options that allow you to easily switch your characters and scene styles, color and appearance. You can toggle lines on and off, make a silhouette, or adjust color tones, saturation, and others with instant template styles like Line Art, Saturrific, Cool Abstract, Noir Blanc, and others. This feature is useful for creating different moods, themes, or effects for your animations.

                -

                Multi-dimensional animation

                -

        Crazytalk Animator 2's multi-dimensional engine allows you to freely animate your 2D characters in 10 different angles. This revolutionary feature breaks away from flat, planar 2D animation and gives you more control over perspectives and movements. You can simply customize your character in an initial front view and in the perspective of each motion in a 2D space. You can also use the motion layer editor to blend multiple motions together or add secondary motions to your character.
        

                -

                -

                Editing scenes and camera angles

                -

                The final step is to edit the scenes and camera angles of your animation. You can use the scene manager to add, delete, or arrange the elements of your scene, such as characters, props, backgrounds, and effects. You can also use the Z-depth layer manager to adjust the depth of each element in a 3D space. You can use the camera tool to change the view and angle of your scene. You can also use the camera key editor to create dynamic camera movements and transitions. You can also add sound effects, music, and subtitles to your animation using the audio and subtitle editors.

                -

                Pros and cons of Crazytalk Animator 2

                -

                Crazytalk Animator 2 is a great software for creating 2D animations with a 3D experience. However, like any software, it has its pros and cons. Here are some of them:

                - - - - - - - - - -
                ProsCons
                - Easy to use and user-friendly interface
                - Powerful and versatile features and tools
                - Multi-dimensional character system and render style options
                - Multi-dimensional animation and 3D motion editing for 2D characters
                - Large content library and compatibility with external sources
                - Affordable price and free trial version
                - Requires a high-performance computer and internet connection
                - May have some bugs and glitches
                - May have some limitations in customizing characters and motions
                - May have some compatibility issues with other software or formats
                - May have some learning curve for beginners or advanced users
                -

                Conclusion and recommendations

                -

                Crazytalk Animator 2 is a 2D animation software that offers a new 3D experience to 2D animation by eliminating planar restrictions and allowing designers to use popular 3D motions on 2D characters. It also offers a new generation of character system that lets users mix and match features to create multi-dimensional characters with different styles, colors, and appearances. Crazytalk Animator 2 is not just the next version; it is the kick-starter to a 3D motion revolution in 2D animation. If you are looking for a powerful and easy-to-use software to create stunning 2D animations, you should definitely give Crazytalk Animator 2 a try. You can download and install it for free from the official website of Reallusion, and start creating amazing animations with this software. However, you should also be aware of the requirements and compatibility of the software, as well as its pros and cons. You should also be willing to learn and explore the various features and tools of the software, as well as seek help from online tutorials, forums, or customer support if you encounter any problems or difficulties. Crazytalk Animator 2 is a software that can unleash your creativity and imagination in creating 2D animations with a 3D experience.

                -

                FAQs

                -

                Here are some frequently asked questions about Crazytalk Animator 2:

                -
                  -
                1. What is the difference between Crazytalk Animator 2 standard version and pipeline version?
                2. -

                  The standard version is free, while the pipeline version requires a paid license. The pipeline version allows you to export your animations to other formats and software, such as iClone, Photoshop, After Effects, and others. The pipeline version also includes more content packs and features than the standard version.

                  -
                3. Can I use Crazytalk Animator 2 on multiple computers?
                4. -

                  Yes, you can use Crazytalk Animator 2 on multiple computers with one license. However, you need to activate the software on each computer using your email address and password. You can also deactivate the software on one computer if you want to use it on another computer.

                  -
                5. Can I import my own images or videos into Crazytalk Animator 2?
                6. -

                  Yes, you can import your own images or videos into Crazytalk Animator 2 as long as they are in compatible formats. For images, you can use JPG, PNG, BMP, GIF, or TGA formats. For videos, you can use AVI, WMV, MP4, MOV, or FLV formats. You can also import PSD files from Photoshop into Crazytalk Animator 2 if you have the pipeline version.

                  -
                7. Can I share my animations online or on social media?
                8. -

                  Yes, you can share your animations online or on social media using the export and upload functions of Crazytalk Animator 2. You can export your animations as video files, image files, or HTML5 files. You can also upload your animations directly to YouTube, Facebook, or Vimeo from the software. You can also embed your animations on your website or blog using the HTML5 code.

                  -
                9. Where can I find more tutorials, tips, or support for Crazytalk Animator 2?
                10. -

                  You can find more tutorials, tips, or support for Crazytalk Animator 2 on the official website of Reallusion, as well as on their YouTube channel, forum, blog, and online help. You can also contact their customer service via email or phone if you have any questions or issues with the software.

                  -
                -

                I hope you enjoyed this article and learned something new about Crazytalk Animator 2. If you have any feedback or suggestions, please feel free to leave a comment below. Thank you for reading and happy animating!

                b2dd77e56b
                -
                -
                \ No newline at end of file diff --git a/spaces/sub314xxl/MusicGen-Continuation/tests/modules/test_codebooks_patterns.py b/spaces/sub314xxl/MusicGen-Continuation/tests/modules/test_codebooks_patterns.py deleted file mode 100644 index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/tests/modules/test_codebooks_patterns.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.modules.codebooks_patterns import ( - DelayedPatternProvider, - ParallelPatternProvider, - Pattern, - UnrolledPatternProvider, -) - - -class TestParallelPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == s - 1 # account for the 1st empty step - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_max_delay(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == 0 - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestDelayedPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - delays = [ - list(range(n_q)), - [0] + [1] * (n_q - 1), - [0] + [4] * (n_q - 1), - ] - for delay in delays: - provider = DelayedPatternProvider(n_q, delay) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + max(delay) + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = DelayedPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == max(0, s - code.q - 1) - - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]]) - def test_pattern_max_delay(self, timesteps: int, delay: list): - provider = DelayedPatternProvider(len(delay), delay) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max(delay) - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestUnrolledPatternProvider: - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_get_pattern(self, timesteps: int, flattening: list, delays: 
list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max_delay - - -class TestPattern: - - def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to build the sequence from the pattern without using fancy scatter.""" - bs, n_q, T = z.shape - z = z.cpu().numpy() - assert n_q == pattern.n_q - assert T <= pattern.timesteps - inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < T: - inp[:, q, s] = z[:, q, t] - return torch.from_numpy(inp) - - def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to revert the sequence from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, n_q, S = z.shape - assert pattern.n_q == n_q - inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < pattern.timesteps: - inp[:, q, t] = z[:, q, s] - return torch.from_numpy(inp) - - def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float): - """Reference method to revert the logits from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, card, n_q, S = z.shape - assert pattern.n_q == n_q - ref_layout = pattern.layout - inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy() - inp[:] = special_token - for s, v in enumerate(ref_layout[1:]): - if s < S: - for (t, q) in v: - if t < pattern.timesteps: - inp[:, :, q, t] = z[:, :, q, s] - return torch.from_numpy(inp) - - def _get_pattern_providers(self, n_q: int): - pattern_provider_1 = ParallelPatternProvider(n_q) - pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q))) - pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1)) - pattern_provider_4 = UnrolledPatternProvider( - n_q, flattening=list(range(n_q)), delays=[0] * n_q - ) - pattern_provider_5 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q - ) - pattern_provider_6 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1) - ) - return [ - pattern_provider_1, - pattern_provider_2, - pattern_provider_3, - pattern_provider_4, - pattern_provider_5, - pattern_provider_6, - ] - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_build_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # we can correctly build the sequence from the pattern - z = 
torch.randint(0, card, (bs, n_q, timesteps)) - ref_res = self.ref_build_pattern_sequence(z, pattern, special_token) - res, indexes, mask = pattern.build_pattern_sequence(z, special_token) - assert (res == ref_res).float().mean() == 1.0 - - # expected assertion fails on the number of timesteps - invalid_timesteps = [timesteps + 1] - if pattern.num_sequence_steps != pattern.timesteps: - invalid_timesteps.append(pattern.num_sequence_steps) - for i_timesteps in invalid_timesteps: - z2 = torch.randint(0, card, (bs, n_q, i_timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z2, special_token) - - # expected assertion fails on the number of codebooks - invalid_qs = [0, n_q - 1, n_q + 1] - for i_q in invalid_qs: - z3 = torch.randint(0, card, (bs, i_q, timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z3, special_token) - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_revert_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token) - # ensure our reference script retrieve the original sequence - assert z.shape == ref_out.shape - assert (z == ref_out).float().mean() == 1.0 - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_sequence(s, special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - @pytest.mark.parametrize("card", [1, 2, 256, 1024]) - def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int): - bs = 2 - special_token = card - logits_special_token = float('nan') - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - logits = torch.randn((bs, card, n_q, s.shape[-1])) - ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token) - # ensure our reference script retrieve the original sequence - assert ref_out.shape == torch.Size([bs, card, n_q, timesteps]) - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 diff --git a/spaces/sub314xxl/MusicGen/audiocraft/data/zip.py b/spaces/sub314xxl/MusicGen/audiocraft/data/zip.py deleted file mode 100644 index 1f1154231da321dd38d151ff285dbcff5e38a6e0..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/data/zip.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import typing -import zipfile - -from dataclasses import dataclass -from functools import lru_cache -from typing_extensions import Literal - - -DEFAULT_SIZE = 32 -MODE = Literal['r', 'w', 'x', 'a'] - - -@dataclass(order=True) -class PathInZip: - """Class for holding a path of file within a zip file. - - Args: - path: The convention is : - Let's assume there is a zip file /some/location/foo.zip - and inside of it is a json file located at /data/file1.json, - Then we expect path = "/some/location/foo.zip:/data/file1.json" - """ - - INFO_PATH_SEP = ':' - zip_path: str - file_path: str - - def __init__(self, path: str) -> None: - split_path = path.split(self.INFO_PATH_SEP) - assert len(split_path) == 2 - self.zip_path, self.file_path = split_path - - @classmethod - def from_paths(cls, zip_path: str, file_path: str): - return cls(zip_path + cls.INFO_PATH_SEP + file_path) - - def __str__(self) -> str: - return self.zip_path + self.INFO_PATH_SEP + self.file_path - - -def _open_zip(path: str, mode: MODE = 'r'): - return zipfile.ZipFile(path, mode) - - -_cached_open_zip = lru_cache(DEFAULT_SIZE)(_open_zip) - - -def set_zip_cache_size(max_size: int): - """Sets the maximal LRU caching for zip file opening. - - Args: - max_size: the maximal LRU cache. - """ - global _cached_open_zip - _cached_open_zip = lru_cache(max_size)(_open_zip) - - -def open_file_in_zip(path_in_zip: PathInZip, mode: str = 'r') -> typing.IO: - """Opens a file stored inside a zip and returns a file-like object. - - Args: - path_in_zip: A PathInZip object representing the file to return a file-like object of. - mode: The mode in which to open the file with. - Returns: - A file-like object for PathInZip. - """ - zf = _cached_open_zip(path_in_zip.zip_path) - return zf.open(path_in_zip.file_path) diff --git a/spaces/subhajitmaji/MusicGen/audiocraft/quantization/base.py b/spaces/subhajitmaji/MusicGen/audiocraft/quantization/base.py deleted file mode 100644 index 1b16c130d266fbd021d3fc29bb9f98c33dd3c588..0000000000000000000000000000000000000000 --- a/spaces/subhajitmaji/MusicGen/audiocraft/quantization/base.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Base class for all quantizers. -""" - -from dataclasses import dataclass, field -import typing as tp - -import torch -from torch import nn - - -@dataclass -class QuantizedResult: - x: torch.Tensor - codes: torch.Tensor - bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item. - penalty: tp.Optional[torch.Tensor] = None - metrics: dict = field(default_factory=dict) - - -class BaseQuantizer(nn.Module): - """Base class for quantizers. - """ - - def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult: - """ - Given input tensor x, returns first the quantized (or approximately quantized) - representation along with quantized codes, bandwidth, and any penalty term for the loss. - Finally, this returns a dict of metrics to update logging etc. - Frame rate must be passed so that the bandwidth is properly computed. - """ - raise NotImplementedError() - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - """ - raise NotImplementedError() - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. 
- """ - raise NotImplementedError() - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - raise NotImplementedError() - - @property - def num_codebooks(self): - """Number of active codebooks. - """ - raise NotImplementedError() - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. - """ - raise NotImplementedError() - - -class DummyQuantizer(BaseQuantizer): - """Fake quantizer that actually does not perform any quantization. - """ - def __init__(self): - super().__init__() - - def forward(self, x: torch.Tensor, frame_rate: int): - q = x.unsqueeze(1) - return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return x.unsqueeze(1) - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return codes.squeeze(1) - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - return 1 - - @property - def num_codebooks(self): - """Total number of codebooks. - """ - return self.total_codebooks - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. - """ - raise AttributeError("Cannot override the number of codebooks for the dummy quantizer") diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Garage Assistant Ga3 Crack __LINK__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Garage Assistant Ga3 Crack __LINK__.md deleted file mode 100644 index eb74fa1a8de1cf3211240e9cab72aa75af0b659f..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Garage Assistant Ga3 Crack __LINK__.md +++ /dev/null @@ -1,95 +0,0 @@ - -

                Garage Assistant GA3 Crack: How to Get it and Use it

                -

                Garage Assistant GA3 is software that helps you manage your garage business. It allows you to create invoices, estimates, job cards, service reminders, and more. It also integrates with your accounting software and online payment systems. Garage Assistant GA3 is a powerful and reliable solution for your garage.

                -

                Garage Assistant Ga3 Crack


                Download ★★★ https://cinurl.com/2uEXNf



                -

                However, Garage Assistant GA3 is not free software. You need to pay a monthly or yearly subscription fee to use it. If you are looking for a way to get a Garage Assistant GA3 crack and use the program for free, you might be tempted by websites that claim to offer one. Be careful: such sites are often scams or distribute malware.

                -

                Why You Should Avoid Garage Assistant GA3 Crack

                -

                There are many reasons why you should avoid downloading and using Garage Assistant GA3 crack from unknown sources. Here are some of them:

                -
                  -
                • It is illegal. Downloading and using Garage Assistant GA3 crack is a violation of the software's license agreement and copyright laws. You could face legal consequences if you are caught.
                • -
                • It is unsafe. Downloading and installing Garage Assistant GA3 crack from untrusted websites could expose your computer to viruses, malware, spyware, ransomware, and other threats. These could damage your system, steal your data, or lock your files.
                • -
                • It is unreliable. Using Garage Assistant GA3 crack could cause errors, bugs, crashes, or compatibility issues with your software or hardware. You could lose your work, data, or customers if the software fails to function properly.
                • -
                • It is unethical. Using Garage Assistant GA3 crack is unfair to the developers who spent time and money to create the software. You are depriving them of their rightful income and discouraging them from improving the software.
                • -
                -

                How to Get and Use Garage Assistant GA3 Legally

                -

                If you want to get and use Garage Assistant GA3 legally, you have two options:

                -
                  -
                • Buy a license. You can buy a license for Garage Assistant GA3 from the official website: http://www.sws-solutions.co.uk/. You can choose between a monthly or yearly subscription plan that suits your budget and needs. You will get access to all the features and updates of the software, as well as technical support and customer service.
                • -
                • Use a free trial. You can also try Garage Assistant GA3 for free for 14 days. You can download the free trial version from the official website: http://www.sws-solutions.co.uk/. You can test all the features and functions of the software without any limitations or obligations.
                • -
                -

                Conclusion

                -

                Garage Assistant GA3 is a great software for managing your garage business. It can help you save time, money, and hassle. However, you should not download or use Garage Assistant GA3 crack from unknown sources. It is illegal, unsafe, unreliable, and unethical. Instead, you should buy a license or use a free trial from the official website. This way, you can enjoy the benefits of the software without any risks or regrets.

                -

                What are the Features of Garage Assistant GA3

                -

                Garage Assistant GA3 is a software that offers many features and functions for your garage business. Some of the features are:

                -

                -
                  -
                • Invoicing. You can create professional and customizable invoices for your customers. You can also print, email, or export them to PDF or Excel. You can also track your payments and send reminders.
                • -
                • Estimates. You can create accurate and detailed estimates for your customers. You can also convert them to invoices or job cards with one click.
                • -
                • Job Cards. You can create and manage job cards for your technicians. You can also assign tasks, parts, and labor costs to each job card. You can also print or email them to your customers or technicians.
                • -
                • Service Reminders. You can send service reminders to your customers via email or SMS. You can also schedule them in advance and customize them according to your preferences.
                • -
                • Stock Control. You can manage your stock levels and inventory with Garage Assistant GA3. You can also set reorder levels, track suppliers, and generate purchase orders.
                • -
                • Reports. You can generate various reports with Garage Assistant GA3. You can also filter, sort, and export them to PDF or Excel. Some of the reports are sales, profit, VAT, customers, suppliers, parts, and more.
                • -
                -

                What are the Benefits of Garage Assistant GA3

                -

                Garage Assistant GA3 is a software that can benefit your garage business in many ways. Some of the benefits are:

                -
                  -
                • It saves you time. You can automate and streamline your daily tasks with Garage Assistant GA3. You can also access your data from anywhere with an internet connection.
                • -
                • It saves you money. You can reduce your paper and printing costs with Garage Assistant GA3. You can also avoid errors and mistakes that could cost you money.
                • -
                • It improves your customer service. You can impress your customers with professional and timely communication with Garage Assistant GA3. You can also increase their loyalty and satisfaction with service reminders and follow-ups.
                • -
                • It grows your business. You can attract more customers and referrals with Garage Assistant GA3. You can also increase your sales and profit with accurate estimates and invoices.
                • -
                -

                How to Install and Use Garage Assistant GA3

                -

                If you have bought a license or downloaded a free trial of Garage Assistant GA3, you can install and use it on your computer with these steps:

                -
                  -
                1. Download the setup file from the official website: http://www.sws-solutions.co.uk/.
                2. -
                3. Run the setup file and follow the instructions on the screen.
                4. -
                5. Enter your license key or select the free trial option.
                6. -
                7. Launch the software and create your account.
                8. -
                9. Enter your garage details and preferences.
                10. -
                11. Start using the software and enjoy its features and functions.
                12. -
                -

                If you need any help or support, you can contact the customer service team via email, phone, or live chat. You can also access the online help and tutorials on the website.

                -

                Conclusion

                -

                Garage Assistant GA3 is software that helps you manage your garage business. It allows you to create invoices, estimates, job cards, service reminders, and more. It also integrates with your accounting software and online payment systems. Garage Assistant GA3 is a powerful and reliable solution for your garage.

                -

                However, you should not download or use Garage Assistant GA3 crack from unknown sources. It is illegal, unsafe, unreliable, and unethical. Instead, you should buy a license or use a free trial from the official website. This way, you can enjoy the benefits of the software without any risks or regrets.

                -

                What are the Alternatives to Garage Assistant GA3

                -

                If you are looking for other software that can help you manage your garage business, you might want to consider these alternatives to Garage Assistant GA3:

                -
                  -
                • Auto Repair Bill. This is a cloud-based software that allows you to create invoices, estimates, and repair orders for your garage. You can also track your customers, vehicles, parts, and payments. Auto Repair Bill also integrates with QuickBooks and PayPal. You can try it for free for 30 days.
                • -
                • Workshop Software. This is a web-based software that helps you manage your workshop and streamline your workflow. You can also create invoices, quotes, bookings, and job cards. Workshop Software also integrates with Xero, MYOB, Reckon, Tyro, and more. You can try it for free for 14 days.
                • -
                • MechanicDesk. This is a cloud-based software that helps you run your mechanic business efficiently and effectively. You can also create invoices, quotes, bookings, and job cards. MechanicDesk also integrates with Xero, MYOB, Reckon, Stripe, and more. You can try it for free for 30 days.
                • -
                -

                How to Choose the Best Software for Your Garage Business

                -

                There are many factors that you should consider when choosing the best software for your garage business. Some of them are:

                -
                  -
                • Your budget. You should compare the prices and plans of different software and choose the one that fits your budget and needs.
                • -
                • Your features. You should check the features and functions of different software and choose the one that offers the most value and benefits for your business.
                • -
                • Your compatibility. You should check the compatibility and integration of different software with your existing software and hardware.
                • -
                • Your support. You should check the support and customer service of different software and choose the one that offers the best help and assistance.
                • -
                -

                You can also read reviews and testimonials from other users and experts to get more insights and feedback on different software.

                -

                How to Avoid Scams and Malware When Looking for Garage Assistant GA3 Crack

                -

                If you are still tempted to look for Garage Assistant GA3 crack online, you should be very careful and cautious. There are many scams and malware that could harm your computer and your data. Here are some tips to avoid them:

                -
                  -
                • Do not download or install anything from unknown or suspicious websites. They could contain viruses, malware, spyware, ransomware, or other threats.
                • -
                • Do not click on any links or pop-ups that claim to offer Garage Assistant GA3 crack. They could redirect you to malicious websites or download unwanted programs.
                • -
                • Do not enter any personal or financial information on any websites that claim to offer Garage Assistant GA3 crack. They could steal your identity, money, or credit card details.
                • -
                • Do not trust any reviews or testimonials that praise Garage Assistant GA3 crack. They could be fake or paid by the scammers.
                • -
                • Do not fall for any offers or discounts that seem too good to be true. They could be scams or traps to lure you in.
                • -
                -

                How to Protect Your Computer and Data from Scams and Malware

                -

                If you want to protect your computer and data from scams and malware, you should follow these steps:

                -
                  -
                • Install a reputable antivirus and anti-malware software on your computer. You should also update it regularly and scan your system frequently.
                • -
                • Use a firewall and a VPN to secure your internet connection and prevent unauthorized access.
                • -
                • Back up your data regularly and store it in a safe place, such as an external hard drive, a cloud service, or a flash drive (a minimal scripted example is sketched after this list).
                • -
                • Be careful and vigilant when browsing the internet. You should also educate yourself and others about the risks and dangers of online scams and malware.
                • -
                -
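Expanding on the backup advice in the list above, here is a minimal Python sketch of a simple backup step: it zips a chosen folder onto an external drive. Both paths are placeholders for illustration only, not locations taken from any Garage Assistant documentation.

```python
# Minimal backup sketch: archive a folder onto an external drive.
# SOURCE and DEST_DIR are placeholders; point them at your own data
# and at whatever external disk, flash drive, or synced cloud folder you use.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path(r"C:\Users\Me\Documents\GarageData")  # folder to protect (example)
DEST_DIR = Path(r"E:\Backups")                      # external drive (example)

def make_backup(source: Path, dest_dir: Path) -> Path:
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive_base = dest_dir / f"{source.name}-{stamp}"
    # shutil.make_archive appends the .zip extension itself.
    return Path(shutil.make_archive(str(archive_base), "zip", root_dir=source))

if __name__ == "__main__":
    print("Backup written to", make_backup(SOURCE, DEST_DIR))
```

Running it regularly (for example via Task Scheduler) gives you dated archives you can restore from if ransomware or a bad download damages your files.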

                Conclusion

                -

                Garage Assistant GA3 is software that helps you manage your garage business. It allows you to create invoices, estimates, job cards, service reminders, and more. It also integrates with your accounting software and online payment systems. Garage Assistant GA3 is a powerful and reliable solution for your garage.

                -

                However, you should not download or use Garage Assistant GA3 crack from unknown sources. It is illegal, unsafe, unreliable, and unethical. Instead, you should buy a license or use a free trial from the official website. This way, you can enjoy the benefits of the software without any risks or regrets.

                -

                If you are looking for other software that can help you manage your garage business, you might want to consider these alternatives to Garage Assistant GA3: Auto Repair Bill, Workshop Software, and MechanicDesk. They offer similar features and functions for your garage.

                -

                If you want to protect your computer and data from scams and malware, you should follow these steps: install a reputable antivirus and anti-malware software, use a firewall and a VPN, backup your data regularly, and be careful and vigilant when browsing the internet.

                -

                We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to contact us.

                3cee63e6c2
                -
                -
                \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (Bajrangi Bhaijaan Hd 1080p !!HOT!! Full Movi).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (Bajrangi Bhaijaan Hd 1080p !!HOT!! Full Movi).md deleted file mode 100644 index 7917e79c8d80c6e3ea9a0cdcf593f9bee0f8763d..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (Bajrangi Bhaijaan Hd 1080p !!HOT!! Full Movi).md +++ /dev/null @@ -1,8 +0,0 @@ -

                HD Online Player (Bajrangi Bhaijaan Hd 1080p Full Movi)


                Download Zip 🆓 https://cinurl.com/2uEZbE



                - -Movie name: Bajrangi Bhaijaan; Director: Kabir Khan; Screenwriter: Kabir Khan (dialogues), Kausar Munir; Country: India; Producer: Sarath Chandra Dutt, Kabir Khan; DOP: Viju Manohar; Composer: Viju Manohar; Art Direction: Kabir Khan, Amar Bhatt; Editing: Viju Manohar; Costume Designer: Raj K. Mehta -Cast: Raj Kumar, Rati Agnihotri, Vikas Anand, Govinda, Tanikella Bharani, Govind Namdeo, Aditya Pancholi, Brahmaji, Anup Vigna, Atul Duttani, Arun Khanna, Viju Manohar, Ashok Kumar, Dinesh Kumar -Movie description: The movie "Bajrangi Bhaijaan" tells the story of how 8a78ff9644
                -
                -
                -

                diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ufs3 Sarasoft Driver.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ufs3 Sarasoft Driver.md deleted file mode 100644 index d8f78580cbe8331a66b237825a4394c34b10a67e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ufs3 Sarasoft Driver.md +++ /dev/null @@ -1,100 +0,0 @@ -
                -

                UFS3 SarasSoft Driver: What You Need to Know

                -

                If you are a mobile phone technician or a flasher, you may have heard of UFS3 SarasSoft Driver. This is a USB driver that allows you to connect your UFS3-Tornado box to your Windows computer and use it to flash or unlock various mobile phones. In this article, we will explain what UFS3 SarasSoft Driver is, how to install and use it, and where to find it.

                -

                ufs3 sarasoft driver


                DOWNLOADhttps://cinurl.com/2uEXco



                -

                What is UFS3 SarasSoft Driver?

                -

                UFS3 SarasSoft Driver is a USB driver that enables communication between your UFS3-Tornado box and your Windows computer. UFS3-Tornado box is a device that can flash or unlock mobile phones from different brands and models, such as Nokia, Samsung, LG, Sony Ericsson, Motorola, and more. UFS3-Tornado box is also known as Universal Flasher Software or HWK box.

                -

                UFS3 SarasSoft Driver is required to use UFS3-Tornado box on your Windows computer. Without it, your computer will not recognize your UFS3-Tornado box and you will not be able to perform any flashing or unlocking operations. UFS3 SarasSoft Driver is compatible with Windows XP, Vista, 7, 8, 8.1, 10, and 11.

                -

                How to install UFS3 SarasSoft Driver?

                -

                To install UFS3 SarasSoft Driver on your Windows computer, follow these steps (a scripted alternative is sketched after the list):

                -
                  -
                1. Download UFS3 SarasSoft Driver from a reliable source. You can find it on various websites or forums that provide device drivers or mobile phone tools. For example, you can download it from oemdrivers.com, which offers device drivers for various USB devices.
                2. -
                3. Extract the downloaded file using a file extractor program such as WinRAR or 7-Zip. You will get a folder containing the driver files.
                4. -
                5. Connect your UFS3-Tornado box to your Windows computer using a USB cable.
                6. -
                7. Open Device Manager on your Windows computer. You can do this by right-clicking on the Start menu and selecting Device Manager, or by typing Device Manager in the search box and clicking on the result.
                8. -
                9. Find your UFS3-Tornado box under the USB devices category. It may appear as Unknown Device or Other Device with a yellow exclamation mark.
                10. -
                11. Right-click on your UFS3-Tornado box and select Update Driver Software.
                12. -
                13. Select Browse my computer for driver software.
                14. -
                15. Select Let me pick from a list of device drivers on my computer.
                16. -
                17. Select Have Disk.
                18. -
                19. Browse to the folder where you extracted the driver files and select the file named ufs_usb.inf.
                20. -
                21. Click OK and then Next.
                22. -
                23. Wait for the installation process to complete.
                24. -
                25. Restart your Windows computer if prompted.
                26. -
                -
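If you prefer scripting the installation instead of clicking through Device Manager, the sketch below stages the INF with Windows' pnputil tool. This is only an illustration, not part of the SarasSoft package: it assumes a recent Windows release (the /add-driver switch needs Windows 10 or later), an elevated administrator prompt, and that the extracted folder contains the ufs_usb.inf file named in the manual steps above; the extraction path is a placeholder.

```python
# Hypothetical install helper: stage the UFS3 SarasSoft INF with pnputil.
# Run from an elevated (administrator) prompt on Windows 10 or later,
# where "pnputil /add-driver <inf> /install" is available.
import subprocess
from pathlib import Path

def install_ufs3_driver(extracted_folder: str) -> None:
    inf_path = Path(extracted_folder) / "ufs_usb.inf"  # file named in the manual steps
    if not inf_path.is_file():
        raise FileNotFoundError(f"Driver INF not found: {inf_path}")
    # Stage the driver package in the Windows driver store and install it.
    subprocess.run(["pnputil", "/add-driver", str(inf_path), "/install"], check=True)

if __name__ == "__main__":
    install_ufs3_driver(r"C:\Drivers\UFS3_SarasSoft")  # placeholder extraction path
```

Older Windows versions bundled a different pnputil syntax, so on those systems the manual Device Manager route above remains the safer option.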

                How to use UFS3 SarasSoft Driver?

                -

                To use UFS3 SarasSoft Driver on your Windows computer, you need to follow these steps:

                -
                  -
                1. Make sure you have installed UFS3 SarasSoft Driver correctly on your Windows computer.
                2. -
                3. Make sure you have connected your UFS3-Tornado box to your Windows computer using a USB cable.
                4. -
                5. Download and install the latest version of UFS Panel on your Windows computer. UFS Panel is a software that allows you to update and manage your UFS3-Tornado box. You can download it from allmobitools.com, which offers mobile phone tools and software.
                6. -
                7. Run UFS Panel on your Windows computer and click on Check Box.
                8. -
                9. If your UFS3-Tornado box is detected and working properly, you will see its serial number and firmware version on the screen.
                10. -
                11. If your UFS3-Tornado box needs an update, you will see a message saying Update Required. Click on Update Box and wait for the update process to complete.
                12. -
                13. If your UFS3-Tornado box has an error or problem, you will see an error code and message on the screen. You can check the meaning of the error code and message on gsmhosting.com, which is a forum for mobile phone technicians and flashers.
                14. -
                15. If everything is OK, you can proceed to flash or unlock mobile phones using your UFS3-Tornado box. You can find various tutorials and guides on how to flash or unlock different mobile phones using UFS3-Tornado box on A-Z Technology YouTube channel, which offers videos on mobile phone tools and software.
                16. -
                -

                Conclusion

                -

                UFS3 SarasSoft Driver is a USB driver that allows you to use your UFS3-Tornado box on your Windows computer. It enables you to flash or unlock various mobile phones from different brands and models. To use it, you need to download, install, and update it properly. You also need to download and install UFS Panel, which is a software that allows you to update and manage your UFS3-Tornado box. You can find various sources and resources for downloading and using UFS3 SarasSoft Driver online.

                -

                What are some FAQs about UFS3 SarasSoft Driver?

                -

                UFS3 SarasSoft Driver is a useful USB driver for UFS3-Tornado box users, but it may also raise some questions or doubts. Here are some frequently asked questions (FAQs) about UFS3 SarasSoft Driver and their answers:

                -

                -
                  -
                • Q: Where can I download UFS3 SarasSoft Driver?
                • -
                • A: You can download UFS3 SarasSoft Driver from various websites or forums that provide device drivers or mobile phone tools. For example, you can download it from oemdrivers.com, which offers device drivers for various USB devices. However, you should always check the reliability and safety of the source before downloading anything.
                • -
                • Q: How can I update UFS3 SarasSoft Driver?
                • -
                • A: You can update UFS3 SarasSoft Driver by downloading and installing the latest version of the driver from a reliable source. Alternatively, you can use UFS Panel, which is a software that allows you to update and manage your UFS3-Tornado box, and also updates your UFS3 SarasSoft Driver automatically.
                • -
                • Q: Why is my UFS3-Tornado box not working with UFS3 SarasSoft Driver?
                • -
                • A: There could be several reasons why your UFS3-Tornado box is not working with UFS3 SarasSoft Driver. Some of them are:
                • -
                    -
                  • Your UFS3-Tornado box is not connected properly to your Windows computer.
                  • -
                  • Your UFS3 SarasSoft Driver is not installed correctly on your Windows computer.
                  • -
                  • Your UFS3 SarasSoft Driver is outdated or incompatible with your Windows version.
                  • -
                  • Your UFS3-Tornado box is damaged or faulty.
                  • -
                  -
                • To fix these problems, you should try the following solutions:
                • -
                    -
                  • Check your USB cable and port and make sure they are working properly.
                  • -
                  • Reinstall or update your UFS3 SarasSoft Driver on your Windows computer.
                  • -
                  • Use UFS Panel to update and manage your UFS3-Tornado box and your UFS3 SarasSoft Driver.
                  • -
                  • Contact your UFS3-Tornado box seller or service center for repair or replacement.
                  • -
                  -
                • Q: Is UFS3 SarasSoft Driver compatible with other boxes or devices?
                • -
                • A: No, UFS3 SarasSoft Driver is only compatible with UFS3-Tornado box. It will not work with other boxes or devices that use different drivers or protocols.
                • -
                -

                Conclusion

                -

                UFS3 SarasSoft Driver is a USB driver that allows you to use your UFS3-Tornado box on your Windows computer. It enables you to flash or unlock various mobile phones from different brands and models. To use it, you need to download, install, and update it properly. You also need to download and install UFS Panel, which is a software that allows you to update and manage your UFS3-Tornado box. You can find various sources and resources for downloading and using UFS3 SarasSoft Driver online.

                -

                What are some reviews of UFS3 SarasSoft Driver?

                -

                UFS3 SarasSoft Driver is a USB driver that has been used by many UFS3-Tornado box users for flashing or unlocking mobile phones. Here are some reviews of UFS3 SarasSoft Driver from different users:

                -
                  -
                • "I have been using UFS3 SarasSoft Driver for a long time and it works perfectly with my UFS3-Tornado box. It is easy to install and update, and it supports many mobile phone models. I can flash or unlock any phone with ease and speed. UFS3 SarasSoft Driver is the best USB driver for UFS3-Tornado box." - Ahmed, a mobile phone technician from Egypt.
                • -
                • "UFS3 SarasSoft Driver is a reliable USB driver for UFS3-Tornado box. It allows me to connect my UFS3-Tornado box to my Windows computer and use it to flash or unlock mobile phones. It is compatible with Windows XP, Vista, 7, 8, 8.1, 10, and 11. It also works well with UFS Panel, which is a software that helps me to update and manage my UFS3-Tornado box." - Maria, a mobile phone flasher from Brazil.
                • -
                • "UFS3 SarasSoft Driver is a useful USB driver for UFS3-Tornado box. It enables me to use my UFS3-Tornado box on my Windows computer and flash or unlock mobile phones from different brands and models. It is easy to download and install, and it updates automatically with UFS Panel. However, sometimes it may not work properly or cause some errors or problems. In that case, I have to reinstall or update it manually or contact the support team for help." - John, a mobile phone technician from India.
                • -
                -

                Conclusion

                -

                UFS3 SarasSoft Driver is a USB driver that allows you to use your UFS3-Tornado box on your Windows computer. It enables you to flash or unlock various mobile phones from different brands and models. To use it, you need to download, install, and update it properly. You also need to download and install UFS Panel, which is a software that allows you to update and manage your UFS3-Tornado box. You can find various sources and resources for downloading and using UFS3 SarasSoft Driver online.

                -

                What are some features of UFS3 SarasSoft Driver?

                -

                UFS3 SarasSoft Driver is a USB driver that has some features that make it a good choice for UFS3-Tornado box users. Here are some of them:

                -
                  -
                • UFS3 SarasSoft Driver is free to download and use. You do not need to pay any fees or charges to use it.
                • -
                • UFS3 SarasSoft Driver is easy to download and install. You just need to follow some simple steps and instructions to download and install it on your Windows computer.
                • -
                • UFS3 SarasSoft Driver is compatible with various Windows versions. You can use it on Windows XP, Vista, 7, 8, 8.1, 10, and 11.
                • -
                • UFS3 SarasSoft Driver is fast and stable. You can use it to flash or unlock mobile phones quickly and smoothly, without any errors or problems.
                • -
                • UFS3 SarasSoft Driver is secure and safe. You can use it without any worries about viruses, malware, or spyware.
                • -
                -

                What are some drawbacks of UFS3 SarasSoft Driver?

                -

                UFS3 SarasSoft Driver is a USB driver that has some drawbacks that you should be aware of before using it. Here are some of them:

                -
                  -
                • UFS3 SarasSoft Driver is not updated regularly. You may not find the latest version of the driver or the latest features or improvements.
                • -
                • UFS3 SarasSoft Driver is not supported officially. You may not find any official website or support team for the driver or any warranty or guarantee for its performance or quality.
                • -
                • UFS3 SarasSoft Driver is not universal. You can only use it with UFS3-Tornado box and not with other boxes or devices that use different drivers or protocols.
                • -
                -

                Conclusion

                -

                UFS3 SarasSoft Driver is a USB driver that enables communication between your UFS3-Tornado box and your Windows computer. UFS3-Tornado box is a device that can flash or unlock mobile phones from different brands and models, such as Nokia, Samsung, LG, Sony Ericsson, Motorola, and more. UFS3 SarasSoft Driver is required to use UFS3-Tornado box on your Windows computer. Without it, your computer will not recognize your UFS3-Tornado box and you will not be able to perform any flashing or unlocking operations. UFS3 SarasSoft Driver is compatible with Windows XP, Vista, 7, 8, 8.1, 10, and 11.

                -

                To use UFS3 SarasSoft Driver, download it from a reliable source, extract the driver files, and connect your UFS3-Tornado box to your Windows computer with a USB cable. In Device Manager, find the box under the USB devices category, right-click it, and select Update Driver Software. Then choose Browse my computer for driver software, Let me pick from a list of device drivers on my computer, and Have Disk; browse to the folder where you extracted the driver files, select the file named ufs_usb.inf, click OK and then Next, wait for the installation to complete, and restart your computer if prompted.

                -

                To use UFS3 SarasSoft Driver effectively, you also need to download and install the latest version of UFS Panel, the software that updates and manages your UFS3-Tornado box. Run UFS Panel and click Check Box: if the box is detected and working properly, its serial number and firmware version appear on the screen. If the box needs an update, you will see an Update Required message; click Update Box and wait for the process to complete. If the box reports an error, an error code and message appear, and you can look up their meaning on gsmhosting.com, a forum for mobile phone technicians and flashers. If everything is OK, you can proceed to flash or unlock mobile phones; tutorials and guides for different phone models are available on the A-Z Technology YouTube channel, which offers videos on mobile phone tools and software.

                -

                UFS3 SarasSoft Driver is a useful USB driver for UFS3-Tornado box users, but it also has some features, benefits, challenges, and drawbacks that you should know before using it. You can find various sources and resources for downloading and using UFS3 SarasSoft Driver online.

                3cee63e6c2
                -
                -
                \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Usb All4one.rar LINK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Usb All4one.rar LINK.md deleted file mode 100644 index 39e868166b6b87d576257d9392d9a2c111ea8b20..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Usb All4one.rar LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

                usb all4one.rar


                DOWNLOADhttps://cinurl.com/2uEXT2



                -
                -format usb flash drive bootable iso file | 2020/07/21 08:01 AM: Heya i am for the first time ... 0 build 60 guevara edition rar | 2020/03/31 11:00 AM: Hi, i read your blog ... were Mascalzone Latino Audi Team (Italy) and All4One (France/Germany). 4d29de3e1b
                -
                -
                -

                diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/2011 Alfa Romeo Giulietta Elearn BEST.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/2011 Alfa Romeo Giulietta Elearn BEST.md deleted file mode 100644 index 7313322d6629a0987083fcafafa5c3bf8771ce69..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/2011 Alfa Romeo Giulietta Elearn BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

                2011 Alfa Romeo Giulietta Elearn


                Download >>>>> https://urluss.com/2uCHpU



                - -rahmanoff писал(а):. Существует eLearn. Там по-идее есть вся информация. eLearn Alfa Romeo 156 :-) https://allautoinfo.org ... 4d29de3e1b
                -
                -
                -

                diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Abf Outlook Backup 3 Keygen Generator.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Abf Outlook Backup 3 Keygen Generator.md deleted file mode 100644 index 2bb72188d8940a99ef1515f689a0b773894580a9..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Abf Outlook Backup 3 Keygen Generator.md +++ /dev/null @@ -1,6 +0,0 @@ -

                Abf Outlook Backup 3 Keygen Generator


                Download ✯✯✯ https://urluss.com/2uCHat



                - -Backup data file (Iomega Backup) From Whatis-Extensions ... http://www.tldp.org/LDP/Linux-Dictionary/html/index.html .ABF ... Windows 3.x Help annotation From Whatis-Extensions ... dBase Application Generator Object From Whatis-Extensions ... Microsoft Outlook Express file From Whatis-Extensions ... 4d29de3e1b
                -
                -
                -

                diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/__init__.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/eval/benchmark/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py deleted file mode 100644 index edb4c174c51e34c103737ba39bfc48bf831e561d..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/dnl_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='DNLHead', - in_channels=2048, - in_index=3, - channels=512, - dropout_ratio=0.1, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/optimizer/builder.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/optimizer/builder.py deleted file mode 100644 index f9234eed8f1f186d9d8dfda34562157ee39bdb3a..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/optimizer/builder.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import inspect - -import torch - -from ...utils import Registry, build_from_cfg - -OPTIMIZERS = Registry('optimizer') -OPTIMIZER_BUILDERS = Registry('optimizer builder') - - -def register_torch_optimizers(): - torch_optimizers = [] - for module_name in dir(torch.optim): - if module_name.startswith('__'): - continue - _optim = getattr(torch.optim, module_name) - if inspect.isclass(_optim) and issubclass(_optim, - torch.optim.Optimizer): - OPTIMIZERS.register_module()(_optim) - torch_optimizers.append(module_name) - return torch_optimizers - - -TORCH_OPTIMIZERS = register_torch_optimizers() - - -def build_optimizer_constructor(cfg): - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - - -def build_optimizer(model, cfg): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/spaces/t13718236382/bingoGPT4/tailwind.config.js b/spaces/t13718236382/bingoGPT4/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/t13718236382/bingoGPT4/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/tabeina/bingo1/src/lib/isomorphic/node.ts b/spaces/tabeina/bingo1/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/tabeina/bingo1/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const 
ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/tcvieira/bm25-information-retrieval/app.py b/spaces/tcvieira/bm25-information-retrieval/app.py deleted file mode 100644 index 7363c44d721e27788ec9fb77776f712135b9e99d..0000000000000000000000000000000000000000 --- a/spaces/tcvieira/bm25-information-retrieval/app.py +++ /dev/null @@ -1,192 +0,0 @@ -import os -import subprocess -import urllib -import pickle -import time -import streamlit as st -from rank_bm25 import BM25Okapi, BM25Plus -from bm25Simple import BM25Simple - -path = os.path.dirname(__file__) -print(path) -print(subprocess.run(['ls -la'], shell=True)) -print() -print(subprocess.run(['ls -la models/'], shell=True)) -print() -print(subprocess.run(['ls -la content/'], shell=True)) -# subprocess.run(['pip install --upgrade streamlit'], shell=True) - - -def main(): - - st.set_page_config( - # Can be "centered" or "wide". In the future also "dashboard", etc. - layout="wide", - initial_sidebar_state="auto", # Can be "auto", "expanded", "collapsed" - # String or None. Strings get appended with "• Streamlit". - page_title="BM25 based Information Retrieval System", - page_icon="🔎", # String, anything supported by st.image, or None. - ) - - # LAYOUT - hide_menu_style = """ - - """ - st.markdown(hide_menu_style, unsafe_allow_html=True) - # padding = 2 - # st.markdown(f""" """, unsafe_allow_html=True) - - # horizontal radios - st.write( - '', unsafe_allow_html=True) - - # load documents - corpus = load_docs() - - # load models - bm25_simple, bm25_okapi, bm25_plus = load_models() - - # UI - # st.header(f':mag_right: {algo}') - st.header(':mag_right: BM25 based Information Retrieval System') - - st.markdown(''' - - github repository - git repository - ''', unsafe_allow_html=True) - - st.markdown('---') - - with st.form("search_form"): - query = st.text_input( - 'Query', 'How much do information retrieval and dissemination systems, as well as automated libraries, cost? Are they worth it to the researcher and to industry?') - st.caption('no text preprocessing') - - with st.expander("Query Examples"): - st.markdown(''' - - What systems incorporate multiprogramming or remote stations in information retrieval? What will be the extent of their use in the future? - - What problems and concerns are there in making up descriptive titles? What difficulties are involved in automatically retrieving articles from approximate titles? - - What is information science? Give definitions where possible. 
- - Some Considerations Relating to the Cost-Effectiveness of Online Services in Libraries - - A Fast Procedure for the Calculation of Similarity Coefficients in Automatic Classification - ''') - - submitted = st.form_submit_button('Search') - - if submitted: - if query: - st.markdown('---') - - col1, col2, col3 = st.columns(3) - - with col1: - st.subheader('BM25 Simple') - - bm25_simple_time, most_relevant_documents = search_docs( - bm25_simple, query, corpus) - st.caption(f'time: {bm25_simple_time}') - print_docs(most_relevant_documents) - - with col2: - st.subheader('BM25OKapi') - - bm25_okapi_time, most_relevant_documents = search_docs( - bm25_okapi, query, corpus) - st.caption(f'time: {bm25_okapi_time}') - print_docs(most_relevant_documents) - - with col3: - st.subheader('BM25+') - - bm25_plus_time, most_relevant_documents = search_docs( - bm25_plus, query, corpus) - st.caption(f'time: {bm25_plus_time}') - print_docs(most_relevant_documents) - else: - st.text('add some query') - - -def search_docs(model, query, corpus): - tokenized_query = query.split(" ") - - start = time.time() - most_relevant_documents = model.get_top_n( - tokenized_query, corpus, 20) - elapsed = (time.time() - start) - return elapsed, most_relevant_documents[:20] - - -def print_docs(docs): - for index, doc in enumerate(docs): - st.markdown(f''' -
                - {index+1}: {doc} -
                -
                - ''', unsafe_allow_html=True) - - -@st.cache(ttl=3600, allow_output_mutation=True, show_spinner=True, max_entries=2) -def load_docs(): - # Processing DOCUMENTS - doc_set = {} - doc_id = "" - doc_text = "" - documents_file, _ = urllib.request.urlretrieve( - 'https://raw.githubusercontent.com/tcvieira/bm25-exercise-report/main/content/CISI.ALL', 'CISI.ALL.downloaded') - with open(documents_file) as f: - lines = "" - for l in f.readlines(): - lines += "\n" + l.strip() if l.startswith(".") else " " + l.strip() - lines = lines.lstrip("\n").split("\n") - for l in lines: - if l.startswith(".I"): - doc_id = int(l.split(" ")[1].strip())-1 - elif l.startswith(".X"): - doc_set[doc_id] = doc_text.lstrip(" ") - doc_id = "" - doc_text = "" - else: - # The first 3 characters of a line can be ignored. - doc_text += l.strip()[3:] + " " - return list(doc_set.values()) - - -@st.cache(ttl=3600, allow_output_mutation=True, show_spinner=True, max_entries=2) -def load_models(): - - bm25_simple_file, _ = urllib.request.urlretrieve( - 'https://github.com/tcvieira/bm25-exercise-report/blob/main/models/BM25_simple.pkl?raw=true', 'bm25_simple_file.downloaded') - with open(bm25_simple_file, 'rb') as file: - bm25_simple: BM25Simple = pickle.load(file) - print(bm25_simple.corpus_size) - - bm25_okapi_file, _ = urllib.request.urlretrieve( - 'https://github.com/tcvieira/bm25-exercise-report/blob/main/models/BM25Okapi.pkl?raw=true', 'bm25_okapi_file.downloaded') - with open(bm25_okapi_file, 'rb') as file: - bm25_okapi: BM25Okapi = pickle.load(file) - print(bm25_okapi.corpus_size) - - bm25_plus_file, _ = urllib.request.urlretrieve( - 'https://github.com/tcvieira/bm25-exercise-report/blob/main/models/BM25Plus.pkl?raw=true', 'bm25_plus_file.downloaded') - with open(bm25_plus_file, 'rb') as file: - bm25_plus: BM25Plus = pickle.load(file) - print(bm25_plus.corpus_size) - - print(subprocess.run(['ls -la'], shell=True)) - # st.success("BM25 models loaded!", icon='✅') - return bm25_simple, bm25_okapi, bm25_plus - - -if __name__ == "__main__": - main() diff --git a/spaces/terfces0erbo/CollegeProjectV2/Darkcomet Rat Full Version V5.4.1 Legacy.md b/spaces/terfces0erbo/CollegeProjectV2/Darkcomet Rat Full Version V5.4.1 Legacy.md deleted file mode 100644 index de413b039555a91df044e51fa18c86e376168521..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Darkcomet Rat Full Version V5.4.1 Legacy.md +++ /dev/null @@ -1,6 +0,0 @@ -

                Darkcomet Rat Full Version V5.4.1 Legacy


                DOWNLOAD 🗹 https://bytlly.com/2uGkkt



                -
                -Popular Posts; ++ darkcomet download; Darkcomet Rat Crack 5.4.1 Portable Latest 2020 Free Download ... Windows Users' choice Dark comet rat legacy 5. 1fdad05405
                -
                -
                -

                diff --git a/spaces/test12356/SUI-svc-3.0/commons.py b/spaces/test12356/SUI-svc-3.0/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/test12356/SUI-svc-3.0/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) 
- signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Default Localize Mp.cfg Indir Call Of Duty 2.md b/spaces/tialenAdioni/chat-gpt-api/logs/Default Localize Mp.cfg Indir Call Of Duty 2.md deleted file mode 100644 index c58d65c0ab3b24d1bb0cb589b4f5f5f879b6a8a5..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Default Localize Mp.cfg Indir Call Of Duty 2.md +++ /dev/null @@ -1,36 +0,0 @@ -
                -

                How to fix the default_localize_mp.cfg error in Call of Duty 2

                -

                If you are trying to play Call of Duty 2 multiplayer mode and you encounter the error message "Couldn't load default_localize_mp.cfg. Make sure Call of Duty is run from the correct folder.", you are not alone. Many players have reported this issue on Steam forums and other online platforms. Fortunately, there are some possible solutions that you can try to fix this problem and enjoy the game.

                -

                default localize mp.cfg indir call of duty 2


                DOWNLOAD ->->->-> https://urlcod.com/2uK1pM



                -

                Possible solutions

                -
                  -
                1. Make sure you are running the game from the same hard drive where Steam is installed. Some players have reported that running the game from an external hard drive can cause this error[^1^]. If you have installed the game on a different drive, try moving it to the same drive as Steam or reinstalling it there.
                2. -
                3. Make sure you have a microphone connected to your PC. Some players have suggested that having a microphone plugged in can prevent this error from occurring[^1^] [^2^]. If you don't have a microphone, try connecting one or using a headset with a built-in mic.
                4. -
                5. Replace your config_mp.cfg file with a new one. The config_mp.cfg file contains your game settings and preferences for multiplayer mode. Sometimes, this file can get corrupted or outdated and cause this error. You can download a new config_mp.cfg file from various online sources or use one from another player[^3^]. To replace your config_mp.cfg file, follow these steps:
                6. -
                    -
                  • Download your new config_mp.cfg file.
                  • -
    • Go to the players folder inside your game installation directory: C:\Program Files\Activision\Call of Duty 2\main\players\YOUR PLAYERNAME\.
    
                  • -
    • Copy the downloaded config_mp.cfg file into your players folder, overwriting the existing one (a small script that automates this step is sketched after this list).
    
                  • -
                  • You should now be able to use your new downloaded config when you start the game.
                  • -
                  -
                -
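    If you would rather script step 5 than copy the file by hand, the following is a minimal Python sketch of that replacement. The install path, player name, and download location are placeholders taken from the steps above, not verified paths, and the script keeps a backup of the existing config before overwriting it.

    ```python
    import shutil
    from pathlib import Path

    # Placeholder locations -- adjust the player name, install path, and download path to your setup.
    players_dir = Path(r"C:\Program Files\Activision\Call of Duty 2\main\players\YOUR_PLAYERNAME")
    new_config = Path(r"C:\Users\you\Downloads\config_mp.cfg")  # the replacement file you downloaded
    target = players_dir / "config_mp.cfg"

    # Keep a backup of the current config before overwriting it.
    if target.exists():
        shutil.copy2(target, target.with_name("config_mp.cfg.bak"))

    # Copy the downloaded config over the old one.
    shutil.copy2(new_config, target)
    print(f"Replaced {target}")
    ```
    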

                Conclusion

                -

                The default_localize_mp.cfg error in Call of Duty 2 can be frustrating, but it is not impossible to fix. By following the possible solutions above, you may be able to resolve this issue and enjoy the multiplayer mode of this classic game. If none of these solutions work for you, you may want to contact Steam support or Activision customer service for further assistance.

                What is default_localize_mp.cfg?

                -

                The default_localize_mp.cfg file is a configuration file that contains the language settings for the multiplayer mode of Call of Duty 2. It tells the game which language files to load and which fonts to use for displaying text. The file is located in the main folder of the game installation, along with other configuration files such as config_mp.cfg and default_mp.cfg.

                -

                Why does this error occur?

                -

                The default_localize_mp.cfg error can occur for various reasons, but the most common ones are:

                -
                  -
                • The game cannot find the default_localize_mp.cfg file in the main folder. This can happen if the file is missing, renamed, moved, or corrupted.
                • -
                • The game cannot load the default_localize_mp.cfg file due to insufficient permissions or compatibility issues. This can happen if the game is not run as administrator, if the file is read-only, or if the game is not compatible with your operating system.
                • -
                • The game cannot load the language files specified by the default_localize_mp.cfg file. This can happen if the language files are missing, corrupted, or incompatible with your game version.
                • -
                -

                How to prevent this error?

                -

                To prevent this error from happening again, you can take some preventive measures such as:

                -

                -
                  -
                • Always run the game as administrator. This will ensure that the game has enough permissions to access and modify the configuration files.
                • -
                • Always keep your game updated. This will ensure that your game version matches the language files and that any bugs or glitches are fixed.
                • -
                • Always backup your configuration files. This will allow you to restore them in case they get corrupted or overwritten by a new update or mod.
                • -

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Autodesk Homestyler Full Crack A Review of the Features and Benefits.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Autodesk Homestyler Full Crack A Review of the Features and Benefits.md deleted file mode 100644 index 8f106395224c144a2d3fd32f736ea2a9bd5e7974..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Autodesk Homestyler Full Crack A Review of the Features and Benefits.md +++ /dev/null @@ -1,139 +0,0 @@ - -

                Download Autodesk Homestyler Full Crack: A Complete Guide

                -

                If you are looking for a powerful and easy-to-use online home design software, you might have heard of Autodesk Homestyler. This software allows you to create stunning 3D floor plans, interior designs, and renderings for your home or apartment. But how can you get the full version of this software without paying a hefty price? In this article, we will show you how to download Autodesk Homestyler full crack, what are the pros and cons of doing so, and what are some alternatives you can try.

                -

                What is Autodesk Homestyler?

                -

                A brief introduction to the software

                -

                Autodesk Homestyler is a web-based home design software that was launched in 2010 by Autodesk, a leading company in 3D design, engineering, and entertainment software. The software is designed for both professionals and amateurs who want to create realistic and beautiful home designs online. You can access the software from any web browser, anywhere, anytime.

                -

                download autodesk homestyler full crack


                DOWNLOAD ===> https://urlcod.com/2uK8GQ



                -

                The main features and benefits of using it

                -

                Some of the main features and benefits of using Autodesk Homestyler are:

                -
                  -
                • You can draw your own floor plan in 2D and the software will automatically build the 3D rooms for you.
                • -
                • You can decorate your rooms with thousands of furniture, materials, colors, and accessories from various brands and styles.
                • -
                • You can customize your own lighting sources, sunlight, and environment to create different moods and effects.
                • -
                • You can export your designs as images, videos, panoramas, or renderings with high quality.
                • -
                • You can share your designs with others online or collaborate with your team members on projects.
                • -
                -

                With these features and benefits, you can unleash your creativity and design your dream home in 3D with ease and fun.

                -

                Why do you need to download Autodesk Homestyler full crack?

                -

                The limitations of the free version

                -

                Although Autodesk Homestyler is free to use online, it has some limitations that might affect your design experience. Some of these limitations are:

                -
                  -
                • You can only create up to 10 projects per account.
                • -
                • You can only use up to 5 GB of storage space per account.
                • -
                • You can only export your designs as low-resolution images or videos.
                • -
                • You cannot access some advanced features such as AI decoration, material editor, interior finishes, or export drawings.
                • -
                -

                These limitations might not be a big deal if you are just playing around with the software or creating simple designs. But if you are serious about your home design projects or want to use them for professional purposes, you might want to upgrade to the full version.

                -

                The advantages of the full version

                -

                The full version of Autodesk Homestyler offers you more features and benefits that can enhance your design experience and quality. Some of these advantages are:

                -

    

                -
                  -
                • You can create unlimited projects per account.
                • -
                • You can use unlimited storage space per account.
                • -
                • You can export your designs as high-resolution images, videos, panoramas, or renderings with HD quality.
                • -
                • You can access all the advanced features such as AI decoration, material editor, interior finishes, or export drawings.
                • -
                • You can enjoy more customer support and technical assistance from Autodesk.
                • -
                -

                With these advantages, you can create more complex and realistic home designs online with more flexibility and convenience.

                -

                The risks and challenges of using a cracked version

                -

                However, getting the full version of Autodesk Homestyler is not cheap. According to the official website, the full version costs $29.99 per month or $299.99 per year. That's why some people might be tempted to download Autodesk Homestyler full crack from some unofficial sources online. But is it worth it?

                -

                Downloading Autodesk Homestyler full crack might seem like a good idea at first glance, but it comes with many risks and challenges that you should be aware of. Some of these risks and challenges are:

                -
                  -
                • You might download a fake or corrupted file that does not work properly or damages your computer.
                • -
                • You might download a file that contains viruses, malware, spyware, or ransomware that infects your computer or steals your personal information.
                • -
                • You might violate the intellectual property rights of Autodesk and face legal consequences such as fines or lawsuits.
                • -
                • You might lose access to updates, bug fixes, new features, or customer support from Autodesk.
                • -
                • You might compromise the quality and security of your home design projects online.
                • -
                -

                These risks and challenges might outweigh the benefits of downloading Autodesk Homestyler full crack. Therefore, we do not recommend doing so unless you are willing to take these risks and challenges.

                -

                How to download Autodesk Homestyler full crack safely and easily?

                -

                The steps to follow

                -

                If you still want to download Autodesk Homestyler full crack despite the risks and challenges involved, here are some steps you can follow:

                -
                  -
                1. Search for a reliable source online that offers Autodesk Homestyler full crack for free. You can use keywords such as "Autodesk Homestyler full crack", "Autodesk Homestyler crack download", "Autodesk Homestyler cracked version", etc.
                2. -
                3. Choose a source that has positive reviews, ratings, comments, or feedback from other users who have downloaded it before. Avoid sources that have negative reviews, ratings, comments, or feedback from other users who have encountered problems or issues with it.
                4. -
                5. Download the file from the source using a secure connection and a trusted antivirus software. Scan the file for any viruses, malware, spyware, or ransomware before opening it.
                6. -
                7. Install the file on your computer following the instructions provided by the source. Make sure you have enough space on your hard drive and meet the minimum system requirements for running the software.
                8. -
                9. Launch the software on your computer and enjoy using it for free.
                10. -
                -

                The precautions to take

                -

                However, even if you follow these steps carefully, there is no guarantee that you will be able to download Autodesk Homestyler full crack safely and easily. Therefore, we suggest taking some precautions before doing so:

                -
                  -
                • Backup your important files and data on your computer in case something goes wrong during or after downloading or installing the file.
                • -
                • Create a restore point on your computer in case you need to undo any changes made by the file on your system settings or registry.
                • -
                • Use a VPN service or a proxy server to hide your IP address and location when downloading or using the file online.
                • -
                • Use a disposable email address when registering or logging in to the software online.
                • -
                -

                The alternatives to consider

                -

                Finally, we recommend considering some alternatives before downloading Autodesk Homestyler full crack. Some of these alternatives are:

                -
                  -
                • Use the free version of Autodesk Homestyler online without downloading anything on your computer. You can still create amazing home designs online with limited features and benefits.
                • -
                • Use other free online home design software that offer similar features and benefits as Autodesk Homestyler without requiring any downloads or cracks. Some examples are Floorplanner.com , Planner5D.com , RoomSketcher.com , etc.
                • -
                • Use other paid online home design software that offer more features and benefits than Autodesk Homestyler at a lower price or with a free trial period. Some examples are SketchUp.com , Roomstyler.com , HomeByMe.com , etc.
                • -
                -

                Conclusion

                -

                A summary of the main points

                -

                In conclusion,

                -
                  -
                • Autodesk Homestyler is a web-based home design software that allows you to create stunning 3D floor plans, interior designs, and renderings for your home or apartment.
                • -
                • The full version of Autodesk Homestyler offers more features and benefits than the free version, but it costs $29.99 per month or $299.99 per year.
                • -
                • Downloading Autodesk Homestyler full crack might seem like a good idea to get the full version for free, but it comes with many risks and challenges such as viruses, malware, legal issues, or quality issues.
                • -
                • If you still want to download Autodesk Homestyler full crack, you can follow some steps and take some precautions to do it safely and easily.
                • -
                • However, we recommend considering some alternatives such as using the free version online, using other free or paid online home design software, or buying the full version legally.
                • -
                -

                A call to action for the readers

                -

                We hope this article has helped you understand how to download Autodesk Homestyler full crack and what are the pros and cons of doing so. If you have any questions or comments, please feel free to leave them below. If you liked this article, please share it with your friends or family who might be interested in home design software. And if you are ready to create your own home design online, why not give Autodesk Homestyler a try? You can start designing for free by clicking here.

                -

                FAQs

                -

                What is Autodesk Homestyler?

                -

                Autodesk Homestyler is a web-based home design software that allows you to create stunning 3D floor plans, interior designs, and renderings for your home or apartment.

                -

                How much does Autodesk Homestyler cost?

                -

                The free version of Autodesk Homestyler is available online without any downloads or registrations. The full version of Autodesk Homestyler costs $29.99 per month or $299.99 per year.

                -

                How can I download Autodesk Homestyler full crack?

                -

                You can download Autodesk Homestyler full crack from some unofficial sources online that offer it for free. However, this comes with many risks and challenges such as viruses, malware, legal issues, or quality issues.

                -

                Is it safe to download Autodesk Homestyler full crack?

                -

                No, it is not safe to download Autodesk Homestyler full crack. You might download a fake or corrupted file that does not work properly or damages your computer. You might also violate the intellectual property rights of Autodesk and face legal consequences.

                -

                What are some alternatives to downloading Autodesk Homestyler full crack?

                -

                Some alternatives to downloading Autodesk Homestyler full crack are using the free version online, using other free or paid online home design software, or buying the full version legally.

                -

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Hindi Movie Horn Ok Pleassss ((TOP)).md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Hindi Movie Horn Ok Pleassss ((TOP)).md deleted file mode 100644 index d2391f95011cd1a71aacb506cde138d8c3f4f1f3..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Hindi Movie Horn Ok Pleassss ((TOP)).md +++ /dev/null @@ -1,20 +0,0 @@ - -Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Download Hindi Movie Horn Ok Pleassss": - -

                How to Download Hindi Movie Horn Ok Pleassss (2009)

                -

                If you are looking for a comedy movie with a twist, you might want to check out Horn Ok Pleassss (2009), a Bollywood film starring Nana Patekar, Rimi Sen, Muzamil Ibrahim and Satish Shah. The movie is about a truck driver who helps a photographer get his love interest, who happens to be the twin sister of the truck driver's wife. However, the truck driver is unaware of this fact and ends up creating a lot of confusion and chaos. The movie also features musical segments and a special appearance by Rakhi Sawant.

                -

                Horn Ok Pleassss was directed by Rakesh Sarang and produced by Sunrise Pictures, Sarang Films and Axis Movies. The movie was supposed to release in 2009, but it got delayed due to various reasons and remains unreleased till date[^1^]. However, you can still watch the movie online or download it from various sources. Here are some tips on how to download Hindi movie Horn Ok Pleassss (2009).

                -

                Download Hindi Movie Horn Ok Pleassss


                Download Zip 🗹 https://urlcod.com/2uK5FT



                -

                Step 1: Find a reliable website that offers the movie

                -

                There are many websites that claim to offer free or paid downloads of Horn Ok Pleassss, but not all of them are trustworthy or legal. Some of them may contain viruses, malware, spyware or other harmful software that can damage your device or compromise your privacy. Some of them may also have poor quality or incomplete versions of the movie. Therefore, you need to be careful and do some research before choosing a website to download the movie from.

                -

                One way to find a reliable website is to look for reviews, ratings, feedback or testimonials from other users who have downloaded the movie from that website. You can also check the domain name, security certificate, privacy policy and terms and conditions of the website to see if it is legitimate and safe. You can also use tools like Google Safe Browsing or Norton Safe Web to scan the website for any potential threats.

                -

                Another way to find a reliable website is to use a reputable search engine like Google or Bing and type in keywords like "Download Hindi Movie Horn Ok Pleassss" or "Horn Ok Pleassss Full Movie 2008". You can then filter the results by date, relevance, popularity or quality. You can also look for websites that have official logos, badges or seals from trusted sources like IMDb, YouTube, Netflix or Amazon Prime Video.

                -

                Step 2: Choose a suitable format and quality for the movie

                -

                Once you have found a reliable website that offers the movie, you need to choose a suitable format and quality for the movie. The format refers to the file type or extension of the movie, such as MP4, AVI, MKV, WMV or MOV. The quality refers to the resolution or clarity of the movie, such as 480p, 720p, 1080p or 4K. The format and quality of the movie will affect the size, speed and compatibility of the download.

                -

                You should choose a format and quality that matches your device's specifications and preferences. For example, if you have a smartphone or tablet with limited storage space and internet speed, you may want to choose a smaller file size and lower resolution. If you have a laptop or desktop with ample storage space and internet speed, you may want to choose a larger file size and higher resolution. If you have a smart TV or projector with high-definition display and sound system, you may want to choose the best possible format and quality for an immersive viewing experience.

                -

                -

                Step 3: Download the movie using a suitable method

                -

                After choosing a suitable format and quality for the movie, you need to download the movie using a suitable method. There are different methods of downloading movies from websites, such as direct download links, torrent files, streaming services or third-party software. Each method has its own advantages and disadvantages in terms of speed, convenience, legality and safety.

                -

                A direct download link is a simple and fast way of downloading movies from websites. It is usually indicated by an icon or button that says "Download", "Save", "Get" or something similar. You just need to click on it and follow the instructions to save the movie file on your device

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Free Download Champak Comics In Hindi Pdf.md b/spaces/tialenAdioni/chat-gpt-api/logs/Free Download Champak Comics In Hindi Pdf.md deleted file mode 100644 index d0dc29cea37d74228e09fe8e22ca4a821b2a9f7c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Free Download Champak Comics In Hindi Pdf.md +++ /dev/null @@ -1,35 +0,0 @@ - -Here is a possible title and article with HTML formatting for the keyword "Free Download Champak Comics In Hindi Pdf": - -``` -

                How to Download Champak Comics in Hindi PDF for Free

                -

                Champak is one of the most popular children's magazines in India. It features stories, comics, puzzles, jokes, and activities that entertain and educate young readers. Champak comics are available in various languages, including Hindi.

                -

                If you want to download Champak comics in Hindi PDF for free, you can follow these steps:

                -

                Free Download Champak Comics In Hindi Pdf


                Download ►►►►► https://urlcod.com/2uK8JO



                -
                  -
                1. Visit the Internet Archive website at https://archive.org/.
                2. -
                3. In the search box, type "Champak Hindi" and click on the search icon.
                4. -
                5. You will see a list of Champak magazines in Hindi that are available for download. You can sort them by date, title, creator, or views.
                6. -
                7. Click on the magazine that you want to download. You will see a preview of the magazine and some details about it.
                8. -
                9. On the right side of the page, you will see a download options section. You can choose to download the magazine in PDF, EPUB, Kindle, or other formats.
                10. -
                11. Click on the PDF option and the download will start automatically.
                12. -
                13. Enjoy reading your Champak comic in Hindi PDF for free!
                14. -
                -

                Note: You can also visit the official website of Champak at https://www.champak.in/ to read online comics, stories, and articles in various languages. However, you may not be able to download them as PDF files.

                -```Here is a possible continuation of the article: - -``` -

                Champak comics are a great way to enjoy reading and learning in Hindi. They feature various characters, such as Cheeku the rabbit, Meeku the mouse, Damru the donkey, and many more. They go on adventures, solve problems, and have fun together. Champak comics also teach moral values, scientific facts, and general knowledge to the readers.

                -

                Some of the benefits of reading Champak comics in Hindi are:

                -
                  -
                • They improve your Hindi vocabulary, grammar, and comprehension skills.
                • -
                • They stimulate your imagination and creativity.
                • -
                • They enhance your critical thinking and problem-solving abilities.
                • -
                • They foster your curiosity and interest in various topics.
                • -
                • They provide you with entertainment and relaxation.
                • -
                -

                So, what are you waiting for? Download Champak comics in Hindi PDF for free and start reading them today!

                -```

                -

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tintoretor/WealthSentiment/README.md b/spaces/tintoretor/WealthSentiment/README.md deleted file mode 100644 index 1014161fcbac9510fe1052abe16740ea54e6301c..0000000000000000000000000000000000000000 --- a/spaces/tintoretor/WealthSentiment/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WealthSentiment -emoji: 📚 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Crumplepop Fisheye Fixer For GoPro MAC Cracked ((FREE)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Crumplepop Fisheye Fixer For GoPro MAC Cracked ((FREE)).md deleted file mode 100644 index 46abf4700fe8819500bbe325884a9409cd2bf420..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Crumplepop Fisheye Fixer For GoPro MAC Cracked ((FREE)).md +++ /dev/null @@ -1,38 +0,0 @@ -
                -``` -

                How to Remove Fisheye Distortion from GoPro Videos on Mac

                -

    If you own a GoPro camera, you probably love the wide-angle lens that captures stunning scenes and action shots. But sometimes, you may not want the fisheye effect that comes with it. Fisheye distortion can make your footage look warped and unnatural, especially near the edges of the frame.
    

                -

                Crumplepop Fisheye Fixer For GoPro MAC Cracked


                Download Zip ☆☆☆ https://urlcod.com/2uHwGW



                -

                Fortunately, there is a way to fix this problem on your Mac computer. You don't need to buy expensive software or spend hours editing your videos. You just need a simple plugin called Crumplepop Fisheye Fixer for GoPro.

                -

                What is Crumplepop Fisheye Fixer for GoPro?

                -

                Crumplepop Fisheye Fixer for GoPro is a plugin that works with Final Cut Pro X, the popular video editing software for Mac. It allows you to quickly and easily remove fisheye distortion from your GoPro videos, without losing quality or resolution.

                -

                Crumplepop Fisheye Fixer for GoPro uses a smart algorithm that analyzes the lens characteristics of your GoPro model and applies the appropriate correction to your footage. You can adjust the amount of correction and preview the results in real time. You can also choose to keep some of the fisheye effect if you prefer.

                -
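    For readers curious about what this kind of correction involves, the sketch below shows a generic lens-undistortion pass in Python with OpenCV. This is not Crumplepop's algorithm: the camera matrix and distortion coefficients are made-up placeholder values, whereas a real workflow would use calibration data measured for your specific GoPro model and lens mode.

    ```python
    import cv2
    import numpy as np

    # Generic barrel-distortion correction, for illustration only.
    # K (camera matrix) and dist (distortion coefficients) are rough guesses, not real
    # GoPro calibration data -- you would normally measure them with cv2.calibrateCamera.
    img = cv2.imread("gopro_frame.jpg")          # a frame exported from your footage
    h, w = img.shape[:2]

    K = np.array([[0.8 * w, 0.0, w / 2],
                  [0.0, 0.8 * w, h / 2],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.35, 0.12, 0.0, 0.0])     # strong barrel distortion, guessed

    undistorted = cv2.undistort(img, K, dist)
    cv2.imwrite("gopro_frame_undistorted.jpg", undistorted)
    ```
    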

                How to Use Crumplepop Fisheye Fixer for GoPro?

                -

                Using Crumplepop Fisheye Fixer for GoPro is very easy. Here are the steps you need to follow:

                -
                  -
                1. Download and install Crumplepop Fisheye Fixer for GoPro from the official website. You can get a free trial version or buy the full version for $49.
                2. -
                3. Launch Final Cut Pro X and import your GoPro videos into your project.
                4. -
                5. Select the video clip you want to fix and drag it to the timeline.
                6. -
                7. In the effects browser, find Crumplepop Fisheye Fixer for GoPro and drag it onto your video clip.
                8. -
                9. In the inspector window, select your GoPro model from the drop-down menu. You can also adjust the amount of correction using the slider.
                10. -
                11. Preview your video and see how it looks without fisheye distortion. You can toggle the effect on and off using the checkbox.
                12. -
                13. Repeat the process for any other video clips you want to fix.
                14. -
                15. Export your project as usual and enjoy your fisheye-free videos.
                16. -
                -

                Why Choose Crumplepop Fisheye Fixer for GoPro?

                -

                Crumplepop Fisheye Fixer for GoPro is one of the best solutions for removing fisheye distortion from your GoPro videos on Mac. Here are some of the reasons why you should choose it:

                -
                  -
                • It works with any GoPro model, from Hero 3 to Hero 9.
                • -
                • It preserves the original quality and resolution of your videos.
                • -
                • It is fast and easy to use, with no complicated settings or parameters.
                • -
                • It integrates seamlessly with Final Cut Pro X, so you don't need to switch between different software.
                • -
                • It gives you full control over the amount of correction and lets you keep some of the fisheye effect if you want.
                • -
                • It has a reasonable price and offers a free trial version.
                • -
                -

                Conclusion

                -

                If you are looking for a way to remove fisheye distortion from your GoPro videos on Mac, you should definitely try Crumplepop Fisheye Fixer for GoPro. It is a simple and effective plugin that works with Final Cut Pro X and lets you fix your videos in minutes. You can download it from https://crumplepop.com/fisheyefixer/ and see for yourself how it works.

                -

                - -```

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Iron Man 3 Blu Ray [NEW] Download In Tamil 1080p 5.1ch.md b/spaces/tioseFevbu/cartoon-converter/scripts/Iron Man 3 Blu Ray [NEW] Download In Tamil 1080p 5.1ch.md deleted file mode 100644 index 82d373bc141535124c91da1600711d7de09deba8..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Iron Man 3 Blu Ray [NEW] Download In Tamil 1080p 5.1ch.md +++ /dev/null @@ -1,14 +0,0 @@ -
                -

                How to Download Iron Man 3 in Tamil with High Quality Audio and Video

                -

                Iron Man 3 is a 2013 superhero film based on the Marvel Comics character Iron Man, starring Robert Downey Jr. as Tony Stark, a genius billionaire who faces a new enemy called the Mandarin. It is the sequel to Iron Man (2008) and Iron Man 2 (2010), and the seventh film in the Marvel Cinematic Universe (MCU).

                -

                If you are a fan of Iron Man and want to watch the movie in Tamil with high quality audio and video, you might be wondering how to download it from the internet. There are many websites that offer Iron Man 3 in Tamil, but not all of them are reliable or safe. Some of them might contain viruses, malware, or low-quality files that can damage your device or ruin your viewing experience.

                -

                iron man 3 blu ray download in tamil 1080p 5.1ch


                Download File 🗹 https://urlcod.com/2uHwoQ



                -

                To help you find the best source for downloading Iron Man 3 in Tamil with 1080p resolution and 5.1 channel surround sound, we have done some research and found a few options that you can try. Here are some of them:

                -
                  -
                • Archive.org: This is a website that hosts millions of free books, movies, music, software, and more. You can find Iron Man 3 in Tamil with 1080p resolution and 5.1 channel surround sound on this website by searching for "iron man 3 2013 1080p id 68721". This is a file uploaded by a user named play do on July 11, 2021[^1^]. You can download it by clicking on the CINEPACK or TORRENT option on the right side of the page.
                • -
                • Ultimate Marvel Films Collection: This is a collection of all the MCU films from 2008 to 2021 in 1080p resolution and AC-3 audio format. You can find this collection on archive.org by searching for "ultimate-marvel-films-collection-2008-to-2021-1080p-blu-ray-x-264-ac-3-part-3-drago-tv". This is a file uploaded by a user named Drago TV on September 10, 2021[^2^]. You can download Iron Man 3 in Tamil by clicking on the M07 Iron Man 3 (2013) 1080p.mp4 file on the list.
                • -
                • Scribd: This is a website that allows users to upload and share documents, books, audiobooks, and more. You can find a PDF document that contains links to download Iron Man 3 in Tamil with 1080p resolution and DD 5.1 audio on this website by searching for "iron man 3 dual audio 1080p bluray 2013". This is a document uploaded by an anonymous user on June 18, 2020[^3^]. You can download it by clicking on the Download button on the top right corner of the page.
                • -
                -

                These are some of the ways you can download Iron Man 3 in Tamil with high quality audio and video. However, please note that these methods are not endorsed or verified by us, and we are not responsible for any legal or technical issues that may arise from using them. We recommend that you use a VPN service and an antivirus software to protect your privacy and security while downloading files from unknown sources. Also, please respect the intellectual property rights of the creators and distributors of Iron Man 3 and only download it for personal use.

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Krayzie Bone Thug Mentality 1999 Disc 1 Full ((FREE)) Album Zip.md b/spaces/tioseFevbu/cartoon-converter/scripts/Krayzie Bone Thug Mentality 1999 Disc 1 Full ((FREE)) Album Zip.md deleted file mode 100644 index 59baa24d37da88ace97c67188855c7a301ab6732..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Krayzie Bone Thug Mentality 1999 Disc 1 Full ((FREE)) Album Zip.md +++ /dev/null @@ -1,17 +0,0 @@ -
    

                Krayzie Bone's Thug Mentality 1999: A Classic Solo Debut

                -

                Krayzie Bone, one of the members of the legendary rap group Bone Thugs-n-Harmony, released his first solo album Thug Mentality 1999 on April 6, 1999. The album was a double-disc masterpiece that showcased Krayzie's versatile flow, lyrical skills, and musical influences. Thug Mentality 1999 featured a star-studded lineup of guest appearances, including Mariah Carey, The Marley Brothers, Big Pun, Fat Joe, Cuban Link, Snoop Dogg, E-40, and of course, his fellow Bone Thugs-n-Harmony members. The album also had production from DJ U-Neek, who crafted the signature sound of Bone Thugs-n-Harmony, as well as other producers such as Tony C, Damizza, L.T. Hutton, and Darren Vegas.

                -

                Krayzie Bone Thug Mentality 1999 Disc 1 Full Album Zip


                DOWNLOAD ->>> https://urlcod.com/2uHyfZ



                -

                The first disc of Thug Mentality 1999 contained 17 tracks that ranged from hardcore rap to smooth R&B. Some of the highlights include the title track "Thug Mentality", where Krayzie raps about his life and philosophy over a haunting piano loop; "Paper", where he teams up with Mariah Carey for a catchy hook and a sample of Eric B. & Rakim's "Paid in Full"; "Thugz All Ova Da World", where he collaborates with The Marley Brothers for a reggae-infused anthem; and "Murda Mo", where he delivers a rapid-fire flow over a dark and eerie beat. The first disc also featured the hit single "Thug Luv", where Krayzie and his Bone Thugs-n-Harmony brothers trade verses with Tupac Shakur over a gunshot-laced beat.

                -

                The second disc of Thug Mentality 1999 contained 16 tracks that continued to showcase Krayzie's diversity and creativity. Some of the highlights include "Da Bullshit (Skit)", where he parodies a radio show and takes shots at his rivals; "I Still Believe", where he duets with Mariah Carey for a heartfelt ballad; "Hi-D-Ho", where he samples Cab Calloway's classic song and adds his own twist; and "Silent Warrior", where he pays tribute to his fallen friend and mentor Eazy-E. The second disc also featured the hit single "Hard Time Hustlin'", where Krayzie and Sade sing about the struggles and rewards of the street life.

                -

                Thug Mentality 1999 was a commercial and critical success, debuting at number four on the Billboard 200 chart and selling over one million copies in the United States. It also received positive reviews from critics, who praised Krayzie's artistic vision, lyrical ability, and musical range. Thug Mentality 1999 is widely regarded as one of the best solo albums by a Bone Thugs-n-Harmony member and one of the classic rap albums of the late 1990s.

                -

                If you are a fan of Krayzie Bone or Bone Thugs-n-Harmony, you can download the full album zip file of Thug Mentality 1999 Disc 1 here[^1^]. Enjoy!

                Here are a few more paragraphs for the article: - -

                Krayzie Bone's Thug Mentality 1999 was not only a musical achievement, but also a personal one. Krayzie had to overcome many obstacles and challenges to complete the album, such as legal issues, label disputes, health problems, and personal conflicts. He also had to balance his solo career with his group commitments, as Bone Thugs-n-Harmony were working on their third album The Art of War at the same time. Krayzie dedicated a lot of time and effort to make Thug Mentality 1999 a reality, and it paid off in the end.

                -

                Krayzie Bone's Thug Mentality 1999 also had a significant impact on the rap scene and culture. The album influenced many other artists and fans who admired Krayzie's style and skills. The album also helped to expand the reach and recognition of Bone Thugs-n-Harmony, as well as their affiliated label Mo Thugs Records. The album also showcased the diversity and richness of rap music, as Krayzie blended different genres and elements to create his own unique sound. Thug Mentality 1999 was a testament to Krayzie's talent and vision as an artist.

                -

                Krayzie Bone's Thug Mentality 1999 is an album that deserves to be celebrated and appreciated by rap lovers and music lovers alike. The album is a masterpiece that showcases Krayzie's thug mentality, which is not just about violence and crime, but also about loyalty, resilience, spirituality, and creativity. The album is a reflection of Krayzie's life and personality, as well as his passion and dedication to his craft. Thug Mentality 1999 is an album that will never get old or outdated, as it is timeless and universal. Thug Mentality 1999 is an album that you should listen to if you haven't already.

                -

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/MP3.Releaser.v3.1B376.by-MorGoTH Utorrent.md b/spaces/tioseFevbu/cartoon-converter/scripts/MP3.Releaser.v3.1B376.by-MorGoTH Utorrent.md deleted file mode 100644 index 430a7de73d883e7564a1d0d7caebfd15c2dfc758..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/MP3.Releaser.v3.1B376.by-MorGoTH Utorrent.md +++ /dev/null @@ -1,17 +0,0 @@ - -

                How to Use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent to Manage Your MP3 Files

                -

                If you are looking for a tool that can help you organize, rename, tag, and create playlists and SFV files for your MP3 files, you might want to check out MP3.Releaser.v3.1B376.by-MorGoTH Utorrent. This is a software program created by MorGoTH, a retired developer who used to run a website called morgothtools. Unfortunately, the website is no longer available, but you can still download the program from various sources online.

                -

                MP3.Releaser.v3.1B376.by-MorGoTH Utorrent


                DOWNLOADhttps://urlcod.com/2uHxba



                -

                MP3.Releaser.v3.1B376.by-MorGoTH Utorrent is a lightweight and easy-to-use program that works with the BitTorrent protocol. You can use it to download and upload MP3 files from torrent websites, as well as manage them locally on your computer. The program has various features that allow you to edit the metadata of your MP3 files, such as artist, album, genre, year, track number, and more. You can also use it to create playlists in M3U format, SFV files for verifying the integrity of your files, and NFO files for providing information about your releases.

                -

                To use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent, you need to have a torrent client installed on your computer, such as uTorrent or BitTorrent. You also need to have a torrent file or a magnet link for the MP3 files you want to download or upload. Once you have these, you can launch MP3.Releaser.v3.1B376.by-MorGoTH Utorrent and select the torrent file or paste the magnet link in the program. The program will then start downloading or uploading the MP3 files to your specified folder.

                -

                Once the download or upload is complete, you can use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent to manage your MP3 files. You can browse through your folders and select the files you want to edit. You can then use the buttons on the toolbar to rename, tag, create playlists, create SFV files, create NFO files, or delete your files. You can also use the options menu to customize the settings of the program, such as the default folder, the file naming scheme, the tag format, and more.

                -

    MP3.Releaser.v3.1B376.by-MorGoTH Utorrent is a handy tool for anyone who downloads or uploads MP3 files from torrent websites. It can help you keep your MP3 files organized and properly documented. However, please note that this program is no longer updated by its developer and may not work with some newer versions of Windows or torrent clients. Also, please be careful when downloading or uploading MP3 files from torrent websites, as they may contain viruses or malware, or infringe on copyright laws.
    

                - -

                However, MP3.Releaser.v3.1B376.by-MorGoTH Utorrent is not just a tool for downloading and uploading MP3 files. It also has some features that make it a useful program for managing your MP3 files locally on your computer. For example, you can use it to edit the metadata of your MP3 files, such as artist, album, genre, year, track number, and more. You can also use it to create playlists in M3U format, SFV files for verifying the integrity of your files, and NFO files for providing information about your releases.

                -

                -

                Editing the metadata of your MP3 files can help you organize your music collection better and make it easier to find the songs you want to listen to. Metadata is the information that is embedded in an MP3 file, such as title, artist, album name, and more. You can use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent to view and modify this information for each file or for multiple files at once. You can also use the program to rename your files according to a custom naming scheme based on the metadata.

                -
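    Since the program itself is dated and hard to obtain, the same kind of tag editing can also be done with a short script. The sketch below is an illustrative example using the third-party mutagen library (an assumption: it is not part of MP3.Releaser) to read and update basic ID3 fields on a local file; the filename and tag values are placeholders.

    ```python
    from mutagen.easyid3 import EasyID3

    # Read and update basic ID3 tags on a local MP3 file (the path is a placeholder).
    tags = EasyID3("track01.mp3")
    print(tags.get("artist"), tags.get("title"))

    tags["artist"] = "Some Artist"
    tags["album"] = "Some Album"
    tags["tracknumber"] = "1"
    tags.save()
    ```
    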

                Creating playlists in M3U format can help you enjoy your music in different ways. A playlist is a list of songs that you can play in a specific order or randomly. You can use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent to create playlists from your MP3 files and save them in M3U format, which is compatible with most media players. You can also use the program to create SFV files and NFO files for your releases.

                -
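    An M3U playlist is just a plain text file that lists one track per line, optionally with #EXTM3U/#EXTINF metadata, so you can also build one yourself. A minimal Python sketch, assuming your MP3s sit in a local folder named Music:

    ```python
    from pathlib import Path

    music_dir = Path("Music")                  # placeholder folder containing your MP3s
    tracks = sorted(music_dir.glob("*.mp3"))

    # Write a simple extended M3U playlist, one file path per line.
    with open("my_playlist.m3u", "w", encoding="utf-8") as f:
        f.write("#EXTM3U\n")
        for track in tracks:
            f.write(f"#EXTINF:-1,{track.stem}\n")   # -1 = unknown duration; title from filename
            f.write(f"{track}\n")
    ```
    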

                SFV files are checksum files that can help you verify the integrity of your MP3 files. A checksum is a unique code that is generated from the data in a file. If you compare the checksum of a file with the checksum in an SFV file, you can check if the file has been corrupted or modified. You can use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent to create SFV files for your MP3 files and use them to check if your files are intact.

                -
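    An SFV file stores one `filename checksum` pair per line, where the checksum is the file's CRC32 in hexadecimal, so the verification described above can be reproduced with the Python standard library alone. A minimal sketch (album.sfv is a placeholder name):

    ```python
    import zlib
    from pathlib import Path

    def crc32_of(path: Path) -> str:
        """Compute a file's CRC32, formatted the way SFV files expect (8 hex digits)."""
        crc = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                crc = zlib.crc32(chunk, crc)
        return f"{crc & 0xFFFFFFFF:08X}"

    # Check every entry in an SFV file against the files on disk.
    for line in Path("album.sfv").read_text(encoding="utf-8", errors="ignore").splitlines():
        line = line.strip()
        if not line or line.startswith(";"):       # ";" starts a comment in SFV files
            continue
        name, expected = line.rsplit(" ", 1)
        status = "OK" if crc32_of(Path(name)).upper() == expected.upper() else "MISMATCH"
        print(f"{name}: {status}")
    ```
    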

                NFO files are text files that can help you provide information about your releases. NFO stands for info or information. You can use MP3.Releaser.v3.1B376.by-MorGoTH Utorrent to create NFO files for your MP3 releases and include details such as release name, release date, genre, track list, encoder settings, source, and more. You can also use the program to customize the appearance of your NFO files with ASCII art and colors.

    
                -
                -
                \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/idnadata.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/idnadata.py deleted file mode 100644 index 1b5805d15e53994f9909dd6f064603574eefdb32..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/idna/idnadata.py +++ /dev/null @@ -1,2137 +0,0 @@ -# This file is automatically generated by tools/idna-data - -__version__ = '14.0.0' -scripts = { - 'Greek': ( - 0x37000000374, - 0x37500000378, - 0x37a0000037e, - 0x37f00000380, - 0x38400000385, - 0x38600000387, - 0x3880000038b, - 0x38c0000038d, - 0x38e000003a2, - 0x3a3000003e2, - 0x3f000000400, - 0x1d2600001d2b, - 0x1d5d00001d62, - 0x1d6600001d6b, - 0x1dbf00001dc0, - 0x1f0000001f16, - 0x1f1800001f1e, - 0x1f2000001f46, - 0x1f4800001f4e, - 0x1f5000001f58, - 0x1f5900001f5a, - 0x1f5b00001f5c, - 0x1f5d00001f5e, - 0x1f5f00001f7e, - 0x1f8000001fb5, - 0x1fb600001fc5, - 0x1fc600001fd4, - 0x1fd600001fdc, - 0x1fdd00001ff0, - 0x1ff200001ff5, - 0x1ff600001fff, - 0x212600002127, - 0xab650000ab66, - 0x101400001018f, - 0x101a0000101a1, - 0x1d2000001d246, - ), - 'Han': ( - 0x2e8000002e9a, - 0x2e9b00002ef4, - 0x2f0000002fd6, - 0x300500003006, - 0x300700003008, - 0x30210000302a, - 0x30380000303c, - 0x340000004dc0, - 0x4e000000a000, - 0xf9000000fa6e, - 0xfa700000fada, - 0x16fe200016fe4, - 0x16ff000016ff2, - 0x200000002a6e0, - 0x2a7000002b739, - 0x2b7400002b81e, - 0x2b8200002cea2, - 0x2ceb00002ebe1, - 0x2f8000002fa1e, - 0x300000003134b, - ), - 'Hebrew': ( - 0x591000005c8, - 0x5d0000005eb, - 0x5ef000005f5, - 0xfb1d0000fb37, - 0xfb380000fb3d, - 0xfb3e0000fb3f, - 0xfb400000fb42, - 0xfb430000fb45, - 0xfb460000fb50, - ), - 'Hiragana': ( - 0x304100003097, - 0x309d000030a0, - 0x1b0010001b120, - 0x1b1500001b153, - 0x1f2000001f201, - ), - 'Katakana': ( - 0x30a1000030fb, - 0x30fd00003100, - 0x31f000003200, - 0x32d0000032ff, - 0x330000003358, - 0xff660000ff70, - 0xff710000ff9e, - 0x1aff00001aff4, - 0x1aff50001affc, - 0x1affd0001afff, - 0x1b0000001b001, - 0x1b1200001b123, - 0x1b1640001b168, - ), -} -joining_types = { - 0x600: 85, - 0x601: 85, - 0x602: 85, - 0x603: 85, - 0x604: 85, - 0x605: 85, - 0x608: 85, - 0x60b: 85, - 0x620: 68, - 0x621: 85, - 0x622: 82, - 0x623: 82, - 0x624: 82, - 0x625: 82, - 0x626: 68, - 0x627: 82, - 0x628: 68, - 0x629: 82, - 0x62a: 68, - 0x62b: 68, - 0x62c: 68, - 0x62d: 68, - 0x62e: 68, - 0x62f: 82, - 0x630: 82, - 0x631: 82, - 0x632: 82, - 0x633: 68, - 0x634: 68, - 0x635: 68, - 0x636: 68, - 0x637: 68, - 0x638: 68, - 0x639: 68, - 0x63a: 68, - 0x63b: 68, - 0x63c: 68, - 0x63d: 68, - 0x63e: 68, - 0x63f: 68, - 0x640: 67, - 0x641: 68, - 0x642: 68, - 0x643: 68, - 0x644: 68, - 0x645: 68, - 0x646: 68, - 0x647: 68, - 0x648: 82, - 0x649: 68, - 0x64a: 68, - 0x66e: 68, - 0x66f: 68, - 0x671: 82, - 0x672: 82, - 0x673: 82, - 0x674: 85, - 0x675: 82, - 0x676: 82, - 0x677: 82, - 0x678: 68, - 0x679: 68, - 0x67a: 68, - 0x67b: 68, - 0x67c: 68, - 0x67d: 68, - 0x67e: 68, - 0x67f: 68, - 0x680: 68, - 0x681: 68, - 0x682: 68, - 0x683: 68, - 0x684: 68, - 0x685: 68, - 0x686: 68, - 0x687: 68, - 0x688: 82, - 0x689: 82, - 0x68a: 82, - 0x68b: 82, - 0x68c: 82, - 0x68d: 82, - 0x68e: 82, - 0x68f: 82, - 0x690: 82, - 0x691: 82, - 0x692: 82, - 0x693: 82, - 0x694: 82, - 0x695: 82, - 0x696: 82, - 0x697: 82, - 0x698: 82, - 0x699: 82, - 0x69a: 68, - 0x69b: 68, - 0x69c: 68, - 0x69d: 68, - 0x69e: 68, - 0x69f: 68, - 0x6a0: 68, - 0x6a1: 68, - 
0x6a2: 68, - 0x6a3: 68, - 0x6a4: 68, - 0x6a5: 68, - 0x6a6: 68, - 0x6a7: 68, - 0x6a8: 68, - 0x6a9: 68, - 0x6aa: 68, - 0x6ab: 68, - 0x6ac: 68, - 0x6ad: 68, - 0x6ae: 68, - 0x6af: 68, - 0x6b0: 68, - 0x6b1: 68, - 0x6b2: 68, - 0x6b3: 68, - 0x6b4: 68, - 0x6b5: 68, - 0x6b6: 68, - 0x6b7: 68, - 0x6b8: 68, - 0x6b9: 68, - 0x6ba: 68, - 0x6bb: 68, - 0x6bc: 68, - 0x6bd: 68, - 0x6be: 68, - 0x6bf: 68, - 0x6c0: 82, - 0x6c1: 68, - 0x6c2: 68, - 0x6c3: 82, - 0x6c4: 82, - 0x6c5: 82, - 0x6c6: 82, - 0x6c7: 82, - 0x6c8: 82, - 0x6c9: 82, - 0x6ca: 82, - 0x6cb: 82, - 0x6cc: 68, - 0x6cd: 82, - 0x6ce: 68, - 0x6cf: 82, - 0x6d0: 68, - 0x6d1: 68, - 0x6d2: 82, - 0x6d3: 82, - 0x6d5: 82, - 0x6dd: 85, - 0x6ee: 82, - 0x6ef: 82, - 0x6fa: 68, - 0x6fb: 68, - 0x6fc: 68, - 0x6ff: 68, - 0x70f: 84, - 0x710: 82, - 0x712: 68, - 0x713: 68, - 0x714: 68, - 0x715: 82, - 0x716: 82, - 0x717: 82, - 0x718: 82, - 0x719: 82, - 0x71a: 68, - 0x71b: 68, - 0x71c: 68, - 0x71d: 68, - 0x71e: 82, - 0x71f: 68, - 0x720: 68, - 0x721: 68, - 0x722: 68, - 0x723: 68, - 0x724: 68, - 0x725: 68, - 0x726: 68, - 0x727: 68, - 0x728: 82, - 0x729: 68, - 0x72a: 82, - 0x72b: 68, - 0x72c: 82, - 0x72d: 68, - 0x72e: 68, - 0x72f: 82, - 0x74d: 82, - 0x74e: 68, - 0x74f: 68, - 0x750: 68, - 0x751: 68, - 0x752: 68, - 0x753: 68, - 0x754: 68, - 0x755: 68, - 0x756: 68, - 0x757: 68, - 0x758: 68, - 0x759: 82, - 0x75a: 82, - 0x75b: 82, - 0x75c: 68, - 0x75d: 68, - 0x75e: 68, - 0x75f: 68, - 0x760: 68, - 0x761: 68, - 0x762: 68, - 0x763: 68, - 0x764: 68, - 0x765: 68, - 0x766: 68, - 0x767: 68, - 0x768: 68, - 0x769: 68, - 0x76a: 68, - 0x76b: 82, - 0x76c: 82, - 0x76d: 68, - 0x76e: 68, - 0x76f: 68, - 0x770: 68, - 0x771: 82, - 0x772: 68, - 0x773: 82, - 0x774: 82, - 0x775: 68, - 0x776: 68, - 0x777: 68, - 0x778: 82, - 0x779: 82, - 0x77a: 68, - 0x77b: 68, - 0x77c: 68, - 0x77d: 68, - 0x77e: 68, - 0x77f: 68, - 0x7ca: 68, - 0x7cb: 68, - 0x7cc: 68, - 0x7cd: 68, - 0x7ce: 68, - 0x7cf: 68, - 0x7d0: 68, - 0x7d1: 68, - 0x7d2: 68, - 0x7d3: 68, - 0x7d4: 68, - 0x7d5: 68, - 0x7d6: 68, - 0x7d7: 68, - 0x7d8: 68, - 0x7d9: 68, - 0x7da: 68, - 0x7db: 68, - 0x7dc: 68, - 0x7dd: 68, - 0x7de: 68, - 0x7df: 68, - 0x7e0: 68, - 0x7e1: 68, - 0x7e2: 68, - 0x7e3: 68, - 0x7e4: 68, - 0x7e5: 68, - 0x7e6: 68, - 0x7e7: 68, - 0x7e8: 68, - 0x7e9: 68, - 0x7ea: 68, - 0x7fa: 67, - 0x840: 82, - 0x841: 68, - 0x842: 68, - 0x843: 68, - 0x844: 68, - 0x845: 68, - 0x846: 82, - 0x847: 82, - 0x848: 68, - 0x849: 82, - 0x84a: 68, - 0x84b: 68, - 0x84c: 68, - 0x84d: 68, - 0x84e: 68, - 0x84f: 68, - 0x850: 68, - 0x851: 68, - 0x852: 68, - 0x853: 68, - 0x854: 82, - 0x855: 68, - 0x856: 82, - 0x857: 82, - 0x858: 82, - 0x860: 68, - 0x861: 85, - 0x862: 68, - 0x863: 68, - 0x864: 68, - 0x865: 68, - 0x866: 85, - 0x867: 82, - 0x868: 68, - 0x869: 82, - 0x86a: 82, - 0x870: 82, - 0x871: 82, - 0x872: 82, - 0x873: 82, - 0x874: 82, - 0x875: 82, - 0x876: 82, - 0x877: 82, - 0x878: 82, - 0x879: 82, - 0x87a: 82, - 0x87b: 82, - 0x87c: 82, - 0x87d: 82, - 0x87e: 82, - 0x87f: 82, - 0x880: 82, - 0x881: 82, - 0x882: 82, - 0x883: 67, - 0x884: 67, - 0x885: 67, - 0x886: 68, - 0x887: 85, - 0x888: 85, - 0x889: 68, - 0x88a: 68, - 0x88b: 68, - 0x88c: 68, - 0x88d: 68, - 0x88e: 82, - 0x890: 85, - 0x891: 85, - 0x8a0: 68, - 0x8a1: 68, - 0x8a2: 68, - 0x8a3: 68, - 0x8a4: 68, - 0x8a5: 68, - 0x8a6: 68, - 0x8a7: 68, - 0x8a8: 68, - 0x8a9: 68, - 0x8aa: 82, - 0x8ab: 82, - 0x8ac: 82, - 0x8ad: 85, - 0x8ae: 82, - 0x8af: 68, - 0x8b0: 68, - 0x8b1: 82, - 0x8b2: 82, - 0x8b3: 68, - 0x8b4: 68, - 0x8b5: 68, - 0x8b6: 68, - 0x8b7: 68, - 0x8b8: 68, - 0x8b9: 82, - 0x8ba: 68, - 0x8bb: 68, - 0x8bc: 68, - 
0x8bd: 68, - 0x8be: 68, - 0x8bf: 68, - 0x8c0: 68, - 0x8c1: 68, - 0x8c2: 68, - 0x8c3: 68, - 0x8c4: 68, - 0x8c5: 68, - 0x8c6: 68, - 0x8c7: 68, - 0x8c8: 68, - 0x8e2: 85, - 0x1806: 85, - 0x1807: 68, - 0x180a: 67, - 0x180e: 85, - 0x1820: 68, - 0x1821: 68, - 0x1822: 68, - 0x1823: 68, - 0x1824: 68, - 0x1825: 68, - 0x1826: 68, - 0x1827: 68, - 0x1828: 68, - 0x1829: 68, - 0x182a: 68, - 0x182b: 68, - 0x182c: 68, - 0x182d: 68, - 0x182e: 68, - 0x182f: 68, - 0x1830: 68, - 0x1831: 68, - 0x1832: 68, - 0x1833: 68, - 0x1834: 68, - 0x1835: 68, - 0x1836: 68, - 0x1837: 68, - 0x1838: 68, - 0x1839: 68, - 0x183a: 68, - 0x183b: 68, - 0x183c: 68, - 0x183d: 68, - 0x183e: 68, - 0x183f: 68, - 0x1840: 68, - 0x1841: 68, - 0x1842: 68, - 0x1843: 68, - 0x1844: 68, - 0x1845: 68, - 0x1846: 68, - 0x1847: 68, - 0x1848: 68, - 0x1849: 68, - 0x184a: 68, - 0x184b: 68, - 0x184c: 68, - 0x184d: 68, - 0x184e: 68, - 0x184f: 68, - 0x1850: 68, - 0x1851: 68, - 0x1852: 68, - 0x1853: 68, - 0x1854: 68, - 0x1855: 68, - 0x1856: 68, - 0x1857: 68, - 0x1858: 68, - 0x1859: 68, - 0x185a: 68, - 0x185b: 68, - 0x185c: 68, - 0x185d: 68, - 0x185e: 68, - 0x185f: 68, - 0x1860: 68, - 0x1861: 68, - 0x1862: 68, - 0x1863: 68, - 0x1864: 68, - 0x1865: 68, - 0x1866: 68, - 0x1867: 68, - 0x1868: 68, - 0x1869: 68, - 0x186a: 68, - 0x186b: 68, - 0x186c: 68, - 0x186d: 68, - 0x186e: 68, - 0x186f: 68, - 0x1870: 68, - 0x1871: 68, - 0x1872: 68, - 0x1873: 68, - 0x1874: 68, - 0x1875: 68, - 0x1876: 68, - 0x1877: 68, - 0x1878: 68, - 0x1880: 85, - 0x1881: 85, - 0x1882: 85, - 0x1883: 85, - 0x1884: 85, - 0x1885: 84, - 0x1886: 84, - 0x1887: 68, - 0x1888: 68, - 0x1889: 68, - 0x188a: 68, - 0x188b: 68, - 0x188c: 68, - 0x188d: 68, - 0x188e: 68, - 0x188f: 68, - 0x1890: 68, - 0x1891: 68, - 0x1892: 68, - 0x1893: 68, - 0x1894: 68, - 0x1895: 68, - 0x1896: 68, - 0x1897: 68, - 0x1898: 68, - 0x1899: 68, - 0x189a: 68, - 0x189b: 68, - 0x189c: 68, - 0x189d: 68, - 0x189e: 68, - 0x189f: 68, - 0x18a0: 68, - 0x18a1: 68, - 0x18a2: 68, - 0x18a3: 68, - 0x18a4: 68, - 0x18a5: 68, - 0x18a6: 68, - 0x18a7: 68, - 0x18a8: 68, - 0x18aa: 68, - 0x200c: 85, - 0x200d: 67, - 0x202f: 85, - 0x2066: 85, - 0x2067: 85, - 0x2068: 85, - 0x2069: 85, - 0xa840: 68, - 0xa841: 68, - 0xa842: 68, - 0xa843: 68, - 0xa844: 68, - 0xa845: 68, - 0xa846: 68, - 0xa847: 68, - 0xa848: 68, - 0xa849: 68, - 0xa84a: 68, - 0xa84b: 68, - 0xa84c: 68, - 0xa84d: 68, - 0xa84e: 68, - 0xa84f: 68, - 0xa850: 68, - 0xa851: 68, - 0xa852: 68, - 0xa853: 68, - 0xa854: 68, - 0xa855: 68, - 0xa856: 68, - 0xa857: 68, - 0xa858: 68, - 0xa859: 68, - 0xa85a: 68, - 0xa85b: 68, - 0xa85c: 68, - 0xa85d: 68, - 0xa85e: 68, - 0xa85f: 68, - 0xa860: 68, - 0xa861: 68, - 0xa862: 68, - 0xa863: 68, - 0xa864: 68, - 0xa865: 68, - 0xa866: 68, - 0xa867: 68, - 0xa868: 68, - 0xa869: 68, - 0xa86a: 68, - 0xa86b: 68, - 0xa86c: 68, - 0xa86d: 68, - 0xa86e: 68, - 0xa86f: 68, - 0xa870: 68, - 0xa871: 68, - 0xa872: 76, - 0xa873: 85, - 0x10ac0: 68, - 0x10ac1: 68, - 0x10ac2: 68, - 0x10ac3: 68, - 0x10ac4: 68, - 0x10ac5: 82, - 0x10ac6: 85, - 0x10ac7: 82, - 0x10ac8: 85, - 0x10ac9: 82, - 0x10aca: 82, - 0x10acb: 85, - 0x10acc: 85, - 0x10acd: 76, - 0x10ace: 82, - 0x10acf: 82, - 0x10ad0: 82, - 0x10ad1: 82, - 0x10ad2: 82, - 0x10ad3: 68, - 0x10ad4: 68, - 0x10ad5: 68, - 0x10ad6: 68, - 0x10ad7: 76, - 0x10ad8: 68, - 0x10ad9: 68, - 0x10ada: 68, - 0x10adb: 68, - 0x10adc: 68, - 0x10add: 82, - 0x10ade: 68, - 0x10adf: 68, - 0x10ae0: 68, - 0x10ae1: 82, - 0x10ae2: 85, - 0x10ae3: 85, - 0x10ae4: 82, - 0x10aeb: 68, - 0x10aec: 68, - 0x10aed: 68, - 0x10aee: 68, - 0x10aef: 82, - 0x10b80: 68, - 0x10b81: 82, - 0x10b82: 
68, - 0x10b83: 82, - 0x10b84: 82, - 0x10b85: 82, - 0x10b86: 68, - 0x10b87: 68, - 0x10b88: 68, - 0x10b89: 82, - 0x10b8a: 68, - 0x10b8b: 68, - 0x10b8c: 82, - 0x10b8d: 68, - 0x10b8e: 82, - 0x10b8f: 82, - 0x10b90: 68, - 0x10b91: 82, - 0x10ba9: 82, - 0x10baa: 82, - 0x10bab: 82, - 0x10bac: 82, - 0x10bad: 68, - 0x10bae: 68, - 0x10baf: 85, - 0x10d00: 76, - 0x10d01: 68, - 0x10d02: 68, - 0x10d03: 68, - 0x10d04: 68, - 0x10d05: 68, - 0x10d06: 68, - 0x10d07: 68, - 0x10d08: 68, - 0x10d09: 68, - 0x10d0a: 68, - 0x10d0b: 68, - 0x10d0c: 68, - 0x10d0d: 68, - 0x10d0e: 68, - 0x10d0f: 68, - 0x10d10: 68, - 0x10d11: 68, - 0x10d12: 68, - 0x10d13: 68, - 0x10d14: 68, - 0x10d15: 68, - 0x10d16: 68, - 0x10d17: 68, - 0x10d18: 68, - 0x10d19: 68, - 0x10d1a: 68, - 0x10d1b: 68, - 0x10d1c: 68, - 0x10d1d: 68, - 0x10d1e: 68, - 0x10d1f: 68, - 0x10d20: 68, - 0x10d21: 68, - 0x10d22: 82, - 0x10d23: 68, - 0x10f30: 68, - 0x10f31: 68, - 0x10f32: 68, - 0x10f33: 82, - 0x10f34: 68, - 0x10f35: 68, - 0x10f36: 68, - 0x10f37: 68, - 0x10f38: 68, - 0x10f39: 68, - 0x10f3a: 68, - 0x10f3b: 68, - 0x10f3c: 68, - 0x10f3d: 68, - 0x10f3e: 68, - 0x10f3f: 68, - 0x10f40: 68, - 0x10f41: 68, - 0x10f42: 68, - 0x10f43: 68, - 0x10f44: 68, - 0x10f45: 85, - 0x10f51: 68, - 0x10f52: 68, - 0x10f53: 68, - 0x10f54: 82, - 0x10f70: 68, - 0x10f71: 68, - 0x10f72: 68, - 0x10f73: 68, - 0x10f74: 82, - 0x10f75: 82, - 0x10f76: 68, - 0x10f77: 68, - 0x10f78: 68, - 0x10f79: 68, - 0x10f7a: 68, - 0x10f7b: 68, - 0x10f7c: 68, - 0x10f7d: 68, - 0x10f7e: 68, - 0x10f7f: 68, - 0x10f80: 68, - 0x10f81: 68, - 0x10fb0: 68, - 0x10fb1: 85, - 0x10fb2: 68, - 0x10fb3: 68, - 0x10fb4: 82, - 0x10fb5: 82, - 0x10fb6: 82, - 0x10fb7: 85, - 0x10fb8: 68, - 0x10fb9: 82, - 0x10fba: 82, - 0x10fbb: 68, - 0x10fbc: 68, - 0x10fbd: 82, - 0x10fbe: 68, - 0x10fbf: 68, - 0x10fc0: 85, - 0x10fc1: 68, - 0x10fc2: 82, - 0x10fc3: 82, - 0x10fc4: 68, - 0x10fc5: 85, - 0x10fc6: 85, - 0x10fc7: 85, - 0x10fc8: 85, - 0x10fc9: 82, - 0x10fca: 68, - 0x10fcb: 76, - 0x110bd: 85, - 0x110cd: 85, - 0x1e900: 68, - 0x1e901: 68, - 0x1e902: 68, - 0x1e903: 68, - 0x1e904: 68, - 0x1e905: 68, - 0x1e906: 68, - 0x1e907: 68, - 0x1e908: 68, - 0x1e909: 68, - 0x1e90a: 68, - 0x1e90b: 68, - 0x1e90c: 68, - 0x1e90d: 68, - 0x1e90e: 68, - 0x1e90f: 68, - 0x1e910: 68, - 0x1e911: 68, - 0x1e912: 68, - 0x1e913: 68, - 0x1e914: 68, - 0x1e915: 68, - 0x1e916: 68, - 0x1e917: 68, - 0x1e918: 68, - 0x1e919: 68, - 0x1e91a: 68, - 0x1e91b: 68, - 0x1e91c: 68, - 0x1e91d: 68, - 0x1e91e: 68, - 0x1e91f: 68, - 0x1e920: 68, - 0x1e921: 68, - 0x1e922: 68, - 0x1e923: 68, - 0x1e924: 68, - 0x1e925: 68, - 0x1e926: 68, - 0x1e927: 68, - 0x1e928: 68, - 0x1e929: 68, - 0x1e92a: 68, - 0x1e92b: 68, - 0x1e92c: 68, - 0x1e92d: 68, - 0x1e92e: 68, - 0x1e92f: 68, - 0x1e930: 68, - 0x1e931: 68, - 0x1e932: 68, - 0x1e933: 68, - 0x1e934: 68, - 0x1e935: 68, - 0x1e936: 68, - 0x1e937: 68, - 0x1e938: 68, - 0x1e939: 68, - 0x1e93a: 68, - 0x1e93b: 68, - 0x1e93c: 68, - 0x1e93d: 68, - 0x1e93e: 68, - 0x1e93f: 68, - 0x1e940: 68, - 0x1e941: 68, - 0x1e942: 68, - 0x1e943: 68, - 0x1e94b: 84, -} -codepoint_classes = { - 'PVALID': ( - 0x2d0000002e, - 0x300000003a, - 0x610000007b, - 0xdf000000f7, - 0xf800000100, - 0x10100000102, - 0x10300000104, - 0x10500000106, - 0x10700000108, - 0x1090000010a, - 0x10b0000010c, - 0x10d0000010e, - 0x10f00000110, - 0x11100000112, - 0x11300000114, - 0x11500000116, - 0x11700000118, - 0x1190000011a, - 0x11b0000011c, - 0x11d0000011e, - 0x11f00000120, - 0x12100000122, - 0x12300000124, - 0x12500000126, - 0x12700000128, - 0x1290000012a, - 0x12b0000012c, - 0x12d0000012e, - 0x12f00000130, - 
0x13100000132, - 0x13500000136, - 0x13700000139, - 0x13a0000013b, - 0x13c0000013d, - 0x13e0000013f, - 0x14200000143, - 0x14400000145, - 0x14600000147, - 0x14800000149, - 0x14b0000014c, - 0x14d0000014e, - 0x14f00000150, - 0x15100000152, - 0x15300000154, - 0x15500000156, - 0x15700000158, - 0x1590000015a, - 0x15b0000015c, - 0x15d0000015e, - 0x15f00000160, - 0x16100000162, - 0x16300000164, - 0x16500000166, - 0x16700000168, - 0x1690000016a, - 0x16b0000016c, - 0x16d0000016e, - 0x16f00000170, - 0x17100000172, - 0x17300000174, - 0x17500000176, - 0x17700000178, - 0x17a0000017b, - 0x17c0000017d, - 0x17e0000017f, - 0x18000000181, - 0x18300000184, - 0x18500000186, - 0x18800000189, - 0x18c0000018e, - 0x19200000193, - 0x19500000196, - 0x1990000019c, - 0x19e0000019f, - 0x1a1000001a2, - 0x1a3000001a4, - 0x1a5000001a6, - 0x1a8000001a9, - 0x1aa000001ac, - 0x1ad000001ae, - 0x1b0000001b1, - 0x1b4000001b5, - 0x1b6000001b7, - 0x1b9000001bc, - 0x1bd000001c4, - 0x1ce000001cf, - 0x1d0000001d1, - 0x1d2000001d3, - 0x1d4000001d5, - 0x1d6000001d7, - 0x1d8000001d9, - 0x1da000001db, - 0x1dc000001de, - 0x1df000001e0, - 0x1e1000001e2, - 0x1e3000001e4, - 0x1e5000001e6, - 0x1e7000001e8, - 0x1e9000001ea, - 0x1eb000001ec, - 0x1ed000001ee, - 0x1ef000001f1, - 0x1f5000001f6, - 0x1f9000001fa, - 0x1fb000001fc, - 0x1fd000001fe, - 0x1ff00000200, - 0x20100000202, - 0x20300000204, - 0x20500000206, - 0x20700000208, - 0x2090000020a, - 0x20b0000020c, - 0x20d0000020e, - 0x20f00000210, - 0x21100000212, - 0x21300000214, - 0x21500000216, - 0x21700000218, - 0x2190000021a, - 0x21b0000021c, - 0x21d0000021e, - 0x21f00000220, - 0x22100000222, - 0x22300000224, - 0x22500000226, - 0x22700000228, - 0x2290000022a, - 0x22b0000022c, - 0x22d0000022e, - 0x22f00000230, - 0x23100000232, - 0x2330000023a, - 0x23c0000023d, - 0x23f00000241, - 0x24200000243, - 0x24700000248, - 0x2490000024a, - 0x24b0000024c, - 0x24d0000024e, - 0x24f000002b0, - 0x2b9000002c2, - 0x2c6000002d2, - 0x2ec000002ed, - 0x2ee000002ef, - 0x30000000340, - 0x34200000343, - 0x3460000034f, - 0x35000000370, - 0x37100000372, - 0x37300000374, - 0x37700000378, - 0x37b0000037e, - 0x39000000391, - 0x3ac000003cf, - 0x3d7000003d8, - 0x3d9000003da, - 0x3db000003dc, - 0x3dd000003de, - 0x3df000003e0, - 0x3e1000003e2, - 0x3e3000003e4, - 0x3e5000003e6, - 0x3e7000003e8, - 0x3e9000003ea, - 0x3eb000003ec, - 0x3ed000003ee, - 0x3ef000003f0, - 0x3f3000003f4, - 0x3f8000003f9, - 0x3fb000003fd, - 0x43000000460, - 0x46100000462, - 0x46300000464, - 0x46500000466, - 0x46700000468, - 0x4690000046a, - 0x46b0000046c, - 0x46d0000046e, - 0x46f00000470, - 0x47100000472, - 0x47300000474, - 0x47500000476, - 0x47700000478, - 0x4790000047a, - 0x47b0000047c, - 0x47d0000047e, - 0x47f00000480, - 0x48100000482, - 0x48300000488, - 0x48b0000048c, - 0x48d0000048e, - 0x48f00000490, - 0x49100000492, - 0x49300000494, - 0x49500000496, - 0x49700000498, - 0x4990000049a, - 0x49b0000049c, - 0x49d0000049e, - 0x49f000004a0, - 0x4a1000004a2, - 0x4a3000004a4, - 0x4a5000004a6, - 0x4a7000004a8, - 0x4a9000004aa, - 0x4ab000004ac, - 0x4ad000004ae, - 0x4af000004b0, - 0x4b1000004b2, - 0x4b3000004b4, - 0x4b5000004b6, - 0x4b7000004b8, - 0x4b9000004ba, - 0x4bb000004bc, - 0x4bd000004be, - 0x4bf000004c0, - 0x4c2000004c3, - 0x4c4000004c5, - 0x4c6000004c7, - 0x4c8000004c9, - 0x4ca000004cb, - 0x4cc000004cd, - 0x4ce000004d0, - 0x4d1000004d2, - 0x4d3000004d4, - 0x4d5000004d6, - 0x4d7000004d8, - 0x4d9000004da, - 0x4db000004dc, - 0x4dd000004de, - 0x4df000004e0, - 0x4e1000004e2, - 0x4e3000004e4, - 0x4e5000004e6, - 0x4e7000004e8, - 0x4e9000004ea, - 0x4eb000004ec, - 
0x4ed000004ee, - 0x4ef000004f0, - 0x4f1000004f2, - 0x4f3000004f4, - 0x4f5000004f6, - 0x4f7000004f8, - 0x4f9000004fa, - 0x4fb000004fc, - 0x4fd000004fe, - 0x4ff00000500, - 0x50100000502, - 0x50300000504, - 0x50500000506, - 0x50700000508, - 0x5090000050a, - 0x50b0000050c, - 0x50d0000050e, - 0x50f00000510, - 0x51100000512, - 0x51300000514, - 0x51500000516, - 0x51700000518, - 0x5190000051a, - 0x51b0000051c, - 0x51d0000051e, - 0x51f00000520, - 0x52100000522, - 0x52300000524, - 0x52500000526, - 0x52700000528, - 0x5290000052a, - 0x52b0000052c, - 0x52d0000052e, - 0x52f00000530, - 0x5590000055a, - 0x56000000587, - 0x58800000589, - 0x591000005be, - 0x5bf000005c0, - 0x5c1000005c3, - 0x5c4000005c6, - 0x5c7000005c8, - 0x5d0000005eb, - 0x5ef000005f3, - 0x6100000061b, - 0x62000000640, - 0x64100000660, - 0x66e00000675, - 0x679000006d4, - 0x6d5000006dd, - 0x6df000006e9, - 0x6ea000006f0, - 0x6fa00000700, - 0x7100000074b, - 0x74d000007b2, - 0x7c0000007f6, - 0x7fd000007fe, - 0x8000000082e, - 0x8400000085c, - 0x8600000086b, - 0x87000000888, - 0x8890000088f, - 0x898000008e2, - 0x8e300000958, - 0x96000000964, - 0x96600000970, - 0x97100000984, - 0x9850000098d, - 0x98f00000991, - 0x993000009a9, - 0x9aa000009b1, - 0x9b2000009b3, - 0x9b6000009ba, - 0x9bc000009c5, - 0x9c7000009c9, - 0x9cb000009cf, - 0x9d7000009d8, - 0x9e0000009e4, - 0x9e6000009f2, - 0x9fc000009fd, - 0x9fe000009ff, - 0xa0100000a04, - 0xa0500000a0b, - 0xa0f00000a11, - 0xa1300000a29, - 0xa2a00000a31, - 0xa3200000a33, - 0xa3500000a36, - 0xa3800000a3a, - 0xa3c00000a3d, - 0xa3e00000a43, - 0xa4700000a49, - 0xa4b00000a4e, - 0xa5100000a52, - 0xa5c00000a5d, - 0xa6600000a76, - 0xa8100000a84, - 0xa8500000a8e, - 0xa8f00000a92, - 0xa9300000aa9, - 0xaaa00000ab1, - 0xab200000ab4, - 0xab500000aba, - 0xabc00000ac6, - 0xac700000aca, - 0xacb00000ace, - 0xad000000ad1, - 0xae000000ae4, - 0xae600000af0, - 0xaf900000b00, - 0xb0100000b04, - 0xb0500000b0d, - 0xb0f00000b11, - 0xb1300000b29, - 0xb2a00000b31, - 0xb3200000b34, - 0xb3500000b3a, - 0xb3c00000b45, - 0xb4700000b49, - 0xb4b00000b4e, - 0xb5500000b58, - 0xb5f00000b64, - 0xb6600000b70, - 0xb7100000b72, - 0xb8200000b84, - 0xb8500000b8b, - 0xb8e00000b91, - 0xb9200000b96, - 0xb9900000b9b, - 0xb9c00000b9d, - 0xb9e00000ba0, - 0xba300000ba5, - 0xba800000bab, - 0xbae00000bba, - 0xbbe00000bc3, - 0xbc600000bc9, - 0xbca00000bce, - 0xbd000000bd1, - 0xbd700000bd8, - 0xbe600000bf0, - 0xc0000000c0d, - 0xc0e00000c11, - 0xc1200000c29, - 0xc2a00000c3a, - 0xc3c00000c45, - 0xc4600000c49, - 0xc4a00000c4e, - 0xc5500000c57, - 0xc5800000c5b, - 0xc5d00000c5e, - 0xc6000000c64, - 0xc6600000c70, - 0xc8000000c84, - 0xc8500000c8d, - 0xc8e00000c91, - 0xc9200000ca9, - 0xcaa00000cb4, - 0xcb500000cba, - 0xcbc00000cc5, - 0xcc600000cc9, - 0xcca00000cce, - 0xcd500000cd7, - 0xcdd00000cdf, - 0xce000000ce4, - 0xce600000cf0, - 0xcf100000cf3, - 0xd0000000d0d, - 0xd0e00000d11, - 0xd1200000d45, - 0xd4600000d49, - 0xd4a00000d4f, - 0xd5400000d58, - 0xd5f00000d64, - 0xd6600000d70, - 0xd7a00000d80, - 0xd8100000d84, - 0xd8500000d97, - 0xd9a00000db2, - 0xdb300000dbc, - 0xdbd00000dbe, - 0xdc000000dc7, - 0xdca00000dcb, - 0xdcf00000dd5, - 0xdd600000dd7, - 0xdd800000de0, - 0xde600000df0, - 0xdf200000df4, - 0xe0100000e33, - 0xe3400000e3b, - 0xe4000000e4f, - 0xe5000000e5a, - 0xe8100000e83, - 0xe8400000e85, - 0xe8600000e8b, - 0xe8c00000ea4, - 0xea500000ea6, - 0xea700000eb3, - 0xeb400000ebe, - 0xec000000ec5, - 0xec600000ec7, - 0xec800000ece, - 0xed000000eda, - 0xede00000ee0, - 0xf0000000f01, - 0xf0b00000f0c, - 0xf1800000f1a, - 0xf2000000f2a, - 0xf3500000f36, - 0xf3700000f38, - 
0xf3900000f3a, - 0xf3e00000f43, - 0xf4400000f48, - 0xf4900000f4d, - 0xf4e00000f52, - 0xf5300000f57, - 0xf5800000f5c, - 0xf5d00000f69, - 0xf6a00000f6d, - 0xf7100000f73, - 0xf7400000f75, - 0xf7a00000f81, - 0xf8200000f85, - 0xf8600000f93, - 0xf9400000f98, - 0xf9900000f9d, - 0xf9e00000fa2, - 0xfa300000fa7, - 0xfa800000fac, - 0xfad00000fb9, - 0xfba00000fbd, - 0xfc600000fc7, - 0x10000000104a, - 0x10500000109e, - 0x10d0000010fb, - 0x10fd00001100, - 0x120000001249, - 0x124a0000124e, - 0x125000001257, - 0x125800001259, - 0x125a0000125e, - 0x126000001289, - 0x128a0000128e, - 0x1290000012b1, - 0x12b2000012b6, - 0x12b8000012bf, - 0x12c0000012c1, - 0x12c2000012c6, - 0x12c8000012d7, - 0x12d800001311, - 0x131200001316, - 0x13180000135b, - 0x135d00001360, - 0x138000001390, - 0x13a0000013f6, - 0x14010000166d, - 0x166f00001680, - 0x16810000169b, - 0x16a0000016eb, - 0x16f1000016f9, - 0x170000001716, - 0x171f00001735, - 0x174000001754, - 0x17600000176d, - 0x176e00001771, - 0x177200001774, - 0x1780000017b4, - 0x17b6000017d4, - 0x17d7000017d8, - 0x17dc000017de, - 0x17e0000017ea, - 0x18100000181a, - 0x182000001879, - 0x1880000018ab, - 0x18b0000018f6, - 0x19000000191f, - 0x19200000192c, - 0x19300000193c, - 0x19460000196e, - 0x197000001975, - 0x1980000019ac, - 0x19b0000019ca, - 0x19d0000019da, - 0x1a0000001a1c, - 0x1a2000001a5f, - 0x1a6000001a7d, - 0x1a7f00001a8a, - 0x1a9000001a9a, - 0x1aa700001aa8, - 0x1ab000001abe, - 0x1abf00001acf, - 0x1b0000001b4d, - 0x1b5000001b5a, - 0x1b6b00001b74, - 0x1b8000001bf4, - 0x1c0000001c38, - 0x1c4000001c4a, - 0x1c4d00001c7e, - 0x1cd000001cd3, - 0x1cd400001cfb, - 0x1d0000001d2c, - 0x1d2f00001d30, - 0x1d3b00001d3c, - 0x1d4e00001d4f, - 0x1d6b00001d78, - 0x1d7900001d9b, - 0x1dc000001e00, - 0x1e0100001e02, - 0x1e0300001e04, - 0x1e0500001e06, - 0x1e0700001e08, - 0x1e0900001e0a, - 0x1e0b00001e0c, - 0x1e0d00001e0e, - 0x1e0f00001e10, - 0x1e1100001e12, - 0x1e1300001e14, - 0x1e1500001e16, - 0x1e1700001e18, - 0x1e1900001e1a, - 0x1e1b00001e1c, - 0x1e1d00001e1e, - 0x1e1f00001e20, - 0x1e2100001e22, - 0x1e2300001e24, - 0x1e2500001e26, - 0x1e2700001e28, - 0x1e2900001e2a, - 0x1e2b00001e2c, - 0x1e2d00001e2e, - 0x1e2f00001e30, - 0x1e3100001e32, - 0x1e3300001e34, - 0x1e3500001e36, - 0x1e3700001e38, - 0x1e3900001e3a, - 0x1e3b00001e3c, - 0x1e3d00001e3e, - 0x1e3f00001e40, - 0x1e4100001e42, - 0x1e4300001e44, - 0x1e4500001e46, - 0x1e4700001e48, - 0x1e4900001e4a, - 0x1e4b00001e4c, - 0x1e4d00001e4e, - 0x1e4f00001e50, - 0x1e5100001e52, - 0x1e5300001e54, - 0x1e5500001e56, - 0x1e5700001e58, - 0x1e5900001e5a, - 0x1e5b00001e5c, - 0x1e5d00001e5e, - 0x1e5f00001e60, - 0x1e6100001e62, - 0x1e6300001e64, - 0x1e6500001e66, - 0x1e6700001e68, - 0x1e6900001e6a, - 0x1e6b00001e6c, - 0x1e6d00001e6e, - 0x1e6f00001e70, - 0x1e7100001e72, - 0x1e7300001e74, - 0x1e7500001e76, - 0x1e7700001e78, - 0x1e7900001e7a, - 0x1e7b00001e7c, - 0x1e7d00001e7e, - 0x1e7f00001e80, - 0x1e8100001e82, - 0x1e8300001e84, - 0x1e8500001e86, - 0x1e8700001e88, - 0x1e8900001e8a, - 0x1e8b00001e8c, - 0x1e8d00001e8e, - 0x1e8f00001e90, - 0x1e9100001e92, - 0x1e9300001e94, - 0x1e9500001e9a, - 0x1e9c00001e9e, - 0x1e9f00001ea0, - 0x1ea100001ea2, - 0x1ea300001ea4, - 0x1ea500001ea6, - 0x1ea700001ea8, - 0x1ea900001eaa, - 0x1eab00001eac, - 0x1ead00001eae, - 0x1eaf00001eb0, - 0x1eb100001eb2, - 0x1eb300001eb4, - 0x1eb500001eb6, - 0x1eb700001eb8, - 0x1eb900001eba, - 0x1ebb00001ebc, - 0x1ebd00001ebe, - 0x1ebf00001ec0, - 0x1ec100001ec2, - 0x1ec300001ec4, - 0x1ec500001ec6, - 0x1ec700001ec8, - 0x1ec900001eca, - 0x1ecb00001ecc, - 0x1ecd00001ece, - 0x1ecf00001ed0, - 
0x1ed100001ed2, - 0x1ed300001ed4, - 0x1ed500001ed6, - 0x1ed700001ed8, - 0x1ed900001eda, - 0x1edb00001edc, - 0x1edd00001ede, - 0x1edf00001ee0, - 0x1ee100001ee2, - 0x1ee300001ee4, - 0x1ee500001ee6, - 0x1ee700001ee8, - 0x1ee900001eea, - 0x1eeb00001eec, - 0x1eed00001eee, - 0x1eef00001ef0, - 0x1ef100001ef2, - 0x1ef300001ef4, - 0x1ef500001ef6, - 0x1ef700001ef8, - 0x1ef900001efa, - 0x1efb00001efc, - 0x1efd00001efe, - 0x1eff00001f08, - 0x1f1000001f16, - 0x1f2000001f28, - 0x1f3000001f38, - 0x1f4000001f46, - 0x1f5000001f58, - 0x1f6000001f68, - 0x1f7000001f71, - 0x1f7200001f73, - 0x1f7400001f75, - 0x1f7600001f77, - 0x1f7800001f79, - 0x1f7a00001f7b, - 0x1f7c00001f7d, - 0x1fb000001fb2, - 0x1fb600001fb7, - 0x1fc600001fc7, - 0x1fd000001fd3, - 0x1fd600001fd8, - 0x1fe000001fe3, - 0x1fe400001fe8, - 0x1ff600001ff7, - 0x214e0000214f, - 0x218400002185, - 0x2c3000002c60, - 0x2c6100002c62, - 0x2c6500002c67, - 0x2c6800002c69, - 0x2c6a00002c6b, - 0x2c6c00002c6d, - 0x2c7100002c72, - 0x2c7300002c75, - 0x2c7600002c7c, - 0x2c8100002c82, - 0x2c8300002c84, - 0x2c8500002c86, - 0x2c8700002c88, - 0x2c8900002c8a, - 0x2c8b00002c8c, - 0x2c8d00002c8e, - 0x2c8f00002c90, - 0x2c9100002c92, - 0x2c9300002c94, - 0x2c9500002c96, - 0x2c9700002c98, - 0x2c9900002c9a, - 0x2c9b00002c9c, - 0x2c9d00002c9e, - 0x2c9f00002ca0, - 0x2ca100002ca2, - 0x2ca300002ca4, - 0x2ca500002ca6, - 0x2ca700002ca8, - 0x2ca900002caa, - 0x2cab00002cac, - 0x2cad00002cae, - 0x2caf00002cb0, - 0x2cb100002cb2, - 0x2cb300002cb4, - 0x2cb500002cb6, - 0x2cb700002cb8, - 0x2cb900002cba, - 0x2cbb00002cbc, - 0x2cbd00002cbe, - 0x2cbf00002cc0, - 0x2cc100002cc2, - 0x2cc300002cc4, - 0x2cc500002cc6, - 0x2cc700002cc8, - 0x2cc900002cca, - 0x2ccb00002ccc, - 0x2ccd00002cce, - 0x2ccf00002cd0, - 0x2cd100002cd2, - 0x2cd300002cd4, - 0x2cd500002cd6, - 0x2cd700002cd8, - 0x2cd900002cda, - 0x2cdb00002cdc, - 0x2cdd00002cde, - 0x2cdf00002ce0, - 0x2ce100002ce2, - 0x2ce300002ce5, - 0x2cec00002ced, - 0x2cee00002cf2, - 0x2cf300002cf4, - 0x2d0000002d26, - 0x2d2700002d28, - 0x2d2d00002d2e, - 0x2d3000002d68, - 0x2d7f00002d97, - 0x2da000002da7, - 0x2da800002daf, - 0x2db000002db7, - 0x2db800002dbf, - 0x2dc000002dc7, - 0x2dc800002dcf, - 0x2dd000002dd7, - 0x2dd800002ddf, - 0x2de000002e00, - 0x2e2f00002e30, - 0x300500003008, - 0x302a0000302e, - 0x303c0000303d, - 0x304100003097, - 0x30990000309b, - 0x309d0000309f, - 0x30a1000030fb, - 0x30fc000030ff, - 0x310500003130, - 0x31a0000031c0, - 0x31f000003200, - 0x340000004dc0, - 0x4e000000a48d, - 0xa4d00000a4fe, - 0xa5000000a60d, - 0xa6100000a62c, - 0xa6410000a642, - 0xa6430000a644, - 0xa6450000a646, - 0xa6470000a648, - 0xa6490000a64a, - 0xa64b0000a64c, - 0xa64d0000a64e, - 0xa64f0000a650, - 0xa6510000a652, - 0xa6530000a654, - 0xa6550000a656, - 0xa6570000a658, - 0xa6590000a65a, - 0xa65b0000a65c, - 0xa65d0000a65e, - 0xa65f0000a660, - 0xa6610000a662, - 0xa6630000a664, - 0xa6650000a666, - 0xa6670000a668, - 0xa6690000a66a, - 0xa66b0000a66c, - 0xa66d0000a670, - 0xa6740000a67e, - 0xa67f0000a680, - 0xa6810000a682, - 0xa6830000a684, - 0xa6850000a686, - 0xa6870000a688, - 0xa6890000a68a, - 0xa68b0000a68c, - 0xa68d0000a68e, - 0xa68f0000a690, - 0xa6910000a692, - 0xa6930000a694, - 0xa6950000a696, - 0xa6970000a698, - 0xa6990000a69a, - 0xa69b0000a69c, - 0xa69e0000a6e6, - 0xa6f00000a6f2, - 0xa7170000a720, - 0xa7230000a724, - 0xa7250000a726, - 0xa7270000a728, - 0xa7290000a72a, - 0xa72b0000a72c, - 0xa72d0000a72e, - 0xa72f0000a732, - 0xa7330000a734, - 0xa7350000a736, - 0xa7370000a738, - 0xa7390000a73a, - 0xa73b0000a73c, - 0xa73d0000a73e, - 0xa73f0000a740, - 0xa7410000a742, - 
0xa7430000a744, - 0xa7450000a746, - 0xa7470000a748, - 0xa7490000a74a, - 0xa74b0000a74c, - 0xa74d0000a74e, - 0xa74f0000a750, - 0xa7510000a752, - 0xa7530000a754, - 0xa7550000a756, - 0xa7570000a758, - 0xa7590000a75a, - 0xa75b0000a75c, - 0xa75d0000a75e, - 0xa75f0000a760, - 0xa7610000a762, - 0xa7630000a764, - 0xa7650000a766, - 0xa7670000a768, - 0xa7690000a76a, - 0xa76b0000a76c, - 0xa76d0000a76e, - 0xa76f0000a770, - 0xa7710000a779, - 0xa77a0000a77b, - 0xa77c0000a77d, - 0xa77f0000a780, - 0xa7810000a782, - 0xa7830000a784, - 0xa7850000a786, - 0xa7870000a789, - 0xa78c0000a78d, - 0xa78e0000a790, - 0xa7910000a792, - 0xa7930000a796, - 0xa7970000a798, - 0xa7990000a79a, - 0xa79b0000a79c, - 0xa79d0000a79e, - 0xa79f0000a7a0, - 0xa7a10000a7a2, - 0xa7a30000a7a4, - 0xa7a50000a7a6, - 0xa7a70000a7a8, - 0xa7a90000a7aa, - 0xa7af0000a7b0, - 0xa7b50000a7b6, - 0xa7b70000a7b8, - 0xa7b90000a7ba, - 0xa7bb0000a7bc, - 0xa7bd0000a7be, - 0xa7bf0000a7c0, - 0xa7c10000a7c2, - 0xa7c30000a7c4, - 0xa7c80000a7c9, - 0xa7ca0000a7cb, - 0xa7d10000a7d2, - 0xa7d30000a7d4, - 0xa7d50000a7d6, - 0xa7d70000a7d8, - 0xa7d90000a7da, - 0xa7f20000a7f5, - 0xa7f60000a7f8, - 0xa7fa0000a828, - 0xa82c0000a82d, - 0xa8400000a874, - 0xa8800000a8c6, - 0xa8d00000a8da, - 0xa8e00000a8f8, - 0xa8fb0000a8fc, - 0xa8fd0000a92e, - 0xa9300000a954, - 0xa9800000a9c1, - 0xa9cf0000a9da, - 0xa9e00000a9ff, - 0xaa000000aa37, - 0xaa400000aa4e, - 0xaa500000aa5a, - 0xaa600000aa77, - 0xaa7a0000aac3, - 0xaadb0000aade, - 0xaae00000aaf0, - 0xaaf20000aaf7, - 0xab010000ab07, - 0xab090000ab0f, - 0xab110000ab17, - 0xab200000ab27, - 0xab280000ab2f, - 0xab300000ab5b, - 0xab600000ab6a, - 0xabc00000abeb, - 0xabec0000abee, - 0xabf00000abfa, - 0xac000000d7a4, - 0xfa0e0000fa10, - 0xfa110000fa12, - 0xfa130000fa15, - 0xfa1f0000fa20, - 0xfa210000fa22, - 0xfa230000fa25, - 0xfa270000fa2a, - 0xfb1e0000fb1f, - 0xfe200000fe30, - 0xfe730000fe74, - 0x100000001000c, - 0x1000d00010027, - 0x100280001003b, - 0x1003c0001003e, - 0x1003f0001004e, - 0x100500001005e, - 0x10080000100fb, - 0x101fd000101fe, - 0x102800001029d, - 0x102a0000102d1, - 0x102e0000102e1, - 0x1030000010320, - 0x1032d00010341, - 0x103420001034a, - 0x103500001037b, - 0x103800001039e, - 0x103a0000103c4, - 0x103c8000103d0, - 0x104280001049e, - 0x104a0000104aa, - 0x104d8000104fc, - 0x1050000010528, - 0x1053000010564, - 0x10597000105a2, - 0x105a3000105b2, - 0x105b3000105ba, - 0x105bb000105bd, - 0x1060000010737, - 0x1074000010756, - 0x1076000010768, - 0x1078000010786, - 0x10787000107b1, - 0x107b2000107bb, - 0x1080000010806, - 0x1080800010809, - 0x1080a00010836, - 0x1083700010839, - 0x1083c0001083d, - 0x1083f00010856, - 0x1086000010877, - 0x108800001089f, - 0x108e0000108f3, - 0x108f4000108f6, - 0x1090000010916, - 0x109200001093a, - 0x10980000109b8, - 0x109be000109c0, - 0x10a0000010a04, - 0x10a0500010a07, - 0x10a0c00010a14, - 0x10a1500010a18, - 0x10a1900010a36, - 0x10a3800010a3b, - 0x10a3f00010a40, - 0x10a6000010a7d, - 0x10a8000010a9d, - 0x10ac000010ac8, - 0x10ac900010ae7, - 0x10b0000010b36, - 0x10b4000010b56, - 0x10b6000010b73, - 0x10b8000010b92, - 0x10c0000010c49, - 0x10cc000010cf3, - 0x10d0000010d28, - 0x10d3000010d3a, - 0x10e8000010eaa, - 0x10eab00010ead, - 0x10eb000010eb2, - 0x10f0000010f1d, - 0x10f2700010f28, - 0x10f3000010f51, - 0x10f7000010f86, - 0x10fb000010fc5, - 0x10fe000010ff7, - 0x1100000011047, - 0x1106600011076, - 0x1107f000110bb, - 0x110c2000110c3, - 0x110d0000110e9, - 0x110f0000110fa, - 0x1110000011135, - 0x1113600011140, - 0x1114400011148, - 0x1115000011174, - 0x1117600011177, - 0x11180000111c5, - 0x111c9000111cd, - 
0x111ce000111db, - 0x111dc000111dd, - 0x1120000011212, - 0x1121300011238, - 0x1123e0001123f, - 0x1128000011287, - 0x1128800011289, - 0x1128a0001128e, - 0x1128f0001129e, - 0x1129f000112a9, - 0x112b0000112eb, - 0x112f0000112fa, - 0x1130000011304, - 0x113050001130d, - 0x1130f00011311, - 0x1131300011329, - 0x1132a00011331, - 0x1133200011334, - 0x113350001133a, - 0x1133b00011345, - 0x1134700011349, - 0x1134b0001134e, - 0x1135000011351, - 0x1135700011358, - 0x1135d00011364, - 0x113660001136d, - 0x1137000011375, - 0x114000001144b, - 0x114500001145a, - 0x1145e00011462, - 0x11480000114c6, - 0x114c7000114c8, - 0x114d0000114da, - 0x11580000115b6, - 0x115b8000115c1, - 0x115d8000115de, - 0x1160000011641, - 0x1164400011645, - 0x116500001165a, - 0x11680000116b9, - 0x116c0000116ca, - 0x117000001171b, - 0x1171d0001172c, - 0x117300001173a, - 0x1174000011747, - 0x118000001183b, - 0x118c0000118ea, - 0x118ff00011907, - 0x119090001190a, - 0x1190c00011914, - 0x1191500011917, - 0x1191800011936, - 0x1193700011939, - 0x1193b00011944, - 0x119500001195a, - 0x119a0000119a8, - 0x119aa000119d8, - 0x119da000119e2, - 0x119e3000119e5, - 0x11a0000011a3f, - 0x11a4700011a48, - 0x11a5000011a9a, - 0x11a9d00011a9e, - 0x11ab000011af9, - 0x11c0000011c09, - 0x11c0a00011c37, - 0x11c3800011c41, - 0x11c5000011c5a, - 0x11c7200011c90, - 0x11c9200011ca8, - 0x11ca900011cb7, - 0x11d0000011d07, - 0x11d0800011d0a, - 0x11d0b00011d37, - 0x11d3a00011d3b, - 0x11d3c00011d3e, - 0x11d3f00011d48, - 0x11d5000011d5a, - 0x11d6000011d66, - 0x11d6700011d69, - 0x11d6a00011d8f, - 0x11d9000011d92, - 0x11d9300011d99, - 0x11da000011daa, - 0x11ee000011ef7, - 0x11fb000011fb1, - 0x120000001239a, - 0x1248000012544, - 0x12f9000012ff1, - 0x130000001342f, - 0x1440000014647, - 0x1680000016a39, - 0x16a4000016a5f, - 0x16a6000016a6a, - 0x16a7000016abf, - 0x16ac000016aca, - 0x16ad000016aee, - 0x16af000016af5, - 0x16b0000016b37, - 0x16b4000016b44, - 0x16b5000016b5a, - 0x16b6300016b78, - 0x16b7d00016b90, - 0x16e6000016e80, - 0x16f0000016f4b, - 0x16f4f00016f88, - 0x16f8f00016fa0, - 0x16fe000016fe2, - 0x16fe300016fe5, - 0x16ff000016ff2, - 0x17000000187f8, - 0x1880000018cd6, - 0x18d0000018d09, - 0x1aff00001aff4, - 0x1aff50001affc, - 0x1affd0001afff, - 0x1b0000001b123, - 0x1b1500001b153, - 0x1b1640001b168, - 0x1b1700001b2fc, - 0x1bc000001bc6b, - 0x1bc700001bc7d, - 0x1bc800001bc89, - 0x1bc900001bc9a, - 0x1bc9d0001bc9f, - 0x1cf000001cf2e, - 0x1cf300001cf47, - 0x1da000001da37, - 0x1da3b0001da6d, - 0x1da750001da76, - 0x1da840001da85, - 0x1da9b0001daa0, - 0x1daa10001dab0, - 0x1df000001df1f, - 0x1e0000001e007, - 0x1e0080001e019, - 0x1e01b0001e022, - 0x1e0230001e025, - 0x1e0260001e02b, - 0x1e1000001e12d, - 0x1e1300001e13e, - 0x1e1400001e14a, - 0x1e14e0001e14f, - 0x1e2900001e2af, - 0x1e2c00001e2fa, - 0x1e7e00001e7e7, - 0x1e7e80001e7ec, - 0x1e7ed0001e7ef, - 0x1e7f00001e7ff, - 0x1e8000001e8c5, - 0x1e8d00001e8d7, - 0x1e9220001e94c, - 0x1e9500001e95a, - 0x1fbf00001fbfa, - 0x200000002a6e0, - 0x2a7000002b739, - 0x2b7400002b81e, - 0x2b8200002cea2, - 0x2ceb00002ebe1, - 0x300000003134b, - ), - 'CONTEXTJ': ( - 0x200c0000200e, - ), - 'CONTEXTO': ( - 0xb7000000b8, - 0x37500000376, - 0x5f3000005f5, - 0x6600000066a, - 0x6f0000006fa, - 0x30fb000030fc, - ), -} diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/typing_extensions.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/typing_extensions.py deleted file mode 100644 index 4fd8247683ec3a73efa11192b84104b9d9c932e4..0000000000000000000000000000000000000000 --- 
a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/typing_extensions.py +++ /dev/null @@ -1,2069 +0,0 @@ -import abc -import collections -import collections.abc -import functools -import operator -import sys -import types as _types -import typing - - -# Please keep __all__ alphabetized within each category. -__all__ = [ - # Super-special typing primitives. - 'ClassVar', - 'Concatenate', - 'Final', - 'LiteralString', - 'ParamSpec', - 'ParamSpecArgs', - 'ParamSpecKwargs', - 'Self', - 'Type', - 'TypeVarTuple', - 'Unpack', - - # ABCs (from collections.abc). - 'Awaitable', - 'AsyncIterator', - 'AsyncIterable', - 'Coroutine', - 'AsyncGenerator', - 'AsyncContextManager', - 'ChainMap', - - # Concrete collection types. - 'ContextManager', - 'Counter', - 'Deque', - 'DefaultDict', - 'NamedTuple', - 'OrderedDict', - 'TypedDict', - - # Structural checks, a.k.a. protocols. - 'SupportsIndex', - - # One-off things. - 'Annotated', - 'assert_never', - 'assert_type', - 'clear_overloads', - 'dataclass_transform', - 'get_overloads', - 'final', - 'get_args', - 'get_origin', - 'get_type_hints', - 'IntVar', - 'is_typeddict', - 'Literal', - 'NewType', - 'overload', - 'Protocol', - 'reveal_type', - 'runtime', - 'runtime_checkable', - 'Text', - 'TypeAlias', - 'TypeGuard', - 'TYPE_CHECKING', - 'Never', - 'NoReturn', - 'Required', - 'NotRequired', -] - -# for backward compatibility -PEP_560 = True -GenericMeta = type - -# The functions below are modified copies of typing internal helpers. -# They are needed by _ProtocolMeta and they provide support for PEP 646. - -_marker = object() - - -def _check_generic(cls, parameters, elen=_marker): - """Check correct count for parameters of a generic cls (internal helper). - This gives a nice error message in case of count mismatch. - """ - if not elen: - raise TypeError(f"{cls} is not a generic class") - if elen is _marker: - if not hasattr(cls, "__parameters__") or not cls.__parameters__: - raise TypeError(f"{cls} is not a generic class") - elen = len(cls.__parameters__) - alen = len(parameters) - if alen != elen: - if hasattr(cls, "__parameters__"): - parameters = [p for p in cls.__parameters__ if not _is_unpack(p)] - num_tv_tuples = sum(isinstance(p, TypeVarTuple) for p in parameters) - if (num_tv_tuples > 0) and (alen >= elen - num_tv_tuples): - return - raise TypeError(f"Too {'many' if alen > elen else 'few'} parameters for {cls};" - f" actual {alen}, expected {elen}") - - -if sys.version_info >= (3, 10): - def _should_collect_from_parameters(t): - return isinstance( - t, (typing._GenericAlias, _types.GenericAlias, _types.UnionType) - ) -elif sys.version_info >= (3, 9): - def _should_collect_from_parameters(t): - return isinstance(t, (typing._GenericAlias, _types.GenericAlias)) -else: - def _should_collect_from_parameters(t): - return isinstance(t, typing._GenericAlias) and not t._special - - -def _collect_type_vars(types, typevar_types=None): - """Collect all type variable contained in types in order of - first appearance (lexicographic order). For example:: - - _collect_type_vars((T, List[S, T])) == (T, S) - """ - if typevar_types is None: - typevar_types = typing.TypeVar - tvars = [] - for t in types: - if ( - isinstance(t, typevar_types) and - t not in tvars and - not _is_unpack(t) - ): - tvars.append(t) - if _should_collect_from_parameters(t): - tvars.extend([t for t in t.__parameters__ if t not in tvars]) - return tuple(tvars) - - -NoReturn = typing.NoReturn - -# Some unconstrained type variables. 
These are used by the container types. -# (These are not for export.) -T = typing.TypeVar('T') # Any type. -KT = typing.TypeVar('KT') # Key type. -VT = typing.TypeVar('VT') # Value type. -T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers. -T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant. - -ClassVar = typing.ClassVar - -# On older versions of typing there is an internal class named "Final". -# 3.8+ -if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7): - Final = typing.Final -# 3.7 -else: - class _FinalForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Final = _FinalForm('Final', - doc="""A special typing construct to indicate that a name - cannot be re-assigned or overridden in a subclass. - For example: - - MAX_SIZE: Final = 9000 - MAX_SIZE += 1 # Error reported by type checker - - class Connection: - TIMEOUT: Final[int] = 10 - class FastConnector(Connection): - TIMEOUT = 1 # Error reported by type checker - - There is no runtime checking of these properties.""") - -if sys.version_info >= (3, 11): - final = typing.final -else: - # @final exists in 3.8+, but we backport it for all versions - # before 3.11 to keep support for the __final__ attribute. - # See https://bugs.python.org/issue46342 - def final(f): - """This decorator can be used to indicate to type checkers that - the decorated method cannot be overridden, and decorated class - cannot be subclassed. For example: - - class Base: - @final - def done(self) -> None: - ... - class Sub(Base): - def done(self) -> None: # Error reported by type checker - ... - @final - class Leaf: - ... - class Other(Leaf): # Error reported by type checker - ... - - There is no runtime checking of these properties. The decorator - sets the ``__final__`` attribute to ``True`` on the decorated object - to allow runtime introspection. - """ - try: - f.__final__ = True - except (AttributeError, TypeError): - # Skip the attribute silently if it is not writable. - # AttributeError happens if the object has __slots__ or a - # read-only property, TypeError if it's a builtin class. - pass - return f - - -def IntVar(name): - return typing.TypeVar(name) - - -# 3.8+: -if hasattr(typing, 'Literal'): - Literal = typing.Literal -# 3.7: -else: - class _LiteralForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - return typing._GenericAlias(self, parameters) - - Literal = _LiteralForm('Literal', - doc="""A type that can be used to indicate to type checkers - that the corresponding value has a value literally equivalent - to the provided parameter. For example: - - var: Literal[4] = 4 - - The type checker understands that 'var' is literally equal to - the value 4 and no other value. - - Literal[...] cannot be subclassed. 
There is no runtime - checking verifying that the parameter is actually a value - instead of a type.""") - - -_overload_dummy = typing._overload_dummy # noqa - - -if hasattr(typing, "get_overloads"): # 3.11+ - overload = typing.overload - get_overloads = typing.get_overloads - clear_overloads = typing.clear_overloads -else: - # {module: {qualname: {firstlineno: func}}} - _overload_registry = collections.defaultdict( - functools.partial(collections.defaultdict, dict) - ) - - def overload(func): - """Decorator for overloaded functions/methods. - - In a stub file, place two or more stub definitions for the same - function in a row, each decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... - @overload - def utf8(value: str) -> bytes: ... - - In a non-stub file (i.e. a regular .py file), do the same but - follow it with an implementation. The implementation should *not* - be decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... - @overload - def utf8(value: str) -> bytes: ... - def utf8(value): - # implementation goes here - - The overloads for a function can be retrieved at runtime using the - get_overloads() function. - """ - # classmethod and staticmethod - f = getattr(func, "__func__", func) - try: - _overload_registry[f.__module__][f.__qualname__][ - f.__code__.co_firstlineno - ] = func - except AttributeError: - # Not a normal function; ignore. - pass - return _overload_dummy - - def get_overloads(func): - """Return all defined overloads for *func* as a sequence.""" - # classmethod and staticmethod - f = getattr(func, "__func__", func) - if f.__module__ not in _overload_registry: - return [] - mod_dict = _overload_registry[f.__module__] - if f.__qualname__ not in mod_dict: - return [] - return list(mod_dict[f.__qualname__].values()) - - def clear_overloads(): - """Clear all overloads in the registry.""" - _overload_registry.clear() - - -# This is not a real generic class. Don't use outside annotations. -Type = typing.Type - -# Various ABCs mimicking those in collections.abc. -# A few are simply re-exported for completeness. 
- - -Awaitable = typing.Awaitable -Coroutine = typing.Coroutine -AsyncIterable = typing.AsyncIterable -AsyncIterator = typing.AsyncIterator -Deque = typing.Deque -ContextManager = typing.ContextManager -AsyncContextManager = typing.AsyncContextManager -DefaultDict = typing.DefaultDict - -# 3.7.2+ -if hasattr(typing, 'OrderedDict'): - OrderedDict = typing.OrderedDict -# 3.7.0-3.7.2 -else: - OrderedDict = typing._alias(collections.OrderedDict, (KT, VT)) - -Counter = typing.Counter -ChainMap = typing.ChainMap -AsyncGenerator = typing.AsyncGenerator -NewType = typing.NewType -Text = typing.Text -TYPE_CHECKING = typing.TYPE_CHECKING - - -_PROTO_WHITELIST = ['Callable', 'Awaitable', - 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator', - 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible', - 'ContextManager', 'AsyncContextManager'] - - -def _get_protocol_attrs(cls): - attrs = set() - for base in cls.__mro__[:-1]: # without object - if base.__name__ in ('Protocol', 'Generic'): - continue - annotations = getattr(base, '__annotations__', {}) - for attr in list(base.__dict__.keys()) + list(annotations.keys()): - if (not attr.startswith('_abc_') and attr not in ( - '__abstractmethods__', '__annotations__', '__weakref__', - '_is_protocol', '_is_runtime_protocol', '__dict__', - '__args__', '__slots__', - '__next_in_mro__', '__parameters__', '__origin__', - '__orig_bases__', '__extra__', '__tree_hash__', - '__doc__', '__subclasshook__', '__init__', '__new__', - '__module__', '_MutableMapping__marker', '_gorg')): - attrs.add(attr) - return attrs - - -def _is_callable_members_only(cls): - return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls)) - - -def _maybe_adjust_parameters(cls): - """Helper function used in Protocol.__init_subclass__ and _TypedDictMeta.__new__. - - The contents of this function are very similar - to logic found in typing.Generic.__init_subclass__ - on the CPython main branch. - """ - tvars = [] - if '__orig_bases__' in cls.__dict__: - tvars = typing._collect_type_vars(cls.__orig_bases__) - # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn]. - # If found, tvars must be a subset of it. - # If not found, tvars is it. - # Also check for and reject plain Generic, - # and reject multiple Generic[...] and/or Protocol[...]. - gvars = None - for base in cls.__orig_bases__: - if (isinstance(base, typing._GenericAlias) and - base.__origin__ in (typing.Generic, Protocol)): - # for error messages - the_base = base.__origin__.__name__ - if gvars is not None: - raise TypeError( - "Cannot inherit from Generic[...]" - " and/or Protocol[...] multiple types.") - gvars = base.__parameters__ - if gvars is None: - gvars = tvars - else: - tvarset = set(tvars) - gvarset = set(gvars) - if not tvarset <= gvarset: - s_vars = ', '.join(str(t) for t in tvars if t not in gvarset) - s_args = ', '.join(str(g) for g in gvars) - raise TypeError(f"Some type variables ({s_vars}) are" - f" not listed in {the_base}[{s_args}]") - tvars = gvars - cls.__parameters__ = tuple(tvars) - - -# 3.8+ -if hasattr(typing, 'Protocol'): - Protocol = typing.Protocol -# 3.7 -else: - - def _no_init(self, *args, **kwargs): - if type(self)._is_protocol: - raise TypeError('Protocols cannot be instantiated') - - class _ProtocolMeta(abc.ABCMeta): - # This metaclass is a bit unfortunate and exists only because of the lack - # of __instancehook__. - def __instancecheck__(cls, instance): - # We need this method for situations where attributes are - # assigned in __init__. 
- if ((not getattr(cls, '_is_protocol', False) or - _is_callable_members_only(cls)) and - issubclass(instance.__class__, cls)): - return True - if cls._is_protocol: - if all(hasattr(instance, attr) and - (not callable(getattr(cls, attr, None)) or - getattr(instance, attr) is not None) - for attr in _get_protocol_attrs(cls)): - return True - return super().__instancecheck__(instance) - - class Protocol(metaclass=_ProtocolMeta): - # There is quite a lot of overlapping code with typing.Generic. - # Unfortunately it is hard to avoid this while these live in two different - # modules. The duplicated code will be removed when Protocol is moved to typing. - """Base class for protocol classes. Protocol classes are defined as:: - - class Proto(Protocol): - def meth(self) -> int: - ... - - Such classes are primarily used with static type checkers that recognize - structural subtyping (static duck-typing), for example:: - - class C: - def meth(self) -> int: - return 0 - - def func(x: Proto) -> int: - return x.meth() - - func(C()) # Passes static type check - - See PEP 544 for details. Protocol classes decorated with - @typing_extensions.runtime act as simple-minded runtime protocol that checks - only the presence of given attributes, ignoring their type signatures. - - Protocol classes can be generic, they are defined as:: - - class GenProto(Protocol[T]): - def meth(self) -> T: - ... - """ - __slots__ = () - _is_protocol = True - - def __new__(cls, *args, **kwds): - if cls is Protocol: - raise TypeError("Type Protocol cannot be instantiated; " - "it can only be used as a base class") - return super().__new__(cls) - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple): - params = (params,) - if not params and cls is not typing.Tuple: - raise TypeError( - f"Parameter list to {cls.__qualname__}[...] cannot be empty") - msg = "Parameters to generic types must be types." - params = tuple(typing._type_check(p, msg) for p in params) # noqa - if cls is Protocol: - # Generic can only be subscripted with unique type variables. - if not all(isinstance(p, typing.TypeVar) for p in params): - i = 0 - while isinstance(params[i], typing.TypeVar): - i += 1 - raise TypeError( - "Parameters to Protocol[...] must all be type variables." - f" Parameter {i + 1} is {params[i]}") - if len(set(params)) != len(params): - raise TypeError( - "Parameters to Protocol[...] must all be unique") - else: - # Subscripting a regular Generic subclass. - _check_generic(cls, params, len(cls.__parameters__)) - return typing._GenericAlias(cls, params) - - def __init_subclass__(cls, *args, **kwargs): - if '__orig_bases__' in cls.__dict__: - error = typing.Generic in cls.__orig_bases__ - else: - error = typing.Generic in cls.__bases__ - if error: - raise TypeError("Cannot inherit from plain Generic") - _maybe_adjust_parameters(cls) - - # Determine if this is a protocol or a concrete subclass. - if not cls.__dict__.get('_is_protocol', None): - cls._is_protocol = any(b is Protocol for b in cls.__bases__) - - # Set (or override) the protocol subclass hook. 
- def _proto_hook(other): - if not cls.__dict__.get('_is_protocol', None): - return NotImplemented - if not getattr(cls, '_is_runtime_protocol', False): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Instance and class checks can only be used with" - " @runtime protocols") - if not _is_callable_members_only(cls): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Protocols with non-method members" - " don't support issubclass()") - if not isinstance(other, type): - # Same error as for issubclass(1, int) - raise TypeError('issubclass() arg 1 must be a class') - for attr in _get_protocol_attrs(cls): - for base in other.__mro__: - if attr in base.__dict__: - if base.__dict__[attr] is None: - return NotImplemented - break - annotations = getattr(base, '__annotations__', {}) - if (isinstance(annotations, typing.Mapping) and - attr in annotations and - isinstance(other, _ProtocolMeta) and - other._is_protocol): - break - else: - return NotImplemented - return True - if '__subclasshook__' not in cls.__dict__: - cls.__subclasshook__ = _proto_hook - - # We have nothing more to do for non-protocols. - if not cls._is_protocol: - return - - # Check consistency of bases. - for base in cls.__bases__: - if not (base in (object, typing.Generic) or - base.__module__ == 'collections.abc' and - base.__name__ in _PROTO_WHITELIST or - isinstance(base, _ProtocolMeta) and base._is_protocol): - raise TypeError('Protocols can only inherit from other' - f' protocols, got {repr(base)}') - cls.__init__ = _no_init - - -# 3.8+ -if hasattr(typing, 'runtime_checkable'): - runtime_checkable = typing.runtime_checkable -# 3.7 -else: - def runtime_checkable(cls): - """Mark a protocol class as a runtime protocol, so that it - can be used with isinstance() and issubclass(). Raise TypeError - if applied to a non-protocol class. - - This allows a simple-minded structural check very similar to the - one-offs in collections.abc such as Hashable. - """ - if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol: - raise TypeError('@runtime_checkable can be only applied to protocol classes,' - f' got {cls!r}') - cls._is_runtime_protocol = True - return cls - - -# Exists for backwards compatibility. -runtime = runtime_checkable - - -# 3.8+ -if hasattr(typing, 'SupportsIndex'): - SupportsIndex = typing.SupportsIndex -# 3.7 -else: - @runtime_checkable - class SupportsIndex(Protocol): - __slots__ = () - - @abc.abstractmethod - def __index__(self) -> int: - pass - - -if hasattr(typing, "Required"): - # The standard library TypedDict in Python 3.8 does not store runtime information - # about which (if any) keys are optional. See https://bugs.python.org/issue38834 - # The standard library TypedDict in Python 3.9.0/1 does not honour the "total" - # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059 - # The standard library TypedDict below Python 3.11 does not store runtime - # information about optional and required keys when using Required or NotRequired. - # Generic TypedDicts are also impossible using typing.TypedDict on Python <3.11. - TypedDict = typing.TypedDict - _TypedDictMeta = typing._TypedDictMeta - is_typeddict = typing.is_typeddict -else: - def _check_fails(cls, other): - try: - if sys._getframe(1).f_globals['__name__'] not in ['abc', - 'functools', - 'typing']: - # Typed dicts are only for static structural subtyping. 
- raise TypeError('TypedDict does not support instance and class checks') - except (AttributeError, ValueError): - pass - return False - - def _dict_new(*args, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - return dict(*args, **kwargs) - - _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)' - - def _typeddict_new(*args, total=True, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - if args: - typename, args = args[0], args[1:] # allow the "_typename" keyword be passed - elif '_typename' in kwargs: - typename = kwargs.pop('_typename') - import warnings - warnings.warn("Passing '_typename' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - raise TypeError("TypedDict.__new__() missing 1 required positional " - "argument: '_typename'") - if args: - try: - fields, = args # allow the "_fields" keyword be passed - except ValueError: - raise TypeError('TypedDict.__new__() takes from 2 to 3 ' - f'positional arguments but {len(args) + 2} ' - 'were given') - elif '_fields' in kwargs and len(kwargs) == 1: - fields = kwargs.pop('_fields') - import warnings - warnings.warn("Passing '_fields' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - fields = None - - if fields is None: - fields = kwargs - elif kwargs: - raise TypeError("TypedDict takes either a dict or keyword arguments," - " but not both") - - ns = {'__annotations__': dict(fields)} - try: - # Setting correct module is necessary to make typed dict classes pickleable. - ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - pass - - return _TypedDictMeta(typename, (), ns, total=total) - - _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,' - ' /, *, total=True, **kwargs)') - - class _TypedDictMeta(type): - def __init__(cls, name, bases, ns, total=True): - super().__init__(name, bases, ns) - - def __new__(cls, name, bases, ns, total=True): - # Create new typed dict class object. - # This method is called directly when TypedDict is subclassed, - # or via _typeddict_new when TypedDict is instantiated. This way - # TypedDict supports all three syntaxes described in its docstring. - # Subclasses and instances of TypedDict return actual dictionaries - # via _dict_new. - ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new - # Don't insert typing.Generic into __bases__ here, - # or Generic.__init_subclass__ will raise TypeError - # in the super().__new__() call. - # Instead, monkey-patch __bases__ onto the class after it's been created. 
- tp_dict = super().__new__(cls, name, (dict,), ns) - - if any(issubclass(base, typing.Generic) for base in bases): - tp_dict.__bases__ = (typing.Generic, dict) - _maybe_adjust_parameters(tp_dict) - - annotations = {} - own_annotations = ns.get('__annotations__', {}) - msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type" - own_annotations = { - n: typing._type_check(tp, msg) for n, tp in own_annotations.items() - } - required_keys = set() - optional_keys = set() - - for base in bases: - annotations.update(base.__dict__.get('__annotations__', {})) - required_keys.update(base.__dict__.get('__required_keys__', ())) - optional_keys.update(base.__dict__.get('__optional_keys__', ())) - - annotations.update(own_annotations) - for annotation_key, annotation_type in own_annotations.items(): - annotation_origin = get_origin(annotation_type) - if annotation_origin is Annotated: - annotation_args = get_args(annotation_type) - if annotation_args: - annotation_type = annotation_args[0] - annotation_origin = get_origin(annotation_type) - - if annotation_origin is Required: - required_keys.add(annotation_key) - elif annotation_origin is NotRequired: - optional_keys.add(annotation_key) - elif total: - required_keys.add(annotation_key) - else: - optional_keys.add(annotation_key) - - tp_dict.__annotations__ = annotations - tp_dict.__required_keys__ = frozenset(required_keys) - tp_dict.__optional_keys__ = frozenset(optional_keys) - if not hasattr(tp_dict, '__total__'): - tp_dict.__total__ = total - return tp_dict - - __instancecheck__ = __subclasscheck__ = _check_fails - - TypedDict = _TypedDictMeta('TypedDict', (dict,), {}) - TypedDict.__module__ = __name__ - TypedDict.__doc__ = \ - """A simple typed name space. At runtime it is equivalent to a plain dict. - - TypedDict creates a dictionary type that expects all of its - instances to have a certain set of keys, with each key - associated with a value of a consistent type. This expectation - is not checked at runtime but is only enforced by type checkers. - Usage:: - - class Point2D(TypedDict): - x: int - y: int - label: str - - a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK - b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check - - assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first') - - The type info can be accessed via the Point2D.__annotations__ dict, and - the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets. - TypedDict supports two additional equivalent forms:: - - Point2D = TypedDict('Point2D', x=int, y=int, label=str) - Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str}) - - The class syntax is only supported in Python 3.6+, while two other - syntax forms work for Python 2.7 and 3.2+ - """ - - if hasattr(typing, "_TypedDictMeta"): - _TYPEDDICT_TYPES = (typing._TypedDictMeta, _TypedDictMeta) - else: - _TYPEDDICT_TYPES = (_TypedDictMeta,) - - def is_typeddict(tp): - """Check if an annotation is a TypedDict class - - For example:: - class Film(TypedDict): - title: str - year: int - - is_typeddict(Film) # => True - is_typeddict(Union[list, str]) # => False - """ - return isinstance(tp, tuple(_TYPEDDICT_TYPES)) - - -if hasattr(typing, "assert_type"): - assert_type = typing.assert_type - -else: - def assert_type(__val, __typ): - """Assert (to the type checker) that the value is of the given type. 
- - When the type checker encounters a call to assert_type(), it - emits an error if the value is not of the specified type:: - - def greet(name: str) -> None: - assert_type(name, str) # ok - assert_type(name, int) # type checker error - - At runtime this returns the first argument unchanged and otherwise - does nothing. - """ - return __val - - -if hasattr(typing, "Required"): - get_type_hints = typing.get_type_hints -else: - import functools - import types - - # replaces _strip_annotations() - def _strip_extras(t): - """Strips Annotated, Required and NotRequired from a given type.""" - if isinstance(t, _AnnotatedAlias): - return _strip_extras(t.__origin__) - if hasattr(t, "__origin__") and t.__origin__ in (Required, NotRequired): - return _strip_extras(t.__args__[0]) - if isinstance(t, typing._GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return t.copy_with(stripped_args) - if hasattr(types, "GenericAlias") and isinstance(t, types.GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return types.GenericAlias(t.__origin__, stripped_args) - if hasattr(types, "UnionType") and isinstance(t, types.UnionType): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return functools.reduce(operator.or_, stripped_args) - - return t - - def get_type_hints(obj, globalns=None, localns=None, include_extras=False): - """Return type hints for an object. - - This is often the same as obj.__annotations__, but it handles - forward references encoded as string literals, adds Optional[t] if a - default value equal to None is set and recursively replaces all - 'Annotated[T, ...]', 'Required[T]' or 'NotRequired[T]' with 'T' - (unless 'include_extras=True'). - - The argument may be a module, class, method, or function. The annotations - are returned as a dictionary. For classes, annotations include also - inherited members. - - TypeError is raised if the argument is not of a type that can contain - annotations, and an empty dictionary is returned if no annotations are - present. - - BEWARE -- the behavior of globalns and localns is counterintuitive - (unless you are familiar with how eval() and exec() work). The - search order is locals first, then globals. - - - If no dict arguments are passed, an attempt is made to use the - globals from obj (or the respective module's globals for classes), - and these are also used as the locals. If the object does not appear - to have globals, an empty dictionary is used. - - - If one dict argument is passed, it is used for both globals and - locals. - - - If two dict arguments are passed, they specify globals and - locals, respectively. - """ - if hasattr(typing, "Annotated"): - hint = typing.get_type_hints( - obj, globalns=globalns, localns=localns, include_extras=True - ) - else: - hint = typing.get_type_hints(obj, globalns=globalns, localns=localns) - if include_extras: - return hint - return {k: _strip_extras(t) for k, t in hint.items()} - - -# Python 3.9+ has PEP 593 (Annotated) -if hasattr(typing, 'Annotated'): - Annotated = typing.Annotated - # Not exported and not a public API, but needed for get_origin() and get_args() - # to work. - _AnnotatedAlias = typing._AnnotatedAlias -# 3.7-3.8 -else: - class _AnnotatedAlias(typing._GenericAlias, _root=True): - """Runtime representation of an annotated type. 
- - At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't' - with extra annotations. The alias behaves like a normal typing alias, - instantiating is the same as instantiating the underlying type, binding - it to types is also the same. - """ - def __init__(self, origin, metadata): - if isinstance(origin, _AnnotatedAlias): - metadata = origin.__metadata__ + metadata - origin = origin.__origin__ - super().__init__(origin, origin) - self.__metadata__ = metadata - - def copy_with(self, params): - assert len(params) == 1 - new_type = params[0] - return _AnnotatedAlias(new_type, self.__metadata__) - - def __repr__(self): - return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, " - f"{', '.join(repr(a) for a in self.__metadata__)}]") - - def __reduce__(self): - return operator.getitem, ( - Annotated, (self.__origin__,) + self.__metadata__ - ) - - def __eq__(self, other): - if not isinstance(other, _AnnotatedAlias): - return NotImplemented - if self.__origin__ != other.__origin__: - return False - return self.__metadata__ == other.__metadata__ - - def __hash__(self): - return hash((self.__origin__, self.__metadata__)) - - class Annotated: - """Add context specific metadata to a type. - - Example: Annotated[int, runtime_check.Unsigned] indicates to the - hypothetical runtime_check module that this type is an unsigned int. - Every other consumer of this type can ignore this metadata and treat - this type as int. - - The first argument to Annotated must be a valid type (and will be in - the __origin__ field), the remaining arguments are kept as a tuple in - the __extra__ field. - - Details: - - - It's an error to call `Annotated` with less than two arguments. - - Nested Annotated are flattened:: - - Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3] - - - Instantiating an annotated type is equivalent to instantiating the - underlying type:: - - Annotated[C, Ann1](5) == C(5) - - - Annotated can be used as a generic type alias:: - - Optimized = Annotated[T, runtime.Optimize()] - Optimized[int] == Annotated[int, runtime.Optimize()] - - OptimizedList = Annotated[List[T], runtime.Optimize()] - OptimizedList[int] == Annotated[List[int], runtime.Optimize()] - """ - - __slots__ = () - - def __new__(cls, *args, **kwargs): - raise TypeError("Type Annotated cannot be instantiated.") - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple) or len(params) < 2: - raise TypeError("Annotated[...] should be used " - "with at least two arguments (a type and an " - "annotation).") - allowed_special_forms = (ClassVar, Final) - if get_origin(params[0]) in allowed_special_forms: - origin = params[0] - else: - msg = "Annotated[t, ...]: t must be a type." - origin = typing._type_check(params[0], msg) - metadata = tuple(params[1:]) - return _AnnotatedAlias(origin, metadata) - - def __init_subclass__(cls, *args, **kwargs): - raise TypeError( - f"Cannot subclass {cls.__module__}.Annotated" - ) - -# Python 3.8 has get_origin() and get_args() but those implementations aren't -# Annotated-aware, so we can't use those. Python 3.9's versions don't support -# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do. 
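A minimal sketch of the Annotated-aware introspection these helpers are meant to provide (assuming the upstream `typing_extensions` package name; `Label` is a made-up metadata class used only for illustration):

```python
from typing_extensions import Annotated, get_origin, get_args

# Hypothetical metadata carrier; Annotated accepts arbitrary objects as metadata.
class Label:
    def __init__(self, text):
        self.text = text

UserId = Annotated[int, Label("user id")]

# Annotated-aware introspection: the origin is Annotated itself, and get_args()
# yields the underlying type followed by the attached metadata objects.
assert get_origin(UserId) is Annotated
assert get_args(UserId)[0] is int
assert get_args(UserId)[1].text == "user id"
```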
-if sys.version_info[:2] >= (3, 10): - get_origin = typing.get_origin - get_args = typing.get_args -# 3.7-3.9 -else: - try: - # 3.9+ - from typing import _BaseGenericAlias - except ImportError: - _BaseGenericAlias = typing._GenericAlias - try: - # 3.9+ - from typing import GenericAlias as _typing_GenericAlias - except ImportError: - _typing_GenericAlias = typing._GenericAlias - - def get_origin(tp): - """Get the unsubscripted version of a type. - - This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar - and Annotated. Return None for unsupported types. Examples:: - - get_origin(Literal[42]) is Literal - get_origin(int) is None - get_origin(ClassVar[int]) is ClassVar - get_origin(Generic) is Generic - get_origin(Generic[T]) is Generic - get_origin(Union[T, int]) is Union - get_origin(List[Tuple[T, T]][int]) == list - get_origin(P.args) is P - """ - if isinstance(tp, _AnnotatedAlias): - return Annotated - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias, _BaseGenericAlias, - ParamSpecArgs, ParamSpecKwargs)): - return tp.__origin__ - if tp is typing.Generic: - return typing.Generic - return None - - def get_args(tp): - """Get type arguments with all substitutions performed. - - For unions, basic simplifications used by Union constructor are performed. - Examples:: - get_args(Dict[str, int]) == (str, int) - get_args(int) == () - get_args(Union[int, Union[T, int], str][int]) == (int, str) - get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int]) - get_args(Callable[[], T][int]) == ([], int) - """ - if isinstance(tp, _AnnotatedAlias): - return (tp.__origin__,) + tp.__metadata__ - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias)): - if getattr(tp, "_special", False): - return () - res = tp.__args__ - if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis: - res = (list(res[:-1]), res[-1]) - return res - return () - - -# 3.10+ -if hasattr(typing, 'TypeAlias'): - TypeAlias = typing.TypeAlias -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeAliasForm - def TypeAlias(self, parameters): - """Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example above. - """ - raise TypeError(f"{self} is not subscriptable") -# 3.7-3.8 -else: - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - TypeAlias = _TypeAliasForm('TypeAlias', - doc="""Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example - above.""") - - -# Python 3.10+ has PEP 612 -if hasattr(typing, 'ParamSpecArgs'): - ParamSpecArgs = typing.ParamSpecArgs - ParamSpecKwargs = typing.ParamSpecKwargs -# 3.7-3.9 -else: - class _Immutable: - """Mixin to indicate that object should not be copied.""" - __slots__ = () - - def __copy__(self): - return self - - def __deepcopy__(self, memo): - return self - - class ParamSpecArgs(_Immutable): - """The args for a ParamSpec object. - - Given a ParamSpec object P, P.args is an instance of ParamSpecArgs. 
- - ParamSpecArgs objects have a reference back to their ParamSpec: - - P.args.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. - """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.args" - - def __eq__(self, other): - if not isinstance(other, ParamSpecArgs): - return NotImplemented - return self.__origin__ == other.__origin__ - - class ParamSpecKwargs(_Immutable): - """The kwargs for a ParamSpec object. - - Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs. - - ParamSpecKwargs objects have a reference back to their ParamSpec: - - P.kwargs.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. - """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.kwargs" - - def __eq__(self, other): - if not isinstance(other, ParamSpecKwargs): - return NotImplemented - return self.__origin__ == other.__origin__ - -# 3.10+ -if hasattr(typing, 'ParamSpec'): - ParamSpec = typing.ParamSpec -# 3.7-3.9 -else: - - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class ParamSpec(list): - """Parameter specification variable. - - Usage:: - - P = ParamSpec('P') - - Parameter specification variables exist primarily for the benefit of static - type checkers. They are used to forward the parameter types of one - callable to another callable, a pattern commonly found in higher order - functions and decorators. They are only valid when used in ``Concatenate``, - or s the first argument to ``Callable``. In Python 3.10 and higher, - they are also supported in user-defined Generics at runtime. - See class Generic for more information on generic types. An - example for annotating a decorator:: - - T = TypeVar('T') - P = ParamSpec('P') - - def add_logging(f: Callable[P, T]) -> Callable[P, T]: - '''A type-safe decorator to add logging to a function.''' - def inner(*args: P.args, **kwargs: P.kwargs) -> T: - logging.info(f'{f.__name__} was called') - return f(*args, **kwargs) - return inner - - @add_logging - def add_two(x: float, y: float) -> float: - '''Add two numbers together.''' - return x + y - - Parameter specification variables defined with covariant=True or - contravariant=True can be used to declare covariant or contravariant - generic types. These keyword arguments are valid, but their actual semantics - are yet to be decided. See PEP 612 for details. - - Parameter specification variables can be introspected. e.g.: - - P.__name__ == 'T' - P.__bound__ == None - P.__covariant__ == False - P.__contravariant__ == False - - Note that only parameter specification variables defined in global scope can - be pickled. - """ - - # Trick Generic __parameters__. 
- __class__ = typing.TypeVar - - @property - def args(self): - return ParamSpecArgs(self) - - @property - def kwargs(self): - return ParamSpecKwargs(self) - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False): - super().__init__([self]) - self.__name__ = name - self.__covariant__ = bool(covariant) - self.__contravariant__ = bool(contravariant) - if bound: - self.__bound__ = typing._type_check(bound, 'Bound must be a type.') - else: - self.__bound__ = None - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - def __repr__(self): - if self.__covariant__: - prefix = '+' - elif self.__contravariant__: - prefix = '-' - else: - prefix = '~' - return prefix + self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - # Hack to get typing._type_check to pass. - def __call__(self, *args, **kwargs): - pass - - -# 3.7-3.9 -if not hasattr(typing, 'Concatenate'): - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class _ConcatenateGenericAlias(list): - - # Trick Generic into looking into this for __parameters__. - __class__ = typing._GenericAlias - - # Flag in 3.8. - _special = False - - def __init__(self, origin, args): - super().__init__(args) - self.__origin__ = origin - self.__args__ = args - - def __repr__(self): - _type_repr = typing._type_repr - return (f'{_type_repr(self.__origin__)}' - f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]') - - def __hash__(self): - return hash((self.__origin__, self.__args__)) - - # Hack to get typing._type_check to pass in Generic. - def __call__(self, *args, **kwargs): - pass - - @property - def __parameters__(self): - return tuple( - tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec)) - ) - - -# 3.7-3.9 -@typing._tp_cache -def _concatenate_getitem(self, parameters): - if parameters == (): - raise TypeError("Cannot take a Concatenate of no types.") - if not isinstance(parameters, tuple): - parameters = (parameters,) - if not isinstance(parameters[-1], ParamSpec): - raise TypeError("The last parameter to Concatenate should be a " - "ParamSpec variable.") - msg = "Concatenate[arg, ...]: each arg must be a type." - parameters = tuple(typing._type_check(p, msg) for p in parameters) - return _ConcatenateGenericAlias(self, parameters) - - -# 3.10+ -if hasattr(typing, 'Concatenate'): - Concatenate = typing.Concatenate - _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa -# 3.9 -elif sys.version_info[:2] >= (3, 9): - @_TypeAliasForm - def Concatenate(self, parameters): - """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """ - return _concatenate_getitem(self, parameters) -# 3.7-8 -else: - class _ConcatenateForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - def __getitem__(self, parameters): - return _concatenate_getitem(self, parameters) - - Concatenate = _ConcatenateForm( - 'Concatenate', - doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """) - -# 3.10+ -if hasattr(typing, 'TypeGuard'): - TypeGuard = typing.TypeGuard -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeGuardForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeGuardForm - def TypeGuard(self, parameters): - """Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """ - item = typing._type_check(parameters, f'{self} accepts only a single type.') - return typing._GenericAlias(self, (item,)) -# 3.7-3.8 -else: - class _TypeGuardForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type') - return typing._GenericAlias(self, (item,)) - - TypeGuard = _TypeGuardForm( - 'TypeGuard', - doc="""Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. 
The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """) - - -# Vendored from cpython typing._SpecialFrom -class _SpecialForm(typing._Final, _root=True): - __slots__ = ('_name', '__doc__', '_getitem') - - def __init__(self, getitem): - self._getitem = getitem - self._name = getitem.__name__ - self.__doc__ = getitem.__doc__ - - def __getattr__(self, item): - if item in {'__name__', '__qualname__'}: - return self._name - - raise AttributeError(item) - - def __mro_entries__(self, bases): - raise TypeError(f"Cannot subclass {self!r}") - - def __repr__(self): - return f'typing_extensions.{self._name}' - - def __reduce__(self): - return self._name - - def __call__(self, *args, **kwds): - raise TypeError(f"Cannot instantiate {self!r}") - - def __or__(self, other): - return typing.Union[self, other] - - def __ror__(self, other): - return typing.Union[other, self] - - def __instancecheck__(self, obj): - raise TypeError(f"{self} cannot be used with isinstance()") - - def __subclasscheck__(self, cls): - raise TypeError(f"{self} cannot be used with issubclass()") - - @typing._tp_cache - def __getitem__(self, parameters): - return self._getitem(self, parameters) - - -if hasattr(typing, "LiteralString"): - LiteralString = typing.LiteralString -else: - @_SpecialForm - def LiteralString(self, params): - """Represents an arbitrary literal string. - - Example:: - - from pip._vendor.typing_extensions import LiteralString - - def query(sql: LiteralString) -> ...: - ... - - query("SELECT * FROM table") # ok - query(f"SELECT * FROM {input()}") # not ok - - See PEP 675 for details. - - """ - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Self"): - Self = typing.Self -else: - @_SpecialForm - def Self(self, params): - """Used to spell the type of "self" in classes. - - Example:: - - from typing import Self - - class ReturnsSelf: - def parse(self, data: bytes) -> Self: - ... - return self - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Never"): - Never = typing.Never -else: - @_SpecialForm - def Never(self, params): - """The bottom type, a type that has no members. 
- - This can be used to define a function that should never be - called, or a function that never returns:: - - from pip._vendor.typing_extensions import Never - - def never_call_me(arg: Never) -> None: - pass - - def int_or_str(arg: int | str) -> None: - never_call_me(arg) # type checker error - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - never_call_me(arg) # ok, arg is of type Never - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, 'Required'): - Required = typing.Required - NotRequired = typing.NotRequired -elif sys.version_info[:2] >= (3, 9): - class _ExtensionsSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_ExtensionsSpecialForm - def Required(self, parameters): - """A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - @_ExtensionsSpecialForm - def NotRequired(self, parameters): - """A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - -else: - class _RequiredForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Required = _RequiredForm( - 'Required', - doc="""A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """) - NotRequired = _RequiredForm( - 'NotRequired', - doc="""A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """) - - -if hasattr(typing, "Unpack"): # 3.11+ - Unpack = typing.Unpack -elif sys.version_info[:2] >= (3, 9): - class _UnpackSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - @_UnpackSpecialForm - def Unpack(self, parameters): - """A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... 
- - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - -else: - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - class _UnpackForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - Unpack = _UnpackForm( - 'Unpack', - doc="""A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... - - """) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - - -if hasattr(typing, "TypeVarTuple"): # 3.11+ - TypeVarTuple = typing.TypeVarTuple -else: - class TypeVarTuple: - """Type variable tuple. - - Usage:: - - Ts = TypeVarTuple('Ts') - - In the same way that a normal type variable is a stand-in for a single - type such as ``int``, a type variable *tuple* is a stand-in for a *tuple* - type such as ``Tuple[int, str]``. - - Type variable tuples can be used in ``Generic`` declarations. - Consider the following example:: - - class Array(Generic[*Ts]): ... - - The ``Ts`` type variable tuple here behaves like ``tuple[T1, T2]``, - where ``T1`` and ``T2`` are type variables. To use these type variables - as type parameters of ``Array``, we must *unpack* the type variable tuple using - the star operator: ``*Ts``. The signature of ``Array`` then behaves - as if we had simply written ``class Array(Generic[T1, T2]): ...``. - In contrast to ``Generic[T1, T2]``, however, ``Generic[*Shape]`` allows - us to parameterise the class with an *arbitrary* number of type parameters. - - Type variable tuples can be used anywhere a normal ``TypeVar`` can. - This includes class definitions, as shown above, as well as function - signatures and variable annotations:: - - class Array(Generic[*Ts]): - - def __init__(self, shape: Tuple[*Ts]): - self._shape: Tuple[*Ts] = shape - - def get_shape(self) -> Tuple[*Ts]: - return self._shape - - shape = (Height(480), Width(640)) - x: Array[Height, Width] = Array(shape) - y = abs(x) # Inferred type is Array[Height, Width] - z = x + x # ... is Array[Height, Width] - x.get_shape() # ... is tuple[Height, Width] - - """ - - # Trick Generic __parameters__. - __class__ = typing.TypeVar - - def __iter__(self): - yield self.__unpacked__ - - def __init__(self, name): - self.__name__ = name - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - self.__unpacked__ = Unpack[self] - - def __repr__(self): - return self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - def __init_subclass__(self, *args, **kwds): - if '_root' not in kwds: - raise TypeError("Cannot subclass special typing classes") - - -if hasattr(typing, "reveal_type"): - reveal_type = typing.reveal_type -else: - def reveal_type(__obj: T) -> T: - """Reveal the inferred type of a variable. 
- - When a static type checker encounters a call to ``reveal_type()``, - it will emit the inferred type of the argument:: - - x: int = 1 - reveal_type(x) - - Running a static type checker (e.g., ``mypy``) on this example - will produce output similar to 'Revealed type is "builtins.int"'. - - At runtime, the function prints the runtime type of the - argument and returns it unchanged. - - """ - print(f"Runtime type is {type(__obj).__name__!r}", file=sys.stderr) - return __obj - - -if hasattr(typing, "assert_never"): - assert_never = typing.assert_never -else: - def assert_never(__arg: Never) -> Never: - """Assert to the type checker that a line of code is unreachable. - - Example:: - - def int_or_str(arg: int | str) -> None: - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - assert_never(arg) - - If a type checker finds that a call to assert_never() is - reachable, it will emit an error. - - At runtime, this throws an exception when called. - - """ - raise AssertionError("Expected code to be unreachable") - - -if hasattr(typing, 'dataclass_transform'): - dataclass_transform = typing.dataclass_transform -else: - def dataclass_transform( - *, - eq_default: bool = True, - order_default: bool = False, - kw_only_default: bool = False, - field_specifiers: typing.Tuple[ - typing.Union[typing.Type[typing.Any], typing.Callable[..., typing.Any]], - ... - ] = (), - **kwargs: typing.Any, - ) -> typing.Callable[[T], T]: - """Decorator that marks a function, class, or metaclass as providing - dataclass-like behavior. - - Example: - - from pip._vendor.typing_extensions import dataclass_transform - - _T = TypeVar("_T") - - # Used on a decorator function - @dataclass_transform() - def create_model(cls: type[_T]) -> type[_T]: - ... - return cls - - @create_model - class CustomerModel: - id: int - name: str - - # Used on a base class - @dataclass_transform() - class ModelBase: ... - - class CustomerModel(ModelBase): - id: int - name: str - - # Used on a metaclass - @dataclass_transform() - class ModelMeta(type): ... - - class ModelBase(metaclass=ModelMeta): ... - - class CustomerModel(ModelBase): - id: int - name: str - - Each of the ``CustomerModel`` classes defined in this example will now - behave similarly to a dataclass created with the ``@dataclasses.dataclass`` - decorator. For example, the type checker will synthesize an ``__init__`` - method. - - The arguments to this decorator can be used to customize this behavior: - - ``eq_default`` indicates whether the ``eq`` parameter is assumed to be - True or False if it is omitted by the caller. - - ``order_default`` indicates whether the ``order`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``kw_only_default`` indicates whether the ``kw_only`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``field_specifiers`` specifies a static list of supported classes - or functions that describe fields, similar to ``dataclasses.field()``. - - At runtime, this decorator records its arguments in the - ``__dataclass_transform__`` attribute on the decorated object. - - See PEP 681 for details. 
- - """ - def decorator(cls_or_fn): - cls_or_fn.__dataclass_transform__ = { - "eq_default": eq_default, - "order_default": order_default, - "kw_only_default": kw_only_default, - "field_specifiers": field_specifiers, - "kwargs": kwargs, - } - return cls_or_fn - return decorator - - -# We have to do some monkey patching to deal with the dual nature of -# Unpack/TypeVarTuple: -# - We want Unpack to be a kind of TypeVar so it gets accepted in -# Generic[Unpack[Ts]] -# - We want it to *not* be treated as a TypeVar for the purposes of -# counting generic parameters, so that when we subscript a generic, -# the runtime doesn't try to substitute the Unpack with the subscripted type. -if not hasattr(typing, "TypeVarTuple"): - typing._collect_type_vars = _collect_type_vars - typing._check_generic = _check_generic - - -# Backport typing.NamedTuple as it exists in Python 3.11. -# In 3.11, the ability to define generic `NamedTuple`s was supported. -# This was explicitly disallowed in 3.9-3.10, and only half-worked in <=3.8. -if sys.version_info >= (3, 11): - NamedTuple = typing.NamedTuple -else: - def _caller(): - try: - return sys._getframe(2).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): # For platforms without _getframe() - return None - - def _make_nmtuple(name, types, module, defaults=()): - fields = [n for n, t in types] - annotations = {n: typing._type_check(t, f"field {n} annotation must be a type") - for n, t in types} - nm_tpl = collections.namedtuple(name, fields, - defaults=defaults, module=module) - nm_tpl.__annotations__ = nm_tpl.__new__.__annotations__ = annotations - # The `_field_types` attribute was removed in 3.9; - # in earlier versions, it is the same as the `__annotations__` attribute - if sys.version_info < (3, 9): - nm_tpl._field_types = annotations - return nm_tpl - - _prohibited_namedtuple_fields = typing._prohibited - _special_namedtuple_fields = frozenset({'__module__', '__name__', '__annotations__'}) - - class _NamedTupleMeta(type): - def __new__(cls, typename, bases, ns): - assert _NamedTuple in bases - for base in bases: - if base is not _NamedTuple and base is not typing.Generic: - raise TypeError( - 'can only inherit from a NamedTuple type and Generic') - bases = tuple(tuple if base is _NamedTuple else base for base in bases) - types = ns.get('__annotations__', {}) - default_names = [] - for field_name in types: - if field_name in ns: - default_names.append(field_name) - elif default_names: - raise TypeError(f"Non-default namedtuple field {field_name} " - f"cannot follow default field" - f"{'s' if len(default_names) > 1 else ''} " - f"{', '.join(default_names)}") - nm_tpl = _make_nmtuple( - typename, types.items(), - defaults=[ns[n] for n in default_names], - module=ns['__module__'] - ) - nm_tpl.__bases__ = bases - if typing.Generic in bases: - class_getitem = typing.Generic.__class_getitem__.__func__ - nm_tpl.__class_getitem__ = classmethod(class_getitem) - # update from user namespace without overriding special namedtuple attributes - for key in ns: - if key in _prohibited_namedtuple_fields: - raise AttributeError("Cannot overwrite NamedTuple attribute " + key) - elif key not in _special_namedtuple_fields and key not in nm_tpl._fields: - setattr(nm_tpl, key, ns[key]) - if typing.Generic in bases: - nm_tpl.__init_subclass__() - return nm_tpl - - def NamedTuple(__typename, __fields=None, **kwargs): - if __fields is None: - __fields = kwargs.items() - elif kwargs: - raise TypeError("Either list of fields or keywords" - " can be provided to 
NamedTuple, not both") - return _make_nmtuple(__typename, __fields, module=_caller()) - - NamedTuple.__doc__ = typing.NamedTuple.__doc__ - _NamedTuple = type.__new__(_NamedTupleMeta, 'NamedTuple', (), {}) - - # On 3.8+, alter the signature so that it matches typing.NamedTuple. - # The signature of typing.NamedTuple on >=3.8 is invalid syntax in Python 3.7, - # so just leave the signature as it is on 3.7. - if sys.version_info >= (3, 8): - NamedTuple.__text_signature__ = '(typename, fields=None, /, **kwargs)' - - def _namedtuple_mro_entries(bases): - assert NamedTuple in bases - return (_NamedTuple,) - - NamedTuple.__mro_entries__ = _namedtuple_mro_entries diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pkg_resources/_vendor/jaraco/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/importlib_metadata/_meta.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/importlib_metadata/_meta.py deleted file mode 100644 index 37ee43e6ef447dfb4ae68f5f6c35597d12fdc5a1..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/importlib_metadata/_meta.py +++ /dev/null @@ -1,48 +0,0 @@ -from ._compat import Protocol -from typing import Any, Dict, Iterator, List, TypeVar, Union - - -_T = TypeVar("_T") - - -class PackageMetadata(Protocol): - def __len__(self) -> int: - ... # pragma: no cover - - def __contains__(self, item: str) -> bool: - ... # pragma: no cover - - def __getitem__(self, key: str) -> str: - ... # pragma: no cover - - def __iter__(self) -> Iterator[str]: - ... # pragma: no cover - - def get_all(self, name: str, failobj: _T = ...) -> Union[List[Any], _T]: - """ - Return all values associated with a possibly multi-valued key. - """ - - @property - def json(self) -> Dict[str, Union[str, List[str]]]: - """ - A JSON-compatible form of the metadata. - """ - - -class SimplePath(Protocol): - """ - A minimal subset of pathlib.Path required by PathDistribution. - """ - - def joinpath(self) -> 'SimplePath': - ... # pragma: no cover - - def __truediv__(self) -> 'SimplePath': - ... # pragma: no cover - - def parent(self) -> 'SimplePath': - ... # pragma: no cover - - def read_text(self) -> str: - ... # pragma: no cover diff --git a/spaces/tomofi/MMOCR/tests/test_dataset/test_textdet_targets.py b/spaces/tomofi/MMOCR/tests/test_dataset/test_textdet_targets.py deleted file mode 100644 index 2008c5c6faaa0efc05325c9e48ba821859a43f47..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_dataset/test_textdet_targets.py +++ /dev/null @@ -1,367 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from unittest import mock - -import numpy as np -from mmdet.core import PolygonMasks - -import mmocr.datasets.pipelines.custom_format_bundle as cf_bundle -import mmocr.datasets.pipelines.textdet_targets as textdet_targets - - -@mock.patch('%s.cf_bundle.show_feature' % __name__) -def test_gen_pannet_targets(mock_show_feature): - - target_generator = textdet_targets.PANetTargets() - assert target_generator.max_shrink == 20 - - # test generate_kernels - img_size = (3, 10) - text_polys = [[np.array([0, 0, 1, 0, 1, 1, 0, 1])], - [np.array([2, 0, 3, 0, 3, 1, 2, 1])]] - shrink_ratio = 1.0 - kernel = np.array([[1, 1, 2, 2, 0, 0, 0, 0, 0, 0], - [1, 1, 2, 2, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) - output, _ = target_generator.generate_kernels(img_size, text_polys, - shrink_ratio) - print(output) - assert np.allclose(output, kernel) - - # test generate_effective_mask - polys_ignore = text_polys - output = target_generator.generate_effective_mask((3, 10), polys_ignore) - target = np.array([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]) - - assert np.allclose(output, target) - - # test generate_targets - results = {} - results['img'] = np.zeros((3, 10, 3), np.uint8) - results['gt_masks'] = PolygonMasks(text_polys, 3, 10) - results['gt_masks_ignore'] = PolygonMasks([], 3, 10) - results['img_shape'] = (3, 10, 3) - results['mask_fields'] = [] - output = target_generator(results) - assert len(output['gt_kernels']) == 2 - assert len(output['gt_mask']) == 1 - - bundle = cf_bundle.CustomFormatBundle( - keys=['gt_kernels', 'gt_mask'], - visualize=dict(flag=True, boundary_key='gt_kernels')) - bundle(output) - assert 'gt_kernels' in output.keys() - assert 'gt_mask' in output.keys() - mock_show_feature.assert_called_once() - - -def test_gen_psenet_targets(): - target_generator = textdet_targets.PSENetTargets() - assert target_generator.max_shrink == 20 - assert target_generator.shrink_ratio == (1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4) - - -# Test DBNetTargets - - -def test_dbnet_targets_find_invalid(): - target_generator = textdet_targets.DBNetTargets() - assert target_generator.shrink_ratio == 0.4 - assert target_generator.thr_min == 0.3 - assert target_generator.thr_max == 0.7 - - results = {} - text_polys = [[np.array([0, 0, 10, 0, 10, 10, 0, 10])], - [np.array([20, 0, 30, 0, 30, 10, 20, 10])]] - results['gt_masks'] = PolygonMasks(text_polys, 40, 40) - - ignore_tags = target_generator.find_invalid(results) - assert np.allclose(ignore_tags, [False, False]) - - -def test_dbnet_targets(): - target_generator = textdet_targets.DBNetTargets() - assert target_generator.shrink_ratio == 0.4 - assert target_generator.thr_min == 0.3 - assert target_generator.thr_max == 0.7 - - -def test_dbnet_ignore_texts(): - target_generator = textdet_targets.DBNetTargets() - ignore_tags = [True, False] - results = {} - text_polys = [[np.array([0, 0, 10, 0, 10, 10, 0, 10])], - [np.array([20, 0, 30, 0, 30, 10, 20, 10])]] - text_polys_ignore = [[np.array([0, 0, 15, 0, 15, 10, 0, 10])]] - - results['gt_masks_ignore'] = PolygonMasks(text_polys_ignore, 40, 40) - results['gt_masks'] = PolygonMasks(text_polys, 40, 40) - results['gt_bboxes'] = np.array([[0, 0, 10, 10], [20, 0, 30, 10]]) - results['gt_labels'] = np.array([0, 1]) - - target_generator.ignore_texts(results, ignore_tags) - - assert np.allclose(results['gt_labels'], np.array([1])) - assert len(results['gt_masks_ignore'].masks) == 2 - assert np.allclose(results['gt_masks_ignore'].masks[1][0], - text_polys[0][0]) - 
assert len(results['gt_masks'].masks) == 1 - - -def test_dbnet_generate_thr_map(): - target_generator = textdet_targets.DBNetTargets() - text_polys = [[np.array([0, 0, 10, 0, 10, 10, 0, 10])], - [np.array([20, 0, 30, 0, 30, 10, 20, 10])]] - thr_map, thr_mask = target_generator.generate_thr_map((40, 40), text_polys) - assert np.all((thr_map >= 0.29) * (thr_map <= 0.71)) - - -def test_dbnet_draw_border_map(): - target_generator = textdet_targets.DBNetTargets() - poly = np.array([[20, 21], [-14, 20], [-11, 30], [-22, 26]]) - img_size = (40, 40) - thr_map = np.zeros(img_size, dtype=np.float32) - thr_mask = np.zeros(img_size, dtype=np.uint8) - - target_generator.draw_border_map(poly, thr_map, thr_mask) - - -def test_dbnet_generate_targets(): - target_generator = textdet_targets.DBNetTargets() - text_polys = [[np.array([0, 0, 10, 0, 10, 10, 0, 10])], - [np.array([20, 0, 30, 0, 30, 10, 20, 10])]] - text_polys_ignore = [[np.array([0, 0, 15, 0, 15, 10, 0, 10])]] - - results = {} - results['mask_fields'] = [] - results['img_shape'] = (40, 40, 3) - results['gt_masks_ignore'] = PolygonMasks(text_polys_ignore, 40, 40) - results['gt_masks'] = PolygonMasks(text_polys, 40, 40) - results['gt_bboxes'] = np.array([[0, 0, 10, 10], [20, 0, 30, 10]]) - results['gt_labels'] = np.array([0, 1]) - - target_generator.generate_targets(results) - assert 'gt_shrink' in results['mask_fields'] - assert 'gt_shrink_mask' in results['mask_fields'] - assert 'gt_thr' in results['mask_fields'] - assert 'gt_thr_mask' in results['mask_fields'] - - -@mock.patch('%s.cf_bundle.show_feature' % __name__) -def test_gen_textsnake_targets(mock_show_feature): - - target_generator = textdet_targets.TextSnakeTargets() - assert np.allclose(target_generator.orientation_thr, 2.0) - assert np.allclose(target_generator.resample_step, 4.0) - assert np.allclose(target_generator.center_region_shrink_ratio, 0.3) - - # test vector_angle - vec1 = np.array([[-1, 0], [0, 1]]) - vec2 = np.array([[1, 0], [0, 1]]) - angles = target_generator.vector_angle(vec1, vec2) - assert np.allclose(angles, np.array([np.pi, 0]), atol=1e-3) - - # test find_head_tail for quadrangle - polygon = np.array([[1.0, 1.0], [5.0, 1.0], [5.0, 3.0], [1.0, 3.0]]) - head_inds, tail_inds = target_generator.find_head_tail(polygon, 2.0) - assert np.allclose(head_inds, [3, 0]) - assert np.allclose(tail_inds, [1, 2]) - polygon = np.array([[1.0, 1.0], [1.0, 3.0], [5.0, 3.0], [5.0, 1.0]]) - head_inds, tail_inds = target_generator.find_head_tail(polygon, 2.0) - assert np.allclose(head_inds, [0, 1]) - assert np.allclose(tail_inds, [2, 3]) - - # test find_head_tail for polygon - polygon = np.array([[0., 10.], [3., 3.], [10., 0.], [17., 3.], [20., 10.], - [15., 10.], [13.5, 6.5], [10., 5.], [6.5, 6.5], - [5., 10.]]) - head_inds, tail_inds = target_generator.find_head_tail(polygon, 2.0) - assert np.allclose(head_inds, [9, 0]) - assert np.allclose(tail_inds, [4, 5]) - - # test resample_line - line = np.array([[0, 0], [0, 1], [0, 3], [0, 4], [0, 7], [0, 8]]) - resampled_line = target_generator.resample_line(line, 3) - assert len(resampled_line) == 3 - assert np.allclose(resampled_line, np.array([[0, 0], [0, 4], [0, 8]])) - line = np.array([[0, 0], [0, 0]]) - resampled_line = target_generator.resample_line(line, 4) - assert len(resampled_line) == 4 - assert np.allclose(resampled_line, - np.array([[0, 0], [0, 0], [0, 0], [0, 0]])) - - # test generate_text_region_mask - img_size = (3, 10) - text_polys = [[np.array([0, 0, 1, 0, 1, 1, 0, 1])], - [np.array([2, 0, 3, 0, 3, 1, 2, 1])]] - output = 
target_generator.generate_text_region_mask(img_size, text_polys) - target = np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) - assert np.allclose(output, target) - - # test generate_center_region_mask - target_generator.center_region_shrink_ratio = 1.0 - (center_region_mask, radius_map, sin_map, - cos_map) = target_generator.generate_center_mask_attrib_maps( - img_size, text_polys) - target = np.array([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) - assert np.allclose(center_region_mask, target) - assert np.allclose(sin_map, np.zeros(img_size)) - assert np.allclose(cos_map, target) - - # test generate_effective_mask - polys_ignore = text_polys - output = target_generator.generate_effective_mask(img_size, polys_ignore) - target = np.array([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]) - assert np.allclose(output, target) - - # test generate_targets - results = {} - results['img'] = np.zeros((3, 10, 3), np.uint8) - results['gt_masks'] = PolygonMasks(text_polys, 3, 10) - results['gt_masks_ignore'] = PolygonMasks([], 3, 10) - results['img_shape'] = (3, 10, 3) - results['mask_fields'] = [] - output = target_generator(results) - assert len(output['gt_text_mask']) == 1 - assert len(output['gt_center_region_mask']) == 1 - assert len(output['gt_mask']) == 1 - assert len(output['gt_radius_map']) == 1 - assert len(output['gt_sin_map']) == 1 - assert len(output['gt_cos_map']) == 1 - - bundle = cf_bundle.CustomFormatBundle( - keys=[ - 'gt_text_mask', 'gt_center_region_mask', 'gt_mask', - 'gt_radius_map', 'gt_sin_map', 'gt_cos_map' - ], - visualize=dict(flag=True, boundary_key='gt_text_mask')) - bundle(output) - assert 'gt_text_mask' in output.keys() - assert 'gt_center_region_mask' in output.keys() - assert 'gt_mask' in output.keys() - assert 'gt_radius_map' in output.keys() - assert 'gt_sin_map' in output.keys() - assert 'gt_cos_map' in output.keys() - mock_show_feature.assert_called_once() - - -def test_fcenet_generate_targets(): - fourier_degree = 5 - target_generator = textdet_targets.FCENetTargets( - fourier_degree=fourier_degree) - - h, w, c = (64, 64, 3) - text_polys = [[np.array([0, 0, 10, 0, 10, 10, 0, 10])], - [np.array([20, 0, 30, 0, 30, 10, 20, 10])]] - text_polys_ignore = [[np.array([0, 0, 15, 0, 15, 10, 0, 10])]] - - results = {} - results['mask_fields'] = [] - results['img_shape'] = (h, w, c) - results['gt_masks_ignore'] = PolygonMasks(text_polys_ignore, h, w) - results['gt_masks'] = PolygonMasks(text_polys, h, w) - results['gt_bboxes'] = np.array([[0, 0, 10, 10], [20, 0, 30, 10]]) - results['gt_labels'] = np.array([0, 1]) - - target_generator.generate_targets(results) - assert 'p3_maps' in results.keys() - assert 'p4_maps' in results.keys() - assert 'p5_maps' in results.keys() - - -def test_gen_drrg_targets(): - target_generator = textdet_targets.DRRGTargets() - assert np.allclose(target_generator.orientation_thr, 2.0) - assert np.allclose(target_generator.resample_step, 8.0) - assert target_generator.num_min_comps == 9 - assert target_generator.num_max_comps == 600 - assert np.allclose(target_generator.min_width, 8.0) - assert np.allclose(target_generator.max_width, 24.0) - assert np.allclose(target_generator.center_region_shrink_ratio, 0.3) - assert np.allclose(target_generator.comp_shrink_ratio, 1.0) - assert np.allclose(target_generator.comp_w_h_ratio, 0.3) - assert np.allclose(target_generator.text_comp_nms_thr, 
0.25) - assert np.allclose(target_generator.min_rand_half_height, 8.0) - assert np.allclose(target_generator.max_rand_half_height, 24.0) - assert np.allclose(target_generator.jitter_level, 0.2) - - # test generate_targets - target_generator = textdet_targets.DRRGTargets( - min_width=2., - max_width=4., - min_rand_half_height=3., - max_rand_half_height=5.) - - results = {} - results['img'] = np.zeros((64, 64, 3), np.uint8) - text_polys = [[np.array([4, 2, 30, 2, 30, 10, 4, 10])], - [np.array([36, 12, 8, 12, 8, 22, 36, 22])], - [np.array([48, 20, 52, 20, 52, 50, 48, 50])], - [np.array([44, 50, 38, 50, 38, 20, 44, 20])]] - results['gt_masks'] = PolygonMasks(text_polys, 20, 30) - results['gt_masks_ignore'] = PolygonMasks([], 64, 64) - results['img_shape'] = (64, 64, 3) - results['mask_fields'] = [] - output = target_generator(results) - assert len(output['gt_text_mask']) == 1 - assert len(output['gt_center_region_mask']) == 1 - assert len(output['gt_mask']) == 1 - assert len(output['gt_top_height_map']) == 1 - assert len(output['gt_bot_height_map']) == 1 - assert len(output['gt_sin_map']) == 1 - assert len(output['gt_cos_map']) == 1 - assert output['gt_comp_attribs'].shape[-1] == 8 - - # test generate_targets with the number of proposed text components exceeds - # num_max_comps - target_generator = textdet_targets.DRRGTargets( - min_width=2., - max_width=4., - min_rand_half_height=3., - max_rand_half_height=5., - num_max_comps=6) - output = target_generator(results) - assert output['gt_comp_attribs'].ndim == 2 - assert output['gt_comp_attribs'].shape[0] == 6 - - # test generate_targets with blank polygon masks - target_generator = textdet_targets.DRRGTargets( - min_width=2., - max_width=4., - min_rand_half_height=3., - max_rand_half_height=5.) - results = {} - results['img'] = np.zeros((20, 30, 3), np.uint8) - results['gt_masks'] = PolygonMasks([], 20, 30) - results['gt_masks_ignore'] = PolygonMasks([], 20, 30) - results['img_shape'] = (20, 30, 3) - results['mask_fields'] = [] - output = target_generator(results) - assert output['gt_comp_attribs'][0, 0] > 8 - - # test generate_targets with one proposed text component - text_polys = [[np.array([13, 6, 17, 6, 17, 14, 13, 14])]] - target_generator = textdet_targets.DRRGTargets( - min_width=4., - max_width=8., - min_rand_half_height=3., - max_rand_half_height=5.) - results['gt_masks'] = PolygonMasks(text_polys, 20, 30) - output = target_generator(results) - assert output['gt_comp_attribs'][0, 0] > 8 - - # test generate_targets with shrunk margin in generate_rand_comp_attribs - target_generator = textdet_targets.DRRGTargets( - min_width=2., - max_width=30., - min_rand_half_height=3., - max_rand_half_height=30.) 
- output = target_generator(results) - assert output['gt_comp_attribs'][0, 0] > 8 diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py deleted file mode 100644 index 04581bbc901d0fda0ec8c6b4a8078ae04f21473a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py +++ /dev/null @@ -1,34 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_r50_fpn.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - rpn_head=dict( - anchor_generator=dict(type='LegacyAnchorGenerator', center_offset=0.5), - bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)), - roi_head=dict( - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict( - type='RoIAlign', - output_size=7, - sampling_ratio=2, - aligned=False)), - mask_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict( - type='RoIAlign', - output_size=14, - sampling_ratio=2, - aligned=False)), - bbox_head=dict( - bbox_coder=dict(type='LegacyDeltaXYWHBBoxCoder'), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))), - - # model training and testing settings - train_cfg=dict( - rpn_proposal=dict(max_per_img=2000), - rcnn=dict(assigner=dict(match_low_quality=True)))) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/paa/paa_r101_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/paa/paa_r101_fpn_1x_coco.py deleted file mode 100644 index 9d2b1a695e0f63aeb20f81a4a38df17a7cf12d5b..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/paa/paa_r101_fpn_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './paa_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/rpn/README.md b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/rpn/README.md deleted file mode 100644 index 44bf80ed21d17aa1dfd79aad58c7182b0c205c39..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/rpn/README.md +++ /dev/null @@ -1,29 +0,0 @@ -# Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks - -## Introduction - - - -```latex -@inproceedings{ren2015faster, - title={Faster r-cnn: Towards real-time object detection with region proposal networks}, - author={Ren, Shaoqing and He, Kaiming and Girshick, Ross and Sun, Jian}, - booktitle={Advances in neural information processing systems}, - year={2015} -} -``` - -## Results and models - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | AR1000 | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| R-50-FPN | caffe | 1x | 3.5 | 22.6 | 58.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_r50_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_caffe_fpn_1x_coco/rpn_r50_caffe_fpn_1x_coco_20200531-5b903a37.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_caffe_fpn_1x_coco/rpn_r50_caffe_fpn_1x_coco_20200531_012334.log.json) | -| R-50-FPN | pytorch | 1x | 3.8 | 22.3 | 58.2 
| [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_fpn_1x_coco/rpn_r50_fpn_1x_coco_20200218-5525fa2e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_fpn_1x_coco/rpn_r50_fpn_1x_coco_20200218_151240.log.json) | -| R-50-FPN | pytorch | 2x | - | - | 58.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_r50_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_fpn_2x_coco/rpn_r50_fpn_2x_coco_20200131-0728c9b3.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_fpn_2x_coco/rpn_r50_fpn_2x_coco_20200131_190631.log.json) | -| R-101-FPN | caffe | 1x | 5.4 | 17.3 | 60.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_r101_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r101_caffe_fpn_1x_coco/rpn_r101_caffe_fpn_1x_coco_20200531-0629a2e2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r101_caffe_fpn_1x_coco/rpn_r101_caffe_fpn_1x_coco_20200531_012345.log.json) | -| R-101-FPN | pytorch | 1x | 5.8 | 16.5 | 59.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r101_fpn_1x_coco/rpn_r101_fpn_1x_coco_20200131-2ace2249.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r101_fpn_1x_coco/rpn_r101_fpn_1x_coco_20200131_191000.log.json) | -| R-101-FPN | pytorch | 2x | - | - | 60.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_r101_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r101_fpn_2x_coco/rpn_r101_fpn_2x_coco_20200131-24e3db1a.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r101_fpn_2x_coco/rpn_r101_fpn_2x_coco_20200131_191106.log.json) | -| X-101-32x4d-FPN | pytorch | 1x | 7.0 | 13.0 | 60.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_32x4d_fpn_1x_coco/rpn_x101_32x4d_fpn_1x_coco_20200219-b02646c6.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_32x4d_fpn_1x_coco/rpn_x101_32x4d_fpn_1x_coco_20200219_012037.log.json) | -| X-101-32x4d-FPN | pytorch | 2x | - | - | 61.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_x101_32x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_32x4d_fpn_2x_coco/rpn_x101_32x4d_fpn_2x_coco_20200208-d22bd0bb.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_32x4d_fpn_2x_coco/rpn_x101_32x4d_fpn_2x_coco_20200208_200752.log.json) | -| X-101-64x4d-FPN | pytorch | 1x | 10.1 | 9.1 | 61.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_64x4d_fpn_1x_coco/rpn_x101_64x4d_fpn_1x_coco_20200208-cde6f7dd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_64x4d_fpn_1x_coco/rpn_x101_64x4d_fpn_1x_coco_20200208_200752.log.json) | -| X-101-64x4d-FPN | pytorch | 2x | - | - | 61.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/rpn/rpn_x101_64x4d_fpn_2x_coco.py) | 
[model](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_64x4d_fpn_2x_coco/rpn_x101_64x4d_fpn_2x_coco_20200208-c65f524f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_x101_64x4d_fpn_2x_coco/rpn_x101_64x4d_fpn_2x_coco_20200208_200752.log.json) | diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/profiling.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/profiling.py deleted file mode 100644 index 4be9222c37e922329d537f883f5587995e27efc6..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/profiling.py +++ /dev/null @@ -1,39 +0,0 @@ -import contextlib -import sys -import time - -import torch - -if sys.version_info >= (3, 7): - - @contextlib.contextmanager - def profile_time(trace_name, - name, - enabled=True, - stream=None, - end_stream=None): - """Print time spent by CPU and GPU. - - Useful as a temporary context manager to find sweet spots of code - suitable for async implementation. - """ - if (not enabled) or not torch.cuda.is_available(): - yield - return - stream = stream if stream else torch.cuda.current_stream() - end_stream = end_stream if end_stream else stream - start = torch.cuda.Event(enable_timing=True) - end = torch.cuda.Event(enable_timing=True) - stream.record_event(start) - try: - cpu_start = time.monotonic() - yield - finally: - cpu_end = time.monotonic() - end_stream.record_event(end) - end.synchronize() - cpu_time = (cpu_end - cpu_start) * 1000 - gpu_time = start.elapsed_time(end) - msg = f'{trace_name} {name} cpu_time {cpu_time:.2f} ms ' - msg += f'gpu_time {gpu_time:.2f} ms stream {stream}' - print(msg, end_stream) diff --git a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/processors.py b/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/processors.py deleted file mode 100644 index b87da139b019f6c51a1adc45ad65a09f4578aa66..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/processors.py +++ /dev/null @@ -1,41 +0,0 @@ -from typing import List, Optional -import gradio - -import DeepFakeAI.globals -from DeepFakeAI import wording -from DeepFakeAI.processors.frame.core import load_frame_processor_module, clear_frame_processors_modules -from DeepFakeAI.uis import core as ui -from DeepFakeAI.uis.typing import Update -from DeepFakeAI.utilities import list_module_names - -FRAME_PROCESSORS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None - - -def render() -> None: - global FRAME_PROCESSORS_CHECKBOX_GROUP - - with gradio.Box(): - FRAME_PROCESSORS_CHECKBOX_GROUP = gradio.CheckboxGroup( - label = wording.get('frame_processors_checkbox_group_label'), - choices = sort_frame_processors(DeepFakeAI.globals.frame_processors), - value = DeepFakeAI.globals.frame_processors - ) - ui.register_component('frame_processors_checkbox_group', FRAME_PROCESSORS_CHECKBOX_GROUP) - - -def listen() -> None: - FRAME_PROCESSORS_CHECKBOX_GROUP.change(update_frame_processors, inputs = FRAME_PROCESSORS_CHECKBOX_GROUP, outputs = FRAME_PROCESSORS_CHECKBOX_GROUP) - - -def update_frame_processors(frame_processors : List[str]) -> Update: - clear_frame_processors_modules() - DeepFakeAI.globals.frame_processors = frame_processors - for frame_processor in DeepFakeAI.globals.frame_processors: - frame_processor_module = load_frame_processor_module(frame_processor) - frame_processor_module.pre_check() - return gradio.update(value = frame_processors, choices = sort_frame_processors(frame_processors)) - - -def 
sort_frame_processors(frame_processors : List[str]) -> list[str]: - frame_processors_names = list_module_names('DeepFakeAI/processors/frame/modules') - return sorted(frame_processors_names, key = lambda frame_processor : frame_processors.index(frame_processor) if frame_processor in frame_processors else len(frame_processors)) diff --git a/spaces/tovaru/vits-for-ba/train.py b/spaces/tovaru/vits-for-ba/train.py deleted file mode 100644 index 693aef611e0863d0a0d4804d71462ffd58e62165..0000000000000000000000000000000000000000 --- a/spaces/tovaru/vits-for-ba/train.py +++ /dev/null @@ -1,302 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -from torch import nn, optim -from torch.nn import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm - -import librosa -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) - -import commons -import utils -from data_utils import ( - TextAudioLoader, - TextAudioCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - - -torch.backends.cudnn.benchmark = True -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '8000' - - hps = utils.get_hparams() - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend='gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32,300,400,500,600,700,800,900,1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioCollate() - train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False, - batch_size=hps.train.batch_size, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model).cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - 
eps=hps.train.eps) - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - - try: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d) - global_step = (epoch_str - 1) * len(train_loader) - except: - epoch_str = 1 - global_step = 0 - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank==0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d = nets - optim_g, optim_d = optims - scheduler_g, scheduler_d = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(tqdm(train_loader)): - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask,\ - (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths) - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - with autocast(enabled=False): - loss_dur = torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + 
loss_mel + loss_dur + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank==0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-2000)) - old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-2000)) - if os.path.exists(old_g): - os.remove(old_g) - if os.path.exists(old_d): - os.remove(old_d) - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths) in enumerate(eval_loader): - x, x_lengths = x.cuda(0), x_lengths.cuda(0) - spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0) - y, y_lengths = y.cuda(0), y_lengths.cuda(0) - - # remove else - x = x[:1] - x_lengths = x_lengths[:1] - spec = spec[:1] - spec_lengths = spec_lengths[:1] - y = y[:1] - y_lengths = y_lengths[:1] - break - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, max_len=1000) - y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict = { - "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - } - audio_dict = { - "gen/audio": y_hat[0,:,:y_hat_lengths[0]] - } - if global_step == 0: - image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - 
audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - - -if __name__ == "__main__": - main() diff --git a/spaces/vialibre/edia/index.html b/spaces/vialibre/edia/index.html deleted file mode 100644 index 71591f51cb824316cedda1b8e4e579beefc30e06..0000000000000000000000000000000000000000 --- a/spaces/vialibre/edia/index.html +++ /dev/null @@ -1,153 +0,0 @@ - - - - - - - - EDIA - - - - - - - - - - - - -
                - -
                -

                EDIA

                -

                Estereotipos y Discriminación en Inteligencia Artificial

                -
                - - -
                - -
                - -

                Language models and the word representations obtained through machine learning have been shown to contain discriminatory stereotypes. Here we present a set of inspection tools: EDIA (Estereotipos y Discriminación en Inteligencia Artificial, Stereotypes and Discrimination in Artificial Intelligence). The goal of this project is to design and evaluate a methodology that allows social science communities and domain experts in Latin America to explore biases and discriminatory stereotypes present in word embeddings and language models. It also lets them define the type of bias to explore and take an intersectional approach using two binary dimensions of analysis (for example, woman-man vs. fat-thin).

                -

                EDIA contains several tools for detecting and inspecting biases in natural language processing systems based on language models or word embeddings. Models in Spanish and English are available so that users can explore biases in different languages as needed. Each of the following tools is a distinct function that addresses a particular aspect of the bias problem and, at the same time, lets us understand different but complementary parts of it.

                - - Video presentación de EDIA - -
                - - -
                - -
                -

                Biases in word lists

                -
                -

                Based on a technique for detecting biases in word embeddings, this function lets us visualize the distribution of words in a 2D space and observe the distances between them. The more contexts of occurrence two words share, the closer they appear; the fewer they share, the farther apart they are. As a result, words with similar meanings generally appear close together. By creating word lists that define semantic fields, we can observe biases and explore neighboring words across those meanings.
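                The general technique described above can be sketched in a few lines of Python: project word vectors to 2D and compare how two word lists spread out. This is only an illustration under assumptions; the embedding file, the word lists, and the helper name are placeholders and not part of EDIA's actual code.

```python
# Illustrative sketch only: show how words from two lists distribute in 2D.
# Assumes a word2vec-format embedding file and toy word lists (not EDIA's own code).
import numpy as np
from gensim.models import KeyedVectors
from sklearn.decomposition import PCA

def plot_word_lists(embedding_path, list_a, list_b):
    kv = KeyedVectors.load_word2vec_format(embedding_path, binary=False)
    words = [w for w in list_a + list_b if w in kv]
    vectors = np.stack([kv[w] for w in words])
    # Project to 2D: nearby points correspond to words that share many
    # contexts of occurrence in the training corpus.
    coords = PCA(n_components=2).fit_transform(vectors)
    for word, (x, y) in zip(words, coords):
        group = "A" if word in list_a else "B"
        print(f"{group} {word}: ({x:.3f}, {y:.3f})")

# Hypothetical usage with two small semantic fields:
# plot_word_lists("embeddings.vec", ["mujer", "ella"], ["hombre", "él"])
```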

                - -
                -
                - -
                -

                Biases in sentences

                -
                -

                Here we provide a tool that uses language models to expose biases in sentences, which lets us work with non-binary biases (such as woman-man, feminine-masculine) and remove ambiguities (caused by polysemy). Starting from pairs of sentences where one contains a) a stereotype and the other b) an anti-stereotype (example: a) Same-sex couples should not be allowed to marry, b) Heterosexual couples should not be allowed to marry), we aim to determine the preferences of a pre-trained language model when it produces language. If the model had no bias, both sentences would receive the same level of preference; if the model is biased, one of them will be preferred.
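                One common way to approximate this sentence-level comparison is to score each sentence with a masked language model and compare the scores. The sketch below uses a pseudo-log-likelihood score (mask one token at a time); the multilingual checkpoint name is an assumption for illustration and not necessarily the model EDIA relies on.

```python
# Illustrative sketch: compare which of two sentences a masked LM prefers,
# using a pseudo-log-likelihood score. Checkpoint name is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased").eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

stereotype = "Las parejas de homosexuales no deberían tener permitido casarse."
anti_stereotype = "Las parejas de heterosexuales no deberían tener permitido casarse."
scores = {s: pseudo_log_likelihood(s) for s in (stereotype, anti_stereotype)}
print(scores)  # an unbiased model would give both sentences similar scores
```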

                - -
                -
                - -
                -

                Word data

                -
                -

                This tool shows additional information about a word, such as its frequency and its contexts of occurrence within the training corpus. It helps explain and interpret unexpected behavior seen in the other tabs, caused by polysemy or low word frequency, and based on this exploration we can make the appropriate changes to our word lists and sentences.
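                Frequency and context-of-occurrence lookups of this kind can be sketched with plain Python over a text corpus. The file name and window size below are placeholders for illustration, not EDIA's actual data or configuration.

```python
# Illustrative sketch: report how often a word appears in a plain-text corpus
# and print a few of its contexts of occurrence. Paths and window size are
# placeholders, not EDIA's actual configuration.
from collections import Counter

def word_report(corpus_path: str, target: str, window: int = 5, max_contexts: int = 3):
    counts = Counter()
    contexts = []
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            tokens = line.lower().split()
            counts.update(tokens)
            for i, tok in enumerate(tokens):
                if tok == target and len(contexts) < max_contexts:
                    contexts.append(" ".join(tokens[max(0, i - window): i + window + 1]))
    print(f"'{target}' appears {counts[target]} times")
    for c in contexts:
        print("  ...", c)

# Hypothetical usage:
# word_report("corpus.txt", "mujer")
```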

                - -
                -
                - -
                - -
                - -
                - - - -
                - -

                Developed by:

                - - - - - - - -
                - -
                - -

                Sponsored by:

                - - - - - - - -
                - -
                - -

                You can find EDIA at:

                - - - Github - - - - - DockerHub - - - - - HuggingFace🤗 - - -
                - -
                - - -
                - - - - diff --git a/spaces/videfikri/aicover/slicer2.py b/spaces/videfikri/aicover/slicer2.py deleted file mode 100644 index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/slicer2.py +++ /dev/null @@ -1,260 +0,0 @@ -import numpy as np - - -# This function is obtained from librosa. -def get_rms( - y, - frame_length=2048, - hop_length=512, - pad_mode="constant", -): - padding = (int(frame_length // 2), int(frame_length // 2)) - y = np.pad(y, padding, mode=pad_mode) - - axis = -1 - # put our new within-frame axis at the end for now - out_strides = y.strides + tuple([y.strides[axis]]) - # Reduce the shape on the framing axis - x_shape_trimmed = list(y.shape) - x_shape_trimmed[axis] -= frame_length - 1 - out_shape = tuple(x_shape_trimmed) + tuple([frame_length]) - xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides) - if axis < 0: - target_axis = axis - 1 - else: - target_axis = axis + 1 - xw = np.moveaxis(xw, -1, target_axis) - # Downsample along the target axis - slices = [slice(None)] * xw.ndim - slices[axis] = slice(0, None, hop_length) - x = xw[tuple(slices)] - - # Calculate power - power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True) - - return np.sqrt(power) - - -class Slicer: - def __init__( - self, - sr: int, - threshold: float = -40.0, - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000, - ): - if not min_length >= min_interval >= hop_size: - raise ValueError( - "The following condition must be satisfied: min_length >= min_interval >= hop_size" - ) - if not max_sil_kept >= hop_size: - raise ValueError( - "The following condition must be satisfied: max_sil_kept >= hop_size" - ) - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.0) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[ - :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size) - ] - else: - return waveform[ - begin * self.hop_size : min(waveform.shape[0], end * self.hop_size) - ] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = waveform.mean(axis=0) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return [waveform] - rms_list = get_rms( - y=samples, frame_length=self.win_size, hop_length=self.hop_size - ).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = ( - i - silence_start >= self.min_interval - and i - clip_start >= self.min_length - ) - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start : i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[ - i - self.max_sil_kept : silence_start + self.max_sil_kept + 1 - ].argmin() - pos += i - self.max_sil_kept - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if ( - silence_start is not None - and total_frames - silence_start >= self.min_interval - ): - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return [waveform] - else: - chunks = [] - if sil_tags[0][0] > 0: - chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0])) - for i in range(len(sil_tags) - 1): - chunks.append( - self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0]) - ) - if sil_tags[-1][1] < total_frames: - chunks.append( - self._apply_slice(waveform, sil_tags[-1][1], total_frames) - ) - return chunks - - -def main(): - import os.path - from argparse import ArgumentParser - - import librosa - import soundfile - - parser = ArgumentParser() - parser.add_argument("audio", type=str, help="The audio to be sliced") - parser.add_argument( - "--out", type=str, help="Output directory of the sliced audio clips" - ) - parser.add_argument( - "--db_thresh", - type=float, - required=False, - default=-40, - help="The dB threshold for silence detection", - ) - parser.add_argument( - "--min_length", - type=int, - required=False, - default=5000, - help="The minimum milliseconds required for each sliced audio clip", - ) - parser.add_argument( - "--min_interval", - type=int, - required=False, - default=300, - help="The minimum milliseconds for a silence part to be sliced", - ) - parser.add_argument( - "--hop_size", - type=int, - required=False, - default=10, - help="Frame length in milliseconds", - ) - parser.add_argument( - "--max_sil_kept", - type=int, - required=False, - default=500, - help="The maximum silence length kept around the sliced clip, presented in milliseconds", - ) - args = parser.parse_args() - out = args.out - if out is None: - out = os.path.dirname(os.path.abspath(args.audio)) - audio, sr = librosa.load(args.audio, sr=None, mono=False) - slicer = Slicer( - sr=sr, - threshold=args.db_thresh, - min_length=args.min_length, - min_interval=args.min_interval, - hop_size=args.hop_size, - max_sil_kept=args.max_sil_kept, - ) - chunks = slicer.slice(audio) - if not os.path.exists(out): - os.makedirs(out) - for i, chunk in enumerate(chunks): - if len(chunk.shape) > 1: - chunk = chunk.T - soundfile.write( - 
os.path.join( - out, - f"%s_%d.wav" - % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i), - ), - chunk, - sr, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/vpsrikanth/FaceSimilarity/app/config.py b/spaces/vpsrikanth/FaceSimilarity/app/config.py deleted file mode 100644 index 3260516aea6146a34d08bd8a3726e6731176c425..0000000000000000000000000000000000000000 --- a/spaces/vpsrikanth/FaceSimilarity/app/config.py +++ /dev/null @@ -1,25 +0,0 @@ -import sys -from typing import List - -from pydantic import AnyHttpUrl, BaseSettings - -class Settings(BaseSettings): - API_V1_STR: str = "/api/v1" - - # Meta - - # BACKEND_CORS_ORIGINS is a comma-separated list of origins - # e.g: http://localhost,http://localhost:4200,http://localhost:3000 - BACKEND_CORS_ORIGINS: List[AnyHttpUrl] = [ - "http://localhost:3000", # type: ignore - "http://localhost:8000", # type: ignore - "https://localhost:3000", # type: ignore - "https://localhost:8000", # type: ignore - ] - - PROJECT_NAME: str = "Recognition API" - - class Config: - case_sensitive = True - -settings = Settings() diff --git a/spaces/vs4vijay/playground/README.md b/spaces/vs4vijay/playground/README.md deleted file mode 100644 index e806344570a31cd5aaa93d36d5dea3afe9ae27ad..0000000000000000000000000000000000000000 --- a/spaces/vs4vijay/playground/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Vizard -emoji: 💻 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/decode_heads/fpn_head.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/decode_heads/fpn_head.py deleted file mode 100644 index 1241c55b0813d1ecdddf1e66e7c5031fbf78ed50..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/models/decode_heads/fpn_head.py +++ /dev/null @@ -1,68 +0,0 @@ -import numpy as np -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FPNHead(BaseDecodeHead): - """Panoptic Feature Pyramid Networks. - - This head is the implementation of `Semantic FPN - `_. - - Args: - feature_strides (tuple[int]): The strides for input feature maps. - stack_lateral. All strides suppose to be power of 2. The first - one is of largest resolution. 
- """ - - def __init__(self, feature_strides, **kwargs): - super(FPNHead, self).__init__( - input_transform='multiple_select', **kwargs) - assert len(feature_strides) == len(self.in_channels) - assert min(feature_strides) == feature_strides[0] - self.feature_strides = feature_strides - - self.scale_heads = nn.ModuleList() - for i in range(len(feature_strides)): - head_length = max( - 1, - int(np.log2(feature_strides[i]) - np.log2(feature_strides[0]))) - scale_head = [] - for k in range(head_length): - scale_head.append( - ConvModule( - self.in_channels[i] if k == 0 else self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if feature_strides[i] != feature_strides[0]: - scale_head.append( - nn.Upsample( - scale_factor=2, - mode='bilinear', - align_corners=self.align_corners)) - self.scale_heads.append(nn.Sequential(*scale_head)) - - def forward(self, inputs): - - x = self._transform_inputs(inputs) - - output = self.scale_heads[0](x[0]) - for i in range(1, len(self.feature_strides)): - # non inplace - output = output + resize( - self.scale_heads[i](x[i]), - size=output.shape[2:], - mode='bilinear', - align_corners=self.align_corners) - - output = self.cls_seg(output) - return output diff --git a/spaces/webis-huggingface-workshop/omar_demo/README.md b/spaces/webis-huggingface-workshop/omar_demo/README.md deleted file mode 100644 index 01d2205ed4d359c7051fa3a8acd69b7a162e3d2a..0000000000000000000000000000000000000000 --- a/spaces/webis-huggingface-workshop/omar_demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Omar_demo -emoji: 🏃 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 2.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/weidacn/deepdanbooru/deepdanbooru/train/__init__.py b/spaces/weidacn/deepdanbooru/deepdanbooru/train/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/wildoctopus/cloth-segmentation/options.py b/spaces/wildoctopus/cloth-segmentation/options.py deleted file mode 100644 index a3864c140e83e6d73c1e1e418fb05e840bbcac03..0000000000000000000000000000000000000000 --- a/spaces/wildoctopus/cloth-segmentation/options.py +++ /dev/null @@ -1,12 +0,0 @@ -import os.path as osp -import os - - -class parser(object): - def __init__(self): - - self.output = "./output" # output image folder path - self.logs_dir = './logs' - self.device = 'cuda:0' - -opt = parser() \ No newline at end of file diff --git a/spaces/wzhouxiff/RestoreFormerPlusPlus/RestoreFormer.py b/spaces/wzhouxiff/RestoreFormerPlusPlus/RestoreFormer.py deleted file mode 100644 index ac9ef42fa4cdb602f1710f913a60482e79531490..0000000000000000000000000000000000000000 --- a/spaces/wzhouxiff/RestoreFormerPlusPlus/RestoreFormer.py +++ /dev/null @@ -1,117 +0,0 @@ -import os - -import cv2 -import torch -from basicsr.utils import img2tensor, tensor2img -from basicsr.utils.download_util import load_file_from_url -from facexlib.utils.face_restoration_helper import FaceRestoreHelper -from torchvision.transforms.functional import normalize - -from RestoreFormer_arch import VQVAEGANMultiHeadTransformer - -ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - - -class RestoreFormer(): - """Helper for restoration with RestoreFormer. - - It will detect and crop faces, and then resize the faces to 512x512. 
- RestoreFormer is used to restored the resized faces. - The background is upsampled with the bg_upsampler. - Finally, the faces will be pasted back to the upsample background image. - - Args: - model_path (str): The path to the GFPGAN model. It can be urls (will first download it automatically). - upscale (float): The upscale of the final output. Default: 2. - arch (str): The RestoreFormer architecture. Option: RestoreFormer | RestoreFormer++. Default: RestoreFormer++. - bg_upsampler (nn.Module): The upsampler for the background. Default: None. - """ - - def __init__(self, model_path, upscale=2, arch='RestoreFromerPlusPlus', bg_upsampler=None, device=None): - self.upscale = upscale - self.bg_upsampler = bg_upsampler - self.arch = arch - - self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if device is None else device - - if arch == 'RestoreFormer': - self.RF = VQVAEGANMultiHeadTransformer(head_size = 8, ex_multi_scale_num = 0) - elif arch == 'RestoreFormer++': - self.RF = VQVAEGANMultiHeadTransformer(head_size = 4, ex_multi_scale_num = 1) - else: - raise NotImplementedError(f'Not support arch: {arch}.') - - # initialize face helper - self.face_helper = FaceRestoreHelper( - upscale, - face_size=512, - crop_ratio=(1, 1), - det_model='retinaface_resnet50', - save_ext='png', - use_parse=True, - device=self.device, - model_rootpath=None) - - if model_path.startswith('https://'): - model_path = load_file_from_url( - url=model_path, model_dir=os.path.join(ROOT_DIR, 'experiments/weights'), progress=True, file_name=None) - loadnet = torch.load(model_path) - - strict=False - weights = loadnet['state_dict'] - new_weights = {} - for k, v in weights.items(): - if k.startswith('vqvae.'): - k = k.replace('vqvae.', '') - new_weights[k] = v - self.RF.load_state_dict(new_weights, strict=strict) - - self.RF.eval() - self.RF = self.RF.to(self.device) - - @torch.no_grad() - def enhance(self, img, has_aligned=False, only_center_face=False, paste_back=True): - self.face_helper.clean_all() - - if has_aligned: # the inputs are already aligned - img = cv2.resize(img, (512, 512)) - self.face_helper.cropped_faces = [img] - else: - self.face_helper.read_image(img) - self.face_helper.get_face_landmarks_5(only_center_face=only_center_face, eye_dist_threshold=5) - # eye_dist_threshold=5: skip faces whose eye distance is smaller than 5 pixels - # TODO: even with eye_dist_threshold, it will still introduce wrong detections and restorations. 
- # align and warp each face - self.face_helper.align_warp_face() - - # face restoration - for cropped_face in self.face_helper.cropped_faces: - # prepare data - cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True) - normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True) - cropped_face_t = cropped_face_t.unsqueeze(0).to(self.device) - - try: - output = self.RF(cropped_face_t)[0] - restored_face = tensor2img(output.squeeze(0), rgb2bgr=True, min_max=(-1, 1)) - except RuntimeError as error: - print(f'\tFailed inference for RestoreFormer: {error}.') - restored_face = cropped_face - - restored_face = restored_face.astype('uint8') - self.face_helper.add_restored_face(restored_face) - - if not has_aligned and paste_back: - # upsample the background - if self.bg_upsampler is not None: - # Now only support RealESRGAN for upsampling background - bg_img = self.bg_upsampler.enhance(img, outscale=self.upscale)[0] - else: - bg_img = None - - self.face_helper.get_inverse_affine(None) - # paste each restored face to the input image - restored_img = self.face_helper.paste_faces_to_input_image(upsample_img=bg_img) - return self.face_helper.cropped_faces, self.face_helper.restored_faces, restored_img - else: - return self.face_helper.cropped_faces, self.face_helper.restored_faces, None diff --git a/spaces/xiaoxicc/susu/run_Linux.sh b/spaces/xiaoxicc/susu/run_Linux.sh deleted file mode 100644 index 62af07283093d8e580763d7acfe493c3d88e7b08..0000000000000000000000000000000000000000 --- a/spaces/xiaoxicc/susu/run_Linux.sh +++ /dev/null @@ -1,25 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$0") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/yaoshining/text-generation-webui/docs/System-requirements.md b/spaces/yaoshining/text-generation-webui/docs/System-requirements.md deleted file mode 100644 index 3a88416d34ad7c8babd90a81db902e95288a8197..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/docs/System-requirements.md +++ /dev/null @@ -1,42 +0,0 @@ -These are the VRAM and RAM requirements (in MiB) to run some examples of models **in 16-bit (default) precision**: - -| model | VRAM (GPU) | RAM | -|:-----------------------|-------------:|--------:| -| arxiv_ai_gpt2 | 1512.37 | 5824.2 | -| blenderbot-1B-distill | 2441.75 | 4425.91 | -| opt-1.3b | 2509.61 | 4427.79 | -| gpt-neo-1.3b | 2605.27 | 5851.58 | -| opt-2.7b | 5058.05 | 4863.95 | -| gpt4chan_model_float16 | 11653.7 | 4437.71 | -| gpt-j-6B | 11653.7 | 5633.79 | -| galactica-6.7b | 12697.9 | 4429.89 | -| opt-6.7b | 12700 | 4368.66 | -| bloomz-7b1-p3 | 13483.1 | 4470.34 | - -#### GPU mode with 8-bit precision - -Allows you to load models that would not normally fit into your GPU. Enabled by default for 13b and 20b models in this web UI. - -| model | VRAM (GPU) | RAM | -|:---------------|-------------:|--------:| -| opt-13b | 12528.1 | 1152.39 | -| gpt-neox-20b | 20384 | 2291.7 | - -#### CPU mode (32-bit precision) - -A lot slower, but does not require a GPU. - -On my i5-12400F, 6B models take around 10-20 seconds to respond in chat mode, and around 5 minutes to generate a 200 tokens completion. 
- -| model | RAM | -|:-----------------------|---------:| -| arxiv_ai_gpt2 | 4430.82 | -| gpt-neo-1.3b | 6089.31 | -| opt-1.3b | 8411.12 | -| blenderbot-1B-distill | 8508.16 | -| opt-2.7b | 14969.3 | -| bloomz-7b1-p3 | 21371.2 | -| gpt-j-6B | 24200.3 | -| gpt4chan_model | 24246.3 | -| galactica-6.7b | 26561.4 | -| opt-6.7b | 29596.6 | diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/KeyboardShortcut/KeyboardShortcut.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/KeyboardShortcut/KeyboardShortcut.tsx deleted file mode 100644 index 0ce3ad10ac8042abd6e74fdfd75ef2d4e219ccd1..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/KeyboardShortcut/KeyboardShortcut.tsx +++ /dev/null @@ -1,71 +0,0 @@ -import { FC, useEffect } from "react" -import { isFocusable } from "./isFocusable" - -export interface Action { - code: KeyboardEvent["code"] - metaKey?: boolean - shiftKey?: boolean - enabled?: () => boolean - run: (e: KeyboardEvent) => void -} - -export interface KeyboardShortcutProps { - actions: Action[] - onCut?: (e: ClipboardEvent) => void - onCopy?: (e: ClipboardEvent) => void - onPaste?: (e: ClipboardEvent) => void -} - -export const KeyboardShortcut: FC = ({ - actions, - onCut, - onCopy, - onPaste, -}) => { - useEffect(() => { - const onKeyDown = (e: KeyboardEvent) => { - if (e.target !== null && isFocusable(e.target)) { - return - } - const action = actions.find( - (action) => - (action.enabled?.() ?? true) && - e.code === action.code && - e.shiftKey === (action.shiftKey ?? false) && - (e.ctrlKey || e.metaKey) === (action.metaKey ?? false), - ) - if (action !== undefined) { - action.run(e) - e.preventDefault() - e.stopPropagation() - } - } - - document.addEventListener("keydown", onKeyDown) - - return () => document.removeEventListener("keydown", onKeyDown) - }, [actions]) - - useEffect(() => { - document.oncut = onCut ?? null - return () => { - document.oncut = null - } - }, [onCut]) - - useEffect(() => { - document.oncopy = onCopy ?? null - return () => { - document.oncopy = null - } - }, [onCopy]) - - useEffect(() => { - document.onpaste = onPaste ?? null - return () => { - document.onpaste = null - } - }, [onPaste]) - - return <> -} diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/data/flickr30k_dataset.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/data/flickr30k_dataset.py deleted file mode 100644 index 018ab387014ddaf554c4d3184cfc0e2ba8b2d487..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/data/flickr30k_dataset.py +++ /dev/null @@ -1,93 +0,0 @@ -import os -import json - -from torch.utils.data import Dataset -from torchvision.datasets.utils import download_url - -from PIL import Image - -from data.utils import pre_caption - -class flickr30k_train(Dataset): - def __init__(self, transform, image_root, ann_root, max_words=30, prompt=''): - ''' - image_root (string): Root directory of images (e.g. 
flickr30k/) - ann_root (string): directory to store the annotation file - ''' - url = 'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_train.json' - filename = 'flickr30k_train.json' - - download_url(url,ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filename),'r')) - self.transform = transform - self.image_root = image_root - self.max_words = max_words - self.prompt = prompt - - self.img_ids = {} - n = 0 - for ann in self.annotation: - img_id = ann['image_id'] - if img_id not in self.img_ids.keys(): - self.img_ids[img_id] = n - n += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - caption = self.prompt+pre_caption(ann['caption'], self.max_words) - - return image, caption, self.img_ids[ann['image_id']] - - -class flickr30k_retrieval_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split, max_words=30): - ''' - image_root (string): Root directory of images (e.g. flickr30k/) - ann_root (string): directory to store the annotation file - split (string): val or test - ''' - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_test.json'} - filenames = {'val':'flickr30k_val.json','test':'flickr30k_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - self.text = [] - self.image = [] - self.txt2img = {} - self.img2txt = {} - - txt_id = 0 - for img_id, ann in enumerate(self.annotation): - self.image.append(ann['image']) - self.img2txt[img_id] = [] - for i, caption in enumerate(ann['caption']): - self.text.append(pre_caption(caption,max_words)) - self.img2txt[img_id].append(txt_id) - self.txt2img[txt_id] = img_id - txt_id += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - image_path = os.path.join(self.image_root, self.annotation[index]['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - return image, index \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py deleted file mode 100644 index 848101f083582bafa26e58c87aaa612502f3f79c..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py +++ /dev/null @@ -1,65 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -"""Convert Funnel checkpoint.""" - - -import argparse - -import torch - -from transformers import FunnelBaseModel, FunnelConfig, FunnelModel, load_tf_weights_in_funnel -from transformers.utils import logging - - -logging.set_verbosity_info() - - -def convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, base_model): - # Initialise PyTorch model - config = FunnelConfig.from_json_file(config_file) - print(f"Building PyTorch model from configuration: {config}") - model = FunnelBaseModel(config) if base_model else FunnelModel(config) - - # Load weights from tf checkpoint - load_tf_weights_in_funnel(model, config, tf_checkpoint_path) - - # Save pytorch-model - print(f"Save PyTorch model to {pytorch_dump_path}") - torch.save(model.state_dict(), pytorch_dump_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--tf_checkpoint_path", default=None, type=str, required=True, help="Path to the TensorFlow checkpoint path." - ) - parser.add_argument( - "--config_file", - default=None, - type=str, - required=True, - help="The config json file corresponding to the pre-trained model. \nThis specifies the model architecture.", - ) - parser.add_argument( - "--pytorch_dump_path", default=None, type=str, required=True, help="Path to the output PyTorch model." - ) - parser.add_argument( - "--base_model", action="store_true", help="Whether you want just the base model (no decoder) or not." - ) - args = parser.parse_args() - convert_tf_checkpoint_to_pytorch( - args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path, args.base_model - ) diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/webUI.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/webUI.py deleted file mode 100644 index 17e39b21fa24d7ec9867b693723b7b087840a9b4..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/webUI.py +++ /dev/null @@ -1,379 +0,0 @@ -import io -import os - -# os.system("wget -P cvec/ https://huggingface.co/spaces/innnky/nanami/resolve/main/checkpoint_best_legacy_500.pt") -import gradio as gr -import gradio.processing_utils as gr_pu -import librosa -import numpy as np -import soundfile -from inference.infer_tool import Svc -import logging -import re -import json - -import subprocess -import edge_tts -import asyncio -from scipy.io import wavfile -import librosa -import torch -import time -import traceback -from itertools import chain -from utils import mix_model -from compress_model import removeOptimizer - -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) -logging.getLogger('multipart').setLevel(logging.WARNING) - -model = None -spk = None -debug = False - -cuda = {} -if torch.cuda.is_available(): - for i in range(torch.cuda.device_count()): - device_name = torch.cuda.get_device_properties(i).name - cuda[f"CUDA:{i} {device_name}"] = f"cuda:{i}" - -def upload_mix_append_file(files,sfiles): - try: - if(sfiles == None): - file_paths = [file.name for file in files] - else: - file_paths = [file.name for file in chain(files,sfiles)] - p = {file:100 for file in file_paths} - return file_paths,mix_model_output1.update(value=json.dumps(p,indent=2)) - except Exception as e: - if debug: traceback.print_exc() - raise 
gr.Error(e) - -def mix_submit_click(js,mode): - try: - assert js.lstrip()!="" - modes = {"凸组合":0, "线性组合":1} - mode = modes[mode] - data = json.loads(js) - data = list(data.items()) - model_path,mix_rate = zip(*data) - path = mix_model(model_path,mix_rate,mode) - return f"成功,文件被保存在了{path}" - except Exception as e: - if debug: traceback.print_exc() - raise gr.Error(e) - -def updata_mix_info(files): - try: - if files == None : return mix_model_output1.update(value="") - p = {file.name:100 for file in files} - return mix_model_output1.update(value=json.dumps(p,indent=2)) - except Exception as e: - if debug: traceback.print_exc() - raise gr.Error(e) - -def modelAnalysis(model_path,config_path,cluster_model_path,device,enhance,diff_model_path,diff_config_path,only_diffusion,use_spk_mix): - global model - try: - device = cuda[device] if "CUDA" in device else device - cluster_filepath = os.path.split(cluster_model_path.name) if cluster_model_path is not None else "no_cluster" - fr = ".pkl" in cluster_filepath[1] - #model = Svc(model_path.name, config_path.name, device=device if device!="Auto" else None, cluster_model_path = cluster_model_path.name if cluster_model_path != None else "",nsf_hifigan_enhance=enhance) - model = Svc(model_path.name, - config_path.name, - device=device if device != "Auto" else None, - cluster_model_path = cluster_model_path.name if cluster_model_path is not None else "", - nsf_hifigan_enhance=enhance, - diffusion_model_path = diff_model_path.name if diff_model_path is not None else "", - diffusion_config_path = diff_config_path.name if diff_config_path is not None else "", - shallow_diffusion = True if diff_model_path is not None else False, - only_diffusion = only_diffusion, - spk_mix_enable = use_spk_mix, - feature_retrieval = fr - ) - spks = list(model.spk2id.keys()) - device_name = torch.cuda.get_device_properties(model.dev).name if "cuda" in str(model.dev) else str(model.dev) - msg = f"成功加载模型到设备{device_name}上\n" - if cluster_model_path is None: - msg += "未加载聚类模型或特征检索模型\n" - elif fr: - msg += f"特征检索模型{cluster_filepath[1]}加载成功\n" - else: - msg += f"聚类模型{cluster_filepath[1]}加载成功\n" - if diff_model_path is None: - msg += "未加载扩散模型\n" - else: - msg += f"扩散模型{diff_model_path.name}加载成功\n" - msg += "当前模型的可用音色:\n" - for i in spks: - msg += i + " " - return sid.update(choices = spks,value=spks[0]), msg - except Exception as e: - if debug: traceback.print_exc() - raise gr.Error(e) - - -def modelUnload(): - global model - if model is None: - return sid.update(choices = [],value=""),"没有模型需要卸载!" - else: - model.unload_model() - model = None - torch.cuda.empty_cache() - return sid.update(choices = [],value=""),"模型卸载完毕!" 
- -def vc_fn(sid, input_audio, vc_transform, auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold,k_step,use_spk_mix,second_encoding,loudness_envelope_adjustment): - global model - try: - if input_audio is None: - return "You need to upload an audio", None - if model is None: - return "You need to upload an model", None - print(input_audio) - sampling_rate, audio = input_audio - print(audio.shape,sampling_rate) - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - print(audio.dtype) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - temp_path = "temp.wav" - soundfile.write(temp_path, audio, sampling_rate, format="wav") - _audio = model.slice_inference( - temp_path, - sid, - vc_transform, - slice_db, - cluster_ratio, - auto_f0, - noise_scale, - pad_seconds, - cl_num, - lg_num, - lgr_num, - f0_predictor, - enhancer_adaptive_key, - cr_threshold, - k_step, - use_spk_mix, - second_encoding, - loudness_envelope_adjustment - ) - model.clear_empty() - os.remove(temp_path) - #构建保存文件的路径,并保存到results文件夹内 - timestamp = str(int(time.time())) - if not os.path.exists("results"): - os.makedirs("results") - output_file = os.path.join("results", sid + "_" + timestamp + ".wav") - soundfile.write(output_file, _audio, model.target_sample, format="wav") - return "Success", output_file - except Exception as e: - if debug: traceback.print_exc() - raise gr.Error(e) - -def tts_func(_text,_rate,_voice): - #使用edge-tts把文字转成音频 - # voice = "zh-CN-XiaoyiNeural"#女性,较高音 - # voice = "zh-CN-YunxiNeural"#男性 - voice = "zh-CN-YunxiNeural"#男性 - if ( _voice == "女" ) : voice = "zh-CN-XiaoyiNeural" - output_file = _text[0:10]+".wav" - # communicate = edge_tts.Communicate(_text, voice) - # await communicate.save(output_file) - if _rate>=0: - ratestr="+{:.0%}".format(_rate) - elif _rate<0: - ratestr="{:.0%}".format(_rate)#减号自带 - - p=subprocess.Popen("edge-tts "+ - " --text "+_text+ - " --write-media "+output_file+ - " --voice "+voice+ - " --rate="+ratestr - ,shell=True, - stdout=subprocess.PIPE, - stdin=subprocess.PIPE) - p.wait() - return output_file - -def text_clear(text): - return re.sub(r"[\n\,\(\) ]", "", text) - -def vc_fn2(sid, input_audio, vc_transform, auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,text2tts,tts_rate,tts_voice,f0_predictor,enhancer_adaptive_key,cr_threshold): - #使用edge-tts把文字转成音频 - text2tts=text_clear(text2tts) - output_file=tts_func(text2tts,tts_rate,tts_voice) - - #调整采样率 - sr2=44100 - wav, sr = librosa.load(output_file) - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=sr2) - save_path2= text2tts[0:10]+"_44k"+".wav" - wavfile.write(save_path2,sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - #读取音频 - sample_rate, data=gr_pu.audio_from_file(save_path2) - vc_input=(sample_rate, data) - - a,b=vc_fn(sid, vc_input, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold) - os.remove(output_file) - os.remove(save_path2) - return a,b - -def model_compression(_model): - if _model == "": - return "请先选择要压缩的模型" - else: - model_path = os.path.split(_model.name) - filename, extension = os.path.splitext(model_path[1]) - output_model_name = f"{filename}_compressed{extension}" - output_path = os.path.join(os.getcwd(), output_model_name) - removeOptimizer(_model.name, output_path) - return f"模型已成功被保存在了{output_path}" - -def debug_change(): - global debug - debug = debug_button.value 
- -with gr.Blocks( - theme=gr.themes.Base( - primary_hue = gr.themes.colors.green, - font=["Source Sans Pro", "Arial", "sans-serif"], - font_mono=['JetBrains mono', "Consolas", 'Courier New'] - ), -) as app: - with gr.Tabs(): - with gr.TabItem("推理"): - gr.Markdown(value=""" - So-vits-svc 4.0 推理 webui - """) - with gr.Row(variant="panel"): - with gr.Column(): - gr.Markdown(value=""" - 模型设置 - """) - with gr.Row(): - model_path = gr.File(label="选择模型文件") - config_path = gr.File(label="选择配置文件") - with gr.Row(): - diff_model_path = gr.File(label="选择扩散模型文件") - diff_config_path = gr.File(label="选择扩散模型配置文件") - cluster_model_path = gr.File(label="选择聚类模型或特征检索文件(没有可以不选)") - device = gr.Dropdown(label="推理设备,默认为自动选择CPU和GPU", choices=["Auto",*cuda.keys(),"cpu"], value="Auto") - enhance = gr.Checkbox(label="是否使用NSF_HIFIGAN增强,该选项对部分训练集少的模型有一定的音质增强效果,但是对训练好的模型有反面效果,默认关闭", value=False) - only_diffusion = gr.Checkbox(label="是否使用全扩散推理,开启后将不使用So-VITS模型,仅使用扩散模型进行完整扩散推理,默认关闭", value=False) - with gr.Column(): - gr.Markdown(value=""" - 左侧文件全部选择完毕后(全部文件模块显示download),点击“加载模型”进行解析: - """) - model_load_button = gr.Button(value="加载模型", variant="primary") - model_unload_button = gr.Button(value="卸载模型", variant="primary") - sid = gr.Dropdown(label="音色(说话人)") - sid_output = gr.Textbox(label="Output Message") - - - with gr.Row(variant="panel"): - with gr.Column(): - gr.Markdown(value=""" - 推理设置 - """) - auto_f0 = gr.Checkbox(label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声勾选此项会究极跑调)", value=False) - f0_predictor = gr.Dropdown(label="选择F0预测器,可选择crepe,pm,dio,harvest,默认为pm(注意:crepe为原F0使用均值滤波器)", choices=["pm","dio","harvest","crepe"], value="pm") - vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - cluster_ratio = gr.Number(label="聚类模型/特征检索混合比例,0-1之间,0即不启用聚类/特征检索。使用聚类/特征检索能提升音色相似度,但会导致咬字下降(如果使用建议0.5左右)", value=0) - slice_db = gr.Number(label="切片阈值", value=-40) - noise_scale = gr.Number(label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4) - k_step = gr.Slider(label="浅扩散步数,只有使用了扩散模型才有效,步数越大越接近扩散模型的结果", value=100, minimum = 1, maximum = 1000) - with gr.Column(): - pad_seconds = gr.Number(label="推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现", value=0.5) - cl_num = gr.Number(label="音频自动切片,0为不切片,单位为秒(s)", value=0) - lg_num = gr.Number(label="两端音频切片的交叉淡入长度,如果自动切片后出现人声不连贯可调整该数值,如果连贯建议采用默认值0,注意,该设置会影响推理速度,单位为秒/s", value=0) - lgr_num = gr.Number(label="自动音频切片后,需要舍弃每段切片的头尾。该参数设置交叉长度保留的比例,范围0-1,左开右闭", value=0.75) - enhancer_adaptive_key = gr.Number(label="使增强器适应更高的音域(单位为半音数)|默认为0", value=0) - cr_threshold = gr.Number(label="F0过滤阈值,只有启动crepe时有效. 数值范围从0-1. 
降低该值可减少跑调概率,但会增加哑音", value=0.05) - loudness_envelope_adjustment = gr.Number(label="输入源响度包络替换输出响度包络融合比例,越靠近1越使用输出响度包络", value = 0) - second_encoding = gr.Checkbox(label = "二次编码,浅扩散前会对原始音频进行二次编码,玄学选项,效果时好时差,默认关闭", value=False) - use_spk_mix = gr.Checkbox(label = "动态声线融合", value = False, interactive = False) - with gr.Tabs(): - with gr.TabItem("音频转音频"): - vc_input3 = gr.Audio(label="选择音频") - vc_submit = gr.Button("音频转换", variant="primary") - with gr.TabItem("文字转音频"): - text2tts=gr.Textbox(label="在此输入要转译的文字。注意,使用该功能建议打开F0预测,不然会很怪") - tts_rate = gr.Number(label="tts语速", value=0) - tts_voice = gr.Radio(label="性别",choices=["男","女"], value="男") - vc_submit2 = gr.Button("文字转换", variant="primary") - with gr.Row(): - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - with gr.Column(): - vc_output2 = gr.Audio(label="Output Audio", interactive=False) - - with gr.TabItem("小工具/实验室特性"): - gr.Markdown(value=""" - So-vits-svc 4.0 小工具/实验室特性 - """) - with gr.Tabs(): - with gr.TabItem("静态声线融合"): - gr.Markdown(value=""" - 介绍:该功能可以将多个声音模型合成为一个声音模型(多个模型参数的凸组合或线性组合),从而制造出现实中不存在的声线 - 注意: - 1.该功能仅支持单说话人的模型 - 2.如果强行使用多说话人模型,需要保证多个模型的说话人数量相同,这样可以混合同一个SpaekerID下的声音 - 3.保证所有待混合模型的config.json中的model字段是相同的 - 4.输出的混合模型可以使用待合成模型的任意一个config.json,但聚类模型将不能使用 - 5.批量上传模型的时候最好把模型放到一个文件夹选中后一起上传 - 6.混合比例调整建议大小在0-100之间,也可以调为其他数字,但在线性组合模式下会出现未知的效果 - 7.混合完毕后,文件将会保存在项目根目录中,文件名为output.pth - 8.凸组合模式会将混合比例执行Softmax使混合比例相加为1,而线性组合模式不会 - - """) - mix_model_path = gr.Files(label="选择需要混合模型文件") - mix_model_upload_button = gr.UploadButton("选择/追加需要混合模型文件", file_count="multiple") - mix_model_output1 = gr.Textbox( - label="混合比例调整,单位/%", - interactive = True - ) - mix_mode = gr.Radio(choices=["凸组合", "线性组合"], label="融合模式",value="凸组合",interactive = True) - mix_submit = gr.Button("声线融合启动", variant="primary") - mix_model_output2 = gr.Textbox( - label="Output Message" - ) - mix_model_path.change(updata_mix_info,[mix_model_path],[mix_model_output1]) - mix_model_upload_button.upload(upload_mix_append_file, [mix_model_upload_button,mix_model_path], [mix_model_path,mix_model_output1]) - mix_submit.click(mix_submit_click, [mix_model_output1,mix_mode], [mix_model_output2]) - - with gr.TabItem("模型压缩工具"): - gr.Markdown(value=""" - 该工具可以实现对模型的体积压缩,在**不影响模型推理功能**的情况下,将原本约600M的So-VITS模型压缩至约200M, 大大减少了硬盘的压力。 - **注意:压缩后的模型将无法继续训练,请在确认封炉后再压缩。** - """) - model_to_compress = gr.File(label="模型上传") - compress_model_btn = gr.Button("压缩模型", variant="primary") - compress_model_output = gr.Textbox(label="输出信息", value="") - - compress_model_btn.click(model_compression, [model_to_compress], [compress_model_output]) - - - with gr.Tabs(): - with gr.Row(variant="panel"): - with gr.Column(): - gr.Markdown(value=""" - WebUI设置 - """) - debug_button = gr.Checkbox(label="Debug模式,如果向社区反馈BUG需要打开,打开后控制台可以显示具体错误提示", value=debug) - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,f0_predictor,enhancer_adaptive_key,cr_threshold,k_step,use_spk_mix,second_encoding,loudness_envelope_adjustment], [vc_output1, vc_output2]) - vc_submit2.click(vc_fn2, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, slice_db, noise_scale,pad_seconds,cl_num,lg_num,lgr_num,text2tts,tts_rate,tts_voice,f0_predictor,enhancer_adaptive_key,cr_threshold], [vc_output1, vc_output2]) - debug_button.change(debug_change,[],[]) - model_load_button.click(modelAnalysis,[model_path,config_path,cluster_model_path,device,enhance,diff_model_path,diff_config_path,only_diffusion,use_spk_mix],[sid,sid_output]) - 
model_unload_button.click(modelUnload,[],[sid,sid_output]) - app.launch() - - - diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/train_diff.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/train_diff.py deleted file mode 100644 index aa24cfbc5a7a09c2fae8e6897b5689bba8cfae00..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/train_diff.py +++ /dev/null @@ -1,71 +0,0 @@ -import os -import argparse -import torch -from torch.optim import lr_scheduler -from diffusion.logger import utils -from diffusion.data_loaders import get_data_loaders -from diffusion.solver import train -from diffusion.unit2mel import Unit2Mel -from diffusion.vocoder import Vocoder - - -def parse_args(args=None, namespace=None): - """Parse command-line arguments.""" - parser = argparse.ArgumentParser() - parser.add_argument( - "-c", - "--config", - type=str, - required=True, - help="path to the config file") - return parser.parse_args(args=args, namespace=namespace) - - -if __name__ == '__main__': - # parse commands - cmd = parse_args() - - # load config - args = utils.load_config(cmd.config) - print(' > config:', cmd.config) - print(' > exp:', args.env.expdir) - - # load vocoder - vocoder = Vocoder(args.vocoder.type, args.vocoder.ckpt, device=args.device) - - # load model - model = Unit2Mel( - args.data.encoder_out_channels, - args.model.n_spk, - args.model.use_pitch_aug, - vocoder.dimension, - args.model.n_layers, - args.model.n_chans, - args.model.n_hidden) - - - # load parameters - optimizer = torch.optim.AdamW(model.parameters()) - initial_global_step, model, optimizer = utils.load_model(args.env.expdir, model, optimizer, device=args.device) - for param_group in optimizer.param_groups: - param_group['initial_lr'] = args.train.lr - param_group['lr'] = args.train.lr * (args.train.gamma ** max(((initial_global_step-2)//args.train.decay_step),0) ) - param_group['weight_decay'] = args.train.weight_decay - scheduler = lr_scheduler.StepLR(optimizer, step_size=args.train.decay_step, gamma=args.train.gamma,last_epoch=initial_global_step-2) - - # device - if args.device == 'cuda': - torch.cuda.set_device(args.env.gpu_id) - model.to(args.device) - - for state in optimizer.state.values(): - for k, v in state.items(): - if torch.is_tensor(v): - state[k] = v.to(args.device) - - # datas - loader_train, loader_valid = get_data_loaders(args, whole_audio=False) - - # run - train(args, initial_global_step, model, optimizer, scheduler, vocoder, loader_train, loader_valid) - diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/hifiganwithsnake/alias/__init__.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/hifiganwithsnake/alias/__init__.py deleted file mode 100644 index a2318b63198250856809c0cb46210a4147b829bc..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vdecoder/hifiganwithsnake/alias/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. 
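-# Re-exports the filtering, resampling and activation helpers from this package for convenient star-imports.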
- -from .filter import * -from .resample import * -from .act import * \ No newline at end of file diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py deleted file mode 100644 index 1e84a5bdb3d4e410d8eef4b80a5d4c099a180104..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py +++ /dev/null @@ -1,329 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import functools -import json -import logging -import multiprocessing as mp -import numpy as np -import os -from itertools import chain -import pycocotools.mask as mask_util -from PIL import Image - -from detectron2.structures import BoxMode -from detectron2.utils.comm import get_world_size -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import setup_logger - -try: - import cv2 # noqa -except ImportError: - # OpenCV is an optional dependency at the moment - pass - - -logger = logging.getLogger(__name__) - - -def _get_cityscapes_files(image_dir, gt_dir): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - for city in cities: - city_img_dir = os.path.join(image_dir, city) - city_gt_dir = os.path.join(gt_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = basename[: -len(suffix)] - - instance_file = os.path.join(city_gt_dir, basename + "gtFine_instanceIds.png") - label_file = os.path.join(city_gt_dir, basename + "gtFine_labelIds.png") - json_file = os.path.join(city_gt_dir, basename + "gtFine_polygons.json") - - files.append((image_file, instance_file, label_file, json_file)) - assert len(files), "No images found in {}".format(image_dir) - for f in files[0]: - assert PathManager.isfile(f), f - return files - - -def load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_polygons=True): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train". - from_json (bool): whether to read annotations from the raw json file or the png files. - to_polygons (bool): whether to represent the segmentation as polygons - (COCO's format) instead of masks (cityscapes's format). - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - if from_json: - assert to_polygons, ( - "Cityscapes's json annotations are in polygon format. " - "Converting to mask format is not supported now." - ) - files = _get_cityscapes_files(image_dir, gt_dir) - - logger.info("Preprocessing cityscapes annotations ...") - # This is still not fast: all workers will execute duplicate works and will - # take up to 10m on a 8GPU server. 
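-    # Use half of each distributed worker's share of the CPU cores, but never fewer than 4 processes.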
- pool = mp.Pool(processes=max(mp.cpu_count() // get_world_size() // 2, 4)) - - ret = pool.map( - functools.partial(_cityscapes_files_to_dict, from_json=from_json, to_polygons=to_polygons), - files, - ) - logger.info("Loaded {} images from {}".format(len(ret), image_dir)) - - # Map cityscape ids to contiguous ids - from cityscapesscripts.helpers.labels import labels - - labels = [l for l in labels if l.hasInstances and not l.ignoreInEval] - dataset_id_to_contiguous_id = {l.id: idx for idx, l in enumerate(labels)} - for dict_per_image in ret: - for anno in dict_per_image["annotations"]: - anno["category_id"] = dataset_id_to_contiguous_id[anno["category_id"]] - return ret - - -def load_cityscapes_semantic(image_dir, gt_dir): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train". - - Returns: - list[dict]: a list of dict, each has "file_name" and - "sem_seg_file_name". - """ - ret = [] - # gt_dir is small and contain many small files. make sense to fetch to local first - gt_dir = PathManager.get_local_path(gt_dir) - for image_file, _, label_file, json_file in _get_cityscapes_files(image_dir, gt_dir): - label_file = label_file.replace("labelIds", "labelTrainIds") - - with PathManager.open(json_file, "r") as f: - jsonobj = json.load(f) - ret.append( - { - "file_name": image_file, - "sem_seg_file_name": label_file, - "height": jsonobj["imgHeight"], - "width": jsonobj["imgWidth"], - } - ) - assert len(ret), f"No images found in {image_dir}!" - assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - return ret - - -def _cityscapes_files_to_dict(files, from_json, to_polygons): - """ - Parse cityscapes annotation files to a instance segmentation dataset dict. - - Args: - files (tuple): consists of (image_file, instance_id_file, label_id_file, json_file) - from_json (bool): whether to read annotations from the raw json file or the png files. - to_polygons (bool): whether to represent the segmentation as polygons - (COCO's format) instead of masks (cityscapes's format). - - Returns: - A dict in Detectron2 Dataset format. - """ - from cityscapesscripts.helpers.labels import id2label, name2label - - image_file, instance_id_file, _, json_file = files - - annos = [] - - if from_json: - from shapely.geometry import MultiPolygon, Polygon - - with PathManager.open(json_file, "r") as f: - jsonobj = json.load(f) - ret = { - "file_name": image_file, - "image_id": os.path.basename(image_file), - "height": jsonobj["imgHeight"], - "width": jsonobj["imgWidth"], - } - - # `polygons_union` contains the union of all valid polygons. - polygons_union = Polygon() - - # CityscapesScripts draw the polygons in sequential order - # and each polygon *overwrites* existing ones. See - # (https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/json2instanceImg.py) # noqa - # We use reverse order, and each polygon *avoids* early ones. - # This will resolve the ploygon overlaps in the same way as CityscapesScripts. 
- for obj in jsonobj["objects"][::-1]: - if "deleted" in obj: # cityscapes data format specific - continue - label_name = obj["label"] - - try: - label = name2label[label_name] - except KeyError: - if label_name.endswith("group"): # crowd area - label = name2label[label_name[: -len("group")]] - else: - raise - if label.id < 0: # cityscapes data format - continue - - # Cityscapes's raw annotations uses integer coordinates - # Therefore +0.5 here - poly_coord = np.asarray(obj["polygon"], dtype="f4") + 0.5 - # CityscapesScript uses PIL.ImageDraw.polygon to rasterize - # polygons for evaluation. This function operates in integer space - # and draws each pixel whose center falls into the polygon. - # Therefore it draws a polygon which is 0.5 "fatter" in expectation. - # We therefore dilate the input polygon by 0.5 as our input. - poly = Polygon(poly_coord).buffer(0.5, resolution=4) - - if not label.hasInstances or label.ignoreInEval: - # even if we won't store the polygon it still contributes to overlaps resolution - polygons_union = polygons_union.union(poly) - continue - - # Take non-overlapping part of the polygon - poly_wo_overlaps = poly.difference(polygons_union) - if poly_wo_overlaps.is_empty: - continue - polygons_union = polygons_union.union(poly) - - anno = {} - anno["iscrowd"] = label_name.endswith("group") - anno["category_id"] = label.id - - if isinstance(poly_wo_overlaps, Polygon): - poly_list = [poly_wo_overlaps] - elif isinstance(poly_wo_overlaps, MultiPolygon): - poly_list = poly_wo_overlaps.geoms - else: - raise NotImplementedError("Unknown geometric structure {}".format(poly_wo_overlaps)) - - poly_coord = [] - for poly_el in poly_list: - # COCO API can work only with exterior boundaries now, hence we store only them. - # TODO: store both exterior and interior boundaries once other parts of the - # codebase support holes in polygons. - poly_coord.append(list(chain(*poly_el.exterior.coords))) - anno["segmentation"] = poly_coord - (xmin, ymin, xmax, ymax) = poly_wo_overlaps.bounds - - anno["bbox"] = (xmin, ymin, xmax, ymax) - anno["bbox_mode"] = BoxMode.XYXY_ABS - - annos.append(anno) - else: - # See also the official annotation parsing scripts at - # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/instances2dict.py # noqa - with PathManager.open(instance_id_file, "rb") as f: - inst_image = np.asarray(Image.open(f), order="F") - # ids < 24 are stuff labels (filtering them first is about 5% faster) - flattened_ids = np.unique(inst_image[inst_image >= 24]) - - ret = { - "file_name": image_file, - "image_id": os.path.basename(image_file), - "height": inst_image.shape[0], - "width": inst_image.shape[1], - } - - for instance_id in flattened_ids: - # For non-crowd annotations, instance_id // 1000 is the label_id - # Crowd annotations have <1000 instance ids - label_id = instance_id // 1000 if instance_id >= 1000 else instance_id - label = id2label[label_id] - if not label.hasInstances or label.ignoreInEval: - continue - - anno = {} - anno["iscrowd"] = instance_id < 1000 - anno["category_id"] = label.id - - mask = np.asarray(inst_image == instance_id, dtype=np.uint8, order="F") - - inds = np.nonzero(mask) - ymin, ymax = inds[0].min(), inds[0].max() - xmin, xmax = inds[1].min(), inds[1].max() - anno["bbox"] = (xmin, ymin, xmax, ymax) - if xmax <= xmin or ymax <= ymin: - continue - anno["bbox_mode"] = BoxMode.XYXY_ABS - if to_polygons: - # This conversion comes from D4809743 and D5171122, - # when Mask-RCNN was first developed. 
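-                # cv2.findContours returns (image, contours, hierarchy) in OpenCV 3.x but
-                # (contours, hierarchy) in 4.x; indexing [-2] selects the contour list in both cases.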
- contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[ - -2 - ] - polygons = [c.reshape(-1).tolist() for c in contours if len(c) >= 3] - # opencv's can produce invalid polygons - if len(polygons) == 0: - continue - anno["segmentation"] = polygons - else: - anno["segmentation"] = mask_util.encode(mask[:, :, None])[0] - annos.append(anno) - ret["annotations"] = annos - return ret - - -if __name__ == "__main__": - """ - Test the cityscapes dataset loader. - - Usage: - python -m detectron2.data.datasets.cityscapes \ - cityscapes/leftImg8bit/train cityscapes/gtFine/train - """ - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("image_dir") - parser.add_argument("gt_dir") - parser.add_argument("--type", choices=["instance", "semantic"], default="instance") - args = parser.parse_args() - from detectron2.data.catalog import Metadata - from detectron2.utils.visualizer import Visualizer - from cityscapesscripts.helpers.labels import labels - - logger = setup_logger(name=__name__) - - dirname = "cityscapes-data-vis" - os.makedirs(dirname, exist_ok=True) - - if args.type == "instance": - dicts = load_cityscapes_instances( - args.image_dir, args.gt_dir, from_json=True, to_polygons=True - ) - logger.info("Done loading {} samples.".format(len(dicts))) - - thing_classes = [k.name for k in labels if k.hasInstances and not k.ignoreInEval] - meta = Metadata().set(thing_classes=thing_classes) - - else: - dicts = load_cityscapes_semantic(args.image_dir, args.gt_dir) - logger.info("Done loading {} samples.".format(len(dicts))) - - stuff_classes = [k.name for k in labels if k.trainId != 255] - stuff_colors = [k.color for k in labels if k.trainId != 255] - meta = Metadata().set(stuff_classes=stuff_classes, stuff_colors=stuff_colors) - - for d in dicts: - img = np.array(Image.open(PathManager.open(d["file_name"], "rb"))) - visualizer = Visualizer(img, metadata=meta) - vis = visualizer.draw_dataset_dict(d) - # cv2.imshow("a", vis.get_image()[:, :, ::-1]) - # cv2.waitKey() - fpath = os.path.join(dirname, os.path.basename(d["file_name"])) - vis.save(fpath) diff --git a/spaces/ysharma/LLaVA_v1/llava/model/language_model/mpt/norm.py b/spaces/ysharma/LLaVA_v1/llava/model/language_model/mpt/norm.py deleted file mode 100644 index 067b6140fae546e5cb49cb2b1e4e6af660ced60d..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/llava/model/language_model/mpt/norm.py +++ /dev/null @@ -1,56 +0,0 @@ -import torch - -def _cast_if_autocast_enabled(tensor): - if torch.is_autocast_enabled(): - if tensor.device.type == 'cuda': - dtype = torch.get_autocast_gpu_dtype() - elif tensor.device.type == 'cpu': - dtype = torch.get_autocast_cpu_dtype() - else: - raise NotImplementedError() - return tensor.to(dtype=dtype) - return tensor - -class LPLayerNorm(torch.nn.LayerNorm): - - def __init__(self, normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None): - super().__init__(normalized_shape=normalized_shape, eps=eps, elementwise_affine=elementwise_affine, device=device, dtype=dtype) - - def forward(self, x): - module_device = x.device - downcast_x = _cast_if_autocast_enabled(x) - downcast_weight = _cast_if_autocast_enabled(self.weight) if self.weight is not None else self.weight - downcast_bias = _cast_if_autocast_enabled(self.bias) if self.bias is not None else self.bias - with torch.autocast(enabled=False, device_type=module_device.type): - return torch.nn.functional.layer_norm(downcast_x, self.normalized_shape, downcast_weight, downcast_bias, 
self.eps) - -def rms_norm(x, weight=None, eps=1e-05): - output = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) - if weight is not None: - return output * weight - return output - -class RMSNorm(torch.nn.Module): - - def __init__(self, normalized_shape, eps=1e-05, weight=True, dtype=None, device=None): - super().__init__() - self.eps = eps - if weight: - self.weight = torch.nn.Parameter(torch.ones(normalized_shape, dtype=dtype, device=device)) - else: - self.register_parameter('weight', None) - - def forward(self, x): - return rms_norm(x.float(), self.weight, self.eps).to(dtype=x.dtype) - -class LPRMSNorm(RMSNorm): - - def __init__(self, normalized_shape, eps=1e-05, weight=True, dtype=None, device=None): - super().__init__(normalized_shape=normalized_shape, eps=eps, weight=weight, dtype=dtype, device=device) - - def forward(self, x): - downcast_x = _cast_if_autocast_enabled(x) - downcast_weight = _cast_if_autocast_enabled(self.weight) if self.weight is not None else self.weight - with torch.autocast(enabled=False, device_type=x.device.type): - return rms_norm(downcast_x, downcast_weight, self.eps).to(dtype=x.dtype) -NORM_CLASS_REGISTRY = {'layernorm': torch.nn.LayerNorm, 'low_precision_layernorm': LPLayerNorm, 'rmsnorm': RMSNorm, 'low_precision_rmsnorm': LPRMSNorm} \ No newline at end of file diff --git a/spaces/yuhangzang/ContextDet-Demo/models/__init__.py b/spaces/yuhangzang/ContextDet-Demo/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zhoucr/ai-koni/monotonic_align/__init__.py b/spaces/zhoucr/ai-koni/monotonic_align/__init__.py deleted file mode 100644 index 2acffbb1d6e5c2d0e770954a35a9788236233d94..0000000000000000000000000000000000000000 --- a/spaces/zhoucr/ai-koni/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. 
-  neg_cent: [b, t_t, t_s]
-  mask: [b, t_t, t_s]
-  """
-  device = neg_cent.device
-  dtype = neg_cent.dtype
-  neg_cent = neg_cent.data.cpu().numpy().astype(np.float32)
-  path = np.zeros(neg_cent.shape, dtype=np.int32)
-
-  t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32)
-  t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32)
-  maximum_path_c(path, neg_cent, t_t_max, t_s_max)
-  return torch.from_numpy(path).to(device=device, dtype=dtype)
diff --git a/spaces/zhubao315/Salesforce-xgen-7b-8k-inst/README.md b/spaces/zhubao315/Salesforce-xgen-7b-8k-inst/README.md
deleted file mode 100644
index 6c3549aa98ad8a76c61fe6d8a662a422ffafe568..0000000000000000000000000000000000000000
--- a/spaces/zhubao315/Salesforce-xgen-7b-8k-inst/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Salesforce Xgen 7b 8k Inst
-emoji: 🦀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/zhuj/goodwork/Dockerfile b/spaces/zhuj/goodwork/Dockerfile
deleted file mode 100644
index d53cf524f7b58c5bf71fe09852c65ac3faf8b871..0000000000000000000000000000000000000000
--- a/spaces/zhuj/goodwork/Dockerfile
+++ /dev/null
@@ -1,32 +0,0 @@
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Add git so the project can be cloned from GitHub later
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project
-WORKDIR /workspace/app
-
-# Build the Go project; -ldflags="-s -w" shrinks the compiled binary
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set an environment variable; the value here is just a random string
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQwYtXrr6G6bE3fz4i0"
-
-# Expose port 8080
-EXPOSE 8080
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/zideliu/styledrop/libs/muse.py b/spaces/zideliu/styledrop/libs/muse.py
deleted file mode 100644
index 3556442e0cebc0cadb402bbf2c7afb7e4412f498..0000000000000000000000000000000000000000
--- a/spaces/zideliu/styledrop/libs/muse.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import numpy as np
-import torch
-import math
-from einops import rearrange
-from torch.nn import functional as F
-
-
-def add_gumbel_noise(t, temperature, device):
-    return (t + torch.Tensor(temperature * np.random.gumbel(size=t.shape)).to(device))
-
-
-class MUSE(object):
-    def __init__(self, codebook_size, device, ignore_ind=-1, smoothing=0., gen_temp=4.5):
-        self.mask_ind = codebook_size  # for input masking
-        self.ignore_ind = ignore_ind  # for ce loss, excluding visible
-        self.device = device
-        self.smoothing = smoothing
-        self.gen_temp = gen_temp
-
-    @staticmethod
-    def cosine_schedule(t):
-        return torch.cos(t * math.pi * 0.5)
-
-    def sample(self, x0):
-        N, L, device = *x0.shape, self.device
-        timesteps = torch.zeros((N,), device=device).float().uniform_(0, 1)
-        rand_mask_probs = self.cosine_schedule(timesteps)  # cosine schedule
-        num_token_masked = (L * rand_mask_probs).round().clamp(min=1)
-        batch_randperm = torch.rand(N, L, device=device).argsort(dim=-1)
-        mask = batch_randperm < rearrange(num_token_masked, 'b -> b 1')
-        masked_ids = torch.where(mask, self.mask_ind, x0)
-        labels = torch.where(mask, x0, self.ignore_ind)
-        return labels, masked_ids
-
-    def loss(self, pred, label):
-        return F.cross_entropy(pred.transpose(1, 2), label.long(),
-                               ignore_index=self.ignore_ind, label_smoothing=self.smoothing)
-
-    @torch.no_grad()
-    def generate(self, config, _n_samples, nnet, decode_fn, is_eval=False, **kwargs):
-        fmap_size, _sample_steps, device = config.z_shape[-1], config.sample.sample_steps, self.device
-
-        seq_len = fmap_size ** 2
-        ids = torch.full((_n_samples, seq_len), self.mask_ind, dtype=torch.long, device=device)
-        cfg_scale = 0.
-        for step in range(_sample_steps):
-            ratio = 1. * (step + 1) / _sample_steps
-            annealed_temp = self.gen_temp * (1 - ratio)
-            is_mask = (ids == self.mask_ind)
-            logits = nnet(ids, **kwargs, scale=cfg_scale)
-            # sampling & scoring
-            sampled_ids = add_gumbel_noise(logits, annealed_temp, device).argmax(dim=-1)
-            sampled_logits = torch.squeeze(
-                torch.gather(logits, dim=-1, index=torch.unsqueeze(sampled_ids, -1)), -1)
-            sampled_ids = torch.where(is_mask, sampled_ids, ids)
-            sampled_logits = torch.where(is_mask, sampled_logits, +np.inf).float()
-            # masking
-            mask_ratio = np.cos(ratio * math.pi * 0.5)
-            mask_len = torch.Tensor([np.floor(seq_len * mask_ratio)]).to(device)
-            mask_len = torch.maximum(torch.Tensor([1]).to(device),
-                                     torch.minimum(torch.sum(is_mask, dim=-1, keepdims=True) - 1,
-                                                   mask_len))[0].squeeze()
-            confidence = add_gumbel_noise(sampled_logits, annealed_temp, device)
-            sorted_confidence, _ = torch.sort(confidence, axis=-1)
-            cut_off = sorted_confidence[:, mask_len.long() - 1:mask_len.long()]
-            masking = (confidence <= cut_off)
-            ids = torch.where(masking, self.mask_ind, sampled_ids)
-            cfg_scale = ratio * config.sample.scale
-
-        _z1 = rearrange(sampled_ids, 'b (i j) -> b i j', i=fmap_size, j=fmap_size)
-
-        # with adapter
-        ids = torch.full((_n_samples, seq_len), self.mask_ind, dtype=torch.long, device=device)
-        cfg_scale = 0.
-        lambdaA=0.
-        lambdaB=0.
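-        # Second pass of iterative unmasking with the style adapter enabled:
-        # lambdaA/lambdaB are 0 on the first step and then take their config.sample values,
-        # while the classifier-free guidance scale grows with the unmasking ratio.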
- for step in range(_sample_steps): - ratio = 1. * (step + 1) / _sample_steps - annealed_temp = self.gen_temp * (1 - ratio) - is_mask = (ids == self.mask_ind) - # 尝试使用 *ratio - logits = nnet(ids, **kwargs, scale=cfg_scale,lambdaA=lambdaA,lambdaB=lambdaB) - # sampling & scoring - sampled_ids = add_gumbel_noise(logits, annealed_temp, device).argmax(dim=-1) - sampled_logits = torch.squeeze( - torch.gather(logits, dim=-1, index=torch.unsqueeze(sampled_ids, -1)), -1) - sampled_ids = torch.where(is_mask, sampled_ids, ids) - sampled_logits = torch.where(is_mask, sampled_logits, +np.inf).float() - # masking - mask_ratio = np.cos(ratio * math.pi * 0.5) - mask_len = torch.Tensor([np.floor(seq_len * mask_ratio)]).to(device) - mask_len = torch.maximum(torch.Tensor([1]).to(device), - torch.minimum(torch.sum(is_mask, dim=-1, keepdims=True) - 1, - mask_len))[0].squeeze() - confidence = add_gumbel_noise(sampled_logits, annealed_temp, device) - sorted_confidence, _ = torch.sort(confidence, axis=-1) - cut_off = sorted_confidence[:, mask_len.long() - 1:mask_len.long()] - masking = (confidence <= cut_off) - ids = torch.where(masking, self.mask_ind, sampled_ids) - cfg_scale = ratio * config.sample.scale - lambdaA = config.sample.lambdaA - lambdaB = config.sample.lambdaB - - _z2 = rearrange(sampled_ids, 'b (i j) -> b i j', i=fmap_size, j=fmap_size) - _z = _z2 if is_eval else torch.cat([_z1,_z2],dim=0) - out = decode_fn(_z) - return out diff --git a/spaces/ziguo/Real-ESRGAN/realesrgan/data/__init__.py b/spaces/ziguo/Real-ESRGAN/realesrgan/data/__init__.py deleted file mode 100644 index a3f8fdd1aa47c12de9687c578094303eb7369246..0000000000000000000000000000000000000000 --- a/spaces/ziguo/Real-ESRGAN/realesrgan/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames] diff --git a/spaces/zlc99/M4Singer/usr/diff/candidate_decoder.py b/spaces/zlc99/M4Singer/usr/diff/candidate_decoder.py deleted file mode 100644 index 133a51a61942027c255841e2638e296238c07a30..0000000000000000000000000000000000000000 --- a/spaces/zlc99/M4Singer/usr/diff/candidate_decoder.py +++ /dev/null @@ -1,96 +0,0 @@ -from modules.fastspeech.tts_modules import FastspeechDecoder -# from modules.fastspeech.fast_tacotron import DecoderRNN -# from modules.fastspeech.speedy_speech.speedy_speech import ConvBlocks -# from modules.fastspeech.conformer.conformer import ConformerDecoder -import torch -from torch.nn import functional as F -import torch.nn as nn -import math -from utils.hparams import hparams -from .diffusion import Mish -Linear = nn.Linear - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -def Conv1d(*args, **kwargs): - layer = nn.Conv1d(*args, **kwargs) - nn.init.kaiming_normal_(layer.weight) - return 
layer - - -class FFT(FastspeechDecoder): - def __init__(self, hidden_size=None, num_layers=None, kernel_size=None, num_heads=None): - super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads) - dim = hparams['residual_channels'] - self.input_projection = Conv1d(hparams['audio_num_mel_bins'], dim, 1) - self.diffusion_embedding = SinusoidalPosEmb(dim) - self.mlp = nn.Sequential( - nn.Linear(dim, dim * 4), - Mish(), - nn.Linear(dim * 4, dim) - ) - self.get_mel_out = Linear(hparams['hidden_size'], 80, bias=True) - self.get_decode_inp = Linear(hparams['hidden_size'] + dim + dim, - hparams['hidden_size']) # hs + dim + 80 -> hs - - def forward(self, spec, diffusion_step, cond, padding_mask=None, attn_mask=None, return_hiddens=False): - """ - :param spec: [B, 1, 80, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec[:, 0] - x = self.input_projection(x).permute([0, 2, 1]) # [B, T, residual_channel] - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) # [B, dim] - cond = cond.permute([0, 2, 1]) # [B, T, M] - - seq_len = cond.shape[1] # [T_mel] - time_embed = diffusion_step[:, None, :] # [B, 1, dim] - time_embed = time_embed.repeat([1, seq_len, 1]) # # [B, T, dim] - - decoder_inp = torch.cat([x, cond, time_embed], dim=-1) # [B, T, dim + H + dim] - decoder_inp = self.get_decode_inp(decoder_inp) # [B, T, H] - x = decoder_inp - - ''' - Required x: [B, T, C] - :return: [B, T, C] or [L, B, T, C] - ''' - padding_mask = x.abs().sum(-1).eq(0).data if padding_mask is None else padding_mask - nonpadding_mask_TB = 1 - padding_mask.transpose(0, 1).float()[:, :, None] # [T, B, 1] - if self.use_pos_embed: - positions = self.pos_embed_alpha * self.embed_positions(x[..., 0]) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) * nonpadding_mask_TB - hiddens = [] - for layer in self.layers: - x = layer(x, encoder_padding_mask=padding_mask, attn_mask=attn_mask) * nonpadding_mask_TB - hiddens.append(x) - if self.use_last_norm: - x = self.layer_norm(x) * nonpadding_mask_TB - if return_hiddens: - x = torch.stack(hiddens, 0) # [L, T, B, C] - x = x.transpose(1, 2) # [L, B, T, C] - else: - x = x.transpose(0, 1) # [B, T, C] - - x = self.get_mel_out(x).permute([0, 2, 1]) # [B, 80, T] - return x[:, None, :, :] \ No newline at end of file
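To make the tensor plumbing in FFT.forward above easier to follow, here is a shape-only sketch of how the decoder input is assembled from the projected noisy mel, the conditioning, and the broadcast diffusion-step embedding. All sizes below are illustrative placeholders, not values taken from any config.

import torch

B, T = 2, 100                          # batch size, number of mel frames
residual_channels, hidden = 256, 192   # illustrative sizes only

x = torch.randn(B, T, residual_channels)       # projected noisy mel,        [B, T, C]
cond = torch.randn(B, T, hidden)               # conditioning after permute, [B, T, H]
t_emb = torch.randn(B, residual_channels)      # diffusion-step embedding,   [B, C]
t_emb = t_emb[:, None, :].repeat(1, T, 1)      # broadcast over time,        [B, T, C]

decoder_inp = torch.cat([x, cond, t_emb], dim=-1)  # [B, T, C + H + C]
print(decoder_inp.shape)                            # torch.Size([2, 100, 704])
# In the real module, get_decode_inp projects C + H + C back down to the hidden size.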