diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackdown 2 DLC What It Is Why You Need It and How to Get It.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackdown 2 DLC What It Is Why You Need It and How to Get It.md
deleted file mode 100644
index e1aa160fabdfcdd989e14dbebf3c552ac32b0464..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crackdown 2 DLC What It Is Why You Need It and How to Get It.md
+++ /dev/null
@@ -1,37 +0,0 @@
-
-

Crackdown 2 DLC: Everything You Need to Know

-

Crackdown 2 is a sandbox action-adventure game that lets you play as a super-powered agent in a futuristic city. You can explore the open world, fight enemies, collect orbs, and complete missions. But if you want more content and challenges, you may be interested in the downloadable content (DLC) packs that are available for Crackdown 2.

-

In this article, we will give you an overview of the two DLC packs that were released for Crackdown 2: the Toy Box pack and the Deluge pack. We will tell you what they include, how much they cost, and how to get them. We will also share some tips and tricks on how to enjoy the DLC packs to the fullest.

-




-

The Toy Box Pack

-

The Toy Box pack was the first DLC pack that was released for Crackdown 2 on September 2, 2010. It added new features, modes, vehicles, weapons, and achievements to the game. The Toy Box pack had two versions: a free version and a premium version.

-

The free version included a basic selection of the pack's new content at no cost.

The premium version cost 560 Microsoft Points ($7) and included everything in the free version plus the rest of the pack's new vehicles, weapons, modes, and achievements.

To get the Toy Box pack, you had to download it from the Xbox Live Marketplace or from the in-game menu. You also had to have an Xbox Live Gold membership to access some of the features.

-

The Deluge Pack

-

The Deluge pack was the second and final DLC pack released for Crackdown 2, on November 16, 2010. It added a new mode, maps, vehicles, weapons, achievements, and avatar awards to the game, all for 560 Microsoft Points ($7).

To get the Deluge pack, you had to download it from the Xbox Live Marketplace or from the in-game menu. You also had to have an Xbox Live Gold membership to access some of the features.

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Arsenal Extended Power License Generator.md b/spaces/1gistliPinn/ChatGPT4/Examples/Arsenal Extended Power License Generator.md
deleted file mode 100644
index 084512e72dfa340b44d39b310156eafe258e09c6..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Arsenal Extended Power License Generator.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-

* Dukovany 1 and 2 are similar twin steam-electric reactor units. The first of the two units was put into operation in 1972, and the second in 1976. The two reactors had a design power output of 500 MW, a maximum design output of 1,400 MW, and a total generating capacity of more than 2,000 MW. Of note, the final cost of the two reactor units as quoted in 1979 was CZK 12.9 billion ($512 million).

The reactors use natural uranium fuel and drive turbines to generate electricity. Both have a thermal capacity of approximately 260 MW. The reactors both utilize a one-pass reheater, and the only significant difference between the two units is that Unit 1 has a domed concrete reactor head, with the roof slab on the same level as the reactor vessel, whereas Unit 2 has a flat concrete roof slab. The front of the roofs of the two units are separated by about 12 metres, and the main entry doors in the front of each unit are about 25 metres wide, allowing access to the power blocks and the tall cooling towers.

The reactors are constructed on the grounds of a former cement factory, which was initially used for building materials, but later converted for nuclear power plant use.

-

The agreements covering the lease of the sites for units 1 and 2 of the first Dukovany nuclear power plant were signed in 1971. The LTA pertains to unit 1 and covers the parcels of land where the reactor and the reactor building are located, including the surrounding land and buildings. It therefore takes in the reactor, its service areas, and all other land and buildings on the leased site.

Ownership of the LTA was transferred to the National Property Fund for State Investments (NPU) in 1999, and the NPU has held that ownership since January 1, 2008.

-




-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD Architecture 2007 Crack Free Download The Ultimate Guide for Architects and Designers.md b/spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD Architecture 2007 Crack Free Download The Ultimate Guide for Architects and Designers.md
deleted file mode 100644
index fae4af264d3b8c9abe6ea415cd324ab33c1b8723..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/AutoCAD Architecture 2007 Crack Free Download The Ultimate Guide for Architects and Designers.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-

This release of AutoCAD is more reliable than the previous version, and updates are available for enhanced performance. Its hardware requirements are modest, so it runs on most computers, even those with limited RAM or storage. AutoCAD 2007 can be downloaded and installed on your computer and will be up and running in no time. It can also be used with Windows 7, 8, or 10.

-




-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Winrar Para Mac Os X 10.5.8 [BETTER].md b/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Winrar Para Mac Os X 10.5.8 [BETTER].md
deleted file mode 100644
index 9e736ec63762a8e3f8c736402edb4bbde123d2e3..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Descargar Winrar Para Mac Os X 10.5.8 [BETTER].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-

Mac OS X 10.5.8 Update is a useful, free program available only for Mac. It belongs to the Utilities category, under the System Enhancements subcategory, and is published by Apple.

-




-

Version information for the program is not available; its last update was released on 8/08/2011. It is available for users running Mac OS X and earlier versions, and it can be downloaded in several languages, including English, Spanish, and German.

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Salamander 2012 TV Serie 4DVDrip Dutch English Klam Torrent - A Must-See for Fans of Mystery and Suspense.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Salamander 2012 TV Serie 4DVDrip Dutch English Klam Torrent - A Must-See for Fans of Mystery and Suspense.md
deleted file mode 100644
index 88e46b5d73bcd7d59e5c715fa98516c837f847b1..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Salamander 2012 TV Serie 4DVDrip Dutch English Klam Torrent - A Must-See for Fans of Mystery and Suspense.md
+++ /dev/null
@@ -1,6 +0,0 @@
-

Download Salamander 2012 TV Serie 4DVDrip Dutch English Klam Torrent - KickassTorrents





-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Airline Commander Hack How to Get Free AC Credits and Unlock All Planes.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Airline Commander Hack How to Get Free AC Credits and Unlock All Planes.md
deleted file mode 100644
index ed65842a3c3cfe7dc54b48c0859deb965431b0f8..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Airline Commander Hack How to Get Free AC Credits and Unlock All Planes.md
+++ /dev/null
@@ -1,81 +0,0 @@
-

Airline Commander Hack: How to Unlock All Planes and Get Free AC Credits

-

If you are a fan of flight simulator games, you might have heard of Airline Commander, one of the most realistic airplane games on the market. In this game, you start as a new pilot who must learn how to fly large aircraft, take off from the airport, land safely, and manage your own airline. You can also expand your airplane fleet, choose new flight routes, and handle different situations in real time.

-




-

However, as fun as it sounds, Airline Commander can also be challenging and frustrating at times. You might find yourself stuck with limited planes, routes, and money. You might also struggle to earn enough AC credits, which are the premium currency in the game. That's why many players are looking for a way to hack Airline Commander and enjoy the game without any limitations.

-

Fortunately, there is a simple and effective solution for that. In this article, we will show you how to use an Airline Commander hack that can unlock all planes and generate free AC credits for you. With this hack, you can experience the full potential of this amazing flight simulator game and have more fun than ever.

-

How to Unlock All Planes in Airline Commander

-

One of the main features of Airline Commander is that it offers dozens of airliners for you to choose from. You can fly turbine, reaction, single deck or double deck planes. You can also open thousands of routes towards all the major airports of the world and explore hundreds of realistic airports and runways.

-

However, unlocking new planes and routes is not easy. You need to complete contracts and earn money to buy new planes. You also need to improve your skills and get new licenses to access more challenging routes. This can take a lot of time and effort, especially if you are a beginner.

-

-

That's why using an Airline Commander hack can be very helpful. With this hack, you can unlock all planes instantly and customize them as you wish. You don't have to worry about money or licenses anymore. You can just pick any plane you like and fly it anywhere you want.

-

This way, you can enjoy the realistic flight experience with different planes and routes. You can also compete with other players and prove your skills as a pilot.

-

How to Get Free AC Credits in Airline Commander

-

Another feature of Airline Commander is that it uses AC credits as the premium currency in the game. AC credits are very useful for many things, such as buying special planes, upgrading your fleet, speeding up your progress, and more.

-

However, AC credits are also very hard to get. You can only earn them from daily tasks and achievements, which are limited and time-consuming. You can also buy them with real money, but that can be expensive and not worth it.

-

That's why using an Airline Commander hack can be very beneficial. With this hack, you can generate unlimited AC credits for free. You don't have to spend any time or money on them anymore. You can just use them for whatever you want in the game.

-

This way, you can enhance your gameplay and make it more enjoyable. You can also unlock more features and options in the game and have more fun than ever.

-

Conclusion

-

Airline Commander is a fantastic flight simulator game that lets you experience the thrill of flying big aircrafts and managing your own airline. However, it can also be frustrating and limiting if you don't have enough planes, routes, and AC credits.

-

That's why using an Airline Commander hack can be a great idea. With this hack, you can unlock all planes and get free AC credits in a matter of minutes. You don't have to spend any time or money on them anymore. You can just enjoy the game to the fullest and have a blast.

-

If you want to try this hack, you can download it from the link below. It is safe, easy, and fast to use. You just need to follow the instructions and enter your username. Then, you can choose how many planes and AC credits you want and click on the generate button. That's it!

-

Don't miss this opportunity and get your Airline Commander hack today. You will be amazed by how much better your gameplay will be. You will also impress your friends and other players with your skills and achievements.

-

So, what are you waiting for? Click on the link below and start your flight adventure now!

-

Download Airline Commander Hack Here

-

FAQs

-

Is this hack safe to use?

-

Yes, this hack is 100% safe and secure to use. It does not require any root or jailbreak, and it does not contain any viruses or malware. It also does not ask for your password or personal information. It only uses your username to connect to the game server and generate the resources you want.

-

Will I get banned for using this hack?

-

No, you will not get banned for using this hack. This hack has a built-in anti-ban system that protects your account from detection and suspension. It also uses proxy servers and encryption methods to ensure your safety and privacy. You can use this hack without any worries or risks.

-

How often can I use this hack?

-

You can use this hack as often as you want. There is no limit or restriction on how many times you can use it or how many resources you can generate. You can always come back and use it again whenever you need more planes or AC credits.

-

Does this hack work on all devices?

-

Yes, this hack works on all devices that support Airline Commander. It does not matter if you are using an Android or iOS device, a smartphone or a tablet, a PC or a Mac. As long as you have an internet connection and a browser, you can use this hack from any device.

-

Do I need to update this hack?

-

No, you do not need to update this hack manually. This hack is always updated automatically to match the latest version of the game. You don't have to worry about compatibility issues or errors. You can always enjoy the latest features and benefits of this hack.

-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us on PC - How to Download and Install the Game for Free.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us on PC - How to Download and Install the Game for Free.md
deleted file mode 100644
index 33676bac618c17651631150b0dbb29dceb43f3f3..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us on PC - How to Download and Install the Game for Free.md
+++ /dev/null
@@ -1,176 +0,0 @@
-
-

How to Download Among Us on PC Windows 7

-

Among Us is a popular multiplayer game that has taken the gaming world by storm. It is a social deduction game where you play as either a crewmate or an imposter on a spaceship. As a crewmate, your goal is to complete tasks around the ship while avoiding being killed by the imposters. As an imposter, your goal is to kill enough crewmates without being caught or voted out.

-

Among Us is a fun and addictive game that can be played online or locally with up to 15 players. You can customize your character, choose from different maps and modes, chat with other players, and enjoy cross-platform play between Android, iOS, PC, and console. If you are looking for a game that will test your skills of deception, teamwork, and deduction, then you should definitely give Among Us a try.

-




-

But how can you play Among Us on your Windows 7 PC? There are two main ways to do so: using an emulator, or downloading the game directly from Steam or the Microsoft Store. In this article, we will show you both methods step by step, along with some tips and tricks for playing Among Us on PC. Let's get started!

-

How to Download Among Us on PC with Emulators

-

What are Emulators and How They Work

-

An emulator is software that allows you to run Android apps on your PC. It simulates the Android operating system and creates a virtual environment where you can install and use Android apps just as you would on your phone or tablet. Emulators are useful for playing Android games on a bigger screen, using keyboard and mouse controls, recording gameplay videos, or testing apps before publishing them.

-

There are many emulators available for PC Windows 7, such as Bluestacks, Gameloop, LDPlayer, NoxPlayer, etc. Each emulator has its own features, advantages, and disadvantages. You can choose the one that suits your preferences and system requirements. However, keep in mind that emulators may consume a lot of CPU and RAM resources, so make sure your PC can handle them.

-

How to Install an Emulator on Your PC

-

To install an emulator on your PC, follow these steps:
- Go to the official website of the emulator you want to use and download the installer file. For example, if you want to use Bluestacks, go to [Bluestacks.com] and click on the "Download Bluestacks" button.
- Run the installer file and follow the instructions on the screen. You may need to grant some permissions and choose a location for the emulator files.
- Wait for the installation process to finish. It may take some time depending on your internet speed and PC performance.
- Launch the emulator and sign in with your Google account. If you don't have one, you can create one for free.
- Congratulations! You have successfully installed an emulator on your PC. Now you can access the Google Play Store and download Android apps on your PC.

-

How to Download and Play Among Us on Your Emulator

-

To download and play Among Us on your emulator, follow these steps:
- Open the emulator and go to the Google Play Store. You can find it on the home screen or in the app drawer of the emulator.
- Search for "Among Us" in the search bar and tap on the game icon. You can also use this [link] to go directly to the game page.
- Tap on the "Install" button and wait for the game to download and install on your emulator.
- Tap on the "Open" button or find the game icon on the home screen or in the app drawer of the emulator.
- Enjoy playing Among Us on your PC with your emulator!

-

How to Download Among Us on PC without Emulators

-

What are the Requirements for Playing Among Us on PC without Emulators

-

If you don't want to use emulators, you can also download Among Us directly from Steam or the Microsoft Store. However, you will need to meet some requirements for playing Among Us on PC without emulators. Here are the minimum and recommended system requirements for playing Among Us on PC:

| Component | Minimum Requirements | Recommended Requirements |
| --- | --- | --- |
| OS | Windows 7 SP1+ | Windows 10 |
| Processor | SSE2 instruction set support | Intel Core i3 or higher |
| Memory | 1 GB RAM | 4 GB RAM |
| DirectX | Version 10 | Version 12 |
| Storage | 250 MB available space | 500 MB available space |
-

Make sure your PC meets these requirements before downloading Among Us from Steam or Microsoft Store.

-

How to Download and Install Among Us on Steam

-

To download and install Among Us on Steam, follow these steps:
- Go to [Steam's official website] and download the Steam client. If you already have Steam installed, skip this step.
- Run the Steam client and sign in with your Steam account. If you don't have one, you can create one for free.
- Go to [Among Us's game page] on Steam or search for "Among Us" in the Steam store.
- Click on the "Add to Cart" button and proceed to checkout. You will need to pay $4.99 USD to purchase Among Us on Steam.
- After purchasing, go to your library and find Among Us in your games list. Click on the "Install" button and wait for the game to download and install on your PC.
- Click on the "Play" button or double-click on the game icon in your library to launch Among Us on Steam.
- Enjoy playing Among Us on your PC with Steam!

-

How to Download and Install Among Us from Microsoft Store

-

To download and install Among Us from the Microsoft Store, follow these steps:
- Go to [Microsoft Store's official website] or open the Microsoft Store app on your Windows 7 PC.
- Search for "Among Us" in the search bar and click on the game icon. You can also use this [link] to go directly to the game page.
- Click on the "Get" button and sign in with your Microsoft account. If you don't have one, you can create one for free.
- Wait for the game to download and install on your PC.
- Click on the "Play" button or find the game icon in your start menu or desktop to launch Among Us from the Microsoft Store.
- Enjoy playing Among Us on your PC with the Microsoft Store!

-

-

Tips and Tricks for Playing Among Us on PC

-

How to Customize Your Character and Settings

-

One of the fun aspects of Among Us is that you can customize your character and settings to suit your preferences. Here are some of the things you can do:
- To change your name, color, hat, skin, pet, or language, go to the main menu and click on the "Customize" button at the bottom right corner. You can also access this option in the lobby before starting a game.
- To change the game settings, such as the number of imposters, map, mode, speed, vision, kill cooldown, task difficulty, etc., go to the lobby and click on the "Game" button at the bottom left corner. You can also access this option in the main menu by clicking on the "Host" button and creating a private game.
- To change the sound and graphics settings, such as the volume, resolution, full screen mode, vsync, etc., go to the main menu and click on the "Settings" button at the bottom left corner. You can also access this option in the lobby by clicking on the gear icon at the top right corner.

-

How to Play as a Crewmate or an Imposter

-

Among Us is a game of deception and deduction. Depending on your role, you will have different objectives and strategies. Here are some of the basics of how to play as a crewmate or an imposter:
- As a crewmate, your goal is to complete tasks around the ship and find out who the imposters are. You can see your tasks on the top left corner of your screen or by opening your map. Tasks are mini-games that require you to perform simple actions, such as connecting wires, swiping cards, scanning bodies, etc. Completing tasks will fill up the task bar at the top of your screen. If you fill up the task bar before the imposters kill everyone, you win.
- As an imposter, your goal is to kill enough crewmates without being caught or voted out. You can see who your fellow imposters are by their red names. You can kill crewmates by getting close to them and clicking on the "Kill" button at the bottom right corner of your screen. However, you have to wait for a cooldown time before you can kill again. You can also sabotage the ship by clicking on the "Sabotage" button at the bottom right corner of your screen. Sabotages are actions that disrupt the crewmates' tasks or cause emergencies, such as locking doors, turning off lights, starting a reactor meltdown, etc. Sabotages can help you create chaos, distract crewmates, or prevent them from completing tasks.
- As an imposter, you can also use vents to move around the map quickly and secretly. Vents are holes that connect different rooms of the ship. You can enter or exit vents by clicking on them when you are near them. Only imposters can use vents, so be careful not to be seen by crewmates when you do so.
- As a crewmate or an imposter, you can use cameras to monitor other players' activities. Cameras are devices that show live feeds of different areas of the map. You can access cameras by going to the security room and clicking on the screen. However, using cameras will limit your vision and movement, so be careful not to miss anything important or expose yourself to danger when you do so.
- As a crewmate or an imposter, you can participate in meetings to discuss and vote for who you think is an imposter. Meetings are triggered when someone reports a dead body or calls an emergency meeting by pressing a button. During meetings, you can chat with other players by typing or using voice chat (if enabled). You can also vote for someone by clicking on their name or skip voting by clicking on the skip button. The person with the most votes will be ejected from the ship. If there is a tie, no one will be ejected. If all imposters are ejected or all crewmates are killed, the game ends.

-

How to Use Keyboard and Mouse Controls

-

Playing Among Us on PC with keyboard and mouse controls can give you an edge over playing on mobile devices with touch controls. Here are some of the default keyboard and mouse controls for playing Among Us on PC and how to change them if needed:

| Action | Keyboard Control | Mouse Control |
| --- | --- | --- |
| Move | WASD keys | Left click and drag |
| Use | E key or Spacebar | Left click |
| Kill | Q key | Left click |
| Sabotage | R key | Left click |
| Report | R key | Left click |
| Map | Tab key | Left click |
| Chat | T key or Enter key | Left click |
| Settings | Escape key | Left click |
| Confirm | E key or Spacebar | Left click |
| Back | Escape key or Backspace key | Right click or Left click on the back button |
-

To change the keyboard and mouse controls, go to the main menu and click on the "Settings" button at the bottom left corner. Then, click on the "Controls" tab and choose the option that suits you best. You can also customize the keyboard controls by clicking on the "Customize Keyboard" button and assigning different keys to different actions.

-

Conclusion

-

In this article, we have shown you how to download Among Us on PC Windows 7 using two methods: with emulators or without emulators. We have also given you some tips and tricks for playing Among Us on PC, such as how to customize your character and settings, how to play as a crewmate or an imposter, and how to use keyboard and mouse controls. We hope you found this article helpful and informative.

-

Among Us is a game that will keep you entertained and engaged for hours. It is a game that will challenge your skills of deception, teamwork, and deduction. It is a game that will make you laugh, scream, and rage. It is a game that you should definitely try out if you haven't already.

-

So what are you waiting for? Download Among Us on your PC Windows 7 today and join the fun! And don't forget to share this article with your friends who might also be interested in playing Among Us on PC. Happy gaming!

-

Frequently Asked Questions (FAQs)

-

Q: Is Among Us free on PC?

-

A: Among Us is not free on PC. You have to pay $4.99 USD to purchase it from Steam or Microsoft Store. However, you can play it for free on PC using emulators, which allow you to run Android apps on your PC.

-

Q: Can I play Among Us on PC with my friends who play on mobile devices?

-

A: Yes, you can. Among Us supports cross-platform play between Android, iOS, PC, and console. You can join the same game with your friends who play on different devices by entering the same code or creating a private game.

-

Q: How can I update Among Us on PC?

-

A: If you downloaded Among Us from Steam or Microsoft Store, you can update it automatically by launching the game or checking for updates in the store. If you downloaded Among Us from an emulator, you can update it manually by going to the Google Play Store and tapping on the "Update" button.

-

Q: How can I report bugs or issues in Among Us on PC?

-

A: If you encounter any bugs or issues in Among Us on PC, you can report them to the developers by filling out this [form] or contacting them via email at support@innersloth.com.

-

Q: How can I get more information about Among Us on PC?

-

A: If you want to get more information about Among Us on PC, such as the latest news, updates, features, tips, guides, etc., you can visit the official website of Among Us at [innersloth.com] or follow their social media accounts at [Twitter], [Facebook], [Instagram], [YouTube], [Discord], [Reddit], etc.

-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Cheats How to Download and Install the Unlimited Gems Mod.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Cheats How to Download and Install the Unlimited Gems Mod.md
deleted file mode 100644
index 8d15dce78fe19c0c74792ec852535bab86695c9b..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Brawl Stars Cheats How to Download and Install the Unlimited Gems Mod.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-

Download Brawl Stars Unlimited Gems: How to Get Free Gems in Brawl Stars

-

If you are a fan of Brawl Stars, you probably know how important gems are in this game. Gems are the premium currency that can help you unlock and upgrade your favorite brawlers, buy cool skins, gadgets, and star powers, and get access to exclusive offers and rewards. But how can you get free gems in Brawl Stars without spending real money? In this article, we will show you how to download Brawl Stars unlimited gems using different methods. But first, let's take a look at what Brawl Stars is and why you need gems in the first place.

-




-

What is Brawl Stars?

-

A fast-paced multiplayer game with various modes and characters

-

Brawl Stars is a popular mobile game developed by Supercell, the makers of Clash of Clans, Clash Royale, and Boom Beach. It is a 3v3 or solo multiplayer game that features various game modes, such as Gem Grab, Showdown, Brawl Ball, Bounty, Heist, Special Events, and Championship Challenge. In each mode, you have to team up with your friends or play solo against other players from around the world and use your skills and strategies to win the match.

-

Brawl Stars also has a diverse cast of characters called brawlers, each with their own unique abilities, attacks, and super moves. There are currently over 40 brawlers in the game, divided into different rarities: Trophy Road, Rare, Super Rare, Epic, Mythic, Legendary, and Chromatic. You can unlock new brawlers by opening brawl boxes or buying them with gems. You can also customize your brawlers with different skins that change their appearance and animations.

-

Why do you need gems in Brawl Stars?

-

To unlock and upgrade brawlers, skins, gadgets, and star powers

-

One of the main reasons why you need gems in Brawl Stars is to unlock and upgrade your brawlers. As mentioned earlier, brawlers are the characters that you use to play the game. Each brawler has a power level that determines their stats and performance. You can increase your brawler's power level by collecting power points and coins. Power points are items that you can get from brawl boxes or buy with coins. Coins are the basic currency that you can earn by playing the game or opening brawl boxes.

-

However, some brawlers are not available from brawl boxes or coins. They can only be bought with gems. These include some of the rarest and most powerful brawlers in the game, such as Leon, Sandy, Spike, Amber, Colette, Gale, Surge, Lou, Colonel Ruffs, Belle, Buzz, Griff, Ash, Meg, etc. These brawlers can cost anywhere from 30 to 700 gems depending on their rarity.

-

Gems can also help you unlock and upgrade your brawler's skins, gadgets, and star powers. Skins are cosmetic items that change your brawler's look and animations. Some skins can be bought with coins or star points (a special currency that you can earn by reaching certain ranks), but most of them require gems. Gadgets are special items that give your brawler an extra ability that can be activated once or twice per match. Star powers are passive abilities that enhance your brawler's super move or basic attack. Both gadgets and star powers can be unlocked from brawl boxes once your brawler reaches power level 7 and 9 respectively, but you can also buy them with gems if you don't want to wait.

-

To buy brawl boxes, brawl pass, and special offers

-

Another reason why you need gems in Brawl Stars is to buy brawl boxes, brawl pass, and special offers. Brawl boxes are loot boxes that contain various rewards, such as coins, power points, brawlers, gadgets, star powers, and tokens. Tokens are items that you can use to unlock the brawl pass tiers. You can get brawl boxes by playing the game or buying them with gems. There are different types of brawl boxes, such as normal, big, and mega boxes, that have different chances of dropping rare items.

-

Brawl pass is a seasonal feature that gives you access to exclusive rewards, such as brawlers, skins, coins, power points, gems, and more. There are two tracks of the brawl pass: the free track and the premium track. The free track is available for everyone and contains basic rewards. The premium track costs 169 gems and contains more valuable and exclusive rewards. You can unlock the rewards by collecting tokens or buying tiers with gems.

-

Special offers are limited-time deals that give you discounts or bonuses on certain items, such as brawl boxes, brawlers, skins, coins, power points, gems, etc. You can find them in the shop section of the game and buy them with gems. Some special offers are only available for certain players or events, so make sure to check them out regularly.

-

-

How to download Brawl Stars unlimited gems?

-

The official way: complete quests, watch ads, and participate in events

-

The official way to download Brawl Stars unlimited gems is to earn them by playing the game. There are several ways to do this:
- Complete quests: daily and seasonal quests reward you with progression items and, occasionally, gems.
- Watch ads: the game sometimes offers small free rewards in exchange for watching short advertisements.
- Participate in events: special events and challenges can reward you with gems and other resources.

These methods are safe and legal, but they require a lot of time and patience to accumulate enough gems for your needs. If you want a faster and easier way to download Brawl Stars unlimited gems, you might want to try the unofficial way.

-

The unofficial way: use a modded APK or a hack tool

-

The unofficial way to download Brawl Stars unlimited gems is to use a modded APK or a hack tool. These are third-party applications or websites that claim to give you unlimited gems or other resources in Brawl Stars by modifying the game files or injecting code into the game servers. However, these methods are not recommended for several reasons:

-

The pros and cons of using a modded APK

-

A modded APK is a modified version of the original Brawl Stars game that has been altered to give you unlimited gems or other features. Some of the pros of using a modded APK are:
- It claims to give you unlimited gems, coins, and other resources without paying or grinding.
- It can unlock brawlers, skins, and other content instantly.

Some of the cons of using a modded APK are:
- It violates the game's terms of service, so you risk losing your account or getting banned.
- Modded files from unknown sources may contain malware or viruses that can harm your device.
- It ruins the fun and challenge of the game for you and for other players.

The pros and cons of using a hack tool

-

A hack tool is a website or an application that claims to give you unlimited gems or other resources in Brawl Stars by hacking the game servers or generating fake codes. Some of the pros of using a hack tool are:
- It promises instant resources without downloading or modifying the game itself.
- It usually claims to work on any device through a web browser.

Some of the cons of using a hack tool are:
- Most of these tools simply do not work, because your gems are stored on the game's servers, not on your device.
- Many are scams that push endless surveys or try to steal your account details.
- Using one can still get your account suspended or banned.

Conclusion

-

Summary of the main points

-

In conclusion, Brawl Stars is a fun and addictive game that requires gems to unlock and upgrade your brawlers, skins, gadgets, and star powers, and to buy brawl boxes, brawl pass, and special offers. You can download Brawl Stars unlimited gems by using the official way or the unofficial way. The official way is to earn gems by completing quests, watching ads, and participating in events. The unofficial way is to use a modded APK or a hack tool that claim to give you unlimited gems or other resources by modifying the game files or hacking the game servers. However, both methods have their pros and cons, and you should be careful and responsible when using them.

-

Call to action and disclaimer

-

If you want to download Brawl Stars unlimited gems, you can try any of the methods that we have discussed in this article. However, we recommend that you use the official way as much as possible, as it is safer, legal, and fair. The unofficial way might seem tempting, but it is risky, illegal, and unfair. You might end up harming your device, losing your account, or getting banned from the game. Plus, you might ruin the fun and challenge of the game for yourself and others.

-

So, what are you waiting for? Download Brawl Stars now and enjoy the game with your friends. And remember, don't cheat, play fair!

-

-
-
\ No newline at end of file
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator The Ultimate Guide to Downloading and Setting Up the Latest Version.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator The Ultimate Guide to Downloading and Setting Up the Latest Version.md
deleted file mode 100644
index 508e4cb097707653ee7be74ade32fb23f1c8138c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator The Ultimate Guide to Downloading and Setting Up the Latest Version.md
+++ /dev/null
@@ -1,116 +0,0 @@
-
-

Introduction

-

If you are a fan of Nintendo games, you might have wondered if there is a way to play them on your PC with better graphics and performance than on the original consoles. Well, wonder no more, because there is a solution for you: Dolphin emulator.

-

Dolphin is a free, open-source emulator for the Nintendo GameCube and Wii consoles, running on Windows, Linux, macOS, Android, Xbox One, and Xbox Series X/S. It was first released in 2003 as a GameCube emulator, but later gained support for Wii emulation as well. Dolphin lets you play thousands of games from these two platforms in full HD (1080p) resolution with various enhancements, such as compatibility with all PC controllers, turbo speed, networked multiplayer, custom textures, achievements, and more.

-




-

In this article, I will show you how to download, install, and configure Dolphin on your PC, as well as answer some frequently asked questions about the emulator. By following this guide, you will be able to enjoy your favorite Nintendo games on your computer with ease.

-

System Requirements

-

Before you download and install Dolphin, you need to make sure that your PC meets the minimum or recommended system requirements for running the emulator. Here are the specifications that you need:

| Component | Minimum | Recommended |
| --- | --- | --- |
| Operating System | Windows 10 or higher (64-bit), Linux (64-bit), macOS Catalina 10.15 or higher (64-bit) | Same as minimum |
| CPU | x86-64 CPU with SSE2 support or AArch64 CPU | Intel Core i5-4670K or equivalent AMD Ryzen CPU or newer |
| Memory | 2 GB RAM or more | Same as minimum |
| Graphics Card | Pixel Shader 3.0 support and Direct3D 10 or OpenGL 3 support | Modern Direct3D 11.1, OpenGL 4.4, or Vulkan GPU |
| Input Device | Any PC input device (mouse and keyboard by default) | Nintendo GameCube controller with Smash Bros. Wii U USB adapter or Nintendo Wii Remote via DolphinBar |
-

Note that these are general guidelines and some games may require more powerful hardware or specific settings to run smoothly. You can check the compatibility list on the official website to see how well each game works on Dolphin.

-

Downloading Dolphin

-

The first step to use Dolphin is to download it from the official website. There are two types of versions that you can download: beta and development. The beta version is more stable and tested, but it may not have the latest features and improvements. The development version is updated more frequently and has the newest additions, but it may also have more bugs and issues. You can choose the version that suits your preference and needs.

-

To download Dolphin, go to https://dolphin-emu.org/download/ and select the version that you want. You will see a list of download links for different operating systems. Click on the link that matches your OS and wait for the download to finish. The file size is about 10 MB.

-

Installing Dolphin

-

Once you have downloaded Dolphin, you need to install it on your PC. The installation process is very simple and straightforward. Here are the steps that you need to follow:

-
1. Locate the downloaded file on your PC. It should be a ZIP file with a name like dolphin-x64-5.0-xxxxx.zip, where x64 indicates the 64-bit version and xxxxx indicates the build number.
2. Extract the ZIP file to a folder of your choice. You can use any file extraction software such as WinRAR or 7-Zip to do this, or the short Python sketch shown after this list.
3. Open the extracted folder and double-click on the Dolphin.exe file to run the emulator. You don't need to install anything else or modify any registry settings.
4. You will see the Dolphin main window with a list of games that you can play. If you don't have any games yet, you can skip to the next section to learn how to load them.
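If you prefer to script the extraction step instead of using WinRAR or 7-Zip, a minimal Python sketch like the one below does the same job with the standard library. The archive name and destination folder are placeholder assumptions, not paths published by the Dolphin project; substitute the build you actually downloaded.

```python
import zipfile
from pathlib import Path

# Placeholder paths -- adjust to the build you downloaded and where you want it.
archive = Path.home() / "Downloads" / "dolphin-x64-5.0-xxxxx.zip"
target = Path.home() / "Dolphin"

target.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)  # equivalent to extracting with WinRAR or 7-Zip

print(f"Extracted to {target}. Run Dolphin.exe from that folder.")
```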
-

Congratulations, you have successfully installed Dolphin on your PC!

-


-

Configuring Dolphin

-

General Settings

-

Before you start playing games on Dolphin, you may want to adjust some of the general settings of the emulator to suit your preferences and needs. To access the general settings, click on the Config button on the main toolbar or press Ctrl+S on your keyboard.

-

You will see a window with several tabs that contain different options for configuring Dolphin. Here are some of the most important ones that you should know:
- General: core emulation options such as Dual Core mode, the speed limit, and automatic updates.
- Interface: the language, theme, and on-screen messages of the user interface.
- Audio: the audio backend, volume, and latency settings.
- GameCube and Wii: console-specific options such as system language, memory cards, and Wii Remote behavior.
- Paths: the folders that Dolphin scans for your game files.
- Advanced: expert options such as the CPU clock override, best left at their defaults.

You can experiment with these settings and see how they affect your emulation experience. If you are not sure what they do, you can always leave them at their default values or consult the Dolphin wiki for more information.

-

Graphics Settings

-

One of the main advantages of using Dolphin is that it can enhance the graphics of the original games by increasing the resolution, adding anti-aliasing, enabling anisotropic filtering, improving textures, and more. However, these enhancements also require more processing power from your PC, so you need to balance them with your hardware capabilities and performance expectations.

-

To access the graphics settings, click on the Graphics button on the main toolbar or press Ctrl+G on your keyboard. You will see a window with several tabs that contain different options for optimizing the graphics of the emulator. Here are some of the most important ones that you should know:
- Backend: the graphics API Dolphin uses (Direct3D, OpenGL, or Vulkan); try each to see which performs best on your GPU.
- Internal Resolution: renders games above their native resolution, up to 1080p and beyond, at the cost of GPU power.
- Anti-Aliasing and Anisotropic Filtering: smooth jagged edges and sharpen textures viewed at an angle.
- Load Custom Textures: replaces the original game textures with higher-quality ones that you provide.
- Hacks: speed-oriented shortcuts that trade accuracy for performance.

These are just some of the features that Dolphin offers to enhance your gameplay experience. You can explore more of them by browsing the menus and options of the emulator, or by visiting the Dolphin wiki for more information.

-

Dolphin Features and Benefits

-

As you can see, Dolphin is not just a simple emulator that lets you play GameCube and Wii games on your PC. It is also a powerful and versatile piece of software that offers many features and benefits that you cannot get from the original consoles. Here are some of the reasons why you should use Dolphin:
- Enhanced graphics: play your games in full HD (1080p) or higher, far beyond the original consoles' output.
- Flexible input: use GameCube controllers, Wii Remotes, or any PC gamepad, keyboard, and mouse.
- Turbo speed: fast-forward through slow sections of a game.
- Networked multiplayer: play with friends online.
- Custom textures and other enhancements that are impossible on the original hardware.
- It is free and open-source, so anyone can contribute to its development.

Dolphin is truly an amazing emulator that lets you enjoy your favorite Nintendo games on your PC with enhanced graphics, performance, and features. It is also free and open-source, which means that anyone can contribute to its development and improvement.

-

Dolphin Compatibility and Performance

-

While Dolphin is a great emulator that can play most GameCube and Wii games flawlessly, it is not perfect. Some games may have compatibility or performance issues that prevent them from running smoothly or at all on Dolphin. These issues may be caused by various factors, such as hardware limitations, software bugs, emulation inaccuracies, game protections, etc.

-

If you encounter any problems with your games on Dolphin, here are some steps that you can take to try to solve them:

-
1. Check the compatibility list: The first thing that you should do is check the compatibility list on the official website to see how well your game works on Dolphin. The list shows the rating, status, and notes for each game based on user reports and tests. You can also search for your game on the Dolphin wiki or Dolphin forums to find more information and solutions.
2. Update your Dolphin version: The next thing that you should do is update your Dolphin version to the latest one available. The developers are constantly working on fixing bugs and improving compatibility and performance for various games. You can download the latest version from the official website or enable automatic updates from the Config > General tab.
3. Adjust your settings: The last thing that you should do is adjust your settings to optimize your emulation experience. You can try changing some of the settings that may affect your game, such as graphics, audio, controller, and hacks. You can also use the game properties window to enable or disable specific settings for each game. To access the game properties window, right-click on the game in the game list and select Properties. You will see a window with several tabs that contain various options and information for your game. You can also consult the Dolphin wiki or Dolphin forums to find the best settings for your game.
-

By following these steps, you may be able to solve or reduce the compatibility or performance issues that you encounter with your games on Dolphin. However, keep in mind that some games may still have unsolved problems that require further development and improvement from the Dolphin team. You can always report any bugs or issues that you find on the Dolphin issue tracker or Dolphin forums to help the developers fix them.

-

Conclusion

-

In this article, I have shown you how to download, install, and configure Dolphin emulator on your PC, as well as how to play games on it with enhanced graphics, performance, and features. I have also answered some frequently asked questions about the emulator and provided some tips for solving compatibility and performance issues.

-

Dolphin is a fantastic emulator that lets you enjoy your favorite Nintendo games on your PC with ease. It is also free and open-source, which means that anyone can contribute to its development and improvement. If you are a fan of GameCube and Wii games, you should definitely give Dolphin a try and see for yourself how amazing it is.

-

I hope you found this article helpful and informative. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy gaming!

-

FAQs

-

Here are some of the most common questions that people ask about Dolphin:

-
1. Is Dolphin legal?

Dolphin is legal as long as you use it with your own legally obtained games. You can either use physical discs or ISO files that you created from your own discs. However, downloading or sharing ISO files from the internet is illegal and may result in legal consequences.

2. Is Dolphin safe?

Dolphin is safe as long as you download it from the official website or a trusted source. You should avoid downloading Dolphin from unknown or suspicious websites, as they may contain malware or viruses that can harm your PC.

3. How do I update Dolphin?

You can update Dolphin by downloading the latest version from the official website or by enabling automatic updates from the Config > General tab. You can also check for updates manually by clicking on the Help > Check for Updates menu.

4. How do I uninstall Dolphin?

You can uninstall Dolphin by deleting the folder where you extracted it. You don't need to uninstall anything else or modify any registry settings. However, if you want to remove all traces of Dolphin from your PC, you may also want to delete the User folder in the Documents\Dolphin Emulator directory, which contains your configuration files, save files, screenshots, etc.

5. Where can I get more help with Dolphin?

You can get more help with Dolphin by visiting the Dolphin wiki, Dolphin forums, Dolphin issue tracker, or Dolphin Discord server. You can also contact the developers and other users through these channels and get support, feedback, suggestions, etc.


-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download 5 MB File without Losing Quality.md b/spaces/1phancelerku/anime-remove-background/Download 5 MB File without Losing Quality.md
deleted file mode 100644
index daddb02a853990b595c8fbd3adfbb9ff61419d75..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download 5 MB File without Losing Quality.md
+++ /dev/null
@@ -1,212 +0,0 @@
-

How to Download a 5 MB File Quickly and Easily

-

Have you ever encountered a situation where you need to download a 5 MB file from the internet, but you are not sure how to do it fast and efficiently? Or maybe you have a larger file that you want to compress or split into smaller chunks of 5 MB or less, so that you can send or receive it via email? If so, then this article is for you.

-

In this article, we will explain what a 5 MB file is, why you might need to download it, how to download it from the internet, how to compress or split a larger file into 5 MB or less, and how to send or receive a 5 MB file via email. By the end of this article, you will be able to download any 5 MB file quickly and easily.

-




-

What is a 5 MB File?

-

A 5 MB file is a file that has a size of 5 megabytes (MB) or less. A megabyte is a unit of data that measures how much information a file contains. One megabyte is equal to 1,024 kilobytes (KB) or about one million bytes. A byte is the smallest unit of data that can be stored on a computer.
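To make the unit concrete, here is a tiny Python sketch that reports a file's size in megabytes and checks it against the 5 MB limit; the file name is a hypothetical placeholder:

```python
import os

def size_mb(path: str) -> float:
    """Return a file's size in megabytes (1 MB = 1,024 * 1,024 bytes)."""
    return os.path.getsize(path) / (1024 * 1024)

# "example.mp3" is a hypothetical file name used only for illustration.
print(f"{size_mb('example.mp3'):.2f} MB")
print(size_mb("example.mp3") <= 5)  # True if the file fits in 5 MB
```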

-

How Big is 5 MB in Terms of Data?

-

To give you an idea of how big 5 MB is in terms of data, here are some examples of how much information can be stored in 5 MB:

- -

What Kind of Files Can Be 5 MB or Less?

-

There are many kinds of files that can be 5 MB or less, depending on their format, quality, and compression. Some common types of files that are usually 5 MB or less are:

- -

Why Do You Need to Download a 5 MB File?

-

You might need to download a 5 MB file for various reasons, depending on your personal or professional needs. Some common reasons why you might need to download a 5 MB file are:

-

For Personal Use

- -

For Professional Use

- -

How to Download a 5 MB File from the Internet?

-

Downloading a 5 MB file from the internet is usually a simple and quick process, but there are some factors that can affect the speed and quality of your download. Here are some tips on how to download a 5 MB file from the internet:

-

Choose a Reliable Source

-

The first step to download a 5 MB file from the internet is to choose a reliable source that offers the file you want. A reliable source is one that has a good reputation, provides accurate and updated information, and does not contain any malware or viruses. You can use a search engine like Bing to find the best source for your file, or use a trusted website that specializes in the type of file you are looking for. For example, for audio you can use a website like SoundCloud or Spotify; for video, YouTube or Vimeo; for images, Flickr or Unsplash; for documents, Google Docs or Dropbox; and for archives, WinZip or 7-Zip.

-


-

Check Your Internet Speed and Connection

-

The second step to download a 5 MB file from the internet is to check your internet speed and connection. Your internet speed and connection can affect how fast and smooth your download will be. You can use an online tool like Speedtest to measure your internet speed and connection. Ideally, you should have an internet speed of at least 10 Mbps (megabits per second) and a stable connection with no interruptions or errors. If your internet speed is too slow or your connection is too unstable, you might experience longer download times, incomplete downloads, or corrupted files. To improve your internet speed and connection, you can try some of these solutions:

- -

Use a Download Manager or Browser Extension

-

The third step to download a 5 MB file from the internet is to use a download manager or browser extension. This is a program that helps you manage and optimize your downloads. Some of its benefits are:

- -

Some examples of popular and free download managers and browser extensions are:

- -

How to Compress or Split a Larger File into 5 MB or Less?

Sometimes you might have a file larger than 5 MB that exceeds the size limit of your email provider or your storage device. In that case, you can compress or split it into smaller chunks of 5 MB or less and then send, store, or download them separately. Here are some ways to do that:

-

Use an Online or Offline Video Compressor

-

If you have a large video file, you can use an online or offline video compressor to reduce its size (usually at some cost in quality). A video compressor is a program that can change the format, resolution, bitrate, frame rate, and other parameters of a video file to make it smaller and more compatible. Some examples of online and offline video compressors are:

- -

To use a video compressor, you need to upload or import your video file, choose the output format and quality, adjust the settings if needed, and start the compression process. Depending on the size and complexity of your video file, the compression process might take some time. Once the compression is done, you can download the compressed video file, which should be 5 MB or less.
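If you would rather script this step than use a web tool, one common approach is to call FFmpeg from Python. This is only a sketch, not an endorsement of particular settings: it assumes FFmpeg is installed and on your PATH, the file names are placeholders, and the CRF value trades quality against size:

```python
import subprocess

# Minimal sketch: shrink a video by re-encoding it with H.264.
# Assumes ffmpeg is installed; input.mp4 and output.mp4 are placeholders.
subprocess.run(
    [
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "libx264",             # re-encode the video stream
        "-crf", "28",                  # higher CRF = smaller file, lower quality
        "-c:a", "aac", "-b:a", "96k",  # compress the audio track as well
        "output.mp4",
    ],
    check=True,  # raise if ffmpeg exits with an error
)
```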

-

Use a File Splitter or Zipper

-

If you have a large file that is not a video, such as an audio, image, document, or archive file, you can use a file splitter or zipper to divide it into smaller parts of 5 MB or less. A file splitter or zipper is a program that can cut a file into multiple segments (and merge them back together), or pack it into one or more compressed archives. Some examples of file splitters and zippers are:

- -

To use a file splitter or zipper, you need to open or select your file, choose the output size and format, and start the splitting or zipping process. Depending on the size and type of your file, the splitting or zipping process might take some time. Once the splitting or zipping is done, you can download the split or zipped files, which should be 5 MB or less each.
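If you prefer to do the splitting yourself, the core idea is small enough to fit in a few lines of Python. The sketch below cuts any file into numbered parts of at most 5 MB each (the input name is a placeholder); joining the parts back is the same loop in reverse:

```python
CHUNK = 5 * 1024 * 1024  # 5 MB per part

def split_file(path: str) -> None:
    """Write path.part0, path.part1, ... each at most 5 MB."""
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(CHUNK)
            if not chunk:  # end of file reached
                break
            with open(f"{path}.part{index}", "wb") as dst:
                dst.write(chunk)
            index += 1

split_file("bigfile.zip")  # "bigfile.zip" is a placeholder input
```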

-

How to Send or Receive a 5 MB File via Email?

-

One of the most common ways to share a 5 MB file with someone else is to send or receive it via email. However, not all email providers have the same file size limit for attachments. Some email providers might allow you to send or receive files up to 25 MB, while others might only allow up to 10 MB or even less. Here are some tips on how to send or receive a 5 MB file via email:

-

Check the File Size Limit of Your Email Provider

-

The first step to send or receive a 5 MB file via email is to check the file size limit of your email provider. You can find this information on your email provider's website or help center, or by contacting their customer support. Here are some examples of the file size limit for some popular email providers:

- - - - - - - - - - - - -
| Email Provider | File Size Limit |
| --- | --- |
| Gmail | 25 MB |
| Yahoo Mail | 25 MB |
| Outlook.com | 20 MB |
| AOL Mail | 25 MB |
| Zoho Mail | 20 MB |
| iCloud Mail | 20 MB |
| ProtonMail | 25 MB |
| Tutanota | 25 MB |
| Mozilla Thunderbird | No limit (depends on server) |
| Eudora | No limit (depends on server) |
-

If your 5 MB file is within your provider's limit, you can attach it directly to your email message and send it normally. If it exceeds the limit, you can use one of the following methods:

-

Attach the File Directly or Use a Cloud Service

-

The second step to send or receive a 5 MB file via email is to attach the file directly or use a cloud service. A cloud service lets you store your files online and share them through links. Some examples of cloud services are:

- -

To use a cloud service, you need to create an account and upload your file to the cloud. Then, you can generate a link or an invitation to your file and paste it in your email message. The recipient of your email can then click on the link or accept the invitation and download the file from the cloud.
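If you want to automate the attach-directly method, Python's standard library can send a small file as an email attachment. This is only a sketch: the SMTP host, addresses, and credentials are placeholders that you would replace with your own provider's values:

```python
import os
import smtplib
from email.message import EmailMessage

path = "report.pdf"  # placeholder: any file of 5 MB or less

msg = EmailMessage()
msg["Subject"] = "Here is the file"
msg["From"] = "me@example.com"       # placeholder sender
msg["To"] = "friend@example.com"     # placeholder recipient
msg.set_content("The file is attached.")

# Attach the file as a generic binary payload.
with open(path, "rb") as f:
    msg.add_attachment(
        f.read(),
        maintype="application",
        subtype="octet-stream",
        filename=os.path.basename(path),
    )

# Placeholder SMTP server; your provider documents its real host and port.
with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
    server.login("me@example.com", "app-password")
    server.send_message(msg)
```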

-

Conclusion

-

Downloading a 5 MB file from the internet is not a difficult task, but it requires some knowledge and skills to do it efficiently and safely. In this article, we have covered the following topics:

- -

We hope that this article has helped you learn how to download any 5 MB file quickly and easily. If you have any questions or feedback, please feel free to leave a comment below.

-

FAQs

-

Here are some frequently asked questions about downloading a 5 MB file:

-

Q: How long does it take to download a 5 MB file?

-

A: The time it takes to download a 5 MB file depends on your internet speed and connection. For example, if you have an internet speed of 10 Mbps, it will take about 4 seconds to download a 5 MB file. If you have an internet speed of 1 Mbps, it will take about 40 seconds to download a 5 MB file. You can use an online calculator like Download Time Calculator to estimate how long it will take to download any file based on your internet speed.
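The arithmetic behind those numbers is simple: convert megabytes to megabits (multiply by 8), then divide by the connection speed. A minimal sketch:

```python
def download_seconds(size_mb: float, speed_mbps: float) -> float:
    """Seconds to download size_mb megabytes at speed_mbps megabits per second."""
    return (size_mb * 8) / speed_mbps

print(download_seconds(5, 10))  # 4.0 seconds at 10 Mbps
print(download_seconds(5, 1))   # 40.0 seconds at 1 Mbps
```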

-

Q: How can I check the size of a file before downloading it?

-

A: You can check the size of a file before downloading it by looking at the information provided by the source website or by using a browser extension like File Size Info. The information might include the name, type, format, quality, and size of the file. You can also right-click on the download link and select "Save link as" or "Save target as" to see the size of the file before saving it.
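If the page does not show a size, many servers will report one in response to an HTTP HEAD request via the Content-Length header. A small sketch (the URL is a placeholder, and not every server sends Content-Length):

```python
import urllib.request

url = "https://example.com/file.mp3"  # placeholder URL
request = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(request) as response:
    size = int(response.headers.get("Content-Length", 0))

print(f"{size / (1024 * 1024):.2f} MB")  # 0.00 MB if the server omits the header
```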

-

Q: How can I increase my download speed?

-

A: You can increase your download speed by following some of these tips:

- -

Q: How can I reduce the size of a file without losing quality?

-

A: You can reduce the size of a file with little or no visible loss in quality by using an online or offline video compressor with conservative settings, or by using a file splitter or zipper, which splits or packs your file without re-encoding it at all. You can also use advanced options like cropping, trimming, scaling, rotating, filtering, and different encoders to optimize your file for a smaller size at acceptable quality.

-

Q: How can I share a large file with someone else?

-

A: You can share a large file with someone else by using one of these methods:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Garten Of Banban 3 Fanmade Characters The Ultimate Guide.md b/spaces/1phancelerku/anime-remove-background/Download Garten Of Banban 3 Fanmade Characters The Ultimate Guide.md deleted file mode 100644 index eea47aa1afb7d1b45e055ed755bab8734441239b..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Garten Of Banban 3 Fanmade Characters The Ultimate Guide.md +++ /dev/null @@ -1,143 +0,0 @@ -
-

Garten of Banban 3 Fanmade Download: Everything You Need to Know

-

If you are a fan of horror games, you might have heard of Garten of Banban 3, a terrifying game that will keep you on the edge of your seat. But did you know that you can also download and install fanmade characters and models for this game? In this article, we will tell you everything you need to know about Garten of Banban 3 fanmade download, including what the game is about, what are fanmade creations, how to download and install them, and some tips and tricks for using them. Let's get started!

-




-

What is Garten of Banban 3?

-

Garten of Banban 3 is a horror game developed by The Euphoric Brothers, a team of indie game developers from South Korea. It is the third installment in the Garten of Banban series, which started in 2019. The game is available for Windows, Mac, Linux, Android, and iOS devices.

-

A brief introduction to the game and its features

-

The game is set in a haunted amusement park called Garten Park, where you play as a character named Jim, who is looking for his missing sister. Along the way, you will encounter various monsters and traps that will try to kill you. You will have to use your flashlight, your phone, and your wits to survive and escape.

-

The game features stunning graphics, realistic sound effects, immersive atmosphere, and multiple endings. The game also has a multiplayer mode, where you can team up with other players online or play against them as a monster. The game also supports VR devices, such as Oculus Rift and HTC Vive, for a more immersive experience.

-


-

The story and the characters of Garten of Banban 3

-

The story of Garten of Banban 3 follows Jim, who receives a mysterious phone call from his sister, who tells him that she is trapped in Garten Park. Jim decides to go there to rescue her, but soon realizes that he is not alone. There are other people who are also trapped in the park, as well as horrifying creatures that lurk in the shadows.

-

The game has several characters that you can interact with or play as. Some of them are:

- -

The gameplay and the mechanics of Garten of Banban 3

-

The gameplay of Garten of Banban 3 is similar to other horror games, such as Outlast or Five Nights at Freddy's. You will have to explore the park, find clues, solve puzzles, hide from the monsters, and escape. You will have a flashlight that you can use to illuminate your surroundings, but be careful, as it can also attract unwanted attention. You will also have a phone that you can use to call Lisa or Leo, or access the map and the inventory. The phone has a limited battery life, so you will have to find chargers or batteries to keep it running.

-

The game has different difficulty levels, ranging from easy to nightmare. The higher the difficulty, the more aggressive and intelligent the monsters will be, and the less resources you will have. The game also has a permadeath mode, where you will have to start over if you die.

-

What are fanmade characters and models?

-

Fanmade characters and models are creations made by fans of the game, using various tools and software. They are not official or endorsed by the developers of the game, but they are made for fun and entertainment purposes. They can be based on existing characters or models from the game, or they can be original or inspired by other sources.

-

The definition and the purpose of fanmade creations

-

Fanmade creations are a form of fan art, which is a term used to describe any artistic expression that is influenced by a work of fiction, such as a game, a movie, a book, or a show. Fan art can include drawings, paintings, sculptures, animations, comics, videos, music, cosplay, and more. Fanmade creations are a specific type of fan art that involves creating new characters or models for a game, using various tools and software.

-

The purpose of fanmade creations is to express one's creativity and passion for a game, and to share it with other fans. Fanmade creations can also add more variety and diversity to a game, and enhance its replay value. Some fanmade creations can even improve or fix some aspects of the game, such as graphics, performance, bugs, or glitches.

-

The benefits and the challenges of fanmade creations

-

Fanmade creations have many benefits for both the creators and the players. Some of them are:

- -

However, fanmade creations also have some challenges and drawbacks. Some of them are:

- -

Some examples of fanmade characters and models for Garten of Banban 3

-

There are many fanmade characters and models for Garten of Banban 3 that you can find online. Some of them are:

- -

How to download and install fanmade characters and models for Garten of Banban 3?

-

If you want to download and install fanmade characters and models for Garten of Banban 3, you will need to follow some steps and precautions. Here are some tips on how to do it:

-

The sources and the requirements for downloading fanmade creations

-

The first thing you need to do is to find a reliable and safe source for downloading fanmade creations. There are many websites and forums where fans share their creations, such as Nexus Mods, Mod DB, Steam Workshop, or Reddit. However, not all of them are trustworthy or secure, so you need to be careful and check the reviews, ratings, comments, and feedback from other users before downloading anything.

-

The second thing you need to do is to check the requirements for downloading fanmade creations. Some of them may require you to have a certain version of the game, a certain operating system, a certain software or tool, or a certain amount of space or memory. You also need to make sure that your device can handle the fanmade creations without affecting the performance or the quality of the game.

-

The steps and the precautions for installing fanmade creations

-

The third thing you need to do is to follow the steps and the precautions for installing fanmade creations. The steps may vary depending on the type and the source of the fanmade creation, but generally they involve:

-
    -
  1. Downloading the fanmade creation file from the source.
  2. Extracting the file using a program such as WinRAR or 7-Zip.
  3. Copying or moving the file to the game folder or directory (a scripted sketch of steps 2 and 3 follows this list).
  4. Launching the game and enabling or activating the fanmade creation.
-
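As a rough illustration of steps 2 and 3, here is a Python sketch that unpacks a downloaded .zip and copies its contents into a game folder. All paths are hypothetical placeholders, and real fanmade packs may use other archive formats or target folders, so always follow the creator's own instructions:

```python
import shutil
import zipfile
from pathlib import Path

archive = Path("fanmade_character.zip")      # placeholder download
game_dir = Path("C:/Games/GartenOfBanban3")  # placeholder install folder

# Step 2: extract the archive into a folder next to the download.
extracted = archive.with_suffix("")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(extracted)

# Step 3: copy the extracted files into the game directory.
for item in extracted.iterdir():
    target = game_dir / item.name
    if item.is_dir():
        shutil.copytree(item, target, dirs_exist_ok=True)
    else:
        shutil.copy2(item, target)
```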

The precautions you need to take are:

- -

The tips and the tricks for using fanmade creations in Garten of Banban 3

-

The fourth thing you need to do is to enjoy using fanmade creations in Garten of Banban 3. Here are some tips and tricks on how to do it:

- -

Conclusion

-

A summary of the main points and a call to action

-

Garten of Banban 3 is a horror game that will scare you and thrill you with its amazing graphics, sound effects, atmosphere, and gameplay. But if you want to spice up your game even more, you can download and install fanmade characters and models for this game. Fanmade creations are a form of fan art that allows fans to express their creativity and passion for the game. They can also add more variety and diversity to the game, and enhance its replay value. However, you need to be careful and follow some steps and precautions when downloading and installing fanmade creations. You also need to enjoy using them and share them with other fans. If you are ready to try Garten of Banban 3 fanmade download, go ahead and have fun!

-

FAQs

-

Here are some frequently asked questions about Garten of Banban 3 fanmade download:

-
  1. Where can I find more information about Garten of Banban 3?

    You can find more information about Garten of Banban 3 on its official website, its social media pages, its YouTube channel, or its wiki page.

  2. Is Garten of Banban 3 free to play?

    No, Garten of Banban 3 is not free to play. You need to purchase it from its official website or from other platforms such as Steam or Google Play.

  3. Is Garten of Banban 3 suitable for children?

    No, Garten of Banban 3 is not suitable for children. It contains graphic violence, gore, blood, jump scares, disturbing images, and mature themes. It is rated M for Mature by ESRB.

  4. Can I play Garten of Banban 3 with my friends?

    Yes, you can play Garten of Banban 3 with your friends. The game has a multiplayer mode, where you can team up with other players online or play against them as a monster. You can also chat with them using voice or text.

  5. How can I contact the developers of Garten of Banban 3?

    You can contact the developers of Garten of Banban 3 by sending them an email at the.euphoric.brothers@gmail.com, or by following them on their social media pages, such as Facebook, Twitter, or Instagram.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download X-War Clash of Zombies Mod Apk and Get Unlimited Money and Crystals.md b/spaces/1phancelerku/anime-remove-background/Download X-War Clash of Zombies Mod Apk and Get Unlimited Money and Crystals.md deleted file mode 100644 index bb3ab36080b4439920f4a4105046364628c0bceb..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download X-War Clash of Zombies Mod Apk and Get Unlimited Money and Crystals.md +++ /dev/null @@ -1,101 +0,0 @@ - -

X-War: Clash of Zombies Mod Apk (Unlimited Crystals) - A Review

-

If you are a fan of strategy games with zombies and superheroes, you might want to check out X-War: Clash of Zombies. This is a game where you have to build your base, train your army, and fight against hordes of undead and other players. But what if you want to enjoy the game without spending money or waiting for hours? That's where X-War: Clash of Zombies Mod Apk comes in. In this article, we will review this modded version of the game and tell you how to download and install it on your device.

-




-

What is X-War: Clash of Zombies?

-

A strategy game with zombies and superheroes

-

X-War: Clash of Zombies is a strategy game developed by Caesars Studio. It was released in 2015 for Android and iOS devices. The game combines elements of base-building, resource management, hero collection, and combat. You have to create your own city, gather resources, recruit heroes, and defend it from zombies and other enemies. You can also attack other players' bases and loot their resources.

-

A post-apocalyptic world with different factions

-

The game is set in a post-apocalyptic world where a virus has turned most people into zombies. There are different factions fighting for survival and dominance. You can choose to join one of them or create your own. Each faction has its own heroes, buildings, and units. Some of the factions are:

- -

A multiplayer mode with online battles and alliances

-

The game also has a multiplayer mode where you can compete with other players from around the world. You can join or create an alliance with other players and cooperate with them in wars, raids, missions, and events. You can also chat with them and exchange gifts. You can also challenge other players in PvP battles and climb the rankings. You can earn rewards such as crystals, gold, energy, medals, and more.

-


-

What is X-War: Clash of Zombies Mod Apk?

-

A modified version of the original game

-

X-War: Clash of Zombies Mod Apk is a modified version of the original game that has been hacked by some developers. It allows you to access features that are not available in the official version. For example, you can get unlimited crystals, which are the premium currency of the game. You can use them to buy heroes, buildings, and items, speed up processes, and more. You can also get unlimited gold, food, energy, medals, and other resources that you need to upgrade your base and army. You can also unlock all the heroes and buildings that are otherwise locked or require real money to purchase. You can have access to all the factions and their features. You can also enjoy unlimited energy, which is needed to play the game and participate in battles.

-

A way to get unlimited crystals and other resources

-

One of the main reasons why people use X-War: Clash of Zombies Mod Apk is to get unlimited crystals and other resources. Crystals are very hard to earn in the game, and they are used for almost everything. You can buy them with real money, but that can be very expensive and not everyone can afford it. With the mod apk, you can get as many crystals as you want for free. You can also get unlimited gold, food, energy, medals, and other resources that are essential for your progress. You can use them to build your base, train your army, buy items, speed up processes, and more. You don't have to worry about running out of resources or waiting for hours to get them.

-

A way to unlock all the heroes and buildings

-

Another reason why people use X-War: Clash of Zombies Mod Apk is to unlock all the heroes and buildings that are available in the game. Heroes are the most powerful units in the game, and they have unique skills and abilities. There are hundreds of heroes to collect and upgrade, but some of them are locked or require real money to purchase. With the mod apk, you can unlock all the heroes for free and use them in your battles. You can also unlock all the buildings that are needed to improve your base and army. Some of them are locked or require a certain level or faction to build. With the mod apk, you can build them without any restrictions and enjoy their benefits.

-

How to download and install X-War: Clash of Zombies Mod Apk?

-

The steps to follow

-

If you want to download and install X-War: Clash of Zombies Mod Apk on your device, you have to follow these steps:

-
    -
  1. First, you have to uninstall the original version of the game if you have it installed on your device.
  2. Second, you have to enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and turning it on.
  3. Third, you have to download the X-War: Clash of Zombies Mod Apk file from a reliable source. You can find many websites that offer it for free, but be careful of viruses and malware. You can use this link as an example: .
  4. Fourth, you have to locate the downloaded file on your device and tap on it to start the installation process.
  5. Fifth, you have to follow the instructions on the screen and wait for the installation to finish.
  6. Sixth, you have to launch the game and enjoy the modded features.
-

The requirements and precautions

-

Before you download and install X-War: Clash of Zombies Mod Apk on your device, you have to make sure that you meet these requirements and take these precautions:

- -

The benefits and risks

-

Using X-War: Clash of Zombies Mod Apk has its benefits and risks. Here are some of them:

| Benefits | Risks |
| --- | --- |
| You can get unlimited crystals and other resources for free. | You can get banned from the official servers if detected by the developers. |
| You can unlock all the heroes and buildings without spending money or waiting for hours. | You can lose your progress if the mod apk is not compatible with the latest version of the game. |
| You can enjoy all the features of all the factions without any limitations. | You can damage your device if the mod apk contains viruses or malware. |
| You can have more fun and excitement playing the game with enhanced abilities and options. | You can ruin the balance and challenge of the game by making it too easy or unfair. |

Conclusion

-

A summary of the main points

-

In this article, we have reviewed X-War: Clash of Zombies Mod Apk, a modified version of the original strategy game with zombies and superheroes. We have explained what the game is about, what the mod apk offers, how to download and install it, and what are the benefits and risks of using it. We have also provided an outline of the article and a table to compare the benefits and risks.

-

A recommendation for the readers

-

If you are looking for a fun and exciting game that combines base-building, resource management, hero collection, and combat, you might want to try X-War: Clash of Zombies. If you want to enjoy the game without spending money or waiting for hours, you might want to try X-War: Clash of Zombies Mod Apk. However, you should be aware of the risks involved and use it at your own risk. We hope you found this article helpful and informative. Thank you for reading.

-

FAQs

-

Q: Is X-War: Clash of Zombies Mod Apk safe to use?

-

A: X-War: Clash of Zombies Mod Apk is not an official version of the game and it has been hacked by some developers. Therefore, it is not guaranteed to be safe or secure. It may contain viruses or malware that can harm your device or steal your data. It may also cause errors or crashes that can affect your gameplay or progress. You should use it at your own risk and discretion.

-

Q: How can I avoid getting banned from the official servers when using X-War: Clash of Zombies Mod Apk?

-

A: There is no sure way to avoid getting banned from the official servers when using X-War: Clash of Zombies Mod Apk. The developers may detect your modded version and ban your account or device. However, some tips that may help you reduce the chances of getting banned are:

- -

Q: How can I update X-War: Clash of Zombies Mod Apk?

-

A: You should not update X-War: Clash of Zombies Mod Apk unless you are sure it is safe and compatible with the latest version of the game. Updating the game or the mod apk may cause errors or crashes that can affect your gameplay or progress. It may also make your modded version detectable by the developers and get you banned from the official servers. You should always check the source of the mod apk before downloading and installing it.

-

Q: Can I play X-War: Clash of Zombies Mod Apk offline?

-

A: No, you cannot play X-War: Clash of Zombies Mod Apk offline. The game requires a stable internet connection to play online and access all the features. You need to connect to the official servers or other players' bases to participate in battles, events, missions, and more. You also need to sync your data with the cloud to save your progress and settings.

-

Q: Can I play X-War: Clash of Zombies Mod Apk with my friends?

-

A: Yes, you can play X-War: Clash of Zombies Mod Apk with your friends if they also have the same modded version as you. You can join or create an alliance with them and cooperate with them in wars, raids, missions, and events. You can also chat with them and exchange gifts. However, you should be careful not to expose your modded version to other players who may report you or get you banned from the official servers.

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/__init__.py deleted file mode 100644 index 4cba94ff838813eaab5ba8ba0de2a592beb8df1a..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/__init__.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# flake8: noqa - -from ..utils import ( - OptionalDependencyNotAvailable, - is_fastdeploy_available, - is_k_diffusion_available, - is_librosa_available, - is_paddle_available, - is_paddlenlp_available, -) - -try: - if not is_paddle_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_paddle_objects import * # noqa F403 -else: - from .dance_diffusion import DanceDiffusionPipeline - from .ddim import DDIMPipeline - from .ddpm import DDPMPipeline - from .latent_diffusion import LDMSuperResolutionPipeline - from .latent_diffusion_uncond import LDMPipeline - from .pndm import PNDMPipeline - from .repaint import RePaintPipeline - from .score_sde_ve import ScoreSdeVePipeline - from .stochastic_karras_ve import KarrasVePipeline - - -try: - if not (is_paddle_available() and is_librosa_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_paddle_and_librosa_objects import * # noqa F403 -else: - from .audio_diffusion import AudioDiffusionPipeline, Mel - -try: - if not (is_paddle_available() and is_paddlenlp_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_paddle_and_paddlenlp_objects import * # noqa F403 -else: - from .alt_diffusion import ( - AltDiffusionImg2ImgPipeline, - AltDiffusionPipeline, - RobertaSeriesModelWithTransformation, - ) - from .latent_diffusion import ( - LDMBertModel, - LDMSuperResolutionPipeline, - LDMTextToImagePipeline, - ) - from .paint_by_example import PaintByExamplePipeline - from .stable_diffusion import ( - CycleDiffusionPipeline, - StableDiffusionDepth2ImgPipeline, - StableDiffusionImageVariationPipeline, - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionInpaintPipelineLegacy, - StableDiffusionMegaPipeline, - StableDiffusionPipeline, - StableDiffusionPipelineAllinOne, - StableDiffusionUpscalePipeline, - ) - from .stable_diffusion_safe import StableDiffusionPipelineSafe - from .unclip import UnCLIPPipeline - from .versatile_diffusion import ( - VersatileDiffusionDualGuidedPipeline, - VersatileDiffusionImageVariationPipeline, - VersatileDiffusionPipeline, - VersatileDiffusionTextToImagePipeline, - ) - from .vq_diffusion import VQDiffusionPipeline - -try: - if not (is_paddle_available() and is_paddlenlp_available() and is_fastdeploy_available()): - raise OptionalDependencyNotAvailable() -except 
OptionalDependencyNotAvailable: - from ..utils.dummy_paddle_and_paddlenlp_and_fastdeploy_objects import * # noqa F403 -else: - from .stable_diffusion import ( - FastDeployStableDiffusionImg2ImgPipeline, - FastDeployStableDiffusionInpaintPipeline, - FastDeployStableDiffusionInpaintPipelineLegacy, - FastDeployStableDiffusionMegaPipeline, - FastDeployStableDiffusionPipeline, - ) -try: - if not (is_paddle_available() and is_paddlenlp_available() and is_k_diffusion_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_paddle_and_paddlenlp_and_k_diffusion_objects import * # noqa F403 -else: - from .stable_diffusion import StableDiffusionKDiffusionPipeline diff --git a/spaces/2ndelement/voicevox/speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff/policy.md b/spaces/2ndelement/voicevox/speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff/policy.md deleted file mode 100644 index c9bcc2cea42f727c8e43c934fc38163144848882..0000000000000000000000000000000000000000 --- a/spaces/2ndelement/voicevox/speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff/policy.md +++ /dev/null @@ -1,3 +0,0 @@ -dummy1 policy - -https://voicevox.hiroshiba.jp/ diff --git a/spaces/801artistry/RVC801/demucs/separate.py b/spaces/801artistry/RVC801/demucs/separate.py deleted file mode 100644 index 3fc7af9e711978b3e21398aa6f1deb9ae87dd370..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/demucs/separate.py +++ /dev/null @@ -1,185 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys -from pathlib import Path -import subprocess - -import julius -import torch as th -import torchaudio as ta - -from .audio import AudioFile, convert_audio_channels -from .pretrained import is_pretrained, load_pretrained -from .utils import apply_model, load_model - - -def load_track(track, device, audio_channels, samplerate): - errors = {} - wav = None - - try: - wav = AudioFile(track).read( - streams=0, - samplerate=samplerate, - channels=audio_channels).to(device) - except FileNotFoundError: - errors['ffmpeg'] = 'Ffmpeg is not installed.' - except subprocess.CalledProcessError: - errors['ffmpeg'] = 'FFmpeg could not read the file.' - - if wav is None: - try: - wav, sr = ta.load(str(track)) - except RuntimeError as err: - errors['torchaudio'] = err.args[0] - else: - wav = convert_audio_channels(wav, audio_channels) - wav = wav.to(device) - wav = julius.resample_frac(wav, sr, samplerate) - - if wav is None: - print(f"Could not load file {track}. " - "Maybe it is not a supported file format? ") - for backend, error in errors.items(): - print(f"When trying to load using {backend}, got the following error: {error}") - sys.exit(1) - return wav - - -def encode_mp3(wav, path, bitrate=320, samplerate=44100, channels=2, verbose=False): - try: - import lameenc - except ImportError: - print("Failed to call lame encoder. Maybe it is not installed? 
" - "On windows, run `python.exe -m pip install -U lameenc`, " - "on OSX/Linux, run `python3 -m pip install -U lameenc`, " - "then try again.", file=sys.stderr) - sys.exit(1) - encoder = lameenc.Encoder() - encoder.set_bit_rate(bitrate) - encoder.set_in_sample_rate(samplerate) - encoder.set_channels(channels) - encoder.set_quality(2) # 2-highest, 7-fastest - if not verbose: - encoder.silence() - wav = wav.transpose(0, 1).numpy() - mp3_data = encoder.encode(wav.tobytes()) - mp3_data += encoder.flush() - with open(path, "wb") as f: - f.write(mp3_data) - - -def main(): - parser = argparse.ArgumentParser("demucs.separate", - description="Separate the sources for the given tracks") - parser.add_argument("tracks", nargs='+', type=Path, default=[], help='Path to tracks') - parser.add_argument("-n", - "--name", - default="demucs_quantized", - help="Model name. See README.md for the list of pretrained models. " - "Default is demucs_quantized.") - parser.add_argument("-v", "--verbose", action="store_true") - parser.add_argument("-o", - "--out", - type=Path, - default=Path("separated"), - help="Folder where to put extracted tracks. A subfolder " - "with the model name will be created.") - parser.add_argument("--models", - type=Path, - default=Path("models"), - help="Path to trained models. " - "Also used to store downloaded pretrained models") - parser.add_argument("-d", - "--device", - default="cuda" if th.cuda.is_available() else "cpu", - help="Device to use, default is cuda if available else cpu") - parser.add_argument("--shifts", - default=0, - type=int, - help="Number of random shifts for equivariant stabilization." - "Increase separation time but improves quality for Demucs. 10 was used " - "in the original paper.") - parser.add_argument("--overlap", - default=0.25, - type=float, - help="Overlap between the splits.") - parser.add_argument("--no-split", - action="store_false", - dest="split", - default=True, - help="Doesn't split audio in chunks. This can use large amounts of memory.") - parser.add_argument("--float32", - action="store_true", - help="Convert the output wavefile to use pcm f32 format instead of s16. " - "This should not make a difference if you just plan on listening to the " - "audio but might be needed to compute exactly metrics like SDR etc.") - parser.add_argument("--int16", - action="store_false", - dest="float32", - help="Opposite of --float32, here for compatibility.") - parser.add_argument("--mp3", action="store_true", - help="Convert the output wavs to mp3.") - parser.add_argument("--mp3-bitrate", - default=320, - type=int, - help="Bitrate of converted mp3.") - - args = parser.parse_args() - name = args.name + ".th" - model_path = args.models / name - if model_path.is_file(): - model = load_model(model_path) - else: - if is_pretrained(args.name): - model = load_pretrained(args.name) - else: - print(f"No pre-trained model {args.name}", file=sys.stderr) - sys.exit(1) - model.to(args.device) - - out = args.out / args.name - out.mkdir(parents=True, exist_ok=True) - print(f"Separated tracks will be stored in {out.resolve()}") - for track in args.tracks: - if not track.exists(): - print( - f"File {track} does not exist. 
If the path contains spaces, " - "please try again after surrounding the entire path with quotes \"\".", - file=sys.stderr) - continue - print(f"Separating track {track}") - wav = load_track(track, args.device, model.audio_channels, model.samplerate) - - ref = wav.mean(0) - wav = (wav - ref.mean()) / ref.std() - sources = apply_model(model, wav, shifts=args.shifts, split=args.split, - overlap=args.overlap, progress=True) - sources = sources * ref.std() + ref.mean() - - track_folder = out / track.name.rsplit(".", 1)[0] - track_folder.mkdir(exist_ok=True) - for source, name in zip(sources, model.sources): - source = source / max(1.01 * source.abs().max(), 1) - if args.mp3 or not args.float32: - source = (source * 2**15).clamp_(-2**15, 2**15 - 1).short() - source = source.cpu() - stem = str(track_folder / name) - if args.mp3: - encode_mp3(source, stem + ".mp3", - bitrate=args.mp3_bitrate, - samplerate=model.samplerate, - channels=model.audio_channels, - verbose=args.verbose) - else: - wavname = str(track_folder / f"{name}.wav") - ta.save(wavname, source, sample_rate=model.samplerate) - - -if __name__ == "__main__": - main() diff --git a/spaces/801artistry/RVC801/gui_v0.py b/spaces/801artistry/RVC801/gui_v0.py deleted file mode 100644 index 88c3cf9eb1eaa0fa812b32ae4d3750b4ce0a8699..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/gui_v0.py +++ /dev/null @@ -1,786 +0,0 @@ -import os, sys, traceback, re - -import json - -now_dir = os.getcwd() -sys.path.append(now_dir) -from configs.config import Config - -Config = Config() -import PySimpleGUI as sg -import sounddevice as sd -import noisereduce as nr -import numpy as np -from fairseq import checkpoint_utils -import librosa, torch, pyworld, faiss, time, threading -import torch.nn.functional as F -import torchaudio.transforms as tat -import scipy.signal as signal -import torchcrepe - -# import matplotlib.pyplot as plt -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from i18n import I18nAuto - -i18n = I18nAuto() -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -current_dir = os.getcwd() - - -class RVC: - def __init__( - self, key, f0_method, hubert_path, pth_path, index_path, npy_path, index_rate - ) -> None: - """ - 初始化 - """ - try: - self.f0_up_key = key - self.time_step = 160 / 16000 * 1000 - self.f0_min = 50 - self.f0_max = 1100 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - self.f0_method = f0_method - self.sr = 16000 - self.window = 160 - - # Get Torch Device - if torch.cuda.is_available(): - self.torch_device = torch.device( - f"cuda:{0 % torch.cuda.device_count()}" - ) - elif torch.backends.mps.is_available(): - self.torch_device = torch.device("mps") - else: - self.torch_device = torch.device("cpu") - - if index_rate != 0: - self.index = faiss.read_index(index_path) - # self.big_npy = np.load(npy_path) - self.big_npy = self.index.reconstruct_n(0, self.index.ntotal) - print("index search enabled") - self.index_rate = index_rate - model_path = hubert_path - print("load model(s) from {}".format(model_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", - ) - self.model = models[0] - self.model = self.model.to(device) - if Config.is_half: - self.model = self.model.half() - else: - self.model = self.model.float() - self.model.eval() - cpt = 
torch.load(pth_path, map_location="cpu") - self.tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - self.if_f0 = cpt.get("f0", 1) - self.version = cpt.get("version", "v1") - if self.version == "v1": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif self.version == "v2": - if self.if_f0 == 1: - self.net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=Config.is_half - ) - else: - self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del self.net_g.enc_q - print(self.net_g.load_state_dict(cpt["weight"], strict=False)) - self.net_g.eval().to(device) - if Config.is_half: - self.net_g = self.net_g.half() - else: - self.net_g = self.net_g.float() - except: - print(traceback.format_exc()) - - def get_regular_crepe_computation(self, x, f0_min, f0_max, model="full"): - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.torch_device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - def get_harvest_computation(self, x, f0_min, f0_max): - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - return f0 - - def get_f0(self, x, f0_up_key, inp_f0=None): - # Calculate Padding and f0 details here - p_len = x.shape[0] // 512 # For Now This probs doesn't work - x_pad = 1 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = 0 - # Here, check f0_methods and get their computations - if self.f0_method == "harvest": - f0 = self.get_harvest_computation(x, f0_min, f0_max) - elif self.f0_method == "reg-crepe": - f0 = self.get_regular_crepe_computation(x, f0_min, f0_max) - elif self.f0_method == "reg-crepe-tiny": - f0 = self.get_regular_crepe_computation(x, f0_min, f0_max, "tiny") - - # Calculate f0_course and f0_bak here - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0] - f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def infer(self, feats: torch.Tensor) -> np.ndarray: - """ - 推理函数 - """ - audio = feats.clone().cpu().numpy() - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - if Config.is_half: - feats = feats.half() - else: - feats = 
feats.float() - inputs = { - "source": feats.to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9 if self.version == "v1" else 12, - } - torch.cuda.synchronize() - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = ( - self.model.final_proj(logits[0]) if self.version == "v1" else logits[0] - ) - - ####索引优化 - try: - if ( - hasattr(self, "index") - and hasattr(self, "big_npy") - and self.index_rate != 0 - ): - npy = feats[0].cpu().numpy().astype("float32") - score, ix = self.index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - if Config.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate - + (1 - self.index_rate) * feats - ) - else: - print("index search FAIL or disabled") - except: - traceback.print_exc() - print("index search FAIL") - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - torch.cuda.synchronize() - print(feats.shape) - if self.if_f0 == 1: - pitch, pitchf = self.get_f0(audio, self.f0_up_key) - p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存 - else: - pitch, pitchf = None, None - p_len = min(feats.shape[1], 13000) # 太大了爆显存 - torch.cuda.synchronize() - # print(feats.shape,pitch.shape) - feats = feats[:, :p_len, :] - if self.if_f0 == 1: - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - p_len = torch.LongTensor([p_len]).to(device) - ii = 0 # sid - sid = torch.LongTensor([ii]).to(device) - with torch.no_grad(): - if self.if_f0 == 1: - infered_audio = ( - self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - ) - else: - infered_audio = ( - self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float() - ) - torch.cuda.synchronize() - return infered_audio - - -class GUIConfig: - def __init__(self) -> None: - self.hubert_path: str = "" - self.pth_path: str = "" - self.index_path: str = "" - self.npy_path: str = "" - self.f0_method: str = "" - self.pitch: int = 12 - self.samplerate: int = 44100 - self.block_time: float = 1.0 # s - self.buffer_num: int = 1 - self.threhold: int = -30 - self.crossfade_time: float = 0.08 - self.extra_time: float = 0.04 - self.I_noise_reduce = False - self.O_noise_reduce = False - self.index_rate = 0.3 - - -class GUI: - def __init__(self) -> None: - self.config = GUIConfig() - self.flag_vc = False - - self.launcher() - - def load(self): - ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) = self.get_devices() - try: - with open("values1.json", "r") as j: - data = json.load(j) - except: - # Injecting f0_method into the json data - with open("values1.json", "w") as j: - data = { - "pth_path": "", - "index_path": "", - "sg_input_device": input_devices[ - input_devices_indices.index(sd.default.device[0]) - ], - "sg_output_device": output_devices[ - output_devices_indices.index(sd.default.device[1]) - ], - "threhold": "-45", - "pitch": "0", - "index_rate": "0", - "block_time": "1", - "crossfade_length": "0.04", - "extra_time": "1", - } - return data - - def launcher(self): - data = self.load() - sg.theme("DarkTeal12") - input_devices, output_devices, _, _ = self.get_devices() - layout = [ - [ - sg.Frame( - title="Proudly forked by Mangio621", - ), - sg.Frame( - title=i18n("Load model"), - 
layout=[ - [ - sg.Input( - default_text="hubert_base.pt", - key="hubert_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Hubert Model"), - initial_folder=os.path.join(os.getcwd()), - file_types=(("pt files", "*.pt"),), - ), - ], - [ - sg.Input( - default_text=data.get("pth_path", ""), - key="pth_path", - ), - sg.FileBrowse( - i18n("Select the .pth file"), - initial_folder=os.path.join(os.getcwd(), "weights"), - file_types=(("weight files", "*.pth"),), - ), - ], - [ - sg.Input( - default_text=data.get("index_path", ""), - key="index_path", - ), - sg.FileBrowse( - i18n("Select the .index file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("index files", "*.index"),), - ), - ], - [ - sg.Input( - default_text="你不需要填写这个You don't need write this.", - key="npy_path", - disabled=True, - ), - sg.FileBrowse( - i18n("Select the .npy file"), - initial_folder=os.path.join(os.getcwd(), "logs"), - file_types=(("feature files", "*.npy"),), - ), - ], - ], - ), - ], - [ - # Mangio f0 Selection frame Here - sg.Frame( - layout=[ - [ - sg.Radio( - "Harvest", "f0_method", key="harvest", default=True - ), - sg.Radio("Crepe", "f0_method", key="reg-crepe"), - sg.Radio("Crepe Tiny", "f0_method", key="reg-crepe-tiny"), - ] - ], - title="Select an f0 Method", - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Input device")), - sg.Combo( - input_devices, - key="sg_input_device", - default_value=data.get("sg_input_device", ""), - ), - ], - [ - sg.Text(i18n("Output device")), - sg.Combo( - output_devices, - key="sg_output_device", - default_value=data.get("sg_output_device", ""), - ), - ], - ], - title=i18n("Audio device (please use the same type of driver)"), - ) - ], - [ - sg.Frame( - layout=[ - [ - sg.Text(i18n("Response threshold")), - sg.Slider( - range=(-60, 0), - key="threhold", - resolution=1, - orientation="h", - default_value=data.get("threhold", ""), - ), - ], - [ - sg.Text(i18n("Pitch settings")), - sg.Slider( - range=(-24, 24), - key="pitch", - resolution=1, - orientation="h", - default_value=data.get("pitch", ""), - ), - ], - [ - sg.Text(i18n("Index Rate")), - sg.Slider( - range=(0.0, 1.0), - key="index_rate", - resolution=0.01, - orientation="h", - default_value=data.get("index_rate", ""), - ), - ], - ], - title=i18n("General settings"), - ), - sg.Frame( - layout=[ - [ - sg.Text(i18n("Sample length")), - sg.Slider( - range=(0.1, 3.0), - key="block_time", - resolution=0.1, - orientation="h", - default_value=data.get("block_time", ""), - ), - ], - [ - sg.Text(i18n("Fade length")), - sg.Slider( - range=(0.01, 0.15), - key="crossfade_length", - resolution=0.01, - orientation="h", - default_value=data.get("crossfade_length", ""), - ), - ], - [ - sg.Text(i18n("Extra推理时长")), - sg.Slider( - range=(0.05, 3.00), - key="extra_time", - resolution=0.01, - orientation="h", - default_value=data.get("extra_time", ""), - ), - ], - [ - sg.Checkbox(i18n("Input noise reduction"), key="I_noise_reduce"), - sg.Checkbox(i18n("Output noise reduction"), key="O_noise_reduce"), - ], - ], - title=i18n("Performance settings"), - ), - ], - [ - sg.Button(i18n("开始音频Convert"), key="start_vc"), - sg.Button(i18n("停止音频Convert"), key="stop_vc"), - sg.Text(i18n("Inference time (ms):")), - sg.Text("0", key="infer_time"), - ], - ] - self.window = sg.Window("RVC - GUI", layout=layout) - self.event_handler() - - def event_handler(self): - while True: - event, values = self.window.read() - if event == sg.WINDOW_CLOSED: - self.flag_vc = False - exit() - if event == "start_vc" and self.flag_vc == False: - if 
self.set_values(values): - print("using_cuda:" + str(torch.cuda.is_available())) - self.start_vc() - settings = { - "pth_path": values["pth_path"], - "index_path": values["index_path"], - "f0_method": self.get_f0_method_from_radios(values), - "sg_input_device": values["sg_input_device"], - "sg_output_device": values["sg_output_device"], - "threhold": values["threhold"], - "pitch": values["pitch"], - "index_rate": values["index_rate"], - "block_time": values["block_time"], - "crossfade_length": values["crossfade_length"], - "extra_time": values["extra_time"], - } - with open("values1.json", "w") as j: - json.dump(settings, j) - if event == "stop_vc" and self.flag_vc: - self.flag_vc = False - - # Function that returns the used f0 method in string format, e.g. "harvest" - def get_f0_method_from_radios(self, values): - f0_array = [ - {"name": "harvest", "val": values["harvest"]}, - {"name": "reg-crepe", "val": values["reg-crepe"]}, - {"name": "reg-crepe-tiny", "val": values["reg-crepe-tiny"]}, - ] - # Filter through to find a true value - used_f0 = "" - for f0 in f0_array: - if f0["val"]: - used_f0 = f0["name"] - break - if used_f0 == "": - used_f0 = "harvest" # Default Harvest if used_f0 is empty somehow - return used_f0 - - def set_values(self, values): - if len(values["pth_path"].strip()) == 0: - sg.popup(i18n("Select the pth file")) - return False - if len(values["index_path"].strip()) == 0: - sg.popup(i18n("Select the index file")) - return False - pattern = re.compile("[^\x00-\x7F]+") - if pattern.findall(values["hubert_path"]): - sg.popup(i18n("The hubert model path must not contain Chinese characters")) - return False - if pattern.findall(values["pth_path"]): - sg.popup(i18n("The pth file path must not contain Chinese characters.")) - return False - if pattern.findall(values["index_path"]): - sg.popup(i18n("The index file path must not contain Chinese characters.")) - return False - self.set_devices(values["sg_input_device"], values["sg_output_device"]) - self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt") - self.config.pth_path = values["pth_path"] - self.config.index_path = values["index_path"] - self.config.npy_path = values["npy_path"] - self.config.f0_method = self.get_f0_method_from_radios(values) - self.config.threhold = values["threhold"] - self.config.pitch = values["pitch"] - self.config.block_time = values["block_time"] - self.config.crossfade_time = values["crossfade_length"] - self.config.extra_time = values["extra_time"] - self.config.I_noise_reduce = values["I_noise_reduce"] - self.config.O_noise_reduce = values["O_noise_reduce"] - self.config.index_rate = values["index_rate"] - return True - - def start_vc(self): - torch.cuda.empty_cache() - self.flag_vc = True - self.block_frame = int(self.config.block_time * self.config.samplerate) - self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate) - self.sola_search_frame = int(0.012 * self.config.samplerate) - self.delay_frame = int(0.01 * self.config.samplerate) # reserve 0.02s of look-ahead - self.extra_frame = int(self.config.extra_time * self.config.samplerate) - self.rvc = None - self.rvc = RVC( - self.config.pitch, - self.config.f0_method, - self.config.hubert_path, - self.config.pth_path, - self.config.index_path, - self.config.npy_path, - self.config.index_rate, - ) - self.input_wav: np.ndarray = np.zeros( - self.extra_frame - + self.crossfade_frame - + self.sola_search_frame - + self.block_frame, - dtype="float32", - ) - self.output_wav: torch.Tensor = torch.zeros( -
self.block_frame, device=device, dtype=torch.float32 - ) - self.sola_buffer: torch.Tensor = torch.zeros( - self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_in_window: torch.Tensor = torch.linspace( - 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32 - ) - self.fade_out_window: torch.Tensor = 1 - self.fade_in_window - self.resampler1 = tat.Resample( - orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32 - ) - self.resampler2 = tat.Resample( - orig_freq=self.rvc.tgt_sr, - new_freq=self.config.samplerate, - dtype=torch.float32, - ) - thread_vc = threading.Thread(target=self.soundinput) - thread_vc.start() - - def soundinput(self): - """ - Receive audio input. - """ - with sd.Stream( - channels=2, - callback=self.audio_callback, - blocksize=self.block_frame, - samplerate=self.config.samplerate, - dtype="float32", - ): - while self.flag_vc: - time.sleep(self.config.block_time) - print("Audio block passed.") - print("ENDing VC") - - def audio_callback( - self, indata: np.ndarray, outdata: np.ndarray, frames, times, status - ): - """ - Audio processing. - """ - start_time = time.perf_counter() - indata = librosa.to_mono(indata.T) - if self.config.I_noise_reduce: - indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate) - - """noise gate""" - frame_length = 2048 - hop_length = 1024 - rms = librosa.feature.rms( - y=indata, frame_length=frame_length, hop_length=hop_length - ) - db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold - # print(rms.shape,db.shape,db) - for i in range(db_threhold.shape[0]): - if db_threhold[i]: - indata[i * hop_length : (i + 1) * hop_length] = 0 - self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata) - - # infer - print("input_wav:" + str(self.input_wav.shape)) - # print('infered_wav:'+str(infer_wav.shape)) - infer_wav: torch.Tensor = self.resampler2( - self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav))) - )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to( - device - ) - print("infer_wav:" + str(infer_wav.shape)) - - # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC - cor_nom = F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame], - self.sola_buffer[None, None, :], - ) - cor_den = torch.sqrt( - F.conv1d( - infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame] - ** 2, - torch.ones(1, 1, self.crossfade_frame, device=device), - ) - + 1e-8 - ) - sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0]) - print("sola offset: " + str(int(sola_offset))) - - # crossfade - self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame] - self.output_wav[: self.crossfade_frame] *= self.fade_in_window - self.output_wav[: self.crossfade_frame] += self.sola_buffer[:] - if sola_offset < self.sola_search_frame: - self.sola_buffer[:] = ( - infer_wav[ - -self.sola_search_frame - - self.crossfade_frame - + sola_offset : -self.sola_search_frame - + sola_offset - ] - * self.fade_out_window - ) - else: - self.sola_buffer[:] = ( - infer_wav[-self.crossfade_frame :] * self.fade_out_window - ) - - if self.config.O_noise_reduce: - outdata[:] = np.tile( - nr.reduce_noise( - y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate - ), - (2, 1), - ).T - else: - outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy() - total_time = time.perf_counter() - start_time - self.window["infer_time"].update(int(total_time * 1000)) - print("infer time:" + str(total_time)) - print("f0_method: " +
str(self.config.f0_method)) - - def get_devices(self, update: bool = True): - """Get the list of audio devices.""" - if update: - sd._terminate() - sd._initialize() - devices = sd.query_devices() - hostapis = sd.query_hostapis() - for hostapi in hostapis: - for device_idx in hostapi["devices"]: - devices[device_idx]["hostapi_name"] = hostapi["name"] - input_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices = [ - f"{d['name']} ({d['hostapi_name']})" - for d in devices - if d["max_output_channels"] > 0 - ] - input_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_input_channels"] > 0 - ] - output_devices_indices = [ - d["index"] if "index" in d else d["name"] - for d in devices - if d["max_output_channels"] > 0 - ] - return ( - input_devices, - output_devices, - input_devices_indices, - output_devices_indices, - ) - - def set_devices(self, input_device, output_device): - """Set the input and output devices.""" - ( - input_devices, - output_devices, - input_device_indices, - output_device_indices, - ) = self.get_devices() - sd.default.device[0] = input_device_indices[input_devices.index(input_device)] - sd.default.device[1] = output_device_indices[ - output_devices.index(output_device) - ] - print("input device:" + str(sd.default.device[0]) + ":" + str(input_device)) - print("output device:" + str(sd.default.device[1]) + ":" + str(output_device)) - - -gui = GUI() diff --git a/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/README.md b/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/README.md deleted file mode 100644 index 1b13026c96d88035e27f367038c949019c3d25bb..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/02-H5-AR-VR-IOT/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: 02 H5 Aframe AR VR -emoji: 🦀 -colorFrom: indigo -colorTo: gray -sdk: static -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AI-Zero-to-Hero/09-SL-Live-RealTime-Dashboard/app.py b/spaces/AI-Zero-to-Hero/09-SL-Live-RealTime-Dashboard/app.py deleted file mode 100644 index 71fee5595536d14ea9a0d98a9d6930d516d5c4eb..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/09-SL-Live-RealTime-Dashboard/app.py +++ /dev/null @@ -1,92 +0,0 @@ -import time # to simulate real-time data, time loop - -import numpy as np # np mean, np random -import pandas as pd # read csv, df manipulation -import plotly.express as px # interactive charts -import streamlit as st # 🎈 data web app development - -st.set_page_config( - page_title="Real-Time Data Science Dashboard", - page_icon="✅", - layout="wide", -) - -# read csv from a github repo -dataset_url = "https://raw.githubusercontent.com/Lexie88rus/bank-marketing-analysis/master/bank.csv" - -# read csv from a URL -@st.experimental_memo -def get_data() -> pd.DataFrame: - return pd.read_csv(dataset_url) - -df = get_data() - -# dashboard title -st.title("Real-Time / Live Data Science Dashboard") - -# top-level filters -job_filter = st.selectbox("Select the Job", pd.unique(df["job"])) - -# creating a single-element container -placeholder = st.empty() - -# dataframe filter -df = df[df["job"] == job_filter] - -# near real-time / live feed simulation -for seconds in range(200): - - df["age_new"] = df["age"] * np.random.choice(range(1, 5)) - df["balance_new"] = df["balance"] * np.random.choice(range(1, 5)) - - # creating KPIs - avg_age = np.mean(df["age_new"]) - - count_married = int( -
df[(df["marital"] == "married")]["marital"].count() - + np.random.choice(range(1, 30)) - ) - - balance = np.mean(df["balance_new"]) - - with placeholder.container(): - - # create three columns - kpi1, kpi2, kpi3 = st.columns(3) - - # fill in those three columns with respective metrics or KPIs - kpi1.metric( - label="Age ⏳", - value=round(avg_age), - delta=round(avg_age) - 10, - ) - - kpi2.metric( - label="Married Count 💍", - value=int(count_married), - delta=-10 + count_married, - ) - - kpi3.metric( - label="A/C Balance $", - value=f"$ {round(balance,2)} ", - delta=-round(balance / count_married) * 100, - ) - - # create two columns for charts - fig_col1, fig_col2 = st.columns(2) - with fig_col1: - st.markdown("### First Chart") - fig = px.density_heatmap( - data_frame=df, y="age_new", x="marital" - ) - st.write(fig) - - with fig_col2: - st.markdown("### Second Chart") - fig2 = px.histogram(data_frame=df, x="age_new") - st.write(fig2) - - st.markdown("### Detailed Data View") - st.dataframe(df) - time.sleep(1) \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/scripts/templates/index.html b/spaces/AIConsultant/MusicGen/scripts/templates/index.html deleted file mode 100644 index 7bd3afe9d933271bb922c1a0a534dd6b86fe67bc..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/scripts/templates/index.html +++ /dev/null @@ -1,28 +0,0 @@ -{% extends "base.html" %} -{% block content %} - -

- Welcome {{session['user']}} to the internal MOS assistant for AudioCraft. - You can create custom surveys between your models, which you can - evaluate yourself or with the help of your teammates, simply by - sharing a link!

- -{% for error in errors %} -

{{error}}

-{% endfor %} -
-
-
- -
-
- -
- - - -{% endblock %} diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/tts/base_tts_infer.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/tts/base_tts_infer.py deleted file mode 100644 index 3808e968ad4757d98c67244e67a91d4ebfa07e26..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/inference/tts/base_tts_infer.py +++ /dev/null @@ -1,101 +0,0 @@ -from tasks.tts.dataset_utils import FastSpeechWordDataset -from tasks.tts.tts_utils import load_data_preprocessor -from vocoders.hifigan import HifiGanGenerator -import os -import librosa -import soundfile as sf -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor -from string import punctuation -import torch -from utils.ckpt_utils import load_ckpt -from utils.hparams import set_hparams -from utils.hparams import hparams as hp - -class BaseTTSInfer: - def __init__(self, hparams, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.hparams = hparams - self.device = device - self.data_dir = hparams['binary_data_dir'] - self.preprocessor, self.preprocess_args = load_data_preprocessor() - self.ph_encoder, self.word_encoder = self.preprocessor.load_dict(self.data_dir) - self.ds_cls = FastSpeechWordDataset - self.model = self.build_model() - self.model.eval() - self.model.to(self.device) - self.vocoder = self.build_vocoder() - self.vocoder.eval() - self.vocoder.to(self.device) - self.asr_processor, self.asr_model = self.build_asr() - - def build_model(self): - raise NotImplementedError - - def forward_model(self, inp): - raise NotImplementedError - - def build_asr(self): - # load pretrained model - processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") # facebook/wav2vec2-base-960h wav2vec2-large-960h-lv60-self - model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to(self.device) - return processor, model - - def build_vocoder(self): - base_dir = self.hparams['vocoder_ckpt'] - config_path = f'{base_dir}/config.yaml' - config = set_hparams(config_path, global_hparams=False) - vocoder = HifiGanGenerator(config) - load_ckpt(vocoder, base_dir, 'model_gen') - return vocoder - - def run_vocoder(self, c): - c = c.transpose(2, 1) - y = self.vocoder(c)[:, 0] - return y - - def preprocess_input(self, inp): - raise NotImplementedError - - def input_to_batch(self, item): - raise NotImplementedError - - def postprocess_output(self, output): - return output - - def infer_once(self, inp): - inp = self.preprocess_input(inp) - output = self.forward_model(inp) - output = self.postprocess_output(output) - return output - - @classmethod - def example_run(cls, inp): - from utils.audio import save_wav - - #set_hparams(print_hparams=False) - infer_ins = cls(hp) - out = infer_ins.infer_once(inp) - os.makedirs('infer_out', exist_ok=True) - save_wav(out, f'infer_out/{hp["text"]}.wav', hp['audio_sample_rate']) - print(f'Save at infer_out/{hp["text"]}.wav.') - - def asr(self, file): - sample_rate = self.hparams['audio_sample_rate'] - audio_input, source_sample_rate = sf.read(file) - - # Resample the wav if needed - if sample_rate is not None and source_sample_rate != sample_rate: - audio_input = librosa.resample(audio_input, source_sample_rate, sample_rate) - - # pad input values and return pt tensor - input_values = self.asr_processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values - - # retrieve logits & take argmax - logits = self.asr_model(input_values).logits - predicted_ids = torch.argmax(logits, dim=-1) - - # 
transcribe - transcription = self.asr_processor.decode(predicted_ids[0]) - transcription = transcription.rstrip(punctuation) - return audio_input, transcription \ No newline at end of file diff --git a/spaces/AILab-CVC/SEED-LLaMA/models/transforms.py b/spaces/AILab-CVC/SEED-LLaMA/models/transforms.py deleted file mode 100644 index 1690b38b973b518bb5064f4b7418ddfac8de3066..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/models/transforms.py +++ /dev/null @@ -1,21 +0,0 @@ -from torchvision import transforms - - -def get_transform(type='clip', keep_ratio=True, image_size=224): - if type == 'clip': - transform = [] - if keep_ratio: - transform.extend([ - transforms.Resize(image_size), - transforms.CenterCrop(image_size), - ]) - else: - transform.append(transforms.Resize((image_size, image_size))) - transform.extend([ - transforms.ToTensor(), - transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711)) - ]) - - return transforms.Compose(transform) - else: - raise NotImplementedError diff --git a/spaces/AeroXi/english-ai/app.py b/spaces/AeroXi/english-ai/app.py deleted file mode 100644 index c0e5b8f0e4de4effb681d249ec6dc4ad4930657a..0000000000000000000000000000000000000000 --- a/spaces/AeroXi/english-ai/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import openai -import os - - -# # 设置你的OpenAI API密钥 -# openai.api_key = "your_openai_api_key" - -# 定义将音频转换为文本的函数 -def transcribe_audio(audio): - os.rename(audio, audio + '.wav') - audio_file = open(audio + '.wav', "rb") - # 调用Whisper API进行语音识别 - transcript = openai.Audio.transcribe("whisper-1", audio_file) - - # 返回识别的文字 - return transcript["text"] - -# 创建Gradio界面 -audio_input = gr.inputs.Audio(source="microphone", type="filepath") -text_output = gr.outputs.Textbox() - -iface = gr.Interface(fn=transcribe_audio, inputs=audio_input, outputs=text_output, - title="Whisper语音识别", - description="使用麦克风录制音频并将其转换为文本。") - -# 启动Gradio应用 -iface.launch() diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.js deleted file mode 100644 index 44250496e81fda884353bb1c80da38d636728a16..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/circularprogress/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import CircularProgress from './CircularProgress.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('circularProgress', function (x, y, radius, barColor, value, config) { - var gameObject = new CircularProgress(this.scene, x, y, radius, barColor, value, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.CircularProgress', CircularProgress); - -export default CircularProgress; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/drag/Drag.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/drag/Drag.d.ts deleted file mode 100644 index 979a4674c95916cddafa26a8246950f4e9d318f6..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/drag/Drag.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import Drag from '../../../plugins/drag'; -export default Drag; \ No newline at end of file diff --git 
a/spaces/Amrrs/DragGan-Inversion/PTI/configs/paths_config.py b/spaces/Amrrs/DragGan-Inversion/PTI/configs/paths_config.py deleted file mode 100644 index 2e1e65ddb127eec0dc0007ddc8472f4e3c51932a..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/configs/paths_config.py +++ /dev/null @@ -1,31 +0,0 @@ -## Pretrained models paths -e4e = 'PTI/pretrained_models/e4e_ffhq_encode.pt' -stylegan2_ada_ffhq = '../PTI/pretrained_models/ffhq.pkl' -style_clip_pretrained_mappers = '' -ir_se50 = 'PTI/pretrained_models/model_ir_se50.pth' -dlib = 'PTI/pretrained_models/align.dat' - -## Dirs for output files -checkpoints_dir = 'PTI/checkpoints' -embedding_base_dir = 'PTI/embeddings' -styleclip_output_dir = 'PTI/StyleCLIP_results' -experiments_output_dir = 'PTI/output' - -## Input info -### Input dir, where the images reside -input_data_path = '' -### Inversion identifier, used to keeping track of the inversion results. Both the latent code and the generator -input_data_id = 'barcelona' - -## Keywords -pti_results_keyword = 'PTI' -e4e_results_keyword = 'e4e' -sg2_results_keyword = 'SG2' -sg2_plus_results_keyword = 'SG2_plus' -multi_id_model_type = 'multi_id' - -## Edit directions -interfacegan_age = 'PTI/editings/interfacegan_directions/age.pt' -interfacegan_smile = 'PTI/editings/interfacegan_directions/smile.pt' -interfacegan_rotation = 'PTI/editings/interfacegan_directions/rotation.pt' -ffhq_pca = 'PTI/editings/ganspace_pca/ffhq_pca.pt' diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/psp.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/psp.py deleted file mode 100644 index 4fa715ae86a6280a7cdb8640ef9192608a5b7e30..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/psp.py +++ /dev/null @@ -1,109 +0,0 @@ -from pti.pti_models.e4e.stylegan2.model import Generator -from pti.pti_models.e4e.encoders import psp_encoders -from torch import nn -import torch -import matplotlib -from pti.pti_configs import paths_config -matplotlib.use('Agg') - - -def get_keys(d, name): - if 'state_dict' in d: - d = d['state_dict'] - d_filt = {k[len(name) + 1:]: v for k, v in d.items() - if k[:len(name)] == name} - return d_filt - - -class pSp(nn.Module): - - def __init__(self, opts): - super(pSp, self).__init__() - self.opts = opts - # Define architecture - self.encoder = self.set_encoder() - self.decoder = Generator( - opts.stylegan_size, 512, 8, channel_multiplier=2) - self.face_pool = torch.nn.AdaptiveAvgPool2d((256, 256 // 2)) - # Load weights if needed - self.load_weights() - - def set_encoder(self): - if self.opts.encoder_type == 'GradualStyleEncoder': - encoder = psp_encoders.GradualStyleEncoder(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'Encoder4Editing': - encoder = psp_encoders.Encoder4Editing(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'SingleStyleCodeEncoder': - encoder = psp_encoders.BackboneEncoderUsingLastLayerIntoW( - 50, 'ir_se', self.opts) - else: - raise Exception('{} is not a valid encoders'.format( - self.opts.encoder_type)) - return encoder - - def load_weights(self): - if self.opts.checkpoint_path is not None: - print('Loading e4e over the pSp framework from checkpoint: {}'.format( - self.opts.checkpoint_path)) - ckpt = torch.load(self.opts.checkpoint_path, map_location='cpu') - self.encoder.load_state_dict( - get_keys(ckpt, 'encoder'), strict=True) - self.decoder.load_state_dict( - get_keys(ckpt, 'decoder'), strict=True) - 
self.__load_latent_avg(ckpt) - else: - print('Loading encoders weights from irse50!') - encoder_ckpt = torch.load(model_paths['ir_se50']) - self.encoder.load_state_dict(encoder_ckpt, strict=False) - print('Loading decoder weights from pretrained!') - ckpt = torch.load(self.opts.stylegan_weights) - self.decoder.load_state_dict(ckpt['g_ema'], strict=False) - self.__load_latent_avg(ckpt, repeat=self.encoder.style_count) - - def forward(self, x, resize=True, latent_mask=None, input_code=False, randomize_noise=True, - inject_latent=None, return_latents=False, alpha=None): - if input_code: - codes = x - else: - codes = self.encoder(x) - # normalize with respect to the center of an average face - if self.opts.start_from_latent_avg: - if codes.ndim == 2: - codes = codes + \ - self.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :] - else: - codes = codes + \ - self.latent_avg.repeat(codes.shape[0], 1, 1) - - if latent_mask is not None: - for i in latent_mask: - if inject_latent is not None: - if alpha is not None: - codes[:, i] = alpha * inject_latent[:, i] + \ - (1 - alpha) * codes[:, i] - else: - codes[:, i] = inject_latent[:, i] - else: - codes[:, i] = 0 - - input_is_latent = not input_code - images, result_latent = self.decoder([codes], - input_is_latent=input_is_latent, - randomize_noise=randomize_noise, - return_latents=return_latents) - - if resize: - images = self.face_pool(images) - - if return_latents: - return images, result_latent - else: - return images - - def __load_latent_avg(self, ckpt, repeat=None): - if 'latent_avg' in ckpt: - self.latent_avg = ckpt['latent_avg'].to(self.opts.device) - if repeat is not None: - self.latent_avg = self.latent_avg.repeat(repeat, 1) - else: - self.latent_avg = None diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py deleted file mode 100644 index 7c57a6f8ff0a7dbb18666c1b9c882da10e586aa3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/pascal_context.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=60), - auxiliary_head=dict(num_classes=60), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_80k_cityscapes.py deleted file mode 100644 index a653dda19255214a1a412b645abddd3fc5c0d853..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_hr18.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RoPE.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RoPE.py deleted file mode 100644 index c15616c672b6ea304212d6771207e05805007ae8..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RoPE.py +++ 
/dev/null @@ -1,18 +0,0 @@ -def get_alpha_value(alpha, base): - ''' - Gets alpha_value from alpha_value and rope_freq_base - ''' - if base > 0: - return (base/10000.) ** (63/64.) - else: - return alpha - - -def get_rope_freq_base(alpha, base): - ''' - Gets rope_freq_base from alpha_value and rope_freq_base - ''' - if base > 0: - return base - else: - return 10000 * alpha ** (64/63.) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py deleted file mode 100644 index 6d12508469c6c8fa1884debece44c58d158cb6fa..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/fused_bias_leakyrelu.py +++ /dev/null @@ -1,268 +0,0 @@ -# modified from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -# Copyright (c) 2021, NVIDIA Corporation. All rights reserved. -# NVIDIA Source Code License for StyleGAN2 with Adaptive Discriminator -# Augmentation (ADA) -# ======================================================================= - -# 1. Definitions - -# "Licensor" means any person or entity that distributes its Work. - -# "Software" means the original work of authorship made available under -# this License. - -# "Work" means the Software and any additions to or derivative works of -# the Software that are made available under this License. - -# The terms "reproduce," "reproduction," "derivative works," and -# "distribution" have the meaning as provided under U.S. copyright law; -# provided, however, that for the purposes of this License, derivative -# works shall not include works that remain separable from, or merely -# link (or bind by name) to the interfaces of, the Work. - -# Works, including the Software, are "made available" under this License -# by including in or with the Work either (a) a copyright notice -# referencing the applicability of this License to the Work, or (b) a -# copy of this License. - -# 2. License Grants - -# 2.1 Copyright Grant. Subject to the terms and conditions of this -# License, each Licensor grants to you a perpetual, worldwide, -# non-exclusive, royalty-free, copyright license to reproduce, -# prepare derivative works of, publicly display, publicly perform, -# sublicense and distribute its Work and any resulting derivative -# works in any form. - -# 3. Limitations - -# 3.1 Redistribution. You may reproduce or distribute the Work only -# if (a) you do so under this License, (b) you include a complete -# copy of this License with your distribution, and (c) you retain -# without modification any copyright, patent, trademark, or -# attribution notices that are present in the Work. - -# 3.2 Derivative Works. You may specify that additional or different -# terms apply to the use, reproduction, and distribution of your -# derivative works of the Work ("Your Terms") only if (a) Your Terms -# provide that the use limitation in Section 3.3 applies to your -# derivative works, and (b) you identify the specific derivative -# works that are subject to Your Terms. Notwithstanding Your Terms, -# this License (including the redistribution requirements in Section -# 3.1) will continue to apply to the Work itself. - -# 3.3 Use Limitation. The Work and any derivative works thereof only -# may be used or intended for use non-commercially. 
Notwithstanding -# the foregoing, NVIDIA and its affiliates may use the Work and any -# derivative works commercially. As used herein, "non-commercially" -# means for research or evaluation purposes only. - -# 3.4 Patent Claims. If you bring or threaten to bring a patent claim -# against any Licensor (including any claim, cross-claim or -# counterclaim in a lawsuit) to enforce any patents that you allege -# are infringed by any Work, then your rights under this License from -# such Licensor (including the grant in Section 2.1) will terminate -# immediately. - -# 3.5 Trademarks. This License does not grant any rights to use any -# Licensor’s or its affiliates’ names, logos, or trademarks, except -# as necessary to reproduce the notices described in this License. - -# 3.6 Termination. If you violate any term of this License, then your -# rights under this License (including the grant in Section 2.1) will -# terminate immediately. - -# 4. Disclaimer of Warranty. - -# THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR -# NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER -# THIS LICENSE. - -# 5. Limitation of Liability. - -# EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL -# THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE -# SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, -# INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF -# OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK -# (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, -# LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER -# COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF -# THE POSSIBILITY OF SUCH DAMAGES. - -# ======================================================================= - -import torch -import torch.nn.functional as F -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['fused_bias_leakyrelu']) - - -class FusedBiasLeakyReLUFunctionBackward(Function): - """Calculate second order deviation. - - This function is to compute the second order deviation for the fused leaky - relu operation. - """ - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = ext_module.fused_bias_leakyrelu( - grad_output, - empty, - out, - act=3, - grad=1, - alpha=negative_slope, - scale=scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - - # The second order deviation, in fact, contains two parts, while the - # the first part is zero. Thus, we direct consider the second part - # which is similar with the first order deviation in implementation. 
- gradgrad_out = ext_module.fused_bias_leakyrelu( - gradgrad_input, - gradgrad_bias.to(out.dtype), - out, - act=3, - grad=1, - alpha=ctx.negative_slope, - scale=ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedBiasLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - - out = ext_module.fused_bias_leakyrelu( - input, - bias, - empty, - act=3, - grad=0, - alpha=negative_slope, - scale=scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedBiasLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedBiasLeakyReLU(nn.Module): - """Fused bias leaky ReLU. - - This function is introduced in the StyleGAN2: - http://arxiv.org/abs/1912.04958 - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` : is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`:. Of course, you may change it with # noqa: W605, E501 - your own scale. - - TODO: Implement the CPU version. - - Args: - channel (int): The channel number of the feature map. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. - scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - """ - - def __init__(self, num_channels, negative_slope=0.2, scale=2**0.5): - super(FusedBiasLeakyReLU, self).__init__() - - self.bias = nn.Parameter(torch.zeros(num_channels)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_bias_leakyrelu(input, self.bias, self.negative_slope, - self.scale) - - -def fused_bias_leakyrelu(input, bias, negative_slope=0.2, scale=2**0.5): - """Fused bias leaky ReLU function. - - This function is introduced in the StyleGAN2: - http://arxiv.org/abs/1912.04958 - - The bias term comes from the convolution operation. In addition, to keep - the variance of the feature map or gradients unchanged, they also adopt a - scale similarly with Kaiming initialization. However, since the - :math:`1+{alpha}^2` : is too small, we can just ignore it. Therefore, the - final scale is just :math:`\sqrt{2}`:. Of course, you may change it with # noqa: W605, E501 - your own scale. - - Args: - input (torch.Tensor): Input feature map. - bias (nn.Parameter): The bias from convolution operation. - negative_slope (float, optional): Same as nn.LeakyRelu. - Defaults to 0.2. - scale (float, optional): A scalar to adjust the variance of the feature - map. Defaults to 2**0.5. - - Returns: - torch.Tensor: Feature map after non-linear activation. 
- """ - - if not input.is_cuda: - return bias_leakyrelu_ref(input, bias, negative_slope, scale) - - return FusedBiasLeakyReLUFunction.apply(input, bias.to(input.dtype), - negative_slope, scale) - - -def bias_leakyrelu_ref(x, bias, negative_slope=0.2, scale=2**0.5): - - if bias is not None: - assert bias.ndim == 1 - assert bias.shape[0] == x.shape[1] - x = x + bias.reshape([-1 if i == 1 else 1 for i in range(x.ndim)]) - - x = F.leaky_relu(x, negative_slope) - if scale != 1: - x = x * scale - - return x diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roiaware_pool3d.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roiaware_pool3d.py deleted file mode 100644 index 291b0e5a9b692492c7d7e495ea639c46042e2f18..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roiaware_pool3d.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.autograd import Function - -import annotator.uniformer.mmcv as mmcv -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roiaware_pool3d_forward', 'roiaware_pool3d_backward']) - - -class RoIAwarePool3d(nn.Module): - """Encode the geometry-specific features of each 3D proposal. - - Please refer to `PartA2 `_ for more - details. - - Args: - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int, optional): The maximum number of points per - voxel. Default: 128. - mode (str, optional): Pooling method of RoIAware, 'max' or 'avg'. - Default: 'max'. - """ - - def __init__(self, out_size, max_pts_per_voxel=128, mode='max'): - super().__init__() - - self.out_size = out_size - self.max_pts_per_voxel = max_pts_per_voxel - assert mode in ['max', 'avg'] - pool_mapping = {'max': 0, 'avg': 1} - self.mode = pool_mapping[mode] - - def forward(self, rois, pts, pts_feature): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - - Returns: - pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C] - """ - - return RoIAwarePool3dFunction.apply(rois, pts, pts_feature, - self.out_size, - self.max_pts_per_voxel, self.mode) - - -class RoIAwarePool3dFunction(Function): - - @staticmethod - def forward(ctx, rois, pts, pts_feature, out_size, max_pts_per_voxel, - mode): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int): The maximum number of points per voxel. - Default: 128. - mode (int): Pooling method of RoIAware, 0 (max pool) or 1 (average - pool). - - Returns: - pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C], output - pooled features. 
- """ - - if isinstance(out_size, int): - out_x = out_y = out_z = out_size - else: - assert len(out_size) == 3 - assert mmcv.is_tuple_of(out_size, int) - out_x, out_y, out_z = out_size - - num_rois = rois.shape[0] - num_channels = pts_feature.shape[-1] - num_pts = pts.shape[0] - - pooled_features = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels)) - argmax = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels), dtype=torch.int) - pts_idx_of_voxels = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, max_pts_per_voxel), - dtype=torch.int) - - ext_module.roiaware_pool3d_forward(rois, pts, pts_feature, argmax, - pts_idx_of_voxels, pooled_features, - mode) - - ctx.roiaware_pool3d_for_backward = (pts_idx_of_voxels, argmax, mode, - num_pts, num_channels) - return pooled_features - - @staticmethod - def backward(ctx, grad_out): - ret = ctx.roiaware_pool3d_for_backward - pts_idx_of_voxels, argmax, mode, num_pts, num_channels = ret - - grad_in = grad_out.new_zeros((num_pts, num_channels)) - ext_module.roiaware_pool3d_backward(pts_idx_of_voxels, argmax, - grad_out.contiguous(), grad_in, - mode) - - return None, None, grad_in, None, None, None diff --git a/spaces/Ariharasudhan/YoloV5/utils/autoanchor.py b/spaces/Ariharasudhan/YoloV5/utils/autoanchor.py deleted file mode 100644 index cfc4c276e3aa6b7224568315508337f0f01c81fa..0000000000000000000000000000000000000000 --- a/spaces/Ariharasudhan/YoloV5/utils/autoanchor.py +++ /dev/null @@ -1,169 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -AutoAnchor utils -""" - -import random - -import numpy as np -import torch -import yaml -from tqdm import tqdm - -from utils import TryExcept -from utils.general import LOGGER, colorstr - -PREFIX = colorstr('AutoAnchor: ') - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary - a = m.anchors.prod(-1).mean(-1).view(-1) # mean anchor area per output layer - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da and (da.sign() != ds.sign()): # same order - LOGGER.info(f'{PREFIX}Reversing anchor order') - m.anchors[:] = m.anchors.flip(0) - - -@TryExcept(f'{PREFIX}ERROR') -def check_anchors(dataset, model, thr=4.0, imgsz=640): - # Check anchor fit to data, recompute if necessary - m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect() - shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True) - scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale - wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh - - def metric(k): # compute metric - r = wh[:, None] / k[None] - x = torch.min(r, 1 / r).min(2)[0] # ratio metric - best = x.max(1)[0] # best_x - aat = (x > 1 / thr).float().sum(1).mean() # anchors above threshold - bpr = (best > 1 / thr).float().mean() # best possible recall - return bpr, aat - - stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides - anchors = m.anchors.clone() * stride # current anchors - bpr, aat = metric(anchors.cpu().view(-1, 2)) - s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). 
' - if bpr > 0.98: # threshold to recompute - LOGGER.info(f'{s}Current anchors are a good fit to dataset ✅') - else: - LOGGER.info(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...') - na = m.anchors.numel() // 2 # number of anchors - anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False) - new_bpr = metric(anchors)[0] - if new_bpr > bpr: # replace anchors - anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors) - m.anchors[:] = anchors.clone().view_as(m.anchors) - check_anchor_order(m) # must be in pixel-space (not grid-space) - m.anchors /= stride - s = f'{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)' - else: - s = f'{PREFIX}Done ⚠️ (original anchors better than new anchors, proceeding with original anchors)' - LOGGER.info(s) - - -def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True): - """ Creates kmeans-evolved anchors from training dataset - - Arguments: - dataset: path to data.yaml, or a loaded dataset - n: number of anchors - img_size: image size used for training - thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0 - gen: generations to evolve anchors using genetic algorithm - verbose: print all results - - Return: - k: kmeans evolved anchors - - Usage: - from utils.autoanchor import *; _ = kmean_anchors() - """ - from scipy.cluster.vq import kmeans - - npr = np.random - thr = 1 / thr - - def metric(k, wh): # compute metrics - r = wh[:, None] / k[None] - x = torch.min(r, 1 / r).min(2)[0] # ratio metric - # x = wh_iou(wh, torch.tensor(k)) # iou metric - return x, x.max(1)[0] # x, best_x - - def anchor_fitness(k): # mutation fitness - _, best = metric(torch.tensor(k, dtype=torch.float32), wh) - return (best * (best > thr).float()).mean() # fitness - - def print_results(k, verbose=True): - k = k[np.argsort(k.prod(1))] # sort small to large - x, best = metric(k, wh0) - bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr - s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \ - f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \ - f'past_thr={x[x > thr].mean():.3f}-mean: ' - for x in k: - s += '%i,%i, ' % (round(x[0]), round(x[1])) - if verbose: - LOGGER.info(s[:-2]) - return k - - if isinstance(dataset, str): # *.yaml file - with open(dataset, errors='ignore') as f: - data_dict = yaml.safe_load(f) # model dict - from utils.dataloaders import LoadImagesAndLabels - dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True) - - # Get label wh - shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True) - wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh - - # Filter - i = (wh0 < 3.0).any(1).sum() - if i: - LOGGER.info(f'{PREFIX}WARNING ⚠️ Extremely small objects found: {i} of {len(wh0)} labels are <3 pixels in size') - wh = wh0[(wh0 >= 2.0).any(1)].astype(np.float32) # filter > 2 pixels - # wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1 - - # Kmeans init - try: - LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...') - assert n <= len(wh) # apply overdetermined constraint - s = wh.std(0) # sigmas for whitening - k = kmeans(wh / s, n, iter=30)[0] * s # points - assert n == len(k) # kmeans may return fewer points than requested if wh is 
insufficient or too similar - except Exception: - LOGGER.warning(f'{PREFIX}WARNING ⚠️ switching strategies from kmeans to random init') - k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init - wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0)) - k = print_results(k, verbose=False) - - # Plot - # k, d = [None] * 20, [None] * 20 - # for i in tqdm(range(1, 21)): - # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance - # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True) - # ax = ax.ravel() - # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.') - # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh - # ax[0].hist(wh[wh[:, 0]<100, 0],400) - # ax[1].hist(wh[wh[:, 1]<100, 1],400) - # fig.savefig('wh.png', dpi=200) - - # Evolve - f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1 # fitness, generations, mutation prob, sigma - pbar = tqdm(range(gen), bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar - for _ in pbar: - v = np.ones(sh) - while (v == 1).all(): # mutate until a change occurs (prevent duplicates) - v = ((npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0) - kg = (k.copy() * v).clip(min=2.0) - fg = anchor_fitness(kg) - if fg > f: - f, k = fg, kg.copy() - pbar.desc = f'{PREFIX}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}' - if verbose: - print_results(k, verbose) - - return print_results(k).astype(np.float32) diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/__init__.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/__init__.py deleted file mode 100644 index 2af819d61d589cfec2e0ca46612a7456f42b831a..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-# ------------------------------------------------------------------------ - -from .groundingdino import build_groundingdino diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_egg_info.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_egg_info.py deleted file mode 100644 index 65ede406bfa32204acecb48a3fc73537b2801ddc..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/install_egg_info.py +++ /dev/null @@ -1,63 +0,0 @@ -from distutils import log, dir_util -import os - -from setuptools import Command -from setuptools import namespaces -from setuptools.archive_util import unpack_archive -from .._path import ensure_directory -import pkg_resources - - -class install_egg_info(namespaces.Installer, Command): - """Install an .egg-info directory for the package""" - - description = "Install an .egg-info directory for the package" - - user_options = [ - ('install-dir=', 'd', "directory to install to"), - ] - - def initialize_options(self): - self.install_dir = None - - def finalize_options(self): - self.set_undefined_options('install_lib', - ('install_dir', 'install_dir')) - ei_cmd = self.get_finalized_command("egg_info") - basename = pkg_resources.Distribution( - None, None, ei_cmd.egg_name, ei_cmd.egg_version - ).egg_name() + '.egg-info' - self.source = ei_cmd.egg_info - self.target = os.path.join(self.install_dir, basename) - self.outputs = [] - - def run(self): - self.run_command('egg_info') - if os.path.isdir(self.target) and not os.path.islink(self.target): - dir_util.remove_tree(self.target, dry_run=self.dry_run) - elif os.path.exists(self.target): - self.execute(os.unlink, (self.target,), "Removing " + self.target) - if not self.dry_run: - ensure_directory(self.target) - self.execute( - self.copytree, (), "Copying %s to %s" % (self.source, self.target) - ) - self.install_namespaces() - - def get_outputs(self): - return self.outputs - - def copytree(self): - # Copy the .egg-info tree to site-packages - def skimmer(src, dst): - # filter out source-control directories; note that 'src' is always - # a '/'-separated path, regardless of platform. 'dst' is a - # platform-specific path. - for skip in '.svn/', 'CVS/': - if src.startswith(skip) or '/' + skip in src: - return None - self.outputs.append(dst) - log.debug("Copying %s to %s", src, dst) - return dst - - unpack_archive(self.source, self.target, skimmer) diff --git a/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_m.py b/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_m.py deleted file mode 100644 index 9666a31177b9cc1c94978f9867aaceac8ddebce2..0000000000000000000000000000000000000000 --- a/spaces/Atualli/yoloxTeste/yoloxdetect2/configs/yolox_m.py +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. 
- -import os - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 0.67 - self.width = 0.75 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] diff --git a/spaces/Aveygo/AstroSleuth/app.py b/spaces/Aveygo/AstroSleuth/app.py deleted file mode 100644 index 9cd8402e801138348ec3906645952b9339e2965c..0000000000000000000000000000000000000000 --- a/spaces/Aveygo/AstroSleuth/app.py +++ /dev/null @@ -1,163 +0,0 @@ -import streamlit as st -from streamlit.runtime.scriptrunner import add_script_run_ctx -from streamlit.web.server.websocket_headers import _get_websocket_headers - -from PIL import Image -import time, threading, io, warnings, argparse -from os import listdir - -from file_queue import FileQueue -from main import AstroSleuth - -parser = argparse.ArgumentParser(description='AstroSleuth') -parser.add_argument('--cpu', action='store_true', help='Force CPU') -parser.add_argument('--ignore_hf', action='store_true', help='Ignore hugging face enviornment') - -args = parser.parse_args() -FORCE_CPU = args.cpu -IGNORE_HF = args.ignore_hf - -# Check if we are running in huggingface environment -try: IS_HF = listdir('/home/')[0] == 'user' -except: IS_HF = False - -# Set image warning and max sizes -IS_HF = IS_HF if not IGNORE_HF else False -WARNING_SIZE = 1024 if IS_HF else 4096 -MAX_SIZE = 2048 if IS_HF else None - -if IS_HF: warnings.warn(f"Running in huggingface environment! Images will be resized to cap of {MAX_SIZE}x{MAX_SIZE}") - -class App: - def __init__(self): - self.queue = None - self.running = True - - def on_download(self): - self.download_info = st.info(f"Downloading the model, this may take a minute...", icon ="☁️") - - def off_download(self): - self.download_info.empty() - - def upscale(self, image): - # Convert to RGB if not already - image_rgb = Image.new("RGB", image.size, (255, 255, 255)) - image_rgb.paste(image) - del image - - # Start the model (downloading is done here) - model = AstroSleuth(force_cpu=FORCE_CPU, on_download=self.on_download, off_download=self.off_download) - - # Show that upscale is starting - self.info = st.info("Upscaling image...", icon="🔥") - - # Set the bar to 0 - bar = st.progress(0) - - # Run the model, yield progress - result = None - for i in model.enhance_with_progress(image_rgb): - if type(i) == float: - bar.progress(i) - else: - result = i - break - - # Early exit if we are no longer running (user closed the page) - if not self.running: - break - - # Clear the bar - bar.empty() - return result - - def heart(self): - # Beacause multiple users may be using the app at once, we need to check if - # the websocket headers are still valid and to communicate with other threads - # that we are still "in line" - - while self.running and self.queue.should_run(): - if _get_websocket_headers() is None: - self.close() - return - - self.queue.heartbeat() - time.sleep(1) - - def render(self): - st.title('AstroSleuth') - st.subheader("Upscale deep space targets with AI") - - # Show the file uploader and submit button - with st.form("my-form", clear_on_submit=True): - file = st.file_uploader("FILE UPLOADER", type=["png", "jpg", "jpeg"]) - submitted = st.form_submit_button("Upscale!") - - if submitted and file is not None: - image = Image.open(file) - - # Resize the image if it is too large - if MAX_SIZE is not None and (image.width > MAX_SIZE or image.height > MAX_SIZE): - st.warning("Your image was resized to save on resources! 
To avoid this, run AstroSleuth with colab or locally: https://github.com/Aveygo/AstroSleuth#running", icon="⚠️") - if image.width > image.height: - image = image.resize((MAX_SIZE, MAX_SIZE * image.height // image.width)) - else: - image = image.resize((MAX_SIZE * image.width // image.height, MAX_SIZE)) - - elif image.width > WARNING_SIZE or image.height > WARNING_SIZE: - st.info("Woah, that image is quite large! You may have to wait a while and/or get unexpected errors!", icon="🕒") - - # Start the queue - self.queue = FileQueue() - queue_box = None - - # Wait for the queue to be empty - while not self.queue.should_run(): - if queue_box is None: - queue_box = st.warning("Experincing high demand, you have been placed in a queue! Please wait...", icon ="🚦") - time.sleep(1) - self.queue.heartbeat() - - # Start the heart thread while we are upscaling - t = threading.Thread(target=self.heart) - add_script_run_ctx(t) - t.start() - - # Empty the queue box - if queue_box is not None: - queue_box.empty() - - # Start the upscale - image = self.upscale(image) - - # Check if the upscale failed for whatever reason - if image is None: - st.error("Internal error: Upscaling failed, please try again later?", icon="❌") - self.close() - return - - # Empty the info box - self.info.empty() - - st.success('Done! Receiving result... (Please use the download button for the highest resolution)', icon="🎉") - - # Convert to bytes - b = io.BytesIO() - file_type = file.name.split(".")[-1].upper() - file_type = "JPEG" if not file_type in ["JPEG", "PNG"] else file_type - image.save(b, format=file_type) - st.download_button("Download Full Resolution", b.getvalue(), file.name, "image/" + file_type) - - # Show preview - st.image(image, caption='Upscaled preview', use_column_width=True) - self.close() - - def close(self): - # Exit from queue and stop running - self.running = False - if self.queue is not None: - self.queue.quit() - self.queue = None - -app = App() -app.render() \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cuentos Milagrosos De Mariquita Y Amperio Episodios De Gato Negro.md b/spaces/Benson/text-generation/Examples/Cuentos Milagrosos De Mariquita Y Amperio Episodios De Gato Negro.md deleted file mode 100644 index 227e9b58565f4e28c91edecfc8ee1481c2125239..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cuentos Milagrosos De Mariquita Y Amperio Episodios De Gato Negro.md +++ /dev/null @@ -1,103 +0,0 @@ -
-

    How to Download Miraculous: Tales of Ladybug & Cat Noir Episodes
    

-

    If you are a fan of superhero stories, magical-girl shows, or romantic comedy, you may want to check out Miraculous: Tales of Ladybug & Cat Noir, a French animated series that has gained popularity worldwide. The show follows the adventures of two Parisian teenagers, Marinette and Adrien, who transform into the superheroes Ladybug and Cat Noir using magical jewels called Miraculouses. Together they fight the evil Hawk Moth, who creates supervillains by exploiting people's negative emotions.
    

-

    


    



-

    In this article, we will give you an overview of the show and its characters, tell you how to watch it online, and explain how to download its episodes legally and safely. Whether you want to watch the whole series or catch up on the latest season, we have you covered.
    

-

    How to Watch Miraculous: Tales of Ladybug & Cat Noir Online
    

-

    Miraculous: Tales of Ladybug & Cat Noir has five seasons so far, with a sixth in production. The show airs on various channels and platforms around the world, depending on your region and language preference. Here are some of the most common options for watching the show online:
    

-
    -
      • Disney Plus: Disney Plus is the exclusive streaming service for Miraculous: Tales of Ladybug & Cat Noir in many countries, including the United States, Canada, Australia, New Zealand, and most of Europe. You can watch all five seasons of the show in English or French with subtitles on Disney Plus. You need a subscription to access Disney Plus, which costs $7.99 per month or $79.99 per year in the United States.
    
  • - -
      • TVNZ: TVNZ is a free streaming service that offers Miraculous: Tales of Ladybug & Cat Noir in New Zealand. You can watch all five seasons of the show in English with subtitles on TVNZ. You do not need a subscription or an account to access TVNZ.
    
  • -
      • Disney Channel: Disney Channel is a cable television network that airs Miraculous: Tales of Ladybug & Cat Noir in many countries around the world. You can watch new episodes of the show as they premiere on Disney Channel, or catch up on earlier episodes on demand through your cable provider or online via DisneyNOW. You need a cable subscription or a TV provider login to access Disney Channel or DisneyNOW.
    
  • -
-

    How to Download Miraculous: Tales of Ladybug & Cat Noir Episodes
    

-

    If you want to download episodes of Miraculous: Tales of Ladybug & Cat Noir to watch offline or keep permanently on your device, you also have some options. However, you should be careful about where you download them from and how you use them. Downloading episodes from unauthorized sources can expose you to malware or viruses, as well as the legal consequences of piracy. Downloading episodes from authorized sources may require payment or come with limits on the number of downloads or how long they stay available. Here are some of the most common options for downloading episodes of Miraculous: Tales of Ladybug & Cat Noir:
    

-
    -
      • Disney Plus: Disney Plus lets you download episodes of Miraculous: Tales of Ladybug & Cat Noir for offline viewing on up to 10 devices. You can download as many episodes as you want, as long as you have enough storage space on your device and an active subscription. You can access your downloaded episodes for as long as you remain a subscriber and the show is available on Disney Plus.
    
  • - -
      • Amazon Prime Video: Amazon Prime Video lets you buy or rent episodes of Miraculous: Tales of Ladybug & Cat Noir for offline viewing on up to four devices. You can buy individual episodes for $2.99 each or a season pass for $19.99 or more, depending on the season. You can also rent episodes for $1.99 each, but they expire after 48 hours. You can access your purchased or rented episodes for as long as they remain available on Amazon Prime Video.
    
  • -
      • iTunes: iTunes lets you buy episodes of Miraculous: Tales of Ladybug & Cat Noir for offline viewing on up to five devices. You can buy individual episodes for $2.99 each or a season pass for $19.99 or more, depending on the season. You can access your purchased episodes for as long as they remain available on iTunes.
    
  • -
-

    Conclusion
    

-

    Miraculous: Tales of Ladybug & Cat Noir is a fun and exciting show that appeals to a wide range of audiences. It combines action, comedy, romance, and magic in a colorful, charming setting. If you want to watch or download episodes of the show, you have several options to choose from, depending on your region, language, and preference. However, you should always be careful about where you get your episodes and how you use them, to avoid any legal or technical problems.
    

-

-

    We hope this article has helped you find the best way to enjoy Miraculous: Tales of Ladybug & Cat Noir. If you have any questions or comments, feel free to leave them below. And remember, don't let anyone akumatize you!
    

-

    Frequently Asked Questions
    

-

    What is the order of the seasons and episodes of Miraculous: Tales of Ladybug & Cat Noir?
    

- -

    Who are the voice actors of Miraculous: Tales of Ladybug & Cat Noir?
    

-

    Miraculous: Tales of Ladybug & Cat Noir has been dubbed into many languages, but the original language is French. The main English dub voice actors are:
    

-
    -
      • Cristina Vee as Marinette Dupain-Cheng / Ladybug
    
  • -
      • Bryce Papenbrook as Adrien Agreste / Cat Noir
    
  • -
      • Keith Silverstein as Gabriel Agreste / Hawk Moth
    
  • -
      • Mela Lee as Tikki
    
  • -
      • Max Mittelman as Plagg
    
  • -
      • Carrie Keranen as Alya Césaire / Rena Rouge
    
  • -
      • Ben Diskin as Nino Lahiffe / Carapace
    
  • -
      • Selah Victor as Chloé Bourgeois / Queen Bee
    
  • -
      • Sabrina Weisz as Nathalie Sancoeur / Mayura
    
  • -
      • Ezra Weisz as Luka Couffaine / Viperion

    The main French voice actors are:
    

    -
      -
        • Anouck Hautbois as Marinette Dupain-Cheng / Ladybug
    
    • -
        • Benjamin Bollen as Adrien Agreste / Cat Noir
    
    • -
        • Antoine Tomé as Gabriel Agreste / Hawk Moth
    
    • -
        • Marie Nonnenmacher as Tikki
    
    • -
        • Thierry Kazazian as Plagg
    
    • -
        • Fanny Bloc as Alya Césaire / Rena Rouge
    
    • -
        • Alexandre N'Guyen as Nino Lahiffe / Carapace
    
    • -
        • Marie Chevalot as Chloé Bourgeois / Queen Bee
    
    • -
        • Clara Soares as Nathalie Sancoeur / Mayura
    
    • -
        • Maxime Baudouin as Luka Couffaine / Viperion
    
    • -
    -

        What are the Miraculouses and Kwamis?
    

    - -

        The main Miraculouses and Kwamis in the series are:
    

    | Miraculous | Kwami | Power | Holder | Superhero |
    | --- | --- | --- | --- | --- |
    | Ladybug Earrings | Tikki | Lucky Charm (creates a useful object) | Marinette Dupain-Cheng | Ladybug |
    | Cat Ring | Plagg | Cataclysm (destroys anything) | Adrien Agreste | Cat Noir |
    | Moth Brooch | Nooroo | Akumatization (creates supervillains) | Gabriel Agreste / Hawk Moth | N/A |
    | Fox Necklace | Trixx | Mirage (creates illusions) | Alya Césaire | Rena Rouge |

    Other holder and hero pairings mentioned in the series include Nino Lahiffe / Carapace, Chloé Bourgeois / Queen Bee, Nathalie Sancoeur / Mayura, Luka Couffaine / Viperion, Kagami Tsurugi / Ryuko, Max Kanté / Pegasus, Kim Chiến Lê / King Monkey, Alix Kubdel / Bunnyx, Rose Lavillant / Pigella, Juleka Couffaine / Tigeress, Mylène Haprèle / Polymouse, Ivan Bruel / Minotaurox, Aurora Borealis / Polarix, Alix Kubdel (future) / Bunnyx (future), Marinette Dupain-Cheng (future) / Ladybug (future), Adrien Agreste (future) / Cat Noir (future), Gabriel Agreste (future) / Hawk Moth (future), and Emilie Agreste (past) / Mayura (past).
    

        What are the themes and messages of Miraculous: Tales of Ladybug & Cat Noir?
    

    -

        Miraculous: Tales of Ladybug & Cat Noir is a show that explores several themes and messages, such as:
    

    -
      -
        • Friendship and teamwork: The show emphasizes the importance of having friends and working together to overcome challenges and defeat enemies. The heroes often rely on each other's support and abilities, as well as on help from other superheroes or allies.
    
    • - -
        • Courage and responsibility: The show portrays the heroes as brave, responsible individuals who face danger and make sacrifices for the greater good. It also shows the consequences of abusing power or letting negative emotions take over.
    
    • -
        • Diversity and inclusion: The show celebrates diversity and inclusion by featuring characters of different backgrounds, cultures, personalities, and abilities. It also promotes tolerance and respect for different opinions and perspectives.
    
    • -
        • Humor and creativity: The show injects humor and creativity into its stories and characters, making them more enjoyable and memorable. It also encourages viewers to use their imagination and have fun.
    
    • -
    -

        How popular is Miraculous: Tales of Ladybug & Cat Noir?
    

    -

        Miraculous: Tales of Ladybug & Cat Noir is a very popular show that has received critical acclaim and strong fan support. It has won several awards, such as the Teen Choice Award for Choice Animated TV Show in 2018 and the Kidscreen Award for Best in Class in 2019. The show has also given rise to a large fandom that creates fan art, fan fiction, cosplay, merchandise, and more, and it has inspired spin-offs such as comics, books, video games, webisodes, and a live-action film.
    

    
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/adaptive.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/adaptive.py deleted file mode 100644 index a7c1fda4d99802c5837d7ec4f44ebc30c065ecb4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/retries/adaptive.py +++ /dev/null @@ -1,133 +0,0 @@ -import logging -import math -import threading - -from botocore.retries import bucket, standard, throttling - -logger = logging.getLogger(__name__) - - -def register_retry_handler(client): - clock = bucket.Clock() - rate_adjustor = throttling.CubicCalculator( - starting_max_rate=0, start_time=clock.current_time() - ) - token_bucket = bucket.TokenBucket(max_rate=1, clock=clock) - rate_clocker = RateClocker(clock) - throttling_detector = standard.ThrottlingErrorDetector( - retry_event_adapter=standard.RetryEventAdapter(), - ) - limiter = ClientRateLimiter( - rate_adjustor=rate_adjustor, - rate_clocker=rate_clocker, - token_bucket=token_bucket, - throttling_detector=throttling_detector, - clock=clock, - ) - client.meta.events.register( - 'before-send', - limiter.on_sending_request, - ) - client.meta.events.register( - 'needs-retry', - limiter.on_receiving_response, - ) - return limiter - - -class ClientRateLimiter: - - _MAX_RATE_ADJUST_SCALE = 2.0 - - def __init__( - self, - rate_adjustor, - rate_clocker, - token_bucket, - throttling_detector, - clock, - ): - self._rate_adjustor = rate_adjustor - self._rate_clocker = rate_clocker - self._token_bucket = token_bucket - self._throttling_detector = throttling_detector - self._clock = clock - self._enabled = False - self._lock = threading.Lock() - - def on_sending_request(self, request, **kwargs): - if self._enabled: - self._token_bucket.acquire() - - # Hooked up to needs-retry. - def on_receiving_response(self, **kwargs): - measured_rate = self._rate_clocker.record() - timestamp = self._clock.current_time() - with self._lock: - if not self._throttling_detector.is_throttling_error(**kwargs): - new_rate = self._rate_adjustor.success_received(timestamp) - else: - if not self._enabled: - rate_to_use = measured_rate - else: - rate_to_use = min( - measured_rate, self._token_bucket.max_rate - ) - new_rate = self._rate_adjustor.error_received( - rate_to_use, timestamp - ) - logger.debug( - "Throttling response received, new send rate: %s " - "measured rate: %s, token bucket capacity " - "available: %s", - new_rate, - measured_rate, - self._token_bucket.available_capacity, - ) - self._enabled = True - self._token_bucket.max_rate = min( - new_rate, self._MAX_RATE_ADJUST_SCALE * measured_rate - ) - - -class RateClocker: - """Tracks the rate at which a client is sending a request.""" - - _DEFAULT_SMOOTHING = 0.8 - # Update the rate every _TIME_BUCKET_RANGE seconds. 
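    -    # Each record() adds to a counter for the current bucket; once the clock
    -    # crosses into a later bucket, the counted rate for that span is folded
    -    # into an exponentially weighted moving average (weight `smoothing`, 0.8
    -    # by default, on the newest sample) so short bursts do not whipsaw it.
    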
- _TIME_BUCKET_RANGE = 0.5 - - def __init__( - self, - clock, - smoothing=_DEFAULT_SMOOTHING, - time_bucket_range=_TIME_BUCKET_RANGE, - ): - self._clock = clock - self._measured_rate = 0 - self._smoothing = smoothing - self._last_bucket = math.floor(self._clock.current_time()) - self._time_bucket_scale = 1 / self._TIME_BUCKET_RANGE - self._count = 0 - self._lock = threading.Lock() - - def record(self, amount=1): - with self._lock: - t = self._clock.current_time() - bucket = ( - math.floor(t * self._time_bucket_scale) - / self._time_bucket_scale - ) - self._count += amount - if bucket > self._last_bucket: - current_rate = self._count / float(bucket - self._last_bucket) - self._measured_rate = (current_rate * self._smoothing) + ( - self._measured_rate * (1 - self._smoothing) - ) - self._count = 0 - self._last_bucket = bucket - return self._measured_rate - - @property - def measured_rate(self): - return self._measured_rate diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/upload.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/upload.py deleted file mode 100644 index 0c99bd7b2967597168b2fbae18923b2b0122fe40..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/upload.py +++ /dev/null @@ -1,802 +0,0 @@ -# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import math -from io import BytesIO - -from s3transfer.compat import readable, seekable -from s3transfer.futures import IN_MEMORY_UPLOAD_TAG -from s3transfer.tasks import ( - CompleteMultipartUploadTask, - CreateMultipartUploadTask, - SubmissionTask, - Task, -) -from s3transfer.utils import ( - ChunksizeAdjuster, - DeferredOpenFile, - get_callbacks, - get_filtered_dict, -) - - -class AggregatedProgressCallback: - def __init__(self, callbacks, threshold=1024 * 256): - """Aggregates progress updates for every provided progress callback - - :type callbacks: A list of functions that accepts bytes_transferred - as a single argument - :param callbacks: The callbacks to invoke when threshold is reached - - :type threshold: int - :param threshold: The progress threshold in which to take the - aggregated progress and invoke the progress callback with that - aggregated progress total - """ - self._callbacks = callbacks - self._threshold = threshold - self._bytes_seen = 0 - - def __call__(self, bytes_transferred): - self._bytes_seen += bytes_transferred - if self._bytes_seen >= self._threshold: - self._trigger_callbacks() - - def flush(self): - """Flushes out any progress that has not been sent to its callbacks""" - if self._bytes_seen > 0: - self._trigger_callbacks() - - def _trigger_callbacks(self): - for callback in self._callbacks: - callback(bytes_transferred=self._bytes_seen) - self._bytes_seen = 0 - - -class InterruptReader: - """Wrapper that can interrupt reading using an error - - It uses a transfer coordinator to propagate an error if it notices - that a read is being made while the file is being read from. 
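    -
    -    The effect is that a cancelled or failed transfer surfaces as an
    -    immediate read error rather than a truncated body, which would otherwise
    -    look like a content-length or md5 mismatch to the caller.
    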
- - :type fileobj: file-like obj - :param fileobj: The file-like object to read from - - :type transfer_coordinator: s3transfer.futures.TransferCoordinator - :param transfer_coordinator: The transfer coordinator to use if the - reader needs to be interrupted. - """ - - def __init__(self, fileobj, transfer_coordinator): - self._fileobj = fileobj - self._transfer_coordinator = transfer_coordinator - - def read(self, amount=None): - # If there is an exception, then raise the exception. - # We raise an error instead of returning no bytes because for - # requests where the content length and md5 was sent, it will - # cause md5 mismatches and retries as there was no indication that - # the stream being read from encountered any issues. - if self._transfer_coordinator.exception: - raise self._transfer_coordinator.exception - return self._fileobj.read(amount) - - def seek(self, where, whence=0): - self._fileobj.seek(where, whence) - - def tell(self): - return self._fileobj.tell() - - def close(self): - self._fileobj.close() - - def __enter__(self): - return self - - def __exit__(self, *args, **kwargs): - self.close() - - -class UploadInputManager: - """Base manager class for handling various types of files for uploads - - This class is typically used for the UploadSubmissionTask class to help - determine the following: - - * How to determine the size of the file - * How to determine if a multipart upload is required - * How to retrieve the body for a PutObject - * How to retrieve the bodies for a set of UploadParts - - The answers/implementations differ for the various types of file inputs - that may be accepted. All implementations must subclass and override - public methods from this class. - """ - - def __init__(self, osutil, transfer_coordinator, bandwidth_limiter=None): - self._osutil = osutil - self._transfer_coordinator = transfer_coordinator - self._bandwidth_limiter = bandwidth_limiter - - @classmethod - def is_compatible(cls, upload_source): - """Determines if the source for the upload is compatible with manager - - :param upload_source: The source for which the upload will pull data - from. - - :returns: True if the manager can handle the type of source specified - otherwise returns False. - """ - raise NotImplementedError('must implement _is_compatible()') - - def stores_body_in_memory(self, operation_name): - """Whether the body it provides are stored in-memory - - :type operation_name: str - :param operation_name: The name of the client operation that the body - is being used for. Valid operation_names are ``put_object`` and - ``upload_part``. - - :rtype: boolean - :returns: True if the body returned by the manager will be stored in - memory. False if the manager will not directly store the body in - memory. 
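    -
    -        This matters because the submission task tags in-memory uploads with
    -        IN_MEMORY_UPLOAD_TAG, letting the executor cap how many buffered part
    -        bodies are held at once.
    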
- """ - raise NotImplementedError('must implement store_body_in_memory()') - - def provide_transfer_size(self, transfer_future): - """Provides the transfer size of an upload - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The future associated with upload request - """ - raise NotImplementedError('must implement provide_transfer_size()') - - def requires_multipart_upload(self, transfer_future, config): - """Determines where a multipart upload is required - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The future associated with upload request - - :type config: s3transfer.manager.TransferConfig - :param config: The config associated to the transfer manager - - :rtype: boolean - :returns: True, if the upload should be multipart based on - configuration and size. False, otherwise. - """ - raise NotImplementedError('must implement requires_multipart_upload()') - - def get_put_object_body(self, transfer_future): - """Returns the body to use for PutObject - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The future associated with upload request - - :type config: s3transfer.manager.TransferConfig - :param config: The config associated to the transfer manager - - :rtype: s3transfer.utils.ReadFileChunk - :returns: A ReadFileChunk including all progress callbacks - associated with the transfer future. - """ - raise NotImplementedError('must implement get_put_object_body()') - - def yield_upload_part_bodies(self, transfer_future, chunksize): - """Yields the part number and body to use for each UploadPart - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The future associated with upload request - - :type chunksize: int - :param chunksize: The chunksize to use for this upload. - - :rtype: int, s3transfer.utils.ReadFileChunk - :returns: Yields the part number and the ReadFileChunk including all - progress callbacks associated with the transfer future for that - specific yielded part. - """ - raise NotImplementedError('must implement yield_upload_part_bodies()') - - def _wrap_fileobj(self, fileobj): - fileobj = InterruptReader(fileobj, self._transfer_coordinator) - if self._bandwidth_limiter: - fileobj = self._bandwidth_limiter.get_bandwith_limited_stream( - fileobj, self._transfer_coordinator, enabled=False - ) - return fileobj - - def _get_progress_callbacks(self, transfer_future): - callbacks = get_callbacks(transfer_future, 'progress') - # We only want to be wrapping the callbacks if there are callbacks to - # invoke because we do not want to be doing any unnecessary work if - # there are no callbacks to invoke. 
- if callbacks: - return [AggregatedProgressCallback(callbacks)] - return [] - - def _get_close_callbacks(self, aggregated_progress_callbacks): - return [callback.flush for callback in aggregated_progress_callbacks] - - -class UploadFilenameInputManager(UploadInputManager): - """Upload utility for filenames""" - - @classmethod - def is_compatible(cls, upload_source): - return isinstance(upload_source, str) - - def stores_body_in_memory(self, operation_name): - return False - - def provide_transfer_size(self, transfer_future): - transfer_future.meta.provide_transfer_size( - self._osutil.get_file_size(transfer_future.meta.call_args.fileobj) - ) - - def requires_multipart_upload(self, transfer_future, config): - return transfer_future.meta.size >= config.multipart_threshold - - def get_put_object_body(self, transfer_future): - # Get a file-like object for the given input - fileobj, full_size = self._get_put_object_fileobj_with_full_size( - transfer_future - ) - - # Wrap fileobj with interrupt reader that will quickly cancel - # uploads if needed instead of having to wait for the socket - # to completely read all of the data. - fileobj = self._wrap_fileobj(fileobj) - - callbacks = self._get_progress_callbacks(transfer_future) - close_callbacks = self._get_close_callbacks(callbacks) - size = transfer_future.meta.size - # Return the file-like object wrapped into a ReadFileChunk to get - # progress. - return self._osutil.open_file_chunk_reader_from_fileobj( - fileobj=fileobj, - chunk_size=size, - full_file_size=full_size, - callbacks=callbacks, - close_callbacks=close_callbacks, - ) - - def yield_upload_part_bodies(self, transfer_future, chunksize): - full_file_size = transfer_future.meta.size - num_parts = self._get_num_parts(transfer_future, chunksize) - for part_number in range(1, num_parts + 1): - callbacks = self._get_progress_callbacks(transfer_future) - close_callbacks = self._get_close_callbacks(callbacks) - start_byte = chunksize * (part_number - 1) - # Get a file-like object for that part and the size of the full - # file size for the associated file-like object for that part. - fileobj, full_size = self._get_upload_part_fileobj_with_full_size( - transfer_future.meta.call_args.fileobj, - start_byte=start_byte, - part_size=chunksize, - full_file_size=full_file_size, - ) - - # Wrap fileobj with interrupt reader that will quickly cancel - # uploads if needed instead of having to wait for the socket - # to completely read all of the data. - fileobj = self._wrap_fileobj(fileobj) - - # Wrap the file-like object into a ReadFileChunk to get progress. 
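    -            # Each part reads through its own DeferredOpenFile positioned at
    -            # start_byte, so parts can be read and uploaded concurrently
    -            # without sharing a file offset.
    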
- read_file_chunk = self._osutil.open_file_chunk_reader_from_fileobj( - fileobj=fileobj, - chunk_size=chunksize, - full_file_size=full_size, - callbacks=callbacks, - close_callbacks=close_callbacks, - ) - yield part_number, read_file_chunk - - def _get_deferred_open_file(self, fileobj, start_byte): - fileobj = DeferredOpenFile( - fileobj, start_byte, open_function=self._osutil.open - ) - return fileobj - - def _get_put_object_fileobj_with_full_size(self, transfer_future): - fileobj = transfer_future.meta.call_args.fileobj - size = transfer_future.meta.size - return self._get_deferred_open_file(fileobj, 0), size - - def _get_upload_part_fileobj_with_full_size(self, fileobj, **kwargs): - start_byte = kwargs['start_byte'] - full_size = kwargs['full_file_size'] - return self._get_deferred_open_file(fileobj, start_byte), full_size - - def _get_num_parts(self, transfer_future, part_size): - return int(math.ceil(transfer_future.meta.size / float(part_size))) - - -class UploadSeekableInputManager(UploadFilenameInputManager): - """Upload utility for an open file object""" - - @classmethod - def is_compatible(cls, upload_source): - return readable(upload_source) and seekable(upload_source) - - def stores_body_in_memory(self, operation_name): - if operation_name == 'put_object': - return False - else: - return True - - def provide_transfer_size(self, transfer_future): - fileobj = transfer_future.meta.call_args.fileobj - # To determine size, first determine the starting position - # Seek to the end and then find the difference in the length - # between the end and start positions. - start_position = fileobj.tell() - fileobj.seek(0, 2) - end_position = fileobj.tell() - fileobj.seek(start_position) - transfer_future.meta.provide_transfer_size( - end_position - start_position - ) - - def _get_upload_part_fileobj_with_full_size(self, fileobj, **kwargs): - # Note: It is unfortunate that in order to do a multithreaded - # multipart upload we cannot simply copy the filelike object - # since there is not really a mechanism in python (i.e. os.dup - # points to the same OS filehandle which causes concurrency - # issues). So instead we need to read from the fileobj and - # chunk the data out to separate file-like objects in memory. - data = fileobj.read(kwargs['part_size']) - # We return the length of the data instead of the full_file_size - # because we partitioned the data into separate BytesIO objects - # meaning the BytesIO object has no knowledge of its start position - # relative the input source nor access to the rest of the input - # source. So we must treat it as its own standalone file. - return BytesIO(data), len(data) - - def _get_put_object_fileobj_with_full_size(self, transfer_future): - fileobj = transfer_future.meta.call_args.fileobj - # The current position needs to be taken into account when retrieving - # the full size of the file. 
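    -        # For example, if the caller already read 10 bytes before handing the
    -        # file over, meta.size covers only the remainder, so the true length
    -        # is tell() + meta.size.
    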
- size = fileobj.tell() + transfer_future.meta.size - return fileobj, size - - -class UploadNonSeekableInputManager(UploadInputManager): - """Upload utility for a file-like object that cannot seek.""" - - def __init__(self, osutil, transfer_coordinator, bandwidth_limiter=None): - super().__init__(osutil, transfer_coordinator, bandwidth_limiter) - self._initial_data = b'' - - @classmethod - def is_compatible(cls, upload_source): - return readable(upload_source) - - def stores_body_in_memory(self, operation_name): - return True - - def provide_transfer_size(self, transfer_future): - # No-op because there is no way to do this short of reading the entire - # body into memory. - return - - def requires_multipart_upload(self, transfer_future, config): - # If the user has set the size, we can use that. - if transfer_future.meta.size is not None: - return transfer_future.meta.size >= config.multipart_threshold - - # This is tricky to determine in this case because we can't know how - # large the input is. So to figure it out, we read data into memory - # up until the threshold and compare how much data was actually read - # against the threshold. - fileobj = transfer_future.meta.call_args.fileobj - threshold = config.multipart_threshold - self._initial_data = self._read(fileobj, threshold, False) - if len(self._initial_data) < threshold: - return False - else: - return True - - def get_put_object_body(self, transfer_future): - callbacks = self._get_progress_callbacks(transfer_future) - close_callbacks = self._get_close_callbacks(callbacks) - fileobj = transfer_future.meta.call_args.fileobj - - body = self._wrap_data( - self._initial_data + fileobj.read(), callbacks, close_callbacks - ) - - # Zero out the stored data so we don't have additional copies - # hanging around in memory. - self._initial_data = None - return body - - def yield_upload_part_bodies(self, transfer_future, chunksize): - file_object = transfer_future.meta.call_args.fileobj - part_number = 0 - - # Continue reading parts from the file-like object until it is empty. - while True: - callbacks = self._get_progress_callbacks(transfer_future) - close_callbacks = self._get_close_callbacks(callbacks) - part_number += 1 - part_content = self._read(file_object, chunksize) - if not part_content: - break - part_object = self._wrap_data( - part_content, callbacks, close_callbacks - ) - - # Zero out part_content to avoid hanging on to additional data. - part_content = None - yield part_number, part_object - - def _read(self, fileobj, amount, truncate=True): - """ - Reads a specific amount of data from a stream and returns it. If there - is any data in initial_data, that will be popped out first. - - :type fileobj: A file-like object that implements read - :param fileobj: The stream to read from. - - :type amount: int - :param amount: The number of bytes to read from the stream. - - :type truncate: bool - :param truncate: Whether or not to truncate initial_data after - reading from it. - - :return: Generator which generates part bodies from the initial data. - """ - # If the the initial data is empty, we simply read from the fileobj - if len(self._initial_data) == 0: - return fileobj.read(amount) - - # If the requested number of bytes is less than the amount of - # initial data, pull entirely from initial data. - if amount <= len(self._initial_data): - data = self._initial_data[:amount] - # Truncate initial data so we don't hang onto the data longer - # than we need. 
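    -            # truncate=False is the probe path: requires_multipart_upload()
    -            # peeks at the first bytes without consuming them, so a later
    -            # get_put_object_body() can still replay the same data.
    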
- if truncate: - self._initial_data = self._initial_data[amount:] - return data - - # At this point there is some initial data left, but not enough to - # satisfy the number of bytes requested. Pull out the remaining - # initial data and read the rest from the fileobj. - amount_to_read = amount - len(self._initial_data) - data = self._initial_data + fileobj.read(amount_to_read) - - # Zero out initial data so we don't hang onto the data any more. - if truncate: - self._initial_data = b'' - return data - - def _wrap_data(self, data, callbacks, close_callbacks): - """ - Wraps data with the interrupt reader and the file chunk reader. - - :type data: bytes - :param data: The data to wrap. - - :type callbacks: list - :param callbacks: The callbacks associated with the transfer future. - - :type close_callbacks: list - :param close_callbacks: The callbacks to be called when closing the - wrapper for the data. - - :return: Fully wrapped data. - """ - fileobj = self._wrap_fileobj(BytesIO(data)) - return self._osutil.open_file_chunk_reader_from_fileobj( - fileobj=fileobj, - chunk_size=len(data), - full_file_size=len(data), - callbacks=callbacks, - close_callbacks=close_callbacks, - ) - - -class UploadSubmissionTask(SubmissionTask): - """Task for submitting tasks to execute an upload""" - - UPLOAD_PART_ARGS = [ - 'ChecksumAlgorithm', - 'SSECustomerKey', - 'SSECustomerAlgorithm', - 'SSECustomerKeyMD5', - 'RequestPayer', - 'ExpectedBucketOwner', - ] - - COMPLETE_MULTIPART_ARGS = ['RequestPayer', 'ExpectedBucketOwner'] - - def _get_upload_input_manager_cls(self, transfer_future): - """Retrieves a class for managing input for an upload based on file type - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The transfer future for the request - - :rtype: class of UploadInputManager - :returns: The appropriate class to use for managing a specific type of - input for uploads. - """ - upload_manager_resolver_chain = [ - UploadFilenameInputManager, - UploadSeekableInputManager, - UploadNonSeekableInputManager, - ] - - fileobj = transfer_future.meta.call_args.fileobj - for upload_manager_cls in upload_manager_resolver_chain: - if upload_manager_cls.is_compatible(fileobj): - return upload_manager_cls - raise RuntimeError( - 'Input {} of type: {} is not supported.'.format( - fileobj, type(fileobj) - ) - ) - - def _submit( - self, - client, - config, - osutil, - request_executor, - transfer_future, - bandwidth_limiter=None, - ): - """ - :param client: The client associated with the transfer manager - - :type config: s3transfer.manager.TransferConfig - :param config: The transfer config associated with the transfer - manager - - :type osutil: s3transfer.utils.OSUtil - :param osutil: The os utility associated to the transfer manager - - :type request_executor: s3transfer.futures.BoundedExecutor - :param request_executor: The request executor associated with the - transfer manager - - :type transfer_future: s3transfer.futures.TransferFuture - :param transfer_future: The transfer future associated with the - transfer request that tasks are being submitted for - """ - upload_input_manager = self._get_upload_input_manager_cls( - transfer_future - )(osutil, self._transfer_coordinator, bandwidth_limiter) - - # Determine the size if it was not provided - if transfer_future.meta.size is None: - upload_input_manager.provide_transfer_size(transfer_future) - - # Do a multipart upload if needed, otherwise do a regular put object. 
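    -        # The cutover point is config.multipart_threshold (8 MiB by default
    -        # in boto3's TransferConfig, though callers commonly tune it).
    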
- if not upload_input_manager.requires_multipart_upload( - transfer_future, config - ): - self._submit_upload_request( - client, - config, - osutil, - request_executor, - transfer_future, - upload_input_manager, - ) - else: - self._submit_multipart_request( - client, - config, - osutil, - request_executor, - transfer_future, - upload_input_manager, - ) - - def _submit_upload_request( - self, - client, - config, - osutil, - request_executor, - transfer_future, - upload_input_manager, - ): - call_args = transfer_future.meta.call_args - - # Get any tags that need to be associated to the put object task - put_object_tag = self._get_upload_task_tag( - upload_input_manager, 'put_object' - ) - - # Submit the request of a single upload. - self._transfer_coordinator.submit( - request_executor, - PutObjectTask( - transfer_coordinator=self._transfer_coordinator, - main_kwargs={ - 'client': client, - 'fileobj': upload_input_manager.get_put_object_body( - transfer_future - ), - 'bucket': call_args.bucket, - 'key': call_args.key, - 'extra_args': call_args.extra_args, - }, - is_final=True, - ), - tag=put_object_tag, - ) - - def _submit_multipart_request( - self, - client, - config, - osutil, - request_executor, - transfer_future, - upload_input_manager, - ): - call_args = transfer_future.meta.call_args - - # Submit the request to create a multipart upload. - create_multipart_future = self._transfer_coordinator.submit( - request_executor, - CreateMultipartUploadTask( - transfer_coordinator=self._transfer_coordinator, - main_kwargs={ - 'client': client, - 'bucket': call_args.bucket, - 'key': call_args.key, - 'extra_args': call_args.extra_args, - }, - ), - ) - - # Submit requests to upload the parts of the file. - part_futures = [] - extra_part_args = self._extra_upload_part_args(call_args.extra_args) - - # Get any tags that need to be associated to the submitted task - # for upload the data - upload_part_tag = self._get_upload_task_tag( - upload_input_manager, 'upload_part' - ) - - size = transfer_future.meta.size - adjuster = ChunksizeAdjuster() - chunksize = adjuster.adjust_chunksize(config.multipart_chunksize, size) - part_iterator = upload_input_manager.yield_upload_part_bodies( - transfer_future, chunksize - ) - - for part_number, fileobj in part_iterator: - part_futures.append( - self._transfer_coordinator.submit( - request_executor, - UploadPartTask( - transfer_coordinator=self._transfer_coordinator, - main_kwargs={ - 'client': client, - 'fileobj': fileobj, - 'bucket': call_args.bucket, - 'key': call_args.key, - 'part_number': part_number, - 'extra_args': extra_part_args, - }, - pending_main_kwargs={ - 'upload_id': create_multipart_future - }, - ), - tag=upload_part_tag, - ) - ) - - complete_multipart_extra_args = self._extra_complete_multipart_args( - call_args.extra_args - ) - # Submit the request to complete the multipart upload. - self._transfer_coordinator.submit( - request_executor, - CompleteMultipartUploadTask( - transfer_coordinator=self._transfer_coordinator, - main_kwargs={ - 'client': client, - 'bucket': call_args.bucket, - 'key': call_args.key, - 'extra_args': complete_multipart_extra_args, - }, - pending_main_kwargs={ - 'upload_id': create_multipart_future, - 'parts': part_futures, - }, - is_final=True, - ), - ) - - def _extra_upload_part_args(self, extra_args): - # Only the args in UPLOAD_PART_ARGS actually need to be passed - # onto the upload_part calls. 
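    -        # Object-wide settings such as ContentType or Metadata belong on the
    -        # CreateMultipartUpload call instead, not on each individual part.
    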
- return get_filtered_dict(extra_args, self.UPLOAD_PART_ARGS) - - def _extra_complete_multipart_args(self, extra_args): - return get_filtered_dict(extra_args, self.COMPLETE_MULTIPART_ARGS) - - def _get_upload_task_tag(self, upload_input_manager, operation_name): - tag = None - if upload_input_manager.stores_body_in_memory(operation_name): - tag = IN_MEMORY_UPLOAD_TAG - return tag - - -class PutObjectTask(Task): - """Task to do a nonmultipart upload""" - - def _main(self, client, fileobj, bucket, key, extra_args): - """ - :param client: The client to use when calling PutObject - :param fileobj: The file to upload. - :param bucket: The name of the bucket to upload to - :param key: The name of the key to upload to - :param extra_args: A dictionary of any extra arguments that may be - used in the upload. - """ - with fileobj as body: - client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args) - - -class UploadPartTask(Task): - """Task to upload a part in a multipart upload""" - - def _main( - self, client, fileobj, bucket, key, upload_id, part_number, extra_args - ): - """ - :param client: The client to use when calling PutObject - :param fileobj: The file to upload. - :param bucket: The name of the bucket to upload to - :param key: The name of the key to upload to - :param upload_id: The id of the upload - :param part_number: The number representing the part of the multipart - upload - :param extra_args: A dictionary of any extra arguments that may be - used in the upload. - - :rtype: dict - :returns: A dictionary representing a part:: - - {'Etag': etag_value, 'PartNumber': part_number} - - This value can be appended to a list to be used to complete - the multipart upload. - """ - with fileobj as body: - response = client.upload_part( - Bucket=bucket, - Key=key, - UploadId=upload_id, - PartNumber=part_number, - Body=body, - **extra_args, - ) - etag = response['ETag'] - part_metadata = {'ETag': etag, 'PartNumber': part_number} - if 'ChecksumAlgorithm' in extra_args: - algorithm_name = extra_args['ChecksumAlgorithm'].upper() - checksum_member = f'Checksum{algorithm_name}' - if checksum_member in response: - part_metadata[checksum_member] = response[checksum_member] - return part_metadata diff --git a/spaces/CForGETaass/vits-uma-genshin-honkai/transforms.py b/spaces/CForGETaass/vits-uma-genshin-honkai/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/CForGETaass/vits-uma-genshin-honkai/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - 
min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, 
bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/tasks/base_task.py b/spaces/CVH-vn1210/make_hair/minigpt4/tasks/base_task.py deleted file mode 100644 index 9f82a2a52779a782e5a40dfb6a6d9a57e991e345..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/minigpt4/tasks/base_task.py +++ /dev/null @@ -1,286 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import logging -import os - -import torch -import torch.distributed as dist -from minigpt4.common.dist_utils import get_rank, get_world_size, is_main_process, is_dist_avail_and_initialized -from minigpt4.common.logger import MetricLogger, SmoothedValue -from minigpt4.common.registry import registry -from minigpt4.datasets.data_utils import prepare_sample - - -class BaseTask: - def __init__(self, **kwargs): - super().__init__() - - self.inst_id_key = "instance_id" - - @classmethod - def setup_task(cls, **kwargs): - return cls() - - def build_model(self, cfg): - model_config = cfg.model_cfg - - model_cls = registry.get_model_class(model_config.arch) - return model_cls.from_config(model_config) - - def build_datasets(self, cfg): - """ - Build a dictionary of datasets, keyed by split 'train', 'valid', 'test'. - Download dataset and annotations automatically if not exist. 
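    -
    -        Each entry in cfg.datasets_cfg is resolved to a builder class through
    -        the registry, and every built dataset is keyed by split name.
    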
- - Args: - cfg (common.config.Config): _description_ - - Returns: - dict: Dictionary of torch.utils.data.Dataset objects by split. - """ - - datasets = dict() - - datasets_config = cfg.datasets_cfg - - assert len(datasets_config) > 0, "At least one dataset has to be specified." - - for name in datasets_config: - dataset_config = datasets_config[name] - - builder = registry.get_builder_class(name)(dataset_config) - dataset = builder.build_datasets() - - dataset['train'].name = name - if 'sample_ratio' in dataset_config: - dataset['train'].sample_ratio = dataset_config.sample_ratio - - datasets[name] = dataset - - return datasets - - def train_step(self, model, samples): - loss = model(samples)["loss"] - return loss - - def valid_step(self, model, samples): - raise NotImplementedError - - def before_evaluation(self, model, dataset, **kwargs): - model.before_evaluation(dataset=dataset, task_type=type(self)) - - def after_evaluation(self, **kwargs): - pass - - def inference_step(self): - raise NotImplementedError - - def evaluation(self, model, data_loader, cuda_enabled=True): - metric_logger = MetricLogger(delimiter=" ") - header = "Evaluation" - # TODO make it configurable - print_freq = 10 - - results = [] - - for samples in metric_logger.log_every(data_loader, print_freq, header): - samples = prepare_sample(samples, cuda_enabled=cuda_enabled) - - eval_output = self.valid_step(model=model, samples=samples) - results.extend(eval_output) - - if is_dist_avail_and_initialized(): - dist.barrier() - - return results - - def train_epoch( - self, - epoch, - model, - data_loader, - optimizer, - lr_scheduler, - scaler=None, - cuda_enabled=False, - log_freq=50, - accum_grad_iters=1, - ): - return self._train_inner_loop( - epoch=epoch, - iters_per_epoch=lr_scheduler.iters_per_epoch, - model=model, - data_loader=data_loader, - optimizer=optimizer, - scaler=scaler, - lr_scheduler=lr_scheduler, - log_freq=log_freq, - cuda_enabled=cuda_enabled, - accum_grad_iters=accum_grad_iters, - ) - - def train_iters( - self, - epoch, - start_iters, - iters_per_inner_epoch, - model, - data_loader, - optimizer, - lr_scheduler, - scaler=None, - cuda_enabled=False, - log_freq=50, - accum_grad_iters=1, - ): - return self._train_inner_loop( - epoch=epoch, - start_iters=start_iters, - iters_per_epoch=iters_per_inner_epoch, - model=model, - data_loader=data_loader, - optimizer=optimizer, - scaler=scaler, - lr_scheduler=lr_scheduler, - log_freq=log_freq, - cuda_enabled=cuda_enabled, - accum_grad_iters=accum_grad_iters, - ) - - def _train_inner_loop( - self, - epoch, - iters_per_epoch, - model, - data_loader, - optimizer, - lr_scheduler, - scaler=None, - start_iters=None, - log_freq=50, - cuda_enabled=False, - accum_grad_iters=1, - ): - """ - An inner training loop compatible with both epoch-based and iter-based training. - - When using epoch-based, training stops after one epoch; when using iter-based, - training stops after #iters_per_epoch iterations. - """ - use_amp = scaler is not None - - if not hasattr(data_loader, "__next__"): - # convert to iterator if not already - data_loader = iter(data_loader) - - metric_logger = MetricLogger(delimiter=" ") - metric_logger.add_meter("lr", SmoothedValue(window_size=1, fmt="{value:.6f}")) - metric_logger.add_meter("loss", SmoothedValue(window_size=1, fmt="{value:.4f}")) - - # if iter-based runner, schedule lr based on inner epoch. 
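    -        # start_iters is None for the epoch-based runner; otherwise the
    -        # "inner epoch" is derived from the global iteration count so the
    -        # scheduler sees a continuous timeline.
    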
- logging.info( - "Start training epoch {}, {} iters per inner epoch.".format( - epoch, iters_per_epoch - ) - ) - header = "Train: data epoch: [{}]".format(epoch) - if start_iters is None: - # epoch-based runner - inner_epoch = epoch - else: - # In iter-based runner, we schedule the learning rate based on iterations. - inner_epoch = start_iters // iters_per_epoch - header = header + "; inner epoch [{}]".format(inner_epoch) - - for i in metric_logger.log_every(range(iters_per_epoch), log_freq, header): - # if using iter-based runner, we stop after iters_per_epoch iterations. - if i >= iters_per_epoch: - break - - samples = next(data_loader) - - samples = prepare_sample(samples, cuda_enabled=cuda_enabled) - samples.update( - { - "epoch": inner_epoch, - "num_iters_per_epoch": iters_per_epoch, - "iters": i, - } - ) - - lr_scheduler.step(cur_epoch=inner_epoch, cur_step=i) - - with torch.cuda.amp.autocast(enabled=use_amp): - loss = self.train_step(model=model, samples=samples) - - # after_train_step() - if use_amp: - scaler.scale(loss).backward() - else: - loss.backward() - - # update gradients every accum_grad_iters iterations - if (i + 1) % accum_grad_iters == 0: - if use_amp: - scaler.step(optimizer) - scaler.update() - else: - optimizer.step() - optimizer.zero_grad() - - metric_logger.update(loss=loss.item()) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - - # after train_epoch() - # gather the stats from all processes - metric_logger.synchronize_between_processes() - logging.info("Averaged stats: " + str(metric_logger.global_avg())) - return { - k: "{:.3f}".format(meter.global_avg) - for k, meter in metric_logger.meters.items() - } - - @staticmethod - def save_result(result, result_dir, filename, remove_duplicate=""): - import json - - result_file = os.path.join( - result_dir, "%s_rank%d.json" % (filename, get_rank()) - ) - final_result_file = os.path.join(result_dir, "%s.json" % filename) - - json.dump(result, open(result_file, "w")) - - if is_dist_avail_and_initialized(): - dist.barrier() - - if is_main_process(): - logging.warning("rank %d starts merging results." 
% get_rank()) - # combine results from all processes - result = [] - - for rank in range(get_world_size()): - result_file = os.path.join( - result_dir, "%s_rank%d.json" % (filename, rank) - ) - res = json.load(open(result_file, "r")) - result += res - - if remove_duplicate: - result_new = [] - id_list = [] - for res in result: - if res[remove_duplicate] not in id_list: - id_list.append(res[remove_duplicate]) - result_new.append(res) - result = result_new - - json.dump(result, open(final_result_file, "w")) - print("result file saved to %s" % final_result_file) - - return final_result_file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/core/base_dataset.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/core/base_dataset.py deleted file mode 100644 index f499bfa8ee4fb8161568aaa238c81b6b1e2f0a22..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/core/base_dataset.py +++ /dev/null @@ -1,103 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Yuhao Cui https://github.com/cuiyuhao1996 -# -------------------------------------------------------- - -import numpy as np -import glob, json, torch, random -import torch.utils.data as Data -import torch.nn as nn -from openvqa.utils.feat_filter import feat_filter - -class BaseDataSet(Data.Dataset): - def __init__(self): - self.token_to_ix = None - self.pretrained_emb = None - self.ans_to_ix = None - self.ix_to_ans = None - - self.data_size = None - self.token_size = None - self.ans_size = None - - - def load_ques_ans(self, idx): - raise NotImplementedError() - - - def load_img_feats(self, idx, iid): - raise NotImplementedError() - - - def __getitem__(self, idx): - - ques_ix_iter, ans_iter, iid = self.load_ques_ans(idx) - - frcn_feat_iter, grid_feat_iter, bbox_feat_iter = self.load_img_feats(idx, iid) - - return \ - torch.from_numpy(frcn_feat_iter),\ - torch.from_numpy(grid_feat_iter),\ - torch.from_numpy(bbox_feat_iter),\ - torch.from_numpy(ques_ix_iter),\ - torch.from_numpy(ans_iter) - - - def __len__(self): - return self.data_size - - def shuffle_list(self, list): - random.shuffle(list) - - -class BaseAdapter(nn.Module): - def __init__(self, __C): - super(BaseAdapter, self).__init__() - self.__C = __C - if self.__C.DATASET in ['vqa']: - self.vqa_init(__C) - - elif self.__C.DATASET in ['gqa']: - self.gqa_init(__C) - - elif self.__C.DATASET in ['clevr']: - self.clevr_init(__C) - - else: - exit(-1) - - # eval('self.' 
+ __C.DATASET + '_init()') - - def vqa_init(self, __C): - raise NotImplementedError() - - def gqa_init(self, __C): - raise NotImplementedError() - - def clevr_init(self, __C): - raise NotImplementedError() - - def forward(self, frcn_feat, grid_feat, bbox_feat): - feat_dict = feat_filter(self.__C.DATASET, frcn_feat, grid_feat, bbox_feat) - - if self.__C.DATASET in ['vqa']: - return self.vqa_forward(feat_dict) - - elif self.__C.DATASET in ['gqa']: - return self.gqa_forward(feat_dict) - - elif self.__C.DATASET in ['clevr']: - return self.clevr_forward(feat_dict) - - else: - exit(-1) - - def vqa_forward(self, feat_dict): - raise NotImplementedError() - - def gqa_forward(self, feat_dict): - raise NotImplementedError() - - def clevr_forward(self, feat_dict): - raise NotImplementedError() - diff --git a/spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/bug-report.md b/spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/bug-report.md deleted file mode 100644 index ae36ea65083643dfc6f252249141b94c7ecb65e7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/.github/ISSUE_TEMPLATE/bug-report.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -name: Bug Report -about: File an issue about a bug -title: "[BUG] " ---- - - -Make sure you've completed the following steps before submitting your issue -- thank you! - -1. Make sure you've read the [documentation][]. Your issue may be addressed there. -2. Search the [issue tracker][] to verify that this hasn't already been reported. +1 or comment there if it has. -3. Consider asking first in the [Gitter chat room][]. -4. Include a self-contained and minimal piece of code that reproduces the problem. If that's not possible, try to make the description as clear as possible. - a. If possible, make a PR with a new, failing test to give us a starting point to work on! - -[documentation]: https://pybind11.readthedocs.io -[issue tracker]: https://github.com/pybind/pybind11/issues -[Gitter chat room]: https://gitter.im/pybind/Lobby - -*After reading, remove this checklist and the template text in parentheses below.* - -## Issue description - -(Provide a short description, state the expected behavior and what actually happens.) - -## Reproducible example code - -(The code should be minimal, have no external dependencies, isolate the function(s) that cause breakage. Submit matched and complete C++ and Python snippets that can be easily compiled and run to diagnose the issue.) diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/memory_wrapper.h b/spaces/CVPR/LIVE/thrust/thrust/detail/memory_wrapper.h deleted file mode 100644 index bfc9056fa15ff6d123659499e5fb9044f937f769..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/memory_wrapper.h +++ /dev/null @@ -1,30 +0,0 @@ -/* - * Copyright 2020 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -// When a compiler uses Thrust as part of its implementation of Standard C++ -// algorithms, a cycle of included files may result when Thrust code tries to -// use a standard algorithm. Having a macro that is defined only when Thrust -// is including an algorithms-related header gives the compiler a chance to -// detect and break the cycle of includes. (<memory> declares several standard -// algorithms, including all of the uninitialized_* algorithms. "_ALGORITHMS_" -// in the macro name is meant generically, not as a specific reference to -// the header <algorithm>.) - -#define THRUST_INCLUDING_ALGORITHMS_HEADER -#include <memory> -#undef THRUST_INCLUDING_ALGORITHMS_HEADER diff --git a/spaces/CVPR/Text2Human/Text2Human/utils/__init__.py b/spaces/CVPR/Text2Human/Text2Human/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-b7124075.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-b7124075.js deleted file mode 100644 index 7e688b9e5869367a0138174a990786e71031d039..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-b7124075.js +++ /dev/null @@ -1,4 +0,0 @@ -import{S as Ae,e as ye,s as Se,J as Pe,K as h,p as E,M as L,n as $,A as V,N as H,O as J,U as S,Q as B,X as re,af as Ve,a1 as ge,P as Y,R as Z,G as Re,m as pe,V as dl,z as P,u as ue,v as R,y as oe,B as Be,ag as Nl,k as j,o as K,x as Q,ah as Ol,h as He,ai as zl,_ as Ie,F as C,T as Te,aj as ml,j as cl,t as hl,a9 as Il,ab as Dl,ac as Cl,ad as jl,ak as z,E as Kl,ae as Ql,q as Yl,r as ql}from"./index-3370be2a.js";import"./Blocks-f0129fcd.js";import{U as Xl}from"./UploadText-28892309.js";import{a as bl,B as Gl}from"./Button-89624748.js";import{U as Jl}from"./Upload-f29b2460.js";import{M as Zl}from"./ModifyUpload-d8fc50ab.js";import{B as gl}from"./BlockLabel-56db415e.js";import{E as Wl}from"./Empty-585389a4.js";import{S as xl,u as $l}from"./ShareButton-39feba51.js";import{n as en}from"./ModifyUpload.svelte_svelte_type_style_lang-d2acacf0.js";import"./IconButton-abe5ede9.js";function ln(l){let e,i,n,t;return{c(){e=Pe("svg"),i=Pe("path"),n=Pe("circle"),t=Pe("circle"),h(i,"d","M9 18V5l12-2v13"),h(n,"cx","6"),h(n,"cy","18"),h(n,"r","3"),h(t,"cx","18"),h(t,"cy","16"),h(t,"r","3"),h(e,"xmlns","http://www.w3.org/2000/svg"),h(e,"width","100%"),h(e,"height","100%"),h(e,"viewBox","0 0 24 24"),h(e,"fill","none"),h(e,"stroke","currentColor"),h(e,"stroke-width","1.5"),h(e,"stroke-linecap","round"),h(e,"stroke-linejoin","round"),h(e,"class","feather feather-music")},m(a,s){E(a,e,s),L(e,i),L(e,n),L(e,t)},p:$,i:$,o:$,d(a){a&&V(e)}}}class Ne extends Ae{constructor(e){super(),ye(this,e,null,ln,Se,{})}}function De(l,e,i){const n=l.slice();return n[27]=e[i],n[29]=i,n}function Ce(l){let e,i,n,t,a=(l[6]==="label"||l[7]==="label")&&je(l);return{c(){e=H("span"),a&&a.c(),h(e,"class","pip first"),h(e,"style",i=l[14]+": 0%;"),S(e,"selected",l[17](l[0])),S(e,"in-range",l[16](l[0]))},m(s,r){E(s,e,r),a&&a.m(e,null),n||(t=[B(e,"click",function(){re(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}),B(e,"touchend",Ve(function(){re(l[20](l[0]))&&l[20](l[0]).apply(this,arguments)}))],n=!0)},p(s,r){l=s,l[6]==="label"||l[7]==="label"?a?a.p(l,r):(a=je(l),a.c(),a.m(e,null)):a&&(a.d(1),a=null),r&16384&&i!==(i=l[14]+": 
0%;")&&h(e,"style",i),r&131073&&S(e,"selected",l[17](l[0])),r&65537&&S(e,"in-range",l[16](l[0]))},d(s){s&&V(e),a&&a.d(),n=!1,ge(t)}}}function je(l){let e,i=l[12](l[0],0,0)+"",n,t=l[10]&&Ke(l),a=l[11]&&Qe(l);return{c(){e=H("span"),t&&t.c(),n=Y(i),a&&a.c(),h(e,"class","pipVal")},m(s,r){E(s,e,r),t&&t.m(e,null),L(e,n),a&&a.m(e,null)},p(s,r){s[10]?t?t.p(s,r):(t=Ke(s),t.c(),t.m(e,n)):t&&(t.d(1),t=null),r&4097&&i!==(i=s[12](s[0],0,0)+"")&&Z(n,i),s[11]?a?a.p(s,r):(a=Qe(s),a.c(),a.m(e,null)):a&&(a.d(1),a=null)},d(s){s&&V(e),t&&t.d(),a&&a.d()}}}function Ke(l){let e,i;return{c(){e=H("span"),i=Y(l[10]),h(e,"class","pipVal-prefix")},m(n,t){E(n,e,t),L(e,i)},p(n,t){t&1024&&Z(i,n[10])},d(n){n&&V(e)}}}function Qe(l){let e,i;return{c(){e=H("span"),i=Y(l[11]),h(e,"class","pipVal-suffix")},m(n,t){E(n,e,t),L(e,i)},p(n,t){t&2048&&Z(i,n[11])},d(n){n&&V(e)}}}function Ye(l){let e,i=Re(Array(l[19]+1)),n=[];for(let t=0;tp}=e,{focus:U=void 0}=e,{orientationStart:X=void 0}=e,{percentOf:ie=void 0}=e,{moveHandle:le=void 0}=e;function fe(p){le(void 0,p)}return l.$$set=p=>{"range"in p&&i(21,f=p.range),"min"in p&&i(0,g=p.min),"max"in p&&i(1,d=p.max),"step"in p&&i(22,c=p.step),"values"in p&&i(23,o=p.values),"vertical"in p&&i(2,_=p.vertical),"reversed"in p&&i(3,m=p.reversed),"hoverable"in p&&i(4,A=p.hoverable),"disabled"in p&&i(5,y=p.disabled),"pipstep"in p&&i(24,w=p.pipstep),"all"in p&&i(6,I=p.all),"first"in p&&i(7,q=p.first),"last"in p&&i(8,O=p.last),"rest"in p&&i(9,D=p.rest),"prefix"in p&&i(10,F=p.prefix),"suffix"in p&&i(11,W=p.suffix),"formatter"in p&&i(12,ee=p.formatter),"focus"in p&&i(13,U=p.focus),"orientationStart"in p&&i(14,X=p.orientationStart),"percentOf"in p&&i(15,ie=p.percentOf),"moveHandle"in p&&i(25,le=p.moveHandle)},l.$$.update=()=>{l.$$.dirty&20971527&&i(26,n=w||((d-g)/c>=(_?50:100)?(d-g)/(_?10:20):1)),l.$$.dirty&71303171&&i(19,t=parseInt((d-g)/(c*n),10)),l.$$.dirty&71303169&&i(18,a=function(p){return g+p*c*n}),l.$$.dirty&8388608&&i(17,s=function(p){return o.some(te=>te===p)}),l.$$.dirty&10485760&&i(16,r=function(p){if(f==="min")return o[0]>p;if(f==="max")return o[0]p})},[g,d,_,m,A,y,I,q,O,D,F,W,ee,U,X,ie,r,s,a,t,fe,f,c,o,w,le,n]}class an extends Ae{constructor(e){super(),ye(this,e,tn,nn,Se,{range:21,min:0,max:1,step:22,values:23,vertical:2,reversed:3,hoverable:4,disabled:5,pipstep:24,all:6,first:7,last:8,rest:9,prefix:10,suffix:11,formatter:12,focus:13,orientationStart:14,percentOf:15,moveHandle:25})}}function ll(l,e,i){const n=l.slice();return n[63]=e[i],n[65]=i,n}function nl(l){let e,i=l[21](l[63],l[65],l[23](l[63]))+"",n,t=l[18]&&il(l),a=l[19]&&tl(l);return{c(){e=H("span"),t&&t.c(),n=Y(i),a&&a.c(),h(e,"class","rangeFloat")},m(s,r){E(s,e,r),t&&t.m(e,null),L(e,n),a&&a.m(e,null)},p(s,r){s[18]?t?t.p(s,r):(t=il(s),t.c(),t.m(e,n)):t&&(t.d(1),t=null),r[0]&10485761&&i!==(i=s[21](s[63],s[65],s[23](s[63]))+"")&&Z(n,i),s[19]?a?a.p(s,r):(a=tl(s),a.c(),a.m(e,null)):a&&(a.d(1),a=null)},d(s){s&&V(e),t&&t.d(),a&&a.d()}}}function il(l){let e,i;return{c(){e=H("span"),i=Y(l[18]),h(e,"class","rangeFloat-prefix")},m(n,t){E(n,e,t),L(e,i)},p(n,t){t[0]&262144&&Z(i,n[18])},d(n){n&&V(e)}}}function tl(l){let e,i;return{c(){e=H("span"),i=Y(l[19]),h(e,"class","rangeFloat-suffix")},m(n,t){E(n,e,t),L(e,i)},p(n,t){t[0]&524288&&Z(i,n[19])},d(n){n&&V(e)}}}function al(l){let e,i,n,t,a,s,r,f,g,d,c,o,_=l[7]&&nl(l);return{c(){e=H("span"),i=H("span"),n=J(),_&&_.c(),h(i,"class","rangeNub"),h(e,"role","slider"),h(e,"class","rangeHandle"),h(e,"data-handle",l[65]),h(e,"style",t=l[28]+": "+l[29][l[65]]+"%; z-index: 
"+(l[26]===l[65]?3:2)+";"),h(e,"aria-valuemin",a=l[2]===!0&&l[65]===1?l[0][0]:l[3]),h(e,"aria-valuemax",s=l[2]===!0&&l[65]===0?l[0][1]:l[4]),h(e,"aria-valuenow",r=l[63]),h(e,"aria-valuetext",f=""+(l[18]+l[21](l[63],l[65],l[23](l[63]))+l[19])),h(e,"aria-orientation",g=l[6]?"vertical":"horizontal"),h(e,"aria-disabled",l[10]),h(e,"disabled",l[10]),h(e,"tabindex",d=l[10]?-1:0),S(e,"active",l[24]&&l[26]===l[65]),S(e,"press",l[25]&&l[26]===l[65])},m(m,A){E(m,e,A),L(e,i),L(e,n),_&&_.m(e,null),c||(o=[B(e,"blur",l[33]),B(e,"focus",l[34]),B(e,"keydown",l[35])],c=!0)},p(m,A){m[7]?_?_.p(m,A):(_=nl(m),_.c(),_.m(e,null)):_&&(_.d(1),_=null),A[0]&872415232&&t!==(t=m[28]+": "+m[29][m[65]]+"%; z-index: "+(m[26]===m[65]?3:2)+";")&&h(e,"style",t),A[0]&13&&a!==(a=m[2]===!0&&m[65]===1?m[0][0]:m[3])&&h(e,"aria-valuemin",a),A[0]&21&&s!==(s=m[2]===!0&&m[65]===0?m[0][1]:m[4])&&h(e,"aria-valuemax",s),A[0]&1&&r!==(r=m[63])&&h(e,"aria-valuenow",r),A[0]&11272193&&f!==(f=""+(m[18]+m[21](m[63],m[65],m[23](m[63]))+m[19]))&&h(e,"aria-valuetext",f),A[0]&64&&g!==(g=m[6]?"vertical":"horizontal")&&h(e,"aria-orientation",g),A[0]&1024&&h(e,"aria-disabled",m[10]),A[0]&1024&&h(e,"disabled",m[10]),A[0]&1024&&d!==(d=m[10]?-1:0)&&h(e,"tabindex",d),A[0]&83886080&&S(e,"active",m[24]&&m[26]===m[65]),A[0]&100663296&&S(e,"press",m[25]&&m[26]===m[65])},d(m){m&&V(e),_&&_.d(),c=!1,ge(o)}}}function sl(l){let e,i;return{c(){e=H("span"),h(e,"class","rangeBar"),h(e,"style",i=l[28]+": "+l[31](l[29])+"%; "+l[27]+": "+l[32](l[29])+"%;")},m(n,t){E(n,e,t)},p(n,t){t[0]&939524096&&i!==(i=n[28]+": "+n[31](n[29])+"%; "+n[27]+": "+n[32](n[29])+"%;")&&h(e,"style",i)},d(n){n&&V(e)}}}function fl(l){let e,i;return e=new an({props:{values:l[0],min:l[3],max:l[4],step:l[5],range:l[2],vertical:l[6],reversed:l[8],orientationStart:l[28],hoverable:l[9],disabled:l[10],all:l[13],first:l[14],last:l[15],rest:l[16],pipstep:l[12],prefix:l[18],suffix:l[19],formatter:l[20],focus:l[24],percentOf:l[23],moveHandle:l[30]}}),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const a={};t[0]&1&&(a.values=n[0]),t[0]&8&&(a.min=n[3]),t[0]&16&&(a.max=n[4]),t[0]&32&&(a.step=n[5]),t[0]&4&&(a.range=n[2]),t[0]&64&&(a.vertical=n[6]),t[0]&256&&(a.reversed=n[8]),t[0]&268435456&&(a.orientationStart=n[28]),t[0]&512&&(a.hoverable=n[9]),t[0]&1024&&(a.disabled=n[10]),t[0]&8192&&(a.all=n[13]),t[0]&16384&&(a.first=n[14]),t[0]&32768&&(a.last=n[15]),t[0]&65536&&(a.rest=n[16]),t[0]&4096&&(a.pipstep=n[12]),t[0]&262144&&(a.prefix=n[18]),t[0]&524288&&(a.suffix=n[19]),t[0]&1048576&&(a.formatter=n[20]),t[0]&16777216&&(a.focus=n[24]),t[0]&8388608&&(a.percentOf=n[23]),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function sn(l){let e,i,n,t,a,s,r=Re(l[0]),f=[];for(let c=0;c{d=null}),oe()),(!t||o[0]&131072)&&h(e,"id",c[17]),(!t||o[0]&4)&&S(e,"range",c[2]),(!t||o[0]&1024)&&S(e,"disabled",c[10]),(!t||o[0]&512)&&S(e,"hoverable",c[9]),(!t||o[0]&64)&&S(e,"vertical",c[6]),(!t||o[0]&256)&&S(e,"reversed",c[8]),(!t||o[0]&16777216)&&S(e,"focus",c[24]),(!t||o[0]&4)&&S(e,"min",c[2]==="min"),(!t||o[0]&4)&&S(e,"max",c[2]==="max"),(!t||o[0]&2048)&&S(e,"pips",c[11]),(!t||o[0]&122880)&&S(e,"pip-labels",c[13]==="label"||c[14]==="label"||c[15]==="label"||c[16]==="label")},i(c){t||(P(d),t=!0)},o(c){R(d),t=!1},d(c){c&&V(e),dl(f,c),g&&g.d(),d&&d.d(),l[49](null),a=!1,ge(s)}}}function rl(l){if(!l)return-1;for(var e=0;l=l.previousElementSibling;)e++;return e}function Ue(l){return l.type.includes("touch")?l.touches[0]:l}function fn(l,e,i){let 
n,t,a,s,r,f,g=$,d=()=>(g(),g=Ol(be,u=>i(29,f=u)),be);l.$$.on_destroy.push(()=>g());let{slider:c}=e,{range:o=!1}=e,{pushy:_=!1}=e,{min:m=0}=e,{max:A=100}=e,{step:y=1}=e,{values:w=[(A+m)/2]}=e,{vertical:I=!1}=e,{float:q=!1}=e,{reversed:O=!1}=e,{hoverable:D=!0}=e,{disabled:F=!1}=e,{pips:W=!1}=e,{pipstep:ee=void 0}=e,{all:U=void 0}=e,{first:X=void 0}=e,{last:ie=void 0}=e,{rest:le=void 0}=e,{id:fe=void 0}=e,{prefix:p=""}=e,{suffix:te=""}=e,{formatter:_e=(u,v,M)=>u}=e,{handleFormatter:we=_e}=e,{precision:G=2}=e,{springValues:de={stiffness:.15,damping:.4}}=e;const me=Be();let ce=0,x=!1,ne=!1,b=!1,k=!1,N=w.length-1,ae,he,be;function Me(u){const v=c.querySelectorAll(".handle"),M=Array.prototype.includes.call(v,u),T=Array.prototype.some.call(v,se=>se.contains(u));return M||T}function Ee(u){return o==="min"||o==="max"?u.slice(0,1):o?u.slice(0,2):u}function ke(){return c.getBoundingClientRect()}function Fe(u){const v=ke();let M=0,T=0,se=0;I?(M=u.clientY-v.top,T=M/v.height*100,T=O?T:100-T):(M=u.clientX-v.left,T=M/v.width*100,T=O?100-T:T),se=(A-m)/100*T+m;let ze;return o===!0&&w[0]===w[1]?se>w[1]?1:0:(ze=w.indexOf([...w].sort((Ll,Ul)=>Math.abs(se-Ll)-Math.abs(se-Ul))[0]),ze)}function Le(u){const v=ke();let M=0,T=0,se=0;I?(M=u.clientY-v.top,T=M/v.height*100,T=O?T:100-T):(M=u.clientX-v.left,T=M/v.width*100,T=O?100-T:T),se=(A-m)/100*T+m,ve(N,se)}function ve(u,v){return v=a(v),typeof u>"u"&&(u=N),o&&(u===0&&v>w[1]?_?i(0,w[1]=v,w):v=w[1]:u===1&&va(u))})}function Oe(){!F&&me("stop",{activeHandle:N,startValue:ae,value:w[N],values:w.map(u=>a(u))})}function Ml(){!F&&me("change",{activeHandle:N,startValue:ae,previousValue:typeof he>"u"?ae:he,value:w[N],values:w.map(u=>a(u))})}function Fl(u){He[u?"unshift":"push"](()=>{c=u,i(1,c)})}return l.$$set=u=>{"slider"in u&&i(1,c=u.slider),"range"in u&&i(2,o=u.range),"pushy"in u&&i(43,_=u.pushy),"min"in u&&i(3,m=u.min),"max"in u&&i(4,A=u.max),"step"in u&&i(5,y=u.step),"values"in u&&i(0,w=u.values),"vertical"in u&&i(6,I=u.vertical),"float"in u&&i(7,q=u.float),"reversed"in u&&i(8,O=u.reversed),"hoverable"in u&&i(9,D=u.hoverable),"disabled"in u&&i(10,F=u.disabled),"pips"in u&&i(11,W=u.pips),"pipstep"in u&&i(12,ee=u.pipstep),"all"in u&&i(13,U=u.all),"first"in u&&i(14,X=u.first),"last"in u&&i(15,ie=u.last),"rest"in u&&i(16,le=u.rest),"id"in u&&i(17,fe=u.id),"prefix"in u&&i(18,p=u.prefix),"suffix"in u&&i(19,te=u.suffix),"formatter"in u&&i(20,_e=u.formatter),"handleFormatter"in u&&i(21,we=u.handleFormatter),"precision"in u&&i(44,G=u.precision),"springValues"in u&&i(45,de=u.springValues)},l.$$.update=()=>{l.$$.dirty[0]&24&&i(48,t=function(u){return u<=m?m:u>=A?A:u}),l.$$.dirty[0]&56|l.$$.dirty[1]&139264&&i(47,a=function(u){if(u<=m)return m;if(u>=A)return A;let v=(u-m)%y,M=u-v;return Math.abs(v)*2>=y&&(M+=v>0?y:-y),M=t(M),parseFloat(M.toFixed(G))}),l.$$.dirty[0]&24|l.$$.dirty[1]&8192&&i(23,n=function(u){let v=(u-m)/(A-m)*100;return isNaN(v)||v<=0?0:v>=100?100:parseFloat(v.toFixed(G))}),l.$$.dirty[0]&12582937|l.$$.dirty[1]&114688&&(Array.isArray(w)||(i(0,w=[(A+m)/2]),console.error("'values' prop should be an Array (https://github.com/simeydotme/svelte-range-slider-pips#slider-props)")),i(0,w=Ee(w.map(u=>a(u)))),ce!==w.length?d(i(22,be=Nl(w.map(u=>n(u)),de))):be.set(w.map(u=>n(u))),i(46,ce=w.length)),l.$$.dirty[0]&320&&i(28,s=I?O?"top":"bottom":O?"right":"left"),l.$$.dirty[0]&320&&i(27,r=I?O?"bottom":"top":O?"left":"right")},[w,c,o,m,A,y,I,q,O,D,F,W,ee,U,X,ie,le,fe,p,te,_e,we,be,n,x,b,N,r,s,f,ve,wl,kl,vl,Al,yl,Sl,El,Vl,Pl,Rl,Tl,Bl,_,G,de,ce,a,t,Fl]}class rn extends 
Ae{constructor(e){super(),ye(this,e,fn,sn,Se,{slider:1,range:2,pushy:43,min:3,max:4,step:5,values:0,vertical:6,float:7,reversed:8,hoverable:9,disabled:10,pips:11,pipstep:12,all:13,first:14,last:15,rest:16,id:17,prefix:18,suffix:19,formatter:20,handleFormatter:21,precision:44,springValues:45},null,[-1,-1,-1])}}function pl(l,{crop_values:e,autoplay:i}={}){function n(){if(e===void 0)return;const a=e[0]/100*l.duration,s=e[1]/100*l.duration;l.currentTimes&&(l.currentTime=a,l.pause())}async function t(){i&&(l.pause(),await l.play())}return l.addEventListener("loadeddata",t),l.addEventListener("timeupdate",n),{destroy(){l.removeEventListener("loadeddata",t),l.removeEventListener("timeupdate",n)}}}function un(l){let e,i,n,t,a,s,r,f,g,d,c;e=new Zl({props:{editable:!0,absolute:!0}}),e.$on("clear",l[13]),e.$on("edit",l[26]);let o=l[8]==="edit"&&l[9]?.duration&&ul(l);return{c(){j(e.$$.fragment),i=J(),n=H("audio"),r=J(),o&&o.c(),f=pe(),n.controls=!0,h(n,"preload","metadata"),Te(n.src,t=l[1]?.data)||h(n,"src",t),h(n,"data-testid",a=`${l[2]}-audio`),h(n,"class","svelte-1thnwz")},m(_,m){K(e,_,m),E(_,i,m),E(_,n,m),l[27](n),E(_,r,m),o&&o.m(_,m),E(_,f,m),g=!0,d||(c=[ml(s=pl.call(null,n,{autoplay:l[6],crop_values:l[10]})),B(n,"play",l[23]),B(n,"pause",l[24]),B(n,"ended",l[16])],d=!0)},p(_,m){(!g||m[0]&2&&!Te(n.src,t=_[1]?.data))&&h(n,"src",t),(!g||m[0]&4&&a!==(a=`${_[2]}-audio`))&&h(n,"data-testid",a),s&&re(s.update)&&m[0]&1088&&s.update.call(null,{autoplay:_[6],crop_values:_[10]}),_[8]==="edit"&&_[9]?.duration?o?(o.p(_,m),m[0]&768&&P(o,1)):(o=ul(_),o.c(),P(o,1),o.m(f.parentNode,f)):o&&(ue(),R(o,1,1,()=>{o=null}),oe())},i(_){g||(P(e.$$.fragment,_),P(o),g=!0)},o(_){R(e.$$.fragment,_),R(o),g=!1},d(_){_&&(V(i),V(n),V(r),V(f)),Q(e,_),l[27](null),o&&o.d(_),d=!1,ge(c)}}}function on(l){let e,i,n,t;const a=[dn,_n],s=[];function r(f,g){return f[4]==="microphone"?0:f[4]==="upload"?1:-1}return~(e=r(l))&&(i=s[e]=a[e](l)),{c(){i&&i.c(),n=pe()},m(f,g){~e&&s[e].m(f,g),E(f,n,g),t=!0},p(f,g){let d=e;e=r(f),e===d?~e&&s[e].p(f,g):(i&&(ue(),R(s[d],1,1,()=>{s[d]=null}),oe()),~e?(i=s[e],i?i.p(f,g):(i=s[e]=a[e](f),i.c()),P(i,1),i.m(n.parentNode,n)):i=null)},i(f){t||(P(i),t=!0)},o(f){R(i),t=!1},d(f){f&&V(n),~e&&s[e].d(f)}}}function ul(l){let e,i,n;function t(s){l[28](s)}let a={range:!0,min:0,max:100,step:1};return l[10]!==void 0&&(a.values=l[10]),e=new rn({props:a}),He.push(()=>cl(e,"values",t)),e.$on("change",l[14]),{c(){j(e.$$.fragment)},m(s,r){K(e,s,r),n=!0},p(s,r){const f={};!i&&r[0]&1024&&(i=!0,f.values=s[10],hl(()=>i=!1)),e.$set(f)},i(s){n||(P(e.$$.fragment,s),n=!0)},o(s){R(e.$$.fragment,s),n=!1},d(s){Q(e,s)}}}function _n(l){let e,i,n;function t(s){l[25](s)}let a={filetype:"audio/aac,audio/midi,audio/mpeg,audio/ogg,audio/wav,audio/x-wav,audio/opus,audio/webm,audio/flac,audio/vnd.rn-realaudio,audio/x-ms-wma,audio/x-aiff,audio/amr,audio/*",$$slots:{default:[mn]},$$scope:{ctx:l}};return l[0]!==void 0&&(a.dragging=l[0]),e=new Jl({props:a}),He.push(()=>cl(e,"dragging",t)),e.$on("load",l[15]),{c(){j(e.$$.fragment)},m(s,r){K(e,s,r),n=!0},p(s,r){const f={};r[0]&536870912&&(f.$$scope={dirty:r,ctx:s}),!i&&r[0]&1&&(i=!0,f.dragging=s[0],hl(()=>i=!1)),e.$set(f)},i(s){n||(P(e.$$.fragment,s),n=!0)},o(s){R(e.$$.fragment,s),n=!1},d(s){Q(e,s)}}}function dn(l){let e,i,n,t;const a=[hn,cn],s=[];function r(f,g){return f[7]?0:1}return i=r(l),n=s[i]=a[i](l),{c(){e=H("div"),n.c(),h(e,"class","mic-wrap svelte-1thnwz")},m(f,g){E(f,e,g),s[i].m(e,null),t=!0},p(f,g){let 
d=i;i=r(f),i===d?s[i].p(f,g):(ue(),R(s[d],1,1,()=>{s[d]=null}),oe(),n=s[i],n?n.p(f,g):(n=s[i]=a[i](f),n.c()),P(n,1),n.m(e,null))},i(f){t||(P(n),t=!0)},o(f){R(n),t=!1},d(f){f&&V(e),s[i].d()}}}function mn(l){let e;const i=l[22].default,n=Il(i,l,l[29],null);return{c(){n&&n.c()},m(t,a){n&&n.m(t,a),e=!0},p(t,a){n&&n.p&&(!e||a[0]&536870912)&&Dl(n,i,t,t[29],e?jl(i,t[29],a,null):Cl(t[29]),null)},i(t){e||(P(n,t),e=!0)},o(t){R(n,t),e=!1},d(t){n&&n.d(t)}}}function cn(l){let e,i;return e=new bl({props:{size:"sm",$$slots:{default:[bn]},$$scope:{ctx:l}}}),e.$on("click",l[11]),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const a={};t[0]&536870912&&(a.$$scope={dirty:t,ctx:n}),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function hn(l){let e,i;return e=new bl({props:{size:"sm",$$slots:{default:[gn]},$$scope:{ctx:l}}}),e.$on("click",l[12]),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const a={};t[0]&536870912&&(a.$$scope={dirty:t,ctx:n}),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function bn(l){let e,i;return{c(){e=H("span"),e.innerHTML='',i=Y(` - Record from microphone`),h(e,"class","record-icon svelte-1thnwz")},m(n,t){E(n,e,t),E(n,i,t)},p:$,d(n){n&&(V(e),V(i))}}}function gn(l){let e,i;return{c(){e=H("span"),e.innerHTML=' ',i=Y(` - Stop recording`),h(e,"class","record-icon svelte-1thnwz")},m(n,t){E(n,e,t),E(n,i,t)},p:$,d(n){n&&(V(e),V(i))}}}function pn(l){let e,i,n,t,a,s;e=new gl({props:{show_label:l[3],Icon:Ne,float:l[4]==="upload"&&l[1]===null,label:l[2]||"Audio"}});const r=[on,un],f=[];function g(d,c){return d[1]===null||d[5]?0:1}return n=g(l),t=f[n]=r[n](l),{c(){j(e.$$.fragment),i=J(),t.c(),a=pe()},m(d,c){K(e,d,c),E(d,i,c),f[n].m(d,c),E(d,a,c),s=!0},p(d,c){const o={};c[0]&8&&(o.show_label=d[3]),c[0]&18&&(o.float=d[4]==="upload"&&d[1]===null),c[0]&4&&(o.label=d[2]||"Audio"),e.$set(o);let _=n;n=g(d),n===_?f[n].p(d,c):(ue(),R(f[_],1,1,()=>{f[_]=null}),oe(),t=f[n],t?t.p(d,c):(t=f[n]=r[n](d),t.c()),P(t,1),t.m(a.parentNode,a))},i(d){s||(P(e.$$.fragment,d),P(t),s=!0)},o(d){R(e.$$.fragment,d),R(t),s=!1},d(d){d&&(V(i),V(a)),Q(e,d),f[n].d(d)}}}const wn=500,ol=44;function kn(l){return new Promise((e,i)=>{let n=new FileReader;n.onerror=i,n.onload=()=>e(n.result),n.readAsDataURL(l)})}function vn(l,e,i){let{$$slots:n={},$$scope:t}=e,{value:a=null}=e,{label:s}=e,{show_label:r=!0}=e,{name:f=""}=e,{source:g}=e,{pending:d=!1}=e,{streaming:c=!1}=e,{autoplay:o=!1}=e,_=!1,m,A="",y,w=[],I=!1,q,O=!1,D=[0,100],F=[],W;function ee(){W=[Ie(()=>import("./module-447425fe.js"),["./module-447425fe.js","./module-a3cf0cc4.js","./index-3370be2a.js","./index-f2292b12.css"],import.meta.url),Ie(()=>import("./module-a5a0afa0.js"),["./module-a5a0afa0.js","./module-a3cf0cc4.js"],import.meta.url)]}c&&ee();const U=Be(),X=async(k,N)=>{let ae=new Blob(k,{type:"audio/wav"});i(1,a={data:await kn(ae),name:"audio.wav"}),U(N,a)};async function ie(){let k;try{k=await navigator.mediaDevices.getUserMedia({audio:!0})}catch(N){if(N instanceof DOMException&&N.name=="NotAllowedError"){U("error","Please allow access to the microphone for recording.");return}throw N}if(k!=null){if(c){const[{MediaRecorder:N,register:ae},{connect:he}]=await Promise.all(W);await ae(await he()),m=new N(k,{mimeType:"audio/wav"});async function be(Me){let Ee=await Me.data.arrayBuffer(),ke=new Uint8Array(Ee);if(y||(i(19,y=new Uint8Array(Ee.slice(0,ol))),ke=new Uint8Array(Ee.slice(ol))),d)w.push(ke);else{let 
Fe=[y].concat(w,[ke]);X(Fe,"stream"),i(20,w=[])}}m.addEventListener("dataavailable",be)}else m=new MediaRecorder(k),m.addEventListener("dataavailable",N=>{F.push(N.data)}),m.addEventListener("stop",async()=>{i(7,_=!1),await X(F,"change"),await X(F,"stop_recording"),F=[]});O=!0}}async function le(){i(7,_=!0),U("start_recording"),O||await ie(),i(19,y=void 0),c?m.start(wn):m.start()}zl(()=>{m&&m.state!=="inactive"&&m.stop()});function fe(){m.stop(),c&&(i(7,_=!1),d&&i(21,I=!0))}function p(){U("change",null),U("clear"),i(8,A=""),i(1,a=null)}function te({detail:{values:k}}){a&&(U("change",{data:a.data,name:f,crop_min:k[0],crop_max:k[1]}),U("edit"))}function _e({detail:k}){i(1,a=k),U("change",{data:k.data,name:k.name}),U("upload",k)}function we(){U("stop"),U("end")}let{dragging:G=!1}=e;function de(k){C.call(this,l,k)}function me(k){C.call(this,l,k)}function ce(k){G=k,i(0,G)}const x=()=>i(8,A="edit");function ne(k){He[k?"unshift":"push"](()=>{q=k,i(9,q)})}function b(k){D=k,i(10,D)}return l.$$set=k=>{"value"in k&&i(1,a=k.value),"label"in k&&i(2,s=k.label),"show_label"in k&&i(3,r=k.show_label),"name"in k&&i(17,f=k.name),"source"in k&&i(4,g=k.source),"pending"in k&&i(18,d=k.pending),"streaming"in k&&i(5,c=k.streaming),"autoplay"in k&&i(6,o=k.autoplay),"dragging"in k&&i(0,G=k.dragging),"$$scope"in k&&i(29,t=k.$$scope)},l.$$.update=()=>{if(l.$$.dirty[0]&3932160&&I&&d===!1&&(i(21,I=!1),y&&w)){let k=[y].concat(w);i(20,w=[]),X(k,"stream")}l.$$.dirty[0]&1&&U("drag",G)},[G,a,s,r,g,c,o,_,A,q,D,le,fe,p,te,_e,we,f,d,y,w,I,n,de,me,ce,x,ne,b,t]}class An extends Ae{constructor(e){super(),ye(this,e,vn,pn,Se,{value:1,label:2,show_label:3,name:17,source:4,pending:18,streaming:5,autoplay:6,dragging:0},null,[-1,-1])}}function _l(l){let e,i,n;return i=new xl({props:{formatter:l[9],value:l[0]}}),i.$on("error",l[10]),i.$on("share",l[11]),{c(){e=H("div"),j(i.$$.fragment),h(e,"class","icon-button svelte-1yfus5a")},m(t,a){E(t,e,a),K(i,e,null),n=!0},p(t,a){const s={};a&1&&(s.value=t[0]),i.$set(s)},i(t){n||(P(i.$$.fragment,t),n=!0)},o(t){R(i.$$.fragment,t),n=!1},d(t){t&&V(e),Q(i)}}}function yn(l){let e,i,n,t,a,s;return{c(){e=H("audio"),e.controls=!0,h(e,"preload","metadata"),Te(e.src,i=l[0]?.data)||h(e,"src",i),h(e,"data-testid",n=`${l[1]}-audio`),h(e,"class","svelte-1yfus5a")},m(r,f){E(r,e,f),a||(s=[ml(t=pl.call(null,e,{autoplay:l[3]})),B(e,"play",l[7]),B(e,"pause",l[8]),B(e,"ended",l[5])],a=!0)},p(r,f){f&1&&!Te(e.src,i=r[0]?.data)&&h(e,"src",i),f&2&&n!==(n=`${r[1]}-audio`)&&h(e,"data-testid",n),t&&re(t.update)&&f&8&&t.update.call(null,{autoplay:r[3]})},i:$,o:$,d(r){r&&V(e),a=!1,ge(s)}}}function Sn(l){let e,i;return e=new Wl({props:{size:"small",$$slots:{default:[En]},$$scope:{ctx:l}}}),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const a={};t&8192&&(a.$$scope={dirty:t,ctx:n}),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function En(l){let e,i;return e=new Ne({}),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function Vn(l){let e,i,n,t,a,s,r;e=new gl({props:{show_label:l[2],Icon:Ne,float:!1,label:l[1]||"Audio"}});let f=l[4]&&l[0]!==null&&_l(l);const g=[Sn,yn],d=[];function c(o,_){return o[0]===null?0:1}return t=c(l),a=d[t]=g[t](l),{c(){j(e.$$.fragment),i=J(),f&&f.c(),n=J(),a.c(),s=pe()},m(o,_){K(e,o,_),E(o,i,_),f&&f.m(o,_),E(o,n,_),d[t].m(o,_),E(o,s,_),r=!0},p(o,[_]){const 
m={};_&4&&(m.show_label=o[2]),_&2&&(m.label=o[1]||"Audio"),e.$set(m),o[4]&&o[0]!==null?f?(f.p(o,_),_&17&&P(f,1)):(f=_l(o),f.c(),P(f,1),f.m(n.parentNode,n)):f&&(ue(),R(f,1,1,()=>{f=null}),oe());let A=t;t=c(o),t===A?d[t].p(o,_):(ue(),R(d[A],1,1,()=>{d[A]=null}),oe(),a=d[t],a?a.p(o,_):(a=d[t]=g[t](o),a.c()),P(a,1),a.m(s.parentNode,s))},i(o){r||(P(e.$$.fragment,o),P(f),P(a),r=!0)},o(o){R(e.$$.fragment,o),R(f),R(a),r=!1},d(o){o&&(V(i),V(n),V(s)),Q(e,o),f&&f.d(o),d[t].d(o)}}}function Pn(l,e,i){let{value:n=null}=e,{label:t}=e,{name:a}=e,{show_label:s=!0}=e,{autoplay:r}=e,{show_share_button:f=!1}=e;const g=Be();function d(){g("stop"),g("end")}function c(y){C.call(this,l,y)}function o(y){C.call(this,l,y)}const _=async y=>y?``:"";function m(y){C.call(this,l,y)}function A(y){C.call(this,l,y)}return l.$$set=y=>{"value"in y&&i(0,n=y.value),"label"in y&&i(1,t=y.label),"name"in y&&i(6,a=y.name),"show_label"in y&&i(2,s=y.show_label),"autoplay"in y&&i(3,r=y.autoplay),"show_share_button"in y&&i(4,f=y.show_share_button)},l.$$.update=()=>{l.$$.dirty&65&&n&&g("change",{name:a,data:n?.data})},[n,t,s,r,f,d,a,c,o,_,m,A]}class Rn extends Ae{constructor(e){super(),ye(this,e,Pn,Vn,Se,{value:0,label:1,name:6,show_label:2,autoplay:3,show_share_button:4})}}function Tn(l){let e,i;return e=new Rn({props:{autoplay:l[15],show_label:l[9],show_share_button:l[16],value:l[17],name:l[17]?.name||"audio_file",label:l[8]}}),e.$on("share",l[35]),e.$on("error",l[36]),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const a={};t[0]&32768&&(a.autoplay=n[15]),t[0]&512&&(a.show_label=n[9]),t[0]&65536&&(a.show_share_button=n[16]),t[0]&131072&&(a.value=n[17]),t[0]&131072&&(a.name=n[17]?.name||"audio_file"),t[0]&256&&(a.label=n[8]),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function Bn(l){let e,i;return e=new An({props:{label:l[8],show_label:l[9],value:l[17],name:l[6],source:l[7],pending:l[10],streaming:l[11],autoplay:l[15],$$slots:{default:[Hn]},$$scope:{ctx:l}}}),e.$on("change",l[23]),e.$on("stream",l[24]),e.$on("drag",l[25]),e.$on("edit",l[26]),e.$on("play",l[27]),e.$on("pause",l[28]),e.$on("stop",l[29]),e.$on("end",l[30]),e.$on("start_recording",l[31]),e.$on("stop_recording",l[32]),e.$on("upload",l[33]),e.$on("error",l[34]),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const a={};t[0]&256&&(a.label=n[8]),t[0]&512&&(a.show_label=n[9]),t[0]&131072&&(a.value=n[17]),t[0]&64&&(a.name=n[6]),t[0]&128&&(a.source=n[7]),t[0]&1024&&(a.pending=n[10]),t[0]&2048&&(a.streaming=n[11]),t[0]&32768&&(a.autoplay=n[15]),t[1]&64&&(a.$$scope={dirty:t,ctx:n}),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function Hn(l){let e,i;return e=new Xl({props:{type:"audio"}}),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p:$,i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function Mn(l){let e,i,n,t,a,s;const r=[l[1]];let f={};for(let o=0;o{d[A]=null}),oe(),t=d[n],t?t.p(o,_):(t=d[n]=g[n](o),t.c()),P(t,1),t.m(a.parentNode,a))},i(o){s||(P(e.$$.fragment,o),P(t),s=!0)},o(o){R(e.$$.fragment,o),R(t),s=!1},d(o){o&&(V(i),V(a)),Q(e,o),d[n].d(o)}}}function Fn(l){let e,i;return e=new Gl({props:{variant:l[5]==="dynamic"&&l[0]===null&&l[7]==="upload"?"dashed":"solid",border_mode:l[18]?"focus":"base",padding:!1,elem_id:l[2],elem_classes:l[3],visible:l[4],container:l[12],scale:l[13],min_width:l[14],$$slots:{default:[Mn]},$$scope:{ctx:l}}}),{c(){j(e.$$.fragment)},m(n,t){K(e,n,t),i=!0},p(n,t){const 
a={};t[0]&161&&(a.variant=n[5]==="dynamic"&&n[0]===null&&n[7]==="upload"?"dashed":"solid"),t[0]&262144&&(a.border_mode=n[18]?"focus":"base"),t[0]&4&&(a.elem_id=n[2]),t[0]&8&&(a.elem_classes=n[3]),t[0]&16&&(a.visible=n[4]),t[0]&4096&&(a.container=n[12]),t[0]&8192&&(a.scale=n[13]),t[0]&16384&&(a.min_width=n[14]),t[0]&495587|t[1]&64&&(a.$$scope={dirty:t,ctx:n}),e.$set(a)},i(n){i||(P(e.$$.fragment,n),i=!0)},o(n){R(e.$$.fragment,n),i=!1},d(n){Q(e,n)}}}function Ln(l,e,i){const n=Be();let{elem_id:t=""}=e,{elem_classes:a=[]}=e,{visible:s=!0}=e,{mode:r}=e,{value:f=null}=e,g=null,{name:d}=e,{source:c}=e,{label:o}=e,{root:_}=e,{show_label:m}=e,{pending:A}=e,{streaming:y}=e,{root_url:w}=e,{container:I=!0}=e,{scale:q=null}=e,{min_width:O=void 0}=e,{loading_status:D}=e,{autoplay:F=!1}=e,{show_share_button:W=!1}=e,ee,U;const X=({detail:b})=>i(0,f=b),ie=({detail:b})=>{i(0,f=b),n("stream",f)},le=({detail:b})=>i(18,U=b);function fe(b){C.call(this,l,b)}function p(b){C.call(this,l,b)}function te(b){C.call(this,l,b)}function _e(b){C.call(this,l,b)}function we(b){C.call(this,l,b)}function G(b){C.call(this,l,b)}function de(b){C.call(this,l,b)}function me(b){C.call(this,l,b)}const ce=({detail:b})=>{i(1,D=D||{}),i(1,D.status="error",D),n("error",b)};function x(b){C.call(this,l,b)}function ne(b){C.call(this,l,b)}return l.$$set=b=>{"elem_id"in b&&i(2,t=b.elem_id),"elem_classes"in b&&i(3,a=b.elem_classes),"visible"in b&&i(4,s=b.visible),"mode"in b&&i(5,r=b.mode),"value"in b&&i(0,f=b.value),"name"in b&&i(6,d=b.name),"source"in b&&i(7,c=b.source),"label"in b&&i(8,o=b.label),"root"in b&&i(20,_=b.root),"show_label"in b&&i(9,m=b.show_label),"pending"in b&&i(10,A=b.pending),"streaming"in b&&i(11,y=b.streaming),"root_url"in b&&i(21,w=b.root_url),"container"in b&&i(12,I=b.container),"scale"in b&&i(13,q=b.scale),"min_width"in b&&i(14,O=b.min_width),"loading_status"in b&&i(1,D=b.loading_status),"autoplay"in b&&i(15,F=b.autoplay),"show_share_button"in b&&i(16,W=b.show_share_button)},l.$$.update=()=>{l.$$.dirty[0]&3145729&&i(17,ee=en(f,_,w)),l.$$.dirty[0]&4194305&&JSON.stringify(f)!==JSON.stringify(g)&&(i(22,g=f),n("change"))},[f,D,t,a,s,r,d,c,o,m,A,y,I,q,O,F,W,ee,U,n,_,w,g,X,ie,le,fe,p,te,_e,we,G,de,me,ce,x,ne]}class Un extends Ae{constructor(e){super(),ye(this,e,Ln,Fn,Se,{elem_id:2,elem_classes:3,visible:4,mode:5,value:0,name:6,source:7,label:8,root:20,show_label:9,pending:10,streaming:11,root_url:21,container:12,scale:13,min_width:14,loading_status:1,autoplay:15,show_share_button:16},null,[-1,-1])}get elem_id(){return this.$$.ctx[2]}set elem_id(e){this.$$set({elem_id:e}),z()}get elem_classes(){return this.$$.ctx[3]}set elem_classes(e){this.$$set({elem_classes:e}),z()}get visible(){return this.$$.ctx[4]}set visible(e){this.$$set({visible:e}),z()}get mode(){return this.$$.ctx[5]}set mode(e){this.$$set({mode:e}),z()}get value(){return this.$$.ctx[0]}set value(e){this.$$set({value:e}),z()}get name(){return this.$$.ctx[6]}set name(e){this.$$set({name:e}),z()}get source(){return this.$$.ctx[7]}set source(e){this.$$set({source:e}),z()}get label(){return this.$$.ctx[8]}set label(e){this.$$set({label:e}),z()}get root(){return this.$$.ctx[20]}set root(e){this.$$set({root:e}),z()}get show_label(){return this.$$.ctx[9]}set show_label(e){this.$$set({show_label:e}),z()}get pending(){return this.$$.ctx[10]}set pending(e){this.$$set({pending:e}),z()}get streaming(){return this.$$.ctx[11]}set streaming(e){this.$$set({streaming:e}),z()}get root_url(){return this.$$.ctx[21]}set root_url(e){this.$$set({root_url:e}),z()}get container(){return 
this.$$.ctx[12]}set container(e){this.$$set({container:e}),z()}get scale(){return this.$$.ctx[13]}set scale(e){this.$$set({scale:e}),z()}get min_width(){return this.$$.ctx[14]}set min_width(e){this.$$set({min_width:e}),z()}get loading_status(){return this.$$.ctx[1]}set loading_status(e){this.$$set({loading_status:e}),z()}get autoplay(){return this.$$.ctx[15]}set autoplay(e){this.$$set({autoplay:e}),z()}get show_share_button(){return this.$$.ctx[16]}set show_share_button(e){this.$$set({show_share_button:e}),z()}}const Xn=Un,Gn=["static","dynamic"],Jn=()=>({type:{input_payload:"{ name: string; data: string }",response_object:"{ name: string; data: string, is_file: boolean }"},description:{input_payload:"audio data as object with filename and base64 string",response_object:"object that includes path to audio file. The URL: {ROOT}file={name} contains the data"},example_data:{name:"audio.wav",data:"data:audio/wav;base64,UklGRiQAAABXQVZFZm10IBAAAAABAAEARKwAAIhYAQACABAAZGF0YQAAAAA="}});export{Xn as Component,Jn as document,Gn as modes}; -//# sourceMappingURL=index-b7124075.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio_client/data_classes.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio_client/data_classes.py deleted file mode 100644 index 50f22042d3038925f35311e0cd329c89a91c79d8..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio_client/data_classes.py +++ /dev/null @@ -1,15 +0,0 @@ -from __future__ import annotations - -from typing import TypedDict - -from typing_extensions import NotRequired - - -class FileData(TypedDict): - name: str | None # filename - data: str | None # base64 encoded data - size: NotRequired[int | None] # size in bytes - is_file: NotRequired[ - bool - ] # whether the data corresponds to a file or base64 encoded data - orig_name: NotRequired[str] # original filename diff --git a/spaces/Dewa/Text-Summurisation/app.py b/spaces/Dewa/Text-Summurisation/app.py deleted file mode 100644 index 8fcb42a1799471fba4bdf7d6f5e46a3ae8b279cc..0000000000000000000000000000000000000000 --- a/spaces/Dewa/Text-Summurisation/app.py +++ /dev/null @@ -1,12 +0,0 @@ -from transformers import pipeline -import gradio as gr - -model=pipeline("summarization") - -def predict(prompt): - summary=model(prompt)[0]['summary_text'] - return summary - - -iface = gr.Interface(fn=predict, inputs="text", outputs="text") -iface.launch() diff --git a/spaces/Dinoking/Guccio-AI-Designer/README.md b/spaces/Dinoking/Guccio-AI-Designer/README.md deleted file mode 100644 index 2314160c31b58a510da73488d1321b733f7fc187..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Guccio-AI-Designer -emoji: 👗🧢🥻 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: cc-by-nc-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/shared.py b/spaces/Dorado607/ChuanhuChatGPT/modules/shared.py deleted file mode 100644 index 32e74665b400a56fd1b10bbd4a9566fe332e49bd..0000000000000000000000000000000000000000 --- a/spaces/Dorado607/ChuanhuChatGPT/modules/shared.py +++ /dev/null @@ -1,64 +0,0 @@ -from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -import queue -import openai - -class State: - interrupted = False - multi_api_key = False - 
completion_url = COMPLETION_URL - balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host: str): - api_host = api_host.rstrip("/") - if not api_host.startswith("http"): - api_host = f"https://{api_host}" - if api_host.endswith("/v1"): - api_host = api_host[:-3] - self.completion_url = f"{api_host}/v1/chat/completions" - self.balance_api_url = f"{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = api_host - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - - def set_api_key_queue(self, api_key_list): - self.multi_api_key = True - self.api_key_queue = queue.Queue() - for api_key in api_key_list: - self.api_key_queue.put(api_key) - - def switching_api_key(self, func): - if not hasattr(self, "api_key_queue"): - return func - - def wrapped(*args, **kwargs): - api_key = self.api_key_queue.get() - args[0].api_key = api_key - ret = func(*args, **kwargs) - self.api_key_queue.put(api_key) - return ret - - return wrapped - - -state = State() - -modules_path = os.path.dirname(os.path.realpath(__file__)) -chuanhu_path = os.path.dirname(modules_path) diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training/dataset.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training/dataset.py deleted file mode 100644 index f04842155f754b0aac49b91b1de1de6db017a776..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training/dataset.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Streaming images and labels from datasets created with dataset_tool.py.""" - -import os -import numpy as np -import zipfile -import PIL.Image -import json -import torch -import dnnlib - -try: - import pyspng -except ImportError: - pyspng = None - -# ---------------------------------------------------------------------------- - - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - max_size=None, - # Enable conditioning labels? False = label dimension is zero. - use_labels=False, - # Artificially double the size of the dataset via x-flips. Applied after max_size. - xflip=False, - # Random seed to use when applying max_size. 
- random_seed=0, - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. - self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate( - [self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros( - [self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -# ---------------------------------------------------------------------------- - - -class ImageFolderDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. 
- # Ensure specific resolution, None = highest available. - resolution=None, - # Additional arguments for the Dataset base class. - **super_kwargs, - ): - self._path = path - self._zipfile = None - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join( - root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted( - fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + \ - list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] - for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - -# ---------------------------------------------------------------------------- diff --git a/spaces/Duskfallcrew/Gambit_and_Rogue/app.py b/spaces/Duskfallcrew/Gambit_and_Rogue/app.py deleted file mode 100644 index 0270c26fbfbbbdc5c196d955a83b7e0cbab9d003..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/Gambit_and_Rogue/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import os -import gradio as gr - -API_KEY=os.environ.get('HUGGING_FACE_HUB_TOKEN', None) - -article = """--- -This space was created using [SD Space Creator](https://huggingface.co/spaces/anzorq/sd-space-creator).""" - -gr.Interface.load( - name="models/Duskfallcrew/Gambit_and_Rogue", - title="""Gambit And Rogue""", - description="""Demo for Gambit And Rogue Stable Diffusion model. 
-Coffee is nice - Model Updates on CivIt """, - article=article, - api_key=API_KEY, - ).queue(concurrency_count=20).launch() diff --git a/spaces/ECCV2022/bytetrack/yolox/tracker/byte_tracker.py b/spaces/ECCV2022/bytetrack/yolox/tracker/byte_tracker.py deleted file mode 100644 index 2d004599bba96ff4ba5fc1e9ad943e64361067e3..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/tracker/byte_tracker.py +++ /dev/null @@ -1,330 +0,0 @@ -import numpy as np -from collections import deque -import os -import os.path as osp -import copy -import torch -import torch.nn.functional as F - -from .kalman_filter import KalmanFilter -from yolox.tracker import matching -from .basetrack import BaseTrack, TrackState - -class STrack(BaseTrack): - shared_kalman = KalmanFilter() - def __init__(self, tlwh, score): - - # wait activate - self._tlwh = np.asarray(tlwh, dtype=np.float) - self.kalman_filter = None - self.mean, self.covariance = None, None - self.is_activated = False - - self.score = score - self.tracklet_len = 0 - - def predict(self): - mean_state = self.mean.copy() - if self.state != TrackState.Tracked: - mean_state[7] = 0 - self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance) - - @staticmethod - def multi_predict(stracks): - if len(stracks) > 0: - multi_mean = np.asarray([st.mean.copy() for st in stracks]) - multi_covariance = np.asarray([st.covariance for st in stracks]) - for i, st in enumerate(stracks): - if st.state != TrackState.Tracked: - multi_mean[i][7] = 0 - multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance) - for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)): - stracks[i].mean = mean - stracks[i].covariance = cov - - def activate(self, kalman_filter, frame_id): - """Start a new tracklet""" - self.kalman_filter = kalman_filter - self.track_id = self.next_id() - self.mean, self.covariance = self.kalman_filter.initiate(self.tlwh_to_xyah(self._tlwh)) - - self.tracklet_len = 0 - self.state = TrackState.Tracked - if frame_id == 1: - self.is_activated = True - # self.is_activated = True - self.frame_id = frame_id - self.start_frame = frame_id - - def re_activate(self, new_track, frame_id, new_id=False): - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh) - ) - self.tracklet_len = 0 - self.state = TrackState.Tracked - self.is_activated = True - self.frame_id = frame_id - if new_id: - self.track_id = self.next_id() - self.score = new_track.score - - def update(self, new_track, frame_id): - """ - Update a matched track - :type new_track: STrack - :type frame_id: int - :type update_feature: bool - :return: - """ - self.frame_id = frame_id - self.tracklet_len += 1 - - new_tlwh = new_track.tlwh - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)) - self.state = TrackState.Tracked - self.is_activated = True - - self.score = new_track.score - - @property - # @jit(nopython=True) - def tlwh(self): - """Get current position in bounding box format `(top left x, top left y, - width, height)`. - """ - if self.mean is None: - return self._tlwh.copy() - ret = self.mean[:4].copy() - ret[2] *= ret[3] - ret[:2] -= ret[2:] / 2 - return ret - - @property - # @jit(nopython=True) - def tlbr(self): - """Convert bounding box to format `(min x, min y, max x, max y)`, i.e., - `(top left, bottom right)`. 
- """ - ret = self.tlwh.copy() - ret[2:] += ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_xyah(tlwh): - """Convert bounding box to format `(center x, center y, aspect ratio, - height)`, where the aspect ratio is `width / height`. - """ - ret = np.asarray(tlwh).copy() - ret[:2] += ret[2:] / 2 - ret[2] /= ret[3] - return ret - - def to_xyah(self): - return self.tlwh_to_xyah(self.tlwh) - - @staticmethod - # @jit(nopython=True) - def tlbr_to_tlwh(tlbr): - ret = np.asarray(tlbr).copy() - ret[2:] -= ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_tlbr(tlwh): - ret = np.asarray(tlwh).copy() - ret[2:] += ret[:2] - return ret - - def __repr__(self): - return 'OT_{}_({}-{})'.format(self.track_id, self.start_frame, self.end_frame) - - -class BYTETracker(object): - def __init__(self, args, frame_rate=30): - self.tracked_stracks = [] # type: list[STrack] - self.lost_stracks = [] # type: list[STrack] - self.removed_stracks = [] # type: list[STrack] - - self.frame_id = 0 - self.args = args - #self.det_thresh = args.track_thresh - self.det_thresh = args.track_thresh + 0.1 - self.buffer_size = int(frame_rate / 30.0 * args.track_buffer) - self.max_time_lost = self.buffer_size - self.kalman_filter = KalmanFilter() - - def update(self, output_results, img_info, img_size): - self.frame_id += 1 - activated_starcks = [] - refind_stracks = [] - lost_stracks = [] - removed_stracks = [] - - if output_results.shape[1] == 5: - scores = output_results[:, 4] - bboxes = output_results[:, :4] - else: - output_results = output_results.cpu().numpy() - scores = output_results[:, 4] * output_results[:, 5] - bboxes = output_results[:, :4] # x1y1x2y2 - img_h, img_w = img_info[0], img_info[1] - scale = min(img_size[0] / float(img_h), img_size[1] / float(img_w)) - bboxes /= scale - - remain_inds = scores > self.args.track_thresh - inds_low = scores > 0.1 - inds_high = scores < self.args.track_thresh - - inds_second = np.logical_and(inds_low, inds_high) - dets_second = bboxes[inds_second] - dets = bboxes[remain_inds] - scores_keep = scores[remain_inds] - scores_second = scores[inds_second] - - if len(dets) > 0: - '''Detections''' - detections = [STrack(STrack.tlbr_to_tlwh(tlbr), s) for - (tlbr, s) in zip(dets, scores_keep)] - else: - detections = [] - - ''' Add newly detected tracklets to tracked_stracks''' - unconfirmed = [] - tracked_stracks = [] # type: list[STrack] - for track in self.tracked_stracks: - if not track.is_activated: - unconfirmed.append(track) - else: - tracked_stracks.append(track) - - ''' Step 2: First association, with high score detection boxes''' - strack_pool = joint_stracks(tracked_stracks, self.lost_stracks) - # Predict the current location with KF - STrack.multi_predict(strack_pool) - dists = matching.iou_distance(strack_pool, detections) - if not self.args.mot20: - dists = matching.fuse_score(dists, detections) - matches, u_track, u_detection = matching.linear_assignment(dists, thresh=self.args.match_thresh) - - for itracked, idet in matches: - track = strack_pool[itracked] - det = detections[idet] - if track.state == TrackState.Tracked: - track.update(detections[idet], self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - ''' Step 3: Second association, with low score detection boxes''' - # association the untrack to the low score detections - if len(dets_second) > 0: - '''Detections''' - detections_second = [STrack(STrack.tlbr_to_tlwh(tlbr), s) for - 
(tlbr, s) in zip(dets_second, scores_second)] - else: - detections_second = [] - r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked] - dists = matching.iou_distance(r_tracked_stracks, detections_second) - matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5) - for itracked, idet in matches: - track = r_tracked_stracks[itracked] - det = detections_second[idet] - if track.state == TrackState.Tracked: - track.update(det, self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - for it in u_track: - track = r_tracked_stracks[it] - if not track.state == TrackState.Lost: - track.mark_lost() - lost_stracks.append(track) - - '''Deal with unconfirmed tracks, usually tracks with only one beginning frame''' - detections = [detections[i] for i in u_detection] - dists = matching.iou_distance(unconfirmed, detections) - if not self.args.mot20: - dists = matching.fuse_score(dists, detections) - matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7) - for itracked, idet in matches: - unconfirmed[itracked].update(detections[idet], self.frame_id) - activated_starcks.append(unconfirmed[itracked]) - for it in u_unconfirmed: - track = unconfirmed[it] - track.mark_removed() - removed_stracks.append(track) - - """ Step 4: Init new stracks""" - for inew in u_detection: - track = detections[inew] - if track.score < self.det_thresh: - continue - track.activate(self.kalman_filter, self.frame_id) - activated_starcks.append(track) - """ Step 5: Update state""" - for track in self.lost_stracks: - if self.frame_id - track.end_frame > self.max_time_lost: - track.mark_removed() - removed_stracks.append(track) - - # print('Ramained match {} s'.format(t4-t3)) - - self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked] - self.tracked_stracks = joint_stracks(self.tracked_stracks, activated_starcks) - self.tracked_stracks = joint_stracks(self.tracked_stracks, refind_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.tracked_stracks) - self.lost_stracks.extend(lost_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks) - self.removed_stracks.extend(removed_stracks) - self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks) - # get scores of lost tracks - output_stracks = [track for track in self.tracked_stracks if track.is_activated] - - return output_stracks - - -def joint_stracks(tlista, tlistb): - exists = {} - res = [] - for t in tlista: - exists[t.track_id] = 1 - res.append(t) - for t in tlistb: - tid = t.track_id - if not exists.get(tid, 0): - exists[tid] = 1 - res.append(t) - return res - - -def sub_stracks(tlista, tlistb): - stracks = {} - for t in tlista: - stracks[t.track_id] = t - for t in tlistb: - tid = t.track_id - if stracks.get(tid, 0): - del stracks[tid] - return list(stracks.values()) - - -def remove_duplicate_stracks(stracksa, stracksb): - pdist = matching.iou_distance(stracksa, stracksb) - pairs = np.where(pdist < 0.15) - dupa, dupb = list(), list() - for p, q in zip(*pairs): - timep = stracksa[p].frame_id - stracksa[p].start_frame - timeq = stracksb[q].frame_id - stracksb[q].start_frame - if timep > timeq: - dupb.append(q) - else: - dupa.append(p) - resa = [t for i, t in enumerate(stracksa) if not i in dupa] - resb = [t for i, t in enumerate(stracksb) if not i in dupb] - return resa, resb 
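The `BYTETracker.update()` method above realizes ByteTrack's two-stage association: high-score detections are matched to existing tracks first, leftover tracks get a second pass against the low-score detections, and only then are unmatched tracks marked lost. A minimal driver sketch follows; it is not from the original repo, `detections_per_frame` is a hypothetical iterable of per-frame `(N, 5)` arrays of `[x1, y1, x2, y2, score]` rows, and the `Namespace` fields simply mirror the attributes `update()` reads (`track_thresh`, `track_buffer`, `match_thresh`, `mot20`) with illustrative values.

```python
# Minimal driver sketch for the BYTETracker above (illustrative, not from the repo).
# `detections_per_frame` is a hypothetical iterable of (N, 5) float arrays holding
# [x1, y1, x2, y2, score] rows in pixel coordinates.
from argparse import Namespace

import numpy as np

args = Namespace(track_thresh=0.5, track_buffer=30, match_thresh=0.8, mot20=False)
tracker = BYTETracker(args, frame_rate=30)

img_h, img_w = 720, 1280
for dets in detections_per_frame:
    outputs = np.asarray(dets, dtype=np.float32)
    # With 5 columns, update() reads column 4 as the score directly; passing
    # img_info == img_size makes the internal box rescaling a no-op.
    online_targets = tracker.update(outputs, (img_h, img_w), (img_h, img_w))
    for t in online_targets:
        x, y, w, h = t.tlwh
        print(f"track {t.track_id}: ({x:.1f}, {y:.1f}, {w:.1f}, {h:.1f}) score={t.score:.2f}")
```

Note that `update()` only returns tracks whose `is_activated` flag is set, so a track born after frame 1 stays hidden until it is matched a second time and confirmed.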
diff --git a/spaces/Eddycrack864/Applio-Inference/demucs/raw.py b/spaces/Eddycrack864/Applio-Inference/demucs/raw.py deleted file mode 100644 index d4941ad2d7ed858f490db441f5b46b12bd61ad78..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/demucs/raw.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -from collections import defaultdict, namedtuple -from pathlib import Path - -import musdb -import numpy as np -import torch as th -import tqdm -from torch.utils.data import DataLoader - -from .audio import AudioFile - -ChunkInfo = namedtuple("ChunkInfo", ["file_index", "offset", "local_index"]) - - -class Rawset: - """ - Dataset of raw, normalized, float32 audio files - """ - def __init__(self, path, samples=None, stride=None, channels=2, streams=None): - self.path = Path(path) - self.channels = channels - self.samples = samples - if stride is None: - stride = samples if samples is not None else 0 - self.stride = stride - entries = defaultdict(list) - for root, folders, files in os.walk(self.path, followlinks=True): - folders.sort() - files.sort() - for file in files: - if file.endswith(".raw"): - path = Path(root) / file - name, stream = path.stem.rsplit('.', 1) - entries[(path.parent.relative_to(self.path), name)].append(int(stream)) - - self._entries = list(entries.keys()) - - sizes = [] - self._lengths = [] - ref_streams = sorted(entries[self._entries[0]]) - assert ref_streams == list(range(len(ref_streams))) - if streams is None: - self.streams = ref_streams - else: - self.streams = streams - for entry in sorted(entries.keys()): - streams = entries[entry] - assert sorted(streams) == ref_streams - file = self._path(*entry) - length = file.stat().st_size // (4 * channels) - if samples is None: - sizes.append(1) - else: - if length < samples: - self._entries.remove(entry) - continue - sizes.append((length - samples) // stride + 1) - self._lengths.append(length) - if not sizes: - raise ValueError(f"Empty dataset {self.path}") - self._cumulative_sizes = np.cumsum(sizes) - self._sizes = sizes - - def __len__(self): - return self._cumulative_sizes[-1] - - @property - def total_length(self): - return sum(self._lengths) - - def chunk_info(self, index): - file_index = np.searchsorted(self._cumulative_sizes, index, side='right') - if file_index == 0: - local_index = index - else: - local_index = index - self._cumulative_sizes[file_index - 1] - return ChunkInfo(offset=local_index * self.stride, - file_index=file_index, - local_index=local_index) - - def _path(self, folder, name, stream=0): - return self.path / folder / (name + f'.{stream}.raw') - - def __getitem__(self, index): - chunk = self.chunk_info(index) - entry = self._entries[chunk.file_index] - - length = self.samples or self._lengths[chunk.file_index] - streams = [] - to_read = length * self.channels * 4 - for stream_index, stream in enumerate(self.streams): - offset = chunk.offset * 4 * self.channels - file = open(self._path(*entry, stream=stream), 'rb') - file.seek(offset) - content = file.read(to_read) - assert len(content) == to_read - content = np.frombuffer(content, dtype=np.float32) - content = content.copy() # make writable - streams.append(th.from_numpy(content).view(length, self.channels).t()) - return th.stack(streams, dim=0) - - def name(self, index): - chunk = self.chunk_info(index) 
- folder, name = self._entries[chunk.file_index] - return folder / name - - -class MusDBSet: - def __init__(self, mus, streams=slice(None), samplerate=44100, channels=2): - self.mus = mus - self.streams = streams - self.samplerate = samplerate - self.channels = channels - - def __len__(self): - return len(self.mus.tracks) - - def __getitem__(self, index): - track = self.mus.tracks[index] - return (track.name, AudioFile(track.path).read(channels=self.channels, - seek_time=0, - streams=self.streams, - samplerate=self.samplerate)) - - -def build_raw(mus, destination, normalize, workers, samplerate, channels): - destination.mkdir(parents=True, exist_ok=True) - loader = DataLoader(MusDBSet(mus, channels=channels, samplerate=samplerate), - batch_size=1, - num_workers=workers, - collate_fn=lambda x: x[0]) - for name, streams in tqdm.tqdm(loader): - if normalize: - ref = streams[0].mean(dim=0) # use mono mixture as reference - streams = (streams - ref.mean()) / ref.std() - for index, stream in enumerate(streams): - open(destination / (name + f'.{index}.raw'), "wb").write(stream.t().numpy().tobytes()) - - -def main(): - parser = argparse.ArgumentParser('rawset') - parser.add_argument('--workers', type=int, default=10) - parser.add_argument('--samplerate', type=int, default=44100) - parser.add_argument('--channels', type=int, default=2) - parser.add_argument('musdb', type=Path) - parser.add_argument('destination', type=Path) - - args = parser.parse_args() - - build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="train"), - args.destination / "train", - normalize=True, - channels=args.channels, - samplerate=args.samplerate, - workers=args.workers) - build_raw(musdb.DB(root=args.musdb, subsets=["train"], split="valid"), - args.destination / "valid", - normalize=True, - samplerate=args.samplerate, - channels=args.channels, - workers=args.workers) - - -if __name__ == "__main__": - main() diff --git a/spaces/Egrt/MaskGAN/utils/utils_fit.py b/spaces/Egrt/MaskGAN/utils/utils_fit.py deleted file mode 100644 index f747fea35eed4277940bcc2345799ff15afc29dd..0000000000000000000000000000000000000000 --- a/spaces/Egrt/MaskGAN/utils/utils_fit.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import torch.nn.functional as F -from models.SwinIR import compute_gradient_penalty -from tqdm import tqdm - -from .utils import get_lr, show_result -from .utils_metrics import PSNR, SSIM - - - -def fit_one_epoch(writer, G_model_train, D_model_train, G_model, D_model, VGG_feature_model, ResNeSt_model, G_optimizer, D_optimizer, BCEWithLogits_loss, L1_loss, Face_loss, epoch, epoch_size, gen, Epoch, cuda, batch_size, save_interval): - G_total_loss = 0 - D_total_loss = 0 - G_total_PSNR = 0 - G_total_SSIM = 0 - - with tqdm(total=epoch_size,desc=f'Epoch {epoch + 1}/{Epoch}',postfix=dict,mininterval=0.3, ncols=150) as pbar: - for iteration, batch in enumerate(gen): - if iteration >= epoch_size: - break - - with torch.no_grad(): - lr_images, hr_images = batch - lr_images, hr_images = torch.from_numpy(lr_images).type(torch.FloatTensor), torch.from_numpy(hr_images).type(torch.FloatTensor) - y_real, y_fake = torch.ones(batch_size), torch.zeros(batch_size) - if cuda: - lr_images, hr_images, y_real, y_fake = lr_images.cuda(), hr_images.cuda(), y_real.cuda(), y_fake.cuda() - - #-------------------------------------------------# - # 训练判别器 - #-------------------------------------------------# - D_optimizer.zero_grad() - - D_result_r = D_model_train(hr_images) - - G_result = G_model_train(lr_images) - D_result_f = 
D_model_train(G_result).squeeze() - D_result_rf = D_result_r - D_result_f.mean() - D_result_fr = D_result_f - D_result_r.mean() - D_train_loss_rf = BCEWithLogits_loss(D_result_rf, y_real) - D_train_loss_fr = BCEWithLogits_loss(D_result_fr, y_fake) - gradient_penalty = compute_gradient_penalty(D_model_train, hr_images, G_result) - D_train_loss = 10 * gradient_penalty + (D_train_loss_rf + D_train_loss_fr) / 2 - D_train_loss.backward() - - D_optimizer.step() - - #-------------------------------------------------# - # 训练生成器 - #-------------------------------------------------# - G_optimizer.zero_grad() - - G_result = G_model_train(lr_images) - image_loss = L1_loss(G_result, hr_images) - - D_result_r = D_model_train(hr_images) - D_result_f = D_model_train(G_result).squeeze() - D_result_rf = D_result_r - D_result_f.mean() - D_result_fr = D_result_f - D_result_r.mean() - D_train_loss_rf = BCEWithLogits_loss(D_result_rf, y_fake) - D_train_loss_fr = BCEWithLogits_loss(D_result_fr, y_real) - adversarial_loss = (D_train_loss_rf + D_train_loss_fr) / 2 - - perception_loss = L1_loss(VGG_feature_model(G_result), VGG_feature_model(hr_images)) - # 进行下采样以适配人脸识别网络 - G_result_face = F.interpolate(G_result, size=(112, 112), mode='bicubic', align_corners=True) - hr_images_face = F.interpolate(hr_images, size=(112, 112), mode='bicubic', align_corners=True) - face_loss = torch.mean(1. - Face_loss(ResNeSt_model(G_result_face), ResNeSt_model(hr_images_face))) - G_train_loss = 3.0 * image_loss + 1.0 * adversarial_loss + 0.9 * perception_loss + 2.5 * face_loss - - G_train_loss.backward() - G_optimizer.step() - - G_total_loss += G_train_loss.item() - D_total_loss += D_train_loss.item() - - with torch.no_grad(): - G_total_PSNR += PSNR(G_result, hr_images).item() - G_total_SSIM += SSIM(G_result, hr_images).item() - - pbar.set_postfix(**{'G_loss' : G_total_loss / (iteration + 1), - 'D_loss' : D_total_loss / (iteration + 1), - 'G_PSNR' : G_total_PSNR / (iteration + 1), - 'G_SSIM' : G_total_SSIM / (iteration + 1), - 'lr' : get_lr(G_optimizer)}) - pbar.update(1) - - if iteration % save_interval == 0: - show_result(epoch + 1, G_model_train, lr_images, hr_images) - writer.add_scalar('G_loss', G_total_loss / (iteration + 1), epoch + 1) - writer.add_scalar('D_loss', D_total_loss / (iteration + 1), epoch + 1) - writer.add_scalar('G_PSNR', G_total_PSNR / (iteration + 1), epoch + 1) - writer.add_scalar('G_SSIM', G_total_SSIM / (iteration + 1), epoch + 1) - writer.add_scalar('lr', get_lr(G_optimizer), epoch + 1) - print('Epoch:'+ str(epoch + 1) + '/' + str(Epoch)) - print('G Loss: %.4f || D Loss: %.4f ' % (G_total_loss / epoch_size, D_total_loss / epoch_size)) - print('Saving state, iter:', str(epoch+1)) - # 保存模型权重 - torch.save(G_model, 'logs/G_Epoch%d-GLoss%.4f-DLoss%.4f.pth'%((epoch + 1), G_total_loss / epoch_size, D_total_loss / epoch_size)) - torch.save(D_model, 'logs/D_Epoch%d-GLoss%.4f-DLoss%.4f.pth'%((epoch + 1), G_total_loss / epoch_size, D_total_loss / epoch_size)) diff --git a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/models.py b/spaces/Enterprisium/Easy_GUI/lib/infer_pack/models.py deleted file mode 100644 index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000 --- a/spaces/Enterprisium/Easy_GUI/lib/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons 
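The discriminator update in `utils_fit.py` above uses a relativistic average (RaGAN-style) objective: each logit is scored against the mean logit of the opposite class, with a gradient penalty added on top. A minimal sketch of just that loss arithmetic, with random placeholder logits standing in for `D_model_train` outputs:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def relativistic_d_loss(d_real, d_fake):
    # Each logit is compared against the mean logit of the opposite class
    real_vs_fake = d_real - d_fake.mean()
    fake_vs_real = d_fake - d_real.mean()
    loss_real = bce(real_vs_fake, torch.ones_like(real_vs_fake))
    loss_fake = bce(fake_vs_real, torch.zeros_like(fake_vs_real))
    return (loss_real + loss_fake) / 2

d_real = torch.randn(8)  # placeholder: discriminator logits on real images
d_fake = torch.randn(8)  # placeholder: logits on generator outputs
print(relativistic_d_loss(d_real, d_fake))
```

The generator step in the same file reuses this construction with the labels swapped, which pushes fake logits above the average real logit rather than merely above zero.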
-from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - 
gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - 
sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - 
noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - 
"48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - 
n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, 
- spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = 
upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j 
in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Faridmaruf/rvc-Blue-archives/app.py b/spaces/Faridmaruf/rvc-Blue-archives/app.py deleted file mode 100644 index b545c33df5f8714308d872bc6cab208485e14e6b..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-Blue-archives/app.py +++ /dev/null @@ -1,516 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] 
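`DiscriminatorP.forward` above hinges on one reshape: reflection-pad the waveform to a multiple of `period`, then `view` it as a 2-D grid with one period per row, so the `(kernel_size, 1)` convolutions compare samples exactly one period apart. The same fold in isolation:

```python
import torch
import torch.nn.functional as F

def fold_by_period(x, period):
    # (batch, channels, time) -> (batch, channels, time // period, period)
    b, c, t = x.shape
    if t % period != 0:
        n_pad = period - (t % period)
        x = F.pad(x, (0, n_pad), "reflect")  # pad the time axis up to a multiple
        t = t + n_pad
    return x.view(b, c, t // period, period)

x = torch.randn(2, 1, 16000)
print(fold_by_period(x, 5).shape)  # torch.Size([2, 1, 3200, 5])
print(fold_by_period(x, 7).shape)  # torch.Size([2, 1, 2286, 7])
```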
-f0method_info = "" -if limitation: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe is good but requires a GPU. (Default: PM)" - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - print(f"Converting using {model_name}...") - if vc_audio_mode in ("Input path", "Youtube") and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_name} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, None - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # 
n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") - if audio_provider == "Youtube": - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def 
change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "
<div align='center'>\n\n"+ - "# Multi Model RVC Inference\n\n"+
- "[![Repository](https://img.shields.io/badge/Github-Multi%20Model%20RVC%20Inference-blue?style=for-the-badge&logo=github)](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+
- "</div>" - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"### <center> {description}") - with gr.Tabs(): - if not models: - gr.Markdown("# <center> No Model Loaded.") - gr.Markdown("## <center> Please add model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '<div align="center">' - f'<div>{title}</div>\n'+ - f'<div>RVC {model_version} Model</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img src="file/{cover}">' if cover else "")+ - '</div>
    ' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/Ferion/image-matting-app/ppmatting/core/predict.py b/spaces/Ferion/image-matting-app/ppmatting/core/predict.py deleted file mode 100644 index e7ff765d9c62f3cb7b758d1756632cfe65cab0f1..0000000000000000000000000000000000000000 --- a/spaces/Ferion/image-matting-app/ppmatting/core/predict.py +++ /dev/null @@ -1,58 +0,0 @@ -from typing import Optional - -import numpy as np -import paddle -import paddle.nn.functional as F - - -def reverse_transform(alpha, trans_info): - """recover pred to origin shape""" - for item in trans_info[::-1]: - if item[0] == "resize": - h, w = item[1][0], item[1][1] - alpha = F.interpolate(alpha, [h, w], mode="bilinear") - elif item[0] == "padding": - h, w = item[1][0], item[1][1] - alpha = alpha[:, :, 0:h, 0:w] - else: - raise Exception(f"Unexpected info '{item[0]}' in im_info") - - return alpha - - -def preprocess(img, transforms, trimap=None): - data = {} - data["img"] = img - if trimap is not None: - data["trimap"] = trimap - data["gt_fields"] = ["trimap"] - data["trans_info"] = [] - data = transforms(data) - data["img"] = paddle.to_tensor(data["img"]) - data["img"] = data["img"].unsqueeze(0) - if trimap is not None: - data["trimap"] = paddle.to_tensor(data["trimap"]) - data["trimap"] = data["trimap"].unsqueeze((0, 1)) - - return data - - -def predict( - model, - transforms, - image: np.ndarray, - trimap: Optional[np.ndarray] = None, -): - with paddle.no_grad(): - data = preprocess(img=image, transforms=transforms, trimap=None) - - alpha = model(data) - - alpha = reverse_transform(alpha, data["trans_info"]) - alpha = alpha.numpy().squeeze() - - if trimap is not None: - alpha[trimap == 0] = 0 - alpha[trimap == 255] = 1. 
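`predict` in `ppmatting/core/predict.py` above maps the matte back to the original image geometry by replaying `trans_info` in reverse: resizes are undone with bilinear interpolation and paddings are cropped off. A small sketch of that unwind with a fabricated transform trace:

```python
import paddle
import paddle.nn.functional as F

def undo_transforms(alpha, trans_info):
    # Walk the recorded transforms backwards, inverting each one
    for op, (h, w) in reversed(trans_info):
        if op == "resize":
            alpha = F.interpolate(alpha, [h, w], mode="bilinear")
        elif op == "padding":
            alpha = alpha[:, :, :h, :w]
    return alpha

alpha = paddle.rand([1, 1, 512, 512])                       # matte at network resolution
trace = [("resize", (400, 600)), ("padding", (512, 512))]   # made-up example trace
print(undo_transforms(alpha, trace).shape)                  # [1, 1, 400, 600]
```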
- - return alpha diff --git a/spaces/FourthBrainGenAI/AI-Superstar-Space/README.md b/spaces/FourthBrainGenAI/AI-Superstar-Space/README.md deleted file mode 100644 index e1a64b336b8a34341d9f9719feddb305e92f3438..0000000000000000000000000000000000000000 --- a/spaces/FourthBrainGenAI/AI-Superstar-Space/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI Superstar Space -emoji: ⚡ -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: bigscience-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L9.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L9.py deleted file mode 100644 index b0089c789cd87cfd3b1badb2fc45cb1b88041eab..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/vencoder/ContentVec256L9.py +++ /dev/null @@ -1,35 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch -from fairseq import checkpoint_utils - -class ContentVec256L9(SpeechEncoder): - def __init__(self,vec_path = "pretrain/checkpoint_best_legacy_500.pt",device=None): - print("load model(s) from {}".format(vec_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.model = models[0].to(self.dev) - self.model.eval() - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav.device), - "padding_mask": padding_mask.to(wav.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = self.model.final_proj(logits[0]) - return feats.transpose(1, 2) diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/train/losses.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/train/losses.py deleted file mode 100644 index b1b263e4c205e78ffe970f622ab6ff68f36d3b17..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/train/losses.py +++ /dev/null @@ -1,58 +0,0 @@ -import torch - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg**2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q 
- 0.5 - kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/FriendlyUser/YoutubeDownloaderSubber/README.md b/spaces/FriendlyUser/YoutubeDownloaderSubber/README.md deleted file mode 100644 index 4dcf491ad9e93d021d0511b8c8879052641f6c07..0000000000000000000000000000000000000000 --- a/spaces/FriendlyUser/YoutubeDownloaderSubber/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YoutubeDownloaderSubber -emoji: ⚡ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GXSA/bingo/src/components/providers.tsx b/spaces/GXSA/bingo/src/components/providers.tsx deleted file mode 100644 index 892226412d80fe0b05211911b9e245cd22876460..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/components/providers.tsx +++ /dev/null @@ -1,15 +0,0 @@ -'use client' - -import * as React from 'react' -import { ThemeProvider as NextThemesProvider } from 'next-themes' -import { ThemeProviderProps } from 'next-themes/dist/types' - -import { TooltipProvider } from '@/components/ui/tooltip' - -export function Providers({ children, ...props }: ThemeProviderProps) { - return ( - - {children} - - ) -} diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder_train.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder_train.py deleted file mode 100644 index d712ffa3e6c92a091aa18dc90f0027f46940e400..0000000000000000000000000000000000000000 --- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder_train.py +++ /dev/null @@ -1,56 +0,0 @@ -from utils.argutils import print_args -from vocoder.train import train -from pathlib import Path -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="Trains the vocoder from the synthesizer audios and the GTA synthesized mels, " - "or ground truth mels.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - - parser.add_argument("run_id", type=str, help= \ - "Name for this model instance. If a model state from the same run ID was previously " - "saved, the training will restart from there. Pass -f to overwrite saved states and " - "restart from scratch.") - parser.add_argument("datasets_root", type=str, help= \ - "Path to the directory containing your SV2TTS directory. Specifying --syn_dir or --voc_dir " - "will take priority over this argument.") - parser.add_argument("--syn_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the synthesizer directory that contains the ground truth mel spectrograms, " - "the wavs and the embeds. Defaults to /SV2TTS/synthesizer/.") - parser.add_argument("--voc_dir", type=str, default=argparse.SUPPRESS, help= \ - "Path to the vocoder directory that contains the GTA synthesized mel spectrograms. " - "Defaults to /SV2TTS/vocoder/. Unused if --ground_truth is passed.") - parser.add_argument("-m", "--models_dir", type=str, default="vocoder/saved_models/", help=\ - "Path to the directory that will contain the saved model weights, as well as backups " - "of those weights and wavs generated during training.") - parser.add_argument("-g", "--ground_truth", action="store_true", help= \ - "Train on ground truth spectrograms (/SV2TTS/synthesizer/mels).") - parser.add_argument("-s", "--save_every", type=int, default=1000, help= \ - "Number of steps between updates of the model on the disk. 
Set to 0 to never save the " - "model.") - parser.add_argument("-b", "--backup_every", type=int, default=25000, help= \ - "Number of steps between backups of the model. Set to 0 to never make backups of the " - "model.") - parser.add_argument("-f", "--force_restart", action="store_true", help= \ - "Do not load any saved model and restart from scratch.") - args = parser.parse_args() - - # Process the arguments - if not hasattr(args, "syn_dir"): - args.syn_dir = Path(args.datasets_root, "SV2TTS", "synthesizer") - args.syn_dir = Path(args.syn_dir) - if not hasattr(args, "voc_dir"): - args.voc_dir = Path(args.datasets_root, "SV2TTS", "vocoder") - args.voc_dir = Path(args.voc_dir) - del args.datasets_root - args.models_dir = Path(args.models_dir) - args.models_dir.mkdir(exist_ok=True) - - # Run the training - print_args(args, parser) - train(**vars(args)) - \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py deleted file mode 100644 index ad6ad47696e6aeb2b3505abab0bd2d49d3b7aa83..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict(plugins=[ - dict( - cfg=dict(type='ContextBlock', ratio=1. / 16), - stages=(False, True, True, True), - position='after_conv3') - ])) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/yolo/yolov3_d53_320_273e_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/yolo/yolov3_d53_320_273e_coco.py deleted file mode 100644 index 87359f6fb66d94de10b8e3797ee3eec93a19cb26..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/yolo/yolov3_d53_320_273e_coco.py +++ /dev/null @@ -1,42 +0,0 @@ -_base_ = './yolov3_d53_mstrain-608_273e_coco.py' -# dataset settings -img_norm_cfg = dict(mean=[0, 0, 0], std=[255., 255., 255.], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='PhotoMetricDistortion'), - dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 2)), - dict( - type='MinIoURandomCrop', - min_ious=(0.4, 0.5, 0.6, 0.7, 0.8, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(320, 320), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(320, 320), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/backup.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/backup.py deleted file mode 100644 index 
4797a2af766db8e786261bc100d617d843cd31bb..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/backup.py +++ /dev/null @@ -1,16 +0,0 @@ -import os -from typing import List -import wget - -from .configs.base_config import base_cfg - -def backup(cfg: base_cfg, urls: List[str]): - current_experiment_dir_path = os.path.join( - cfg.experiment_dir_path, - cfg.experiment_name - ) - - os.makedirs(current_experiment_dir_path, exist_ok=True) - - for url in urls: - wget.download(url, out = current_experiment_dir_path) diff --git a/spaces/Hallucinate/demo/taming/models/dummy_cond_stage.py b/spaces/Hallucinate/demo/taming/models/dummy_cond_stage.py deleted file mode 100644 index 6e19938078752e09b926a3e749907ee99a258ca0..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/taming/models/dummy_cond_stage.py +++ /dev/null @@ -1,22 +0,0 @@ -from torch import Tensor - - -class DummyCondStage: - def __init__(self, conditional_key): - self.conditional_key = conditional_key - self.train = None - - def eval(self): - return self - - @staticmethod - def encode(c: Tensor): - return c, None, (None, None, c) - - @staticmethod - def decode(c: Tensor): - return c - - @staticmethod - def to_rgb(c: Tensor): - return c diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/append_token_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/append_token_dataset.py deleted file mode 100644 index 87695bd0f5fcb6b10247e3b743340623e6438cc1..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/append_token_dataset.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . import BaseWrapperDataset - - -class AppendTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, token=None): - super().__init__(dataset) - self.token = token - if token is not None: - self._sizes = np.array(dataset.sizes) + 1 - else: - self._sizes = dataset.sizes - - def __getitem__(self, idx): - item = self.dataset[idx] - if self.token is not None: - item = torch.cat([item, item.new([self.token])]) - return item - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - n = self.dataset.num_tokens(index) - if self.token is not None: - n += 1 - return n - - def size(self, index): - n = self.dataset.size(index) - if self.token is not None: - n += 1 - return n diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/transformer_base.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/transformer_base.py deleted file mode 100644 index b4d5604dbbae979b424650882d33b45ebab323e6..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/transformer/transformer_base.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoderDecoderModel -from fairseq.models.transformer import ( - TransformerEncoderBase, - TransformerDecoderBase, - TransformerConfig, -) -from torch import Tensor - - -class TransformerModelBase(FairseqEncoderDecoderModel): - """ - Transformer model from `"Attention Is All You Need" (Vaswani, et al, 2017) - `_. - - Args: - encoder (TransformerEncoder): the encoder - decoder (TransformerDecoder): the decoder - - The Transformer model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.transformer_parser - :prog: - """ - - def __init__(self, cfg, encoder, decoder): - super().__init__(encoder, decoder) - self.cfg = cfg - self.supports_align_args = True - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - # we want to build the args recursively in this case. - gen_parser_from_dataclass( - parser, TransformerConfig(), delete_default=False, with_prefix="" - ) - - @classmethod - def build_model(cls, cfg, task): - """Build a new model instance.""" - - # -- TODO T96535332 - # bug caused by interaction between OmegaConf II and argparsing - cfg.decoder.input_dim = int(cfg.decoder.input_dim) - cfg.decoder.output_dim = int(cfg.decoder.output_dim) - # -- - - if cfg.encoder.layers_to_keep: - cfg.encoder.layers = len(cfg.encoder.layers_to_keep.split(",")) - if cfg.decoder.layers_to_keep: - cfg.decoder.layers = len(cfg.decoder.layers_to_keep.split(",")) - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - if cfg.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if cfg.encoder.embed_dim != cfg.decoder.embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if cfg.decoder.embed_path and ( - cfg.decoder.embed_path != cfg.encoder.embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - cfg.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = cls.build_embedding( - cfg, tgt_dict, cfg.decoder.embed_dim, cfg.decoder.embed_path - ) - if cfg.offload_activations: - cfg.checkpoint_activations = True # offloading implies checkpointing - encoder = cls.build_encoder(cfg, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens) - if not cfg.share_all_embeddings: - # fsdp_wrap is a no-op when --ddp-backend != fully_sharded - encoder = fsdp_wrap(encoder, min_num_params=cfg.min_params_to_wrap) - decoder = fsdp_wrap(decoder, min_num_params=cfg.min_params_to_wrap) - return cls(cfg, encoder, decoder) - - @classmethod - def build_embedding(cls, cfg, dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - 
utils.load_embedding(embed_dict, dictionary, emb) - return emb - - @classmethod - def build_encoder(cls, cfg, src_dict, embed_tokens): - return TransformerEncoderBase(cfg, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, cfg, tgt_dict, embed_tokens): - return TransformerDecoderBase( - cfg, - tgt_dict, - embed_tokens, - no_encoder_attn=cfg.no_cross_attention, - ) - - # TorchScript doesn't support optional arguments with variable length (**kwargs). - # Current workaround is to add union of all arguments in child classes. - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - return_all_hiddens: bool = True, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - """ - Run the forward pass for an encoder-decoder model. - - Copied from the base class, but without ``**kwargs``, - which are not supported by TorchScript. - """ - encoder_out = self.encoder( - src_tokens, src_lengths=src_lengths, return_all_hiddens=return_all_hiddens - ) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - return decoder_out - - # Since get_normalized_probs is in the Fairseq Model which is not scriptable, - # I rewrite the get_normalized_probs from Base Class to call the - # helper function in the Base Class. - @torch.jit.export - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m diff --git a/spaces/Hazem/roop/roop/predicter.py b/spaces/Hazem/roop/roop/predicter.py deleted file mode 100644 index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000 --- a/spaces/Hazem/roop/roop/predicter.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy -import opennsfw2 -from PIL import Image - -from roop.typing import Frame - -MAX_PROBABILITY = 0.85 - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - model = opennsfw2.make_open_nsfw_model() - views = numpy.expand_dims(image, axis=0) - _, probability = model.predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/Heckeroo/Cyberpunk-Anime-Diffusion/README.md b/spaces/Heckeroo/Cyberpunk-Anime-Diffusion/README.md deleted file mode 100644 index b1463db1ea7f0d047b61bcf22a9afd82d301167c..0000000000000000000000000000000000000000 --- a/spaces/Heckeroo/Cyberpunk-Anime-Diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cyberpunk Anime Diffusion -emoji: 📈 
-colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.13.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Hina4867/bingo/next.config.js b/spaces/Hina4867/bingo/next.config.js deleted file mode 100644 index 0e6ccd7fbc91d0459eaaff3e968ce0556789c605..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/next.config.js +++ /dev/null @@ -1,38 +0,0 @@ -/** @type {import('next').NextConfig} */ -const nextConfig = { - // output: 'export', - // assetPrefix: '.', - webpack: (config, { isServer }) => { - if (!isServer) { - config.resolve = { - ...config.resolve, - fallback: { - 'bufferutil': false, - 'utf-8-validate': false, - http: false, - https: false, - stream: false, - // fixes proxy-agent dependencies - net: false, - dns: false, - tls: false, - assert: false, - // fixes next-i18next dependencies - path: false, - fs: false, - // fixes mapbox dependencies - events: false, - // fixes sentry dependencies - process: false - } - }; - } - config.module.exprContextCritical = false; - - return config; - }, -} - -module.exports = (...args) => { - return nextConfig -} diff --git a/spaces/Hushh/Generative_QNA/variables.py b/spaces/Hushh/Generative_QNA/variables.py deleted file mode 100644 index 68d36f7c7359c75c61424517b57ddc45fa773a01..0000000000000000000000000000000000000000 --- a/spaces/Hushh/Generative_QNA/variables.py +++ /dev/null @@ -1,14 +0,0 @@ -# from chromadb.config import Settings - -EMBEDDING_MODEL_NAME = "sentence-transformers/paraphrase-albert-small-v2" #for embedding the text from the documents. -MODEL_ID = "TheBloke/Llama-2-7b-Chat-GPTQ" -MODEL_BASENAME = "model" - -__import__('pysqlite3') -import sys -sys.modules['sqlite3'] = sys.modules.pop('pysqlite3') - -# CHROMA_SETTINGS = Settings( -# anonymized_telemetry=False, -# is_persistent=True, -# ) diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py deleted file mode 100644 index eb0f7c360d749fd9d489b40b04dae8652b095098..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -import torch -import numpy as np -from examples.textless_nlp.gslm.unit2speech.tacotron2.text import ( - EOS_TOK, - SOS_TOK, - code_to_sequence, - text_to_sequence, -) -from examples.textless_nlp.gslm.unit2speech.tacotron2.utils import ( - load_code_dict, -) - - -class TacotronInputDataset: - def __init__(self, hparams, append_str=""): - self.is_text = getattr(hparams, "text_or_code", "text") == "text" - if not self.is_text: - self.code_dict = load_code_dict(hparams.code_dict) - self.code_key = hparams.code_key - self.add_sos = hparams.add_sos - self.add_eos = hparams.add_eos - self.collapse_code = hparams.collapse_code - self.append_str = append_str - - def process_code(self, inp_str): - inp_toks = inp_str.split() - if self.add_sos: - inp_toks = [SOS_TOK] + inp_toks - if self.add_eos: - inp_toks = inp_toks + [EOS_TOK] - return code_to_sequence(inp_toks, self.code_dict, self.collapse_code) - - def process_text(self, inp_str): - return text_to_sequence(inp_str, ["english_cleaners"]) - - def get_tensor(self, inp_str): - # uid, txt, inp_str = self._get_data(idx) - inp_str = inp_str + self.append_str - if self.is_text: - inp_toks = self.process_text(inp_str) - else: - inp_toks = self.process_code(inp_str) - return torch.from_numpy(np.array(inp_toks)).long() - - def __len__(self): - return len(self.data) diff --git a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/predictor.py b/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/predictor.py deleted file mode 100644 index 57c089d1fc4a6bbf5786e1ef62c59e22d582f5aa..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/segment_anything/segment_anything/predictor.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from segment_anything.modeling import Sam - -from typing import Optional, Tuple - -from .utils.transforms import ResizeLongestSide - - -class SamPredictor: - def __init__( - self, - sam_model: Sam, - ) -> None: - """ - Uses SAM to calculate the image embedding for an image, and then - allow repeated, efficient mask prediction given prompts. - - Arguments: - sam_model (Sam): The model to use for mask prediction. - """ - super().__init__() - self.model = sam_model - self.transform = ResizeLongestSide(sam_model.image_encoder.img_size) - self.reset_image() - - def set_image( - self, - image: np.ndarray, - image_format: str = "RGB", - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. - - Arguments: - image (np.ndarray): The image for calculating masks. Expects an - image in HWC uint8 format, with pixel values in [0, 255]. - image_format (str): The color format of the image, in ['RGB', 'BGR']. - """ - assert image_format in [ - "RGB", - "BGR", - ], f"image_format must be in ['RGB', 'BGR'], is {image_format}." 
- if image_format != self.model.image_format: - image = image[..., ::-1] - - # Transform the image to the form expected by the model - input_image = self.transform.apply_image(image) - input_image_torch = torch.as_tensor(input_image, device=self.device) - input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :] - - self.set_torch_image(input_image_torch, image.shape[:2]) - - @torch.no_grad() - def set_torch_image( - self, - transformed_image: torch.Tensor, - original_image_size: Tuple[int, ...], - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. Expects the input - image to be already transformed to the format expected by the model. - - Arguments: - transformed_image (torch.Tensor): The input image, with shape - 1x3xHxW, which has been transformed with ResizeLongestSide. - original_image_size (tuple(int, int)): The size of the image - before transformation, in (H, W) format. - """ - assert ( - len(transformed_image.shape) == 4 - and transformed_image.shape[1] == 3 - and max(*transformed_image.shape[2:]) == self.model.image_encoder.img_size - ), f"set_torch_image input must be BCHW with long side {self.model.image_encoder.img_size}." - self.reset_image() - - self.original_size = original_image_size - self.input_size = tuple(transformed_image.shape[-2:]) - input_image = self.model.preprocess(transformed_image) - self.features = self.model.image_encoder(input_image) - self.is_image_set = True - - def predict( - self, - point_coords: Optional[np.ndarray] = None, - point_labels: Optional[np.ndarray] = None, - box: Optional[np.ndarray] = None, - mask_input: Optional[np.ndarray] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks for the given input prompts, using the currently set image. - - Arguments: - point_coords (np.ndarray or None): A Nx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (np.ndarray or None): A length N array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - box (np.ndarray or None): A length 4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form 1xHxW, where - for SAM, H=W=256. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (np.ndarray): The output masks in CxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (np.ndarray): An array of length C containing the model's - predictions for the quality of each mask. - (np.ndarray): An array of shape CxHxW, where C is the number - of masks and H=W=256. These low resolution logits can be passed to - a subsequent iteration as mask input. - """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) 
before mask prediction.") - - # Transform input prompts - coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None - if point_coords is not None: - assert ( - point_labels is not None - ), "point_labels must be supplied if point_coords is supplied." - point_coords = self.transform.apply_coords(point_coords, self.original_size) - coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.device) - labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) - coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :] - if box is not None: - box = self.transform.apply_boxes(box, self.original_size) - box_torch = torch.as_tensor(box, dtype=torch.float, device=self.device) - box_torch = box_torch[None, :] - if mask_input is not None: - mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.device) - mask_input_torch = mask_input_torch[None, :, :, :] - - masks, iou_predictions, low_res_masks = self.predict_torch( - coords_torch, - labels_torch, - box_torch, - mask_input_torch, - multimask_output, - return_logits=return_logits, - ) - - masks = masks[0].detach().cpu().numpy() - iou_predictions = iou_predictions[0].detach().cpu().numpy() - low_res_masks = low_res_masks[0].detach().cpu().numpy() - return masks, iou_predictions, low_res_masks - - @torch.no_grad() - def predict_torch( - self, - point_coords: Optional[torch.Tensor], - point_labels: Optional[torch.Tensor], - boxes: Optional[torch.Tensor] = None, - mask_input: Optional[torch.Tensor] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks for the given input prompts, using the currently set image. - Input prompts are batched torch tensors and are expected to already be - transformed to the input frame using ResizeLongestSide. - - Arguments: - point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (torch.Tensor or None): A BxN array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - box (np.ndarray or None): A Bx4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form Bx1xHxW, where - for SAM, H=W=256. Masks returned by a previous iteration of the - predict method do not need further transformation. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (torch.Tensor): The output masks in BxCxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (torch.Tensor): An array of shape BxC containing the model's - predictions for the quality of each mask. - (torch.Tensor): An array of shape BxCxHxW, where C is the number - of masks and H=W=256. These low res logits can be passed to - a subsequent iteration as mask input. 
- """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) before mask prediction.") - - if point_coords is not None: - points = (point_coords, point_labels) - else: - points = None - - # Embed prompts - sparse_embeddings, dense_embeddings = self.model.prompt_encoder( - points=points, - boxes=boxes, - masks=mask_input, - ) - - # Predict masks - low_res_masks, iou_predictions = self.model.mask_decoder( - image_embeddings=self.features, - image_pe=self.model.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embeddings, - dense_prompt_embeddings=dense_embeddings, - multimask_output=multimask_output, - ) - - # Upscale the masks to the original image resolution - masks = self.model.postprocess_masks(low_res_masks, self.input_size, self.original_size) - - if not return_logits: - masks = masks > self.model.mask_threshold - - return masks, iou_predictions, low_res_masks - - def get_image_embedding(self) -> torch.Tensor: - """ - Returns the image embeddings for the currently set image, with - shape 1xCxHxW, where C is the embedding dimension and (H,W) are - the embedding spatial dimension of SAM (typically C=256, H=W=64). - """ - if not self.is_image_set: - raise RuntimeError( - "An image must be set with .set_image(...) to generate an embedding." - ) - assert self.features is not None, "Features must exist if an image has been set." - return self.features - - @property - def device(self) -> torch.device: - return self.model.device - - def reset_image(self) -> None: - """Resets the currently set image.""" - self.is_image_set = False - self.features = None - self.orig_h = None - self.orig_w = None - self.input_h = None - self.input_w = None diff --git a/spaces/Inia2567/anime-ai-detect/README.md b/spaces/Inia2567/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/Inia2567/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/IvaElen/nlp_proj/biLSTM1.py b/spaces/IvaElen/nlp_proj/biLSTM1.py deleted file mode 100644 index 2beabc0292c1fff99e680b999cd0ee60c82af658..0000000000000000000000000000000000000000 --- a/spaces/IvaElen/nlp_proj/biLSTM1.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -import torch.nn as nn - -class biLSTM(nn.Module): - """ - The LSTM model that will be used to perform Sentiment analysis. 
- """ - - def __init__(self, - # объем словаря, с которым мы работаем, размер входа для слоя Embedding - vocab_size: int, - # размер выходного эмбеддинга каждый элемент последовательности - # будет описан вектором такой размерности - embedding_dim: int, - # размерность hidden state LSTM слоя - hidden_dim: int, - # число слоев в LSTM - n_layers: int, - drop_prob=0.5, - seq_len = 128) -> None: - - super().__init__() - self.hidden_dim = hidden_dim - self.n_layers = n_layers - self.seq_len = seq_len - self.embedding = nn.Embedding(vocab_size, embedding_dim) - self.lstm = nn.LSTM(embedding_dim, - hidden_dim, - n_layers, - dropout=drop_prob, - batch_first=True, - bidirectional=True - ) - - self.do = nn.Dropout() - - self.fc1 = nn.Linear(2*hidden_dim * self.seq_len, 256) - self.fc2 = nn.Linear(256, 1) - self.sigmoid = nn.Sigmoid() - - def forward(self, x): - embeds = self.embedding(x) - lstm_out, _ = self.lstm(embeds) - out = self.fc2(torch.tanh(self.do(self.fc1(lstm_out.flatten(1))))) - sig_out = self.sigmoid(out) - - return sig_out \ No newline at end of file diff --git a/spaces/Izal887/rvc-ram12/config.py b/spaces/Izal887/rvc-ram12/config.py deleted file mode 100644 index 040a64d2c5ce4d7802bdf7f69321483b81008f08..0000000000000000000000000000000000000000 --- a/spaces/Izal887/rvc-ram12/config.py +++ /dev/null @@ -1,106 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.colab, - self.noparallel, - self.noautoopen, - self.api - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument("--api", action="store_true", help="Launch with api") - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.api - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - 
# 5GB VRAM configuration - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem is not None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/JSP/test4k/main.py b/spaces/JSP/test4k/main.py deleted file mode 100644 index 978fc6a7d35d4512c44d5f75531c09e832c35e1f..0000000000000000000000000000000000000000 --- a/spaces/JSP/test4k/main.py +++ /dev/null @@ -1,27 +0,0 @@ -from llama_cpp.server.app import create_app, Settings -from fastapi.responses import HTMLResponse -import os - -app = create_app( - Settings( - n_threads=2, # set to number of cpu cores - model="model/gguf-model.bin", - embedding=True - ) -) - -# Read the content of index.html once and store it in memory -with open("index.html", "r") as f: - content = f.read() - - -@app.get("/", response_class=HTMLResponse) -async def read_items(): - return content - -if __name__ == "__main__": - import uvicorn - uvicorn.run(app, - host=os.environ["HOST"], - port=int(os.environ["PORT"]) - ) diff --git a/spaces/Jaehan/Text2Text-Sentiment-Analysis/README.md b/spaces/Jaehan/Text2Text-Sentiment-Analysis/README.md deleted file mode 100644 index 132dcebc50d3a19a02161fca55d1d243ae6af398..0000000000000000000000000000000000000000 --- a/spaces/Jaehan/Text2Text-Sentiment-Analysis/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text2Text Sentiment Analysis -emoji: 🏆 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KenjieDec/GPEN/face_model/op/fused_bias_act.cpp b/spaces/KenjieDec/GPEN/face_model/op/fused_bias_act.cpp deleted file mode 100644 index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/face_model/op/fused_bias_act.cpp +++ /dev/null @@ -1,21 +0,0 @@ -#include <torch/extension.h> - - -torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer, - int act, int grad, float alpha, float scale) { - CHECK_CUDA(input); - CHECK_CUDA(bias); - - return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)"); -} \ No newline at end of file diff --git a/spaces/KenjieDec/GPEN/sr_model/rrdbnet_arch.py b/spaces/KenjieDec/GPEN/sr_model/rrdbnet_arch.py deleted file mode 100644 index 5e1f04c5aee5bcdcd2ddae5471843ff057d863b4..0000000000000000000000000000000000000000 --- a/spaces/KenjieDec/GPEN/sr_model/rrdbnet_arch.py +++ /dev/null @@ -1,116 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from arch_util import default_init_weights, make_layer, pixel_unshuffle - - -class ResidualDenseBlock(nn.Module): - """Residual Dense Block. - - Used in RRDB block in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. 
- """ - - def __init__(self, num_feat=64, num_grow_ch=32): - super(ResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(num_feat, num_grow_ch, 3, 1, 1) - self.conv2 = nn.Conv2d(num_feat + num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv3 = nn.Conv2d(num_feat + 2 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv4 = nn.Conv2d(num_feat + 3 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv5 = nn.Conv2d(num_feat + 4 * num_grow_ch, num_feat, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - # initialization - default_init_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1) - - def forward(self, x): - x1 = self.lrelu(self.conv1(x)) - x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1))) - x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1))) - x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1))) - x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1)) - # Emperically, we use 0.2 to scale the residual for better performance - return x5 * 0.2 + x - - -class RRDB(nn.Module): - """Residual in Residual Dense Block. - - Used in RRDB-Net in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat, num_grow_ch=32): - super(RRDB, self).__init__() - self.rdb1 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb2 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb3 = ResidualDenseBlock(num_feat, num_grow_ch) - - def forward(self, x): - out = self.rdb1(x) - out = self.rdb2(out) - out = self.rdb3(out) - # Emperically, we use 0.2 to scale the residual for better performance - return out * 0.2 + x - -class RRDBNet(nn.Module): - """Networks consisting of Residual in Residual Dense Block, which is used - in ESRGAN. - - ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. - - We extend ESRGAN for scale x2 and scale x1. - Note: This is one option for scale 1, scale 2 in RRDBNet. - We first employ the pixel-unshuffle (an inverse operation of pixelshuffle to reduce the spatial size - and enlarge the channel size before feeding inputs into the main ESRGAN architecture. - - Args: - num_in_ch (int): Channel number of inputs. - num_out_ch (int): Channel number of outputs. - num_feat (int): Channel number of intermediate features. - Default: 64 - num_block (int): Block number in the trunk network. Defaults: 23 - num_grow_ch (int): Channels for each growth. Default: 32. 
- """ - - def __init__(self, num_in_ch, num_out_ch, scale=4, num_feat=64, num_block=23, num_grow_ch=32): - super(RRDBNet, self).__init__() - self.scale = scale - if scale == 2: - num_in_ch = num_in_ch * 4 - elif scale == 1: - num_in_ch = num_in_ch * 16 - self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) - self.body = make_layer(RRDB, num_block, num_feat=num_feat, num_grow_ch=num_grow_ch) - self.conv_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - # upsample - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - if self.scale == 2: - feat = pixel_unshuffle(x, scale=2) - elif self.scale == 1: - feat = pixel_unshuffle(x, scale=4) - else: - feat = x - feat = self.conv_first(feat) - body_feat = self.conv_body(self.body(feat)) - feat = feat + body_feat - # upsample - feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode='nearest'))) - feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest'))) - out = self.conv_last(self.lrelu(self.conv_hr(feat))) - return out diff --git a/spaces/Kevin676/AutoGPT/tests/test_token_counter.py b/spaces/Kevin676/AutoGPT/tests/test_token_counter.py deleted file mode 100644 index 6d7ae016b2f823123b0b69b2eeb3eab50d94f00f..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/AutoGPT/tests/test_token_counter.py +++ /dev/null @@ -1,63 +0,0 @@ -import unittest - -import tests.context -from autogpt.token_counter import count_message_tokens, count_string_tokens - - -class TestTokenCounter(unittest.TestCase): - def test_count_message_tokens(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages), 17) - - def test_count_message_tokens_with_name(self): - messages = [ - {"role": "user", "content": "Hello", "name": "John"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages), 17) - - def test_count_message_tokens_empty_input(self): - self.assertEqual(count_message_tokens([]), 3) - - def test_count_message_tokens_invalid_model(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - with self.assertRaises(KeyError): - count_message_tokens(messages, model="invalid_model") - - def test_count_message_tokens_gpt_4(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - self.assertEqual(count_message_tokens(messages, model="gpt-4-0314"), 15) - - def test_count_string_tokens(self): - string = "Hello, world!" - self.assertEqual( - count_string_tokens(string, model_name="gpt-3.5-turbo-0301"), 4 - ) - - def test_count_string_tokens_empty_input(self): - self.assertEqual(count_string_tokens("", model_name="gpt-3.5-turbo-0301"), 0) - - def test_count_message_tokens_invalid_model(self): - messages = [ - {"role": "user", "content": "Hello"}, - {"role": "assistant", "content": "Hi there!"}, - ] - with self.assertRaises(NotImplementedError): - count_message_tokens(messages, model="invalid_model") - - def test_count_string_tokens_gpt_4(self): - string = "Hello, world!" 
- self.assertEqual(count_string_tokens(string, model_name="gpt-4-0314"), 4) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Keyradesu/Oka/README.md b/spaces/Keyradesu/Oka/README.md deleted file mode 100644 index 3fb44b8a0a1887d89b42c22c10b198125c1aede5..0000000000000000000000000000000000000000 --- a/spaces/Keyradesu/Oka/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Oka -emoji: 🐢 -colorFrom: indigo -colorTo: red -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kirihasan/rvc-holo/infer_pack/models_onnx.py b/spaces/Kirihasan/rvc-holo/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/Kirihasan/rvc-holo/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def 
forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** 
(i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-waveform (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_threshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SineGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ### %1 means the n_har products cannot be optimized in post-processing - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 ##### %1 means the subsequent cumsum can no longer be optimized - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( 
torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, 
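A short sketch of the merging step in `SourceModuleHnNSF`, assuming `harmonic_num = 2` and toy shapes: a Linear layer collapses the `(harmonic_num + 1)` sine channels into a single excitation signal, squashed by tanh.

```python
import torch

harmonic_num = 2
sine_wavs = torch.randn(1, 1600, harmonic_num + 1)  # stand-in for SineGen output
l_linear = torch.nn.Linear(harmonic_num + 1, 1)
sine_merge = torch.tanh(l_linear(sine_wavs))        # single excitation channel
print(sine_merge.shape)  # torch.Size([1, 1600, 1])
```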
k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class 
SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, 
padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/KyanChen/FunSR/models/liif.py b/spaces/KyanChen/FunSR/models/liif.py deleted file mode 100644 index d6099426081918556a32b81aca00a53be51d91fe..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/models/liif.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import models -from models import register -from utils import make_coord - - -@register('liif') -class LIIF(nn.Module): - - def __init__(self, encoder_spec, imnet_spec=None, - local_ensemble=True, feat_unfold=True, cell_decode=True): - super().__init__() - self.local_ensemble = local_ensemble - self.feat_unfold = feat_unfold - self.cell_decode = cell_decode - - self.encoder = models.make(encoder_spec) - - if imnet_spec is not None: - imnet_in_dim = self.encoder.out_dim - if self.feat_unfold: - imnet_in_dim *= 9 - imnet_in_dim += 2 # attach coord - if self.cell_decode: - imnet_in_dim += 2 - self.imnet = models.make(imnet_spec, args={'in_dim': imnet_in_dim}) - else: - self.imnet = None - - def gen_feat(self, inp): - self.feat = self.encoder(inp) - return self.feat - - def query_rgb(self, coord, cell=None): - feat = self.feat - - if self.imnet is None: - ret = F.grid_sample(feat, coord.flip(-1).unsqueeze(1), - mode='nearest', align_corners=False)[:, :, 0, :] \ - .permute(0, 2, 1) - return ret - - if self.feat_unfold: - feat = F.unfold(feat, 3, padding=1).view( - feat.shape[0], feat.shape[1] * 9, feat.shape[2], feat.shape[3]) - - if self.local_ensemble: - vx_lst = [-1, 1] - vy_lst = [-1, 1] - eps_shift = 1e-6 - else: - vx_lst, vy_lst, eps_shift = [0], 
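A worked example of the 1d-to-2d folding in `DiscriminatorP.forward` above, assuming `period = 3` and a toy 100-sample waveform: the signal is reflect-padded to a multiple of the period, then viewed as a `[batch, channels, time // period, period]` grid.

```python
import torch
import torch.nn.functional as F

period = 3
x = torch.randn(1, 1, 100)
b, c, t = x.shape
if t % period != 0:
    n_pad = period - (t % period)       # 2 samples of padding in this example
    x = F.pad(x, (0, n_pad), "reflect")
    t = t + n_pad
x = x.view(b, c, t // period, period)   # fold time into a 2d grid
print(x.shape)  # torch.Size([1, 1, 34, 3])
```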
[0], 0 - - # field radius (global: [-1, 1]) - rx = 2 / feat.shape[-2] / 2 - ry = 2 / feat.shape[-1] / 2 - - feat_coord = make_coord(feat.shape[-2:], flatten=False).cuda() \ - .permute(2, 0, 1) \ - .unsqueeze(0).expand(feat.shape[0], 2, *feat.shape[-2:]) - - preds = [] - areas = [] - for vx in vx_lst: - for vy in vy_lst: - coord_ = coord.clone() - coord_[:, :, 0] += vx * rx + eps_shift - coord_[:, :, 1] += vy * ry + eps_shift - coord_.clamp_(-1 + 1e-6, 1 - 1e-6) - q_feat = F.grid_sample( - feat, coord_.flip(-1).unsqueeze(1), - mode='nearest', align_corners=False)[:, :, 0, :] \ - .permute(0, 2, 1) - q_coord = F.grid_sample( - feat_coord, coord_.flip(-1).unsqueeze(1), - mode='nearest', align_corners=False)[:, :, 0, :] \ - .permute(0, 2, 1) - rel_coord = coord - q_coord - rel_coord[:, :, 0] *= feat.shape[-2] - rel_coord[:, :, 1] *= feat.shape[-1] - inp = torch.cat([q_feat, rel_coord], dim=-1) - - if self.cell_decode: - rel_cell = cell.clone() - rel_cell[:, :, 0] *= feat.shape[-2] - rel_cell[:, :, 1] *= feat.shape[-1] - inp = torch.cat([inp, rel_cell], dim=-1) - - bs, q = coord.shape[:2] - pred = self.imnet(inp.view(bs * q, -1)).view(bs, q, -1) - preds.append(pred) - - area = torch.abs(rel_coord[:, :, 0] * rel_coord[:, :, 1]) - areas.append(area + 1e-9) - - tot_area = torch.stack(areas).sum(dim=0) - if self.local_ensemble: - t = areas[0]; areas[0] = areas[3]; areas[3] = t - t = areas[1]; areas[1] = areas[2]; areas[2] = t - ret = 0 - for pred, area in zip(preds, areas): - ret = ret + pred * (area / tot_area).unsqueeze(-1) - return ret - - def forward(self, inp, coord, cell): - self.gen_feat(inp) - return self.query_rgb(coord, cell) diff --git a/spaces/Laihiujin/OneFormer/oneformer/data/datasets/register_cityscapes_panoptic.py b/spaces/Laihiujin/OneFormer/oneformer/data/datasets/register_cityscapes_panoptic.py deleted file mode 100644 index 07ecb23ba6422ac24e4a21aa6bb3125b07f71f33..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/data/datasets/register_cityscapes_panoptic.py +++ /dev/null @@ -1,199 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/data/datasets/cityscapes_panoptic.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -import json -import logging -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. 
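A minimal sketch of the local-ensemble blend in `LIIF.query_rgb` above, using toy tensors rather than the real model: each of the four neighbour predictions is weighted by the area spanned by the *opposite* neighbour's relative coordinate, which is why the code swaps `areas[0]` with `areas[3]` and `areas[1]` with `areas[2]`.

```python
import torch

preds = [torch.full((1, 2, 3), float(v)) for v in range(4)]       # [bs, q, channels]
areas = [torch.tensor([[0.1, 0.2]]), torch.tensor([[0.3, 0.1]]),
         torch.tensor([[0.2, 0.3]]), torch.tensor([[0.4, 0.4]])]  # [bs, q]
areas[0], areas[3] = areas[3], areas[0]   # weight each pred by the opposite area
areas[1], areas[2] = areas[2], areas[1]
tot_area = torch.stack(areas).sum(dim=0)
ret = sum(p * (a / tot_area).unsqueeze(-1) for p, a in zip(preds, areas))
print(ret.shape)  # torch.Size([1, 2, 3])
```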
-""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa - - - with open(gt_json) as f: - json_info = json.load(f) - - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" 
- assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. 
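A worked example of the panoptic id scheme described in the comment above (the category and instance ids here are illustrative):

```python
category_id, instance_id = 26, 14
panoptic_id = category_id * 1000 + instance_id
print(panoptic_id)                              # 26014
print(panoptic_id // 1000, panoptic_id % 1000)  # 26 14  (the inverse decomposition)
```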
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - if key in DatasetCatalog.list(): - DatasetCatalog.remove(key) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_cityscapes_panoptic(_root) \ No newline at end of file diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index 54c2fd2484c3d52c3dc9bb4c88e5c102fa686fdc..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,91 +0,0 @@ -import numpy as np -import pyworld - -from lib.infer.infer_libs.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for 
index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/LeeroyVonJenkins/OCR-Invoice-LayoutLMv3/app.py b/spaces/LeeroyVonJenkins/OCR-Invoice-LayoutLMv3/app.py deleted file mode 100644 index 5615916416d367c54bff82ae6880ed26f107ad97..0000000000000000000000000000000000000000 --- a/spaces/LeeroyVonJenkins/OCR-Invoice-LayoutLMv3/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import os - -os.system('pip install pip --upgrade') -os.system('pip install -q git+https://github.com/huggingface/transformers.git') - - -os.system("pip install pyyaml==5.1") -# workaround: install old version of pytorch since detectron2 hasn't released packages for pytorch 1.9 (issue: https://github.com/facebookresearch/detectron2/issues/3158) -os.system( - "pip install torch==1.8.0+cu101 torchvision==0.9.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html" -) - -# install detectron2 that matches pytorch 1.8 -# See https://detectron2.readthedocs.io/tutorials/install.html for instructions -os.system( - "pip install -q detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html" -) - -## install PyTesseract -os.system("pip install -q pytesseract") - -import gradio as gr -import numpy as np -from transformers import AutoModelForTokenClassification -from datasets.features import ClassLabel -from transformers import AutoProcessor -from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D -import torch -from datasets import load_metric -from transformers import LayoutLMv3ForTokenClassification -from transformers.data.data_collator import default_data_collator - - -from transformers import AutoModelForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont - - -processor = AutoProcessor.from_pretrained("jinhybr/OCR-LayoutLMv3-Invoice", apply_ocr=True) -model = AutoModelForTokenClassification.from_pretrained("jinhybr/OCR-LayoutLMv3-Invoice") - - - -# load image example -dataset = load_dataset("jinhybr/WildReceipt", split="test") -Image.open(dataset[1]["image_path"]).convert("RGB").save("example1.png") -Image.open(dataset[3]["image_path"]).convert("RGB").save("example2.png") -Image.open(dataset[25]["image_path"]).convert("RGB").save("example3.png") -# define id2label, label2color -labels = dataset.features['ner_tags'].feature.names -id2label = {v: k for v, k in enumerate(labels)} -label2color = { - "Date_key": 'red', - "Date_value": 'green', - "Ignore": 'orange', - "Others": 'orange', - "Prod_item_key": 'red', - "Prod_item_value": 'green', - "Prod_price_key": 'red', - "Prod_price_value": 'green', - "Prod_quantity_key": 'red', - "Prod_quantity_value": 'green', - "Store_addr_key": 'red', - "Store_addr_value": 'green', - "Store_name_key": 'red', - "Store_name_value": 'green', - "Subtotal_key": 'red', - "Subtotal_value": 'green', - "Tax_key": 'red', - "Tax_value": 'green', - "Tel_key": 'red', - "Tel_value": 'green', - "Time_key": 'red', - "Time_value": 'green', - "Tips_key": 
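A sketch of `DioF0Predictor.resize_f0` on a toy F0 track: unvoiced frames (values below 0.001) become NaN, so any interpolated sample computed from a NaN endpoint is NaN as well, and `np.nan_to_num` zeroes those spans after resampling to the target length.

```python
import numpy as np

def resize_f0(x, target_len):
    source = np.array(x, dtype=np.float64)
    source[source < 0.001] = np.nan
    target = np.interp(
        np.arange(0, len(source) * target_len, len(source)) / target_len,
        np.arange(0, len(source)),
        source,
    )
    return np.nan_to_num(target)  # unvoiced spans come back as zeros

print(resize_f0([100.0, 0.0, 110.0, 120.0], 8))
```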
'red', - "Tips_value": 'green', - "Total_key": 'red', - "Total_value": 'blue' - } - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - - -def iob_to_label(label): - return label - - - -def process_image(image): - - print(type(image)) - width, height = image.size - - # encode - encoding = processor(image, truncation=True, return_offsets_mapping=True, return_tensors="pt") - offset_mapping = encoding.pop('offset_mapping') - - # forward pass - outputs = model(**encoding) - - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 - true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] - true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]] - - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction) - draw.rectangle(box, outline=label2color[predicted_label]) - draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font) - - return image - - -title = "OCR Invoice - Information Extraction - LayoutLMv3" -description = "Fine-tuned Microsoft's LayoutLMv3 on WildReceipt Dataset to parse Invoice OCR document. To use it, simply upload an image or use the example image below. Results will show up in a few seconds." - -article="References
    [1] Y. Xu et al., “LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking.” 2022. Paper Link
    [2] LayoutLMv3 training and inference
    [3] Hongbin Sun, Zhanghui Kuang, Xiaoyu Yue, Chenhao Lin, and Wayne Zhang. 2021. Spatial Dual-Modality Graph Reasoning for Key Information Extraction. arXiv. DOI:https://doi.org/10.48550/ARXIV.2103.14470 Paper Link" - -examples =[['example1.png'],['example2.png'],['example3.png'],['inv2.jpg']] - -css = """.output_image, .input_image {height: 600px !important}""" - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - analytics_enabled = True, enable_queue=True) - -iface.launch(inline=False, share=False, debug=True) \ No newline at end of file diff --git a/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/README.md b/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/README.md deleted file mode 100644 index aa7c7bbf446ec8c483811ac0c31d9fa1021909ef..0000000000000000000000000000000000000000 --- a/spaces/LinoyTsaban/edit_friendly_ddpm_inversion/README.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Edit Friendly Ddpm Inversion -emoji: 🖼️ -colorFrom: pink -colorTo: orange -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -## BibTeX - -``` -@article{HubermanSpiegelglas2023, - title = {An Edit Friendly DDPM Noise Space: Inversion and Manipulations}, - author = {Huberman-Spiegelglas, Inbar and Kulikov, Vladimir and Michaeli, Tomer}, - journal = {arXiv preprint arXiv:2304.06140}, - year = {2023} - } -``` diff --git a/spaces/LuxOAI/ChatGpt-Web/app/store/chat.ts b/spaces/LuxOAI/ChatGpt-Web/app/store/chat.ts deleted file mode 100644 index 0ee15bf56f0411c329c3e3e81f8ac9cd6171c395..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/store/chat.ts +++ /dev/null @@ -1,532 +0,0 @@ -import { create } from "zustand"; -import { persist } from "zustand/middleware"; - -import { type ChatCompletionResponseMessage } from "openai"; -import { - ControllerPool, - requestChatStream, - requestWithPrompt, -} from "../requests"; -import { trimTopic } from "../utils"; - -import Locale from "../locales"; -import { showToast } from "../components/ui-lib"; -import { ModelType, useAppConfig } from "./config"; -import { createEmptyMask, Mask } from "./mask"; -import { StoreKey } from "../constant"; - -export type Message = ChatCompletionResponseMessage & { - date: string; - streaming?: boolean; - isError?: boolean; - id?: number; - model?: ModelType; -}; - -export function createMessage(override: Partial): Message { - return { - id: Date.now(), - date: new Date().toLocaleString(), - role: "user", - content: "", - ...override, - }; -} - -export const ROLES: Message["role"][] = ["system", "user", "assistant"]; - -export interface ChatStat { - tokenCount: number; - wordCount: number; - charCount: number; -} - -export interface ChatSession { - id: number; - - topic: string; - - memoryPrompt: string; - messages: Message[]; - stat: ChatStat; - lastUpdate: number; - lastSummarizeIndex: number; - - mask: Mask; -} - -export const DEFAULT_TOPIC = Locale.Store.DefaultTopic; -export const BOT_HELLO: Message = createMessage({ - role: "assistant", - content: Locale.Store.BotHello, -}); - -function createEmptySession(): ChatSession { - return { - id: Date.now() + Math.random(), - topic: DEFAULT_TOPIC, - memoryPrompt: "", - messages: [], - stat: { - tokenCount: 0, - wordCount: 0, - charCount: 0, - }, - lastUpdate: Date.now(), - lastSummarizeIndex: 0, - mask: createEmptyMask(), - }; -} - -interface ChatStore { - 
sessions: ChatSession[]; - currentSessionIndex: number; - globalId: number; - clearSessions: () => void; - moveSession: (from: number, to: number) => void; - selectSession: (index: number) => void; - newSession: (mask?: Mask) => void; - deleteSession: (index: number) => void; - currentSession: () => ChatSession; - onNewMessage: (message: Message) => void; - onUserInput: (content: string) => Promise; - summarizeSession: () => void; - updateStat: (message: Message) => void; - updateCurrentSession: (updater: (session: ChatSession) => void) => void; - updateMessage: ( - sessionIndex: number, - messageIndex: number, - updater: (message?: Message) => void, - ) => void; - resetSession: () => void; - getMessagesWithMemory: () => Message[]; - getMemoryPrompt: () => Message; - - clearAllData: () => void; - clearAll: () => void; -} - -function countMessages(msgs: Message[]) { - return msgs.reduce((pre, cur) => pre + cur.content.length, 0); -} - -export const useChatStore = create()( - persist( - (set, get) => ({ - sessions: [createEmptySession()], - currentSessionIndex: 0, - globalId: 0, - - clearSessions() { - set(() => ({ - sessions: [createEmptySession()], - currentSessionIndex: 0, - })); - }, - - selectSession(index: number) { - set({ - currentSessionIndex: index, - }); - }, - - moveSession(from: number, to: number) { - set((state) => { - const { sessions, currentSessionIndex: oldIndex } = state; - - // move the session - const newSessions = [...sessions]; - const session = newSessions[from]; - newSessions.splice(from, 1); - newSessions.splice(to, 0, session); - - // modify current session id - let newIndex = oldIndex === from ? to : oldIndex; - if (oldIndex > from && oldIndex <= to) { - newIndex -= 1; - } else if (oldIndex < from && oldIndex >= to) { - newIndex += 1; - } - - return { - currentSessionIndex: newIndex, - sessions: newSessions, - }; - }); - }, - - newSession(mask) { - const session = createEmptySession(); - - set(() => ({ globalId: get().globalId + 1 })); - session.id = get().globalId; - - if (mask) { - session.mask = { ...mask }; - session.topic = mask.name; - } - - set((state) => ({ - currentSessionIndex: 0, - sessions: [session].concat(state.sessions), - })); - }, - - deleteSession(index) { - const deletingLastSession = get().sessions.length === 1; - const deletedSession = get().sessions.at(index); - - if (!deletedSession) return; - - const sessions = get().sessions.slice(); - sessions.splice(index, 1); - - const currentIndex = get().currentSessionIndex; - let nextIndex = Math.min( - currentIndex - Number(index < currentIndex), - sessions.length - 1, - ); - - if (deletingLastSession) { - nextIndex = 0; - sessions.push(createEmptySession()); - } - - // for undo delete action - const restoreState = { - currentSessionIndex: get().currentSessionIndex, - sessions: get().sessions.slice(), - }; - - set(() => ({ - currentSessionIndex: nextIndex, - sessions, - })); - - showToast( - Locale.Home.DeleteToast, - { - text: Locale.Home.Revert, - onClick() { - set(() => restoreState); - }, - }, - 5000, - ); - }, - - currentSession() { - let index = get().currentSessionIndex; - const sessions = get().sessions; - - if (index < 0 || index >= sessions.length) { - index = Math.min(sessions.length - 1, Math.max(0, index)); - set(() => ({ currentSessionIndex: index })); - } - - const session = sessions[index]; - - return session; - }, - - onNewMessage(message) { - get().updateCurrentSession((session) => { - session.lastUpdate = Date.now(); - }); - get().updateStat(message); - get().summarizeSession(); 
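A small worked trace of the index bookkeeping in `moveSession` above, written in TypeScript to match the surrounding file; `fromIdx`/`toIdx` stand in for the method's `from`/`to`, and the values are illustrative.

```ts
// sessions = [A, B, C, D, E], current = 2 (C); move index 0 (A) to index 3.
// After the splice the array is [B, C, D, A, E]; since oldIndex (2) is > fromIdx (0)
// and <= toIdx (3), the current index shifts down by one and still points at C.
const fromIdx = 0, toIdx = 3, oldIndex = 2;
let newIndex = oldIndex === fromIdx ? toIdx : oldIndex;
if (oldIndex > fromIdx && oldIndex <= toIdx) {
  newIndex -= 1;
} else if (oldIndex < fromIdx && oldIndex >= toIdx) {
  newIndex += 1;
}
console.log(newIndex); // 1
```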
- }, - - async onUserInput(content) { - const session = get().currentSession(); - const modelConfig = session.mask.modelConfig; - - const userMessage: Message = createMessage({ - role: "user", - content, - }); - - const botMessage: Message = createMessage({ - role: "assistant", - streaming: true, - id: userMessage.id! + 1, - model: modelConfig.model, - }); - const systemInfo = createMessage({ - role: "system", - content: `IMPRTANT: You are a virtual assistant powered by the ${ - modelConfig.model - } model, now time is ${new Date().toLocaleString()}}`, - id: botMessage.id! + 1, - }); - // get recent messages - const systemMessages = [systemInfo]; - const recentMessages = get().getMessagesWithMemory(); - const sendMessages = systemMessages.concat( - recentMessages.concat(userMessage), - ); - const sessionIndex = get().currentSessionIndex; - const messageIndex = get().currentSession().messages.length + 1; - - // save user's and bot's message - get().updateCurrentSession((session) => { - session.messages.push(userMessage); - session.messages.push(botMessage); - }); - - // make request - console.log("[User Input] ", sendMessages); - requestChatStream(sendMessages, { - onMessage(content, done) { - // stream response - if (done) { - botMessage.streaming = false; - botMessage.content = content; - get().onNewMessage(botMessage); - ControllerPool.remove( - sessionIndex, - botMessage.id ?? messageIndex, - ); - } else { - botMessage.content = content; - set(() => ({})); - } - }, - onError(error, statusCode) { - const isAborted = error.message.includes("aborted"); - if (statusCode === 401) { - botMessage.content = Locale.Error.Unauthorized; - } else if (!isAborted) { - botMessage.content += "\n\n" + Locale.Store.Error; - } - botMessage.streaming = false; - userMessage.isError = !isAborted; - botMessage.isError = !isAborted; - - set(() => ({})); - ControllerPool.remove(sessionIndex, botMessage.id ?? messageIndex); - }, - onController(controller) { - // collect controller for stop/retry - ControllerPool.addController( - sessionIndex, - botMessage.id ?? messageIndex, - controller, - ); - }, - modelConfig: { ...modelConfig }, - }); - }, - - getMemoryPrompt() { - const session = get().currentSession(); - - return { - role: "system", - content: - session.memoryPrompt.length > 0 - ? 
Locale.Store.Prompt.History(session.memoryPrompt) - : "", - date: "", - } as Message; - }, - - getMessagesWithMemory() { - const session = get().currentSession(); - const modelConfig = session.mask.modelConfig; - const messages = session.messages.filter((msg) => !msg.isError); - const n = messages.length; - - const context = session.mask.context.slice(); - - // long term memory - if ( - modelConfig.sendMemory && - session.memoryPrompt && - session.memoryPrompt.length > 0 - ) { - const memoryPrompt = get().getMemoryPrompt(); - context.push(memoryPrompt); - } - - // get short term and unmemoried long term memory - const shortTermMemoryMessageIndex = Math.max( - 0, - n - modelConfig.historyMessageCount, - ); - const longTermMemoryMessageIndex = session.lastSummarizeIndex; - const oldestIndex = Math.max( - shortTermMemoryMessageIndex, - longTermMemoryMessageIndex, - ); - const threshold = modelConfig.compressMessageLengthThreshold; - - // get recent messages as many as possible - const reversedRecentMessages = []; - for ( - let i = n - 1, count = 0; - i >= oldestIndex && count < threshold; - i -= 1 - ) { - const msg = messages[i]; - if (!msg || msg.isError) continue; - count += msg.content.length; - reversedRecentMessages.push(msg); - } - - // concat - const recentMessages = context.concat(reversedRecentMessages.reverse()); - - return recentMessages; - }, - - updateMessage( - sessionIndex: number, - messageIndex: number, - updater: (message?: Message) => void, - ) { - const sessions = get().sessions; - const session = sessions.at(sessionIndex); - const messages = session?.messages; - updater(messages?.at(messageIndex)); - set(() => ({ sessions })); - }, - - resetSession() { - get().updateCurrentSession((session) => { - session.messages = []; - session.memoryPrompt = ""; - }); - }, - - summarizeSession() { - const session = get().currentSession(); - - // should summarize topic after chating more than 50 words - const SUMMARIZE_MIN_LEN = 50; - if ( - session.topic === DEFAULT_TOPIC && - countMessages(session.messages) >= SUMMARIZE_MIN_LEN - ) { - const Bot = useAppConfig.getState().bot; - if (Bot != "OpenAI") { - get().updateCurrentSession( - (session) => (session.topic = trimTopic(Bot)), - ); - } else { - requestWithPrompt(session.messages, Locale.Store.Prompt.Topic, { - model: "gpt-3.5-turbo", - }).then((res) => { - get().updateCurrentSession( - (session) => - (session.topic = res ? trimTopic(res) : DEFAULT_TOPIC), - ); - }); - } - } - - const modelConfig = session.mask.modelConfig; - let toBeSummarizedMsgs = session.messages.slice( - session.lastSummarizeIndex, - ); - - const historyMsgLength = countMessages(toBeSummarizedMsgs); - - if (historyMsgLength > modelConfig?.max_tokens ?? 
4000) { - const n = toBeSummarizedMsgs.length; - toBeSummarizedMsgs = toBeSummarizedMsgs.slice( - Math.max(0, n - modelConfig.historyMessageCount), - ); - } - - // add memory prompt - toBeSummarizedMsgs.unshift(get().getMemoryPrompt()); - - const lastSummarizeIndex = session.messages.length; - - console.log( - "[Chat History] ", - toBeSummarizedMsgs, - historyMsgLength, - modelConfig.compressMessageLengthThreshold, - ); - - if ( - historyMsgLength > modelConfig.compressMessageLengthThreshold && - session.mask.modelConfig.sendMemory - ) { - requestChatStream( - toBeSummarizedMsgs.concat({ - role: "system", - content: Locale.Store.Prompt.Summarize, - date: "", - }), - { - overrideModel: "gpt-3.5-turbo", - onMessage(message, done) { - session.memoryPrompt = message; - if (done) { - console.log("[Memory] ", session.memoryPrompt); - session.lastSummarizeIndex = lastSummarizeIndex; - } - }, - onError(error) { - console.error("[Summarize] ", error); - }, - }, - ); - } - }, - - updateStat(message) { - get().updateCurrentSession((session) => { - session.stat.charCount += message.content.length; - // TODO: should update chat count and word count - }); - }, - - updateCurrentSession(updater) { - const sessions = get().sessions; - const index = get().currentSessionIndex; - updater(sessions[index]); - set(() => ({ sessions })); - }, - - clearAllData() { - localStorage.clear(); - location.reload(); - }, - - clearAll() { - // localStorage.clear(); - location.reload(); - }, - }), - { - name: StoreKey.Chat, - version: 2, - migrate(persistedState, version) { - const state = persistedState as any; - const newState = JSON.parse(JSON.stringify(state)) as ChatStore; - - if (version < 2) { - newState.globalId = 0; - newState.sessions = []; - - const oldSessions = state.sessions; - for (const oldSession of oldSessions) { - const newSession = createEmptySession(); - newSession.topic = oldSession.topic; - newSession.messages = [...oldSession.messages]; - newSession.mask.modelConfig.sendMemory = true; - newSession.mask.modelConfig.historyMessageCount = 4; - newSession.mask.modelConfig.compressMessageLengthThreshold = 1000; - newState.sessions.push(newSession); - } - } - - return newState; - }, - }, - ), -); diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/midas/midas_net_custom.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/midas/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. 
Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/scripts/download_trained_model.sh b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/scripts/download_trained_model.sh deleted file mode 100644 index c652f2c666dc48ff1e2e7a94d559e925ac058dec..0000000000000000000000000000000000000000 --- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/scripts/download_trained_model.sh +++ /dev/null @@ -1,7 +0,0 @@ -set -ex - -mkdir -p checkpoints -cd checkpoints -wget "https://drive.google.com/uc?export=download&id=1zEmVXG2VHy0MMzngcRshB4D8Sr_oLHsm" -O net_G -wget "https://drive.google.com/uc?export=download&id=1V83B6GDIjYMfHdpg-KcCSAPgHxpafHgd" -O net_C -cd .. 
\ No newline at end of file diff --git a/spaces/Miuzarte/SUI-svc-3.0/add_speaker.py b/spaces/Miuzarte/SUI-svc-3.0/add_speaker.py deleted file mode 100644 index fb6013dd8542efd62915ebdd445012ae7a4bdc28..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/add_speaker.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import argparse -from tqdm import tqdm -from random import shuffle -import json - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/48k", help="path to source dir") - args = parser.parse_args() - - previous_config = json.load(open("configs/config.json", "rb")) - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = previous_config["spk"] - spk_id = max([i for i in spk_dict.values()]) + 1 - for speaker in tqdm(os.listdir(args.source_dir)): - if speaker not in spk_dict.keys(): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = [os.path.join(args.source_dir, speaker, i)for i in os.listdir(os.path.join(args.source_dir, speaker))] - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-10] - val += wavs[:2] - test += wavs[-10:] - - assert previous_config["model"]["n_speakers"] > len(spk_dict.keys()) - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - previous_config["spk"] = spk_dict - - print("Writing configs/config.json") - with open("configs/config.json", "w") as f: - json.dump(previous_config, f, indent=2) diff --git a/spaces/MrD05/text-generation-webui-space/modules/shared.py b/spaces/MrD05/text-generation-webui-space/modules/shared.py deleted file mode 100644 index ea2eb50b7f586e5c562bf2e7c75429c91f21ec6c..0000000000000000000000000000000000000000 --- a/spaces/MrD05/text-generation-webui-space/modules/shared.py +++ /dev/null @@ -1,103 +0,0 @@ -import argparse - -model = None -tokenizer = None -model_name = "" -soft_prompt_tensor = None -soft_prompt = False -is_RWKV = False - -# Chat variables -history = {'internal': [], 'visible': []} -character = 'None' -stop_everything = False -processing_message = '*Is typing...*' - -# UI elements (buttons, sliders, HTML, etc) -gradio = {} - -# Generation input parameters -input_params = [] - -settings = { - 'max_new_tokens': 200, - 'max_new_tokens_min': 1, - 'max_new_tokens_max': 2000, - 'name1': 'Person 1', - 'name2': 'Person 2', - 'context': 'This is a conversation between two people.', - 'stop_at_newline': True, - 'chat_prompt_size': 2048, - 'chat_prompt_size_min': 0, - 'chat_prompt_size_max': 2048, - 'chat_generation_attempts': 1, - 'chat_generation_attempts_min': 1, - 'chat_generation_attempts_max': 5, - 'name1_pygmalion': 'You', - 'name2_pygmalion': 'Kawaii', - 'context_pygmalion': "Kawaii's persona: Kawaii is a cheerful person who loves to make others smile. 
She is an optimist who loves to spread happiness and positivity wherever she goes.\n", - 'stop_at_newline_pygmalion': False, - 'default_extensions': [], - 'chat_default_extensions': ["gallery"], - 'presets': { - 'default': 'NovelAI-Sphinx Moth', - 'pygmalion-*': 'Pygmalion', - 'RWKV-*': 'Naive', - }, - 'prompts': { - 'default': 'Common sense questions and answers\n\nQuestion: \nFactual answer:', - '^(gpt4chan|gpt-4chan|4chan)': '-----\n--- 865467536\nInput text\n--- 865467537\n', - '(rosey|chip|joi)_.*_instruct.*': 'User: \n', - 'oasst-*': '<|prompter|>Write a story about future of AI development<|endoftext|><|assistant|>' - } -} - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - -parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog,max_help_position=54)) -parser.add_argument('--model', type=str, help='Name of the model to load by default.') -parser.add_argument('--notebook', action='store_true', help='Launch the web UI in notebook mode, where the output is written to the same text box as the input.') -parser.add_argument('--chat', action='store_true', help='Launch the web UI in chat mode.') -parser.add_argument('--cai-chat', action='store_true', help='Launch the web UI in chat mode with a style similar to Character.AI\'s. If the file img_bot.png or img_bot.jpg exists in the same folder as server.py, this image will be used as the bot\'s profile picture. Similarly, img_me.png or img_me.jpg will be used as your profile picture.') -parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text.') -parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision.') -parser.add_argument('--load-in-4bit', action='store_true', help='DEPRECATED: use --gptq-bits 4 instead.') -parser.add_argument('--gptq-bits', type=int, default=0, help='Load a pre-quantized model with specified precision. 2, 3, 4 and 8bit are supported. Currently only works with LLaMA and OPT.') -parser.add_argument('--gptq-model-type', type=str, help='Model type of pre-quantized model. Currently only LLaMa and OPT are supported.') -parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.') -parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.') -parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.') -parser.add_argument('--disk-cache-dir', type=str, default="cache", help='Directory to save the disk cache to. Defaults to "cache".') -parser.add_argument('--gpu-memory', type=int, nargs="+", help='Maxmimum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs.') -parser.add_argument('--cpu-memory', type=int, help='Maximum CPU memory in GiB to allocate for offloaded weights. Must be an integer number. Defaults to 99.') -parser.add_argument('--flexgen', action='store_true', help='Enable the use of FlexGen offloading.') -parser.add_argument('--percent', type=int, nargs="+", default=[0, 100, 100, 0, 100, 0], help='FlexGen: allocation percentages. 
Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0).') -parser.add_argument("--compress-weight", action="store_true", help="FlexGen: activate weight compression.") -parser.add_argument("--pin-weight", type=str2bool, nargs="?", const=True, default=True, help="FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%%).") -parser.add_argument('--deepspeed', action='store_true', help='Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.') -parser.add_argument('--nvme-offload-dir', type=str, help='DeepSpeed: Directory to use for ZeRO-3 NVME offloading.') -parser.add_argument('--local_rank', type=int, default=0, help='DeepSpeed: Optional argument for distributed setups.') -parser.add_argument('--rwkv-strategy', type=str, default=None, help='RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8".') -parser.add_argument('--rwkv-cuda-on', action='store_true', help='RWKV: Compile the CUDA kernel for better performance.') -parser.add_argument('--no-stream', action='store_true', help='Don\'t stream the text output in real time.') -parser.add_argument('--settings', type=str, help='Load the default interface settings from this json file. See settings-template.json for an example. If you create a file called settings.json, this file will be loaded by default without the need to use the --settings flag.') -parser.add_argument('--extensions', type=str, nargs="+", help='The list of extensions to load. If you want to load more than one extension, write the names separated by spaces.') -parser.add_argument('--listen', action='store_true', help='Make the web UI reachable from your local network.') -parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') -parser.add_argument('--share', action='store_true', help='Create a public URL. This is useful for running the web UI on Google Colab or similar.') -parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.') -parser.add_argument('--verbose', action='store_true', help='Print the prompts to the terminal.') -args = parser.parse_args() - -# Provisional, this will be deleted later -if args.load_in_4bit: - print("Warning: --load-in-4bit is deprecated and will be removed. Use --gptq-bits 4 instead.\n") - args.gptq_bits = 4 diff --git a/spaces/Norod78/WoWQuestTextGenerator/README.md b/spaces/Norod78/WoWQuestTextGenerator/README.md deleted file mode 100644 index 08fab4e9dfa9356425019755175637a08df5e760..0000000000000000000000000000000000000000 --- a/spaces/Norod78/WoWQuestTextGenerator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WoW Quest Generator -emoji: 🧝‍♀️ -colorFrom: green -colorTo: orange -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: cc-by-nc-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/clib/libnat_cuda/binding.cpp b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/clib/libnat_cuda/binding.cpp deleted file mode 100644 index ced91c0d0afab9071842911d9876e6360d90284a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/clib/libnat_cuda/binding.cpp +++ /dev/null @@ -1,67 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. 
- * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -/* - This code is partially adopted from - https://github.com/1ytic/pytorch-edit-distance - */ - -#include <torch/extension.h> -#include "edit_dist.h" - -#ifndef TORCH_CHECK -#define TORCH_CHECK AT_CHECK -#endif - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -torch::Tensor LevenshteinDistance( - torch::Tensor source, - torch::Tensor target, - torch::Tensor source_length, - torch::Tensor target_length) { - CHECK_INPUT(source); - CHECK_INPUT(target); - CHECK_INPUT(source_length); - CHECK_INPUT(target_length); - return LevenshteinDistanceCuda(source, target, source_length, target_length); -} - -torch::Tensor GenerateDeletionLabel( - torch::Tensor source, - torch::Tensor operations) { - CHECK_INPUT(source); - CHECK_INPUT(operations); - return GenerateDeletionLabelCuda(source, operations); -} - -std::pair<torch::Tensor, torch::Tensor> GenerateInsertionLabel( - torch::Tensor target, - torch::Tensor operations) { - CHECK_INPUT(target); - CHECK_INPUT(operations); - return GenerateInsertionLabelCuda(target, operations); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("levenshtein_distance", &LevenshteinDistance, "Levenshtein distance"); - m.def( - "generate_deletion_labels", - &GenerateDeletionLabel, - "Generate Deletion Label"); - m.def( - "generate_insertion_labels", - &GenerateInsertionLabel, - "Generate Insertion Label"); -} diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/model_criterion.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/model_criterion.py deleted file mode 100644 index 30350f13b1c00498de6784579250d6b342ced7dd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/model_criterion.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass, field -from typing import Dict, List - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelCriterionConfig(FairseqDataclass): - loss_weights: Dict[str, float] = field( - default_factory=dict, - metadata={"help": "weights for the loss terms"}, - ) - log_keys: List[str] = field( - default_factory=list, - metadata={"help": "additional output keys to log"}, - ) - - -@register_criterion("model", dataclass=ModelCriterionConfig) -class ModelCriterion(FairseqCriterion): - """ - This criterion relies on the model to supply losses. - The losses should be a dictionary of name -> scalar returned by - the model either by including it in the net_output dict or by - implementing a get_losses(net_output, sample) method. The final loss is - a scaled sum of all losses according to weights in loss_weights. - If no weights are provided, then all losses are scaled by 1.0. - - The losses will be automatically logged. Additional keys from - net_output dict can be logged via the log_keys parameter.
- """ - - def __init__(self, task, loss_weights=None, log_keys=None): - super().__init__(task) - self.loss_weights = loss_weights - self.log_keys = log_keys - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - - sample_size = net_output["sample_size"] - scaled_losses = {} - - if hasattr(model, "get_losses"): - losses = model.get_losses(net_output, sample) - elif isinstance(net_output, dict) and "losses" in net_output: - losses = net_output["losses"] - else: - raise Exception("Could not retrieve losses") - - for lk, p in losses.items(): - try: - coef = 1.0 if len(self.loss_weights) == 0 else self.loss_weights[lk] - except KeyError: - logger.error( - f"weight for loss {lk} is not in loss_weights ({self.loss_weights})" - ) - raise - if coef != 0 and p is not None: - scaled_losses[lk] = coef * p.float() - - loss = sum(scaled_losses.values()) - if reduce and loss.numel() > 1: - loss = loss.sum() - - logging_output = { - "loss": loss.data, - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - "_world_size": 1, - } - - for lk in self.log_keys: - if lk in net_output and net_output[lk] is not None: - logging_output[lk] = float(net_output[lk]) - - if len(scaled_losses) > 1: - for lk, l in scaled_losses.items(): - logging_output[f"loss_{lk}"] = l.item() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar("loss", loss_sum / sample_size, sample_size, round=3) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - - builtin_keys = { - "loss", - "ntokens", - "nsentences", - "sample_size", - "_world_size", - } - - world_size = utils.item( - sum(log.get("_world_size", 0) for log in logging_outputs) - ) - - for k in logging_outputs[0]: - if k not in builtin_keys: - val = sum(log.get(k, 0) for log in logging_outputs) - if k.startswith("loss_"): - metrics.log_scalar(k, val / sample_size, sample_size, round=3) - else: - metrics.log_scalar(k, val / world_size, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/frm_text_to_speech.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/frm_text_to_speech.py deleted file mode 100644 index 1fa9b0f83e742aefce764e2858a81f99db911afd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/frm_text_to_speech.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging - -from fairseq.data.audio.frm_text_to_speech_dataset import FrmTextToSpeechDatasetCreator -from fairseq.tasks import register_task -from fairseq.tasks.text_to_speech import TextToSpeechTask - - -logging.basicConfig( - format='%(asctime)s | %(levelname)s | %(name)s | %(message)s', - datefmt='%Y-%m-%d %H:%M:%S', level=logging.INFO -) -logger = logging.getLogger(__name__) - - -@register_task('frm_text_to_speech') -class FrmTextToSpeechTask(TextToSpeechTask): - @staticmethod - def add_args(parser): - TextToSpeechTask.add_args(parser) - parser.add_argument( - "--do_chunk", action="store_true", help="train on chunks" - ) - parser.add_argument("--chunk_bound", default=-1, type=int) - parser.add_argument("--chunk_init", default=50, type=int) - parser.add_argument("--chunk_incr", default=5, type=int) - parser.add_argument("--add_eos", action="store_true") - parser.add_argument("--dedup", action="store_true") - parser.add_argument("--ref_fpu", default=-1, type=float) - - def load_dataset(self, split, **unused_kwargs): - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = FrmTextToSpeechDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.src_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split=is_train_split, - n_frames_per_step=self.args.n_frames_per_step, - speaker_to_id=self.speaker_to_id, - do_chunk=self.args.do_chunk, - chunk_bound=self.args.chunk_bound, - chunk_init=self.args.chunk_init, - chunk_incr=self.args.chunk_incr, - add_eos=self.args.add_eos, - dedup=self.args.dedup, - ref_fpu=self.args.ref_fpu - ) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py deleted file mode 100644 index f869c4b2f8fb15f96a292e39bd293df7898a4fce..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Callable, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm, MultiheadAttention -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise - - -class TransformerSentenceEncoderLayer(nn.Module): - """ - Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained - models. 
- """ - - def __init__( - self, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - activation_fn: str = "relu", - export: bool = False, - q_noise: float = 0.0, - qn_block_size: int = 8, - init_fn: Callable = None, - ) -> None: - super().__init__() - - if init_fn is not None: - init_fn() - - # Initialize parameters - self.embedding_dim = embedding_dim - self.num_attention_heads = num_attention_heads - self.attention_dropout = attention_dropout - self.q_noise = q_noise - self.qn_block_size = qn_block_size - - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.activation_dropout_module = FairseqDropout( - activation_dropout, module_name=self.__class__.__name__ - ) - - # Initialize blocks - self.activation_fn = utils.get_activation_fn(activation_fn) - self.self_attn = self.build_self_attention( - self.embedding_dim, - num_attention_heads, - dropout=attention_dropout, - self_attention=True, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = LayerNorm(self.embedding_dim, export=export) - - self.fc1 = self.build_fc1( - self.embedding_dim, - ffn_embedding_dim, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - self.fc2 = self.build_fc2( - ffn_embedding_dim, - self.embedding_dim, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = LayerNorm(self.embedding_dim, export=export) - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, - embed_dim, - num_attention_heads, - dropout, - self_attention, - q_noise, - qn_block_size, - ): - return MultiheadAttention( - embed_dim, - num_attention_heads, - dropout=dropout, - self_attention=True, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - def forward( - self, - x: torch.Tensor, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - ): - """ - LayerNorm is applied either before or after the self-attention/ffn - modules similar to the original Transformer implementation. - """ - residual = x - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = residual + x - x = self.self_attn_layer_norm(x) - - residual = x - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.final_layer_norm(x) - return x, attn diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/multiprocessing_bpe_encoder.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/multiprocessing_bpe_encoder.py deleted file mode 100644 index 43fe0451bf4d5762d734314075b1402c2a8db2bb..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/roberta/multiprocessing_bpe_encoder.py +++ /dev/null @@ -1,130 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import contextlib -import sys -from collections import Counter -from multiprocessing import Pool - -from fairseq.data.encoders.gpt2_bpe import get_encoder - - -def main(): - """ - Helper script to encode raw text with the GPT-2 BPE using multiple processes. - - The encoder.json and vocab.bpe files can be obtained here: - - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json - - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--encoder-json", - help="path to encoder.json", - ) - parser.add_argument( - "--vocab-bpe", - type=str, - help="path to vocab.bpe", - ) - parser.add_argument( - "--inputs", - nargs="+", - default=["-"], - help="input files to filter/encode", - ) - parser.add_argument( - "--outputs", - nargs="+", - default=["-"], - help="path to save encoded outputs", - ) - parser.add_argument( - "--keep-empty", - action="store_true", - help="keep empty lines", - ) - parser.add_argument("--workers", type=int, default=20) - args = parser.parse_args() - - assert len(args.inputs) == len( - args.outputs - ), "number of input and output paths should match" - - with contextlib.ExitStack() as stack: - inputs = [ - stack.enter_context(open(input, "r", encoding="utf-8")) - if input != "-" - else sys.stdin - for input in args.inputs - ] - outputs = [ - stack.enter_context(open(output, "w", encoding="utf-8")) - if output != "-" - else sys.stdout - for output in args.outputs - ] - - encoder = MultiprocessingEncoder(args) - pool = Pool(args.workers, initializer=encoder.initializer) - encoded_lines = pool.imap(encoder.encode_lines, zip(*inputs), 100) - - stats = Counter() - for i, (filt, enc_lines) in enumerate(encoded_lines, start=1): - if filt == "PASS": - for enc_line, output_h in zip(enc_lines, outputs): - print(enc_line, file=output_h) - else: - stats["num_filtered_" + filt] += 1 - if i % 10000 == 0: - print("processed {} lines".format(i), file=sys.stderr) - - for k, v in stats.most_common(): - print("[{}] filtered {} lines".format(k, v), file=sys.stderr) - - -class MultiprocessingEncoder(object): - def __init__(self, args): - self.args = args - - def initializer(self): - global bpe - bpe = get_encoder(self.args.encoder_json, self.args.vocab_bpe) - - def encode(self, line): - global bpe - ids = bpe.encode(line) - return list(map(str, ids)) - - def decode(self, tokens): - global bpe - return bpe.decode(tokens) - - def encode_lines(self, lines): - """ - Encode a set of lines. All lines will be encoded together. 
- """ - enc_lines = [] - for line in lines: - line = line.strip() - if len(line) == 0 and not self.args.keep_empty: - return ["EMPTY", None] - tokens = self.encode(line) - enc_lines.append(" ".join(tokens)) - return ["PASS", enc_lines] - - def decode_lines(self, lines): - dec_lines = [] - for line in lines: - tokens = map(int, line.strip().split()) - dec_lines.append(self.decode(tokens)) - return ["PASS", dec_lines] - - -if __name__ == "__main__": - main() diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v0_5_categories.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v0_5_categories.py deleted file mode 100644 index d3dab6198da614937b08682f4c9edf52bdf1d236..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/lvis_v0_5_categories.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Autogen with -# with open("lvis_v0.5_val.json", "r") as f: -# a = json.load(f) -# c = a["categories"] -# for x in c: -# del x["image_count"] -# del x["instance_count"] -# LVIS_CATEGORIES = repr(c) + " # noqa" - -# fmt: off -LVIS_CATEGORIES = [{'frequency': 'r', 'id': 1, 'synset': 'acorn.n.01', 'synonyms': ['acorn'], 'def': 'nut from an oak tree', 'name': 'acorn'}, {'frequency': 'c', 'id': 2, 'synset': 'aerosol.n.02', 'synonyms': ['aerosol_can', 'spray_can'], 'def': 'a dispenser that holds a substance under pressure', 'name': 'aerosol_can'}, {'frequency': 'f', 'id': 3, 'synset': 'air_conditioner.n.01', 'synonyms': ['air_conditioner'], 'def': 'a machine that keeps air cool and dry', 'name': 'air_conditioner'}, {'frequency': 'f', 'id': 4, 'synset': 'airplane.n.01', 'synonyms': ['airplane', 'aeroplane'], 'def': 'an aircraft that has a fixed wing and is powered by propellers or jets', 'name': 'airplane'}, {'frequency': 'c', 'id': 5, 'synset': 'alarm_clock.n.01', 'synonyms': ['alarm_clock'], 'def': 'a clock that wakes a sleeper at some preset time', 'name': 'alarm_clock'}, {'frequency': 'c', 'id': 6, 'synset': 'alcohol.n.01', 'synonyms': ['alcohol', 'alcoholic_beverage'], 'def': 'a liquor or brew containing alcohol as the active agent', 'name': 'alcohol'}, {'frequency': 'r', 'id': 7, 'synset': 'alligator.n.02', 'synonyms': ['alligator', 'gator'], 'def': 'amphibious reptiles related to crocodiles but with shorter broader snouts', 'name': 'alligator'}, {'frequency': 'c', 'id': 8, 'synset': 'almond.n.02', 'synonyms': ['almond'], 'def': 'oval-shaped edible seed of the almond tree', 'name': 'almond'}, {'frequency': 'c', 'id': 9, 'synset': 'ambulance.n.01', 'synonyms': ['ambulance'], 'def': 'a vehicle that takes people to and from hospitals', 'name': 'ambulance'}, {'frequency': 'r', 'id': 10, 'synset': 'amplifier.n.01', 'synonyms': ['amplifier'], 'def': 'electronic equipment that increases strength of signals', 'name': 'amplifier'}, {'frequency': 'c', 'id': 11, 'synset': 'anklet.n.03', 'synonyms': ['anklet', 'ankle_bracelet'], 'def': 'an ornament worn around the ankle', 'name': 'anklet'}, {'frequency': 'f', 'id': 12, 'synset': 'antenna.n.01', 'synonyms': ['antenna', 'aerial', 'transmitting_aerial'], 'def': 'an electrical device that sends or receives radio or television signals', 'name': 'antenna'}, {'frequency': 'f', 'id': 13, 'synset': 'apple.n.01', 'synonyms': ['apple'], 'def': 'fruit with red or yellow or green skin and sweet to tart crisp whitish flesh', 'name': 'apple'}, 
{'frequency': 'r', 'id': 14, 'synset': 'apple_juice.n.01', 'synonyms': ['apple_juice'], 'def': 'the juice of apples', 'name': 'apple_juice'}, {'frequency': 'r', 'id': 15, 'synset': 'applesauce.n.01', 'synonyms': ['applesauce'], 'def': 'puree of stewed apples usually sweetened and spiced', 'name': 'applesauce'}, {'frequency': 'r', 'id': 16, 'synset': 'apricot.n.02', 'synonyms': ['apricot'], 'def': 'downy yellow to rosy-colored fruit resembling a small peach', 'name': 'apricot'}, {'frequency': 'f', 'id': 17, 'synset': 'apron.n.01', 'synonyms': ['apron'], 'def': 'a garment of cloth that is tied about the waist and worn to protect clothing', 'name': 'apron'}, {'frequency': 'c', 'id': 18, 'synset': 'aquarium.n.01', 'synonyms': ['aquarium', 'fish_tank'], 'def': 'a tank/pool/bowl filled with water for keeping live fish and underwater animals', 'name': 'aquarium'}, {'frequency': 'c', 'id': 19, 'synset': 'armband.n.02', 'synonyms': ['armband'], 'def': 'a band worn around the upper arm', 'name': 'armband'}, {'frequency': 'f', 'id': 20, 'synset': 'armchair.n.01', 'synonyms': ['armchair'], 'def': 'chair with a support on each side for arms', 'name': 'armchair'}, {'frequency': 'r', 'id': 21, 'synset': 'armoire.n.01', 'synonyms': ['armoire'], 'def': 'a large wardrobe or cabinet', 'name': 'armoire'}, {'frequency': 'r', 'id': 22, 'synset': 'armor.n.01', 'synonyms': ['armor', 'armour'], 'def': 'protective covering made of metal and used in combat', 'name': 'armor'}, {'frequency': 'c', 'id': 23, 'synset': 'artichoke.n.02', 'synonyms': ['artichoke'], 'def': 'a thistlelike flower head with edible fleshy leaves and heart', 'name': 'artichoke'}, {'frequency': 'f', 'id': 24, 'synset': 'ashcan.n.01', 'synonyms': ['trash_can', 'garbage_can', 'wastebin', 'dustbin', 'trash_barrel', 'trash_bin'], 'def': 'a bin that holds rubbish until it is collected', 'name': 'trash_can'}, {'frequency': 'c', 'id': 25, 'synset': 'ashtray.n.01', 'synonyms': ['ashtray'], 'def': "a receptacle for the ash from smokers' cigars or cigarettes", 'name': 'ashtray'}, {'frequency': 'c', 'id': 26, 'synset': 'asparagus.n.02', 'synonyms': ['asparagus'], 'def': 'edible young shoots of the asparagus plant', 'name': 'asparagus'}, {'frequency': 'c', 'id': 27, 'synset': 'atomizer.n.01', 'synonyms': ['atomizer', 'atomiser', 'spray', 'sprayer', 'nebulizer', 'nebuliser'], 'def': 'a dispenser that turns a liquid (such as perfume) into a fine mist', 'name': 'atomizer'}, {'frequency': 'c', 'id': 28, 'synset': 'avocado.n.01', 'synonyms': ['avocado'], 'def': 'a pear-shaped fruit with green or blackish skin and rich yellowish pulp enclosing a single large seed', 'name': 'avocado'}, {'frequency': 'c', 'id': 29, 'synset': 'award.n.02', 'synonyms': ['award', 'accolade'], 'def': 'a tangible symbol signifying approval or distinction', 'name': 'award'}, {'frequency': 'f', 'id': 30, 'synset': 'awning.n.01', 'synonyms': ['awning'], 'def': 'a canopy made of canvas to shelter people or things from rain or sun', 'name': 'awning'}, {'frequency': 'r', 'id': 31, 'synset': 'ax.n.01', 'synonyms': ['ax', 'axe'], 'def': 'an edge tool with a heavy bladed head mounted across a handle', 'name': 'ax'}, {'frequency': 'f', 'id': 32, 'synset': 'baby_buggy.n.01', 'synonyms': ['baby_buggy', 'baby_carriage', 'perambulator', 'pram', 'stroller'], 'def': 'a small vehicle with four wheels in which a baby or child is pushed around', 'name': 'baby_buggy'}, {'frequency': 'c', 'id': 33, 'synset': 'backboard.n.01', 'synonyms': ['basketball_backboard'], 'def': 'a raised vertical board with basket 
attached; used to play basketball', 'name': 'basketball_backboard'}, {'frequency': 'f', 'id': 34, 'synset': 'backpack.n.01', 'synonyms': ['backpack', 'knapsack', 'packsack', 'rucksack', 'haversack'], 'def': 'a bag carried by a strap on your back or shoulder', 'name': 'backpack'}, {'frequency': 'f', 'id': 35, 'synset': 'bag.n.04', 'synonyms': ['handbag', 'purse', 'pocketbook'], 'def': 'a container used for carrying money and small personal items or accessories', 'name': 'handbag'}, {'frequency': 'f', 'id': 36, 'synset': 'bag.n.06', 'synonyms': ['suitcase', 'baggage', 'luggage'], 'def': 'cases used to carry belongings when traveling', 'name': 'suitcase'}, {'frequency': 'c', 'id': 37, 'synset': 'bagel.n.01', 'synonyms': ['bagel', 'beigel'], 'def': 'glazed yeast-raised doughnut-shaped roll with hard crust', 'name': 'bagel'}, {'frequency': 'r', 'id': 38, 'synset': 'bagpipe.n.01', 'synonyms': ['bagpipe'], 'def': 'a tubular wind instrument; the player blows air into a bag and squeezes it out', 'name': 'bagpipe'}, {'frequency': 'r', 'id': 39, 'synset': 'baguet.n.01', 'synonyms': ['baguet', 'baguette'], 'def': 'narrow French stick loaf', 'name': 'baguet'}, {'frequency': 'r', 'id': 40, 'synset': 'bait.n.02', 'synonyms': ['bait', 'lure'], 'def': 'something used to lure fish or other animals into danger so they can be trapped or killed', 'name': 'bait'}, {'frequency': 'f', 'id': 41, 'synset': 'ball.n.06', 'synonyms': ['ball'], 'def': 'a spherical object used as a plaything', 'name': 'ball'}, {'frequency': 'r', 'id': 42, 'synset': 'ballet_skirt.n.01', 'synonyms': ['ballet_skirt', 'tutu'], 'def': 'very short skirt worn by ballerinas', 'name': 'ballet_skirt'}, {'frequency': 'f', 'id': 43, 'synset': 'balloon.n.01', 'synonyms': ['balloon'], 'def': 'large tough nonrigid bag filled with gas or heated air', 'name': 'balloon'}, {'frequency': 'c', 'id': 44, 'synset': 'bamboo.n.02', 'synonyms': ['bamboo'], 'def': 'woody tropical grass having hollow woody stems', 'name': 'bamboo'}, {'frequency': 'f', 'id': 45, 'synset': 'banana.n.02', 'synonyms': ['banana'], 'def': 'elongated crescent-shaped yellow fruit with soft sweet flesh', 'name': 'banana'}, {'frequency': 'r', 'id': 46, 'synset': 'band_aid.n.01', 'synonyms': ['Band_Aid'], 'def': 'trade name for an adhesive bandage to cover small cuts or blisters', 'name': 'Band_Aid'}, {'frequency': 'c', 'id': 47, 'synset': 'bandage.n.01', 'synonyms': ['bandage'], 'def': 'a piece of soft material that covers and protects an injured part of the body', 'name': 'bandage'}, {'frequency': 'c', 'id': 48, 'synset': 'bandanna.n.01', 'synonyms': ['bandanna', 'bandana'], 'def': 'large and brightly colored handkerchief; often used as a neckerchief', 'name': 'bandanna'}, {'frequency': 'r', 'id': 49, 'synset': 'banjo.n.01', 'synonyms': ['banjo'], 'def': 'a stringed instrument of the guitar family with a long neck and circular body', 'name': 'banjo'}, {'frequency': 'f', 'id': 50, 'synset': 'banner.n.01', 'synonyms': ['banner', 'streamer'], 'def': 'long strip of cloth or paper used for decoration or advertising', 'name': 'banner'}, {'frequency': 'r', 'id': 51, 'synset': 'barbell.n.01', 'synonyms': ['barbell'], 'def': 'a bar to which heavy discs are attached at each end; used in weightlifting', 'name': 'barbell'}, {'frequency': 'r', 'id': 52, 'synset': 'barge.n.01', 'synonyms': ['barge'], 'def': 'a flatbottom boat for carrying heavy loads (especially on canals)', 'name': 'barge'}, {'frequency': 'f', 'id': 53, 'synset': 'barrel.n.02', 'synonyms': ['barrel', 'cask'], 'def': 'a cylindrical 
container that holds liquids', 'name': 'barrel'}, {'frequency': 'c', 'id': 54, 'synset': 'barrette.n.01', 'synonyms': ['barrette'], 'def': "a pin for holding women's hair in place", 'name': 'barrette'}, {'frequency': 'c', 'id': 55, 'synset': 'barrow.n.03', 'synonyms': ['barrow', 'garden_cart', 'lawn_cart', 'wheelbarrow'], 'def': 'a cart for carrying small loads; has handles and one or more wheels', 'name': 'barrow'}, {'frequency': 'f', 'id': 56, 'synset': 'base.n.03', 'synonyms': ['baseball_base'], 'def': 'a place that the runner must touch before scoring', 'name': 'baseball_base'}, {'frequency': 'f', 'id': 57, 'synset': 'baseball.n.02', 'synonyms': ['baseball'], 'def': 'a ball used in playing baseball', 'name': 'baseball'}, {'frequency': 'f', 'id': 58, 'synset': 'baseball_bat.n.01', 'synonyms': ['baseball_bat'], 'def': 'an implement used in baseball by the batter', 'name': 'baseball_bat'}, {'frequency': 'f', 'id': 59, 'synset': 'baseball_cap.n.01', 'synonyms': ['baseball_cap', 'jockey_cap', 'golf_cap'], 'def': 'a cap with a bill', 'name': 'baseball_cap'}, {'frequency': 'f', 'id': 60, 'synset': 'baseball_glove.n.01', 'synonyms': ['baseball_glove', 'baseball_mitt'], 'def': 'the handwear used by fielders in playing baseball', 'name': 'baseball_glove'}, {'frequency': 'f', 'id': 61, 'synset': 'basket.n.01', 'synonyms': ['basket', 'handbasket'], 'def': 'a container that is usually woven and has handles', 'name': 'basket'}, {'frequency': 'c', 'id': 62, 'synset': 'basket.n.03', 'synonyms': ['basketball_hoop'], 'def': 'metal hoop supporting a net through which players try to throw the basketball', 'name': 'basketball_hoop'}, {'frequency': 'c', 'id': 63, 'synset': 'basketball.n.02', 'synonyms': ['basketball'], 'def': 'an inflated ball used in playing basketball', 'name': 'basketball'}, {'frequency': 'r', 'id': 64, 'synset': 'bass_horn.n.01', 'synonyms': ['bass_horn', 'sousaphone', 'tuba'], 'def': 'the lowest brass wind instrument', 'name': 'bass_horn'}, {'frequency': 'r', 'id': 65, 'synset': 'bat.n.01', 'synonyms': ['bat_(animal)'], 'def': 'nocturnal mouselike mammal with forelimbs modified to form membranous wings', 'name': 'bat_(animal)'}, {'frequency': 'f', 'id': 66, 'synset': 'bath_mat.n.01', 'synonyms': ['bath_mat'], 'def': 'a heavy towel or mat to stand on while drying yourself after a bath', 'name': 'bath_mat'}, {'frequency': 'f', 'id': 67, 'synset': 'bath_towel.n.01', 'synonyms': ['bath_towel'], 'def': 'a large towel; to dry yourself after a bath', 'name': 'bath_towel'}, {'frequency': 'c', 'id': 68, 'synset': 'bathrobe.n.01', 'synonyms': ['bathrobe'], 'def': 'a loose-fitting robe of towelling; worn after a bath or swim', 'name': 'bathrobe'}, {'frequency': 'f', 'id': 69, 'synset': 'bathtub.n.01', 'synonyms': ['bathtub', 'bathing_tub'], 'def': 'a large open container that you fill with water and use to wash the body', 'name': 'bathtub'}, {'frequency': 'r', 'id': 70, 'synset': 'batter.n.02', 'synonyms': ['batter_(food)'], 'def': 'a liquid or semiliquid mixture, as of flour, eggs, and milk, used in cooking', 'name': 'batter_(food)'}, {'frequency': 'c', 'id': 71, 'synset': 'battery.n.02', 'synonyms': ['battery'], 'def': 'a portable device that produces electricity', 'name': 'battery'}, {'frequency': 'r', 'id': 72, 'synset': 'beach_ball.n.01', 'synonyms': ['beachball'], 'def': 'large and light ball; for play at the seaside', 'name': 'beachball'}, {'frequency': 'c', 'id': 73, 'synset': 'bead.n.01', 'synonyms': ['bead'], 'def': 'a small ball with a hole through the middle used for ornamentation, 
jewellery, etc.', 'name': 'bead'}, {'frequency': 'r', 'id': 74, 'synset': 'beaker.n.01', 'synonyms': ['beaker'], 'def': 'a flatbottomed jar made of glass or plastic; used for chemistry', 'name': 'beaker'}, {'frequency': 'c', 'id': 75, 'synset': 'bean_curd.n.01', 'synonyms': ['bean_curd', 'tofu'], 'def': 'cheeselike food made of curdled soybean milk', 'name': 'bean_curd'}, {'frequency': 'c', 'id': 76, 'synset': 'beanbag.n.01', 'synonyms': ['beanbag'], 'def': 'a bag filled with dried beans or similar items; used in games or to sit on', 'name': 'beanbag'}, {'frequency': 'f', 'id': 77, 'synset': 'beanie.n.01', 'synonyms': ['beanie', 'beany'], 'def': 'a small skullcap; formerly worn by schoolboys and college freshmen', 'name': 'beanie'}, {'frequency': 'f', 'id': 78, 'synset': 'bear.n.01', 'synonyms': ['bear'], 'def': 'large carnivorous or omnivorous mammals with shaggy coats and claws', 'name': 'bear'}, {'frequency': 'f', 'id': 79, 'synset': 'bed.n.01', 'synonyms': ['bed'], 'def': 'a piece of furniture that provides a place to sleep', 'name': 'bed'}, {'frequency': 'c', 'id': 80, 'synset': 'bedspread.n.01', 'synonyms': ['bedspread', 'bedcover', 'bed_covering', 'counterpane', 'spread'], 'def': 'decorative cover for a bed', 'name': 'bedspread'}, {'frequency': 'f', 'id': 81, 'synset': 'beef.n.01', 'synonyms': ['cow'], 'def': 'cattle that are reared for their meat', 'name': 'cow'}, {'frequency': 'c', 'id': 82, 'synset': 'beef.n.02', 'synonyms': ['beef_(food)', 'boeuf_(food)'], 'def': 'meat from an adult domestic bovine', 'name': 'beef_(food)'}, {'frequency': 'r', 'id': 83, 'synset': 'beeper.n.01', 'synonyms': ['beeper', 'pager'], 'def': 'an device that beeps when the person carrying it is being paged', 'name': 'beeper'}, {'frequency': 'f', 'id': 84, 'synset': 'beer_bottle.n.01', 'synonyms': ['beer_bottle'], 'def': 'a bottle that holds beer', 'name': 'beer_bottle'}, {'frequency': 'c', 'id': 85, 'synset': 'beer_can.n.01', 'synonyms': ['beer_can'], 'def': 'a can that holds beer', 'name': 'beer_can'}, {'frequency': 'r', 'id': 86, 'synset': 'beetle.n.01', 'synonyms': ['beetle'], 'def': 'insect with hard wing covers', 'name': 'beetle'}, {'frequency': 'f', 'id': 87, 'synset': 'bell.n.01', 'synonyms': ['bell'], 'def': 'a hollow device made of metal that makes a ringing sound when struck', 'name': 'bell'}, {'frequency': 'f', 'id': 88, 'synset': 'bell_pepper.n.02', 'synonyms': ['bell_pepper', 'capsicum'], 'def': 'large bell-shaped sweet pepper in green or red or yellow or orange or black varieties', 'name': 'bell_pepper'}, {'frequency': 'f', 'id': 89, 'synset': 'belt.n.02', 'synonyms': ['belt'], 'def': 'a band to tie or buckle around the body (usually at the waist)', 'name': 'belt'}, {'frequency': 'f', 'id': 90, 'synset': 'belt_buckle.n.01', 'synonyms': ['belt_buckle'], 'def': 'the buckle used to fasten a belt', 'name': 'belt_buckle'}, {'frequency': 'f', 'id': 91, 'synset': 'bench.n.01', 'synonyms': ['bench'], 'def': 'a long seat for more than one person', 'name': 'bench'}, {'frequency': 'c', 'id': 92, 'synset': 'beret.n.01', 'synonyms': ['beret'], 'def': 'a cap with no brim or bill; made of soft cloth', 'name': 'beret'}, {'frequency': 'c', 'id': 93, 'synset': 'bib.n.02', 'synonyms': ['bib'], 'def': 'a napkin tied under the chin of a child while eating', 'name': 'bib'}, {'frequency': 'r', 'id': 94, 'synset': 'bible.n.01', 'synonyms': ['Bible'], 'def': 'the sacred writings of the Christian religions', 'name': 'Bible'}, {'frequency': 'f', 'id': 95, 'synset': 'bicycle.n.01', 'synonyms': ['bicycle', 
'bike_(bicycle)'], 'def': 'a wheeled vehicle that has two wheels and is moved by foot pedals', 'name': 'bicycle'}, {'frequency': 'f', 'id': 96, 'synset': 'bill.n.09', 'synonyms': ['visor', 'vizor'], 'def': 'a brim that projects to the front to shade the eyes', 'name': 'visor'}, {'frequency': 'c', 'id': 97, 'synset': 'binder.n.03', 'synonyms': ['binder', 'ring-binder'], 'def': 'holds loose papers or magazines', 'name': 'binder'}, {'frequency': 'c', 'id': 98, 'synset': 'binoculars.n.01', 'synonyms': ['binoculars', 'field_glasses', 'opera_glasses'], 'def': 'an optical instrument designed for simultaneous use by both eyes', 'name': 'binoculars'}, {'frequency': 'f', 'id': 99, 'synset': 'bird.n.01', 'synonyms': ['bird'], 'def': 'animal characterized by feathers and wings', 'name': 'bird'}, {'frequency': 'r', 'id': 100, 'synset': 'bird_feeder.n.01', 'synonyms': ['birdfeeder'], 'def': 'an outdoor device that supplies food for wild birds', 'name': 'birdfeeder'}, {'frequency': 'r', 'id': 101, 'synset': 'birdbath.n.01', 'synonyms': ['birdbath'], 'def': 'an ornamental basin (usually in a garden) for birds to bathe in', 'name': 'birdbath'}, {'frequency': 'c', 'id': 102, 'synset': 'birdcage.n.01', 'synonyms': ['birdcage'], 'def': 'a cage in which a bird can be kept', 'name': 'birdcage'}, {'frequency': 'c', 'id': 103, 'synset': 'birdhouse.n.01', 'synonyms': ['birdhouse'], 'def': 'a shelter for birds', 'name': 'birdhouse'}, {'frequency': 'f', 'id': 104, 'synset': 'birthday_cake.n.01', 'synonyms': ['birthday_cake'], 'def': 'decorated cake served at a birthday party', 'name': 'birthday_cake'}, {'frequency': 'r', 'id': 105, 'synset': 'birthday_card.n.01', 'synonyms': ['birthday_card'], 'def': 'a card expressing a birthday greeting', 'name': 'birthday_card'}, {'frequency': 'r', 'id': 106, 'synset': 'biscuit.n.01', 'synonyms': ['biscuit_(bread)'], 'def': 'small round bread leavened with baking-powder or soda', 'name': 'biscuit_(bread)'}, {'frequency': 'r', 'id': 107, 'synset': 'black_flag.n.01', 'synonyms': ['pirate_flag'], 'def': 'a flag usually bearing a white skull and crossbones on a black background', 'name': 'pirate_flag'}, {'frequency': 'c', 'id': 108, 'synset': 'black_sheep.n.02', 'synonyms': ['black_sheep'], 'def': 'sheep with a black coat', 'name': 'black_sheep'}, {'frequency': 'c', 'id': 109, 'synset': 'blackboard.n.01', 'synonyms': ['blackboard', 'chalkboard'], 'def': 'sheet of slate; for writing with chalk', 'name': 'blackboard'}, {'frequency': 'f', 'id': 110, 'synset': 'blanket.n.01', 'synonyms': ['blanket'], 'def': 'bedding that keeps a person warm in bed', 'name': 'blanket'}, {'frequency': 'c', 'id': 111, 'synset': 'blazer.n.01', 'synonyms': ['blazer', 'sport_jacket', 'sport_coat', 'sports_jacket', 'sports_coat'], 'def': 'lightweight jacket; often striped in the colors of a club or school', 'name': 'blazer'}, {'frequency': 'f', 'id': 112, 'synset': 'blender.n.01', 'synonyms': ['blender', 'liquidizer', 'liquidiser'], 'def': 'an electrically powered mixer that mix or chop or liquefy foods', 'name': 'blender'}, {'frequency': 'r', 'id': 113, 'synset': 'blimp.n.02', 'synonyms': ['blimp'], 'def': 'a small nonrigid airship used for observation or as a barrage balloon', 'name': 'blimp'}, {'frequency': 'c', 'id': 114, 'synset': 'blinker.n.01', 'synonyms': ['blinker', 'flasher'], 'def': 'a light that flashes on and off; used as a signal or to send messages', 'name': 'blinker'}, {'frequency': 'c', 'id': 115, 'synset': 'blueberry.n.02', 'synonyms': ['blueberry'], 'def': 'sweet edible dark-blue berries of 
blueberry plants', 'name': 'blueberry'}, {'frequency': 'r', 'id': 116, 'synset': 'boar.n.02', 'synonyms': ['boar'], 'def': 'an uncastrated male hog', 'name': 'boar'}, {'frequency': 'r', 'id': 117, 'synset': 'board.n.09', 'synonyms': ['gameboard'], 'def': 'a flat portable surface (usually rectangular) designed for board games', 'name': 'gameboard'}, {'frequency': 'f', 'id': 118, 'synset': 'boat.n.01', 'synonyms': ['boat', 'ship_(boat)'], 'def': 'a vessel for travel on water', 'name': 'boat'}, {'frequency': 'c', 'id': 119, 'synset': 'bobbin.n.01', 'synonyms': ['bobbin', 'spool', 'reel'], 'def': 'a thing around which thread/tape/film or other flexible materials can be wound', 'name': 'bobbin'}, {'frequency': 'r', 'id': 120, 'synset': 'bobby_pin.n.01', 'synonyms': ['bobby_pin', 'hairgrip'], 'def': 'a flat wire hairpin used to hold bobbed hair in place', 'name': 'bobby_pin'}, {'frequency': 'c', 'id': 121, 'synset': 'boiled_egg.n.01', 'synonyms': ['boiled_egg', 'coddled_egg'], 'def': 'egg cooked briefly in the shell in gently boiling water', 'name': 'boiled_egg'}, {'frequency': 'r', 'id': 122, 'synset': 'bolo_tie.n.01', 'synonyms': ['bolo_tie', 'bolo', 'bola_tie', 'bola'], 'def': 'a cord fastened around the neck with an ornamental clasp and worn as a necktie', 'name': 'bolo_tie'}, {'frequency': 'c', 'id': 123, 'synset': 'bolt.n.03', 'synonyms': ['deadbolt'], 'def': 'the part of a lock that is engaged or withdrawn with a key', 'name': 'deadbolt'}, {'frequency': 'f', 'id': 124, 'synset': 'bolt.n.06', 'synonyms': ['bolt'], 'def': 'a screw that screws into a nut to form a fastener', 'name': 'bolt'}, {'frequency': 'r', 'id': 125, 'synset': 'bonnet.n.01', 'synonyms': ['bonnet'], 'def': 'a hat tied under the chin', 'name': 'bonnet'}, {'frequency': 'f', 'id': 126, 'synset': 'book.n.01', 'synonyms': ['book'], 'def': 'a written work or composition that has been published', 'name': 'book'}, {'frequency': 'r', 'id': 127, 'synset': 'book_bag.n.01', 'synonyms': ['book_bag'], 'def': 'a bag in which students carry their books', 'name': 'book_bag'}, {'frequency': 'c', 'id': 128, 'synset': 'bookcase.n.01', 'synonyms': ['bookcase'], 'def': 'a piece of furniture with shelves for storing books', 'name': 'bookcase'}, {'frequency': 'c', 'id': 129, 'synset': 'booklet.n.01', 'synonyms': ['booklet', 'brochure', 'leaflet', 'pamphlet'], 'def': 'a small book usually having a paper cover', 'name': 'booklet'}, {'frequency': 'r', 'id': 130, 'synset': 'bookmark.n.01', 'synonyms': ['bookmark', 'bookmarker'], 'def': 'a marker (a piece of paper or ribbon) placed between the pages of a book', 'name': 'bookmark'}, {'frequency': 'r', 'id': 131, 'synset': 'boom.n.04', 'synonyms': ['boom_microphone', 'microphone_boom'], 'def': 'a pole carrying an overhead microphone projected over a film or tv set', 'name': 'boom_microphone'}, {'frequency': 'f', 'id': 132, 'synset': 'boot.n.01', 'synonyms': ['boot'], 'def': 'footwear that covers the whole foot and lower leg', 'name': 'boot'}, {'frequency': 'f', 'id': 133, 'synset': 'bottle.n.01', 'synonyms': ['bottle'], 'def': 'a glass or plastic vessel used for storing drinks or other liquids', 'name': 'bottle'}, {'frequency': 'c', 'id': 134, 'synset': 'bottle_opener.n.01', 'synonyms': ['bottle_opener'], 'def': 'an opener for removing caps or corks from bottles', 'name': 'bottle_opener'}, {'frequency': 'c', 'id': 135, 'synset': 'bouquet.n.01', 'synonyms': ['bouquet'], 'def': 'an arrangement of flowers that is usually given as a present', 'name': 'bouquet'}, {'frequency': 'r', 'id': 136, 'synset': 
'bow.n.04', 'synonyms': ['bow_(weapon)'], 'def': 'a weapon for shooting arrows', 'name': 'bow_(weapon)'}, {'frequency': 'f', 'id': 137, 'synset': 'bow.n.08', 'synonyms': ['bow_(decorative_ribbons)'], 'def': 'a decorative interlacing of ribbons', 'name': 'bow_(decorative_ribbons)'}, {'frequency': 'f', 'id': 138, 'synset': 'bow_tie.n.01', 'synonyms': ['bow-tie', 'bowtie'], 'def': "a man's tie that ties in a bow", 'name': 'bow-tie'}, {'frequency': 'f', 'id': 139, 'synset': 'bowl.n.03', 'synonyms': ['bowl'], 'def': 'a dish that is round and open at the top for serving foods', 'name': 'bowl'}, {'frequency': 'r', 'id': 140, 'synset': 'bowl.n.08', 'synonyms': ['pipe_bowl'], 'def': 'a small round container that is open at the top for holding tobacco', 'name': 'pipe_bowl'}, {'frequency': 'c', 'id': 141, 'synset': 'bowler_hat.n.01', 'synonyms': ['bowler_hat', 'bowler', 'derby_hat', 'derby', 'plug_hat'], 'def': 'a felt hat that is round and hard with a narrow brim', 'name': 'bowler_hat'}, {'frequency': 'r', 'id': 142, 'synset': 'bowling_ball.n.01', 'synonyms': ['bowling_ball'], 'def': 'a large ball with finger holes used in the sport of bowling', 'name': 'bowling_ball'}, {'frequency': 'r', 'id': 143, 'synset': 'bowling_pin.n.01', 'synonyms': ['bowling_pin'], 'def': 'a club-shaped wooden object used in bowling', 'name': 'bowling_pin'}, {'frequency': 'r', 'id': 144, 'synset': 'boxing_glove.n.01', 'synonyms': ['boxing_glove'], 'def': 'large glove coverings the fists of a fighter worn for the sport of boxing', 'name': 'boxing_glove'}, {'frequency': 'c', 'id': 145, 'synset': 'brace.n.06', 'synonyms': ['suspenders'], 'def': 'elastic straps that hold trousers up (usually used in the plural)', 'name': 'suspenders'}, {'frequency': 'f', 'id': 146, 'synset': 'bracelet.n.02', 'synonyms': ['bracelet', 'bangle'], 'def': 'jewelry worn around the wrist for decoration', 'name': 'bracelet'}, {'frequency': 'r', 'id': 147, 'synset': 'brass.n.07', 'synonyms': ['brass_plaque'], 'def': 'a memorial made of brass', 'name': 'brass_plaque'}, {'frequency': 'c', 'id': 148, 'synset': 'brassiere.n.01', 'synonyms': ['brassiere', 'bra', 'bandeau'], 'def': 'an undergarment worn by women to support their breasts', 'name': 'brassiere'}, {'frequency': 'c', 'id': 149, 'synset': 'bread-bin.n.01', 'synonyms': ['bread-bin', 'breadbox'], 'def': 'a container used to keep bread or cake in', 'name': 'bread-bin'}, {'frequency': 'r', 'id': 150, 'synset': 'breechcloth.n.01', 'synonyms': ['breechcloth', 'breechclout', 'loincloth'], 'def': 'a garment that provides covering for the loins', 'name': 'breechcloth'}, {'frequency': 'c', 'id': 151, 'synset': 'bridal_gown.n.01', 'synonyms': ['bridal_gown', 'wedding_gown', 'wedding_dress'], 'def': 'a gown worn by the bride at a wedding', 'name': 'bridal_gown'}, {'frequency': 'c', 'id': 152, 'synset': 'briefcase.n.01', 'synonyms': ['briefcase'], 'def': 'a case with a handle; for carrying papers or files or books', 'name': 'briefcase'}, {'frequency': 'c', 'id': 153, 'synset': 'bristle_brush.n.01', 'synonyms': ['bristle_brush'], 'def': 'a brush that is made with the short stiff hairs of an animal or plant', 'name': 'bristle_brush'}, {'frequency': 'f', 'id': 154, 'synset': 'broccoli.n.01', 'synonyms': ['broccoli'], 'def': 'plant with dense clusters of tight green flower buds', 'name': 'broccoli'}, {'frequency': 'r', 'id': 155, 'synset': 'brooch.n.01', 'synonyms': ['broach'], 'def': 'a decorative pin worn by women', 'name': 'broach'}, {'frequency': 'c', 'id': 156, 'synset': 'broom.n.01', 'synonyms': ['broom'], 
'def': 'bundle of straws or twigs attached to a long handle; used for cleaning', 'name': 'broom'}, {'frequency': 'c', 'id': 157, 'synset': 'brownie.n.03', 'synonyms': ['brownie'], 'def': 'square or bar of very rich chocolate cake usually with nuts', 'name': 'brownie'}, {'frequency': 'c', 'id': 158, 'synset': 'brussels_sprouts.n.01', 'synonyms': ['brussels_sprouts'], 'def': 'the small edible cabbage-like buds growing along a stalk', 'name': 'brussels_sprouts'}, {'frequency': 'r', 'id': 159, 'synset': 'bubble_gum.n.01', 'synonyms': ['bubble_gum'], 'def': 'a kind of chewing gum that can be blown into bubbles', 'name': 'bubble_gum'}, {'frequency': 'f', 'id': 160, 'synset': 'bucket.n.01', 'synonyms': ['bucket', 'pail'], 'def': 'a roughly cylindrical vessel that is open at the top', 'name': 'bucket'}, {'frequency': 'r', 'id': 161, 'synset': 'buggy.n.01', 'synonyms': ['horse_buggy'], 'def': 'a small lightweight carriage; drawn by a single horse', 'name': 'horse_buggy'}, {'frequency': 'c', 'id': 162, 'synset': 'bull.n.11', 'synonyms': ['bull'], 'def': 'mature male cow', 'name': 'bull'}, {'frequency': 'r', 'id': 163, 'synset': 'bulldog.n.01', 'synonyms': ['bulldog'], 'def': 'a thickset short-haired dog with a large head and strong undershot lower jaw', 'name': 'bulldog'}, {'frequency': 'r', 'id': 164, 'synset': 'bulldozer.n.01', 'synonyms': ['bulldozer', 'dozer'], 'def': 'large powerful tractor; a large blade in front flattens areas of ground', 'name': 'bulldozer'}, {'frequency': 'c', 'id': 165, 'synset': 'bullet_train.n.01', 'synonyms': ['bullet_train'], 'def': 'a high-speed passenger train', 'name': 'bullet_train'}, {'frequency': 'c', 'id': 166, 'synset': 'bulletin_board.n.02', 'synonyms': ['bulletin_board', 'notice_board'], 'def': 'a board that hangs on a wall; displays announcements', 'name': 'bulletin_board'}, {'frequency': 'r', 'id': 167, 'synset': 'bulletproof_vest.n.01', 'synonyms': ['bulletproof_vest'], 'def': 'a vest capable of resisting the impact of a bullet', 'name': 'bulletproof_vest'}, {'frequency': 'c', 'id': 168, 'synset': 'bullhorn.n.01', 'synonyms': ['bullhorn', 'megaphone'], 'def': 'a portable loudspeaker with built-in microphone and amplifier', 'name': 'bullhorn'}, {'frequency': 'r', 'id': 169, 'synset': 'bully_beef.n.01', 'synonyms': ['corned_beef', 'corn_beef'], 'def': 'beef cured or pickled in brine', 'name': 'corned_beef'}, {'frequency': 'f', 'id': 170, 'synset': 'bun.n.01', 'synonyms': ['bun', 'roll'], 'def': 'small rounded bread either plain or sweet', 'name': 'bun'}, {'frequency': 'c', 'id': 171, 'synset': 'bunk_bed.n.01', 'synonyms': ['bunk_bed'], 'def': 'beds built one above the other', 'name': 'bunk_bed'}, {'frequency': 'f', 'id': 172, 'synset': 'buoy.n.01', 'synonyms': ['buoy'], 'def': 'a float attached by rope to the seabed to mark channels in a harbor or underwater hazards', 'name': 'buoy'}, {'frequency': 'r', 'id': 173, 'synset': 'burrito.n.01', 'synonyms': ['burrito'], 'def': 'a flour tortilla folded around a filling', 'name': 'burrito'}, {'frequency': 'f', 'id': 174, 'synset': 'bus.n.01', 'synonyms': ['bus_(vehicle)', 'autobus', 'charabanc', 'double-decker', 'motorbus', 'motorcoach'], 'def': 'a vehicle carrying many passengers; used for public transport', 'name': 'bus_(vehicle)'}, {'frequency': 'c', 'id': 175, 'synset': 'business_card.n.01', 'synonyms': ['business_card'], 'def': "a card on which are printed the person's name and business affiliation", 'name': 'business_card'}, {'frequency': 'c', 'id': 176, 'synset': 'butcher_knife.n.01', 'synonyms': 
['butcher_knife'], 'def': 'a large sharp knife for cutting or trimming meat', 'name': 'butcher_knife'}, {'frequency': 'c', 'id': 177, 'synset': 'butter.n.01', 'synonyms': ['butter'], 'def': 'an edible emulsion of fat globules made by churning milk or cream; for cooking and table use', 'name': 'butter'}, {'frequency': 'c', 'id': 178, 'synset': 'butterfly.n.01', 'synonyms': ['butterfly'], 'def': 'insect typically having a slender body with knobbed antennae and broad colorful wings', 'name': 'butterfly'}, {'frequency': 'f', 'id': 179, 'synset': 'button.n.01', 'synonyms': ['button'], 'def': 'a round fastener sewn to shirts and coats etc to fit through buttonholes', 'name': 'button'}, {'frequency': 'f', 'id': 180, 'synset': 'cab.n.03', 'synonyms': ['cab_(taxi)', 'taxi', 'taxicab'], 'def': 'a car that takes passengers where they want to go in exchange for money', 'name': 'cab_(taxi)'}, {'frequency': 'r', 'id': 181, 'synset': 'cabana.n.01', 'synonyms': ['cabana'], 'def': 'a small tent used as a dressing room beside the sea or a swimming pool', 'name': 'cabana'}, {'frequency': 'r', 'id': 182, 'synset': 'cabin_car.n.01', 'synonyms': ['cabin_car', 'caboose'], 'def': 'a car on a freight train for use of the train crew; usually the last car on the train', 'name': 'cabin_car'}, {'frequency': 'f', 'id': 183, 'synset': 'cabinet.n.01', 'synonyms': ['cabinet'], 'def': 'a piece of furniture resembling a cupboard with doors and shelves and drawers', 'name': 'cabinet'}, {'frequency': 'r', 'id': 184, 'synset': 'cabinet.n.03', 'synonyms': ['locker', 'storage_locker'], 'def': 'a storage compartment for clothes and valuables; usually it has a lock', 'name': 'locker'}, {'frequency': 'f', 'id': 185, 'synset': 'cake.n.03', 'synonyms': ['cake'], 'def': 'baked goods made from or based on a mixture of flour, sugar, eggs, and fat', 'name': 'cake'}, {'frequency': 'c', 'id': 186, 'synset': 'calculator.n.02', 'synonyms': ['calculator'], 'def': 'a small machine that is used for mathematical calculations', 'name': 'calculator'}, {'frequency': 'f', 'id': 187, 'synset': 'calendar.n.02', 'synonyms': ['calendar'], 'def': 'a list or register of events (appointments/social events/court cases, etc)', 'name': 'calendar'}, {'frequency': 'c', 'id': 188, 'synset': 'calf.n.01', 'synonyms': ['calf'], 'def': 'young of domestic cattle', 'name': 'calf'}, {'frequency': 'c', 'id': 189, 'synset': 'camcorder.n.01', 'synonyms': ['camcorder'], 'def': 'a portable television camera and videocassette recorder', 'name': 'camcorder'}, {'frequency': 'c', 'id': 190, 'synset': 'camel.n.01', 'synonyms': ['camel'], 'def': 'cud-chewing mammal used as a draft or saddle animal in desert regions', 'name': 'camel'}, {'frequency': 'f', 'id': 191, 'synset': 'camera.n.01', 'synonyms': ['camera'], 'def': 'equipment for taking photographs', 'name': 'camera'}, {'frequency': 'c', 'id': 192, 'synset': 'camera_lens.n.01', 'synonyms': ['camera_lens'], 'def': 'a lens that focuses the image in a camera', 'name': 'camera_lens'}, {'frequency': 'c', 'id': 193, 'synset': 'camper.n.02', 'synonyms': ['camper_(vehicle)', 'camping_bus', 'motor_home'], 'def': 'a recreational vehicle equipped for camping out while traveling', 'name': 'camper_(vehicle)'}, {'frequency': 'f', 'id': 194, 'synset': 'can.n.01', 'synonyms': ['can', 'tin_can'], 'def': 'airtight sealed metal container for food or drink or paint etc.', 'name': 'can'}, {'frequency': 'c', 'id': 195, 'synset': 'can_opener.n.01', 'synonyms': ['can_opener', 'tin_opener'], 'def': 'a device for cutting cans open', 'name': 
'can_opener'}, {'frequency': 'r', 'id': 196, 'synset': 'candelabrum.n.01', 'synonyms': ['candelabrum', 'candelabra'], 'def': 'branched candlestick; ornamental; has several lights', 'name': 'candelabrum'}, {'frequency': 'f', 'id': 197, 'synset': 'candle.n.01', 'synonyms': ['candle', 'candlestick'], 'def': 'stick of wax with a wick in the middle', 'name': 'candle'}, {'frequency': 'f', 'id': 198, 'synset': 'candlestick.n.01', 'synonyms': ['candle_holder'], 'def': 'a holder with sockets for candles', 'name': 'candle_holder'}, {'frequency': 'r', 'id': 199, 'synset': 'candy_bar.n.01', 'synonyms': ['candy_bar'], 'def': 'a candy shaped as a bar', 'name': 'candy_bar'}, {'frequency': 'c', 'id': 200, 'synset': 'candy_cane.n.01', 'synonyms': ['candy_cane'], 'def': 'a hard candy in the shape of a rod (usually with stripes)', 'name': 'candy_cane'}, {'frequency': 'c', 'id': 201, 'synset': 'cane.n.01', 'synonyms': ['walking_cane'], 'def': 'a stick that people can lean on to help them walk', 'name': 'walking_cane'}, {'frequency': 'c', 'id': 202, 'synset': 'canister.n.02', 'synonyms': ['canister', 'cannister'], 'def': 'metal container for storing dry foods such as tea or flour', 'name': 'canister'}, {'frequency': 'r', 'id': 203, 'synset': 'cannon.n.02', 'synonyms': ['cannon'], 'def': 'heavy gun fired from a tank', 'name': 'cannon'}, {'frequency': 'c', 'id': 204, 'synset': 'canoe.n.01', 'synonyms': ['canoe'], 'def': 'small and light boat; pointed at both ends; propelled with a paddle', 'name': 'canoe'}, {'frequency': 'r', 'id': 205, 'synset': 'cantaloup.n.02', 'synonyms': ['cantaloup', 'cantaloupe'], 'def': 'the fruit of a cantaloup vine; small to medium-sized melon with yellowish flesh', 'name': 'cantaloup'}, {'frequency': 'r', 'id': 206, 'synset': 'canteen.n.01', 'synonyms': ['canteen'], 'def': 'a flask for carrying water; used by soldiers or travelers', 'name': 'canteen'}, {'frequency': 'c', 'id': 207, 'synset': 'cap.n.01', 'synonyms': ['cap_(headwear)'], 'def': 'a tight-fitting headwear', 'name': 'cap_(headwear)'}, {'frequency': 'f', 'id': 208, 'synset': 'cap.n.02', 'synonyms': ['bottle_cap', 'cap_(container_lid)'], 'def': 'a top (as for a bottle)', 'name': 'bottle_cap'}, {'frequency': 'r', 'id': 209, 'synset': 'cape.n.02', 'synonyms': ['cape'], 'def': 'a sleeveless garment like a cloak but shorter', 'name': 'cape'}, {'frequency': 'c', 'id': 210, 'synset': 'cappuccino.n.01', 'synonyms': ['cappuccino', 'coffee_cappuccino'], 'def': 'equal parts of espresso and steamed milk', 'name': 'cappuccino'}, {'frequency': 'f', 'id': 211, 'synset': 'car.n.01', 'synonyms': ['car_(automobile)', 'auto_(automobile)', 'automobile'], 'def': 'a motor vehicle with four wheels', 'name': 'car_(automobile)'}, {'frequency': 'f', 'id': 212, 'synset': 'car.n.02', 'synonyms': ['railcar_(part_of_a_train)', 'railway_car_(part_of_a_train)', 'railroad_car_(part_of_a_train)'], 'def': 'a wheeled vehicle adapted to the rails of railroad', 'name': 'railcar_(part_of_a_train)'}, {'frequency': 'r', 'id': 213, 'synset': 'car.n.04', 'synonyms': ['elevator_car'], 'def': 'where passengers ride up and down', 'name': 'elevator_car'}, {'frequency': 'r', 'id': 214, 'synset': 'car_battery.n.01', 'synonyms': ['car_battery', 'automobile_battery'], 'def': 'a battery in a motor vehicle', 'name': 'car_battery'}, {'frequency': 'c', 'id': 215, 'synset': 'card.n.02', 'synonyms': ['identity_card'], 'def': 'a card certifying the identity of the bearer', 'name': 'identity_card'}, {'frequency': 'c', 'id': 216, 'synset': 'card.n.03', 'synonyms': ['card'], 'def': 'a 
rectangular piece of paper used to send messages (e.g. greetings or pictures)', 'name': 'card'}, {'frequency': 'r', 'id': 217, 'synset': 'cardigan.n.01', 'synonyms': ['cardigan'], 'def': 'knitted jacket that is fastened up the front with buttons or a zipper', 'name': 'cardigan'}, {'frequency': 'r', 'id': 218, 'synset': 'cargo_ship.n.01', 'synonyms': ['cargo_ship', 'cargo_vessel'], 'def': 'a ship designed to carry cargo', 'name': 'cargo_ship'}, {'frequency': 'r', 'id': 219, 'synset': 'carnation.n.01', 'synonyms': ['carnation'], 'def': 'plant with pink to purple-red spice-scented usually double flowers', 'name': 'carnation'}, {'frequency': 'c', 'id': 220, 'synset': 'carriage.n.02', 'synonyms': ['horse_carriage'], 'def': 'a vehicle with wheels drawn by one or more horses', 'name': 'horse_carriage'}, {'frequency': 'f', 'id': 221, 'synset': 'carrot.n.01', 'synonyms': ['carrot'], 'def': 'deep orange edible root of the cultivated carrot plant', 'name': 'carrot'}, {'frequency': 'c', 'id': 222, 'synset': 'carryall.n.01', 'synonyms': ['tote_bag'], 'def': 'a capacious bag or basket', 'name': 'tote_bag'}, {'frequency': 'c', 'id': 223, 'synset': 'cart.n.01', 'synonyms': ['cart'], 'def': 'a heavy open wagon usually having two wheels and drawn by an animal', 'name': 'cart'}, {'frequency': 'c', 'id': 224, 'synset': 'carton.n.02', 'synonyms': ['carton'], 'def': 'a box made of cardboard; opens by flaps on top', 'name': 'carton'}, {'frequency': 'c', 'id': 225, 'synset': 'cash_register.n.01', 'synonyms': ['cash_register', 'register_(for_cash_transactions)'], 'def': 'a cashbox with an adding machine to register transactions', 'name': 'cash_register'}, {'frequency': 'r', 'id': 226, 'synset': 'casserole.n.01', 'synonyms': ['casserole'], 'def': 'food cooked and served in a casserole', 'name': 'casserole'}, {'frequency': 'r', 'id': 227, 'synset': 'cassette.n.01', 'synonyms': ['cassette'], 'def': 'a container that holds a magnetic tape used for recording or playing sound or video', 'name': 'cassette'}, {'frequency': 'c', 'id': 228, 'synset': 'cast.n.05', 'synonyms': ['cast', 'plaster_cast', 'plaster_bandage'], 'def': 'bandage consisting of a firm covering that immobilizes broken bones while they heal', 'name': 'cast'}, {'frequency': 'f', 'id': 229, 'synset': 'cat.n.01', 'synonyms': ['cat'], 'def': 'a domestic house cat', 'name': 'cat'}, {'frequency': 'c', 'id': 230, 'synset': 'cauliflower.n.02', 'synonyms': ['cauliflower'], 'def': 'edible compact head of white undeveloped flowers', 'name': 'cauliflower'}, {'frequency': 'r', 'id': 231, 'synset': 'caviar.n.01', 'synonyms': ['caviar', 'caviare'], 'def': "salted roe of sturgeon or other large fish; usually served as an hors d'oeuvre", 'name': 'caviar'}, {'frequency': 'c', 'id': 232, 'synset': 'cayenne.n.02', 'synonyms': ['cayenne_(spice)', 'cayenne_pepper_(spice)', 'red_pepper_(spice)'], 'def': 'ground pods and seeds of pungent red peppers of the genus Capsicum', 'name': 'cayenne_(spice)'}, {'frequency': 'c', 'id': 233, 'synset': 'cd_player.n.01', 'synonyms': ['CD_player'], 'def': 'electronic equipment for playing compact discs (CDs)', 'name': 'CD_player'}, {'frequency': 'c', 'id': 234, 'synset': 'celery.n.01', 'synonyms': ['celery'], 'def': 'widely cultivated herb with aromatic leaf stalks that are eaten raw or cooked', 'name': 'celery'}, {'frequency': 'f', 'id': 235, 'synset': 'cellular_telephone.n.01', 'synonyms': ['cellular_telephone', 'cellular_phone', 'cellphone', 'mobile_phone', 'smart_phone'], 'def': 'a hand-held mobile telephone', 'name': 
'cellular_telephone'}, {'frequency': 'r', 'id': 236, 'synset': 'chain_mail.n.01', 'synonyms': ['chain_mail', 'ring_mail', 'chain_armor', 'chain_armour', 'ring_armor', 'ring_armour'], 'def': '(Middle Ages) flexible armor made of interlinked metal rings', 'name': 'chain_mail'}, {'frequency': 'f', 'id': 237, 'synset': 'chair.n.01', 'synonyms': ['chair'], 'def': 'a seat for one person, with a support for the back', 'name': 'chair'}, {'frequency': 'r', 'id': 238, 'synset': 'chaise_longue.n.01', 'synonyms': ['chaise_longue', 'chaise', 'daybed'], 'def': 'a long chair; for reclining', 'name': 'chaise_longue'}, {'frequency': 'r', 'id': 239, 'synset': 'champagne.n.01', 'synonyms': ['champagne'], 'def': 'a white sparkling wine produced in Champagne or resembling that produced there', 'name': 'champagne'}, {'frequency': 'f', 'id': 240, 'synset': 'chandelier.n.01', 'synonyms': ['chandelier'], 'def': 'branched lighting fixture; often ornate; hangs from the ceiling', 'name': 'chandelier'}, {'frequency': 'r', 'id': 241, 'synset': 'chap.n.04', 'synonyms': ['chap'], 'def': 'leather leggings without a seat; worn over trousers by cowboys to protect their legs', 'name': 'chap'}, {'frequency': 'r', 'id': 242, 'synset': 'checkbook.n.01', 'synonyms': ['checkbook', 'chequebook'], 'def': 'a book issued to holders of checking accounts', 'name': 'checkbook'}, {'frequency': 'r', 'id': 243, 'synset': 'checkerboard.n.01', 'synonyms': ['checkerboard'], 'def': 'a board having 64 squares of two alternating colors', 'name': 'checkerboard'}, {'frequency': 'c', 'id': 244, 'synset': 'cherry.n.03', 'synonyms': ['cherry'], 'def': 'a red fruit with a single hard stone', 'name': 'cherry'}, {'frequency': 'r', 'id': 245, 'synset': 'chessboard.n.01', 'synonyms': ['chessboard'], 'def': 'a checkerboard used to play chess', 'name': 'chessboard'}, {'frequency': 'r', 'id': 246, 'synset': 'chest_of_drawers.n.01', 'synonyms': ['chest_of_drawers_(furniture)', 'bureau_(furniture)', 'chest_(furniture)'], 'def': 'furniture with drawers for keeping clothes', 'name': 'chest_of_drawers_(furniture)'}, {'frequency': 'c', 'id': 247, 'synset': 'chicken.n.02', 'synonyms': ['chicken_(animal)'], 'def': 'a domestic fowl bred for flesh or eggs', 'name': 'chicken_(animal)'}, {'frequency': 'c', 'id': 248, 'synset': 'chicken_wire.n.01', 'synonyms': ['chicken_wire'], 'def': 'a galvanized wire network with a hexagonal mesh; used to build fences', 'name': 'chicken_wire'}, {'frequency': 'r', 'id': 249, 'synset': 'chickpea.n.01', 'synonyms': ['chickpea', 'garbanzo'], 'def': 'the seed of the chickpea plant; usually dried', 'name': 'chickpea'}, {'frequency': 'r', 'id': 250, 'synset': 'chihuahua.n.03', 'synonyms': ['Chihuahua'], 'def': 'an old breed of tiny short-haired dog with protruding eyes from Mexico', 'name': 'Chihuahua'}, {'frequency': 'r', 'id': 251, 'synset': 'chili.n.02', 'synonyms': ['chili_(vegetable)', 'chili_pepper_(vegetable)', 'chilli_(vegetable)', 'chilly_(vegetable)', 'chile_(vegetable)'], 'def': 'very hot and finely tapering pepper of special pungency', 'name': 'chili_(vegetable)'}, {'frequency': 'r', 'id': 252, 'synset': 'chime.n.01', 'synonyms': ['chime', 'gong'], 'def': 'an instrument consisting of a set of bells that are struck with a hammer', 'name': 'chime'}, {'frequency': 'r', 'id': 253, 'synset': 'chinaware.n.01', 'synonyms': ['chinaware'], 'def': 'dishware made of high quality porcelain', 'name': 'chinaware'}, {'frequency': 'c', 'id': 254, 'synset': 'chip.n.04', 'synonyms': ['crisp_(potato_chip)', 'potato_chip'], 'def': 'a thin crisp 
slice of potato fried in deep fat', 'name': 'crisp_(potato_chip)'}, {'frequency': 'r', 'id': 255, 'synset': 'chip.n.06', 'synonyms': ['poker_chip'], 'def': 'a small disk-shaped counter used to represent money when gambling', 'name': 'poker_chip'}, {'frequency': 'c', 'id': 256, 'synset': 'chocolate_bar.n.01', 'synonyms': ['chocolate_bar'], 'def': 'a bar of chocolate candy', 'name': 'chocolate_bar'}, {'frequency': 'c', 'id': 257, 'synset': 'chocolate_cake.n.01', 'synonyms': ['chocolate_cake'], 'def': 'cake containing chocolate', 'name': 'chocolate_cake'}, {'frequency': 'r', 'id': 258, 'synset': 'chocolate_milk.n.01', 'synonyms': ['chocolate_milk'], 'def': 'milk flavored with chocolate syrup', 'name': 'chocolate_milk'}, {'frequency': 'r', 'id': 259, 'synset': 'chocolate_mousse.n.01', 'synonyms': ['chocolate_mousse'], 'def': 'dessert mousse made with chocolate', 'name': 'chocolate_mousse'}, {'frequency': 'f', 'id': 260, 'synset': 'choker.n.03', 'synonyms': ['choker', 'collar', 'neckband'], 'def': 'necklace that fits tightly around the neck', 'name': 'choker'}, {'frequency': 'f', 'id': 261, 'synset': 'chopping_board.n.01', 'synonyms': ['chopping_board', 'cutting_board', 'chopping_block'], 'def': 'a wooden board where meats or vegetables can be cut', 'name': 'chopping_board'}, {'frequency': 'c', 'id': 262, 'synset': 'chopstick.n.01', 'synonyms': ['chopstick'], 'def': 'one of a pair of slender sticks used as oriental tableware to eat food with', 'name': 'chopstick'}, {'frequency': 'f', 'id': 263, 'synset': 'christmas_tree.n.05', 'synonyms': ['Christmas_tree'], 'def': 'an ornamented evergreen used as a Christmas decoration', 'name': 'Christmas_tree'}, {'frequency': 'c', 'id': 264, 'synset': 'chute.n.02', 'synonyms': ['slide'], 'def': 'sloping channel through which things can descend', 'name': 'slide'}, {'frequency': 'r', 'id': 265, 'synset': 'cider.n.01', 'synonyms': ['cider', 'cyder'], 'def': 'a beverage made from juice pressed from apples', 'name': 'cider'}, {'frequency': 'r', 'id': 266, 'synset': 'cigar_box.n.01', 'synonyms': ['cigar_box'], 'def': 'a box for holding cigars', 'name': 'cigar_box'}, {'frequency': 'c', 'id': 267, 'synset': 'cigarette.n.01', 'synonyms': ['cigarette'], 'def': 'finely ground tobacco wrapped in paper; for smoking', 'name': 'cigarette'}, {'frequency': 'c', 'id': 268, 'synset': 'cigarette_case.n.01', 'synonyms': ['cigarette_case', 'cigarette_pack'], 'def': 'a small flat case for holding cigarettes', 'name': 'cigarette_case'}, {'frequency': 'f', 'id': 269, 'synset': 'cistern.n.02', 'synonyms': ['cistern', 'water_tank'], 'def': 'a tank that holds the water used to flush a toilet', 'name': 'cistern'}, {'frequency': 'r', 'id': 270, 'synset': 'clarinet.n.01', 'synonyms': ['clarinet'], 'def': 'a single-reed instrument with a straight tube', 'name': 'clarinet'}, {'frequency': 'r', 'id': 271, 'synset': 'clasp.n.01', 'synonyms': ['clasp'], 'def': 'a fastener (as a buckle or hook) that is used to hold two things together', 'name': 'clasp'}, {'frequency': 'c', 'id': 272, 'synset': 'cleansing_agent.n.01', 'synonyms': ['cleansing_agent', 'cleanser', 'cleaner'], 'def': 'a preparation used in cleaning something', 'name': 'cleansing_agent'}, {'frequency': 'r', 'id': 273, 'synset': 'clementine.n.01', 'synonyms': ['clementine'], 'def': 'a variety of mandarin orange', 'name': 'clementine'}, {'frequency': 'c', 'id': 274, 'synset': 'clip.n.03', 'synonyms': ['clip'], 'def': 'any of various small fasteners used to hold loose articles together', 'name': 'clip'}, {'frequency': 'c', 'id': 275, 
'synset': 'clipboard.n.01', 'synonyms': ['clipboard'], 'def': 'a small writing board with a clip at the top for holding papers', 'name': 'clipboard'}, {'frequency': 'f', 'id': 276, 'synset': 'clock.n.01', 'synonyms': ['clock', 'timepiece', 'timekeeper'], 'def': 'a timepiece that shows the time of day', 'name': 'clock'}, {'frequency': 'f', 'id': 277, 'synset': 'clock_tower.n.01', 'synonyms': ['clock_tower'], 'def': 'a tower with a large clock visible high up on an outside face', 'name': 'clock_tower'}, {'frequency': 'c', 'id': 278, 'synset': 'clothes_hamper.n.01', 'synonyms': ['clothes_hamper', 'laundry_basket', 'clothes_basket'], 'def': 'a hamper that holds dirty clothes to be washed or wet clothes to be dried', 'name': 'clothes_hamper'}, {'frequency': 'c', 'id': 279, 'synset': 'clothespin.n.01', 'synonyms': ['clothespin', 'clothes_peg'], 'def': 'wood or plastic fastener; for holding clothes on a clothesline', 'name': 'clothespin'}, {'frequency': 'r', 'id': 280, 'synset': 'clutch_bag.n.01', 'synonyms': ['clutch_bag'], 'def': "a woman's strapless purse that is carried in the hand", 'name': 'clutch_bag'}, {'frequency': 'f', 'id': 281, 'synset': 'coaster.n.03', 'synonyms': ['coaster'], 'def': 'a covering (plate or mat) that protects the surface of a table', 'name': 'coaster'}, {'frequency': 'f', 'id': 282, 'synset': 'coat.n.01', 'synonyms': ['coat'], 'def': 'an outer garment that has sleeves and covers the body from shoulder down', 'name': 'coat'}, {'frequency': 'c', 'id': 283, 'synset': 'coat_hanger.n.01', 'synonyms': ['coat_hanger', 'clothes_hanger', 'dress_hanger'], 'def': "a hanger that is shaped like a person's shoulders", 'name': 'coat_hanger'}, {'frequency': 'r', 'id': 284, 'synset': 'coatrack.n.01', 'synonyms': ['coatrack', 'hatrack'], 'def': 'a rack with hooks for temporarily holding coats and hats', 'name': 'coatrack'}, {'frequency': 'c', 'id': 285, 'synset': 'cock.n.04', 'synonyms': ['cock', 'rooster'], 'def': 'adult male chicken', 'name': 'cock'}, {'frequency': 'c', 'id': 286, 'synset': 'coconut.n.02', 'synonyms': ['coconut', 'cocoanut'], 'def': 'large hard-shelled brown oval nut with a fibrous husk', 'name': 'coconut'}, {'frequency': 'r', 'id': 287, 'synset': 'coffee_filter.n.01', 'synonyms': ['coffee_filter'], 'def': 'filter (usually of paper) that passes the coffee and retains the coffee grounds', 'name': 'coffee_filter'}, {'frequency': 'f', 'id': 288, 'synset': 'coffee_maker.n.01', 'synonyms': ['coffee_maker', 'coffee_machine'], 'def': 'a kitchen appliance for brewing coffee automatically', 'name': 'coffee_maker'}, {'frequency': 'f', 'id': 289, 'synset': 'coffee_table.n.01', 'synonyms': ['coffee_table', 'cocktail_table'], 'def': 'low table where magazines can be placed and coffee or cocktails are served', 'name': 'coffee_table'}, {'frequency': 'c', 'id': 290, 'synset': 'coffeepot.n.01', 'synonyms': ['coffeepot'], 'def': 'tall pot in which coffee is brewed', 'name': 'coffeepot'}, {'frequency': 'r', 'id': 291, 'synset': 'coil.n.05', 'synonyms': ['coil'], 'def': 'tubing that is wound in a spiral', 'name': 'coil'}, {'frequency': 'c', 'id': 292, 'synset': 'coin.n.01', 'synonyms': ['coin'], 'def': 'a flat metal piece (usually a disc) used as money', 'name': 'coin'}, {'frequency': 'r', 'id': 293, 'synset': 'colander.n.01', 'synonyms': ['colander', 'cullender'], 'def': 'bowl-shaped strainer; used to wash or drain foods', 'name': 'colander'}, {'frequency': 'c', 'id': 294, 'synset': 'coleslaw.n.01', 'synonyms': ['coleslaw', 'slaw'], 'def': 'basically shredded cabbage', 'name': 
'coleslaw'}, {'frequency': 'r', 'id': 295, 'synset': 'coloring_material.n.01', 'synonyms': ['coloring_material', 'colouring_material'], 'def': 'any material used for its color', 'name': 'coloring_material'}, {'frequency': 'r', 'id': 296, 'synset': 'combination_lock.n.01', 'synonyms': ['combination_lock'], 'def': 'lock that can be opened only by turning dials in a special sequence', 'name': 'combination_lock'}, {'frequency': 'c', 'id': 297, 'synset': 'comforter.n.04', 'synonyms': ['pacifier', 'teething_ring'], 'def': 'device used for an infant to suck or bite on', 'name': 'pacifier'}, {'frequency': 'r', 'id': 298, 'synset': 'comic_book.n.01', 'synonyms': ['comic_book'], 'def': 'a magazine devoted to comic strips', 'name': 'comic_book'}, {'frequency': 'f', 'id': 299, 'synset': 'computer_keyboard.n.01', 'synonyms': ['computer_keyboard', 'keyboard_(computer)'], 'def': 'a keyboard that is a data input device for computers', 'name': 'computer_keyboard'}, {'frequency': 'r', 'id': 300, 'synset': 'concrete_mixer.n.01', 'synonyms': ['concrete_mixer', 'cement_mixer'], 'def': 'a machine with a large revolving drum in which cement/concrete is mixed', 'name': 'concrete_mixer'}, {'frequency': 'f', 'id': 301, 'synset': 'cone.n.01', 'synonyms': ['cone', 'traffic_cone'], 'def': 'a cone-shaped object used to direct traffic', 'name': 'cone'}, {'frequency': 'f', 'id': 302, 'synset': 'control.n.09', 'synonyms': ['control', 'controller'], 'def': 'a mechanism that controls the operation of a machine', 'name': 'control'}, {'frequency': 'r', 'id': 303, 'synset': 'convertible.n.01', 'synonyms': ['convertible_(automobile)'], 'def': 'a car that has top that can be folded or removed', 'name': 'convertible_(automobile)'}, {'frequency': 'r', 'id': 304, 'synset': 'convertible.n.03', 'synonyms': ['sofa_bed'], 'def': 'a sofa that can be converted into a bed', 'name': 'sofa_bed'}, {'frequency': 'c', 'id': 305, 'synset': 'cookie.n.01', 'synonyms': ['cookie', 'cooky', 'biscuit_(cookie)'], 'def': "any of various small flat sweet cakes (`biscuit' is the British term)", 'name': 'cookie'}, {'frequency': 'r', 'id': 306, 'synset': 'cookie_jar.n.01', 'synonyms': ['cookie_jar', 'cooky_jar'], 'def': 'a jar in which cookies are kept (and sometimes money is hidden)', 'name': 'cookie_jar'}, {'frequency': 'r', 'id': 307, 'synset': 'cooking_utensil.n.01', 'synonyms': ['cooking_utensil'], 'def': 'a kitchen utensil made of material that does not melt easily; used for cooking', 'name': 'cooking_utensil'}, {'frequency': 'f', 'id': 308, 'synset': 'cooler.n.01', 'synonyms': ['cooler_(for_food)', 'ice_chest'], 'def': 'an insulated box for storing food often with ice', 'name': 'cooler_(for_food)'}, {'frequency': 'c', 'id': 309, 'synset': 'cork.n.04', 'synonyms': ['cork_(bottle_plug)', 'bottle_cork'], 'def': 'the plug in the mouth of a bottle (especially a wine bottle)', 'name': 'cork_(bottle_plug)'}, {'frequency': 'r', 'id': 310, 'synset': 'corkboard.n.01', 'synonyms': ['corkboard'], 'def': 'a sheet consisting of cork granules', 'name': 'corkboard'}, {'frequency': 'r', 'id': 311, 'synset': 'corkscrew.n.01', 'synonyms': ['corkscrew', 'bottle_screw'], 'def': 'a bottle opener that pulls corks', 'name': 'corkscrew'}, {'frequency': 'c', 'id': 312, 'synset': 'corn.n.03', 'synonyms': ['edible_corn', 'corn', 'maize'], 'def': 'ears of corn that can be prepared and served for human food', 'name': 'edible_corn'}, {'frequency': 'r', 'id': 313, 'synset': 'cornbread.n.01', 'synonyms': ['cornbread'], 'def': 'bread made primarily of cornmeal', 'name': 
'cornbread'}, {'frequency': 'c', 'id': 314, 'synset': 'cornet.n.01', 'synonyms': ['cornet', 'horn', 'trumpet'], 'def': 'a brass musical instrument with a narrow tube and a flared bell and many valves', 'name': 'cornet'}, {'frequency': 'c', 'id': 315, 'synset': 'cornice.n.01', 'synonyms': ['cornice', 'valance', 'valance_board', 'pelmet'], 'def': 'a decorative framework to conceal curtain fixtures at the top of a window casing', 'name': 'cornice'}, {'frequency': 'r', 'id': 316, 'synset': 'cornmeal.n.01', 'synonyms': ['cornmeal'], 'def': 'coarsely ground corn', 'name': 'cornmeal'}, {'frequency': 'r', 'id': 317, 'synset': 'corset.n.01', 'synonyms': ['corset', 'girdle'], 'def': "a woman's close-fitting foundation garment", 'name': 'corset'}, {'frequency': 'r', 'id': 318, 'synset': 'cos.n.02', 'synonyms': ['romaine_lettuce'], 'def': 'lettuce with long dark-green leaves in a loosely packed elongated head', 'name': 'romaine_lettuce'}, {'frequency': 'c', 'id': 319, 'synset': 'costume.n.04', 'synonyms': ['costume'], 'def': 'the attire characteristic of a country or a time or a social class', 'name': 'costume'}, {'frequency': 'r', 'id': 320, 'synset': 'cougar.n.01', 'synonyms': ['cougar', 'puma', 'catamount', 'mountain_lion', 'panther'], 'def': 'large American feline resembling a lion', 'name': 'cougar'}, {'frequency': 'r', 'id': 321, 'synset': 'coverall.n.01', 'synonyms': ['coverall'], 'def': 'a loose-fitting protective garment that is worn over other clothing', 'name': 'coverall'}, {'frequency': 'r', 'id': 322, 'synset': 'cowbell.n.01', 'synonyms': ['cowbell'], 'def': 'a bell hung around the neck of cow so that the cow can be easily located', 'name': 'cowbell'}, {'frequency': 'f', 'id': 323, 'synset': 'cowboy_hat.n.01', 'synonyms': ['cowboy_hat', 'ten-gallon_hat'], 'def': 'a hat with a wide brim and a soft crown; worn by American ranch hands', 'name': 'cowboy_hat'}, {'frequency': 'r', 'id': 324, 'synset': 'crab.n.01', 'synonyms': ['crab_(animal)'], 'def': 'decapod having eyes on short stalks and a broad flattened shell and pincers', 'name': 'crab_(animal)'}, {'frequency': 'c', 'id': 325, 'synset': 'cracker.n.01', 'synonyms': ['cracker'], 'def': 'a thin crisp wafer', 'name': 'cracker'}, {'frequency': 'r', 'id': 326, 'synset': 'crape.n.01', 'synonyms': ['crape', 'crepe', 'French_pancake'], 'def': 'small very thin pancake', 'name': 'crape'}, {'frequency': 'f', 'id': 327, 'synset': 'crate.n.01', 'synonyms': ['crate'], 'def': 'a rugged box (usually made of wood); used for shipping', 'name': 'crate'}, {'frequency': 'r', 'id': 328, 'synset': 'crayon.n.01', 'synonyms': ['crayon', 'wax_crayon'], 'def': 'writing or drawing implement made of a colored stick of composition wax', 'name': 'crayon'}, {'frequency': 'r', 'id': 329, 'synset': 'cream_pitcher.n.01', 'synonyms': ['cream_pitcher'], 'def': 'a small pitcher for serving cream', 'name': 'cream_pitcher'}, {'frequency': 'r', 'id': 330, 'synset': 'credit_card.n.01', 'synonyms': ['credit_card', 'charge_card', 'debit_card'], 'def': 'a card, usually plastic, used to pay for goods and services', 'name': 'credit_card'}, {'frequency': 'c', 'id': 331, 'synset': 'crescent_roll.n.01', 'synonyms': ['crescent_roll', 'croissant'], 'def': 'very rich flaky crescent-shaped roll', 'name': 'crescent_roll'}, {'frequency': 'c', 'id': 332, 'synset': 'crib.n.01', 'synonyms': ['crib', 'cot'], 'def': 'baby bed with high sides made of slats', 'name': 'crib'}, {'frequency': 'c', 'id': 333, 'synset': 'crock.n.03', 'synonyms': ['crock_pot', 'earthenware_jar'], 'def': 'an earthen jar 
(made of baked clay)', 'name': 'crock_pot'}, {'frequency': 'f', 'id': 334, 'synset': 'crossbar.n.01', 'synonyms': ['crossbar'], 'def': 'a horizontal bar that goes across something', 'name': 'crossbar'}, {'frequency': 'r', 'id': 335, 'synset': 'crouton.n.01', 'synonyms': ['crouton'], 'def': 'a small piece of toasted or fried bread; served in soup or salads', 'name': 'crouton'}, {'frequency': 'r', 'id': 336, 'synset': 'crow.n.01', 'synonyms': ['crow'], 'def': 'black birds having a raucous call', 'name': 'crow'}, {'frequency': 'c', 'id': 337, 'synset': 'crown.n.04', 'synonyms': ['crown'], 'def': 'an ornamental jeweled headdress signifying sovereignty', 'name': 'crown'}, {'frequency': 'c', 'id': 338, 'synset': 'crucifix.n.01', 'synonyms': ['crucifix'], 'def': 'representation of the cross on which Jesus died', 'name': 'crucifix'}, {'frequency': 'c', 'id': 339, 'synset': 'cruise_ship.n.01', 'synonyms': ['cruise_ship', 'cruise_liner'], 'def': 'a passenger ship used commercially for pleasure cruises', 'name': 'cruise_ship'}, {'frequency': 'c', 'id': 340, 'synset': 'cruiser.n.01', 'synonyms': ['police_cruiser', 'patrol_car', 'police_car', 'squad_car'], 'def': 'a car in which policemen cruise the streets', 'name': 'police_cruiser'}, {'frequency': 'c', 'id': 341, 'synset': 'crumb.n.03', 'synonyms': ['crumb'], 'def': 'small piece of e.g. bread or cake', 'name': 'crumb'}, {'frequency': 'r', 'id': 342, 'synset': 'crutch.n.01', 'synonyms': ['crutch'], 'def': 'a wooden or metal staff that fits under the armpit and reaches to the ground', 'name': 'crutch'}, {'frequency': 'c', 'id': 343, 'synset': 'cub.n.03', 'synonyms': ['cub_(animal)'], 'def': 'the young of certain carnivorous mammals such as the bear or wolf or lion', 'name': 'cub_(animal)'}, {'frequency': 'r', 'id': 344, 'synset': 'cube.n.05', 'synonyms': ['cube', 'square_block'], 'def': 'a block in the (approximate) shape of a cube', 'name': 'cube'}, {'frequency': 'f', 'id': 345, 'synset': 'cucumber.n.02', 'synonyms': ['cucumber', 'cuke'], 'def': 'cylindrical green fruit with thin green rind and white flesh eaten as a vegetable', 'name': 'cucumber'}, {'frequency': 'c', 'id': 346, 'synset': 'cufflink.n.01', 'synonyms': ['cufflink'], 'def': 'jewelry consisting of linked buttons used to fasten the cuffs of a shirt', 'name': 'cufflink'}, {'frequency': 'f', 'id': 347, 'synset': 'cup.n.01', 'synonyms': ['cup'], 'def': 'a small open container usually used for drinking; usually has a handle', 'name': 'cup'}, {'frequency': 'c', 'id': 348, 'synset': 'cup.n.08', 'synonyms': ['trophy_cup'], 'def': 'a metal vessel with handles that is awarded as a trophy to a competition winner', 'name': 'trophy_cup'}, {'frequency': 'c', 'id': 349, 'synset': 'cupcake.n.01', 'synonyms': ['cupcake'], 'def': 'small cake baked in a muffin tin', 'name': 'cupcake'}, {'frequency': 'r', 'id': 350, 'synset': 'curler.n.01', 'synonyms': ['hair_curler', 'hair_roller', 'hair_crimper'], 'def': 'a cylindrical tube around which the hair is wound to curl it', 'name': 'hair_curler'}, {'frequency': 'r', 'id': 351, 'synset': 'curling_iron.n.01', 'synonyms': ['curling_iron'], 'def': 'a cylindrical home appliance that heats hair that has been curled around it', 'name': 'curling_iron'}, {'frequency': 'f', 'id': 352, 'synset': 'curtain.n.01', 'synonyms': ['curtain', 'drapery'], 'def': 'hanging cloth used as a blind (especially for a window)', 'name': 'curtain'}, {'frequency': 'f', 'id': 353, 'synset': 'cushion.n.03', 'synonyms': ['cushion'], 'def': 'a soft bag filled with air or padding such as feathers 
or foam rubber', 'name': 'cushion'}, {'frequency': 'r', 'id': 354, 'synset': 'custard.n.01', 'synonyms': ['custard'], 'def': 'sweetened mixture of milk and eggs baked or boiled or frozen', 'name': 'custard'}, {'frequency': 'c', 'id': 355, 'synset': 'cutter.n.06', 'synonyms': ['cutting_tool'], 'def': 'a cutting implement; a tool for cutting', 'name': 'cutting_tool'}, {'frequency': 'r', 'id': 356, 'synset': 'cylinder.n.04', 'synonyms': ['cylinder'], 'def': 'a cylindrical container', 'name': 'cylinder'}, {'frequency': 'r', 'id': 357, 'synset': 'cymbal.n.01', 'synonyms': ['cymbal'], 'def': 'a percussion instrument consisting of a concave brass disk', 'name': 'cymbal'}, {'frequency': 'r', 'id': 358, 'synset': 'dachshund.n.01', 'synonyms': ['dachshund', 'dachsie', 'badger_dog'], 'def': 'small long-bodied short-legged breed of dog having a short sleek coat and long drooping ears', 'name': 'dachshund'}, {'frequency': 'r', 'id': 359, 'synset': 'dagger.n.01', 'synonyms': ['dagger'], 'def': 'a short knife with a pointed blade used for piercing or stabbing', 'name': 'dagger'}, {'frequency': 'r', 'id': 360, 'synset': 'dartboard.n.01', 'synonyms': ['dartboard'], 'def': 'a circular board of wood or cork used as the target in the game of darts', 'name': 'dartboard'}, {'frequency': 'r', 'id': 361, 'synset': 'date.n.08', 'synonyms': ['date_(fruit)'], 'def': 'sweet edible fruit of the date palm with a single long woody seed', 'name': 'date_(fruit)'}, {'frequency': 'f', 'id': 362, 'synset': 'deck_chair.n.01', 'synonyms': ['deck_chair', 'beach_chair'], 'def': 'a folding chair for use outdoors; a wooden frame supports a length of canvas', 'name': 'deck_chair'}, {'frequency': 'c', 'id': 363, 'synset': 'deer.n.01', 'synonyms': ['deer', 'cervid'], 'def': "distinguished from Bovidae by the male's having solid deciduous antlers", 'name': 'deer'}, {'frequency': 'c', 'id': 364, 'synset': 'dental_floss.n.01', 'synonyms': ['dental_floss', 'floss'], 'def': 'a soft thread for cleaning the spaces between the teeth', 'name': 'dental_floss'}, {'frequency': 'f', 'id': 365, 'synset': 'desk.n.01', 'synonyms': ['desk'], 'def': 'a piece of furniture with a writing surface and usually drawers or other compartments', 'name': 'desk'}, {'frequency': 'r', 'id': 366, 'synset': 'detergent.n.01', 'synonyms': ['detergent'], 'def': 'a surface-active chemical widely used in industry and laundering', 'name': 'detergent'}, {'frequency': 'c', 'id': 367, 'synset': 'diaper.n.01', 'synonyms': ['diaper'], 'def': 'garment consisting of a folded cloth drawn up between the legs and fastened at the waist', 'name': 'diaper'}, {'frequency': 'r', 'id': 368, 'synset': 'diary.n.01', 'synonyms': ['diary', 'journal'], 'def': 'a daily written record of (usually personal) experiences and observations', 'name': 'diary'}, {'frequency': 'r', 'id': 369, 'synset': 'die.n.01', 'synonyms': ['die', 'dice'], 'def': 'a small cube with 1 to 6 spots on the six faces; used in gambling', 'name': 'die'}, {'frequency': 'r', 'id': 370, 'synset': 'dinghy.n.01', 'synonyms': ['dinghy', 'dory', 'rowboat'], 'def': 'a small boat of shallow draft with seats and oars with which it is propelled', 'name': 'dinghy'}, {'frequency': 'f', 'id': 371, 'synset': 'dining_table.n.01', 'synonyms': ['dining_table'], 'def': 'a table at which meals are served', 'name': 'dining_table'}, {'frequency': 'r', 'id': 372, 'synset': 'dinner_jacket.n.01', 'synonyms': ['tux', 'tuxedo'], 'def': 'semiformal evening dress for men', 'name': 'tux'}, {'frequency': 'c', 'id': 373, 'synset': 'dish.n.01', 'synonyms': 
['dish'], 'def': 'a piece of dishware normally used as a container for holding or serving food', 'name': 'dish'}, {'frequency': 'c', 'id': 374, 'synset': 'dish.n.05', 'synonyms': ['dish_antenna'], 'def': 'directional antenna consisting of a parabolic reflector', 'name': 'dish_antenna'}, {'frequency': 'c', 'id': 375, 'synset': 'dishrag.n.01', 'synonyms': ['dishrag', 'dishcloth'], 'def': 'a cloth for washing dishes', 'name': 'dishrag'}, {'frequency': 'c', 'id': 376, 'synset': 'dishtowel.n.01', 'synonyms': ['dishtowel', 'tea_towel'], 'def': 'a towel for drying dishes', 'name': 'dishtowel'}, {'frequency': 'f', 'id': 377, 'synset': 'dishwasher.n.01', 'synonyms': ['dishwasher', 'dishwashing_machine'], 'def': 'a machine for washing dishes', 'name': 'dishwasher'}, {'frequency': 'r', 'id': 378, 'synset': 'dishwasher_detergent.n.01', 'synonyms': ['dishwasher_detergent', 'dishwashing_detergent', 'dishwashing_liquid'], 'def': 'a low-sudsing detergent designed for use in dishwashers', 'name': 'dishwasher_detergent'}, {'frequency': 'r', 'id': 379, 'synset': 'diskette.n.01', 'synonyms': ['diskette', 'floppy', 'floppy_disk'], 'def': 'a small plastic magnetic disk enclosed in a stiff envelope used to store data', 'name': 'diskette'}, {'frequency': 'c', 'id': 380, 'synset': 'dispenser.n.01', 'synonyms': ['dispenser'], 'def': 'a container so designed that the contents can be used in prescribed amounts', 'name': 'dispenser'}, {'frequency': 'c', 'id': 381, 'synset': 'dixie_cup.n.01', 'synonyms': ['Dixie_cup', 'paper_cup'], 'def': 'a disposable cup made of paper; for holding drinks', 'name': 'Dixie_cup'}, {'frequency': 'f', 'id': 382, 'synset': 'dog.n.01', 'synonyms': ['dog'], 'def': 'a common domesticated dog', 'name': 'dog'}, {'frequency': 'f', 'id': 383, 'synset': 'dog_collar.n.01', 'synonyms': ['dog_collar'], 'def': 'a collar for a dog', 'name': 'dog_collar'}, {'frequency': 'c', 'id': 384, 'synset': 'doll.n.01', 'synonyms': ['doll'], 'def': 'a toy replica of a HUMAN (NOT AN ANIMAL)', 'name': 'doll'}, {'frequency': 'r', 'id': 385, 'synset': 'dollar.n.02', 'synonyms': ['dollar', 'dollar_bill', 'one_dollar_bill'], 'def': 'a piece of paper money worth one dollar', 'name': 'dollar'}, {'frequency': 'r', 'id': 386, 'synset': 'dolphin.n.02', 'synonyms': ['dolphin'], 'def': 'any of various small toothed whales with a beaklike snout; larger than porpoises', 'name': 'dolphin'}, {'frequency': 'c', 'id': 387, 'synset': 'domestic_ass.n.01', 'synonyms': ['domestic_ass', 'donkey'], 'def': 'domestic beast of burden descended from the African wild ass; patient but stubborn', 'name': 'domestic_ass'}, {'frequency': 'r', 'id': 388, 'synset': 'domino.n.03', 'synonyms': ['eye_mask'], 'def': 'a mask covering the upper part of the face but with holes for the eyes', 'name': 'eye_mask'}, {'frequency': 'r', 'id': 389, 'synset': 'doorbell.n.01', 'synonyms': ['doorbell', 'buzzer'], 'def': 'a button at an outer door that gives a ringing or buzzing signal when pushed', 'name': 'doorbell'}, {'frequency': 'f', 'id': 390, 'synset': 'doorknob.n.01', 'synonyms': ['doorknob', 'doorhandle'], 'def': "a knob used to open a door (often called `doorhandle' in Great Britain)", 'name': 'doorknob'}, {'frequency': 'c', 'id': 391, 'synset': 'doormat.n.02', 'synonyms': ['doormat', 'welcome_mat'], 'def': 'a mat placed outside an exterior door for wiping the shoes before entering', 'name': 'doormat'}, {'frequency': 'f', 'id': 392, 'synset': 'doughnut.n.02', 'synonyms': ['doughnut', 'donut'], 'def': 'a small ring-shaped friedcake', 'name': 'doughnut'}, 
{'frequency': 'r', 'id': 393, 'synset': 'dove.n.01', 'synonyms': ['dove'], 'def': 'any of numerous small pigeons', 'name': 'dove'}, {'frequency': 'r', 'id': 394, 'synset': 'dragonfly.n.01', 'synonyms': ['dragonfly'], 'def': 'slender-bodied non-stinging insect having iridescent wings that are outspread at rest', 'name': 'dragonfly'}, {'frequency': 'f', 'id': 395, 'synset': 'drawer.n.01', 'synonyms': ['drawer'], 'def': 'a boxlike container in a piece of furniture; made so as to slide in and out', 'name': 'drawer'}, {'frequency': 'c', 'id': 396, 'synset': 'drawers.n.01', 'synonyms': ['underdrawers', 'boxers', 'boxershorts'], 'def': 'underpants worn by men', 'name': 'underdrawers'}, {'frequency': 'f', 'id': 397, 'synset': 'dress.n.01', 'synonyms': ['dress', 'frock'], 'def': 'a one-piece garment for a woman; has skirt and bodice', 'name': 'dress'}, {'frequency': 'c', 'id': 398, 'synset': 'dress_hat.n.01', 'synonyms': ['dress_hat', 'high_hat', 'opera_hat', 'silk_hat', 'top_hat'], 'def': "a man's hat with a tall crown; usually covered with silk or with beaver fur", 'name': 'dress_hat'}, {'frequency': 'c', 'id': 399, 'synset': 'dress_suit.n.01', 'synonyms': ['dress_suit'], 'def': 'formalwear consisting of full evening dress for men', 'name': 'dress_suit'}, {'frequency': 'c', 'id': 400, 'synset': 'dresser.n.05', 'synonyms': ['dresser'], 'def': 'a cabinet with shelves', 'name': 'dresser'}, {'frequency': 'c', 'id': 401, 'synset': 'drill.n.01', 'synonyms': ['drill'], 'def': 'a tool with a sharp rotating point for making holes in hard materials', 'name': 'drill'}, {'frequency': 'r', 'id': 402, 'synset': 'drinking_fountain.n.01', 'synonyms': ['drinking_fountain'], 'def': 'a public fountain to provide a jet of drinking water', 'name': 'drinking_fountain'}, {'frequency': 'r', 'id': 403, 'synset': 'drone.n.04', 'synonyms': ['drone'], 'def': 'an aircraft without a pilot that is operated by remote control', 'name': 'drone'}, {'frequency': 'r', 'id': 404, 'synset': 'dropper.n.01', 'synonyms': ['dropper', 'eye_dropper'], 'def': 'pipet consisting of a small tube with a vacuum bulb at one end for drawing liquid in and releasing it a drop at a time', 'name': 'dropper'}, {'frequency': 'c', 'id': 405, 'synset': 'drum.n.01', 'synonyms': ['drum_(musical_instrument)'], 'def': 'a musical percussion instrument; usually consists of a hollow cylinder with a membrane stretched across each end', 'name': 'drum_(musical_instrument)'}, {'frequency': 'r', 'id': 406, 'synset': 'drumstick.n.02', 'synonyms': ['drumstick'], 'def': 'a stick used for playing a drum', 'name': 'drumstick'}, {'frequency': 'f', 'id': 407, 'synset': 'duck.n.01', 'synonyms': ['duck'], 'def': 'small web-footed broad-billed swimming bird', 'name': 'duck'}, {'frequency': 'r', 'id': 408, 'synset': 'duckling.n.02', 'synonyms': ['duckling'], 'def': 'young duck', 'name': 'duckling'}, {'frequency': 'c', 'id': 409, 'synset': 'duct_tape.n.01', 'synonyms': ['duct_tape'], 'def': 'a wide silvery adhesive tape', 'name': 'duct_tape'}, {'frequency': 'f', 'id': 410, 'synset': 'duffel_bag.n.01', 'synonyms': ['duffel_bag', 'duffle_bag', 'duffel', 'duffle'], 'def': 'a large cylindrical bag of heavy cloth', 'name': 'duffel_bag'}, {'frequency': 'r', 'id': 411, 'synset': 'dumbbell.n.01', 'synonyms': ['dumbbell'], 'def': 'an exercising weight with two ball-like ends connected by a short handle', 'name': 'dumbbell'}, {'frequency': 'c', 'id': 412, 'synset': 'dumpster.n.01', 'synonyms': ['dumpster'], 'def': 'a container designed to receive and transport and dump waste', 'name': 
'dumpster'}, {'frequency': 'r', 'id': 413, 'synset': 'dustpan.n.02', 'synonyms': ['dustpan'], 'def': 'a short-handled receptacle into which dust can be swept', 'name': 'dustpan'}, {'frequency': 'r', 'id': 414, 'synset': 'dutch_oven.n.02', 'synonyms': ['Dutch_oven'], 'def': 'iron or earthenware cooking pot; used for stews', 'name': 'Dutch_oven'}, {'frequency': 'c', 'id': 415, 'synset': 'eagle.n.01', 'synonyms': ['eagle'], 'def': 'large birds of prey noted for their broad wings and strong soaring flight', 'name': 'eagle'}, {'frequency': 'f', 'id': 416, 'synset': 'earphone.n.01', 'synonyms': ['earphone', 'earpiece', 'headphone'], 'def': 'device for listening to audio that is held over or inserted into the ear', 'name': 'earphone'}, {'frequency': 'r', 'id': 417, 'synset': 'earplug.n.01', 'synonyms': ['earplug'], 'def': 'a soft plug that is inserted into the ear canal to block sound', 'name': 'earplug'}, {'frequency': 'f', 'id': 418, 'synset': 'earring.n.01', 'synonyms': ['earring'], 'def': 'jewelry to ornament the ear', 'name': 'earring'}, {'frequency': 'c', 'id': 419, 'synset': 'easel.n.01', 'synonyms': ['easel'], 'def': "an upright tripod for displaying something (usually an artist's canvas)", 'name': 'easel'}, {'frequency': 'r', 'id': 420, 'synset': 'eclair.n.01', 'synonyms': ['eclair'], 'def': 'oblong cream puff', 'name': 'eclair'}, {'frequency': 'r', 'id': 421, 'synset': 'eel.n.01', 'synonyms': ['eel'], 'def': 'an elongate fish with fatty flesh', 'name': 'eel'}, {'frequency': 'f', 'id': 422, 'synset': 'egg.n.02', 'synonyms': ['egg', 'eggs'], 'def': 'oval reproductive body of a fowl (especially a hen) used as food', 'name': 'egg'}, {'frequency': 'r', 'id': 423, 'synset': 'egg_roll.n.01', 'synonyms': ['egg_roll', 'spring_roll'], 'def': 'minced vegetables and meat wrapped in a pancake and fried', 'name': 'egg_roll'}, {'frequency': 'c', 'id': 424, 'synset': 'egg_yolk.n.01', 'synonyms': ['egg_yolk', 'yolk_(egg)'], 'def': 'the yellow spherical part of an egg', 'name': 'egg_yolk'}, {'frequency': 'c', 'id': 425, 'synset': 'eggbeater.n.02', 'synonyms': ['eggbeater', 'eggwhisk'], 'def': 'a mixer for beating eggs or whipping cream', 'name': 'eggbeater'}, {'frequency': 'c', 'id': 426, 'synset': 'eggplant.n.01', 'synonyms': ['eggplant', 'aubergine'], 'def': 'egg-shaped vegetable having a shiny skin typically dark purple', 'name': 'eggplant'}, {'frequency': 'r', 'id': 427, 'synset': 'electric_chair.n.01', 'synonyms': ['electric_chair'], 'def': 'a chair-shaped instrument of execution by electrocution', 'name': 'electric_chair'}, {'frequency': 'f', 'id': 428, 'synset': 'electric_refrigerator.n.01', 'synonyms': ['refrigerator'], 'def': 'a refrigerator in which the coolant is pumped around by an electric motor', 'name': 'refrigerator'}, {'frequency': 'f', 'id': 429, 'synset': 'elephant.n.01', 'synonyms': ['elephant'], 'def': 'a common elephant', 'name': 'elephant'}, {'frequency': 'r', 'id': 430, 'synset': 'elk.n.01', 'synonyms': ['elk', 'moose'], 'def': 'large northern deer with enormous flattened antlers in the male', 'name': 'elk'}, {'frequency': 'c', 'id': 431, 'synset': 'envelope.n.01', 'synonyms': ['envelope'], 'def': 'a flat (usually rectangular) container for a letter, thin package, etc.', 'name': 'envelope'}, {'frequency': 'c', 'id': 432, 'synset': 'eraser.n.01', 'synonyms': ['eraser'], 'def': 'an implement used to erase something', 'name': 'eraser'}, {'frequency': 'r', 'id': 433, 'synset': 'escargot.n.01', 'synonyms': ['escargot'], 'def': 'edible snail usually served in the shell with a sauce of 
melted butter and garlic', 'name': 'escargot'}, {'frequency': 'r', 'id': 434, 'synset': 'eyepatch.n.01', 'synonyms': ['eyepatch'], 'def': 'a protective cloth covering for an injured eye', 'name': 'eyepatch'}, {'frequency': 'r', 'id': 435, 'synset': 'falcon.n.01', 'synonyms': ['falcon'], 'def': 'birds of prey having long pointed powerful wings adapted for swift flight', 'name': 'falcon'}, {'frequency': 'f', 'id': 436, 'synset': 'fan.n.01', 'synonyms': ['fan'], 'def': 'a device for creating a current of air by movement of a surface or surfaces', 'name': 'fan'}, {'frequency': 'f', 'id': 437, 'synset': 'faucet.n.01', 'synonyms': ['faucet', 'spigot', 'tap'], 'def': 'a regulator for controlling the flow of a liquid from a reservoir', 'name': 'faucet'}, {'frequency': 'r', 'id': 438, 'synset': 'fedora.n.01', 'synonyms': ['fedora'], 'def': 'a hat made of felt with a creased crown', 'name': 'fedora'}, {'frequency': 'r', 'id': 439, 'synset': 'ferret.n.02', 'synonyms': ['ferret'], 'def': 'domesticated albino variety of the European polecat bred for hunting rats and rabbits', 'name': 'ferret'}, {'frequency': 'c', 'id': 440, 'synset': 'ferris_wheel.n.01', 'synonyms': ['Ferris_wheel'], 'def': 'a large wheel with suspended seats that remain upright as the wheel rotates', 'name': 'Ferris_wheel'}, {'frequency': 'r', 'id': 441, 'synset': 'ferry.n.01', 'synonyms': ['ferry', 'ferryboat'], 'def': 'a boat that transports people or vehicles across a body of water and operates on a regular schedule', 'name': 'ferry'}, {'frequency': 'r', 'id': 442, 'synset': 'fig.n.04', 'synonyms': ['fig_(fruit)'], 'def': 'fleshy sweet pear-shaped yellowish or purple fruit eaten fresh or preserved or dried', 'name': 'fig_(fruit)'}, {'frequency': 'c', 'id': 443, 'synset': 'fighter.n.02', 'synonyms': ['fighter_jet', 'fighter_aircraft', 'attack_aircraft'], 'def': 'a high-speed military or naval airplane designed to destroy enemy targets', 'name': 'fighter_jet'}, {'frequency': 'f', 'id': 444, 'synset': 'figurine.n.01', 'synonyms': ['figurine'], 'def': 'a small carved or molded figure', 'name': 'figurine'}, {'frequency': 'c', 'id': 445, 'synset': 'file.n.03', 'synonyms': ['file_cabinet', 'filing_cabinet'], 'def': 'office furniture consisting of a container for keeping papers in order', 'name': 'file_cabinet'}, {'frequency': 'r', 'id': 446, 'synset': 'file.n.04', 'synonyms': ['file_(tool)'], 'def': 'a steel hand tool with small sharp teeth on some or all of its surfaces; used for smoothing wood or metal', 'name': 'file_(tool)'}, {'frequency': 'f', 'id': 447, 'synset': 'fire_alarm.n.02', 'synonyms': ['fire_alarm', 'smoke_alarm'], 'def': 'an alarm that is tripped off by fire or smoke', 'name': 'fire_alarm'}, {'frequency': 'c', 'id': 448, 'synset': 'fire_engine.n.01', 'synonyms': ['fire_engine', 'fire_truck'], 'def': 'large trucks that carry firefighters and equipment to the site of a fire', 'name': 'fire_engine'}, {'frequency': 'c', 'id': 449, 'synset': 'fire_extinguisher.n.01', 'synonyms': ['fire_extinguisher', 'extinguisher'], 'def': 'a manually operated device for extinguishing small fires', 'name': 'fire_extinguisher'}, {'frequency': 'c', 'id': 450, 'synset': 'fire_hose.n.01', 'synonyms': ['fire_hose'], 'def': 'a large hose that carries water from a fire hydrant to the site of the fire', 'name': 'fire_hose'}, {'frequency': 'f', 'id': 451, 'synset': 'fireplace.n.01', 'synonyms': ['fireplace'], 'def': 'an open recess in a wall at the base of a chimney where a fire can be built', 'name': 'fireplace'}, {'frequency': 'f', 'id': 452, 
'synset': 'fireplug.n.01', 'synonyms': ['fireplug', 'fire_hydrant', 'hydrant'], 'def': 'an upright hydrant for drawing water to use in fighting a fire', 'name': 'fireplug'}, {'frequency': 'c', 'id': 453, 'synset': 'fish.n.01', 'synonyms': ['fish'], 'def': 'any of various mostly cold-blooded aquatic vertebrates usually having scales and breathing through gills', 'name': 'fish'}, {'frequency': 'r', 'id': 454, 'synset': 'fish.n.02', 'synonyms': ['fish_(food)'], 'def': 'the flesh of fish used as food', 'name': 'fish_(food)'}, {'frequency': 'r', 'id': 455, 'synset': 'fishbowl.n.02', 'synonyms': ['fishbowl', 'goldfish_bowl'], 'def': 'a transparent bowl in which small fish are kept', 'name': 'fishbowl'}, {'frequency': 'r', 'id': 456, 'synset': 'fishing_boat.n.01', 'synonyms': ['fishing_boat', 'fishing_vessel'], 'def': 'a vessel for fishing', 'name': 'fishing_boat'}, {'frequency': 'c', 'id': 457, 'synset': 'fishing_rod.n.01', 'synonyms': ['fishing_rod', 'fishing_pole'], 'def': 'a rod that is used in fishing to extend the fishing line', 'name': 'fishing_rod'}, {'frequency': 'f', 'id': 458, 'synset': 'flag.n.01', 'synonyms': ['flag'], 'def': 'emblem usually consisting of a rectangular piece of cloth of distinctive design (do not include pole)', 'name': 'flag'}, {'frequency': 'f', 'id': 459, 'synset': 'flagpole.n.02', 'synonyms': ['flagpole', 'flagstaff'], 'def': 'a tall staff or pole on which a flag is raised', 'name': 'flagpole'}, {'frequency': 'c', 'id': 460, 'synset': 'flamingo.n.01', 'synonyms': ['flamingo'], 'def': 'large pink web-footed bird with down-bent bill', 'name': 'flamingo'}, {'frequency': 'c', 'id': 461, 'synset': 'flannel.n.01', 'synonyms': ['flannel'], 'def': 'a soft light woolen fabric; used for clothing', 'name': 'flannel'}, {'frequency': 'r', 'id': 462, 'synset': 'flash.n.10', 'synonyms': ['flash', 'flashbulb'], 'def': 'a lamp for providing momentary light to take a photograph', 'name': 'flash'}, {'frequency': 'c', 'id': 463, 'synset': 'flashlight.n.01', 'synonyms': ['flashlight', 'torch'], 'def': 'a small portable battery-powered electric lamp', 'name': 'flashlight'}, {'frequency': 'r', 'id': 464, 'synset': 'fleece.n.03', 'synonyms': ['fleece'], 'def': 'a soft bulky fabric with deep pile; used chiefly for clothing', 'name': 'fleece'}, {'frequency': 'f', 'id': 465, 'synset': 'flip-flop.n.02', 'synonyms': ['flip-flop_(sandal)'], 'def': 'a backless sandal held to the foot by a thong between two toes', 'name': 'flip-flop_(sandal)'}, {'frequency': 'c', 'id': 466, 'synset': 'flipper.n.01', 'synonyms': ['flipper_(footwear)', 'fin_(footwear)'], 'def': 'a shoe to aid a person in swimming', 'name': 'flipper_(footwear)'}, {'frequency': 'f', 'id': 467, 'synset': 'flower_arrangement.n.01', 'synonyms': ['flower_arrangement', 'floral_arrangement'], 'def': 'a decorative arrangement of flowers', 'name': 'flower_arrangement'}, {'frequency': 'c', 'id': 468, 'synset': 'flute.n.02', 'synonyms': ['flute_glass', 'champagne_flute'], 'def': 'a tall narrow wineglass', 'name': 'flute_glass'}, {'frequency': 'r', 'id': 469, 'synset': 'foal.n.01', 'synonyms': ['foal'], 'def': 'a young horse', 'name': 'foal'}, {'frequency': 'c', 'id': 470, 'synset': 'folding_chair.n.01', 'synonyms': ['folding_chair'], 'def': 'a chair that can be folded flat for storage', 'name': 'folding_chair'}, {'frequency': 'c', 'id': 471, 'synset': 'food_processor.n.01', 'synonyms': ['food_processor'], 'def': 'a kitchen appliance for shredding, blending, chopping, or slicing food', 'name': 'food_processor'}, {'frequency': 'c', 'id': 472, 
'synset': 'football.n.02', 'synonyms': ['football_(American)'], 'def': 'the inflated oblong ball used in playing American football', 'name': 'football_(American)'}, {'frequency': 'r', 'id': 473, 'synset': 'football_helmet.n.01', 'synonyms': ['football_helmet'], 'def': 'a padded helmet with a face mask to protect the head of football players', 'name': 'football_helmet'}, {'frequency': 'c', 'id': 474, 'synset': 'footstool.n.01', 'synonyms': ['footstool', 'footrest'], 'def': 'a low seat or a stool to rest the feet of a seated person', 'name': 'footstool'}, {'frequency': 'f', 'id': 475, 'synset': 'fork.n.01', 'synonyms': ['fork'], 'def': 'cutlery used for serving and eating food', 'name': 'fork'}, {'frequency': 'r', 'id': 476, 'synset': 'forklift.n.01', 'synonyms': ['forklift'], 'def': 'an industrial vehicle with a power operated fork in front that can be inserted under loads to lift and move them', 'name': 'forklift'}, {'frequency': 'r', 'id': 477, 'synset': 'freight_car.n.01', 'synonyms': ['freight_car'], 'def': 'a railway car that carries freight', 'name': 'freight_car'}, {'frequency': 'r', 'id': 478, 'synset': 'french_toast.n.01', 'synonyms': ['French_toast'], 'def': 'bread slice dipped in egg and milk and fried', 'name': 'French_toast'}, {'frequency': 'c', 'id': 479, 'synset': 'freshener.n.01', 'synonyms': ['freshener', 'air_freshener'], 'def': 'anything that freshens', 'name': 'freshener'}, {'frequency': 'f', 'id': 480, 'synset': 'frisbee.n.01', 'synonyms': ['frisbee'], 'def': 'a light, plastic disk propelled with a flip of the wrist for recreation or competition', 'name': 'frisbee'}, {'frequency': 'c', 'id': 481, 'synset': 'frog.n.01', 'synonyms': ['frog', 'toad', 'toad_frog'], 'def': 'a tailless stout-bodied amphibian with long hind limbs for leaping', 'name': 'frog'}, {'frequency': 'c', 'id': 482, 'synset': 'fruit_juice.n.01', 'synonyms': ['fruit_juice'], 'def': 'drink produced by squeezing or crushing fruit', 'name': 'fruit_juice'}, {'frequency': 'r', 'id': 483, 'synset': 'fruit_salad.n.01', 'synonyms': ['fruit_salad'], 'def': 'salad composed of fruits', 'name': 'fruit_salad'}, {'frequency': 'c', 'id': 484, 'synset': 'frying_pan.n.01', 'synonyms': ['frying_pan', 'frypan', 'skillet'], 'def': 'a pan used for frying foods', 'name': 'frying_pan'}, {'frequency': 'r', 'id': 485, 'synset': 'fudge.n.01', 'synonyms': ['fudge'], 'def': 'soft creamy candy', 'name': 'fudge'}, {'frequency': 'r', 'id': 486, 'synset': 'funnel.n.02', 'synonyms': ['funnel'], 'def': 'a cone-shaped utensil used to channel a substance into a container with a small mouth', 'name': 'funnel'}, {'frequency': 'c', 'id': 487, 'synset': 'futon.n.01', 'synonyms': ['futon'], 'def': 'a pad that is used for sleeping on the floor or on a raised frame', 'name': 'futon'}, {'frequency': 'r', 'id': 488, 'synset': 'gag.n.02', 'synonyms': ['gag', 'muzzle'], 'def': "restraint put into a person's mouth to prevent speaking or shouting", 'name': 'gag'}, {'frequency': 'r', 'id': 489, 'synset': 'garbage.n.03', 'synonyms': ['garbage'], 'def': 'a receptacle where waste can be discarded', 'name': 'garbage'}, {'frequency': 'c', 'id': 490, 'synset': 'garbage_truck.n.01', 'synonyms': ['garbage_truck'], 'def': 'a truck for collecting domestic refuse', 'name': 'garbage_truck'}, {'frequency': 'c', 'id': 491, 'synset': 'garden_hose.n.01', 'synonyms': ['garden_hose'], 'def': 'a hose used for watering a lawn or garden', 'name': 'garden_hose'}, {'frequency': 'c', 'id': 492, 'synset': 'gargle.n.01', 'synonyms': ['gargle', 'mouthwash'], 'def': 'a medicated 
solution used for gargling and rinsing the mouth', 'name': 'gargle'}, {'frequency': 'r', 'id': 493, 'synset': 'gargoyle.n.02', 'synonyms': ['gargoyle'], 'def': 'an ornament consisting of a grotesquely carved figure of a person or animal', 'name': 'gargoyle'}, {'frequency': 'c', 'id': 494, 'synset': 'garlic.n.02', 'synonyms': ['garlic', 'ail'], 'def': 'aromatic bulb used as seasoning', 'name': 'garlic'}, {'frequency': 'r', 'id': 495, 'synset': 'gasmask.n.01', 'synonyms': ['gasmask', 'respirator', 'gas_helmet'], 'def': 'a protective face mask with a filter', 'name': 'gasmask'}, {'frequency': 'r', 'id': 496, 'synset': 'gazelle.n.01', 'synonyms': ['gazelle'], 'def': 'small swift graceful antelope of Africa and Asia having lustrous eyes', 'name': 'gazelle'}, {'frequency': 'c', 'id': 497, 'synset': 'gelatin.n.02', 'synonyms': ['gelatin', 'jelly'], 'def': 'an edible jelly made with gelatin and used as a dessert or salad base or a coating for foods', 'name': 'gelatin'}, {'frequency': 'r', 'id': 498, 'synset': 'gem.n.02', 'synonyms': ['gemstone'], 'def': 'a crystalline rock that can be cut and polished for jewelry', 'name': 'gemstone'}, {'frequency': 'c', 'id': 499, 'synset': 'giant_panda.n.01', 'synonyms': ['giant_panda', 'panda', 'panda_bear'], 'def': 'large black-and-white herbivorous mammal of bamboo forests of China and Tibet', 'name': 'giant_panda'}, {'frequency': 'c', 'id': 500, 'synset': 'gift_wrap.n.01', 'synonyms': ['gift_wrap'], 'def': 'attractive wrapping paper suitable for wrapping gifts', 'name': 'gift_wrap'}, {'frequency': 'c', 'id': 501, 'synset': 'ginger.n.03', 'synonyms': ['ginger', 'gingerroot'], 'def': 'the root of the common ginger plant; used fresh as a seasoning', 'name': 'ginger'}, {'frequency': 'f', 'id': 502, 'synset': 'giraffe.n.01', 'synonyms': ['giraffe'], 'def': 'tall animal having a spotted coat and small horns and very long neck and legs', 'name': 'giraffe'}, {'frequency': 'c', 'id': 503, 'synset': 'girdle.n.02', 'synonyms': ['cincture', 'sash', 'waistband', 'waistcloth'], 'def': 'a band of material around the waist that strengthens a skirt or trousers', 'name': 'cincture'}, {'frequency': 'f', 'id': 504, 'synset': 'glass.n.02', 'synonyms': ['glass_(drink_container)', 'drinking_glass'], 'def': 'a container for holding liquids while drinking', 'name': 'glass_(drink_container)'}, {'frequency': 'c', 'id': 505, 'synset': 'globe.n.03', 'synonyms': ['globe'], 'def': 'a sphere on which a map (especially of the earth) is represented', 'name': 'globe'}, {'frequency': 'f', 'id': 506, 'synset': 'glove.n.02', 'synonyms': ['glove'], 'def': 'handwear covering the hand', 'name': 'glove'}, {'frequency': 'c', 'id': 507, 'synset': 'goat.n.01', 'synonyms': ['goat'], 'def': 'a common goat', 'name': 'goat'}, {'frequency': 'f', 'id': 508, 'synset': 'goggles.n.01', 'synonyms': ['goggles'], 'def': 'tight-fitting spectacles worn to protect the eyes', 'name': 'goggles'}, {'frequency': 'r', 'id': 509, 'synset': 'goldfish.n.01', 'synonyms': ['goldfish'], 'def': 'small golden or orange-red freshwater fishes used as pond or aquarium pets', 'name': 'goldfish'}, {'frequency': 'r', 'id': 510, 'synset': 'golf_club.n.02', 'synonyms': ['golf_club', 'golf-club'], 'def': 'golf equipment used by a golfer to hit a golf ball', 'name': 'golf_club'}, {'frequency': 'c', 'id': 511, 'synset': 'golfcart.n.01', 'synonyms': ['golfcart'], 'def': 'a small motor vehicle in which golfers can ride between shots', 'name': 'golfcart'}, {'frequency': 'r', 'id': 512, 'synset': 'gondola.n.02', 'synonyms': ['gondola_(boat)'], 
'def': 'long narrow flat-bottomed boat propelled by sculling; traditionally used on canals of Venice', 'name': 'gondola_(boat)'}, {'frequency': 'c', 'id': 513, 'synset': 'goose.n.01', 'synonyms': ['goose'], 'def': 'loud, web-footed long-necked aquatic birds usually larger than ducks', 'name': 'goose'}, {'frequency': 'r', 'id': 514, 'synset': 'gorilla.n.01', 'synonyms': ['gorilla'], 'def': 'largest ape', 'name': 'gorilla'}, {'frequency': 'r', 'id': 515, 'synset': 'gourd.n.02', 'synonyms': ['gourd'], 'def': 'any of numerous inedible fruits with hard rinds', 'name': 'gourd'}, {'frequency': 'r', 'id': 516, 'synset': 'gown.n.04', 'synonyms': ['surgical_gown', 'scrubs_(surgical_clothing)'], 'def': 'protective garment worn by surgeons during operations', 'name': 'surgical_gown'}, {'frequency': 'f', 'id': 517, 'synset': 'grape.n.01', 'synonyms': ['grape'], 'def': 'any of various juicy fruit with green or purple skins; grow in clusters', 'name': 'grape'}, {'frequency': 'r', 'id': 518, 'synset': 'grasshopper.n.01', 'synonyms': ['grasshopper'], 'def': 'plant-eating insect with hind legs adapted for leaping', 'name': 'grasshopper'}, {'frequency': 'c', 'id': 519, 'synset': 'grater.n.01', 'synonyms': ['grater'], 'def': 'utensil with sharp perforations for shredding foods (as vegetables or cheese)', 'name': 'grater'}, {'frequency': 'c', 'id': 520, 'synset': 'gravestone.n.01', 'synonyms': ['gravestone', 'headstone', 'tombstone'], 'def': 'a stone that is used to mark a grave', 'name': 'gravestone'}, {'frequency': 'r', 'id': 521, 'synset': 'gravy_boat.n.01', 'synonyms': ['gravy_boat', 'gravy_holder'], 'def': 'a dish (often boat-shaped) for serving gravy or sauce', 'name': 'gravy_boat'}, {'frequency': 'c', 'id': 522, 'synset': 'green_bean.n.02', 'synonyms': ['green_bean'], 'def': 'a common bean plant cultivated for its slender green edible pods', 'name': 'green_bean'}, {'frequency': 'c', 'id': 523, 'synset': 'green_onion.n.01', 'synonyms': ['green_onion', 'spring_onion', 'scallion'], 'def': 'a young onion before the bulb has enlarged', 'name': 'green_onion'}, {'frequency': 'r', 'id': 524, 'synset': 'griddle.n.01', 'synonyms': ['griddle'], 'def': 'cooking utensil consisting of a flat heated surface on which food is cooked', 'name': 'griddle'}, {'frequency': 'r', 'id': 525, 'synset': 'grillroom.n.01', 'synonyms': ['grillroom', 'grill_(restaurant)'], 'def': 'a restaurant where food is cooked on a grill', 'name': 'grillroom'}, {'frequency': 'r', 'id': 526, 'synset': 'grinder.n.04', 'synonyms': ['grinder_(tool)'], 'def': 'a machine tool that polishes metal', 'name': 'grinder_(tool)'}, {'frequency': 'r', 'id': 527, 'synset': 'grits.n.01', 'synonyms': ['grits', 'hominy_grits'], 'def': 'coarsely ground corn boiled as a breakfast dish', 'name': 'grits'}, {'frequency': 'c', 'id': 528, 'synset': 'grizzly.n.01', 'synonyms': ['grizzly', 'grizzly_bear'], 'def': 'powerful brownish-yellow bear of the uplands of western North America', 'name': 'grizzly'}, {'frequency': 'c', 'id': 529, 'synset': 'grocery_bag.n.01', 'synonyms': ['grocery_bag'], 'def': "a sack for holding customer's groceries", 'name': 'grocery_bag'}, {'frequency': 'r', 'id': 530, 'synset': 'guacamole.n.01', 'synonyms': ['guacamole'], 'def': 'a dip made of mashed avocado mixed with chopped onions and other seasonings', 'name': 'guacamole'}, {'frequency': 'f', 'id': 531, 'synset': 'guitar.n.01', 'synonyms': ['guitar'], 'def': 'a stringed instrument usually having six strings; played by strumming or plucking', 'name': 'guitar'}, {'frequency': 'c', 'id': 532, 
'synset': 'gull.n.02', 'synonyms': ['gull', 'seagull'], 'def': 'mostly white aquatic bird having long pointed wings and short legs', 'name': 'gull'}, {'frequency': 'c', 'id': 533, 'synset': 'gun.n.01', 'synonyms': ['gun'], 'def': 'a weapon that discharges a bullet at high velocity from a metal tube', 'name': 'gun'}, {'frequency': 'r', 'id': 534, 'synset': 'hair_spray.n.01', 'synonyms': ['hair_spray'], 'def': 'substance sprayed on the hair to hold it in place', 'name': 'hair_spray'}, {'frequency': 'c', 'id': 535, 'synset': 'hairbrush.n.01', 'synonyms': ['hairbrush'], 'def': "a brush used to groom a person's hair", 'name': 'hairbrush'}, {'frequency': 'c', 'id': 536, 'synset': 'hairnet.n.01', 'synonyms': ['hairnet'], 'def': 'a small net that someone wears over their hair to keep it in place', 'name': 'hairnet'}, {'frequency': 'c', 'id': 537, 'synset': 'hairpin.n.01', 'synonyms': ['hairpin'], 'def': "a double pronged pin used to hold women's hair in place", 'name': 'hairpin'}, {'frequency': 'f', 'id': 538, 'synset': 'ham.n.01', 'synonyms': ['ham', 'jambon', 'gammon'], 'def': 'meat cut from the thigh of a hog (usually smoked)', 'name': 'ham'}, {'frequency': 'c', 'id': 539, 'synset': 'hamburger.n.01', 'synonyms': ['hamburger', 'beefburger', 'burger'], 'def': 'a sandwich consisting of a patty of minced beef served on a bun', 'name': 'hamburger'}, {'frequency': 'c', 'id': 540, 'synset': 'hammer.n.02', 'synonyms': ['hammer'], 'def': 'a hand tool with a heavy head and a handle; used to deliver an impulsive force by striking', 'name': 'hammer'}, {'frequency': 'r', 'id': 541, 'synset': 'hammock.n.02', 'synonyms': ['hammock'], 'def': 'a hanging bed of canvas or rope netting (usually suspended between two trees)', 'name': 'hammock'}, {'frequency': 'r', 'id': 542, 'synset': 'hamper.n.02', 'synonyms': ['hamper'], 'def': 'a basket usually with a cover', 'name': 'hamper'}, {'frequency': 'r', 'id': 543, 'synset': 'hamster.n.01', 'synonyms': ['hamster'], 'def': 'short-tailed burrowing rodent with large cheek pouches', 'name': 'hamster'}, {'frequency': 'c', 'id': 544, 'synset': 'hand_blower.n.01', 'synonyms': ['hair_dryer'], 'def': 'a hand-held electric blower that can blow warm air onto the hair', 'name': 'hair_dryer'}, {'frequency': 'r', 'id': 545, 'synset': 'hand_glass.n.01', 'synonyms': ['hand_glass', 'hand_mirror'], 'def': 'a mirror intended to be held in the hand', 'name': 'hand_glass'}, {'frequency': 'f', 'id': 546, 'synset': 'hand_towel.n.01', 'synonyms': ['hand_towel', 'face_towel'], 'def': 'a small towel used to dry the hands or face', 'name': 'hand_towel'}, {'frequency': 'c', 'id': 547, 'synset': 'handcart.n.01', 'synonyms': ['handcart', 'pushcart', 'hand_truck'], 'def': 'wheeled vehicle that can be pushed by a person', 'name': 'handcart'}, {'frequency': 'r', 'id': 548, 'synset': 'handcuff.n.01', 'synonyms': ['handcuff'], 'def': 'shackle that consists of a metal loop that can be locked around the wrist', 'name': 'handcuff'}, {'frequency': 'c', 'id': 549, 'synset': 'handkerchief.n.01', 'synonyms': ['handkerchief'], 'def': 'a square piece of cloth used for wiping the eyes or nose or as a costume accessory', 'name': 'handkerchief'}, {'frequency': 'f', 'id': 550, 'synset': 'handle.n.01', 'synonyms': ['handle', 'grip', 'handgrip'], 'def': 'the appendage to an object that is designed to be held in order to use or move it', 'name': 'handle'}, {'frequency': 'r', 'id': 551, 'synset': 'handsaw.n.01', 'synonyms': ['handsaw', "carpenter's_saw"], 'def': 'a saw used with one hand for cutting wood', 'name': 
'handsaw'}, {'frequency': 'r', 'id': 552, 'synset': 'hardback.n.01', 'synonyms': ['hardback_book', 'hardcover_book'], 'def': 'a book with cardboard or cloth or leather covers', 'name': 'hardback_book'}, {'frequency': 'r', 'id': 553, 'synset': 'harmonium.n.01', 'synonyms': ['harmonium', 'organ_(musical_instrument)', 'reed_organ_(musical_instrument)'], 'def': 'a free-reed instrument in which air is forced through the reeds by bellows', 'name': 'harmonium'}, {'frequency': 'f', 'id': 554, 'synset': 'hat.n.01', 'synonyms': ['hat'], 'def': 'headwear that protects the head from bad weather, sun, or worn for fashion', 'name': 'hat'}, {'frequency': 'r', 'id': 555, 'synset': 'hatbox.n.01', 'synonyms': ['hatbox'], 'def': 'a round piece of luggage for carrying hats', 'name': 'hatbox'}, {'frequency': 'r', 'id': 556, 'synset': 'hatch.n.03', 'synonyms': ['hatch'], 'def': 'a movable barrier covering a hatchway', 'name': 'hatch'}, {'frequency': 'c', 'id': 557, 'synset': 'head_covering.n.01', 'synonyms': ['veil'], 'def': 'a garment that covers the head and face', 'name': 'veil'}, {'frequency': 'f', 'id': 558, 'synset': 'headband.n.01', 'synonyms': ['headband'], 'def': 'a band worn around or over the head', 'name': 'headband'}, {'frequency': 'f', 'id': 559, 'synset': 'headboard.n.01', 'synonyms': ['headboard'], 'def': 'a vertical board or panel forming the head of a bedstead', 'name': 'headboard'}, {'frequency': 'f', 'id': 560, 'synset': 'headlight.n.01', 'synonyms': ['headlight', 'headlamp'], 'def': 'a powerful light with reflector; attached to the front of an automobile or locomotive', 'name': 'headlight'}, {'frequency': 'c', 'id': 561, 'synset': 'headscarf.n.01', 'synonyms': ['headscarf'], 'def': 'a kerchief worn over the head and tied under the chin', 'name': 'headscarf'}, {'frequency': 'r', 'id': 562, 'synset': 'headset.n.01', 'synonyms': ['headset'], 'def': 'receiver consisting of a pair of headphones', 'name': 'headset'}, {'frequency': 'c', 'id': 563, 'synset': 'headstall.n.01', 'synonyms': ['headstall_(for_horses)', 'headpiece_(for_horses)'], 'def': "the band that is the part of a bridle that fits around a horse's head", 'name': 'headstall_(for_horses)'}, {'frequency': 'r', 'id': 564, 'synset': 'hearing_aid.n.02', 'synonyms': ['hearing_aid'], 'def': 'an acoustic device used to direct sound to the ear of a hearing-impaired person', 'name': 'hearing_aid'}, {'frequency': 'c', 'id': 565, 'synset': 'heart.n.02', 'synonyms': ['heart'], 'def': 'a muscular organ; its contractions move the blood through the body', 'name': 'heart'}, {'frequency': 'c', 'id': 566, 'synset': 'heater.n.01', 'synonyms': ['heater', 'warmer'], 'def': 'device that heats water or supplies warmth to a room', 'name': 'heater'}, {'frequency': 'c', 'id': 567, 'synset': 'helicopter.n.01', 'synonyms': ['helicopter'], 'def': 'an aircraft without wings that obtains its lift from the rotation of overhead blades', 'name': 'helicopter'}, {'frequency': 'f', 'id': 568, 'synset': 'helmet.n.02', 'synonyms': ['helmet'], 'def': 'a protective headgear made of hard material to resist blows', 'name': 'helmet'}, {'frequency': 'r', 'id': 569, 'synset': 'heron.n.02', 'synonyms': ['heron'], 'def': 'grey or white wading bird with long neck and long legs and (usually) long bill', 'name': 'heron'}, {'frequency': 'c', 'id': 570, 'synset': 'highchair.n.01', 'synonyms': ['highchair', 'feeding_chair'], 'def': 'a chair for feeding a very young child', 'name': 'highchair'}, {'frequency': 'f', 'id': 571, 'synset': 'hinge.n.01', 'synonyms': ['hinge'], 'def': 'a joint 
that holds two parts together so that one can swing relative to the other', 'name': 'hinge'}, {'frequency': 'r', 'id': 572, 'synset': 'hippopotamus.n.01', 'synonyms': ['hippopotamus'], 'def': 'massive thick-skinned animal living in or around rivers of tropical Africa', 'name': 'hippopotamus'}, {'frequency': 'r', 'id': 573, 'synset': 'hockey_stick.n.01', 'synonyms': ['hockey_stick'], 'def': 'sports implement consisting of a stick used by hockey players to move the puck', 'name': 'hockey_stick'}, {'frequency': 'c', 'id': 574, 'synset': 'hog.n.03', 'synonyms': ['hog', 'pig'], 'def': 'domestic swine', 'name': 'hog'}, {'frequency': 'f', 'id': 575, 'synset': 'home_plate.n.01', 'synonyms': ['home_plate_(baseball)', 'home_base_(baseball)'], 'def': '(baseball) a rubber slab where the batter stands; it must be touched by a base runner in order to score', 'name': 'home_plate_(baseball)'}, {'frequency': 'c', 'id': 576, 'synset': 'honey.n.01', 'synonyms': ['honey'], 'def': 'a sweet yellow liquid produced by bees', 'name': 'honey'}, {'frequency': 'f', 'id': 577, 'synset': 'hood.n.06', 'synonyms': ['fume_hood', 'exhaust_hood'], 'def': 'metal covering leading to a vent that exhausts smoke or fumes', 'name': 'fume_hood'}, {'frequency': 'f', 'id': 578, 'synset': 'hook.n.05', 'synonyms': ['hook'], 'def': 'a curved or bent implement for suspending or pulling something', 'name': 'hook'}, {'frequency': 'f', 'id': 579, 'synset': 'horse.n.01', 'synonyms': ['horse'], 'def': 'a common horse', 'name': 'horse'}, {'frequency': 'f', 'id': 580, 'synset': 'hose.n.03', 'synonyms': ['hose', 'hosepipe'], 'def': 'a flexible pipe for conveying a liquid or gas', 'name': 'hose'}, {'frequency': 'r', 'id': 581, 'synset': 'hot-air_balloon.n.01', 'synonyms': ['hot-air_balloon'], 'def': 'balloon for travel through the air in a basket suspended below a large bag of heated air', 'name': 'hot-air_balloon'}, {'frequency': 'r', 'id': 582, 'synset': 'hot_plate.n.01', 'synonyms': ['hotplate'], 'def': 'a portable electric appliance for heating or cooking or keeping food warm', 'name': 'hotplate'}, {'frequency': 'c', 'id': 583, 'synset': 'hot_sauce.n.01', 'synonyms': ['hot_sauce'], 'def': 'a pungent peppery sauce', 'name': 'hot_sauce'}, {'frequency': 'r', 'id': 584, 'synset': 'hourglass.n.01', 'synonyms': ['hourglass'], 'def': 'a sandglass timer that runs for sixty minutes', 'name': 'hourglass'}, {'frequency': 'r', 'id': 585, 'synset': 'houseboat.n.01', 'synonyms': ['houseboat'], 'def': 'a barge that is designed and equipped for use as a dwelling', 'name': 'houseboat'}, {'frequency': 'r', 'id': 586, 'synset': 'hummingbird.n.01', 'synonyms': ['hummingbird'], 'def': 'tiny American bird having brilliant iridescent plumage and long slender bills', 'name': 'hummingbird'}, {'frequency': 'r', 'id': 587, 'synset': 'hummus.n.01', 'synonyms': ['hummus', 'humus', 'hommos', 'hoummos', 'humous'], 'def': 'a thick spread made from mashed chickpeas', 'name': 'hummus'}, {'frequency': 'c', 'id': 588, 'synset': 'ice_bear.n.01', 'synonyms': ['polar_bear'], 'def': 'white bear of Arctic regions', 'name': 'polar_bear'}, {'frequency': 'c', 'id': 589, 'synset': 'ice_cream.n.01', 'synonyms': ['icecream'], 'def': 'frozen dessert containing cream and sugar and flavoring', 'name': 'icecream'}, {'frequency': 'r', 'id': 590, 'synset': 'ice_lolly.n.01', 'synonyms': ['popsicle'], 'def': 'ice cream or water ice on a small wooden stick', 'name': 'popsicle'}, {'frequency': 'c', 'id': 591, 'synset': 'ice_maker.n.01', 'synonyms': ['ice_maker'], 'def': 'an appliance included in 
some electric refrigerators for making ice cubes', 'name': 'ice_maker'}, {'frequency': 'r', 'id': 592, 'synset': 'ice_pack.n.01', 'synonyms': ['ice_pack', 'ice_bag'], 'def': 'a waterproof bag filled with ice: applied to the body (especially the head) to cool or reduce swelling', 'name': 'ice_pack'}, {'frequency': 'r', 'id': 593, 'synset': 'ice_skate.n.01', 'synonyms': ['ice_skate'], 'def': 'skate consisting of a boot with a steel blade fitted to the sole', 'name': 'ice_skate'}, {'frequency': 'r', 'id': 594, 'synset': 'ice_tea.n.01', 'synonyms': ['ice_tea', 'iced_tea'], 'def': 'strong tea served over ice', 'name': 'ice_tea'}, {'frequency': 'c', 'id': 595, 'synset': 'igniter.n.01', 'synonyms': ['igniter', 'ignitor', 'lighter'], 'def': 'a substance or device used to start a fire', 'name': 'igniter'}, {'frequency': 'r', 'id': 596, 'synset': 'incense.n.01', 'synonyms': ['incense'], 'def': 'a substance that produces a fragrant odor when burned', 'name': 'incense'}, {'frequency': 'r', 'id': 597, 'synset': 'inhaler.n.01', 'synonyms': ['inhaler', 'inhalator'], 'def': 'a dispenser that produces a chemical vapor to be inhaled through mouth or nose', 'name': 'inhaler'}, {'frequency': 'c', 'id': 598, 'synset': 'ipod.n.01', 'synonyms': ['iPod'], 'def': 'a pocket-sized device used to play music files', 'name': 'iPod'}, {'frequency': 'c', 'id': 599, 'synset': 'iron.n.04', 'synonyms': ['iron_(for_clothing)', 'smoothing_iron_(for_clothing)'], 'def': 'home appliance consisting of a flat metal base that is heated and used to smooth cloth', 'name': 'iron_(for_clothing)'}, {'frequency': 'r', 'id': 600, 'synset': 'ironing_board.n.01', 'synonyms': ['ironing_board'], 'def': 'narrow padded board on collapsible supports; used for ironing clothes', 'name': 'ironing_board'}, {'frequency': 'f', 'id': 601, 'synset': 'jacket.n.01', 'synonyms': ['jacket'], 'def': 'a waist-length coat', 'name': 'jacket'}, {'frequency': 'r', 'id': 602, 'synset': 'jam.n.01', 'synonyms': ['jam'], 'def': 'preserve of crushed fruit', 'name': 'jam'}, {'frequency': 'f', 'id': 603, 'synset': 'jean.n.01', 'synonyms': ['jean', 'blue_jean', 'denim'], 'def': '(usually plural) close-fitting trousers of heavy denim for manual work or casual wear', 'name': 'jean'}, {'frequency': 'c', 'id': 604, 'synset': 'jeep.n.01', 'synonyms': ['jeep', 'landrover'], 'def': 'a car suitable for traveling over rough terrain', 'name': 'jeep'}, {'frequency': 'r', 'id': 605, 'synset': 'jelly_bean.n.01', 'synonyms': ['jelly_bean', 'jelly_egg'], 'def': 'sugar-glazed jellied candy', 'name': 'jelly_bean'}, {'frequency': 'f', 'id': 606, 'synset': 'jersey.n.03', 'synonyms': ['jersey', 'T-shirt', 'tee_shirt'], 'def': 'a close-fitting pullover shirt', 'name': 'jersey'}, {'frequency': 'c', 'id': 607, 'synset': 'jet.n.01', 'synonyms': ['jet_plane', 'jet-propelled_plane'], 'def': 'an airplane powered by one or more jet engines', 'name': 'jet_plane'}, {'frequency': 'c', 'id': 608, 'synset': 'jewelry.n.01', 'synonyms': ['jewelry', 'jewellery'], 'def': 'an adornment (as a bracelet or ring or necklace) made of precious metals and set with gems (or imitation gems)', 'name': 'jewelry'}, {'frequency': 'r', 'id': 609, 'synset': 'joystick.n.02', 'synonyms': ['joystick'], 'def': 'a control device for computers consisting of a vertical handle that can move freely in two directions', 'name': 'joystick'}, {'frequency': 'r', 'id': 610, 'synset': 'jump_suit.n.01', 'synonyms': ['jumpsuit'], 'def': "one-piece garment fashioned after a parachutist's uniform", 'name': 'jumpsuit'}, {'frequency': 'c', 
'id': 611, 'synset': 'kayak.n.01', 'synonyms': ['kayak'], 'def': 'a small canoe consisting of a light frame made watertight with animal skins', 'name': 'kayak'}, {'frequency': 'r', 'id': 612, 'synset': 'keg.n.02', 'synonyms': ['keg'], 'def': 'small cask or barrel', 'name': 'keg'}, {'frequency': 'r', 'id': 613, 'synset': 'kennel.n.01', 'synonyms': ['kennel', 'doghouse'], 'def': 'outbuilding that serves as a shelter for a dog', 'name': 'kennel'}, {'frequency': 'c', 'id': 614, 'synset': 'kettle.n.01', 'synonyms': ['kettle', 'boiler'], 'def': 'a metal pot for stewing or boiling; usually has a lid', 'name': 'kettle'}, {'frequency': 'f', 'id': 615, 'synset': 'key.n.01', 'synonyms': ['key'], 'def': 'metal instrument used to unlock a lock', 'name': 'key'}, {'frequency': 'r', 'id': 616, 'synset': 'keycard.n.01', 'synonyms': ['keycard'], 'def': 'a plastic card used to gain access typically to a door', 'name': 'keycard'}, {'frequency': 'r', 'id': 617, 'synset': 'kilt.n.01', 'synonyms': ['kilt'], 'def': 'a knee-length pleated tartan skirt worn by men as part of the traditional dress in the Highlands of northern Scotland', 'name': 'kilt'}, {'frequency': 'c', 'id': 618, 'synset': 'kimono.n.01', 'synonyms': ['kimono'], 'def': 'a loose robe; imitated from robes originally worn by Japanese', 'name': 'kimono'}, {'frequency': 'f', 'id': 619, 'synset': 'kitchen_sink.n.01', 'synonyms': ['kitchen_sink'], 'def': 'a sink in a kitchen', 'name': 'kitchen_sink'}, {'frequency': 'c', 'id': 620, 'synset': 'kitchen_table.n.01', 'synonyms': ['kitchen_table'], 'def': 'a table in the kitchen', 'name': 'kitchen_table'}, {'frequency': 'f', 'id': 621, 'synset': 'kite.n.03', 'synonyms': ['kite'], 'def': 'plaything consisting of a light frame covered with tissue paper; flown in wind at end of a string', 'name': 'kite'}, {'frequency': 'c', 'id': 622, 'synset': 'kitten.n.01', 'synonyms': ['kitten', 'kitty'], 'def': 'young domestic cat', 'name': 'kitten'}, {'frequency': 'c', 'id': 623, 'synset': 'kiwi.n.03', 'synonyms': ['kiwi_fruit'], 'def': 'fuzzy brown egg-shaped fruit with slightly tart green flesh', 'name': 'kiwi_fruit'}, {'frequency': 'f', 'id': 624, 'synset': 'knee_pad.n.01', 'synonyms': ['knee_pad'], 'def': 'protective garment consisting of a pad worn by football or baseball or hockey players', 'name': 'knee_pad'}, {'frequency': 'f', 'id': 625, 'synset': 'knife.n.01', 'synonyms': ['knife'], 'def': 'tool with a blade and point used as a cutting instrument', 'name': 'knife'}, {'frequency': 'r', 'id': 626, 'synset': 'knight.n.02', 'synonyms': ['knight_(chess_piece)', 'horse_(chess_piece)'], 'def': 'a chess game piece shaped to resemble the head of a horse', 'name': 'knight_(chess_piece)'}, {'frequency': 'r', 'id': 627, 'synset': 'knitting_needle.n.01', 'synonyms': ['knitting_needle'], 'def': 'needle consisting of a slender rod with pointed ends; usually used in pairs', 'name': 'knitting_needle'}, {'frequency': 'f', 'id': 628, 'synset': 'knob.n.02', 'synonyms': ['knob'], 'def': 'a round handle often found on a door', 'name': 'knob'}, {'frequency': 'r', 'id': 629, 'synset': 'knocker.n.05', 'synonyms': ['knocker_(on_a_door)', 'doorknocker'], 'def': 'a device (usually metal and ornamental) attached by a hinge to a door', 'name': 'knocker_(on_a_door)'}, {'frequency': 'r', 'id': 630, 'synset': 'koala.n.01', 'synonyms': ['koala', 'koala_bear'], 'def': 'sluggish tailless Australian marsupial with grey furry ears and coat', 'name': 'koala'}, {'frequency': 'r', 'id': 631, 'synset': 'lab_coat.n.01', 'synonyms': ['lab_coat', 
'laboratory_coat'], 'def': 'a light coat worn to protect clothing from substances used while working in a laboratory', 'name': 'lab_coat'}, {'frequency': 'f', 'id': 632, 'synset': 'ladder.n.01', 'synonyms': ['ladder'], 'def': 'steps consisting of two parallel members connected by rungs', 'name': 'ladder'}, {'frequency': 'c', 'id': 633, 'synset': 'ladle.n.01', 'synonyms': ['ladle'], 'def': 'a spoon-shaped vessel with a long handle frequently used to transfer liquids', 'name': 'ladle'}, {'frequency': 'r', 'id': 634, 'synset': 'ladybug.n.01', 'synonyms': ['ladybug', 'ladybeetle', 'ladybird_beetle'], 'def': 'small round bright-colored and spotted beetle, typically red and black', 'name': 'ladybug'}, {'frequency': 'c', 'id': 635, 'synset': 'lamb.n.01', 'synonyms': ['lamb_(animal)'], 'def': 'young sheep', 'name': 'lamb_(animal)'}, {'frequency': 'r', 'id': 636, 'synset': 'lamb_chop.n.01', 'synonyms': ['lamb-chop', 'lambchop'], 'def': 'chop cut from a lamb', 'name': 'lamb-chop'}, {'frequency': 'f', 'id': 637, 'synset': 'lamp.n.02', 'synonyms': ['lamp'], 'def': 'a piece of furniture holding one or more electric light bulbs', 'name': 'lamp'}, {'frequency': 'f', 'id': 638, 'synset': 'lamppost.n.01', 'synonyms': ['lamppost'], 'def': 'a metal post supporting an outdoor lamp (such as a streetlight)', 'name': 'lamppost'}, {'frequency': 'f', 'id': 639, 'synset': 'lampshade.n.01', 'synonyms': ['lampshade'], 'def': 'a protective ornamental shade used to screen a light bulb from direct view', 'name': 'lampshade'}, {'frequency': 'c', 'id': 640, 'synset': 'lantern.n.01', 'synonyms': ['lantern'], 'def': 'light in a transparent protective case', 'name': 'lantern'}, {'frequency': 'f', 'id': 641, 'synset': 'lanyard.n.02', 'synonyms': ['lanyard', 'laniard'], 'def': 'a cord worn around the neck to hold a knife or whistle, etc.', 'name': 'lanyard'}, {'frequency': 'f', 'id': 642, 'synset': 'laptop.n.01', 'synonyms': ['laptop_computer', 'notebook_computer'], 'def': 'a portable computer small enough to use in your lap', 'name': 'laptop_computer'}, {'frequency': 'r', 'id': 643, 'synset': 'lasagna.n.01', 'synonyms': ['lasagna', 'lasagne'], 'def': 'baked dish of layers of lasagna pasta with sauce and cheese and meat or vegetables', 'name': 'lasagna'}, {'frequency': 'c', 'id': 644, 'synset': 'latch.n.02', 'synonyms': ['latch'], 'def': 'a bar that can be lowered or slid into a groove to fasten a door or gate', 'name': 'latch'}, {'frequency': 'r', 'id': 645, 'synset': 'lawn_mower.n.01', 'synonyms': ['lawn_mower'], 'def': 'garden tool for mowing grass on lawns', 'name': 'lawn_mower'}, {'frequency': 'r', 'id': 646, 'synset': 'leather.n.01', 'synonyms': ['leather'], 'def': 'an animal skin made smooth and flexible by removing the hair and then tanning', 'name': 'leather'}, {'frequency': 'c', 'id': 647, 'synset': 'legging.n.01', 'synonyms': ['legging_(clothing)', 'leging_(clothing)', 'leg_covering'], 'def': 'a garment covering the leg (usually extending from the knee to the ankle)', 'name': 'legging_(clothing)'}, {'frequency': 'c', 'id': 648, 'synset': 'lego.n.01', 'synonyms': ['Lego', 'Lego_set'], 'def': "a child's plastic construction set for making models from blocks", 'name': 'Lego'}, {'frequency': 'f', 'id': 649, 'synset': 'lemon.n.01', 'synonyms': ['lemon'], 'def': 'yellow oval fruit with juicy acidic flesh', 'name': 'lemon'}, {'frequency': 'r', 'id': 650, 'synset': 'lemonade.n.01', 'synonyms': ['lemonade'], 'def': 'sweetened beverage of diluted lemon juice', 'name': 'lemonade'}, {'frequency': 'f', 'id': 651, 'synset': 
'lettuce.n.02', 'synonyms': ['lettuce'], 'def': 'leafy plant commonly eaten in salad or on sandwiches', 'name': 'lettuce'}, {'frequency': 'f', 'id': 652, 'synset': 'license_plate.n.01', 'synonyms': ['license_plate', 'numberplate'], 'def': "a plate mounted on the front and back of car and bearing the car's registration number", 'name': 'license_plate'}, {'frequency': 'f', 'id': 653, 'synset': 'life_buoy.n.01', 'synonyms': ['life_buoy', 'lifesaver', 'life_belt', 'life_ring'], 'def': 'a ring-shaped life preserver used to prevent drowning (NOT a life-jacket or vest)', 'name': 'life_buoy'}, {'frequency': 'f', 'id': 654, 'synset': 'life_jacket.n.01', 'synonyms': ['life_jacket', 'life_vest'], 'def': 'life preserver consisting of a sleeveless jacket of buoyant or inflatable design', 'name': 'life_jacket'}, {'frequency': 'f', 'id': 655, 'synset': 'light_bulb.n.01', 'synonyms': ['lightbulb'], 'def': 'glass bulb or tube shaped electric device that emits light (DO NOT MARK LAMPS AS A WHOLE)', 'name': 'lightbulb'}, {'frequency': 'r', 'id': 656, 'synset': 'lightning_rod.n.02', 'synonyms': ['lightning_rod', 'lightning_conductor'], 'def': 'a metallic conductor that is attached to a high point and leads to the ground', 'name': 'lightning_rod'}, {'frequency': 'c', 'id': 657, 'synset': 'lime.n.06', 'synonyms': ['lime'], 'def': 'the green acidic fruit of any of various lime trees', 'name': 'lime'}, {'frequency': 'r', 'id': 658, 'synset': 'limousine.n.01', 'synonyms': ['limousine'], 'def': 'long luxurious car; usually driven by a chauffeur', 'name': 'limousine'}, {'frequency': 'r', 'id': 659, 'synset': 'linen.n.02', 'synonyms': ['linen_paper'], 'def': 'a high-quality paper made of linen fibers or with a linen finish', 'name': 'linen_paper'}, {'frequency': 'c', 'id': 660, 'synset': 'lion.n.01', 'synonyms': ['lion'], 'def': 'large gregarious predatory cat of Africa and India', 'name': 'lion'}, {'frequency': 'c', 'id': 661, 'synset': 'lip_balm.n.01', 'synonyms': ['lip_balm'], 'def': 'a balm applied to the lips', 'name': 'lip_balm'}, {'frequency': 'c', 'id': 662, 'synset': 'lipstick.n.01', 'synonyms': ['lipstick', 'lip_rouge'], 'def': 'makeup that is used to color the lips', 'name': 'lipstick'}, {'frequency': 'r', 'id': 663, 'synset': 'liquor.n.01', 'synonyms': ['liquor', 'spirits', 'hard_liquor', 'liqueur', 'cordial'], 'def': 'an alcoholic beverage that is distilled rather than fermented', 'name': 'liquor'}, {'frequency': 'r', 'id': 664, 'synset': 'lizard.n.01', 'synonyms': ['lizard'], 'def': 'a reptile with usually two pairs of legs and a tapering tail', 'name': 'lizard'}, {'frequency': 'r', 'id': 665, 'synset': 'loafer.n.02', 'synonyms': ['Loafer_(type_of_shoe)'], 'def': 'a low leather step-in shoe', 'name': 'Loafer_(type_of_shoe)'}, {'frequency': 'f', 'id': 666, 'synset': 'log.n.01', 'synonyms': ['log'], 'def': 'a segment of the trunk of a tree when stripped of branches', 'name': 'log'}, {'frequency': 'c', 'id': 667, 'synset': 'lollipop.n.02', 'synonyms': ['lollipop'], 'def': 'hard candy on a stick', 'name': 'lollipop'}, {'frequency': 'c', 'id': 668, 'synset': 'lotion.n.01', 'synonyms': ['lotion'], 'def': 'any of various cosmetic preparations that are applied to the skin', 'name': 'lotion'}, {'frequency': 'f', 'id': 669, 'synset': 'loudspeaker.n.01', 'synonyms': ['speaker_(stero_equipment)'], 'def': 'electronic device that produces sound often as part of a stereo system', 'name': 'speaker_(stero_equipment)'}, {'frequency': 'c', 'id': 670, 'synset': 'love_seat.n.01', 'synonyms': ['loveseat'], 'def': 'small sofa 
that seats two people', 'name': 'loveseat'}, {'frequency': 'r', 'id': 671, 'synset': 'machine_gun.n.01', 'synonyms': ['machine_gun'], 'def': 'a rapidly firing automatic gun', 'name': 'machine_gun'}, {'frequency': 'f', 'id': 672, 'synset': 'magazine.n.02', 'synonyms': ['magazine'], 'def': 'a paperback periodic publication', 'name': 'magazine'}, {'frequency': 'f', 'id': 673, 'synset': 'magnet.n.01', 'synonyms': ['magnet'], 'def': 'a device that attracts iron and produces a magnetic field', 'name': 'magnet'}, {'frequency': 'r', 'id': 674, 'synset': 'mail_slot.n.01', 'synonyms': ['mail_slot'], 'def': 'a slot (usually in a door) through which mail can be delivered', 'name': 'mail_slot'}, {'frequency': 'c', 'id': 675, 'synset': 'mailbox.n.01', 'synonyms': ['mailbox_(at_home)', 'letter_box_(at_home)'], 'def': 'a private box for delivery of mail', 'name': 'mailbox_(at_home)'}, {'frequency': 'r', 'id': 676, 'synset': 'mallet.n.01', 'synonyms': ['mallet'], 'def': 'a sports implement with a long handle and a hammer-like head used to hit a ball', 'name': 'mallet'}, {'frequency': 'r', 'id': 677, 'synset': 'mammoth.n.01', 'synonyms': ['mammoth'], 'def': 'any of numerous extinct elephants widely distributed in the Pleistocene', 'name': 'mammoth'}, {'frequency': 'c', 'id': 678, 'synset': 'mandarin.n.05', 'synonyms': ['mandarin_orange'], 'def': 'a somewhat flat reddish-orange loose skinned citrus of China', 'name': 'mandarin_orange'}, {'frequency': 'c', 'id': 679, 'synset': 'manger.n.01', 'synonyms': ['manger', 'trough'], 'def': 'a container (usually in a barn or stable) from which cattle or horses feed', 'name': 'manger'}, {'frequency': 'f', 'id': 680, 'synset': 'manhole.n.01', 'synonyms': ['manhole'], 'def': 'a hole (usually with a flush cover) through which a person can gain access to an underground structure', 'name': 'manhole'}, {'frequency': 'c', 'id': 681, 'synset': 'map.n.01', 'synonyms': ['map'], 'def': "a diagrammatic representation of the earth's surface (or part of it)", 'name': 'map'}, {'frequency': 'c', 'id': 682, 'synset': 'marker.n.03', 'synonyms': ['marker'], 'def': 'a writing implement for making a mark', 'name': 'marker'}, {'frequency': 'r', 'id': 683, 'synset': 'martini.n.01', 'synonyms': ['martini'], 'def': 'a cocktail made of gin (or vodka) with dry vermouth', 'name': 'martini'}, {'frequency': 'r', 'id': 684, 'synset': 'mascot.n.01', 'synonyms': ['mascot'], 'def': 'a person or animal that is adopted by a team or other group as a symbolic figure', 'name': 'mascot'}, {'frequency': 'c', 'id': 685, 'synset': 'mashed_potato.n.01', 'synonyms': ['mashed_potato'], 'def': 'potato that has been peeled and boiled and then mashed', 'name': 'mashed_potato'}, {'frequency': 'r', 'id': 686, 'synset': 'masher.n.02', 'synonyms': ['masher'], 'def': 'a kitchen utensil used for mashing (e.g. 
potatoes)', 'name': 'masher'}, {'frequency': 'f', 'id': 687, 'synset': 'mask.n.04', 'synonyms': ['mask', 'facemask'], 'def': 'a protective covering worn over the face', 'name': 'mask'}, {'frequency': 'f', 'id': 688, 'synset': 'mast.n.01', 'synonyms': ['mast'], 'def': 'a vertical spar for supporting sails', 'name': 'mast'}, {'frequency': 'c', 'id': 689, 'synset': 'mat.n.03', 'synonyms': ['mat_(gym_equipment)', 'gym_mat'], 'def': 'sports equipment consisting of a piece of thick padding on the floor for gymnastics', 'name': 'mat_(gym_equipment)'}, {'frequency': 'r', 'id': 690, 'synset': 'matchbox.n.01', 'synonyms': ['matchbox'], 'def': 'a box for holding matches', 'name': 'matchbox'}, {'frequency': 'f', 'id': 691, 'synset': 'mattress.n.01', 'synonyms': ['mattress'], 'def': 'a thick pad filled with resilient material used as a bed or part of a bed', 'name': 'mattress'}, {'frequency': 'c', 'id': 692, 'synset': 'measuring_cup.n.01', 'synonyms': ['measuring_cup'], 'def': 'graduated cup used to measure liquid or granular ingredients', 'name': 'measuring_cup'}, {'frequency': 'c', 'id': 693, 'synset': 'measuring_stick.n.01', 'synonyms': ['measuring_stick', 'ruler_(measuring_stick)', 'measuring_rod'], 'def': 'measuring instrument having a sequence of marks at regular intervals', 'name': 'measuring_stick'}, {'frequency': 'c', 'id': 694, 'synset': 'meatball.n.01', 'synonyms': ['meatball'], 'def': 'ground meat formed into a ball and fried or simmered in broth', 'name': 'meatball'}, {'frequency': 'c', 'id': 695, 'synset': 'medicine.n.02', 'synonyms': ['medicine'], 'def': 'something that treats or prevents or alleviates the symptoms of disease', 'name': 'medicine'}, {'frequency': 'r', 'id': 696, 'synset': 'melon.n.01', 'synonyms': ['melon'], 'def': 'fruit of the gourd family having a hard rind and sweet juicy flesh', 'name': 'melon'}, {'frequency': 'f', 'id': 697, 'synset': 'microphone.n.01', 'synonyms': ['microphone'], 'def': 'device for converting sound waves into electrical energy', 'name': 'microphone'}, {'frequency': 'r', 'id': 698, 'synset': 'microscope.n.01', 'synonyms': ['microscope'], 'def': 'magnifier of the image of small objects', 'name': 'microscope'}, {'frequency': 'f', 'id': 699, 'synset': 'microwave.n.02', 'synonyms': ['microwave_oven'], 'def': 'kitchen appliance that cooks food by passing an electromagnetic wave through it', 'name': 'microwave_oven'}, {'frequency': 'r', 'id': 700, 'synset': 'milestone.n.01', 'synonyms': ['milestone', 'milepost'], 'def': 'stone post at side of a road to show distances', 'name': 'milestone'}, {'frequency': 'c', 'id': 701, 'synset': 'milk.n.01', 'synonyms': ['milk'], 'def': 'a white nutritious liquid secreted by mammals and used as food by human beings', 'name': 'milk'}, {'frequency': 'f', 'id': 702, 'synset': 'minivan.n.01', 'synonyms': ['minivan'], 'def': 'a small box-shaped passenger van', 'name': 'minivan'}, {'frequency': 'r', 'id': 703, 'synset': 'mint.n.05', 'synonyms': ['mint_candy'], 'def': 'a candy that is flavored with a mint oil', 'name': 'mint_candy'}, {'frequency': 'f', 'id': 704, 'synset': 'mirror.n.01', 'synonyms': ['mirror'], 'def': 'polished surface that forms images by reflecting light', 'name': 'mirror'}, {'frequency': 'c', 'id': 705, 'synset': 'mitten.n.01', 'synonyms': ['mitten'], 'def': 'glove that encases the thumb separately and the other four fingers together', 'name': 'mitten'}, {'frequency': 'c', 'id': 706, 'synset': 'mixer.n.04', 'synonyms': ['mixer_(kitchen_tool)', 'stand_mixer'], 'def': 'a kitchen utensil that is used for mixing 
foods', 'name': 'mixer_(kitchen_tool)'}, {'frequency': 'c', 'id': 707, 'synset': 'money.n.03', 'synonyms': ['money'], 'def': 'the official currency issued by a government or national bank', 'name': 'money'}, {'frequency': 'f', 'id': 708, 'synset': 'monitor.n.04', 'synonyms': ['monitor_(computer_equipment) computer_monitor'], 'def': 'a computer monitor', 'name': 'monitor_(computer_equipment) computer_monitor'}, {'frequency': 'c', 'id': 709, 'synset': 'monkey.n.01', 'synonyms': ['monkey'], 'def': 'any of various long-tailed primates', 'name': 'monkey'}, {'frequency': 'f', 'id': 710, 'synset': 'motor.n.01', 'synonyms': ['motor'], 'def': 'machine that converts other forms of energy into mechanical energy and so imparts motion', 'name': 'motor'}, {'frequency': 'f', 'id': 711, 'synset': 'motor_scooter.n.01', 'synonyms': ['motor_scooter', 'scooter'], 'def': 'a wheeled vehicle with small wheels and a low-powered engine', 'name': 'motor_scooter'}, {'frequency': 'r', 'id': 712, 'synset': 'motor_vehicle.n.01', 'synonyms': ['motor_vehicle', 'automotive_vehicle'], 'def': 'a self-propelled wheeled vehicle that does not run on rails', 'name': 'motor_vehicle'}, {'frequency': 'r', 'id': 713, 'synset': 'motorboat.n.01', 'synonyms': ['motorboat', 'powerboat'], 'def': 'a boat propelled by an internal-combustion engine', 'name': 'motorboat'}, {'frequency': 'f', 'id': 714, 'synset': 'motorcycle.n.01', 'synonyms': ['motorcycle'], 'def': 'a motor vehicle with two wheels and a strong frame', 'name': 'motorcycle'}, {'frequency': 'f', 'id': 715, 'synset': 'mound.n.01', 'synonyms': ['mound_(baseball)', "pitcher's_mound"], 'def': '(baseball) the slight elevation on which the pitcher stands', 'name': 'mound_(baseball)'}, {'frequency': 'r', 'id': 716, 'synset': 'mouse.n.01', 'synonyms': ['mouse_(animal_rodent)'], 'def': 'a small rodent with pointed snouts and small ears on elongated bodies with slender usually hairless tails', 'name': 'mouse_(animal_rodent)'}, {'frequency': 'f', 'id': 717, 'synset': 'mouse.n.04', 'synonyms': ['mouse_(computer_equipment)', 'computer_mouse'], 'def': 'a computer input device that controls an on-screen pointer', 'name': 'mouse_(computer_equipment)'}, {'frequency': 'f', 'id': 718, 'synset': 'mousepad.n.01', 'synonyms': ['mousepad'], 'def': 'a small portable pad that provides an operating surface for a computer mouse', 'name': 'mousepad'}, {'frequency': 'c', 'id': 719, 'synset': 'muffin.n.01', 'synonyms': ['muffin'], 'def': 'a sweet quick bread baked in a cup-shaped pan', 'name': 'muffin'}, {'frequency': 'f', 'id': 720, 'synset': 'mug.n.04', 'synonyms': ['mug'], 'def': 'with handle and usually cylindrical', 'name': 'mug'}, {'frequency': 'f', 'id': 721, 'synset': 'mushroom.n.02', 'synonyms': ['mushroom'], 'def': 'a common mushroom', 'name': 'mushroom'}, {'frequency': 'r', 'id': 722, 'synset': 'music_stool.n.01', 'synonyms': ['music_stool', 'piano_stool'], 'def': 'a stool for piano players; usually adjustable in height', 'name': 'music_stool'}, {'frequency': 'r', 'id': 723, 'synset': 'musical_instrument.n.01', 'synonyms': ['musical_instrument', 'instrument_(musical)'], 'def': 'any of various devices or contrivances that can be used to produce musical tones or sounds', 'name': 'musical_instrument'}, {'frequency': 'r', 'id': 724, 'synset': 'nailfile.n.01', 'synonyms': ['nailfile'], 'def': 'a small flat file for shaping the nails', 'name': 'nailfile'}, {'frequency': 'r', 'id': 725, 'synset': 'nameplate.n.01', 'synonyms': ['nameplate'], 'def': 'a plate bearing a name', 'name': 'nameplate'}, 
{'frequency': 'f', 'id': 726, 'synset': 'napkin.n.01', 'synonyms': ['napkin', 'table_napkin', 'serviette'], 'def': 'a small piece of table linen or paper that is used to wipe the mouth and to cover the lap in order to protect clothing', 'name': 'napkin'}, {'frequency': 'r', 'id': 727, 'synset': 'neckerchief.n.01', 'synonyms': ['neckerchief'], 'def': 'a kerchief worn around the neck', 'name': 'neckerchief'}, {'frequency': 'f', 'id': 728, 'synset': 'necklace.n.01', 'synonyms': ['necklace'], 'def': 'jewelry consisting of a cord or chain (often bearing gems) worn about the neck as an ornament', 'name': 'necklace'}, {'frequency': 'f', 'id': 729, 'synset': 'necktie.n.01', 'synonyms': ['necktie', 'tie_(necktie)'], 'def': 'neckwear consisting of a long narrow piece of material worn under a collar and tied in knot at the front', 'name': 'necktie'}, {'frequency': 'r', 'id': 730, 'synset': 'needle.n.03', 'synonyms': ['needle'], 'def': 'a sharp pointed implement (usually metal)', 'name': 'needle'}, {'frequency': 'c', 'id': 731, 'synset': 'nest.n.01', 'synonyms': ['nest'], 'def': 'a structure in which animals lay eggs or give birth to their young', 'name': 'nest'}, {'frequency': 'r', 'id': 732, 'synset': 'newsstand.n.01', 'synonyms': ['newsstand'], 'def': 'a stall where newspapers and other periodicals are sold', 'name': 'newsstand'}, {'frequency': 'c', 'id': 733, 'synset': 'nightwear.n.01', 'synonyms': ['nightshirt', 'nightwear', 'sleepwear', 'nightclothes'], 'def': 'garments designed to be worn in bed', 'name': 'nightshirt'}, {'frequency': 'r', 'id': 734, 'synset': 'nosebag.n.01', 'synonyms': ['nosebag_(for_animals)', 'feedbag'], 'def': 'a canvas bag that is used to feed an animal (such as a horse); covers the muzzle and fastens at the top of the head', 'name': 'nosebag_(for_animals)'}, {'frequency': 'r', 'id': 735, 'synset': 'noseband.n.01', 'synonyms': ['noseband_(for_animals)', 'nosepiece_(for_animals)'], 'def': "a strap that is the part of a bridle that goes over the animal's nose", 'name': 'noseband_(for_animals)'}, {'frequency': 'f', 'id': 736, 'synset': 'notebook.n.01', 'synonyms': ['notebook'], 'def': 'a book with blank pages for recording notes or memoranda', 'name': 'notebook'}, {'frequency': 'c', 'id': 737, 'synset': 'notepad.n.01', 'synonyms': ['notepad'], 'def': 'a pad of paper for keeping notes', 'name': 'notepad'}, {'frequency': 'c', 'id': 738, 'synset': 'nut.n.03', 'synonyms': ['nut'], 'def': 'a small metal block (usually square or hexagonal) with internal screw thread to be fitted onto a bolt', 'name': 'nut'}, {'frequency': 'r', 'id': 739, 'synset': 'nutcracker.n.01', 'synonyms': ['nutcracker'], 'def': 'a hand tool used to crack nuts open', 'name': 'nutcracker'}, {'frequency': 'c', 'id': 740, 'synset': 'oar.n.01', 'synonyms': ['oar'], 'def': 'an implement used to propel or steer a boat', 'name': 'oar'}, {'frequency': 'r', 'id': 741, 'synset': 'octopus.n.01', 'synonyms': ['octopus_(food)'], 'def': 'tentacles of octopus prepared as food', 'name': 'octopus_(food)'}, {'frequency': 'r', 'id': 742, 'synset': 'octopus.n.02', 'synonyms': ['octopus_(animal)'], 'def': 'bottom-living cephalopod having a soft oval body with eight long tentacles', 'name': 'octopus_(animal)'}, {'frequency': 'c', 'id': 743, 'synset': 'oil_lamp.n.01', 'synonyms': ['oil_lamp', 'kerosene_lamp', 'kerosine_lamp'], 'def': 'a lamp that burns oil (as kerosine) for light', 'name': 'oil_lamp'}, {'frequency': 'c', 'id': 744, 'synset': 'olive_oil.n.01', 'synonyms': ['olive_oil'], 'def': 'oil from olives', 'name': 'olive_oil'}, 
{'frequency': 'r', 'id': 745, 'synset': 'omelet.n.01', 'synonyms': ['omelet', 'omelette'], 'def': 'beaten eggs cooked until just set; may be folded around e.g. ham or cheese or jelly', 'name': 'omelet'}, {'frequency': 'f', 'id': 746, 'synset': 'onion.n.01', 'synonyms': ['onion'], 'def': 'the bulb of an onion plant', 'name': 'onion'}, {'frequency': 'f', 'id': 747, 'synset': 'orange.n.01', 'synonyms': ['orange_(fruit)'], 'def': 'orange (FRUIT of an orange tree)', 'name': 'orange_(fruit)'}, {'frequency': 'c', 'id': 748, 'synset': 'orange_juice.n.01', 'synonyms': ['orange_juice'], 'def': 'bottled or freshly squeezed juice of oranges', 'name': 'orange_juice'}, {'frequency': 'r', 'id': 749, 'synset': 'oregano.n.01', 'synonyms': ['oregano', 'marjoram'], 'def': 'aromatic Eurasian perennial herb used in cooking and baking', 'name': 'oregano'}, {'frequency': 'c', 'id': 750, 'synset': 'ostrich.n.02', 'synonyms': ['ostrich'], 'def': 'fast-running African flightless bird with two-toed feet; largest living bird', 'name': 'ostrich'}, {'frequency': 'c', 'id': 751, 'synset': 'ottoman.n.03', 'synonyms': ['ottoman', 'pouf', 'pouffe', 'hassock'], 'def': 'thick cushion used as a seat', 'name': 'ottoman'}, {'frequency': 'c', 'id': 752, 'synset': 'overall.n.01', 'synonyms': ['overalls_(clothing)'], 'def': 'work clothing consisting of denim trousers usually with a bib and shoulder straps', 'name': 'overalls_(clothing)'}, {'frequency': 'c', 'id': 753, 'synset': 'owl.n.01', 'synonyms': ['owl'], 'def': 'nocturnal bird of prey with hawk-like beak and claws and large head with front-facing eyes', 'name': 'owl'}, {'frequency': 'c', 'id': 754, 'synset': 'packet.n.03', 'synonyms': ['packet'], 'def': 'a small package or bundle', 'name': 'packet'}, {'frequency': 'r', 'id': 755, 'synset': 'pad.n.03', 'synonyms': ['inkpad', 'inking_pad', 'stamp_pad'], 'def': 'absorbent material saturated with ink used to transfer ink evenly to a rubber stamp', 'name': 'inkpad'}, {'frequency': 'c', 'id': 756, 'synset': 'pad.n.04', 'synonyms': ['pad'], 'def': 'a flat mass of soft material used for protection, stuffing, or comfort', 'name': 'pad'}, {'frequency': 'c', 'id': 757, 'synset': 'paddle.n.04', 'synonyms': ['paddle', 'boat_paddle'], 'def': 'a short light oar used without an oarlock to propel a canoe or small boat', 'name': 'paddle'}, {'frequency': 'c', 'id': 758, 'synset': 'padlock.n.01', 'synonyms': ['padlock'], 'def': 'a detachable, portable lock', 'name': 'padlock'}, {'frequency': 'r', 'id': 759, 'synset': 'paintbox.n.01', 'synonyms': ['paintbox'], 'def': "a box containing a collection of cubes or tubes of artists' paint", 'name': 'paintbox'}, {'frequency': 'c', 'id': 760, 'synset': 'paintbrush.n.01', 'synonyms': ['paintbrush'], 'def': 'a brush used as an applicator to apply paint', 'name': 'paintbrush'}, {'frequency': 'f', 'id': 761, 'synset': 'painting.n.01', 'synonyms': ['painting'], 'def': 'graphic art consisting of an artistic composition made by applying paints to a surface', 'name': 'painting'}, {'frequency': 'c', 'id': 762, 'synset': 'pajama.n.02', 'synonyms': ['pajamas', 'pyjamas'], 'def': 'loose-fitting nightclothes worn for sleeping or lounging', 'name': 'pajamas'}, {'frequency': 'c', 'id': 763, 'synset': 'palette.n.02', 'synonyms': ['palette', 'pallet'], 'def': 'board that provides a flat surface on which artists mix paints and the range of colors used', 'name': 'palette'}, {'frequency': 'f', 'id': 764, 'synset': 'pan.n.01', 'synonyms': ['pan_(for_cooking)', 'cooking_pan'], 'def': 'cooking utensil consisting of a wide 
metal vessel', 'name': 'pan_(for_cooking)'}, {'frequency': 'r', 'id': 765, 'synset': 'pan.n.03', 'synonyms': ['pan_(metal_container)'], 'def': 'shallow container made of metal', 'name': 'pan_(metal_container)'}, {'frequency': 'c', 'id': 766, 'synset': 'pancake.n.01', 'synonyms': ['pancake'], 'def': 'a flat cake of thin batter fried on both sides on a griddle', 'name': 'pancake'}, {'frequency': 'r', 'id': 767, 'synset': 'pantyhose.n.01', 'synonyms': ['pantyhose'], 'def': "a woman's tights consisting of underpants and stockings", 'name': 'pantyhose'}, {'frequency': 'r', 'id': 768, 'synset': 'papaya.n.02', 'synonyms': ['papaya'], 'def': 'large oval melon-like tropical fruit with yellowish flesh', 'name': 'papaya'}, {'frequency': 'r', 'id': 769, 'synset': 'paper_clip.n.01', 'synonyms': ['paperclip'], 'def': 'a wire or plastic clip for holding sheets of paper together', 'name': 'paperclip'}, {'frequency': 'f', 'id': 770, 'synset': 'paper_plate.n.01', 'synonyms': ['paper_plate'], 'def': 'a disposable plate made of cardboard', 'name': 'paper_plate'}, {'frequency': 'f', 'id': 771, 'synset': 'paper_towel.n.01', 'synonyms': ['paper_towel'], 'def': 'a disposable towel made of absorbent paper', 'name': 'paper_towel'}, {'frequency': 'r', 'id': 772, 'synset': 'paperback_book.n.01', 'synonyms': ['paperback_book', 'paper-back_book', 'softback_book', 'soft-cover_book'], 'def': 'a book with paper covers', 'name': 'paperback_book'}, {'frequency': 'r', 'id': 773, 'synset': 'paperweight.n.01', 'synonyms': ['paperweight'], 'def': 'a weight used to hold down a stack of papers', 'name': 'paperweight'}, {'frequency': 'c', 'id': 774, 'synset': 'parachute.n.01', 'synonyms': ['parachute'], 'def': 'rescue equipment consisting of a device that fills with air and retards your fall', 'name': 'parachute'}, {'frequency': 'r', 'id': 775, 'synset': 'parakeet.n.01', 'synonyms': ['parakeet', 'parrakeet', 'parroket', 'paraquet', 'paroquet', 'parroquet'], 'def': 'any of numerous small slender long-tailed parrots', 'name': 'parakeet'}, {'frequency': 'c', 'id': 776, 'synset': 'parasail.n.01', 'synonyms': ['parasail_(sports)'], 'def': 'parachute that will lift a person up into the air when it is towed by a motorboat or a car', 'name': 'parasail_(sports)'}, {'frequency': 'r', 'id': 777, 'synset': 'parchment.n.01', 'synonyms': ['parchment'], 'def': 'a superior paper resembling sheepskin', 'name': 'parchment'}, {'frequency': 'r', 'id': 778, 'synset': 'parka.n.01', 'synonyms': ['parka', 'anorak'], 'def': "a kind of heavy jacket (`windcheater' is a British term)", 'name': 'parka'}, {'frequency': 'f', 'id': 779, 'synset': 'parking_meter.n.01', 'synonyms': ['parking_meter'], 'def': 'a coin-operated timer located next to a parking space', 'name': 'parking_meter'}, {'frequency': 'c', 'id': 780, 'synset': 'parrot.n.01', 'synonyms': ['parrot'], 'def': 'usually brightly colored tropical birds with short hooked beaks and the ability to mimic sounds', 'name': 'parrot'}, {'frequency': 'c', 'id': 781, 'synset': 'passenger_car.n.01', 'synonyms': ['passenger_car_(part_of_a_train)', 'coach_(part_of_a_train)'], 'def': 'a railcar where passengers ride', 'name': 'passenger_car_(part_of_a_train)'}, {'frequency': 'r', 'id': 782, 'synset': 'passenger_ship.n.01', 'synonyms': ['passenger_ship'], 'def': 'a ship built to carry passengers', 'name': 'passenger_ship'}, {'frequency': 'r', 'id': 783, 'synset': 'passport.n.02', 'synonyms': ['passport'], 'def': 'a document issued by a country to a citizen allowing that person to travel abroad and re-enter the home 
country', 'name': 'passport'}, {'frequency': 'f', 'id': 784, 'synset': 'pastry.n.02', 'synonyms': ['pastry'], 'def': 'any of various baked foods made of dough or batter', 'name': 'pastry'}, {'frequency': 'r', 'id': 785, 'synset': 'patty.n.01', 'synonyms': ['patty_(food)'], 'def': 'small flat mass of chopped food', 'name': 'patty_(food)'}, {'frequency': 'c', 'id': 786, 'synset': 'pea.n.01', 'synonyms': ['pea_(food)'], 'def': 'seed of a pea plant used for food', 'name': 'pea_(food)'}, {'frequency': 'c', 'id': 787, 'synset': 'peach.n.03', 'synonyms': ['peach'], 'def': 'downy juicy fruit with sweet yellowish or whitish flesh', 'name': 'peach'}, {'frequency': 'c', 'id': 788, 'synset': 'peanut_butter.n.01', 'synonyms': ['peanut_butter'], 'def': 'a spread made from ground peanuts', 'name': 'peanut_butter'}, {'frequency': 'c', 'id': 789, 'synset': 'pear.n.01', 'synonyms': ['pear'], 'def': 'sweet juicy gritty-textured fruit available in many varieties', 'name': 'pear'}, {'frequency': 'r', 'id': 790, 'synset': 'peeler.n.03', 'synonyms': ['peeler_(tool_for_fruit_and_vegetables)'], 'def': 'a device for peeling vegetables or fruits', 'name': 'peeler_(tool_for_fruit_and_vegetables)'}, {'frequency': 'r', 'id': 791, 'synset': 'pegboard.n.01', 'synonyms': ['pegboard'], 'def': 'a board perforated with regularly spaced holes into which pegs can be fitted', 'name': 'pegboard'}, {'frequency': 'c', 'id': 792, 'synset': 'pelican.n.01', 'synonyms': ['pelican'], 'def': 'large long-winged warm-water seabird having a large bill with a distensible pouch for fish', 'name': 'pelican'}, {'frequency': 'f', 'id': 793, 'synset': 'pen.n.01', 'synonyms': ['pen'], 'def': 'a writing implement with a point from which ink flows', 'name': 'pen'}, {'frequency': 'c', 'id': 794, 'synset': 'pencil.n.01', 'synonyms': ['pencil'], 'def': 'a thin cylindrical pointed writing implement made of wood and graphite', 'name': 'pencil'}, {'frequency': 'r', 'id': 795, 'synset': 'pencil_box.n.01', 'synonyms': ['pencil_box', 'pencil_case'], 'def': 'a box for holding pencils', 'name': 'pencil_box'}, {'frequency': 'r', 'id': 796, 'synset': 'pencil_sharpener.n.01', 'synonyms': ['pencil_sharpener'], 'def': 'a rotary implement for sharpening the point on pencils', 'name': 'pencil_sharpener'}, {'frequency': 'r', 'id': 797, 'synset': 'pendulum.n.01', 'synonyms': ['pendulum'], 'def': 'an apparatus consisting of an object mounted so that it swings freely under the influence of gravity', 'name': 'pendulum'}, {'frequency': 'c', 'id': 798, 'synset': 'penguin.n.01', 'synonyms': ['penguin'], 'def': 'short-legged flightless birds of cold southern regions having webbed feet and wings modified as flippers', 'name': 'penguin'}, {'frequency': 'r', 'id': 799, 'synset': 'pennant.n.02', 'synonyms': ['pennant'], 'def': 'a flag longer than it is wide (and often tapering)', 'name': 'pennant'}, {'frequency': 'r', 'id': 800, 'synset': 'penny.n.02', 'synonyms': ['penny_(coin)'], 'def': 'a coin worth one-hundredth of the value of the basic unit', 'name': 'penny_(coin)'}, {'frequency': 'c', 'id': 801, 'synset': 'pepper.n.03', 'synonyms': ['pepper', 'peppercorn'], 'def': 'pungent seasoning from the berry of the common pepper plant; whole or ground', 'name': 'pepper'}, {'frequency': 'c', 'id': 802, 'synset': 'pepper_mill.n.01', 'synonyms': ['pepper_mill', 'pepper_grinder'], 'def': 'a mill for grinding pepper', 'name': 'pepper_mill'}, {'frequency': 'c', 'id': 803, 'synset': 'perfume.n.02', 'synonyms': ['perfume'], 'def': 'a toiletry that emits and diffuses a fragrant odor', 
'name': 'perfume'}, {'frequency': 'r', 'id': 804, 'synset': 'persimmon.n.02', 'synonyms': ['persimmon'], 'def': 'orange fruit resembling a plum; edible when fully ripe', 'name': 'persimmon'}, {'frequency': 'f', 'id': 805, 'synset': 'person.n.01', 'synonyms': ['baby', 'child', 'boy', 'girl', 'man', 'woman', 'person', 'human'], 'def': 'a human being', 'name': 'baby'}, {'frequency': 'r', 'id': 806, 'synset': 'pet.n.01', 'synonyms': ['pet'], 'def': 'a domesticated animal kept for companionship or amusement', 'name': 'pet'}, {'frequency': 'r', 'id': 807, 'synset': 'petfood.n.01', 'synonyms': ['petfood', 'pet-food'], 'def': 'food prepared for animal pets', 'name': 'petfood'}, {'frequency': 'r', 'id': 808, 'synset': 'pew.n.01', 'synonyms': ['pew_(church_bench)', 'church_bench'], 'def': 'long bench with backs; used in church by the congregation', 'name': 'pew_(church_bench)'}, {'frequency': 'r', 'id': 809, 'synset': 'phonebook.n.01', 'synonyms': ['phonebook', 'telephone_book', 'telephone_directory'], 'def': 'a directory containing an alphabetical list of telephone subscribers and their telephone numbers', 'name': 'phonebook'}, {'frequency': 'c', 'id': 810, 'synset': 'phonograph_record.n.01', 'synonyms': ['phonograph_record', 'phonograph_recording', 'record_(phonograph_recording)'], 'def': 'sound recording consisting of a typically black disk with a continuous groove', 'name': 'phonograph_record'}, {'frequency': 'c', 'id': 811, 'synset': 'piano.n.01', 'synonyms': ['piano'], 'def': 'a keyboard instrument that is played by depressing keys that cause hammers to strike tuned strings and produce sounds', 'name': 'piano'}, {'frequency': 'f', 'id': 812, 'synset': 'pickle.n.01', 'synonyms': ['pickle'], 'def': 'vegetables (especially cucumbers) preserved in brine or vinegar', 'name': 'pickle'}, {'frequency': 'f', 'id': 813, 'synset': 'pickup.n.01', 'synonyms': ['pickup_truck'], 'def': 'a light truck with an open body and low sides and a tailboard', 'name': 'pickup_truck'}, {'frequency': 'c', 'id': 814, 'synset': 'pie.n.01', 'synonyms': ['pie'], 'def': 'dish baked in pastry-lined pan often with a pastry top', 'name': 'pie'}, {'frequency': 'c', 'id': 815, 'synset': 'pigeon.n.01', 'synonyms': ['pigeon'], 'def': 'wild and domesticated birds having a heavy body and short legs', 'name': 'pigeon'}, {'frequency': 'r', 'id': 816, 'synset': 'piggy_bank.n.01', 'synonyms': ['piggy_bank', 'penny_bank'], 'def': "a child's coin bank (often shaped like a pig)", 'name': 'piggy_bank'}, {'frequency': 'f', 'id': 817, 'synset': 'pillow.n.01', 'synonyms': ['pillow'], 'def': 'a cushion to support the head of a sleeping person', 'name': 'pillow'}, {'frequency': 'r', 'id': 818, 'synset': 'pin.n.09', 'synonyms': ['pin_(non_jewelry)'], 'def': 'a small slender (often pointed) piece of wood or metal used to support or fasten or attach things', 'name': 'pin_(non_jewelry)'}, {'frequency': 'f', 'id': 819, 'synset': 'pineapple.n.02', 'synonyms': ['pineapple'], 'def': 'large sweet fleshy tropical fruit with a tuft of stiff leaves', 'name': 'pineapple'}, {'frequency': 'c', 'id': 820, 'synset': 'pinecone.n.01', 'synonyms': ['pinecone'], 'def': 'the seed-producing cone of a pine tree', 'name': 'pinecone'}, {'frequency': 'r', 'id': 821, 'synset': 'ping-pong_ball.n.01', 'synonyms': ['ping-pong_ball'], 'def': 'light hollow ball used in playing table tennis', 'name': 'ping-pong_ball'}, {'frequency': 'r', 'id': 822, 'synset': 'pinwheel.n.03', 'synonyms': ['pinwheel'], 'def': 'a toy consisting of vanes of colored paper or plastic that is pinned to a 
stick and spins when it is pointed into the wind', 'name': 'pinwheel'}, {'frequency': 'r', 'id': 823, 'synset': 'pipe.n.01', 'synonyms': ['tobacco_pipe'], 'def': 'a tube with a small bowl at one end; used for smoking tobacco', 'name': 'tobacco_pipe'}, {'frequency': 'f', 'id': 824, 'synset': 'pipe.n.02', 'synonyms': ['pipe', 'piping'], 'def': 'a long tube made of metal or plastic that is used to carry water or oil or gas etc.', 'name': 'pipe'}, {'frequency': 'r', 'id': 825, 'synset': 'pistol.n.01', 'synonyms': ['pistol', 'handgun'], 'def': 'a firearm that is held and fired with one hand', 'name': 'pistol'}, {'frequency': 'r', 'id': 826, 'synset': 'pita.n.01', 'synonyms': ['pita_(bread)', 'pocket_bread'], 'def': 'usually small round bread that can open into a pocket for filling', 'name': 'pita_(bread)'}, {'frequency': 'f', 'id': 827, 'synset': 'pitcher.n.02', 'synonyms': ['pitcher_(vessel_for_liquid)', 'ewer'], 'def': 'an open vessel with a handle and a spout for pouring', 'name': 'pitcher_(vessel_for_liquid)'}, {'frequency': 'r', 'id': 828, 'synset': 'pitchfork.n.01', 'synonyms': ['pitchfork'], 'def': 'a long-handled hand tool with sharp widely spaced prongs for lifting and pitching hay', 'name': 'pitchfork'}, {'frequency': 'f', 'id': 829, 'synset': 'pizza.n.01', 'synonyms': ['pizza'], 'def': 'Italian open pie made of thin bread dough spread with a spiced mixture of e.g. tomato sauce and cheese', 'name': 'pizza'}, {'frequency': 'f', 'id': 830, 'synset': 'place_mat.n.01', 'synonyms': ['place_mat'], 'def': 'a mat placed on a table for an individual place setting', 'name': 'place_mat'}, {'frequency': 'f', 'id': 831, 'synset': 'plate.n.04', 'synonyms': ['plate'], 'def': 'dish on which food is served or from which food is eaten', 'name': 'plate'}, {'frequency': 'c', 'id': 832, 'synset': 'platter.n.01', 'synonyms': ['platter'], 'def': 'a large shallow dish used for serving food', 'name': 'platter'}, {'frequency': 'r', 'id': 833, 'synset': 'playing_card.n.01', 'synonyms': ['playing_card'], 'def': 'one of a pack of cards that are used to play card games', 'name': 'playing_card'}, {'frequency': 'r', 'id': 834, 'synset': 'playpen.n.01', 'synonyms': ['playpen'], 'def': 'a portable enclosure in which babies may be left to play', 'name': 'playpen'}, {'frequency': 'c', 'id': 835, 'synset': 'pliers.n.01', 'synonyms': ['pliers', 'plyers'], 'def': 'a gripping hand tool with two hinged arms and (usually) serrated jaws', 'name': 'pliers'}, {'frequency': 'r', 'id': 836, 'synset': 'plow.n.01', 'synonyms': ['plow_(farm_equipment)', 'plough_(farm_equipment)'], 'def': 'a farm tool having one or more heavy blades to break the soil and cut a furrow prior to sowing', 'name': 'plow_(farm_equipment)'}, {'frequency': 'r', 'id': 837, 'synset': 'pocket_watch.n.01', 'synonyms': ['pocket_watch'], 'def': 'a watch that is carried in a small watch pocket', 'name': 'pocket_watch'}, {'frequency': 'c', 'id': 838, 'synset': 'pocketknife.n.01', 'synonyms': ['pocketknife'], 'def': 'a knife with a blade that folds into the handle; suitable for carrying in the pocket', 'name': 'pocketknife'}, {'frequency': 'c', 'id': 839, 'synset': 'poker.n.01', 'synonyms': ['poker_(fire_stirring_tool)', 'stove_poker', 'fire_hook'], 'def': 'fire iron consisting of a metal rod with a handle; used to stir a fire', 'name': 'poker_(fire_stirring_tool)'}, {'frequency': 'f', 'id': 840, 'synset': 'pole.n.01', 'synonyms': ['pole', 'post'], 'def': 'a long (usually round) rod of wood or metal or plastic', 'name': 'pole'}, {'frequency': 'r', 'id': 841, 'synset': 
'police_van.n.01', 'synonyms': ['police_van', 'police_wagon', 'paddy_wagon', 'patrol_wagon'], 'def': 'van used by police to transport prisoners', 'name': 'police_van'}, {'frequency': 'f', 'id': 842, 'synset': 'polo_shirt.n.01', 'synonyms': ['polo_shirt', 'sport_shirt'], 'def': 'a shirt with short sleeves designed for comfort and casual wear', 'name': 'polo_shirt'}, {'frequency': 'r', 'id': 843, 'synset': 'poncho.n.01', 'synonyms': ['poncho'], 'def': 'a blanket-like cloak with a hole in the center for the head', 'name': 'poncho'}, {'frequency': 'c', 'id': 844, 'synset': 'pony.n.05', 'synonyms': ['pony'], 'def': 'any of various breeds of small gentle horses usually less than five feet high at the shoulder', 'name': 'pony'}, {'frequency': 'r', 'id': 845, 'synset': 'pool_table.n.01', 'synonyms': ['pool_table', 'billiard_table', 'snooker_table'], 'def': 'game equipment consisting of a heavy table on which pool is played', 'name': 'pool_table'}, {'frequency': 'f', 'id': 846, 'synset': 'pop.n.02', 'synonyms': ['pop_(soda)', 'soda_(pop)', 'tonic', 'soft_drink'], 'def': 'a sweet drink containing carbonated water and flavoring', 'name': 'pop_(soda)'}, {'frequency': 'r', 'id': 847, 'synset': 'portrait.n.02', 'synonyms': ['portrait', 'portrayal'], 'def': 'any likeness of a person, in any medium', 'name': 'portrait'}, {'frequency': 'c', 'id': 848, 'synset': 'postbox.n.01', 'synonyms': ['postbox_(public)', 'mailbox_(public)'], 'def': 'public box for deposit of mail', 'name': 'postbox_(public)'}, {'frequency': 'c', 'id': 849, 'synset': 'postcard.n.01', 'synonyms': ['postcard', 'postal_card', 'mailing-card'], 'def': 'a card for sending messages by post without an envelope', 'name': 'postcard'}, {'frequency': 'f', 'id': 850, 'synset': 'poster.n.01', 'synonyms': ['poster', 'placard'], 'def': 'a sign posted in a public place as an advertisement', 'name': 'poster'}, {'frequency': 'f', 'id': 851, 'synset': 'pot.n.01', 'synonyms': ['pot'], 'def': 'metal or earthenware cooking vessel that is usually round and deep; often has a handle and lid', 'name': 'pot'}, {'frequency': 'f', 'id': 852, 'synset': 'pot.n.04', 'synonyms': ['flowerpot'], 'def': 'a container in which plants are cultivated', 'name': 'flowerpot'}, {'frequency': 'f', 'id': 853, 'synset': 'potato.n.01', 'synonyms': ['potato'], 'def': 'an edible tuber native to South America', 'name': 'potato'}, {'frequency': 'c', 'id': 854, 'synset': 'potholder.n.01', 'synonyms': ['potholder'], 'def': 'an insulated pad for holding hot pots', 'name': 'potholder'}, {'frequency': 'c', 'id': 855, 'synset': 'pottery.n.01', 'synonyms': ['pottery', 'clayware'], 'def': 'ceramic ware made from clay and baked in a kiln', 'name': 'pottery'}, {'frequency': 'c', 'id': 856, 'synset': 'pouch.n.01', 'synonyms': ['pouch'], 'def': 'a small or medium size container for holding or carrying things', 'name': 'pouch'}, {'frequency': 'r', 'id': 857, 'synset': 'power_shovel.n.01', 'synonyms': ['power_shovel', 'excavator', 'digger'], 'def': 'a machine for excavating', 'name': 'power_shovel'}, {'frequency': 'c', 'id': 858, 'synset': 'prawn.n.01', 'synonyms': ['prawn', 'shrimp'], 'def': 'any of various edible decapod crustaceans', 'name': 'prawn'}, {'frequency': 'f', 'id': 859, 'synset': 'printer.n.03', 'synonyms': ['printer', 'printing_machine'], 'def': 'a machine that prints', 'name': 'printer'}, {'frequency': 'c', 'id': 860, 'synset': 'projectile.n.01', 'synonyms': ['projectile_(weapon)', 'missile'], 'def': 'a weapon that is forcibly thrown or projected at a target', 'name': 
'projectile_(weapon)'}, {'frequency': 'c', 'id': 861, 'synset': 'projector.n.02', 'synonyms': ['projector'], 'def': 'an optical instrument that projects an enlarged image onto a screen', 'name': 'projector'}, {'frequency': 'f', 'id': 862, 'synset': 'propeller.n.01', 'synonyms': ['propeller', 'propellor'], 'def': 'a mechanical device that rotates to push against air or water', 'name': 'propeller'}, {'frequency': 'r', 'id': 863, 'synset': 'prune.n.01', 'synonyms': ['prune'], 'def': 'dried plum', 'name': 'prune'}, {'frequency': 'r', 'id': 864, 'synset': 'pudding.n.01', 'synonyms': ['pudding'], 'def': 'any of various soft thick unsweetened baked dishes', 'name': 'pudding'}, {'frequency': 'r', 'id': 865, 'synset': 'puffer.n.02', 'synonyms': ['puffer_(fish)', 'pufferfish', 'blowfish', 'globefish'], 'def': 'fishes whose elongated spiny body can inflate itself with water or air to form a globe', 'name': 'puffer_(fish)'}, {'frequency': 'r', 'id': 866, 'synset': 'puffin.n.01', 'synonyms': ['puffin'], 'def': 'seabirds having short necks and brightly colored compressed bills', 'name': 'puffin'}, {'frequency': 'r', 'id': 867, 'synset': 'pug.n.01', 'synonyms': ['pug-dog'], 'def': 'small compact smooth-coated breed of Asiatic origin having a tightly curled tail and broad flat wrinkled muzzle', 'name': 'pug-dog'}, {'frequency': 'c', 'id': 868, 'synset': 'pumpkin.n.02', 'synonyms': ['pumpkin'], 'def': 'usually large pulpy deep-yellow round fruit of the squash family maturing in late summer or early autumn', 'name': 'pumpkin'}, {'frequency': 'r', 'id': 869, 'synset': 'punch.n.03', 'synonyms': ['puncher'], 'def': 'a tool for making holes or indentations', 'name': 'puncher'}, {'frequency': 'r', 'id': 870, 'synset': 'puppet.n.01', 'synonyms': ['puppet', 'marionette'], 'def': 'a small figure of a person operated from above with strings by a puppeteer', 'name': 'puppet'}, {'frequency': 'r', 'id': 871, 'synset': 'puppy.n.01', 'synonyms': ['puppy'], 'def': 'a young dog', 'name': 'puppy'}, {'frequency': 'r', 'id': 872, 'synset': 'quesadilla.n.01', 'synonyms': ['quesadilla'], 'def': 'a tortilla that is filled with cheese and heated', 'name': 'quesadilla'}, {'frequency': 'r', 'id': 873, 'synset': 'quiche.n.02', 'synonyms': ['quiche'], 'def': 'a tart filled with rich unsweetened custard; often contains other ingredients (as cheese or ham or seafood or vegetables)', 'name': 'quiche'}, {'frequency': 'f', 'id': 874, 'synset': 'quilt.n.01', 'synonyms': ['quilt', 'comforter'], 'def': 'bedding made of two layers of cloth filled with stuffing and stitched together', 'name': 'quilt'}, {'frequency': 'c', 'id': 875, 'synset': 'rabbit.n.01', 'synonyms': ['rabbit'], 'def': 'any of various burrowing animals of the family Leporidae having long ears and short tails', 'name': 'rabbit'}, {'frequency': 'r', 'id': 876, 'synset': 'racer.n.02', 'synonyms': ['race_car', 'racing_car'], 'def': 'a fast car that competes in races', 'name': 'race_car'}, {'frequency': 'c', 'id': 877, 'synset': 'racket.n.04', 'synonyms': ['racket', 'racquet'], 'def': 'a sports implement used to strike a ball in various games', 'name': 'racket'}, {'frequency': 'r', 'id': 878, 'synset': 'radar.n.01', 'synonyms': ['radar'], 'def': 'measuring instrument in which the echo of a pulse of microwave radiation is used to detect and locate distant objects', 'name': 'radar'}, {'frequency': 'c', 'id': 879, 'synset': 'radiator.n.03', 'synonyms': ['radiator'], 'def': 'a mechanism consisting of a metal honeycomb through which hot fluids circulate', 'name': 'radiator'}, 
{'frequency': 'c', 'id': 880, 'synset': 'radio_receiver.n.01', 'synonyms': ['radio_receiver', 'radio_set', 'radio', 'tuner_(radio)'], 'def': 'an electronic receiver that detects and demodulates and amplifies transmitted radio signals', 'name': 'radio_receiver'}, {'frequency': 'c', 'id': 881, 'synset': 'radish.n.03', 'synonyms': ['radish', 'daikon'], 'def': 'pungent edible root of any of various cultivated radish plants', 'name': 'radish'}, {'frequency': 'c', 'id': 882, 'synset': 'raft.n.01', 'synonyms': ['raft'], 'def': 'a flat float (usually made of logs or planks) that can be used for transport or as a platform for swimmers', 'name': 'raft'}, {'frequency': 'r', 'id': 883, 'synset': 'rag_doll.n.01', 'synonyms': ['rag_doll'], 'def': 'a cloth doll that is stuffed and (usually) painted', 'name': 'rag_doll'}, {'frequency': 'c', 'id': 884, 'synset': 'raincoat.n.01', 'synonyms': ['raincoat', 'waterproof_jacket'], 'def': 'a water-resistant coat', 'name': 'raincoat'}, {'frequency': 'c', 'id': 885, 'synset': 'ram.n.05', 'synonyms': ['ram_(animal)'], 'def': 'uncastrated adult male sheep', 'name': 'ram_(animal)'}, {'frequency': 'c', 'id': 886, 'synset': 'raspberry.n.02', 'synonyms': ['raspberry'], 'def': 'red or black edible aggregate berries usually smaller than the related blackberries', 'name': 'raspberry'}, {'frequency': 'r', 'id': 887, 'synset': 'rat.n.01', 'synonyms': ['rat'], 'def': 'any of various long-tailed rodents similar to but larger than a mouse', 'name': 'rat'}, {'frequency': 'c', 'id': 888, 'synset': 'razorblade.n.01', 'synonyms': ['razorblade'], 'def': 'a blade that has a very sharp edge', 'name': 'razorblade'}, {'frequency': 'c', 'id': 889, 'synset': 'reamer.n.01', 'synonyms': ['reamer_(juicer)', 'juicer', 'juice_reamer'], 'def': 'a squeezer with a conical ridged center that is used for squeezing juice from citrus fruit', 'name': 'reamer_(juicer)'}, {'frequency': 'f', 'id': 890, 'synset': 'rearview_mirror.n.01', 'synonyms': ['rearview_mirror'], 'def': 'car mirror that reflects the view out of the rear window', 'name': 'rearview_mirror'}, {'frequency': 'c', 'id': 891, 'synset': 'receipt.n.02', 'synonyms': ['receipt'], 'def': 'an acknowledgment (usually tangible) that payment has been made', 'name': 'receipt'}, {'frequency': 'c', 'id': 892, 'synset': 'recliner.n.01', 'synonyms': ['recliner', 'reclining_chair', 'lounger_(chair)'], 'def': 'an armchair whose back can be lowered and foot can be raised to allow the sitter to recline in it', 'name': 'recliner'}, {'frequency': 'r', 'id': 893, 'synset': 'record_player.n.01', 'synonyms': ['record_player', 'phonograph_(record_player)', 'turntable'], 'def': 'machine in which rotating records cause a stylus to vibrate and the vibrations are amplified acoustically or electronically', 'name': 'record_player'}, {'frequency': 'r', 'id': 894, 'synset': 'red_cabbage.n.02', 'synonyms': ['red_cabbage'], 'def': 'compact head of purplish-red leaves', 'name': 'red_cabbage'}, {'frequency': 'f', 'id': 895, 'synset': 'reflector.n.01', 'synonyms': ['reflector'], 'def': 'device that reflects light, radiation, etc.', 'name': 'reflector'}, {'frequency': 'f', 'id': 896, 'synset': 'remote_control.n.01', 'synonyms': ['remote_control'], 'def': 'a device that can be used to control a machine or apparatus from a distance', 'name': 'remote_control'}, {'frequency': 'c', 'id': 897, 'synset': 'rhinoceros.n.01', 'synonyms': ['rhinoceros'], 'def': 'massive powerful herbivorous odd-toed ungulate of southeast Asia and Africa having very thick skin and one or two horns on the 
snout', 'name': 'rhinoceros'}, {'frequency': 'r', 'id': 898, 'synset': 'rib.n.03', 'synonyms': ['rib_(food)'], 'def': 'cut of meat including one or more ribs', 'name': 'rib_(food)'}, {'frequency': 'r', 'id': 899, 'synset': 'rifle.n.01', 'synonyms': ['rifle'], 'def': 'a shoulder firearm with a long barrel', 'name': 'rifle'}, {'frequency': 'f', 'id': 900, 'synset': 'ring.n.08', 'synonyms': ['ring'], 'def': 'jewelry consisting of a circlet of precious metal (often set with jewels) worn on the finger', 'name': 'ring'}, {'frequency': 'r', 'id': 901, 'synset': 'river_boat.n.01', 'synonyms': ['river_boat'], 'def': 'a boat used on rivers or to ply a river', 'name': 'river_boat'}, {'frequency': 'r', 'id': 902, 'synset': 'road_map.n.02', 'synonyms': ['road_map'], 'def': '(NOT A ROAD) a MAP showing roads (for automobile travel)', 'name': 'road_map'}, {'frequency': 'c', 'id': 903, 'synset': 'robe.n.01', 'synonyms': ['robe'], 'def': 'any loose flowing garment', 'name': 'robe'}, {'frequency': 'c', 'id': 904, 'synset': 'rocking_chair.n.01', 'synonyms': ['rocking_chair'], 'def': 'a chair mounted on rockers', 'name': 'rocking_chair'}, {'frequency': 'r', 'id': 905, 'synset': 'roller_skate.n.01', 'synonyms': ['roller_skate'], 'def': 'a shoe with pairs of rollers (small hard wheels) fixed to the sole', 'name': 'roller_skate'}, {'frequency': 'r', 'id': 906, 'synset': 'rollerblade.n.01', 'synonyms': ['Rollerblade'], 'def': 'an in-line variant of a roller skate', 'name': 'Rollerblade'}, {'frequency': 'c', 'id': 907, 'synset': 'rolling_pin.n.01', 'synonyms': ['rolling_pin'], 'def': 'utensil consisting of a cylinder (usually of wood) with a handle at each end; used to roll out dough', 'name': 'rolling_pin'}, {'frequency': 'r', 'id': 908, 'synset': 'root_beer.n.01', 'synonyms': ['root_beer'], 'def': 'carbonated drink containing extracts of roots and herbs', 'name': 'root_beer'}, {'frequency': 'c', 'id': 909, 'synset': 'router.n.02', 'synonyms': ['router_(computer_equipment)'], 'def': 'a device that forwards data packets between computer networks', 'name': 'router_(computer_equipment)'}, {'frequency': 'f', 'id': 910, 'synset': 'rubber_band.n.01', 'synonyms': ['rubber_band', 'elastic_band'], 'def': 'a narrow band of elastic rubber used to hold things (such as papers) together', 'name': 'rubber_band'}, {'frequency': 'c', 'id': 911, 'synset': 'runner.n.08', 'synonyms': ['runner_(carpet)'], 'def': 'a long narrow carpet', 'name': 'runner_(carpet)'}, {'frequency': 'f', 'id': 912, 'synset': 'sack.n.01', 'synonyms': ['plastic_bag', 'paper_bag'], 'def': "a bag made of paper or plastic for holding customer's purchases", 'name': 'plastic_bag'}, {'frequency': 'f', 'id': 913, 'synset': 'saddle.n.01', 'synonyms': ['saddle_(on_an_animal)'], 'def': 'a seat for the rider of a horse or camel', 'name': 'saddle_(on_an_animal)'}, {'frequency': 'f', 'id': 914, 'synset': 'saddle_blanket.n.01', 'synonyms': ['saddle_blanket', 'saddlecloth', 'horse_blanket'], 'def': 'stable gear consisting of a blanket placed under the saddle', 'name': 'saddle_blanket'}, {'frequency': 'c', 'id': 915, 'synset': 'saddlebag.n.01', 'synonyms': ['saddlebag'], 'def': 'a large bag (or pair of bags) hung over a saddle', 'name': 'saddlebag'}, {'frequency': 'r', 'id': 916, 'synset': 'safety_pin.n.01', 'synonyms': ['safety_pin'], 'def': 'a pin in the form of a clasp; has a guard so the point of the pin will not stick the user', 'name': 'safety_pin'}, {'frequency': 'c', 'id': 917, 'synset': 'sail.n.01', 'synonyms': ['sail'], 'def': 'a large piece of fabric by means of 
which wind is used to propel a sailing vessel', 'name': 'sail'}, {'frequency': 'c', 'id': 918, 'synset': 'salad.n.01', 'synonyms': ['salad'], 'def': 'food mixtures either arranged on a plate or tossed and served with a moist dressing; usually consisting of or including greens', 'name': 'salad'}, {'frequency': 'r', 'id': 919, 'synset': 'salad_plate.n.01', 'synonyms': ['salad_plate', 'salad_bowl'], 'def': 'a plate or bowl for individual servings of salad', 'name': 'salad_plate'}, {'frequency': 'r', 'id': 920, 'synset': 'salami.n.01', 'synonyms': ['salami'], 'def': 'highly seasoned fatty sausage of pork and beef usually dried', 'name': 'salami'}, {'frequency': 'r', 'id': 921, 'synset': 'salmon.n.01', 'synonyms': ['salmon_(fish)'], 'def': 'any of various large food and game fishes of northern waters', 'name': 'salmon_(fish)'}, {'frequency': 'r', 'id': 922, 'synset': 'salmon.n.03', 'synonyms': ['salmon_(food)'], 'def': 'flesh of any of various marine or freshwater fish of the family Salmonidae', 'name': 'salmon_(food)'}, {'frequency': 'r', 'id': 923, 'synset': 'salsa.n.01', 'synonyms': ['salsa'], 'def': 'spicy sauce of tomatoes and onions and chili peppers to accompany Mexican foods', 'name': 'salsa'}, {'frequency': 'f', 'id': 924, 'synset': 'saltshaker.n.01', 'synonyms': ['saltshaker'], 'def': 'a shaker with a perforated top for sprinkling salt', 'name': 'saltshaker'}, {'frequency': 'f', 'id': 925, 'synset': 'sandal.n.01', 'synonyms': ['sandal_(type_of_shoe)'], 'def': 'a shoe consisting of a sole fastened by straps to the foot', 'name': 'sandal_(type_of_shoe)'}, {'frequency': 'f', 'id': 926, 'synset': 'sandwich.n.01', 'synonyms': ['sandwich'], 'def': 'two (or more) slices of bread with a filling between them', 'name': 'sandwich'}, {'frequency': 'r', 'id': 927, 'synset': 'satchel.n.01', 'synonyms': ['satchel'], 'def': 'luggage consisting of a small case with a flat bottom and (usually) a shoulder strap', 'name': 'satchel'}, {'frequency': 'r', 'id': 928, 'synset': 'saucepan.n.01', 'synonyms': ['saucepan'], 'def': 'a deep pan with a handle; used for stewing or boiling', 'name': 'saucepan'}, {'frequency': 'f', 'id': 929, 'synset': 'saucer.n.02', 'synonyms': ['saucer'], 'def': 'a small shallow dish for holding a cup at the table', 'name': 'saucer'}, {'frequency': 'f', 'id': 930, 'synset': 'sausage.n.01', 'synonyms': ['sausage'], 'def': 'highly seasoned minced meat stuffed in casings', 'name': 'sausage'}, {'frequency': 'r', 'id': 931, 'synset': 'sawhorse.n.01', 'synonyms': ['sawhorse', 'sawbuck'], 'def': 'a framework for holding wood that is being sawed', 'name': 'sawhorse'}, {'frequency': 'r', 'id': 932, 'synset': 'sax.n.02', 'synonyms': ['saxophone'], 'def': "a wind instrument with a `J'-shaped form typically made of brass", 'name': 'saxophone'}, {'frequency': 'f', 'id': 933, 'synset': 'scale.n.07', 'synonyms': ['scale_(measuring_instrument)'], 'def': 'a measuring instrument for weighing; shows amount of mass', 'name': 'scale_(measuring_instrument)'}, {'frequency': 'r', 'id': 934, 'synset': 'scarecrow.n.01', 'synonyms': ['scarecrow', 'strawman'], 'def': 'an effigy in the shape of a man to frighten birds away from seeds', 'name': 'scarecrow'}, {'frequency': 'f', 'id': 935, 'synset': 'scarf.n.01', 'synonyms': ['scarf'], 'def': 'a garment worn around the head or neck or shoulders for warmth or decoration', 'name': 'scarf'}, {'frequency': 'c', 'id': 936, 'synset': 'school_bus.n.01', 'synonyms': ['school_bus'], 'def': 'a bus used to transport children to or from school', 'name': 'school_bus'}, 
{'frequency': 'f', 'id': 937, 'synset': 'scissors.n.01', 'synonyms': ['scissors'], 'def': 'a tool having two crossed pivoting blades with looped handles', 'name': 'scissors'}, {'frequency': 'c', 'id': 938, 'synset': 'scoreboard.n.01', 'synonyms': ['scoreboard'], 'def': 'a large board for displaying the score of a contest (and some other information)', 'name': 'scoreboard'}, {'frequency': 'c', 'id': 939, 'synset': 'scrambled_eggs.n.01', 'synonyms': ['scrambled_eggs'], 'def': 'eggs beaten and cooked to a soft firm consistency while stirring', 'name': 'scrambled_eggs'}, {'frequency': 'r', 'id': 940, 'synset': 'scraper.n.01', 'synonyms': ['scraper'], 'def': 'any of various hand tools for scraping', 'name': 'scraper'}, {'frequency': 'r', 'id': 941, 'synset': 'scratcher.n.03', 'synonyms': ['scratcher'], 'def': 'a device used for scratching', 'name': 'scratcher'}, {'frequency': 'c', 'id': 942, 'synset': 'screwdriver.n.01', 'synonyms': ['screwdriver'], 'def': 'a hand tool for driving screws; has a tip that fits into the head of a screw', 'name': 'screwdriver'}, {'frequency': 'c', 'id': 943, 'synset': 'scrub_brush.n.01', 'synonyms': ['scrubbing_brush'], 'def': 'a brush with short stiff bristles for heavy cleaning', 'name': 'scrubbing_brush'}, {'frequency': 'c', 'id': 944, 'synset': 'sculpture.n.01', 'synonyms': ['sculpture'], 'def': 'a three-dimensional work of art', 'name': 'sculpture'}, {'frequency': 'r', 'id': 945, 'synset': 'seabird.n.01', 'synonyms': ['seabird', 'seafowl'], 'def': 'a bird that frequents coastal waters and the open ocean: gulls; pelicans; gannets; cormorants; albatrosses; petrels; etc.', 'name': 'seabird'}, {'frequency': 'r', 'id': 946, 'synset': 'seahorse.n.02', 'synonyms': ['seahorse'], 'def': 'small fish with horse-like heads bent sharply downward and curled tails', 'name': 'seahorse'}, {'frequency': 'r', 'id': 947, 'synset': 'seaplane.n.01', 'synonyms': ['seaplane', 'hydroplane'], 'def': 'an airplane that can land on or take off from water', 'name': 'seaplane'}, {'frequency': 'c', 'id': 948, 'synset': 'seashell.n.01', 'synonyms': ['seashell'], 'def': 'the shell of a marine organism', 'name': 'seashell'}, {'frequency': 'r', 'id': 949, 'synset': 'seedling.n.01', 'synonyms': ['seedling'], 'def': 'young plant or tree grown from a seed', 'name': 'seedling'}, {'frequency': 'c', 'id': 950, 'synset': 'serving_dish.n.01', 'synonyms': ['serving_dish'], 'def': 'a dish used for serving food', 'name': 'serving_dish'}, {'frequency': 'r', 'id': 951, 'synset': 'sewing_machine.n.01', 'synonyms': ['sewing_machine'], 'def': 'a textile machine used as a home appliance for sewing', 'name': 'sewing_machine'}, {'frequency': 'r', 'id': 952, 'synset': 'shaker.n.03', 'synonyms': ['shaker'], 'def': 'a container in which something can be shaken', 'name': 'shaker'}, {'frequency': 'c', 'id': 953, 'synset': 'shampoo.n.01', 'synonyms': ['shampoo'], 'def': 'cleansing agent consisting of soaps or detergents used for washing the hair', 'name': 'shampoo'}, {'frequency': 'r', 'id': 954, 'synset': 'shark.n.01', 'synonyms': ['shark'], 'def': 'typically large carnivorous fishes with sharp teeth', 'name': 'shark'}, {'frequency': 'r', 'id': 955, 'synset': 'sharpener.n.01', 'synonyms': ['sharpener'], 'def': 'any implement that is used to make something (an edge or a point) sharper', 'name': 'sharpener'}, {'frequency': 'r', 'id': 956, 'synset': 'sharpie.n.03', 'synonyms': ['Sharpie'], 'def': 'a pen with indelible ink that will write on any surface', 'name': 'Sharpie'}, {'frequency': 'r', 'id': 957, 'synset': 
'shaver.n.03', 'synonyms': ['shaver_(electric)', 'electric_shaver', 'electric_razor'], 'def': 'a razor powered by an electric motor', 'name': 'shaver_(electric)'}, {'frequency': 'c', 'id': 958, 'synset': 'shaving_cream.n.01', 'synonyms': ['shaving_cream', 'shaving_soap'], 'def': 'toiletry that forms a rich lather for softening the beard before shaving', 'name': 'shaving_cream'}, {'frequency': 'r', 'id': 959, 'synset': 'shawl.n.01', 'synonyms': ['shawl'], 'def': 'cloak consisting of an oblong piece of cloth used to cover the head and shoulders', 'name': 'shawl'}, {'frequency': 'r', 'id': 960, 'synset': 'shears.n.01', 'synonyms': ['shears'], 'def': 'large scissors with strong blades', 'name': 'shears'}, {'frequency': 'f', 'id': 961, 'synset': 'sheep.n.01', 'synonyms': ['sheep'], 'def': 'woolly usually horned ruminant mammal related to the goat', 'name': 'sheep'}, {'frequency': 'r', 'id': 962, 'synset': 'shepherd_dog.n.01', 'synonyms': ['shepherd_dog', 'sheepdog'], 'def': 'any of various usually long-haired breeds of dog reared to herd and guard sheep', 'name': 'shepherd_dog'}, {'frequency': 'r', 'id': 963, 'synset': 'sherbert.n.01', 'synonyms': ['sherbert', 'sherbet'], 'def': 'a frozen dessert made primarily of fruit juice and sugar', 'name': 'sherbert'}, {'frequency': 'r', 'id': 964, 'synset': 'shield.n.02', 'synonyms': ['shield'], 'def': 'armor carried on the arm to intercept blows', 'name': 'shield'}, {'frequency': 'f', 'id': 965, 'synset': 'shirt.n.01', 'synonyms': ['shirt'], 'def': 'a garment worn on the upper half of the body', 'name': 'shirt'}, {'frequency': 'f', 'id': 966, 'synset': 'shoe.n.01', 'synonyms': ['shoe', 'sneaker_(type_of_shoe)', 'tennis_shoe'], 'def': 'common footwear covering the foot', 'name': 'shoe'}, {'frequency': 'c', 'id': 967, 'synset': 'shopping_bag.n.01', 'synonyms': ['shopping_bag'], 'def': 'a bag made of plastic or strong paper (often with handles); used to transport goods after shopping', 'name': 'shopping_bag'}, {'frequency': 'c', 'id': 968, 'synset': 'shopping_cart.n.01', 'synonyms': ['shopping_cart'], 'def': 'a handcart that holds groceries or other goods while shopping', 'name': 'shopping_cart'}, {'frequency': 'f', 'id': 969, 'synset': 'short_pants.n.01', 'synonyms': ['short_pants', 'shorts_(clothing)', 'trunks_(clothing)'], 'def': 'trousers that end at or above the knee', 'name': 'short_pants'}, {'frequency': 'r', 'id': 970, 'synset': 'shot_glass.n.01', 'synonyms': ['shot_glass'], 'def': 'a small glass adequate to hold a single swallow of whiskey', 'name': 'shot_glass'}, {'frequency': 'c', 'id': 971, 'synset': 'shoulder_bag.n.01', 'synonyms': ['shoulder_bag'], 'def': 'a large handbag that can be carried by a strap looped over the shoulder', 'name': 'shoulder_bag'}, {'frequency': 'c', 'id': 972, 'synset': 'shovel.n.01', 'synonyms': ['shovel'], 'def': 'a hand tool for lifting loose material such as snow, dirt, etc.', 'name': 'shovel'}, {'frequency': 'f', 'id': 973, 'synset': 'shower.n.01', 'synonyms': ['shower_head'], 'def': 'a plumbing fixture that sprays water over you', 'name': 'shower_head'}, {'frequency': 'f', 'id': 974, 'synset': 'shower_curtain.n.01', 'synonyms': ['shower_curtain'], 'def': 'a curtain that keeps water from splashing out of the shower area', 'name': 'shower_curtain'}, {'frequency': 'r', 'id': 975, 'synset': 'shredder.n.01', 'synonyms': ['shredder_(for_paper)'], 'def': 'a device that shreds documents', 'name': 'shredder_(for_paper)'}, {'frequency': 'r', 'id': 976, 'synset': 'sieve.n.01', 'synonyms': ['sieve', 
'screen_(sieve)'], 'def': 'a strainer for separating lumps from powdered material or grading particles', 'name': 'sieve'}, {'frequency': 'f', 'id': 977, 'synset': 'signboard.n.01', 'synonyms': ['signboard'], 'def': 'structure displaying a board on which advertisements can be posted', 'name': 'signboard'}, {'frequency': 'c', 'id': 978, 'synset': 'silo.n.01', 'synonyms': ['silo'], 'def': 'a cylindrical tower used for storing goods', 'name': 'silo'}, {'frequency': 'f', 'id': 979, 'synset': 'sink.n.01', 'synonyms': ['sink'], 'def': 'plumbing fixture consisting of a water basin fixed to a wall or floor and having a drainpipe', 'name': 'sink'}, {'frequency': 'f', 'id': 980, 'synset': 'skateboard.n.01', 'synonyms': ['skateboard'], 'def': 'a board with wheels that is ridden in a standing or crouching position and propelled by foot', 'name': 'skateboard'}, {'frequency': 'c', 'id': 981, 'synset': 'skewer.n.01', 'synonyms': ['skewer'], 'def': 'a long pin for holding meat in position while it is being roasted', 'name': 'skewer'}, {'frequency': 'f', 'id': 982, 'synset': 'ski.n.01', 'synonyms': ['ski'], 'def': 'sports equipment for skiing on snow', 'name': 'ski'}, {'frequency': 'f', 'id': 983, 'synset': 'ski_boot.n.01', 'synonyms': ['ski_boot'], 'def': 'a stiff boot that is fastened to a ski with a ski binding', 'name': 'ski_boot'}, {'frequency': 'f', 'id': 984, 'synset': 'ski_parka.n.01', 'synonyms': ['ski_parka', 'ski_jacket'], 'def': 'a parka to be worn while skiing', 'name': 'ski_parka'}, {'frequency': 'f', 'id': 985, 'synset': 'ski_pole.n.01', 'synonyms': ['ski_pole'], 'def': 'a pole with metal points used as an aid in skiing', 'name': 'ski_pole'}, {'frequency': 'f', 'id': 986, 'synset': 'skirt.n.02', 'synonyms': ['skirt'], 'def': 'a garment hanging from the waist; worn mainly by girls and women', 'name': 'skirt'}, {'frequency': 'c', 'id': 987, 'synset': 'sled.n.01', 'synonyms': ['sled', 'sledge', 'sleigh'], 'def': 'a vehicle or flat object for transportation over snow by sliding or pulled by dogs, etc.', 'name': 'sled'}, {'frequency': 'c', 'id': 988, 'synset': 'sleeping_bag.n.01', 'synonyms': ['sleeping_bag'], 'def': 'large padded bag designed to be slept in outdoors', 'name': 'sleeping_bag'}, {'frequency': 'r', 'id': 989, 'synset': 'sling.n.05', 'synonyms': ['sling_(bandage)', 'triangular_bandage'], 'def': 'bandage to support an injured forearm; slung over the shoulder or neck', 'name': 'sling_(bandage)'}, {'frequency': 'c', 'id': 990, 'synset': 'slipper.n.01', 'synonyms': ['slipper_(footwear)', 'carpet_slipper_(footwear)'], 'def': 'low footwear that can be slipped on and off easily; usually worn indoors', 'name': 'slipper_(footwear)'}, {'frequency': 'r', 'id': 991, 'synset': 'smoothie.n.02', 'synonyms': ['smoothie'], 'def': 'a thick smooth drink consisting of fresh fruit pureed with ice cream or yoghurt or milk', 'name': 'smoothie'}, {'frequency': 'r', 'id': 992, 'synset': 'snake.n.01', 'synonyms': ['snake', 'serpent'], 'def': 'limbless scaly elongate reptile; some are venomous', 'name': 'snake'}, {'frequency': 'f', 'id': 993, 'synset': 'snowboard.n.01', 'synonyms': ['snowboard'], 'def': 'a board that resembles a broad ski or a small surfboard; used in a standing position to slide down snow-covered slopes', 'name': 'snowboard'}, {'frequency': 'c', 'id': 994, 'synset': 'snowman.n.01', 'synonyms': ['snowman'], 'def': 'a figure of a person made of packed snow', 'name': 'snowman'}, {'frequency': 'c', 'id': 995, 'synset': 'snowmobile.n.01', 'synonyms': ['snowmobile'], 'def': 'tracked vehicle for 
travel on snow having skis in front', 'name': 'snowmobile'}, {'frequency': 'f', 'id': 996, 'synset': 'soap.n.01', 'synonyms': ['soap'], 'def': 'a cleansing agent made from the salts of vegetable or animal fats', 'name': 'soap'}, {'frequency': 'f', 'id': 997, 'synset': 'soccer_ball.n.01', 'synonyms': ['soccer_ball'], 'def': "an inflated ball used in playing soccer (called `football' outside of the United States)", 'name': 'soccer_ball'}, {'frequency': 'f', 'id': 998, 'synset': 'sock.n.01', 'synonyms': ['sock'], 'def': 'cloth covering for the foot; worn inside the shoe; reaches to between the ankle and the knee', 'name': 'sock'}, {'frequency': 'r', 'id': 999, 'synset': 'soda_fountain.n.02', 'synonyms': ['soda_fountain'], 'def': 'an apparatus for dispensing soda water', 'name': 'soda_fountain'}, {'frequency': 'r', 'id': 1000, 'synset': 'soda_water.n.01', 'synonyms': ['carbonated_water', 'club_soda', 'seltzer', 'sparkling_water'], 'def': 'effervescent beverage artificially charged with carbon dioxide', 'name': 'carbonated_water'}, {'frequency': 'f', 'id': 1001, 'synset': 'sofa.n.01', 'synonyms': ['sofa', 'couch', 'lounge'], 'def': 'an upholstered seat for more than one person', 'name': 'sofa'}, {'frequency': 'r', 'id': 1002, 'synset': 'softball.n.01', 'synonyms': ['softball'], 'def': 'ball used in playing softball', 'name': 'softball'}, {'frequency': 'c', 'id': 1003, 'synset': 'solar_array.n.01', 'synonyms': ['solar_array', 'solar_battery', 'solar_panel'], 'def': 'electrical device consisting of a large array of connected solar cells', 'name': 'solar_array'}, {'frequency': 'r', 'id': 1004, 'synset': 'sombrero.n.02', 'synonyms': ['sombrero'], 'def': 'a straw hat with a tall crown and broad brim; worn in American southwest and in Mexico', 'name': 'sombrero'}, {'frequency': 'c', 'id': 1005, 'synset': 'soup.n.01', 'synonyms': ['soup'], 'def': 'liquid food especially of meat or fish or vegetable stock often containing pieces of solid food', 'name': 'soup'}, {'frequency': 'r', 'id': 1006, 'synset': 'soup_bowl.n.01', 'synonyms': ['soup_bowl'], 'def': 'a bowl for serving soup', 'name': 'soup_bowl'}, {'frequency': 'c', 'id': 1007, 'synset': 'soupspoon.n.01', 'synonyms': ['soupspoon'], 'def': 'a spoon with a rounded bowl for eating soup', 'name': 'soupspoon'}, {'frequency': 'c', 'id': 1008, 'synset': 'sour_cream.n.01', 'synonyms': ['sour_cream', 'soured_cream'], 'def': 'soured light cream', 'name': 'sour_cream'}, {'frequency': 'r', 'id': 1009, 'synset': 'soya_milk.n.01', 'synonyms': ['soya_milk', 'soybean_milk', 'soymilk'], 'def': 'a milk substitute containing soybean flour and water; used in some infant formulas and in making tofu', 'name': 'soya_milk'}, {'frequency': 'r', 'id': 1010, 'synset': 'space_shuttle.n.01', 'synonyms': ['space_shuttle'], 'def': "a reusable spacecraft with wings for a controlled descent through the Earth's atmosphere", 'name': 'space_shuttle'}, {'frequency': 'r', 'id': 1011, 'synset': 'sparkler.n.02', 'synonyms': ['sparkler_(fireworks)'], 'def': 'a firework that burns slowly and throws out a shower of sparks', 'name': 'sparkler_(fireworks)'}, {'frequency': 'f', 'id': 1012, 'synset': 'spatula.n.02', 'synonyms': ['spatula'], 'def': 'a hand tool with a thin flexible blade used to mix or spread soft substances', 'name': 'spatula'}, {'frequency': 'r', 'id': 1013, 'synset': 'spear.n.01', 'synonyms': ['spear', 'lance'], 'def': 'a long pointed rod used as a tool or weapon', 'name': 'spear'}, {'frequency': 'f', 'id': 1014, 'synset': 'spectacles.n.01', 'synonyms': ['spectacles', 'specs', 
'eyeglasses', 'glasses'], 'def': 'optical instrument consisting of a frame that holds a pair of lenses for correcting defective vision', 'name': 'spectacles'}, {'frequency': 'c', 'id': 1015, 'synset': 'spice_rack.n.01', 'synonyms': ['spice_rack'], 'def': 'a rack for displaying containers filled with spices', 'name': 'spice_rack'}, {'frequency': 'r', 'id': 1016, 'synset': 'spider.n.01', 'synonyms': ['spider'], 'def': 'predatory arachnid with eight legs, two poison fangs, two feelers, and usually two silk-spinning organs at the back end of the body', 'name': 'spider'}, {'frequency': 'c', 'id': 1017, 'synset': 'sponge.n.01', 'synonyms': ['sponge'], 'def': 'a porous mass usable to absorb water typically used for cleaning', 'name': 'sponge'}, {'frequency': 'f', 'id': 1018, 'synset': 'spoon.n.01', 'synonyms': ['spoon'], 'def': 'a piece of cutlery with a shallow bowl-shaped container and a handle', 'name': 'spoon'}, {'frequency': 'c', 'id': 1019, 'synset': 'sportswear.n.01', 'synonyms': ['sportswear', 'athletic_wear', 'activewear'], 'def': 'attire worn for sport or for casual wear', 'name': 'sportswear'}, {'frequency': 'c', 'id': 1020, 'synset': 'spotlight.n.02', 'synonyms': ['spotlight'], 'def': 'a lamp that produces a strong beam of light to illuminate a restricted area; used to focus attention on a stage performer', 'name': 'spotlight'}, {'frequency': 'r', 'id': 1021, 'synset': 'squirrel.n.01', 'synonyms': ['squirrel'], 'def': 'a kind of arboreal rodent having a long bushy tail', 'name': 'squirrel'}, {'frequency': 'c', 'id': 1022, 'synset': 'stapler.n.01', 'synonyms': ['stapler_(stapling_machine)'], 'def': 'a machine that inserts staples into sheets of paper in order to fasten them together', 'name': 'stapler_(stapling_machine)'}, {'frequency': 'r', 'id': 1023, 'synset': 'starfish.n.01', 'synonyms': ['starfish', 'sea_star'], 'def': 'echinoderms characterized by five arms extending from a central disk', 'name': 'starfish'}, {'frequency': 'f', 'id': 1024, 'synset': 'statue.n.01', 'synonyms': ['statue_(sculpture)'], 'def': 'a sculpture representing a human or animal', 'name': 'statue_(sculpture)'}, {'frequency': 'c', 'id': 1025, 'synset': 'steak.n.01', 'synonyms': ['steak_(food)'], 'def': 'a slice of meat cut from the fleshy part of an animal or large fish', 'name': 'steak_(food)'}, {'frequency': 'r', 'id': 1026, 'synset': 'steak_knife.n.01', 'synonyms': ['steak_knife'], 'def': 'a sharp table knife used in eating steak', 'name': 'steak_knife'}, {'frequency': 'r', 'id': 1027, 'synset': 'steamer.n.02', 'synonyms': ['steamer_(kitchen_appliance)'], 'def': 'a cooking utensil that can be used to cook food by steaming it', 'name': 'steamer_(kitchen_appliance)'}, {'frequency': 'f', 'id': 1028, 'synset': 'steering_wheel.n.01', 'synonyms': ['steering_wheel'], 'def': 'a handwheel that is used for steering', 'name': 'steering_wheel'}, {'frequency': 'r', 'id': 1029, 'synset': 'stencil.n.01', 'synonyms': ['stencil'], 'def': 'a sheet of material (metal, plastic, etc.) 
that has been perforated with a pattern; ink or paint can pass through the perforations to create the printed pattern on the surface below', 'name': 'stencil'}, {'frequency': 'r', 'id': 1030, 'synset': 'step_ladder.n.01', 'synonyms': ['stepladder'], 'def': 'a folding portable ladder hinged at the top', 'name': 'stepladder'}, {'frequency': 'c', 'id': 1031, 'synset': 'step_stool.n.01', 'synonyms': ['step_stool'], 'def': 'a stool that has one or two steps that fold under the seat', 'name': 'step_stool'}, {'frequency': 'c', 'id': 1032, 'synset': 'stereo.n.01', 'synonyms': ['stereo_(sound_system)'], 'def': 'electronic device for playing audio', 'name': 'stereo_(sound_system)'}, {'frequency': 'r', 'id': 1033, 'synset': 'stew.n.02', 'synonyms': ['stew'], 'def': 'food prepared by stewing especially meat or fish with vegetables', 'name': 'stew'}, {'frequency': 'r', 'id': 1034, 'synset': 'stirrer.n.02', 'synonyms': ['stirrer'], 'def': 'an implement used for stirring', 'name': 'stirrer'}, {'frequency': 'f', 'id': 1035, 'synset': 'stirrup.n.01', 'synonyms': ['stirrup'], 'def': "support consisting of metal loops into which rider's feet go", 'name': 'stirrup'}, {'frequency': 'c', 'id': 1036, 'synset': 'stocking.n.01', 'synonyms': ['stockings_(leg_wear)'], 'def': 'close-fitting hosiery to cover the foot and leg; come in matched pairs', 'name': 'stockings_(leg_wear)'}, {'frequency': 'f', 'id': 1037, 'synset': 'stool.n.01', 'synonyms': ['stool'], 'def': 'a simple seat without a back or arms', 'name': 'stool'}, {'frequency': 'f', 'id': 1038, 'synset': 'stop_sign.n.01', 'synonyms': ['stop_sign'], 'def': 'a traffic sign to notify drivers that they must come to a complete stop', 'name': 'stop_sign'}, {'frequency': 'f', 'id': 1039, 'synset': 'stoplight.n.01', 'synonyms': ['brake_light'], 'def': 'a red light on the rear of a motor vehicle that signals when the brakes are applied', 'name': 'brake_light'}, {'frequency': 'f', 'id': 1040, 'synset': 'stove.n.01', 'synonyms': ['stove', 'kitchen_stove', 'range_(kitchen_appliance)', 'kitchen_range', 'cooking_stove'], 'def': 'a kitchen appliance used for cooking food', 'name': 'stove'}, {'frequency': 'c', 'id': 1041, 'synset': 'strainer.n.01', 'synonyms': ['strainer'], 'def': 'a filter to retain larger pieces while smaller pieces and liquids pass through', 'name': 'strainer'}, {'frequency': 'f', 'id': 1042, 'synset': 'strap.n.01', 'synonyms': ['strap'], 'def': 'an elongated strip of material for binding things together or holding something in position', 'name': 'strap'}, {'frequency': 'f', 'id': 1043, 'synset': 'straw.n.04', 'synonyms': ['straw_(for_drinking)', 'drinking_straw'], 'def': 'a thin paper or plastic tube used to suck liquids into the mouth', 'name': 'straw_(for_drinking)'}, {'frequency': 'f', 'id': 1044, 'synset': 'strawberry.n.01', 'synonyms': ['strawberry'], 'def': 'sweet fleshy red fruit', 'name': 'strawberry'}, {'frequency': 'f', 'id': 1045, 'synset': 'street_sign.n.01', 'synonyms': ['street_sign'], 'def': 'a sign visible from the street', 'name': 'street_sign'}, {'frequency': 'f', 'id': 1046, 'synset': 'streetlight.n.01', 'synonyms': ['streetlight', 'street_lamp'], 'def': 'a lamp supported on a lamppost; for illuminating a street', 'name': 'streetlight'}, {'frequency': 'r', 'id': 1047, 'synset': 'string_cheese.n.01', 'synonyms': ['string_cheese'], 'def': 'cheese formed in long strings twisted together', 'name': 'string_cheese'}, {'frequency': 'r', 'id': 1048, 'synset': 'stylus.n.02', 'synonyms': ['stylus'], 'def': 'a pointed tool for writing or drawing or engraving', 'name': 
'stylus'}, {'frequency': 'r', 'id': 1049, 'synset': 'subwoofer.n.01', 'synonyms': ['subwoofer'], 'def': 'a loudspeaker that is designed to reproduce very low bass frequencies', 'name': 'subwoofer'}, {'frequency': 'r', 'id': 1050, 'synset': 'sugar_bowl.n.01', 'synonyms': ['sugar_bowl'], 'def': 'a dish in which sugar is served', 'name': 'sugar_bowl'}, {'frequency': 'r', 'id': 1051, 'synset': 'sugarcane.n.01', 'synonyms': ['sugarcane_(plant)'], 'def': 'juicy canes whose sap is a source of molasses and commercial sugar; fresh canes are sometimes chewed for the juice', 'name': 'sugarcane_(plant)'}, {'frequency': 'c', 'id': 1052, 'synset': 'suit.n.01', 'synonyms': ['suit_(clothing)'], 'def': 'a set of garments (usually including a jacket and trousers or skirt) for outerwear all of the same fabric and color', 'name': 'suit_(clothing)'}, {'frequency': 'c', 'id': 1053, 'synset': 'sunflower.n.01', 'synonyms': ['sunflower'], 'def': 'any plant of the genus Helianthus having large flower heads with dark disk florets and showy yellow rays', 'name': 'sunflower'}, {'frequency': 'f', 'id': 1054, 'synset': 'sunglasses.n.01', 'synonyms': ['sunglasses'], 'def': 'spectacles that are darkened or polarized to protect the eyes from the glare of the sun', 'name': 'sunglasses'}, {'frequency': 'c', 'id': 1055, 'synset': 'sunhat.n.01', 'synonyms': ['sunhat'], 'def': 'a hat with a broad brim that protects the face from direct exposure to the sun', 'name': 'sunhat'}, {'frequency': 'r', 'id': 1056, 'synset': 'sunscreen.n.01', 'synonyms': ['sunscreen', 'sunblock'], 'def': 'a cream spread on the skin; contains a chemical to filter out ultraviolet light and so protect from sunburn', 'name': 'sunscreen'}, {'frequency': 'f', 'id': 1057, 'synset': 'surfboard.n.01', 'synonyms': ['surfboard'], 'def': 'a narrow buoyant board for riding surf', 'name': 'surfboard'}, {'frequency': 'c', 'id': 1058, 'synset': 'sushi.n.01', 'synonyms': ['sushi'], 'def': 'rice (with raw fish) wrapped in seaweed', 'name': 'sushi'}, {'frequency': 'c', 'id': 1059, 'synset': 'swab.n.02', 'synonyms': ['mop'], 'def': 'cleaning implement consisting of absorbent material fastened to a handle; for cleaning floors', 'name': 'mop'}, {'frequency': 'c', 'id': 1060, 'synset': 'sweat_pants.n.01', 'synonyms': ['sweat_pants'], 'def': 'loose-fitting trousers with elastic cuffs; worn by athletes', 'name': 'sweat_pants'}, {'frequency': 'c', 'id': 1061, 'synset': 'sweatband.n.02', 'synonyms': ['sweatband'], 'def': 'a band of material tied around the forehead or wrist to absorb sweat', 'name': 'sweatband'}, {'frequency': 'f', 'id': 1062, 'synset': 'sweater.n.01', 'synonyms': ['sweater'], 'def': 'a crocheted or knitted garment covering the upper part of the body', 'name': 'sweater'}, {'frequency': 'f', 'id': 1063, 'synset': 'sweatshirt.n.01', 'synonyms': ['sweatshirt'], 'def': 'cotton knit pullover with long sleeves worn during athletic activity', 'name': 'sweatshirt'}, {'frequency': 'c', 'id': 1064, 'synset': 'sweet_potato.n.02', 'synonyms': ['sweet_potato'], 'def': 'the edible tuberous root of the sweet potato vine', 'name': 'sweet_potato'}, {'frequency': 'f', 'id': 1065, 'synset': 'swimsuit.n.01', 'synonyms': ['swimsuit', 'swimwear', 'bathing_suit', 'swimming_costume', 'bathing_costume', 'swimming_trunks', 'bathing_trunks'], 'def': 'garment worn for swimming', 'name': 'swimsuit'}, {'frequency': 'c', 'id': 1066, 'synset': 'sword.n.01', 'synonyms': ['sword'], 'def': 'a cutting or thrusting weapon that has a long metal blade', 'name': 'sword'}, {'frequency': 'r', 'id': 1067, 
'synset': 'syringe.n.01', 'synonyms': ['syringe'], 'def': 'a medical instrument used to inject or withdraw fluids', 'name': 'syringe'}, {'frequency': 'r', 'id': 1068, 'synset': 'tabasco.n.02', 'synonyms': ['Tabasco_sauce'], 'def': 'very spicy sauce (trade name Tabasco) made from fully-aged red peppers', 'name': 'Tabasco_sauce'}, {'frequency': 'r', 'id': 1069, 'synset': 'table-tennis_table.n.01', 'synonyms': ['table-tennis_table', 'ping-pong_table'], 'def': 'a table used for playing table tennis', 'name': 'table-tennis_table'}, {'frequency': 'f', 'id': 1070, 'synset': 'table.n.02', 'synonyms': ['table'], 'def': 'a piece of furniture having a smooth flat top that is usually supported by one or more vertical legs', 'name': 'table'}, {'frequency': 'c', 'id': 1071, 'synset': 'table_lamp.n.01', 'synonyms': ['table_lamp'], 'def': 'a lamp that sits on a table', 'name': 'table_lamp'}, {'frequency': 'f', 'id': 1072, 'synset': 'tablecloth.n.01', 'synonyms': ['tablecloth'], 'def': 'a covering spread over a dining table', 'name': 'tablecloth'}, {'frequency': 'r', 'id': 1073, 'synset': 'tachometer.n.01', 'synonyms': ['tachometer'], 'def': 'measuring instrument for indicating speed of rotation', 'name': 'tachometer'}, {'frequency': 'r', 'id': 1074, 'synset': 'taco.n.02', 'synonyms': ['taco'], 'def': 'a small tortilla cupped around a filling', 'name': 'taco'}, {'frequency': 'f', 'id': 1075, 'synset': 'tag.n.02', 'synonyms': ['tag'], 'def': 'a label associated with something for the purpose of identification or information', 'name': 'tag'}, {'frequency': 'f', 'id': 1076, 'synset': 'taillight.n.01', 'synonyms': ['taillight', 'rear_light'], 'def': 'lamp (usually red) mounted at the rear of a motor vehicle', 'name': 'taillight'}, {'frequency': 'r', 'id': 1077, 'synset': 'tambourine.n.01', 'synonyms': ['tambourine'], 'def': 'a shallow drum with a single drumhead and with metallic disks in the sides', 'name': 'tambourine'}, {'frequency': 'r', 'id': 1078, 'synset': 'tank.n.01', 'synonyms': ['army_tank', 'armored_combat_vehicle', 'armoured_combat_vehicle'], 'def': 'an enclosed armored military vehicle; has a cannon and moves on caterpillar treads', 'name': 'army_tank'}, {'frequency': 'c', 'id': 1079, 'synset': 'tank.n.02', 'synonyms': ['tank_(storage_vessel)', 'storage_tank'], 'def': 'a large (usually metallic) vessel for holding gases or liquids', 'name': 'tank_(storage_vessel)'}, {'frequency': 'f', 'id': 1080, 'synset': 'tank_top.n.01', 'synonyms': ['tank_top_(clothing)'], 'def': 'a tight-fitting sleeveless shirt with wide shoulder straps and low neck and no front opening', 'name': 'tank_top_(clothing)'}, {'frequency': 'c', 'id': 1081, 'synset': 'tape.n.01', 'synonyms': ['tape_(sticky_cloth_or_paper)'], 'def': 'a long thin piece of cloth or paper as used for binding or fastening', 'name': 'tape_(sticky_cloth_or_paper)'}, {'frequency': 'c', 'id': 1082, 'synset': 'tape.n.04', 'synonyms': ['tape_measure', 'measuring_tape'], 'def': 'measuring instrument consisting of a narrow strip (cloth or metal) marked in inches or centimeters and used for measuring lengths', 'name': 'tape_measure'}, {'frequency': 'c', 'id': 1083, 'synset': 'tapestry.n.02', 'synonyms': ['tapestry'], 'def': 'a heavy textile with a woven design; used for curtains and upholstery', 'name': 'tapestry'}, {'frequency': 'f', 'id': 1084, 'synset': 'tarpaulin.n.01', 'synonyms': ['tarp'], 'def': 'waterproofed canvas', 'name': 'tarp'}, {'frequency': 'c', 'id': 1085, 'synset': 'tartan.n.01', 'synonyms': ['tartan', 'plaid'], 'def': 'a cloth having a 
crisscross design', 'name': 'tartan'}, {'frequency': 'c', 'id': 1086, 'synset': 'tassel.n.01', 'synonyms': ['tassel'], 'def': 'adornment consisting of a bunch of cords fastened at one end', 'name': 'tassel'}, {'frequency': 'r', 'id': 1087, 'synset': 'tea_bag.n.01', 'synonyms': ['tea_bag'], 'def': 'a measured amount of tea in a bag for an individual serving of tea', 'name': 'tea_bag'}, {'frequency': 'c', 'id': 1088, 'synset': 'teacup.n.02', 'synonyms': ['teacup'], 'def': 'a cup from which tea is drunk', 'name': 'teacup'}, {'frequency': 'c', 'id': 1089, 'synset': 'teakettle.n.01', 'synonyms': ['teakettle'], 'def': 'kettle for boiling water to make tea', 'name': 'teakettle'}, {'frequency': 'c', 'id': 1090, 'synset': 'teapot.n.01', 'synonyms': ['teapot'], 'def': 'pot for brewing tea; usually has a spout and handle', 'name': 'teapot'}, {'frequency': 'f', 'id': 1091, 'synset': 'teddy.n.01', 'synonyms': ['teddy_bear'], 'def': "plaything consisting of a child's toy bear (usually plush and stuffed with soft materials)", 'name': 'teddy_bear'}, {'frequency': 'f', 'id': 1092, 'synset': 'telephone.n.01', 'synonyms': ['telephone', 'phone', 'telephone_set'], 'def': 'electronic device for communicating by voice over long distances', 'name': 'telephone'}, {'frequency': 'c', 'id': 1093, 'synset': 'telephone_booth.n.01', 'synonyms': ['telephone_booth', 'phone_booth', 'call_box', 'telephone_box', 'telephone_kiosk'], 'def': 'booth for using a telephone', 'name': 'telephone_booth'}, {'frequency': 'f', 'id': 1094, 'synset': 'telephone_pole.n.01', 'synonyms': ['telephone_pole', 'telegraph_pole', 'telegraph_post'], 'def': 'tall pole supporting telephone wires', 'name': 'telephone_pole'}, {'frequency': 'r', 'id': 1095, 'synset': 'telephoto_lens.n.01', 'synonyms': ['telephoto_lens', 'zoom_lens'], 'def': 'a camera lens that magnifies the image', 'name': 'telephoto_lens'}, {'frequency': 'c', 'id': 1096, 'synset': 'television_camera.n.01', 'synonyms': ['television_camera', 'tv_camera'], 'def': 'television equipment for capturing and recording video', 'name': 'television_camera'}, {'frequency': 'f', 'id': 1097, 'synset': 'television_receiver.n.01', 'synonyms': ['television_set', 'tv', 'tv_set'], 'def': 'an electronic device that receives television signals and displays them on a screen', 'name': 'television_set'}, {'frequency': 'f', 'id': 1098, 'synset': 'tennis_ball.n.01', 'synonyms': ['tennis_ball'], 'def': 'ball about the size of a fist used in playing tennis', 'name': 'tennis_ball'}, {'frequency': 'f', 'id': 1099, 'synset': 'tennis_racket.n.01', 'synonyms': ['tennis_racket'], 'def': 'a racket used to play tennis', 'name': 'tennis_racket'}, {'frequency': 'r', 'id': 1100, 'synset': 'tequila.n.01', 'synonyms': ['tequila'], 'def': 'Mexican liquor made from fermented juices of an agave plant', 'name': 'tequila'}, {'frequency': 'c', 'id': 1101, 'synset': 'thermometer.n.01', 'synonyms': ['thermometer'], 'def': 'measuring instrument for measuring temperature', 'name': 'thermometer'}, {'frequency': 'c', 'id': 1102, 'synset': 'thermos.n.01', 'synonyms': ['thermos_bottle'], 'def': 'vacuum flask that preserves temperature of hot or cold drinks', 'name': 'thermos_bottle'}, {'frequency': 'c', 'id': 1103, 'synset': 'thermostat.n.01', 'synonyms': ['thermostat'], 'def': 'a regulator for automatically regulating temperature by starting or stopping the supply of heat', 'name': 'thermostat'}, {'frequency': 'r', 'id': 1104, 'synset': 'thimble.n.02', 'synonyms': ['thimble'], 'def': 'a small metal cap to protect the finger while sewing; 
can be used as a small container', 'name': 'thimble'}, {'frequency': 'c', 'id': 1105, 'synset': 'thread.n.01', 'synonyms': ['thread', 'yarn'], 'def': 'a fine cord of twisted fibers (of cotton or silk or wool or nylon etc.) used in sewing and weaving', 'name': 'thread'}, {'frequency': 'c', 'id': 1106, 'synset': 'thumbtack.n.01', 'synonyms': ['thumbtack', 'drawing_pin', 'pushpin'], 'def': 'a tack for attaching papers to a bulletin board or drawing board', 'name': 'thumbtack'}, {'frequency': 'c', 'id': 1107, 'synset': 'tiara.n.01', 'synonyms': ['tiara'], 'def': 'a jeweled headdress worn by women on formal occasions', 'name': 'tiara'}, {'frequency': 'c', 'id': 1108, 'synset': 'tiger.n.02', 'synonyms': ['tiger'], 'def': 'large feline of forests in most of Asia having a tawny coat with black stripes', 'name': 'tiger'}, {'frequency': 'c', 'id': 1109, 'synset': 'tights.n.01', 'synonyms': ['tights_(clothing)', 'leotards'], 'def': 'skintight knit hose covering the body from the waist to the feet worn by acrobats and dancers and as stockings by women and girls', 'name': 'tights_(clothing)'}, {'frequency': 'c', 'id': 1110, 'synset': 'timer.n.01', 'synonyms': ['timer', 'stopwatch'], 'def': 'a timepiece that measures a time interval and signals its end', 'name': 'timer'}, {'frequency': 'f', 'id': 1111, 'synset': 'tinfoil.n.01', 'synonyms': ['tinfoil'], 'def': 'foil made of tin or an alloy of tin and lead', 'name': 'tinfoil'}, {'frequency': 'r', 'id': 1112, 'synset': 'tinsel.n.01', 'synonyms': ['tinsel'], 'def': 'a showy decoration that is basically valueless', 'name': 'tinsel'}, {'frequency': 'f', 'id': 1113, 'synset': 'tissue.n.02', 'synonyms': ['tissue_paper'], 'def': 'a soft thin (usually translucent) paper', 'name': 'tissue_paper'}, {'frequency': 'c', 'id': 1114, 'synset': 'toast.n.01', 'synonyms': ['toast_(food)'], 'def': 'slice of bread that has been toasted', 'name': 'toast_(food)'}, {'frequency': 'f', 'id': 1115, 'synset': 'toaster.n.02', 'synonyms': ['toaster'], 'def': 'a kitchen appliance (usually electric) for toasting bread', 'name': 'toaster'}, {'frequency': 'c', 'id': 1116, 'synset': 'toaster_oven.n.01', 'synonyms': ['toaster_oven'], 'def': 'kitchen appliance consisting of a small electric oven for toasting or warming food', 'name': 'toaster_oven'}, {'frequency': 'f', 'id': 1117, 'synset': 'toilet.n.02', 'synonyms': ['toilet'], 'def': 'a plumbing fixture for defecation and urination', 'name': 'toilet'}, {'frequency': 'f', 'id': 1118, 'synset': 'toilet_tissue.n.01', 'synonyms': ['toilet_tissue', 'toilet_paper', 'bathroom_tissue'], 'def': 'a soft thin absorbent paper for use in toilets', 'name': 'toilet_tissue'}, {'frequency': 'f', 'id': 1119, 'synset': 'tomato.n.01', 'synonyms': ['tomato'], 'def': 'mildly acid red or yellow pulpy fruit eaten as a vegetable', 'name': 'tomato'}, {'frequency': 'c', 'id': 1120, 'synset': 'tongs.n.01', 'synonyms': ['tongs'], 'def': 'any of various devices for taking hold of objects; usually have two hinged legs with handles above and pointed hooks below', 'name': 'tongs'}, {'frequency': 'c', 'id': 1121, 'synset': 'toolbox.n.01', 'synonyms': ['toolbox'], 'def': 'a box or chest or cabinet for holding hand tools', 'name': 'toolbox'}, {'frequency': 'f', 'id': 1122, 'synset': 'toothbrush.n.01', 'synonyms': ['toothbrush'], 'def': 'small brush; has long handle; used to clean teeth', 'name': 'toothbrush'}, {'frequency': 'f', 'id': 1123, 'synset': 'toothpaste.n.01', 'synonyms': ['toothpaste'], 'def': 'a dentifrice in the form of a paste', 'name': 'toothpaste'}, 
{'frequency': 'c', 'id': 1124, 'synset': 'toothpick.n.01', 'synonyms': ['toothpick'], 'def': 'pick consisting of a small strip of wood or plastic; used to pick food from between the teeth', 'name': 'toothpick'}, {'frequency': 'c', 'id': 1125, 'synset': 'top.n.09', 'synonyms': ['cover'], 'def': 'covering for a hole (especially a hole in the top of a container)', 'name': 'cover'}, {'frequency': 'c', 'id': 1126, 'synset': 'tortilla.n.01', 'synonyms': ['tortilla'], 'def': 'thin unleavened pancake made from cornmeal or wheat flour', 'name': 'tortilla'}, {'frequency': 'c', 'id': 1127, 'synset': 'tow_truck.n.01', 'synonyms': ['tow_truck'], 'def': 'a truck equipped to hoist and pull wrecked cars (or to remove cars from no-parking zones)', 'name': 'tow_truck'}, {'frequency': 'f', 'id': 1128, 'synset': 'towel.n.01', 'synonyms': ['towel'], 'def': 'a rectangular piece of absorbent cloth (or paper) for drying or wiping', 'name': 'towel'}, {'frequency': 'f', 'id': 1129, 'synset': 'towel_rack.n.01', 'synonyms': ['towel_rack', 'towel_rail', 'towel_bar'], 'def': 'a rack consisting of one or more bars on which towels can be hung', 'name': 'towel_rack'}, {'frequency': 'f', 'id': 1130, 'synset': 'toy.n.03', 'synonyms': ['toy'], 'def': 'a device regarded as providing amusement', 'name': 'toy'}, {'frequency': 'c', 'id': 1131, 'synset': 'tractor.n.01', 'synonyms': ['tractor_(farm_equipment)'], 'def': 'a wheeled vehicle with large wheels; used in farming and other applications', 'name': 'tractor_(farm_equipment)'}, {'frequency': 'f', 'id': 1132, 'synset': 'traffic_light.n.01', 'synonyms': ['traffic_light'], 'def': 'a device to control vehicle traffic often consisting of three or more lights', 'name': 'traffic_light'}, {'frequency': 'r', 'id': 1133, 'synset': 'trail_bike.n.01', 'synonyms': ['dirt_bike'], 'def': 'a lightweight motorcycle equipped with rugged tires and suspension for off-road use', 'name': 'dirt_bike'}, {'frequency': 'c', 'id': 1134, 'synset': 'trailer_truck.n.01', 'synonyms': ['trailer_truck', 'tractor_trailer', 'trucking_rig', 'articulated_lorry', 'semi_truck'], 'def': 'a truck consisting of a tractor and trailer together', 'name': 'trailer_truck'}, {'frequency': 'f', 'id': 1135, 'synset': 'train.n.01', 'synonyms': ['train_(railroad_vehicle)', 'railroad_train'], 'def': 'public or private transport provided by a line of railway cars coupled together and drawn by a locomotive', 'name': 'train_(railroad_vehicle)'}, {'frequency': 'r', 'id': 1136, 'synset': 'trampoline.n.01', 'synonyms': ['trampoline'], 'def': 'gymnastic apparatus consisting of a strong canvas sheet attached with springs to a metal frame', 'name': 'trampoline'}, {'frequency': 'f', 'id': 1137, 'synset': 'tray.n.01', 'synonyms': ['tray'], 'def': 'an open receptacle for holding or displaying or serving articles or food', 'name': 'tray'}, {'frequency': 'r', 'id': 1138, 'synset': 'tree_house.n.01', 'synonyms': ['tree_house'], 'def': '(NOT A TREE) a PLAYHOUSE built in the branches of a tree', 'name': 'tree_house'}, {'frequency': 'r', 'id': 1139, 'synset': 'trench_coat.n.01', 'synonyms': ['trench_coat'], 'def': 'a military style raincoat; belted with deep pockets', 'name': 'trench_coat'}, {'frequency': 'r', 'id': 1140, 'synset': 'triangle.n.05', 'synonyms': ['triangle_(musical_instrument)'], 'def': 'a percussion instrument consisting of a metal bar bent in the shape of an open triangle', 'name': 'triangle_(musical_instrument)'}, {'frequency': 'r', 'id': 1141, 'synset': 'tricycle.n.01', 'synonyms': ['tricycle'], 'def': 'a vehicle with three 
wheels that is moved by foot pedals', 'name': 'tricycle'}, {'frequency': 'c', 'id': 1142, 'synset': 'tripod.n.01', 'synonyms': ['tripod'], 'def': 'a three-legged rack used for support', 'name': 'tripod'}, {'frequency': 'f', 'id': 1143, 'synset': 'trouser.n.01', 'synonyms': ['trousers', 'pants_(clothing)'], 'def': 'a garment extending from the waist to the knee or ankle, covering each leg separately', 'name': 'trousers'}, {'frequency': 'f', 'id': 1144, 'synset': 'truck.n.01', 'synonyms': ['truck'], 'def': 'an automotive vehicle suitable for hauling', 'name': 'truck'}, {'frequency': 'r', 'id': 1145, 'synset': 'truffle.n.03', 'synonyms': ['truffle_(chocolate)', 'chocolate_truffle'], 'def': 'creamy chocolate candy', 'name': 'truffle_(chocolate)'}, {'frequency': 'c', 'id': 1146, 'synset': 'trunk.n.02', 'synonyms': ['trunk'], 'def': 'luggage consisting of a large strong case used when traveling or for storage', 'name': 'trunk'}, {'frequency': 'r', 'id': 1147, 'synset': 'tub.n.02', 'synonyms': ['vat'], 'def': 'a large open vessel for holding or storing liquids', 'name': 'vat'}, {'frequency': 'c', 'id': 1148, 'synset': 'turban.n.01', 'synonyms': ['turban'], 'def': 'a traditional headdress consisting of a long scarf wrapped around the head', 'name': 'turban'}, {'frequency': 'r', 'id': 1149, 'synset': 'turkey.n.01', 'synonyms': ['turkey_(bird)'], 'def': 'large gallinaceous bird with fan-shaped tail; widely domesticated for food', 'name': 'turkey_(bird)'}, {'frequency': 'c', 'id': 1150, 'synset': 'turkey.n.04', 'synonyms': ['turkey_(food)'], 'def': 'flesh of large domesticated fowl usually roasted', 'name': 'turkey_(food)'}, {'frequency': 'r', 'id': 1151, 'synset': 'turnip.n.01', 'synonyms': ['turnip'], 'def': 'widely cultivated plant having a large fleshy edible white or yellow root', 'name': 'turnip'}, {'frequency': 'c', 'id': 1152, 'synset': 'turtle.n.02', 'synonyms': ['turtle'], 'def': 'any of various aquatic and land reptiles having a bony shell and flipper-like limbs for swimming', 'name': 'turtle'}, {'frequency': 'r', 'id': 1153, 'synset': 'turtleneck.n.01', 'synonyms': ['turtleneck_(clothing)', 'polo-neck'], 'def': 'a sweater or jersey with a high close-fitting collar', 'name': 'turtleneck_(clothing)'}, {'frequency': 'r', 'id': 1154, 'synset': 'typewriter.n.01', 'synonyms': ['typewriter'], 'def': 'hand-operated character printer for printing written messages one character at a time', 'name': 'typewriter'}, {'frequency': 'f', 'id': 1155, 'synset': 'umbrella.n.01', 'synonyms': ['umbrella'], 'def': 'a lightweight handheld collapsible canopy', 'name': 'umbrella'}, {'frequency': 'c', 'id': 1156, 'synset': 'underwear.n.01', 'synonyms': ['underwear', 'underclothes', 'underclothing', 'underpants'], 'def': 'undergarment worn next to the skin and under the outer garments', 'name': 'underwear'}, {'frequency': 'r', 'id': 1157, 'synset': 'unicycle.n.01', 'synonyms': ['unicycle'], 'def': 'a vehicle with a single wheel that is driven by pedals', 'name': 'unicycle'}, {'frequency': 'c', 'id': 1158, 'synset': 'urinal.n.01', 'synonyms': ['urinal'], 'def': 'a plumbing fixture (usually attached to the wall) used by men to urinate', 'name': 'urinal'}, {'frequency': 'r', 'id': 1159, 'synset': 'urn.n.01', 'synonyms': ['urn'], 'def': 'a large vase that usually has a pedestal or feet', 'name': 'urn'}, {'frequency': 'c', 'id': 1160, 'synset': 'vacuum.n.04', 'synonyms': ['vacuum_cleaner'], 'def': 'an electrical home appliance that cleans by suction', 'name': 'vacuum_cleaner'}, {'frequency': 'c', 'id': 1161, 'synset': 
'valve.n.03', 'synonyms': ['valve'], 'def': 'control consisting of a mechanical device for controlling the flow of a fluid', 'name': 'valve'}, {'frequency': 'f', 'id': 1162, 'synset': 'vase.n.01', 'synonyms': ['vase'], 'def': 'an open jar of glass or porcelain used as an ornament or to hold flowers', 'name': 'vase'}, {'frequency': 'c', 'id': 1163, 'synset': 'vending_machine.n.01', 'synonyms': ['vending_machine'], 'def': 'a slot machine for selling goods', 'name': 'vending_machine'}, {'frequency': 'f', 'id': 1164, 'synset': 'vent.n.01', 'synonyms': ['vent', 'blowhole', 'air_vent'], 'def': 'a hole for the escape of gas or air', 'name': 'vent'}, {'frequency': 'c', 'id': 1165, 'synset': 'videotape.n.01', 'synonyms': ['videotape'], 'def': 'a video recording made on magnetic tape', 'name': 'videotape'}, {'frequency': 'r', 'id': 1166, 'synset': 'vinegar.n.01', 'synonyms': ['vinegar'], 'def': 'sour-tasting liquid produced usually by oxidation of the alcohol in wine or cider and used as a condiment or food preservative', 'name': 'vinegar'}, {'frequency': 'r', 'id': 1167, 'synset': 'violin.n.01', 'synonyms': ['violin', 'fiddle'], 'def': 'bowed stringed instrument that is the highest member of the violin family', 'name': 'violin'}, {'frequency': 'r', 'id': 1168, 'synset': 'vodka.n.01', 'synonyms': ['vodka'], 'def': 'unaged colorless liquor originating in Russia', 'name': 'vodka'}, {'frequency': 'r', 'id': 1169, 'synset': 'volleyball.n.02', 'synonyms': ['volleyball'], 'def': 'an inflated ball used in playing volleyball', 'name': 'volleyball'}, {'frequency': 'r', 'id': 1170, 'synset': 'vulture.n.01', 'synonyms': ['vulture'], 'def': 'any of various large birds of prey having naked heads and weak claws and feeding chiefly on carrion', 'name': 'vulture'}, {'frequency': 'c', 'id': 1171, 'synset': 'waffle.n.01', 'synonyms': ['waffle'], 'def': 'pancake batter baked in a waffle iron', 'name': 'waffle'}, {'frequency': 'r', 'id': 1172, 'synset': 'waffle_iron.n.01', 'synonyms': ['waffle_iron'], 'def': 'a kitchen appliance for baking waffles', 'name': 'waffle_iron'}, {'frequency': 'c', 'id': 1173, 'synset': 'wagon.n.01', 'synonyms': ['wagon'], 'def': 'any of various kinds of wheeled vehicles drawn by an animal or a tractor', 'name': 'wagon'}, {'frequency': 'c', 'id': 1174, 'synset': 'wagon_wheel.n.01', 'synonyms': ['wagon_wheel'], 'def': 'a wheel of a wagon', 'name': 'wagon_wheel'}, {'frequency': 'c', 'id': 1175, 'synset': 'walking_stick.n.01', 'synonyms': ['walking_stick'], 'def': 'a stick carried in the hand for support in walking', 'name': 'walking_stick'}, {'frequency': 'c', 'id': 1176, 'synset': 'wall_clock.n.01', 'synonyms': ['wall_clock'], 'def': 'a clock mounted on a wall', 'name': 'wall_clock'}, {'frequency': 'f', 'id': 1177, 'synset': 'wall_socket.n.01', 'synonyms': ['wall_socket', 'wall_plug', 'electric_outlet', 'electrical_outlet', 'outlet', 'electric_receptacle'], 'def': 'receptacle providing a place in a wiring system where current can be taken to run electrical devices', 'name': 'wall_socket'}, {'frequency': 'c', 'id': 1178, 'synset': 'wallet.n.01', 'synonyms': ['wallet', 'billfold'], 'def': 'a pocket-size case for holding papers and paper money', 'name': 'wallet'}, {'frequency': 'r', 'id': 1179, 'synset': 'walrus.n.01', 'synonyms': ['walrus'], 'def': 'either of two large northern marine mammals having ivory tusks and tough hide over thick blubber', 'name': 'walrus'}, {'frequency': 'r', 'id': 1180, 'synset': 'wardrobe.n.01', 'synonyms': ['wardrobe'], 'def': 'a tall piece of furniture that provides 
storage space for clothes; has a door and rails or hooks for hanging clothes', 'name': 'wardrobe'}, {'frequency': 'r', 'id': 1181, 'synset': 'wasabi.n.02', 'synonyms': ['wasabi'], 'def': 'the thick green root of the wasabi plant that the Japanese use in cooking and that tastes like strong horseradish', 'name': 'wasabi'}, {'frequency': 'c', 'id': 1182, 'synset': 'washer.n.03', 'synonyms': ['automatic_washer', 'washing_machine'], 'def': 'a home appliance for washing clothes and linens automatically', 'name': 'automatic_washer'}, {'frequency': 'f', 'id': 1183, 'synset': 'watch.n.01', 'synonyms': ['watch', 'wristwatch'], 'def': 'a small, portable timepiece', 'name': 'watch'}, {'frequency': 'f', 'id': 1184, 'synset': 'water_bottle.n.01', 'synonyms': ['water_bottle'], 'def': 'a bottle for holding water', 'name': 'water_bottle'}, {'frequency': 'c', 'id': 1185, 'synset': 'water_cooler.n.01', 'synonyms': ['water_cooler'], 'def': 'a device for cooling and dispensing drinking water', 'name': 'water_cooler'}, {'frequency': 'c', 'id': 1186, 'synset': 'water_faucet.n.01', 'synonyms': ['water_faucet', 'water_tap', 'tap_(water_faucet)'], 'def': 'a faucet for drawing water from a pipe or cask', 'name': 'water_faucet'}, {'frequency': 'r', 'id': 1187, 'synset': 'water_filter.n.01', 'synonyms': ['water_filter'], 'def': 'a filter to remove impurities from the water supply', 'name': 'water_filter'}, {'frequency': 'r', 'id': 1188, 'synset': 'water_heater.n.01', 'synonyms': ['water_heater', 'hot-water_heater'], 'def': 'a heater and storage tank to supply heated water', 'name': 'water_heater'}, {'frequency': 'r', 'id': 1189, 'synset': 'water_jug.n.01', 'synonyms': ['water_jug'], 'def': 'a jug that holds water', 'name': 'water_jug'}, {'frequency': 'r', 'id': 1190, 'synset': 'water_pistol.n.01', 'synonyms': ['water_gun', 'squirt_gun'], 'def': 'plaything consisting of a toy pistol that squirts water', 'name': 'water_gun'}, {'frequency': 'c', 'id': 1191, 'synset': 'water_scooter.n.01', 'synonyms': ['water_scooter', 'sea_scooter', 'jet_ski'], 'def': 'a motorboat resembling a motor scooter (NOT A SURFBOARD OR WATER SKI)', 'name': 'water_scooter'}, {'frequency': 'c', 'id': 1192, 'synset': 'water_ski.n.01', 'synonyms': ['water_ski'], 'def': 'broad ski for skimming over water towed by a speedboat (DO NOT MARK WATER)', 'name': 'water_ski'}, {'frequency': 'c', 'id': 1193, 'synset': 'water_tower.n.01', 'synonyms': ['water_tower'], 'def': 'a large reservoir for water', 'name': 'water_tower'}, {'frequency': 'c', 'id': 1194, 'synset': 'watering_can.n.01', 'synonyms': ['watering_can'], 'def': 'a container with a handle and a spout with a perforated nozzle; used to sprinkle water over plants', 'name': 'watering_can'}, {'frequency': 'c', 'id': 1195, 'synset': 'watermelon.n.02', 'synonyms': ['watermelon'], 'def': 'large oblong or roundish melon with a hard green rind and sweet watery red or occasionally yellowish pulp', 'name': 'watermelon'}, {'frequency': 'f', 'id': 1196, 'synset': 'weathervane.n.01', 'synonyms': ['weathervane', 'vane_(weathervane)', 'wind_vane'], 'def': 'mechanical device attached to an elevated structure; rotates freely to show the direction of the wind', 'name': 'weathervane'}, {'frequency': 'c', 'id': 1197, 'synset': 'webcam.n.01', 'synonyms': ['webcam'], 'def': 'a digital camera designed to take digital photographs and transmit them over the internet', 'name': 'webcam'}, {'frequency': 'c', 'id': 1198, 'synset': 'wedding_cake.n.01', 'synonyms': ['wedding_cake', 'bridecake'], 'def': 'a rich cake with two or more 
tiers and covered with frosting and decorations; served at a wedding reception', 'name': 'wedding_cake'}, {'frequency': 'c', 'id': 1199, 'synset': 'wedding_ring.n.01', 'synonyms': ['wedding_ring', 'wedding_band'], 'def': 'a ring given to the bride and/or groom at the wedding', 'name': 'wedding_ring'}, {'frequency': 'f', 'id': 1200, 'synset': 'wet_suit.n.01', 'synonyms': ['wet_suit'], 'def': 'a close-fitting garment made of a permeable material; worn in cold water to retain body heat', 'name': 'wet_suit'}, {'frequency': 'f', 'id': 1201, 'synset': 'wheel.n.01', 'synonyms': ['wheel'], 'def': 'a circular frame with spokes (or a solid disc) that can rotate on a shaft or axle', 'name': 'wheel'}, {'frequency': 'c', 'id': 1202, 'synset': 'wheelchair.n.01', 'synonyms': ['wheelchair'], 'def': 'a movable chair mounted on large wheels', 'name': 'wheelchair'}, {'frequency': 'c', 'id': 1203, 'synset': 'whipped_cream.n.01', 'synonyms': ['whipped_cream'], 'def': 'cream that has been beaten until light and fluffy', 'name': 'whipped_cream'}, {'frequency': 'r', 'id': 1204, 'synset': 'whiskey.n.01', 'synonyms': ['whiskey'], 'def': 'a liquor made from fermented mash of grain', 'name': 'whiskey'}, {'frequency': 'r', 'id': 1205, 'synset': 'whistle.n.03', 'synonyms': ['whistle'], 'def': 'a small wind instrument that produces a whistling sound by blowing into it', 'name': 'whistle'}, {'frequency': 'r', 'id': 1206, 'synset': 'wick.n.02', 'synonyms': ['wick'], 'def': 'a loosely woven cord in a candle or oil lamp that is lit on fire', 'name': 'wick'}, {'frequency': 'c', 'id': 1207, 'synset': 'wig.n.01', 'synonyms': ['wig'], 'def': 'hairpiece covering the head and made of real or synthetic hair', 'name': 'wig'}, {'frequency': 'c', 'id': 1208, 'synset': 'wind_chime.n.01', 'synonyms': ['wind_chime'], 'def': 'a decorative arrangement of pieces of metal or glass or pottery that hang together loosely so the wind can cause them to tinkle', 'name': 'wind_chime'}, {'frequency': 'c', 'id': 1209, 'synset': 'windmill.n.01', 'synonyms': ['windmill'], 'def': 'a mill that is powered by the wind', 'name': 'windmill'}, {'frequency': 'c', 'id': 1210, 'synset': 'window_box.n.01', 'synonyms': ['window_box_(for_plants)'], 'def': 'a container for growing plants on a windowsill', 'name': 'window_box_(for_plants)'}, {'frequency': 'f', 'id': 1211, 'synset': 'windshield_wiper.n.01', 'synonyms': ['windshield_wiper', 'windscreen_wiper', 'wiper_(for_windshield/screen)'], 'def': 'a mechanical device that cleans the windshield', 'name': 'windshield_wiper'}, {'frequency': 'c', 'id': 1212, 'synset': 'windsock.n.01', 'synonyms': ['windsock', 'air_sock', 'air-sleeve', 'wind_sleeve', 'wind_cone'], 'def': 'a truncated cloth cone mounted on a mast/pole; shows wind direction', 'name': 'windsock'}, {'frequency': 'f', 'id': 1213, 'synset': 'wine_bottle.n.01', 'synonyms': ['wine_bottle'], 'def': 'a bottle for holding wine', 'name': 'wine_bottle'}, {'frequency': 'r', 'id': 1214, 'synset': 'wine_bucket.n.01', 'synonyms': ['wine_bucket', 'wine_cooler'], 'def': 'a bucket of ice used to chill a bottle of wine', 'name': 'wine_bucket'}, {'frequency': 'f', 'id': 1215, 'synset': 'wineglass.n.01', 'synonyms': ['wineglass'], 'def': 'a glass that has a stem and in which wine is served', 'name': 'wineglass'}, {'frequency': 'r', 'id': 1216, 'synset': 'wing_chair.n.01', 'synonyms': ['wing_chair'], 'def': 'easy chair having wings on each side of a high back', 'name': 'wing_chair'}, {'frequency': 'c', 'id': 1217, 'synset': 'winker.n.02', 'synonyms': ['blinder_(for_horses)'], 
'def': 'blinds that prevent a horse from seeing something on either side', 'name': 'blinder_(for_horses)'}, {'frequency': 'c', 'id': 1218, 'synset': 'wok.n.01', 'synonyms': ['wok'], 'def': 'pan with a convex bottom; used for frying in Chinese cooking', 'name': 'wok'}, {'frequency': 'r', 'id': 1219, 'synset': 'wolf.n.01', 'synonyms': ['wolf'], 'def': 'a wild carnivorous mammal of the dog family, living and hunting in packs', 'name': 'wolf'}, {'frequency': 'c', 'id': 1220, 'synset': 'wooden_spoon.n.02', 'synonyms': ['wooden_spoon'], 'def': 'a spoon made of wood', 'name': 'wooden_spoon'}, {'frequency': 'c', 'id': 1221, 'synset': 'wreath.n.01', 'synonyms': ['wreath'], 'def': 'an arrangement of flowers, leaves, or stems fastened in a ring', 'name': 'wreath'}, {'frequency': 'c', 'id': 1222, 'synset': 'wrench.n.03', 'synonyms': ['wrench', 'spanner'], 'def': 'a hand tool that is used to hold or twist a nut or bolt', 'name': 'wrench'}, {'frequency': 'c', 'id': 1223, 'synset': 'wristband.n.01', 'synonyms': ['wristband'], 'def': 'band consisting of a part of a sleeve that covers the wrist', 'name': 'wristband'}, {'frequency': 'f', 'id': 1224, 'synset': 'wristlet.n.01', 'synonyms': ['wristlet', 'wrist_band'], 'def': 'a band or bracelet worn around the wrist', 'name': 'wristlet'}, {'frequency': 'r', 'id': 1225, 'synset': 'yacht.n.01', 'synonyms': ['yacht'], 'def': 'an expensive vessel propelled by sail or power and used for cruising or racing', 'name': 'yacht'}, {'frequency': 'r', 'id': 1226, 'synset': 'yak.n.02', 'synonyms': ['yak'], 'def': 'large long-haired wild ox of Tibet often domesticated', 'name': 'yak'}, {'frequency': 'c', 'id': 1227, 'synset': 'yogurt.n.01', 'synonyms': ['yogurt', 'yoghurt', 'yoghourt'], 'def': 'a custard-like food made from curdled milk', 'name': 'yogurt'}, {'frequency': 'r', 'id': 1228, 'synset': 'yoke.n.07', 'synonyms': ['yoke_(animal_equipment)'], 'def': 'gear joining two animals at the neck; NOT egg yolk', 'name': 'yoke_(animal_equipment)'}, {'frequency': 'f', 'id': 1229, 'synset': 'zebra.n.01', 'synonyms': ['zebra'], 'def': 'any of several fleet black-and-white striped African equines', 'name': 'zebra'}, {'frequency': 'c', 'id': 1230, 'synset': 'zucchini.n.02', 'synonyms': ['zucchini', 'courgette'], 'def': 'small cucumber-shaped vegetable marrow; typically dark green', 'name': 'zucchini'}] # noqa -# fmt: on diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py deleted file mode 100644 index e564438d5bf016bcdbb65b4bbdc215d79f579f8a..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/register_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .coco import register_coco_instances # noqa -from .coco_panoptic import register_coco_panoptic_separated # noqa diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py deleted file mode 100644 index 30b1a3d6580cf0360710426fbea1f05acdf07b4b..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/hsigmoid.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSigmoid(nn.Module): - """Hard Sigmoid Module. Apply the hard sigmoid function: - Hsigmoid(x) = min(max((x + bias) / divisor, min_value), max_value) - Default: Hsigmoid(x) = min(max((x + 1) / 2, 0), 1) - - Args: - bias (float): Bias of the input feature map. Default: 1.0. - divisor (float): Divisor of the input feature map. Default: 2.0. - min_value (float): Lower bound value. Default: 0.0. - max_value (float): Upper bound value. Default: 1.0. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, bias=1.0, divisor=2.0, min_value=0.0, max_value=1.0): - super(HSigmoid, self).__init__() - self.bias = bias - self.divisor = divisor - assert self.divisor != 0 - self.min_value = min_value - self.max_value = max_value - - def forward(self, x): - x = (x + self.bias) / self.divisor - - return x.clamp_(self.min_value, self.max_value) diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/__init__.py b/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/models/stylegan2/op_ori/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/ssax/input-parse.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/ssax/input-parse.go deleted file mode 100644 index 85435720da17afcb250768fd5cf41d1c8ca998dd..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/sxml/ssax/input-parse.go and /dev/null differ diff --git a/spaces/Plurigrid/LifeSim/src/components/ui/command.tsx b/spaces/Plurigrid/LifeSim/src/components/ui/command.tsx deleted file mode 100644 index a4e602ef2508a071948aef7779023540c9f25381..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/components/ui/command.tsx +++ /dev/null @@ -1,155 +0,0 @@ -"use client" - -import * as React from "react" -import { DialogProps } from "@radix-ui/react-dialog" -import { Command as CommandPrimitive } from "cmdk" -import { Search } from "lucide-react" - -import { cn } from "@/lib/utils" -import { Dialog, DialogContent } from "@/components/ui/dialog" - -const Command = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -Command.displayName = CommandPrimitive.displayName - -interface CommandDialogProps extends DialogProps {} - -const CommandDialog = ({ children, ...props }: CommandDialogProps) => { - return ( - - - - {children} - - - - ) -} - -const CommandInput = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( -
-    [JSX stripped in extraction: a wrapper <div> containing the lucide Search icon and the <CommandPrimitive.Input> element]
    -)) - -CommandInput.displayName = CommandPrimitive.Input.displayName - -const CommandList = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) - -CommandList.displayName = CommandPrimitive.List.displayName - -const CommandEmpty = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->((props, ref) => ( - -)) - -CommandEmpty.displayName = CommandPrimitive.Empty.displayName - -const CommandGroup = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) - -CommandGroup.displayName = CommandPrimitive.Group.displayName - -const CommandSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -CommandSeparator.displayName = CommandPrimitive.Separator.displayName - -const CommandItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) - -CommandItem.displayName = CommandPrimitive.Item.displayName - -const CommandShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -CommandShortcut.displayName = "CommandShortcut" - -export { - Command, - CommandDialog, - CommandInput, - CommandList, - CommandEmpty, - CommandGroup, - CommandItem, - CommandShortcut, - CommandSeparator, -} diff --git a/spaces/PrathamDesai/fastai_bear_classifier/app.py b/spaces/PrathamDesai/fastai_bear_classifier/app.py deleted file mode 100644 index 9610db32a293315a4e0df59125bcae4e828d36c4..0000000000000000000000000000000000000000 --- a/spaces/PrathamDesai/fastai_bear_classifier/app.py +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# # Bear Classifier -# -# This is a prototype tool to deploy a model which classifies 3 bear categories namely Black, Grizzly and Teddy (Toys) -# -# Upload a picture of a bear and click classify to the results - - - -from fastai.vision.all import * -import gradio as gr -import skimage - - -learn_inf = load_learner('bear_model.pkl') -labels = learn_inf.dls.vocab - - -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn_inf.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -gr.Interface(fn=predict, inputs=gr.inputs.Image(shape=(512, 512)), outputs=gr.outputs.Label(num_top_classes=3), title = "Bear Classifier", -description = "A Bear Classifier trained with fastai. Created as a demo for Gradio and HuggingFace Spaces. Classifies from Grizzly, Black and Teddy(Toys). 
",interpretation='default', examples=['ted.jpg','grizzly.jpg']).launch(share=True) - diff --git a/spaces/R34Koba/ClaudeProxyGaming/README.md b/spaces/R34Koba/ClaudeProxyGaming/README.md deleted file mode 100644 index ecbd9deb8f370cd958acf9e2e58a99cb7057d6f8..0000000000000000000000000000000000000000 --- a/spaces/R34Koba/ClaudeProxyGaming/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ClaudeProxyGaming -emoji: 📚 -colorFrom: gray -colorTo: pink -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/RameshBanala/aivoicebot/README.md b/spaces/RameshBanala/aivoicebot/README.md deleted file mode 100644 index 4ecce4f056d6ec075d4532c863001817a466856a..0000000000000000000000000000000000000000 --- a/spaces/RameshBanala/aivoicebot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Aivoicebot -emoji: 🚀 -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/setopt.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/setopt.py deleted file mode 100644 index 6358c0451b2d0036e3821d897fb6f7ab436ee4a9..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/setopt.py +++ /dev/null @@ -1,149 +0,0 @@ -from distutils.util import convert_path -from distutils import log -from distutils.errors import DistutilsOptionError -import distutils -import os -import configparser - -from setuptools import Command - -__all__ = ['config_file', 'edit_config', 'option_base', 'setopt'] - - -def config_file(kind="local"): - """Get the filename of the distutils, local, global, or per-user config - - `kind` must be one of "local", "global", or "user" - """ - if kind == 'local': - return 'setup.cfg' - if kind == 'global': - return os.path.join( - os.path.dirname(distutils.__file__), 'distutils.cfg' - ) - if kind == 'user': - dot = os.name == 'posix' and '.' or '' - return os.path.expanduser(convert_path("~/%spydistutils.cfg" % dot)) - raise ValueError( - "config_file() type must be 'local', 'global', or 'user'", kind - ) - - -def edit_config(filename, settings, dry_run=False): - """Edit a configuration file to include `settings` - - `settings` is a dictionary of dictionaries or ``None`` values, keyed by - command/section name. A ``None`` value means to delete the entire section, - while a dictionary lists settings to be changed or deleted in that section. - A setting of ``None`` means to delete that setting. 
- """ - log.debug("Reading configuration from %s", filename) - opts = configparser.RawConfigParser() - opts.optionxform = lambda x: x - opts.read([filename]) - for section, options in settings.items(): - if options is None: - log.info("Deleting section [%s] from %s", section, filename) - opts.remove_section(section) - else: - if not opts.has_section(section): - log.debug("Adding new section [%s] to %s", section, filename) - opts.add_section(section) - for option, value in options.items(): - if value is None: - log.debug( - "Deleting %s.%s from %s", - section, option, filename - ) - opts.remove_option(section, option) - if not opts.options(section): - log.info("Deleting empty [%s] section from %s", - section, filename) - opts.remove_section(section) - else: - log.debug( - "Setting %s.%s to %r in %s", - section, option, value, filename - ) - opts.set(section, option, value) - - log.info("Writing %s", filename) - if not dry_run: - with open(filename, 'w') as f: - opts.write(f) - - -class option_base(Command): - """Abstract base class for commands that mess with config files""" - - user_options = [ - ('global-config', 'g', - "save options to the site-wide distutils.cfg file"), - ('user-config', 'u', - "save options to the current user's pydistutils.cfg file"), - ('filename=', 'f', - "configuration file to use (default=setup.cfg)"), - ] - - boolean_options = [ - 'global-config', 'user-config', - ] - - def initialize_options(self): - self.global_config = None - self.user_config = None - self.filename = None - - def finalize_options(self): - filenames = [] - if self.global_config: - filenames.append(config_file('global')) - if self.user_config: - filenames.append(config_file('user')) - if self.filename is not None: - filenames.append(self.filename) - if not filenames: - filenames.append(config_file('local')) - if len(filenames) > 1: - raise DistutilsOptionError( - "Must specify only one configuration file option", - filenames - ) - self.filename, = filenames - - -class setopt(option_base): - """Save command-line options to a file""" - - description = "set an option in setup.cfg or another config file" - - user_options = [ - ('command=', 'c', 'command to set an option for'), - ('option=', 'o', 'option to set'), - ('set-value=', 's', 'value of the option'), - ('remove', 'r', 'remove (unset) the value'), - ] + option_base.user_options - - boolean_options = option_base.boolean_options + ['remove'] - - def initialize_options(self): - option_base.initialize_options(self) - self.command = None - self.option = None - self.set_value = None - self.remove = None - - def finalize_options(self): - option_base.finalize_options(self) - if self.command is None or self.option is None: - raise DistutilsOptionError("Must specify --command *and* --option") - if self.set_value is None and not self.remove: - raise DistutilsOptionError("Must specify --set-value or --remove") - - def run(self): - edit_config( - self.filename, { - self.command: {self.option.replace('-', '_'): self.set_value} - }, - self.dry_run - ) diff --git a/spaces/Rbrq/DeticChatGPT/tools/create_lvis_21k.py b/spaces/Rbrq/DeticChatGPT/tools/create_lvis_21k.py deleted file mode 100644 index 3e6fe60a2d579d1ef1f3610f600a915155c81fed..0000000000000000000000000000000000000000 --- a/spaces/Rbrq/DeticChatGPT/tools/create_lvis_21k.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import copy -import json - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--imagenet_path', default='datasets/imagenet/annotations/imagenet-21k_image_info.json') - parser.add_argument('--lvis_path', default='datasets/lvis/lvis_v1_train.json') - parser.add_argument('--save_categories', default='') - parser.add_argument('--not_save_imagenet', action='store_true') - parser.add_argument('--not_save_lvis', action='store_true') - parser.add_argument('--mark', default='lvis-21k') - args = parser.parse_args() - - print('Loading', args.imagenet_path) - in_data = json.load(open(args.imagenet_path, 'r')) - print('Loading', args.lvis_path) - lvis_data = json.load(open(args.lvis_path, 'r')) - - categories = copy.deepcopy(lvis_data['categories']) - cat_count = max(x['id'] for x in categories) - synset2id = {x['synset']: x['id'] for x in categories} - name2id = {x['name']: x['id'] for x in categories} - in_id_map = {} - for x in in_data['categories']: - if x['synset'] in synset2id: - in_id_map[x['id']] = synset2id[x['synset']] - elif x['name'] in name2id: - in_id_map[x['id']] = name2id[x['name']] - x['id'] = name2id[x['name']] - else: - cat_count = cat_count + 1 - name2id[x['name']] = cat_count - in_id_map[x['id']] = cat_count - x['id'] = cat_count - categories.append(x) - - print('lvis cats', len(lvis_data['categories'])) - print('imagenet cats', len(in_data['categories'])) - print('merge cats', len(categories)) - - filtered_images = [] - for x in in_data['images']: - x['pos_category_ids'] = [in_id_map[xx] for xx in x['pos_category_ids']] - x['pos_category_ids'] = [xx for xx in \ - sorted(set(x['pos_category_ids'])) if xx >= 0] - if len(x['pos_category_ids']) > 0: - filtered_images.append(x) - - in_data['categories'] = categories - lvis_data['categories'] = categories - - if not args.not_save_imagenet: - in_out_path = args.imagenet_path[:-5] + '_{}.json'.format(args.mark) - for k, v in in_data.items(): - print('imagenet', k, len(v)) - print('Saving Imagenet to', in_out_path) - json.dump(in_data, open(in_out_path, 'w')) - - if not args.not_save_lvis: - lvis_out_path = args.lvis_path[:-5] + '_{}.json'.format(args.mark) - for k, v in lvis_data.items(): - print('lvis', k, len(v)) - print('Saving LVIS to', lvis_out_path) - json.dump(lvis_data, open(lvis_out_path, 'w')) - - if args.save_categories != '': - for x in categories: - for k in ['image_count', 'instance_count', 'synonyms', 'def']: - if k in x: - del x[k] - CATEGORIES = repr(categories) + " # noqa" - open(args.save_categories, 'wt').write(f"CATEGORIES = {CATEGORIES}") diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/utils/JPEG.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/utils/JPEG.py deleted file mode 100644 index 7cdd7fa91ee424250f241ecc7de63d868795aaa7..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/datasets/InvISP/utils/JPEG.py +++ /dev/null @@ -1,38 +0,0 @@ -import torch -import torch.nn as nn - -from .JPEG_utils import diff_round, quality_to_factor, Quantization -from .compression import compress_jpeg -from .decompression import decompress_jpeg - - -class DiffJPEG(nn.Module): - def __init__(self, differentiable=True, quality=75): - """Initialize the DiffJPEG layer - Inputs: - height(int): Original image height - width(int): Original image width - differentiable(bool): If true uses custom differentiable - rounding function, if false uses standrard torch.round - 
quality(float): Quality factor for jpeg compression scheme. - """ - super(DiffJPEG, self).__init__() - if differentiable: - rounding = diff_round - # rounding = Quantization() - else: - rounding = torch.round - factor = quality_to_factor(quality) - self.compress = compress_jpeg(rounding=rounding, factor=factor) - # self.decompress = decompress_jpeg(height, width, rounding=rounding, - # factor=factor) - self.decompress = decompress_jpeg(rounding=rounding, factor=factor) - - def forward(self, x): - """ """ - org_height = x.shape[2] - org_width = x.shape[3] - y, cb, cr = self.compress(x) - - recovered = self.decompress(y, cb, cr, org_height, org_width) - return recovered diff --git a/spaces/Realcat/image-matching-webui/third_party/lanet/train.py b/spaces/Realcat/image-matching-webui/third_party/lanet/train.py deleted file mode 100644 index e82900a3b27f8954c65f7bf4127f38a65ac76fff..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/lanet/train.py +++ /dev/null @@ -1,152 +0,0 @@ -import os -import torch -import torch.optim as optim -from tqdm import tqdm - -from torch.autograd import Variable - -from network_v0.model import PointModel -from loss_function import KeypointLoss - - -class Trainer(object): - def __init__(self, config, train_loader=None): - self.config = config - # data parameters - self.train_loader = train_loader - self.num_train = len(self.train_loader) - - # training parameters - self.max_epoch = config.max_epoch - self.start_epoch = config.start_epoch - self.momentum = config.momentum - self.lr = config.init_lr - self.lr_factor = config.lr_factor - self.display = config.display - - # misc params - self.use_gpu = config.use_gpu - self.random_seed = config.seed - self.gpu = config.gpu - self.ckpt_dir = config.ckpt_dir - self.ckpt_name = "{}-{}".format(config.ckpt_name, config.seed) - - # build model - self.model = PointModel(is_test=False) - - # training on GPU - if self.use_gpu: - torch.cuda.set_device(self.gpu) - self.model.cuda() - - print( - "Number of model parameters: {:,}".format( - sum([p.data.nelement() for p in self.model.parameters()]) - ) - ) - - # build loss functional - self.loss_func = KeypointLoss(config) - - # build optimizer and scheduler - self.optimizer = optim.Adam(self.model.parameters(), lr=self.lr) - self.lr_scheduler = optim.lr_scheduler.MultiStepLR( - self.optimizer, milestones=[4, 8], gamma=self.lr_factor - ) - - # resume - if int(self.config.start_epoch) > 0: - ( - self.config.start_epoch, - self.model, - self.optimizer, - self.lr_scheduler, - ) = self.load_checkpoint( - int(self.config.start_epoch), - self.model, - self.optimizer, - self.lr_scheduler, - ) - - def train(self): - print("\nTrain on {} samples".format(self.num_train)) - self.save_checkpoint(0, self.model, self.optimizer, self.lr_scheduler) - for epoch in range(self.start_epoch, self.max_epoch): - print( - "\nEpoch: {}/{} --lr: {:.6f}".format(epoch + 1, self.max_epoch, self.lr) - ) - # train for one epoch - self.train_one_epoch(epoch) - if self.lr_scheduler: - self.lr_scheduler.step() - self.save_checkpoint( - epoch + 1, self.model, self.optimizer, self.lr_scheduler - ) - - def train_one_epoch(self, epoch): - self.model.train() - for (i, data) in enumerate(tqdm(self.train_loader)): - - if self.use_gpu: - source_img = data["image_aug"].cuda() - target_img = data["image"].cuda() - homography = data["homography"].cuda() - - source_img = Variable(source_img) - target_img = Variable(target_img) - homography = Variable(homography) - - # forward 
propogation - output = self.model(source_img, target_img, homography) - - # compute loss - loss, loc_loss, desc_loss, score_loss, corres_loss = self.loss_func(output) - - # compute gradients and update - self.optimizer.zero_grad() - loss.backward() - self.optimizer.step() - - # print training info - msg_batch = ( - "Epoch:{} Iter:{} lr:{:.4f} " - "loc_loss={:.4f} desc_loss={:.4f} score_loss={:.4f} corres_loss={:.4f} " - "loss={:.4f} ".format( - (epoch + 1), - i, - self.lr, - loc_loss.data, - desc_loss.data, - score_loss.data, - corres_loss.data, - loss.data, - ) - ) - - if (i % self.display) == 0: - print(msg_batch) - return - - def save_checkpoint(self, epoch, model, optimizer, lr_scheduler): - filename = self.ckpt_name + "_" + str(epoch) + ".pth" - torch.save( - { - "epoch": epoch, - "model_state": model.state_dict(), - "optimizer_state": optimizer.state_dict(), - "lr_scheduler": lr_scheduler.state_dict(), - }, - os.path.join(self.ckpt_dir, filename), - ) - - def load_checkpoint(self, epoch, model, optimizer, lr_scheduler): - filename = self.ckpt_name + "_" + str(epoch) + ".pth" - ckpt = torch.load(os.path.join(self.ckpt_dir, filename)) - epoch = ckpt["epoch"] - model.load_state_dict(ckpt["model_state"]) - optimizer.load_state_dict(ckpt["optimizer_state"]) - lr_scheduler.load_state_dict(ckpt["lr_scheduler"]) - - print("[*] Loaded {} checkpoint @ epoch {}".format(filename, ckpt["epoch"])) - - return epoch, model, optimizer, lr_scheduler diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/sync_buffer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/sync_buffer.py deleted file mode 100644 index 6376b7ff894280cb2782243b25e8973650591577..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/sync_buffer.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..dist_utils import allreduce_params -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class SyncBuffersHook(Hook): - """Synchronize model buffers such as running_mean and running_var in BN at - the end of each epoch. - - Args: - distributed (bool): Whether distributed training is used. It is - effective only for distributed training. Defaults to True. 
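-
-    A minimal sketch of enabling this hook from a config file, assuming the
-    usual ``custom_hooks`` mechanism of mmcv-based runners::
-
-        custom_hooks = [
-            dict(type='SyncBuffersHook', distributed=True),
-        ]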
- """ - - def __init__(self, distributed=True): - self.distributed = distributed - - def after_epoch(self, runner): - """All-reduce model buffers at the end of each epoch.""" - if self.distributed: - allreduce_params(runner.model.buffers()) diff --git a/spaces/SIGGRAPH2022/DCT-Net/source/facelib/LK/lk.py b/spaces/SIGGRAPH2022/DCT-Net/source/facelib/LK/lk.py deleted file mode 100644 index df05e3f9035656ec0861f9d2913e34a4219cb702..0000000000000000000000000000000000000000 --- a/spaces/SIGGRAPH2022/DCT-Net/source/facelib/LK/lk.py +++ /dev/null @@ -1,97 +0,0 @@ -import numpy as np - -from modelscope.models.cv.cartoon.facelib.config import config as cfg - - -class GroupTrack(): - - def __init__(self): - self.old_frame = None - self.previous_landmarks_set = None - self.with_landmark = True - self.thres = cfg.TRACE.pixel_thres - self.alpha = cfg.TRACE.smooth_landmark - self.iou_thres = cfg.TRACE.iou_thres - - def calculate(self, img, current_landmarks_set): - if self.previous_landmarks_set is None: - self.previous_landmarks_set = current_landmarks_set - result = current_landmarks_set - else: - previous_lm_num = self.previous_landmarks_set.shape[0] - if previous_lm_num == 0: - self.previous_landmarks_set = current_landmarks_set - result = current_landmarks_set - return result - else: - result = [] - for i in range(current_landmarks_set.shape[0]): - not_in_flag = True - for j in range(previous_lm_num): - if self.iou(current_landmarks_set[i], - self.previous_landmarks_set[j] - ) > self.iou_thres: - result.append( - self.smooth(current_landmarks_set[i], - self.previous_landmarks_set[j])) - not_in_flag = False - break - if not_in_flag: - result.append(current_landmarks_set[i]) - - result = np.array(result) - self.previous_landmarks_set = result - - return result - - def iou(self, p_set0, p_set1): - rec1 = [ - np.min(p_set0[:, 0]), - np.min(p_set0[:, 1]), - np.max(p_set0[:, 0]), - np.max(p_set0[:, 1]) - ] - rec2 = [ - np.min(p_set1[:, 0]), - np.min(p_set1[:, 1]), - np.max(p_set1[:, 0]), - np.max(p_set1[:, 1]) - ] - - # computing area of each rectangles - S_rec1 = (rec1[2] - rec1[0]) * (rec1[3] - rec1[1]) - S_rec2 = (rec2[2] - rec2[0]) * (rec2[3] - rec2[1]) - - # computing the sum_area - sum_area = S_rec1 + S_rec2 - - # find the each edge of intersect rectangle - x1 = max(rec1[0], rec2[0]) - y1 = max(rec1[1], rec2[1]) - x2 = min(rec1[2], rec2[2]) - y2 = min(rec1[3], rec2[3]) - - # judge if there is an intersect - intersect = max(0, x2 - x1) * max(0, y2 - y1) - - iou = intersect / (sum_area - intersect) - return iou - - def smooth(self, now_landmarks, previous_landmarks): - result = [] - for i in range(now_landmarks.shape[0]): - x = now_landmarks[i][0] - previous_landmarks[i][0] - y = now_landmarks[i][1] - previous_landmarks[i][1] - dis = np.sqrt(np.square(x) + np.square(y)) - if dis < self.thres: - result.append(previous_landmarks[i]) - else: - result.append( - self.do_moving_average(now_landmarks[i], - previous_landmarks[i])) - - return np.array(result) - - def do_moving_average(self, p_now, p_previous): - p = self.alpha * p_now + (1 - self.alpha) * p_previous - return p diff --git a/spaces/SalahZa/Tunisian-ASR-v0/partly_frozen_splitted_wavlm/ctc_train.py b/spaces/SalahZa/Tunisian-ASR-v0/partly_frozen_splitted_wavlm/ctc_train.py deleted file mode 100644 index 39b6b13ff99870adb71e2bcffca4ce2479405a08..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Tunisian-ASR-v0/partly_frozen_splitted_wavlm/ctc_train.py +++ /dev/null @@ -1,339 +0,0 @@ -#!/usr/bin/env/python3 -"""Recipe for training a 
wav2vec-based ctc ASR system with librispeech. -The system employs wav2vec as its encoder. Decoding is performed with -ctc greedy decoder. -To run this recipe, do the following: -> python train_with_wav2vec.py hparams/train_with_wav2vec.yaml -The neural network is trained on CTC likelihood target and character units -are used as basic recognition tokens. Training is performed on the full -LibriSpeech dataset (960 h). - -Authors - * Sung-Lin Yeh 2021 - * Titouan Parcollet 2021 - * Ju-Chieh Chou 2020 - * Mirco Ravanelli 2020 - * Abdel Heba 2020 - * Peter Plantinga 2020 - * Samuele Cornell 2020 -""" - -import os -import sys -import torch -import logging -import speechbrain as sb -from speechbrain.utils.distributed import run_on_main -from hyperpyyaml import load_hyperpyyaml -from pathlib import Path -import torchaudio.transforms as T -logger = logging.getLogger(__name__) - -# Define training procedure -class ASR(sb.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - tokens_bos, _ = batch.tokens_bos - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - # Forward pass - feats = self.modules.wav2vec2(wavs) - x = self.modules.enc(feats) - # Compute outputs - p_tokens = None - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - if stage != sb.Stage.TRAIN: - p_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - return p_ctc, wav_lens, p_tokens - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC+NLL) given predictions and targets.""" - - p_ctc, wav_lens, predicted_tokens = predictions - - ids = batch.id - tokens_eos, tokens_eos_lens = batch.tokens_eos - tokens, tokens_lens = batch.tokens - - if hasattr(self.modules, "env_corrupt") and stage == sb.Stage.TRAIN: - tokens_eos = torch.cat([tokens_eos, tokens_eos], dim=0) - tokens_eos_lens = torch.cat( - [tokens_eos_lens, tokens_eos_lens], dim=0 - ) - tokens = torch.cat([tokens, tokens], dim=0) - tokens_lens = torch.cat([tokens_lens, tokens_lens], dim=0) - - loss_ctc = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - loss = loss_ctc - - if stage != sb.Stage.TRAIN: - # Decode token terms to words - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - predictions = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(predictions, batch, sb.Stage.TRAIN) - loss.backward() - if self.check_gradients(loss): - self.wav2vec_optimizer.step() - self.model_optimizer.step() - - self.wav2vec_optimizer.zero_grad() - self.model_optimizer.zero_grad() - - return loss.detach() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric 
= self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - -def dataio_prepare(hparams): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted(sort_key="duration") - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["train_dataloader_opts"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", reverse=True - ) - # when sorting do not shuffle in dataloader ! 
otherwise is pointless - hparams["train_dataloader_opts"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - valid_data = valid_data.filtered_sorted(sort_key="duration") - - # test is separate - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav", "sr") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav, sr): - sig = sb.dataio.dataio.read_audio(wav) - sig = resamplers[sr](sig) - return sig - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens_bos", "tokens_eos", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens_bos = torch.LongTensor([hparams["bos_index"]] + (tokens_list)) - yield tokens_bos - tokens_eos = torch.LongTensor(tokens_list + [hparams["eos_index"]]) - yield tokens_eos - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "bos_label": hparams["bos_index"], - "eos_label": hparams["eos_index"], - "blank_label": hparams["blank_index"], - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. 
Set output: - sb.dataio.dataset.set_output_keys( - datasets, - ["id", "sig", "wrd", "char_list", "tokens_bos", "tokens_eos", "tokens"], - ) - return train_data, valid_data, test_datasets, label_encoder - - -if __name__ == "__main__": - - # CLI: - hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - - # If distributed_launch=True then - # create ddp_group with the right communication protocol - sb.utils.distributed.ddp_init_group(run_opts) - - with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - - # Create experiment directory - sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, - ) - - # Dataset prep (parsing Librispeech) - - resampler_8000 = T.Resample(8000, 16000, dtype=torch.float) - - resampler_44100 =T.Resample(44100, 16000, dtype=torch.float) - resampler_32000 =T.Resample(32000, 16000, dtype=torch.float) - resampler_48000 =T.Resample(48000, 16000, dtype=torch.float) - - - resamplers = {"48000": resampler_48000,"8000": resampler_8000, "44100":resampler_44100, "32000":resampler_32000} - - # here we create the datasets objects as well as tokenization and encoding - train_data, valid_data, test_datasets, label_encoder = dataio_prepare( - hparams - ) - - # Trainer initialization - asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], - ) - asr_brain.device= "cpu" - asr_brain.modules.to("cpu") - - # We dynamicaly add the tokenizer to our brain class. - # NB: This tokenizer corresponds to the one used for the LM!! - asr_brain.tokenizer = label_encoder - - # Training - asr_brain.fit( - asr_brain.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["train_dataloader_opts"], - valid_loader_kwargs=hparams["valid_dataloader_opts"], - ) - - # Testing - for k in test_datasets.keys(): # keys are test_clean, test_other etc - asr_brain.hparams.wer_file = os.path.join( - hparams["output_folder"], "wer_{}.txt".format(k) - ) - asr_brain.evaluate( - test_datasets[k], test_loader_kwargs=hparams["test_dataloader_opts"] - ) diff --git a/spaces/Sapiensia/diffuse-the-rest/build/index.html b/spaces/Sapiensia/diffuse-the-rest/build/index.html deleted file mode 100644 index 86c28c048d5c5a0015faf3ace74e0b73c190edc4..0000000000000000000000000000000000000000 --- a/spaces/Sapiensia/diffuse-the-rest/build/index.html +++ /dev/null @@ -1,57 +0,0 @@ - - - - - - - - - - - - - - - - -
-    [page markup stripped in extraction: the deleted body renders a loading placeholder with the text "Loading…" and a text progress bar █▒▒▒▒▒▒▒▒▒]
    - - diff --git a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/dataloader.py b/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/dataloader.py deleted file mode 100644 index 7b21feec06a2ac7d6adc68b0f142cb0488478b07..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/joints_detectors/Alphapose/dataloader.py +++ /dev/null @@ -1,782 +0,0 @@ -import os -import sys -import time -from multiprocessing import Queue as pQueue -from threading import Thread - -import cv2 -import numpy as np -import torch -import torch.multiprocessing as mp -import torch.utils.data as data -import torchvision.transforms as transforms -from PIL import Image -from torch.autograd import Variable - -from SPPE.src.utils.eval import getPrediction, getMultiPeakPrediction -from SPPE.src.utils.img import load_image, cropBox, im_to_torch -from matching import candidate_reselect as matching -from opt import opt -from pPose_nms import pose_nms -from yolo.darknet import Darknet -from yolo.preprocess import prep_image, prep_frame -from yolo.util import dynamic_write_results - -# import the Queue class from Python 3 -if sys.version_info >= (3, 0): - from queue import Queue, LifoQueue -# otherwise, import the Queue class for Python 2.7 -else: - from Queue import Queue, LifoQueue - -if opt.vis_fast: - from fn import vis_frame_fast as vis_frame -else: - from fn import vis_frame - - -class Image_loader(data.Dataset): - def __init__(self, im_names, format='yolo'): - super(Image_loader, self).__init__() - self.img_dir = opt.inputpath - self.imglist = im_names - self.transform = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)) - ]) - self.format = format - - def getitem_ssd(self, index): - im_name = self.imglist[index].rstrip('\n').rstrip('\r') - im_name = os.path.join(self.img_dir, im_name) - im = Image.open(im_name) - inp = load_image(im_name) - if im.mode == 'L': - im = im.convert('RGB') - - ow = oh = 512 - im = im.resize((ow, oh)) - im = self.transform(im) - return im, inp, im_name - - def getitem_yolo(self, index): - inp_dim = int(opt.inp_dim) - im_name = self.imglist[index].rstrip('\n').rstrip('\r') - im_name = os.path.join(self.img_dir, im_name) - im, orig_img, im_dim = prep_image(im_name, inp_dim) - # im_dim = torch.FloatTensor([im_dim]).repeat(1, 2) - - inp = load_image(im_name) - return im, inp, orig_img, im_name, im_dim - - def __getitem__(self, index): - if self.format == 'ssd': - return self.getitem_ssd(index) - elif self.format == 'yolo': - return self.getitem_yolo(index) - else: - raise NotImplementedError - - def __len__(self): - return len(self.imglist) - - -class ImageLoader: - def __init__(self, im_names, batchSize=1, format='yolo', queueSize=50): - self.img_dir = opt.inputpath - self.imglist = im_names - self.transform = transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)) - ]) - self.format = format - - self.batchSize = batchSize - self.datalen = len(self.imglist) - leftover = 0 - if (self.datalen) % batchSize: - leftover = 1 - self.num_batches = self.datalen // batchSize + leftover - - # initialize the queue used to store data - if opt.sp: - self.Q = Queue(maxsize=queueSize) - else: - self.Q = mp.Queue(maxsize=queueSize) - - def start(self): - # start a thread to read frames from the file video stream - if self.format == 'ssd': - if opt.sp: - p = Thread(target=self.getitem_ssd, args=()) - else: - p = mp.Process(target=self.getitem_ssd, args=()) - 
elif self.format == 'yolo': - if opt.sp: - p = Thread(target=self.getitem_yolo, args=()) - else: - p = mp.Process(target=self.getitem_yolo, args=()) - else: - raise NotImplementedError - p.daemon = True - p.start() - return self - - def getitem_ssd(self): - length = len(self.imglist) - for index in range(length): - im_name = self.imglist[index].rstrip('\n').rstrip('\r') - im_name = os.path.join(self.img_dir, im_name) - im = Image.open(im_name) - inp = load_image(im_name) - if im.mode == 'L': - im = im.convert('RGB') - - ow = oh = 512 - im = im.resize((ow, oh)) - im = self.transform(im) - while self.Q.full(): - time.sleep(2) - self.Q.put((im, inp, im_name)) - - def getitem_yolo(self): - for i in range(self.num_batches): - img = [] - orig_img = [] - im_name = [] - im_dim_list = [] - for k in range(i * self.batchSize, min((i + 1) * self.batchSize, self.datalen)): - inp_dim = int(opt.inp_dim) - im_name_k = self.imglist[k].rstrip('\n').rstrip('\r') - im_name_k = os.path.join(self.img_dir, im_name_k) - img_k, orig_img_k, im_dim_list_k = prep_image(im_name_k, inp_dim) - - img.append(img_k) - orig_img.append(orig_img_k) - im_name.append(im_name_k) - im_dim_list.append(im_dim_list_k) - - with torch.no_grad(): - # Human Detection - img = torch.cat(img) - im_dim_list = torch.FloatTensor(im_dim_list).repeat(1, 2) - im_dim_list_ = im_dim_list - - while self.Q.full(): - time.sleep(2) - - self.Q.put((img, orig_img, im_name, im_dim_list)) - - def getitem(self): - return self.Q.get() - - def length(self): - return len(self.imglist) - - def len(self): - return self.Q.qsize() - - -class VideoLoader: - def __init__(self, path, batchSize=1, queueSize=50): - # initialize the file video stream along with the boolean - # used to indicate if the thread should be stopped or not - self.path = path - self.stream = cv2.VideoCapture(path) - assert self.stream.isOpened(), 'Cannot capture source' - self.stopped = False - - self.batchSize = batchSize - self.datalen = int(self.stream.get(cv2.CAP_PROP_FRAME_COUNT)) - leftover = 0 - if (self.datalen) % batchSize: - leftover = 1 - self.num_batches = self.datalen // batchSize + leftover - - # initialize the queue used to store frames read from - # the video file - if opt.sp: - self.Q = Queue(maxsize=queueSize) - else: - self.Q = mp.Queue(maxsize=queueSize) - - def length(self): - return self.datalen - - def start(self): - # start a thread to read frames from the file video stream - if opt.sp: - t = Thread(target=self.update, args=()) - t.daemon = True - t.start() - else: - p = mp.Process(target=self.update, args=()) - p.daemon = True - p.start() - return self - - def update(self): - stream = cv2.VideoCapture(self.path) - assert stream.isOpened(), 'Cannot capture source' - - for i in range(self.num_batches): - img = [] - orig_img = [] - im_name = [] - im_dim_list = [] - for k in range(i * self.batchSize, min((i + 1) * self.batchSize, self.datalen)): - inp_dim = int(opt.inp_dim) - (grabbed, frame) = stream.read() - # if the `grabbed` boolean is `False`, then we have - # reached the end of the video file - if not grabbed: - self.Q.put((None, None, None, None)) - print('===========================> This video get ' + str(k) + ' frames in total.') - sys.stdout.flush() - return - # process and add the frame to the queue - img_k, orig_img_k, im_dim_list_k = prep_frame(frame, inp_dim) - - img.append(img_k) - orig_img.append(orig_img_k) - im_name.append(str(k) + '.jpg') - im_dim_list.append(im_dim_list_k) - - with torch.no_grad(): - # Human Detection - img = torch.cat(img) - 
im_dim_list = torch.FloatTensor(im_dim_list).repeat(1, 2) - - while self.Q.full(): - time.sleep(2) - - self.Q.put((img, orig_img, im_name, im_dim_list)) - - def videoinfo(self): - # indicate the video info - fourcc = int(self.stream.get(cv2.CAP_PROP_FOURCC)) - fps = self.stream.get(cv2.CAP_PROP_FPS) - frameSize = (int(self.stream.get(cv2.CAP_PROP_FRAME_WIDTH)), int(self.stream.get(cv2.CAP_PROP_FRAME_HEIGHT))) - return (fourcc, fps, frameSize) - - def getitem(self): - # return next frame in the queue - return self.Q.get() - - def len(self): - return self.Q.qsize() - - -class DetectionLoader: - def __init__(self, dataloder, batchSize=1, queueSize=1024): - # initialize the file video stream along with the boolean - # used to indicate if the thread should be stopped or not - self.det_model = Darknet("joints_detectors/Alphapose/yolo/cfg/yolov3-spp.cfg") - self.det_model.load_weights('joints_detectors/Alphapose/models/yolo/yolov3-spp.weights') - self.det_model.net_info['height'] = opt.inp_dim - self.det_inp_dim = int(self.det_model.net_info['height']) - assert self.det_inp_dim % 32 == 0 - assert self.det_inp_dim > 32 - self.det_model - self.det_model.eval() - - self.stopped = False - self.dataloder = dataloder - self.batchSize = batchSize - self.datalen = self.dataloder.length() - leftover = 0 - if (self.datalen) % batchSize: - leftover = 1 - self.num_batches = self.datalen // batchSize + leftover - # initialize the queue used to store frames read from - # the video file - if opt.sp: - self.Q = Queue(maxsize=queueSize) - else: - self.Q = mp.Queue(maxsize=queueSize) - - def start(self): - # start a thread to read frames from the file video stream - if opt.sp: - t = Thread(target=self.update, args=()) - t.daemon = True - t.start() - else: - p = mp.Process(target=self.update, args=(), daemon=True) - # p = mp.Process(target=self.update, args=()) - # p.daemon = True - p.start() - return self - - def update(self): - # keep looping the whole dataset - for i in range(self.num_batches): - img, orig_img, im_name, im_dim_list = self.dataloder.getitem() - if img is None: - self.Q.put((None, None, None, None, None, None, None)) - return - - with torch.no_grad(): - # Human Detection - img = img - prediction = self.det_model(img, CUDA=True) - # NMS process - dets = dynamic_write_results(prediction, opt.confidence, - opt.num_classes, nms=True, nms_conf=opt.nms_thesh) - if isinstance(dets, int) or dets.shape[0] == 0: - for k in range(len(orig_img)): - if self.Q.full(): - time.sleep(2) - self.Q.put((orig_img[k], im_name[k], None, None, None, None, None)) - continue - dets = dets.cpu() - im_dim_list = torch.index_select(im_dim_list, 0, dets[:, 0].long()) - scaling_factor = torch.min(self.det_inp_dim / im_dim_list, 1)[0].view(-1, 1) - - # coordinate transfer - dets[:, [1, 3]] -= (self.det_inp_dim - scaling_factor * im_dim_list[:, 0].view(-1, 1)) / 2 - dets[:, [2, 4]] -= (self.det_inp_dim - scaling_factor * im_dim_list[:, 1].view(-1, 1)) / 2 - - dets[:, 1:5] /= scaling_factor - for j in range(dets.shape[0]): - dets[j, [1, 3]] = torch.clamp(dets[j, [1, 3]], 0.0, im_dim_list[j, 0]) - dets[j, [2, 4]] = torch.clamp(dets[j, [2, 4]], 0.0, im_dim_list[j, 1]) - boxes = dets[:, 1:5] - scores = dets[:, 5:6] - - for k in range(len(orig_img)): - boxes_k = boxes[dets[:, 0] == k] - if isinstance(boxes_k, int) or boxes_k.shape[0] == 0: - if self.Q.full(): - time.sleep(2) - self.Q.put((orig_img[k], im_name[k], None, None, None, None, None)) - continue - inps = torch.zeros(boxes_k.size(0), 3, opt.inputResH, opt.inputResW) - pt1 = 
torch.zeros(boxes_k.size(0), 2) - pt2 = torch.zeros(boxes_k.size(0), 2) - if self.Q.full(): - time.sleep(2) - self.Q.put((orig_img[k], im_name[k], boxes_k, scores[dets[:, 0] == k], inps, pt1, pt2)) - - def read(self): - # return next frame in the queue - return self.Q.get() - - def len(self): - # return queue len - return self.Q.qsize() - - -class DetectionProcessor: - def __init__(self, detectionLoader, queueSize=1024): - # initialize the file video stream along with the boolean - # used to indicate if the thread should be stopped or not - self.detectionLoader = detectionLoader - self.stopped = False - self.datalen = self.detectionLoader.datalen - - # initialize the queue used to store data - if opt.sp: - self.Q = Queue(maxsize=queueSize) - else: - self.Q = pQueue(maxsize=queueSize) - - def start(self): - # start a thread to read frames from the file video stream - if opt.sp: - # t = Thread(target=self.update, args=(), daemon=True) - t = Thread(target=self.update, args=()) - t.daemon = True - t.start() - else: - p = mp.Process(target=self.update, args=(), daemon=True) - # p = mp.Process(target=self.update, args=()) - # p.daemon = True - p.start() - return self - - def update(self): - # keep looping the whole dataset - for i in range(self.datalen): - - with torch.no_grad(): - (orig_img, im_name, boxes, scores, inps, pt1, pt2) = self.detectionLoader.read() - if orig_img is None: - self.Q.put((None, None, None, None, None, None, None)) - return - if boxes is None or boxes.nelement() == 0: - while self.Q.full(): - time.sleep(0.2) - self.Q.put((None, orig_img, im_name, boxes, scores, None, None)) - continue - inp = im_to_torch(cv2.cvtColor(orig_img, cv2.COLOR_BGR2RGB)) - inps, pt1, pt2 = crop_from_dets(inp, boxes, inps, pt1, pt2) - - while self.Q.full(): - time.sleep(0.2) - self.Q.put((inps, orig_img, im_name, boxes, scores, pt1, pt2)) - - def read(self): - # return next frame in the queue - return self.Q.get() - - def len(self): - # return queue len - return self.Q.qsize() - - -class VideoDetectionLoader: - def __init__(self, path, batchSize=4, queueSize=256): - # initialize the file video stream along with the boolean - # used to indicate if the thread should be stopped or not - self.det_model = Darknet("yolo/cfg/yolov3-spp.cfg") - self.det_model.load_weights('models/yolo/yolov3-spp.weights') - self.det_model.net_info['height'] = opt.inp_dim - self.det_inp_dim = int(self.det_model.net_info['height']) - assert self.det_inp_dim % 32 == 0 - assert self.det_inp_dim > 32 - self.det_model - self.det_model.eval() - - self.stream = cv2.VideoCapture(path) - assert self.stream.isOpened(), 'Cannot capture source' - self.stopped = False - self.batchSize = batchSize - self.datalen = int(self.stream.get(cv2.CAP_PROP_FRAME_COUNT)) - leftover = 0 - if (self.datalen) % batchSize: - leftover = 1 - self.num_batches = self.datalen // batchSize + leftover - # initialize the queue used to store frames read from - # the video file - self.Q = Queue(maxsize=queueSize) - - def length(self): - return self.datalen - - def len(self): - return self.Q.qsize() - - def start(self): - # start a thread to read frames from the file video stream - t = Thread(target=self.update, args=()) - t.daemon = True - t.start() - return self - - def update(self): - # keep looping the whole video - for i in range(self.num_batches): - img = [] - inp = [] - orig_img = [] - im_name = [] - im_dim_list = [] - for k in range(i * self.batchSize, min((i + 1) * self.batchSize, self.datalen)): - (grabbed, frame) = self.stream.read() - # if the 
`grabbed` boolean is `False`, then we have - # reached the end of the video file - if not grabbed: - self.stop() - return - # process and add the frame to the queue - inp_dim = int(opt.inp_dim) - img_k, orig_img_k, im_dim_list_k = prep_frame(frame, inp_dim) - inp_k = im_to_torch(orig_img_k) - - img.append(img_k) - inp.append(inp_k) - orig_img.append(orig_img_k) - im_dim_list.append(im_dim_list_k) - - with torch.no_grad(): - # Human Detection - img = Variable(torch.cat(img)) - im_dim_list = torch.FloatTensor(im_dim_list).repeat(1, 2) - - prediction = self.det_model(img, CUDA=True) - # NMS process - dets = dynamic_write_results(prediction, opt.confidence, - opt.num_classes, nms=True, nms_conf=opt.nms_thesh) - if isinstance(dets, int) or dets.shape[0] == 0: - for k in range(len(inp)): - while self.Q.full(): - time.sleep(0.2) - self.Q.put((inp[k], orig_img[k], None, None)) - continue - - im_dim_list = torch.index_select(im_dim_list, 0, dets[:, 0].long()) - scaling_factor = torch.min(self.det_inp_dim / im_dim_list, 1)[0].view(-1, 1) - - # transfer coordinates from the padded detector input back to the original frame - dets[:, [1, 3]] -= (self.det_inp_dim - scaling_factor * im_dim_list[:, 0].view(-1, 1)) / 2 - dets[:, [2, 4]] -= (self.det_inp_dim - scaling_factor * im_dim_list[:, 1].view(-1, 1)) / 2 - - dets[:, 1:5] /= scaling_factor - for j in range(dets.shape[0]): - dets[j, [1, 3]] = torch.clamp(dets[j, [1, 3]], 0.0, im_dim_list[j, 0]) - dets[j, [2, 4]] = torch.clamp(dets[j, [2, 4]], 0.0, im_dim_list[j, 1]) - boxes = dets[:, 1:5].cpu() - scores = dets[:, 5:6].cpu() - - for k in range(len(inp)): - while self.Q.full(): - time.sleep(0.2) - self.Q.put((inp[k], orig_img[k], boxes[dets[:, 0] == k], scores[dets[:, 0] == k])) - - def videoinfo(self): - # indicate the video info - fourcc = int(self.stream.get(cv2.CAP_PROP_FOURCC)) - fps = self.stream.get(cv2.CAP_PROP_FPS) - frameSize = (int(self.stream.get(cv2.CAP_PROP_FRAME_WIDTH)), int(self.stream.get(cv2.CAP_PROP_FRAME_HEIGHT))) - return (fourcc, fps, frameSize) - - def read(self): - # return next frame in the queue - return self.Q.get() - - def more(self): - # return True if there are still frames in the queue - return self.Q.qsize() > 0 - - def stop(self): - # indicate that the thread should be stopped - self.stopped = True - - -class WebcamLoader: - def __init__(self, webcam, queueSize=256): - # open the webcam stream along with the boolean - # used to indicate if the thread should be stopped or not - self.stream = cv2.VideoCapture(int(webcam)) - assert self.stream.isOpened(), 'Cannot capture source' - self.stopped = False - # initialize the LIFO queue used to store frames read from - # the webcam, so the newest frame is always consumed first - self.Q = LifoQueue(maxsize=queueSize) - - def start(self): - # start a thread to read frames from the file video stream - t = Thread(target=self.update, args=()) - t.daemon = True - t.start() - return self - - def update(self): - # keep looping infinitely - while True: - # ensure the queue has room in it - if not self.Q.full(): - # read the next frame from the stream - (grabbed, frame) = self.stream.read() - # if the `grabbed` boolean is `False`, then we have - # reached the end of the video file - if not grabbed: - self.stop() - return - # process and add the frame to the queue - inp_dim = int(opt.inp_dim) - img, orig_img, dim = prep_frame(frame, inp_dim) - inp = im_to_torch(orig_img) - im_dim_list = torch.FloatTensor([dim]).repeat(1, 2) - - self.Q.put((img, orig_img, inp, im_dim_list)) - else: - with self.Q.mutex: - 
self.Q.queue.clear() - - def videoinfo(self): - # indicate the video info - fourcc = int(self.stream.get(cv2.CAP_PROP_FOURCC)) - fps = self.stream.get(cv2.CAP_PROP_FPS) - frameSize = (int(self.stream.get(cv2.CAP_PROP_FRAME_WIDTH)), int(self.stream.get(cv2.CAP_PROP_FRAME_HEIGHT))) - return (fourcc, fps, frameSize) - - def read(self): - # return next frame in the queue - return self.Q.get() - - def len(self): - # return queue size - return self.Q.qsize() - - def stop(self): - # indicate that the thread should be stopped - self.stopped = True - - -class DataWriter: - def __init__(self, save_video=False, - savepath='examples/res/1.avi', fourcc=cv2.VideoWriter_fourcc(*'XVID'), fps=25, frameSize=(640, 480), - queueSize=1024): - if save_video: - # initialize the file video stream along with the boolean - # used to indicate if the thread should be stopped or not - self.stream = cv2.VideoWriter(savepath, fourcc, fps, frameSize) - assert self.stream.isOpened(), 'Cannot open video for writing' - self.save_video = save_video - self.stopped = False - self.final_result = [] - # initialize the queue used to store frames read from - # the video file - self.Q = Queue(maxsize=queueSize) - if opt.save_img: - if not os.path.exists(opt.outputpath + '/vis'): - os.mkdir(opt.outputpath + '/vis') - - def start(self): - # start a thread to read frames from the file video stream - t = Thread(target=self.update, args=(), daemon=True) - # t = Thread(target=self.update, args=()) - # t.daemon = True - t.start() - return self - - def update(self): - # keep looping infinitely - while True: - # if the thread indicator variable is set, stop the - # thread - if self.stopped: - if self.save_video: - self.stream.release() - return - # otherwise, ensure the queue is not empty - if not self.Q.empty(): - (boxes, scores, hm_data, pt1, pt2, orig_img, im_name) = self.Q.get() - orig_img = np.array(orig_img, dtype=np.uint8) - if boxes is None: - if opt.save_img or opt.save_video or opt.vis: - img = orig_img - if opt.vis: - cv2.imshow("AlphaPose Demo", img) - cv2.waitKey(30) - if opt.save_img: - cv2.imwrite(os.path.join(opt.outputpath, 'vis', im_name), img) - if opt.save_video: - self.stream.write(img) - else: - # location prediction (n, kp, 2) | score prediction (n, kp, 1) - if opt.matching: - preds = getMultiPeakPrediction( - hm_data, pt1.numpy(), pt2.numpy(), opt.inputResH, opt.inputResW, opt.outputResH, opt.outputResW) - result = matching(boxes, scores.numpy(), preds) - else: - preds_hm, preds_img, preds_scores = getPrediction( - hm_data, pt1, pt2, opt.inputResH, opt.inputResW, opt.outputResH, opt.outputResW) - result = pose_nms( - boxes, scores, preds_img, preds_scores) - result = { - 'imgname': im_name, - 'result': result - } - self.final_result.append(result) - if opt.save_img or opt.save_video or opt.vis: - img = vis_frame(orig_img, result) - if opt.vis: - cv2.imshow("AlphaPose Demo", img) - cv2.waitKey(30) - if opt.save_img: - cv2.imwrite(os.path.join(opt.outputpath, 'vis', im_name), img) - if opt.save_video: - self.stream.write(img) - else: - time.sleep(0.1) - - def running(self): - # indicate that the thread is still running - time.sleep(0.2) - return not self.Q.empty() - - def save(self, boxes, scores, hm_data, pt1, pt2, orig_img, im_name): - # save next frame in the queue - self.Q.put((boxes, scores, hm_data, pt1, pt2, orig_img, im_name)) - - def stop(self): - # indicate that the thread should be stopped - self.stopped = True - time.sleep(0.2) - - def results(self): - # return final result - return self.final_result - - 
def len(self): - # return queue len - return self.Q.qsize() - - -class Mscoco(data.Dataset): - def __init__(self, train=True, sigma=1, - scale_factor=(0.2, 0.3), rot_factor=40, label_type='Gaussian'): - self.img_folder = '../data/coco/images' # root image folders - self.is_train = train # training set or test set - self.inputResH = opt.inputResH - self.inputResW = opt.inputResW - self.outputResH = opt.outputResH - self.outputResW = opt.outputResW - self.sigma = sigma - self.scale_factor = scale_factor - self.rot_factor = rot_factor - self.label_type = label_type - - self.nJoints_coco = 17 - self.nJoints_mpii = 16 - self.nJoints = 33 - - self.accIdxs = (1, 2, 3, 4, 5, 6, 7, 8, - 9, 10, 11, 12, 13, 14, 15, 16, 17) - self.flipRef = ((2, 3), (4, 5), (6, 7), - (8, 9), (10, 11), (12, 13), - (14, 15), (16, 17)) - - def __getitem__(self, index): - pass - - def __len__(self): - pass - - -def crop_from_dets(img, boxes, inps, pt1, pt2): - ''' - Crop humans from the original image according to the detection results - ''' - - imght = img.size(1) - imgwidth = img.size(2) - tmp_img = img - # normalize the image channels in place (mean subtraction) - tmp_img[0].add_(-0.406) - tmp_img[1].add_(-0.457) - tmp_img[2].add_(-0.480) - for i, box in enumerate(boxes): - upLeft = torch.Tensor( - (float(box[0]), float(box[1]))) - bottomRight = torch.Tensor( - (float(box[2]), float(box[3]))) - - ht = bottomRight[1] - upLeft[1] - width = bottomRight[0] - upLeft[0] - - # expand each box by 30% so the crop keeps some context around the person - scaleRate = 0.3 - - upLeft[0] = max(0, upLeft[0] - width * scaleRate / 2) - upLeft[1] = max(0, upLeft[1] - ht * scaleRate / 2) - bottomRight[0] = max( - min(imgwidth - 1, bottomRight[0] + width * scaleRate / 2), upLeft[0] + 5) - bottomRight[1] = max( - min(imght - 1, bottomRight[1] + ht * scaleRate / 2), upLeft[1] + 5) - - try: - inps[i] = cropBox(tmp_img.clone(), upLeft, bottomRight, opt.inputResH, opt.inputResW) - except IndexError: - print(tmp_img.shape) - print(upLeft) - print(bottomRight) - print('===') - pt1[i] = upLeft - pt2[i] = bottomRight - - return inps, pt1, pt2 diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/builders/video_qa_builder.py b/spaces/SeViLA/SeViLA/lavis/datasets/builders/video_qa_builder.py deleted file mode 100644 index ae07df2a8e0c05540836467d3ef1a416df38d6df..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/builders/video_qa_builder.py +++ /dev/null @@ -1,93 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from lavis.common.registry import registry -from lavis.common.utils import get_cache_path -from lavis.datasets.builders.base_dataset_builder import BaseDatasetBuilder -from lavis.datasets.datasets.video_vqa_datasets import VideoQADataset -from lavis.datasets.datasets.mc_video_vqa_datasets import MCVideoQADataset - -class VideoQABuilder(BaseDatasetBuilder): - train_dataset_cls = VideoQADataset - eval_dataset_cls = VideoQADataset - - def build(self): - datasets = super().build() - - ans2label = self.config.build_info.annotations.get("ans2label") - if ans2label is None: - raise ValueError("ans2label is not specified in build_info.") - - ans2label = get_cache_path(ans2label.storage) - - for split in datasets: - datasets[split]._build_class_labels(ans2label) - - return datasets - -class MCVideoQABuilder(BaseDatasetBuilder): - train_dataset_cls = MCVideoQADataset - eval_dataset_cls = MCVideoQADataset - - def build(self): - datasets = super().build() - - for split in datasets: - datasets[split]._load_auxiliary_mappings() - - return datasets - -@registry.register_builder("msrvtt_qa") -class MSRVTTQABuilder(VideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/msrvtt/defaults_qa.yaml", - } - - -@registry.register_builder("msvd_qa") -class MSVDQABuilder(VideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/msvd/defaults_qa.yaml", - } - -# multi-choice videoqa -@registry.register_builder("nextqa") -class NextQABuilder(MCVideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/nextqa/defaults_qa.yaml", - } -@registry.register_builder("star") -class STARBuilder(MCVideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/star/defaults_qa.yaml", - } - -@registry.register_builder("tvqa") -class TVQABuilder(MCVideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/tvqa/defaults_qa.yaml", - } - -@registry.register_builder("how2qa") -class How2QABuilder(MCVideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/how2qa/defaults_qa.yaml", - } - -@registry.register_builder("vlep") -class VLEPBuilder(MCVideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/vlep/defaults_qa.yaml", - } - -@registry.register_builder("qvh") -class QVHBuilder(MCVideoQABuilder): - DATASET_CONFIG_DICT = { - "default": "configs/datasets/qvh/defaults.yaml", - } - -# open-ended QA \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/datasets/multimodal_classification_datasets.py b/spaces/SeViLA/SeViLA/lavis/datasets/datasets/multimodal_classification_datasets.py deleted file mode 100644 index 152e097995b5afd5bcad95a1f6df60b895300ac8..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/datasets/multimodal_classification_datasets.py +++ /dev/null @@ -1,25 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from abc import abstractmethod -from lavis.datasets.datasets.base_dataset import BaseDataset - - -class MultimodalClassificationDataset(BaseDataset): - def __init__(self, vis_processor, text_processor, vis_root, ann_paths): - super().__init__(vis_processor, text_processor, vis_root, ann_paths) - - self.class_labels = None - - @abstractmethod - def _build_class_labels(self): - pass - - @abstractmethod - def _load_auxiliary_mappings(self): - pass - diff --git a/spaces/SrRaptor/Imagy/README.md b/spaces/SrRaptor/Imagy/README.md deleted file mode 100644 index 4ac080a5d5d15f938ebdc5a23d59c8c1bcd2ee13..0000000000000000000000000000000000000000 --- a/spaces/SrRaptor/Imagy/README.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: Sheet Music Generator -emoji: 🎵 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -duplicated_from: visakh7843/Sheet_Music_Generator ---- - -# Sheet-music-generator-for-Sight-Reading -Sheet music generation for easier sight-reading practice for musicians. -Musicians often struggle to find new sheet music for sight-reading practice. Fresh material matters: players who repeat the same melodies soon memorize them without meaning to, which defeats the purpose of sight-reading. Effective practice therefore requires a steady supply of new, unseen sheet music, which is hard to come by. -This project develops a probabilistic algorithm (a Markov model) that generates music of appropriate complexity in a range of suitable keys and tempos. Markov models were chosen over deep learning models because the extra resources deep learning demands did not yield a comparable advantage. As a secondary objective, we also explore generating music tailored to particular instruments, taking each instrument's limitations into account; a minimal sketch of the Markov idea follows below.
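To make the generation idea concrete, here is a minimal sketch of a first-order Markov chain over notes. The note set, transition weights, and seed below are illustrative assumptions, not the app's actual model or training data:

```python
import random

# Hypothetical transition table: each note maps to (next_note, weight) pairs.
# A real generator would estimate these weights from a corpus in the target key.
TRANSITIONS = {
    "C4": [("D4", 0.4), ("E4", 0.3), ("G4", 0.3)],
    "D4": [("C4", 0.3), ("E4", 0.4), ("F4", 0.3)],
    "E4": [("D4", 0.3), ("F4", 0.4), ("G4", 0.3)],
    "F4": [("E4", 0.5), ("G4", 0.5)],
    "G4": [("C4", 0.4), ("E4", 0.3), ("F4", 0.3)],
}

def generate_melody(start="C4", length=16, seed=None):
    """Walk the chain, sampling each next note by its transition weight."""
    rng = random.Random(seed)
    note, melody = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*TRANSITIONS[note])
        note = rng.choices(choices, weights=weights, k=1)[0]
        melody.append(note)
    return melody

print(" ".join(generate_melody(seed=42)))
```

The full application would additionally render the sampled notes as engraved notation rather than printing note names.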
- - -How to run the project: -```sh -python app.py -``` diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/__init__.py deleted file mode 100644 index 72c34e544e1634e4f42c005506bac9b61ab095f5..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/__init__.py +++ /dev/null @@ -1,90 +0,0 @@ -from __future__ import annotations - -__all__ = ( - "AsyncResource", - "IPAddressType", - "IPSockAddrType", - "SocketAttribute", - "SocketStream", - "SocketListener", - "UDPSocket", - "UNIXSocketStream", - "UDPPacketType", - "ConnectedUDPSocket", - "UnreliableObjectReceiveStream", - "UnreliableObjectSendStream", - "UnreliableObjectStream", - "ObjectReceiveStream", - "ObjectSendStream", - "ObjectStream", - "ByteReceiveStream", - "ByteSendStream", - "ByteStream", - "AnyUnreliableByteReceiveStream", - "AnyUnreliableByteSendStream", - "AnyUnreliableByteStream", - "AnyByteReceiveStream", - "AnyByteSendStream", - "AnyByteStream", - "Listener", - "Process", - "Event", - "Condition", - "Lock", - "Semaphore", - "CapacityLimiter", - "CancelScope", - "TaskGroup", - "TaskStatus", - "TestRunner", - "BlockingPortal", -) - -from typing import Any - -from ._resources import AsyncResource -from ._sockets import ( - ConnectedUDPSocket, - IPAddressType, - IPSockAddrType, - SocketAttribute, - SocketListener, - SocketStream, - UDPPacketType, - UDPSocket, - UNIXSocketStream, -) -from ._streams import ( - AnyByteReceiveStream, - AnyByteSendStream, - AnyByteStream, - AnyUnreliableByteReceiveStream, - AnyUnreliableByteSendStream, - AnyUnreliableByteStream, - ByteReceiveStream, - ByteSendStream, - ByteStream, - Listener, - ObjectReceiveStream, - ObjectSendStream, - ObjectStream, - UnreliableObjectReceiveStream, - UnreliableObjectSendStream, - UnreliableObjectStream, -) -from ._subprocesses import Process -from ._tasks import TaskGroup, TaskStatus -from ._testing import TestRunner - -# Re-exported here, for backwards compatibility -# isort: off -from .._core._synchronization import CapacityLimiter, Condition, Event, Lock, Semaphore -from .._core._tasks import CancelScope -from ..from_thread import BlockingPortal - -# Re-export imports so they look like they live directly in this package -key: str -value: Any -for key, value in list(locals().items()): - if getattr(value, "__module__", "").startswith("anyio.abc."): - value.__module__ = __name__ diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_next_gen.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_next_gen.py deleted file mode 100644 index 8f7c0b9a46b7a0ee008f94b8054baf5807df043a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_next_gen.py +++ /dev/null @@ -1,232 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -These are keyword-only APIs that call `attr.s` and `attr.ib` with different -default values. -""" - - -from functools import partial - -from . 
import setters -from ._funcs import asdict as _asdict -from ._funcs import astuple as _astuple -from ._make import ( - NOTHING, - _frozen_setattrs, - _ng_default_on_setattr, - attrib, - attrs, -) -from .exceptions import UnannotatedAttributeError - - -def define( - maybe_cls=None, - *, - these=None, - repr=None, - unsafe_hash=None, - hash=None, - init=None, - slots=True, - frozen=False, - weakref_slot=True, - str=False, - auto_attribs=None, - kw_only=False, - cache_hash=False, - auto_exc=True, - eq=None, - order=False, - auto_detect=True, - getstate_setstate=None, - on_setattr=None, - field_transformer=None, - match_args=True, -): - r""" - Define an *attrs* class. - - Differences to the classic `attr.s` that it uses underneath: - - - Automatically detect whether or not *auto_attribs* should be `True` (c.f. - *auto_attribs* parameter). - - If *frozen* is `False`, run converters and validators when setting an - attribute by default. - - *slots=True* - - .. caution:: - - Usually this has only upsides and few visible effects in everyday - programming. But it *can* lead to some suprising behaviors, so please - make sure to read :term:`slotted classes`. - - *auto_exc=True* - - *auto_detect=True* - - *order=False* - - Some options that were only relevant on Python 2 or were kept around for - backwards-compatibility have been removed. - - Please note that these are all defaults and you can change them as you - wish. - - :param Optional[bool] auto_attribs: If set to `True` or `False`, it behaves - exactly like `attr.s`. If left `None`, `attr.s` will try to guess: - - 1. If any attributes are annotated and no unannotated `attrs.fields`\ s - are found, it assumes *auto_attribs=True*. - 2. Otherwise it assumes *auto_attribs=False* and tries to collect - `attrs.fields`\ s. - - For now, please refer to `attr.s` for the rest of the parameters. - - .. versionadded:: 20.1.0 - .. versionchanged:: 21.3.0 Converters are also run ``on_setattr``. - .. versionadded:: 22.2.0 - *unsafe_hash* as an alias for *hash* (for :pep:`681` compliance). - """ - - def do_it(cls, auto_attribs): - return attrs( - maybe_cls=cls, - these=these, - repr=repr, - hash=hash, - unsafe_hash=unsafe_hash, - init=init, - slots=slots, - frozen=frozen, - weakref_slot=weakref_slot, - str=str, - auto_attribs=auto_attribs, - kw_only=kw_only, - cache_hash=cache_hash, - auto_exc=auto_exc, - eq=eq, - order=order, - auto_detect=auto_detect, - collect_by_mro=True, - getstate_setstate=getstate_setstate, - on_setattr=on_setattr, - field_transformer=field_transformer, - match_args=match_args, - ) - - def wrap(cls): - """ - Making this a wrapper ensures this code runs during class creation. - - We also ensure that frozen-ness of classes is inherited. - """ - nonlocal frozen, on_setattr - - had_on_setattr = on_setattr not in (None, setters.NO_OP) - - # By default, mutable classes convert & validate on setattr. - if frozen is False and on_setattr is None: - on_setattr = _ng_default_on_setattr - - # However, if we subclass a frozen class, we inherit the immutability - # and disable on_setattr. - for base_cls in cls.__bases__: - if base_cls.__setattr__ is _frozen_setattrs: - if had_on_setattr: - raise ValueError( - "Frozen classes can't use on_setattr " - "(frozen-ness was inherited)." - ) - - on_setattr = setters.NO_OP - break - - if auto_attribs is not None: - return do_it(cls, auto_attribs) - - try: - return do_it(cls, True) - except UnannotatedAttributeError: - return do_it(cls, False) - - # maybe_cls's type depends on the usage of the decorator. 
It's a class - # if it's used as `@attrs` but ``None`` if used as `@attrs()`. - if maybe_cls is None: - return wrap - else: - return wrap(maybe_cls) - - -mutable = define -frozen = partial(define, frozen=True, on_setattr=None) - - -def field( - *, - default=NOTHING, - validator=None, - repr=True, - hash=None, - init=True, - metadata=None, - type=None, - converter=None, - factory=None, - kw_only=False, - eq=None, - order=None, - on_setattr=None, - alias=None, -): - """ - Identical to `attr.ib`, except keyword-only and with some arguments - removed. - - .. versionadded:: 23.1.0 - The *type* parameter has been re-added; mostly for - {func}`attrs.make_class`. Please note that type checkers ignore this - metadata. - .. versionadded:: 20.1.0 - """ - return attrib( - default=default, - validator=validator, - repr=repr, - hash=hash, - init=init, - metadata=metadata, - type=type, - converter=converter, - factory=factory, - kw_only=kw_only, - eq=eq, - order=order, - on_setattr=on_setattr, - alias=alias, - ) - - -def asdict(inst, *, recurse=True, filter=None, value_serializer=None): - """ - Same as `attr.asdict`, except that collections types are always retained - and dict is always used as *dict_factory*. - - .. versionadded:: 21.3.0 - """ - return _asdict( - inst=inst, - recurse=recurse, - filter=filter, - value_serializer=value_serializer, - retain_collection_types=True, - ) - - -def astuple(inst, *, recurse=True, filter=None): - """ - Same as `attr.astuple`, except that collections types are always retained - and `tuple` is always used as the *tuple_factory*. - - .. versionadded:: 21.3.0 - """ - return _astuple( - inst=inst, recurse=recurse, filter=filter, retain_collection_types=True - ) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/types.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/types.py deleted file mode 100644 index 015e162fbea9c8c5c4f93b4759b6dafab462ad1b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/types.py +++ /dev/null @@ -1,50 +0,0 @@ -from abc import ABC, abstractmethod -from typing import Sequence, Any - -Matrix = Sequence[Sequence[Any]] - - -class Closable(ABC): - @abstractmethod - def close(self): - pass - - -class ByteSource(Closable): - last_message = None - - @abstractmethod - def read_leb128(self) -> int: - pass - - @abstractmethod - def read_leb128_str(self) -> str: - pass - - @abstractmethod - def read_uint64(self) -> int: - pass - - @abstractmethod - def read_bytes(self, sz: int) -> bytes: - pass - - @abstractmethod - def read_str_col(self, num_rows: int, encoding: str, nullable: bool = False, null_obj: Any = None): - pass - - @abstractmethod - def read_bytes_col(self, sz: int, num_rows: int): - pass - - @abstractmethod - def read_fixed_str_col(self, sz: int, num_rows: int, encoding: str): - pass - - @abstractmethod - def read_array(self, array_type: str, num_rows: int): - pass - - @abstractmethod - def read_byte(self) -> int: - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/inputhookgtk3.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/inputhookgtk3.py deleted file mode 100644 index f2ca39f390034797e460e89503c3cf2422412baf..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/inputhookgtk3.py +++ 
/dev/null @@ -1,35 +0,0 @@ -# encoding: utf-8 -""" -Enable Gtk3 to be used interacive by IPython. - -Authors: Thomi Richards -""" -#----------------------------------------------------------------------------- -# Copyright (c) 2012, the IPython Development Team. -# -# Distributed under the terms of the Modified BSD License. -# -# The full license is in the file COPYING.txt, distributed with this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -from gi.repository import Gtk, GLib # @UnresolvedImport - -#----------------------------------------------------------------------------- -# Code -#----------------------------------------------------------------------------- - -def _main_quit(*args, **kwargs): - Gtk.main_quit() - return False - - -def create_inputhook_gtk3(stdin_file): - def inputhook_gtk3(): - GLib.io_add_watch(stdin_file, GLib.IO_IN, _main_quit) - Gtk.main() - return 0 - return inputhook_gtk3 diff --git a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/conv.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/conv.py deleted file mode 100644 index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/modules/conv.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`. - """ - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. 
This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. - For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. - If this is the case, we insert extra 0 padding to the right before the reflection happen. - """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d! - """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. 
- """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. 
- - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. - if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/TIMBOVILL/RVC-Noobie/i18n/locale_diff.py b/spaces/TIMBOVILL/RVC-Noobie/i18n/locale_diff.py deleted file mode 100644 index 257277965e0866a86d0361863a8f1b408c4f71ab..0000000000000000000000000000000000000000 --- a/spaces/TIMBOVILL/RVC-Noobie/i18n/locale_diff.py +++ /dev/null @@ -1,45 +0,0 @@ -import json -import os -from collections import OrderedDict - -# Define the standard file name -standard_file = "zh_CN.json" - -# Find all JSON files in the directory -dir_path = "./" -languages = [ - f for f in os.listdir(dir_path) if f.endswith(".json") and f != standard_file -] - -# Load the standard file -with open(standard_file, "r", encoding="utf-8") as f: - standard_data = json.load(f, object_pairs_hook=OrderedDict) - -# Loop through each language file -for lang_file in languages: - # Load the language file - with open(lang_file, "r", encoding="utf-8") as f: - lang_data = json.load(f, object_pairs_hook=OrderedDict) - - # Find the difference between the language file and the standard file - diff = set(standard_data.keys()) - set(lang_data.keys()) - - miss = set(lang_data.keys()) - set(standard_data.keys()) - - # Add any missing keys to the language file - for key in diff: - lang_data[key] = key - - # Del any extra keys to the language file - for key in miss: - del lang_data[key] - - # Sort the keys of the language file to match the order of the standard file - lang_data = OrderedDict( - sorted(lang_data.items(), key=lambda x: list(standard_data.keys()).index(x[0])) - ) - - # Save the updated language file - with open(lang_file, "w", encoding="utf-8") as f: - json.dump(lang_data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/TabPFN/TabPFNPrediction/TabPFN/scripts/tabular_evaluation.py b/spaces/TabPFN/TabPFNPrediction/TabPFN/scripts/tabular_evaluation.py deleted file mode 100644 index c761aee2bffbf441a8c5c33bc4ded07f915e15a3..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNPrediction/TabPFN/scripts/tabular_evaluation.py +++ /dev/null @@ -1,312 +0,0 @@ -import time -import os -from pathlib import Path -from contextlib import nullcontext - -import torch -from tqdm import tqdm -import random -import numpy as np - -from torch import nn - -from torch.utils.checkpoint import checkpoint -from utils import normalize_data, torch_nanmean, to_ranking_low_mem, remove_outliers -from scripts.tabular_baselines import get_scoring_string -from scripts import tabular_metrics -from scripts.transformer_prediction_interface import * -from scripts.baseline_prediction_interface import * -""" -=============================== 
-PUBLIC FUNCTIONS FOR EVALUATION -=============================== -""" - - -def eval_model(i, e, valid_datasets, test_datasets, eval_positions, bptt, add_name, base_path, device='cpu', eval_addition='', **kwargs): - metrics_test, config_sample, model_path = eval_model_on_ds(i, e, test_datasets, eval_positions, bptt, add_name, base_path, device=device, eval_addition=eval_addition, **kwargs) - metrics_valid, _, _ = eval_model_on_ds(i, e, valid_datasets, eval_positions, bptt, add_name, base_path, device=device, eval_addition=eval_addition, **kwargs) - return {'mean_auc_test': metrics_test['mean_roc_at_1000'], 'mean_auc_valid': metrics_valid['mean_roc_at_1000'], 'mean_ce_test': metrics_test['mean_ce_at_1000'], 'mean_ce_valid': metrics_valid['mean_ce_at_1000'], 'config_sample': config_sample, 'model_path': model_path} - -def eval_model_on_ds(i, e, valid_datasets, eval_positions, bptt, add_name, base_path, device='cpu', eval_addition='', **kwargs): - - # How to use: evaluate_without_fitting(i,0,valid_datasets, [1024], 100000, add_name=model_string, base_path=base_path,) - def check_file(e): - model_file = f'models_diff/prior_diff_real_checkpoint{add_name}_n_{i}_epoch_{e}.cpkt' - model_path = os.path.join(base_path, model_file) - # print('Evaluate ', model_path) - results_file = os.path.join(base_path, - f'models_diff/prior_diff_real_results{add_name}_n_{i}_epoch_{e}_{eval_addition}.pkl') - if not Path(model_path).is_file(): # or Path(results_file).is_file(): - # print('checkpoint exists: ', Path(model_file).is_file(), ', results are written:', Path(results_file).is_file()) - return None, None, None - return model_file, model_path, results_file - - if e == -1: # use last checkpoint, if e == -1 - for e_ in range(100, -1, -1): - model_file_, model_path_, results_file_ = check_file(e_) - if model_file_ is not None: - e = e_ - model_file, model_path, results_file = model_file_, model_path_, results_file_ - break - else: - model_file, model_path, results_file = check_file(e) - - model, config_sample = load_model(base_path, model_file, device, None, verbose=False) - - params = {'max_features': config_sample['num_features'] - , 'rescale_features': config_sample["normalize_by_used_features"] - , 'normalize_to_ranking': config_sample["normalize_to_ranking"] - , 'normalize_with_sqrt': config_sample.get("normalize_with_sqrt", False) - } - metrics_valid = evaluate(datasets=valid_datasets, model=model[2], method='transformer', device=device, overwrite=True, - extend_features=True - # just removed the style keyword but transformer is trained with style, just empty - , save=False - , metric_used=tabular_metrics.cross_entropy - , return_tensor=True - , verbose=False - , eval_positions=eval_positions - , bptt=bptt - , base_path=None - , inference_mode=True - , **params - , **kwargs) - - tabular_metrics.calculate_score_per_method(tabular_metrics.auc_metric, 'roc', metrics_valid, valid_datasets, eval_positions) - tabular_metrics.calculate_score_per_method(tabular_metrics.cross_entropy, 'ce', metrics_valid, valid_datasets, eval_positions) - - return metrics_valid, config_sample, model_path - - -def evaluate(datasets, bptt, eval_positions, metric_used, model, device='cpu' - , verbose=False - , return_tensor=False - , **kwargs): - """ - Evaluates a list of datasets for a model function. - - :param datasets: List of datasets - :param bptt: maximum sequence length - :param eval_positions: List of positions where to evaluate models - :param verbose: If True, is verbose. 
- :param metric_used: Which metric is optimized for. - :param return_tensor: Whether to return results as a torch.Tensor or numpy; only relevant for the transformer. - :param kwargs: - :return: - """ - overall_result = {'metric_used': get_scoring_string(metric_used) - , 'bptt': bptt - , 'eval_positions': eval_positions} - - aggregated_metric_datasets, num_datasets = torch.tensor(0.0), 0 - - # For each dataset - for [ds_name, X, y, categorical_feats, _, _] in datasets: - dataset_bptt = min(len(X), bptt) - #if verbose and dataset_bptt < bptt: - # print(f'Dataset too small for given bptt, reducing to {len(X)} ({bptt})') - - aggregated_metric, num = torch.tensor(0.0), 0 - ds_result = {} - - # show a progress bar over eval positions when verbose - for eval_position in (tqdm(eval_positions) if verbose else eval_positions): - eval_position_real = int(dataset_bptt * 0.5) if 2 * eval_position > dataset_bptt else eval_position - eval_position_bptt = int(eval_position_real * 2.0) - - r = evaluate_position(X, y, model=model - , num_classes=len(torch.unique(y)) - , categorical_feats = categorical_feats - , bptt = eval_position_bptt - , ds_name=ds_name - , eval_position = eval_position_real - , metric_used = metric_used - , device=device - ,**kwargs) - - if r is None: - print('Execution failed') - continue - - _, outputs, ys, best_configs, time_used = r - - if torch.is_tensor(outputs): - outputs = outputs.to(outputs.device) - ys = ys.to(outputs.device) - - # WARNING: This leaks information on the scaling of the labels - if isinstance(model, nn.Module) and "BarDistribution" in str(type(model.criterion)): - ys = (ys - torch.min(ys, axis=0)[0]) / (torch.max(ys, axis=0)[0] - torch.min(ys, axis=0)[0]) - - # If we use the bar distribution and the metric_used is r2 -> convert buckets - # metric used is prob -> keep - if isinstance(model, nn.Module) and "BarDistribution" in str(type(model.criterion)) and ( - metric_used == tabular_metrics.r2_metric or metric_used == tabular_metrics.root_mean_squared_error_metric): - ds_result[f'{ds_name}_bar_dist_at_{eval_position}'] = outputs - outputs = model.criterion.mean(outputs) - - ys = ys.T - ds_result[f'{ds_name}_best_configs_at_{eval_position}'] = best_configs - ds_result[f'{ds_name}_outputs_at_{eval_position}'] = outputs - ds_result[f'{ds_name}_ys_at_{eval_position}'] = ys - ds_result[f'{ds_name}_time_at_{eval_position}'] = time_used - - new_metric = torch_nanmean(torch.stack([metric_used(ys[i], outputs[i]) for i in range(ys.shape[0])])) - - if not return_tensor: - make_scalar = lambda x: float(x.detach().cpu().numpy()) if (torch.is_tensor(x) and (len(x.shape) == 0)) else x - new_metric = make_scalar(new_metric) - ds_result = {k: make_scalar(ds_result[k]) for k in ds_result.keys()} - - lib = torch if return_tensor else np - if not lib.isnan(new_metric).any(): - aggregated_metric, num = aggregated_metric + new_metric, num + 1 - - overall_result.update(ds_result) - if num > 0: - aggregated_metric_datasets, num_datasets = (aggregated_metric_datasets + (aggregated_metric / num)), num_datasets + 1 - - overall_result['mean_metric'] = aggregated_metric_datasets / num_datasets - - return overall_result - -""" -=============================== -INTERNAL HELPER FUNCTIONS -=============================== -""" - -def check_file_exists(path): - """Checks if a pickle file exists. 
Returns None if not, else returns the unpickled file.""" - if os.path.isfile(path): - print(f'loading results from {path}') - with open(path, 'rb') as f: - return np.load(f, allow_pickle=True).tolist() - return None - -def generate_valid_split(X, y, bptt, eval_position, is_classification, split_number=1): - """Generates a deterministic train-(test/valid) split. Both splits must contain the same classes and all classes in - the entire dataset. If no such split can be sampled in 8 passes, returns None. - - :param X: torch tensor, feature values - :param y: torch tensor, class values - :param bptt: Number of samples in train + test - :param eval_position: Number of samples in train, i.e. from which index values are in test - :param split_number: The split id - :return: - """ - done, seed = False, 13 - - torch.manual_seed(split_number) - perm = torch.randperm(X.shape[0]) if split_number > 1 else torch.arange(0, X.shape[0]) - X, y = X[perm], y[perm] - while not done: - if seed > 20: - return None, None # No split could be generated in 8 passes, return None - random.seed(seed) - i = random.randint(0, len(X) - bptt) if len(X) - bptt > 0 else 0 - y_ = y[i:i + bptt] - - if is_classification: - # Check that all classes from the dataset are contained and that train and - # test contain the same classes - done = len(torch.unique(y_)) == len(torch.unique(y)) - done = done and torch.all(torch.unique(y_) == torch.unique(y)) - done = done and len(torch.unique(y_[:eval_position])) == len(torch.unique(y_[eval_position:])) - done = done and torch.all(torch.unique(y_[:eval_position]) == torch.unique(y_[eval_position:])) - seed = seed + 1 - else: - done = True - - eval_xs = torch.stack([X[i:i + bptt].clone()], 1) - eval_ys = torch.stack([y[i:i + bptt].clone()], 1) - - return eval_xs, eval_ys - - -def evaluate_position(X, y, categorical_feats, model, bptt - , eval_position, overwrite, save, base_path, path_interfix, method, ds_name, fetch_only=False - , max_time=300, split_number=1, metric_used=None, device='cpu' - , per_step_normalization=False, **kwargs): - """ - Evaluates a dataset with a 'bptt' number of training samples. - - :param X: Dataset X - :param y: Dataset labels - :param categorical_feats: Indices of categorical features. - :param model: Model function - :param bptt: Sequence length. - :param eval_position: Number of training samples. - :param overwrite: If True, results on disk are overwritten. - :param save: If True, results are written to disk. - :param path_interfix: Used for constructing the path to write on disk. - :param method: Model name. - :param ds_name: Dataset name. - :param fetch_only: Whether to only fetch cached results instead of computing them. 
- :param per_step_normalization: - :param kwargs: - :return: - """ - - if save: - path = os.path.join(base_path, f'results/tabular/{path_interfix}/results_{method}_{ds_name}_{eval_position}_{bptt}_{split_number}.npy') - #log_path = - - ## Load results if on disk - if not overwrite: - result = check_file_exists(path) - if result is not None: - if not fetch_only: - print(f'Loaded saved result for {path}') - return result - elif fetch_only: - print(f'Could not load saved result for {path}') - return None - - ## Generate data splits - eval_xs, eval_ys = generate_valid_split(X, y, bptt, eval_position - , is_classification=tabular_metrics.is_classification(metric_used) - , split_number=split_number) - if eval_xs is None: - print(f"No dataset could be generated {ds_name} {bptt}") - return None - - eval_ys = (eval_ys > torch.unique(eval_ys).unsqueeze(0)).sum(axis=1).unsqueeze(-1) - - if isinstance(model, nn.Module): - model = model.to(device) - eval_xs = eval_xs.to(device) - eval_ys = eval_ys.to(device) - - start_time = time.time() - - if isinstance(model, nn.Module): # Two separate predict interfaces for transformer and baselines - outputs, best_configs = transformer_predict(model, eval_xs, eval_ys, eval_position, metric_used=metric_used - , categorical_feats=categorical_feats - , inference_mode=True - , device=device - , extend_features=True, - **kwargs), None - else: - _, outputs, best_configs = baseline_predict(model, eval_xs, eval_ys, categorical_feats - , eval_pos=eval_position - , device=device - , max_time=max_time, metric_used=metric_used, **kwargs) - eval_ys = eval_ys[eval_position:] - if outputs is None: - print('Execution failed') - return None - - if torch.is_tensor(outputs): # Transfers data to cpu for saving - outputs = outputs.cpu() - eval_ys = eval_ys.cpu() - - ds_result = None, outputs, eval_ys, best_configs, time.time() - start_time - - if save: - with open(path, 'wb') as f: - np.save(f, ds_result) - print(f'saved results to {path}') - - return ds_result \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/compat.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/compat.py deleted file mode 100644 index 1fe3d225acb9bf37acffafc2198dc96c7c7fd313..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/compat.py +++ /dev/null @@ -1,1116 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013-2017 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. 
-# -from __future__ import absolute_import - -import os -import re -import sys - -try: - import ssl -except ImportError: # pragma: no cover - ssl = None - -if sys.version_info[0] < 3: # pragma: no cover - from StringIO import StringIO - string_types = basestring, - text_type = unicode - from types import FileType as file_type - import __builtin__ as builtins - import ConfigParser as configparser - from urlparse import urlparse, urlunparse, urljoin, urlsplit, urlunsplit - from urllib import (urlretrieve, quote as _quote, unquote, url2pathname, - pathname2url, ContentTooShortError, splittype) - - def quote(s): - if isinstance(s, unicode): - s = s.encode('utf-8') - return _quote(s) - - import urllib2 - from urllib2 import (Request, urlopen, URLError, HTTPError, - HTTPBasicAuthHandler, HTTPPasswordMgr, - HTTPHandler, HTTPRedirectHandler, - build_opener) - if ssl: - from urllib2 import HTTPSHandler - import httplib - import xmlrpclib - import Queue as queue - from HTMLParser import HTMLParser - import htmlentitydefs - raw_input = raw_input - from itertools import ifilter as filter - from itertools import ifilterfalse as filterfalse - - # Leaving this around for now, in case it needs resurrecting in some way - # _userprog = None - # def splituser(host): - # """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'.""" - # global _userprog - # if _userprog is None: - # import re - # _userprog = re.compile('^(.*)@(.*)$') - - # match = _userprog.match(host) - # if match: return match.group(1, 2) - # return None, host - -else: # pragma: no cover - from io import StringIO - string_types = str, - text_type = str - from io import TextIOWrapper as file_type - import builtins - import configparser - import shutil - from urllib.parse import (urlparse, urlunparse, urljoin, quote, - unquote, urlsplit, urlunsplit, splittype) - from urllib.request import (urlopen, urlretrieve, Request, url2pathname, - pathname2url, - HTTPBasicAuthHandler, HTTPPasswordMgr, - HTTPHandler, HTTPRedirectHandler, - build_opener) - if ssl: - from urllib.request import HTTPSHandler - from urllib.error import HTTPError, URLError, ContentTooShortError - import http.client as httplib - import urllib.request as urllib2 - import xmlrpc.client as xmlrpclib - import queue - from html.parser import HTMLParser - import html.entities as htmlentitydefs - raw_input = input - from itertools import filterfalse - filter = filter - - -try: - from ssl import match_hostname, CertificateError -except ImportError: # pragma: no cover - class CertificateError(ValueError): - pass - - - def _dnsname_match(dn, hostname, max_wildcards=1): - """Matching according to RFC 6125, section 6.4.3 - - http://tools.ietf.org/html/rfc6125#section-6.4.3 - """ - pats = [] - if not dn: - return False - - parts = dn.split('.') - leftmost, remainder = parts[0], parts[1:] - - wildcards = leftmost.count('*') - if wildcards > max_wildcards: - # Issue #17980: avoid denials of service by refusing more - # than one wildcard per fragment. A survey of established - # policy among SSL implementations showed it to be a - # reasonable choice. - raise CertificateError( - "too many wildcards in certificate DNS name: " + repr(dn)) - - # speed up common case w/o wildcards - if not wildcards: - return dn.lower() == hostname.lower() - - # RFC 6125, section 6.4.3, subitem 1. - # The client SHOULD NOT attempt to match a presented identifier in which - # the wildcard character comprises a label other than the left-most label. 
- if leftmost == '*': - # When '*' is a fragment by itself, it matches a non-empty dotless - # fragment. - pats.append('[^.]+') - elif leftmost.startswith('xn--') or hostname.startswith('xn--'): - # RFC 6125, section 6.4.3, subitem 3. - # The client SHOULD NOT attempt to match a presented identifier - # where the wildcard character is embedded within an A-label or - # U-label of an internationalized domain name. - pats.append(re.escape(leftmost)) - else: - # Otherwise, '*' matches any dotless string, e.g. www* - pats.append(re.escape(leftmost).replace(r'\*', '[^.]*')) - - # add the remaining fragments, ignore any wildcards - for frag in remainder: - pats.append(re.escape(frag)) - - pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE) - return pat.match(hostname) - - - def match_hostname(cert, hostname): - """Verify that *cert* (in decoded format as returned by - SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125 - rules are followed, but IP addresses are not accepted for *hostname*. - - CertificateError is raised on failure. On success, the function - returns nothing. - """ - if not cert: - raise ValueError("empty or no certificate, match_hostname needs a " - "SSL socket or SSL context with either " - "CERT_OPTIONAL or CERT_REQUIRED") - dnsnames = [] - san = cert.get('subjectAltName', ()) - for key, value in san: - if key == 'DNS': - if _dnsname_match(value, hostname): - return - dnsnames.append(value) - if not dnsnames: - # The subject is only checked when there is no dNSName entry - # in subjectAltName - for sub in cert.get('subject', ()): - for key, value in sub: - # XXX according to RFC 2818, the most specific Common Name - # must be used. - if key == 'commonName': - if _dnsname_match(value, hostname): - return - dnsnames.append(value) - if len(dnsnames) > 1: - raise CertificateError("hostname %r " - "doesn't match either of %s" - % (hostname, ', '.join(map(repr, dnsnames)))) - elif len(dnsnames) == 1: - raise CertificateError("hostname %r " - "doesn't match %r" - % (hostname, dnsnames[0])) - else: - raise CertificateError("no appropriate commonName or " - "subjectAltName fields were found") - - -try: - from types import SimpleNamespace as Container -except ImportError: # pragma: no cover - class Container(object): - """ - A generic container for when multiple values need to be returned - """ - def __init__(self, **kwargs): - self.__dict__.update(kwargs) - - -try: - from shutil import which -except ImportError: # pragma: no cover - # Implementation from Python 3.3 - def which(cmd, mode=os.F_OK | os.X_OK, path=None): - """Given a command, mode, and a PATH string, return the path which - conforms to the given mode on the PATH, or None if there is no such - file. - - `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result - of os.environ.get("PATH"), or can be overridden with a custom search - path. - - """ - # Check that a given file can be accessed with the correct mode. - # Additionally check that `file` is not a directory, as on Windows - # directories pass the os.access check. - def _access_check(fn, mode): - return (os.path.exists(fn) and os.access(fn, mode) - and not os.path.isdir(fn)) - - # If we're given a path with a directory part, look it up directly rather - # than referring to PATH directories. This includes checking relative to the - # current directory, e.g. 
./script
- if os.path.dirname(cmd):
- if _access_check(cmd, mode):
- return cmd
- return None
-
- if path is None:
- path = os.environ.get("PATH", os.defpath)
- if not path:
- return None
- path = path.split(os.pathsep)
-
- if sys.platform == "win32":
- # The current directory takes precedence on Windows.
- if not os.curdir in path:
- path.insert(0, os.curdir)
-
- # PATHEXT is necessary to check on Windows.
- pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
- # See if the given file matches any of the expected path extensions.
- # This will allow us to short circuit when given "python.exe".
- # If it does match, only test that one, otherwise we have to try
- # others.
- if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
- files = [cmd]
- else:
- files = [cmd + ext for ext in pathext]
- else:
- # On other platforms you don't have things like PATHEXT to tell you
- # what file suffixes are executable, so just pass on cmd as-is.
- files = [cmd]
-
- seen = set()
- for dir in path:
- normdir = os.path.normcase(dir)
- if not normdir in seen:
- seen.add(normdir)
- for thefile in files:
- name = os.path.join(dir, thefile)
- if _access_check(name, mode):
- return name
- return None
-
-
-# ZipFile is a context manager in 2.7, but not in 2.6
-
-from zipfile import ZipFile as BaseZipFile
-
-if hasattr(BaseZipFile, '__enter__'): # pragma: no cover
- ZipFile = BaseZipFile
-else: # pragma: no cover
- from zipfile import ZipExtFile as BaseZipExtFile
-
- class ZipExtFile(BaseZipExtFile):
- def __init__(self, base):
- self.__dict__.update(base.__dict__)
-
- def __enter__(self):
- return self
-
- def __exit__(self, *exc_info):
- self.close()
- # return None, so if an exception occurred, it will propagate
-
- class ZipFile(BaseZipFile):
- def __enter__(self):
- return self
-
- def __exit__(self, *exc_info):
- self.close()
- # return None, so if an exception occurred, it will propagate
-
- def open(self, *args, **kwargs):
- base = BaseZipFile.open(self, *args, **kwargs)
- return ZipExtFile(base)
-
-try:
- from platform import python_implementation
-except ImportError: # pragma: no cover
- def python_implementation():
- """Return a string identifying the Python implementation."""
- if 'PyPy' in sys.version:
- return 'PyPy'
- if os.name == 'java':
- return 'Jython'
- if sys.version.startswith('IronPython'):
- return 'IronPython'
- return 'CPython'
-
-import shutil
-import sysconfig
-
-try:
- callable = callable
-except NameError: # pragma: no cover
- from collections.abc import Callable
-
- def callable(obj):
- return isinstance(obj, Callable)
-
-
-try:
- fsencode = os.fsencode
- fsdecode = os.fsdecode
-except AttributeError: # pragma: no cover
- # Issue #99: on some systems (e.g. containerised),
- # sys.getfilesystemencoding() returns None, and we need a real value,
- # so fall back to utf-8. From the CPython 2.7 docs relating to Unix and
- # sys.getfilesystemencoding(): the return value is "the user's preference
- # according to the result of nl_langinfo(CODESET), or None if the
- # nl_langinfo(CODESET) failed."
- _fsencoding = sys.getfilesystemencoding() or 'utf-8' - if _fsencoding == 'mbcs': - _fserrors = 'strict' - else: - _fserrors = 'surrogateescape' - - def fsencode(filename): - if isinstance(filename, bytes): - return filename - elif isinstance(filename, text_type): - return filename.encode(_fsencoding, _fserrors) - else: - raise TypeError("expect bytes or str, not %s" % - type(filename).__name__) - - def fsdecode(filename): - if isinstance(filename, text_type): - return filename - elif isinstance(filename, bytes): - return filename.decode(_fsencoding, _fserrors) - else: - raise TypeError("expect bytes or str, not %s" % - type(filename).__name__) - -try: - from tokenize import detect_encoding -except ImportError: # pragma: no cover - from codecs import BOM_UTF8, lookup - import re - - cookie_re = re.compile(r"coding[:=]\s*([-\w.]+)") - - def _get_normal_name(orig_enc): - """Imitates get_normal_name in tokenizer.c.""" - # Only care about the first 12 characters. - enc = orig_enc[:12].lower().replace("_", "-") - if enc == "utf-8" or enc.startswith("utf-8-"): - return "utf-8" - if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \ - enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")): - return "iso-8859-1" - return orig_enc - - def detect_encoding(readline): - """ - The detect_encoding() function is used to detect the encoding that should - be used to decode a Python source file. It requires one argument, readline, - in the same way as the tokenize() generator. - - It will call readline a maximum of twice, and return the encoding used - (as a string) and a list of any lines (left as bytes) it has read in. - - It detects the encoding from the presence of a utf-8 bom or an encoding - cookie as specified in pep-0263. If both a bom and a cookie are present, - but disagree, a SyntaxError will be raised. If the encoding cookie is an - invalid charset, raise a SyntaxError. Note that if a utf-8 bom is found, - 'utf-8-sig' is returned. - - If no encoding is specified, then the default of 'utf-8' will be returned. - """ - try: - filename = readline.__self__.name - except AttributeError: - filename = None - bom_found = False - encoding = None - default = 'utf-8' - def read_or_stop(): - try: - return readline() - except StopIteration: - return b'' - - def find_cookie(line): - try: - # Decode as UTF-8. Either the line is an encoding declaration, - # in which case it should be pure ASCII, or it must be UTF-8 - # per default encoding. 
- line_string = line.decode('utf-8')
- except UnicodeDecodeError:
- msg = "invalid or missing encoding declaration"
- if filename is not None:
- msg = '{} for {!r}'.format(msg, filename)
- raise SyntaxError(msg)
-
- matches = cookie_re.findall(line_string)
- if not matches:
- return None
- encoding = _get_normal_name(matches[0])
- try:
- codec = lookup(encoding)
- except LookupError:
- # This behaviour mimics the Python interpreter
- if filename is None:
- msg = "unknown encoding: " + encoding
- else:
- msg = "unknown encoding for {!r}: {}".format(filename,
- encoding)
- raise SyntaxError(msg)
-
- if bom_found:
- if codec.name != 'utf-8':
- # This behaviour mimics the Python interpreter
- if filename is None:
- msg = 'encoding problem: utf-8'
- else:
- msg = 'encoding problem for {!r}: utf-8'.format(filename)
- raise SyntaxError(msg)
- encoding += '-sig'
- return encoding
-
- first = read_or_stop()
- if first.startswith(BOM_UTF8):
- bom_found = True
- first = first[3:]
- default = 'utf-8-sig'
- if not first:
- return default, []
-
- encoding = find_cookie(first)
- if encoding:
- return encoding, [first]
-
- second = read_or_stop()
- if not second:
- return default, [first]
-
- encoding = find_cookie(second)
- if encoding:
- return encoding, [first, second]
-
- return default, [first, second]
-
-# For converting & <-> &amp; etc.
-try:
- from html import escape
-except ImportError:
- from cgi import escape
-if sys.version_info[:2] < (3, 4):
- unescape = HTMLParser().unescape
-else:
- from html import unescape
-
-try:
- from collections import ChainMap
-except ImportError: # pragma: no cover
- from collections import MutableMapping
-
- try:
- from reprlib import recursive_repr as _recursive_repr
- except ImportError:
- def _recursive_repr(fillvalue='...'):
- '''
- Decorator to make a repr function return fillvalue for a recursive
- call
- '''
-
- def decorating_function(user_function):
- repr_running = set()
-
- def wrapper(self):
- key = id(self), get_ident()
- if key in repr_running:
- return fillvalue
- repr_running.add(key)
- try:
- result = user_function(self)
- finally:
- repr_running.discard(key)
- return result
-
- # Can't use functools.wraps() here because of bootstrap issues
- wrapper.__module__ = getattr(user_function, '__module__')
- wrapper.__doc__ = getattr(user_function, '__doc__')
- wrapper.__name__ = getattr(user_function, '__name__')
- wrapper.__annotations__ = getattr(user_function, '__annotations__', {})
- return wrapper
-
- return decorating_function
-
- class ChainMap(MutableMapping):
- ''' A ChainMap groups multiple dicts (or other mappings) together
- to create a single, updateable view.
-
- The underlying mappings are stored in a list. That list is public and can
- be accessed or updated using the *maps* attribute. There is no other state.
-
- Lookups search the underlying mappings successively until a key is found.
- In contrast, writes, updates, and deletions only operate on the first
- mapping.
-
- '''
-
- def __init__(self, *maps):
- '''Initialize a ChainMap by setting *maps* to the given mappings.
- If no mappings are provided, a single empty dictionary is used.
- - ''' - self.maps = list(maps) or [{}] # always at least one map - - def __missing__(self, key): - raise KeyError(key) - - def __getitem__(self, key): - for mapping in self.maps: - try: - return mapping[key] # can't use 'key in mapping' with defaultdict - except KeyError: - pass - return self.__missing__(key) # support subclasses that define __missing__ - - def get(self, key, default=None): - return self[key] if key in self else default - - def __len__(self): - return len(set().union(*self.maps)) # reuses stored hash values if possible - - def __iter__(self): - return iter(set().union(*self.maps)) - - def __contains__(self, key): - return any(key in m for m in self.maps) - - def __bool__(self): - return any(self.maps) - - @_recursive_repr() - def __repr__(self): - return '{0.__class__.__name__}({1})'.format( - self, ', '.join(map(repr, self.maps))) - - @classmethod - def fromkeys(cls, iterable, *args): - 'Create a ChainMap with a single dict created from the iterable.' - return cls(dict.fromkeys(iterable, *args)) - - def copy(self): - 'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]' - return self.__class__(self.maps[0].copy(), *self.maps[1:]) - - __copy__ = copy - - def new_child(self): # like Django's Context.push() - 'New ChainMap with a new dict followed by all previous maps.' - return self.__class__({}, *self.maps) - - @property - def parents(self): # like Django's Context.pop() - 'New ChainMap from maps[1:].' - return self.__class__(*self.maps[1:]) - - def __setitem__(self, key, value): - self.maps[0][key] = value - - def __delitem__(self, key): - try: - del self.maps[0][key] - except KeyError: - raise KeyError('Key not found in the first mapping: {!r}'.format(key)) - - def popitem(self): - 'Remove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.' - try: - return self.maps[0].popitem() - except KeyError: - raise KeyError('No keys found in the first mapping.') - - def pop(self, key, *args): - 'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].' - try: - return self.maps[0].pop(key, *args) - except KeyError: - raise KeyError('Key not found in the first mapping: {!r}'.format(key)) - - def clear(self): - 'Clear maps[0], leaving maps[1:] intact.' - self.maps[0].clear() - -try: - from importlib.util import cache_from_source # Python >= 3.4 -except ImportError: # pragma: no cover - def cache_from_source(path, debug_override=None): - assert path.endswith('.py') - if debug_override is None: - debug_override = __debug__ - if debug_override: - suffix = 'c' - else: - suffix = 'o' - return path + suffix - -try: - from collections import OrderedDict -except ImportError: # pragma: no cover -## {{{ http://code.activestate.com/recipes/576693/ (r9) -# Backport of OrderedDict() class that runs on Python 2.4, 2.5, 2.6, 2.7 and pypy. -# Passes Python2.7's test suite and incorporates all the latest updates. - try: - from thread import get_ident as _get_ident - except ImportError: - from dummy_thread import get_ident as _get_ident - - try: - from _abcoll import KeysView, ValuesView, ItemsView - except ImportError: - pass - - - class OrderedDict(dict): - 'Dictionary that remembers insertion order' - # An inherited dict maps keys to values. - # The inherited dict provides __getitem__, __len__, __contains__, and get. - # The remaining methods are order-aware. - # Big-O running times for all methods are the same as for regular dictionaries. 
- - # The internal self.__map dictionary maps keys to links in a doubly linked list. - # The circular doubly linked list starts and ends with a sentinel element. - # The sentinel element never gets deleted (this simplifies the algorithm). - # Each link is stored as a list of length three: [PREV, NEXT, KEY]. - - def __init__(self, *args, **kwds): - '''Initialize an ordered dictionary. Signature is the same as for - regular dictionaries, but keyword arguments are not recommended - because their insertion order is arbitrary. - - ''' - if len(args) > 1: - raise TypeError('expected at most 1 arguments, got %d' % len(args)) - try: - self.__root - except AttributeError: - self.__root = root = [] # sentinel node - root[:] = [root, root, None] - self.__map = {} - self.__update(*args, **kwds) - - def __setitem__(self, key, value, dict_setitem=dict.__setitem__): - 'od.__setitem__(i, y) <==> od[i]=y' - # Setting a new item creates a new link which goes at the end of the linked - # list, and the inherited dictionary is updated with the new key/value pair. - if key not in self: - root = self.__root - last = root[0] - last[1] = root[0] = self.__map[key] = [last, root, key] - dict_setitem(self, key, value) - - def __delitem__(self, key, dict_delitem=dict.__delitem__): - 'od.__delitem__(y) <==> del od[y]' - # Deleting an existing item uses self.__map to find the link which is - # then removed by updating the links in the predecessor and successor nodes. - dict_delitem(self, key) - link_prev, link_next, key = self.__map.pop(key) - link_prev[1] = link_next - link_next[0] = link_prev - - def __iter__(self): - 'od.__iter__() <==> iter(od)' - root = self.__root - curr = root[1] - while curr is not root: - yield curr[2] - curr = curr[1] - - def __reversed__(self): - 'od.__reversed__() <==> reversed(od)' - root = self.__root - curr = root[0] - while curr is not root: - yield curr[2] - curr = curr[0] - - def clear(self): - 'od.clear() -> None. Remove all items from od.' - try: - for node in self.__map.itervalues(): - del node[:] - root = self.__root - root[:] = [root, root, None] - self.__map.clear() - except AttributeError: - pass - dict.clear(self) - - def popitem(self, last=True): - '''od.popitem() -> (k, v), return and remove a (key, value) pair. - Pairs are returned in LIFO order if last is true or FIFO order if false. - - ''' - if not self: - raise KeyError('dictionary is empty') - root = self.__root - if last: - link = root[0] - link_prev = link[0] - link_prev[1] = root - root[0] = link_prev - else: - link = root[1] - link_next = link[1] - root[1] = link_next - link_next[0] = root - key = link[2] - del self.__map[key] - value = dict.pop(self, key) - return key, value - - # -- the following methods do not depend on the internal structure -- - - def keys(self): - 'od.keys() -> list of keys in od' - return list(self) - - def values(self): - 'od.values() -> list of values in od' - return [self[key] for key in self] - - def items(self): - 'od.items() -> list of (key, value) pairs in od' - return [(key, self[key]) for key in self] - - def iterkeys(self): - 'od.iterkeys() -> an iterator over the keys in od' - return iter(self) - - def itervalues(self): - 'od.itervalues -> an iterator over the values in od' - for k in self: - yield self[k] - - def iteritems(self): - 'od.iteritems -> an iterator over the (key, value) items in od' - for k in self: - yield (k, self[k]) - - def update(*args, **kwds): - '''od.update(E, **F) -> None. Update od from dict/iterable E and F. 
- - If E is a dict instance, does: for k in E: od[k] = E[k] - If E has a .keys() method, does: for k in E.keys(): od[k] = E[k] - Or if E is an iterable of items, does: for k, v in E: od[k] = v - In either case, this is followed by: for k, v in F.items(): od[k] = v - - ''' - if len(args) > 2: - raise TypeError('update() takes at most 2 positional ' - 'arguments (%d given)' % (len(args),)) - elif not args: - raise TypeError('update() takes at least 1 argument (0 given)') - self = args[0] - # Make progressively weaker assumptions about "other" - other = () - if len(args) == 2: - other = args[1] - if isinstance(other, dict): - for key in other: - self[key] = other[key] - elif hasattr(other, 'keys'): - for key in other.keys(): - self[key] = other[key] - else: - for key, value in other: - self[key] = value - for key, value in kwds.items(): - self[key] = value - - __update = update # let subclasses override update without breaking __init__ - - __marker = object() - - def pop(self, key, default=__marker): - '''od.pop(k[,d]) -> v, remove specified key and return the corresponding value. - If key is not found, d is returned if given, otherwise KeyError is raised. - - ''' - if key in self: - result = self[key] - del self[key] - return result - if default is self.__marker: - raise KeyError(key) - return default - - def setdefault(self, key, default=None): - 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od' - if key in self: - return self[key] - self[key] = default - return default - - def __repr__(self, _repr_running=None): - 'od.__repr__() <==> repr(od)' - if not _repr_running: _repr_running = {} - call_key = id(self), _get_ident() - if call_key in _repr_running: - return '...' - _repr_running[call_key] = 1 - try: - if not self: - return '%s()' % (self.__class__.__name__,) - return '%s(%r)' % (self.__class__.__name__, self.items()) - finally: - del _repr_running[call_key] - - def __reduce__(self): - 'Return state information for pickling' - items = [[k, self[k]] for k in self] - inst_dict = vars(self).copy() - for k in vars(OrderedDict()): - inst_dict.pop(k, None) - if inst_dict: - return (self.__class__, (items,), inst_dict) - return self.__class__, (items,) - - def copy(self): - 'od.copy() -> a shallow copy of od' - return self.__class__(self) - - @classmethod - def fromkeys(cls, iterable, value=None): - '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S - and values equal to v (which defaults to None). - - ''' - d = cls() - for key in iterable: - d[key] = value - return d - - def __eq__(self, other): - '''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive - while comparison to a regular mapping is order-insensitive. 
- - ''' - if isinstance(other, OrderedDict): - return len(self)==len(other) and self.items() == other.items() - return dict.__eq__(self, other) - - def __ne__(self, other): - return not self == other - - # -- the following methods are only used in Python 2.7 -- - - def viewkeys(self): - "od.viewkeys() -> a set-like object providing a view on od's keys" - return KeysView(self) - - def viewvalues(self): - "od.viewvalues() -> an object providing a view on od's values" - return ValuesView(self) - - def viewitems(self): - "od.viewitems() -> a set-like object providing a view on od's items" - return ItemsView(self) - -try: - from logging.config import BaseConfigurator, valid_ident -except ImportError: # pragma: no cover - IDENTIFIER = re.compile('^[a-z_][a-z0-9_]*$', re.I) - - - def valid_ident(s): - m = IDENTIFIER.match(s) - if not m: - raise ValueError('Not a valid Python identifier: %r' % s) - return True - - - # The ConvertingXXX classes are wrappers around standard Python containers, - # and they serve to convert any suitable values in the container. The - # conversion converts base dicts, lists and tuples to their wrapped - # equivalents, whereas strings which match a conversion format are converted - # appropriately. - # - # Each wrapper should have a configurator attribute holding the actual - # configurator to use for conversion. - - class ConvertingDict(dict): - """A converting dictionary wrapper.""" - - def __getitem__(self, key): - value = dict.__getitem__(self, key) - result = self.configurator.convert(value) - #If the converted value is different, save for next time - if value is not result: - self[key] = result - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - def get(self, key, default=None): - value = dict.get(self, key, default) - result = self.configurator.convert(value) - #If the converted value is different, save for next time - if value is not result: - self[key] = result - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - def pop(self, key, default=None): - value = dict.pop(self, key, default) - result = self.configurator.convert(value) - if value is not result: - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - class ConvertingList(list): - """A converting list wrapper.""" - def __getitem__(self, key): - value = list.__getitem__(self, key) - result = self.configurator.convert(value) - #If the converted value is different, save for next time - if value is not result: - self[key] = result - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - def pop(self, idx=-1): - value = list.pop(self, idx) - result = self.configurator.convert(value) - if value is not result: - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - return result - - class ConvertingTuple(tuple): - """A converting tuple wrapper.""" - def __getitem__(self, key): - value = tuple.__getitem__(self, key) - result = self.configurator.convert(value) - if value is not result: - if type(result) in (ConvertingDict, ConvertingList, - ConvertingTuple): - result.parent = self - result.key = key - return result - - class BaseConfigurator(object): - """ - The configurator base class which defines some useful defaults. 
- """ - - CONVERT_PATTERN = re.compile(r'^(?P[a-z]+)://(?P.*)$') - - WORD_PATTERN = re.compile(r'^\s*(\w+)\s*') - DOT_PATTERN = re.compile(r'^\.\s*(\w+)\s*') - INDEX_PATTERN = re.compile(r'^\[\s*(\w+)\s*\]\s*') - DIGIT_PATTERN = re.compile(r'^\d+$') - - value_converters = { - 'ext' : 'ext_convert', - 'cfg' : 'cfg_convert', - } - - # We might want to use a different one, e.g. importlib - importer = staticmethod(__import__) - - def __init__(self, config): - self.config = ConvertingDict(config) - self.config.configurator = self - - def resolve(self, s): - """ - Resolve strings to objects using standard import and attribute - syntax. - """ - name = s.split('.') - used = name.pop(0) - try: - found = self.importer(used) - for frag in name: - used += '.' + frag - try: - found = getattr(found, frag) - except AttributeError: - self.importer(used) - found = getattr(found, frag) - return found - except ImportError: - e, tb = sys.exc_info()[1:] - v = ValueError('Cannot resolve %r: %s' % (s, e)) - v.__cause__, v.__traceback__ = e, tb - raise v - - def ext_convert(self, value): - """Default converter for the ext:// protocol.""" - return self.resolve(value) - - def cfg_convert(self, value): - """Default converter for the cfg:// protocol.""" - rest = value - m = self.WORD_PATTERN.match(rest) - if m is None: - raise ValueError("Unable to convert %r" % value) - else: - rest = rest[m.end():] - d = self.config[m.groups()[0]] - #print d, rest - while rest: - m = self.DOT_PATTERN.match(rest) - if m: - d = d[m.groups()[0]] - else: - m = self.INDEX_PATTERN.match(rest) - if m: - idx = m.groups()[0] - if not self.DIGIT_PATTERN.match(idx): - d = d[idx] - else: - try: - n = int(idx) # try as number first (most likely) - d = d[n] - except TypeError: - d = d[idx] - if m: - rest = rest[m.end():] - else: - raise ValueError('Unable to convert ' - '%r at %r' % (value, rest)) - #rest should be empty - return d - - def convert(self, value): - """ - Convert values to an appropriate type. dicts, lists and tuples are - replaced by their converting alternatives. Strings are checked to - see if they have a conversion format and are converted if they do. 
- """ - if not isinstance(value, ConvertingDict) and isinstance(value, dict): - value = ConvertingDict(value) - value.configurator = self - elif not isinstance(value, ConvertingList) and isinstance(value, list): - value = ConvertingList(value) - value.configurator = self - elif not isinstance(value, ConvertingTuple) and\ - isinstance(value, tuple): - value = ConvertingTuple(value) - value.configurator = self - elif isinstance(value, string_types): - m = self.CONVERT_PATTERN.match(value) - if m: - d = m.groupdict() - prefix = d['prefix'] - converter = self.value_converters.get(prefix, None) - if converter: - suffix = d['suffix'] - converter = getattr(self, converter) - value = converter(suffix) - return value - - def configure_custom(self, config): - """Configure an object with a user-supplied factory.""" - c = config.pop('()') - if not callable(c): - c = self.resolve(c) - props = config.pop('.', None) - # Check for valid identifiers - kwargs = dict([(k, config[k]) for k in config if valid_ident(k)]) - result = c(**kwargs) - if props: - for name, value in props.items(): - setattr(result, name, value) - return result - - def as_tuple(self, value): - """Utility function which converts lists to tuples.""" - if isinstance(value, list): - value = tuple(value) - return value diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tomli/_parser.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tomli/_parser.py deleted file mode 100644 index f1bb0aa19a556725aa2ae2b8cea95489c99a9078..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/tomli/_parser.py +++ /dev/null @@ -1,691 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -from __future__ import annotations - -from collections.abc import Iterable -import string -from types import MappingProxyType -from typing import Any, BinaryIO, NamedTuple - -from ._re import ( - RE_DATETIME, - RE_LOCALTIME, - RE_NUMBER, - match_to_datetime, - match_to_localtime, - match_to_number, -) -from ._types import Key, ParseFloat, Pos - -ASCII_CTRL = frozenset(chr(i) for i in range(32)) | frozenset(chr(127)) - -# Neither of these sets include quotation mark or backslash. They are -# currently handled as separate cases in the parser functions. 
-ILLEGAL_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t") -ILLEGAL_MULTILINE_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t\n") - -ILLEGAL_LITERAL_STR_CHARS = ILLEGAL_BASIC_STR_CHARS -ILLEGAL_MULTILINE_LITERAL_STR_CHARS = ILLEGAL_MULTILINE_BASIC_STR_CHARS - -ILLEGAL_COMMENT_CHARS = ILLEGAL_BASIC_STR_CHARS - -TOML_WS = frozenset(" \t") -TOML_WS_AND_NEWLINE = TOML_WS | frozenset("\n") -BARE_KEY_CHARS = frozenset(string.ascii_letters + string.digits + "-_") -KEY_INITIAL_CHARS = BARE_KEY_CHARS | frozenset("\"'") -HEXDIGIT_CHARS = frozenset(string.hexdigits) - -BASIC_STR_ESCAPE_REPLACEMENTS = MappingProxyType( - { - "\\b": "\u0008", # backspace - "\\t": "\u0009", # tab - "\\n": "\u000A", # linefeed - "\\f": "\u000C", # form feed - "\\r": "\u000D", # carriage return - '\\"': "\u0022", # quote - "\\\\": "\u005C", # backslash - } -) - - -class TOMLDecodeError(ValueError): - """An error raised if a document is not valid TOML.""" - - -def load(__fp: BinaryIO, *, parse_float: ParseFloat = float) -> dict[str, Any]: - """Parse TOML from a binary file object.""" - b = __fp.read() - try: - s = b.decode() - except AttributeError: - raise TypeError( - "File must be opened in binary mode, e.g. use `open('foo.toml', 'rb')`" - ) from None - return loads(s, parse_float=parse_float) - - -def loads(__s: str, *, parse_float: ParseFloat = float) -> dict[str, Any]: # noqa: C901 - """Parse TOML from a string.""" - - # The spec allows converting "\r\n" to "\n", even in string - # literals. Let's do so to simplify parsing. - src = __s.replace("\r\n", "\n") - pos = 0 - out = Output(NestedDict(), Flags()) - header: Key = () - parse_float = make_safe_parse_float(parse_float) - - # Parse one statement at a time - # (typically means one line in TOML source) - while True: - # 1. Skip line leading whitespace - pos = skip_chars(src, pos, TOML_WS) - - # 2. Parse rules. Expect one of the following: - # - end of file - # - end of line - # - comment - # - key/value pair - # - append dict to list (and move to its namespace) - # - create dict (and move to its namespace) - # Skip trailing whitespace when applicable. - try: - char = src[pos] - except IndexError: - break - if char == "\n": - pos += 1 - continue - if char in KEY_INITIAL_CHARS: - pos = key_value_rule(src, pos, out, header, parse_float) - pos = skip_chars(src, pos, TOML_WS) - elif char == "[": - try: - second_char: str | None = src[pos + 1] - except IndexError: - second_char = None - out.flags.finalize_pending() - if second_char == "[": - pos, header = create_list_rule(src, pos, out) - else: - pos, header = create_dict_rule(src, pos, out) - pos = skip_chars(src, pos, TOML_WS) - elif char != "#": - raise suffixed_err(src, pos, "Invalid statement") - - # 3. Skip comment - pos = skip_comment(src, pos) - - # 4. Expect end of line or end of file - try: - char = src[pos] - except IndexError: - break - if char != "\n": - raise suffixed_err( - src, pos, "Expected newline or end of document after a statement" - ) - pos += 1 - - return out.data.dict - - -class Flags: - """Flags that map to parsed keys/namespaces.""" - - # Marks an immutable namespace (inline array or inline table). - FROZEN = 0 - # Marks a nest that has been explicitly created and can no longer - # be opened using the "[table]" syntax. 
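- # For example, once "[a.b]" has been parsed, a second "[a.b]" header is
- # an error, as is redefining "a.b" via a dotted key under "[a]".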
- EXPLICIT_NEST = 1 - - def __init__(self) -> None: - self._flags: dict[str, dict] = {} - self._pending_flags: set[tuple[Key, int]] = set() - - def add_pending(self, key: Key, flag: int) -> None: - self._pending_flags.add((key, flag)) - - def finalize_pending(self) -> None: - for key, flag in self._pending_flags: - self.set(key, flag, recursive=False) - self._pending_flags.clear() - - def unset_all(self, key: Key) -> None: - cont = self._flags - for k in key[:-1]: - if k not in cont: - return - cont = cont[k]["nested"] - cont.pop(key[-1], None) - - def set(self, key: Key, flag: int, *, recursive: bool) -> None: # noqa: A003 - cont = self._flags - key_parent, key_stem = key[:-1], key[-1] - for k in key_parent: - if k not in cont: - cont[k] = {"flags": set(), "recursive_flags": set(), "nested": {}} - cont = cont[k]["nested"] - if key_stem not in cont: - cont[key_stem] = {"flags": set(), "recursive_flags": set(), "nested": {}} - cont[key_stem]["recursive_flags" if recursive else "flags"].add(flag) - - def is_(self, key: Key, flag: int) -> bool: - if not key: - return False # document root has no flags - cont = self._flags - for k in key[:-1]: - if k not in cont: - return False - inner_cont = cont[k] - if flag in inner_cont["recursive_flags"]: - return True - cont = inner_cont["nested"] - key_stem = key[-1] - if key_stem in cont: - cont = cont[key_stem] - return flag in cont["flags"] or flag in cont["recursive_flags"] - return False - - -class NestedDict: - def __init__(self) -> None: - # The parsed content of the TOML document - self.dict: dict[str, Any] = {} - - def get_or_create_nest( - self, - key: Key, - *, - access_lists: bool = True, - ) -> dict: - cont: Any = self.dict - for k in key: - if k not in cont: - cont[k] = {} - cont = cont[k] - if access_lists and isinstance(cont, list): - cont = cont[-1] - if not isinstance(cont, dict): - raise KeyError("There is no nest behind this key") - return cont - - def append_nest_to_list(self, key: Key) -> None: - cont = self.get_or_create_nest(key[:-1]) - last_key = key[-1] - if last_key in cont: - list_ = cont[last_key] - if not isinstance(list_, list): - raise KeyError("An object other than list found behind this key") - list_.append({}) - else: - cont[last_key] = [{}] - - -class Output(NamedTuple): - data: NestedDict - flags: Flags - - -def skip_chars(src: str, pos: Pos, chars: Iterable[str]) -> Pos: - try: - while src[pos] in chars: - pos += 1 - except IndexError: - pass - return pos - - -def skip_until( - src: str, - pos: Pos, - expect: str, - *, - error_on: frozenset[str], - error_on_eof: bool, -) -> Pos: - try: - new_pos = src.index(expect, pos) - except ValueError: - new_pos = len(src) - if error_on_eof: - raise suffixed_err(src, new_pos, f"Expected {expect!r}") from None - - if not error_on.isdisjoint(src[pos:new_pos]): - while src[pos] not in error_on: - pos += 1 - raise suffixed_err(src, pos, f"Found invalid character {src[pos]!r}") - return new_pos - - -def skip_comment(src: str, pos: Pos) -> Pos: - try: - char: str | None = src[pos] - except IndexError: - char = None - if char == "#": - return skip_until( - src, pos + 1, "\n", error_on=ILLEGAL_COMMENT_CHARS, error_on_eof=False - ) - return pos - - -def skip_comments_and_array_ws(src: str, pos: Pos) -> Pos: - while True: - pos_before_skip = pos - pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE) - pos = skip_comment(src, pos) - if pos == pos_before_skip: - return pos - - -def create_dict_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]: - pos += 1 # Skip "[" - pos = 
skip_chars(src, pos, TOML_WS) - pos, key = parse_key(src, pos) - - if out.flags.is_(key, Flags.EXPLICIT_NEST) or out.flags.is_(key, Flags.FROZEN): - raise suffixed_err(src, pos, f"Cannot declare {key} twice") - out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False) - try: - out.data.get_or_create_nest(key) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - - if not src.startswith("]", pos): - raise suffixed_err(src, pos, "Expected ']' at the end of a table declaration") - return pos + 1, key - - -def create_list_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]: - pos += 2 # Skip "[[" - pos = skip_chars(src, pos, TOML_WS) - pos, key = parse_key(src, pos) - - if out.flags.is_(key, Flags.FROZEN): - raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}") - # Free the namespace now that it points to another empty list item... - out.flags.unset_all(key) - # ...but this key precisely is still prohibited from table declaration - out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False) - try: - out.data.append_nest_to_list(key) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - - if not src.startswith("]]", pos): - raise suffixed_err(src, pos, "Expected ']]' at the end of an array declaration") - return pos + 2, key - - -def key_value_rule( - src: str, pos: Pos, out: Output, header: Key, parse_float: ParseFloat -) -> Pos: - pos, key, value = parse_key_value_pair(src, pos, parse_float) - key_parent, key_stem = key[:-1], key[-1] - abs_key_parent = header + key_parent - - relative_path_cont_keys = (header + key[:i] for i in range(1, len(key))) - for cont_key in relative_path_cont_keys: - # Check that dotted key syntax does not redefine an existing table - if out.flags.is_(cont_key, Flags.EXPLICIT_NEST): - raise suffixed_err(src, pos, f"Cannot redefine namespace {cont_key}") - # Containers in the relative path can't be opened with the table syntax or - # dotted key/value syntax in following table sections. 
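- # For example, "[t]" followed by "fruit.apple = 1" must make a later
- # "[t.fruit]" header invalid; the pending EXPLICIT_NEST flag enforces that.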
- out.flags.add_pending(cont_key, Flags.EXPLICIT_NEST) - - if out.flags.is_(abs_key_parent, Flags.FROZEN): - raise suffixed_err( - src, pos, f"Cannot mutate immutable namespace {abs_key_parent}" - ) - - try: - nest = out.data.get_or_create_nest(abs_key_parent) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - if key_stem in nest: - raise suffixed_err(src, pos, "Cannot overwrite a value") - # Mark inline table and array namespaces recursively immutable - if isinstance(value, (dict, list)): - out.flags.set(header + key, Flags.FROZEN, recursive=True) - nest[key_stem] = value - return pos - - -def parse_key_value_pair( - src: str, pos: Pos, parse_float: ParseFloat -) -> tuple[Pos, Key, Any]: - pos, key = parse_key(src, pos) - try: - char: str | None = src[pos] - except IndexError: - char = None - if char != "=": - raise suffixed_err(src, pos, "Expected '=' after a key in a key/value pair") - pos += 1 - pos = skip_chars(src, pos, TOML_WS) - pos, value = parse_value(src, pos, parse_float) - return pos, key, value - - -def parse_key(src: str, pos: Pos) -> tuple[Pos, Key]: - pos, key_part = parse_key_part(src, pos) - key: Key = (key_part,) - pos = skip_chars(src, pos, TOML_WS) - while True: - try: - char: str | None = src[pos] - except IndexError: - char = None - if char != ".": - return pos, key - pos += 1 - pos = skip_chars(src, pos, TOML_WS) - pos, key_part = parse_key_part(src, pos) - key += (key_part,) - pos = skip_chars(src, pos, TOML_WS) - - -def parse_key_part(src: str, pos: Pos) -> tuple[Pos, str]: - try: - char: str | None = src[pos] - except IndexError: - char = None - if char in BARE_KEY_CHARS: - start_pos = pos - pos = skip_chars(src, pos, BARE_KEY_CHARS) - return pos, src[start_pos:pos] - if char == "'": - return parse_literal_str(src, pos) - if char == '"': - return parse_one_line_basic_str(src, pos) - raise suffixed_err(src, pos, "Invalid initial character for a key part") - - -def parse_one_line_basic_str(src: str, pos: Pos) -> tuple[Pos, str]: - pos += 1 - return parse_basic_str(src, pos, multiline=False) - - -def parse_array(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, list]: - pos += 1 - array: list = [] - - pos = skip_comments_and_array_ws(src, pos) - if src.startswith("]", pos): - return pos + 1, array - while True: - pos, val = parse_value(src, pos, parse_float) - array.append(val) - pos = skip_comments_and_array_ws(src, pos) - - c = src[pos : pos + 1] - if c == "]": - return pos + 1, array - if c != ",": - raise suffixed_err(src, pos, "Unclosed array") - pos += 1 - - pos = skip_comments_and_array_ws(src, pos) - if src.startswith("]", pos): - return pos + 1, array - - -def parse_inline_table(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, dict]: - pos += 1 - nested_dict = NestedDict() - flags = Flags() - - pos = skip_chars(src, pos, TOML_WS) - if src.startswith("}", pos): - return pos + 1, nested_dict.dict - while True: - pos, key, value = parse_key_value_pair(src, pos, parse_float) - key_parent, key_stem = key[:-1], key[-1] - if flags.is_(key, Flags.FROZEN): - raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}") - try: - nest = nested_dict.get_or_create_nest(key_parent, access_lists=False) - except KeyError: - raise suffixed_err(src, pos, "Cannot overwrite a value") from None - if key_stem in nest: - raise suffixed_err(src, pos, f"Duplicate inline table key {key_stem!r}") - nest[key_stem] = value - pos = skip_chars(src, pos, TOML_WS) - c = src[pos : pos + 1] - if c == "}": - return pos + 1, 
nested_dict.dict - if c != ",": - raise suffixed_err(src, pos, "Unclosed inline table") - if isinstance(value, (dict, list)): - flags.set(key, Flags.FROZEN, recursive=True) - pos += 1 - pos = skip_chars(src, pos, TOML_WS) - - -def parse_basic_str_escape( - src: str, pos: Pos, *, multiline: bool = False -) -> tuple[Pos, str]: - escape_id = src[pos : pos + 2] - pos += 2 - if multiline and escape_id in {"\\ ", "\\\t", "\\\n"}: - # Skip whitespace until next non-whitespace character or end of - # the doc. Error if non-whitespace is found before newline. - if escape_id != "\\\n": - pos = skip_chars(src, pos, TOML_WS) - try: - char = src[pos] - except IndexError: - return pos, "" - if char != "\n": - raise suffixed_err(src, pos, "Unescaped '\\' in a string") - pos += 1 - pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE) - return pos, "" - if escape_id == "\\u": - return parse_hex_char(src, pos, 4) - if escape_id == "\\U": - return parse_hex_char(src, pos, 8) - try: - return pos, BASIC_STR_ESCAPE_REPLACEMENTS[escape_id] - except KeyError: - raise suffixed_err(src, pos, "Unescaped '\\' in a string") from None - - -def parse_basic_str_escape_multiline(src: str, pos: Pos) -> tuple[Pos, str]: - return parse_basic_str_escape(src, pos, multiline=True) - - -def parse_hex_char(src: str, pos: Pos, hex_len: int) -> tuple[Pos, str]: - hex_str = src[pos : pos + hex_len] - if len(hex_str) != hex_len or not HEXDIGIT_CHARS.issuperset(hex_str): - raise suffixed_err(src, pos, "Invalid hex value") - pos += hex_len - hex_int = int(hex_str, 16) - if not is_unicode_scalar_value(hex_int): - raise suffixed_err(src, pos, "Escaped character is not a Unicode scalar value") - return pos, chr(hex_int) - - -def parse_literal_str(src: str, pos: Pos) -> tuple[Pos, str]: - pos += 1 # Skip starting apostrophe - start_pos = pos - pos = skip_until( - src, pos, "'", error_on=ILLEGAL_LITERAL_STR_CHARS, error_on_eof=True - ) - return pos + 1, src[start_pos:pos] # Skip ending apostrophe - - -def parse_multiline_str(src: str, pos: Pos, *, literal: bool) -> tuple[Pos, str]: - pos += 3 - if src.startswith("\n", pos): - pos += 1 - - if literal: - delim = "'" - end_pos = skip_until( - src, - pos, - "'''", - error_on=ILLEGAL_MULTILINE_LITERAL_STR_CHARS, - error_on_eof=True, - ) - result = src[pos:end_pos] - pos = end_pos + 3 - else: - delim = '"' - pos, result = parse_basic_str(src, pos, multiline=True) - - # Add at maximum two extra apostrophes/quotes if the end sequence - # is 4 or 5 chars long instead of just 3. 
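- # For example, the source '''a''''' parses as the string "a''".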
- if not src.startswith(delim, pos): - return pos, result - pos += 1 - if not src.startswith(delim, pos): - return pos, result + delim - pos += 1 - return pos, result + (delim * 2) - - -def parse_basic_str(src: str, pos: Pos, *, multiline: bool) -> tuple[Pos, str]: - if multiline: - error_on = ILLEGAL_MULTILINE_BASIC_STR_CHARS - parse_escapes = parse_basic_str_escape_multiline - else: - error_on = ILLEGAL_BASIC_STR_CHARS - parse_escapes = parse_basic_str_escape - result = "" - start_pos = pos - while True: - try: - char = src[pos] - except IndexError: - raise suffixed_err(src, pos, "Unterminated string") from None - if char == '"': - if not multiline: - return pos + 1, result + src[start_pos:pos] - if src.startswith('"""', pos): - return pos + 3, result + src[start_pos:pos] - pos += 1 - continue - if char == "\\": - result += src[start_pos:pos] - pos, parsed_escape = parse_escapes(src, pos) - result += parsed_escape - start_pos = pos - continue - if char in error_on: - raise suffixed_err(src, pos, f"Illegal character {char!r}") - pos += 1 - - -def parse_value( # noqa: C901 - src: str, pos: Pos, parse_float: ParseFloat -) -> tuple[Pos, Any]: - try: - char: str | None = src[pos] - except IndexError: - char = None - - # IMPORTANT: order conditions based on speed of checking and likelihood - - # Basic strings - if char == '"': - if src.startswith('"""', pos): - return parse_multiline_str(src, pos, literal=False) - return parse_one_line_basic_str(src, pos) - - # Literal strings - if char == "'": - if src.startswith("'''", pos): - return parse_multiline_str(src, pos, literal=True) - return parse_literal_str(src, pos) - - # Booleans - if char == "t": - if src.startswith("true", pos): - return pos + 4, True - if char == "f": - if src.startswith("false", pos): - return pos + 5, False - - # Arrays - if char == "[": - return parse_array(src, pos, parse_float) - - # Inline tables - if char == "{": - return parse_inline_table(src, pos, parse_float) - - # Dates and times - datetime_match = RE_DATETIME.match(src, pos) - if datetime_match: - try: - datetime_obj = match_to_datetime(datetime_match) - except ValueError as e: - raise suffixed_err(src, pos, "Invalid date or datetime") from e - return datetime_match.end(), datetime_obj - localtime_match = RE_LOCALTIME.match(src, pos) - if localtime_match: - return localtime_match.end(), match_to_localtime(localtime_match) - - # Integers and "normal" floats. - # The regex will greedily match any type starting with a decimal - # char, so needs to be located after handling of dates and times. 
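- # For example, RE_NUMBER would otherwise consume the "1979" prefix of a
- # date like 1979-05-27 before RE_DATETIME ever saw it.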
- number_match = RE_NUMBER.match(src, pos) - if number_match: - return number_match.end(), match_to_number(number_match, parse_float) - - # Special floats - first_three = src[pos : pos + 3] - if first_three in {"inf", "nan"}: - return pos + 3, parse_float(first_three) - first_four = src[pos : pos + 4] - if first_four in {"-inf", "+inf", "-nan", "+nan"}: - return pos + 4, parse_float(first_four) - - raise suffixed_err(src, pos, "Invalid value") - - -def suffixed_err(src: str, pos: Pos, msg: str) -> TOMLDecodeError: - """Return a `TOMLDecodeError` where error message is suffixed with - coordinates in source.""" - - def coord_repr(src: str, pos: Pos) -> str: - if pos >= len(src): - return "end of document" - line = src.count("\n", 0, pos) + 1 - if line == 1: - column = pos + 1 - else: - column = pos - src.rindex("\n", 0, pos) - return f"line {line}, column {column}" - - return TOMLDecodeError(f"{msg} (at {coord_repr(src, pos)})") - - -def is_unicode_scalar_value(codepoint: int) -> bool: - return (0 <= codepoint <= 55295) or (57344 <= codepoint <= 1114111) - - -def make_safe_parse_float(parse_float: ParseFloat) -> ParseFloat: - """A decorator to make `parse_float` safe. - - `parse_float` must not return dicts or lists, because these types - would be mixed with parsed TOML tables and arrays, thus confusing - the parser. The returned decorated callable raises `ValueError` - instead of returning illegal types. - """ - # The default `float` callable never returns illegal types. Optimize it. - if parse_float is float: # type: ignore[comparison-overlap] - return float - - def safe_parse_float(float_str: str) -> Any: - float_value = parse_float(float_str) - if isinstance(float_value, (dict, list)): - raise ValueError("parse_float must not return dicts or lists") - return float_value - - return safe_parse_float diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/gbt.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/gbt.py deleted file mode 100644 index 1a948e7adb377b5fbf2792a59c6f85e197564d09..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/gbt.py +++ /dev/null @@ -1,482 +0,0 @@ -#!/usr/local/bin/python3 - -# avenir-python: Machine Learning -# Author: Pranab Ghosh -# -# Licensed under the Apache License, Version 2.0 (the "License"); you -# may not use this file except in compliance with the License. You may -# obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. See the License for the specific language governing -# permissions and limitations under the License. 
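-
-# Gradient boosted trees classifier: a thin wrapper around sklearn's
-# GradientBoostingClassifier, driven by a properties style configuration file.
-# A minimal usage sketch (the config file name and property values here are
-# hypothetical; the recognized keys are the defValues set up in
-# GradientBoostedTrees.__init__ below):
-#
-#   gbt = GradientBoostedTrees("gbt_train.properties")
-#   if gbt.getMode() == "training":
-#       gbt.train()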
- -# Package imports -import os -import sys -import matplotlib.pyplot as plt -import numpy as np -import sklearn as sk -import matplotlib -import random -import jprops -from sklearn.ensemble import GradientBoostingClassifier -import joblib -from sklearn.metrics import accuracy_score -from sklearn.metrics import confusion_matrix -from sklearn.model_selection import cross_val_score -from random import randint -from io import StringIO -sys.path.append(os.path.abspath("../lib")) -from util import * -from mlutil import * -from pasearch import * -from bacl import * - -# gradient boosting classification -class GradientBoostedTrees(object): - def __init__(self, configFile): - defValues = {} - defValues["common.mode"] = ("training", None) - defValues["common.model.directory"] = ("model", None) - defValues["common.model.file"] = (None, None) - defValues["common.preprocessing"] = (None, None) - defValues["common.verbose"] = (False, None) - defValues["train.data.file"] = (None, "missing training data file") - defValues["train.data.fields"] = (None, "missing training data field ordinals") - defValues["train.data.feature.fields"] = (None, "missing training data feature field ordinals") - defValues["train.data.class.field"] = (None, "missing class field ordinal") - defValues["train.validation"] = ("kfold", None) - defValues["train.num.folds"] = (5, None) - defValues["train.min.samples.split"] = ("4", None) - defValues["train.min.samples.leaf.gb"] = ("2", None) - defValues["train.max.depth.gb"] = (3, None) - defValues["train.max.leaf.nodes.gb"] = (None, None) - defValues["train.max.features.gb"] = (None, None) - defValues["train.learning.rate"] = (0.1, None) - defValues["train.num.estimators.gb"] = (100, None) - defValues["train.subsample"] = (1.0, None) - defValues["train.loss"] = ("deviance", None) - defValues["train.random.state"] = (None, None) - defValues["train.verbose"] = (0, None) - defValues["train.warm.start"] = (False, None) - defValues["train.presort"] = ("auto", None) - defValues["train.criterion"] = ("friedman_mse", None) - defValues["train.success.criterion"] = ("error", None) - defValues["train.model.save"] = (False, None) - defValues["train.score.method"] = ("accuracy", None) - defValues["train.search.param.strategy"] = (None, None) - defValues["train.search.params"] = (None, None) - defValues["predict.data.file"] = (None, None) - defValues["predict.data.fields"] = (None, "missing data field ordinals") - defValues["predict.data.feature.fields"] = (None, "missing data feature field ordinals") - defValues["predict.use.saved.model"] = (False, None) - defValues["validate.data.file"] = (None, "missing validation data file") - defValues["validate.data.fields"] = (None, "missing validation data field ordinals") - defValues["validate.data.feature.fields"] = (None, "missing validation data feature field ordinals") - defValues["validate.data.class.field"] = (None, "missing class field ordinal") - defValues["validate.use.saved.model"] = (False, None) - defValues["validate.score.method"] = ("accuracy", None) - - self.config = Configuration(configFile, defValues) - self.subSampleRate = None - self.featData = None - self.clsData = None - self.gbcClassifier = None - self.verbose = self.config.getBooleanConfig("common.verbose")[0] - logFilePath = self.config.getStringConfig("common.logging.file")[0] - logLevName = self.config.getStringConfig("common.logging.level")[0] - self.logger = createLogger(__name__, logFilePath, logLevName) - self.logger.info("********* starting session") - - # initialize config - 
def initConfig(self, configFile, defValues): - self.config = Configuration(configFile, defValues) - - # get config object - def getConfig(self): - return self.config - - #set config param - def setConfigParam(self, name, value): - self.config.setParam(name, value) - - #get mode - def getMode(self): - return self.config.getStringConfig("common.mode")[0] - - #get search parameter - def getSearchParamStrategy(self): - return self.config.getStringConfig("train.search.param.strategy")[0] - - def setModel(self, model): - self.gbcClassifier = model - - # train model - def train(self): - #build model - self.buildModel() - - # training data - if self.featData is None: - (featData, clsData) = self.prepTrainingData() - (self.featData, self.clsData) = (featData, clsData) - else: - (featData, clsData) = (self.featData, self.clsData) - if self.subSampleRate is not None: - (featData, clsData) = subSample(featData, clsData, self.subSampleRate, False) - self.logger.info("subsample size " + str(featData.shape[0])) - - # parameters - modelSave = self.config.getBooleanConfig("train.model.save")[0] - - #train - self.logger.info("...training model") - self.gbcClassifier.fit(featData, clsData) - score = self.gbcClassifier.score(featData, clsData) - successCriterion = self.config.getStringConfig("train.success.criterion")[0] - result = None - if successCriterion == "accuracy": - self.logger.info("accuracy with training data {:06.3f}".format(score)) - result = score - elif successCriterion == "error": - error = 1.0 - score - self.logger.info("error with training data {:06.3f}".format(error)) - result = error - else: - raise ValueError("invalid success criterion") - - if modelSave: - self.logger.info("...saving model") - modelFilePath = self.getModelFilePath() - joblib.dump(self.gbcClassifier, modelFilePath) - return result - - #train with k fold validation - def trainValidate(self): - #build model - self.buildModel() - - # training data - (featData, clsData) = self.prepTrainingData() - - #parameter - validation = self.config.getStringConfig("train.validation")[0] - numFolds = self.config.getIntConfig("train.num.folds")[0] - successCriterion = self.config.getStringConfig("train.success.criterion")[0] - scoreMethod = self.config.getStringConfig("train.score.method")[0] - - #train with validation - self.logger.info("...training and kfold cross validating model") - scores = cross_val_score(self.gbcClassifier, featData, clsData, cv=numFolds,scoring=scoreMethod) - avScore = np.mean(scores) - result = self.reportResult(avScore, successCriterion, scoreMethod) - return result - - #train with k fold validation and search parameter space for optimum - def trainValidateSearch(self): - self.logger.info("...starting train validate with parameter search") - searchStrategyName = self.getSearchParamStrategy() - if searchStrategyName is not None: - if searchStrategyName == "grid": - searchStrategy = GuidedParameterSearch(self.verbose) - elif searchStrategyName == "random": - searchStrategy = RandomParameterSearch(self.verbose) - maxIter = self.config.getIntConfig("train.search.max.iterations")[0] - searchStrategy.setMaxIter(maxIter) - elif searchStrategyName == "simuan": - searchStrategy = SimulatedAnnealingParameterSearch(self.verbose) - maxIter = self.config.getIntConfig("train.search.max.iterations")[0] - searchStrategy.setMaxIter(maxIter) - temp = self.config.getFloatConfig("train.search.sa.temp")[0] - searchStrategy.setTemp(temp) - tempRedRate = self.config.getFloatConfig("train.search.sa.temp.red.rate")[0] - 
searchStrategy.setTempReductionRate(tempRedRate)
- else:
- raise ValueError("invalid parameter search strategy")
- else:
- raise ValueError("missing search strategy")
-
- # add search params
- searchParams = self.config.getStringConfig("train.search.params")[0].split(",")
- searchParamNames = []
- extSearchParamNames = []
- if searchParams is not None:
- for searchParam in searchParams:
- paramItems = searchParam.split(":")
- extSearchParamNames.append(paramItems[0])
-
- #drop the 'search' component from the param name
- paramNameItems = paramItems[0].split(".")
- del paramNameItems[1]
- paramItems[0] = ".".join(paramNameItems)
-
- searchStrategy.addParam(paramItems)
- searchParamNames.append(paramItems[0])
- else:
- raise ValueError("missing search parameter list")
-
- # add search param data list for each param
- for (searchParamName,extSearchParamName) in zip(searchParamNames,extSearchParamNames):
- searchParamData = self.config.getStringConfig(extSearchParamName)[0].split(",")
- searchStrategy.addParamVaues(searchParamName, searchParamData)
-
- # train and validate for various param value combinations
- searchStrategy.prepare()
- paramValues = searchStrategy.nextParamValues()
- searchResults = []
- while paramValues is not None:
- self.logger.info("...next parameter set")
- paramStr = ""
- for paramValue in paramValues:
- self.setConfigParam(paramValue[0], str(paramValue[1]))
- paramStr = paramStr + paramValue[0] + "=" + str(paramValue[1]) + " "
- result = self.trainValidate()
- searchStrategy.setCost(result)
- searchResults.append((paramStr, result))
- paramValues = searchStrategy.nextParamValues()
-
- # output
- self.logger.info("all parameter search results")
- for searchResult in searchResults:
- self.logger.info("{}\t{:06.3f}".format(searchResult[0], searchResult[1]))
-
- self.logger.info("best parameter search result")
- bestSolution = searchStrategy.getBestSolution()
- paramStr = ""
- for paramValue in bestSolution[0]:
- paramStr = paramStr + paramValue[0] + "=" + str(paramValue[1]) + " "
- self.logger.info("{}\t{:06.3f}".format(paramStr, bestSolution[1]))
- return bestSolution
-
- #validate
- def validate(self):
- # create model
- useSavedModel = self.config.getBooleanConfig("validate.use.saved.model")[0]
- if useSavedModel:
- # load saved model
- self.logger.info("...loading model")
- modelFilePath = self.getModelFilePath()
- self.gbcClassifier = joblib.load(modelFilePath)
- else:
- # train model
- self.train()
-
- # prepare test data
- (featData, clsDataActual) = self.prepValidationData()
-
- #predict
- self.logger.info("...predicting")
- clsDataPred = self.gbcClassifier.predict(featData)
-
- self.logger.info("...validating")
- #self.logger.info(clsData)
- scoreMethod = self.config.getStringConfig("validate.score.method")[0]
- if scoreMethod == "accuracy":
- accuracy = accuracy_score(clsDataActual, clsDataPred)
- self.logger.info("accuracy:")
- self.logger.info(accuracy)
- elif scoreMethod == "confusionMatrix":
- confMatrx = confusion_matrix(clsDataActual, clsDataPred)
- self.logger.info("confusion matrix:")
- self.logger.info(confMatrx)
-
-
- #predict
- def predictx(self):
- # create model
- useSavedModel = self.config.getBooleanConfig("predict.use.saved.model")[0]
- if useSavedModel:
- # load saved model
- self.logger.info("...loading model")
- modelFilePath = self.getModelFilePath()
- self.gbcClassifier = joblib.load(modelFilePath)
- else:
- # train model
- self.train()
-
- # prepare test data
- featData = self.prepPredictData()
-
- #predict
- self.logger.info("...predicting")
- clsData = 
self.gbcClassifier.predict(featData) - self.logger.info(clsData) - - #predict with in memory data - def predict(self, recs=None): - # create model - self.prepModel() - - #input record - #input record - if recs: - #passed record - featData = self.prepStringPredictData(recs) - if (featData.ndim == 1): - featData = featData.reshape(1, -1) - else: - #file - featData = self.prepPredictData() - - #predict - self.logger.info("...predicting") - clsData = self.gbcClassifier.predict(featData) - return clsData - - #predict probability with in memory data - def predictProb(self, recs): - # create model - self.prepModel() - - #input record - if type(recs) is str: - featData = self.prepStringPredictData(recs) - else: - featData = recs - #self.logger.info(featData.shape) - if (featData.ndim == 1): - featData = featData.reshape(1, -1) - - #predict - self.logger.info("...predicting class probability") - clsData = self.gbcClassifier.predict_proba(featData) - return clsData - - #preparing model - def prepModel(self): - useSavedModel = self.config.getBooleanConfig("predict.use.saved.model")[0] - if (useSavedModel and not self.gbcClassifier): - # load saved model - self.logger.info("...loading saved model") - modelFilePath = self.getModelFilePath() - self.gbcClassifier = joblib.load(modelFilePath) - else: - # train model - self.train() - return self.gbcClassifier - - #prepare string predict data - def prepStringPredictData(self, recs): - frecs = StringIO(recs) - featData = np.loadtxt(frecs, delimiter=',') - #self.logger.info(featData) - return featData - - #loads and prepares training data - def prepTrainingData(self): - # parameters - dataFile = self.config.getStringConfig("train.data.file")[0] - fieldIndices = self.config.getStringConfig("train.data.fields")[0] - if not fieldIndices is None: - fieldIndices = strToIntArray(fieldIndices, ",") - featFieldIndices = self.config.getStringConfig("train.data.feature.fields")[0] - if not featFieldIndices is None: - featFieldIndices = strToIntArray(featFieldIndices, ",") - classFieldIndex = self.config.getIntConfig("train.data.class.field")[0] - - #training data - (data, featData) = loadDataFile(dataFile, ",", fieldIndices, featFieldIndices) - clsData = extrColumns(data, classFieldIndex) - clsData = np.array([int(a) for a in clsData]) - return (featData, clsData) - - #loads and prepares training data - def prepValidationData(self): - # parameters - dataFile = self.config.getStringConfig("validate.data.file")[0] - fieldIndices = self.config.getStringConfig("validate.data.fields")[0] - if not fieldIndices is None: - fieldIndices = strToIntArray(fieldIndices, ",") - featFieldIndices = self.config.getStringConfig("validate.data.feature.fields")[0] - if not featFieldIndices is None: - featFieldIndices = strToIntArray(featFieldIndices, ",") - classFieldIndex = self.config.getIntConfig("validate.data.class.field")[0] - - #training data - (data, featData) = loadDataFile(dataFile, ",", fieldIndices, featFieldIndices) - clsData = extrColumns(data, classFieldIndex) - clsData = [int(a) for a in clsData] - return (featData, clsData) - - #loads and prepares training data - def prepPredictData(self): - # parameters - dataFile = self.config.getStringConfig("predict.data.file")[0] - if dataFile is None: - raise ValueError("missing prediction data file") - fieldIndices = self.config.getStringConfig("predict.data.fields")[0] - if not fieldIndices is None: - fieldIndices = strToIntArray(fieldIndices, ",") - featFieldIndices = self.config.getStringConfig("predict.data.feature.fields")[0] 
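-        # Both index settings are comma-separated column positions; strToIntArray
-        # parses a string such as "0,2,5" into [0, 2, 5] (the example value is
-        # illustrative, not taken from any shipped config).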
- if not featFieldIndices is None: - featFieldIndices = strToIntArray(featFieldIndices, ",") - - #training data - (data, featData) = loadDataFile(dataFile, ",", fieldIndices, featFieldIndices) - - return featData - - # get model file path - def getModelFilePath(self): - modelDirectory = self.config.getStringConfig("common.model.directory")[0] - modelFile = self.config.getStringConfig("common.model.file")[0] - if modelFile is None: - raise ValueError("missing model file name") - modelFilePath = modelDirectory + "/" + modelFile - return modelFilePath - - # report result - def reportResult(self, score, successCriterion, scoreMethod): - if successCriterion == "accuracy": - self.logger.info("average " + scoreMethod + " with k fold cross validation {:06.3f}".format(score)) - result = score - elif successCriterion == "error": - error = 1.0 - score - self.logger.info("average error with k fold cross validation {:06.3f}".format(error)) - result = error - else: - raise ValueError("invalid success criterion") - return result - - # builds model object - def buildModel(self): - self.logger.info("...building gradient boosted tree model") - # parameters - minSamplesSplit = self.config.getStringConfig("train.min.samples.split")[0] - minSamplesSplit = typedValue(minSamplesSplit) - minSamplesLeaf = self.config.getStringConfig("train.min.samples.leaf.gb")[0] - minSamplesLeaf = typedValue(minSamplesLeaf) - #minWeightFractionLeaf = self.config.getFloatConfig("train.min.weight.fraction.leaf.gb")[0] - (maxDepth, maxLeafNodes) = self.config.eitherOrIntConfig("train.max.depth.gb", "train.max.leaf.nodes.gb") - maxFeatures = self.config.getStringConfig("train.max.features.gb")[0] - maxFeatures = typedValue(maxFeatures) - learningRate = self.config.getFloatConfig("train.learning.rate")[0] - numEstimators = self.config.getIntConfig("train.num.estimators.gb")[0] - subsampleFraction = self.config.getFloatConfig("train.subsample")[0] - lossFun = self.config.getStringConfig("train.loss")[0] - randomState = self.config.getIntConfig("train.random.state")[0] - verboseOutput = self.config.getIntConfig("train.verbose")[0] - warmStart = self.config.getBooleanConfig("train.warm.start")[0] - presort = self.config.getStringConfig("train.presort") - if (presort[1]): - presortChoice = presort[0] - else: - presortChoice = presort[0].lower() == "true" - splitCriterion = self.config.getStringConfig("train.criterion")[0] - - #classifier - self.gbcClassifier = GradientBoostingClassifier(loss=lossFun, learning_rate=learningRate, n_estimators=numEstimators, - subsample=subsampleFraction, min_samples_split=minSamplesSplit, - min_samples_leaf=minSamplesLeaf, min_weight_fraction_leaf=0.0, max_depth=maxDepth, - init=None, random_state=randomState, max_features=maxFeatures, verbose=verboseOutput, - max_leaf_nodes=maxLeafNodes, warm_start=warmStart, presort=presortChoice) - - - - - diff --git a/spaces/ViktorTsoi13/ABA_Test/greeting.md b/spaces/ViktorTsoi13/ABA_Test/greeting.md deleted file mode 100644 index 46ba433c7f64d0dfba0180ef6fd0d980bd5efd67..0000000000000000000000000000000000000000 --- a/spaces/ViktorTsoi13/ABA_Test/greeting.md +++ /dev/null @@ -1 +0,0 @@ -Most popular ABA private server \ No newline at end of file diff --git a/spaces/VishalF5/Text_Similarity/README.md b/spaces/VishalF5/Text_Similarity/README.md deleted file mode 100644 index cbe7b14d0a32122de00a57d7297e2c65cd2483eb..0000000000000000000000000000000000000000 --- a/spaces/VishalF5/Text_Similarity/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Similarity -emoji: 
🐠 -colorFrom: indigo -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/WanderingRose/Storm/Dockerfile b/spaces/WanderingRose/Storm/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/WanderingRose/Storm/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/Wauplin/bloomz.cpp-converter/convert.py b/spaces/Wauplin/bloomz.cpp-converter/convert.py deleted file mode 100644 index 6929cf98b8be7e597fbc787de58550718b741948..0000000000000000000000000000000000000000 --- a/spaces/Wauplin/bloomz.cpp-converter/convert.py +++ /dev/null @@ -1,51 +0,0 @@ -from pathlib import Path -from subprocess import run -from typing import Generator - -BLOOMZ_FOLDER = Path(__file__).parent / "bloomz.cpp" - - -def convert( - cache_folder: Path, model_id: str, precision: str, quantization: bool -) -> Generator[str, Path, None]: - # Conversion - cmd = [ - "python", - str(BLOOMZ_FOLDER / "convert-hf-to-ggml.py"), - model_id, - str(cache_folder), - ] - if precision == "FP32": - cmd.append("--use-fp32") - yield f"Running command: `{' '.join(cmd)}`" - run(cmd, check=True) - - # Model file should exist - f_suffix = "f32" if precision == "FP32" else "f16" - _, model_name = model_id.split("/") - model_path = cache_folder / f"ggml-model-{model_name}-{f_suffix}.bin" - assert model_path.is_file() - yield f"Model successfully converted to ggml: {model_path}" - - # Quantization - if quantization: - q_model_path = ( - cache_folder / f"ggml-model-{model_name}-{f_suffix}-q4_0.bin" - ) - cmd = [ - "./bloomz.cpp/quantize", - str(model_path), - str(q_model_path), - "2", - ] - yield f"Running command: `{' '.join(cmd)}`" - run(cmd, check=True) - assert q_model_path.is_file() - - # Delete non-quantized file - model_path.unlink(missing_ok=True) - model_path = q_model_path - yield f"Model successfully quantized: {model_path}" - - # Return - return model_path diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/text/interpret.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/text/interpret.py deleted file mode 100644 index 4073ffd1fc63d334461fde347c17a84a3c26625b..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/text/interpret.py +++ /dev/null @@ -1,100 +0,0 @@ -from ..torch_core import * -from ..basic_data import * -from ..basic_train import * -from ..train import ClassificationInterpretation -import matplotlib.cm as cm - -__all__ = ['TextClassificationInterpretation'] - -def value2rgba(x:float, cmap:Callable=cm.RdYlGn, alpha_mult:float=1.0)->Tuple: - "Convert a value `x` from 0 to 1 (inclusive) to an RGBA tuple according to `cmap` times transparency `alpha_mult`." 
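-    # cmap returns (r, g, b, a) floats in [0, 1]; the RGB channels are scaled
-    # to 0-255 ints while alpha stays a float (times alpha_mult), so the tuple
-    # prints in a form piece_attn_html can embed in its per-token HTML spans.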
- c = cmap(x) - rgb = (np.array(c[:-1]) * 255).astype(int) - a = c[-1] * alpha_mult - return tuple(rgb.tolist() + [a]) - -def piece_attn_html(pieces:List[str], attns:List[float], sep:str=' ', **kwargs)->str: - html_code,spans = [''], [] - for p, a in zip(pieces, attns): - p = html.escape(p) - c = str(value2rgba(a, alpha_mult=0.5, **kwargs)) - spans.append(f'{p}') - html_code.append(sep.join(spans)) - html_code.append('') - return ''.join(html_code) - -def show_piece_attn(*args, **kwargs): - from IPython.display import display, HTML - display(HTML(piece_attn_html(*args, **kwargs))) - -def _eval_dropouts(mod): - module_name = mod.__class__.__name__ - if 'Dropout' in module_name or 'BatchNorm' in module_name: mod.training = False - for module in mod.children(): _eval_dropouts(module) - -class TextClassificationInterpretation(ClassificationInterpretation): - """Provides an interpretation of classification based on input sensitivity. - This was designed for AWD-LSTM only for the moment, because Transformer already has its own attentional model. - """ - - def __init__(self, learn: Learner, preds: Tensor, y_true: Tensor, losses: Tensor, ds_type: DatasetType = DatasetType.Valid): - super(TextClassificationInterpretation, self).__init__(learn,preds,y_true,losses,ds_type) - self.model = learn.model - - @classmethod - def from_learner(cls, learn: Learner, ds_type:DatasetType=DatasetType.Valid, activ:nn.Module=None): - "Gets preds, y_true, losses to construct base class from a learner" - preds_res = learn.get_preds(ds_type=ds_type, activ=activ, with_loss=True, ordered=True) - return cls(learn, *preds_res) - - def intrinsic_attention(self, text:str, class_id:int=None): - """Calculate the intrinsic attention of the input w.r.t to an output `class_id`, or the classification given by the model if `None`. - For reference, see the Sequential Jacobian session at https://www.cs.toronto.edu/~graves/preprint.pdf - """ - self.model.train() - _eval_dropouts(self.model) - self.model.zero_grad() - self.model.reset() - ids = self.data.one_item(text)[0] - emb = self.model[0].module.encoder(ids).detach().requires_grad_(True) - lstm_output = self.model[0].module(emb, from_embeddings=True) - self.model.eval() - cl = self.model[1](lstm_output + (torch.zeros_like(ids).byte(),))[0].softmax(dim=-1) - if class_id is None: class_id = cl.argmax() - cl[0][class_id].backward() - attn = emb.grad.squeeze().abs().sum(dim=-1) - attn /= attn.max() - tokens = self.data.single_ds.reconstruct(ids[0]) - return tokens, attn - - def html_intrinsic_attention(self, text:str, class_id:int=None, **kwargs)->str: - text, attn = self.intrinsic_attention(text, class_id) - return piece_attn_html(text.text.split(), to_np(attn), **kwargs) - - def show_intrinsic_attention(self, text:str, class_id:int=None, **kwargs)->None: - text, attn = self.intrinsic_attention(text, class_id) - show_piece_attn(text.text.split(), to_np(attn), **kwargs) - - def show_top_losses(self, k:int, max_len:int=70)->None: - """ - Create a tabulation showing the first `k` texts in top_losses along with their prediction, actual,loss, and probability of - actual class. `max_len` is the maximum number of tokens displayed. 
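-        Rows are listed in descending order of loss, so the examples the model
-        got most wrong appear first.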
- """ - from IPython.display import display, HTML - items = [] - tl_val,tl_idx = self.top_losses() - for i,idx in enumerate(tl_idx): - if k <= 0: break - k -= 1 - tx,cl = self.data.dl(self.ds_type).dataset[idx] - cl = cl.data - classes = self.data.classes - txt = ' '.join(tx.text.split(' ')[:max_len]) if max_len is not None else tx.text - tmp = [txt, f'{classes[self.pred_class[idx]]}', f'{classes[cl]}', f'{self.losses[idx]:.2f}', - f'{self.preds[idx][cl]:.2f}'] - items.append(tmp) - items = np.array(items) - names = ['Text', 'Prediction', 'Actual', 'Loss', 'Probability'] - df = pd.DataFrame({n:items[:,i] for i,n in enumerate(names)}, columns=names) - with pd.option_context('display.max_colwidth', -1): - display(HTML(df.to_html(index=False))) diff --git a/spaces/Xinyoumeng233hu/SteganographywithGPT-2/utils.py b/spaces/Xinyoumeng233hu/SteganographywithGPT-2/utils.py deleted file mode 100644 index 2a3660e600b8833d3399c78c8e6a0eb5c48f16c7..0000000000000000000000000000000000000000 --- a/spaces/Xinyoumeng233hu/SteganographywithGPT-2/utils.py +++ /dev/null @@ -1,296 +0,0 @@ -import torch -import numpy as np -import bitarray - -from pytorch_transformers import GPT2LMHeadModel, GPT2Tokenizer - -def decode(self, token_ids, **kwargs): - filtered_tokens = self.convert_ids_to_tokens(token_ids) - text = self.convert_tokens_to_string(filtered_tokens) - return text -GPT2Tokenizer.decode = decode - -def _convert_token_to_id(self, token): - return self.encoder.get(token, 0) -GPT2Tokenizer._convert_token_to_id = _convert_token_to_id - - -def limit_past(past): - past = list(past) - for i in range(len(past)): - past[i] = past[i][:, :, :, -1022:] - return past - -def kl(q, logq, logp): - res = q*(logq-logp)/0.69315 - res[q==0] = 0 - return res.sum().item() # in bits - -def entropy(q, logq): - res = q*logq/0.69315 - res[q==0] = 0 - return -res.sum().item() # in bits - -# e.g. [0, 1, 1, 1] looks like 1110=14 -def bits2int(bits): - res = 0 - for i, bit in enumerate(bits): - res += bit*(2**i) - return res - -def int2bits(inp, num_bits): - if num_bits == 0: - return [] - strlist = ('{0:0%db}'%num_bits).format(inp) - return [int(strval) for strval in reversed(strlist)] - -def is_sent_finish(token_idx, enc): - token = enc.decoder[token_idx] - return '.' in token or '!' in token or '?' 
in token - -def num_same_from_beg(bits1, bits2): - assert len(bits1) == len(bits2) - for i in range(len(bits1)): - if bits1[i] != bits2[i]: - break - - return i - -def encode_context(raw_text, enc): - context_tokens = [enc.encoder['<|endoftext|>']] + enc.encode(raw_text) - return context_tokens - -# Use gpt2-medium for 345M param model -# Use gpt2-large for 774M param model -def get_model(seed=1234, model_name='gpt2'): - np.random.seed(seed) - torch.random.manual_seed(seed) - torch.cuda.manual_seed(seed) - device = torch.device("cpu") - - enc = GPT2Tokenizer.from_pretrained(model_name) - enc.unk_token = None - enc.bos_token = None - enc.eos_token = None - - model = GPT2LMHeadModel.from_pretrained(model_name) - model.to(device) - model.eval() - #model.double() - - return enc, model - -enc32_itoc = ['\0', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '.', ',', "'", '!', ' '] -enc32_ctoi = {k: v for v, k in enumerate(enc32_itoc)} -def enc32(text): - bits = [] - for c in text: - bits.extend(int2bits(enc32_ctoi[c], 5)) - return bits - -def dec32(bits): - text = '' - for i in range(0, len(bits), 5): - c = enc32_itoc[bits2int(bits[i:i+5])] - if c == '\0': - break - text += c - return text - -# message should be bit string -# encoded should be text string -def expansion_ratio(message, encoded): - message_bits = len(message) - encoded_ba = bitarray.bitarray() - encoded_ba.frombytes(encoded.encode('utf-8')) - encoded_bits = len(encoded_ba.tolist()) - return encoded_bits/message_bits - -#@title -import torch -import math -import random - -def bin_sort(l, token_indices, total, entropy, device): - #compute entropy for upper bound on the number of bins we need - - bucket_size = total - num_bins = 2**int(entropy+1) - bucket_size = total / num_bins - - bins = [torch.empty(0, dtype=torch.long, device=device)] * num_bins - value_in_bins = [0] * num_bins - space_left_after = [total - i*bucket_size for i in range(0,num_bins)] - - - token_bins = [torch.empty(0, dtype=torch.long, device=device)] * num_bins - - # Figuring out what the search order should be - step_size = num_bins/4 - search_order = [] - priorities = [0]*num_bins - priority = 0 - search_order.append(int(num_bins/2)) - search_order.append(0) - priorities[int(num_bins/2)] = 0 - priorities[0] = 0 - while(step_size>=1): - priority += 1 - for x in range(num_bins-int(step_size), -1, -int(step_size*2)): - search_order.append(x) - priorities[x] = priority - step_size = step_size/2 - - # Adding the actual elements - for (item, token_index) in zip(l.tolist(), token_indices.tolist()): - found_single_bucket_fit = False - single_bucket_index = -1 - single_bucket_value = bucket_size - - found_multi_bucket_bumpless_fit = False - multi_bucket_bumpless_index = -1 - multi_bucket_bumpless_value = total - - found_multi_bucket_bumping_fit = False - multi_bucket_bumping_index = -1 - multi_bucket_bumping_value = total - - for i in search_order: # for index in search_order - if(item > space_left_after[i]): - continue - if(value_in_bins[i] >= bucket_size): - continue - - # Priority of choices - # 1. Can i place this thing in an empty bucket all on its own? - # 2. Can i plan this somewhere where is doesnt have to bump anything else around? - # 2a. Minimize the wasted space. Aka use the smallest space (of equal priority) that accomplishes this goal - # 3. If not (1) and (2), then put it in the space the bumps stuff the least. - - if(value_in_bins[i] + item > bucket_size): #Would overflow. 
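-            # Overflow case: the item cannot fit in bucket i alone, so measure the
-            # contiguous free space starting here (the rest of this bucket plus any
-            # empty buckets after it) and prefer a placement that avoids bumping
-            # already-placed values; otherwise remember the one that bumps least.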
- - space_before_next_block = bucket_size - value_in_bins[i] - for j in range(i+1, len(bins)): - if(value_in_bins[j] > 0): # We have found a bucket with something in it. This is how much space we have here. - space_before_next_block = space_before_next_block + (bucket_size - value_in_bins[i]) - break - else: # This was a empty bucket - space_before_next_block = space_before_next_block + bucket_size - - if((not found_multi_bucket_bumpless_fit) or (found_multi_bucket_bumpless_fit and priorities[i] <= priorities[multi_bucket_bumpless_index])): #This could potentially be a match - - # If this is a valid space to put this without bumping and it is a better fit than previous spaces - if(space_before_next_block > item and space_before_next_block < multi_bucket_bumpless_value): - # set this to be the pointer! we can fit stuff here - found_multi_bucket_bumpless_fit = True - multi_bucket_bumpless_index = i - multi_bucket_bumpless_value = space_before_next_block - - # Find the overflow that will bump the least - if ( item - space_before_next_block < multi_bucket_bumping_value): - found_multi_bucket_bumping_fit = True - multi_bucket_bumping_index = i - multi_bucket_bumping_value = item - space_before_next_block - - if(value_in_bins[i] + item <= bucket_size): #Would fit - if(single_bucket_value > value_in_bins[i]): - found_single_bucket_fit = True - single_bucket_value = value_in_bins[i] - single_bucket_index = i - - if (single_bucket_index == multi_bucket_bumpless_index == multi_bucket_bumping_index == -1): - bins[0] = torch.cat( (torch.tensor([item], device=device), bins[0]), 0) - token_bins[0] = torch.cat( (torch.tensor([token_index], device=device), token_bins[0]), 0) - continue - - - if found_single_bucket_fit: - # We found somewhere we can actually fit! - bins[single_bucket_index] = torch.cat( (bins[single_bucket_index], torch.tensor([item], device=device)), 0) - token_bins[single_bucket_index] = torch.cat( (token_bins[single_bucket_index], torch.tensor([token_index], device=device)), 0) - value_in_bins[single_bucket_index] += item - for i in range(0, single_bucket_index+1): - space_left_after[i] -= item - - elif found_multi_bucket_bumpless_fit: - # Found somewhere we can put this without upsetting the force - part_in_bucket = bucket_size - value_in_bins[multi_bucket_bumpless_index] - part_overflow = item - part_in_bucket - bins[multi_bucket_bumpless_index] = torch.cat( (bins[multi_bucket_bumpless_index], torch.tensor([item], device=device)), 0) - token_bins[multi_bucket_bumpless_index] = torch.cat( (token_bins[multi_bucket_bumpless_index], torch.tensor([token_index], device=device)), 0) - value_in_bins[multi_bucket_bumpless_index] = bucket_size - - # Fill this bucket and continue overflowing - j = multi_bucket_bumpless_index + 1 - for i in range(0, j): - space_left_after[i] -= item - - while(part_overflow > 0): - new_part_overflow = (value_in_bins[j] + part_overflow) - bucket_size - value_in_bins[j] = min(bucket_size, part_overflow+value_in_bins[j]) # mark the bucket as filled - space_left_after[j] -= part_overflow - part_overflow = new_part_overflow - j+=1 - - else: - part_in_bucket = bucket_size - value_in_bins[multi_bucket_bumping_index] - part_overflow = item - part_in_bucket - bins[multi_bucket_bumping_index] = torch.cat( (bins[multi_bucket_bumping_index], torch.tensor([item], device=device)), 0) - token_bins[multi_bucket_bumping_index] = torch.cat( (token_bins[multi_bucket_bumping_index], torch.tensor([token_index], device=device)), 0) - value_in_bins[multi_bucket_bumping_index] = 
bucket_size - - # Fill this bucket and continue overflowing - j = multi_bucket_bumping_index + 1 - for i in range(0, j): - space_left_after[i] -= item - while(part_overflow > 0): - new_part_overflow = (value_in_bins[j] + part_overflow) - bucket_size - value_in_bins[j] = min(bucket_size, part_overflow+value_in_bins[j]) # mark the bucket as filled - space_left_after[j] -= part_overflow - part_overflow = new_part_overflow - j+=1 - - sorted_tensor = torch.cat(bins, 0) - sorted_tokens = torch.cat(token_bins, 0) - - return sorted_tensor, sorted_tokens - -def compute_ev(t, precision): - expected_bits = [] - cum_probs = t.cumsum(0) - - for selection in range(0, len(cum_probs)): - - # Calculate new range as ints - new_int_bottom = cum_probs[selection-1] if selection > 0 else 0 - new_int_top = cum_probs[selection] - - # Convert range to bits - new_int_bottom_bits_inc = list(reversed(int2bits(new_int_bottom, precision))) - new_int_top_bits_inc = list(reversed(int2bits(new_int_top-1, precision))) # -1 here because upper bound is exclusive - - # Consume most significant bits which are now fixed and update interval - num_bits_encoded = num_same_from_beg(new_int_bottom_bits_inc, new_int_top_bits_inc) - expected_bits.append(t[selection] * num_bits_encoded) - - return(float(sum(expected_bits).item())/(2**precision)) - -def visualize_bins(values_in_bins, bucket_size): - out_str = "[" - for b in values_in_bins: - out_str = out_str + " " + str(round(100*b/bucket_size,2)) + " |" - out_str = out_str + "]" - print(out_str) - -def visualize_distribution(l): - total = sum(l) - out_str = "[" - for b in l: - out_str = out_str + " " + str(round(100*b/total,2)) + " |" - out_str = out_str + "]" - print(out_str) - -def compute_entropy(lists): - total = sum(lists) - entropy = -1*sum([ (x/total) * math.log2(x/total) for x in lists]) - return entropy \ No newline at end of file diff --git a/spaces/XzJosh/Aatrox-Bert-VITS2/resample.py b/spaces/XzJosh/Aatrox-Bert-VITS2/resample.py deleted file mode 100644 index 2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Aatrox-Bert-VITS2/resample.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/losses.py 
b/spaces/XzJosh/Bekki-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bekki-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/XzJosh/Echo-Bert-VITS2/bert_gen.py b/spaces/XzJosh/Echo-Bert-VITS2/bert_gen.py deleted file mode 100644 index 44814715396ffc3abe84a12c74d66293c356eb4f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Echo-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with Pool(processes=12) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. 
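-        # imap_unordered yields as soon as any worker finishes; the loop body is
-        # empty because process_line saves each .bert.pt file as a side effect.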
- for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/XzJosh/Eileen-Bert-VITS2/bert_gen.py b/spaces/XzJosh/Eileen-Bert-VITS2/bert_gen.py deleted file mode 100644 index 44814715396ffc3abe84a12c74d66293c356eb4f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Eileen-Bert-VITS2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with open(hps.data.validation_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - with Pool(processes=12) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. 
- for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/XzJosh/Eileen-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/Eileen-Bert-VITS2/preprocess_text.py deleted file mode 100644 index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Eileen-Bert-VITS2/preprocess_text.py +++ /dev/null @@ -1,64 +0,0 @@ -import json -from random import shuffle - -import tqdm -from text.cleaner import clean_text -from collections import defaultdict -stage = [1,2,3] - -transcription_path = 'filelists/genshin.list' -train_path = 'filelists/train.list' -val_path = 'filelists/val.list' -config_path = "configs/config.json" -val_per_spk = 4 -max_val_total = 8 - -if 1 in stage: - with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f: - for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()): - try: - utt, spk, language, text = line.strip().split('|') - norm_text, phones, tones, word2ph = clean_text(text, language) - f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones), - " ".join([str(i) for i in tones]), - " ".join([str(i) for i in word2ph]))) - except Exception as error : - print("err!", utt, error) - -if 2 in stage: - spk_utt_map = defaultdict(list) - spk_id_map = {} - current_sid = 0 - - with open( transcription_path+'.cleaned', encoding='utf-8') as f: - for line in f.readlines(): - utt, spk, language, text, phones, tones, word2ph = line.strip().split('|') - spk_utt_map[spk].append(line) - if spk not in spk_id_map.keys(): - spk_id_map[spk] = current_sid - current_sid += 1 - train_list = [] - val_list = [] - - for spk, utts in spk_utt_map.items(): - shuffle(utts) - val_list+=utts[:val_per_spk] - train_list+=utts[val_per_spk:] - if len(val_list) > max_val_total: - train_list+=val_list[max_val_total:] - val_list = val_list[:max_val_total] - - with open( train_path,"w", encoding='utf-8') as f: - for line in train_list: - f.write(line) - - with open(val_path, "w", encoding='utf-8') as f: - for line in val_list: - f.write(line) - -if 3 in stage: - assert 2 in stage - config = json.load(open(config_path, encoding='utf-8')) - config["data"]['spk2id'] = spk_id_map - with open(config_path, 'w', encoding='utf-8') as f: - json.dump(config, f, indent=2, ensure_ascii=False) diff --git a/spaces/XzJosh/Taffy-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Taffy-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Taffy-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. 
- -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. -- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/XzJosh/XingTong-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/XingTong-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/XingTong-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
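-# This module applies Mandarin tone sandhi for the TTS front end: neutral-tone
-# words, "不" and "一" sandhi, and third-tone sandhi, operating on the pinyin
-# finals of each jieba-segmented word.
-#
-# Minimal usage sketch (a hypothetical call site, for illustration only):
-#   import jieba.posseg as psg
-#   sandhi = ToneSandhi()
-#   seg = sandhi.pre_merge_for_modify([(w, p) for w, p in psg.cut(text)])
-#   for word, pos in seg:
-#       finals = sandhi.modified_tone(word, pos, finals_for(word))
-# where finals_for() stands in for the caller's pinyin-finals lookup.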
-from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 
奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/Y-T-G/Blur-Anything/tracker/model/trainer.py b/spaces/Y-T-G/Blur-Anything/tracker/model/trainer.py deleted file mode 100644 index 0a935cf7a3cde3e9123a4b2ce01860301d423c7e..0000000000000000000000000000000000000000 --- a/spaces/Y-T-G/Blur-Anything/tracker/model/trainer.py +++ /dev/null @@ -1,302 +0,0 @@ -""" -trainer.py - warpper and utility functions for network training -Compute loss, back-prop, update parameters, logging, etc. -""" -import datetime -import os -import time -import numpy as np -import torch -import torch.nn as nn -import torch.optim as optim - -from model.network import XMem -from model.losses import LossComputer -from util.log_integrator import Integrator -from util.image_saver import pool_pairs - - -class XMemTrainer: - def __init__(self, config, logger=None, save_path=None, local_rank=0, world_size=1): - self.config = config - self.num_frames = config["num_frames"] - self.num_ref_frames = config["num_ref_frames"] - self.deep_update_prob = config["deep_update_prob"] - self.local_rank = local_rank - - self.XMem = nn.parallel.DistributedDataParallel( - XMem(config).cuda(), - device_ids=[local_rank], - output_device=local_rank, - broadcast_buffers=False, - ) - - # Set up logger when local_rank=0 - self.logger = logger - self.save_path = save_path - if logger is not None: - self.last_time = time.time() - self.logger.log_string( - "model_size", - str(sum([param.nelement() for param in self.XMem.parameters()])), - ) - self.train_integrator = Integrator( - self.logger, distributed=True, local_rank=local_rank, world_size=world_size - ) - self.loss_computer = LossComputer(config) - - self.train() - self.optimizer = optim.AdamW( - filter(lambda p: p.requires_grad, self.XMem.parameters()), - lr=config["lr"], - weight_decay=config["weight_decay"], - ) - self.scheduler = optim.lr_scheduler.MultiStepLR( - self.optimizer, config["steps"], config["gamma"] - ) - if config["amp"]: - self.scaler = torch.cuda.amp.GradScaler() - - # Logging info - self.log_text_interval = config["log_text_interval"] - self.log_image_interval = config["log_image_interval"] - self.save_network_interval = config["save_network_interval"] - self.save_checkpoint_interval = config["save_checkpoint_interval"] - if config["debug"]: - self.log_text_interval = self.log_image_interval = 1 - - def do_pass(self, data, max_it, it=0): - # No need to store the gradient outside training - torch.set_grad_enabled(self._is_train) - - for k, v in data.items(): - if type(v) != list and type(v) != dict and type(v) != int: - data[k] = v.cuda(non_blocking=True) - - out = {} - frames = data["rgb"] - first_frame_gt = data["first_frame_gt"].float() - b = frames.shape[0] - num_filled_objects = [o.item() for o in data["info"]["num_objects"]] - num_objects = first_frame_gt.shape[2] - selector = data["selector"].unsqueeze(2).unsqueeze(2) - - global_avg = 0 - - with torch.cuda.amp.autocast(enabled=self.config["amp"]): - # image features never change, compute once - key, shrinkage, selection, f16, f8, f4 = self.XMem("encode_key", frames) - - filler_one = torch.zeros(1, dtype=torch.int64) - hidden = torch.zeros( - (b, 
num_objects, self.config["hidden_dim"], *key.shape[-2:]) - ) - v16, hidden = self.XMem( - "encode_value", frames[:, 0], f16[:, 0], hidden, first_frame_gt[:, 0] - ) - values = v16.unsqueeze(3) # add the time dimension - - for ti in range(1, self.num_frames): - if ti <= self.num_ref_frames: - ref_values = values - ref_keys = key[:, :, :ti] - ref_shrinkage = ( - shrinkage[:, :, :ti] if shrinkage is not None else None - ) - else: - # pick num_ref_frames random frames - # this is not very efficient but I think we would - # need broadcasting in gather which we don't have - indices = [ - torch.cat( - [ - filler_one, - torch.randperm(ti - 1)[: self.num_ref_frames - 1] + 1, - ] - ) - for _ in range(b) - ] - ref_values = torch.stack( - [values[bi, :, :, indices[bi]] for bi in range(b)], 0 - ) - ref_keys = torch.stack( - [key[bi, :, indices[bi]] for bi in range(b)], 0 - ) - ref_shrinkage = ( - torch.stack( - [shrinkage[bi, :, indices[bi]] for bi in range(b)], 0 - ) - if shrinkage is not None - else None - ) - - # Segment frame ti - memory_readout = self.XMem( - "read_memory", - key[:, :, ti], - selection[:, :, ti] if selection is not None else None, - ref_keys, - ref_shrinkage, - ref_values, - ) - hidden, logits, masks = self.XMem( - "segment", - (f16[:, ti], f8[:, ti], f4[:, ti]), - memory_readout, - hidden, - selector, - h_out=(ti < (self.num_frames - 1)), - ) - - # No need to encode the last frame - if ti < (self.num_frames - 1): - is_deep_update = np.random.rand() < self.deep_update_prob - v16, hidden = self.XMem( - "encode_value", - frames[:, ti], - f16[:, ti], - hidden, - masks, - is_deep_update=is_deep_update, - ) - values = torch.cat([values, v16.unsqueeze(3)], 3) - - out[f"masks_{ti}"] = masks - out[f"logits_{ti}"] = logits - - if self._do_log or self._is_train: - losses = self.loss_computer.compute( - {**data, **out}, num_filled_objects, it - ) - - # Logging - if self._do_log: - self.integrator.add_dict(losses) - if self._is_train: - if it % self.log_image_interval == 0 and it != 0: - if self.logger is not None: - images = {**data, **out} - size = (384, 384) - self.logger.log_cv2( - "train/pairs", - pool_pairs(images, size, num_filled_objects), - it, - ) - - if self._is_train: - - if (it) % self.log_text_interval == 0 and it != 0: - time_spent = time.time() - self.last_time - - if self.logger is not None: - self.logger.log_scalar( - "train/lr", self.scheduler.get_last_lr()[0], it - ) - self.logger.log_metrics( - "train", "time", (time_spent) / self.log_text_interval, it - ) - - global_avg = 0.5 * (global_avg) + 0.5 * (time_spent) - eta_seconds = global_avg * (max_it - it) / 100 - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - print(f"ETA: {eta_string}") - - self.last_time = time.time() - self.train_integrator.finalize("train", it) - self.train_integrator.reset_except_hooks() - - if it % self.save_network_interval == 0 and it != 0: - if self.logger is not None: - self.save_network(it) - - if it % self.save_checkpoint_interval == 0 and it != 0: - if self.logger is not None: - self.save_checkpoint(it) - - # Backward pass - self.optimizer.zero_grad(set_to_none=True) - if self.config["amp"]: - self.scaler.scale(losses["total_loss"]).backward() - self.scaler.step(self.optimizer) - self.scaler.update() - else: - losses["total_loss"].backward() - self.optimizer.step() - - self.scheduler.step() - - def save_network(self, it): - if self.save_path is None: - print("Saving has been disabled.") - return - - os.makedirs(os.path.dirname(self.save_path), exist_ok=True) - model_path = 
f"{self.save_path}_{it}.pth" - torch.save(self.XMem.module.state_dict(), model_path) - print(f"Network saved to {model_path}.") - - def save_checkpoint(self, it): - if self.save_path is None: - print("Saving has been disabled.") - return - - os.makedirs(os.path.dirname(self.save_path), exist_ok=True) - checkpoint_path = f"{self.save_path}_checkpoint_{it}.pth" - checkpoint = { - "it": it, - "network": self.XMem.module.state_dict(), - "optimizer": self.optimizer.state_dict(), - "scheduler": self.scheduler.state_dict(), - } - torch.save(checkpoint, checkpoint_path) - print(f"Checkpoint saved to {checkpoint_path}.") - - def load_checkpoint(self, path): - # This method loads everything and should be used to resume training - map_location = "cuda:%d" % self.local_rank - checkpoint = torch.load(path, map_location={"cuda:0": map_location}) - - it = checkpoint["it"] - network = checkpoint["network"] - optimizer = checkpoint["optimizer"] - scheduler = checkpoint["scheduler"] - - map_location = "cuda:%d" % self.local_rank - self.XMem.module.load_state_dict(network) - self.optimizer.load_state_dict(optimizer) - self.scheduler.load_state_dict(scheduler) - - print("Network weights, optimizer states, and scheduler states loaded.") - - return it - - def load_network_in_memory(self, src_dict): - self.XMem.module.load_weights(src_dict) - print("Network weight loaded from memory.") - - def load_network(self, path): - # This method loads only the network weight and should be used to load a pretrained model - map_location = "cuda:%d" % self.local_rank - src_dict = torch.load(path, map_location={"cuda:0": map_location}) - - self.load_network_in_memory(src_dict) - print(f"Network weight loaded from {path}") - - def train(self): - self._is_train = True - self._do_log = True - self.integrator = self.train_integrator - self.XMem.eval() - return self - - def val(self): - self._is_train = False - self._do_log = True - self.XMem.eval() - return self - - def test(self): - self._is_train = False - self._do_log = False - self.XMem.eval() - return self diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_utils.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_utils.py deleted file mode 100644 index 90ab674e38a40796dd1183ec0ef341159f8f62b4..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_utils.py +++ /dev/null @@ -1,154 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import importlib -import os -from dataclasses import dataclass -from typing import Any, Dict, Optional, Union - -import torch - -from ..utils import BaseOutput - - -SCHEDULER_CONFIG_NAME = "scheduler_config.json" - - -@dataclass -class SchedulerOutput(BaseOutput): - """ - Base class for the scheduler's step function output. 
- - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - """ - - prev_sample: torch.FloatTensor - - -class SchedulerMixin: - """ - Mixin containing common functions for the schedulers. - - Class attributes: - - **_compatibles** (`List[str]`) -- A list of classes that are compatible with the parent class, so that - `from_config` can be used from a class different than the one used to save the config (should be overridden - by parent class). - """ - - config_name = SCHEDULER_CONFIG_NAME - _compatibles = [] - has_compatibles = True - - @classmethod - def from_pretrained( - cls, - pretrained_model_name_or_path: Dict[str, Any] = None, - subfolder: Optional[str] = None, - return_unused_kwargs=False, - **kwargs, - ): - r""" - Instantiate a Scheduler class from a pre-defined JSON configuration file inside a directory or Hub repo. - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *model id* of a model repo on huggingface.co. Valid model ids should have an - organization name, like `google/ddpm-celebahq-256`. - - A path to a *directory* containing the schedluer configurations saved using - [`~SchedulerMixin.save_pretrained`], e.g., `./my_model_directory/`. - subfolder (`str`, *optional*): - In case the relevant files are located inside a subfolder of the model repo (either remote in - huggingface.co or downloaded locally), you can specify the folder name here. - return_unused_kwargs (`bool`, *optional*, defaults to `False`): - Whether kwargs that are not consumed by the Python class should be returned or not. - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (i.e., do not try to download the model). - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `transformers-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. 
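- 
-                 A minimal usage sketch (editorial example; it assumes the target repo keeps its
-                 scheduler config in a `scheduler` subfolder, as DDPM pipeline repos typically do):
- 
-                 ```py
-                 from diffusers import DDPMScheduler
- 
-                 # Builds the scheduler from the repo's scheduler_config.json.
-                 scheduler = DDPMScheduler.from_pretrained(
-                     "google/ddpm-celebahq-256", subfolder="scheduler"
-                 )
-                 ```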
- - - - It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated - models](https://huggingface.co/docs/hub/models-gated#gated-models). - - - - - - Activate the special ["offline-mode"](https://huggingface.co/transformers/installation.html#offline-mode) to - use this method in a firewalled environment. - - - - """ - config, kwargs = cls.load_config( - pretrained_model_name_or_path=pretrained_model_name_or_path, - subfolder=subfolder, - return_unused_kwargs=True, - **kwargs, - ) - return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs) - - def save_pretrained(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs): - """ - Save a scheduler configuration object to the directory `save_directory`, so that it can be re-loaded using the - [`~SchedulerMixin.from_pretrained`] class method. - - Args: - save_directory (`str` or `os.PathLike`): - Directory where the configuration JSON file will be saved (will be created if it does not exist). - """ - self.save_config(save_directory=save_directory, push_to_hub=push_to_hub, **kwargs) - - @property - def compatibles(self): - """ - Returns all schedulers that are compatible with this scheduler - - Returns: - `List[SchedulerMixin]`: List of compatible schedulers - """ - return self._get_compatibles() - - @classmethod - def _get_compatibles(cls): - compatible_classes_str = list(set([cls.__name__] + cls._compatibles)) - diffusers_library = importlib.import_module(__name__.split(".")[0]) - compatible_classes = [ - getattr(diffusers_library, c) for c in compatible_classes_str if hasattr(diffusers_library, c) - ] - return compatible_classes diff --git a/spaces/Yilin98/Stock_Prediction/stock_prediction.py b/spaces/Yilin98/Stock_Prediction/stock_prediction.py deleted file mode 100644 index 33d27c941e4df24732e68687798d3e46f1f0738f..0000000000000000000000000000000000000000 --- a/spaces/Yilin98/Stock_Prediction/stock_prediction.py +++ /dev/null @@ -1,66 +0,0 @@ -import hopsworks -import joblib -import math -from sklearn.preprocessing import MinMaxScaler -import numpy as np -from datetime import timedelta, datetime - - - - - -def model(ticker): - project = hopsworks.login() - - # import data - fs = project.get_feature_store() - feature_group = fs.get_feature_group( - name = 'final_data_for_prediction') - - data = feature_group.select_all().read() - data = data.sort_values(by='date') - - last_date = data['date'].values[-1] - last_date = datetime.fromtimestamp(int(int(last_date) / 1000)) - date = last_date.date() + timedelta(days=1) - - data = data.set_index('date') - if ticker == 'AAPL': - data = data.loc[data['name'] == 'APPLE'] - elif ticker == 'AMZN': - data = data.loc[data['name'] == 'AMAZON'] - else: - data = data.loc[data['name'] == 'META'] - data.drop(['name', 'price_move'], axis=1, inplace=True) - - # scaling data - prices = data[['close','neg','neu','pos','compound']] - scaler = MinMaxScaler(feature_range=(0,1)) - scaled_data = scaler.fit_transform(prices) - - prediction_list = scaled_data[-60:] - - x = [] - x.append(prediction_list[-60:]) - x = np.array(x) - - # import model - mr = project.get_model_registry() - if ticker == 'AAPL': - remote_model = mr.get_model("LSTM_Apple", version=1) - model_dir = remote_model.download() - remote_model = joblib.load(model_dir + "/apple_model.pkl") - elif ticker == 'AMZN': - remote_model = mr.get_model("LSTM_Amazon", version=1) - model_dir = remote_model.download() - remote_model = joblib.load(model_dir + "/amazon_model.pkl") - 
else: - remote_model = mr.get_model("LSTM_Meta", version=1) - model_dir = remote_model.download() - remote_model = joblib.load(model_dir + "/meta_model.pkl") - - # predict - out = remote_model.predict(x) - B=np.hstack((out,scaled_data[ : 1,1:])) - out = scaler.inverse_transform(B)[0,0] - return date, out \ No newline at end of file diff --git a/spaces/Yudha515/Rvc-Models/tests/common_utils/__init__.py b/spaces/Yudha515/Rvc-Models/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/Yuliang/ECON/lib/pymafx/core/cfgs.py b/spaces/Yuliang/ECON/lib/pymafx/core/cfgs.py deleted file mode 100644 index 17abd247de8d335131d8facc866d95e485ea9a7a..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/pymafx/core/cfgs.py +++ /dev/null @@ -1,108 +0,0 @@ -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -import argparse -import json -import os -import random -import string -from datetime import datetime - -from yacs.config import CfgNode as CN - -# Configuration variables -cfg = CN(new_allowed=True) - -cfg.OUTPUT_DIR = 'results' -cfg.DEVICE = 'cuda' -cfg.DEBUG = False -cfg.LOGDIR = '' -cfg.VAL_VIS_BATCH_FREQ = 200 -cfg.TRAIN_VIS_ITER_FERQ = 1000 -cfg.SEED_VALUE = -1 - -cfg.TRAIN = CN(new_allowed=True) - -cfg.LOSS = CN(new_allowed=True) -cfg.LOSS.KP_2D_W = 300.0 -cfg.LOSS.KP_3D_W = 300.0 -cfg.LOSS.SHAPE_W = 0.06 -cfg.LOSS.POSE_W = 60.0 -cfg.LOSS.VERT_W = 0.0 - -# Loss weights for dense correspondences -cfg.LOSS.INDEX_WEIGHTS = 2.0 -# Loss weights for surface parts. (24 Parts) -cfg.LOSS.PART_WEIGHTS = 0.3 -# Loss weights for UV regression. 
-cfg.LOSS.POINT_REGRESSION_WEIGHTS = 0.5 - -cfg.MODEL = CN(new_allowed=True) - -cfg.MODEL.PyMAF = CN(new_allowed=True) - -## switch -cfg.TRAIN.BATCH_SIZE = 64 -cfg.TRAIN.VAL_LOOP = True - -cfg.TEST = CN(new_allowed=True) - - -def get_cfg_defaults(): - """Get a yacs CfgNode object with default values for my_project.""" - # Return a clone so that the defaults will not be altered - # This is for the "local variable" use pattern - # return cfg.clone() - return cfg - - -def update_cfg(cfg_file): - # cfg = get_cfg_defaults() - cfg.merge_from_file(cfg_file) - # return cfg.clone() - return cfg - - -def parse_args(args): - cfg_file = args.cfg_file - if args.cfg_file is not None: - cfg = update_cfg(args.cfg_file) - else: - cfg = get_cfg_defaults() - - if args.misc is not None: - cfg.merge_from_list(args.misc) - - return cfg - - -def parse_args_extend(args): - if args.resume: - if not os.path.exists(args.log_dir): - raise ValueError('Experiment are set to resume mode, but log directory does not exist.') - - if args.cfg_file is not None: - cfg = update_cfg(args.cfg_file) - else: - cfg = get_cfg_defaults() - # load log's cfg - cfg_file = os.path.join(args.log_dir, 'cfg.yaml') - cfg = update_cfg(cfg_file) - - if args.misc is not None: - cfg.merge_from_list(args.misc) - else: - parse_args(args) diff --git a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v5/model_download/yolov5_model_p5_all.sh b/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v5/model_download/yolov5_model_p5_all.sh deleted file mode 100644 index ab68c26898822fc2d09995c60584d7a0d9d40657..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Gradio_YOLOv5_Det_v5/model_download/yolov5_model_p5_all.sh +++ /dev/null @@ -1,8 +0,0 @@ -cd ./yolov5 - -# 下载YOLOv5模型 -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5n.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5m.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5l.pt -wget -c -t 0 https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5x.pt diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/val.py b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/val.py deleted file mode 100644 index 5427ee7b361938a21b18b038aa0ab30fd6c15ecc..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/val.py +++ /dev/null @@ -1,397 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Validate a trained YOLOv5 detection model on a detection dataset - -Usage: - $ python val.py --weights yolov5s.pt --data coco128.yaml --img 640 - -Usage - formats: - $ python val.py --weights yolov5s.pt # PyTorch - yolov5s.torchscript # TorchScript - yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn - yolov5s.xml # OpenVINO - yolov5s.engine # TensorRT - yolov5s.mlmodel # CoreML (macOS-only) - yolov5s_saved_model # TensorFlow SavedModel - yolov5s.pb # TensorFlow GraphDef - yolov5s.tflite # TensorFlow Lite - yolov5s_edgetpu.tflite # TensorFlow Edge TPU -""" - -import argparse -import json -import os -import sys -from pathlib import Path - -import numpy as np -import torch -from tqdm import tqdm - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -from models.common import DetectMultiBackend -from utils.callbacks import Callbacks -from 
utils.dataloaders import create_dataloader -from utils.general import (LOGGER, Profile, check_dataset, check_img_size, check_requirements, check_yaml, - coco80_to_coco91_class, colorstr, increment_path, non_max_suppression, print_args, - scale_coords, xywh2xyxy, xyxy2xywh) -from utils.metrics import ConfusionMatrix, ap_per_class, box_iou -from utils.plots import output_to_target, plot_images, plot_val_study -from utils.torch_utils import select_device, smart_inference_mode - - -def save_one_txt(predn, save_conf, shape, file): - # Save one txt result - gn = torch.tensor(shape)[[1, 0, 1, 0]] # normalization gain whwh - for *xyxy, conf, cls in predn.tolist(): - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format - with open(file, 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - -def save_one_json(predn, jdict, path, class_map): - # Save one JSON result {"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236} - image_id = int(path.stem) if path.stem.isnumeric() else path.stem - box = xyxy2xywh(predn[:, :4]) # xywh - box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner - for p, b in zip(predn.tolist(), box.tolist()): - jdict.append({ - 'image_id': image_id, - 'category_id': class_map[int(p[5])], - 'bbox': [round(x, 3) for x in b], - 'score': round(p[4], 5)}) - - -def process_batch(detections, labels, iouv): - """ - Return correct predictions matrix. Both sets of boxes are in (x1, y1, x2, y2) format. - Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - correct (Array[N, 10]), for 10 IoU levels - """ - correct = np.zeros((detections.shape[0], iouv.shape[0])).astype(bool) - iou = box_iou(labels[:, 1:], detections[:, :4]) - correct_class = labels[:, 0:1] == detections[:, 5] - for i in range(len(iouv)): - x = torch.where((iou >= iouv[i]) & correct_class) # IoU > threshold and classes match - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() # [label, detect, iou] - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - # matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - correct[matches[:, 1].astype(int), i] = True - return torch.tensor(correct, dtype=torch.bool, device=iouv.device) - - -@smart_inference_mode() -def run( - data, - weights=None, # model.pt path(s) - batch_size=32, # batch size - imgsz=640, # inference size (pixels) - conf_thres=0.001, # confidence threshold - iou_thres=0.6, # NMS IoU threshold - task='val', # train, val, test, speed or study - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - workers=8, # max dataloader workers (per RANK in DDP mode) - single_cls=False, # treat as single-class dataset - augment=False, # augmented inference - verbose=False, # verbose output - save_txt=False, # save results to *.txt - save_hybrid=False, # save label+prediction hybrid results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_json=False, # save a COCO-JSON results file - project=ROOT / 'runs/val', # save to project/name - name='exp', # save to project/name - exist_ok=False, # existing project/name ok, do not increment - half=True, # use FP16 half-precision inference - dnn=False, # use OpenCV DNN for ONNX inference - model=None, - dataloader=None, - save_dir=Path(''), - plots=True, - callbacks=Callbacks(), - compute_loss=None, -): - # Initialize/load model and set device - training = model is not None - if training: # called by train.py - device, pt, jit, engine = next(model.parameters()).device, True, False, False # get model device, PyTorch model - half &= device.type != 'cpu' # half precision only supported on CUDA - model.half() if half else model.float() - else: # called directly - device = select_device(device, batch_size=batch_size) - - # Directories - save_dir = increment_path(Path(project) / name, exist_ok=exist_ok) # increment run - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Load model - model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half) - stride, pt, jit, engine = model.stride, model.pt, model.jit, model.engine - imgsz = check_img_size(imgsz, s=stride) # check image size - half = model.fp16 # FP16 supported on limited backends with CUDA - if engine: - batch_size = model.batch_size - else: - device = model.device - if not (pt or jit): - batch_size = 1 # export.py models default to batch-size 1 - LOGGER.info(f'Forcing --batch-size 1 square inference (1,3,{imgsz},{imgsz}) for non-PyTorch models') - - # Data - data = check_dataset(data) # check - - # Configure - model.eval() - cuda = device.type != 'cpu' - is_coco = isinstance(data.get('val'), str) and data['val'].endswith(f'coco{os.sep}val2017.txt') # COCO dataset - nc = 1 if single_cls else int(data['nc']) # number of classes - iouv = torch.linspace(0.5, 0.95, 10, device=device) # iou vector for mAP@0.5:0.95 - niou = iouv.numel() - - # Dataloader - if not training: - if pt and not single_cls: # check --weights are trained on --data - ncm = model.model.nc - assert ncm == nc, f'{weights} ({ncm} classes) trained on different --data than what you passed ({nc} ' \ - f'classes). Pass correct combination of --weights and --data that are trained together.' 
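-    # Side note: the "mAP@.5:.95" columns reported below average AP over the
-    # ten IoU thresholds built above with torch.linspace(0.5, 0.95, 10), i.e.
-    # 0.50, 0.55, ..., 0.95 in steps of 0.05.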
- model.warmup(imgsz=(1 if pt else batch_size, 3, imgsz, imgsz)) # warmup - pad = 0.0 if task in ('speed', 'benchmark') else 0.5 - rect = False if task == 'benchmark' else pt # square inference for benchmarks - task = task if task in ('train', 'val', 'test') else 'val' # path to train/val/test images - dataloader = create_dataloader(data[task], - imgsz, - batch_size, - stride, - single_cls, - pad=pad, - rect=rect, - workers=workers, - prefix=colorstr(f'{task}: '))[0] - - seen = 0 - confusion_matrix = ConfusionMatrix(nc=nc) - names = model.names if hasattr(model, 'names') else model.module.names # get class names - if isinstance(names, (list, tuple)): # old format - names = dict(enumerate(names)) - class_map = coco80_to_coco91_class() if is_coco else list(range(1000)) - s = ('%22s' + '%11s' * 6) % ('Class', 'Images', 'Instances', 'P', 'R', 'mAP@.5', 'mAP@.5:.95') - dt, p, r, f1, mp, mr, map50, map = (Profile(), Profile(), Profile()), 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 - loss = torch.zeros(3, device=device) - jdict, stats, ap, ap_class = [], [], [], [] - callbacks.run('on_val_start') - pbar = tqdm(dataloader, desc=s, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar - for batch_i, (im, targets, paths, shapes) in enumerate(pbar): - callbacks.run('on_val_batch_start') - with dt[0]: - if cuda: - im = im.to(device, non_blocking=True) - targets = targets.to(device) - im = im.half() if half else im.float() # uint8 to fp16/32 - im /= 255 # 0 - 255 to 0.0 - 1.0 - nb, _, height, width = im.shape # batch size, channels, height, width - - # Inference - with dt[1]: - out, train_out = model(im) if compute_loss else (model(im, augment=augment), None) - - # Loss - if compute_loss: - loss += compute_loss(train_out, targets)[1] # box, obj, cls - - # NMS - targets[:, 2:] *= torch.tensor((width, height, width, height), device=device) # to pixels - lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling - with dt[2]: - out = non_max_suppression(out, conf_thres, iou_thres, labels=lb, multi_label=True, agnostic=single_cls) - - # Metrics - for si, pred in enumerate(out): - labels = targets[targets[:, 0] == si, 1:] - nl, npr = labels.shape[0], pred.shape[0] # number of labels, predictions - path, shape = Path(paths[si]), shapes[si][0] - correct = torch.zeros(npr, niou, dtype=torch.bool, device=device) # init - seen += 1 - - if npr == 0: - if nl: - stats.append((correct, *torch.zeros((2, 0), device=device), labels[:, 0])) - if plots: - confusion_matrix.process_batch(detections=None, labels=labels[:, 0]) - continue - - # Predictions - if single_cls: - pred[:, 5] = 0 - predn = pred.clone() - scale_coords(im[si].shape[1:], predn[:, :4], shape, shapes[si][1]) # native-space pred - - # Evaluate - if nl: - tbox = xywh2xyxy(labels[:, 1:5]) # target boxes - scale_coords(im[si].shape[1:], tbox, shape, shapes[si][1]) # native-space labels - labelsn = torch.cat((labels[:, 0:1], tbox), 1) # native-space labels - correct = process_batch(predn, labelsn, iouv) - if plots: - confusion_matrix.process_batch(predn, labelsn) - stats.append((correct, pred[:, 4], pred[:, 5], labels[:, 0])) # (correct, conf, pcls, tcls) - - # Save/log - if save_txt: - save_one_txt(predn, save_conf, shape, file=save_dir / 'labels' / f'{path.stem}.txt') - if save_json: - save_one_json(predn, jdict, path, class_map) # append to COCO-JSON dictionary - callbacks.run('on_val_image_end', pred, predn, path, names, im[si]) - - # Plot images - if plots and batch_i < 3: - plot_images(im, targets, paths, 
save_dir / f'val_batch{batch_i}_labels.jpg', names) # labels - plot_images(im, output_to_target(out), paths, save_dir / f'val_batch{batch_i}_pred.jpg', names) # pred - - callbacks.run('on_val_batch_end') - - # Compute metrics - stats = [torch.cat(x, 0).cpu().numpy() for x in zip(*stats)] # to numpy - if len(stats) and stats[0].any(): - tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names) - ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95 - mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean() - nt = np.bincount(stats[3].astype(int), minlength=nc) # number of targets per class - - # Print results - pf = '%22s' + '%11i' * 2 + '%11.3g' * 4 # print format - LOGGER.info(pf % ('all', seen, nt.sum(), mp, mr, map50, map)) - if nt.sum() == 0: - LOGGER.warning(f'WARNING: no labels found in {task} set, can not compute metrics without labels ⚠️') - - # Print results per class - if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats): - for i, c in enumerate(ap_class): - LOGGER.info(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i])) - - # Print speeds - t = tuple(x.t / seen * 1E3 for x in dt) # speeds per image - if not training: - shape = (batch_size, 3, imgsz, imgsz) - LOGGER.info(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {shape}' % t) - - # Plots - if plots: - confusion_matrix.plot(save_dir=save_dir, names=list(names.values())) - callbacks.run('on_val_end') - - # Save JSON - if save_json and len(jdict): - w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights - anno_json = str(Path(data.get('path', '../coco')) / 'annotations/instances_val2017.json') # annotations json - pred_json = str(save_dir / f"{w}_predictions.json") # predictions json - LOGGER.info(f'\nEvaluating pycocotools mAP... 
saving {pred_json}...') - with open(pred_json, 'w') as f: - json.dump(jdict, f) - - try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb - check_requirements(['pycocotools']) - from pycocotools.coco import COCO - from pycocotools.cocoeval import COCOeval - - anno = COCO(anno_json) # init annotations api - pred = anno.loadRes(pred_json) # init predictions api - eval = COCOeval(anno, pred, 'bbox') - if is_coco: - eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.im_files] # image IDs to evaluate - eval.evaluate() - eval.accumulate() - eval.summarize() - map, map50 = eval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5) - except Exception as e: - LOGGER.info(f'pycocotools unable to run: {e}') - - # Return results - model.float() # for training - if not training: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}") - maps = np.zeros(nc) + map - for i, c in enumerate(ap_class): - maps[c] = ap[i] - return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)') - parser.add_argument('--batch-size', type=int, default=32, help='batch size') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold') - parser.add_argument('--task', default='val', help='train, val, test, speed or study') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)') - parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--verbose', action='store_true', help='report mAP by class') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--save-json', action='store_true', help='save a COCO-JSON results file') - parser.add_argument('--project', default=ROOT / 'runs/val', help='save to project/name') - parser.add_argument('--name', default='exp', help='save to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference') - opt = parser.parse_args() - opt.data = check_yaml(opt.data) # check YAML - opt.save_json |= opt.data.endswith('coco.yaml') - opt.save_txt |= opt.save_hybrid - print_args(vars(opt)) - return opt - - -def main(opt): - check_requirements(requirements=ROOT / 'requirements.txt', exclude=('tensorboard', 'thop')) - - if opt.task in ('train', 'val', 'test'): # run normally - if opt.conf_thres > 0.001: # https://github.com/ultralytics/yolov5/issues/1466 - LOGGER.info(f'WARNING: confidence threshold {opt.conf_thres} > 0.001 produces invalid results ⚠️') - if opt.save_hybrid: - LOGGER.info('WARNING: --save-hybrid will return high mAP from hybrid labels, not from predictions alone ⚠️') - run(**vars(opt)) - - else: - weights = opt.weights if isinstance(opt.weights, list) else [opt.weights] - opt.half = True # FP16 for fastest results - if opt.task == 'speed': # speed benchmarks - # python val.py --task speed --data coco.yaml --batch 1 --weights yolov5n.pt yolov5s.pt... - opt.conf_thres, opt.iou_thres, opt.save_json = 0.25, 0.45, False - for opt.weights in weights: - run(**vars(opt), plots=False) - - elif opt.task == 'study': # speed vs mAP benchmarks - # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n.pt yolov5s.pt... 
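-            # (each run below sweeps --imgsz from 256 to 1536 in steps of 128 and
-            # saves one row of metrics plus timings per size to a study_*.txt file)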
- for opt.weights in weights: - f = f'study_{Path(opt.data).stem}_{Path(opt.weights).stem}.txt' # filename to save to - x, y = list(range(256, 1536 + 128, 128)), [] # x axis (image sizes), y axis - for opt.imgsz in x: # img-size - LOGGER.info(f'\nRunning {f} --imgsz {opt.imgsz}...') - r, _, t = run(**vars(opt), plots=False) - y.append(r + t) # results and times - np.savetxt(f, y, fmt='%10.4g') # save - os.system('zip -r study.zip study_*.txt') - plot_val_study(x=x) # plot - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/aaronb/Anything2Image/anything2image/imagebind/__init__.py b/spaces/aaronb/Anything2Image/anything2image/imagebind/__init__.py deleted file mode 100644 index f97604c263254bf8fa784bbcfd15fe904c3d464a..0000000000000000000000000000000000000000 --- a/spaces/aaronb/Anything2Image/anything2image/imagebind/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .data import load_and_transform_text, load_and_transform_audio_data, load_and_transform_video_data, load_and_transform_vision_data -from .models.imagebind_model import imagebind_huge, ModalityType \ No newline at end of file diff --git a/spaces/abdabbas/abd/README.md b/spaces/abdabbas/abd/README.md deleted file mode 100644 index acd1ec00e6d0830ec4073eacc2d2891370ea405c..0000000000000000000000000000000000000000 --- a/spaces/abdabbas/abd/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Abdulrahman -emoji: 🚀 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py deleted file mode 100644 index ee0dc6bdd8df5775857028aaed5444c0f59caf80..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class DistSamplerSeedHook(Hook): - """Data-loading sampler for distributed training. - - When distributed training, it is only useful in conjunction with - :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same - purpose with :obj:`IterLoader`. - """ - - def before_epoch(self, runner): - if hasattr(runner.data_loader.sampler, 'set_epoch'): - # in case the data loader uses `SequentialSampler` in Pytorch - runner.data_loader.sampler.set_epoch(runner.epoch) - elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'): - # batch sampler in pytorch warps the sampler as its attributes. 
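-            # (i.e. the batch sampler wraps the underlying distributed sampler, so
-            # the per-epoch shuffling seed must be set on that inner sampler)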
- runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/visualization/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/visualization/__init__.py deleted file mode 100644 index 4ff995c0861490941f8cfc19ebbd41a2ee7e2d65..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/visualization/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) - -__all__ = ['imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib'] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/custom.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/custom.py deleted file mode 100644 index 1a2351c217f43d32178053dfc682a2b241f9a3f1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/datasets/custom.py +++ /dev/null @@ -1,323 +0,0 @@ -import os.path as osp -import warnings -from collections import OrderedDict - -import mmcv -import numpy as np -from mmcv.utils import print_log -from torch.utils.data import Dataset - -from mmdet.core import eval_map, eval_recalls -from .builder import DATASETS -from .pipelines import Compose - - -@DATASETS.register_module() -class CustomDataset(Dataset): - """Custom dataset for detection. - - The annotation format is shown as follows. The `ann` field is optional for - testing. - - .. code-block:: none - - [ - { - 'filename': 'a.jpg', - 'width': 1280, - 'height': 720, - 'ann': { - 'bboxes': (n, 4) in (x1, y1, x2, y2) order. - 'labels': (n, ), - 'bboxes_ignore': (k, 4), (optional field) - 'labels_ignore': (k, 4) (optional field) - } - }, - ... - ] - - Args: - ann_file (str): Annotation file path. - pipeline (list[dict]): Processing pipeline. - classes (str | Sequence[str], optional): Specify classes to load. - If is None, ``cls.CLASSES`` will be used. Default: None. - data_root (str, optional): Data root for ``ann_file``, - ``img_prefix``, ``seg_prefix``, ``proposal_file`` if specified. - test_mode (bool, optional): If set True, annotation will not be loaded. - filter_empty_gt (bool, optional): If set true, images without bounding - boxes of the dataset's classes will be filtered out. This option - only works when `test_mode=False`, i.e., we never filter images - during tests. 
- """ - - CLASSES = None - - def __init__(self, - ann_file, - pipeline, - classes=None, - data_root=None, - img_prefix='', - seg_prefix=None, - proposal_file=None, - test_mode=False, - filter_empty_gt=True): - self.ann_file = ann_file - self.data_root = data_root - self.img_prefix = img_prefix - self.seg_prefix = seg_prefix - self.proposal_file = proposal_file - self.test_mode = test_mode - self.filter_empty_gt = filter_empty_gt - self.CLASSES = self.get_classes(classes) - - # join paths if data_root is specified - if self.data_root is not None: - if not osp.isabs(self.ann_file): - self.ann_file = osp.join(self.data_root, self.ann_file) - if not (self.img_prefix is None or osp.isabs(self.img_prefix)): - self.img_prefix = osp.join(self.data_root, self.img_prefix) - if not (self.seg_prefix is None or osp.isabs(self.seg_prefix)): - self.seg_prefix = osp.join(self.data_root, self.seg_prefix) - if not (self.proposal_file is None - or osp.isabs(self.proposal_file)): - self.proposal_file = osp.join(self.data_root, - self.proposal_file) - # load annotations (and proposals) - self.data_infos = self.load_annotations(self.ann_file) - - if self.proposal_file is not None: - self.proposals = self.load_proposals(self.proposal_file) - else: - self.proposals = None - - # filter images too small and containing no annotations - if not test_mode: - valid_inds = self._filter_imgs() - self.data_infos = [self.data_infos[i] for i in valid_inds] - if self.proposals is not None: - self.proposals = [self.proposals[i] for i in valid_inds] - # set group flag for the sampler - self._set_group_flag() - - # processing pipeline - self.pipeline = Compose(pipeline) - - def __len__(self): - """Total number of samples of data.""" - return len(self.data_infos) - - def load_annotations(self, ann_file): - """Load annotation from annotation file.""" - return mmcv.load(ann_file) - - def load_proposals(self, proposal_file): - """Load proposal from proposal file.""" - return mmcv.load(proposal_file) - - def get_ann_info(self, idx): - """Get annotation by index. - - Args: - idx (int): Index of data. - - Returns: - dict: Annotation info of specified index. - """ - - return self.data_infos[idx]['ann'] - - def get_cat_ids(self, idx): - """Get category ids by index. - - Args: - idx (int): Index of data. - - Returns: - list[int]: All categories in the image of specified index. - """ - - return self.data_infos[idx]['ann']['labels'].astype(np.int).tolist() - - def pre_pipeline(self, results): - """Prepare results dict for pipeline.""" - results['img_prefix'] = self.img_prefix - results['seg_prefix'] = self.seg_prefix - results['proposal_file'] = self.proposal_file - results['bbox_fields'] = [] - results['mask_fields'] = [] - results['seg_fields'] = [] - - def _filter_imgs(self, min_size=32): - """Filter images too small.""" - if self.filter_empty_gt: - warnings.warn( - 'CustomDataset does not support filtering empty gt images.') - valid_inds = [] - for i, img_info in enumerate(self.data_infos): - if min(img_info['width'], img_info['height']) >= min_size: - valid_inds.append(i) - return valid_inds - - def _set_group_flag(self): - """Set flag according to image aspect ratio. - - Images with aspect ratio greater than 1 will be set as group 1, - otherwise group 0. 
- """ - self.flag = np.zeros(len(self), dtype=np.uint8) - for i in range(len(self)): - img_info = self.data_infos[i] - if img_info['width'] / img_info['height'] > 1: - self.flag[i] = 1 - - def _rand_another(self, idx): - """Get another random index from the same group as the given index.""" - pool = np.where(self.flag == self.flag[idx])[0] - return np.random.choice(pool) - - def __getitem__(self, idx): - """Get training/test data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training/test data (with annotation if `test_mode` is set \ - True). - """ - - if self.test_mode: - return self.prepare_test_img(idx) - while True: - data = self.prepare_train_img(idx) - if data is None: - idx = self._rand_another(idx) - continue - return data - - def prepare_train_img(self, idx): - """Get training data and annotations after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Training data and annotation after pipeline with new keys \ - introduced by pipeline. - """ - - img_info = self.data_infos[idx] - ann_info = self.get_ann_info(idx) - results = dict(img_info=img_info, ann_info=ann_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - def prepare_test_img(self, idx): - """Get testing data after pipeline. - - Args: - idx (int): Index of data. - - Returns: - dict: Testing data after pipeline with new keys introduced by \ - pipeline. - """ - - img_info = self.data_infos[idx] - results = dict(img_info=img_info) - if self.proposals is not None: - results['proposals'] = self.proposals[idx] - self.pre_pipeline(results) - return self.pipeline(results) - - @classmethod - def get_classes(cls, classes=None): - """Get class names of current dataset. - - Args: - classes (Sequence[str] | str | None): If classes is None, use - default CLASSES defined by builtin dataset. If classes is a - string, take it as a file name. The file contains the name of - classes where each line contains one class name. If classes is - a tuple or list, override the CLASSES defined by the dataset. - - Returns: - tuple[str] or list[str]: Names of categories of the dataset. - """ - if classes is None: - return cls.CLASSES - - if isinstance(classes, str): - # take it as a file path - class_names = mmcv.list_from_file(classes) - elif isinstance(classes, (tuple, list)): - class_names = classes - else: - raise ValueError(f'Unsupported type {type(classes)} of classes.') - - return class_names - - def format_results(self, results, **kwargs): - """Place holder to format result to dataset specific output.""" - - def evaluate(self, - results, - metric='mAP', - logger=None, - proposal_nums=(100, 300, 1000), - iou_thr=0.5, - scale_ranges=None): - """Evaluate the dataset. - - Args: - results (list): Testing results of the dataset. - metric (str | list[str]): Metrics to be evaluated. - logger (logging.Logger | None | str): Logger used for printing - related information during evaluation. Default: None. - proposal_nums (Sequence[int]): Proposal number used for evaluating - recalls, such as recall@100, recall@1000. - Default: (100, 300, 1000). - iou_thr (float | list[float]): IoU threshold. Default: 0.5. - scale_ranges (list[tuple] | None): Scale ranges for evaluating mAP. - Default: None. 
- """ - - if not isinstance(metric, str): - assert len(metric) == 1 - metric = metric[0] - allowed_metrics = ['mAP', 'recall'] - if metric not in allowed_metrics: - raise KeyError(f'metric {metric} is not supported') - annotations = [self.get_ann_info(i) for i in range(len(self))] - eval_results = OrderedDict() - iou_thrs = [iou_thr] if isinstance(iou_thr, float) else iou_thr - if metric == 'mAP': - assert isinstance(iou_thrs, list) - mean_aps = [] - for iou_thr in iou_thrs: - print_log(f'\n{"-" * 15}iou_thr: {iou_thr}{"-" * 15}') - mean_ap, _ = eval_map( - results, - annotations, - scale_ranges=scale_ranges, - iou_thr=iou_thr, - dataset=self.CLASSES, - logger=logger) - mean_aps.append(mean_ap) - eval_results[f'AP{int(iou_thr * 100):02d}'] = round(mean_ap, 3) - eval_results['mAP'] = sum(mean_aps) / len(mean_aps) - elif metric == 'recall': - gt_bboxes = [ann['bboxes'] for ann in annotations] - recalls = eval_recalls( - gt_bboxes, results, proposal_nums, iou_thr, logger=logger) - for i, num in enumerate(proposal_nums): - for j, iou in enumerate(iou_thrs): - eval_results[f'recall@{num}@{iou}'] = recalls[i, j] - if recalls.shape[1] > 1: - ar = recalls.mean(axis=1) - for i, num in enumerate(proposal_nums): - eval_results[f'AR@{num}'] = ar[i] - return eval_results diff --git a/spaces/ahmedghani/Editing-Tools/app.py b/spaces/ahmedghani/Editing-Tools/app.py deleted file mode 100644 index 219526697df13ee831280ec10b97f1f1fd442f46..0000000000000000000000000000000000000000 --- a/spaces/ahmedghani/Editing-Tools/app.py +++ /dev/null @@ -1,150 +0,0 @@ -import gradio as gr -from watermark_remover import convert_video_to_frames, remove_image_watermark, remove_video_watermark -from video_converter import convert_video -from image_converter import convert_image -from image_editing import edit_image -from image_inpainting import inpaint - - -css = """ - #remove_btn { - background: linear-gradient(#201d18, #2bbbc3); - font-weight: bold; - font-size: 18px; - color:white; - } - #remove_btn:hover { - background: linear-gradient(#2bbbc3, #201d18); - } - #convert_btn { - background: linear-gradient(#201d18, #2bbbc3); - font-weight: bold; - font-size: 18px; - color:white; - } - #convert_btn:hover { - background: linear-gradient(#2bbbc3, #201d18); - } - #button { - background: linear-gradient(#201d18, #2bbbc3); - font-weight: bold; - font-size: 18px; - color:white; - } - #button:hover { - background: linear-gradient(#2bbbc3, #201d18); - } - footer { - display: none !important; - } -""" - -demo = gr.Blocks(css=css, title="Editing Tools") -with demo: - with gr.Tab("Image Converter"): - gr.Markdown(""" - #
    # 🖼️ Image Converter
    - """) - image_format = ['jpg', 'jpeg', 'png', 'bmp', 'tiff', 'gif', 'webp', 'ico'] - with gr.Row(): - with gr.Column(): - input_image = gr.File(label="Upload an Image") - with gr.Column(): - with gr.Row(): - image_format = gr.Radio(image_format, label="Select Format", interactive=False) - with gr.Row(): - image_convert_btn = gr.Button("Convert Image", interactive=False, elem_id="convert_btn") - with gr.Row(): - output_image = gr.File(label="Output File", interactive=False) - image_status = gr.Textbox(label="Status", interactive=False) - input_image.change(lambda x: gr.Radio.update(interactive=True), inputs=[input_image], outputs=[image_format]) - image_format.change(lambda x: gr.Button.update(interactive=True), None, outputs=[image_convert_btn]) - image_convert_btn.click(convert_image, inputs=[input_image, image_format], outputs=[output_image, image_status]) - - with gr.Tab("Image Watermark Remover"): - gr.Markdown(""" - #
    # 🖼️ Image Watermark Remover
    - """) - input_image_watermark = gr.Image(label="Upload an Image", tool="sketch", type="pil", interactive=True) - image_remove_btn = gr.Button("Remove Watermark", interactive=True, elem_id="remove_btn") - output_image_clean = gr.Image(label="Output Image", interactive=True) - - image_remove_btn.click(remove_image_watermark, inputs=[input_image_watermark], outputs=[output_image_clean]) - - with gr.Tab("Image Editing"): - gr.Markdown(""" - #
    # 🖼️ Image Editing
    - """) - input_editing_image = gr.Image(label="Upload an Image", type="pil", interactive=True) - image_editing_options = gr.Radio(["High Res", "Colorize", "Greyscale", "Remove Background"], label="Select Editing Option", interactive=True, value="High Resolution") - image_editing_btn = gr.Button("Submit", interactive=True, elem_id="button") - with gr.Row(): - image_editing_output = gr.Image(label="Output Preview", interactive=False) - image_editing_file = gr.File(label="Download File", interactive=False) - - image_editing_btn.click(edit_image, inputs=[input_editing_image, image_editing_options], outputs=[image_editing_output, image_editing_file]) - - with gr.Tab("Image Inpainting"): - gr.Markdown(""" - #
    # 🖼️ Image Inpainting
    - """) - input_inpainting_image = gr.Image(label="Upload an Image", type="pil", interactive=True, tool="sketch") - input_inpainting_prompt = gr.Textbox(label="Prompt", interactive=True) - input_inpainting_btn = gr.Button("Submit", interactive=True, elem_id="button") - with gr.Row(): - input_inpainting_output = gr.Image(label="Image Preview", interactive=False) - input_inpainting_file = gr.File(label="Download File", interactive=False) - - input_inpainting_btn.click(inpaint, inputs=[input_inpainting_image, input_inpainting_prompt], outputs=[input_inpainting_output, input_inpainting_file]) - - with gr.Tab("Video Converter"): - gr.Markdown(""" - #
    # 🎥 Video Converter
    - """) - video_format = ['webm', 'wmv', 'mkv', 'mp4', 'avi', 'mpeg', 'vob', 'flv'] - audio_format = ['mp3', 'wav', 'ogg', 'flac', 'aac'] - with gr.Row(): - with gr.Column(): - input_video = gr.Video(label="Upload a Video") - with gr.Column(): - with gr.Row(): - format_select = gr.Radio(["Video", "Audio"], label="Select Format", default="Video") - with gr.Row(): - format = gr.Radio(video_format, label="Select Format", interactive=False) - with gr.Row(): - with gr.Column(): - pass - with gr.Column(): - convert_btn = gr.Button("Convert Video", interactive=False, elem_id="convert_btn") - with gr.Column(): - pass - with gr.Row(): - output = gr.File(label="Output File", interactive=False) - status = gr.Textbox(label="Status", interactive=False) - format_select.change(lambda x: gr.Radio.update(choices=video_format if x == "Video" else audio_format, interactive=True), inputs=[format_select], outputs=[format]) - format.change(lambda x: gr.Button.update(interactive=True), None, outputs=[convert_btn]) - convert_btn.click(convert_video, inputs=[input_video, format], outputs=[output, status]) - - with gr.Tab("Video Watermark Remover"): - gr.Markdown(""" - #
    # 🎥 Video Watermark Remover
    - """) - with gr.Row(): - with gr.Column(): - input_video = gr.Video(label="Upload a Video") - with gr.Column(): - mask = gr.Image(label="Create a mask for the image", tool="sketch", type="pil", interactive=False) - with gr.Row(): - with gr.Column(): - pass - with gr.Column(): - remove_btn = gr.Button("Remove Watermark", interactive=False, elem_id="remove_btn") - with gr.Column(): - pass - - with gr.Row(): - output_video = gr.File(label="Output Video", interactive=False) - input_video.change(convert_video_to_frames, inputs=[input_video], outputs=[mask, remove_btn]) - remove_btn.click(remove_video_watermark, inputs=[mask], outputs=[output_video, remove_btn]) - -demo.launch(show_api=False, share=True) diff --git a/spaces/airely/bingai1/README.md b/spaces/airely/bingai1/README.md deleted file mode 100644 index 574bfa08d57787cfdcec68014f76ce9530d82e3b..0000000000000000000000000000000000000000 --- a/spaces/airely/bingai1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bingo -emoji: 🐠 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 3000 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Models/Optimizers/LnrWrmpInvSqRtDcyScheduler.py b/spaces/akhaliq/SummerTime/model/third_party/HMNet/Models/Optimizers/LnrWrmpInvSqRtDcyScheduler.py deleted file mode 100644 index c9ce98d92c4eb2fcd9b688c8ca6d8fb49a842875..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SummerTime/model/third_party/HMNet/Models/Optimizers/LnrWrmpInvSqRtDcyScheduler.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT license. - -import math -from torch.optim.lr_scheduler import LambdaLR - - -class LnrWrmpInvSqRtDcyScheduler(LambdaLR): - """Inverse Square Root learning rate schedule used in T5""" - - def __init__(self, optimizer, warmup_steps, warmup_init_lr, warmup_end_lr): - self.warmup_steps = warmup_steps - self.warmup_init_lr = warmup_init_lr - self.warmup_end_lr = warmup_end_lr - self.lr_step = (warmup_end_lr - warmup_init_lr) / warmup_steps - super(LnrWrmpInvSqRtDcyScheduler, self).__init__( - optimizer, self.lr_lambda, last_epoch=-1 - ) - - def lr_lambda(self, step): - if step < self.warmup_steps: - return (self.warmup_init_lr + step * self.lr_step) / self.warmup_end_lr - else: - return 1.0 / float(math.sqrt(step / float(self.warmup_steps))) - - def get_last_lr(self): - return self.get_lr() diff --git a/spaces/akhaliq/deeplab2/model/layers/drop_path_test.py b/spaces/akhaliq/deeplab2/model/layers/drop_path_test.py deleted file mode 100644 index 7d02f5fa9d2de935cdeb043bfbad81441e0b1b6f..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/layers/drop_path_test.py +++ /dev/null @@ -1,76 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Test for drop_path.py.""" -import numpy as np -import tensorflow as tf - -from deeplab2.model.layers import drop_path - -# Set a fixed random seed. -tf.random.set_seed(1) - - -class DropPathTest(tf.test.TestCase): - - def test_drop_path_keep_prob_one(self): - # Test drop_path_keep_prob = 1, where output should be equal to input. - drop_path_keep_prob = 1.0 - input_tensor = tf.random.uniform(shape=(3, 65, 65, 32)) - layer_op = drop_path.DropPath(drop_path_keep_prob) - output_tensor = layer_op(input_tensor, training=True) - np.testing.assert_equal(input_tensor.numpy(), output_tensor.numpy()) - - def test_not_training_mode(self): - # Test not training mode, where output should be equal to input. - drop_path_keep_prob = 0.8 - input_tensor = tf.random.uniform(shape=(3, 65, 65, 32)) - layer_op = drop_path.DropPath(drop_path_keep_prob) - output_tensor = layer_op(input_tensor, training=False) - np.testing.assert_equal(input_tensor.numpy(), output_tensor.numpy()) - - def test_drop_path(self): - drop_path_keep_prob = 0.8 - input_tensor = tf.random.uniform(shape=(3, 65, 65, 32)) - layer_op = drop_path.DropPath(drop_path_keep_prob) - output_tensor = layer_op(input_tensor, training=True) - self.assertFalse(np.array_equal(input_tensor.numpy(), - output_tensor.numpy())) - - def test_constant_drop_path_schedule(self): - keep_prob_for_last_stage = 0.8 - current_stage_keep_prob = drop_path.get_drop_path_keep_prob( - keep_prob_for_last_stage, - schedule='constant', - current_stage=2, - num_stages=5) - self.assertEqual(current_stage_keep_prob, keep_prob_for_last_stage) - - def test_linear_drop_path_schedule(self): - keep_prob_for_last_stage = 0.8 - current_stage_keep_prob = drop_path.get_drop_path_keep_prob( - keep_prob_for_last_stage, - schedule='linear', - current_stage=1, - num_stages=4) - self.assertEqual(current_stage_keep_prob, 0.95) - - def test_unknown_drop_path_schedule(self): - with self.assertRaises(ValueError): - _ = drop_path.get_drop_path_keep_prob(0.8, 'unknown', 1, 4) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/akhaliq/lama/saicinpainting/evaluation/losses/fid/__init__.py b/spaces/akhaliq/lama/saicinpainting/evaluation/losses/fid/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/alamin655/Personas/README.md b/spaces/alamin655/Personas/README.md deleted file mode 100644 index 0dbdf323911d0583ab43b31f27d670bff0a75e7f..0000000000000000000000000000000000000000 --- a/spaces/alamin655/Personas/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Persona Chat -emoji: 🎭 -colorFrom: blue -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alan-chen-intel/dagan-demo/sync_batchnorm/batchnorm.py b/spaces/alan-chen-intel/dagan-demo/sync_batchnorm/batchnorm.py deleted file mode 100644 index 5f4e763f0366dffa10320116413f8c7181a8aeb1..0000000000000000000000000000000000000000 --- a/spaces/alan-chen-intel/dagan-demo/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,315 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
- -import collections - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -from .comm import SyncMaster - -__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d'] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dementions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True): - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. - if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. 
-        # Thanks to:: Tete Xiao (http://tetexiao.com/)
-        intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
-        to_reduce = [i[1][:2] for i in intermediates]
-        to_reduce = [j for i in to_reduce for j in i]  # flatten
-        target_gpus = [i[1].sum.get_device() for i in intermediates]
-
-        sum_size = sum([i[1].sum_size for i in intermediates])
-        sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
-        mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
-        broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
-        outputs = []
-        for i, rec in enumerate(intermediates):
-            outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
-        return outputs
-
-    def _compute_mean_std(self, sum_, ssum, size):
-        """Compute the mean and standard-deviation with sum and square-sum. This method
-        also maintains the moving average on the master device."""
-        assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
-        mean = sum_ / size
-        sumvar = ssum - sum_ * mean
-        unbias_var = sumvar / (size - 1)
-        bias_var = sumvar / size
-
-        self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
-        self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
-
-        return mean, bias_var.clamp(self.eps) ** -0.5
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
-    r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
-    mini-batch.
-
-    .. math::
-
-        y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
-    This module differs from the built-in PyTorch BatchNorm1d in that the mean and
-    standard-deviation are reduced across all devices during training.
-
-    For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
-    Instead, in this synchronized version, the statistics will be computed
-    over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
-    as the built-in PyTorch implementation.
-
-    The mean and standard-deviation are calculated per-dimension over
-    the mini-batches and gamma and beta are learnable parameter vectors
-    of size C (where C is the input size).
-
-    During training, this layer keeps a running estimate of its computed mean
-    and variance. The running sum is kept with a default momentum of 0.1.
-
-    During evaluation, this running mean/variance is used for normalization.
-
-    Because the BatchNorm is done over the `C` dimension, computing statistics
-    on `(N, L)` slices, it is common terminology to call this Temporal BatchNorm.
-
-    Args:
-        num_features: num_features from an expected input of size
-            `batch_size x num_features [x width]`
-        eps: a value added to the denominator for numerical stability.
-            Default: 1e-5
-        momentum: the value used for the running_mean and running_var
-            computation. Default: 0.1
-        affine: a boolean value that when set to ``True``, gives the layer learnable
-            affine parameters.
Default: ``True``
-
-    Shape:
-        - Input: :math:`(N, C)` or :math:`(N, C, L)`
-        - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
-    Examples:
-        >>> # With Learnable Parameters
-        >>> m = SynchronizedBatchNorm1d(100)
-        >>> # Without Learnable Parameters
-        >>> m = SynchronizedBatchNorm1d(100, affine=False)
-        >>> input = torch.autograd.Variable(torch.randn(20, 100))
-        >>> output = m(input)
-    """
-
-    def _check_input_dim(self, input):
-        if input.dim() != 2 and input.dim() != 3:
-            raise ValueError('expected 2D or 3D input (got {}D input)'
-                             .format(input.dim()))
-        super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
-    r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
-    of 3d inputs.
-
-    .. math::
-
-        y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
-    This module differs from the built-in PyTorch BatchNorm2d in that the mean and
-    standard-deviation are reduced across all devices during training.
-
-    For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
-    Instead, in this synchronized version, the statistics will be computed
-    over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
-    as the built-in PyTorch implementation.
-
-    The mean and standard-deviation are calculated per-dimension over
-    the mini-batches and gamma and beta are learnable parameter vectors
-    of size C (where C is the input size).
-
-    During training, this layer keeps a running estimate of its computed mean
-    and variance. The running sum is kept with a default momentum of 0.1.
-
-    During evaluation, this running mean/variance is used for normalization.
-
-    Because the BatchNorm is done over the `C` dimension, computing statistics
-    on `(N, H, W)` slices, it is common terminology to call this Spatial BatchNorm.
-
-    Args:
-        num_features: num_features from an expected input of
-            size batch_size x num_features x height x width
-        eps: a value added to the denominator for numerical stability.
-            Default: 1e-5
-        momentum: the value used for the running_mean and running_var
-            computation. Default: 0.1
-        affine: a boolean value that when set to ``True``, gives the layer learnable
-            affine parameters. Default: ``True``
-
-    Shape:
-        - Input: :math:`(N, C, H, W)`
-        - Output: :math:`(N, C, H, W)` (same shape as input)
-
-    Examples:
-        >>> # With Learnable Parameters
-        >>> m = SynchronizedBatchNorm2d(100)
-        >>> # Without Learnable Parameters
-        >>> m = SynchronizedBatchNorm2d(100, affine=False)
-        >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
-        >>> output = m(input)
-    """
-
-    def _check_input_dim(self, input):
-        if input.dim() != 4:
-            raise ValueError('expected 4D input (got {}D input)'
-                             .format(input.dim()))
-        super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
-    r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
-    of 4d inputs.
-
-    .. math::
-
-        y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
-    This module differs from the built-in PyTorch BatchNorm3d in that the mean and
-    standard-deviation are reduced across all devices during training.
-
-    For example, when one uses `nn.DataParallel` to wrap the network during
-    training, PyTorch's implementation normalizes the tensor on each device using
-    the statistics only on that device, which accelerates the computation and
-    is also easy to implement, but the statistics might be inaccurate.
-    Instead, in this synchronized version, the statistics will be computed
-    over all training samples distributed on multiple devices.
-
-    Note that, for the one-GPU or CPU-only case, this module behaves exactly the same
-    as the built-in PyTorch implementation.
-
-    The mean and standard-deviation are calculated per-dimension over
-    the mini-batches and gamma and beta are learnable parameter vectors
-    of size C (where C is the input size).
-
-    During training, this layer keeps a running estimate of its computed mean
-    and variance. The running sum is kept with a default momentum of 0.1.
-
-    During evaluation, this running mean/variance is used for normalization.
-
-    Because the BatchNorm is done over the `C` dimension, computing statistics
-    on `(N, D, H, W)` slices, it is common terminology to call this Volumetric BatchNorm
-    or Spatio-temporal BatchNorm.
-
-    Args:
-        num_features: num_features from an expected input of
-            size batch_size x num_features x depth x height x width
-        eps: a value added to the denominator for numerical stability.
-            Default: 1e-5
-        momentum: the value used for the running_mean and running_var
-            computation. Default: 0.1
-        affine: a boolean value that when set to ``True``, gives the layer learnable
-            affine parameters. Default: ``True``
-
-    Shape:
-        - Input: :math:`(N, C, D, H, W)`
-        - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
-    Examples:
-        >>> # With Learnable Parameters
-        >>> m = SynchronizedBatchNorm3d(100)
-        >>> # Without Learnable Parameters
-        >>> m = SynchronizedBatchNorm3d(100, affine=False)
-        >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
-        >>> output = m(input)
-    """
-
-    def _check_input_dim(self, input):
-        if input.dim() != 5:
-            raise ValueError('expected 5D input (got {}D input)'
-                             .format(input.dim()))
-        super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/cli/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/cli/__init__.py
deleted file mode 100644
index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/cli/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/jisfreq.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/jisfreq.py
deleted file mode 100644
index 83fc082b545106d02622de20f2083e8a7562f96c..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/chardet/jisfreq.py
+++ /dev/null
@@ -1,325 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Communicator client code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -# Sampling from about 20M text materials include literature and computer technology -# -# Japanese frequency table, applied to both S-JIS and EUC-JP -# They are sorted in order. - -# 128 --> 0.77094 -# 256 --> 0.85710 -# 512 --> 0.92635 -# 1024 --> 0.97130 -# 2048 --> 0.99431 -# -# Ideal Distribution Ratio = 0.92635 / (1-0.92635) = 12.58 -# Random Distribution Ration = 512 / (2965+62+83+86-512) = 0.191 -# -# Typical Distribution Ratio, 25% of IDR - -JIS_TYPICAL_DISTRIBUTION_RATIO = 3.0 - -# Char to FreqOrder table , -JIS_TABLE_SIZE = 4368 - -JIS_CHAR_TO_FREQ_ORDER = ( - 40, 1, 6, 182, 152, 180, 295,2127, 285, 381,3295,4304,3068,4606,3165,3510, # 16 -3511,1822,2785,4607,1193,2226,5070,4608, 171,2996,1247, 18, 179,5071, 856,1661, # 32 -1262,5072, 619, 127,3431,3512,3230,1899,1700, 232, 228,1294,1298, 284, 283,2041, # 48 -2042,1061,1062, 48, 49, 44, 45, 433, 434,1040,1041, 996, 787,2997,1255,4305, # 64 -2108,4609,1684,1648,5073,5074,5075,5076,5077,5078,3687,5079,4610,5080,3927,3928, # 80 -5081,3296,3432, 290,2285,1471,2187,5082,2580,2825,1303,2140,1739,1445,2691,3375, # 96 -1691,3297,4306,4307,4611, 452,3376,1182,2713,3688,3069,4308,5083,5084,5085,5086, # 112 -5087,5088,5089,5090,5091,5092,5093,5094,5095,5096,5097,5098,5099,5100,5101,5102, # 128 -5103,5104,5105,5106,5107,5108,5109,5110,5111,5112,4097,5113,5114,5115,5116,5117, # 144 -5118,5119,5120,5121,5122,5123,5124,5125,5126,5127,5128,5129,5130,5131,5132,5133, # 160 -5134,5135,5136,5137,5138,5139,5140,5141,5142,5143,5144,5145,5146,5147,5148,5149, # 176 -5150,5151,5152,4612,5153,5154,5155,5156,5157,5158,5159,5160,5161,5162,5163,5164, # 192 -5165,5166,5167,5168,5169,5170,5171,5172,5173,5174,5175,1472, 598, 618, 820,1205, # 208 -1309,1412,1858,1307,1692,5176,5177,5178,5179,5180,5181,5182,1142,1452,1234,1172, # 224 -1875,2043,2149,1793,1382,2973, 925,2404,1067,1241, 960,1377,2935,1491, 919,1217, # 240 -1865,2030,1406,1499,2749,4098,5183,5184,5185,5186,5187,5188,2561,4099,3117,1804, # 256 -2049,3689,4309,3513,1663,5189,3166,3118,3298,1587,1561,3433,5190,3119,1625,2998, # 272 -3299,4613,1766,3690,2786,4614,5191,5192,5193,5194,2161, 26,3377, 2,3929, 20, # 288 -3691, 47,4100, 50, 17, 16, 35, 268, 27, 243, 42, 155, 24, 154, 29, 184, # 304 - 4, 91, 14, 92, 53, 396, 33, 289, 9, 37, 64, 620, 21, 39, 321, 5, # 320 - 12, 11, 52, 13, 3, 208, 138, 0, 7, 60, 526, 141, 151,1069, 181, 275, # 336 -1591, 83, 132,1475, 126, 331, 829, 15, 69, 160, 59, 22, 157, 55,1079, 312, # 352 - 109, 38, 23, 25, 10, 19, 79,5195, 61, 382,1124, 8, 30,5196,5197,5198, # 368 -5199,5200,5201,5202,5203,5204,5205,5206, 89, 62, 74, 34,2416, 112, 139, 196, # 384 - 271, 149, 84, 607, 131, 765, 46, 88, 153, 683, 76, 874, 101, 258, 57, 
80, # 400 - 32, 364, 121,1508, 169,1547, 68, 235, 145,2999, 41, 360,3027, 70, 63, 31, # 416 - 43, 259, 262,1383, 99, 533, 194, 66, 93, 846, 217, 192, 56, 106, 58, 565, # 432 - 280, 272, 311, 256, 146, 82, 308, 71, 100, 128, 214, 655, 110, 261, 104,1140, # 448 - 54, 51, 36, 87, 67,3070, 185,2618,2936,2020, 28,1066,2390,2059,5207,5208, # 464 -5209,5210,5211,5212,5213,5214,5215,5216,4615,5217,5218,5219,5220,5221,5222,5223, # 480 -5224,5225,5226,5227,5228,5229,5230,5231,5232,5233,5234,5235,5236,3514,5237,5238, # 496 -5239,5240,5241,5242,5243,5244,2297,2031,4616,4310,3692,5245,3071,5246,3598,5247, # 512 -4617,3231,3515,5248,4101,4311,4618,3808,4312,4102,5249,4103,4104,3599,5250,5251, # 528 -5252,5253,5254,5255,5256,5257,5258,5259,5260,5261,5262,5263,5264,5265,5266,5267, # 544 -5268,5269,5270,5271,5272,5273,5274,5275,5276,5277,5278,5279,5280,5281,5282,5283, # 560 -5284,5285,5286,5287,5288,5289,5290,5291,5292,5293,5294,5295,5296,5297,5298,5299, # 576 -5300,5301,5302,5303,5304,5305,5306,5307,5308,5309,5310,5311,5312,5313,5314,5315, # 592 -5316,5317,5318,5319,5320,5321,5322,5323,5324,5325,5326,5327,5328,5329,5330,5331, # 608 -5332,5333,5334,5335,5336,5337,5338,5339,5340,5341,5342,5343,5344,5345,5346,5347, # 624 -5348,5349,5350,5351,5352,5353,5354,5355,5356,5357,5358,5359,5360,5361,5362,5363, # 640 -5364,5365,5366,5367,5368,5369,5370,5371,5372,5373,5374,5375,5376,5377,5378,5379, # 656 -5380,5381, 363, 642,2787,2878,2788,2789,2316,3232,2317,3434,2011, 165,1942,3930, # 672 -3931,3932,3933,5382,4619,5383,4620,5384,5385,5386,5387,5388,5389,5390,5391,5392, # 688 -5393,5394,5395,5396,5397,5398,5399,5400,5401,5402,5403,5404,5405,5406,5407,5408, # 704 -5409,5410,5411,5412,5413,5414,5415,5416,5417,5418,5419,5420,5421,5422,5423,5424, # 720 -5425,5426,5427,5428,5429,5430,5431,5432,5433,5434,5435,5436,5437,5438,5439,5440, # 736 -5441,5442,5443,5444,5445,5446,5447,5448,5449,5450,5451,5452,5453,5454,5455,5456, # 752 -5457,5458,5459,5460,5461,5462,5463,5464,5465,5466,5467,5468,5469,5470,5471,5472, # 768 -5473,5474,5475,5476,5477,5478,5479,5480,5481,5482,5483,5484,5485,5486,5487,5488, # 784 -5489,5490,5491,5492,5493,5494,5495,5496,5497,5498,5499,5500,5501,5502,5503,5504, # 800 -5505,5506,5507,5508,5509,5510,5511,5512,5513,5514,5515,5516,5517,5518,5519,5520, # 816 -5521,5522,5523,5524,5525,5526,5527,5528,5529,5530,5531,5532,5533,5534,5535,5536, # 832 -5537,5538,5539,5540,5541,5542,5543,5544,5545,5546,5547,5548,5549,5550,5551,5552, # 848 -5553,5554,5555,5556,5557,5558,5559,5560,5561,5562,5563,5564,5565,5566,5567,5568, # 864 -5569,5570,5571,5572,5573,5574,5575,5576,5577,5578,5579,5580,5581,5582,5583,5584, # 880 -5585,5586,5587,5588,5589,5590,5591,5592,5593,5594,5595,5596,5597,5598,5599,5600, # 896 -5601,5602,5603,5604,5605,5606,5607,5608,5609,5610,5611,5612,5613,5614,5615,5616, # 912 -5617,5618,5619,5620,5621,5622,5623,5624,5625,5626,5627,5628,5629,5630,5631,5632, # 928 -5633,5634,5635,5636,5637,5638,5639,5640,5641,5642,5643,5644,5645,5646,5647,5648, # 944 -5649,5650,5651,5652,5653,5654,5655,5656,5657,5658,5659,5660,5661,5662,5663,5664, # 960 -5665,5666,5667,5668,5669,5670,5671,5672,5673,5674,5675,5676,5677,5678,5679,5680, # 976 -5681,5682,5683,5684,5685,5686,5687,5688,5689,5690,5691,5692,5693,5694,5695,5696, # 992 -5697,5698,5699,5700,5701,5702,5703,5704,5705,5706,5707,5708,5709,5710,5711,5712, # 1008 -5713,5714,5715,5716,5717,5718,5719,5720,5721,5722,5723,5724,5725,5726,5727,5728, # 1024 -5729,5730,5731,5732,5733,5734,5735,5736,5737,5738,5739,5740,5741,5742,5743,5744, # 1040 
-5745,5746,5747,5748,5749,5750,5751,5752,5753,5754,5755,5756,5757,5758,5759,5760, # 1056 -5761,5762,5763,5764,5765,5766,5767,5768,5769,5770,5771,5772,5773,5774,5775,5776, # 1072 -5777,5778,5779,5780,5781,5782,5783,5784,5785,5786,5787,5788,5789,5790,5791,5792, # 1088 -5793,5794,5795,5796,5797,5798,5799,5800,5801,5802,5803,5804,5805,5806,5807,5808, # 1104 -5809,5810,5811,5812,5813,5814,5815,5816,5817,5818,5819,5820,5821,5822,5823,5824, # 1120 -5825,5826,5827,5828,5829,5830,5831,5832,5833,5834,5835,5836,5837,5838,5839,5840, # 1136 -5841,5842,5843,5844,5845,5846,5847,5848,5849,5850,5851,5852,5853,5854,5855,5856, # 1152 -5857,5858,5859,5860,5861,5862,5863,5864,5865,5866,5867,5868,5869,5870,5871,5872, # 1168 -5873,5874,5875,5876,5877,5878,5879,5880,5881,5882,5883,5884,5885,5886,5887,5888, # 1184 -5889,5890,5891,5892,5893,5894,5895,5896,5897,5898,5899,5900,5901,5902,5903,5904, # 1200 -5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5916,5917,5918,5919,5920, # 1216 -5921,5922,5923,5924,5925,5926,5927,5928,5929,5930,5931,5932,5933,5934,5935,5936, # 1232 -5937,5938,5939,5940,5941,5942,5943,5944,5945,5946,5947,5948,5949,5950,5951,5952, # 1248 -5953,5954,5955,5956,5957,5958,5959,5960,5961,5962,5963,5964,5965,5966,5967,5968, # 1264 -5969,5970,5971,5972,5973,5974,5975,5976,5977,5978,5979,5980,5981,5982,5983,5984, # 1280 -5985,5986,5987,5988,5989,5990,5991,5992,5993,5994,5995,5996,5997,5998,5999,6000, # 1296 -6001,6002,6003,6004,6005,6006,6007,6008,6009,6010,6011,6012,6013,6014,6015,6016, # 1312 -6017,6018,6019,6020,6021,6022,6023,6024,6025,6026,6027,6028,6029,6030,6031,6032, # 1328 -6033,6034,6035,6036,6037,6038,6039,6040,6041,6042,6043,6044,6045,6046,6047,6048, # 1344 -6049,6050,6051,6052,6053,6054,6055,6056,6057,6058,6059,6060,6061,6062,6063,6064, # 1360 -6065,6066,6067,6068,6069,6070,6071,6072,6073,6074,6075,6076,6077,6078,6079,6080, # 1376 -6081,6082,6083,6084,6085,6086,6087,6088,6089,6090,6091,6092,6093,6094,6095,6096, # 1392 -6097,6098,6099,6100,6101,6102,6103,6104,6105,6106,6107,6108,6109,6110,6111,6112, # 1408 -6113,6114,2044,2060,4621, 997,1235, 473,1186,4622, 920,3378,6115,6116, 379,1108, # 1424 -4313,2657,2735,3934,6117,3809, 636,3233, 573,1026,3693,3435,2974,3300,2298,4105, # 1440 - 854,2937,2463, 393,2581,2417, 539, 752,1280,2750,2480, 140,1161, 440, 708,1569, # 1456 - 665,2497,1746,1291,1523,3000, 164,1603, 847,1331, 537,1997, 486, 508,1693,2418, # 1472 -1970,2227, 878,1220, 299,1030, 969, 652,2751, 624,1137,3301,2619, 65,3302,2045, # 1488 -1761,1859,3120,1930,3694,3516, 663,1767, 852, 835,3695, 269, 767,2826,2339,1305, # 1504 - 896,1150, 770,1616,6118, 506,1502,2075,1012,2519, 775,2520,2975,2340,2938,4314, # 1520 -3028,2086,1224,1943,2286,6119,3072,4315,2240,1273,1987,3935,1557, 175, 597, 985, # 1536 -3517,2419,2521,1416,3029, 585, 938,1931,1007,1052,1932,1685,6120,3379,4316,4623, # 1552 - 804, 599,3121,1333,2128,2539,1159,1554,2032,3810, 687,2033,2904, 952, 675,1467, # 1568 -3436,6121,2241,1096,1786,2440,1543,1924, 980,1813,2228, 781,2692,1879, 728,1918, # 1584 -3696,4624, 548,1950,4625,1809,1088,1356,3303,2522,1944, 502, 972, 373, 513,2827, # 1600 - 586,2377,2391,1003,1976,1631,6122,2464,1084, 648,1776,4626,2141, 324, 962,2012, # 1616 -2177,2076,1384, 742,2178,1448,1173,1810, 222, 102, 301, 445, 125,2420, 662,2498, # 1632 - 277, 200,1476,1165,1068, 224,2562,1378,1446, 450,1880, 659, 791, 582,4627,2939, # 1648 -3936,1516,1274, 555,2099,3697,1020,1389,1526,3380,1762,1723,1787,2229, 412,2114, # 1664 -1900,2392,3518, 512,2597, 427,1925,2341,3122,1653,1686,2465,2499, 697, 330, 273, # 
1680 - 380,2162, 951, 832, 780, 991,1301,3073, 965,2270,3519, 668,2523,2636,1286, 535, # 1696 -1407, 518, 671, 957,2658,2378, 267, 611,2197,3030,6123, 248,2299, 967,1799,2356, # 1712 - 850,1418,3437,1876,1256,1480,2828,1718,6124,6125,1755,1664,2405,6126,4628,2879, # 1728 -2829, 499,2179, 676,4629, 557,2329,2214,2090, 325,3234, 464, 811,3001, 992,2342, # 1744 -2481,1232,1469, 303,2242, 466,1070,2163, 603,1777,2091,4630,2752,4631,2714, 322, # 1760 -2659,1964,1768, 481,2188,1463,2330,2857,3600,2092,3031,2421,4632,2318,2070,1849, # 1776 -2598,4633,1302,2254,1668,1701,2422,3811,2905,3032,3123,2046,4106,1763,1694,4634, # 1792 -1604, 943,1724,1454, 917, 868,2215,1169,2940, 552,1145,1800,1228,1823,1955, 316, # 1808 -1080,2510, 361,1807,2830,4107,2660,3381,1346,1423,1134,4108,6127, 541,1263,1229, # 1824 -1148,2540, 545, 465,1833,2880,3438,1901,3074,2482, 816,3937, 713,1788,2500, 122, # 1840 -1575, 195,1451,2501,1111,6128, 859, 374,1225,2243,2483,4317, 390,1033,3439,3075, # 1856 -2524,1687, 266, 793,1440,2599, 946, 779, 802, 507, 897,1081, 528,2189,1292, 711, # 1872 -1866,1725,1167,1640, 753, 398,2661,1053, 246, 348,4318, 137,1024,3440,1600,2077, # 1888 -2129, 825,4319, 698, 238, 521, 187,2300,1157,2423,1641,1605,1464,1610,1097,2541, # 1904 -1260,1436, 759,2255,1814,2150, 705,3235, 409,2563,3304, 561,3033,2005,2564, 726, # 1920 -1956,2343,3698,4109, 949,3812,3813,3520,1669, 653,1379,2525, 881,2198, 632,2256, # 1936 -1027, 778,1074, 733,1957, 514,1481,2466, 554,2180, 702,3938,1606,1017,1398,6129, # 1952 -1380,3521, 921, 993,1313, 594, 449,1489,1617,1166, 768,1426,1360, 495,1794,3601, # 1968 -1177,3602,1170,4320,2344, 476, 425,3167,4635,3168,1424, 401,2662,1171,3382,1998, # 1984 -1089,4110, 477,3169, 474,6130,1909, 596,2831,1842, 494, 693,1051,1028,1207,3076, # 2000 - 606,2115, 727,2790,1473,1115, 743,3522, 630, 805,1532,4321,2021, 366,1057, 838, # 2016 - 684,1114,2142,4322,2050,1492,1892,1808,2271,3814,2424,1971,1447,1373,3305,1090, # 2032 -1536,3939,3523,3306,1455,2199, 336, 369,2331,1035, 584,2393, 902, 718,2600,6131, # 2048 -2753, 463,2151,1149,1611,2467, 715,1308,3124,1268, 343,1413,3236,1517,1347,2663, # 2064 -2093,3940,2022,1131,1553,2100,2941,1427,3441,2942,1323,2484,6132,1980, 872,2368, # 2080 -2441,2943, 320,2369,2116,1082, 679,1933,3941,2791,3815, 625,1143,2023, 422,2200, # 2096 -3816,6133, 730,1695, 356,2257,1626,2301,2858,2637,1627,1778, 937, 883,2906,2693, # 2112 -3002,1769,1086, 400,1063,1325,3307,2792,4111,3077, 456,2345,1046, 747,6134,1524, # 2128 - 884,1094,3383,1474,2164,1059, 974,1688,2181,2258,1047, 345,1665,1187, 358, 875, # 2144 -3170, 305, 660,3524,2190,1334,1135,3171,1540,1649,2542,1527, 927, 968,2793, 885, # 2160 -1972,1850, 482, 500,2638,1218,1109,1085,2543,1654,2034, 876, 78,2287,1482,1277, # 2176 - 861,1675,1083,1779, 724,2754, 454, 397,1132,1612,2332, 893, 672,1237, 257,2259, # 2192 -2370, 135,3384, 337,2244, 547, 352, 340, 709,2485,1400, 788,1138,2511, 540, 772, # 2208 -1682,2260,2272,2544,2013,1843,1902,4636,1999,1562,2288,4637,2201,1403,1533, 407, # 2224 - 576,3308,1254,2071, 978,3385, 170, 136,1201,3125,2664,3172,2394, 213, 912, 873, # 2240 -3603,1713,2202, 699,3604,3699, 813,3442, 493, 531,1054, 468,2907,1483, 304, 281, # 2256 -4112,1726,1252,2094, 339,2319,2130,2639, 756,1563,2944, 748, 571,2976,1588,2425, # 2272 -2715,1851,1460,2426,1528,1392,1973,3237, 288,3309, 685,3386, 296, 892,2716,2216, # 2288 -1570,2245, 722,1747,2217, 905,3238,1103,6135,1893,1441,1965, 251,1805,2371,3700, # 2304 -2601,1919,1078, 75,2182,1509,1592,1270,2640,4638,2152,6136,3310,3817, 524, 
706, # 2320 -1075, 292,3818,1756,2602, 317, 98,3173,3605,3525,1844,2218,3819,2502, 814, 567, # 2336 - 385,2908,1534,6137, 534,1642,3239, 797,6138,1670,1529, 953,4323, 188,1071, 538, # 2352 - 178, 729,3240,2109,1226,1374,2000,2357,2977, 731,2468,1116,2014,2051,6139,1261, # 2368 -1593, 803,2859,2736,3443, 556, 682, 823,1541,6140,1369,2289,1706,2794, 845, 462, # 2384 -2603,2665,1361, 387, 162,2358,1740, 739,1770,1720,1304,1401,3241,1049, 627,1571, # 2400 -2427,3526,1877,3942,1852,1500, 431,1910,1503, 677, 297,2795, 286,1433,1038,1198, # 2416 -2290,1133,1596,4113,4639,2469,1510,1484,3943,6141,2442, 108, 712,4640,2372, 866, # 2432 -3701,2755,3242,1348, 834,1945,1408,3527,2395,3243,1811, 824, 994,1179,2110,1548, # 2448 -1453, 790,3003, 690,4324,4325,2832,2909,3820,1860,3821, 225,1748, 310, 346,1780, # 2464 -2470, 821,1993,2717,2796, 828, 877,3528,2860,2471,1702,2165,2910,2486,1789, 453, # 2480 - 359,2291,1676, 73,1164,1461,1127,3311, 421, 604, 314,1037, 589, 116,2487, 737, # 2496 - 837,1180, 111, 244, 735,6142,2261,1861,1362, 986, 523, 418, 581,2666,3822, 103, # 2512 - 855, 503,1414,1867,2488,1091, 657,1597, 979, 605,1316,4641,1021,2443,2078,2001, # 2528 -1209, 96, 587,2166,1032, 260,1072,2153, 173, 94, 226,3244, 819,2006,4642,4114, # 2544 -2203, 231,1744, 782, 97,2667, 786,3387, 887, 391, 442,2219,4326,1425,6143,2694, # 2560 - 633,1544,1202, 483,2015, 592,2052,1958,2472,1655, 419, 129,4327,3444,3312,1714, # 2576 -1257,3078,4328,1518,1098, 865,1310,1019,1885,1512,1734, 469,2444, 148, 773, 436, # 2592 -1815,1868,1128,1055,4329,1245,2756,3445,2154,1934,1039,4643, 579,1238, 932,2320, # 2608 - 353, 205, 801, 115,2428, 944,2321,1881, 399,2565,1211, 678, 766,3944, 335,2101, # 2624 -1459,1781,1402,3945,2737,2131,1010, 844, 981,1326,1013, 550,1816,1545,2620,1335, # 2640 -1008, 371,2881, 936,1419,1613,3529,1456,1395,2273,1834,2604,1317,2738,2503, 416, # 2656 -1643,4330, 806,1126, 229, 591,3946,1314,1981,1576,1837,1666, 347,1790, 977,3313, # 2672 - 764,2861,1853, 688,2429,1920,1462, 77, 595, 415,2002,3034, 798,1192,4115,6144, # 2688 -2978,4331,3035,2695,2582,2072,2566, 430,2430,1727, 842,1396,3947,3702, 613, 377, # 2704 - 278, 236,1417,3388,3314,3174, 757,1869, 107,3530,6145,1194, 623,2262, 207,1253, # 2720 -2167,3446,3948, 492,1117,1935, 536,1838,2757,1246,4332, 696,2095,2406,1393,1572, # 2736 -3175,1782, 583, 190, 253,1390,2230, 830,3126,3389, 934,3245,1703,1749,2979,1870, # 2752 -2545,1656,2204, 869,2346,4116,3176,1817, 496,1764,4644, 942,1504, 404,1903,1122, # 2768 -1580,3606,2945,1022, 515, 372,1735, 955,2431,3036,6146,2797,1110,2302,2798, 617, # 2784 -6147, 441, 762,1771,3447,3607,3608,1904, 840,3037, 86, 939,1385, 572,1370,2445, # 2800 -1336, 114,3703, 898, 294, 203,3315, 703,1583,2274, 429, 961,4333,1854,1951,3390, # 2816 -2373,3704,4334,1318,1381, 966,1911,2322,1006,1155, 309, 989, 458,2718,1795,1372, # 2832 -1203, 252,1689,1363,3177, 517,1936, 168,1490, 562, 193,3823,1042,4117,1835, 551, # 2848 - 470,4645, 395, 489,3448,1871,1465,2583,2641, 417,1493, 279,1295, 511,1236,1119, # 2864 - 72,1231,1982,1812,3004, 871,1564, 984,3449,1667,2696,2096,4646,2347,2833,1673, # 2880 -3609, 695,3246,2668, 807,1183,4647, 890, 388,2333,1801,1457,2911,1765,1477,1031, # 2896 -3316,3317,1278,3391,2799,2292,2526, 163,3450,4335,2669,1404,1802,6148,2323,2407, # 2912 -1584,1728,1494,1824,1269, 298, 909,3318,1034,1632, 375, 776,1683,2061, 291, 210, # 2928 -1123, 809,1249,1002,2642,3038, 206,1011,2132, 144, 975, 882,1565, 342, 667, 754, # 2944 -1442,2143,1299,2303,2062, 447, 626,2205,1221,2739,2912,1144,1214,2206,2584, 
760, # 2960 -1715, 614, 950,1281,2670,2621, 810, 577,1287,2546,4648, 242,2168, 250,2643, 691, # 2976 - 123,2644, 647, 313,1029, 689,1357,2946,1650, 216, 771,1339,1306, 808,2063, 549, # 2992 - 913,1371,2913,2914,6149,1466,1092,1174,1196,1311,2605,2396,1783,1796,3079, 406, # 3008 -2671,2117,3949,4649, 487,1825,2220,6150,2915, 448,2348,1073,6151,2397,1707, 130, # 3024 - 900,1598, 329, 176,1959,2527,1620,6152,2275,4336,3319,1983,2191,3705,3610,2155, # 3040 -3706,1912,1513,1614,6153,1988, 646, 392,2304,1589,3320,3039,1826,1239,1352,1340, # 3056 -2916, 505,2567,1709,1437,2408,2547, 906,6154,2672, 384,1458,1594,1100,1329, 710, # 3072 - 423,3531,2064,2231,2622,1989,2673,1087,1882, 333, 841,3005,1296,2882,2379, 580, # 3088 -1937,1827,1293,2585, 601, 574, 249,1772,4118,2079,1120, 645, 901,1176,1690, 795, # 3104 -2207, 478,1434, 516,1190,1530, 761,2080, 930,1264, 355, 435,1552, 644,1791, 987, # 3120 - 220,1364,1163,1121,1538, 306,2169,1327,1222, 546,2645, 218, 241, 610,1704,3321, # 3136 -1984,1839,1966,2528, 451,6155,2586,3707,2568, 907,3178, 254,2947, 186,1845,4650, # 3152 - 745, 432,1757, 428,1633, 888,2246,2221,2489,3611,2118,1258,1265, 956,3127,1784, # 3168 -4337,2490, 319, 510, 119, 457,3612, 274,2035,2007,4651,1409,3128, 970,2758, 590, # 3184 -2800, 661,2247,4652,2008,3950,1420,1549,3080,3322,3951,1651,1375,2111, 485,2491, # 3200 -1429,1156,6156,2548,2183,1495, 831,1840,2529,2446, 501,1657, 307,1894,3247,1341, # 3216 - 666, 899,2156,1539,2549,1559, 886, 349,2208,3081,2305,1736,3824,2170,2759,1014, # 3232 -1913,1386, 542,1397,2948, 490, 368, 716, 362, 159, 282,2569,1129,1658,1288,1750, # 3248 -2674, 276, 649,2016, 751,1496, 658,1818,1284,1862,2209,2087,2512,3451, 622,2834, # 3264 - 376, 117,1060,2053,1208,1721,1101,1443, 247,1250,3179,1792,3952,2760,2398,3953, # 3280 -6157,2144,3708, 446,2432,1151,2570,3452,2447,2761,2835,1210,2448,3082, 424,2222, # 3296 -1251,2449,2119,2836, 504,1581,4338, 602, 817, 857,3825,2349,2306, 357,3826,1470, # 3312 -1883,2883, 255, 958, 929,2917,3248, 302,4653,1050,1271,1751,2307,1952,1430,2697, # 3328 -2719,2359, 354,3180, 777, 158,2036,4339,1659,4340,4654,2308,2949,2248,1146,2232, # 3344 -3532,2720,1696,2623,3827,6158,3129,1550,2698,1485,1297,1428, 637, 931,2721,2145, # 3360 - 914,2550,2587, 81,2450, 612, 827,2646,1242,4655,1118,2884, 472,1855,3181,3533, # 3376 -3534, 569,1353,2699,1244,1758,2588,4119,2009,2762,2171,3709,1312,1531,6159,1152, # 3392 -1938, 134,1830, 471,3710,2276,1112,1535,3323,3453,3535, 982,1337,2950, 488, 826, # 3408 - 674,1058,1628,4120,2017, 522,2399, 211, 568,1367,3454, 350, 293,1872,1139,3249, # 3424 -1399,1946,3006,1300,2360,3324, 588, 736,6160,2606, 744, 669,3536,3828,6161,1358, # 3440 - 199, 723, 848, 933, 851,1939,1505,1514,1338,1618,1831,4656,1634,3613, 443,2740, # 3456 -3829, 717,1947, 491,1914,6162,2551,1542,4121,1025,6163,1099,1223, 198,3040,2722, # 3472 - 370, 410,1905,2589, 998,1248,3182,2380, 519,1449,4122,1710, 947, 928,1153,4341, # 3488 -2277, 344,2624,1511, 615, 105, 161,1212,1076,1960,3130,2054,1926,1175,1906,2473, # 3504 - 414,1873,2801,6164,2309, 315,1319,3325, 318,2018,2146,2157, 963, 631, 223,4342, # 3520 -4343,2675, 479,3711,1197,2625,3712,2676,2361,6165,4344,4123,6166,2451,3183,1886, # 3536 -2184,1674,1330,1711,1635,1506, 799, 219,3250,3083,3954,1677,3713,3326,2081,3614, # 3552 -1652,2073,4657,1147,3041,1752, 643,1961, 147,1974,3955,6167,1716,2037, 918,3007, # 3568 -1994, 120,1537, 118, 609,3184,4345, 740,3455,1219, 332,1615,3830,6168,1621,2980, # 3584 -1582, 783, 212, 553,2350,3714,1349,2433,2082,4124, 
889,6169,2310,1275,1410, 973, # 3600 - 166,1320,3456,1797,1215,3185,2885,1846,2590,2763,4658, 629, 822,3008, 763, 940, # 3616 -1990,2862, 439,2409,1566,1240,1622, 926,1282,1907,2764, 654,2210,1607, 327,1130, # 3632 -3956,1678,1623,6170,2434,2192, 686, 608,3831,3715, 903,3957,3042,6171,2741,1522, # 3648 -1915,1105,1555,2552,1359, 323,3251,4346,3457, 738,1354,2553,2311,2334,1828,2003, # 3664 -3832,1753,2351,1227,6172,1887,4125,1478,6173,2410,1874,1712,1847, 520,1204,2607, # 3680 - 264,4659, 836,2677,2102, 600,4660,3833,2278,3084,6174,4347,3615,1342, 640, 532, # 3696 - 543,2608,1888,2400,2591,1009,4348,1497, 341,1737,3616,2723,1394, 529,3252,1321, # 3712 - 983,4661,1515,2120, 971,2592, 924, 287,1662,3186,4349,2700,4350,1519, 908,1948, # 3728 -2452, 156, 796,1629,1486,2223,2055, 694,4126,1259,1036,3392,1213,2249,2742,1889, # 3744 -1230,3958,1015, 910, 408, 559,3617,4662, 746, 725, 935,4663,3959,3009,1289, 563, # 3760 - 867,4664,3960,1567,2981,2038,2626, 988,2263,2381,4351, 143,2374, 704,1895,6175, # 3776 -1188,3716,2088, 673,3085,2362,4352, 484,1608,1921,2765,2918, 215, 904,3618,3537, # 3792 - 894, 509, 976,3043,2701,3961,4353,2837,2982, 498,6176,6177,1102,3538,1332,3393, # 3808 -1487,1636,1637, 233, 245,3962, 383, 650, 995,3044, 460,1520,1206,2352, 749,3327, # 3824 - 530, 700, 389,1438,1560,1773,3963,2264, 719,2951,2724,3834, 870,1832,1644,1000, # 3840 - 839,2474,3717, 197,1630,3394, 365,2886,3964,1285,2133, 734, 922, 818,1106, 732, # 3856 - 480,2083,1774,3458, 923,2279,1350, 221,3086, 85,2233,2234,3835,1585,3010,2147, # 3872 -1387,1705,2382,1619,2475, 133, 239,2802,1991,1016,2084,2383, 411,2838,1113, 651, # 3888 -1985,1160,3328, 990,1863,3087,1048,1276,2647, 265,2627,1599,3253,2056, 150, 638, # 3904 -2019, 656, 853, 326,1479, 680,1439,4354,1001,1759, 413,3459,3395,2492,1431, 459, # 3920 -4355,1125,3329,2265,1953,1450,2065,2863, 849, 351,2678,3131,3254,3255,1104,1577, # 3936 - 227,1351,1645,2453,2193,1421,2887, 812,2121, 634, 95,2435, 201,2312,4665,1646, # 3952 -1671,2743,1601,2554,2702,2648,2280,1315,1366,2089,3132,1573,3718,3965,1729,1189, # 3968 - 328,2679,1077,1940,1136, 558,1283, 964,1195, 621,2074,1199,1743,3460,3619,1896, # 3984 -1916,1890,3836,2952,1154,2112,1064, 862, 378,3011,2066,2113,2803,1568,2839,6178, # 4000 -3088,2919,1941,1660,2004,1992,2194, 142, 707,1590,1708,1624,1922,1023,1836,1233, # 4016 -1004,2313, 789, 741,3620,6179,1609,2411,1200,4127,3719,3720,4666,2057,3721, 593, # 4032 -2840, 367,2920,1878,6180,3461,1521, 628,1168, 692,2211,2649, 300, 720,2067,2571, # 4048 -2953,3396, 959,2504,3966,3539,3462,1977, 701,6181, 954,1043, 800, 681, 183,3722, # 4064 -1803,1730,3540,4128,2103, 815,2314, 174, 467, 230,2454,1093,2134, 755,3541,3397, # 4080 -1141,1162,6182,1738,2039, 270,3256,2513,1005,1647,2185,3837, 858,1679,1897,1719, # 4096 -2954,2324,1806, 402, 670, 167,4129,1498,2158,2104, 750,6183, 915, 189,1680,1551, # 4112 - 455,4356,1501,2455, 405,1095,2955, 338,1586,1266,1819, 570, 641,1324, 237,1556, # 4128 -2650,1388,3723,6184,1368,2384,1343,1978,3089,2436, 879,3724, 792,1191, 758,3012, # 4144 -1411,2135,1322,4357, 240,4667,1848,3725,1574,6185, 420,3045,1546,1391, 714,4358, # 4160 -1967, 941,1864, 863, 664, 426, 560,1731,2680,1785,2864,1949,2363, 403,3330,1415, # 4176 -1279,2136,1697,2335, 204, 721,2097,3838, 90,6186,2085,2505, 191,3967, 124,2148, # 4192 -1376,1798,1178,1107,1898,1405, 860,4359,1243,1272,2375,2983,1558,2456,1638, 113, # 4208 -3621, 578,1923,2609, 880, 386,4130, 784,2186,2266,1422,2956,2172,1722, 497, 263, # 4224 -2514,1267,2412,2610, 177,2703,3542, 
774,1927,1344, 616,1432,1595,1018, 172,4360, # 4240 -2325, 911,4361, 438,1468,3622, 794,3968,2024,2173,1681,1829,2957, 945, 895,3090, # 4256 - 575,2212,2476, 475,2401,2681, 785,2744,1745,2293,2555,1975,3133,2865, 394,4668, # 4272 -3839, 635,4131, 639, 202,1507,2195,2766,1345,1435,2572,3726,1908,1184,1181,2457, # 4288 -3727,3134,4362, 843,2611, 437, 916,4669, 234, 769,1884,3046,3047,3623, 833,6187, # 4304 -1639,2250,2402,1355,1185,2010,2047, 999, 525,1732,1290,1488,2612, 948,1578,3728, # 4320 -2413,2477,1216,2725,2159, 334,3840,1328,3624,2921,1525,4132, 564,1056, 891,4363, # 4336 -1444,1698,2385,2251,3729,1365,2281,2235,1717,6188, 864,3841,2515, 444, 527,2767, # 4352 -2922,3625, 544, 461,6189, 566, 209,2437,3398,2098,1065,2068,3331,3626,3257,2137, # 4368 #last 512 -) - - diff --git a/spaces/aliabd/Anime2Sketch/model.py b/spaces/aliabd/Anime2Sketch/model.py deleted file mode 100644 index f02529621334315815ae53277580d98c2152066a..0000000000000000000000000000000000000000 --- a/spaces/aliabd/Anime2Sketch/model.py +++ /dev/null @@ -1,121 +0,0 @@ -import torch -import torch.nn as nn -import functools - - -class UnetGenerator(nn.Module): - """Create a Unet-based generator""" - - def __init__(self, input_nc, output_nc, num_downs, ngf=64, norm_layer=nn.BatchNorm2d, use_dropout=False): - """Construct a Unet generator - Parameters: - input_nc (int) -- the number of channels in input images - output_nc (int) -- the number of channels in output images - num_downs (int) -- the number of downsamplings in UNet. For example, # if |num_downs| == 7, - image of size 128x128 will become of size 1x1 # at the bottleneck - ngf (int) -- the number of filters in the last conv layer - norm_layer -- normalization layer - We construct the U-Net from the innermost layer to the outermost layer. - It is a recursive process. - """ - super(UnetGenerator, self).__init__() - # construct unet structure - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, innermost=True) # add the innermost layer - for _ in range(num_downs - 5): # add intermediate layers with ngf * 8 filters - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer, use_dropout=use_dropout) - # gradually reduce the number of filters from ngf * 8 to ngf - unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - self.model = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, norm_layer=norm_layer) # add the outermost layer - - def forward(self, input): - """Standard forward""" - return self.model(input) - -class UnetSkipConnectionBlock(nn.Module): - """Defines the Unet submodule with skip connection. - X -------------------identity---------------------- - |-- downsampling -- |submodule| -- upsampling --| - """ - - def __init__(self, outer_nc, inner_nc, input_nc=None, - submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False): - """Construct a Unet submodule with skip connections. 
- Parameters: - outer_nc (int) -- the number of filters in the outer conv layer - inner_nc (int) -- the number of filters in the inner conv layer - input_nc (int) -- the number of channels in input images/features - submodule (UnetSkipConnectionBlock) -- previously defined submodules - outermost (bool) -- if this module is the outermost module - innermost (bool) -- if this module is the innermost module - norm_layer -- normalization layer - use_dropout (bool) -- if use dropout layers. - """ - super(UnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - if type(norm_layer) == functools.partial: - use_bias = norm_layer.func == nn.InstanceNorm2d - else: - use_bias = norm_layer == nn.InstanceNorm2d - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4, - stride=2, padding=1, bias=use_bias) - downrelu = nn.LeakyReLU(0.2, True) - downnorm = norm_layer(inner_nc) - uprelu = nn.ReLU(True) - upnorm = norm_layer(outer_nc) - - if outermost: - upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, - kernel_size=4, stride=2, - padding=1) - down = [downconv] - up = [uprelu, upconv, nn.Tanh()] - model = down + [submodule] + up - elif innermost: - upconv = nn.ConvTranspose2d(inner_nc, outer_nc, - kernel_size=4, stride=2, - padding=1, bias=use_bias) - down = [downrelu, downconv] - up = [uprelu, upconv, upnorm] - model = down + up - else: - upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, - kernel_size=4, stride=2, - padding=1, bias=use_bias) - down = [downrelu, downconv, downnorm] - up = [uprelu, upconv, upnorm] - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: # add skip connections - return torch.cat([x, self.model(x)], 1) - - -def create_model(gpu_ids=[]): - """Create a model for anime2sketch - hardcoding the options for simplicity - """ - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False, track_running_stats=False) - net = UnetGenerator(3, 1, 8, 64, norm_layer=norm_layer, use_dropout=False) - ckpt = torch.load('weights/netG.pth') - for key in list(ckpt.keys()): - if 'module.' 
in key: - ckpt[key.replace('module.', '')] = ckpt[key] - del ckpt[key] - net.load_state_dict(ckpt) - if len(gpu_ids) > 0: - assert(torch.cuda.is_available()) - net.to(gpu_ids[0]) - net = torch.nn.DataParallel(net, gpu_ids) # multi-GPUs - return net \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test108/README.md b/spaces/allknowingroger/Image-Models-Test108/README.md deleted file mode 100644 index c6f01c6c7c17f0433e85a32d5b58cdf0072045a8..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test108/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test107 ---- - - \ No newline at end of file diff --git a/spaces/amagastya/SPARK/app/spark.py b/spaces/amagastya/SPARK/app/spark.py deleted file mode 100644 index 619d66cb5fa921cccac636449b71f32792991621..0000000000000000000000000000000000000000 --- a/spaces/amagastya/SPARK/app/spark.py +++ /dev/null @@ -1,85 +0,0 @@ -import os -from langchain.embeddings.cohere import CohereEmbeddings -from langchain.vectorstores import Pinecone -from langchain.chains import ConversationalRetrievalChain, LLMChain -from langchain.chat_models import ChatOpenAI -import pinecone -import chainlit as cl -from langchain.memory import ConversationTokenBufferMemory -from langchain.prompts import ( - ChatPromptTemplate, - PromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from langchain.prompts.prompt import PromptTemplate -from langchain.chains.qa_with_sources import load_qa_with_sources_chain -from langchain.callbacks import get_openai_callback -from langchain.retrievers import ContextualCompressionRetriever -from langchain.retrievers.document_compressors import CohereRerank -from chainlit import user_session -from prompts import load_query_gen_prompt, load_spark_prompt -from chainlit import on_message, on_chat_start -import openai -from langchain.callbacks import ContextCallbackHandler -from promptwatch import PromptWatch - - -index_name = "spark" - -spark = load_spark_prompt() -query_gen_prompt = load_query_gen_prompt() -CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(query_gen_prompt) -pinecone.init( - api_key=os.environ.get("PINECONE_API_KEY"), - environment='us-west1-gcp', - ) -@on_chat_start -def init(): - token = os.environ["CONTEXT_TOKEN"] - context_callback = ContextCallbackHandler(token) - os.environ["LANGCHAIN_WANDB_TRACING"] = "true" - os.environ["WANDB_PROJECT"] = "spark" - llm = ChatOpenAI(temperature=0.7, verbose=True, openai_api_key = os.environ.get("OPENAI_API_KEY"), streaming=True, - callbacks=[context_callback]) - memory = ConversationTokenBufferMemory(llm=llm,memory_key="chat_history", return_messages=True,input_key='question',max_token_limit=1000) - embeddings = CohereEmbeddings(model='embed-english-light-v2.0',cohere_api_key=os.environ.get("COHERE_API_KEY")) - - docsearch = Pinecone.from_existing_index( - index_name=index_name, embedding=embeddings - ) - retriever = docsearch.as_retriever(search_kwargs={"k": 4}) - # compressor = CohereRerank() - # reranker = ContextualCompressionRetriever( - # base_compressor=compressor, base_retriever=retriever - # ) - messages = [SystemMessagePromptTemplate.from_template(spark)] - # print('mem', user_session.get('memory')) - messages.append(HumanMessagePromptTemplate.from_template("{question}")) - prompt = ChatPromptTemplate.from_messages(messages) - - 
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT, verbose=True) - doc_chain = load_qa_with_sources_chain(llm, chain_type="stuff", verbose=True,prompt=prompt) - - chain = ConversationalRetrievalChain( - retriever=retriever, - question_generator=question_generator, - combine_docs_chain=doc_chain, - verbose=True, - memory=memory, - rephrase_question=False, - callbacks=[context_callback] - ) - cl.user_session.set("conversation_chain", chain) - - -@on_message -async def main(message: str): - with PromptWatch(api_key=os.environ.get("PROMPTWATCH_KEY")) as pw: - token = os.environ["CONTEXT_TOKEN"] - context_callback = ContextCallbackHandler(token) - chain = cl.user_session.get("conversation_chain") - res = await chain.arun({"question": message},callbacks=[cl.AsyncLangchainCallbackHandler(), - context_callback]) - # Send the answer and the text elements to the UI - await cl.Message(content=res).send() \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/evaluate.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/evaluate.py deleted file mode 100644 index 3134280c899500543e5d5e3d6960af4c627a40ef..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/evaluate.py +++ /dev/null @@ -1,142 +0,0 @@ -import datetime -import traceback -from pathlib import Path - -import pandas as pd -import torch -from datasets import load_dataset -from tqdm import tqdm - -from modules import shared -from modules.models import load_model, unload_model -from modules.text_generation import encode -from server import get_model_specific_settings, update_model_parameters - - -def load_past_evaluations(): - if Path('logs/evaluations.csv').exists(): - df = pd.read_csv(Path('logs/evaluations.csv'), dtype=str) - df['Perplexity'] = pd.to_numeric(df['Perplexity']) - return df - else: - return pd.DataFrame(columns=['Model', 'LoRAs', 'Dataset', 'Perplexity', 'stride', 'max_length', 'Date', 'Comment']) -past_evaluations = load_past_evaluations() - - -def save_past_evaluations(df): - global past_evaluations - past_evaluations = df - df.to_csv(Path('logs/evaluations.csv'), index=False) - - -def calculate_perplexity(models, input_dataset, stride, _max_length): - ''' - Based on: - https://huggingface.co/docs/transformers/perplexity#calculating-ppl-with-fixedlength-models - ''' - - global past_evaluations - cumulative_log = '' - cumulative_log += "Loading the input dataset...\n" - yield cumulative_log - - # Copied from https://github.com/qwopqwop200/GPTQ-for-LLaMa/blob/triton/utils/datautils.py - if input_dataset == 'wikitext': - data = load_dataset('wikitext', 'wikitext-2-raw-v1', split='test') - text = "\n\n".join(data['text']) - elif input_dataset == 'ptb': - data = load_dataset('ptb_text_only', 'penn_treebank', split='validation') - text = "\n\n".join(data['sentence']) - elif input_dataset == 'ptb_new': - data = load_dataset('ptb_text_only', 'penn_treebank', split='test') - text = " ".join(data['sentence']) - else: - with open(Path(f'training/datasets/{input_dataset}.txt'), 'r', encoding='utf-8') as f: - text = f.read() - - for model in models: - if is_in_past_evaluations(model, input_dataset, stride, _max_length): - cumulative_log += f"{model} has already been tested. 
Ignoring.\n" - yield cumulative_log - continue - - if model != 'current model': - try: - yield cumulative_log + f"Loading {model}...\n" - model_settings = get_model_specific_settings(model) - shared.settings.update(model_settings) # hijacking the interface defaults - update_model_parameters(model_settings) # hijacking the command-line arguments - shared.model_name = model - unload_model() - shared.model, shared.tokenizer = load_model(shared.model_name) - except: - cumulative_log += f"Failed to load {model}. Moving on.\n" - yield cumulative_log - continue - - cumulative_log += f"Processing {model}...\n" - yield cumulative_log + "Tokenizing the input dataset...\n" - encodings = encode(text, add_special_tokens=False) - seq_len = encodings.shape[1] - max_length = _max_length or shared.model.config.max_position_embeddings - nlls = [] - prev_end_loc = 0 - for begin_loc in tqdm(range(0, seq_len, stride)): - yield cumulative_log + f"Evaluating... {100*begin_loc/seq_len:.2f}%" - end_loc = min(begin_loc + max_length, seq_len) - trg_len = end_loc - prev_end_loc # may be different from stride on last loop - input_ids = encodings[:, begin_loc:end_loc] - target_ids = input_ids.clone() - target_ids[:, :-trg_len] = -100 - - with torch.no_grad(): - outputs = shared.model(input_ids, labels=target_ids) - - # loss is calculated using CrossEntropyLoss which averages over valid labels - # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels - # to the left by 1. - neg_log_likelihood = outputs.loss - - nlls.append(neg_log_likelihood) - - prev_end_loc = end_loc - if end_loc == seq_len: - break - - ppl = torch.exp(torch.stack(nlls).mean()) - add_entry_to_past_evaluations(float(ppl), shared.model_name, input_dataset, stride, _max_length) - save_past_evaluations(past_evaluations) - cumulative_log += f"Done. 
The perplexity is: {float(ppl)}\n\n"
-        yield cumulative_log
-
-
-def add_entry_to_past_evaluations(perplexity, model, dataset, stride, max_length):
-    global past_evaluations
-    entry = {
-        'Model': model,
-        'LoRAs': ', '.join(shared.lora_names) or '-',
-        'Dataset': dataset,
-        'Perplexity': perplexity,
-        'stride': str(stride),
-        'max_length': str(max_length),
-        'Date': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
-        'Comment': ''
-    }
-    past_evaluations = pd.concat([past_evaluations, pd.DataFrame([entry])], ignore_index=True)
-
-
-def is_in_past_evaluations(model, dataset, stride, max_length):
-    entries = past_evaluations[(past_evaluations['Model'] == model) &
-                               (past_evaluations['Dataset'] == dataset) &
-                               (past_evaluations['max_length'] == str(max_length)) &
-                               (past_evaluations['stride'] == str(stride))]
-
-    if entries.shape[0] > 0:
-        return True
-    else:
-        return False
-
-
-def generate_markdown_table():
-    sorted_df = past_evaluations.sort_values(by=['Dataset', 'stride', 'Perplexity', 'Date'])
-    return sorted_df
diff --git a/spaces/anupam210/Flight_ATA_Class/app.py b/spaces/anupam210/Flight_ATA_Class/app.py
deleted file mode 100644
index 1958200ba2a05f9ce16fef57a52d155f1523ca26..0000000000000000000000000000000000000000
--- a/spaces/anupam210/Flight_ATA_Class/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import os
-import openai
-import gradio as gr
-from azure.cognitiveservices.vision.computervision import ComputerVisionClient
-from msrest.authentication import CognitiveServicesCredentials
-from azure.storage.blob import BlobClient
-# import utils functions
-from preprocessing_images import preprocessing_function
-from extract_text import azure_ocr
-my_container = os.getenv("AZURE_CONTAINER")
-subscription_key = os.getenv("SUB_KEY")
-endpoint = os.getenv("AZURE_ENDPOINT")
-connection_string = os.getenv("AZURE_CON_STRING")
-openai.api_key = os.getenv("OPENAI_API_KEY")
-computervision_client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(subscription_key))
-
-def ocr_pdf(pdf_url):
-    preprocessing_function(pdf_url)
-    my_blob = pdf_url.split('/')[-1]
-    blob = BlobClient.from_connection_string(conn_str=connection_string, container_name= my_container, blob_name=my_blob)
-    with open("answer_paper.pdf", "rb") as data:
-        blob.upload_blob(data,overwrite=True)
-    text = azure_ocr(blob.url,computervision_client)
-    return text.strip()
-
-# def ocr_pdf(pdf_url2):
-#     preprocessing_function(pdf_url2)
-#     my_blob = pdf_url2.split('/')[-1]
-#     blob = BlobClient.from_connection_string(conn_str=connection_string, container_name= my_container, blob_name=my_blob)
-#     with open("answer_paper.pdf", "rb") as data:
-#         blob.upload_blob(data,overwrite=True)
-#     text = azure_ocr(blob.url,computervision_client)
-#     return text.strip()
-
-def classify_cause(incident_description):
-    response = openai.Completion.create(
-        engine="text-davinci-003",
-        prompt= f"Identify the root cause from the below list:\nincident_description:{incident_description}\n",
-        temperature= 0,
-        max_tokens= 50,
-        n=1,
-        stop=None
-        #timeout=15,
-    )
-    classification = response.choices[0].text.strip()
-    return classification
-
-def classify_class(incident_description):
-    response = openai.Completion.create(
-        engine="text-davinci-003",
-        prompt= f"Classify the following incident description into one of the given classes: Aircraft Autopilot Problem, Auxiliary Power Problem, Cabin Pressure Problem, Engine Problem, Fuel System Problem, Avionics Problem, Communications Problem, Electrical System Problem, Smoke 
Problem\nincident_description:{incident_description}\n", - temperature= 0, - max_tokens= 50, - n=1, - stop=None - #timeout=15, - ) - classification = response.choices[0].text.strip() - return classification - - -def avatiation(pdf_url): - pdftext = ocr_pdf(pdf_url) - - - defect_class = classify_class(pdftext) - main_issue = classify_cause(pdftext) - return main_issue, defect_class - - - -inputs1 = gr.inputs.Textbox(label="Link for aviation log reports") -#inputs2 = gr.inputs.Textbox(label="Link for aviation log reports 2") - - -outputs = [gr.outputs.Textbox(label="Main Issue of the log report"), - gr.outputs.Textbox(label="category of the log report") - ] - - -demo = gr.Interface(fn=avatiation,inputs=inputs1,outputs=outputs, title="ATA Auto classification using OCR and GPT3 ") -demo.launch() - diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/datasets/preprocess.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/datasets/preprocess.py deleted file mode 100644 index 0f69b812fa58949eadc78b450114f03b19e5c80c..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/datasets/preprocess.py +++ /dev/null @@ -1,70 +0,0 @@ -import glob -import os -from pathlib import Path - -import numpy as np -from coqpit import Coqpit -from tqdm import tqdm - -from TTS.utils.audio import AudioProcessor - - -def preprocess_wav_files(out_path: str, config: Coqpit, ap: AudioProcessor): - """Process wav and compute mel and quantized wave signal. - It is mainly used by WaveRNN dataloader. - - Args: - out_path (str): Parent folder path to save the files. - config (Coqpit): Model config. - ap (AudioProcessor): Audio processor. - """ - os.makedirs(os.path.join(out_path, "quant"), exist_ok=True) - os.makedirs(os.path.join(out_path, "mel"), exist_ok=True) - wav_files = find_wav_files(config.data_path) - for path in tqdm(wav_files): - wav_name = Path(path).stem - quant_path = os.path.join(out_path, "quant", wav_name + ".npy") - mel_path = os.path.join(out_path, "mel", wav_name + ".npy") - y = ap.load_wav(path) - mel = ap.melspectrogram(y) - np.save(mel_path, mel) - if isinstance(config.mode, int): - quant = ap.mulaw_encode(y, qc=config.mode) if config.model_args.mulaw else ap.quantize(y, bits=config.mode) - np.save(quant_path, quant) - - -def find_wav_files(data_path, file_ext="wav"): - wav_paths = glob.glob(os.path.join(data_path, "**", f"*.{file_ext}"), recursive=True) - return wav_paths - - -def find_feat_files(data_path): - feat_paths = glob.glob(os.path.join(data_path, "**", "*.npy"), recursive=True) - return feat_paths - - -def load_wav_data(data_path, eval_split_size, file_ext="wav"): - wav_paths = find_wav_files(data_path, file_ext=file_ext) - assert len(wav_paths) > 0, f" [!] {data_path} is empty." - np.random.seed(0) - np.random.shuffle(wav_paths) - return wav_paths[:eval_split_size], wav_paths[eval_split_size:] - - -def load_wav_feat_data(data_path, feat_path, eval_split_size): - wav_paths = find_wav_files(data_path) - feat_paths = find_feat_files(feat_path) - - wav_paths.sort(key=lambda x: Path(x).stem) - feat_paths.sort(key=lambda x: Path(x).stem) - - assert len(wav_paths) == len(feat_paths), f" [!] 
{len(wav_paths)} vs {feat_paths}" - for wav, feat in zip(wav_paths, feat_paths): - wav_name = Path(wav).stem - feat_name = Path(feat).stem - assert wav_name == feat_name - - items = list(zip(wav_paths, feat_paths)) - np.random.seed(0) - np.random.shuffle(items) - return items[:eval_split_size], items[eval_split_size:] diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/docker-prepare-start.sh b/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/docker-prepare-start.sh deleted file mode 100644 index a4ce3c6dcca3abced93bd6c80d863061d8d86486..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/bel-alex73/docker-prepare-start.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash -set -x - -cd $( dirname -- "$0"; ) - -cp ../../requirements*.txt docker-prepare/ - -docker build -t tts-learn -f docker-prepare/Dockerfile docker-prepare/ - -mkdir -p ../../../storage -docker run --rm -it \ - -p 2525:2525 \ - --shm-size=256M \ - --name tts-learn-run \ - -v $(pwd)/../../:/a/TTS \ - -v $(pwd)/../../../cv-corpus:/a/cv-corpus \ - -v $(pwd)/../../../fanetyka/:/a/fanetyka/ \ - -v $(pwd)/../../../storage:/storage \ - tts-learn diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_tacotron2_model.py b/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_tacotron2_model.py deleted file mode 100644 index b1bdeb9fd16536efe22c64f2309c46b7bae44e22..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/tts_tests/test_tacotron2_model.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import os -import unittest - -import torch -from torch import nn, optim - -from tests import get_tests_input_path -from TTS.tts.configs.shared_configs import CapacitronVAEConfig, GSTConfig -from TTS.tts.configs.tacotron2_config import Tacotron2Config -from TTS.tts.layers.losses import MSELossMasked -from TTS.tts.models.tacotron2 import Tacotron2 -from TTS.utils.audio import AudioProcessor - -# pylint: disable=unused-variable - -torch.manual_seed(1) -use_cuda = torch.cuda.is_available() -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - -config_global = Tacotron2Config(num_chars=32, num_speakers=5, out_channels=80, decoder_output_dim=80) - -ap = AudioProcessor(**config_global.audio) -WAV_FILE = os.path.join(get_tests_input_path(), "example_1.wav") - - -class TacotronTrainTest(unittest.TestCase): - """Test vanilla Tacotron2 model.""" - - def test_train_step(self): # pylint: disable=no-self-use - config = config_global.copy() - config.use_speaker_embedding = False - config.num_speakers = 1 - - input_dummy = torch.randint(0, 24, (8, 128)).long().to(device) - input_lengths = torch.randint(100, 128, (8,)).long().to(device) - input_lengths = torch.sort(input_lengths, descending=True)[0] - mel_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_postnet_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_lengths = torch.randint(20, 30, (8,)).long().to(device) - mel_lengths[0] = 30 - stop_targets = torch.zeros(8, 30, 1).float().to(device) - - for idx in mel_lengths: - stop_targets[:, int(idx.item()) :, 0] = 1.0 - - stop_targets = stop_targets.view(input_dummy.shape[0], stop_targets.size(1) // config.r, -1) - stop_targets = (stop_targets.sum(2) > 0.0).unsqueeze(2).float().squeeze() - - criterion = MSELossMasked(seq_len_norm=False).to(device) - criterion_st = nn.BCEWithLogitsLoss().to(device) - model = Tacotron2(config).to(device) - model.train() - model_ref = 
copy.deepcopy(model) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - optimizer = optim.Adam(model.parameters(), lr=config.lr) - for i in range(5): - outputs = model.forward(input_dummy, input_lengths, mel_spec, mel_lengths) - assert torch.sigmoid(outputs["stop_tokens"]).data.max() <= 1.0 - assert torch.sigmoid(outputs["stop_tokens"]).data.min() >= 0.0 - optimizer.zero_grad() - loss = criterion(outputs["decoder_outputs"], mel_spec, mel_lengths) - stop_loss = criterion_st(outputs["stop_tokens"], stop_targets) - loss = loss + criterion(outputs["model_outputs"], mel_postnet_spec, mel_lengths) + stop_loss - loss.backward() - optimizer.step() - # check parameter changes - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - # ignore pre-higway layer since it works conditional - # if count not in [145, 59]: - assert (param != param_ref).any(), "param {} with shape {} not updated!! \n{}\n{}".format( - count, param.shape, param, param_ref - ) - count += 1 - - -class MultiSpeakerTacotronTrainTest(unittest.TestCase): - """Test multi-speaker Tacotron2 with speaker embedding layer""" - - @staticmethod - def test_train_step(): - config = config_global.copy() - config.use_speaker_embedding = True - config.num_speakers = 5 - - input_dummy = torch.randint(0, 24, (8, 128)).long().to(device) - input_lengths = torch.randint(100, 128, (8,)).long().to(device) - input_lengths = torch.sort(input_lengths, descending=True)[0] - mel_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_postnet_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_lengths = torch.randint(20, 30, (8,)).long().to(device) - mel_lengths[0] = 30 - stop_targets = torch.zeros(8, 30, 1).float().to(device) - speaker_ids = torch.randint(0, 5, (8,)).long().to(device) - - for idx in mel_lengths: - stop_targets[:, int(idx.item()) :, 0] = 1.0 - - stop_targets = stop_targets.view(input_dummy.shape[0], stop_targets.size(1) // config.r, -1) - stop_targets = (stop_targets.sum(2) > 0.0).unsqueeze(2).float().squeeze() - - criterion = MSELossMasked(seq_len_norm=False).to(device) - criterion_st = nn.BCEWithLogitsLoss().to(device) - config.d_vector_dim = 55 - model = Tacotron2(config).to(device) - model.train() - model_ref = copy.deepcopy(model) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - optimizer = optim.Adam(model.parameters(), lr=config.lr) - for _ in range(5): - outputs = model.forward( - input_dummy, input_lengths, mel_spec, mel_lengths, aux_input={"speaker_ids": speaker_ids} - ) - assert torch.sigmoid(outputs["stop_tokens"]).data.max() <= 1.0 - assert torch.sigmoid(outputs["stop_tokens"]).data.min() >= 0.0 - optimizer.zero_grad() - loss = criterion(outputs["decoder_outputs"], mel_spec, mel_lengths) - stop_loss = criterion_st(outputs["stop_tokens"], stop_targets) - loss = loss + criterion(outputs["model_outputs"], mel_postnet_spec, mel_lengths) + stop_loss - loss.backward() - optimizer.step() - # check parameter changes - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - # ignore pre-higway layer since it works conditional - # if count not in [145, 59]: - assert (param != param_ref).any(), "param {} with shape {} not updated!! 
\n{}\n{}".format( - count, param.shape, param, param_ref - ) - count += 1 - - -class TacotronGSTTrainTest(unittest.TestCase): - """Test multi-speaker Tacotron2 with Global Style Token and Speaker Embedding""" - - # pylint: disable=no-self-use - def test_train_step(self): - # with random gst mel style - config = config_global.copy() - config.use_speaker_embedding = True - config.num_speakers = 10 - config.use_gst = True - config.gst = GSTConfig() - - input_dummy = torch.randint(0, 24, (8, 128)).long().to(device) - input_lengths = torch.randint(100, 128, (8,)).long().to(device) - input_lengths = torch.sort(input_lengths, descending=True)[0] - mel_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_postnet_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_lengths = torch.randint(20, 30, (8,)).long().to(device) - mel_lengths[0] = 30 - stop_targets = torch.zeros(8, 30, 1).float().to(device) - speaker_ids = torch.randint(0, 5, (8,)).long().to(device) - - for idx in mel_lengths: - stop_targets[:, int(idx.item()) :, 0] = 1.0 - - stop_targets = stop_targets.view(input_dummy.shape[0], stop_targets.size(1) // config.r, -1) - stop_targets = (stop_targets.sum(2) > 0.0).unsqueeze(2).float().squeeze() - - criterion = MSELossMasked(seq_len_norm=False).to(device) - criterion_st = nn.BCEWithLogitsLoss().to(device) - config.use_gst = True - config.gst = GSTConfig() - model = Tacotron2(config).to(device) - model.train() - model_ref = copy.deepcopy(model) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - optimizer = optim.Adam(model.parameters(), lr=config.lr) - for i in range(10): - outputs = model.forward( - input_dummy, input_lengths, mel_spec, mel_lengths, aux_input={"speaker_ids": speaker_ids} - ) - assert torch.sigmoid(outputs["stop_tokens"]).data.max() <= 1.0 - assert torch.sigmoid(outputs["stop_tokens"]).data.min() >= 0.0 - optimizer.zero_grad() - loss = criterion(outputs["decoder_outputs"], mel_spec, mel_lengths) - stop_loss = criterion_st(outputs["stop_tokens"], stop_targets) - loss = loss + criterion(outputs["model_outputs"], mel_postnet_spec, mel_lengths) + stop_loss - loss.backward() - optimizer.step() - # check parameter changes - count = 0 - for name_param, param_ref in zip(model.named_parameters(), model_ref.parameters()): - # ignore pre-higway layer since it works conditional - # if count not in [145, 59]: - name, param = name_param - if name == "gst_layer.encoder.recurrence.weight_hh_l0": - # print(param.grad) - continue - assert (param != param_ref).any(), "param {} {} with shape {} not updated!! 
\n{}\n{}".format( - name, count, param.shape, param, param_ref - ) - count += 1 - - # with file gst style - mel_spec = ( - torch.FloatTensor(ap.melspectrogram(ap.load_wav(WAV_FILE)))[:, :30].unsqueeze(0).transpose(1, 2).to(device) - ) - mel_spec = mel_spec.repeat(8, 1, 1) - input_dummy = torch.randint(0, 24, (8, 128)).long().to(device) - input_lengths = torch.randint(100, 128, (8,)).long().to(device) - input_lengths = torch.sort(input_lengths, descending=True)[0] - mel_postnet_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_lengths = torch.randint(20, 30, (8,)).long().to(device) - mel_lengths[0] = 30 - stop_targets = torch.zeros(8, 30, 1).float().to(device) - speaker_ids = torch.randint(0, 5, (8,)).long().to(device) - - for idx in mel_lengths: - stop_targets[:, int(idx.item()) :, 0] = 1.0 - - stop_targets = stop_targets.view(input_dummy.shape[0], stop_targets.size(1) // config.r, -1) - stop_targets = (stop_targets.sum(2) > 0.0).unsqueeze(2).float().squeeze() - - criterion = MSELossMasked(seq_len_norm=False).to(device) - criterion_st = nn.BCEWithLogitsLoss().to(device) - model = Tacotron2(config).to(device) - model.train() - model_ref = copy.deepcopy(model) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - optimizer = optim.Adam(model.parameters(), lr=config.lr) - for i in range(10): - outputs = model.forward( - input_dummy, input_lengths, mel_spec, mel_lengths, aux_input={"speaker_ids": speaker_ids} - ) - assert torch.sigmoid(outputs["stop_tokens"]).data.max() <= 1.0 - assert torch.sigmoid(outputs["stop_tokens"]).data.min() >= 0.0 - optimizer.zero_grad() - loss = criterion(outputs["decoder_outputs"], mel_spec, mel_lengths) - stop_loss = criterion_st(outputs["stop_tokens"], stop_targets) - loss = loss + criterion(outputs["model_outputs"], mel_postnet_spec, mel_lengths) + stop_loss - loss.backward() - optimizer.step() - # check parameter changes - count = 0 - for name_param, param_ref in zip(model.named_parameters(), model_ref.parameters()): - # ignore pre-higway layer since it works conditional - # if count not in [145, 59]: - name, param = name_param - if name == "gst_layer.encoder.recurrence.weight_hh_l0": - # print(param.grad) - continue - assert (param != param_ref).any(), "param {} {} with shape {} not updated!! 
\n{}\n{}".format( - name, count, param.shape, param, param_ref - ) - count += 1 - - -class TacotronCapacitronTrainTest(unittest.TestCase): - @staticmethod - def test_train_step(): - config = Tacotron2Config( - num_chars=32, - num_speakers=10, - use_speaker_embedding=True, - out_channels=80, - decoder_output_dim=80, - use_capacitron_vae=True, - capacitron_vae=CapacitronVAEConfig(), - optimizer="CapacitronOptimizer", - optimizer_params={ - "RAdam": {"betas": [0.9, 0.998], "weight_decay": 1e-6}, - "SGD": {"lr": 1e-5, "momentum": 0.9}, - }, - ) - - batch = dict({}) - batch["text_input"] = torch.randint(0, 24, (8, 128)).long().to(device) - batch["text_lengths"] = torch.randint(100, 129, (8,)).long().to(device) - batch["text_lengths"] = torch.sort(batch["text_lengths"], descending=True)[0] - batch["text_lengths"][0] = 128 - batch["mel_input"] = torch.rand(8, 120, config.audio["num_mels"]).to(device) - batch["mel_lengths"] = torch.randint(20, 120, (8,)).long().to(device) - batch["mel_lengths"] = torch.sort(batch["mel_lengths"], descending=True)[0] - batch["mel_lengths"][0] = 120 - batch["stop_targets"] = torch.zeros(8, 120, 1).float().to(device) - batch["stop_target_lengths"] = torch.randint(0, 120, (8,)).to(device) - batch["speaker_ids"] = torch.randint(0, 5, (8,)).long().to(device) - batch["d_vectors"] = None - - for idx in batch["mel_lengths"]: - batch["stop_targets"][:, int(idx.item()) :, 0] = 1.0 - - batch["stop_targets"] = batch["stop_targets"].view( - batch["text_input"].shape[0], batch["stop_targets"].size(1) // config.r, -1 - ) - batch["stop_targets"] = (batch["stop_targets"].sum(2) > 0.0).unsqueeze(2).float().squeeze() - - model = Tacotron2(config).to(device) - criterion = model.get_criterion().to(device) - optimizer = model.get_optimizer() - - model.train() - model_ref = copy.deepcopy(model) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - for _ in range(10): - _, loss_dict = model.train_step(batch, criterion) - optimizer.zero_grad() - loss_dict["capacitron_vae_beta_loss"].backward() - optimizer.first_step() - loss_dict["loss"].backward() - optimizer.step() - # check parameter changes - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - # ignore pre-higway layer since it works conditional - assert (param != param_ref).any(), "param {} with shape {} not updated!! 
\n{}\n{}".format( - count, param.shape, param, param_ref - ) - count += 1 - - -class SCGSTMultiSpeakeTacotronTrainTest(unittest.TestCase): - """Test multi-speaker Tacotron2 with Global Style Tokens and d-vector inputs.""" - - @staticmethod - def test_train_step(): - config = config_global.copy() - config.use_d_vector_file = True - - config.use_gst = True - config.gst = GSTConfig() - - input_dummy = torch.randint(0, 24, (8, 128)).long().to(device) - input_lengths = torch.randint(100, 128, (8,)).long().to(device) - input_lengths = torch.sort(input_lengths, descending=True)[0] - mel_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_postnet_spec = torch.rand(8, 30, config.audio["num_mels"]).to(device) - mel_lengths = torch.randint(20, 30, (8,)).long().to(device) - mel_lengths[0] = 30 - stop_targets = torch.zeros(8, 30, 1).float().to(device) - speaker_embeddings = torch.rand(8, 55).to(device) - - for idx in mel_lengths: - stop_targets[:, int(idx.item()) :, 0] = 1.0 - - stop_targets = stop_targets.view(input_dummy.shape[0], stop_targets.size(1) // config.r, -1) - stop_targets = (stop_targets.sum(2) > 0.0).unsqueeze(2).float().squeeze() - criterion = MSELossMasked(seq_len_norm=False).to(device) - criterion_st = nn.BCEWithLogitsLoss().to(device) - config.d_vector_dim = 55 - model = Tacotron2(config).to(device) - model.train() - model_ref = copy.deepcopy(model) - count = 0 - for param, param_ref in zip(model.parameters(), model_ref.parameters()): - assert (param - param_ref).sum() == 0, param - count += 1 - optimizer = optim.Adam(model.parameters(), lr=config.lr) - for i in range(5): - outputs = model.forward( - input_dummy, input_lengths, mel_spec, mel_lengths, aux_input={"d_vectors": speaker_embeddings} - ) - assert torch.sigmoid(outputs["stop_tokens"]).data.max() <= 1.0 - assert torch.sigmoid(outputs["stop_tokens"]).data.min() >= 0.0 - optimizer.zero_grad() - loss = criterion(outputs["decoder_outputs"], mel_spec, mel_lengths) - stop_loss = criterion_st(outputs["stop_tokens"], stop_targets) - loss = loss + criterion(outputs["model_outputs"], mel_postnet_spec, mel_lengths) + stop_loss - loss.backward() - optimizer.step() - # check parameter changes - count = 0 - for name_param, param_ref in zip(model.named_parameters(), model_ref.parameters()): - # ignore pre-higway layer since it works conditional - # if count not in [145, 59]: - name, param = name_param - if name == "gst_layer.encoder.recurrence.weight_hh_l0": - continue - assert (param != param_ref).any(), "param {} with shape {} not updated!! \n{}\n{}".format( - count, param.shape, param, param_ref - ) - count += 1 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/huffman_coder.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/huffman_coder.py deleted file mode 100644 index c04f84564e6a22209439c67fed3cac31f010c6e9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/huffman/huffman_coder.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import re -import typing as tp -from collections import Counter, deque -from dataclasses import dataclass - -from bitarray import bitarray, util -from fairseq.data import Dictionary - -# basically we have to write to addressable bytes for the memory mapped -# dataset loader. 
Sentences that get encoded to a length that is not a -# multiple of BLOCKSIZE (a byte) will be padded to fit. (see _pad in the coder) -BLOCKSIZE = 8 - - -class HuffmanCoder: - def __init__( - self, root: "HuffmanNode", bos="", pad="", eos="", unk="" - ): - self.root = root - self.table = root.code_table() - self.bos_word, self.unk_word, self.pad_word, self.eos_word = bos, unk, pad, eos - - def _pad(self, a: bitarray) -> bitarray: - """ - bitpadding, 1 then 0. - - If the array is already a multiple of blocksize, we add a full block. - """ - pad_len = BLOCKSIZE - (len(a) % BLOCKSIZE) - 1 - padding = bitarray("1" + "0" * pad_len) - return a + padding - - def _unpad(self, a: bitarray) -> bitarray: - """ - remove the bitpadding. - - There will be a set of 0s preceded by a 1 at the end of the bitarray, we remove that - """ - # count the 0 padding at the end until we find the first 1 - # we want to remove the one too - remove_cnt = util.rindex(a, 1) - return a[:remove_cnt] - - def encode(self, iter: tp.List[str]) -> bytes: - """ - encode a list of tokens a return bytes. We use bitpadding to make sure the encoded bits fit in bytes. - """ - a = bitarray() - for token in iter: - code = self.get_code(token) - if code is None: - if self.unk_word is None: - raise Exception(f"unknown token {token} cannot be encoded.") - else: - token = self.unk_word - a = a + self.get_code(token) - return self._pad(a).tobytes() - - def decode(self, bits: bytes) -> tp.Iterator["HuffmanNode"]: - """ - take bitpadded bytes and decode it to a set of leaves. You can then use each node to find the symbol/id - """ - a = bitarray() - a.frombytes(bits) - return self.root.decode(self._unpad(a)) - - def get_code(self, symbol: str) -> tp.Optional[bitarray]: - node = self.get_node(symbol) - return None if node is None else node.code - - def get_node(self, symbol: str) -> "HuffmanNode": - return self.table.get(symbol) - - @classmethod - def from_file( - cls, - filename: str, - bos="", - pad="", - eos="", - unk="", - ) -> "HuffmanCoder": - builder = HuffmanCodeBuilder.from_file(filename) - return builder.build_code(bos=bos, pad=pad, eos=eos, unk=unk) - - def to_file(self, filename, sep="\t"): - nodes = list(self.table.values()) - nodes.sort(key=lambda n: n.id) - with open(filename, "w", encoding="utf-8") as output: - for n in nodes: - output.write(f"{n.symbol}{sep}{n.count}\n") - - def __iter__(self): - for n in self.table.values(): - yield n - - def merge(self, other_coder: "HuffmanCoder") -> "HuffmanCoder": - builder = HuffmanCodeBuilder() - for n in self: - builder.increment(n.symbol, n.count) - for n in other_coder: - builder.increment(n.symbol, n.count) - return builder.build_code() - - def __eq__(self, other: "HuffmanCoder") -> bool: - return self.table == other.table - - def __len__(self) -> int: - return len(self.table) - - def __contains__(self, sym: str) -> bool: - return sym in self.table - - def to_dictionary(self) -> Dictionary: - dictionary = Dictionary(bos=self.bos, unk=self.unk, pad=self.pad, eos=self.eos) - for n in self: - dictionary.add_symbol(n.symbol, n=n.count) - dictionary.finalize() - return dictionary - - -@dataclass -class HuffmanNode: - """ - a node in a Huffman tree - """ - - id: int - count: int - symbol: tp.Optional[str] = None - left: tp.Optional["HuffmanNode"] = None - right: tp.Optional["HuffmanNode"] = None - code: tp.Optional[bitarray] = None - - def is_leaf(self) -> bool: - return self.left is None and self.right is None - - def code_table( - self, prefix: tp.Optional[bitarray] = None - ) -> 
tp.Dict[str, "HuffmanNode"]: - defaulted_prefix = prefix if prefix is not None else bitarray() - if self.is_leaf(): - self.code = ( - defaulted_prefix if len(defaulted_prefix) > 0 else bitarray("0") - ) # leaf could be the root if there is only one symbol - return {self.symbol: self} - - codes_right = self.right.code_table(defaulted_prefix + bitarray([0])) - codes_left = self.left.code_table(defaulted_prefix + bitarray([1])) - return {**codes_left, **codes_right} - - def decode(self, bits: bitarray) -> tp.Iterator["HuffmanNode"]: - current_node = self - for bit in bits: - if bit == 0: # go right - current_node = current_node.right - else: # go left - current_node = current_node.left - if current_node is None: - # we shouldn't be on a leaf here - raise Exception("fell off a leaf") - if current_node.is_leaf(): - yield current_node - current_node = self - if current_node != self: - raise Exception("couldn't decode all the bits") - - -class HuffmanCodeBuilder: - """ - build a dictionary with occurence count and then build the Huffman code for it. - """ - - def __init__(self): - self.symbols = Counter() - - def add_symbols(self, *syms) -> None: - self.symbols.update(syms) - - def increment(self, symbol: str, cnt: int) -> None: - self.symbols[symbol] += cnt - - @classmethod - def from_file(cls, filename): - c = cls() - with open(filename, "r", encoding="utf-8") as input: - for line in input: - split = re.split(r"[\s]+", line) - c.increment(split[0], int(split[1])) - return c - - def to_file(self, filename, sep="\t"): - with open(filename, "w", encoding="utf-8") as output: - for (tok, cnt) in self.symbols.most_common(): - output.write(f"{tok}{sep}{cnt}\n") - - def _smallest(self, q1: deque, q2: deque) -> HuffmanNode: - if len(q1) == 0: - return q2.pop() - - if len(q2) == 0: - return q1.pop() - - if q1[-1].count < q2[-1].count: - return q1.pop() - - return q2.pop() - - def __add__(self, c: "HuffmanCodeBuilder") -> "HuffmanCodeBuilder": - new_c = self.symbols + c.symbols - new_b = HuffmanCodeBuilder() - new_b.symbols = new_c - return new_b - - def build_code( - self, - bos="", - pad="", - eos="", - unk="", - ) -> HuffmanCoder: - assert len(self.symbols) > 0, "cannot build code from empty list of symbols" - - if self.symbols[bos] == 0: - self.add_symbols(bos) - if self.symbols[pad] == 0: - self.add_symbols(pad) - if self.symbols[eos] == 0: - self.add_symbols(eos) - if self.symbols[unk] == 0: - self.add_symbols(unk) - - node_id = 0 - leaves_queue = deque( - [ - HuffmanNode(symbol=symbol, count=count, id=idx) - for idx, (symbol, count) in enumerate(self.symbols.most_common()) - ] - ) # left are the most common, right are the least common - - if len(leaves_queue) == 1: - root = leaves_queue.pop() - root.id = 0 - return HuffmanCoder(root) - - nodes_queue = deque() - - while len(leaves_queue) > 0 or len(nodes_queue) != 1: - # get the lowest two nodes at the head of each queue - node1 = self._smallest(leaves_queue, nodes_queue) - node2 = self._smallest(leaves_queue, nodes_queue) - - # add new node - nodes_queue.appendleft( - HuffmanNode( - count=node1.count + node2.count, left=node1, right=node2, id=node_id - ) - ) - node_id += 1 - - # we are left with the root - return HuffmanCoder(nodes_queue.pop(), bos=bos, pad=pad, eos=eos, unk=unk) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/transform_eos_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/transform_eos_dataset.py deleted file mode 100644 index 
fb14ff018edf13b20f5d0e486692dfb0a37ec6d1..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/transform_eos_dataset.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import FairseqDataset - - -class TransformEosDataset(FairseqDataset): - """A :class:`~fairseq.data.FairseqDataset` wrapper that appends/prepends/strips EOS. - - Note that the transformation is applied in :func:`collater`. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset to wrap - eos (int): index of the end-of-sentence symbol - append_eos_to_src (bool, optional): append EOS to the end of src - remove_eos_from_src (bool, optional): remove EOS from the end of src - append_eos_to_tgt (bool, optional): append EOS to the end of tgt - remove_eos_from_tgt (bool, optional): remove EOS from the end of tgt - """ - - def __init__( - self, - dataset, - eos, - append_eos_to_src=False, - remove_eos_from_src=False, - append_eos_to_tgt=False, - remove_eos_from_tgt=False, - has_target=True, - ): - if not isinstance(dataset, FairseqDataset): - raise ValueError("dataset must be an instance of FairseqDataset") - if append_eos_to_src and remove_eos_from_src: - raise ValueError("cannot combine append_eos_to_src and remove_eos_from_src") - if append_eos_to_tgt and remove_eos_from_tgt: - raise ValueError("cannot combine append_eos_to_tgt and remove_eos_from_tgt") - - self.dataset = dataset - self.eos = torch.LongTensor([eos]) - self.append_eos_to_src = append_eos_to_src - self.remove_eos_from_src = remove_eos_from_src - self.append_eos_to_tgt = append_eos_to_tgt - self.remove_eos_from_tgt = remove_eos_from_tgt - self.has_target = has_target - - # precompute how we should adjust the reported sizes - self._src_delta = 0 - self._src_delta += 1 if append_eos_to_src else 0 - self._src_delta -= 1 if remove_eos_from_src else 0 - self._tgt_delta = 0 - self._tgt_delta += 1 if append_eos_to_tgt else 0 - self._tgt_delta -= 1 if remove_eos_from_tgt else 0 - - self._checked_src = False - self._checked_tgt = False - - def _check_src(self, src, expect_eos): - if not self._checked_src: - assert (src[-1] == self.eos[0]) == expect_eos - self._checked_src = True - - def _check_tgt(self, tgt, expect_eos): - if self.has_target and not self._checked_tgt: - assert (tgt[-1] == self.eos[0]) == expect_eos - self._checked_tgt = True - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - def transform(item): - if self.append_eos_to_src: - self.eos = self.eos.to(device=item["source"].device) - self._check_src(item["source"], expect_eos=False) - item["source"] = torch.cat([item["source"], self.eos]) - if self.remove_eos_from_src: - self.eos = self.eos.to(device=item["source"].device) - self._check_src(item["source"], expect_eos=True) - item["source"] = item["source"][:-1] - if self.append_eos_to_tgt: - self.eos = self.eos.to(device=item["target"].device) - self._check_tgt(item["target"], expect_eos=False) - item["target"] = torch.cat([item["target"], self.eos]) - if self.remove_eos_from_tgt: - self.eos = self.eos.to(device=item["target"].device) - self._check_tgt(item["target"], expect_eos=True) - item["target"] = item["target"][:-1] - return item - - samples = list(map(transform, samples)) - return 
self.dataset.collater(samples) - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - if self.has_target: - src_len, tgt_len = self.dataset.size(index) - return (src_len + self._src_delta, tgt_len + self._tgt_delta) - else: - return self.dataset.size(index) - - def ordered_indices(self): - # NOTE: we assume that the ordering does not change based on the - # addition or removal of eos - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/spaces/aryadytm/photo-colorization/src/deoldify/_device.py b/spaces/aryadytm/photo-colorization/src/deoldify/_device.py deleted file mode 100644 index ed40ce131e3375a937c862fafa44e432f825f93b..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/photo-colorization/src/deoldify/_device.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -from enum import Enum -from .device_id import DeviceId - -#NOTE: This must be called first before any torch imports in order to work properly! - -class DeviceException(Exception): - pass - -class _Device: - def __init__(self): - self.set(DeviceId.CPU) - - def is_gpu(self): - ''' Returns `True` if the current device is GPU, `False` otherwise. ''' - return self.current() is not DeviceId.CPU - - def current(self): - return self._current_device - - def set(self, device:DeviceId): - if device == DeviceId.CPU: - os.environ['CUDA_VISIBLE_DEVICES']='' - else: - os.environ['CUDA_VISIBLE_DEVICES']=str(device.value) - import torch - torch.backends.cudnn.benchmark=False - - self._current_device = device - return device \ No newline at end of file diff --git a/spaces/atimughal662/InfoFusion/src/gradio_utils/css.py b/spaces/atimughal662/InfoFusion/src/gradio_utils/css.py deleted file mode 100644 index 6f3d0dd56bfd4287034afd0b23751e3abd59a143..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/src/gradio_utils/css.py +++ /dev/null @@ -1,148 +0,0 @@ -def get_css(kwargs) -> str: - if kwargs['h2ocolors']: - css_code = """footer {visibility: hidden;} - body{background:linear-gradient(#f5f5f5,#e5e5e5);} - body.dark{background:linear-gradient(#000000,#0d0d0d);} - """ - else: - css_code = """footer {visibility: hidden}""" - - css_code += make_css_base() - return css_code - - -def make_css_base() -> str: - return """ - #col_container {margin-left: auto; margin-right: auto; text-align: left;} - - @import url('https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@400;600&display=swap'); - - body.dark{#warning {background-color: #555555};} - - #sidebar { - order: 1; - - @media (max-width: 463px) { - order: 2; - } - } - - #col-tabs { - order: 2; - - @media (max-width: 463px) { - order: 1; - } - } - - #small_btn { - margin: 0.6em 0em 0.55em 0; - max-width: 20em; - min-width: 5em !important; - height: 5em; - font-size: 14px !important; - } - - #prompt-form { - border: 1px solid var(--primary-500) !important; - } - - #prompt-form.block { - border-radius: var(--block-radius) !important; - } - - #prompt-form textarea { - border: 1px solid rgb(209, 213, 219); - } - - #prompt-form label > div { - margin-top: 4px; - } - - button.primary:hover { - background-color: var(--primary-600) !important; - transition: .2s; - } - - #prompt-form-area { - margin-bottom: 2.5rem; - } - .chatsmall chatbot {font-size: 10px !important} - - .gradio-container { - max-width: none !important; - } - - div.message { - padding: 
var(--text-lg) !important; - } - - div.message.user > div.icon-button { - top: unset; - bottom: 0; - } - - div.message.bot > div.icon-button { - top: unset; - bottom: 0; - } - - #prompt-form-row { - position: relative; - } - - #attach-button { - position: absolute; - top: 45px; - right: 20px; - - display: flex; - justify-content: center; - border: 1px solid var(--primary-500) !important; - - @media (max-width: 463px) { - width: 56px; - } - } - - #attach-button > img { - margin-right: 0; - } - - #prompt-form > label > textarea { - padding-right: 104px; - - @media (max-width: 463px) { - min-height: 94px; - padding-right: 70px; - } - } - - #visible-models > label > div.wrap > div.wrap-inner > div.secondary-wrap > div.remove-all { - display: none !important; - } - - #visible-models > label > div.wrap > div.wrap-inner > div.token { - display: none !important; - } - - #visible-models > label > div.wrap > div.wrap-inner > div.secondary-wrap::before { - content: "Select"; - padding: 0 4px; - margin-right: 2px; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.secondary-wrap > div.remove-all { - display: none !important; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.token { - display: none !important; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.secondary-wrap::before { - content: "Select"; - padding: 0 4px; - margin-right: 2px; - } - """ diff --git a/spaces/awacke1/AIOutline/README.md b/spaces/awacke1/AIOutline/README.md deleted file mode 100644 index d69e286c53eb63236b0611c933dc0e193f5b95ee..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AIOutline/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🛍️🧠AIMind🤖📊 -emoji: 🌐🧠🤖 -colorFrom: yellow -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/AnimatedGifGallery/app.py b/spaces/awacke1/AnimatedGifGallery/app.py deleted file mode 100644 index ab7cb9583171b765412463f9c8d16b14f2a25d59..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AnimatedGifGallery/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import streamlit as st -import os -import random - -def get_gifs(directory): - return [f for f in os.listdir(directory) if f.endswith('.gif')] - -def showAnimatedGif(gif): - import streamlit as st - import base64 - #st.markdown("![Alt Text](https://media.giphy.com/media/vFKqnCdLPNOKc/giphy.gif)") - st.write('Loading: ' + gif) - file_ = open(gif, "rb") - contents = file_.read() - data_url = base64.b64encode(contents).decode("utf-8") - file_.close() - st.write(data_url) - - st.markdown( - f'gif', - unsafe_allow_html=True, - ) - -def main(): - st.title('Animated GIFs in Streamlit') - - directory = './gifs' # Replace with your directory of GIFs - gif_files = get_gifs(directory) - - num_rows = len(gif_files) // 3 - if len(gif_files) % 3: - num_rows += 1 - - cols = [st.columns(3) for _ in range(num_rows)] - - for i in range(num_rows): - for j in range(3): - idx = i*3 + j - if idx < len(gif_files): - #showAnimatedGif(os.path.join(directory, gif_files[idx])) - cols[i][j].image(os.path.join(directory, gif_files[idx]), width=200) - - if st.button('Randomize'): - random.shuffle(gif_files) - for i in range(num_rows): - for j in range(3): - idx = i*3 + j - if idx < len(gif_files): - cols[i][j].image(os.path.join(directory, gif_files[idx]), width=200) - -if __name__ == "__main__": - main() diff --git 
a/spaces/awacke1/AnimationAI/app.py b/spaces/awacke1/AnimationAI/app.py deleted file mode 100644 index c00a82220989cabfe090136e5eb0b3f05b760dd0..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AnimationAI/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import requests -import streamlit as st -from streamlit_lottie import st_lottie - -def load_lottie_url(url: str): - r = requests.get(url) - if r.status_code != 200: - return None - return r.json() - -def ShowAnimation(name, URL): - anim=load_lottie_url(URL) - st_lottie(anim, key = name) - -st.markdown('# Animations: https://lottiefiles.com/recent') -st.markdown("# Animate with JSON, SVG, Adobe XD, Figma, and deploy to web, mobile as tiny animation files ") - -# to Use Lottie in HTML (gradio or HTML5) use the code below in HTML -# -# - -ShowAnimation("Badge1","https://assets5.lottiefiles.com/packages/lf20_wtohqzml.json") -ShowAnimation("Badge2","https://assets5.lottiefiles.com/packages/lf20_i4zw2ddg.json") -ShowAnimation("Badge3","https://assets5.lottiefiles.com/private_files/lf30_jfhmdmk5.json") -ShowAnimation("Graph","https://assets6.lottiefiles.com/packages/lf20_4gqhiayj.json") -ShowAnimation("PhoneBot","https://assets9.lottiefiles.com/packages/lf20_zrqthn6o.json") -ShowAnimation("SupportBot","https://assets5.lottiefiles.com/private_files/lf30_cmd8kh2q.json") -ShowAnimation("ChatBot","https://assets8.lottiefiles.com/packages/lf20_j1oeaifz.json") -ShowAnimation("IntelligentMachine","https://assets8.lottiefiles.com/packages/lf20_edouagsj.json") -ShowAnimation("GearAI","https://assets10.lottiefiles.com/packages/lf20_3jkp7dqt.json") -ShowAnimation("ContextGraph","https://assets10.lottiefiles.com/private_files/lf30_vwC61X.json") -ShowAnimation("Yggdrasil","https://assets4.lottiefiles.com/packages/lf20_8q1bhU.json") -ShowAnimation("Studying","https://assets9.lottiefiles.com/packages/lf20_6ft9bypa.json") diff --git a/spaces/awacke1/Azure.Streamlit.Github.Actions.Azure.Container.Registry.Docker.AKS/app.py b/spaces/awacke1/Azure.Streamlit.Github.Actions.Azure.Container.Registry.Docker.AKS/app.py deleted file mode 100644 index b96300db8d40d0d84a5a4ea53192dbd1eef13799..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Azure.Streamlit.Github.Actions.Azure.Container.Registry.Docker.AKS/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st -from collections import Counter -import plotly.express as px -import numpy as np - -def get_word_score(word): - # This function returns a score based on the length of the word - # Modify this function as per your requirements - score = len(word)**2 - return score - -def get_word_frequency(text): - # This function returns the word frequency of the given text - words = text.split() - word_frequency = Counter(words) - return word_frequency - -# Load the markdown file -with open('Setup.md', 'r') as file: - text = file.read() - - -# Display the parsed markdown -st.markdown(text, unsafe_allow_html=True) - -# Get the word frequency of the markdown text -word_frequency = get_word_frequency(text) - -# Get the top words and their frequency -top_words = word_frequency.most_common(10) -top_words_dict = dict(top_words) - -# Create a Plotly bar chart to display the top words and their frequency -fig = px.bar(x=list(top_words_dict.keys()), y=list(top_words_dict.values()), labels={'x':'Word', 'y':'Frequency'}) -st.plotly_chart(fig) - -# Calculate the scores for each word based on their length -word_scores = {word:get_word_score(word) for word in word_frequency} -top_word_scores = 
dict(sorted(word_scores.items(), key=lambda item: item[1], reverse=True)[:10]) - -# Create a Plotly bar chart to display the top words and their scores -fig = px.bar(x=list(top_word_scores.keys()), y=list(top_word_scores.values()), labels={'x':'Word', 'y':'Score'}) -st.plotly_chart(fig) diff --git a/spaces/awacke1/MultiPDF-QA-ChatGPT-Langchain/app.py b/spaces/awacke1/MultiPDF-QA-ChatGPT-Langchain/app.py deleted file mode 100644 index 931bb3d60fd2d318ecccca2bc73b2973b8a935f1..0000000000000000000000000000000000000000 --- a/spaces/awacke1/MultiPDF-QA-ChatGPT-Langchain/app.py +++ /dev/null @@ -1,64 +0,0 @@ -import os -import streamlit as st -from dotenv import load_dotenv -from PyPDF2 import PdfReader -from langchain.text_splitter import CharacterTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chat_models import ChatOpenAI -from langchain.memory import ConversationBufferMemory -from langchain.chains import ConversationalRetrievalChain -from htmlTemplates import css, bot_template, user_template - -def extract_text_from_pdfs(pdf_docs): - text = "" - for pdf in pdf_docs: - pdf_reader = PdfReader(pdf) - for page in pdf_reader.pages: - text += page.extract_text() - return text - -def split_text_into_chunks(text): - text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len) - return text_splitter.split_text(text) - -def create_vector_store_from_text_chunks(text_chunks): - key = os.getenv('OPENAI_KEY') - embeddings = OpenAIEmbeddings(openai_api_key=key) - return FAISS.from_texts(texts=text_chunks, embedding=embeddings) - -def create_conversation_chain(vectorstore): - llm = ChatOpenAI() - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory) - -def process_user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - - for i, message in enumerate(st.session_state.chat_history): - template = user_template if i % 2 == 0 else bot_template - st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True) - -def main(): - load_dotenv() - st.set_page_config(page_title="Chat with multiple PDFs", page_icon=":books:") - st.write(css, unsafe_allow_html=True) - - st.header("Chat with multiple PDFs :books:") - user_question = st.text_input("Ask a question about your documents:") - if user_question: - process_user_input(user_question) - - with st.sidebar: - st.subheader("Your documents") - pdf_docs = st.file_uploader("Upload your PDFs here and click on 'Process'", accept_multiple_files=True) - if st.button("Process"): - with st.spinner("Processing"): - raw_text = extract_text_from_pdfs(pdf_docs) - text_chunks = split_text_into_chunks(raw_text) - vectorstore = create_vector_store_from_text_chunks(text_chunks) - st.session_state.conversation = create_conversation_chain(vectorstore) - -if __name__ == '__main__': - main() diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/README.md b/spaces/azusarang/so-vits-svc-models-ba_P/README.md deleted file mode 100644 index f3b56f298db18efbf65a293c2b124d155847de50..0000000000000000000000000000000000000000 --- a/spaces/azusarang/so-vits-svc-models-ba_P/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: So Vits Svc Models Ba -emoji: 🦀 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 
-app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: FrankZxShen/so-vits-svc-models-ba ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/MD2Character.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/MD2Character.js deleted file mode 100644 index 2501dc98176f345e75558798a66aef722c9dbdf1..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/MD2Character.js +++ /dev/null @@ -1,261 +0,0 @@ -/** - * @author alteredq / http://alteredqualia.com/ - */ - -THREE.MD2Character = function () { - - var scope = this; - - this.scale = 1; - this.animationFPS = 6; - - this.root = new THREE.Object3D(); - - this.meshBody = null; - this.meshWeapon = null; - - this.skinsBody = []; - this.skinsWeapon = []; - - this.weapons = []; - - this.activeAnimation = null; - - this.mixer = null; - - this.onLoadComplete = function () {}; - - this.loadCounter = 0; - - this.loadParts = function ( config ) { - - this.loadCounter = config.weapons.length * 2 + config.skins.length + 1; - - var weaponsTextures = []; - for ( var i = 0; i < config.weapons.length; i ++ ) weaponsTextures[ i ] = config.weapons[ i ][ 1 ]; - // SKINS - - this.skinsBody = loadTextures( config.baseUrl + "skins/", config.skins ); - this.skinsWeapon = loadTextures( config.baseUrl + "skins/", weaponsTextures ); - - // BODY - - var loader = new THREE.MD2Loader(); - - loader.load( config.baseUrl + config.body, function ( geo ) { - - var boundingBox = new THREE.Box3(); - boundingBox.setFromBufferAttribute( geo.attributes.position ); - - scope.root.position.y = - scope.scale * boundingBox.min.y; - - var mesh = createPart( geo, scope.skinsBody[ 0 ] ); - mesh.scale.set( scope.scale, scope.scale, scope.scale ); - - scope.root.add( mesh ); - - scope.meshBody = mesh; - - scope.meshBody.clipOffset = 0; - scope.activeAnimationClipName = mesh.geometry.animations[ 0 ].name; - - scope.mixer = new THREE.AnimationMixer( mesh ); - - checkLoadingComplete(); - - } ); - - // WEAPONS - - var generateCallback = function ( index, name ) { - - return function ( geo ) { - - var mesh = createPart( geo, scope.skinsWeapon[ index ] ); - mesh.scale.set( scope.scale, scope.scale, scope.scale ); - mesh.visible = false; - - mesh.name = name; - - scope.root.add( mesh ); - - scope.weapons[ index ] = mesh; - scope.meshWeapon = mesh; - - checkLoadingComplete(); - - }; - - }; - - for ( var i = 0; i < config.weapons.length; i ++ ) { - - loader.load( config.baseUrl + config.weapons[ i ][ 0 ], generateCallback( i, config.weapons[ i ][ 0 ] ) ); - - } - - }; - - this.setPlaybackRate = function ( rate ) { - - if ( rate !== 0 ) { - - this.mixer.timeScale = 1 / rate; - - } else { - - this.mixer.timeScale = 0; - - } - - }; - - this.setWireframe = function ( wireframeEnabled ) { - - if ( wireframeEnabled ) { - - if ( this.meshBody ) this.meshBody.material = this.meshBody.materialWireframe; - if ( this.meshWeapon ) this.meshWeapon.material = this.meshWeapon.materialWireframe; - - } else { - - if ( this.meshBody ) this.meshBody.material = this.meshBody.materialTexture; - if ( this.meshWeapon ) this.meshWeapon.material = this.meshWeapon.materialTexture; - - } - - }; - - this.setSkin = function ( index ) { - - if ( this.meshBody && this.meshBody.material.wireframe === false ) { - - this.meshBody.material.map = this.skinsBody[ index ]; - - } - - }; - - this.setWeapon = function ( index ) { - - for ( 
var i = 0; i < this.weapons.length; i ++ ) this.weapons[ i ].visible = false; - - var activeWeapon = this.weapons[ index ]; - - if ( activeWeapon ) { - - activeWeapon.visible = true; - this.meshWeapon = activeWeapon; - - scope.syncWeaponAnimation(); - - } - - }; - - this.setAnimation = function ( clipName ) { - - if ( this.meshBody ) { - - if ( this.meshBody.activeAction ) { - - this.meshBody.activeAction.stop(); - this.meshBody.activeAction = null; - - } - - var action = this.mixer.clipAction( clipName, this.meshBody ); - - if ( action ) { - - this.meshBody.activeAction = action.play(); - - } - - } - - scope.activeClipName = clipName; - - scope.syncWeaponAnimation(); - - }; - - this.syncWeaponAnimation = function () { - - var clipName = scope.activeClipName; - - if ( scope.meshWeapon ) { - - if ( this.meshWeapon.activeAction ) { - - this.meshWeapon.activeAction.stop(); - this.meshWeapon.activeAction = null; - - } - - var action = this.mixer.clipAction( clipName, this.meshWeapon ); - - if ( action ) { - - this.meshWeapon.activeAction = action.syncWith( this.meshBody.activeAction ).play(); - - } - - } - - }; - - this.update = function ( delta ) { - - if ( this.mixer ) this.mixer.update( delta ); - - }; - - function loadTextures( baseUrl, textureUrls ) { - - var textureLoader = new THREE.TextureLoader(); - var textures = []; - - for ( var i = 0; i < textureUrls.length; i ++ ) { - - textures[ i ] = textureLoader.load( baseUrl + textureUrls[ i ], checkLoadingComplete ); - textures[ i ].mapping = THREE.UVMapping; - textures[ i ].name = textureUrls[ i ]; - - } - - return textures; - - } - - function createPart( geometry, skinMap ) { - - var materialWireframe = new THREE.MeshLambertMaterial( { color: 0xffaa00, wireframe: true, morphTargets: true, morphNormals: true } ); - var materialTexture = new THREE.MeshLambertMaterial( { color: 0xffffff, wireframe: false, map: skinMap, morphTargets: true, morphNormals: true } ); - - // - - var mesh = new THREE.Mesh( geometry, materialTexture ); - mesh.rotation.y = - Math.PI / 2; - - mesh.castShadow = true; - mesh.receiveShadow = true; - - // - - mesh.materialTexture = materialTexture; - mesh.materialWireframe = materialWireframe; - - return mesh; - - } - - function checkLoadingComplete() { - - scope.loadCounter -= 1; - - if ( scope.loadCounter === 0 ) scope.onLoadComplete(); - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/geometries/DodecahedronGeometry.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/geometries/DodecahedronGeometry.d.ts deleted file mode 100644 index 587490ad1bfac1b8fb4fd685d4fcb82e59faa2d3..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/geometries/DodecahedronGeometry.d.ts +++ /dev/null @@ -1,15 +0,0 @@ -import { Geometry } from './../core/Geometry'; -import { PolyhedronBufferGeometry } from './PolyhedronGeometry'; - -export class DodecahedronBufferGeometry extends PolyhedronBufferGeometry { - constructor(radius?: number, detail?: number); -} - -export class DodecahedronGeometry extends Geometry { - constructor(radius?: number, detail?: number); - - parameters: { - radius: number; - detail: number; - }; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/textures/CanvasTexture.js b/spaces/banana-projects/web3d/node_modules/three/src/textures/CanvasTexture.js deleted file mode 100644 index 5239619fbce4370ee2a7169bdc20714578097861..0000000000000000000000000000000000000000 --- 
a/spaces/banana-projects/web3d/node_modules/three/src/textures/CanvasTexture.js +++ /dev/null @@ -1,19 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -import { Texture } from './Texture.js'; - -function CanvasTexture( canvas, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy ) { - - Texture.call( this, canvas, mapping, wrapS, wrapT, magFilter, minFilter, format, type, anisotropy ); - - this.needsUpdate = true; - -} - -CanvasTexture.prototype = Object.create( Texture.prototype ); -CanvasTexture.prototype.constructor = CanvasTexture; -CanvasTexture.prototype.isCanvasTexture = true; - -export { CanvasTexture }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326233250.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326233250.py deleted file mode 100644 index 0a38d76ce2ad23d2334dcc1d23d9094842aa1493..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326233250.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo | visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327093322.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327093322.py deleted file mode 100644 index 9f55fd67fc52747ea6994ee7a4efbd44cda9a7ad..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327093322.py +++ /dev/null @@ -1,66 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - #return Image.fromarray(restored_faces[0][:,:,::-1]) - return Image.fromarray(restored_img[:, :, ::-1]) - -title = "让美好回忆更清晰" - - -description = "上传老照片,点击Submit,稍等片刻,右侧Output将照片另存为即可。" -article = "

Github Repo | visitor badge
    " - -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True,share=True) - - diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621074627.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621074627.py deleted file mode 100644 index a2bf8c4cb07b63488e952ab9effb21da7d7e0c4e..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621074627.py +++ /dev/null @@ -1,31 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) -process_background = st.selectbox("表格线条是否隐藏",('True', 'False')) -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number,process_background=process_background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") \ No newline at end of file diff --git a/spaces/binery/Paddle_OCR/README.md b/spaces/binery/Paddle_OCR/README.md deleted file mode 100644 index b61dd332049aa04a8db6c8140aefd101331adc0b..0000000000000000000000000000000000000000 --- a/spaces/binery/Paddle_OCR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Paddel OCR -emoji: 🐨 -colorFrom: gray -colorTo: indigo -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Download Conflict Desert Storm 2 Full Version For Free ((FULL)).md b/spaces/bioriAsaeru/text-to-voice/Download Conflict Desert Storm 2 Full Version For Free ((FULL)).md deleted file mode 100644 index de358d09d63e4476da9506e8528c5557fc8fe941..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download Conflict Desert Storm 2 Full Version For Free ((FULL)).md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    desert storm 2 plays out in the deserts of the middle east, iraq, kuwait, saudi arabia and oman. the game has 11 campaigns, with missions such as the "elite" and "assassin" campaigns, which are based on true events. the most popular missions are scenarios 2 and 6, which are based on the military operations called desert shield and desert storm. other scenarios are "operation iraqi freedom" and "operation enduring freedom".

    -

    the game is set in the year 2004. the persian gulf war was fought between iraq and coalition forces during the period between 1st and 28th february, 1991. the main reason for the war was that iraq invaded kuwait. the war has been the longest and the bloodiest. conflict desert storm 2 is a war game that is played between 4 (or more) soldiers. the game is very exciting as you have to fight against the occupying iraqi forces and you have to complete the mission of your team. you have to protect your base and you have to destroy the iraqi forces. the game lets you use different types of weapons. enjoy the conflict desert storm 2 free download.

    -

    download conflict desert storm 2 full version for free


Download https://urloso.com/2uyOEs



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/How to Download Samacheer Kalvi 7th Std Tamil Book in PDF Format A Guide for Students and Teachers.md b/spaces/bioriAsaeru/text-to-voice/How to Download Samacheer Kalvi 7th Std Tamil Book in PDF Format A Guide for Students and Teachers.md deleted file mode 100644 index f673669a4bea736eb662a5fceb1955ae4c5bab07..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/How to Download Samacheer Kalvi 7th Std Tamil Book in PDF Format A Guide for Students and Teachers.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

Textbooks for the Tamil Nadu Samacheer Kalvi 7th standard are uploaded and readily available in PDF for free download. Aspirants preparing for TNPSC group exams can download the seventh-standard textbooks in PDF format below. The books are available for English and Tamil medium students. The Tamil Nadu State Board syllabus books for the 7th standard (Tamil, Maths, English, Science, and Social Science, in English and Tamil medium) are listed in the table below. Back questions with answers (a Solutions Guide) are given in the PDF links below.

    -

You can download the TN textbooks for Classes 1 to 12 in PDF through the links available on this page. The TN Board books for standards 1 to 12 are available for all subjects in Tamil and English medium.

    -

    samacheer kalvi 7th std tamil book download


    Download File 🗸 https://urloso.com/2uyOkY



    -

Samacheer Kalvi Books: the Tamil Nadu government has released new Samacheer Kalvi books for classes 1, 6, 9 and 11. Students searching for new or old Tamil Nadu TN SCERT books can download them from the links below. TN textbooks are available in both English and Tamil medium for standards 1 to 12. Students preparing for examinations can download the Tamil Nadu textbooks in PDF format. The updated syllabus for classes 1 to 12 of the Tamil Nadu school books is also available to download online. In this article, we provide the Tamil Nadu TN school books as free PDF downloads.

    -

Get the Tamil Nadu State Board textbook solutions for the new syllabus (2021-2022 edition) of the State Board of School Examinations, Tamil Nadu, for all classes and subjects in Tamil and English medium on TNBoardSolutions.com. We provide step-by-step Tamil Nadu State Board book answers and solution guides for classes 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2 and 1, covering all subjects. You can also download the Tamil Nadu State Board textbook solutions as free PDFs.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/blaziant/ysda_nlp_ops_update/app/model.py b/spaces/blaziant/ysda_nlp_ops_update/app/model.py deleted file mode 100644 index 07728c2f60ead641620f5114b096ddc91cc0d33d..0000000000000000000000000000000000000000 --- a/spaces/blaziant/ysda_nlp_ops_update/app/model.py +++ /dev/null @@ -1,31 +0,0 @@ -from typing import Tuple, List -import os -import numpy as np -import pickle -import torch -from transformers import BertTokenizer - -with open('/backend/app/vocabulary.pkl', 'rb') as f: - voc = pickle.load(f) -ind_to_cat = {val: key for key, val in voc.items()} -model = torch.load("/backend/app/final_model.pth") - -def model_predict(state_name: str, state_abstract: str) -> List[Tuple[float, str]]: - text = state_name + " " + state_abstract - tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') - encoding = tokenizer.encode_plus( - text, - add_special_tokens=True, - max_length=512, - return_token_type_ids=False, - padding='max_length', - return_attention_mask=True, - return_tensors='pt', - truncation=True - ) - predict = model(encoding["input_ids"], encoding["attention_mask"]).logits - proba = torch.nn.Softmax(dim=1)(predict) - top_3 = proba.topk(3) - labels = [ind_to_cat[ind] for ind in top_3.indices.detach().numpy()[0]] - p = top_3.values.detach().numpy()[0] - return sorted(zip(p, labels), reverse=True) \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/tests/test_frame_selector.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/tests/test_frame_selector.py deleted file mode 100644 index 65f05f55c78d4ab24950e5335818b3e1f981aa0d..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/tests/test_frame_selector.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
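# These tests pin RandomKFramesSelector to a fixed seed and assert the exact
# frame subsets returned, so any change to the sampling order is caught; the
# First/Last variants are checked against plain list slicing.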
- -import random -import unittest - -from densepose.data.video import FirstKFramesSelector, LastKFramesSelector, RandomKFramesSelector - - -class TestFrameSelector(unittest.TestCase): - def test_frame_selector_random_k_1(self): - _SEED = 43 - _K = 4 - random.seed(_SEED) - selector = RandomKFramesSelector(_K) - frame_tss = list(range(0, 20, 2)) - _SELECTED_GT = [0, 8, 4, 6] - selected = selector(frame_tss) - self.assertEqual(_SELECTED_GT, selected) - - def test_frame_selector_random_k_2(self): - _SEED = 43 - _K = 10 - random.seed(_SEED) - selector = RandomKFramesSelector(_K) - frame_tss = list(range(0, 6, 2)) - _SELECTED_GT = [0, 2, 4] - selected = selector(frame_tss) - self.assertEqual(_SELECTED_GT, selected) - - def test_frame_selector_first_k_1(self): - _K = 4 - selector = FirstKFramesSelector(_K) - frame_tss = list(range(0, 20, 2)) - _SELECTED_GT = frame_tss[:_K] - selected = selector(frame_tss) - self.assertEqual(_SELECTED_GT, selected) - - def test_frame_selector_first_k_2(self): - _K = 10 - selector = FirstKFramesSelector(_K) - frame_tss = list(range(0, 6, 2)) - _SELECTED_GT = frame_tss[:_K] - selected = selector(frame_tss) - self.assertEqual(_SELECTED_GT, selected) - - def test_frame_selector_last_k_1(self): - _K = 4 - selector = LastKFramesSelector(_K) - frame_tss = list(range(0, 20, 2)) - _SELECTED_GT = frame_tss[-_K:] - selected = selector(frame_tss) - self.assertEqual(_SELECTED_GT, selected) - - def test_frame_selector_last_k_2(self): - _K = 10 - selector = LastKFramesSelector(_K) - frame_tss = list(range(0, 6, 2)) - _SELECTED_GT = frame_tss[-_K:] - selected = selector(frame_tss) - self.assertEqual(_SELECTED_GT, selected) diff --git a/spaces/cc1799/vits-uma-genshin-honkai/utils.py b/spaces/cc1799/vits-uma-genshin-honkai/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/cc1799/vits-uma-genshin-honkai/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = 
ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/performer/modeling_flax_performer_utils.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/performer/modeling_flax_performer_utils.py deleted file mode 100644 index 6e6173729cc348eeca5204becc713481109cde6a..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/performer/modeling_flax_performer_utils.py +++ /dev/null @@ -1,658 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The Google Research Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -""" -IMPORTANT: - -This code was copied from -https://github.com/google-research/google-research/blob/master/performer/fast_self_attention/fast_self_attention.py on -6/11/2020. This is very new code, so it might be prone to change soon -> make sure to check the original code and -update accordingly - -Core Fast Attention Module for Flax. Implementation of the approximate fast softmax and generalized attention mechanism -leveraging structured random feature maps [RFM] techniques and low rank decomposition of the attention matrix. 
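In brief (a sketch of the idea, not additional API): softmax attention computes
D^{-1} A V with A = exp(QK^T/sqrt{d}). Here A is approximated as Q'(K')^T, where
Q' = phi(Q) and K' = phi(K) are produced by a random feature map phi, so attention
can be evaluated as Q'((K')^T V) together with an analogous renormalizer. The
O(L^2) attention matrix is never materialized and the cost becomes linear in the
sequence length L.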
-""" -# pylint: disable=invalid-name, missing-function-docstring, line-too-long - -import abc -import functools -from collections.abc import Iterable # pylint: disable=g-importing-member - -import jax -import jax.numpy as jnp -import numpy as onp -from absl import logging -from jax import lax, random - - -def nonnegative_softmax_kernel_feature_creator( - data, projection_matrix, attention_dims_t, batch_dims_t, precision, is_query, normalize_data=True, eps=0.0001 -): - """ - Constructs nonnegative kernel features for fast softmax attention - - Args: - data: input for which features are computes - projection_matrix: random matrix used to compute features - attention_dims_t: tuple of attention dimensions - batch_dims_t: tuple of batch dimensions - precision: precision parameter - is_query: predicate indicating whether input data corresponds to queries or - keys - normalize_data: predicate indicating whether data should be normalized, - eps: numerical stabilizer - - Returns: - Random features for fast softmax attention. - """ - del attention_dims_t - if normalize_data: - # We have e^{qk^T/sqrt{d}} = e^{q_norm k_norm^T}, where - # w_norm = w * data_normalizer for w in {q,k}. - data_normalizer = 1.0 / (jnp.sqrt(jnp.sqrt(data.shape[-1]))) - else: - data_normalizer = 1.0 - ratio = 1.0 / jnp.sqrt(projection_matrix.shape[0]) - data_mod_shape = data.shape[0 : len(batch_dims_t)] + projection_matrix.shape - data_thick_random_matrix = jnp.zeros(data_mod_shape) + projection_matrix - - data_dash = lax.dot_general( - data_normalizer * data, - data_thick_random_matrix, - (((data.ndim - 1,), (data_thick_random_matrix.ndim - 1,)), (batch_dims_t, batch_dims_t)), - precision=precision, - ) - - diag_data = jnp.square(data) - diag_data = jnp.sum(diag_data, axis=data.ndim - 1) - diag_data = (diag_data / 2.0) * data_normalizer * data_normalizer - diag_data = jnp.expand_dims(diag_data, axis=data.ndim - 1) - - if is_query: - last_dims_t = (len(data_dash.shape) - 1,) - data_dash = ratio * ( - jnp.exp(data_dash - diag_data - jnp.max(data_dash, axis=last_dims_t, keepdims=True)) + eps - ) - else: - data_dash = ratio * (jnp.exp(data_dash - diag_data - jnp.max(data_dash)) + eps) - - return data_dash - - -def sincos_softmax_kernel_feature_creator( - data, projection_matrix, attention_dims_t, batch_dims_t, precision, normalize_data=True -): - """ - Constructs kernel sin-cos features for fast softmax attention - - Args: - data: input for which features are computes - projection_matrix: random matrix used to compute features - attention_dims_t: tuple of attention dimensions - batch_dims_t: tuple of batch dimensions - precision: precision parameter - normalize_data: predicate indicating whether data should be normalized - - Returns: - Random features for fast softmax attention. - """ - if normalize_data: - # We have: exp(qk^T/sqrt{d}) = exp(|q|^2/2sqrt{d}) * exp(|k|^2/2sqrt{d}) * - # exp(-(|q*c-k*c|^2)/2), where c = 1.0 / sqrt{sqrt{d}}. 
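        # After rescaling by c, estimating exp(-(|q*c - k*c|^2)/2) is a Gaussian
        # kernel estimation problem, which the sin/cos projections below solve via
        # random Fourier features: for w ~ N(0, I), cos(w^T x)cos(w^T y) +
        # sin(w^T x)sin(w^T y) = cos(w^T (x - y)), whose expectation is the kernel.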
- data_normalizer = 1.0 / (jnp.sqrt(jnp.sqrt(data.shape[-1]))) - else: - data_normalizer = 1.0 - ratio = 1.0 / jnp.sqrt(projection_matrix.shape[0]) - data_mod_shape = data.shape[0 : len(batch_dims_t)] + projection_matrix.shape - data_thick_random_matrix = jnp.zeros(data_mod_shape) + projection_matrix - - data_dash = lax.dot_general( - data_normalizer * data, - data_thick_random_matrix, - (((data.ndim - 1,), (data_thick_random_matrix.ndim - 1,)), (batch_dims_t, batch_dims_t)), - precision=precision, - ) - data_dash_cos = ratio * jnp.cos(data_dash) - data_dash_sin = ratio * jnp.sin(data_dash) - data_dash = jnp.concatenate((data_dash_cos, data_dash_sin), axis=-1) - - # Constructing D_data and data^{'} - diag_data = jnp.square(data) - diag_data = jnp.sum(diag_data, axis=data.ndim - 1) - diag_data = (diag_data / 2.0) * data_normalizer * data_normalizer - diag_data = jnp.expand_dims(diag_data, axis=data.ndim - 1) - # Additional renormalization for numerical stability - data_renormalizer = jnp.max(diag_data, attention_dims_t, keepdims=True) - diag_data -= data_renormalizer - diag_data = jnp.exp(diag_data) - data_prime = data_dash * diag_data - return data_prime - - -def generalized_kernel_feature_creator( - data, projection_matrix, batch_dims_t, precision, kernel_fn, kernel_epsilon, normalize_data -): - """ - Constructs kernel features for fast generalized attention - - Args: - data: input for which features are computes - projection_matrix: matrix used to compute features - batch_dims_t: tuple of batch dimensions - precision: precision parameter - kernel_fn: kernel function used - kernel_epsilon: additive positive term added to every feature for numerical - stability - normalize_data: predicate indicating whether data should be normalized - - Returns: - Random features for fast generalized attention. 
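    Note: when projection_matrix is None the kernel function is applied directly
    to the (optionally normalized) inputs; otherwise the inputs are first
    projected, matching the two branches implemented below.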
- """ - if normalize_data: - data_normalizer = 1.0 / (jnp.sqrt(jnp.sqrt(data.shape[-1]))) - else: - data_normalizer = 1.0 - if projection_matrix is None: - return kernel_fn(data_normalizer * data) + kernel_epsilon - else: - data_mod_shape = data.shape[0 : len(batch_dims_t)] + projection_matrix.shape - data_thick_random_matrix = jnp.zeros(data_mod_shape) + projection_matrix - data_dash = lax.dot_general( - data_normalizer * data, - data_thick_random_matrix, - (((data.ndim - 1,), (data_thick_random_matrix.ndim - 1,)), (batch_dims_t, batch_dims_t)), - precision=precision, - ) - data_prime = kernel_fn(data_dash) + kernel_epsilon - return data_prime - - -def make_fast_softmax_attention( - qkv_dim, - renormalize_attention=True, - numerical_stabilizer=0.000001, - nb_features=256, - ortho_features=True, - ortho_scaling=0.0, - redraw_features=True, - unidirectional=False, - nonnegative_features=True, - lax_scan_unroll=1, -): - """Construct a fast softmax attention method.""" - logging.info( - "Fast softmax attention: %s features and orthogonal=%s, renormalize=%s", - nb_features, - ortho_features, - renormalize_attention, - ) - if ortho_features: - matrix_creator = functools.partial(GaussianOrthogonalRandomMatrix, nb_features, qkv_dim, scaling=ortho_scaling) - else: - matrix_creator = functools.partial(GaussianUnstructuredRandomMatrix, nb_features, qkv_dim) - if nonnegative_features: - - def kernel_feature_creator( - data, projection_matrix, attention_dims_t, batch_dims_t, precision, is_query, normalize_data=True - ): - return nonnegative_softmax_kernel_feature_creator( - data, - projection_matrix, - attention_dims_t, - batch_dims_t, - precision, - is_query, - normalize_data, - numerical_stabilizer, - ) - - else: - - def kernel_feature_creator( - data, projection_matrix, attention_dims_t, batch_dims_t, precision, is_query, normalize_data=True - ): - del is_query - return sincos_softmax_kernel_feature_creator( - data, projection_matrix, attention_dims_t, batch_dims_t, precision, normalize_data - ) - - attention_fn = FastAttentionviaLowRankDecomposition( - matrix_creator, - kernel_feature_creator, - renormalize_attention=renormalize_attention, - numerical_stabilizer=numerical_stabilizer, - redraw_features=redraw_features, - unidirectional=unidirectional, - lax_scan_unroll=lax_scan_unroll, - ).dot_product_attention - return attention_fn - - -def make_fast_generalized_attention( - qkv_dim, - renormalize_attention=True, - numerical_stabilizer=0.0, - nb_features=256, - features_type="deterministic", - kernel_fn=jax.nn.relu, - kernel_epsilon=0.001, - redraw_features=False, - unidirectional=False, - lax_scan_unroll=1, -): - """Construct a fast generalized attention menthod.""" - logging.info("Fast generalized attention.: %s features and renormalize=%s", nb_features, renormalize_attention) - if features_type == "ortho": - matrix_creator = functools.partial(GaussianOrthogonalRandomMatrix, nb_features, qkv_dim, scaling=False) - elif features_type == "iid": - matrix_creator = functools.partial(GaussianUnstructuredRandomMatrix, nb_features, qkv_dim) - elif features_type == "deterministic": - matrix_creator = None - else: - raise ValueError("Unknown feature value type") - - def kernel_feature_creator( - data, projection_matrix, attention_dims_t, batch_dims_t, precision, is_query, normalize_data=False - ): - del attention_dims_t - del is_query - return generalized_kernel_feature_creator( - data, projection_matrix, batch_dims_t, precision, kernel_fn, kernel_epsilon, normalize_data - ) - - attention_fn = 
FastAttentionviaLowRankDecomposition( - matrix_creator, - kernel_feature_creator, - renormalize_attention=renormalize_attention, - numerical_stabilizer=numerical_stabilizer, - redraw_features=redraw_features, - unidirectional=unidirectional, - lax_scan_unroll=lax_scan_unroll, - ).dot_product_attention - return attention_fn - - -class RandomMatrix(object): - r""" - Abstract class providing a method for constructing 2D random arrays. Class is responsible for constructing 2D - random arrays. - """ - - __metaclass__ = abc.ABCMeta - - @abc.abstractmethod - def get_2d_array(self): - raise NotImplementedError("Abstract method") - - -class GaussianUnstructuredRandomMatrix(RandomMatrix): - def __init__(self, nb_rows, nb_columns, key): - self.nb_rows = nb_rows - self.nb_columns = nb_columns - self.key = key - - def get_2d_array(self): - return random.normal(self.key, (self.nb_rows, self.nb_columns)) - - -class GaussianOrthogonalRandomMatrix(RandomMatrix): - r""" - Class providing a method to create Gaussian orthogonal matrix. Class is responsible for constructing 2D Gaussian - orthogonal arrays. - """ - - def __init__(self, nb_rows, nb_columns, key, scaling=0): - self.nb_rows = nb_rows - self.nb_columns = nb_columns - self.key = key - self.scaling = scaling - - def get_2d_array(self): - nb_full_blocks = int(self.nb_rows / self.nb_columns) - block_list = [] - rng = self.key - for _ in range(nb_full_blocks): - rng, rng_input = jax.random.split(rng) - unstructured_block = random.normal(rng_input, (self.nb_columns, self.nb_columns)) - q, _ = jnp.linalg.qr(unstructured_block) - q = jnp.transpose(q) - block_list.append(q) - remaining_rows = self.nb_rows - nb_full_blocks * self.nb_columns - if remaining_rows > 0: - rng, rng_input = jax.random.split(rng) - unstructured_block = random.normal(rng_input, (self.nb_columns, self.nb_columns)) - q, _ = jnp.linalg.qr(unstructured_block) - q = jnp.transpose(q) - block_list.append(q[0:remaining_rows]) - final_matrix = jnp.vstack(block_list) - - if self.scaling == 0: - multiplier = jnp.linalg.norm(random.normal(self.key, (self.nb_rows, self.nb_columns)), axis=1) - elif self.scaling == 1: - multiplier = jnp.sqrt(float(self.nb_columns)) * jnp.ones((self.nb_rows)) - else: - raise ValueError("Scaling must be one of {0, 1}. Was %s" % self._scaling) - - return jnp.matmul(jnp.diag(multiplier), final_matrix) - - -class FastAttention(object): - r""" - Abstract class providing a method for fast attention. Class is responsible for providing a method - for fast approximate attention. - """ - - __metaclass__ = abc.ABCMeta - - @abc.abstractmethod - def dot_product_attention( - self, - query, - key, - value, - dtype=jnp.float32, - bias=None, - axis=None, - broadcast_dropout=True, - dropout_rng=None, - dropout_rate=0.0, - deterministic=False, - precision=None, - ): - """ - Computes dot-product attention given query, key, and value. This is the core function for applying fast - approximate dot-product attention. It calculates the attention weights given query and key and combines the - values using the attention weights. This function supports multi-dimensional inputs - - Args: - query: queries for calculating attention with shape of [batch_size, dim1, - dim2, ..., dimN, num_heads, mem_channels]. - key: keys for calculating attention with shape of [batch_size, dim1, dim2, - ..., dimN, num_heads, mem_channels]. - value: values to be used in attention with shape of [batch_size, dim1, - dim2,..., dimN, num_heads, value_channels]. 
- dtype: the dtype of the computation (default: float32) - bias: bias for the attention weights. This can be used for incorporating - autoregressive mask, padding mask, proximity bias. - axis: axises over which the attention is applied. - broadcast_dropout: bool: use a broadcasted dropout along batch dims. - dropout_rng: JAX PRNGKey: to be used for dropout. - dropout_rate: dropout rate. - deterministic: bool, deterministic or not (to apply dropout). - precision: numerical precision of the computation see `jax.lax.Precision` - for details - - Returns: - Output of shape [bs, dim1, dim2, ..., dimN,, num_heads, value_channels]. - """ - raise NotImplementedError("Abstract method") - - -def _numerator(z_slice_shape, precision, unroll=1): - def fwd(qs, ks, vs): - def body(p, qkv): - (q, k, v) = qkv - p += jnp.einsum("...m,...d->...md", k, v, precision=precision) - X_slice = jnp.einsum("...m,...md->...d", q, p, precision=precision) - return p, X_slice - - init_value = jnp.zeros(z_slice_shape) - p, W = lax.scan(body, init_value, (qs, ks, vs), unroll=unroll) - return W, (p, qs, ks, vs) - - def bwd(pqkv, W_ct): - def body(carry, qkv_xct): - p, p_ct = carry - q, k, v, x_ct = qkv_xct - q_ct = jnp.einsum("...d,...md->...m", x_ct, p, precision=precision) - p_ct += jnp.einsum("...d,...m->...md", x_ct, q, precision=precision) - k_ct = jnp.einsum("...md,...d->...m", p_ct, v, precision=precision) - v_ct = jnp.einsum("...md,...m->...d", p_ct, k, precision=precision) - p -= jnp.einsum("...m,...d->...md", k, v, precision=precision) - return (p, p_ct), (q_ct, k_ct, v_ct) - - p, qs, ks, vs = pqkv - _, (qs_ct, ks_ct, vs_ct) = lax.scan( - body, (p, jnp.zeros_like(p)), (qs, ks, vs, W_ct), reverse=True, unroll=unroll - ) - return qs_ct, ks_ct, vs_ct - - @jax.custom_vjp - def _numerator_impl(qs, ks, vs): - W, _ = fwd(qs, ks, vs) - return W - - _numerator_impl.defvjp(fwd, bwd) - - return _numerator_impl - - -def _denominator(t_slice_shape, precision, unroll=1): - def fwd(qs, ks): - def body(p, qk): - q, k = qk - p += k - x = jnp.einsum("...m,...m->...", q, p, precision=precision) - return p, x - - p = jnp.zeros(t_slice_shape) - p, R = lax.scan(body, p, (qs, ks), unroll=unroll) - return R, (qs, ks, p) - - def bwd(qkp, R_ct): - def body(carry, qkx): - p, p_ct = carry - q, k, x_ct = qkx - q_ct = jnp.einsum("...,...m->...m", x_ct, p, precision=precision) - p_ct += jnp.einsum("...,...m->...m", x_ct, q, precision=precision) - k_ct = p_ct - p -= k - return (p, p_ct), (q_ct, k_ct) - - qs, ks, p = qkp - _, (qs_ct, ks_ct) = lax.scan(body, (p, jnp.zeros_like(p)), (qs, ks, R_ct), reverse=True, unroll=unroll) - return (qs_ct, ks_ct) - - @jax.custom_vjp - def _denominator_impl(qs, ks): - R, _ = fwd(qs, ks) - return R - - _denominator_impl.defvjp(fwd, bwd) - - return _denominator_impl - - -class FastAttentionviaLowRankDecomposition(FastAttention): - r""" - Class providing a method for fast attention via low rank decomposition. Class is responsible for providing a method - for fast dot-product attention with the use of low rank decomposition (e.g. with random - feature maps). - """ - - def __init__( - self, - matrix_creator, - kernel_feature_creator, - renormalize_attention, - numerical_stabilizer, - redraw_features, - unidirectional, - lax_scan_unroll=1, - ): # For optimal GPU performance, set to 16. 
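        # matrix_creator builds the random projection (None means deterministic
        # features); kernel_feature_creator maps queries/keys into feature space.
        # With redraw_features, a fresh projection is drawn on every call (seeded
        # from the query contents), and unidirectional selects the causal
        # prefix-sum path in dot_product_attention below.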
- rng = random.PRNGKey(0) - self.matrix_creator = matrix_creator - self.projection_matrix = self.draw_weights(rng) - self.kernel_feature_creator = kernel_feature_creator - self.renormalize_attention = renormalize_attention - self.numerical_stabilizer = numerical_stabilizer - self.redraw_features = redraw_features - self.unidirectional = unidirectional - self.lax_scan_unroll = lax_scan_unroll - - def draw_weights(self, key): - if self.matrix_creator is None: - return None - matrixrng, _ = random.split(key) - projection_matrix = self.matrix_creator(key=matrixrng).get_2d_array() - return projection_matrix - - def dot_product_attention( - self, - query, - key, - value, - dtype=jnp.float32, - bias=None, - axis=None, - broadcast_dropout=True, - dropout_rng=None, - dropout_rate=0.0, - deterministic=False, - precision=None, - ): - assert key.shape[:-1] == value.shape[:-1] - assert query.shape[0:1] == key.shape[0:1] and query.shape[-1] == key.shape[-1] - if axis is None: - axis = tuple(range(1, key.ndim - 2)) - if not isinstance(axis, Iterable): - axis = (axis,) - assert key.ndim == query.ndim - assert key.ndim == value.ndim - for ax in axis: - if not (query.ndim >= 3 and 1 <= ax < query.ndim - 2): - raise ValueError("Attention axis must be between the batch axis and the last-two axes.") - n = key.ndim - - # Constructing projection tensor. - if self.redraw_features: - # TODO(kchoro): Get rid of the constant below. - query_seed = lax.convert_element_type(jnp.ceil(jnp.sum(query) * 10000000.0), jnp.int32) - rng = random.PRNGKey(query_seed) - self.projection_matrix = self.draw_weights(rng) - - # batch_dims is , num_heads> - batch_dims = tuple(onp.delete(range(n), axis + (n - 1,))) - # q & k -> (bs, , num_heads, , channels) - qk_perm = batch_dims + axis + (n - 1,) - k_extra_perm = axis + batch_dims + (n - 1,) - key_extra = key.transpose(k_extra_perm) - key = key.transpose(qk_perm) - query = query.transpose(qk_perm) - # v -> (bs, , num_heads, , channels) - v_perm = batch_dims + axis + (n - 1,) - value = value.transpose(v_perm) - batch_dims_t = tuple(range(len(batch_dims))) - attention_dims_t = tuple(range(len(batch_dims), len(batch_dims) + len(axis))) - - # Constructing tensors Q^{'} and K^{'}. - query_prime = self.kernel_feature_creator( - query, self.projection_matrix, attention_dims_t, batch_dims_t, precision, True - ) - key_prime = self.kernel_feature_creator( - key, self.projection_matrix, attention_dims_t, batch_dims_t, precision, False - ) - - if self.unidirectional: - index = attention_dims_t[0] - z_slice_shape = key_prime.shape[0 : len(batch_dims_t)] + (key_prime.shape[-1],) + (value.shape[-1],) - - numerator_fn = _numerator(z_slice_shape, precision, self.lax_scan_unroll) - W = numerator_fn( - jnp.moveaxis(query_prime, index, 0), jnp.moveaxis(key_prime, index, 0), jnp.moveaxis(value, index, 0) - ) - - # Constructing W = (Q^{'}(K^{'})^{T})_{masked}V - W = jnp.moveaxis(W, 0, index) - - if not self.renormalize_attention: - # Unidirectional, not-normalized attention. - perm_inv = _invert_perm(qk_perm) - result = W.transpose(perm_inv) - return result - else: - # Unidirectional, normalized attention. 
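                # The renormalizer R[i] = q'_i . sum_{j<=i} k'_j is accumulated
                # with the same causal lax.scan machinery as the numerator, then
                # inverted and broadcast to rescale W.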
- thick_all_ones = jnp.zeros(key.shape[0:-1]) + jnp.ones(key_extra.shape[0 : len(axis)]) - - index = attention_dims_t[0] - t_slice_shape = key_prime.shape[0 : len(batch_dims_t)] + (key_prime.shape[-1],) - denominator_fn = _denominator(t_slice_shape, precision, self.lax_scan_unroll) - R = denominator_fn(jnp.moveaxis(query_prime, index, 0), jnp.moveaxis(key_prime, index, 0)) - - R = jnp.moveaxis(R, 0, index) - else: - contract_query = tuple(range(len(batch_dims) + len(axis), len(batch_dims) + len(axis) + 1)) - contract_z = tuple(range(len(batch_dims), len(batch_dims) + 1)) - # Constructing Z = (K^{'})^{T}V - # Z (bs, , num_heads, channels_m, channels_v) - Z = lax.dot_general( - key_prime, - value, - ((attention_dims_t, attention_dims_t), (batch_dims_t, batch_dims_t)), - precision=precision, - ) - # Constructing W = Q^{'}Z = Q^{'}(K^{'})^{T}V - # q (bs, , num_heads, , channels_m) - # Z (bs, , num_heads, channels_m, channels_v) - # W (bs, , num_heads, , channels_v) - W = lax.dot_general( - query_prime, Z, ((contract_query, contract_z), (batch_dims_t, batch_dims_t)), precision=precision - ) - if not self.renormalize_attention: - # Bidirectional, not-normalized attention. - perm_inv = _invert_perm(qk_perm) - result = W.transpose(perm_inv) - return result - else: - # Bidirectional, normalized attention. - thick_all_ones = jnp.zeros(key.shape[0:-1]) + jnp.ones(key_extra.shape[0 : len(axis)]) - contract_key = tuple(range(len(batch_dims), len(batch_dims) + len(axis))) - contract_thick_all_ones = tuple(range(thick_all_ones.ndim - len(axis), thick_all_ones.ndim)) - # Construct T = (K^{'})^{T} 1_L - # k (bs, , num_heads, , channels) - T = lax.dot_general( - key_prime, - thick_all_ones, - ((contract_key, contract_thick_all_ones), (batch_dims_t, batch_dims_t)), - precision=precision, - ) - - # Construct partition function: R = Q^{'} T = Q^{'}(K^{'})^{T} 1_L - # q_p (bs, , num_heads, , channs_m) - # T (bs, , num_heads, channels_m) - R = lax.dot_general( - query_prime, - T, - (((query_prime.ndim - 1,), (T.ndim - 1,)), (batch_dims_t, range(0, len(T.shape) - 1))), - precision=precision, - ) - - R = R + 2 * self.numerical_stabilizer * (jnp.abs(R) <= self.numerical_stabilizer) - R = jnp.reciprocal(R) - R = jnp.expand_dims(R, len(R.shape)) - # W (bs, , num_heads, , channels_v) - # R (bs, , num_heads, , extra_channel) - result = W * R - # back to (bs, dim1, dim2, ..., dimN, num_heads, channels) - perm_inv = _invert_perm(qk_perm) - result = result.transpose(perm_inv) - return result - - -def _invert_perm(perm): - perm_inv = [0] * len(perm) - for i, j in enumerate(perm): - perm_inv[j] = i - return tuple(perm_inv) diff --git a/spaces/chilge/taoli/models.py b/spaces/chilge/taoli/models.py deleted file mode 100644 index bdbce8445304abda792f235a4761b831fd6f4d12..0000000000000000000000000000000000000000 --- a/spaces/chilge/taoli/models.py +++ /dev/null @@ -1,351 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import attentions -import commons -import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - 
self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), 
padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): 
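                # Embed every overlapping partial window in one batch and average
                # the resulting speaker embeddings; no gradients are needed here.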
- partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if spec_lengths == None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - g = self.emb_g(g).transpose(1,2) - - z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # o = self.dec(z_slice, g=g) - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, c, f0, g=None, mel=None, c_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - - z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - - o = self.dec(z * c_mask, g=g, f0=f0) - - return o diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/_src/vmap/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/_src/vmap/__init__.py deleted file mode 100644 index 792a2fde38bb3563ed5b336132d7af008bf3e11a..0000000000000000000000000000000000000000 --- 
a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/_src/vmap/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# This file has moved to under torch/_functorch. It is not public API. -# If you are not a PyTorch developer and you are relying on the following -# imports, please file an issue. -from torch._functorch.vmap import ( - _add_batch_dim, - _broadcast_to_and_flatten, - _get_name, - _remove_batch_dim, - _validate_and_get_batch_size, - Tensor, - tree_flatten, - tree_unflatten, - _process_batched_inputs, - _create_batched_inputs, - _unwrap_batched, -) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/inputs.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/inputs.py deleted file mode 100644 index 9345530649a0b8843c27d7a0f965ac73bfcce7d6..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/inputs.py +++ /dev/null @@ -1,451 +0,0 @@ -# type: ignore -""" -This module defines various classes that can serve as the `input` to an interface. Each class must inherit from -`InputComponent`, and each class must define a path to its template. All of the subclasses of `InputComponent` are -automatically added to a registry, which allows them to be easily referenced in other parts of the code. -""" - -from __future__ import annotations - -from typing import Any, Optional - -from gradio import components -from gradio.deprecation import warn_deprecation - - -def warn_inputs_deprecation(): - warn_deprecation( - "Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components", - ) - - -class Textbox(components.Textbox): - def __init__( - self, - lines: int = 1, - placeholder: Optional[str] = None, - default: str = "", - numeric: Optional[bool] = False, - type: Optional[str] = "text", - label: Optional[str] = None, - optional: bool = False, - ): - warn_inputs_deprecation() - super().__init__( - value=default, - lines=lines, - placeholder=placeholder, - label=label, - numeric=numeric, - type=type, - optional=optional, - ) - - -class Number(components.Number): - """ - Component creates a field for user to enter numeric input. Provides a number as an argument to the wrapped function. - Input type: float - """ - - def __init__( - self, - default: Optional[float] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - default (float): default value. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no value for this component. - """ - warn_inputs_deprecation() - super().__init__(value=default, label=label, optional=optional) - - -class Slider(components.Slider): - """ - Component creates a slider that ranges from `minimum` to `maximum`. Provides number as an argument to the wrapped function. - Input type: float - """ - - def __init__( - self, - minimum: float = 0, - maximum: float = 100, - step: Optional[float] = None, - default: Optional[float] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - minimum (float): minimum value for slider. - maximum (float): maximum value for slider. - step (float): increment between slider values. - default (float): default value. - label (str): component name in interface. - optional (bool): this parameter is ignored. 
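        Example (illustrative; assumes `import gradio as gr`):
            gr.inputs.Slider(minimum=0, maximum=10, step=1, default=5, label="Count")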
- """ - warn_inputs_deprecation() - - super().__init__( - value=default, - minimum=minimum, - maximum=maximum, - step=step, - label=label, - optional=optional, - ) - - -class Checkbox(components.Checkbox): - """ - Component creates a checkbox that can be set to `True` or `False`. Provides a boolean as an argument to the wrapped function. - Input type: bool - """ - - def __init__( - self, - default: bool = False, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - label (str): component name in interface. - default (bool): if True, checked by default. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__(value=default, label=label, optional=optional) - - -class CheckboxGroup(components.CheckboxGroup): - """ - Component creates a set of checkboxes of which a subset can be selected. Provides a list of strings representing the selected choices as an argument to the wrapped function. - Input type: Union[List[str], List[int]] - """ - - def __init__( - self, - choices: list[str], - default: list[str] | None = None, - type: str = "value", - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - choices (List[str]): list of options to select from. - default (List[str]): default selected list of options. - type (str): Type of value to be returned by component. "value" returns the list of strings of the choices selected, "index" returns the list of indices of the choices selected. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - if default is None: - default = [] - warn_inputs_deprecation() - super().__init__( - value=default, - choices=choices, - type=type, - label=label, - optional=optional, - ) - - -class Radio(components.Radio): - """ - Component creates a set of radio buttons of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function. - Input type: Union[str, int] - """ - - def __init__( - self, - choices: list[str], - type: str = "value", - default: Optional[str] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - choices (List[str]): list of options to select from. - type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - default (str): the button selected by default. If None, no button is selected by default. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__( - choices=choices, - type=type, - value=default, - label=label, - optional=optional, - ) - - -class Dropdown(components.Dropdown): - """ - Component creates a dropdown of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function. - Input type: Union[str, int] - """ - - def __init__( - self, - choices: list[str], - type: str = "value", - default: Optional[str] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - choices (List[str]): list of options to select from. - type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - default (str): default value selected in dropdown. If None, no value is selected by default. - label (str): component name in interface. - optional (bool): this parameter is ignored. 
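        Example (illustrative; assumes `import gradio as gr`):
            gr.inputs.Dropdown(choices=["cat", "dog"], default="cat", label="Animal")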
- """ - warn_inputs_deprecation() - super().__init__( - choices=choices, - type=type, - value=default, - label=label, - optional=optional, - ) - - -class Image(components.Image): - """ - Component creates an image upload box with editing capabilities. - Input type: Union[numpy.array, PIL.Image, file-object] - """ - - def __init__( - self, - shape: tuple[int, int] = None, - image_mode: str = "RGB", - invert_colors: bool = False, - source: str = "upload", - tool: str = "editor", - type: str = "numpy", - label: str = None, - optional: bool = False, - ): - """ - Parameters: - shape (Tuple[int, int]): (width, height) shape to crop and resize image to; if None, matches input image size. - image_mode (str): How to process the uploaded image. Accepts any of the PIL image modes, e.g. "RGB" for color images, "RGBA" to include the transparency mask, "L" for black-and-white images. - invert_colors (bool): whether to invert the image as a preprocessing step. - source (str): Source of image. "upload" creates a box where user can drop an image file, "webcam" allows user to take snapshot from their webcam, "canvas" defaults to a white image that can be edited and drawn upon with tools. - tool (str): Tools used for editing. "editor" allows a full screen editor, "select" provides a cropping and zoom tool. - type (str): Type of value to be returned by component. "numpy" returns a numpy array with shape (height, width, 3) and values from 0 to 255, "pil" returns a PIL image object, "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__( - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - optional=optional, - ) - - -class Video(components.Video): - """ - Component creates a video file upload that is converted to a file path. - - Input type: filepath - """ - - def __init__( - self, - type: Optional[str] = None, - source: str = "upload", - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - type (str): Type of video format to be returned by component, such as 'avi' or 'mp4'. If set to None, video will keep uploaded format. - source (str): Source of video. "upload" creates a box where user can drop an video file, "webcam" allows user to record a video from their webcam. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded video, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(format=type, source=source, label=label, optional=optional) - - -class Audio(components.Audio): - """ - Component accepts audio input files. - Input type: Union[Tuple[int, numpy.array], file-object, numpy.array] - """ - - def __init__( - self, - source: str = "upload", - type: str = "numpy", - label: str = None, - optional: bool = False, - ): - """ - Parameters: - source (str): Source of audio. "upload" creates a box where user can drop an audio file, "microphone" creates a microphone input. - type (str): Type of value to be returned by component. 
"numpy" returns a 2-set tuple with an integer sample_rate and the data numpy.array of shape (samples, 2), "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded audio, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(source=source, type=type, label=label, optional=optional) - - -class File(components.File): - """ - Component accepts generic file uploads. - Input type: Union[file-object, bytes, List[Union[file-object, bytes]]] - """ - - def __init__( - self, - file_count: str = "single", - type: str = "file", - label: Optional[str] = None, - keep_filename: bool = True, - optional: bool = False, - ): - """ - Parameters: - file_count (str): if single, allows user to upload one file. If "multiple", user uploads multiple files. If "directory", user uploads all files in selected directory. Return type will be list for each file in case of "multiple" or "directory". - type (str): Type of value to be returned by component. "file" returns a temporary file object whose path can be retrieved by file_obj.name, "binary" returns an bytes object. - label (str): component name in interface. - keep_filename (bool): DEPRECATED. Original filename always kept. - optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__( - file_count=file_count, - type=type, - label=label, - keep_filename=keep_filename, - optional=optional, - ) - - -class Dataframe(components.Dataframe): - """ - Component accepts 2D input through a spreadsheet interface. - Input type: Union[pandas.DataFrame, numpy.array, List[Union[str, float]], List[List[Union[str, float]]]] - """ - - def __init__( - self, - headers: Optional[list[str]] = None, - row_count: int = 3, - col_count: Optional[int] = 3, - datatype: str | list[str] = "str", - col_width: int | list[int] = None, - default: Optional[list[list[Any]]] = None, - type: str = "pandas", - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - headers (List[str]): Header names to dataframe. If None, no headers are shown. - row_count (int): Limit number of rows for input. - col_count (int): Limit number of columns for input. If equal to 1, return data will be one-dimensional. Ignored if `headers` is provided. - datatype (Union[str, List[str]]): Datatype of values in sheet. Can be provided per column as a list of strings, or for the entire sheet as a single string. Valid datatypes are "str", "number", "bool", and "date". - col_width (Union[int, List[int]]): Width of columns in pixels. Can be provided as single value or list of values per column. - default (List[List[Any]]): Default value - type (str): Type of value to be returned by component. "pandas" for pandas dataframe, "numpy" for numpy array, or "array" for a Python array. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__( - value=default, - headers=headers, - row_count=row_count, - col_count=col_count, - datatype=datatype, - col_width=col_width, - type=type, - label=label, - optional=optional, - ) - - -class Timeseries(components.Timeseries): - """ - Component accepts pandas.DataFrame uploaded as a timeseries csv file. 
- Input type: pandas.DataFrame - """ - - def __init__( - self, - x: Optional[str] = None, - y: str | list[str] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - x (str): Column name of x (time) series. None if csv has no headers, in which case first column is x series. - y (Union[str, List[str]]): Column name of y series, or list of column names if multiple series. None if csv has no headers, in which case every column after first is a y series. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded csv file, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(x=x, y=y, label=label, optional=optional) - - -class State(components.State): - """ - Special hidden component that stores state across runs of the interface. - Input type: Any - """ - - def __init__( - self, - label: str = None, - default: Any = None, - ): - """ - Parameters: - label (str): component name in interface (not used). - default (Any): the initial value of the state. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__(value=default, label=label) - - -class Image3D(components.Model3D): - """ - Used for 3D image model output. - Input type: File object of type (.obj, glb, or .gltf) - """ - - def __init__( - self, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(label=label, optional=optional) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js deleted file mode 100644 index 330f3e0c7149cba01f903b763e530ec2272caed9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js +++ /dev/null @@ -1,11 +0,0 @@ -import{S as bt,e as wt,s as yt,J as H,K as p,p as U,M as Z,n as I,A as j,N as ct,O as je,U as re,z as W,u as jt,v as G,y as Vt,B as Ve,C as Ge,Q as qe,X as Fe,h as Ke,k as Gt,o as qt,x as Ft,m as Qe}from"./index-f877dfd5.js";import"./Button-11a87b79.js";function Ze(a){let t,e,i;return{c(){t=H("svg"),e=H("path"),i=H("circle"),p(e,"d","M23 19a2 2 0 0 1-2 2H3a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h4l2-3h6l2 3h4a2 2 0 0 1 2 2z"),p(i,"cx","12"),p(i,"cy","13"),p(i,"r","4"),p(t,"xmlns","http://www.w3.org/2000/svg"),p(t,"width","100%"),p(t,"height","100%"),p(t,"viewBox","0 0 24 24"),p(t,"fill","none"),p(t,"stroke","currentColor"),p(t,"stroke-width","1.5"),p(t,"stroke-linecap","round"),p(t,"stroke-linejoin","round"),p(t,"class","feather feather-camera")},m(n,r){U(n,t,r),Z(t,e),Z(t,i)},p:I,i:I,o:I,d(n){n&&j(t)}}}class Je extends bt{constructor(t){super(),wt(this,t,null,Ze,yt,{})}}function $e(a){let t,e;return{c(){t=H("svg"),e=H("circle"),p(e,"cx","12"),p(e,"cy","12"),p(e,"r","10"),p(t,"xmlns","http://www.w3.org/2000/svg"),p(t,"width","100%"),p(t,"height","100%"),p(t,"viewBox","0 0 24 
24"),p(t,"fill","red"),p(t,"stroke","red"),p(t,"stroke-width","1.5"),p(t,"stroke-linecap","round"),p(t,"stroke-linejoin","round"),p(t,"class","feather feather-circle")},m(i,n){U(i,t,n),Z(t,e)},p:I,i:I,o:I,d(i){i&&j(t)}}}class ti extends bt{constructor(t){super(),wt(this,t,null,$e,yt,{})}}function ei(a){let t,e;return{c(){t=H("svg"),e=H("rect"),p(e,"x","3"),p(e,"y","3"),p(e,"width","18"),p(e,"height","18"),p(e,"rx","2"),p(e,"ry","2"),p(t,"xmlns","http://www.w3.org/2000/svg"),p(t,"width","100%"),p(t,"height","100%"),p(t,"viewBox","0 0 24 24"),p(t,"fill","red"),p(t,"stroke","red"),p(t,"stroke-width","1.5"),p(t,"stroke-linecap","round"),p(t,"stroke-linejoin","round"),p(t,"class","feather feather-square")},m(i,n){U(i,t,n),Z(t,e)},p:I,i:I,o:I,d(i){i&&j(t)}}}class ii extends bt{constructor(t){super(),wt(this,t,null,ei,yt,{})}}function ai(a){let t,e,i;return{c(){t=H("svg"),e=H("polyline"),i=H("path"),p(e,"points","1 4 1 10 7 10"),p(i,"d","M3.51 15a9 9 0 1 0 2.13-9.36L1 10"),p(t,"xmlns","http://www.w3.org/2000/svg"),p(t,"width","100%"),p(t,"height","100%"),p(t,"viewBox","0 0 24 24"),p(t,"fill","none"),p(t,"stroke","currentColor"),p(t,"stroke-width","1.5"),p(t,"stroke-linecap","round"),p(t,"stroke-linejoin","round"),p(t,"class","feather feather-rotate-ccw")},m(n,r){U(n,t,r),Z(t,e),Z(t,i)},p:I,i:I,o:I,d(n){n&&j(t)}}}class ba extends bt{constructor(t){super(),wt(this,t,null,ai,yt,{})}}/*! - * Cropper.js v1.5.12 - * https://fengyuanchen.github.io/cropperjs - * - * Copyright 2015-present Chen Fengyuan - * Released under the MIT license - * - * Date: 2021-06-12T08:00:17.411Z - */function ne(a,t){var e=Object.keys(a);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(a);t&&(i=i.filter(function(n){return Object.getOwnPropertyDescriptor(a,n).enumerable})),e.push.apply(e,i)}return e}function Ee(a){for(var t=1;ta.length)&&(t=a.length);for(var e=0,i=new Array(t);e
    ',yi=Number.isNaN||X.isNaN;function b(a){return typeof a=="number"&&!yi(a)}var be=function(t){return t>0&&t<1/0};function St(a){return typeof a>"u"}function at(a){return Dt(a)==="object"&&a!==null}var _i=Object.prototype.hasOwnProperty;function nt(a){if(!at(a))return!1;try{var t=a.constructor,e=t.prototype;return t&&e&&_i.call(e,"isPrototypeOf")}catch{return!1}}function S(a){return typeof a=="function"}var xi=Array.prototype.slice;function Se(a){return Array.from?Array.from(a):xi.call(a)}function C(a,t){return a&&S(t)&&(Array.isArray(a)||b(a.length)?Se(a).forEach(function(e,i){t.call(a,e,i,a)}):at(a)&&Object.keys(a).forEach(function(e){t.call(a,a[e],e,a)})),a}var D=Object.assign||function(t){for(var e=arguments.length,i=new Array(e>1?e-1:0),n=1;n0&&i.forEach(function(r){at(r)&&Object.keys(r).forEach(function(o){t[o]=r[o]})}),t},Ei=/\.\d*(?:0|9){12}\d*$/;function st(a){var t=arguments.length>1&&arguments[1]!==void 0?arguments[1]:1e11;return Ei.test(a)?Math.round(a*t)/t:a}var Di=/^width|height|left|top|marginLeft|marginTop$/;function K(a,t){var e=a.style;C(t,function(i,n){Di.test(n)&&b(i)&&(i="".concat(i,"px")),e[n]=i})}function Mi(a,t){return a.classList?a.classList.contains(t):a.className.indexOf(t)>-1}function A(a,t){if(t){if(b(a.length)){C(a,function(i){A(i,t)});return}if(a.classList){a.classList.add(t);return}var e=a.className.trim();e?e.indexOf(t)<0&&(a.className="".concat(e," ").concat(t)):a.className=t}}function Y(a,t){if(t){if(b(a.length)){C(a,function(e){Y(e,t)});return}if(a.classList){a.classList.remove(t);return}a.className.indexOf(t)>=0&&(a.className=a.className.replace(t,""))}}function ot(a,t,e){if(t){if(b(a.length)){C(a,function(i){ot(i,t,e)});return}e?A(a,t):Y(a,t)}}var Oi=/([a-z\d])([A-Z])/g;function $t(a){return a.replace(Oi,"$1-$2").toLowerCase()}function Xt(a,t){return at(a[t])?a[t]:a.dataset?a.dataset[t]:a.getAttribute("data-".concat($t(t)))}function mt(a,t,e){at(e)?a[t]=e:a.dataset?a.dataset[t]=e:a.setAttribute("data-".concat($t(t)),e)}function Ti(a,t){if(at(a[t]))try{delete a[t]}catch{a[t]=void 0}else if(a.dataset)try{delete a.dataset[t]}catch{a.dataset[t]=void 0}else a.removeAttribute("data-".concat($t(t)))}var ke=/\s\s*/,Ie=function(){var a=!1;if(Ct){var t=!1,e=function(){},i=Object.defineProperty({},"once",{get:function(){return a=!0,t},set:function(r){t=r}});X.addEventListener("test",e,i),X.removeEventListener("test",e,i)}return a}();function z(a,t,e){var i=arguments.length>3&&arguments[3]!==void 0?arguments[3]:{},n=e;t.trim().split(ke).forEach(function(r){if(!Ie){var o=a.listeners;o&&o[r]&&o[r][e]&&(n=o[r][e],delete o[r][e],Object.keys(o[r]).length===0&&delete o[r],Object.keys(o).length===0&&delete a.listeners)}a.removeEventListener(r,n,i)})}function B(a,t,e){var i=arguments.length>3&&arguments[3]!==void 0?arguments[3]:{},n=e;t.trim().split(ke).forEach(function(r){if(i.once&&!Ie){var o=a.listeners,s=o===void 0?{}:o;n=function(){delete s[r][e],a.removeEventListener(r,n,i);for(var f=arguments.length,h=new Array(f),c=0;cMath.abs(e)&&(e=u)})}),e}function Et(a,t){var e=a.pageX,i=a.pageY,n={endX:e,endY:i};return t?n:Ee({startX:e,startY:i},n)}function Ai(a){var t=0,e=0,i=0;return C(a,function(n){var r=n.startX,o=n.startY;t+=r,e+=o,i+=1}),t/=i,e/=i,{pageX:t,pageY:e}}function Q(a){var t=a.aspectRatio,e=a.height,i=a.width,n=arguments.length>1&&arguments[1]!==void 0?arguments[1]:"contain",r=be(i),o=be(e);if(r&&o){var s=e*t;n==="contain"&&s>i||n==="cover"&&s90?{width:l,height:s}:{width:s,height:l}}function Si(a,t,e,i){var 
n=t.aspectRatio,r=t.naturalWidth,o=t.naturalHeight,s=t.rotate,l=s===void 0?0:s,f=t.scaleX,h=f===void 0?1:f,c=t.scaleY,u=c===void 0?1:c,v=e.aspectRatio,g=e.naturalWidth,_=e.naturalHeight,m=i.fillColor,x=m===void 0?"transparent":m,T=i.imageSmoothingEnabled,O=T===void 0?!0:T,w=i.imageSmoothingQuality,M=w===void 0?"low":w,d=i.maxWidth,y=d===void 0?1/0:d,R=i.maxHeight,L=R===void 0?1/0:R,V=i.minWidth,J=V===void 0?0:V,$=i.minHeight,q=$===void 0?0:$,P=document.createElement("canvas"),N=P.getContext("2d"),tt=Q({aspectRatio:v,width:y,height:L}),_t=Q({aspectRatio:v,width:J,height:q},"cover"),At=Math.min(tt.width,Math.max(_t.width,g)),Nt=Math.min(tt.height,Math.max(_t.height,_)),te=Q({aspectRatio:n,width:y,height:L}),ee=Q({aspectRatio:n,width:J,height:q},"cover"),ie=Math.min(te.width,Math.max(ee.width,r)),ae=Math.min(te.height,Math.max(ee.height,o)),Xe=[-ie/2,-ae/2,ie,ae];return P.width=st(At),P.height=st(Nt),N.fillStyle=x,N.fillRect(0,0,At,Nt),N.save(),N.translate(At/2,Nt/2),N.rotate(l*Math.PI/180),N.scale(h,u),N.imageSmoothingEnabled=O,N.imageSmoothingQuality=M,N.drawImage.apply(N,[a].concat(De(Xe.map(function(Ue){return Math.floor(st(Ue))})))),N.restore(),P}var Be=String.fromCharCode;function ki(a,t,e){var i="";e+=t;for(var n=t;n0;)e.push(Be.apply(null,Se(n.subarray(0,i)))),n=n.subarray(i);return"data:".concat(t,";base64,").concat(btoa(e.join("")))}function zi(a){var t=new DataView(a),e;try{var i,n,r;if(t.getUint8(0)===255&&t.getUint8(1)===216)for(var o=t.byteLength,s=2;s+1=8&&(r=f+c)}}}if(r){var u=t.getUint16(r,i),v,g;for(g=0;g=0?r:Ae),height:Math.max(i.offsetHeight,o>=0?o:Ne)};this.containerData=s,K(n,{width:s.width,height:s.height}),A(t,k),Y(n,k)},initCanvas:function(){var t=this.containerData,e=this.imageData,i=this.options.viewMode,n=Math.abs(e.rotate)%180===90,r=n?e.naturalHeight:e.naturalWidth,o=n?e.naturalWidth:e.naturalHeight,s=r/o,l=t.width,f=t.height;t.height*s>t.width?i===3?l=t.height*s:f=t.width/s:i===3?f=t.width/s:l=t.height*s;var h={aspectRatio:s,naturalWidth:r,naturalHeight:o,width:l,height:f};this.canvasData=h,this.limited=i===1||i===2,this.limitCanvas(!0,!0),h.width=Math.min(Math.max(h.width,h.minWidth),h.maxWidth),h.height=Math.min(Math.max(h.height,h.minHeight),h.maxHeight),h.left=(t.width-h.width)/2,h.top=(t.height-h.height)/2,h.oldLeft=h.left,h.oldTop=h.top,this.initialCanvasData=D({},h)},limitCanvas:function(t,e){var i=this.options,n=this.containerData,r=this.canvasData,o=this.cropBoxData,s=i.viewMode,l=r.aspectRatio,f=this.cropped&&o;if(t){var h=Number(i.minCanvasWidth)||0,c=Number(i.minCanvasHeight)||0;s>1?(h=Math.max(h,n.width),c=Math.max(c,n.height),s===3&&(c*l>h?h=c*l:c=h/l)):s>0&&(h?h=Math.max(h,f?o.width:0):c?c=Math.max(c,f?o.height:0):f&&(h=o.width,c=o.height,c*l>h?h=c*l:c=h/l));var u=Q({aspectRatio:l,width:h,height:c});h=u.width,c=u.height,r.minWidth=h,r.minHeight=c,r.maxWidth=1/0,r.maxHeight=1/0}if(e)if(s>(f?0:1)){var v=n.width-r.width,g=n.height-r.height;r.minLeft=Math.min(0,v),r.minTop=Math.min(0,g),r.maxLeft=Math.max(0,v),r.maxTop=Math.max(0,g),f&&this.limited&&(r.minLeft=Math.min(o.left,o.left+(o.width-r.width)),r.minTop=Math.min(o.top,o.top+(o.height-r.height)),r.maxLeft=o.left,r.maxTop=o.top,s===2&&(r.width>=n.width&&(r.minLeft=Math.min(0,v),r.maxLeft=Math.max(0,v)),r.height>=n.height&&(r.minTop=Math.min(0,g),r.maxTop=Math.max(0,g))))}else r.minLeft=-r.width,r.minTop=-r.height,r.maxLeft=n.width,r.maxTop=n.height},renderCanvas:function(t,e){var i=this.canvasData,n=this.imageData;if(e){var 
r=Ni({width:n.naturalWidth*Math.abs(n.scaleX||1),height:n.naturalHeight*Math.abs(n.scaleY||1),degree:n.rotate||0}),o=r.width,s=r.height,l=i.width*(o/i.naturalWidth),f=i.height*(s/i.naturalHeight);i.left-=(l-i.width)/2,i.top-=(f-i.height)/2,i.width=l,i.height=f,i.aspectRatio=o/s,i.naturalWidth=o,i.naturalHeight=s,this.limitCanvas(!0,!1)}(i.width>i.maxWidth||i.widthi.maxHeight||i.heighte.width?r.height=r.width/i:r.width=r.height*i),this.cropBoxData=r,this.limitCropBox(!0,!0),r.width=Math.min(Math.max(r.width,r.minWidth),r.maxWidth),r.height=Math.min(Math.max(r.height,r.minHeight),r.maxHeight),r.width=Math.max(r.minWidth,r.width*n),r.height=Math.max(r.minHeight,r.height*n),r.left=e.left+(e.width-r.width)/2,r.top=e.top+(e.height-r.height)/2,r.oldLeft=r.left,r.oldTop=r.top,this.initialCropBoxData=D({},r)},limitCropBox:function(t,e){var i=this.options,n=this.containerData,r=this.canvasData,o=this.cropBoxData,s=this.limited,l=i.aspectRatio;if(t){var f=Number(i.minCropBoxWidth)||0,h=Number(i.minCropBoxHeight)||0,c=s?Math.min(n.width,r.width,r.width+r.left,n.width-r.left):n.width,u=s?Math.min(n.height,r.height,r.height+r.top,n.height-r.top):n.height;f=Math.min(f,n.width),h=Math.min(h,n.height),l&&(f&&h?h*l>f?h=f/l:f=h*l:f?h=f/l:h&&(f=h*l),u*l>c?u=c/l:c=u*l),o.minWidth=Math.min(f,c),o.minHeight=Math.min(h,u),o.maxWidth=c,o.maxHeight=u}e&&(s?(o.minLeft=Math.max(0,r.left),o.minTop=Math.max(0,r.top),o.maxLeft=Math.min(n.width,r.left+r.width)-o.width,o.maxTop=Math.min(n.height,r.top+r.height)-o.height):(o.minLeft=0,o.minTop=0,o.maxLeft=n.width-o.width,o.maxTop=n.height-o.height))},renderCropBox:function(){var t=this.options,e=this.containerData,i=this.cropBoxData;(i.width>i.maxWidth||i.widthi.maxHeight||i.height=e.width&&i.height>=e.height?Oe:Zt),K(this.cropBox,D({width:i.width,height:i.height},vt({translateX:i.left,translateY:i.top}))),this.cropped&&this.limited&&this.limitCanvas(!0,!0),this.disabled||this.output()},output:function(){this.preview(),ht(this.element,zt,this.getData())}},Wi={initPreview:function(){var t=this.element,e=this.crossOrigin,i=this.options.preview,n=e?this.crossOriginUrl:this.url,r=t.alt||"The image to preview",o=document.createElement("img");if(e&&(o.crossOrigin=e),o.src=n,o.alt=r,this.viewBox.appendChild(o),this.viewBoxImage=o,!!i){var s=i;typeof i=="string"?s=t.ownerDocument.querySelectorAll(i):i.querySelector&&(s=[i]),this.previews=s,C(s,function(l){var f=document.createElement("img");mt(l,xt,{width:l.offsetWidth,height:l.offsetHeight,html:l.innerHTML}),e&&(f.crossOrigin=e),f.src=n,f.alt=r,f.style.cssText='display:block;width:100%;height:auto;min-width:0!important;min-height:0!important;max-width:none!important;max-height:none!important;image-orientation:0deg!important;"',l.innerHTML="",l.appendChild(f)})}},resetPreview:function(){C(this.previews,function(t){var e=Xt(t,xt);K(t,{width:e.width,height:e.height}),t.innerHTML=e.html,Ti(t,xt)})},preview:function(){var t=this.imageData,e=this.canvasData,i=this.cropBoxData,n=i.width,r=i.height,o=t.width,s=t.height,l=i.left-e.left-t.left,f=i.top-e.top-t.top;!this.cropped||this.disabled||(K(this.viewBoxImage,D({width:o,height:s},vt(D({translateX:-l,translateY:-f},t)))),C(this.previews,function(h){var c=Xt(h,xt),u=c.width,v=c.height,g=u,_=v,m=1;n&&(m=u/n,_=r*m),r&&_>v&&(m=v/r,g=n*m,_=v),K(h,{width:g,height:_}),K(h.getElementsByTagName("img")[0],D({width:o*m,height:s*m},vt(D({translateX:-l*m,translateY:-f*m},t))))}))}},Yi={bind:function(){var 
t=this.element,e=this.options,i=this.cropper;S(e.cropstart)&&B(t,Wt,e.cropstart),S(e.cropmove)&&B(t,Ht,e.cropmove),S(e.cropend)&&B(t,Pt,e.cropend),S(e.crop)&&B(t,zt,e.crop),S(e.zoom)&&B(t,Yt,e.zoom),B(i,le,this.onCropStart=this.cropStart.bind(this)),e.zoomable&&e.zoomOnWheel&&B(i,ve,this.onWheel=this.wheel.bind(this),{passive:!1,capture:!0}),e.toggleDragModeOnDblclick&&B(i,ce,this.onDblclick=this.dblclick.bind(this)),B(t.ownerDocument,fe,this.onCropMove=this.cropMove.bind(this)),B(t.ownerDocument,ue,this.onCropEnd=this.cropEnd.bind(this)),e.responsive&&B(window,pe,this.onResize=this.resize.bind(this))},unbind:function(){var t=this.element,e=this.options,i=this.cropper;S(e.cropstart)&&z(t,Wt,e.cropstart),S(e.cropmove)&&z(t,Ht,e.cropmove),S(e.cropend)&&z(t,Pt,e.cropend),S(e.crop)&&z(t,zt,e.crop),S(e.zoom)&&z(t,Yt,e.zoom),z(i,le,this.onCropStart),e.zoomable&&e.zoomOnWheel&&z(i,ve,this.onWheel,{passive:!1,capture:!0}),e.toggleDragModeOnDblclick&&z(i,ce,this.onDblclick),z(t.ownerDocument,fe,this.onCropMove),z(t.ownerDocument,ue,this.onCropEnd),e.responsive&&z(window,pe,this.onResize)}},Xi={resize:function(){if(!this.disabled){var t=this.options,e=this.container,i=this.containerData,n=e.offsetWidth/i.width,r=e.offsetHeight/i.height,o=Math.abs(n-1)>Math.abs(r-1)?n:r;if(o!==1){var s,l;t.restore&&(s=this.getCanvasData(),l=this.getCropBoxData()),this.render(),t.restore&&(this.setCanvasData(C(s,function(f,h){s[h]=f*o})),this.setCropBoxData(C(l,function(f,h){l[h]=f*o})))}}},dblclick:function(){this.disabled||this.options.dragMode===Re||this.setDragMode(Mi(this.dragBox,Lt)?Ce:Jt)},wheel:function(t){var e=this,i=Number(this.options.wheelZoomRatio)||.1,n=1;this.disabled||(t.preventDefault(),!this.wheeling&&(this.wheeling=!0,setTimeout(function(){e.wheeling=!1},50),t.deltaY?n=t.deltaY>0?1:-1:t.wheelDelta?n=-t.wheelDelta/120:t.detail&&(n=t.detail>0?1:-1),this.zoom(-n*i,t)))},cropStart:function(t){var e=t.buttons,i=t.button;if(!(this.disabled||(t.type==="mousedown"||t.type==="pointerdown"&&t.pointerType==="mouse")&&(b(e)&&e!==1||b(i)&&i!==0||t.ctrlKey))){var n=this.options,r=this.pointers,o;t.changedTouches?C(t.changedTouches,function(s){r[s.identifier]=Et(s)}):r[t.pointerId||0]=Et(t),Object.keys(r).length>1&&n.zoomable&&n.zoomOnTouch?o=Te:o=Xt(t.target,gt),vi.test(o)&&ht(this.element,Wt,{originalEvent:t,action:o})!==!1&&(t.preventDefault(),this.action=o,this.cropping=!1,o===Me&&(this.cropping=!0,A(this.dragBox,Mt)))}},cropMove:function(t){var e=this.action;if(!(this.disabled||!e)){var i=this.pointers;t.preventDefault(),ht(this.element,Ht,{originalEvent:t,action:e})!==!1&&(t.changedTouches?C(t.changedTouches,function(n){D(i[n.identifier]||{},Et(n,!0))}):D(i[t.pointerId||0]||{},Et(t,!0)),this.change(t))}},cropEnd:function(t){if(!this.disabled){var e=this.action,i=this.pointers;t.changedTouches?C(t.changedTouches,function(n){delete i[n.identifier]}):delete i[t.pointerId||0],e&&(t.preventDefault(),Object.keys(i).length||(this.action=""),this.cropping&&(this.cropping=!1,ot(this.dragBox,Mt,this.cropped&&this.options.modal)),ht(this.element,Pt,{originalEvent:t,action:e}))}}},Ui={change:function(t){var e=this.options,i=this.canvasData,n=this.containerData,r=this.cropBoxData,o=this.pointers,s=this.action,l=e.aspectRatio,f=r.left,h=r.top,c=r.width,u=r.height,v=f+c,g=h+u,_=0,m=0,x=n.width,T=n.height,O=!0,w;!l&&t.shiftKey&&(l=c&&u?c/u:1),this.limited&&(_=r.minLeft,m=r.minTop,x=_+Math.min(n.width,i.width,i.left+i.width),T=m+Math.min(n.height,i.height,i.top+i.height));var 
M=o[Object.keys(o)[0]],d={x:M.endX-M.startX,y:M.endY-M.startY},y=function(L){switch(L){case et:v+d.x>x&&(d.x=x-v);break;case it:f+d.x<_&&(d.x=_-f);break;case F:h+d.yT&&(d.y=T-g);break}};switch(s){case Zt:f+=d.x,h+=d.y;break;case et:if(d.x>=0&&(v>=x||l&&(h<=m||g>=T))){O=!1;break}y(et),c+=d.x,c<0&&(s=it,c=-c,f-=c),l&&(u=c/l,h+=(r.height-u)/2);break;case F:if(d.y<=0&&(h<=m||l&&(f<=_||v>=x))){O=!1;break}y(F),u-=d.y,h+=d.y,u<0&&(s=rt,u=-u,h-=u),l&&(c=u*l,f+=(r.width-c)/2);break;case it:if(d.x<=0&&(f<=_||l&&(h<=m||g>=T))){O=!1;break}y(it),c-=d.x,f+=d.x,c<0&&(s=et,c=-c,f-=c),l&&(u=c/l,h+=(r.height-u)/2);break;case rt:if(d.y>=0&&(g>=T||l&&(f<=_||v>=x))){O=!1;break}y(rt),u+=d.y,u<0&&(s=F,u=-u,h-=u),l&&(c=u*l,f+=(r.width-c)/2);break;case ft:if(l){if(d.y<=0&&(h<=m||v>=x)){O=!1;break}y(F),u-=d.y,h+=d.y,c=u*l}else y(F),y(et),d.x>=0?vm&&(u-=d.y,h+=d.y):(u-=d.y,h+=d.y);c<0&&u<0?(s=pt,u=-u,c=-c,h-=u,f-=c):c<0?(s=ut,c=-c,f-=c):u<0&&(s=dt,u=-u,h-=u);break;case ut:if(l){if(d.y<=0&&(h<=m||f<=_)){O=!1;break}y(F),u-=d.y,h+=d.y,c=u*l,f+=r.width-c}else y(F),y(it),d.x<=0?f>_?(c-=d.x,f+=d.x):d.y<=0&&h<=m&&(O=!1):(c-=d.x,f+=d.x),d.y<=0?h>m&&(u-=d.y,h+=d.y):(u-=d.y,h+=d.y);c<0&&u<0?(s=dt,u=-u,c=-c,h-=u,f-=c):c<0?(s=ft,c=-c,f-=c):u<0&&(s=pt,u=-u,h-=u);break;case pt:if(l){if(d.x<=0&&(f<=_||g>=T)){O=!1;break}y(it),c-=d.x,f+=d.x,u=c/l}else y(rt),y(it),d.x<=0?f>_?(c-=d.x,f+=d.x):d.y>=0&&g>=T&&(O=!1):(c-=d.x,f+=d.x),d.y>=0?g=0&&(v>=x||g>=T)){O=!1;break}y(et),c+=d.x,u=c/l}else y(rt),y(et),d.x>=0?v=0&&g>=T&&(O=!1):c+=d.x,d.y>=0?g0?s=d.y>0?dt:ft:d.x<0&&(f-=c,s=d.y>0?pt:ut),d.y<0&&(h-=u),this.cropped||(Y(this.cropBox,k),this.cropped=!0,this.limited&&this.limitCropBox(!0,!0));break}O&&(r.width=c,r.height=u,r.left=f,r.top=h,this.action=s,this.renderCropBox()),C(o,function(R){R.startX=R.endX,R.startY=R.endY})}},ji={crop:function(){return this.ready&&!this.cropped&&!this.disabled&&(this.cropped=!0,this.limitCropBox(!0,!0),this.options.modal&&A(this.dragBox,Mt),Y(this.cropBox,k),this.setCropBoxData(this.initialCropBoxData)),this},reset:function(){return this.ready&&!this.disabled&&(this.imageData=D({},this.initialImageData),this.canvasData=D({},this.initialCanvasData),this.cropBoxData=D({},this.initialCropBoxData),this.renderCanvas(),this.cropped&&this.renderCropBox()),this},clear:function(){return this.cropped&&!this.disabled&&(D(this.cropBoxData,{left:0,top:0,width:0,height:0}),this.cropped=!1,this.renderCropBox(),this.limitCanvas(!0,!0),this.renderCanvas(),Y(this.dragBox,Mt),A(this.cropBox,k)),this},replace:function(t){var e=arguments.length>1&&arguments[1]!==void 0?arguments[1]:!1;return!this.disabled&&t&&(this.isImg&&(this.element.src=t),e?(this.url=t,this.image.src=t,this.ready&&(this.viewBoxImage.src=t,C(this.previews,function(i){i.getElementsByTagName("img")[0].src=t}))):(this.isImg&&(this.replaced=!0),this.options.data=null,this.uncreate(),this.load(t))),this},enable:function(){return this.ready&&this.disabled&&(this.disabled=!1,Y(this.cropper,se)),this},disable:function(){return this.ready&&!this.disabled&&(this.disabled=!0,A(this.cropper,se)),this},destroy:function(){var t=this.element;return t[E]?(t[E]=void 0,this.isImg&&this.replaced&&(t.src=this.originalUrl),this.uncreate(),this):this},move:function(t){var e=arguments.length>1&&arguments[1]!==void 0?arguments[1]:t,i=this.canvasData,n=i.left,r=i.top;return this.moveTo(St(t)?t:n+Number(t),St(e)?e:r+Number(e))},moveTo:function(t){var e=arguments.length>1&&arguments[1]!==void 0?arguments[1]:t,i=this.canvasData,n=!1;return 
t=Number(t),e=Number(e),this.ready&&!this.disabled&&this.options.movable&&(b(t)&&(i.left=t,n=!0),b(e)&&(i.top=e,n=!0),n&&this.renderCanvas(!0)),this},zoom:function(t,e){var i=this.canvasData;return t=Number(t),t<0?t=1/(1-t):t=1+t,this.zoomTo(i.width*t/i.naturalWidth,null,e)},zoomTo:function(t,e,i){var n=this.options,r=this.canvasData,o=r.width,s=r.height,l=r.naturalWidth,f=r.naturalHeight;if(t=Number(t),t>=0&&this.ready&&!this.disabled&&n.zoomable){var h=l*t,c=f*t;if(ht(this.element,Yt,{ratio:t,oldRatio:o/l,originalEvent:i})===!1)return this;if(i){var u=this.pointers,v=Le(this.cropper),g=u&&Object.keys(u).length?Ai(u):{pageX:i.pageX,pageY:i.pageY};r.left-=(h-o)*((g.pageX-v.left-r.left)/o),r.top-=(c-s)*((g.pageY-v.top-r.top)/s)}else nt(e)&&b(e.x)&&b(e.y)?(r.left-=(h-o)*((e.x-r.left)/o),r.top-=(c-s)*((e.y-r.top)/s)):(r.left-=(h-o)/2,r.top-=(c-s)/2);r.width=h,r.height=c,this.renderCanvas(!0)}return this},rotate:function(t){return this.rotateTo((this.imageData.rotate||0)+Number(t))},rotateTo:function(t){return t=Number(t),b(t)&&this.ready&&!this.disabled&&this.options.rotatable&&(this.imageData.rotate=t%360,this.renderCanvas(!0,!0)),this},scaleX:function(t){var e=this.imageData.scaleY;return this.scale(t,b(e)?e:1)},scaleY:function(t){var e=this.imageData.scaleX;return this.scale(b(e)?e:1,t)},scale:function(t){var e=arguments.length>1&&arguments[1]!==void 0?arguments[1]:t,i=this.imageData,n=!1;return t=Number(t),e=Number(e),this.ready&&!this.disabled&&this.options.scalable&&(b(t)&&(i.scaleX=t,n=!0),b(e)&&(i.scaleY=e,n=!0),n&&this.renderCanvas(!0,!0)),this},getData:function(){var t=arguments.length>0&&arguments[0]!==void 0?arguments[0]:!1,e=this.options,i=this.imageData,n=this.canvasData,r=this.cropBoxData,o;if(this.ready&&this.cropped){o={x:r.left-n.left,y:r.top-n.top,width:r.width,height:r.height};var s=i.width/i.naturalWidth;if(C(o,function(h,c){o[c]=h/s}),t){var l=Math.round(o.y+o.height),f=Math.round(o.x+o.width);o.x=Math.round(o.x),o.y=Math.round(o.y),o.width=f-o.x,o.height=l-o.y}}else o={x:0,y:0,width:0,height:0};return e.rotatable&&(o.rotate=i.rotate||0),e.scalable&&(o.scaleX=i.scaleX||1,o.scaleY=i.scaleY||1),o},setData:function(t){var e=this.options,i=this.imageData,n=this.canvasData,r={};if(this.ready&&!this.disabled&&nt(t)){var o=!1;e.rotatable&&b(t.rotate)&&t.rotate!==i.rotate&&(i.rotate=t.rotate,o=!0),e.scalable&&(b(t.scaleX)&&t.scaleX!==i.scaleX&&(i.scaleX=t.scaleX,o=!0),b(t.scaleY)&&t.scaleY!==i.scaleY&&(i.scaleY=t.scaleY,o=!0)),o&&this.renderCanvas(!0,!0);var s=i.width/i.naturalWidth;b(t.x)&&(r.left=t.x*s+n.left),b(t.y)&&(r.top=t.y*s+n.top),b(t.width)&&(r.width=t.width*s),b(t.height)&&(r.height=t.height*s),this.setCropBoxData(r)}return this},getContainerData:function(){return this.ready?D({},this.containerData):{}},getImageData:function(){return this.sized?D({},this.imageData):{}},getCanvasData:function(){var t=this.canvasData,e={};return this.ready&&C(["left","top","width","height","naturalWidth","naturalHeight"],function(i){e[i]=t[i]}),e},setCanvasData:function(t){var e=this.canvasData,i=e.aspectRatio;return this.ready&&!this.disabled&&nt(t)&&(b(t.left)&&(e.left=t.left),b(t.top)&&(e.top=t.top),b(t.width)?(e.width=t.width,e.height=t.width/i):b(t.height)&&(e.height=t.height,e.width=t.height*i),this.renderCanvas(!0)),this},getCropBoxData:function(){var t=this.cropBoxData,e;return this.ready&&this.cropped&&(e={left:t.left,top:t.top,width:t.width,height:t.height}),e||{}},setCropBoxData:function(t){var e=this.cropBoxData,i=this.options.aspectRatio,n,r;return 
this.ready&&this.cropped&&!this.disabled&&nt(t)&&(b(t.left)&&(e.left=t.left),b(t.top)&&(e.top=t.top),b(t.width)&&t.width!==e.width&&(n=!0,e.width=t.width),b(t.height)&&t.height!==e.height&&(r=!0,e.height=t.height),i&&(n?e.height=e.width/i:r&&(e.width=e.height*i)),this.renderCropBox()),this},getCroppedCanvas:function(){var t=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{};if(!this.ready||!window.HTMLCanvasElement)return null;var e=this.canvasData,i=Si(this.image,this.imageData,e,t);if(!this.cropped)return i;var n=this.getData(),r=n.x,o=n.y,s=n.width,l=n.height,f=i.width/Math.floor(e.naturalWidth);f!==1&&(r*=f,o*=f,s*=f,l*=f);var h=s/l,c=Q({aspectRatio:h,width:t.maxWidth||1/0,height:t.maxHeight||1/0}),u=Q({aspectRatio:h,width:t.minWidth||0,height:t.minHeight||0},"cover"),v=Q({aspectRatio:h,width:t.width||(f!==1?i.width:s),height:t.height||(f!==1?i.height:l)}),g=v.width,_=v.height;g=Math.min(c.width,Math.max(u.width,g)),_=Math.min(c.height,Math.max(u.height,_));var m=document.createElement("canvas"),x=m.getContext("2d");m.width=st(g),m.height=st(_),x.fillStyle=t.fillColor||"transparent",x.fillRect(0,0,g,_);var T=t.imageSmoothingEnabled,O=T===void 0?!0:T,w=t.imageSmoothingQuality;x.imageSmoothingEnabled=O,w&&(x.imageSmoothingQuality=w);var M=i.width,d=i.height,y=r,R=o,L,V,J,$,q,P;y<=-s||y>M?(y=0,L=0,J=0,q=0):y<=0?(J=-y,y=0,L=Math.min(M,s+y),q=L):y<=M&&(J=0,L=Math.min(s,M-y),q=L),L<=0||R<=-l||R>d?(R=0,V=0,$=0,P=0):R<=0?($=-R,R=0,V=Math.min(d,l+R),P=V):R<=d&&($=0,V=Math.min(l,d-R),P=V);var N=[y,R,L,V];if(q>0&&P>0){var tt=g/s;N.push(J*tt,$*tt,q*tt,P*tt)}return x.drawImage.apply(x,[i].concat(De(N.map(function(_t){return Math.floor(st(_t))})))),m},setAspectRatio:function(t){var e=this.options;return!this.disabled&&!St(t)&&(e.aspectRatio=Math.max(0,t)||NaN,this.ready&&(this.initCropBox(),this.cropped&&this.renderCropBox())),this},setDragMode:function(t){var e=this.options,i=this.dragBox,n=this.face;if(this.ready&&!this.disabled){var r=t===Jt,o=e.movable&&t===Ce;t=r||o?t:Re,e.dragMode=t,mt(i,gt,t),ot(i,Lt,r),ot(i,Bt,o),e.cropBoxMovable||(mt(n,gt,t),ot(n,Lt,r),ot(n,Bt,o))}return this}},Vi=X.Cropper,Gi=function(){function a(t){var e=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{};if(ri(this,a),!t||!bi.test(t.tagName))throw new Error("The first argument is required and must be an or element.");this.element=t,this.options=D({},me,nt(e)&&e),this.cropped=!1,this.disabled=!1,this.pointers={},this.ready=!1,this.reloading=!1,this.replaced=!1,this.sized=!1,this.sizing=!1,this.init()}return ni(a,[{key:"init",value:function(){var e=this.element,i=e.tagName.toLowerCase(),n;if(!e[E]){if(e[E]=this,i==="img"){if(this.isImg=!0,n=e.getAttribute("src")||"",this.originalUrl=n,!n)return;n=e.src}else i==="canvas"&&window.HTMLCanvasElement&&(n=e.toDataURL());this.load(n)}}},{key:"load",value:function(e){var i=this;if(e){this.url=e,this.imageData={};var n=this.element,r=this.options;if(!r.rotatable&&!r.scalable&&(r.checkOrientation=!1),!r.checkOrientation||!window.ArrayBuffer){this.clone();return}if(gi.test(e)){mi.test(e)?this.read(Li(e)):this.clone();return}var o=new 
XMLHttpRequest,s=this.clone.bind(this);this.reloading=!0,this.xhr=o,o.onabort=s,o.onerror=s,o.ontimeout=s,o.onprogress=function(){o.getResponseHeader("content-type")!==ge&&o.abort()},o.onload=function(){i.read(o.response)},o.onloadend=function(){i.reloading=!1,i.xhr=null},r.checkCrossOrigin&&we(e)&&n.crossOrigin&&(e=ye(e)),o.open("GET",e,!0),o.responseType="arraybuffer",o.withCredentials=n.crossOrigin==="use-credentials",o.send()}}},{key:"read",value:function(e){var i=this.options,n=this.imageData,r=zi(e),o=0,s=1,l=1;if(r>1){this.url=Bi(e,ge);var f=Pi(r);o=f.rotate,s=f.scaleX,l=f.scaleY}i.rotatable&&(n.rotate=o),i.scalable&&(n.scaleX=s,n.scaleY=l),this.clone()}},{key:"clone",value:function(){var e=this.element,i=this.url,n=e.crossOrigin,r=i;this.options.checkCrossOrigin&&we(i)&&(n||(n="anonymous"),r=ye(i)),this.crossOrigin=n,this.crossOriginUrl=r;var o=document.createElement("img");n&&(o.crossOrigin=n),o.src=r||i,o.alt=e.alt||"The image to crop",this.image=o,o.onload=this.start.bind(this),o.onerror=this.stop.bind(this),A(o,he),e.parentNode.insertBefore(o,e.nextSibling)}},{key:"start",value:function(){var e=this,i=this.image;i.onload=null,i.onerror=null,this.sizing=!0;var n=X.navigator&&/(?:iPad|iPhone|iPod).*?AppleWebKit/i.test(X.navigator.userAgent),r=function(f,h){D(e.imageData,{naturalWidth:f,naturalHeight:h,aspectRatio:f/h}),e.initialImageData=D({},e.imageData),e.sizing=!1,e.sized=!0,e.build()};if(i.naturalWidth&&!n){r(i.naturalWidth,i.naturalHeight);return}var o=document.createElement("img"),s=document.body||document.documentElement;this.sizingImage=o,o.onload=function(){r(o.width,o.height),n||s.removeChild(o)},o.src=i.src,n||(o.style.cssText="left:0;max-height:none!important;max-width:none!important;min-height:0!important;min-width:0!important;opacity:0;position:absolute;top:0;z-index:-1;",s.appendChild(o))}},{key:"stop",value:function(){var e=this.image;e.onload=null,e.onerror=null,e.parentNode.removeChild(e),this.image=null}},{key:"build",value:function(){if(!(!this.sized||this.ready)){var e=this.element,i=this.options,n=this.image,r=e.parentNode,o=document.createElement("div");o.innerHTML=wi;var 
s=o.querySelector(".".concat(E,"-container")),l=s.querySelector(".".concat(E,"-canvas")),f=s.querySelector(".".concat(E,"-drag-box")),h=s.querySelector(".".concat(E,"-crop-box")),c=h.querySelector(".".concat(E,"-face"));this.container=r,this.cropper=s,this.canvas=l,this.dragBox=f,this.cropBox=h,this.viewBox=s.querySelector(".".concat(E,"-view-box")),this.face=c,l.appendChild(n),A(e,k),r.insertBefore(s,e.nextSibling),this.isImg||Y(n,he),this.initPreview(),this.bind(),i.initialAspectRatio=Math.max(0,i.initialAspectRatio)||NaN,i.aspectRatio=Math.max(0,i.aspectRatio)||NaN,i.viewMode=Math.max(0,Math.min(3,Math.round(i.viewMode)))||0,A(h,k),i.guides||A(h.getElementsByClassName("".concat(E,"-dashed")),k),i.center||A(h.getElementsByClassName("".concat(E,"-center")),k),i.background&&A(s,"".concat(E,"-bg")),i.highlight||A(c,fi),i.cropBoxMovable&&(A(c,Bt),mt(c,gt,Zt)),i.cropBoxResizable||(A(h.getElementsByClassName("".concat(E,"-line")),k),A(h.getElementsByClassName("".concat(E,"-point")),k)),this.render(),this.ready=!0,this.setDragMode(i.dragMode),i.autoCrop&&this.crop(),this.setData(i.data),S(i.ready)&&B(e,de,i.ready,{once:!0}),ht(e,de)}}},{key:"unbuild",value:function(){this.ready&&(this.ready=!1,this.unbind(),this.resetPreview(),this.cropper.parentNode.removeChild(this.cropper),Y(this.element,k))}},{key:"uncreate",value:function(){this.ready?(this.unbuild(),this.ready=!1,this.cropped=!1):this.sizing?(this.sizingImage.onload=null,this.sizing=!1,this.sized=!1):this.reloading?(this.xhr.onabort=null,this.xhr.abort()):this.image&&this.stop()}}],[{key:"noConflict",value:function(){return window.Cropper=Vi,a}},{key:"setDefaults",value:function(e){D(me,nt(e)&&e)}}]),a}();D(Gi.prototype,Hi,Wi,Yi,Xi,Ui,ji);var ze=function(){if(typeof Map<"u")return Map;function a(t,e){var i=-1;return t.some(function(n,r){return n[0]===e?(i=r,!0):!1}),i}return function(){function t(){this.__entries__=[]}return Object.defineProperty(t.prototype,"size",{get:function(){return this.__entries__.length},enumerable:!0,configurable:!0}),t.prototype.get=function(e){var i=a(this.__entries__,e),n=this.__entries__[i];return n&&n[1]},t.prototype.set=function(e,i){var n=a(this.__entries__,e);~n?this.__entries__[n][1]=i:this.__entries__.push([e,i])},t.prototype.delete=function(e){var i=this.__entries__,n=a(i,e);~n&&i.splice(n,1)},t.prototype.has=function(e){return!!~a(this.__entries__,e)},t.prototype.clear=function(){this.__entries__.splice(0)},t.prototype.forEach=function(e,i){i===void 0&&(i=null);for(var n=0,r=this.__entries__;n0},a.prototype.connect_=function(){!Ut||this.connected_||(document.addEventListener("transitionend",this.onTransitionEnd_),window.addEventListener("resize",this.refresh),Ji?(this.mutationsObserver_=new MutationObserver(this.refresh),this.mutationsObserver_.observe(document,{attributes:!0,childList:!0,characterData:!0,subtree:!0})):(document.addEventListener("DOMSubtreeModified",this.refresh),this.mutationEventsAdded_=!0),this.connected_=!0)},a.prototype.disconnect_=function(){!Ut||!this.connected_||(document.removeEventListener("transitionend",this.onTransitionEnd_),window.removeEventListener("resize",this.refresh),this.mutationsObserver_&&this.mutationsObserver_.disconnect(),this.mutationEventsAdded_&&document.removeEventListener("DOMSubtreeModified",this.refresh),this.mutationsObserver_=null,this.mutationEventsAdded_=!1,this.connected_=!1)},a.prototype.onTransitionEnd_=function(t){var e=t.propertyName,i=e===void 
0?"":e,n=Zi.some(function(r){return!!~i.indexOf(r)});n&&this.refresh()},a.getInstance=function(){return this.instance_||(this.instance_=new a),this.instance_},a.instance_=null,a}(),Pe=function(a,t){for(var e=0,i=Object.keys(t);e"u"||!(Element instanceof Object))){if(!(t instanceof lt(t).Element))throw new TypeError('parameter 1 is not of type "Element".');var e=this.observations_;e.has(t)||(e.set(t,new sa(t)),this.controller_.addObserver(this),this.controller_.refresh())}},a.prototype.unobserve=function(t){if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");if(!(typeof Element>"u"||!(Element instanceof Object))){if(!(t instanceof lt(t).Element))throw new TypeError('parameter 1 is not of type "Element".');var e=this.observations_;e.has(t)&&(e.delete(t),e.size||this.controller_.removeObserver(this))}},a.prototype.disconnect=function(){this.clearActive(),this.observations_.clear(),this.controller_.removeObserver(this)},a.prototype.gatherActive=function(){var t=this;this.clearActive(),this.observations_.forEach(function(e){e.isActive()&&t.activeObservations_.push(e)})},a.prototype.broadcastActive=function(){if(this.hasActive()){var t=this.callbackCtx_,e=this.activeObservations_.map(function(i){return new ha(i.target,i.broadcastRect())});this.callback_.call(t,e,t),this.clearActive()}},a.prototype.clearActive=function(){this.activeObservations_.splice(0)},a.prototype.hasActive=function(){return this.activeObservations_.length>0},a}(),We=typeof WeakMap<"u"?new WeakMap:new ze,Ye=function(){function a(t){if(!(this instanceof a))throw new TypeError("Cannot call a class as a function.");if(!arguments.length)throw new TypeError("1 argument required, but only 0 present.");var e=$i.getInstance(),i=new ca(t,e,this);We.set(this,i)}return a}();["observe","unobserve","disconnect"].forEach(function(a){Ye.prototype[a]=function(){var t;return(t=We.get(this))[a].apply(t,arguments)}});var wa=function(){return typeof Ot.ResizeObserver<"u"?Ot.ResizeObserver:Ye}();function xe(a){let t,e,i,n,r,o;const s=[fa,la],l=[];function f(h,c){return h[1]==="video"?0:1}return e=f(a),i=l[e]=s[e](a),{c(){t=ct("button"),i.c(),p(t,"class","svelte-425ent")},m(h,c){U(h,t,c),l[e].m(t,null),n=!0,r||(o=qe(t,"click",function(){Fe(a[1]==="image"?a[5]:a[6])&&(a[1]==="image"?a[5]:a[6]).apply(this,arguments)}),r=!0)},p(h,c){a=h;let u=e;e=f(a),e===u?l[e].p(a,c):(jt(),G(l[u],1,1,()=>{l[u]=null}),Vt(),i=l[e],i?i.p(a,c):(i=l[e]=s[e](a),i.c()),W(i,1),i.m(t,null))},i(h){n||(W(i),n=!0)},o(h){G(i),n=!1},d(h){h&&j(t),l[e].d(),r=!1,o()}}}function la(a){let t,e,i;return e=new Je({}),{c(){t=ct("div"),Gt(e.$$.fragment),p(t,"class","icon svelte-425ent")},m(n,r){U(n,t,r),qt(e,t,null),i=!0},p:I,i(n){i||(W(e.$$.fragment,n),i=!0)},o(n){G(e.$$.fragment,n),i=!1},d(n){n&&j(t),Ft(e)}}}function fa(a){let t,e,i,n;const r=[da,ua],o=[];function s(l,f){return l[4]?0:1}return t=s(a),e=o[t]=r[t](a),{c(){e.c(),i=Qe()},m(l,f){o[t].m(l,f),U(l,i,f),n=!0},p(l,f){let h=t;t=s(l),t!==h&&(jt(),G(o[h],1,1,()=>{o[h]=null}),Vt(),e=o[t],e||(e=o[t]=r[t](l),e.c()),W(e,1),e.m(i.parentNode,i))},i(l){n||(W(e),n=!0)},o(l){G(e),n=!1},d(l){l&&j(i),o[t].d(l)}}}function ua(a){let t,e,i;return e=new ti({}),{c(){t=ct("div"),Gt(e.$$.fragment),p(t,"class","icon svelte-425ent")},m(n,r){U(n,t,r),qt(e,t,null),i=!0},i(n){i||(W(e.$$.fragment,n),i=!0)},o(n){G(e.$$.fragment,n),i=!1},d(n){n&&j(t),Ft(e)}}}function da(a){let t,e,i;return e=new ii({}),{c(){t=ct("div"),Gt(e.$$.fragment),p(t,"class","icon 
svelte-425ent")},m(n,r){U(n,t,r),qt(e,t,null),i=!0},i(n){i||(W(e.$$.fragment,n),i=!0)},o(n){G(e.$$.fragment,n),i=!1},d(n){n&&j(t),Ft(e)}}}function pa(a){let t,e,i,n,r=!a[0]&&xe(a);return{c(){t=ct("div"),e=ct("video"),i=je(),r&&r.c(),p(e,"class","svelte-425ent"),re(e,"flip",a[2]),p(t,"class","wrap svelte-425ent")},m(o,s){U(o,t,s),Z(t,e),a[9](e),Z(t,i),r&&r.m(t,null),n=!0},p(o,[s]){(!n||s&4)&&re(e,"flip",o[2]),o[0]?r&&(jt(),G(r,1,1,()=>{r=null}),Vt()):r?(r.p(o,s),s&1&&W(r,1)):(r=xe(o),r.c(),W(r,1),r.m(t,null))},i(o){n||(W(r),n=!0)},o(o){G(r),n=!1},d(o){o&&j(t),a[9](null),r&&r.d()}}}function va(a,t,e){let i,n,{streaming:r=!1}=t,{pending:o=!1}=t,{mode:s="image"}=t,{mirror_webcam:l}=t,{include_audio:f}=t;const h=Ve();Ge(()=>n=document.createElement("canvas"));async function c(){try{_=await navigator.mediaDevices.getUserMedia({video:!0,audio:f}),e(3,i.srcObject=_,i),e(3,i.muted=!0,i),i.play()}catch(w){if(w instanceof DOMException&&w.name=="NotAllowedError")return h("error","Please allow access to the webcam for recording."),null;throw w}}function u(){var w=n.getContext("2d");if(i.videoWidth&&i.videoHeight){n.width=i.videoWidth,n.height=i.videoHeight,w.drawImage(i,0,0,i.videoWidth,i.videoHeight);var M=n.toDataURL("image/png");h(r?"stream":"capture",M)}}let v=!1,g=[],_,m,x;function T(){if(v){x.stop();let w=new Blob(g,{type:m}),M=new FileReader;M.onload=function(d){d.target&&(h("capture",{data:d.target.result,name:"sample."+m.substring(6),is_example:!1}),h("stop_recording"))},M.readAsDataURL(w)}else{h("start_recording"),g=[];let w=["video/webm","video/mp4"];for(let M of w)if(MediaRecorder.isTypeSupported(M)){m=M;break}if(m===null){console.error("No supported MediaRecorder mimeType");return}x=new MediaRecorder(_,{mimeType:m}),x.addEventListener("dataavailable",function(M){g.push(M.data)}),x.start(200)}e(4,v=!v)}c(),r&&s==="image"&&window.setInterval(()=>{i&&!o&&u()},500);function O(w){Ke[w?"unshift":"push"](()=>{i=w,e(3,i)})}return a.$$set=w=>{"streaming"in w&&e(0,r=w.streaming),"pending"in w&&e(7,o=w.pending),"mode"in w&&e(1,s=w.mode),"mirror_webcam"in w&&e(2,l=w.mirror_webcam),"include_audio"in w&&e(8,f=w.include_audio)},[r,s,l,i,v,u,T,o,f,O]}class ya extends bt{constructor(t){super(),wt(this,t,va,pa,yt,{streaming:0,pending:7,mode:1,mirror_webcam:2,include_audio:8})}}export{Gi as C,ba as U,ya as W,wa as i}; -//# sourceMappingURL=StaticImage.svelte_svelte_type_style_lang-72cfcc0b.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Disable Show window contents while dragging In Windows 10 Tips and Tricks.md b/spaces/cihyFjudo/fairness-paper-search/Disable Show window contents while dragging In Windows 10 Tips and Tricks.md deleted file mode 100644 index c4b3c792d69342aec00ce080a3d3f04b54cf43c4..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Disable Show window contents while dragging In Windows 10 Tips and Tricks.md +++ /dev/null @@ -1,13 +0,0 @@ -
    -

    I have everything there, except that when I drag windows, I only see the border of the window as I'm dragging. In other/older SKUs of Windows, there was a checkbox that I think said "Show Window Contents While Dragging", but I can't find such a checkbox on this OS.

    -

The setting (disable full window drag:i:1) determines whether window contents appear while you drag a window to a new location. It corresponds to the Show contents of window while dragging check box on the Experience tab of Remote Desktop Connection Options.
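For reference, a minimal Python sketch of forcing that option in a saved connection file follows. The file name and location are assumptions (mstsc typically saves Default.rdp under %USERPROFILE%\Documents, sometimes as UTF-16 rather than UTF-8, so adjust the encoding if needed):

```python
# Hypothetical sketch: force "disable full window drag:i:1" in a saved .rdp file.
# The file name is an assumption; saved .rdp files may also be UTF-16.
from pathlib import Path

rdp_file = Path("Default.rdp")

lines = rdp_file.read_text(encoding="utf-8").splitlines() if rdp_file.exists() else []
# Drop any existing occurrence of the option, then append the desired value.
lines = [line for line in lines if not line.startswith("disable full window drag:")]
lines.append("disable full window drag:i:1")  # 1 = show only the outline while dragging

rdp_file.write_text("\n".join(lines) + "\n", encoding="utf-8")
```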

    -

    Disable Show window contents while dragging In Windows 10





    -

The window-drag behaviour of a RemoteApp is controlled by your local computer. If your local computer has Show contents of window while dragging enabled, then the RemoteApp will show the contents as well. If you want it off, like I do, just create a GPO that disables "show window contents while dragging", or change the setting manually per computer.
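For the manual per-computer change, the checkbox maps to a registry value. A minimal sketch, assuming a per-user change on the client (a GPO registry preference can push the same value):

```python
# Minimal sketch (Windows only): the "Show window contents while dragging"
# checkbox is stored as the string value DragFullWindows under
# HKCU\Control Panel\Desktop -- "1" shows the contents, "0" shows the outline.
import winreg

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop",
                    0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DragFullWindows", 0, winreg.REG_SZ, "0")

# The value is read at logon; sign out and back in (or broadcast the change,
# see the SystemParametersInfo sketch further down) for it to take effect.
```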

    -

We are running various versions of the Citrix plugin on both Windows XP and Windows 7 clients, and they all show the following behaviour:
When using the Citrix plugin to work with remote applications and/or virtual desktops, the plugin disables the LOCAL setting "Show window contents while dragging" (so on the Windows XP/7 client!). The moment the option is disabled is not consistent: sometimes it is disabled immediately after starting a remote app, sometimes after some time, and sometimes not at all (but most of the time it is!).
A lot of our users are complaining about this, because they find the "Show window contents while dragging" option very useful.
We are wondering if there is a way to prevent this from happening!

    -

However, I'm experiencing a problem with your mouse software (BlackElement.exe). As long as the software is running, the Windows feature "Show window contents while dragging" in the System Properties gets disabled (although the corresponding registry entry keeps the value it was set to...).
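That mismatch (registry value unchanged, but the live setting off) can be checked and reset through the Win32 API, which changes the effective setting directly. A minimal ctypes sketch:

```python
# Minimal sketch (Windows only): read and restore the *effective*
# "show window contents while dragging" state via SystemParametersInfo.
import ctypes

SPI_SETDRAGFULLWINDOWS = 0x0025
SPI_GETDRAGFULLWINDOWS = 0x0026
SPIF_UPDATEINIFILE = 0x0001   # also persist the value to the user profile
SPIF_SENDCHANGE = 0x0002      # broadcast WM_SETTINGCHANGE to running apps

state = ctypes.c_uint(0)
ctypes.windll.user32.SystemParametersInfoW(
    SPI_GETDRAGFULLWINDOWS, 0, ctypes.byref(state), 0)
print("show window contents while dragging:", bool(state.value))

# Turn the feature back on and tell running applications about it.
ctypes.windll.user32.SystemParametersInfoW(
    SPI_SETDRAGFULLWINDOWS, 1, None, SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)
```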

    -

I have the same trouble with the Tt Black Element software: I reinstalled Windows 10 and everything worked fine until I installed the mouse software.

I disabled the mouse software at Windows startup and everything was OK; however, when the software starts with Windows, the problem appears again.

    -

I've been seeing more and more people struggling with this lately when working with Server 2019 or Windows 10 RDS.
The issue shows itself with the default settings for Server 2019 and is caused by the normal 1px window border no longer being there. On top of that, the default color scheme makes everything white, with shadows under the windows.
That is fine when shadows are enabled, because you can then tell the windows apart when working with multiple overlapping windows.

If you are not sure what I am referring to, the issue looks like this:

    -

    -

I'm with you, Luis. I have been using El Capitan since the first beta last July, if I remember correctly, and have never accidentally triggered Mission Control by dragging windows to the top of the screen.

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Film Ahista Ahista 1 Full Movie Free HOT!.md b/spaces/cihyFjudo/fairness-paper-search/Download Film Ahista Ahista 1 Full Movie Free HOT!.md deleted file mode 100644 index 2d8c2d1a4c03d1676e5d4b96feb197e1d651b562..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Film Ahista Ahista 1 Full Movie Free HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Download Film Ahista Ahista 1 Full Movie Free





    -
-
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Follow Adder 1.1.150812 with Key How to Schedule Your Posts and Stories in Advance.md b/spaces/cihyFjudo/fairness-paper-search/Follow Adder 1.1.150812 with Key How to Schedule Your Posts and Stories in Advance.md deleted file mode 100644 index 3e49da54c71b781858151549ca0de2e7be0ca2e2..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Follow Adder 1.1.150812 with Key How to Schedule Your Posts and Stories in Advance.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Follow Adder 1.1.150812 with Key





    -
-
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Fractalius - Photoshop Filter Plugin Serial Key Keygen Explore the World of Fractal Art with This Powerful Plugin.md b/spaces/cihyFjudo/fairness-paper-search/Fractalius - Photoshop Filter Plugin Serial Key Keygen Explore the World of Fractal Art with This Powerful Plugin.md deleted file mode 100644 index ad94e315e28759592ed43b6496e058a22367f336..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Fractalius - Photoshop Filter Plugin Serial Key Keygen Explore the World of Fractal Art with This Powerful Plugin.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

Using a warez version, crack, warez passwords, patches, serial numbers, registration codes, key generator, pirate key, keymaker or keygen for a Fractalius plug-in 1.77 license key is illegal and prevents future development of Fractalius plug-in 1.77. Download links are directly from our mirrors or the publisher's website; Fractalius plug-in 1.77 torrent files or shared files from free file-sharing and free upload services, including Rapidshare, MegaUpload, HellShare, HotFile, FileServe, YouSendIt, SendSpace, DepositFiles, Letitbit, MailBigFile, DropSend, MediaMax, LeapFile, zUpload, MyOtherDrive, DivShare or MediaFire, are not allowed!

    -

Your computer will be at risk of getting infected with spyware, adware, viruses, worms, trojan horses, dialers, etc. while you are searching and browsing these illegal sites, which distribute a so-called keygen, key generator, pirate key, serial number, warez full version or crack for Fractalius plug-in 1.77. These infections might corrupt your computer installation or breach your privacy. A Fractalius plug-in 1.77 keygen or key generator might contain a trojan horse that opens a backdoor on your computer. Hackers can use this backdoor to take control of your computer, copy data from it, or use it to distribute viruses and spam to other people.

    -

    Fractalius - Photoshop Filter Plugin Serial Key Keygen





    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Highway 203 2 Tamil Movie Download How to Get the Full HD Version for Free.md b/spaces/cihyFjudo/fairness-paper-search/Highway 203 2 Tamil Movie Download How to Get the Full HD Version for Free.md deleted file mode 100644 index 96f3bcb1e9973d0a4e0553f0a37d4b6d144b9ace..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Highway 203 2 Tamil Movie Download How to Get the Full HD Version for Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Highway 203 2 Tamil Movie Download





-
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/NordVPN 6.19.4 Crack What You Need to Know Before You Download It.md b/spaces/cihyFjudo/fairness-paper-search/NordVPN 6.19.4 Crack What You Need to Know Before You Download It.md deleted file mode 100644 index 1a228e4ddfa321a70f7f58cf1af1cd03fef583b1..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/NordVPN 6.19.4 Crack What You Need to Know Before You Download It.md +++ /dev/null @@ -1,6 +0,0 @@ -

    NordVPN 6.19.4 Crack





    -
-
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Simandl 30 Etudes Pdf 22l Learn to Play the Double Bass with Simandls Method.md b/spaces/cihyFjudo/fairness-paper-search/Simandl 30 Etudes Pdf 22l Learn to Play the Double Bass with Simandls Method.md deleted file mode 100644 index 498ddb208829b6c1ae9ba01515bb84aca7116ade..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Simandl 30 Etudes Pdf 22l Learn to Play the Double Bass with Simandls Method.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Simandl 30 Etudes Pdf 22l





-
    -
    -
    -

    diff --git a/spaces/cleanmaster/so-vits-svc-akagi/hubert/__init__.py b/spaces/cleanmaster/so-vits-svc-akagi/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/_cffi_errors.h b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/_cffi_errors.h deleted file mode 100644 index 158e0590346a9a8b2ab047ac1bd23bcb3af21398..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/_cffi_errors.h +++ /dev/null @@ -1,149 +0,0 @@ -#ifndef CFFI_MESSAGEBOX -# ifdef _MSC_VER -# define CFFI_MESSAGEBOX 1 -# else -# define CFFI_MESSAGEBOX 0 -# endif -#endif - - -#if CFFI_MESSAGEBOX -/* Windows only: logic to take the Python-CFFI embedding logic - initialization errors and display them in a background thread - with MessageBox. The idea is that if the whole program closes - as a result of this problem, then likely it is already a console - program and you can read the stderr output in the console too. - If it is not a console program, then it will likely show its own - dialog to complain, or generally not abruptly close, and for this - case the background thread should stay alive. -*/ -static void *volatile _cffi_bootstrap_text; - -static PyObject *_cffi_start_error_capture(void) -{ - PyObject *result = NULL; - PyObject *x, *m, *bi; - - if (InterlockedCompareExchangePointer(&_cffi_bootstrap_text, - (void *)1, NULL) != NULL) - return (PyObject *)1; - - m = PyImport_AddModule("_cffi_error_capture"); - if (m == NULL) - goto error; - - result = PyModule_GetDict(m); - if (result == NULL) - goto error; - -#if PY_MAJOR_VERSION >= 3 - bi = PyImport_ImportModule("builtins"); -#else - bi = PyImport_ImportModule("__builtin__"); -#endif - if (bi == NULL) - goto error; - PyDict_SetItemString(result, "__builtins__", bi); - Py_DECREF(bi); - - x = PyRun_String( - "import sys\n" - "class FileLike:\n" - " def write(self, x):\n" - " try:\n" - " of.write(x)\n" - " except: pass\n" - " self.buf += x\n" - " def flush(self):\n" - " pass\n" - "fl = FileLike()\n" - "fl.buf = ''\n" - "of = sys.stderr\n" - "sys.stderr = fl\n" - "def done():\n" - " sys.stderr = of\n" - " return fl.buf\n", /* make sure the returned value stays alive */ - Py_file_input, - result, result); - Py_XDECREF(x); - - error: - if (PyErr_Occurred()) - { - PyErr_WriteUnraisable(Py_None); - PyErr_Clear(); - } - return result; -} - -#pragma comment(lib, "user32.lib") - -static DWORD WINAPI _cffi_bootstrap_dialog(LPVOID ignored) -{ - Sleep(666); /* may be interrupted if the whole process is closing */ -#if PY_MAJOR_VERSION >= 3 - MessageBoxW(NULL, (wchar_t *)_cffi_bootstrap_text, - L"Python-CFFI error", - MB_OK | MB_ICONERROR); -#else - MessageBoxA(NULL, (char *)_cffi_bootstrap_text, - "Python-CFFI error", - MB_OK | MB_ICONERROR); -#endif - _cffi_bootstrap_text = NULL; - return 0; -} - -static void _cffi_stop_error_capture(PyObject *ecap) -{ - PyObject *s; - void *text; - - if (ecap == (PyObject *)1) - return; - - if (ecap == NULL) - goto error; - - s = PyRun_String("done()", Py_eval_input, ecap, ecap); - if (s == NULL) - goto error; - - /* Show a dialog box, but in a background thread, and - never show multiple dialog boxes at once. 
*/ -#if PY_MAJOR_VERSION >= 3 - text = PyUnicode_AsWideCharString(s, NULL); -#else - text = PyString_AsString(s); -#endif - - _cffi_bootstrap_text = text; - - if (text != NULL) - { - HANDLE h; - h = CreateThread(NULL, 0, _cffi_bootstrap_dialog, - NULL, 0, NULL); - if (h != NULL) - CloseHandle(h); - } - /* decref the string, but it should stay alive as 'fl.buf' - in the small module above. It will really be freed only if - we later get another similar error. So it's a leak of at - most one copy of the small module. That's fine for this - situation which is usually a "fatal error" anyway. */ - Py_DECREF(s); - PyErr_Clear(); - return; - - error: - _cffi_bootstrap_text = NULL; - PyErr_Clear(); -} - -#else - -static PyObject *_cffi_start_error_capture(void) { return NULL; } -static void _cffi_stop_error_capture(PyObject *ecap) { } - -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg_demux.c b/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg_demux.c deleted file mode 100644 index 5afb3ff2c8288f04dfd9f84f7565dd07e4912512..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg_demux.c +++ /dev/null @@ -1,1294 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include <float.h> -#include <stdint.h> - -#include "ffmpeg.h" - -#include "libavutil/avassert.h" -#include "libavutil/avstring.h" -#include "libavutil/display.h" -#include "libavutil/error.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/opt.h" -#include "libavutil/parseutils.h" -#include "libavutil/pixdesc.h" -#include "libavutil/time.h" -#include "libavutil/timestamp.h" -#include "libavutil/thread.h" -#include "libavutil/threadmessage.h" - -#include "libavcodec/packet.h" - -#include "libavformat/avformat.h" - -static const char *const opt_name_discard[] = {"discard", NULL}; -static const char *const opt_name_reinit_filters[] = {"reinit_filter", NULL}; -static const char *const opt_name_fix_sub_duration[] = {"fix_sub_duration", NULL}; -static const char *const opt_name_canvas_sizes[] = {"canvas_size", NULL}; -static const char *const opt_name_guess_layout_max[] = {"guess_layout_max", NULL}; -static const char *const opt_name_ts_scale[] = {"itsscale", NULL}; -static const char *const opt_name_hwaccels[] = {"hwaccel", NULL}; -static const char *const opt_name_hwaccel_devices[] = {"hwaccel_device", NULL}; -static const char *const opt_name_hwaccel_output_formats[] = {"hwaccel_output_format", NULL}; -static const char *const opt_name_autorotate[] = {"autorotate", NULL}; -static const char *const opt_name_display_rotations[] = {"display_rotation", NULL}; -static const char *const opt_name_display_hflips[] = {"display_hflip", NULL}; -static const char *const opt_name_display_vflips[] = {"display_vflip", NULL}; - -typedef struct DemuxStream
{ - InputStream ist; - - // name used for logging - char log_name[32]; - - double ts_scale; - - int64_t min_pts; /* pts with the smallest value in a current stream */ - int64_t max_pts; /* pts with the higher value in a current stream */ -} DemuxStream; - -typedef struct Demuxer { - InputFile f; - - // name used for logging - char log_name[32]; - - /* number of times input stream should be looped */ - int loop; - /* actual duration of the longest stream in a file at the moment when - * looping happens */ - int64_t duration; - /* time base of the duration */ - AVRational time_base; - - /* number of streams that the user was warned of */ - int nb_streams_warn; - - AVThreadMessageQueue *in_thread_queue; - int thread_queue_size; - pthread_t thread; - int non_blocking; -} Demuxer; - -typedef struct DemuxMsg { - AVPacket *pkt; - int looping; - - // repeat_pict from the demuxer-internal parser - int repeat_pict; -} DemuxMsg; - -static DemuxStream *ds_from_ist(InputStream *ist) -{ - return (DemuxStream*)ist; -} - -static Demuxer *demuxer_from_ifile(InputFile *f) -{ - return (Demuxer*)f; -} - -static void report_new_stream(Demuxer *d, const AVPacket *pkt) -{ - AVStream *st = d->f.ctx->streams[pkt->stream_index]; - - if (pkt->stream_index < d->nb_streams_warn) - return; - av_log(d, AV_LOG_WARNING, - "New %s stream with index %d at pos:%"PRId64" and DTS:%ss\n", - av_get_media_type_string(st->codecpar->codec_type), - pkt->stream_index, pkt->pos, av_ts2timestr(pkt->dts, &st->time_base)); - d->nb_streams_warn = pkt->stream_index + 1; -} - -static void ifile_duration_update(Demuxer *d, DemuxStream *ds, - int64_t last_duration) -{ - /* the total duration of the stream, max_pts - min_pts is - * the duration of the stream without the last frame */ - if (ds->max_pts > ds->min_pts && - ds->max_pts - (uint64_t)ds->min_pts < INT64_MAX - last_duration) - last_duration += ds->max_pts - ds->min_pts; - - if (!d->duration || - av_compare_ts(d->duration, d->time_base, - last_duration, ds->ist.st->time_base) < 0) { - d->duration = last_duration; - d->time_base = ds->ist.st->time_base; - } -} - -static int seek_to_start(Demuxer *d) -{ - InputFile *ifile = &d->f; - AVFormatContext *is = ifile->ctx; - int ret; - - ret = avformat_seek_file(is, -1, INT64_MIN, is->start_time, is->start_time, 0); - if (ret < 0) - return ret; - - if (ifile->audio_duration_queue_size) { - /* duration is the length of the last frame in a stream - * when audio stream is present we don't care about - * last video frame length because it's not defined exactly */ - int got_durations = 0; - - while (got_durations < ifile->audio_duration_queue_size) { - DemuxStream *ds; - LastFrameDuration dur; - ret = av_thread_message_queue_recv(ifile->audio_duration_queue, &dur, 0); - if (ret < 0) - return ret; - got_durations++; - - ds = ds_from_ist(ifile->streams[dur.stream_idx]); - ifile_duration_update(d, ds, dur.duration); - } - } else { - for (int i = 0; i < ifile->nb_streams; i++) { - int64_t duration = 0; - InputStream *ist = ifile->streams[i]; - DemuxStream *ds = ds_from_ist(ist); - - if (ist->framerate.num) { - duration = av_rescale_q(1, av_inv_q(ist->framerate), ist->st->time_base); - } else if (ist->st->avg_frame_rate.num) { - duration = av_rescale_q(1, av_inv_q(ist->st->avg_frame_rate), ist->st->time_base); - } else { - duration = 1; - } - - ifile_duration_update(d, ds, duration); - } - } - - if (d->loop > 0) - d->loop--; - - return ret; -} - -static void ts_fixup(Demuxer *d, AVPacket *pkt, int *repeat_pict) -{ - InputFile *ifile = &d->f; - 
InputStream *ist = ifile->streams[pkt->stream_index]; - DemuxStream *ds = ds_from_ist(ist); - const int64_t start_time = ifile->start_time_effective; - int64_t duration; - - pkt->time_base = ist->st->time_base; - -#define SHOW_TS_DEBUG(tag_) \ - if (debug_ts) { \ - av_log(ist, AV_LOG_INFO, "%s -> ist_index:%d:%d type:%s " \ - "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s duration:%s duration_time:%s\n", \ - tag_, ifile->index, pkt->stream_index, \ - av_get_media_type_string(ist->st->codecpar->codec_type), \ - av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, &pkt->time_base), \ - av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, &pkt->time_base), \ - av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, &pkt->time_base)); \ - } - - SHOW_TS_DEBUG("demuxer"); - - if (!ist->wrap_correction_done && start_time != AV_NOPTS_VALUE && - ist->st->pts_wrap_bits < 64) { - int64_t stime, stime2; - - stime = av_rescale_q(start_time, AV_TIME_BASE_Q, pkt->time_base); - stime2= stime + (1ULL<<ist->st->pts_wrap_bits); - ist->wrap_correction_done = 1; - - if(stime2 > stime && pkt->dts != AV_NOPTS_VALUE && pkt->dts > stime + (1LL<<(ist->st->pts_wrap_bits-1))) { - pkt->dts -= 1ULL<<ist->st->pts_wrap_bits; - ist->wrap_correction_done = 0; - } - if(stime2 > stime && pkt->pts != AV_NOPTS_VALUE && pkt->pts > stime + (1LL<<(ist->st->pts_wrap_bits-1))) { - pkt->pts -= 1ULL<<ist->st->pts_wrap_bits; - ist->wrap_correction_done = 0; - } - } - - if (pkt->dts != AV_NOPTS_VALUE) - pkt->dts += av_rescale_q(ifile->ts_offset, AV_TIME_BASE_Q, pkt->time_base); - if (pkt->pts != AV_NOPTS_VALUE) - pkt->pts += av_rescale_q(ifile->ts_offset, AV_TIME_BASE_Q, pkt->time_base); - - if (pkt->pts != AV_NOPTS_VALUE) - pkt->pts *= ds->ts_scale; - if (pkt->dts != AV_NOPTS_VALUE) - pkt->dts *= ds->ts_scale; - - duration = av_rescale_q(d->duration, d->time_base, pkt->time_base); - if (pkt->pts != AV_NOPTS_VALUE) { - pkt->pts += duration; - ds->max_pts = FFMAX(pkt->pts, ds->max_pts); - ds->min_pts = FFMIN(pkt->pts, ds->min_pts); - } - - if (pkt->dts != AV_NOPTS_VALUE) - pkt->dts += duration; - - *repeat_pict = -1; - if (ist->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && - av_stream_get_parser(ist->st)) - *repeat_pict = av_stream_get_parser(ist->st)->repeat_pict; - - SHOW_TS_DEBUG("demuxer+tsfixup"); -} - -static void thread_set_name(InputFile *f) -{ - char name[16]; - snprintf(name, sizeof(name), "dmx%d:%s", f->index, f->ctx->iformat->name); - ff_thread_setname(name); -} - -static void *input_thread(void *arg) -{ - Demuxer *d = arg; - InputFile *f = &d->f; - AVPacket *pkt; - unsigned flags = d->non_blocking ?
AV_THREAD_MESSAGE_NONBLOCK : 0; - int ret = 0; - - pkt = av_packet_alloc(); - if (!pkt) { - ret = AVERROR(ENOMEM); - goto finish; - } - - thread_set_name(f); - - while (1) { - DemuxMsg msg = { NULL }; - - ret = av_read_frame(f->ctx, pkt); - - if (ret == AVERROR(EAGAIN)) { - av_usleep(10000); - continue; - } - if (ret < 0) { - if (d->loop) { - /* signal looping to the consumer thread */ - msg.looping = 1; - ret = av_thread_message_queue_send(d->in_thread_queue, &msg, 0); - if (ret >= 0) - ret = seek_to_start(d); - if (ret >= 0) - continue; - - /* fallthrough to the error path */ - } - - if (ret == AVERROR_EOF) - av_log(d, AV_LOG_VERBOSE, "EOF while reading input\n"); - else - av_log(d, AV_LOG_ERROR, "Error during demuxing: %s\n", - av_err2str(ret)); - - break; - } - - if (do_pkt_dump) { - av_pkt_dump_log2(NULL, AV_LOG_INFO, pkt, do_hex_dump, - f->ctx->streams[pkt->stream_index]); - } - - /* the following test is needed in case new streams appear - dynamically in stream : we ignore them */ - if (pkt->stream_index >= f->nb_streams) { - report_new_stream(d, pkt); - av_packet_unref(pkt); - continue; - } - - if (pkt->flags & AV_PKT_FLAG_CORRUPT) { - av_log(d, exit_on_error ? AV_LOG_FATAL : AV_LOG_WARNING, - "corrupt input packet in stream %d\n", - pkt->stream_index); - if (exit_on_error) { - av_packet_unref(pkt); - ret = AVERROR_INVALIDDATA; - break; - } - } - - ts_fixup(d, pkt, &msg.repeat_pict); - - msg.pkt = av_packet_alloc(); - if (!msg.pkt) { - av_packet_unref(pkt); - ret = AVERROR(ENOMEM); - break; - } - av_packet_move_ref(msg.pkt, pkt); - ret = av_thread_message_queue_send(d->in_thread_queue, &msg, flags); - if (flags && ret == AVERROR(EAGAIN)) { - flags = 0; - ret = av_thread_message_queue_send(d->in_thread_queue, &msg, flags); - av_log(f->ctx, AV_LOG_WARNING, - "Thread message queue blocking; consider raising the " - "thread_queue_size option (current value: %d)\n", - d->thread_queue_size); - } - if (ret < 0) { - if (ret != AVERROR_EOF) - av_log(f->ctx, AV_LOG_ERROR, - "Unable to send packet to main thread: %s\n", - av_err2str(ret)); - av_packet_free(&msg.pkt); - break; - } - } - -finish: - av_assert0(ret < 0); - av_thread_message_queue_set_err_recv(d->in_thread_queue, ret); - - av_packet_free(&pkt); - - av_log(d, AV_LOG_VERBOSE, "Terminating demuxer thread\n"); - - return NULL; -} - -static void thread_stop(Demuxer *d) -{ - InputFile *f = &d->f; - DemuxMsg msg; - - if (!d->in_thread_queue) - return; - av_thread_message_queue_set_err_send(d->in_thread_queue, AVERROR_EOF); - while (av_thread_message_queue_recv(d->in_thread_queue, &msg, 0) >= 0) - av_packet_free(&msg.pkt); - - pthread_join(d->thread, NULL); - av_thread_message_queue_free(&d->in_thread_queue); - av_thread_message_queue_free(&f->audio_duration_queue); -} - -static int thread_start(Demuxer *d) -{ - int ret; - InputFile *f = &d->f; - - if (d->thread_queue_size <= 0) - d->thread_queue_size = (nb_input_files > 1 ? 8 : 1); - - if (nb_input_files > 1 && - (f->ctx->pb ? 
!f->ctx->pb->seekable : - strcmp(f->ctx->iformat->name, "lavfi"))) - d->non_blocking = 1; - ret = av_thread_message_queue_alloc(&d->in_thread_queue, - d->thread_queue_size, sizeof(DemuxMsg)); - if (ret < 0) - return ret; - - if (d->loop) { - int nb_audio_dec = 0; - - for (int i = 0; i < f->nb_streams; i++) { - InputStream *ist = f->streams[i]; - nb_audio_dec += !!(ist->decoding_needed && - ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO); - } - - if (nb_audio_dec) { - ret = av_thread_message_queue_alloc(&f->audio_duration_queue, - nb_audio_dec, sizeof(LastFrameDuration)); - if (ret < 0) - goto fail; - f->audio_duration_queue_size = nb_audio_dec; - } - } - - if ((ret = pthread_create(&d->thread, NULL, input_thread, d))) { - av_log(d, AV_LOG_ERROR, "pthread_create failed: %s. Try to increase `ulimit -v` or decrease `ulimit -s`.\n", strerror(ret)); - ret = AVERROR(ret); - goto fail; - } - - return 0; -fail: - av_thread_message_queue_free(&d->in_thread_queue); - return ret; -} - -int ifile_get_packet(InputFile *f, AVPacket **pkt) -{ - Demuxer *d = demuxer_from_ifile(f); - InputStream *ist; - DemuxMsg msg; - int ret; - - if (!d->in_thread_queue) { - ret = thread_start(d); - if (ret < 0) - return ret; - } - - if (f->readrate || f->rate_emu) { - int i; - int64_t file_start = copy_ts * ( - (f->start_time_effective != AV_NOPTS_VALUE ? f->start_time_effective * !start_at_zero : 0) + - (f->start_time != AV_NOPTS_VALUE ? f->start_time : 0) - ); - float scale = f->rate_emu ? 1.0 : f->readrate; - for (i = 0; i < f->nb_streams; i++) { - InputStream *ist = f->streams[i]; - int64_t stream_ts_offset, pts, now; - if (!ist->nb_packets || (ist->decoding_needed && !ist->got_output)) continue; - stream_ts_offset = FFMAX(ist->first_dts != AV_NOPTS_VALUE ? ist->first_dts : 0, file_start); - pts = av_rescale(ist->dts, 1000000, AV_TIME_BASE); - now = (av_gettime_relative() - ist->start) * scale + stream_ts_offset; - if (pts > now) - return AVERROR(EAGAIN); - } - } - - ret = av_thread_message_queue_recv(d->in_thread_queue, &msg, - d->non_blocking ? 
- AV_THREAD_MESSAGE_NONBLOCK : 0); - if (ret < 0) - return ret; - if (msg.looping) - return 1; - - ist = f->streams[msg.pkt->stream_index]; - ist->last_pkt_repeat_pict = msg.repeat_pict; - - *pkt = msg.pkt; - return 0; -} - -static void demux_final_stats(Demuxer *d) -{ - InputFile *f = &d->f; - uint64_t total_packets = 0, total_size = 0; - - av_log(f, AV_LOG_VERBOSE, "Input file #%d (%s):\n", - f->index, f->ctx->url); - - for (int j = 0; j < f->nb_streams; j++) { - InputStream *ist = f->streams[j]; - enum AVMediaType type = ist->par->codec_type; - - total_size += ist->data_size; - total_packets += ist->nb_packets; - - av_log(f, AV_LOG_VERBOSE, " Input stream #%d:%d (%s): ", - f->index, j, av_get_media_type_string(type)); - av_log(f, AV_LOG_VERBOSE, "%"PRIu64" packets read (%"PRIu64" bytes); ", - ist->nb_packets, ist->data_size); - - if (ist->decoding_needed) { - av_log(f, AV_LOG_VERBOSE, "%"PRIu64" frames decoded", - ist->frames_decoded); - if (type == AVMEDIA_TYPE_AUDIO) - av_log(f, AV_LOG_VERBOSE, " (%"PRIu64" samples)", ist->samples_decoded); - av_log(f, AV_LOG_VERBOSE, "; "); - } - - av_log(f, AV_LOG_VERBOSE, "\n"); - } - - av_log(f, AV_LOG_VERBOSE, " Total: %"PRIu64" packets (%"PRIu64" bytes) demuxed\n", - total_packets, total_size); -} - -static void ist_free(InputStream **pist) -{ - InputStream *ist = *pist; - - if (!ist) - return; - - av_frame_free(&ist->decoded_frame); - av_packet_free(&ist->pkt); - av_dict_free(&ist->decoder_opts); - avsubtitle_free(&ist->prev_sub.subtitle); - av_frame_free(&ist->sub2video.frame); - av_freep(&ist->filters); - av_freep(&ist->outputs); - av_freep(&ist->hwaccel_device); - - avcodec_free_context(&ist->dec_ctx); - avcodec_parameters_free(&ist->par); - - av_freep(pist); -} - -void ifile_close(InputFile **pf) -{ - InputFile *f = *pf; - Demuxer *d = demuxer_from_ifile(f); - - if (!f) - return; - - thread_stop(d); - - if (f->ctx) - demux_final_stats(d); - - for (int i = 0; i < f->nb_streams; i++) - ist_free(&f->streams[i]); - av_freep(&f->streams); - - avformat_close_input(&f->ctx); - - av_freep(pf); -} - -static void ist_use(InputStream *ist, int decoding_needed) -{ - ist->discard = 0; - ist->st->discard = ist->user_set_discard; - ist->decoding_needed |= decoding_needed; - - if (decoding_needed && !avcodec_is_open(ist->dec_ctx)) { - int ret = dec_open(ist); - if (ret < 0) - report_and_exit(ret); - } -} - -void ist_output_add(InputStream *ist, OutputStream *ost) -{ - ist_use(ist, ost->enc ? DECODING_FOR_OST : 0); - - GROW_ARRAY(ist->outputs, ist->nb_outputs); - ist->outputs[ist->nb_outputs - 1] = ost; -} - -void ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple) -{ - ist_use(ist, is_simple ? 
DECODING_FOR_OST : DECODING_FOR_FILTER); - - GROW_ARRAY(ist->filters, ist->nb_filters); - ist->filters[ist->nb_filters - 1] = ifilter; -} - -static const AVCodec *choose_decoder(const OptionsContext *o, AVFormatContext *s, AVStream *st, - enum HWAccelID hwaccel_id, enum AVHWDeviceType hwaccel_device_type) - -{ - char *codec_name = NULL; - - MATCH_PER_STREAM_OPT(codec_names, str, codec_name, s, st); - if (codec_name) { - const AVCodec *codec = find_codec_or_die(NULL, codec_name, st->codecpar->codec_type, 0); - st->codecpar->codec_id = codec->id; - if (recast_media && st->codecpar->codec_type != codec->type) - st->codecpar->codec_type = codec->type; - return codec; - } else { - if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO && - hwaccel_id == HWACCEL_GENERIC && - hwaccel_device_type != AV_HWDEVICE_TYPE_NONE) { - const AVCodec *c; - void *i = NULL; - - while ((c = av_codec_iterate(&i))) { - const AVCodecHWConfig *config; - - if (c->id != st->codecpar->codec_id || - !av_codec_is_decoder(c)) - continue; - - for (int j = 0; config = avcodec_get_hw_config(c, j); j++) { - if (config->device_type == hwaccel_device_type) { - av_log(NULL, AV_LOG_VERBOSE, "Selecting decoder '%s' because of requested hwaccel method %s\n", - c->name, av_hwdevice_get_type_name(hwaccel_device_type)); - return c; - } - } - } - } - - return avcodec_find_decoder(st->codecpar->codec_id); - } -} - -static int guess_input_channel_layout(InputStream *ist, int guess_layout_max) -{ - AVCodecContext *dec = ist->dec_ctx; - - if (dec->ch_layout.order == AV_CHANNEL_ORDER_UNSPEC) { - char layout_name[256]; - - if (dec->ch_layout.nb_channels > guess_layout_max) - return 0; - av_channel_layout_default(&dec->ch_layout, dec->ch_layout.nb_channels); - if (dec->ch_layout.order == AV_CHANNEL_ORDER_UNSPEC) - return 0; - av_channel_layout_describe(&dec->ch_layout, layout_name, sizeof(layout_name)); - av_log(ist, AV_LOG_WARNING, "Guessed Channel Layout: %s\n", layout_name); - } - return 1; -} - -static void add_display_matrix_to_stream(const OptionsContext *o, - AVFormatContext *ctx, InputStream *ist) -{ - AVStream *st = ist->st; - double rotation = DBL_MAX; - int hflip = -1, vflip = -1; - int hflip_set = 0, vflip_set = 0, rotation_set = 0; - int32_t *buf; - - MATCH_PER_STREAM_OPT(display_rotations, dbl, rotation, ctx, st); - MATCH_PER_STREAM_OPT(display_hflips, i, hflip, ctx, st); - MATCH_PER_STREAM_OPT(display_vflips, i, vflip, ctx, st); - - rotation_set = rotation != DBL_MAX; - hflip_set = hflip != -1; - vflip_set = vflip != -1; - - if (!rotation_set && !hflip_set && !vflip_set) - return; - - buf = (int32_t *)av_stream_new_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, sizeof(int32_t) * 9); - if (!buf) { - av_log(ist, AV_LOG_FATAL, "Failed to generate a display matrix!\n"); - exit_program(1); - } - - av_display_rotation_set(buf, - rotation_set ? -(rotation) : -0.0f); - - av_display_matrix_flip(buf, - hflip_set ? hflip : 0, - vflip_set ? 
vflip : 0); -} - -static const char *input_stream_item_name(void *obj) -{ - const DemuxStream *ds = obj; - - return ds->log_name; -} - -static const AVClass input_stream_class = { - .class_name = "InputStream", - .version = LIBAVUTIL_VERSION_INT, - .item_name = input_stream_item_name, - .category = AV_CLASS_CATEGORY_DEMUXER, -}; - -static DemuxStream *demux_stream_alloc(Demuxer *d, AVStream *st) -{ - const char *type_str = av_get_media_type_string(st->codecpar->codec_type); - InputFile *f = &d->f; - DemuxStream *ds = allocate_array_elem(&f->streams, sizeof(*ds), - &f->nb_streams); - - ds->ist.st = st; - ds->ist.file_index = f->index; - ds->ist.class = &input_stream_class; - - snprintf(ds->log_name, sizeof(ds->log_name), "%cist#%d:%d/%s", - type_str ? *type_str : '?', d->f.index, st->index, - avcodec_get_name(st->codecpar->codec_id)); - - return ds; -} - -/* Add all the streams from the given input file to the demuxer */ -static void add_input_streams(const OptionsContext *o, Demuxer *d) -{ - InputFile *f = &d->f; - AVFormatContext *ic = f->ctx; - int i, ret; - - for (i = 0; i < ic->nb_streams; i++) { - AVStream *st = ic->streams[i]; - AVCodecParameters *par = st->codecpar; - DemuxStream *ds; - InputStream *ist; - char *framerate = NULL, *hwaccel_device = NULL; - const char *hwaccel = NULL; - char *hwaccel_output_format = NULL; - char *codec_tag = NULL; - char *next; - char *discard_str = NULL; - const AVClass *cc = avcodec_get_class(); - const AVOption *discard_opt = av_opt_find(&cc, "skip_frame", NULL, - 0, AV_OPT_SEARCH_FAKE_OBJ); - - ds = demux_stream_alloc(d, st); - ist = &ds->ist; - - ist->discard = 1; - st->discard = AVDISCARD_ALL; - ist->nb_samples = 0; - ist->first_dts = AV_NOPTS_VALUE; - ist->next_pts = AV_NOPTS_VALUE; - ist->next_dts = AV_NOPTS_VALUE; - - ds->min_pts = INT64_MAX; - ds->max_pts = INT64_MIN; - - ds->ts_scale = 1.0; - MATCH_PER_STREAM_OPT(ts_scale, dbl, ds->ts_scale, ic, st); - - ist->autorotate = 1; - MATCH_PER_STREAM_OPT(autorotate, i, ist->autorotate, ic, st); - - MATCH_PER_STREAM_OPT(codec_tags, str, codec_tag, ic, st); - if (codec_tag) { - uint32_t tag = strtol(codec_tag, &next, 0); - if (*next) { - uint8_t buf[4] = { 0 }; - memcpy(buf, codec_tag, FFMIN(sizeof(buf), strlen(codec_tag))); - tag = AV_RL32(buf); - } - - st->codecpar->codec_tag = tag; - } - - if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { - add_display_matrix_to_stream(o, ic, ist); - - MATCH_PER_STREAM_OPT(hwaccels, str, hwaccel, ic, st); - MATCH_PER_STREAM_OPT(hwaccel_output_formats, str, - hwaccel_output_format, ic, st); - - if (!hwaccel_output_format && hwaccel && !strcmp(hwaccel, "cuvid")) { - av_log(ist, AV_LOG_WARNING, - "WARNING: defaulting hwaccel_output_format to cuda for compatibility " - "with old commandlines. This behaviour is DEPRECATED and will be removed " - "in the future. Please explicitly set \"-hwaccel_output_format cuda\".\n"); - ist->hwaccel_output_format = AV_PIX_FMT_CUDA; - } else if (!hwaccel_output_format && hwaccel && !strcmp(hwaccel, "qsv")) { - av_log(ist, AV_LOG_WARNING, - "WARNING: defaulting hwaccel_output_format to qsv for compatibility " - "with old commandlines. This behaviour is DEPRECATED and will be removed " - "in the future. Please explicitly set \"-hwaccel_output_format qsv\".\n"); - ist->hwaccel_output_format = AV_PIX_FMT_QSV; - } else if (!hwaccel_output_format && hwaccel && !strcmp(hwaccel, "mediacodec")) { - // There is no real AVHWFrameContext implementation. Set - // hwaccel_output_format to avoid av_hwframe_transfer_data error. 
- ist->hwaccel_output_format = AV_PIX_FMT_MEDIACODEC; - } else if (hwaccel_output_format) { - ist->hwaccel_output_format = av_get_pix_fmt(hwaccel_output_format); - if (ist->hwaccel_output_format == AV_PIX_FMT_NONE) { - av_log(ist, AV_LOG_FATAL, "Unrecognised hwaccel output " - "format: %s", hwaccel_output_format); - } - } else { - ist->hwaccel_output_format = AV_PIX_FMT_NONE; - } - - if (hwaccel) { - // The NVDEC hwaccels use a CUDA device, so remap the name here. - if (!strcmp(hwaccel, "nvdec") || !strcmp(hwaccel, "cuvid")) - hwaccel = "cuda"; - - if (!strcmp(hwaccel, "none")) - ist->hwaccel_id = HWACCEL_NONE; - else if (!strcmp(hwaccel, "auto")) - ist->hwaccel_id = HWACCEL_AUTO; - else { - enum AVHWDeviceType type = av_hwdevice_find_type_by_name(hwaccel); - if (type != AV_HWDEVICE_TYPE_NONE) { - ist->hwaccel_id = HWACCEL_GENERIC; - ist->hwaccel_device_type = type; - } - - if (!ist->hwaccel_id) { - av_log(ist, AV_LOG_FATAL, "Unrecognized hwaccel: %s.\n", - hwaccel); - av_log(ist, AV_LOG_FATAL, "Supported hwaccels: "); - type = AV_HWDEVICE_TYPE_NONE; - while ((type = av_hwdevice_iterate_types(type)) != - AV_HWDEVICE_TYPE_NONE) - av_log(ist, AV_LOG_FATAL, "%s ", - av_hwdevice_get_type_name(type)); - av_log(ist, AV_LOG_FATAL, "\n"); - exit_program(1); - } - } - } - - MATCH_PER_STREAM_OPT(hwaccel_devices, str, hwaccel_device, ic, st); - if (hwaccel_device) { - ist->hwaccel_device = av_strdup(hwaccel_device); - if (!ist->hwaccel_device) - report_and_exit(AVERROR(ENOMEM)); - } - - ist->hwaccel_pix_fmt = AV_PIX_FMT_NONE; - } - - ist->dec = choose_decoder(o, ic, st, ist->hwaccel_id, ist->hwaccel_device_type); - ist->decoder_opts = filter_codec_opts(o->g->codec_opts, ist->st->codecpar->codec_id, ic, st, ist->dec); - - ist->reinit_filters = -1; - MATCH_PER_STREAM_OPT(reinit_filters, i, ist->reinit_filters, ic, st); - - MATCH_PER_STREAM_OPT(discard, str, discard_str, ic, st); - ist->user_set_discard = AVDISCARD_NONE; - - if ((o->video_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) || - (o->audio_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) || - (o->subtitle_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) || - (o->data_disable && ist->st->codecpar->codec_type == AVMEDIA_TYPE_DATA)) - ist->user_set_discard = AVDISCARD_ALL; - - if (discard_str && av_opt_eval_int(&cc, discard_opt, discard_str, &ist->user_set_discard) < 0) { - av_log(ist, AV_LOG_ERROR, "Error parsing discard %s.\n", - discard_str); - exit_program(1); - } - - ist->filter_in_rescale_delta_last = AV_NOPTS_VALUE; - ist->prev_pkt_pts = AV_NOPTS_VALUE; - - ist->dec_ctx = avcodec_alloc_context3(ist->dec); - if (!ist->dec_ctx) - report_and_exit(AVERROR(ENOMEM)); - - ret = avcodec_parameters_to_context(ist->dec_ctx, par); - if (ret < 0) { - av_log(ist, AV_LOG_ERROR, "Error initializing the decoder context.\n"); - exit_program(1); - } - - ist->decoded_frame = av_frame_alloc(); - if (!ist->decoded_frame) - report_and_exit(AVERROR(ENOMEM)); - - ist->pkt = av_packet_alloc(); - if (!ist->pkt) - report_and_exit(AVERROR(ENOMEM)); - - if (o->bitexact) - ist->dec_ctx->flags |= AV_CODEC_FLAG_BITEXACT; - - switch (par->codec_type) { - case AVMEDIA_TYPE_VIDEO: - // avformat_find_stream_info() doesn't set this for us anymore. 
- ist->dec_ctx->framerate = st->avg_frame_rate; - - MATCH_PER_STREAM_OPT(frame_rates, str, framerate, ic, st); - if (framerate && av_parse_video_rate(&ist->framerate, - framerate) < 0) { - av_log(ist, AV_LOG_ERROR, "Error parsing framerate %s.\n", - framerate); - exit_program(1); - } - - ist->top_field_first = -1; - MATCH_PER_STREAM_OPT(top_field_first, i, ist->top_field_first, ic, st); - - ist->framerate_guessed = av_guess_frame_rate(ic, st, NULL); - - ist->last_frame_pts = AV_NOPTS_VALUE; - - break; - case AVMEDIA_TYPE_AUDIO: { - int guess_layout_max = INT_MAX; - MATCH_PER_STREAM_OPT(guess_layout_max, i, guess_layout_max, ic, st); - guess_input_channel_layout(ist, guess_layout_max); - break; - } - case AVMEDIA_TYPE_DATA: - case AVMEDIA_TYPE_SUBTITLE: { - char *canvas_size = NULL; - MATCH_PER_STREAM_OPT(fix_sub_duration, i, ist->fix_sub_duration, ic, st); - MATCH_PER_STREAM_OPT(canvas_sizes, str, canvas_size, ic, st); - if (canvas_size && - av_parse_video_size(&ist->dec_ctx->width, &ist->dec_ctx->height, canvas_size) < 0) { - av_log(ist, AV_LOG_FATAL, "Invalid canvas size: %s.\n", canvas_size); - exit_program(1); - } - break; - } - case AVMEDIA_TYPE_ATTACHMENT: - case AVMEDIA_TYPE_UNKNOWN: - break; - default: - abort(); - } - - ist->par = avcodec_parameters_alloc(); - if (!ist->par) - report_and_exit(AVERROR(ENOMEM)); - - ret = avcodec_parameters_from_context(ist->par, ist->dec_ctx); - if (ret < 0) { - av_log(ist, AV_LOG_ERROR, "Error initializing the decoder context.\n"); - exit_program(1); - } - } -} - -static void dump_attachment(InputStream *ist, const char *filename) -{ - AVStream *st = ist->st; - int ret; - AVIOContext *out = NULL; - const AVDictionaryEntry *e; - - if (!st->codecpar->extradata_size) { - av_log(ist, AV_LOG_WARNING, "No extradata to dump.\n"); - return; - } - if (!*filename && (e = av_dict_get(st->metadata, "filename", NULL, 0))) - filename = e->value; - if (!*filename) { - av_log(ist, AV_LOG_FATAL, "No filename specified and no 'filename' tag"); - exit_program(1); - } - - assert_file_overwrite(filename); - - if ((ret = avio_open2(&out, filename, AVIO_FLAG_WRITE, &int_cb, NULL)) < 0) { - av_log(ist, AV_LOG_FATAL, "Could not open file %s for writing.\n", - filename); - exit_program(1); - } - - avio_write(out, st->codecpar->extradata, st->codecpar->extradata_size); - avio_flush(out); - avio_close(out); -} - -static const char *input_file_item_name(void *obj) -{ - const Demuxer *d = obj; - - return d->log_name; -} - -static const AVClass input_file_class = { - .class_name = "InputFile", - .version = LIBAVUTIL_VERSION_INT, - .item_name = input_file_item_name, - .category = AV_CLASS_CATEGORY_DEMUXER, -}; - -static Demuxer *demux_alloc(void) -{ - Demuxer *d = allocate_array_elem(&input_files, sizeof(*d), &nb_input_files); - - d->f.class = &input_file_class; - d->f.index = nb_input_files - 1; - - snprintf(d->log_name, sizeof(d->log_name), "in#%d", d->f.index); - - return d; -} - -int ifile_open(const OptionsContext *o, const char *filename) -{ - Demuxer *d; - InputFile *f; - AVFormatContext *ic; - const AVInputFormat *file_iformat = NULL; - int err, i, ret; - int64_t timestamp; - AVDictionary *unused_opts = NULL; - const AVDictionaryEntry *e = NULL; - char * video_codec_name = NULL; - char * audio_codec_name = NULL; - char *subtitle_codec_name = NULL; - char * data_codec_name = NULL; - int scan_all_pmts_set = 0; - - int64_t start_time = o->start_time; - int64_t start_time_eof = o->start_time_eof; - int64_t stop_time = o->stop_time; - int64_t recording_time = 
o->recording_time; - - d = demux_alloc(); - f = &d->f; - - if (stop_time != INT64_MAX && recording_time != INT64_MAX) { - stop_time = INT64_MAX; - av_log(d, AV_LOG_WARNING, "-t and -to cannot be used together; using -t.\n"); - } - - if (stop_time != INT64_MAX && recording_time == INT64_MAX) { - int64_t start = start_time == AV_NOPTS_VALUE ? 0 : start_time; - if (stop_time <= start) { - av_log(d, AV_LOG_ERROR, "-to value smaller than -ss; aborting.\n"); - exit_program(1); - } else { - recording_time = stop_time - start; - } - } - - if (o->format) { - if (!(file_iformat = av_find_input_format(o->format))) { - av_log(d, AV_LOG_FATAL, "Unknown input format: '%s'\n", o->format); - exit_program(1); - } - } - - if (!strcmp(filename, "-")) - filename = "fd:"; - - stdin_interaction &= strncmp(filename, "pipe:", 5) && - strcmp(filename, "fd:") && - strcmp(filename, "/dev/stdin"); - - /* get default parameters from command line */ - ic = avformat_alloc_context(); - if (!ic) - report_and_exit(AVERROR(ENOMEM)); - if (o->nb_audio_sample_rate) { - av_dict_set_int(&o->g->format_opts, "sample_rate", o->audio_sample_rate[o->nb_audio_sample_rate - 1].u.i, 0); - } - if (o->nb_audio_channels) { - const AVClass *priv_class; - if (file_iformat && (priv_class = file_iformat->priv_class) && - av_opt_find(&priv_class, "ch_layout", NULL, 0, - AV_OPT_SEARCH_FAKE_OBJ)) { - char buf[32]; - snprintf(buf, sizeof(buf), "%dC", o->audio_channels[o->nb_audio_channels - 1].u.i); - av_dict_set(&o->g->format_opts, "ch_layout", buf, 0); - } - } - if (o->nb_audio_ch_layouts) { - const AVClass *priv_class; - if (file_iformat && (priv_class = file_iformat->priv_class) && - av_opt_find(&priv_class, "ch_layout", NULL, 0, - AV_OPT_SEARCH_FAKE_OBJ)) { - av_dict_set(&o->g->format_opts, "ch_layout", o->audio_ch_layouts[o->nb_audio_ch_layouts - 1].u.str, 0); - } - } - if (o->nb_frame_rates) { - const AVClass *priv_class; - /* set the format-level framerate option; - * this is important for video grabbers, e.g. x11 */ - if (file_iformat && (priv_class = file_iformat->priv_class) && - av_opt_find(&priv_class, "framerate", NULL, 0, - AV_OPT_SEARCH_FAKE_OBJ)) { - av_dict_set(&o->g->format_opts, "framerate", - o->frame_rates[o->nb_frame_rates - 1].u.str, 0); - } - } - if (o->nb_frame_sizes) { - av_dict_set(&o->g->format_opts, "video_size", o->frame_sizes[o->nb_frame_sizes - 1].u.str, 0); - } - if (o->nb_frame_pix_fmts) - av_dict_set(&o->g->format_opts, "pixel_format", o->frame_pix_fmts[o->nb_frame_pix_fmts - 1].u.str, 0); - - MATCH_PER_TYPE_OPT(codec_names, str, video_codec_name, ic, "v"); - MATCH_PER_TYPE_OPT(codec_names, str, audio_codec_name, ic, "a"); - MATCH_PER_TYPE_OPT(codec_names, str, subtitle_codec_name, ic, "s"); - MATCH_PER_TYPE_OPT(codec_names, str, data_codec_name, ic, "d"); - - if (video_codec_name) - ic->video_codec = find_codec_or_die(NULL, video_codec_name , AVMEDIA_TYPE_VIDEO , 0); - if (audio_codec_name) - ic->audio_codec = find_codec_or_die(NULL, audio_codec_name , AVMEDIA_TYPE_AUDIO , 0); - if (subtitle_codec_name) - ic->subtitle_codec = find_codec_or_die(NULL, subtitle_codec_name, AVMEDIA_TYPE_SUBTITLE, 0); - if (data_codec_name) - ic->data_codec = find_codec_or_die(NULL, data_codec_name , AVMEDIA_TYPE_DATA , 0); - - ic->video_codec_id = video_codec_name ? ic->video_codec->id : AV_CODEC_ID_NONE; - ic->audio_codec_id = audio_codec_name ? ic->audio_codec->id : AV_CODEC_ID_NONE; - ic->subtitle_codec_id = subtitle_codec_name ? ic->subtitle_codec->id : AV_CODEC_ID_NONE; - ic->data_codec_id = data_codec_name ? 
ic->data_codec->id : AV_CODEC_ID_NONE; - - ic->flags |= AVFMT_FLAG_NONBLOCK; - if (o->bitexact) - ic->flags |= AVFMT_FLAG_BITEXACT; - ic->interrupt_callback = int_cb; - - if (!av_dict_get(o->g->format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE)) { - av_dict_set(&o->g->format_opts, "scan_all_pmts", "1", AV_DICT_DONT_OVERWRITE); - scan_all_pmts_set = 1; - } - /* open the input file with generic avformat function */ - err = avformat_open_input(&ic, filename, file_iformat, &o->g->format_opts); - if (err < 0) { - print_error(filename, err); - if (err == AVERROR_PROTOCOL_NOT_FOUND) - av_log(d, AV_LOG_ERROR, "Did you mean file:%s?\n", filename); - exit_program(1); - } - - av_strlcat(d->log_name, "/", sizeof(d->log_name)); - av_strlcat(d->log_name, ic->iformat->name, sizeof(d->log_name)); - - if (scan_all_pmts_set) - av_dict_set(&o->g->format_opts, "scan_all_pmts", NULL, AV_DICT_MATCH_CASE); - remove_avoptions(&o->g->format_opts, o->g->codec_opts); - assert_avoptions(o->g->format_opts); - - /* apply forced codec ids */ - for (i = 0; i < ic->nb_streams; i++) - choose_decoder(o, ic, ic->streams[i], HWACCEL_NONE, AV_HWDEVICE_TYPE_NONE); - - if (o->find_stream_info) { - AVDictionary **opts = setup_find_stream_info_opts(ic, o->g->codec_opts); - int orig_nb_streams = ic->nb_streams; - - /* If not enough info to get the stream parameters, we decode the - first frames to get it. (used in mpeg case for example) */ - ret = avformat_find_stream_info(ic, opts); - - for (i = 0; i < orig_nb_streams; i++) - av_dict_free(&opts[i]); - av_freep(&opts); - - if (ret < 0) { - av_log(d, AV_LOG_FATAL, "could not find codec parameters\n"); - if (ic->nb_streams == 0) { - avformat_close_input(&ic); - exit_program(1); - } - } - } - - if (start_time != AV_NOPTS_VALUE && start_time_eof != AV_NOPTS_VALUE) { - av_log(d, AV_LOG_WARNING, "Cannot use -ss and -sseof both, using -ss\n"); - start_time_eof = AV_NOPTS_VALUE; - } - - if (start_time_eof != AV_NOPTS_VALUE) { - if (start_time_eof >= 0) { - av_log(d, AV_LOG_ERROR, "-sseof value must be negative; aborting\n"); - exit_program(1); - } - if (ic->duration > 0) { - start_time = start_time_eof + ic->duration; - if (start_time < 0) { - av_log(d, AV_LOG_WARNING, "-sseof value seeks to before start of file; ignored\n"); - start_time = AV_NOPTS_VALUE; - } - } else - av_log(d, AV_LOG_WARNING, "Cannot use -sseof, file duration not known\n"); - } - timestamp = (start_time == AV_NOPTS_VALUE) ? 0 : start_time; - /* add the stream start time */ - if (!o->seek_timestamp && ic->start_time != AV_NOPTS_VALUE) - timestamp += ic->start_time; - - /* if seeking requested, we execute it */ - if (start_time != AV_NOPTS_VALUE) { - int64_t seek_timestamp = timestamp; - - if (!(ic->iformat->flags & AVFMT_SEEK_TO_PTS)) { - int dts_heuristic = 0; - for (i=0; i<ic->nb_streams; i++) { - const AVCodecParameters *par = ic->streams[i]->codecpar; - if (par->video_delay) { - dts_heuristic = 1; - break; - } - } - if (dts_heuristic) { - seek_timestamp -= 3*AV_TIME_BASE / 23; - } - } - ret = avformat_seek_file(ic, -1, INT64_MIN, seek_timestamp, seek_timestamp, 0); - if (ret < 0) { - av_log(d, AV_LOG_WARNING, "could not seek to position %0.3f\n", - (double)timestamp / AV_TIME_BASE); - } - } - - f->ctx = ic; - f->start_time = start_time; - f->recording_time = recording_time; - f->input_sync_ref = o->input_sync_ref; - f->input_ts_offset = o->input_ts_offset; - f->ts_offset = o->input_ts_offset - (copy_ts ? (start_at_zero && ic->start_time != AV_NOPTS_VALUE ?
ic->start_time : 0) : timestamp); - f->rate_emu = o->rate_emu; - f->accurate_seek = o->accurate_seek; - d->loop = o->loop; - d->duration = 0; - d->time_base = (AVRational){ 1, 1 }; - - f->readrate = o->readrate ? o->readrate : 0.0; - if (f->readrate < 0.0f) { - av_log(d, AV_LOG_ERROR, "Option -readrate is %0.3f; it must be non-negative.\n", f->readrate); - exit_program(1); - } - if (f->readrate && f->rate_emu) { - av_log(d, AV_LOG_WARNING, "Both -readrate and -re set. Using -readrate %0.3f.\n", f->readrate); - f->rate_emu = 0; - } - - d->thread_queue_size = o->thread_queue_size; - - /* update the current parameters so that they match the one of the input stream */ - add_input_streams(o, d); - - /* dump the file content */ - av_dump_format(ic, f->index, filename, 0); - - /* check if all codec options have been used */ - unused_opts = strip_specifiers(o->g->codec_opts); - for (i = 0; i < f->nb_streams; i++) { - e = NULL; - while ((e = av_dict_iterate(f->streams[i]->decoder_opts, e))) - av_dict_set(&unused_opts, e->key, NULL, 0); - } - - e = NULL; - while ((e = av_dict_iterate(unused_opts, e))) { - const AVClass *class = avcodec_get_class(); - const AVOption *option = av_opt_find(&class, e->key, NULL, 0, - AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ); - const AVClass *fclass = avformat_get_class(); - const AVOption *foption = av_opt_find(&fclass, e->key, NULL, 0, - AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ); - if (!option || foption) - continue; - - - if (!(option->flags & AV_OPT_FLAG_DECODING_PARAM)) { - av_log(d, AV_LOG_ERROR, "Codec AVOption %s (%s) is not a decoding " - "option.\n", e->key, option->help ? option->help : ""); - exit_program(1); - } - - av_log(d, AV_LOG_WARNING, "Codec AVOption %s (%s) has not been used " - "for any stream. The most likely reason is either wrong type " - "(e.g. a video option with no video streams) or that it is a " - "private option of some decoder which was not actually used " - "for any stream.\n", e->key, option->help ? option->help : ""); - } - av_dict_free(&unused_opts); - - for (i = 0; i < o->nb_dump_attachment; i++) { - int j; - - for (j = 0; j < f->nb_streams; j++) { - InputStream *ist = f->streams[j]; - - if (check_stream_specifier(ic, ist->st, o->dump_attachment[i].specifier) == 1) - dump_attachment(ist, o->dump_attachment[i].u.str); - } - } - - return 0; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Discover My Talking Angela 2 The Game that Lets You Dance Bake Travel and More.md b/spaces/congsaPfin/Manga-OCR/logs/Discover My Talking Angela 2 The Game that Lets You Dance Bake Travel and More.md deleted file mode 100644 index dc8738e795a945c51f3b4523542e5e6362e5a294..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Discover My Talking Angela 2 The Game that Lets You Dance Bake Travel and More.md +++ /dev/null @@ -1,125 +0,0 @@ -
    -

    My Talking Angela 2: A Fun and Stylish Virtual Pet Game

    -

Do you love virtual pet games? Do you want a fashionable feline friend who can dance, bake, travel, and more? If you answered yes, then you should check out My Talking Angela 2, the latest installment in the popular My Talking franchise by Outfit7. In this article, we will tell you everything you need to know about this game, including its features, tips and tricks, and reviews. Read on to find out why My Talking Angela 2 is one of the best virtual pet games on the market.

    -




    -

    Introduction

    -

    What is My Talking Angela 2?

    -

    My Talking Angela 2 is a virtual pet game that makes every day more stylish and fun. Players help this fashionable cat stay busy in her big-city home. They can customize her hair, makeup, and fashion choices, as well as her apartment decor. They can also enjoy various activities and mini-games with her, such as dancing, baking, and martial arts. They can even travel the world with her and collect stickers along the way. My Talking Angela 2 is a game that lets players express their creativity and personality while having fun with their adorable virtual pet.

    -

    Why should you play My Talking Angela 2?

    -

    There are many reasons why you should play My Talking Angela 2. Here are some of them:

    -
• It's free to download and play. You can access all of the game's features without spending real money on in-app purchases.
• It's suitable for all ages. Whether you're a kid or an adult, you can enjoy playing with Angela and taking care of her needs.
• It's easy to play. The game has simple controls and an intuitive interface that make it easy to navigate and interact with Angela.
• It's entertaining and engaging. The game has a lot of content and variety that keep you interested and entertained. You can explore different rooms, activities, mini-games, outfits, locations, and stickers.
• It's educational and beneficial. The game can help you improve your skills and reflexes, as well as your creativity and imagination. You can also learn new things from Angela, such as facts about different countries and cultures.
    -

    Features of My Talking Angela 2

    -

    Customize Angela's look and apartment

    -

    One of the main features of My Talking Angela 2 is the ability to customize Angela's look and apartment. You can choose from a wide range of hair styles, colors, accessories, makeup products, clothes, shoes, bags, and more. You can also change the furniture and fittings in each room of her apartment. You can even paint a painting and hang it above her bed. You can reinvent Angela's look and home as often as you want, depending on your mood and preference.

    -

    Enjoy various activities and mini-games

    -

    Another feature of My Talking Angela 2 is the variety of activities and mini-games that you can enjoy with Angela. You can dance with her in her studio, bake with her in her kitchen, practice martial arts with her in her dojo, or relax with her in her spa room. You can also play various mini-games with her, such as puzzles, memory games, arcade games, and more. Each activity and mini-game has different levels of difficulty and rewards. You can earn coins, diamonds, stars, and stickers by playing with Angela.

    -

    Travel the world and collect stickers

    -

    A third feature of My Talking Angela 2 is the ability to travel the world and collect stickers with Angela. You can visit different countries and cities with her, such as Paris, London, New York, Tokyo, and more. You can also learn about their cultures and landmarks from Angela. Each location has a sticker album that you can fill up by finding hidden stickers in each scene. You can also exchange stickers with other players online. Traveling and collecting stickers is a fun way to expand your horizons and make new friends.

    -


    -

    Tips and tricks for My Talking Angela 2

    -

    Play mini-games to earn coins and diamonds

    -

    One of the tips and tricks for My Talking Angela 2 is to play mini-games to earn coins and diamonds. Coins and diamonds are the main currencies in the game that you can use to buy items and unlock features. You can earn coins by playing any mini-game, but some mini-games give you more coins than others. For example, the puzzle game gives you 10 coins per level, while the arcade game gives you 20 coins per level. You can also earn diamonds by playing the memory game or the dance game. Diamonds are more valuable than coins, so you should try to play these games as often as possible.

    -

    Feed Angela smoothies for special effects

    -

    Another tip and trick for My Talking Angela 2 is to feed Angela smoothies for special effects. Smoothies are drinks that you can make in the kitchen by combining different fruits and ingredients. Each smoothie has a different effect on Angela, such as making her happy, energetic, sleepy, or sick. You can use these effects to your advantage depending on the situation. For example, if you want to play more mini-games with Angela, you can feed her a smoothie that makes her energetic. If you want to make her sleep faster, you can feed her a smoothie that makes her sleepy.

    -

    Take care of Angela's health and hygiene

    -

    A third tip and trick for My Talking Angela 2 is to take care of Angela's health and hygiene. Angela has four meters that indicate her status: happiness, hunger, energy, and hygiene. You need to keep these meters high by doing various things for her, such as feeding her, playing with her, putting her to bed, or cleaning her. If you neglect any of these meters, Angela will become unhappy or sick, which will affect her performance and mood. You can also use items such as medicine or perfume to boost her meters quickly.

    -

    Level up Angela to unlock new items and locations

    -

    A fourth tip and trick for My Talking Angela 2 is to level up Angela to unlock new items and locations. Angela has a level meter that fills up as you play with her and take care of her. Each time you level up, you will unlock new items for her look and apartment, as well as new locations to travel to. You will also get a free gift box that contains coins, diamonds, or stickers. Leveling up is a great way to access more content and features in the game.

    -

    Watch videos and complete tasks for extra rewards

    -

    A fifth tip and trick for My Talking Angela 2 is to watch videos and complete tasks for extra rewards. You can watch videos in the TV room or in the shop to earn free coins or diamonds. You can also complete tasks in the task list or in the daily challenge to earn stars or stickers. These tasks are simple and easy to do, such as changing Angela's outfit or playing a mini-game. Watching videos and completing tasks is a good way to get more resources and items in the game.

    -

    Reviews of My Talking Angela 2

    -

    What do users say about My Talking Angela 2?

    -

    My Talking Angela 2 has received mostly positive reviews from users who have downloaded and played it. Here are some of the comments from users who have rated it on Google Play Store:

    -
    -

    "I love this game so much! It's so fun and cute! I like how you can customize everything and travel around the world! The graphics are amazing and the animations are smooth! I recommend this game to everyone who loves virtual pet games!"

    -

    "This game is awesome! It has so many features and activities that make it interesting and enjoyable! I like how you can interact with Angela and make her happy! The mini-games are also fun and challenging! The game is also very educational and teaches you about different countries and cultures!"

    -

    "This game is good, but it has some flaws. It takes too long to load and sometimes crashes. It also has too many ads that interrupt the gameplay. It also requires a lot of storage space and internet connection. I hope the developers can fix these issues soon."

    -
    -

    What are the pros and cons of My Talking Angela 2?

    -

    Based on the user reviews and our own experience, we have summarized the pros and cons of My Talking Angela 2 as follows:

| Pros | Cons |
| --- | --- |
| Fun and cute virtual pet game | Long loading time and frequent crashes |
| Lots of customization options and variety | Too many ads and in-app purchases |
| Engaging and challenging activities and mini-games | High storage space and internet connection requirements |
| Educational and beneficial content and features | None |
    -

    Conclusion

    -

    Summary of the main points

    -

In conclusion, My Talking Angela 2 is a fun and stylish virtual pet game that gives you a fashionable feline friend who can dance, bake, travel, and more. You can customize her look and apartment, enjoy various activities and mini-games, travel the world and collect stickers, and learn new things from her. The game is free to download and play, suitable for all ages, easy to play, entertaining and engaging, and educational. However, it also has some drawbacks, such as long loading times, frequent crashes, too many ads and in-app purchases, and high storage space and internet connection requirements. Despite these flaws, My Talking Angela 2 is still one of the best virtual pet games on the market.

    -

    Call to action

    -

    If you are interested in playing My Talking Angela 2, you can download it from Google Play Store or Apple App Store. You can also visit the official website of Outfit7 for more information about the game and other related products. Don't miss this opportunity to have a fun and stylish virtual pet game that makes every day more exciting. Download My Talking Angela 2 today and enjoy playing with your adorable feline friend!

    -

    FAQs

    -

    Here are some of the frequently asked questions about My Talking Angela 2:

    -

    Q: How do I talk to Angela?

    -

    A: You can talk to Angela by tapping on the microphone icon on the bottom right corner of the screen. You can say anything you want to her, and she will repeat it in a funny voice. You can also make her laugh by tickling her or making funny noises.

    -

    Q: How do I take a selfie with Angela?

    -

    A: You can take a selfie with Angela by tapping on the camera icon on the top right corner of the screen. You can choose from different backgrounds, filters, stickers, and frames to make your selfie more fun and unique. You can also share your selfie with your friends on social media.

    -

    Q: How do I play music with Angela?

    -

    A: You can play music with Angela by tapping on the music icon on the bottom left corner of the screen. You can choose from different genres, such as pop, rock, hip hop, or classical. You can also create your own music by tapping on the instruments or using your voice.

    -

    Q: How do I change Angela's name?

    -

    A: You can change Angela's name by tapping on the settings icon on the top left corner of the screen. You can then tap on the name field and enter a new name for her. You can also change other settings, such as language, sound effects, notifications, or privacy.

    -

    Q: How do I contact the developers of My Talking Angela 2?

    -

    A: You can contact the developers of My Talking Angela 2 by tapping on the support icon on the bottom right corner of the settings menu. You can then choose from different options, such as feedback, report a problem, or FAQ. You can also email them at support@outfit7.com or visit their website at www.outfit7.com.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Instagram by the Dise Why You Need This Amazing App.md b/spaces/congsaPfin/Manga-OCR/logs/Download Instagram by the Dise Why You Need This Amazing App.md deleted file mode 100644 index 4fbc555695a588bcb994a5e1e71aac4a1c31f5ea..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Instagram by the Dise Why You Need This Amazing App.md +++ /dev/null @@ -1,132 +0,0 @@ - -

    Download Instagram by the DISE: A Guide for Android Users

    -

    Instagram is one of the most popular social media platforms in the world, with over 1 billion monthly active users. It allows you to create and share your photos, stories, reels and videos with the friends and followers you care about. You can also explore your interests, discover new content, and connect with people from different cultures and backgrounds.

    -

    But how can you download Instagram on your Android device? One of the easiest and fastest ways is to use the DISE app, a powerful tool that lets you download any app from Google Play Store without any hassle. In this article, we will show you how to download Instagram by the DISE, and also explain what the DISE is and why you should use it.

    -




    -

    What is Instagram?

    -

    Instagram is a free photo and video sharing app that was launched in 2010 by Kevin Systrom and Mike Krieger. It was acquired by Facebook in 2012 for $1 billion. Since then, it has grown to become one of the most influential social media platforms in the world, with celebrities, influencers, brands, and ordinary users using it to showcase their creativity, lifestyle, and personality.

    -

    Instagram has many features that make it fun and engaging, such as:

    -
• Filters: You can apply different filters to your photos and videos to enhance their appearance and mood.
• Stories: You can share ephemeral photos and videos that disappear after 24 hours with your followers or close friends.
• Reels: You can create short-form videos with music, effects, and stickers that can reach a wider audience on the Explore tab.
• IGTV: You can upload longer videos, up to an hour long, in a dedicated section of the app.
• Live: You can broadcast live video to your followers and interact with them in real time.
• DMs: You can send private messages, photos, videos, voice notes, and stickers to your friends or groups.
• Shopping: You can discover and buy products from your favorite brands and creators on Instagram.
    -

    Why download Instagram?

    -

    There are many reasons why you might want to download Instagram on your Android device, such as:

    -
      -
    • You want to join a community of over 1 billion people who share your passions and interests.
    • -
    • You want to express yourself through photos and videos that capture your moments and memories.
    • -
    • You want to discover new content that inspires you, entertains you, or educates you.
    • -
    • You want to connect with your friends, family, and favorite celebrities or influencers.
    • -
    • You want to grow your personal brand or business by reaching new customers or fans.
    • -
    -

    How to download Instagram by the DISE?

    -

    To download Instagram by the DISE, you need to follow these simple steps:

    -

    Step 1: Download the DISE app

    -

    The first thing you need to do is to download the DISE app from its official website https://dise.com/. You can also scan the QR code on the website with your phone's camera to get the link. Alternatively, you can search for "DISE" on Google Play Store and install it from there.

    -

    Step 2: Install the DISE app

    -

Once you have downloaded the DISE app, you need to install it on your device. First, allow installation from unknown sources: go to your device's settings, then Security, and enable the "Unknown sources" option. This will allow you to install apps that are not from Google Play Store.
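As a side note for the technically curious: on Android 8 and later, "Unknown sources" is a per-app permission rather than a single global switch. The sketch below is a minimal Kotlin example of how an app can check and request it; the helper name requireInstallPermission is ours, not part of the DISE app, and it assumes the REQUEST_INSTALL_PACKAGES permission is declared in the manifest.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Minimal sketch; assumes REQUEST_INSTALL_PACKAGES is declared in the manifest.
fun Activity.requireInstallPermission() {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        !packageManager.canRequestPackageInstalls()
    ) {
        // Open this app's "Install unknown apps" settings screen.
        startActivity(
            Intent(
                Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
                Uri.parse("package:$packageName")
            )
        )
    }
}
```

On versions before Android 8, the global "Unknown sources" toggle described above is the only switch there is.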

    -

    Step 3: Launch the DISE app

    -

    After you have installed the DISE app, you need to launch it on your device. You will see a welcome screen that explains what the DISE app is and how it works. You can swipe left to skip the introduction or tap on "Next" to proceed. You will also need to agree to the terms and conditions and privacy policy of the DISE app before you can use it.

    -

    Step 4: Search for Instagram

    -

    Once you have launched the DISE app, you will see a search bar at the top of the screen. You can use this to search for any app that you want to download from Google Play Store. In this case, you need to type "Instagram" in the search bar and tap on the magnifying glass icon. You will see a list of results that match your query. You need to select the one that says "Instagram" with the official logo and description.

    -


    -

    Step 5: Download Instagram

    -

    After you have selected Instagram from the list of results, you will see a page that shows more information about the app, such as its rating, reviews, screenshots, and size. You will also see a green button that says "Download". You need to tap on this button to start downloading Instagram by the DISE. You will see a progress bar that shows how much of the download is completed. Once the download is finished, you will see a notification that says "Download complete". You can then tap on "Open" to launch Instagram on your device.

    -

    What is the DISE?

    -

    The DISE is a powerful app that lets you download any app from Google Play Store without any hassle. It stands for Download Install Search Engine, and it works by using a proxy server that bypasses any restrictions or limitations that might prevent you from downloading apps from Google Play Store.

    -

    What does the DISE stand for?

    -

    The DISE stands for Download Install Search Engine. It is an acronym that describes what the app does: it allows you to download, install, and search for any app from Google Play Store.

    -

    What are the benefits of the DISE?

    -

    The DISE has many benefits that make it a useful and convenient tool for Android users, such as:

    -
      -
    • It is fast and easy to use: You can download any app from Google Play Store in just a few taps, without having to sign in or create an account.
    • -
    • It is safe and secure: The DISE uses encryption and proxy servers to protect your privacy and data. It does not collect or store any personal information or device data.
    • -
    • It is free and unlimited: The DISE does not charge any fees or impose any limits on how many apps you can download or how much data you can use.
    • -
    • It is compatible and flexible: The DISE works with any Android device and any network connection. It also supports multiple languages and regions.
    • -
    -

    What are the alternatives to the DISE?

    -

    The DISE is not the only app that lets you download apps from Google Play Store without any hassle. There are some other alternatives that you can try, such as:

    -
      -
    • Aptoide: Aptoide is an independent app store that offers over 1 million apps that are not available on Google Play Store. You can also create your own app store and share it with others.
    • -
    • APKPure: APKPure is an app downloader that lets you download APK files of apps from Google Play Store or other sources. You can also update your apps with APKPure without using Google Play Services.
    • -
    • Aurora Store: Aurora Store is an unofficial client for Google Play Store that lets you access all the features of Google Play Store without using Google services or accounts. You can also customize your settings and preferences with Aurora Store.
    • -
    -

    Conclusion

    -

    In conclusion, Instagram is a great app that lets you create and share your photos, stories, reels and videos with the friends and followers you care about. You can also explore your interests, discover new content, and connect with people from different cultures and backgrounds. To download Instagram on your Android device, one of the easiest and fastest ways is to use the DISE app, a powerful tool that lets you download any app from Google Play Store without any hassle. All you need to do is follow these simple steps:

    -
      -
    1. Download the DISE app from its official website https://dise.com/ or Google Play Store.
    2. -
    3. Install the DISE app on your device and allow installation from unknown sources.
    4. -
    5. Launch the DISE app and agree to the terms and conditions and privacy policy.
    6. -
    7. Search for Instagram in the search bar and select the official app from the list of results.
    8. -
    9. Download Instagram by tapping on the green button and wait for the download to complete.
    10. -
    11. Open Instagram and enjoy creating and sharing your photos, stories, reels and videos.
    12. -
    -

    We hope this article has helped you learn how to download Instagram by the DISE. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions about downloading Instagram by the DISE:

    -
      -
    • Is the DISE app safe to use?
    • -

      Yes, the DISE app is safe and secure to use. It does not collect or store any personal information or device data. It also uses encryption and proxy servers to protect your privacy and data. However, you should always be careful when downloading apps from unknown sources and only download apps that you trust.

      -
    • Can I use the DISE app to download other apps besides Instagram?
    • -

      Yes, you can use the DISE app to download any app from Google Play Store without any hassle. You can also search for apps by category, popularity, rating, or name. You can also update your apps with the DISE app without using Google Play Services.

      -
    • What are the requirements for using the DISE app?
    • -

      The DISE app requires Android 4.4 or higher and an internet connection. It also requires at least 50 MB of free storage space on your device. You do not need a Google account or Google services to use the DISE app.

      -
    • How can I contact the DISE app developers?
    • -

      If you have any questions, suggestions, or issues with the DISE app, you can contact the developers by sending an email to support@dise.com. You can also visit their website https://dise.com/ for more information and updates.

      -
    • How can I share my feedback or review of the DISE app?
    • -

      If you like the DISE app and want to share your feedback or review, you can do so by rating and reviewing it on Google Play Store. You can also share it with your friends and family by using the share button on the DISE app. Your feedback and support are greatly appreciated!

      -

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Ride of Your Life with Ultimate Motorcycle Simulator APK Mod.md b/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Ride of Your Life with Ultimate Motorcycle Simulator APK Mod.md deleted file mode 100644 index 87fd2b891bce2d5824e81e0382a70898cc3a85f6..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Ride of Your Life with Ultimate Motorcycle Simulator APK Mod.md +++ /dev/null @@ -1,102 +0,0 @@ -
    -

    Ultimate Motorcycle Simulator APK Mod: A Review

    -

    If you are a fan of motorcycle games, you might have heard of Ultimate Motorcycle Simulator, one of the most realistic and immersive motorcycle simulation games on Android. In this article, we will review Ultimate Motorcycle Simulator and its APK mod version, which gives you unlimited money, all motorcycles unlocked, and no ads. We will also show you how to download and install Ultimate Motorcycle Simulator APK mod on your device.

    -

    What is Ultimate Motorcycle Simulator?

    -

    Ultimate Motorcycle Simulator is a game developed by Sir Studios, a Turkish game studio that specializes in simulation games. It was released in 2018 and has since gained over 50 million downloads and 4.3 stars rating on Google Play Store. Ultimate Motorcycle Simulator is a game that lets you experience the thrill of riding different types of motorcycles in a realistic open world environment. You can customize your motorcycles with various paint jobs, vinyls, and tuning parts, and enjoy the realistic physics and sound effects of the game. You can also choose between different game modes, such as free ride, traffic, checkpoint, or career mode.

    -

    -

    Features of Ultimate Motorcycle Simulator

    -

    Ultimate Motorcycle Simulator has many features that make it stand out from other motorcycle games. Here are some of them:

    -

    Realistic physics

    -

    Ultimate Motorcycle Simulator uses advanced physics engine to simulate the behavior and movement of motorcycles. You can feel the difference between different types of motorcycles, such as sport bikes, choppers, cruisers, or off-road bikes. You can also perform stunts, drifts, wheelies, and burnouts with your motorcycles.

    -

    Open world map

    -

    Ultimate Motorcycle Simulator has a huge open world map that you can explore freely. You can ride your motorcycles in different terrains, such as city streets, highways, deserts, mountains, or forests. You can also find hidden ramps, loops, bridges, and tunnels that you can use to perform amazing stunts.

    -

    Customizable motorcycles

    -

    Ultimate Motorcycle Simulator has a wide range of motorcycles that you can unlock and customize. You can choose from over 40 motorcycles, each with their own characteristics and performance. You can also change the color, vinyl, and tuning parts of your motorcycles to make them look unique and suit your style.

    -

    Free ride mode

    -

    Ultimate Motorcycle Simulator has a free ride mode that lets you enjoy the game without any limitations or objectives. You can ride your motorcycles anywhere you want, at any speed you want, and do whatever you want. You can also use the camera mode to take screenshots or videos of your rides.

    -

    What is Ultimate Motorcycle Simulator APK Mod?

    -

    Ultimate Motorcycle Simulator APK mod is a modified version of the original game that gives you some extra benefits that are not available in the official version. These benefits include unlimited money, all motorcycles unlocked, and no ads.

    -

    Benefits of Ultimate Motorcycle Simulator APK Mod

    -

    Ultimate Motorcycle Simulator APK mod has some benefits that make it more enjoyable and convenient than the original game. Here are some of them:

    -

    Unlimited money

    -

    Ultimate Motorcycle Simulator APK mod gives you unlimited money that you can use to buy new motorcycles or upgrade your existing ones. You don't have to worry about running out of money or grinding for hours to earn enough money to buy your favorite motorcycle.

    -


    -

    All motorcycles unlocked

    -

    Ultimate Motorcycle Simulator APK mod gives you access to all the motorcycles in the game without having to unlock them by completing missions or reaching certain levels. You can choose any motorcycle you want from the start and enjoy its performance and features.

    -

    No ads

    -

    Ultimate Motorcycle Simulator APK mod removes all the ads that interrupt your gameplay and ruin your immersion. You don't have to watch any ads to get rewards or bonuses, or to skip the waiting time. You can enjoy the game without any distractions or annoyances.

    -

    How to download and install Ultimate Motorcycle Simulator APK Mod?

    -

    If you want to download and install Ultimate Motorcycle Simulator APK mod on your device, you need to follow some simple steps. Here they are:

    -

    Steps to download and install Ultimate Motorcycle Simulator APK Mod

    -
      -
1. Go to a trusted website that provides the download link for Ultimate Motorcycle Simulator APK mod, such as APKPure or APKHome.
    2. -
    3. Click on the download button and wait for the file to be downloaded on your device.
    4. -
    5. Go to your device settings and enable the installation of apps from unknown sources.
    6. -
7. Locate the downloaded file and tap on it to start the installation process (a programmatic sketch of this hand-off to the system installer appears after this list).
    8. -
    9. Follow the instructions on the screen and wait for the installation to be completed.
    10. -
    11. Launch the game and enjoy Ultimate Motorcycle Simulator APK mod.
    12. -
    -
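If you are curious what "tap on the file to start the installation" actually does, here is a minimal Kotlin sketch of handing an APK to the system installer. It is illustrative only: the FileProvider authority shown is an assumption that must match a provider entry in your own manifest, and none of these names come from the game.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Illustrative sketch; the ".fileprovider" authority is an assumption that
// must match a <provider> declared in your own AndroidManifest.xml.
fun promptInstall(context: Context, apk: File) {
    val uri = FileProvider.getUriForFile(
        context, "${context.packageName}.fileprovider", apk
    )
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/vnd.android.package-archive")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION or Intent.FLAG_ACTIVITY_NEW_TASK)
    }
    context.startActivity(intent) // hands off to the system package installer
}
```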

    Conclusion

    -

    Ultimate Motorcycle Simulator is a great game for motorcycle enthusiasts who want to experience the realistic and immersive simulation of riding different types of motorcycles in a huge open world environment. Ultimate Motorcycle Simulator APK mod is a better version of the game that gives you unlimited money, all motorcycles unlocked, and no ads. You can download and install Ultimate Motorcycle Simulator APK mod by following the steps we have provided in this article. We hope you have fun playing Ultimate Motorcycle Simulator APK mod.

    -

    FAQs

    -
      -
    • Q: Is Ultimate Motorcycle Simulator APK mod safe to use?
    • -
    • A: Yes, Ultimate Motorcycle Simulator APK mod is safe to use as long as you download it from a trusted website that does not contain any viruses or malware.
    • -
    • Q: Do I need to root my device to use Ultimate Motorcycle Simulator APK mod?
    • -
    • A: No, you don't need to root your device to use Ultimate Motorcycle Simulator APK mod. You just need to enable the installation of apps from unknown sources in your device settings.
    • -
    • Q: Can I play Ultimate Motorcycle Simulator online with other players?
    • -
    • A: No, Ultimate Motorcycle Simulator is an offline game that does not support online multiplayer mode. You can only play it solo or with AI traffic.
    • -
    • Q: Can I update Ultimate Motorcycle Simulator APK mod when a new version is released?
    • -
    • A: Yes, you can update Ultimate Motorcycle Simulator APK mod when a new version is released by downloading and installing the latest version from the same website that you downloaded it from. However, you may lose your progress and data if you do so, so make sure you back up your game data before updating.
    • -
    • Q: What are some alternatives to Ultimate Motorcycle Simulator APK mod?
    • -
    • A: Some alternatives to Ultimate Motorcycle Simulator APK mod are Real Bike Racing, Moto Rider GO, Bike Race Free, and Traffic Rider.
    • -

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hello Neighbor Saklamba APK Tips and Tricks to Outsmart Your Creepy Neighbor.md b/spaces/congsaPfin/Manga-OCR/logs/Hello Neighbor Saklamba APK Tips and Tricks to Outsmart Your Creepy Neighbor.md deleted file mode 100644 index a4c1aad9c21bc52c4955f3ee32d15b39b85b94d7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Hello Neighbor Saklamba APK Tips and Tricks to Outsmart Your Creepy Neighbor.md +++ /dev/null @@ -1,114 +0,0 @@ -
    -

    Hello Neighbor Saklambaç APK: A Stealth Horror Game with a Twist

    -

    If you are a fan of stealth horror games, you might have heard of Hello Neighbor, a popular game that was released in 2017 by tinyBuild. The game is about sneaking into your neighbor's house to find out what he is hiding in his basement, while avoiding his traps and cameras. The game has received positive reviews from critics and players alike, who praised its originality, suspense, and graphics.

    -

    But did you know that there is a Turkish version of Hello Neighbor that adds a twist to the game? It is called Hello Neighbor Saklambaç APK, and it is a modified version of the original game that introduces a new element to the gameplay: saklambaç.

    -

    -

    Saklambaç is a Turkish word that means "hide and seek". It is also a popular game among children in Turkey, where one person tries to find the others who are hiding. In Hello Neighbor Saklambaç APK, you can play saklambaç with your neighbor, who will try to catch you if he sees you. You can also hide behind objects, under beds, or in closets, to avoid being detected.

    -

    Hello Neighbor Saklambaç APK is a fun and exciting game that will test your stealth skills, creativity, and courage. If you are curious about this game and want to try it out, here is everything you need to know about how to download, install, play, and enjoy Hello Neighbor Saklambaç APK on your Android device.

    -

    How to Download and Install Hello Neighbor Saklambaç APK on Your Android Device

    -

Before you can play Hello Neighbor Saklambaç APK on your Android device, you need to download and install the APK file of the game. An APK file is an Android application package file that contains all the files and data needed to run an app on an Android device. However, since Hello Neighbor Saklambaç APK is not available on the official Google Play Store, you need to download it from a third-party source. Here are the steps you need to follow to download and install Hello Neighbor Saklambaç APK on your Android device:

    -
      -
    1. Enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then enable the option "unknown sources". You may see a warning message, but you can ignore it and proceed.
    2. -
3. Download the APK file from a trusted source. You can find many websites that offer Hello Neighbor Saklambaç APK for free download, but you need to be careful and choose a reliable and safe one. One of the websites that we recommend is APKPure, where you can download the latest version of Hello Neighbor Saklambaç APK with a simple click (a quick programmatic sanity check for downloaded APKs is sketched after this list).
    4. -
    5. Locate and install the APK file on your device. After you download the APK file, you need to find it on your device storage and tap on it to start the installation process. You may see some prompts asking for permissions, just accept them and wait for the installation to finish.
    6. -
    7. Launch the game and enjoy. Once the installation is done, you can find the game icon on your device home screen or app drawer. Tap on it to launch the game and start playing Hello Neighbor Saklambaç APK.
    8. -
    -
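Because you should only install files you trust, a quick extra check is to read the package name and version that are baked into the downloaded APK before tapping "Install". This is a minimal sketch using Android's PackageManager; the function name and the comparison idea are ours, not part of the game.

```kotlin
import android.content.Context

// Read the identity baked into an APK file without installing it.
fun describeApk(context: Context, apkPath: String): String? {
    val info = context.packageManager.getPackageArchiveInfo(apkPath, 0) ?: return null
    return "${info.packageName} v${info.versionName}"
}
```

If the reported package name is not the one you expect, do not install the file.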

    How to Play Hello Neighbor Saklambaç APK

    -

    Now that you have downloaded and installed Hello Neighbor Saklambaç APK on your Android device, you are ready to play this amazing stealth horror game with a twist. Here is how to play Hello Neighbor Saklambaç APK:

    -

    The objective: Sneak into your neighbor's house and discover his secrets. You are a curious kid who lives across the street from a mysterious neighbor who seems to be hiding something in his basement. You decide to sneak into his house and find out what he is up to, but be careful, he is not as friendly as he looks. He will chase you, set traps, and use cameras to catch you if he sees you.

    -

    The gameplay: Use stealth, strategy, and creativity to avoid the neighbor's traps and cameras. You can explore the neighbor's house, which is a huge sandbox with many rooms, objects, and secrets. You can interact with almost anything in the house, such as opening doors, windows, drawers, cabinets, etc. You can also use items that you find or collect, such as keys, tools, weapons, etc., to help you in your mission. However, you need to be careful not to make too much noise or leave any traces behind, as the neighbor will hear you or see you and try to stop you.

    -


    -

    The features: Experience an immersive and dynamic environment with an advanced AI that adapts to your actions. One of the most impressive features of Hello Neighbor Saklambaç APK is its realistic and responsive environment that changes according to your actions. The neighbor has an artificial intelligence that learns from your behavior and creates new strategies to catch you. He will also remember where you have been and what you have done, and react accordingly. For example, if you break a window or leave a door open, he will notice it and fix it or close it. If you use a certain route or hiding spot often, he will set traps or cameras there. If you throw something at him or hit him with a weapon, he will get angry and chase you more aggressively.
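To give a rough feel for how this kind of adaptive behaviour could work, here is a toy Kotlin model, entirely our own invention and not the game's actual code: the neighbor counts how often each hiding spot is used and traps the most popular one.

```kotlin
// Toy model only — not the game's real AI.
class NeighborMemory {
    private val visits = mutableMapOf<String, Int>()

    // Called whenever the neighbor spots the player using a hiding place.
    fun observeHidingSpot(spot: String) {
        visits[spot] = (visits[spot] ?: 0) + 1
    }

    // The spot the neighbor would trap next, or null if nothing observed yet.
    fun spotToTrap(): String? = visits.maxByOrNull { it.value }?.key
}
```

The real game's AI is certainly more sophisticated, but this is the general idea: repeated behaviour becomes predictable, and predictable behaviour gets punished.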

    -

    Tips and Tricks for Hello Neighbor Saklambaç APK

    -

    Hello Neighbor Saklambaç APK is not an easy game to play, as it requires a lot of patience, skill, and creativity. However, there are some tips and tricks that can help you succeed in your mission and have more fun while playing this game. Here are some of them:

    -
      -
    • Explore the surroundings and look for clues and items. The neighbor's house is full of secrets and hidden places that can reveal more about his story and motives. You can also find useful items that can help you in your mission, such as keys, tools, weapons, etc. However, be careful not to take too much time or make too much noise while exploring, as the neighbor may notice you.
    • -
• Use distractions and hiding spots to evade the neighbor. You can use various objects or items that you find or collect in the house to distract the neighbor or lure him away from his position. For example, you can throw something at him or at another place to make him look away or follow the sound. You can also use radios, TVs, phones, etc., to create noise or fake calls that will confuse him or make him leave his room. Moreover, you can use hiding spots such as closets, beds, or the neighbor's car to hide from him or escape his sight. However, be careful not to use the same hiding spot too often, as he may find you or set a trap there.
    • -
    • Learn from your mistakes and try different approaches. Hello Neighbor Saklambaç APK is a game that requires a lot of trial and error, as you will fail many times before you succeed. However, you can use your failures as learning opportunities, as they will help you understand the neighbor's behavior and patterns, and find new ways to outsmart him. You can also try different approaches and strategies, such as being more aggressive or more stealthy, depending on the situation and your preference.
    • -
    -

    Pros and Cons of Hello Neighbor Saklambaç APK

    -

    Like any other game, Hello Neighbor Saklambaç APK has its pros and cons that you should consider before playing it. Here are some of them:

| Pros | Cons |
| --- | --- |
| A unique and thrilling stealth horror game with a twist | High battery consumption and storage space requirements |
| Challenging and rewarding gameplay with multiple endings | Possible lagging and crashing issues on some devices |
| Stunning and realistic graphics and sound effects | Limited language support and translation errors |
    -

    Conclusion

    -

    Hello Neighbor Saklambaç APK is a stealth horror game that will keep you on the edge of your seat. It is a modified version of the original Hello Neighbor game that introduces a new element to the gameplay: saklambaç, which is a Turkish word for hide and seek. You can play saklambaç with your neighbor, who will try to catch you if he sees you. You can also hide behind objects, under beds, or in closets, to avoid being detected.

    -

    The game has an immersive and dynamic environment with an advanced AI that adapts to your actions. The neighbor will learn from your behavior and create new strategies to catch you. He will also remember where you have been and what you have done, and react accordingly. The game also has a realistic and responsive environment that changes according to your actions.

    -

    If you are looking for a fun and exciting game with a twist, you should give Hello Neighbor Saklambaç APK a try. You can download and install the game on your Android device by following the steps we have provided above. You can also use the tips and tricks we have shared to help you succeed in your mission and have more fun while playing this game.

    -

    FAQs

    -

    Here are some of the frequently asked questions about Hello Neighbor Saklambaç APK:

    -
      -
    1. What is saklambaç?
    2. -

      Saklambaç is a Turkish word that means "hide and seek". It is also a popular game among children in Turkey, where one person tries to find the others who are hiding. In Hello Neighbor Saklambaç APK, you can play saklambaç with your neighbor, who will try to catch you if he sees you.

      -
    3. Is Hello Neighbor Saklambaç APK safe to download?
    4. -

      Yes, Hello Neighbor Saklambaç APK is safe to download, as long as you download it from a trusted source. However, since it is not available on the official Google Play Store, you need to enable unknown sources on your device settings before installing it.

      -
    5. How many levels are there in Hello Neighbor Saklambaç APK?
    6. -

      Hello Neighbor Saklambaç APK has three levels: Act 1, Act 2, and Act 3. Each level has a different layout, difficulty, and objective. You can also unlock different endings depending on your choices and actions.

      -
    7. Can I play Hello Neighbor Saklambaç APK offline?
    8. -

      Yes, you can play Hello Neighbor Saklambaç APK offline, as it does not require an internet connection to run. However, you may need an internet connection to download and install the game.

      -
    9. Can I play Hello Neighbor Saklambaç APK with friends?
    10. -

      No, Hello Neighbor Saklambaç APK is a single-player game that does not support multiplayer mode. However, you can share your experience and opinions with your friends online or offline.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Install Jio POS Plus 1.0.6 APK on Your Android Device.md b/spaces/congsaPfin/Manga-OCR/logs/How to Install Jio POS Plus 1.0.6 APK on Your Android Device.md deleted file mode 100644 index 8af5005e83bb3998c217006f18a46016784bb828..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Install Jio POS Plus 1.0.6 APK on Your Android Device.md +++ /dev/null @@ -1,161 +0,0 @@ -
      -

      Jio POS Plus: A Complete Guide to Download and Use the App

      -

      If you are looking for a way to earn money by recharging Jio numbers, selling Jio SIM cards, or activating new Jio plans, then you might want to check out Jio POS Plus. Jio POS Plus is an app-based platform that allows you to manage your Jio business on the go. In this article, we will tell you everything you need to know about Jio POS Plus, including its features, benefits, how to become a Jio partner, how to download and use the app, and how to update it.

      -

      -

      What is Jio POS Plus?

      -

      Jio POS Plus is an app developed by Jio Platforms Limited for its partners who want to offer various Jio services to their customers. With Jio POS Plus, you can:

      -
        -
      • Onboard new customers with or without Aadhaar
      • -
      • Recharge any Jio number with any plan
      • -
      • Sell new Jio SIM cards or port existing numbers to Jio
      • -
      • Activate new Jio plans or vouchers for your customers
      • -
      • Earn commissions for every transaction you make
      • -
      • Track your earnings and performance on a daily, weekly, or monthly basis
      • -
      • Get access to exclusive offers and discounts from Jio
      • -
      -

      Jio POS Plus is a simple and convenient way to grow your business and earn more income with Jio.

      -

      Features and benefits of Jio POS Plus

      -

      Some of the features and benefits of using Jio POS Plus are:

      -
        -
      • You can manage your business anytime, anywhere, with just your smartphone
      • -
      • You can offer a wide range of services to your customers, such as recharges, SIM activations, plan activations, etc.
      • -
      • You can earn attractive commissions for every transaction you make
      • -
      • You can get real-time updates on your earnings and performance
      • -
      • You can get support from the dedicated customer care team of Jio
      • -
      • You can get access to exclusive offers and discounts from Jio
      • -
      -

      How to become a Jio partner and use Jio POS Plus?

      -

      If you want to become a Jio partner and use Jio POS Plus, you need to follow these steps:

      -

      Step 1: Register as a Jio partner

      -

      To register as a Jio partner, you need to visit the official website of Jio Partners and fill out the online form with your details. You will also need to upload some documents, such as your PAN card, Aadhaar card, GST certificate, bank account details, etc. After submitting the form, you will receive a confirmation email from Jio. You will also get a call from a Jio representative who will verify your details and guide you through the next steps.

      -


      -

      Step 2: Download and install Jio POS Plus app

      -

      Once you are registered as a Jio partner, you will receive an SMS with a link to download the Jio POS Plus app. You can also download the app from the Google Play Store or the Jio website. The app is compatible with Android devices running on version 4.4 or above. To install the app, you need to enable the installation of apps from unknown sources in your device settings. After installing the app, you will see the Jio POS Plus icon on your home screen.

      -

      Step 3: Log in and start using Jio POS Plus app

      -

      To log in to the Jio POS Plus app, you need to enter your registered mobile number and password. You will also need to enter a one-time password (OTP) that will be sent to your number for verification. After logging in, you will see the dashboard of the app, where you can access various features and services. You can also view your profile, wallet balance, transactions history, commissions earned, etc. To start using the app, you need to load money into your wallet using your bank account or debit card. You can then use the wallet balance to recharge Jio numbers, sell Jio SIM cards, activate Jio plans, etc.

      -

How to download Jio POS Plus 1.0.6 APK?

      -

If you are looking for the latest version of the Jio POS Plus app, you might want to download Jio POS Plus 1.0.6 APK. This is the updated version of the app that was released on June 19, 2023. It has some new features and bug fixes that improve the performance and user experience of the app.

      -

Why do you need Jio POS Plus 1.0.6 APK?

      -

Some of the reasons why you might need Jio POS Plus 1.0.6 APK are:

      -
        -
      • You want to enjoy the new features and enhancements of the app
      • -
      • You want to fix some issues or errors that you faced with the previous version of the app
      • -
      • You want to update your app without waiting for the official update from the Google Play Store or the Jio website
      • -
      • You want to install the app on a device that does not have access to the Google Play Store or the Jio website
      • -
      -

How to download and install Jio POS Plus 1.0.6 APK?

      -

There are two methods to download and install Jio POS Plus 1.0.6 APK on your device:

      -

      Method 1: From the official website

      -

This is the safest and easiest method to download and install Jio POS Plus 1.0.6 APK on your device. To do this, you need to follow these steps:

      -
        -
      1. Visit the official website of Jio POS Plus on your device browser
      2. -
      3. Scroll down and click on the "Download App" button
      4. -
      5. Select "Jio POS Plus" from the list of apps and click on "Download"
      6. -
      7. You will see a pop-up window asking you to confirm the download. Click on "OK"
      8. -
      9. The apk file will start downloading on your device. You can check the progress in your notification bar or download manager
      10. -
      11. Once the download is complete, tap on the apk file to open it
      12. -
      13. You will see a warning message saying that installing apps from unknown sources can harm your device. Click on "Settings"
      14. -
      15. You will be redirected to your device settings, where you need to enable the installation of apps from unknown sources
      16. -
      17. Go back to the apk file and tap on it again
      18. -
      19. You will see a screen asking you to install the app. Click on "Install"
      20. -
      21. The app will start installing on your device. You can check the progress in your notification bar or download manager
      22. -
      23. Once the installation is complete, tap on "Open" to launch the app
      24. -
25. You can now log in and use Jio POS Plus 1.0.6 APK on your device
      26. -
      -

      Method 2: From a third-party website

      -

This is an alternative method to download and install Jio POS Plus 1.0.6 APK on your device. However, this method is not recommended, as it may expose your device to malware or viruses. You should only use this method if you trust the source of the APK file and have antivirus software installed on your device. To do this, you need to follow these steps:

      -
        -
1. Visit a third-party website that offers Jio POS Plus 1.0.6 APK for download, such as APKPure or APKMirror
      2. -
3. Search for Jio POS Plus 1.0.6 APK and click on the download link
      4. -
      5. You will see a pop-up window asking you to confirm the download. Click on "OK"
      6. -
      7. The apk file will start downloading on your device. You can check the progress in your notification bar or download manager
      8. -
      9. Once the download is complete, tap on the apk file to open it
      10. -
      11. You will see a warning message saying that installing apps from unknown sources can harm your device. Click on "Settings"
      12. -
      13. You will be redirected to your device settings, where you need to enable the installation of apps from unknown sources
      14. -
      15. Go back to the apk file and tap on it again
      16. -
      17. You will see a screen asking you to install the app. Click on "Install"
      18. -
      19. The app will start installing on your device. You can check the progress in your notification bar or download manager
      20. -
      21. Once the installation is complete, tap on "Open" to launch the app
      22. -
23. You can now log in and use Jio POS Plus 1.0.6 APK on your device
      24. -
      -

      How to update Jio POS Plus app?

      -

      If you want to keep your Jio POS Plus app updated with the latest features and bug fixes, you need to follow these steps:

      -

      How to check the current version of Jio POS Plus app?

      -

To check the current version of the Jio POS Plus app, follow these steps (a programmatic equivalent is sketched after the list):

      -
        -
      1. Open the Jio POS Plus app on your device
      2. -
      3. Tap on the menu icon (three horizontal lines) on the top left corner of the screen
      4. -
      5. Tap on "About Us" from the menu options
      6. -
      7. You will see the current version of the app displayed on the screen
      8. -
      -
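If you prefer to read the installed version programmatically (for example, from a companion tool on the same device), PackageManager exposes the same versionName that the "About Us" screen shows. A minimal Kotlin sketch; the package id "com.jio.retailer" is a placeholder assumption, so substitute the app's real id:

```kotlin
import android.content.Context
import android.content.pm.PackageManager

// "com.jio.retailer" is a placeholder — replace it with the app's real package id.
fun installedVersion(context: Context, pkg: String = "com.jio.retailer"): String? =
    try {
        context.packageManager.getPackageInfo(pkg, 0).versionName
    } catch (e: PackageManager.NameNotFoundException) {
        null // the app is not installed
    }
```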

      How to update Jio POS Plus app from the app itself?

      -

      To update Jio POS Plus app from the app itself, you need to follow these steps:

      -
        -
      1. Open the Jio POS Plus app on your device
      2. -
      3. Tap on the menu icon (three horizontal lines) on the top left corner of the screen
      4. -
      5. Tap on "Check for Updates" from the menu options
      6. -
      7. If there is a new version available, you will see a pop-up window asking you to update the app. Click on "Update"
      8. -
      9. The app will start downloading and installing the new version on your device. You can check the progress in your notification bar or download manager
      10. -
      11. Once the update is complete, tap on "Open" to launch the updated app
      12. -
      -

      How to update Jio POS Plus app manually?

      -

      To update Jio POS Plus app manually, you need to follow these steps:

      -
        -
      1. Delete the old version of Jio POS Plus app from your device
      2. -
3. Download and install Jio POS Plus 1.0.6 APK using one of the methods mentioned above
      4. -
5. Log in and use Jio POS Plus 1.0.6 APK on your device
      6. -
      -

      Conclusion

      -

Jio POS Plus is a great app for anyone who wants to earn money by offering various Jio services to their customers. It is easy to use, convenient, and rewarding. You can download and use Jio POS Plus 1.0.6 APK on your device by following the steps given in this article. However, make sure that you download and install the app from a trusted source and keep it updated regularly.

      -

      Frequently Asked Questions (FAQs)

      -

      Q: Is Jio POS Plus free to use?

      -

      A: Yes, Jio POS Plus is free to use for all registered Jio partners. However, you need to load money into your wallet using your bank account or debit card to offer services to your customers.

      -

      Q: How much commission can I earn with Jio POS Plus?

      -

      A: The commission rate varies depending on the type of service you offer and the plan or voucher you activate. You can check the commission details in the app or contact Jio customer care for more information.

      -

      Q: Can I use Jio POS Plus without internet connection?

      -

      A: No, you need an active internet connection to use Jio POS Plus. You can use any network provider or Wi-Fi connection for accessing the app.

      -

      Q: How can I contact Jio customer care for any queries or issues related to Jio POS Plus?

      -

      A: You can contact Jio customer care for any queries or issues related to Jio POS Plus by calling 1800-889-9333 or emailing care@jio.com. You can also visit the Jio website or the Jio Partners website for more information.

      -

      Q: Can I use Jio POS Plus on multiple devices?

      -

      A: No, you can only use Jio POS Plus on one device at a time. If you want to use it on another device, you need to log out from the current device and log in to the new device.

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Run 8 Ball Pool APK on Windows 10 PC.md b/spaces/congsaPfin/Manga-OCR/logs/How to Run 8 Ball Pool APK on Windows 10 PC.md deleted file mode 100644 index 652ac264e3c6c29554047c97300920c9a8ecfd11..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Run 8 Ball Pool APK on Windows 10 PC.md +++ /dev/null @@ -1,116 +0,0 @@ -
      -

      How to Play 8 Ball Pool on Windows 10 with APK File

      -

      Do you love playing pool games online? If so, you might have heard of 8 Ball Pool, one of the most popular and addictive pool games on the web. Developed by Miniclip, this game lets you compete with millions of players around the world in various modes and tournaments. You can also customize your cue, table, and avatar, as well as chat with your friends and opponents.

      -

      -

But what if you want to play 8 Ball Pool on your PC or laptop instead of your mobile device? Well, there is a way to do that, and it involves using an apk file. An apk file is a package file format that contains the installation files for Android applications. By loading an apk file into an Android emulator, you can install and run Android apps on your Windows 10 device without any hassle.
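Under the hood, an apk file is simply a ZIP archive, which is why ordinary ZIP tooling can peek inside it. Here is a minimal Kotlin sketch (the path "app.apk" is a placeholder) that lists its entries — you would typically see AndroidManifest.xml, classes.dex, and resource files:

```kotlin
import java.util.zip.ZipFile

// List the contents of an APK; "app.apk" is a placeholder path.
fun main() {
    ZipFile("app.apk").use { zip ->
        for (entry in zip.entries()) {
            println("${entry.name}  (${entry.size} bytes)")
        }
    }
}
```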

      -

      Why would you want to play 8 Ball Pool on Windows 10? There are several benefits, such as:

      -
        -
      • You can enjoy a larger screen and better graphics
      • -
      • You can use a mouse, keyboard, or gamepad for more precise control
      • -
      • You can save your battery life and data usage on your mobile device
      • -
      • You can access more productivity apps and tools on your PC or laptop while playing
      • -
      -

      If you are interested in playing 8 Ball Pool on Windows 10 with an apk file, this article will show you how to do it step by step. We will also give you some tips and tricks to help you master the game and win more coins. Let's get started!

      -

      How to Download and Install 8 Ball Pool APK on Windows 10

      -

      To play 8 Ball Pool on Windows 10 with an apk file, you will need two things: the apk file itself, and an Android emulator. An Android emulator is a software that simulates the Android operating system on your PC or laptop, allowing you to run Android apps on it. There are many Android emulators available online, but we recommend using BlueStacks, as it is one of the most popular and reliable ones.

      -

      Here are the steps to download and install 8 Ball Pool apk on Windows 10 using BlueStacks:

      -
        -
      1. Go to https://www.bluestacks.com/apps/sports/8-ball-pool-on-pc.html and click on the "Download" button to download BlueStacks on your PC or laptop.
      2. -
      3. Once the download is complete, open the installer file and follow the instructions to install BlueStacks on your device.
      4. -
      5. After the installation is done, launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
      6. -
      7. Go to https://www.gameloop.com/game/sports/8-ball-pool-on-pc and click on the "Download" button to download the apk file for 8 Ball Pool.
      8. -
      9. Once the download is complete, locate the apk file on your device and right-click on it. Select "Open with" and choose "BlueStacks" from the list of options.
      10. -
11. BlueStacks will automatically install the apk file on your device. You will see a notification when it is done. (For emulators that expose adb, a command-line alternative is sketched after this list.)
      12. -
      13. You can now launch 8 Ball Pool from the BlueStacks home screen or app drawer.
      14. -
      -
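For emulators that expose an adb endpoint (BlueStacks can, though the port varies by version and setup), sideloading from the command line is an alternative to the "Open with" step. Here is a minimal Kotlin sketch run on the desktop side, assuming the adb tool is on your PATH and 127.0.0.1:5555 is where your emulator listens:

```kotlin
import java.io.File

// Desktop-side sketch; the address 127.0.0.1:5555 is an assumption — check your setup.
fun adbInstall(apk: File) {
    ProcessBuilder("adb", "connect", "127.0.0.1:5555")
        .inheritIO().start().waitFor()
    ProcessBuilder("adb", "install", "-r", apk.absolutePath) // -r replaces an existing install
        .inheritIO().start().waitFor()
}
```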

      Congratulations! You have successfully installed 8 Ball Pool apk on Windows 10 using BlueStacks. Now you can enjoy playing this amazing pool game on your PC or laptop.

      -


      -

      How to Play 8 Ball Pool on Windows 10

      -

      Now that you have installed 8 Ball Pool apk on Windows 10, you might be wondering how to play it. Don't worry, it's very easy and fun. Here are the basic rules and objectives of the game:

      -
• The game is played on a pool table with 16 balls: one white cue ball, seven solid-colored balls, seven striped balls, and one black 8 ball.
• The goal of the game is to pocket all your balls (either solid or striped) and then pocket the 8 ball before your opponent does.
• You can choose to play either in 1-on-1 mode, where you compete with another player online, or in tournament mode, where you play against multiple players in a bracket format.
• You can also play in different cities, each with its own entry fee and prize pool. The higher the stakes, the more challenging the opponents.
• To start the game, you need to break the rack by hitting the cue ball with your cue stick. You can adjust the angle and power of your shot by dragging the mouse or using the arrow keys.
• If you pocket a ball on the break, you get to choose whether you want to play with solid or striped balls. If you don't pocket a ball, your opponent gets to choose.
• On each turn, you need to hit the cue ball with your cue stick and try to pocket one of your balls. You can aim and shoot by dragging the mouse or using the arrow keys.
• If you pocket one of your balls, you get another turn. If you miss or pocket the wrong ball, your turn ends and your opponent gets to play.
• If you pocket the 8 ball before pocketing all your balls, you lose the game. If you pocket the 8 ball after pocketing all your balls, you win the game. (The sketch after this list turns these outcomes into code.)
• If you scratch (pocket the cue ball or hit it off the table), you lose your turn and your opponent gets to place the cue ball anywhere on the table.
      -
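To see how the win, loss, and turn-change rules above fit together, here is a small sketch of the decision logic for one shot. This is our own simplified model for illustration, not code from the game, and it ignores fouls other than scratching:

```python
def resolve_shot(pocketed: set, my_group: str, group_balls_left: int,
                 scratched: bool) -> str:
    """Decide the outcome of one shot under the rules listed above.

    pocketed: what went in this shot, e.g. {"solid"}, {"8"}, or set()
    my_group: "solid" or "striped"
    group_balls_left: how many of your group remain on the table
    scratched: True if the cue ball was pocketed or left the table
    """
    if "8" in pocketed:
        # The 8 ball wins only once your whole group is already cleared.
        return "you win" if group_balls_left == 0 else "you lose"
    if scratched:
        return "turn ends; opponent places the cue ball anywhere"
    if my_group in pocketed:
        return "you shoot again"
    return "turn ends"

print(resolve_shot({"solid"}, "solid", 5, scratched=False))  # you shoot again
print(resolve_shot({"8"}, "striped", 0, scratched=False))    # you win
```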

      These are the basic rules of 8 Ball Pool, but there are also some additional features and modes that make the game more interesting and fun. Here are some of them:

      -
• You can customize your cue, table, and avatar by buying them with coins or cash. Coins are earned by winning games or tournaments, while cash is bought with real money or earned by completing offers. Different cues and tables have different attributes, such as power, aim, spin, and time.
• You can chat with your friends and opponents by using preset messages or emojis. You can also send and receive gifts, such as coins, cues, or boxes.
• You can level up by earning experience points (XP) from playing games or tournaments. The higher your level, the more cities and modes you can unlock.
• You can join a club or create your own by paying a fee. Clubs are groups of players who can chat, play, and compete together. You can also participate in club events and leagues to win rewards and trophies.
• You can play mini-games, such as Spin & Win, Scratch & Win, Hi-Lo, and Lucky Shot, to win coins, cash, cues, boxes, or tickets. Tickets are used to enter special tournaments with bigger prizes.
      -

      As you can see, there is a lot to do and enjoy in 8 Ball Pool. But how can you improve your skills and win more coins? Here are some tips and tricks that might help:

      -
• Practice. The best way to get better at anything is to practice. You can practice offline by playing against the computer or online by playing against random players. You can also watch replays of your games or other players' games to learn from their mistakes and strategies.
• Plan ahead. Before making a shot, think about where you want the cue ball and your next ball to go. Try to avoid leaving yourself in a difficult position or giving your opponent an easy shot. Use spin and power wisely to control the cue ball.
• Use hints. If you are not sure what to do next, you can use hints by clicking on the light bulb icon at the bottom right corner of the screen. Hints will show you the best possible shot for your current situation. However, hints are limited and cost coins, so use them sparingly.
• Challenge yourself. Don't be afraid to play against higher-level players or enter higher-stake tournaments. You might lose some coins, but you will also learn a lot and improve your skills. You can also challenge your friends or club members to friendly matches and see who is the best.
• Have fun. The most important thing is to enjoy the game and have fun. Don't get too frustrated or angry if you lose or make a mistake. Remember, it's just a game, and you can always try again. Be respectful and friendly to your opponents and don't cheat or hack the game.
      -

      Conclusion

      -

      8 Ball Pool is a great game that you can play on your Windows 10 device with an apk file. It is easy to download and install, and it offers a lot of features and modes that will keep you entertained and challenged. You can also customize your cue, table, and avatar, chat with your friends and opponents, join a club, play mini-games, and level up. By following the tips and tricks we shared, you can improve your skills and win more coins. So what are you waiting for? Download 8 Ball Pool apk on Windows 10 today and start playing!

      -

      If you liked this article, please share it with your friends and leave us a comment below. We would love to hear your feedback and suggestions. Thank you for reading!

      -

      FAQs

      -

      Here are some of the most frequently asked questions and answers about 8 Ball Pool apk on Windows 10:

      -

      Q: Is 8 Ball Pool apk safe to download and install?

      -

A: Yes, 8 Ball Pool apk is safe to download and install, as long as you get it from a trusted source, such as https://www.gameloop.com/game/sports/8-ball-pool-on-pc. However, you should always scan any file you download with antivirus software before opening it.

      -

      Q: Do I need an internet connection to play 8 Ball Pool on Windows 10?

      -

      A: Yes, you need an internet connection to play 8 Ball Pool on Windows 10, as it is an online game that requires you to connect with other players. However, you can also play offline against the computer if you want to practice or have no internet access.

      -

      Q: How can I get more coins and cash in 8 Ball Pool?

      -

      A: There are several ways to get more coins and cash in 8 Ball Pool, such as:

      -
• Winning games or tournaments
• Spinning the wheel or scratching the card daily
• Watching video ads or completing offers
• Opening boxes or collecting free gifts
• Buying them with real money
      -

      Q: How can I update 8 Ball Pool apk on Windows 10?

      -

      A: To update 8 Ball Pool apk on Windows 10, you need to download the latest version of the apk file from https://www.gameloop.com/game/sports/8-ball-pool-on-pc and install it over the existing one. You don't need to uninstall the previous version first.

      -

      Q: How can I uninstall 8 Ball Pool apk from Windows 10?

      -

      A: To uninstall 8 Ball Pool apk from Windows 10, you need to open BlueStacks and go to the app drawer. Find the 8 Ball Pool icon and right-click on it. Select "Uninstall" from the menu and confirm your choice. You can also delete the apk file from your device if you want.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Survive the Never-Ending Waves of Evil in Archero - Download the Latest Version APK for Android.md b/spaces/congsaPfin/Manga-OCR/logs/How to Survive the Never-Ending Waves of Evil in Archero - Download the Latest Version APK for Android.md deleted file mode 100644 index 2262e60ddb33ef6fd49b308690a31889efc7f1ed..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/How to Survive the Never-Ending Waves of Evil in Archero - Download the Latest Version APK for Android.md +++ /dev/null @@ -1,134 +0,0 @@ - -

      Archero APK Latest Version: A Fun Action Game with Endless Possibilities

      -

      If you are looking for a fun and challenging action game that will keep you entertained for hours, then you should try Archero APK. Archero APK is a game that turns you into a lone archer who has to face countless enemies and obstacles in different worlds. You will need to use your skills, strategy, and luck to survive and defeat the evil forces. In this article, we will tell you everything you need to know about Archero APK, including what it is, how to download and install it, how to play it, and some tips and tricks to help you master it.

      -

      archero apk latest version


      Download File » https://urlca.com/2uOa2a



      -

      What is Archero APK?

      -

      A brief introduction to the game and its features

      -

Archero APK is a game developed by Habby, a company that specializes in creating casual and fun games for mobile devices. It is one of their most popular games, with over 50 million downloads on the Google Play Store. Archero APK is a roguelike game, which means its levels are randomly generated, so every run plays out differently. You will never get bored of playing it, as there are always new things to discover and explore.

      -

      Archero APK has many features that make it an enjoyable and addictive game. Some of these features are:

      -
• Random and unique skills: You can choose from hundreds of different skills and abilities that will help you in your journey. You can create your own combinations of skills that suit your play style and preferences.
• Beautiful worlds and maps: You can explore different worlds and maps that have their own themes, enemies, and challenges. You will encounter various monsters and bosses that will test your skills and reflexes.
• Powerful equipment: You can equip yourself with different weapons, armors, rings, pets, and other items that will enhance your stats and performance. You can also upgrade your equipment to make them more effective.
• Different heroes and weapons: You can unlock and play with different heroes and weapons that have their own characteristics and abilities. You can switch between them depending on the situation and your strategy.
      -

      How to download and install Archero APK on your Android device

      -

      If you want to play Archero APK on your Android device, you will need to download and install the APK file from a reliable source. An APK file is a package file that contains all the necessary files and data for an Android application. You can download Archero APK from various websites, such as Uptodown or APKCombo. Here are the steps to download and install Archero APK on your Android device:

      -
1. Go to the website where you want to download Archero APK from. For example, you can go to Uptodown or APKCombo.
2. Search for Archero APK or browse through the categories until you find it.
3. Click on the download button or link to start downloading the APK file.
4. Once the download is complete, locate the APK file on your device using a file manager app. (See the checksum sketch after these steps if you want to verify the file first.)
5. Tap on the APK file to start the installation process. You may need to enable the option to install apps from unknown sources in your device settings.
6. Follow the instructions on the screen to complete the installation process.
7. Once the installation is done, you can launch Archero APK from your app drawer or home screen.
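Before step 5, it is worth confirming that the file arrived intact. If the download page publishes a checksum (not every mirror does), you can compare it against a locally computed SHA-256; the file name below is a placeholder:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so large APKs don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print("SHA-256:", sha256_of("archero.apk"))
# Install only if this matches the checksum the site lists for the file.
```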
The benefits of playing Archero APK

-

      Playing Archero APK can bring you many benefits, both for your entertainment and your mental health. Some of the benefits of playing Archero APK are:

      -
• It can improve your concentration and focus: Archero APK requires you to pay attention to your surroundings and your enemies, as well as to plan your moves and strategies. This can help you improve your concentration and focus skills, which can be useful in other aspects of your life.
• It can boost your creativity and imagination: Archero APK allows you to create your own combinations of skills and abilities, as well as to explore different worlds and scenarios. This can stimulate your creativity and imagination, as well as your curiosity and sense of adventure.
• It can reduce your stress and anxiety: Archero APK can be a great way to relax and unwind after a long day. You can enjoy the fun and exciting gameplay, as well as the colorful graphics and sound effects. You can also vent your emotions and frustrations by shooting arrows at your enemies.
• It can increase your confidence and self-esteem: Archero APK can challenge you to overcome various difficulties and obstacles, as well as to achieve different goals and rewards. This can make you feel proud of yourself and your achievements, as well as increase your confidence and self-esteem.
      -

      How to play Archero APK

      -

      The basic gameplay and controls

      -

      The gameplay of Archero APK is simple and intuitive, but also challenging and addictive. You will control a hero who has to shoot arrows at the enemies that appear on the screen. You will move around the map using a virtual joystick on the left side of the screen, and you will shoot arrows automatically when you stop moving. You will also have a health bar on the top left corner of the screen, which will decrease when you get hit by an enemy or an obstacle. You will have to avoid getting hit by moving around and dodging the enemy attacks.

      -


      -

      You will start each game with a random skill that will give you an advantage in the battle. You will also get to choose a new skill every time you clear a level. You will have to choose wisely, as some skills may be more useful than others depending on the situation. You can also use coins and gems that you collect during the game to upgrade your hero and equipment, or to buy new items from the shop.

      -

      The different chapters, levels, and enemies

      -

      Archero APK has 20 chapters, each with 50 levels. Each chapter has a different theme, such as forest, desert, dungeon, or castle. Each level has a different layout, with different enemies, obstacles, and traps. You will have to clear all the levels in a chapter to unlock the next one. You will also face a boss at the end of each chapter, which will be more powerful and harder to defeat than the regular enemies.

      -

      The enemies in Archero APK are varied and diverse, ranging from zombies, skeletons, bats, spiders, scorpions, snakes, wolves, goblins, orcs, knights, mages, archers, dragons, demons, and more. Each enemy has its own behavior, attack pattern, speed, and strength. You will have to learn their weaknesses and strengths to defeat them effectively.

      -

      The customization and upgrade options for your hero and equipment

      -

      Archero APK gives you many options to customize and upgrade your hero and equipment. You can choose from different heroes that have different stats and abilities. For example, some heroes may have more health or attack power than others, or some may have special skills that can heal you or deal more damage to the enemies. You can also switch between different weapons that have different effects and ranges. For example, some weapons may shoot faster or farther than others, or some may have special effects that can freeze or burn the enemies.

      -

      You can also equip yourself with different armors that can protect you from certain types of damage or give you extra benefits. For example, some armors may reduce the damage from ranged attacks or increase your critical chance. You can also wear different rings that can boost your stats or give you special abilities. For example, some rings may increase your attack speed or give you a chance to summon a pet that can help you in the battle.

      -

      You can upgrade your hero and equipment using coins and scrolls that you collect during the game. Upgrading your hero will increase their stats and unlock new skills. Upgrading your equipment will increase their effectiveness and power.

      -

      Tips and tricks for Archero APK

      -

      How to choose the best skills and abilities for your hero

      -

      One of the most important aspects of Archero APK is choosing the best skills and abilities for your hero. There are hundreds of different skills and abilities that you can choose from, and each one has its own advantages and disadvantages. You will have to consider your play style, your hero, your weapon, your enemies, and your environment when choosing your skills and abilities. Here are some general tips and tricks for choosing the best skills and abilities for your hero:

      -
• Choose skills and abilities that complement each other: You can create powerful combinations of skills and abilities that can enhance your performance and damage output. For example, you can combine skills that increase your attack speed, your critical chance, your piercing ability, and your elemental damage to create a devastating barrage of arrows. (The sketch after this list puts rough numbers on this.)
• Choose skills and abilities that suit your hero: You can choose skills and abilities that match the characteristics and abilities of your hero. For example, if your hero has a high health or defense stat, you can choose skills that increase your survivability or healing. If your hero has a special skill that can deal a lot of damage or stun the enemies, you can choose skills that increase its effectiveness or cooldown.
• Choose skills and abilities that suit your weapon: You can choose skills and abilities that match the effects and range of your weapon. For example, if your weapon has a long range or a wide spread, you can choose skills that increase your accuracy or range. If your weapon has a special effect that can freeze or burn the enemies, you can choose skills that increase its duration or damage.
• Choose skills and abilities that suit your enemies: You can choose skills and abilities that counter the strengths and weaknesses of your enemies. For example, if your enemies have a lot of health or armor, you can choose skills that increase your damage or penetration. If your enemies have a lot of speed or mobility, you can choose skills that slow them down or immobilize them.
• Choose skills and abilities that suit your environment: You can choose skills and abilities that take advantage of the environment and obstacles in the map. For example, if there are walls or pillars in the map, you can choose skills that bounce off them or go through them. If there are water or fire sources in the map, you can choose skills that create or use them.
      -
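As a rough illustration of why such combinations matter, here is a toy damage model. The numbers are invented for the example and are not taken from the game; it simply shows how attack speed, critical chance, and elemental bonuses multiply together:

```python
from dataclasses import dataclass

@dataclass
class Build:
    base_damage: float       # damage per arrow before bonuses
    attacks_per_sec: float   # after any attack-speed skills
    crit_chance: float       # 0.0 to 1.0
    crit_multiplier: float   # damage factor on a critical hit
    elemental_bonus: float   # e.g. 0.3 for +30% elemental damage

    def dps(self) -> float:
        """Expected damage per second for this combination of skills."""
        avg_hit = self.base_damage * (1 + self.crit_chance * (self.crit_multiplier - 1))
        return avg_hit * (1 + self.elemental_bonus) * self.attacks_per_sec

plain = Build(100, 1.0, 0.0, 2.0, 0.0)
stacked = Build(100, 1.5, 0.3, 2.0, 0.3)  # speed, crit and elemental picked together
print(plain.dps(), stacked.dps())         # 100.0 vs about 253.5: the bonuses multiply
```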

      How to avoid and dodge enemy attacks

      -

      Another important aspect of Archero APK is avoiding and dodging enemy attacks. You will have to be alert and agile to dodge the enemy attacks, as they can deal a lot of damage to you and reduce your health. Here are some tips and tricks for avoiding and dodging enemy attacks:

      -
• Learn the enemy attack patterns: You can observe the enemy behavior and movement to predict their attack patterns. You can also look at the indicators on the screen to see when they are about to attack or where they are aiming. You can use this information to anticipate their attacks and dodge them accordingly.
• Move constantly: You can move around the map using the virtual joystick to avoid staying in one place for too long. This will make you harder to hit by the enemy attacks, as well as help you find better positions and angles to shoot back at them.
• Use the obstacles: You can use the obstacles in the map to block or deflect the enemy attacks. You can hide behind walls or pillars to avoid getting hit by projectiles or beams. You can also use objects like barrels or crates to explode or knock back the enemies.
• Use your skills and abilities: You can use some of the skills and abilities that you have chosen to avoid or dodge enemy attacks. For example, you can use skills that increase your speed, teleportation, invisibility, invincibility, shield, or healing to escape from danger or recover from damage.
      -

      How to use the environment and obstacles to your advantage

      -

      The last aspect of Archero APK that we will cover is using the environment and obstacles to your advantage. You will have to be smart and creative to use the environment and obstacles in the map to enhance your performance and damage output. Here are some tips and tricks for using the environment and obstacles to your advantage:

      -
• Use the elements: You can use the elements in the map to create or amplify elemental damage. For example, you can shoot arrows through water sources to create water arrows that can freeze the enemies. You can also shoot arrows through fire sources to create fire arrows that can burn the enemies.
• Use the explosions: You can use the explosions in the map to deal massive damage to multiple enemies at once. For example, you can shoot arrows at barrels or crates that contain explosives to make them explode near the enemies. You can also shoot arrows at gas tanks or pipes that leak gas to create fireballs near the enemies.
• Use the ricochets: You can use the ricochets in the map to hit multiple enemies with one arrow. For example, you can shoot arrows at walls or pillars that can bounce off them and hit the enemies behind them. You can also shoot arrows at metal objects that can reflect them and hit the enemies from different angles.
• Use the traps: You can use the traps in the map to harm or hinder the enemies. For example, you can shoot arrows at spikes or saws that can impale or cut the enemies. You can also shoot arrows at switches or buttons that can activate or deactivate traps that can affect the enemies.
      -

      Conclusion

      -

      Archero APK is a fun and exciting action game that will keep you hooked for hours. You will have to use your skills, strategy, and luck to survive and defeat the evil forces that await you in different worlds. You will also have to customize and upgrade your hero and equipment to make them more powerful and effective. You will also have to use the environment and obstacles to your advantage to create or amplify your damage output. Archero APK is a game that will challenge you, entertain you, and reward you with endless possibilities.

      -

      If you are ready to embark on this amazing adventure, then download Archero APK today and start playing. You will not regret it!

      -

      FAQs

      -

      What is the latest version of Archero APK?

      -

      The latest version of Archero APK is 3.1.2, which was released on June 15, 2023. This version includes new features, such as a new hero, a new chapter, a new event, and bug fixes.

      -

      Is Archero APK safe to download and install?

      -

      Yes, Archero APK is safe to download and install, as long as you download it from a reliable source, such as Uptodown or APKCombo. These websites scan the APK files for viruses and malware before uploading them. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain harmful or malicious content.

      -

      How can I get more coins and gems in Archero APK?

      -

      You can get more coins and gems in Archero APK by playing the game regularly and completing the levels, chapters, and events. You can also get more coins and gems by watching ads, spinning the lucky wheel, opening chests, completing achievements, or using promo codes. You can also buy more coins and gems with real money if you want to support the developers or speed up your progress.

      -

      How can I unlock more heroes and weapons in Archero APK?

      -

      You can unlock more heroes and weapons in Archero APK by collecting their shards or pieces. You can get these shards or pieces by playing the game, opening chests, completing events, or buying them with coins or gems. You will need a certain number of shards or pieces to unlock a hero or a weapon. You can also upgrade your heroes and weapons by using more shards or pieces.

      -

      How can I contact the developers of Archero APK?

      -

      You can contact the developers of Archero APK by sending them an email at archero@habby.fun. You can also follow them on their social media accounts, such as Facebook, Instagram, Twitter, or YouTube. You can also join their official Discord server to chat with other players and get updates on the game.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/QuickBooks Enterprise 2019 Download Manage Your Business with Ease and Efficiency.md b/spaces/congsaPfin/Manga-OCR/logs/QuickBooks Enterprise 2019 Download Manage Your Business with Ease and Efficiency.md deleted file mode 100644 index 75a291b0c06fb1e4d76f385daab6b27ae85fbf04..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/QuickBooks Enterprise 2019 Download Manage Your Business with Ease and Efficiency.md +++ /dev/null @@ -1,104 +0,0 @@ - -

      How to Download QuickBooks Enterprise 2019

      -

      QuickBooks Enterprise is a powerful accounting software that helps you manage your business finances, inventory, payroll, and more. It is designed for small and medium-sized businesses that need advanced features and functionality. In this article, we will show you how to download QuickBooks Enterprise 2019, the latest version of the software, and how to upgrade from an older version if you already have QuickBooks Desktop.

      -

      What is QuickBooks Enterprise 2019?

      -

      QuickBooks Enterprise 2019 is the newest version of QuickBooks Desktop Enterprise, which was released in September 2018. It offers several improvements and enhancements over the previous versions, such as:

      -

      download quickbooks enterprise 2019


      DOWNLOAD ✓✓✓ https://urlca.com/2uOcdg



      -
• Newly improved sales order management system
• Multiple company files with consolidated reports
• QuickBooks control prices
• Two company files at one place in one go
• Ability to track employees, more than 10,000 customers and inventory items
• Different user roles: up to 14
• Access from anywhere with enhanced reporting tools
      -

      QuickBooks Enterprise 2019 also comes with different industry-specific editions, such as manufacturing and wholesale, contractor, nonprofit, retail, professional services, and accountant. These editions offer customized capabilities, reports, and features designed for your company type and industry.

      -

      System requirements for QuickBooks Enterprise 2019

      -

      Before you download and install QuickBooks Enterprise 2019, you need to make sure that your computer meets the minimum system requirements for the software. Here are the system requirements for QuickBooks Enterprise 2019:

      -
• Windows 10 (64 bit), or Windows 11, update/version supported by Microsoft. Windows 8.1 and Linux are not supported.
• Server: Windows Server 2012 (or R2), 2016, 2019, or 2022 (Regular or Small Business Server)
• 2.4 GHz processor
• Client RAM: 8 GB RAM; 16 GB recommended
• Server RAM (for multi-user support): 8 GB (5 users); 12 GB (10 users); 16 GB (15 users); 20 GB (20+ users) (see the helper after this list)
• 2.5 GB disk space recommended (additional space required for data files); Solid State Drive (SSD) recommended for optimal performance
• Enterprise subscriptions, payroll and online features require Internet access
• QuickBooks Desktop App is included with Desktop subscriptions. Must be installed on a camera-enabled mobile device using Android 6.0 or iOS 12 or later. Product registration required
• Optimized for 1280×1024 screen resolution or higher. Supports one Workstation Monitor, plus up to 2 extended monitors. Optimized for Default DPI settings.
• Integration with other software: Microsoft Word and Excel integration requires Office 2013-2021, or Microsoft 365 (32 and 64 bit); E-mail Estimates, Invoices and other forms with Microsoft Outlook 2013-2019, Microsoft 365, Gmail™, and Outlook.com®, other SMTP-supporting e-mail clients; Integration with QuickBooks POS 19.0; Transfer data from Quicken 2016-
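The server RAM requirement above scales in tiers with the number of users. A small helper makes the mapping explicit; the list only quotes exact seat counts, so the behavior between tiers is our interpretation:

```python
def recommended_server_ram_gb(users: int) -> int:
    """Server RAM tiers for multi-user support, per the requirements list above."""
    if users <= 5:
        return 8
    if users <= 10:
        return 12
    if users <= 15:
        return 16
    return 20  # 20 or more users

for seats in (5, 10, 15, 20, 30):
    print(seats, "users ->", recommended_server_ram_gb(seats), "GB")
```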

        How to download QuickBooks Enterprise 2019?

        -

        If you want to download QuickBooks Enterprise 2019, you need to follow these steps:

        -

        Step 1: Go to the Downloads & Updates page

        -

        The first step is to go to the Downloads & Updates page on the Intuit website. This is where you can find the latest version of QuickBooks Desktop Enterprise and other products. You can also access this page from your QuickBooks account or from the Help menu in the software.

        -

        Step 2: Select your country, product, and version

        -

        The next step is to select your country, product, and version from the drop-down menus on the Downloads & Updates page. For example, if you are in the United States, you need to select United States (US) as your country, QuickBooks Desktop Enterprise as your product, and 2019 as your version. Then, click on the Search button to find the download link for QuickBooks Enterprise 2019.

        -


        -

        Step 3: Download the installation file

        -

        The third step is to download the installation file for QuickBooks Enterprise 2019. You can do this by clicking on the Download button next to the product name. The file size is about 700 MB, so it may take some time depending on your internet speed. You can also choose to download a trial version of QuickBooks Enterprise 2019 if you want to test it before buying it.

        -

        Step 4: Install QuickBooks Enterprise 2019 on your computer

        -

        The final step is to install QuickBooks Enterprise 2019 on your computer. You can do this by double-clicking on the downloaded file and following the on-screen instructions. You will need to agree to the license agreement, enter your product and license numbers, and choose your installation type (express or custom). You will also need to activate QuickBooks Enterprise 2019 after installing it by signing in with your Intuit account or creating one if you don't have one.

        -

        How to upgrade to QuickBooks Enterprise 2019?

        -

        If you already have an older version of QuickBooks Desktop Enterprise, such as 2018 or 2017, you can upgrade to QuickBooks Enterprise 2019 by following these steps:

        -

        Step 1: Check your current version of QuickBooks

        -

        The first step is to check your current version of QuickBooks Desktop Enterprise. You can do this by opening the software and pressing F2 on your keyboard. This will open the Product Information window, where you can see your product name, version, release, and license number. You can also see if you have any updates available for your current version by clicking on the Update Now button.

        -

        Step 2: Back up your company file

        -

        The second step is to back up your company file before upgrading to QuickBooks Enterprise 2019. This is a precautionary measure in case something goes wrong during the upgrade process. You can back up your company file by going to the File menu and selecting Back Up Company > Create Local Backup. You can choose where to save your backup file and how often to back up automatically.

        -

        Step 3: Uninstall your old version of QuickBooks

        -

        The third step is to uninstall your old version of QuickBooks Desktop Enterprise from your computer. You can do this by going to the Control Panel and selecting Programs and Features. Then, find QuickBooks Desktop Enterprise in the list of programs and click on Uninstall/Change. Follow the prompts to complete the uninstallation process.

        -

        Step 4: Install QuickBooks Enterprise 2019 on your computer

        -

        The fourth step is to install QuickBooks Enterprise 2019 on your computer. You can do this by following the same steps as described above for downloading and installing QuickBooks Enterprise 2019.

        -

        Step 5: Restore your company file

        -

        The final step is to restore your company file after upgrading to QuickBooks Enterprise 2019. You can do this by opening QuickBooks Enterprise 2019 and selecting Open or Restore Company from the File menu. Then, choose Restore a backup copy and browse for your backup file that you created earlier. Follow the instructions to restore your company file and update it to the new version.

        -

        Conclusion

        -

In this article, we have shown you how to download QuickBooks Enterprise 2019, the latest version of QuickBooks Desktop Enterprise, and how to upgrade from an older version if you already have QuickBooks Desktop. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

        Here are some FAQs that you may find useful:

        -

        FAQs

        -
          -
1. What are the benefits of QuickBooks Enterprise 2019 over other versions of QuickBooks?

          QuickBooks Enterprise 2019 offers several benefits over other versions of QuickBooks, such as:

          -
• It can handle more data, transactions, users, and inventory items than other versions of QuickBooks
• It offers industry-specific features and reports for different types of businesses
• It has advanced capabilities for managing sales orders, pricing, inventory, and reporting
• It allows you to access your data from anywhere with enhanced security and performance
• It includes a subscription to QuickBooks Desktop App, which lets you access your data from your mobile device
          -
2. How much does QuickBooks Enterprise 2019 cost?

QuickBooks Enterprise 2019 is subscription-based software that requires an annual or monthly payment. The cost depends on the number of users, the industry edition, and the hosting option that you choose. You can check the current pricing and plans on the QuickBooks Enterprise website.

          -
3. How can I get support for QuickBooks Enterprise 2019?

          If you need support for QuickBooks Enterprise 2019, you can contact the QuickBooks Enterprise customer service team by phone, chat, or email. You can also visit the QuickBooks Enterprise support website, where you can find articles, videos, webinars, and community forums to help you with your questions and issues.

          -
4. How can I learn more about QuickBooks Enterprise 2019?

          If you want to learn more about QuickBooks Enterprise 2019, you can check out the QuickBooks Enterprise resource center, where you can find guides, tutorials, tips, and tricks to help you get the most out of the software. You can also sign up for free training sessions and webinars that cover various topics and features of QuickBooks Enterprise 2019.

          -
5. How can I get a free trial of QuickBooks Enterprise 2019?

          If you want to try QuickBooks Enterprise 2019 before buying it, you can get a free trial of the software for 30 days. You can download the trial version from the Downloads & Updates page, where you can also find the installation instructions and system requirements. You can use the trial version with your existing company file or create a new one.

          -

        -
        -
        \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Survive the Dangers of a Pirate Island with Last Pirate Island Survival 2 MOD APK.md b/spaces/congsaPfin/Manga-OCR/logs/Survive the Dangers of a Pirate Island with Last Pirate Island Survival 2 MOD APK.md deleted file mode 100644 index e932ae2737beabcc461f055a0705aea180b71ee0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Survive the Dangers of a Pirate Island with Last Pirate Island Survival 2 MOD APK.md +++ /dev/null @@ -1,137 +0,0 @@ - -

        Last Pirate Island Survival 2 Mod APK: A Guide for Beginners

        -

        Do you love adventure games that test your survival skills and creativity? Do you want to experience the thrill of living on a deserted island full of dangers and mysteries? If yes, then you should try Last Pirate Island Survival 2, a popular game that has millions of fans around the world. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, and how to play it. We will also show you how to use the mod apk version of the game, which gives you unlimited money and other advantages. So, let's get started!

        -

        What is Last Pirate Island Survival 2?

        -

        A brief introduction to the game

        -

        Last Pirate Island Survival 2 is a sequel to the original Last Pirate game, which was released in 2020. It is an adventure game that puts you in the role of a pirate who has been shipwrecked on a mysterious island. Your goal is to survive by exploring the island, gathering resources, crafting tools and weapons, building shelters, fighting enemies, and uncovering secrets. The game has realistic graphics, immersive sound effects, and dynamic weather conditions that make you feel like you are really on the island.

        -

        last pirate island survival 2 mod apk


        Download File ⚹⚹⚹ https://urlca.com/2uO84p



        -

        The main features of the game

        -

        Some of the main features of Last Pirate Island Survival 2 are:

        -
• A large open-world island with different biomes, such as forests, beaches, caves, mountains, and volcanoes.
• A variety of resources to collect, such as wood, stone, metal, food, water, and treasure.
• A crafting system that allows you to create hundreds of items, such as axes, swords, bows, arrows, spears, guns, bombs, traps, boats, and more.
• A building system that lets you construct your own base, with walls, floors, roofs, doors, windows, furniture, and decorations.
• A combat system that challenges you to fight against different enemies, such as zombies, skeletons, cannibals, wild animals, and other pirates.
• A quest system that gives you tasks to complete and rewards you with coins and gems.
• A customization system that enables you to change your appearance and outfit.
• A multiplayer mode that allows you to play with your friends online or join other players' islands.
        -

        The benefits of using the mod apk version

        -

        If you want to enjoy the game without any limitations or restrictions, you should use the mod apk version of Last Pirate Island Survival 2. The mod apk version is a modified version of the original game that gives you some extra benefits, such as:

        -
• Unlimited money: You can get unlimited coins and gems in the game, which you can use to buy anything you want from the shop or upgrade your items.
• Immortality: You can become invincible in the game, which means you won't die from hunger, thirst, injuries, or attacks.
• Free crafting: You can craft any item in the game without needing any resources or materials.
• No ads: You can play the game without any annoying ads interrupting your gameplay.
        -

        How to download and install Last Pirate Island Survival 2 Mod APK?

        -

        The requirements for downloading the mod apk

        -

        Before you download and install the mod apk, you need to make sure that your device meets the following requirements:

        -
• Android version: 4.4 or higher (the sketch after this list shows how to query this with adb)
• Storage space: At least 200 MB of free space
• Internet connection: Required for downloading and playing online
• Permission: Allow installation from unknown sources
        -
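You can check the first two requirements from a computer before sideloading. This is a rough sketch, assuming the adb tool is installed on the PC and USB debugging is enabled on the phone; the output format of df varies between Android versions:

```python
import subprocess

def adb_shell(*args: str) -> str:
    """Run a shell command on the first connected Android device."""
    result = subprocess.run(["adb", "shell", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

version = adb_shell("getprop", "ro.build.version.release")
print("Android version:", version)   # the game needs 4.4 or higher
print(adb_shell("df", "/data"))      # look for at least 200 MB of free space
```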

        The steps for installing the mod apk

        -

        After you have checked the requirements, you can follow these steps to download and install the mod apk:

        -
1. Click on this link to download the mod apk file: [Last Pirate Island Survival 2 Mod APK]
2. Wait for the download to finish and then locate the file in your device's file manager.
3. Tap on the file and select "Install". If you see a warning message, click on "Settings" and enable the option to install from unknown sources.
4. Wait for the installation to complete and then launch the game from your home screen or app drawer.
5. Enjoy playing Last Pirate Island Survival 2 Mod APK with unlimited money and immortality!
        -

        The precautions for using the mod apk

        -

        While using the mod apk can be fun and convenient, you should also be aware of some potential risks and drawbacks, such as:

        -
• The mod apk may not be compatible with some devices or versions of the game.
• The mod apk may cause some glitches or errors in the game.
• The mod apk may be detected by the game's anti-cheat system and result in a ban or suspension of your account.
• The mod apk may contain viruses or malware that can harm your device or data.
        -

        Therefore, you should use the mod apk at your own risk and discretion. We are not responsible for any damage or loss caused by using the mod apk. We also recommend that you backup your data before using the mod apk and that you uninstall it if you encounter any problems.

        -

        How to play Last Pirate Island Survival 2 Mod APK?

        -

        The basic gameplay mechanics

        -

        Last Pirate Island Survival 2 Mod APK is a game that combines elements of survival, exploration, crafting, building, combat, and questing. The game starts with you waking up on a deserted island after a shipwreck. You have nothing but a few items in your inventory and a map of the island. Your first task is to find a safe place to build your shelter. You can use the map to navigate around the island and discover different locations, such as forests, beaches, caves, mountains, and volcanoes. You can also use the compass to find your direction and the clock to check the time of day.

        -

        To survive on the island, you need to manage your health, hunger, thirst, and stamina. You can find food and water sources on the island, such as fruits, vegetables, fish, meat, wells, rivers, and lakes. You can also cook food over a fire or boil water in a pot to make it safer to consume. You can craft tools and weapons from resources you gather on the island, such as wood, stone, metal, leather, cloth, and more. You can use these items to chop trees, mine rocks, hunt animals, fight enemies, and more. You can also build your own base from materials you collect on the island, such as planks, bricks, nails, ropes, and more. You can design your base according to your preference and add furniture and decorations to make it more comfortable and cozy.
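To picture how those meters interact over time, here is a toy model of the survival loop. The decay rates are invented for the example and are not the game's actual numbers:

```python
from dataclasses import dataclass

@dataclass
class Survivor:
    health: float = 100.0
    hunger: float = 100.0   # 0 means starving
    thirst: float = 100.0   # 0 means dehydrated
    stamina: float = 100.0

    def tick(self, minutes: float) -> None:
        """Advance time: hunger and thirst drain, and empty meters hurt health."""
        self.hunger = max(0.0, self.hunger - 0.5 * minutes)
        self.thirst = max(0.0, self.thirst - 0.8 * minutes)  # thirst drains faster
        if self.hunger == 0.0 or self.thirst == 0.0:
            self.health = max(0.0, self.health - 1.0 * minutes)

    def eat(self, food_value: float) -> None:
        self.hunger = min(100.0, self.hunger + food_value)

p = Survivor()
p.tick(90)   # an in-game hour and a half without supplies
print(p)     # thirst is now the most urgent meter to refill
```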

        -


        -

        The tips and tricks for surviving on the island

        -

        To make your life easier on the island, here are some tips and tricks that you should follow:

        -
• Always keep an eye on your status bars and replenish them when they are low.
• Always carry some food and water with you when you go out exploring.
• Always equip yourself with a weapon and armor when you encounter enemies.
• Always save your game before you enter a dangerous area or start a quest.
• Always check your inventory and storage for items that you can use or craft.
• Always look for treasure chests and hidden secrets on the island.
• Always use the mod apk features wisely and sparingly.
        -

        The challenges and rewards of the game

        -

        Last Pirate Island Survival 2 Mod APK is a game that offers many challenges and rewards for players who are willing to take risks and explore new possibilities. Some of the challenges and rewards of the game are:

| Challenge | Reward |
| --- | --- |
| Fighting against different enemies, such as zombies, skeletons, cannibals, wild animals, and other pirates. | Gaining experience points, coins, gems, loot, and trophies. |
| Completing quests given by NPCs or the game itself. | Gaining coins, gems, items, and reputation. |
| Exploring the island and discovering new locations, secrets, and events. | Gaining knowledge, resources, and achievements. |
| Crafting and building your own items and base. | Gaining satisfaction, creativity, and protection. |
| Playing with your friends online or joining other players' islands. | Gaining fun, cooperation, and competition. |
        -

        Conclusion

        -

        A summary of the main points of the article

        -

        In conclusion, Last Pirate Island Survival 2 Mod APK is a game that offers you a unique and exciting adventure on a deserted island. You can explore the island, gather resources, craft items, build your base, fight enemies, complete quests, and more. You can also use the mod apk version of the game to get unlimited money and immortality. However, you should also be careful of the potential risks and drawbacks of using the mod apk. We hope that this article has helped you learn more about this game and how to play it. If you are ready to embark on your pirate adventure, download and install Last Pirate Island Survival 2 Mod APK now!

        -

        A call to action for the readers

        -

        If you liked this article, please share it with your friends and leave a comment below. We would love to hear your feedback and suggestions. Also, if you have any questions or problems regarding Last Pirate Island Survival 2 Mod APK, feel free to ask us in the comment section. We will try our best to help you out. Thank you for reading and happy gaming!

        -

        FAQs

        -

        Q: Is Last Pirate Island Survival 2 Mod APK safe to use?

        -

        A: Last Pirate Island Survival 2 Mod APK is generally safe to use as long as you download it from a trusted source and scan it with an antivirus program before installing it. However, you should also be aware of the possible risks and drawbacks of using the mod apk, such as compatibility issues, glitches, errors, bans, suspensions, viruses, or malware.

        -

        Q: How can I update Last Pirate Island Survival 2 Mod APK?

        -

        A: To update Last Pirate Island Survival 2 Mod APK, you need to download the latest version of the mod apk file from the same source that you downloaded it from before. Then, you need to uninstall the previous version of the mod apk from your device and install the new version following the same steps as before.

        -

        Q: How can I play Last Pirate Island Survival 2 Mod APK offline?

        -

        A: To play Last Pirate Island Survival 2 Mod APK offline, you need to turn off your internet connection before launching the game. However, you should note that some features of the game may not work properly or at all without an internet connection, such as multiplayer mode or online events.

        -

        Q: How can I play Last Pirate Island Survival 2 Mod APK with my friends?

        -

        A: To play Last Pirate Island Survival 2 Mod APK with your friends, you need to have an internet connection and a Facebook account. Then, you need to log in to your Facebook account in the game and invite your friends to join your island or join their islands. You can also chat with them in the game and cooperate or compete with them in various activities.

        -

        Q: How can I get more coins and gems in Last Pirate Island Survival 2 Mod APK?

        -

        A: To get more coins and gems in Last Pirate Island Survival 2 Mod APK, you can use the mod apk features that give you unlimited money. Alternatively, you can also earn coins and gems by completing quests, fighting enemies, finding treasure chests, and watching ads. You can also buy coins and gems with real money if you want to support the game developers.

        -
        -
        \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Download Antiwpa Windows Xp Sp2instmankl HOT.md b/spaces/contluForse/HuggingGPT/assets/Download Antiwpa Windows Xp Sp2instmankl HOT.md deleted file mode 100644 index 83e61158832d51864644daef8fe7d88b91ceb78e..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download Antiwpa Windows Xp Sp2instmankl HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Download Antiwpa Windows Xp Sp2instmankl


        Download File ✯✯✯ https://ssurll.com/2uzxYf



        -
-
        -
        -
        -

        diff --git a/spaces/contluForse/HuggingGPT/assets/Dxcpl.exe ((TOP)) Download Windows 7 32-bit 1358.md b/spaces/contluForse/HuggingGPT/assets/Dxcpl.exe ((TOP)) Download Windows 7 32-bit 1358.md deleted file mode 100644 index 22a0477d8838ce25ebbf961c1c3c7898d4da9dda..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Dxcpl.exe ((TOP)) Download Windows 7 32-bit 1358.md +++ /dev/null @@ -1,8 +0,0 @@ -

        dxcpl.exe download windows 7 32-bit 1358


        Download File https://ssurll.com/2uzwE4



        -
        -13 Jun 2021 - Ajab Gazabb Love English Subtitl - Dxcpl.exe Download Windows 7 32-bit 1358 - Power ISO 5.6 FINAL Keys keyG[Lz0 CORE] By Senzati - Download Windows 10 64-bit 16 GB - Microsoft Security Essentials (Privacy Protection) v1.0.1226.0 (x86) KeyGen By Hugo Leymo
        -
        -
        -

        diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/deform_conv.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/deform_conv.py deleted file mode 100644 index 3de3aae1e7b2258360aef3ad9eb3a351f080f10f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/deform_conv.py +++ /dev/null @@ -1,405 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from annotator.mmpkg.mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext('_ext', [ - 'deform_conv_forward', 'deform_conv_backward_input', - 'deform_conv_backward_parameters' -]) - - -class DeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, - input, - offset, - weight, - stride, - padding, - dilation, - groups, - deform_groups, - bias=False, - im2col_step=32): - return g.op( - 'mmcv::MMCVDeformConv2d', - input, - offset, - weight, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups, - bias_i=bias, - im2col_step_i=im2col_step) - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=False, - im2col_step=32): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - assert bias is False, 'Only support bias is False.' - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.im2col_step = im2col_step - - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
- input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty( - DeformConv2dFunction._output_size(ctx, input, weight)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % - cur_im2col_step) == 0, 'im2col step must divide batchsize' - ext_module.deform_conv_forward( - input, - weight, - offset, - output, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % cur_im2col_step - ) == 0, 'batch size must be divisible by im2col_step' - - grad_output = grad_output.contiguous() - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - ext_module.deform_conv_backward_input( - input, - offset, - grad_output, - grad_input, - grad_offset, - weight, - ctx.bufs_[0], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - ext_module.deform_conv_backward_parameters( - input, - offset, - grad_output, - grad_weight, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - scale=1, - im2col_step=cur_im2col_step) - - return grad_input, grad_offset, grad_weight, \ - None, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -deform_conv2d = DeformConv2dFunction.apply - - -class DeformConv2d(nn.Module): - r"""Deformable 2D convolution. - - Applies a deformable 2D convolution over an input signal composed of - several input planes. DeformConv2d was described in the paper - `Deformable Convolutional Networks - `_ - - Note: - The argument ``im2col_step`` was added in version 1.3.17, which means - number of samples processed by the ``im2col_cuda_kernel`` per call. - It enables users to define ``batch_size`` and ``im2col_step`` more - flexibly and solved `issue mmcv#1440 - `_. - - Args: - in_channels (int): Number of channels in the input image. 
- out_channels (int): Number of channels produced by the convolution. - kernel_size(int, tuple): Size of the convolving kernel. - stride(int, tuple): Stride of the convolution. Default: 1. - padding (int or tuple): Zero-padding added to both sides of the input. - Default: 0. - dilation (int or tuple): Spacing between kernel elements. Default: 1. - groups (int): Number of blocked connections from input. - channels to output channels. Default: 1. - deform_groups (int): Number of deformable group partitions. - bias (bool): If True, adds a learnable bias to the output. - Default: False. - im2col_step (int): Number of samples processed by im2col_cuda_kernel - per call. It will work when ``batch_size`` > ``im2col_step``, but - ``batch_size`` must be divisible by ``im2col_step``. Default: 32. - `New in version 1.3.17.` - """ - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='DeformConv2d') - def __init__(self, - in_channels: int, - out_channels: int, - kernel_size: Union[int, Tuple[int, ...]], - stride: Union[int, Tuple[int, ...]] = 1, - padding: Union[int, Tuple[int, ...]] = 0, - dilation: Union[int, Tuple[int, ...]] = 1, - groups: int = 1, - deform_groups: int = 1, - bias: bool = False, - im2col_step: int = 32) -> None: - super(DeformConv2d, self).__init__() - - assert not bias, \ - f'bias={bias} is not supported in DeformConv2d.' - assert in_channels % groups == 0, \ - f'in_channels {in_channels} cannot be divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} cannot be divisible by groups \ - {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - self.im2col_step = im2col_step - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - # only weight, no bias - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // self.groups, - *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - # switch the initialization of `self.weight` to the standard kaiming - # method described in `Delving deep into rectifiers: Surpassing - # human-level performance on ImageNet classification` - He, K. et al. - # (2015), using a uniform distribution - nn.init.kaiming_uniform_(self.weight, nonlinearity='relu') - - def forward(self, x: Tensor, offset: Tensor) -> Tensor: - """Deformable Convolutional forward function. - - Args: - x (Tensor): Input feature, shape (B, C_in, H_in, W_in) - offset (Tensor): Offset for deformable convolution, shape - (B, deform_groups*kernel_size[0]*kernel_size[1]*2, - H_out, W_out), H_out, W_out are equal to the output's. - - An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Returns: - Tensor: Output of the layer. 
- """ - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) < - self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0) - offset = offset.contiguous() - out = deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - - pad_w].contiguous() - return out - - def __repr__(self): - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels},\n' - s += f'out_channels={self.out_channels},\n' - s += f'kernel_size={self.kernel_size},\n' - s += f'stride={self.stride},\n' - s += f'padding={self.padding},\n' - s += f'dilation={self.dilation},\n' - s += f'groups={self.groups},\n' - s += f'deform_groups={self.deform_groups},\n' - # bias is not supported in DeformConv2d. - s += 'bias=False)' - return s - - -@CONV_LAYERS.register_module('DCN') -class DeformConv2dPack(DeformConv2d): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, DeformConvPack loads previous benchmark models. 
- if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/furthest_point_sample.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/furthest_point_sample.py deleted file mode 100644 index 374b7a878f1972c183941af28ba1df216ac1a60f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/furthest_point_sample.py +++ /dev/null @@ -1,83 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'furthest_point_sampling_forward', - 'furthest_point_sampling_with_dist_forward' -]) - - -class FurthestPointSampling(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_xyz: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_xyz (Tensor): (B, N, 3) where N > num_points. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. - """ - assert points_xyz.is_contiguous() - - B, N = points_xyz.size()[:2] - output = torch.cuda.IntTensor(B, num_points) - temp = torch.cuda.FloatTensor(B, N).fill_(1e10) - - ext_module.furthest_point_sampling_forward( - points_xyz, - temp, - output, - b=B, - n=N, - m=num_points, - ) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -class FurthestPointSamplingWithDist(Function): - """Uses iterative furthest point sampling to select a set of features whose - corresponding points have the furthest distance.""" - - @staticmethod - def forward(ctx, points_dist: torch.Tensor, - num_points: int) -> torch.Tensor: - """ - Args: - points_dist (Tensor): (B, N, N) Distance between each point pair. - num_points (int): Number of points in the sampled set. - - Returns: - Tensor: (B, num_points) indices of the sampled points. 
- """ - assert points_dist.is_contiguous() - - B, N, _ = points_dist.size() - output = points_dist.new_zeros([B, num_points], dtype=torch.int32) - temp = points_dist.new_zeros([B, N]).fill_(1e10) - - ext_module.furthest_point_sampling_with_dist_forward( - points_dist, temp, output, b=B, n=N, m=num_points) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(output) - return output - - @staticmethod - def backward(xyz, a=None): - return None, None - - -furthest_point_sample = FurthestPointSampling.apply -furthest_point_sample_with_dist = FurthestPointSamplingWithDist.apply diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/default_constructor.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/default_constructor.py deleted file mode 100644 index bdd7803289d6d70240977fa243d7f4432ccde8f8..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/runner/default_constructor.py +++ /dev/null @@ -1,44 +0,0 @@ -from .builder import RUNNER_BUILDERS, RUNNERS - - -@RUNNER_BUILDERS.register_module() -class DefaultRunnerConstructor: - """Default constructor for runners. - - Custom existing `Runner` like `EpocBasedRunner` though `RunnerConstructor`. - For example, We can inject some new properties and functions for `Runner`. - - Example: - >>> from annotator.mmpkg.mmcv.runner import RUNNER_BUILDERS, build_runner - >>> # Define a new RunnerReconstructor - >>> @RUNNER_BUILDERS.register_module() - >>> class MyRunnerConstructor: - ... def __init__(self, runner_cfg, default_args=None): - ... if not isinstance(runner_cfg, dict): - ... raise TypeError('runner_cfg should be a dict', - ... f'but got {type(runner_cfg)}') - ... self.runner_cfg = runner_cfg - ... self.default_args = default_args - ... - ... def __call__(self): - ... runner = RUNNERS.build(self.runner_cfg, - ... default_args=self.default_args) - ... # Add new properties for existing runner - ... runner.my_name = 'my_runner' - ... runner.my_function = lambda self: print(self.my_name) - ... ... - >>> # build your runner - >>> runner_cfg = dict(type='EpochBasedRunner', max_epochs=40, - ... constructor='MyRunnerConstructor') - >>> runner = build_runner(runner_cfg) - """ - - def __init__(self, runner_cfg, default_args=None): - if not isinstance(runner_cfg, dict): - raise TypeError('runner_cfg should be a dict', - f'but got {type(runner_cfg)}') - self.runner_cfg = runner_cfg - self.default_args = default_args - - def __call__(self): - return RUNNERS.build(self.runner_cfg, default_args=self.default_args) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/openpose/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/openpose/__init__.py deleted file mode 100644 index 102434701a14621a66149fbabcf224b1bb726a6c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/openpose/__init__.py +++ /dev/null @@ -1,100 +0,0 @@ -# Openpose -# Original from CMU https://github.com/CMU-Perceptual-Computing-Lab/openpose -# 2nd Edited by https://github.com/Hzzone/pytorch-openpose -# 3rd Edited by ControlNet -# 4th Edited by ControlNet (added face and correct hands) - -import os -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - -import torch -import numpy as np -from . 
import util -from .body import Body -from .hand import Hand -from .face import Face -from annotator.util import annotator_ckpts_path - - -body_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/body_pose_model.pth" -hand_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/hand_pose_model.pth" -face_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/facenet.pth" - - -def draw_pose(pose, H, W, draw_body=True, draw_hand=True, draw_face=True): - bodies = pose['bodies'] - faces = pose['faces'] - hands = pose['hands'] - candidate = bodies['candidate'] - subset = bodies['subset'] - canvas = np.zeros(shape=(H, W, 3), dtype=np.uint8) - - if draw_body: - canvas = util.draw_bodypose(canvas, candidate, subset) - - if draw_hand: - canvas = util.draw_handpose(canvas, hands) - - if draw_face: - canvas = util.draw_facepose(canvas, faces) - - return canvas - - -class OpenposeDetector: - def __init__(self): - body_modelpath = os.path.join(annotator_ckpts_path, "body_pose_model.pth") - hand_modelpath = os.path.join(annotator_ckpts_path, "hand_pose_model.pth") - face_modelpath = os.path.join(annotator_ckpts_path, "facenet.pth") - - if not os.path.exists(body_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(body_model_path, model_dir=annotator_ckpts_path) - - if not os.path.exists(hand_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(hand_model_path, model_dir=annotator_ckpts_path) - - if not os.path.exists(face_modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(face_model_path, model_dir=annotator_ckpts_path) - - self.body_estimation = Body(body_modelpath) - self.hand_estimation = Hand(hand_modelpath) - self.face_estimation = Face(face_modelpath) - - def __call__(self, oriImg, hand_and_face=False, return_is_index=False): - oriImg = oriImg[:, :, ::-1].copy() - H, W, C = oriImg.shape - with torch.no_grad(): - candidate, subset = self.body_estimation(oriImg) - hands = [] - faces = [] - if hand_and_face: - # Hand - hands_list = util.handDetect(candidate, subset, oriImg) - for x, y, w, is_left in hands_list: - peaks = self.hand_estimation(oriImg[y:y+w, x:x+w, :]).astype(np.float32) - if peaks.ndim == 2 and peaks.shape[1] == 2: - peaks[:, 0] = np.where(peaks[:, 0] < 1e-6, -1, peaks[:, 0] + x) / float(W) - peaks[:, 1] = np.where(peaks[:, 1] < 1e-6, -1, peaks[:, 1] + y) / float(H) - hands.append(peaks.tolist()) - # Face - faces_list = util.faceDetect(candidate, subset, oriImg) - for x, y, w in faces_list: - heatmaps = self.face_estimation(oriImg[y:y+w, x:x+w, :]) - peaks = self.face_estimation.compute_peaks_from_heatmaps(heatmaps).astype(np.float32) - if peaks.ndim == 2 and peaks.shape[1] == 2: - peaks[:, 0] = np.where(peaks[:, 0] < 1e-6, -1, peaks[:, 0] + x) / float(W) - peaks[:, 1] = np.where(peaks[:, 1] < 1e-6, -1, peaks[:, 1] + y) / float(H) - faces.append(peaks.tolist()) - if candidate.ndim == 2 and candidate.shape[1] == 4: - candidate = candidate[:, :2] - candidate[:, 0] /= float(W) - candidate[:, 1] /= float(H) - bodies = dict(candidate=candidate.tolist(), subset=subset.tolist()) - pose = dict(bodies=bodies, hands=hands, faces=faces) - if return_is_index: - return pose - else: - return draw_pose(pose, H, W) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py 
b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py deleted file mode 100644 index 988d9adf2f289ef223bd1c680a5ae1d3387f0269..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/bricks/generalized_attention.py +++ /dev/null @@ -1,412 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..utils import kaiming_init -from .registry import PLUGIN_LAYERS - - -@PLUGIN_LAYERS.register_module() -class GeneralizedAttention(nn.Module): - """GeneralizedAttention module. - - See 'An Empirical Study of Spatial Attention Mechanisms in Deep Networks' - (https://arxiv.org/abs/1711.07971) for details. - - Args: - in_channels (int): Channels of the input feature map. - spatial_range (int): The spatial range. -1 indicates no spatial range - constraint. Default: -1. - num_heads (int): The head number of empirical_attention module. - Default: 9. - position_embedding_dim (int): The position embedding dimension. - Default: -1. - position_magnitude (int): A multiplier acting on coord difference. - Default: 1. - kv_stride (int): The feature stride acting on key/value feature map. - Default: 2. - q_stride (int): The feature stride acting on query feature map. - Default: 1. - attention_type (str): A binary indicator string for indicating which - items in generalized empirical_attention module are used. - Default: '1111'. - - - '1000' indicates 'query and key content' (appr - appr) item, - - '0100' indicates 'query content and relative position' - (appr - position) item, - - '0010' indicates 'key content only' (bias - appr) item, - - '0001' indicates 'relative position only' (bias - position) item. 
- """ - - _abbr_ = 'gen_attention_block' - - def __init__(self, - in_channels, - spatial_range=-1, - num_heads=9, - position_embedding_dim=-1, - position_magnitude=1, - kv_stride=2, - q_stride=1, - attention_type='1111'): - - super(GeneralizedAttention, self).__init__() - - # hard range means local range for non-local operation - self.position_embedding_dim = ( - position_embedding_dim - if position_embedding_dim > 0 else in_channels) - - self.position_magnitude = position_magnitude - self.num_heads = num_heads - self.in_channels = in_channels - self.spatial_range = spatial_range - self.kv_stride = kv_stride - self.q_stride = q_stride - self.attention_type = [bool(int(_)) for _ in attention_type] - self.qk_embed_dim = in_channels // num_heads - out_c = self.qk_embed_dim * num_heads - - if self.attention_type[0] or self.attention_type[1]: - self.query_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.query_conv.kaiming_init = True - - if self.attention_type[0] or self.attention_type[2]: - self.key_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=out_c, - kernel_size=1, - bias=False) - self.key_conv.kaiming_init = True - - self.v_dim = in_channels // num_heads - self.value_conv = nn.Conv2d( - in_channels=in_channels, - out_channels=self.v_dim * num_heads, - kernel_size=1, - bias=False) - self.value_conv.kaiming_init = True - - if self.attention_type[1] or self.attention_type[3]: - self.appr_geom_fc_x = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_x.kaiming_init = True - - self.appr_geom_fc_y = nn.Linear( - self.position_embedding_dim // 2, out_c, bias=False) - self.appr_geom_fc_y.kaiming_init = True - - if self.attention_type[2]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - appr_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.appr_bias = nn.Parameter(appr_bias_value) - - if self.attention_type[3]: - stdv = 1.0 / math.sqrt(self.qk_embed_dim * 2) - geom_bias_value = -2 * stdv * torch.rand(out_c) + stdv - self.geom_bias = nn.Parameter(geom_bias_value) - - self.proj_conv = nn.Conv2d( - in_channels=self.v_dim * num_heads, - out_channels=in_channels, - kernel_size=1, - bias=True) - self.proj_conv.kaiming_init = True - self.gamma = nn.Parameter(torch.zeros(1)) - - if self.spatial_range >= 0: - # only works when non local is after 3*3 conv - if in_channels == 256: - max_len = 84 - elif in_channels == 512: - max_len = 42 - - max_len_kv = int((max_len - 1.0) / self.kv_stride + 1) - local_constraint_map = np.ones( - (max_len, max_len, max_len_kv, max_len_kv), dtype=np.int) - for iy in range(max_len): - for ix in range(max_len): - local_constraint_map[ - iy, ix, - max((iy - self.spatial_range) // - self.kv_stride, 0):min((iy + self.spatial_range + - 1) // self.kv_stride + - 1, max_len), - max((ix - self.spatial_range) // - self.kv_stride, 0):min((ix + self.spatial_range + - 1) // self.kv_stride + - 1, max_len)] = 0 - - self.local_constraint_map = nn.Parameter( - torch.from_numpy(local_constraint_map).byte(), - requires_grad=False) - - if self.q_stride > 1: - self.q_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.q_stride) - else: - self.q_downsample = None - - if self.kv_stride > 1: - self.kv_downsample = nn.AvgPool2d( - kernel_size=1, stride=self.kv_stride) - else: - self.kv_downsample = None - - self.init_weights() - - def get_position_embedding(self, - h, - w, - h_kv, - w_kv, - q_stride, - kv_stride, - device, - dtype, - feat_dim, - wave_length=1000): - # the default type 
of Tensor is float32, leading to type mismatch - # in fp16 mode. Cast it to support fp16 mode. - h_idxs = torch.linspace(0, h - 1, h).to(device=device, dtype=dtype) - h_idxs = h_idxs.view((h, 1)) * q_stride - - w_idxs = torch.linspace(0, w - 1, w).to(device=device, dtype=dtype) - w_idxs = w_idxs.view((w, 1)) * q_stride - - h_kv_idxs = torch.linspace(0, h_kv - 1, h_kv).to( - device=device, dtype=dtype) - h_kv_idxs = h_kv_idxs.view((h_kv, 1)) * kv_stride - - w_kv_idxs = torch.linspace(0, w_kv - 1, w_kv).to( - device=device, dtype=dtype) - w_kv_idxs = w_kv_idxs.view((w_kv, 1)) * kv_stride - - # (h, h_kv, 1) - h_diff = h_idxs.unsqueeze(1) - h_kv_idxs.unsqueeze(0) - h_diff *= self.position_magnitude - - # (w, w_kv, 1) - w_diff = w_idxs.unsqueeze(1) - w_kv_idxs.unsqueeze(0) - w_diff *= self.position_magnitude - - feat_range = torch.arange(0, feat_dim / 4).to( - device=device, dtype=dtype) - - dim_mat = torch.Tensor([wave_length]).to(device=device, dtype=dtype) - dim_mat = dim_mat**((4. / feat_dim) * feat_range) - dim_mat = dim_mat.view((1, 1, -1)) - - embedding_x = torch.cat( - ((w_diff / dim_mat).sin(), (w_diff / dim_mat).cos()), dim=2) - - embedding_y = torch.cat( - ((h_diff / dim_mat).sin(), (h_diff / dim_mat).cos()), dim=2) - - return embedding_x, embedding_y - - def forward(self, x_input): - num_heads = self.num_heads - - # use empirical_attention - if self.q_downsample is not None: - x_q = self.q_downsample(x_input) - else: - x_q = x_input - n, _, h, w = x_q.shape - - if self.kv_downsample is not None: - x_kv = self.kv_downsample(x_input) - else: - x_kv = x_input - _, _, h_kv, w_kv = x_kv.shape - - if self.attention_type[0] or self.attention_type[1]: - proj_query = self.query_conv(x_q).view( - (n, num_heads, self.qk_embed_dim, h * w)) - proj_query = proj_query.permute(0, 1, 3, 2) - - if self.attention_type[0] or self.attention_type[2]: - proj_key = self.key_conv(x_kv).view( - (n, num_heads, self.qk_embed_dim, h_kv * w_kv)) - - if self.attention_type[1] or self.attention_type[3]: - position_embed_x, position_embed_y = self.get_position_embedding( - h, w, h_kv, w_kv, self.q_stride, self.kv_stride, - x_input.device, x_input.dtype, self.position_embedding_dim) - # (n, num_heads, w, w_kv, dim) - position_feat_x = self.appr_geom_fc_x(position_embed_x).\ - view(1, w, w_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - # (n, num_heads, h, h_kv, dim) - position_feat_y = self.appr_geom_fc_y(position_embed_y).\ - view(1, h, h_kv, num_heads, self.qk_embed_dim).\ - permute(0, 3, 1, 2, 4).\ - repeat(n, 1, 1, 1, 1) - - position_feat_x /= math.sqrt(2) - position_feat_y /= math.sqrt(2) - - # accelerate for saliency only - if (np.sum(self.attention_type) == 1) and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy = torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, h_kv * w_kv) - - h = 1 - w = 1 - else: - # (n, num_heads, h*w, h_kv*w_kv), query before key, 540mb for - if not self.attention_type[0]: - energy = torch.zeros( - n, - num_heads, - h, - w, - h_kv, - w_kv, - dtype=x_input.dtype, - device=x_input.device) - - # attention_type[0]: appr - appr - # attention_type[1]: appr - position - # attention_type[2]: bias - appr - # attention_type[3]: bias - position - if self.attention_type[0] or self.attention_type[2]: - if self.attention_type[0] and self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - energy = torch.matmul(proj_query + 
appr_bias, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[0]: - energy = torch.matmul(proj_query, proj_key).\ - view(n, num_heads, h, w, h_kv, w_kv) - - elif self.attention_type[2]: - appr_bias = self.appr_bias.\ - view(1, num_heads, 1, self.qk_embed_dim).\ - repeat(n, 1, 1, 1) - - energy += torch.matmul(appr_bias, proj_key).\ - view(n, num_heads, 1, 1, h_kv, w_kv) - - if self.attention_type[1] or self.attention_type[3]: - if self.attention_type[1] and self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, 1, self.qk_embed_dim) - - proj_query_reshape = (proj_query + geom_bias).\ - view(n, num_heads, h, w, self.qk_embed_dim) - - energy_x = torch.matmul( - proj_query_reshape.permute(0, 1, 3, 2, 4), - position_feat_x.permute(0, 1, 2, 4, 3)) - energy_x = energy_x.\ - permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul( - proj_query_reshape, - position_feat_y.permute(0, 1, 2, 4, 3)) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[1]: - proj_query_reshape = proj_query.\ - view(n, num_heads, h, w, self.qk_embed_dim) - proj_query_reshape = proj_query_reshape.\ - permute(0, 1, 3, 2, 4) - position_feat_x_reshape = position_feat_x.\ - permute(0, 1, 2, 4, 3) - position_feat_y_reshape = position_feat_y.\ - permute(0, 1, 2, 4, 3) - - energy_x = torch.matmul(proj_query_reshape, - position_feat_x_reshape) - energy_x = energy_x.permute(0, 1, 3, 2, 4).unsqueeze(4) - - energy_y = torch.matmul(proj_query_reshape, - position_feat_y_reshape) - energy_y = energy_y.unsqueeze(5) - - energy += energy_x + energy_y - - elif self.attention_type[3]: - geom_bias = self.geom_bias.\ - view(1, num_heads, self.qk_embed_dim, 1).\ - repeat(n, 1, 1, 1) - - position_feat_x_reshape = position_feat_x.\ - view(n, num_heads, w*w_kv, self.qk_embed_dim) - - position_feat_y_reshape = position_feat_y.\ - view(n, num_heads, h * h_kv, self.qk_embed_dim) - - energy_x = torch.matmul(position_feat_x_reshape, geom_bias) - energy_x = energy_x.view(n, num_heads, 1, w, 1, w_kv) - - energy_y = torch.matmul(position_feat_y_reshape, geom_bias) - energy_y = energy_y.view(n, num_heads, h, 1, h_kv, 1) - - energy += energy_x + energy_y - - energy = energy.view(n, num_heads, h * w, h_kv * w_kv) - - if self.spatial_range >= 0: - cur_local_constraint_map = \ - self.local_constraint_map[:h, :w, :h_kv, :w_kv].\ - contiguous().\ - view(1, 1, h*w, h_kv*w_kv) - - energy = energy.masked_fill_(cur_local_constraint_map, - float('-inf')) - - attention = F.softmax(energy, 3) - - proj_value = self.value_conv(x_kv) - proj_value_reshape = proj_value.\ - view((n, num_heads, self.v_dim, h_kv * w_kv)).\ - permute(0, 1, 3, 2) - - out = torch.matmul(attention, proj_value_reshape).\ - permute(0, 1, 3, 2).\ - contiguous().\ - view(n, self.v_dim * self.num_heads, h, w) - - out = self.proj_conv(out) - - # output is downsampled, upsample back to input size - if self.q_downsample is not None: - out = F.interpolate( - out, - size=x_input.shape[2:], - mode='bilinear', - align_corners=False) - - out = self.gamma * out + x_input - return out - - def init_weights(self): - for m in self.modules(): - if hasattr(m, 'kaiming_init') and m.kaiming_init: - kaiming_init( - m, - mode='fan_in', - nonlinearity='leaky_relu', - bias=0, - distribution='uniform', - a=1) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/builder.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/builder.py deleted file 
mode 100644 index 7567316c566bd3aca6d8f65a84b00e9e890948a7..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/builder.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ..runner import Sequential -from ..utils import Registry, build_from_cfg - - -def build_model_from_cfg(cfg, registry, default_args=None): - """Build a PyTorch model from config dict(s). Different from - ``build_from_cfg``, if cfg is a list, a ``nn.Sequential`` will be built. - - Args: - cfg (dict, list[dict]): The config of modules, is is either a config - dict or a list of config dicts. If cfg is a list, a - the built modules will be wrapped with ``nn.Sequential``. - registry (:obj:`Registry`): A registry the module belongs to. - default_args (dict, optional): Default arguments to build the module. - Defaults to None. - - Returns: - nn.Module: A built nn module. - """ - if isinstance(cfg, list): - modules = [ - build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg - ] - return Sequential(*modules) - else: - return build_from_cfg(cfg, registry, default_args) - - -MODELS = Registry('model', build_func=build_model_from_cfg) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/ios/README.md b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/ios/README.md deleted file mode 100644 index 7b8eb29feaa21e67814b035dbd5c5fb2c62a4151..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/ios/README.md +++ /dev/null @@ -1,105 +0,0 @@ -# Tensorflow Lite MiDaS iOS Example - -### Requirements - -- XCode 11.0 or above -- iOS 12.0 or above, [iOS 14 breaks the NPU Delegate](https://github.com/tensorflow/tensorflow/issues/43339) -- TensorFlow 2.4.0, TensorFlowLiteSwift -> 0.0.1-nightly - -## Quick Start with a MiDaS Example - -MiDaS is a neural network to compute depth from a single image. It uses TensorFlowLiteSwift / C++ libraries on iOS. The code is written in Swift. - -Paper: https://arxiv.org/abs/1907.01341 - -> Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer -> René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, Vladlen Koltun - -### Install TensorFlow - -Set default python version to python3: - -``` -echo 'export PATH=/usr/local/opt/python/libexec/bin:$PATH' >> ~/.zshenv -echo 'alias python=python3' >> ~/.zshenv -echo 'alias pip=pip3' >> ~/.zshenv -``` - -Install TensorFlow - -```shell -pip install tensorflow -``` - -### Install TensorFlowLiteSwift via Cocoapods - -Set required TensorFlowLiteSwift version in the file (`0.0.1-nightly` is recommended): https://github.com/isl-org/MiDaS/blob/master/mobile/ios/Podfile#L9 - -Install: brew, ruby, cocoapods - -``` -ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" -brew install mc rbenv ruby-build -sudo gem install cocoapods -``` - - -The TensorFlowLiteSwift library is available in [Cocoapods](https://cocoapods.org/), to integrate it to our project, we can run in the root directory of the project: - -```ruby -pod install -``` - -Now open the `Midas.xcworkspace` file in XCode, select your iPhone device (XCode->Product->Destination->iPhone) and launch it (cmd + R). 
If everything works well, you should see a real-time depth map from your camera. - -### Model - -The TensorFlow (TFlite) model `midas.tflite` is in the folder `/Midas/Model` - - -To use another model, you should convert it from a TensorFlow saved-model to a TFlite model (so that it can be deployed): - -```python -saved_model_export_dir = "./saved_model" -converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_export_dir) -tflite_model = converter.convert() -open("model.tflite", "wb").write(tflite_model) -``` - -(A fuller conversion-plus-verification sketch appears after the license at the end of this file.) - -### Setup XCode - -* Open the `.xcworkspace` directory in XCode - -* Press on your ProjectName (left-top corner) -> change the Bundle Identifier to `com.midas.tflite-npu` or something like this (it should be unique) - -* Select your Developer Team (you should be signed in with your AppleID) - -* Connect your iPhone (if you want to run it on a real device instead of the simulator), select your iPhone device (XCode->Product->Destination->iPhone) - -* Click in XCode: Product -> Run - -* On your iPhone device go to: Settings -> General -> Device Management (or Profiles) -> Apple Development -> Trust Apple Development - ----- - -Original repository: https://github.com/isl-org/MiDaS - - -### Examples: - -| ![photo_2020-09-27_17-43-20](https://user-images.githubusercontent.com/4096485/94367804-9610de80-00e9-11eb-8a23-8b32a6f52d41.jpg) | ![photo_2020-09-27_17-49-22](https://user-images.githubusercontent.com/4096485/94367974-7201cd00-00ea-11eb-8e0a-68eb9ea10f63.jpg) | ![photo_2020-09-27_17-52-30](https://user-images.githubusercontent.com/4096485/94367976-729a6380-00ea-11eb-8ce0-39d3e26dd550.jpg) | ![photo_2020-09-27_17-43-21](https://user-images.githubusercontent.com/4096485/94367807-97420b80-00e9-11eb-9dcd-848ad9e89e03.jpg) | -|---|---|---|---| - -## LICENSE - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE.
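-
-For reference, a minimal end-to-end sketch of the conversion from the Model section above, plus a sanity check with the TFLite interpreter. It assumes TensorFlow 2.x and a SavedModel exported at `./saved_model`; the file name `model.tflite` and the all-zeros dummy input are illustrative assumptions, not part of the original example.
-
-```python
-import numpy as np
-import tensorflow as tf
-
-# Convert the SavedModel to a TFLite flatbuffer and write it to disk.
-converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")  # path is an assumption
-tflite_model = converter.convert()
-with open("model.tflite", "wb") as f:
-    f.write(tflite_model)
-
-# Sanity check: load the flatbuffer and run one inference on a dummy input.
-interpreter = tf.lite.Interpreter(model_path="model.tflite")
-interpreter.allocate_tensors()
-inp = interpreter.get_input_details()[0]
-out = interpreter.get_output_details()[0]
-dummy = np.zeros(inp["shape"], dtype=inp["dtype"])  # all-zeros stand-in for a camera frame
-interpreter.set_tensor(inp["index"], dummy)
-interpreter.invoke()
-depth = interpreter.get_tensor(out["index"])
-print("output shape:", depth.shape)  # the inverse-depth map predicted by the model
-```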
diff --git a/spaces/cpwan/RLOR-TSP/models/nets/attention_model/encoder.py b/spaces/cpwan/RLOR-TSP/models/nets/attention_model/encoder.py deleted file mode 100644 index e3349dac23d587656f639ab344e0951008a577ad..0000000000000000000000000000000000000000 --- a/spaces/cpwan/RLOR-TSP/models/nets/attention_model/encoder.py +++ /dev/null @@ -1,128 +0,0 @@ -from torch import nn - -from ...nets.attention_model.multi_head_attention import MultiHeadAttentionProj - - -class SkipConnection(nn.Module): - def __init__(self, module): - super(SkipConnection, self).__init__() - self.module = module - - def forward(self, input): - return input + self.module(input) - - -class Normalization(nn.Module): - def __init__(self, embedding_dim): - super(Normalization, self).__init__() - - self.normalizer = nn.BatchNorm1d(embedding_dim, affine=True) - - def forward(self, input): - # out = self.normalizer(input.permute(0,2,1)).permute(0,2,1) # slightly different 3e-6 - # return out - return self.normalizer(input.view(-1, input.size(-1))).view(input.size()) - - -class MultiHeadAttentionLayer(nn.Sequential): - r""" - A layer with attention mechanism and normalization. - - For an embedding :math:`\pmb{x}`, - - .. math:: - \pmb{h} = \mathrm{MultiHeadAttentionLayer}(\pmb{x}) - - The following is executed: - - .. math:: - \begin{aligned} - \pmb{x}_0&=\pmb{x}+\mathrm{MultiHeadAttentionProj}(\pmb{x}) \\ - \pmb{x}_1&=\mathrm{BatchNorm}(\pmb{x}_0) \\ - \pmb{x}_2&=\pmb{x}_1+\mathrm{MLP_{\text{2 layers}}}(\pmb{x}_1)\\ - \pmb{h} &=\mathrm{BatchNorm}(\pmb{x}_2) - \end{aligned} - - - - .. seealso:: - The :math:`\mathrm{MultiHeadAttentionProj}` computes the self attention - of the embedding :math:`\pmb{x}`. Check :class:`~.MultiHeadAttentionProj` for details. - - Args: - n_heads : number of heads - embedding_dim : dimension of the query, keys, values - feed_forward_hidden : size of the hidden layer in the MLP - Inputs: inputs - * **inputs**: embeddin :math:`\pmb{x}`. [batch, graph_size, embedding_dim] - Outputs: out - * **out**: the output :math:`\pmb{h}` [batch, graph_size, embedding_dim] - """ - - def __init__( - self, - n_heads, - embedding_dim, - feed_forward_hidden=512, - ): - super(MultiHeadAttentionLayer, self).__init__( - SkipConnection( - MultiHeadAttentionProj( - embedding_dim=embedding_dim, - n_heads=n_heads, - ) - ), - Normalization(embedding_dim), - SkipConnection( - nn.Sequential( - nn.Linear(embedding_dim, feed_forward_hidden), - nn.ReLU(), - nn.Linear(feed_forward_hidden, embedding_dim), - ) - if feed_forward_hidden > 0 - else nn.Linear(embedding_dim, embedding_dim) - ), - Normalization(embedding_dim), - ) - - -class GraphAttentionEncoder(nn.Module): - r""" - Graph attention by self attention on graph nodes. - - For an embedding :math:`\pmb{x}`, repeat ``n_layers`` time: - - .. math:: - \pmb{h} = \mathrm{MultiHeadAttentionLayer}(\pmb{x}) - - .. seealso:: - Check :class:`~.MultiHeadAttentionLayer` for details. - - Args: - n_heads : number of heads - embedding_dim : dimension of the query, keys, values - n_layers : number of :class:`~.MultiHeadAttentionLayer` to iterate. - feed_forward_hidden : size of the hidden layer in the MLP - Inputs: x - * **x**: embeddin :math:`\pmb{x}`. 
[batch, graph_size, embedding_dim] - Outputs: (h, h_mean) - * **h**: the output :math:`\pmb{h}` [batch, graph_size, embedding_dim] - """ - - def __init__(self, n_heads, embed_dim, n_layers, feed_forward_hidden=512): - super(GraphAttentionEncoder, self).__init__() - - self.layers = nn.Sequential( - *( - MultiHeadAttentionLayer(n_heads, embed_dim, feed_forward_hidden) - for _ in range(n_layers) - ) - ) - - def forward(self, x, mask=None): - - assert mask is None, "TODO mask not yet supported!" - - h = self.layers(x) - - return (h, h.mean(dim=1)) diff --git a/spaces/crashedice/signify/signify/gan/models/__init__.py b/spaces/crashedice/signify/signify/gan/models/__init__.py deleted file mode 100644 index f241aa15f5d73882fab05d0a6873e8039459dc90..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/signify/gan/models/__init__.py +++ /dev/null @@ -1,67 +0,0 @@ -"""This package contains modules related to objective functions, optimizations, and network architectures. - -To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel. -You need to implement the following five functions: - -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt). - -- : unpack data from dataset and apply preprocessing. - -- : produce intermediate results. - -- : calculate loss, gradients, and update network weights. - -- : (optionally) add model-specific options and set default options. - -In the function <__init__>, you need to define four lists: - -- self.loss_names (str list): specify the training losses that you want to plot and save. - -- self.model_names (str list): define networks used in our training. - -- self.visual_names (str list): specify the images that you want to display and save. - -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an usage. - -Now you can use the model class by specifying flag '--model dummy'. -See our template model class 'template_model.py' for more details. -""" - -import importlib -from signify.gan.models.base_model import BaseModel - - -def find_model_using_name(model_name): - """Import the module "models/[model_name]_model.py". - - In the file, the class called DatasetNameModel() will - be instantiated. It has to be a subclass of BaseModel, - and it is case-insensitive. - """ - model_filename = "signify.gan.models." + model_name + "_model" - modellib = importlib.import_module(model_filename) - model = None - target_model_name = model_name.replace('_', '') + 'model' - for name, cls in modellib.__dict__.items(): - if name.lower() == target_model_name.lower() \ - and issubclass(cls, BaseModel): - model = cls - - if model is None: - print("In %s.py, there should be a subclass of BaseModel with class name that matches %s in lowercase." % (model_filename, target_model_name)) - exit(0) - - return model - - -def get_option_setter(model_name): - """Return the static method of the model class.""" - model_class = find_model_using_name(model_name) - return model_class.modify_commandline_options - - -def create_model(opt): - """Create a model given the option. - - This function warps the class CustomDatasetDataLoader. 
- This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from models import create_model - >>> model = create_model(opt) - """ - model = find_model_using_name(opt.model) - instance = model(opt) - print("model [%s] was created" % type(instance).__name__) - return instance diff --git a/spaces/crazybber/docker-demo-t5-translation/static/style.css b/spaces/crazybber/docker-demo-t5-translation/static/style.css deleted file mode 100644 index 7b50df8f6904c75f560224034d8aadd76656c6f8..0000000000000000000000000000000000000000 --- a/spaces/crazybber/docker-demo-t5-translation/static/style.css +++ /dev/null @@ -1,45 +0,0 @@ -body { - --text: hsl(0 0% 15%); - padding: 2.5rem; - font-family: sans-serif; - color: var(--text); -} - -body.dark-theme { - --text: hsl(0 0% 90%); - background-color: hsl(223 39% 7%); -} - -main { - max-width: 80rem; - text-align: center; -} - -section { - display: flex; - flex-direction: column; - align-items: center; -} - -a { - color: var(--text); -} - -form { - width: 30rem; - margin: 0 auto; -} - -input { - width: 100%; -} - -button { - cursor: pointer; -} - -.text-gen-output { - min-height: 1.2rem; - margin: 1rem; - border: 0.5px solid grey; -} diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/test_audio2coeff.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/test_audio2coeff.py deleted file mode 100644 index d19f81ba62935baee65216515c5efe3be1aa83f3..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/test_audio2coeff.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import torch -import numpy as np -from scipy.io import savemat, loadmat -from yacs.config import CfgNode as CN -from scipy.signal import savgol_filter - -from Demo_TFR_Pirenderer.src.audio2pose_models.audio2pose import Audio2Pose -from Demo_TFR_Pirenderer.src.audio2exp_models.networks import SimpleWrapperV2 -from Demo_TFR_Pirenderer.src.audio2exp_models.audio2exp import Audio2Exp - -def load_cpk(checkpoint_path, model=None, optimizer=None, device="cpu"): - checkpoint = torch.load(checkpoint_path, map_location=torch.device(device)) - if model is not None: - model.load_state_dict(checkpoint['model']) - if optimizer is not None: - optimizer.load_state_dict(checkpoint['optimizer']) - - return checkpoint['epoch'] - -class Audio2Coeff(): - - def __init__(self, audio2pose_checkpoint, audio2pose_yaml_path, - audio2exp_checkpoint, audio2exp_yaml_path, - wav2lip_checkpoint, device): - #load config - fcfg_pose = open(audio2pose_yaml_path) - cfg_pose = CN.load_cfg(fcfg_pose) - cfg_pose.freeze() - fcfg_exp = open(audio2exp_yaml_path) - cfg_exp = CN.load_cfg(fcfg_exp) - cfg_exp.freeze() - - # load audio2pose_model - self.audio2pose_model = Audio2Pose(cfg_pose, wav2lip_checkpoint, device=device) - self.audio2pose_model = self.audio2pose_model.to(device) - self.audio2pose_model.eval() - for param in self.audio2pose_model.parameters(): - param.requires_grad = False - try: - load_cpk(audio2pose_checkpoint, model=self.audio2pose_model, device=device) - except: - raise Exception("Failed in loading audio2pose_checkpoint") - - # load audio2exp_model - netG = SimpleWrapperV2() - netG = netG.to(device) - for param in netG.parameters(): - netG.requires_grad = False - netG.eval() - try: - load_cpk(audio2exp_checkpoint, model=netG, device=device) - except: - raise Exception("Failed in loading audio2exp_checkpoint") - self.audio2exp_model = Audio2Exp(netG, cfg_exp, device=device, 
prepare_training_loss=False) - self.audio2exp_model = self.audio2exp_model.to(device) - for param in self.audio2exp_model.parameters(): - param.requires_grad = False - self.audio2exp_model.eval() - - self.device = device - - def generate(self, batch, coeff_save_dir, pose_style, ref_pose_coeff_path=None): - - with torch.no_grad(): - #test - results_dict_exp= self.audio2exp_model.test(batch) - exp_pred = results_dict_exp['exp_coeff_pred'] #bs T 64 - - #for class_id in range(1): - #class_id = 0#(i+10)%45 - #class_id = random.randint(0,46) #46 styles can be selected - batch['class'] = torch.LongTensor([pose_style]).to(self.device) - results_dict_pose = self.audio2pose_model.test(batch) - pose_pred = results_dict_pose['pose_pred'] #bs T 6 - - pose_len = pose_pred.shape[1] - if pose_len<13: - pose_len = int((pose_len-1)/2)*2+1 - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), pose_len, 2, axis=1)).to(self.device) - else: - pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), 13, 2, axis=1)).to(self.device) - - coeffs_pred = torch.cat((exp_pred, pose_pred), dim=-1) #bs T 70 - - coeffs_pred_numpy = coeffs_pred[0].clone().detach().cpu().numpy() - - - if ref_pose_coeff_path is not None: - coeffs_pred_numpy = self.using_refpose(coeffs_pred_numpy, ref_pose_coeff_path) - - savemat(os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])), - {'coeff_3dmm': coeffs_pred_numpy}) - - return os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])) - - def using_refpose(self, coeffs_pred_numpy, ref_pose_coeff_path): - num_frames = coeffs_pred_numpy.shape[0] - refpose_coeff_dict = loadmat(ref_pose_coeff_path) - refpose_coeff = refpose_coeff_dict['coeff_3dmm'][:,64:70] - refpose_num_frames = refpose_coeff.shape[0] - if refpose_num_framesa?this[a+this.length]:this[a]:d.call(this)},pushStack:function(a){var b=m.merge(this.constructor(),a);return b.prevObject=this,b.context=this.context,b},each:function(a,b){return m.each(this,a,b)},map:function(a){return this.pushStack(m.map(this,function(b,c){return a.call(b,c,b)}))},slice:function(){return this.pushStack(d.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(a){var b=this.length,c=+a+(0>a?b:0);return this.pushStack(c>=0&&b>c?[this[c]]:[])},end:function(){return this.prevObject||this.constructor(null)},push:f,sort:c.sort,splice:c.splice},m.extend=m.fn.extend=function(){var a,b,c,d,e,f,g=arguments[0]||{},h=1,i=arguments.length,j=!1;for("boolean"==typeof g&&(j=g,g=arguments[h]||{},h++),"object"==typeof g||m.isFunction(g)||(g={}),h===i&&(g=this,h--);i>h;h++)if(null!=(e=arguments[h]))for(d in e)a=g[d],c=e[d],g!==c&&(j&&c&&(m.isPlainObject(c)||(b=m.isArray(c)))?(b?(b=!1,f=a&&m.isArray(a)?a:[]):f=a&&m.isPlainObject(a)?a:{},g[d]=m.extend(j,f,c)):void 0!==c&&(g[d]=c));return g},m.extend({expando:"jQuery"+(l+Math.random()).replace(/\D/g,""),isReady:!0,error:function(a){throw new Error(a)},noop:function(){},isFunction:function(a){return"function"===m.type(a)},isArray:Array.isArray||function(a){return"array"===m.type(a)},isWindow:function(a){return null!=a&&a==a.window},isNumeric:function(a){return!m.isArray(a)&&a-parseFloat(a)>=0},isEmptyObject:function(a){var b;for(b in a)return!1;return!0},isPlainObject:function(a){var b;if(!a||"object"!==m.type(a)||a.nodeType||m.isWindow(a))return!1;try{if(a.constructor&&!j.call(a,"constructor")&&!j.call(a.constructor.prototype,"isPrototypeOf"))return!1}catch(c){return!1}if(k.ownLast)for(b in 
a)return j.call(a,b);for(b in a);return void 0===b||j.call(a,b)},type:function(a){return null==a?a+"":"object"==typeof a||"function"==typeof a?h[i.call(a)]||"object":typeof a},globalEval:function(b){b&&m.trim(b)&&(a.execScript||function(b){a.eval.call(a,b)})(b)},camelCase:function(a){return a.replace(o,"ms-").replace(p,q)},nodeName:function(a,b){return a.nodeName&&a.nodeName.toLowerCase()===b.toLowerCase()},each:function(a,b,c){var d,e=0,f=a.length,g=r(a);if(c){if(g){for(;f>e;e++)if(d=b.apply(a[e],c),d===!1)break}else for(e in a)if(d=b.apply(a[e],c),d===!1)break}else if(g){for(;f>e;e++)if(d=b.call(a[e],e,a[e]),d===!1)break}else for(e in a)if(d=b.call(a[e],e,a[e]),d===!1)break;return a},trim:function(a){return null==a?"":(a+"").replace(n,"")},makeArray:function(a,b){var c=b||[];return null!=a&&(r(Object(a))?m.merge(c,"string"==typeof a?[a]:a):f.call(c,a)),c},inArray:function(a,b,c){var d;if(b){if(g)return g.call(b,a,c);for(d=b.length,c=c?0>c?Math.max(0,d+c):c:0;d>c;c++)if(c in b&&b[c]===a)return c}return-1},merge:function(a,b){var c=+b.length,d=0,e=a.length;while(c>d)a[e++]=b[d++];if(c!==c)while(void 0!==b[d])a[e++]=b[d++];return a.length=e,a},grep:function(a,b,c){for(var d,e=[],f=0,g=a.length,h=!c;g>f;f++)d=!b(a[f],f),d!==h&&e.push(a[f]);return e},map:function(a,b,c){var d,f=0,g=a.length,h=r(a),i=[];if(h)for(;g>f;f++)d=b(a[f],f,c),null!=d&&i.push(d);else for(f in a)d=b(a[f],f,c),null!=d&&i.push(d);return e.apply([],i)},guid:1,proxy:function(a,b){var c,e,f;return"string"==typeof b&&(f=a[b],b=a,a=f),m.isFunction(a)?(c=d.call(arguments,2),e=function(){return a.apply(b||this,c.concat(d.call(arguments)))},e.guid=a.guid=a.guid||m.guid++,e):void 0},now:function(){return+new Date},support:k}),m.each("Boolean Number String Function Array Date RegExp Object Error".split(" "),function(a,b){h["[object "+b+"]"]=b.toLowerCase()});function r(a){var b=a.length,c=m.type(a);return"function"===c||m.isWindow(a)?!1:1===a.nodeType&&b?!0:"array"===c||0===b||"number"==typeof b&&b>0&&b-1 in a}var s=function(a){var b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u="sizzle"+-new Date,v=a.document,w=0,x=0,y=gb(),z=gb(),A=gb(),B=function(a,b){return a===b&&(l=!0),0},C="undefined",D=1<<31,E={}.hasOwnProperty,F=[],G=F.pop,H=F.push,I=F.push,J=F.slice,K=F.indexOf||function(a){for(var b=0,c=this.length;c>b;b++)if(this[b]===a)return b;return-1},L="checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped",M="[\\x20\\t\\r\\n\\f]",N="(?:\\\\.|[\\w-]|[^\\x00-\\xa0])+",O=N.replace("w","w#"),P="\\["+M+"*("+N+")(?:"+M+"*([*^$|!~]?=)"+M+"*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|("+O+"))|)"+M+"*\\]",Q=":("+N+")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|"+P+")*)|.*)\\)|)",R=new RegExp("^"+M+"+|((?:^|[^\\\\])(?:\\\\.)*)"+M+"+$","g"),S=new RegExp("^"+M+"*,"+M+"*"),T=new RegExp("^"+M+"*([>+~]|"+M+")"+M+"*"),U=new RegExp("="+M+"*([^\\]'\"]*?)"+M+"*\\]","g"),V=new RegExp(Q),W=new RegExp("^"+O+"$"),X={ID:new RegExp("^#("+N+")"),CLASS:new RegExp("^\\.("+N+")"),TAG:new RegExp("^("+N.replace("w","w*")+")"),ATTR:new RegExp("^"+P),PSEUDO:new RegExp("^"+Q),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+L+")$","i"),needsContext:new 
RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/^(?:input|select|textarea|button)$/i,Z=/^h\d$/i,$=/^[^{]+\{\s*\[native \w/,_=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ab=/[+~]/,bb=/'|\\/g,cb=new RegExp("\\\\([\\da-f]{1,6}"+M+"?|("+M+")|.)","ig"),db=function(a,b,c){var d="0x"+b-65536;return d!==d||c?b:0>d?String.fromCharCode(d+65536):String.fromCharCode(d>>10|55296,1023&d|56320)};try{I.apply(F=J.call(v.childNodes),v.childNodes),F[v.childNodes.length].nodeType}catch(eb){I={apply:F.length?function(a,b){H.apply(a,J.call(b))}:function(a,b){var c=a.length,d=0;while(a[c++]=b[d++]);a.length=c-1}}}function fb(a,b,d,e){var f,h,j,k,l,o,r,s,w,x;if((b?b.ownerDocument||b:v)!==n&&m(b),b=b||n,d=d||[],!a||"string"!=typeof a)return d;if(1!==(k=b.nodeType)&&9!==k)return[];if(p&&!e){if(f=_.exec(a))if(j=f[1]){if(9===k){if(h=b.getElementById(j),!h||!h.parentNode)return d;if(h.id===j)return d.push(h),d}else if(b.ownerDocument&&(h=b.ownerDocument.getElementById(j))&&t(b,h)&&h.id===j)return d.push(h),d}else{if(f[2])return I.apply(d,b.getElementsByTagName(a)),d;if((j=f[3])&&c.getElementsByClassName&&b.getElementsByClassName)return I.apply(d,b.getElementsByClassName(j)),d}if(c.qsa&&(!q||!q.test(a))){if(s=r=u,w=b,x=9===k&&a,1===k&&"object"!==b.nodeName.toLowerCase()){o=g(a),(r=b.getAttribute("id"))?s=r.replace(bb,"\\$&"):b.setAttribute("id",s),s="[id='"+s+"'] ",l=o.length;while(l--)o[l]=s+qb(o[l]);w=ab.test(a)&&ob(b.parentNode)||b,x=o.join(",")}if(x)try{return I.apply(d,w.querySelectorAll(x)),d}catch(y){}finally{r||b.removeAttribute("id")}}}return i(a.replace(R,"$1"),b,d,e)}function gb(){var a=[];function b(c,e){return a.push(c+" ")>d.cacheLength&&delete b[a.shift()],b[c+" "]=e}return b}function hb(a){return a[u]=!0,a}function ib(a){var b=n.createElement("div");try{return!!a(b)}catch(c){return!1}finally{b.parentNode&&b.parentNode.removeChild(b),b=null}}function jb(a,b){var c=a.split("|"),e=a.length;while(e--)d.attrHandle[c[e]]=b}function kb(a,b){var c=b&&a,d=c&&1===a.nodeType&&1===b.nodeType&&(~b.sourceIndex||D)-(~a.sourceIndex||D);if(d)return d;if(c)while(c=c.nextSibling)if(c===b)return-1;return a?1:-1}function lb(a){return function(b){var c=b.nodeName.toLowerCase();return"input"===c&&b.type===a}}function mb(a){return function(b){var c=b.nodeName.toLowerCase();return("input"===c||"button"===c)&&b.type===a}}function nb(a){return hb(function(b){return b=+b,hb(function(c,d){var e,f=a([],c.length,b),g=f.length;while(g--)c[e=f[g]]&&(c[e]=!(d[e]=c[e]))})})}function ob(a){return a&&typeof a.getElementsByTagName!==C&&a}c=fb.support={},f=fb.isXML=function(a){var b=a&&(a.ownerDocument||a).documentElement;return b?"HTML"!==b.nodeName:!1},m=fb.setDocument=function(a){var b,e=a?a.ownerDocument||a:v,g=e.defaultView;return e!==n&&9===e.nodeType&&e.documentElement?(n=e,o=e.documentElement,p=!f(e),g&&g!==g.top&&(g.addEventListener?g.addEventListener("unload",function(){m()},!1):g.attachEvent&&g.attachEvent("onunload",function(){m()})),c.attributes=ib(function(a){return a.className="i",!a.getAttribute("className")}),c.getElementsByTagName=ib(function(a){return a.appendChild(e.createComment("")),!a.getElementsByTagName("*").length}),c.getElementsByClassName=$.test(e.getElementsByClassName)&&ib(function(a){return a.innerHTML="
        ",a.firstChild.className="i",2===a.getElementsByClassName("i").length}),c.getById=ib(function(a){return o.appendChild(a).id=u,!e.getElementsByName||!e.getElementsByName(u).length}),c.getById?(d.find.ID=function(a,b){if(typeof b.getElementById!==C&&p){var c=b.getElementById(a);return c&&c.parentNode?[c]:[]}},d.filter.ID=function(a){var b=a.replace(cb,db);return function(a){return a.getAttribute("id")===b}}):(delete d.find.ID,d.filter.ID=function(a){var b=a.replace(cb,db);return function(a){var c=typeof a.getAttributeNode!==C&&a.getAttributeNode("id");return c&&c.value===b}}),d.find.TAG=c.getElementsByTagName?function(a,b){return typeof b.getElementsByTagName!==C?b.getElementsByTagName(a):void 0}:function(a,b){var c,d=[],e=0,f=b.getElementsByTagName(a);if("*"===a){while(c=f[e++])1===c.nodeType&&d.push(c);return d}return f},d.find.CLASS=c.getElementsByClassName&&function(a,b){return typeof b.getElementsByClassName!==C&&p?b.getElementsByClassName(a):void 0},r=[],q=[],(c.qsa=$.test(e.querySelectorAll))&&(ib(function(a){a.innerHTML="",a.querySelectorAll("[msallowclip^='']").length&&q.push("[*^$]="+M+"*(?:''|\"\")"),a.querySelectorAll("[selected]").length||q.push("\\["+M+"*(?:value|"+L+")"),a.querySelectorAll(":checked").length||q.push(":checked")}),ib(function(a){var b=e.createElement("input");b.setAttribute("type","hidden"),a.appendChild(b).setAttribute("name","D"),a.querySelectorAll("[name=d]").length&&q.push("name"+M+"*[*^$|!~]?="),a.querySelectorAll(":enabled").length||q.push(":enabled",":disabled"),a.querySelectorAll("*,:x"),q.push(",.*:")})),(c.matchesSelector=$.test(s=o.matches||o.webkitMatchesSelector||o.mozMatchesSelector||o.oMatchesSelector||o.msMatchesSelector))&&ib(function(a){c.disconnectedMatch=s.call(a,"div"),s.call(a,"[s!='']:x"),r.push("!=",Q)}),q=q.length&&new RegExp(q.join("|")),r=r.length&&new RegExp(r.join("|")),b=$.test(o.compareDocumentPosition),t=b||$.test(o.contains)?function(a,b){var c=9===a.nodeType?a.documentElement:a,d=b&&b.parentNode;return a===d||!(!d||1!==d.nodeType||!(c.contains?c.contains(d):a.compareDocumentPosition&&16&a.compareDocumentPosition(d)))}:function(a,b){if(b)while(b=b.parentNode)if(b===a)return!0;return!1},B=b?function(a,b){if(a===b)return l=!0,0;var d=!a.compareDocumentPosition-!b.compareDocumentPosition;return d?d:(d=(a.ownerDocument||a)===(b.ownerDocument||b)?a.compareDocumentPosition(b):1,1&d||!c.sortDetached&&b.compareDocumentPosition(a)===d?a===e||a.ownerDocument===v&&t(v,a)?-1:b===e||b.ownerDocument===v&&t(v,b)?1:k?K.call(k,a)-K.call(k,b):0:4&d?-1:1)}:function(a,b){if(a===b)return l=!0,0;var c,d=0,f=a.parentNode,g=b.parentNode,h=[a],i=[b];if(!f||!g)return a===e?-1:b===e?1:f?-1:g?1:k?K.call(k,a)-K.call(k,b):0;if(f===g)return kb(a,b);c=a;while(c=c.parentNode)h.unshift(c);c=b;while(c=c.parentNode)i.unshift(c);while(h[d]===i[d])d++;return d?kb(h[d],i[d]):h[d]===v?-1:i[d]===v?1:0},e):n},fb.matches=function(a,b){return fb(a,null,null,b)},fb.matchesSelector=function(a,b){if((a.ownerDocument||a)!==n&&m(a),b=b.replace(U,"='$1']"),!(!c.matchesSelector||!p||r&&r.test(b)||q&&q.test(b)))try{var d=s.call(a,b);if(d||c.disconnectedMatch||a.document&&11!==a.document.nodeType)return d}catch(e){}return fb(b,n,null,[a]).length>0},fb.contains=function(a,b){return(a.ownerDocument||a)!==n&&m(a),t(a,b)},fb.attr=function(a,b){(a.ownerDocument||a)!==n&&m(a);var e=d.attrHandle[b.toLowerCase()],f=e&&E.call(d.attrHandle,b.toLowerCase())?e(a,b,!p):void 0;return void 
0!==f?f:c.attributes||!p?a.getAttribute(b):(f=a.getAttributeNode(b))&&f.specified?f.value:null},fb.error=function(a){throw new Error("Syntax error, unrecognized expression: "+a)},fb.uniqueSort=function(a){var b,d=[],e=0,f=0;if(l=!c.detectDuplicates,k=!c.sortStable&&a.slice(0),a.sort(B),l){while(b=a[f++])b===a[f]&&(e=d.push(f));while(e--)a.splice(d[e],1)}return k=null,a},e=fb.getText=function(a){var b,c="",d=0,f=a.nodeType;if(f){if(1===f||9===f||11===f){if("string"==typeof a.textContent)return a.textContent;for(a=a.firstChild;a;a=a.nextSibling)c+=e(a)}else if(3===f||4===f)return a.nodeValue}else while(b=a[d++])c+=e(b);return c},d=fb.selectors={cacheLength:50,createPseudo:hb,match:X,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(a){return a[1]=a[1].replace(cb,db),a[3]=(a[3]||a[4]||a[5]||"").replace(cb,db),"~="===a[2]&&(a[3]=" "+a[3]+" "),a.slice(0,4)},CHILD:function(a){return a[1]=a[1].toLowerCase(),"nth"===a[1].slice(0,3)?(a[3]||fb.error(a[0]),a[4]=+(a[4]?a[5]+(a[6]||1):2*("even"===a[3]||"odd"===a[3])),a[5]=+(a[7]+a[8]||"odd"===a[3])):a[3]&&fb.error(a[0]),a},PSEUDO:function(a){var b,c=!a[6]&&a[2];return X.CHILD.test(a[0])?null:(a[3]?a[2]=a[4]||a[5]||"":c&&V.test(c)&&(b=g(c,!0))&&(b=c.indexOf(")",c.length-b)-c.length)&&(a[0]=a[0].slice(0,b),a[2]=c.slice(0,b)),a.slice(0,3))}},filter:{TAG:function(a){var b=a.replace(cb,db).toLowerCase();return"*"===a?function(){return!0}:function(a){return a.nodeName&&a.nodeName.toLowerCase()===b}},CLASS:function(a){var b=y[a+" "];return b||(b=new RegExp("(^|"+M+")"+a+"("+M+"|$)"))&&y(a,function(a){return b.test("string"==typeof a.className&&a.className||typeof a.getAttribute!==C&&a.getAttribute("class")||"")})},ATTR:function(a,b,c){return function(d){var e=fb.attr(d,a);return null==e?"!="===b:b?(e+="","="===b?e===c:"!="===b?e!==c:"^="===b?c&&0===e.indexOf(c):"*="===b?c&&e.indexOf(c)>-1:"$="===b?c&&e.slice(-c.length)===c:"~="===b?(" "+e+" ").indexOf(c)>-1:"|="===b?e===c||e.slice(0,c.length+1)===c+"-":!1):!0}},CHILD:function(a,b,c,d,e){var f="nth"!==a.slice(0,3),g="last"!==a.slice(-4),h="of-type"===b;return 1===d&&0===e?function(a){return!!a.parentNode}:function(b,c,i){var j,k,l,m,n,o,p=f!==g?"nextSibling":"previousSibling",q=b.parentNode,r=h&&b.nodeName.toLowerCase(),s=!i&&!h;if(q){if(f){while(p){l=b;while(l=l[p])if(h?l.nodeName.toLowerCase()===r:1===l.nodeType)return!1;o=p="only"===a&&!o&&"nextSibling"}return!0}if(o=[g?q.firstChild:q.lastChild],g&&s){k=q[u]||(q[u]={}),j=k[a]||[],n=j[0]===w&&j[1],m=j[0]===w&&j[2],l=n&&q.childNodes[n];while(l=++n&&l&&l[p]||(m=n=0)||o.pop())if(1===l.nodeType&&++m&&l===b){k[a]=[w,n,m];break}}else if(s&&(j=(b[u]||(b[u]={}))[a])&&j[0]===w)m=j[1];else while(l=++n&&l&&l[p]||(m=n=0)||o.pop())if((h?l.nodeName.toLowerCase()===r:1===l.nodeType)&&++m&&(s&&((l[u]||(l[u]={}))[a]=[w,m]),l===b))break;return m-=e,m===d||m%d===0&&m/d>=0}}},PSEUDO:function(a,b){var c,e=d.pseudos[a]||d.setFilters[a.toLowerCase()]||fb.error("unsupported pseudo: "+a);return e[u]?e(b):e.length>1?(c=[a,a,"",b],d.setFilters.hasOwnProperty(a.toLowerCase())?hb(function(a,c){var d,f=e(a,b),g=f.length;while(g--)d=K.call(a,f[g]),a[d]=!(c[d]=f[g])}):function(a){return e(a,0,c)}):e}},pseudos:{not:hb(function(a){var b=[],c=[],d=h(a.replace(R,"$1"));return d[u]?hb(function(a,b,c,e){var f,g=d(a,null,e,[]),h=a.length;while(h--)(f=g[h])&&(a[h]=!(b[h]=f))}):function(a,e,f){return b[0]=a,d(b,null,f,c),!c.pop()}}),has:hb(function(a){return 
function(b){return fb(a,b).length>0}}),contains:hb(function(a){return function(b){return(b.textContent||b.innerText||e(b)).indexOf(a)>-1}}),lang:hb(function(a){return W.test(a||"")||fb.error("unsupported lang: "+a),a=a.replace(cb,db).toLowerCase(),function(b){var c;do if(c=p?b.lang:b.getAttribute("xml:lang")||b.getAttribute("lang"))return c=c.toLowerCase(),c===a||0===c.indexOf(a+"-");while((b=b.parentNode)&&1===b.nodeType);return!1}}),target:function(b){var c=a.location&&a.location.hash;return c&&c.slice(1)===b.id},root:function(a){return a===o},focus:function(a){return a===n.activeElement&&(!n.hasFocus||n.hasFocus())&&!!(a.type||a.href||~a.tabIndex)},enabled:function(a){return a.disabled===!1},disabled:function(a){return a.disabled===!0},checked:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&!!a.checked||"option"===b&&!!a.selected},selected:function(a){return a.parentNode&&a.parentNode.selectedIndex,a.selected===!0},empty:function(a){for(a=a.firstChild;a;a=a.nextSibling)if(a.nodeType<6)return!1;return!0},parent:function(a){return!d.pseudos.empty(a)},header:function(a){return Z.test(a.nodeName)},input:function(a){return Y.test(a.nodeName)},button:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&"button"===a.type||"button"===b},text:function(a){var b;return"input"===a.nodeName.toLowerCase()&&"text"===a.type&&(null==(b=a.getAttribute("type"))||"text"===b.toLowerCase())},first:nb(function(){return[0]}),last:nb(function(a,b){return[b-1]}),eq:nb(function(a,b,c){return[0>c?c+b:c]}),even:nb(function(a,b){for(var c=0;b>c;c+=2)a.push(c);return a}),odd:nb(function(a,b){for(var c=1;b>c;c+=2)a.push(c);return a}),lt:nb(function(a,b,c){for(var d=0>c?c+b:c;--d>=0;)a.push(d);return a}),gt:nb(function(a,b,c){for(var d=0>c?c+b:c;++db;b++)d+=a[b].value;return d}function rb(a,b,c){var d=b.dir,e=c&&"parentNode"===d,f=x++;return b.first?function(b,c,f){while(b=b[d])if(1===b.nodeType||e)return a(b,c,f)}:function(b,c,g){var h,i,j=[w,f];if(g){while(b=b[d])if((1===b.nodeType||e)&&a(b,c,g))return!0}else while(b=b[d])if(1===b.nodeType||e){if(i=b[u]||(b[u]={}),(h=i[d])&&h[0]===w&&h[1]===f)return j[2]=h[2];if(i[d]=j,j[2]=a(b,c,g))return!0}}}function sb(a){return a.length>1?function(b,c,d){var e=a.length;while(e--)if(!a[e](b,c,d))return!1;return!0}:a[0]}function tb(a,b,c){for(var d=0,e=b.length;e>d;d++)fb(a,b[d],c);return c}function ub(a,b,c,d,e){for(var f,g=[],h=0,i=a.length,j=null!=b;i>h;h++)(f=a[h])&&(!c||c(f,d,e))&&(g.push(f),j&&b.push(h));return g}function vb(a,b,c,d,e,f){return d&&!d[u]&&(d=vb(d)),e&&!e[u]&&(e=vb(e,f)),hb(function(f,g,h,i){var j,k,l,m=[],n=[],o=g.length,p=f||tb(b||"*",h.nodeType?[h]:h,[]),q=!a||!f&&b?p:ub(p,m,a,h,i),r=c?e||(f?a:o||d)?[]:g:q;if(c&&c(q,r,h,i),d){j=ub(r,n),d(j,[],h,i),k=j.length;while(k--)(l=j[k])&&(r[n[k]]=!(q[n[k]]=l))}if(f){if(e||a){if(e){j=[],k=r.length;while(k--)(l=r[k])&&j.push(q[k]=l);e(null,r=[],j,i)}k=r.length;while(k--)(l=r[k])&&(j=e?K.call(f,l):m[k])>-1&&(f[j]=!(g[j]=l))}}else r=ub(r===g?r.splice(o,r.length):r),e?e(null,g,r,i):I.apply(g,r)})}function wb(a){for(var b,c,e,f=a.length,g=d.relative[a[0].type],h=g||d.relative[" "],i=g?1:0,k=rb(function(a){return a===b},h,!0),l=rb(function(a){return K.call(b,a)>-1},h,!0),m=[function(a,c,d){return!g&&(d||c!==j)||((b=c).nodeType?k(a,c,d):l(a,c,d))}];f>i;i++)if(c=d.relative[a[i].type])m=[rb(sb(m),c)];else{if(c=d.filter[a[i].type].apply(null,a[i].matches),c[u]){for(e=++i;f>e;e++)if(d.relative[a[e].type])break;return vb(i>1&&sb(m),i>1&&qb(a.slice(0,i-1).concat({value:" 
"===a[i-2].type?"*":""})).replace(R,"$1"),c,e>i&&wb(a.slice(i,e)),f>e&&wb(a=a.slice(e)),f>e&&qb(a))}m.push(c)}return sb(m)}function xb(a,b){var c=b.length>0,e=a.length>0,f=function(f,g,h,i,k){var l,m,o,p=0,q="0",r=f&&[],s=[],t=j,u=f||e&&d.find.TAG("*",k),v=w+=null==t?1:Math.random()||.1,x=u.length;for(k&&(j=g!==n&&g);q!==x&&null!=(l=u[q]);q++){if(e&&l){m=0;while(o=a[m++])if(o(l,g,h)){i.push(l);break}k&&(w=v)}c&&((l=!o&&l)&&p--,f&&r.push(l))}if(p+=q,c&&q!==p){m=0;while(o=b[m++])o(r,s,g,h);if(f){if(p>0)while(q--)r[q]||s[q]||(s[q]=G.call(i));s=ub(s)}I.apply(i,s),k&&!f&&s.length>0&&p+b.length>1&&fb.uniqueSort(i)}return k&&(w=v,j=t),r};return c?hb(f):f}return h=fb.compile=function(a,b){var c,d=[],e=[],f=A[a+" "];if(!f){b||(b=g(a)),c=b.length;while(c--)f=wb(b[c]),f[u]?d.push(f):e.push(f);f=A(a,xb(e,d)),f.selector=a}return f},i=fb.select=function(a,b,e,f){var i,j,k,l,m,n="function"==typeof a&&a,o=!f&&g(a=n.selector||a);if(e=e||[],1===o.length){if(j=o[0]=o[0].slice(0),j.length>2&&"ID"===(k=j[0]).type&&c.getById&&9===b.nodeType&&p&&d.relative[j[1].type]){if(b=(d.find.ID(k.matches[0].replace(cb,db),b)||[])[0],!b)return e;n&&(b=b.parentNode),a=a.slice(j.shift().value.length)}i=X.needsContext.test(a)?0:j.length;while(i--){if(k=j[i],d.relative[l=k.type])break;if((m=d.find[l])&&(f=m(k.matches[0].replace(cb,db),ab.test(j[0].type)&&ob(b.parentNode)||b))){if(j.splice(i,1),a=f.length&&qb(j),!a)return I.apply(e,f),e;break}}}return(n||h(a,o))(f,b,!p,e,ab.test(a)&&ob(b.parentNode)||b),e},c.sortStable=u.split("").sort(B).join("")===u,c.detectDuplicates=!!l,m(),c.sortDetached=ib(function(a){return 1&a.compareDocumentPosition(n.createElement("div"))}),ib(function(a){return a.innerHTML="","#"===a.firstChild.getAttribute("href")})||jb("type|href|height|width",function(a,b,c){return c?void 0:a.getAttribute(b,"type"===b.toLowerCase()?1:2)}),c.attributes&&ib(function(a){return a.innerHTML="",a.firstChild.setAttribute("value",""),""===a.firstChild.getAttribute("value")})||jb("value",function(a,b,c){return c||"input"!==a.nodeName.toLowerCase()?void 0:a.defaultValue}),ib(function(a){return null==a.getAttribute("disabled")})||jb(L,function(a,b,c){var d;return c?void 0:a[b]===!0?b.toLowerCase():(d=a.getAttributeNode(b))&&d.specified?d.value:null}),fb}(a);m.find=s,m.expr=s.selectors,m.expr[":"]=m.expr.pseudos,m.unique=s.uniqueSort,m.text=s.getText,m.isXMLDoc=s.isXML,m.contains=s.contains;var t=m.expr.match.needsContext,u=/^<(\w+)\s*\/?>(?:<\/\1>|)$/,v=/^.[^:#\[\.,]*$/;function w(a,b,c){if(m.isFunction(b))return m.grep(a,function(a,d){return!!b.call(a,d,a)!==c});if(b.nodeType)return m.grep(a,function(a){return a===b!==c});if("string"==typeof b){if(v.test(b))return m.filter(b,a,c);b=m.filter(b,a)}return m.grep(a,function(a){return m.inArray(a,b)>=0!==c})}m.filter=function(a,b,c){var d=b[0];return c&&(a=":not("+a+")"),1===b.length&&1===d.nodeType?m.find.matchesSelector(d,a)?[d]:[]:m.find.matches(a,m.grep(b,function(a){return 1===a.nodeType}))},m.fn.extend({find:function(a){var b,c=[],d=this,e=d.length;if("string"!=typeof a)return this.pushStack(m(a).filter(function(){for(b=0;e>b;b++)if(m.contains(d[b],this))return!0}));for(b=0;e>b;b++)m.find(a,d[b],c);return c=this.pushStack(e>1?m.unique(c):c),c.selector=this.selector?this.selector+" "+a:a,c},filter:function(a){return this.pushStack(w(this,a||[],!1))},not:function(a){return this.pushStack(w(this,a||[],!0))},is:function(a){return!!w(this,"string"==typeof a&&t.test(a)?m(a):a||[],!1).length}});var 
x,y=a.document,z=/^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]*))$/,A=m.fn.init=function(a,b){var c,d;if(!a)return this;if("string"==typeof a){if(c="<"===a.charAt(0)&&">"===a.charAt(a.length-1)&&a.length>=3?[null,a,null]:z.exec(a),!c||!c[1]&&b)return!b||b.jquery?(b||x).find(a):this.constructor(b).find(a);if(c[1]){if(b=b instanceof m?b[0]:b,m.merge(this,m.parseHTML(c[1],b&&b.nodeType?b.ownerDocument||b:y,!0)),u.test(c[1])&&m.isPlainObject(b))for(c in b)m.isFunction(this[c])?this[c](b[c]):this.attr(c,b[c]);return this}if(d=y.getElementById(c[2]),d&&d.parentNode){if(d.id!==c[2])return x.find(a);this.length=1,this[0]=d}return this.context=y,this.selector=a,this}return a.nodeType?(this.context=this[0]=a,this.length=1,this):m.isFunction(a)?"undefined"!=typeof x.ready?x.ready(a):a(m):(void 0!==a.selector&&(this.selector=a.selector,this.context=a.context),m.makeArray(a,this))};A.prototype=m.fn,x=m(y);var B=/^(?:parents|prev(?:Until|All))/,C={children:!0,contents:!0,next:!0,prev:!0};m.extend({dir:function(a,b,c){var d=[],e=a[b];while(e&&9!==e.nodeType&&(void 0===c||1!==e.nodeType||!m(e).is(c)))1===e.nodeType&&d.push(e),e=e[b];return d},sibling:function(a,b){for(var c=[];a;a=a.nextSibling)1===a.nodeType&&a!==b&&c.push(a);return c}}),m.fn.extend({has:function(a){var b,c=m(a,this),d=c.length;return this.filter(function(){for(b=0;d>b;b++)if(m.contains(this,c[b]))return!0})},closest:function(a,b){for(var c,d=0,e=this.length,f=[],g=t.test(a)||"string"!=typeof a?m(a,b||this.context):0;e>d;d++)for(c=this[d];c&&c!==b;c=c.parentNode)if(c.nodeType<11&&(g?g.index(c)>-1:1===c.nodeType&&m.find.matchesSelector(c,a))){f.push(c);break}return this.pushStack(f.length>1?m.unique(f):f)},index:function(a){return a?"string"==typeof a?m.inArray(this[0],m(a)):m.inArray(a.jquery?a[0]:a,this):this[0]&&this[0].parentNode?this.first().prevAll().length:-1},add:function(a,b){return this.pushStack(m.unique(m.merge(this.get(),m(a,b))))},addBack:function(a){return this.add(null==a?this.prevObject:this.prevObject.filter(a))}});function D(a,b){do a=a[b];while(a&&1!==a.nodeType);return a}m.each({parent:function(a){var b=a.parentNode;return b&&11!==b.nodeType?b:null},parents:function(a){return m.dir(a,"parentNode")},parentsUntil:function(a,b,c){return m.dir(a,"parentNode",c)},next:function(a){return D(a,"nextSibling")},prev:function(a){return D(a,"previousSibling")},nextAll:function(a){return m.dir(a,"nextSibling")},prevAll:function(a){return m.dir(a,"previousSibling")},nextUntil:function(a,b,c){return m.dir(a,"nextSibling",c)},prevUntil:function(a,b,c){return m.dir(a,"previousSibling",c)},siblings:function(a){return m.sibling((a.parentNode||{}).firstChild,a)},children:function(a){return m.sibling(a.firstChild)},contents:function(a){return m.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:m.merge([],a.childNodes)}},function(a,b){m.fn[a]=function(c,d){var e=m.map(this,b,c);return"Until"!==a.slice(-5)&&(d=c),d&&"string"==typeof d&&(e=m.filter(d,e)),this.length>1&&(C[a]||(e=m.unique(e)),B.test(a)&&(e=e.reverse())),this.pushStack(e)}});var E=/\S+/g,F={};function G(a){var b=F[a]={};return m.each(a.match(E)||[],function(a,c){b[c]=!0}),b}m.Callbacks=function(a){a="string"==typeof a?F[a]||G(a):m.extend({},a);var b,c,d,e,f,g,h=[],i=!a.once&&[],j=function(l){for(c=a.memory&&l,d=!0,f=g||0,g=0,e=h.length,b=!0;h&&e>f;f++)if(h[f].apply(l[0],l[1])===!1&&a.stopOnFalse){c=!1;break}b=!1,h&&(i?i.length&&j(i.shift()):c?h=[]:k.disable())},k={add:function(){if(h){var d=h.length;!function f(b){m.each(b,function(b,c){var 
d=m.type(c);"function"===d?a.unique&&k.has(c)||h.push(c):c&&c.length&&"string"!==d&&f(c)})}(arguments),b?e=h.length:c&&(g=d,j(c))}return this},remove:function(){return h&&m.each(arguments,function(a,c){var d;while((d=m.inArray(c,h,d))>-1)h.splice(d,1),b&&(e>=d&&e--,f>=d&&f--)}),this},has:function(a){return a?m.inArray(a,h)>-1:!(!h||!h.length)},empty:function(){return h=[],e=0,this},disable:function(){return h=i=c=void 0,this},disabled:function(){return!h},lock:function(){return i=void 0,c||k.disable(),this},locked:function(){return!i},fireWith:function(a,c){return!h||d&&!i||(c=c||[],c=[a,c.slice?c.slice():c],b?i.push(c):j(c)),this},fire:function(){return k.fireWith(this,arguments),this},fired:function(){return!!d}};return k},m.extend({Deferred:function(a){var b=[["resolve","done",m.Callbacks("once memory"),"resolved"],["reject","fail",m.Callbacks("once memory"),"rejected"],["notify","progress",m.Callbacks("memory")]],c="pending",d={state:function(){return c},always:function(){return e.done(arguments).fail(arguments),this},then:function(){var a=arguments;return m.Deferred(function(c){m.each(b,function(b,f){var g=m.isFunction(a[b])&&a[b];e[f[1]](function(){var a=g&&g.apply(this,arguments);a&&m.isFunction(a.promise)?a.promise().done(c.resolve).fail(c.reject).progress(c.notify):c[f[0]+"With"](this===d?c.promise():this,g?[a]:arguments)})}),a=null}).promise()},promise:function(a){return null!=a?m.extend(a,d):d}},e={};return d.pipe=d.then,m.each(b,function(a,f){var g=f[2],h=f[3];d[f[1]]=g.add,h&&g.add(function(){c=h},b[1^a][2].disable,b[2][2].lock),e[f[0]]=function(){return e[f[0]+"With"](this===e?d:this,arguments),this},e[f[0]+"With"]=g.fireWith}),d.promise(e),a&&a.call(e,e),e},when:function(a){var b=0,c=d.call(arguments),e=c.length,f=1!==e||a&&m.isFunction(a.promise)?e:0,g=1===f?a:m.Deferred(),h=function(a,b,c){return function(e){b[a]=this,c[a]=arguments.length>1?d.call(arguments):e,c===i?g.notifyWith(b,c):--f||g.resolveWith(b,c)}},i,j,k;if(e>1)for(i=new Array(e),j=new Array(e),k=new Array(e);e>b;b++)c[b]&&m.isFunction(c[b].promise)?c[b].promise().done(h(b,k,c)).fail(g.reject).progress(h(b,j,i)):--f;return f||g.resolveWith(k,c),g.promise()}});var H;m.fn.ready=function(a){return m.ready.promise().done(a),this},m.extend({isReady:!1,readyWait:1,holdReady:function(a){a?m.readyWait++:m.ready(!0)},ready:function(a){if(a===!0?!--m.readyWait:!m.isReady){if(!y.body)return setTimeout(m.ready);m.isReady=!0,a!==!0&&--m.readyWait>0||(H.resolveWith(y,[m]),m.fn.triggerHandler&&(m(y).triggerHandler("ready"),m(y).off("ready")))}}});function I(){y.addEventListener?(y.removeEventListener("DOMContentLoaded",J,!1),a.removeEventListener("load",J,!1)):(y.detachEvent("onreadystatechange",J),a.detachEvent("onload",J))}function J(){(y.addEventListener||"load"===event.type||"complete"===y.readyState)&&(I(),m.ready())}m.ready.promise=function(b){if(!H)if(H=m.Deferred(),"complete"===y.readyState)setTimeout(m.ready);else if(y.addEventListener)y.addEventListener("DOMContentLoaded",J,!1),a.addEventListener("load",J,!1);else{y.attachEvent("onreadystatechange",J),a.attachEvent("onload",J);var c=!1;try{c=null==a.frameElement&&y.documentElement}catch(d){}c&&c.doScroll&&!function e(){if(!m.isReady){try{c.doScroll("left")}catch(a){return setTimeout(e,50)}I(),m.ready()}}()}return H.promise(b)};var K="undefined",L;for(L in m(k))break;k.ownLast="0"!==L,k.inlineBlockNeedsLayout=!1,m(function(){var 
a,b,c,d;c=y.getElementsByTagName("body")[0],c&&c.style&&(b=y.createElement("div"),d=y.createElement("div"),d.style.cssText="position:absolute;border:0;width:0;height:0;top:0;left:-9999px",c.appendChild(d).appendChild(b),typeof b.style.zoom!==K&&(b.style.cssText="display:inline;margin:0;border:0;padding:1px;width:1px;zoom:1",k.inlineBlockNeedsLayout=a=3===b.offsetWidth,a&&(c.style.zoom=1)),c.removeChild(d))}),function(){var a=y.createElement("div");if(null==k.deleteExpando){k.deleteExpando=!0;try{delete a.test}catch(b){k.deleteExpando=!1}}a=null}(),m.acceptData=function(a){var b=m.noData[(a.nodeName+" ").toLowerCase()],c=+a.nodeType||1;return 1!==c&&9!==c?!1:!b||b!==!0&&a.getAttribute("classid")===b};var M=/^(?:\{[\w\W]*\}|\[[\w\W]*\])$/,N=/([A-Z])/g;function O(a,b,c){if(void 0===c&&1===a.nodeType){var d="data-"+b.replace(N,"-$1").toLowerCase();if(c=a.getAttribute(d),"string"==typeof c){try{c="true"===c?!0:"false"===c?!1:"null"===c?null:+c+""===c?+c:M.test(c)?m.parseJSON(c):c}catch(e){}m.data(a,b,c)}else c=void 0}return c}function P(a){var b;for(b in a)if(("data"!==b||!m.isEmptyObject(a[b]))&&"toJSON"!==b)return!1;return!0}function Q(a,b,d,e){if(m.acceptData(a)){var f,g,h=m.expando,i=a.nodeType,j=i?m.cache:a,k=i?a[h]:a[h]&&h; -if(k&&j[k]&&(e||j[k].data)||void 0!==d||"string"!=typeof b)return k||(k=i?a[h]=c.pop()||m.guid++:h),j[k]||(j[k]=i?{}:{toJSON:m.noop}),("object"==typeof b||"function"==typeof b)&&(e?j[k]=m.extend(j[k],b):j[k].data=m.extend(j[k].data,b)),g=j[k],e||(g.data||(g.data={}),g=g.data),void 0!==d&&(g[m.camelCase(b)]=d),"string"==typeof b?(f=g[b],null==f&&(f=g[m.camelCase(b)])):f=g,f}}function R(a,b,c){if(m.acceptData(a)){var d,e,f=a.nodeType,g=f?m.cache:a,h=f?a[m.expando]:m.expando;if(g[h]){if(b&&(d=c?g[h]:g[h].data)){m.isArray(b)?b=b.concat(m.map(b,m.camelCase)):b in d?b=[b]:(b=m.camelCase(b),b=b in d?[b]:b.split(" ")),e=b.length;while(e--)delete d[b[e]];if(c?!P(d):!m.isEmptyObject(d))return}(c||(delete g[h].data,P(g[h])))&&(f?m.cleanData([a],!0):k.deleteExpando||g!=g.window?delete g[h]:g[h]=null)}}}m.extend({cache:{},noData:{"applet ":!0,"embed ":!0,"object ":"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"},hasData:function(a){return a=a.nodeType?m.cache[a[m.expando]]:a[m.expando],!!a&&!P(a)},data:function(a,b,c){return Q(a,b,c)},removeData:function(a,b){return R(a,b)},_data:function(a,b,c){return Q(a,b,c,!0)},_removeData:function(a,b){return R(a,b,!0)}}),m.fn.extend({data:function(a,b){var c,d,e,f=this[0],g=f&&f.attributes;if(void 0===a){if(this.length&&(e=m.data(f),1===f.nodeType&&!m._data(f,"parsedAttrs"))){c=g.length;while(c--)g[c]&&(d=g[c].name,0===d.indexOf("data-")&&(d=m.camelCase(d.slice(5)),O(f,d,e[d])));m._data(f,"parsedAttrs",!0)}return e}return"object"==typeof a?this.each(function(){m.data(this,a)}):arguments.length>1?this.each(function(){m.data(this,a,b)}):f?O(f,a,m.data(f,a)):void 0},removeData:function(a){return this.each(function(){m.removeData(this,a)})}}),m.extend({queue:function(a,b,c){var d;return a?(b=(b||"fx")+"queue",d=m._data(a,b),c&&(!d||m.isArray(c)?d=m._data(a,b,m.makeArray(c)):d.push(c)),d||[]):void 0},dequeue:function(a,b){b=b||"fx";var c=m.queue(a,b),d=c.length,e=c.shift(),f=m._queueHooks(a,b),g=function(){m.dequeue(a,b)};"inprogress"===e&&(e=c.shift(),d--),e&&("fx"===b&&c.unshift("inprogress"),delete f.stop,e.call(a,g,f)),!d&&f&&f.empty.fire()},_queueHooks:function(a,b){var c=b+"queueHooks";return m._data(a,c)||m._data(a,c,{empty:m.Callbacks("once 
memory").add(function(){m._removeData(a,b+"queue"),m._removeData(a,c)})})}}),m.fn.extend({queue:function(a,b){var c=2;return"string"!=typeof a&&(b=a,a="fx",c--),arguments.lengthh;h++)b(a[h],c,g?d:d.call(a[h],h,b(a[h],c)));return e?a:j?b.call(a):i?b(a[0],c):f},W=/^(?:checkbox|radio)$/i;!function(){var a=y.createElement("input"),b=y.createElement("div"),c=y.createDocumentFragment();if(b.innerHTML="
        a",k.leadingWhitespace=3===b.firstChild.nodeType,k.tbody=!b.getElementsByTagName("tbody").length,k.htmlSerialize=!!b.getElementsByTagName("link").length,k.html5Clone="<:nav>"!==y.createElement("nav").cloneNode(!0).outerHTML,a.type="checkbox",a.checked=!0,c.appendChild(a),k.appendChecked=a.checked,b.innerHTML="",k.noCloneChecked=!!b.cloneNode(!0).lastChild.defaultValue,c.appendChild(b),b.innerHTML="",k.checkClone=b.cloneNode(!0).cloneNode(!0).lastChild.checked,k.noCloneEvent=!0,b.attachEvent&&(b.attachEvent("onclick",function(){k.noCloneEvent=!1}),b.cloneNode(!0).click()),null==k.deleteExpando){k.deleteExpando=!0;try{delete b.test}catch(d){k.deleteExpando=!1}}}(),function(){var b,c,d=y.createElement("div");for(b in{submit:!0,change:!0,focusin:!0})c="on"+b,(k[b+"Bubbles"]=c in a)||(d.setAttribute(c,"t"),k[b+"Bubbles"]=d.attributes[c].expando===!1);d=null}();var X=/^(?:input|select|textarea)$/i,Y=/^key/,Z=/^(?:mouse|pointer|contextmenu)|click/,$=/^(?:focusinfocus|focusoutblur)$/,_=/^([^.]*)(?:\.(.+)|)$/;function ab(){return!0}function bb(){return!1}function cb(){try{return y.activeElement}catch(a){}}m.event={global:{},add:function(a,b,c,d,e){var f,g,h,i,j,k,l,n,o,p,q,r=m._data(a);if(r){c.handler&&(i=c,c=i.handler,e=i.selector),c.guid||(c.guid=m.guid++),(g=r.events)||(g=r.events={}),(k=r.handle)||(k=r.handle=function(a){return typeof m===K||a&&m.event.triggered===a.type?void 0:m.event.dispatch.apply(k.elem,arguments)},k.elem=a),b=(b||"").match(E)||[""],h=b.length;while(h--)f=_.exec(b[h])||[],o=q=f[1],p=(f[2]||"").split(".").sort(),o&&(j=m.event.special[o]||{},o=(e?j.delegateType:j.bindType)||o,j=m.event.special[o]||{},l=m.extend({type:o,origType:q,data:d,handler:c,guid:c.guid,selector:e,needsContext:e&&m.expr.match.needsContext.test(e),namespace:p.join(".")},i),(n=g[o])||(n=g[o]=[],n.delegateCount=0,j.setup&&j.setup.call(a,d,p,k)!==!1||(a.addEventListener?a.addEventListener(o,k,!1):a.attachEvent&&a.attachEvent("on"+o,k))),j.add&&(j.add.call(a,l),l.handler.guid||(l.handler.guid=c.guid)),e?n.splice(n.delegateCount++,0,l):n.push(l),m.event.global[o]=!0);a=null}},remove:function(a,b,c,d,e){var f,g,h,i,j,k,l,n,o,p,q,r=m.hasData(a)&&m._data(a);if(r&&(k=r.events)){b=(b||"").match(E)||[""],j=b.length;while(j--)if(h=_.exec(b[j])||[],o=q=h[1],p=(h[2]||"").split(".").sort(),o){l=m.event.special[o]||{},o=(d?l.delegateType:l.bindType)||o,n=k[o]||[],h=h[2]&&new RegExp("(^|\\.)"+p.join("\\.(?:.*\\.|)")+"(\\.|$)"),i=f=n.length;while(f--)g=n[f],!e&&q!==g.origType||c&&c.guid!==g.guid||h&&!h.test(g.namespace)||d&&d!==g.selector&&("**"!==d||!g.selector)||(n.splice(f,1),g.selector&&n.delegateCount--,l.remove&&l.remove.call(a,g));i&&!n.length&&(l.teardown&&l.teardown.call(a,p,r.handle)!==!1||m.removeEvent(a,o,r.handle),delete k[o])}else for(o in k)m.event.remove(a,o+b[j],c,d,!0);m.isEmptyObject(k)&&(delete r.handle,m._removeData(a,"events"))}},trigger:function(b,c,d,e){var f,g,h,i,k,l,n,o=[d||y],p=j.call(b,"type")?b.type:b,q=j.call(b,"namespace")?b.namespace.split("."):[];if(h=l=d=d||y,3!==d.nodeType&&8!==d.nodeType&&!$.test(p+m.event.triggered)&&(p.indexOf(".")>=0&&(q=p.split("."),p=q.shift(),q.sort()),g=p.indexOf(":")<0&&"on"+p,b=b[m.expando]?b:new m.Event(p,"object"==typeof b&&b),b.isTrigger=e?2:3,b.namespace=q.join("."),b.namespace_re=b.namespace?new RegExp("(^|\\.)"+q.join("\\.(?:.*\\.|)")+"(\\.|$)"):null,b.result=void 
0,b.target||(b.target=d),c=null==c?[b]:m.makeArray(c,[b]),k=m.event.special[p]||{},e||!k.trigger||k.trigger.apply(d,c)!==!1)){if(!e&&!k.noBubble&&!m.isWindow(d)){for(i=k.delegateType||p,$.test(i+p)||(h=h.parentNode);h;h=h.parentNode)o.push(h),l=h;l===(d.ownerDocument||y)&&o.push(l.defaultView||l.parentWindow||a)}n=0;while((h=o[n++])&&!b.isPropagationStopped())b.type=n>1?i:k.bindType||p,f=(m._data(h,"events")||{})[b.type]&&m._data(h,"handle"),f&&f.apply(h,c),f=g&&h[g],f&&f.apply&&m.acceptData(h)&&(b.result=f.apply(h,c),b.result===!1&&b.preventDefault());if(b.type=p,!e&&!b.isDefaultPrevented()&&(!k._default||k._default.apply(o.pop(),c)===!1)&&m.acceptData(d)&&g&&d[p]&&!m.isWindow(d)){l=d[g],l&&(d[g]=null),m.event.triggered=p;try{d[p]()}catch(r){}m.event.triggered=void 0,l&&(d[g]=l)}return b.result}},dispatch:function(a){a=m.event.fix(a);var b,c,e,f,g,h=[],i=d.call(arguments),j=(m._data(this,"events")||{})[a.type]||[],k=m.event.special[a.type]||{};if(i[0]=a,a.delegateTarget=this,!k.preDispatch||k.preDispatch.call(this,a)!==!1){h=m.event.handlers.call(this,a,j),b=0;while((f=h[b++])&&!a.isPropagationStopped()){a.currentTarget=f.elem,g=0;while((e=f.handlers[g++])&&!a.isImmediatePropagationStopped())(!a.namespace_re||a.namespace_re.test(e.namespace))&&(a.handleObj=e,a.data=e.data,c=((m.event.special[e.origType]||{}).handle||e.handler).apply(f.elem,i),void 0!==c&&(a.result=c)===!1&&(a.preventDefault(),a.stopPropagation()))}return k.postDispatch&&k.postDispatch.call(this,a),a.result}},handlers:function(a,b){var c,d,e,f,g=[],h=b.delegateCount,i=a.target;if(h&&i.nodeType&&(!a.button||"click"!==a.type))for(;i!=this;i=i.parentNode||this)if(1===i.nodeType&&(i.disabled!==!0||"click"!==a.type)){for(e=[],f=0;h>f;f++)d=b[f],c=d.selector+" ",void 0===e[c]&&(e[c]=d.needsContext?m(c,this).index(i)>=0:m.find(c,this,null,[i]).length),e[c]&&e.push(d);e.length&&g.push({elem:i,handlers:e})}return h]","i"),hb=/^\s+/,ib=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/gi,jb=/<([\w:]+)/,kb=/\s*$/g,rb={option:[1,""],legend:[1,"
        ","
        "],area:[1,"",""],param:[1,"",""],thead:[1,"","
        "],tr:[2,"","
        "],col:[2,"","
        "],td:[3,"","
        "],_default:k.htmlSerialize?[0,"",""]:[1,"X
        ","
        "]},sb=db(y),tb=sb.appendChild(y.createElement("div"));rb.optgroup=rb.option,rb.tbody=rb.tfoot=rb.colgroup=rb.caption=rb.thead,rb.th=rb.td;function ub(a,b){var c,d,e=0,f=typeof a.getElementsByTagName!==K?a.getElementsByTagName(b||"*"):typeof a.querySelectorAll!==K?a.querySelectorAll(b||"*"):void 0;if(!f)for(f=[],c=a.childNodes||a;null!=(d=c[e]);e++)!b||m.nodeName(d,b)?f.push(d):m.merge(f,ub(d,b));return void 0===b||b&&m.nodeName(a,b)?m.merge([a],f):f}function vb(a){W.test(a.type)&&(a.defaultChecked=a.checked)}function wb(a,b){return m.nodeName(a,"table")&&m.nodeName(11!==b.nodeType?b:b.firstChild,"tr")?a.getElementsByTagName("tbody")[0]||a.appendChild(a.ownerDocument.createElement("tbody")):a}function xb(a){return a.type=(null!==m.find.attr(a,"type"))+"/"+a.type,a}function yb(a){var b=pb.exec(a.type);return b?a.type=b[1]:a.removeAttribute("type"),a}function zb(a,b){for(var c,d=0;null!=(c=a[d]);d++)m._data(c,"globalEval",!b||m._data(b[d],"globalEval"))}function Ab(a,b){if(1===b.nodeType&&m.hasData(a)){var c,d,e,f=m._data(a),g=m._data(b,f),h=f.events;if(h){delete g.handle,g.events={};for(c in h)for(d=0,e=h[c].length;e>d;d++)m.event.add(b,c,h[c][d])}g.data&&(g.data=m.extend({},g.data))}}function Bb(a,b){var c,d,e;if(1===b.nodeType){if(c=b.nodeName.toLowerCase(),!k.noCloneEvent&&b[m.expando]){e=m._data(b);for(d in e.events)m.removeEvent(b,d,e.handle);b.removeAttribute(m.expando)}"script"===c&&b.text!==a.text?(xb(b).text=a.text,yb(b)):"object"===c?(b.parentNode&&(b.outerHTML=a.outerHTML),k.html5Clone&&a.innerHTML&&!m.trim(b.innerHTML)&&(b.innerHTML=a.innerHTML)):"input"===c&&W.test(a.type)?(b.defaultChecked=b.checked=a.checked,b.value!==a.value&&(b.value=a.value)):"option"===c?b.defaultSelected=b.selected=a.defaultSelected:("input"===c||"textarea"===c)&&(b.defaultValue=a.defaultValue)}}m.extend({clone:function(a,b,c){var d,e,f,g,h,i=m.contains(a.ownerDocument,a);if(k.html5Clone||m.isXMLDoc(a)||!gb.test("<"+a.nodeName+">")?f=a.cloneNode(!0):(tb.innerHTML=a.outerHTML,tb.removeChild(f=tb.firstChild)),!(k.noCloneEvent&&k.noCloneChecked||1!==a.nodeType&&11!==a.nodeType||m.isXMLDoc(a)))for(d=ub(f),h=ub(a),g=0;null!=(e=h[g]);++g)d[g]&&Bb(e,d[g]);if(b)if(c)for(h=h||ub(a),d=d||ub(f),g=0;null!=(e=h[g]);g++)Ab(e,d[g]);else Ab(a,f);return d=ub(f,"script"),d.length>0&&zb(d,!i&&ub(a,"script")),d=h=e=null,f},buildFragment:function(a,b,c,d){for(var e,f,g,h,i,j,l,n=a.length,o=db(b),p=[],q=0;n>q;q++)if(f=a[q],f||0===f)if("object"===m.type(f))m.merge(p,f.nodeType?[f]:f);else if(lb.test(f)){h=h||o.appendChild(b.createElement("div")),i=(jb.exec(f)||["",""])[1].toLowerCase(),l=rb[i]||rb._default,h.innerHTML=l[1]+f.replace(ib,"<$1>")+l[2],e=l[0];while(e--)h=h.lastChild;if(!k.leadingWhitespace&&hb.test(f)&&p.push(b.createTextNode(hb.exec(f)[0])),!k.tbody){f="table"!==i||kb.test(f)?""!==l[1]||kb.test(f)?0:h:h.firstChild,e=f&&f.childNodes.length;while(e--)m.nodeName(j=f.childNodes[e],"tbody")&&!j.childNodes.length&&f.removeChild(j)}m.merge(p,h.childNodes),h.textContent="";while(h.firstChild)h.removeChild(h.firstChild);h=o.lastChild}else p.push(b.createTextNode(f));h&&o.removeChild(h),k.appendChecked||m.grep(ub(p,"input"),vb),q=0;while(f=p[q++])if((!d||-1===m.inArray(f,d))&&(g=m.contains(f.ownerDocument,f),h=ub(o.appendChild(f),"script"),g&&zb(h),c)){e=0;while(f=h[e++])ob.test(f.type||"")&&c.push(f)}return h=null,o},cleanData:function(a,b){for(var d,e,f,g,h=0,i=m.expando,j=m.cache,l=k.deleteExpando,n=m.event.special;null!=(d=a[h]);h++)if((b||m.acceptData(d))&&(f=d[i],g=f&&j[f])){if(g.events)for(e in 
g.events)n[e]?m.event.remove(d,e):m.removeEvent(d,e,g.handle);j[f]&&(delete j[f],l?delete d[i]:typeof d.removeAttribute!==K?d.removeAttribute(i):d[i]=null,c.push(f))}}}),m.fn.extend({text:function(a){return V(this,function(a){return void 0===a?m.text(this):this.empty().append((this[0]&&this[0].ownerDocument||y).createTextNode(a))},null,a,arguments.length)},append:function(){return this.domManip(arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=wb(this,a);b.appendChild(a)}})},prepend:function(){return this.domManip(arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=wb(this,a);b.insertBefore(a,b.firstChild)}})},before:function(){return this.domManip(arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this)})},after:function(){return this.domManip(arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this.nextSibling)})},remove:function(a,b){for(var c,d=a?m.filter(a,this):this,e=0;null!=(c=d[e]);e++)b||1!==c.nodeType||m.cleanData(ub(c)),c.parentNode&&(b&&m.contains(c.ownerDocument,c)&&zb(ub(c,"script")),c.parentNode.removeChild(c));return this},empty:function(){for(var a,b=0;null!=(a=this[b]);b++){1===a.nodeType&&m.cleanData(ub(a,!1));while(a.firstChild)a.removeChild(a.firstChild);a.options&&m.nodeName(a,"select")&&(a.options.length=0)}return this},clone:function(a,b){return a=null==a?!1:a,b=null==b?a:b,this.map(function(){return m.clone(this,a,b)})},html:function(a){return V(this,function(a){var b=this[0]||{},c=0,d=this.length;if(void 0===a)return 1===b.nodeType?b.innerHTML.replace(fb,""):void 0;if(!("string"!=typeof a||mb.test(a)||!k.htmlSerialize&&gb.test(a)||!k.leadingWhitespace&&hb.test(a)||rb[(jb.exec(a)||["",""])[1].toLowerCase()])){a=a.replace(ib,"<$1>");try{for(;d>c;c++)b=this[c]||{},1===b.nodeType&&(m.cleanData(ub(b,!1)),b.innerHTML=a);b=0}catch(e){}}b&&this.empty().append(a)},null,a,arguments.length)},replaceWith:function(){var a=arguments[0];return this.domManip(arguments,function(b){a=this.parentNode,m.cleanData(ub(this)),a&&a.replaceChild(b,this)}),a&&(a.length||a.nodeType)?this:this.remove()},detach:function(a){return this.remove(a,!0)},domManip:function(a,b){a=e.apply([],a);var c,d,f,g,h,i,j=0,l=this.length,n=this,o=l-1,p=a[0],q=m.isFunction(p);if(q||l>1&&"string"==typeof p&&!k.checkClone&&nb.test(p))return this.each(function(c){var d=n.eq(c);q&&(a[0]=p.call(this,c,d.html())),d.domManip(a,b)});if(l&&(i=m.buildFragment(a,this[0].ownerDocument,!1,this),c=i.firstChild,1===i.childNodes.length&&(i=c),c)){for(g=m.map(ub(i,"script"),xb),f=g.length;l>j;j++)d=i,j!==o&&(d=m.clone(d,!0,!0),f&&m.merge(g,ub(d,"script"))),b.call(this[j],d,j);if(f)for(h=g[g.length-1].ownerDocument,m.map(g,yb),j=0;f>j;j++)d=g[j],ob.test(d.type||"")&&!m._data(d,"globalEval")&&m.contains(h,d)&&(d.src?m._evalUrl&&m._evalUrl(d.src):m.globalEval((d.text||d.textContent||d.innerHTML||"").replace(qb,"")));i=c=null}return this}}),m.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){m.fn[a]=function(a){for(var c,d=0,e=[],g=m(a),h=g.length-1;h>=d;d++)c=d===h?this:this.clone(!0),m(g[d])[b](c),f.apply(e,c.get());return this.pushStack(e)}});var Cb,Db={};function Eb(b,c){var d,e=m(c.createElement(b)).appendTo(c.body),f=a.getDefaultComputedStyle&&(d=a.getDefaultComputedStyle(e[0]))?d.display:m.css(e[0],"display");return e.detach(),f}function Fb(a){var b=y,c=Db[a];return c||(c=Eb(a,b),"none"!==c&&c||(Cb=(Cb||m(" - - - - \ No newline at end of 
file diff --git a/spaces/nsarrazin/agents-js-llama/svelte.config.js b/spaces/nsarrazin/agents-js-llama/svelte.config.js deleted file mode 100644 index ec7d9f4b849dffa31ab8ca5a4908eed07fc82fbc..0000000000000000000000000000000000000000 --- a/spaces/nsarrazin/agents-js-llama/svelte.config.js +++ /dev/null @@ -1,21 +0,0 @@ -import adapter from "@sveltejs/adapter-node"; -import { vitePreprocess } from '@sveltejs/kit/vite'; -import dotenv from "dotenv"; - -dotenv.config({ path: "./.env.local" }); - -/** @type {import('@sveltejs/kit').Config} */ -const config = { - // Consult https://kit.svelte.dev/docs/integrations#preprocessors - // for more information about preprocessors - preprocess: vitePreprocess(), - - kit: { - // adapter-auto only supports some environments, see https://kit.svelte.dev/docs/adapter-auto for a list. - // If your environment is not supported or you settled on a specific environment, switch out the adapter. - // See https://kit.svelte.dev/docs/adapters for more information about adapters. - adapter: adapter() - } -}; - -export default config; diff --git a/spaces/nyanko7/niji-playground/README.md b/spaces/nyanko7/niji-playground/README.md deleted file mode 100644 index f68675c463c63a288996c81eb913873a8e3de41b..0000000000000000000000000000000000000000 --- a/spaces/nyanko7/niji-playground/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Niji Playground -emoji: 🐢 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.43.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/okeefe4ai/donut-cord/app.py b/spaces/okeefe4ai/donut-cord/app.py deleted file mode 100644 index 8fbbd11ad3b1208db2b55445642f6b1435dcfd2c..0000000000000000000000000000000000000000 --- a/spaces/okeefe4ai/donut-cord/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import re -import gradio as gr - -import torch -from transformers import DonutProcessor, VisionEncoderDecoderModel - -processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") -model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") - -device = "cuda" if torch.cuda.is_available() else "cpu" -model.to(device) - -def process_document(image): - # prepare encoder inputs - pixel_values = processor(image, return_tensors="pt").pixel_values - - # prepare decoder inputs - task_prompt = "" - decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids - - # generate answer - outputs = model.generate( - pixel_values.to(device), - decoder_input_ids=decoder_input_ids.to(device), - max_length=model.decoder.config.max_position_embeddings, - early_stopping=True, - pad_token_id=processor.tokenizer.pad_token_id, - eos_token_id=processor.tokenizer.eos_token_id, - use_cache=True, - num_beams=1, - bad_words_ids=[[processor.tokenizer.unk_token_id]], - return_dict_in_generate=True, - ) - - # postprocess - sequence = processor.batch_decode(outputs.sequences)[0] - sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") - sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() # remove first task start token - - return processor.token2json(sequence) - -description = "Gradio Demo for Donut, an instance of `VisionEncoderDecoderModel` fine-tuned on CORD (document parsing). To use it, simply upload your image and click 'submit', or click one of the examples to load them. Read more at the links below." 
-article = "

        Donut: OCR-free Document Understanding Transformer | Github Repo

        " - -demo = gr.Interface( - fn=process_document, - inputs="image", - outputs="json", - title="Demo: Donut 🍩 for Document Parsing", - description=description, - article=article, - enable_queue=True, - examples=[["example.png"], ["example_2.png"], ["example_3.png"]], - cache_examples=False) - -demo.launch() \ No newline at end of file diff --git a/spaces/onnx/MNIST-Handwritten-Digit-Recognition/README.md b/spaces/onnx/MNIST-Handwritten-Digit-Recognition/README.md deleted file mode 100644 index 95d3fc6ffa2eb475fe390f5937220304a040a261..0000000000000000000000000000000000000000 --- a/spaces/onnx/MNIST-Handwritten-Digit-Recognition/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MNIST Handwritten Digit Recognition -emoji: ⚡ -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 2.8.10 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/onnx/sub_pixel_cnn_2016/README.md b/spaces/onnx/sub_pixel_cnn_2016/README.md deleted file mode 100644 index 76e349232cd139236033a334e539ec3844d2330b..0000000000000000000000000000000000000000 --- a/spaces/onnx/sub_pixel_cnn_2016/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sub_pixel_cnn_2016 -emoji: 📊 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/optigesr/Bark-with-Voice-Cloning/app.py b/spaces/optigesr/Bark-with-Voice-Cloning/app.py deleted file mode 100644 index 6f2f1bb6b77cef083a937a3e85dd7235359b644e..0000000000000000000000000000000000000000 --- a/spaces/optigesr/Bark-with-Voice-Cloning/app.py +++ /dev/null @@ -1,250 +0,0 @@ -import os - -#os.system("pip install git+https://github.com/suno-ai/bark.git") - -from bark.generation import SUPPORTED_LANGS -from bark import SAMPLE_RATE, generate_audio -from scipy.io.wavfile import write as write_wav -from datetime import datetime - -import shutil -import gradio as gr - -import sys - -import string -import time -import argparse -import json - -import numpy as np -# import IPython -# from IPython.display import Audio - -import torch - -from TTS.tts.utils.synthesis import synthesis -from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols -try: - from TTS.utils.audio import AudioProcessor -except: - from TTS.utils.audio import AudioProcessor - - -from TTS.tts.models import setup_model -from TTS.config import load_config -from TTS.tts.models.vits import * - -from TTS.tts.utils.speakers import SpeakerManager -from pydub import AudioSegment - -# from google.colab import files -import librosa - -from scipy.io.wavfile import write, read - -import subprocess - -''' -from google.colab import drive -drive.mount('/content/drive') -src_path = os.path.join(os.path.join(os.path.join(os.path.join(os.getcwd(), 'drive'), 'MyDrive'), 'Colab Notebooks'), 'best_model_latest.pth.tar') -dst_path = os.path.join(os.getcwd(), 'best_model.pth.tar') -shutil.copy(src_path, dst_path) -''' - -TTS_PATH = "TTS/" - -# add libraries into environment -sys.path.append(TTS_PATH) # set this if TTS is not installed globally - -# Paths definition - -OUT_PATH = 'out/' - -# create output path -os.makedirs(OUT_PATH, exist_ok=True) - -# model vars -MODEL_PATH = 'best_model.pth.tar' -CONFIG_PATH = 'config.json' -TTS_LANGUAGES = "language_ids.json" -TTS_SPEAKERS = "speakers.json" -USE_CUDA = torch.cuda.is_available() - -# load the config -C = 
load_config(CONFIG_PATH) - -# load the audio processor -ap = AudioProcessor(**C.audio) - -speaker_embedding = None - -C.model_args['d_vector_file'] = TTS_SPEAKERS -C.model_args['use_speaker_encoder_as_loss'] = False - -model = setup_model(C) -model.language_manager.set_language_ids_from_file(TTS_LANGUAGES) -# print(model.language_manager.num_languages, model.embedded_language_dim) -# print(model.emb_l) -cp = torch.load(MODEL_PATH, map_location=torch.device('cpu')) -# remove speaker encoder -model_weights = cp['model'].copy() -for key in list(model_weights.keys()): - if "speaker_encoder" in key: - del model_weights[key] - -model.load_state_dict(model_weights) - -model.eval() - -if USE_CUDA: - model = model.cuda() - -# synthesize voice -use_griffin_lim = False - -# Paths definition - -CONFIG_SE_PATH = "config_se.json" -CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar" - -# Load the Speaker encoder - -SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA) - -# Define helper function - -def compute_spec(ref_file): - y, sr = librosa.load(ref_file, sr=ap.sample_rate) - spec = ap.spectrogram(y) - spec = torch.FloatTensor(spec).unsqueeze(0) - return spec - - -def voice_conversion(ta, ra, da): - - target_audio = 'target.wav' - reference_audio = 'reference.wav' - driving_audio = 'driving.wav' - - write(target_audio, ta[0], ta[1]) - write(reference_audio, ra[0], ra[1]) - write(driving_audio, da[0], da[1]) - - # !ffmpeg-normalize $target_audio -nt rms -t=-27 -o $target_audio -ar 16000 -f - # !ffmpeg-normalize $reference_audio -nt rms -t=-27 -o $reference_audio -ar 16000 -f - # !ffmpeg-normalize $driving_audio -nt rms -t=-27 -o $driving_audio -ar 16000 -f - - files = [target_audio, reference_audio, driving_audio] - - for file in files: - subprocess.run(["ffmpeg-normalize", file, "-nt", "rms", "-t=-27", "-o", file, "-ar", "16000", "-f"]) - - # ta_ = read(target_audio) - - target_emb = SE_speaker_manager.compute_d_vector_from_clip([target_audio]) - target_emb = torch.FloatTensor(target_emb).unsqueeze(0) - - driving_emb = SE_speaker_manager.compute_d_vector_from_clip([reference_audio]) - driving_emb = torch.FloatTensor(driving_emb).unsqueeze(0) - - # Convert the voice - - driving_spec = compute_spec(driving_audio) - y_lengths = torch.tensor([driving_spec.size(-1)]) - if USE_CUDA: - ref_wav_voc, _, _ = model.voice_conversion(driving_spec.cuda(), y_lengths.cuda(), driving_emb.cuda(), target_emb.cuda()) - ref_wav_voc = ref_wav_voc.squeeze().cpu().detach().numpy() - else: - ref_wav_voc, _, _ = model.voice_conversion(driving_spec, y_lengths, driving_emb, target_emb) - ref_wav_voc = ref_wav_voc.squeeze().detach().numpy() - - # print("Reference Audio after decoder:") - # IPython.display.display(Audio(ref_wav_voc, rate=ap.sample_rate)) - - return (ap.sample_rate, ref_wav_voc) - -def generate_text_to_speech(text_prompt, selected_speaker, text_temp, waveform_temp): - audio_array = generate_audio(text_prompt, selected_speaker, text_temp, waveform_temp) - - now = datetime.now() - date_str = now.strftime("%m-%d-%Y") - time_str = now.strftime("%H-%M-%S") - - outputs_folder = os.path.join(os.getcwd(), "outputs") - if not os.path.exists(outputs_folder): - os.makedirs(outputs_folder) - - sub_folder = os.path.join(outputs_folder, date_str) - if not os.path.exists(sub_folder): - os.makedirs(sub_folder) - - file_name = f"audio_{time_str}.wav" - file_path = os.path.join(sub_folder, file_name) - write_wav(file_path, SAMPLE_RATE, audio_array) - - return 
file_path - - -speakers_list = [] - -for lang, code in SUPPORTED_LANGS: - for n in range(10): - speakers_list.append(f"{code}_speaker_{n}") - -examples1 = [["ref.wav", "Bark.wav", "Bark.wav"]] - -with gr.Blocks() as demo: - gr.Markdown( - f""" - 1. You can duplicate and use it with a GPU: Duplicate Space - 2. First use Bark to generate audio from text and then use YourTTS to get new audio in a custom voice you like. Easy to use! - 3. For voice cloning, longer reference audio (~90s) will generally lead to better quality of the cloned speech. Also, please make sure the input audio generated by Bark is not too short. - """ - ) - - with gr.Row().style(equal_height=True): - inp1 = gr.Textbox(label="Input Text", lines=4, placeholder="Enter text here...") - - inp3 = gr.Slider( - 0.1, - 1.0, - value=0.7, - label="Generation Temperature", - info="1.0 more diverse, 0.1 more conservative", - ) - - inp4 = gr.Slider( - 0.1, 1.0, value=0.7, label="Waveform Temperature", info="1.0 more diverse, 0.1 more conservative" - ) - with gr.Row().style(equal_height=True): - - inp2 = gr.Dropdown(speakers_list, value=speakers_list[1], label="Acoustic Prompt") - - button = gr.Button("Generate using Bark") - - out1 = gr.Audio(label="Generated Audio") - - button.click(generate_text_to_speech, [inp1, inp2, inp3, inp4], [out1]) - - - with gr.Row().style(equal_height=True): - inp5 = gr.Audio(label="Upload Reference Audio for Voice Cloning Here") - inp6 = out1 - inp7 = out1 - - btn = gr.Button("Start") - out2 = gr.Audio(label="Generated Audio in a Custom Voice") - - btn.click(voice_conversion, [inp5, inp6, inp7], [out2]) - - gr.Examples(examples=examples1, fn=voice_conversion, inputs=[inp5, inp6, inp7], - outputs=[out2], cache_examples=True) - gr.Markdown( - """ - . - """ - ) - -demo.queue().launch(show_error=True) \ No newline at end of file diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_m\303\251rn\303\266k_en_aggregate.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/simple_m\303\251rn\303\266k_en_aggregate.html" deleted file mode 100644 index 493e2c144700b64f744e51794288f51a48849398..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_m\303\251rn\303\266k_en_aggregate.html" +++ /dev/null @@ -1,46 +0,0 @@ -
        0th instance:
        - -
        -
        -
        - -
        -
        - Source Saliency Heatmap -
        - x: Generated tokens, y: Attributed tokens -
        - -
        - -
        ▁He's▁an▁engineer.</s>
        ▁Ő0.3230.0610.101-0.436
        ▁mérnök.0.9460.0420.9630.778
        </s>0.00.00.00.0
        - - - - - - -
        0th instance:
        - -
        -
        -
        - -
        -
        - Target Saliency Heatmap -
        - x: Generated tokens, y: Attributed tokens -
        - - - -
        ▁He's▁an▁engineer.</s>
        ▁He's0.9970.2190.19
        ▁an0.1180.392
        ▁engineer.-0.123
        </s>
        -
        - -
        -
        -
        - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/optimization/coreml.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/optimization/coreml.md deleted file mode 100644 index ab96eea0fb04482e40c6794445825a5116982dd5..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/optimization/coreml.md +++ /dev/null @@ -1,167 +0,0 @@ - - -# How to run Stable Diffusion with Core ML - -[Core ML](https://developer.apple.com/documentation/coreml) is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. - -Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it's running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. - - - -You can also run the `diffusers` Python codebase on Apple Silicon Macs using the `mps` accelerator built into PyTorch. This approach is explained in depth in [the mps guide](mps), but it is not compatible with native apps. - - - -## Stable Diffusion Core ML Checkpoints - -Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. - -Thankfully, Apple engineers developed [a conversion tool](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) based on `diffusers` to convert the PyTorch checkpoints to Core ML. - -Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you're interested in is already available in Core ML format: - -- the [Apple](https://huggingface.co/apple) organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base -- [coreml](https://huggingface.co/coreml) organization includes custom DreamBoothed and finetuned models -- use this [filter](https://huggingface.co/models?pipeline_tag=text-to-image&library=coreml&p=2&sort=likes) to return all available Core ML checkpoints - -If you can't find the model you're interested in, we recommend you follow the instructions for [Converting Models to Core ML](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml) by Apple. - -## Selecting the Core ML Variant to Use - -Stable Diffusion models can be converted to different Core ML variants intended for different purposes: - -- The type of attention blocks used. The attention operation is used to "pay attention" to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: - * `split_einsum` ([introduced by Apple](https://machinelearning.apple.com/research/neural-engine-transformers)) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. 
- * The "original" attention (the base implementation used in `diffusers`) is only compatible with CPU/GPU and not ANE. It can be *faster* to run your model on CPU + GPU using `original` attention than ANE. See [this performance benchmark](https://huggingface.co/blog/fast-mac-diffusers#performance-benchmarks) as well as some [additional measures provided by the community](https://github.com/huggingface/swift-coreml-diffusers/issues/31) for additional details. - -- The supported inference framework. - * `packages` are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don't need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. - * `compiled` models are required for Swift code. The `compiled` models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the [`--chunk-unet` conversion option](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml). If you want to support native apps, then you need to select the `compiled` variant. - -The official Core ML Stable Diffusion [models](https://huggingface.co/apple/coreml-stable-diffusion-v1-4/tree/main) include these variants, but the community ones may vary: - -``` -coreml-stable-diffusion-v1-4 -├── README.md -├── original -│ ├── compiled -│ └── packages -└── split_einsum - ├── compiled - └── packages -``` - -You can download and use the variant you need as shown below. - -## Core ML Inference in Python - -Install the following libraries to run Core ML inference in Python: - -```bash -pip install huggingface_hub -pip install git+https://github.com/apple/ml-stable-diffusion -``` - -### Download the Model Checkpoints - -To run inference in Python, use one of the versions stored in the `packages` folders because the `compiled` ones are only compatible with Swift. You may choose whether you want to use `original` or `split_einsum` attention. - -This is how you'd download the `original` attention variant from the Hub to a directory called `models`: - -```Python -from huggingface_hub import snapshot_download -from pathlib import Path - -repo_id = "apple/coreml-stable-diffusion-v1-4" -variant = "original/packages" - -model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) -snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) -print(f"Model downloaded at {model_path}") -``` - - -### Inference[[python-inference]] - -Once you have downloaded a snapshot of the model, you can test it using Apple's Python script. - -```shell -python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 -``` - -`` should point to the checkpoint you downloaded in the step above, and `--compute-unit` indicates the hardware you want to allow for inference. It must be one of the following options: `ALL`, `CPU_AND_GPU`, `CPU_ONLY`, `CPU_AND_NE`. You may also provide an optional output path, and a seed for reproducibility. - -The inference script assumes you're using the original version of the Stable Diffusion model, `CompVis/stable-diffusion-v1-4`. 
If you use another model, you *have* to specify its Hub id in the inference command line, using the `--model-version` option. This works for models already supported and custom models you trained or fine-tuned yourself. - -For example, if you want to use [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5): - -```shell -python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 -``` - - -## Core ML inference in Swift - -Running inference in Swift is slightly faster than in Python because the models are already compiled in the `mlmodelc` format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. - -### Download - -To run inference in Swift on your Mac, you need one of the `compiled` checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the `compiled` variants: - -```Python -from huggingface_hub import snapshot_download -from pathlib import Path - -repo_id = "apple/coreml-stable-diffusion-v1-4" -variant = "original/compiled" - -model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) -snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) -print(f"Model downloaded at {model_path}") -``` - -### Inference[[swift-inference]] - -To run inference, please clone Apple's repo: - -```bash -git clone https://github.com/apple/ml-stable-diffusion -cd ml-stable-diffusion -``` - -And then use Apple's command line tool, [Swift Package Manager](https://www.swift.org/package-manager/#): - -```bash -swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" -``` - -You have to specify in `--resource-path` one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension `.mlmodelc`. The `--compute-units` has to be one of these values: `all`, `cpuOnly`, `cpuAndGPU`, `cpuAndNeuralEngine`. - -For more details, please refer to the [instructions in Apple's repo](https://github.com/apple/ml-stable-diffusion). - - -## Supported Diffusers Features - -The Core ML models and inference code don't support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: - -- Core ML models are only suitable for inference. They can't be used for training or fine-tuning. -- Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and `DPMSolverMultistepScheduler`, which we ported to Swift from our `diffusers` implementation. We recommend you use `DPMSolverMultistepScheduler`, since it produces the same quality in about half the steps. -- Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. - -Apple's [conversion and inference repo](https://github.com/apple/ml-stable-diffusion) and our own [swift-coreml-diffusers](https://github.com/huggingface/swift-coreml-diffusers) repos are intended as technology demonstrators to enable other developers to build upon. 
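To make the scheduler note above concrete, here is how the equivalent swap looks in the plain PyTorch `diffusers` pipeline (a minimal sketch for comparison only; this is not Core ML code, and the model id and step count are illustrative):

```Python
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Swap the default scheduler for the multistep DPM-Solver, which usually
# reaches comparable quality in roughly half the inference steps.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=25,
).images[0]
```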
- -If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR :) - -## Native Diffusers Swift app - -One easy way to run Stable Diffusion on your own Apple hardware is to use [our open-source Swift repo](https://github.com/huggingface/swift-coreml-diffusers), based on `diffusers` and Apple's conversion and inference repo. You can study the code, compile it with [Xcode](https://developer.apple.com/xcode/) and adapt it for your own needs. For your convenience, there's also a [standalone Mac app in the App Store](https://apps.apple.com/app/diffusers/id1666309574), so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can't wait to see what you'll build :) diff --git a/spaces/parasmech/Image_captioning_nlpconnect/app.py b/spaces/parasmech/Image_captioning_nlpconnect/app.py deleted file mode 100644 index 174d0c5d90055e63091b7c953dd41b773eb9ec5a..0000000000000000000000000000000000000000 --- a/spaces/parasmech/Image_captioning_nlpconnect/app.py +++ /dev/null @@ -1,68 +0,0 @@ -# -*- coding: utf-8 -*- - - -from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer -import torch -from PIL import Image - -model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - -vit_feature_extractor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - -tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") - - -def vit2distilgpt2(img): - pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values - encoder_outputs = generated_ids = model.generate(pixel_values.to('cpu'),num_beams=5) - generated_sentences = tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True) - - return(generated_sentences[0].split('.')[0]) - - -import gradio as gr - -inputs = [ - gr.inputs.Image(type="pil", label="Original Image") -] - -outputs = [ - gr.outputs.Textbox(label = 'Caption') -] - -title = "Visual Transformer using nlpconnect for Image to Text generation" -description = "ViT and GPT2 are used to generate Image Caption for the uploaded image. COCO Dataset was used for training." -article = " Model Repo on Hugging Face Model Hub" -examples = [ - ["Img_1.jpg"], - ["Img_2.jpg"], - ["img_2t.jpg"], - ["img_t2.jpg"], - ["img4_t.jpg"] -] - - - -gr.Interface( - vit2distilgpt2, - inputs, - outputs, - title=title, - description=description, - article=article, - examples=examples, - theme="huggingface", -).launch(debug=True, enable_queue=True) - - - - - - - - - - - - - diff --git a/spaces/patgpt4/MusicGen/tests/common_utils/temp_utils.py b/spaces/patgpt4/MusicGen/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/patgpt4/MusicGen/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. 
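    Intended to be mixed into a ``unittest.TestCase`` subclass. A minimal
    usage sketch (the test class and file name below are hypothetical)::

        class MyAudioTest(TempDirMixin, unittest.TestCase):
            def test_write(self):
                # Paths are namespaced per test class via `self.id`.
                path = self.get_temp_path("out.wav")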
- """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittenly. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/pinkq/Newbing/src/components/ui/alert-dialog.tsx b/spaces/pinkq/Newbing/src/components/ui/alert-dialog.tsx deleted file mode 100644 index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000 --- a/spaces/pinkq/Newbing/src/components/ui/alert-dialog.tsx +++ /dev/null @@ -1,150 +0,0 @@ -'use client' - -import * as React from 'react' -import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog' - -import { cn } from '@/lib/utils' -import { buttonVariants } from '@/components/ui/button' - -const AlertDialog = AlertDialogPrimitive.Root - -const AlertDialogTrigger = AlertDialogPrimitive.Trigger - -const AlertDialogPortal = ({ - className, - children, - ...props -}: AlertDialogPrimitive.AlertDialogPortalProps) => ( - -
        - {children} -
        -
        -) -AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName - -const AlertDialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -)) -AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName - -const AlertDialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - -)) -AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName - -const AlertDialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
        -) -AlertDialogHeader.displayName = 'AlertDialogHeader' - -const AlertDialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
        -) -AlertDialogFooter.displayName = 'AlertDialogFooter' - -const AlertDialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName - -const AlertDialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogDescription.displayName = - AlertDialogPrimitive.Description.displayName - -const AlertDialogAction = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName - -const AlertDialogCancel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName - -export { - AlertDialog, - AlertDialogTrigger, - AlertDialogContent, - AlertDialogHeader, - AlertDialogFooter, - AlertDialogTitle, - AlertDialogDescription, - AlertDialogAction, - AlertDialogCancel -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/__init__.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/__init__.py deleted file mode 100644 index 6afb5c627ce3db6e61cbf46276f7ddd42552eb28..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from typing import List, Optional - -import pip._internal.utils.inject_securetransport # noqa -from pip._internal.utils import _log - -# init_logging() must be called before any call to logging.getLogger() -# which happens at import of most modules. -_log.init_logging() - - -def main(args: (Optional[List[str]]) = None) -> int: - """This is preserved for old console scripts that may still be referencing - it. - - For additional details, see https://github.com/pypa/pip/issues/7498. 
- """ - from pip._internal.utils.entrypoints import _wrapper - - return _wrapper(args) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/base.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/base.py deleted file mode 100644 index 42dade18c1ec2b825f756dad4aaa89f2d9e6ce21..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/resolution/base.py +++ /dev/null @@ -1,20 +0,0 @@ -from typing import Callable, List, Optional - -from pip._internal.req.req_install import InstallRequirement -from pip._internal.req.req_set import RequirementSet - -InstallRequirementProvider = Callable[ - [str, Optional[InstallRequirement]], InstallRequirement -] - - -class BaseResolver: - def resolve( - self, root_reqs: List[InstallRequirement], check_supported_wheels: bool - ) -> RequirementSet: - raise NotImplementedError() - - def get_installation_order( - self, req_set: RequirementSet - ) -> List[InstallRequirement]: - raise NotImplementedError() diff --git a/spaces/posit/shiny-for-python-template/Dockerfile b/spaces/posit/shiny-for-python-template/Dockerfile deleted file mode 100644 index 3a4dc66fdb50519fca2a6eaf64cbe0ea05b09a3f..0000000000000000000000000000000000000000 --- a/spaces/posit/shiny-for-python-template/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . - -EXPOSE 7860 - -CMD ["shiny", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/pragnakalp/OCR-image-to-text/save_data.py b/spaces/pragnakalp/OCR-image-to-text/save_data.py deleted file mode 100644 index 148dc50f512215bbea75af067d593afcabdb8402..0000000000000000000000000000000000000000 --- a/spaces/pragnakalp/OCR-image-to-text/save_data.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import numpy as np -import json -import shutil -import requests -import re as r -from urllib.request import urlopen -from datetime import datetime -from datasets import Image -from PIL import Image -from huggingface_hub import Repository, upload_file - -HF_TOKEN = os.environ.get("HF_TOKEN") -DATASET_NAME = "OCR-img-to-text" -DATASET_REPO_URL = "https://huggingface.co/datasets/pragnakalp/OCR-img-to-text" -DATA_FILENAME = "ocr_data.csv" -DATA_FILE = os.path.join("ocr_data", DATA_FILENAME) -DATASET_REPO_ID = "pragnakalp/OCR-img-to-text" -print("is none?", HF_TOKEN is None) -REPOSITORY_DIR = "data" -LOCAL_DIR = 'data_local' -os.makedirs(LOCAL_DIR,exist_ok=True) - -try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) - -except: - print("file not found") - -repo = Repository( - local_dir="ocr_data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN -) -repo.git_pull() - -def getIP(): - ip_address = '' - try: - d = str(urlopen('http://checkip.dyndns.com/') - .read()) - - return r.compile(r'Address: (\d+\.\d+\.\d+\.\d+)').search(d).group(1) - except Exception as e: - print("Error while getting IP address -->",e) - return ip_address - -def get_location(ip_addr): - location = {} - try: - ip=ip_addr - - req_data={ - "ip":ip, - "token":"pkml123" - } - url = "https://demos.pragnakalp.com/get-ip-location" - - # req_data=json.dumps(req_data) - # print("req_data",req_data) - headers = {'Content-Type': 'application/json'} - - response 
= requests.request("POST", url, headers=headers, data=json.dumps(req_data)) - response = response.json() - print("response======>>",response) - return response - except Exception as e: - print("Error while getting location -->",e) - return location - -""" -Save generated details -""" -def dump_json(thing,file): - with open(file,'w+',encoding="utf8") as f: - json.dump(thing,f) - -def flag(Method,text_output,input_image): - - print("saving data------------------------") - # try: - adversarial_number = 0 - adversarial_number = 0 if None else adversarial_number - - ip_address= getIP() - print("ip_address :",ip_address) - location = get_location(ip_address) - print("location :",location) - - metadata_name = datetime.now().strftime('%Y-%m-%d %H-%M-%S') - SAVE_FILE_DIR = os.path.join(LOCAL_DIR,metadata_name) - os.makedirs(SAVE_FILE_DIR,exist_ok=True) - image_output_filename = os.path.join(SAVE_FILE_DIR,'image.png') - print("image_output_filename :",image_output_filename) - print(input_image) - try: - Image.fromarray(input_image).save(image_output_filename) - # input_image.save(image_output_filename) - except Exception: - raise Exception(f"Had issues saving np array image to file") - - # Write metadata.json to file - json_file_path = os.path.join(SAVE_FILE_DIR,'metadata.jsonl') - metadata= {'id':metadata_name,'method':Method,'file_name':'image.png', - 'generated_text':text_output,'ip':ip_address, 'location':location - } - - dump_json(metadata,json_file_path) - - # Simply upload the image file and metadata using the hub's upload_file - # Upload the image - repo_image_path = os.path.join(REPOSITORY_DIR,os.path.join(metadata_name,'image.png')) - - _ = upload_file(path_or_fileobj = image_output_filename, - path_in_repo =repo_image_path, - repo_id=DATASET_REPO_ID, - repo_type='dataset', - token=HF_TOKEN - ) - - # Upload the metadata - repo_json_path = os.path.join(REPOSITORY_DIR,os.path.join(metadata_name,'metadata.jsonl')) - _ = upload_file(path_or_fileobj = json_file_path, - path_in_repo =repo_json_path, - repo_id= DATASET_REPO_ID, - repo_type='dataset', - token=HF_TOKEN - ) - adversarial_number+=1 - repo.git_pull() - - url = 'http://pragnakalpdev35.pythonanywhere.com/HF_space_image_to_text' - myobj = {'Method': Method,'text_output':text_output,'img':input_image.tolist(),'ip_address':ip_address, 'loc':location} - x = requests.post(url, json = myobj) - print("mail status code",x.status_code) - - return "*****Logs save successfully!!!!" \ No newline at end of file diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/winapifamily.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/winapifamily.h deleted file mode 100644 index 388d5f068cba2b5e2e642f68bea924d61f37f404..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/winapifamily.h +++ /dev/null @@ -1,24 +0,0 @@ -/** - * This file is part of the mingw-w64 runtime package. - * No warranty is given; refer to the file DISCLAIMER within this package. - */ - -#ifndef _INC_WINAPIFAMILY -#define _INC_WINAPIFAMILY - -#define WINAPI_PARTITION_DESKTOP 0x1 -#define WINAPI_PARTITION_APP 0x2 - -#define WINAPI_FAMILY_APP WINAPI_PARTITION_APP -#define WINAPI_FAMILY_DESKTOP_APP (WINAPI_PARTITION_DESKTOP \ - | WINAPI_PARTITION_APP) - -/* WINAPI_FAMILY can be either desktop + App, or App. 
*/ -#ifndef WINAPI_FAMILY -#define WINAPI_FAMILY WINAPI_FAMILY_DESKTOP_APP -#endif - -#define WINAPI_FAMILY_PARTITION(v) ((WINAPI_FAMILY & v) == v) -#define WINAPI_FAMILY_ONE_PARTITION(vset, v) ((WINAPI_FAMILY & vset) == v) - -#endif diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/afmLib.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/afmLib.py deleted file mode 100644 index 394b901ff5eb149b40c0d9ae425c02d5ad0b5111..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/afmLib.py +++ /dev/null @@ -1,438 +0,0 @@ -"""Module for reading and writing AFM (Adobe Font Metrics) files. - -Note that this has been designed to read in AFM files generated by Fontographer -and has not been tested on many other files. In particular, it does not -implement the whole Adobe AFM specification [#f1]_ but, it should read most -"common" AFM files. - -Here is an example of using `afmLib` to read, modify and write an AFM file: - - >>> from fontTools.afmLib import AFM - >>> f = AFM("Tests/afmLib/data/TestAFM.afm") - >>> - >>> # Accessing a pair gets you the kern value - >>> f[("V","A")] - -60 - >>> - >>> # Accessing a glyph name gets you metrics - >>> f["A"] - (65, 668, (8, -25, 660, 666)) - >>> # (charnum, width, bounding box) - >>> - >>> # Accessing an attribute gets you metadata - >>> f.FontName - 'TestFont-Regular' - >>> f.FamilyName - 'TestFont' - >>> f.Weight - 'Regular' - >>> f.XHeight - 500 - >>> f.Ascender - 750 - >>> - >>> # Attributes and items can also be set - >>> f[("A","V")] = -150 # Tighten kerning - >>> f.FontName = "TestFont Squished" - >>> - >>> # And the font written out again (remove the # in front) - >>> #f.write("testfont-squished.afm") - -.. rubric:: Footnotes - -.. [#f1] `Adobe Technote 5004 `_, - Adobe Font Metrics File Format Specification. 
- -""" - - -import re - -# every single line starts with a "word" -identifierRE = re.compile(r"^([A-Za-z]+).*") - -# regular expression to parse char lines -charRE = re.compile( - r"(-?\d+)" # charnum - r"\s*;\s*WX\s+" # ; WX - r"(-?\d+)" # width - r"\s*;\s*N\s+" # ; N - r"([.A-Za-z0-9_]+)" # charname - r"\s*;\s*B\s+" # ; B - r"(-?\d+)" # left - r"\s+" - r"(-?\d+)" # bottom - r"\s+" - r"(-?\d+)" # right - r"\s+" - r"(-?\d+)" # top - r"\s*;\s*" # ; -) - -# regular expression to parse kerning lines -kernRE = re.compile( - r"([.A-Za-z0-9_]+)" # leftchar - r"\s+" - r"([.A-Za-z0-9_]+)" # rightchar - r"\s+" - r"(-?\d+)" # value - r"\s*" -) - -# regular expressions to parse composite info lines of the form: -# Aacute 2 ; PCC A 0 0 ; PCC acute 182 211 ; -compositeRE = re.compile( - r"([.A-Za-z0-9_]+)" r"\s+" r"(\d+)" r"\s*;\s*" # char name # number of parts -) -componentRE = re.compile( - r"PCC\s+" # PPC - r"([.A-Za-z0-9_]+)" # base char name - r"\s+" - r"(-?\d+)" # x offset - r"\s+" - r"(-?\d+)" # y offset - r"\s*;\s*" -) - -preferredAttributeOrder = [ - "FontName", - "FullName", - "FamilyName", - "Weight", - "ItalicAngle", - "IsFixedPitch", - "FontBBox", - "UnderlinePosition", - "UnderlineThickness", - "Version", - "Notice", - "EncodingScheme", - "CapHeight", - "XHeight", - "Ascender", - "Descender", -] - - -class error(Exception): - pass - - -class AFM(object): - - _attrs = None - - _keywords = [ - "StartFontMetrics", - "EndFontMetrics", - "StartCharMetrics", - "EndCharMetrics", - "StartKernData", - "StartKernPairs", - "EndKernPairs", - "EndKernData", - "StartComposites", - "EndComposites", - ] - - def __init__(self, path=None): - """AFM file reader. - - Instantiating an object with a path name will cause the file to be opened, - read, and parsed. Alternatively the path can be left unspecified, and a - file can be parsed later with the :meth:`read` method.""" - self._attrs = {} - self._chars = {} - self._kerning = {} - self._index = {} - self._comments = [] - self._composites = {} - if path is not None: - self.read(path) - - def read(self, path): - """Opens, reads and parses a file.""" - lines = readlines(path) - for line in lines: - if not line.strip(): - continue - m = identifierRE.match(line) - if m is None: - raise error("syntax error in AFM file: " + repr(line)) - - pos = m.regs[1][1] - word = line[:pos] - rest = line[pos:].strip() - if word in self._keywords: - continue - if word == "C": - self.parsechar(rest) - elif word == "KPX": - self.parsekernpair(rest) - elif word == "CC": - self.parsecomposite(rest) - else: - self.parseattr(word, rest) - - def parsechar(self, rest): - m = charRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - things = [] - for fr, to in m.regs[1:]: - things.append(rest[fr:to]) - charname = things[2] - del things[2] - charnum, width, l, b, r, t = (int(thing) for thing in things) - self._chars[charname] = charnum, width, (l, b, r, t) - - def parsekernpair(self, rest): - m = kernRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - things = [] - for fr, to in m.regs[1:]: - things.append(rest[fr:to]) - leftchar, rightchar, value = things - value = int(value) - self._kerning[(leftchar, rightchar)] = value - - def parseattr(self, word, rest): - if word == "FontBBox": - l, b, r, t = [int(thing) for thing in rest.split()] - self._attrs[word] = l, b, r, t - elif word == "Comment": - self._comments.append(rest) - else: - try: - value = int(rest) - except (ValueError, OverflowError): - 
self._attrs[word] = rest - else: - self._attrs[word] = value - - def parsecomposite(self, rest): - m = compositeRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - charname = m.group(1) - ncomponents = int(m.group(2)) - rest = rest[m.regs[0][1] :] - components = [] - while True: - m = componentRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - basechar = m.group(1) - xoffset = int(m.group(2)) - yoffset = int(m.group(3)) - components.append((basechar, xoffset, yoffset)) - rest = rest[m.regs[0][1] :] - if not rest: - break - assert len(components) == ncomponents - self._composites[charname] = components - - def write(self, path, sep="\r"): - """Writes out an AFM font to the given path.""" - import time - - lines = [ - "StartFontMetrics 2.0", - "Comment Generated by afmLib; at %s" - % (time.strftime("%m/%d/%Y %H:%M:%S", time.localtime(time.time()))), - ] - - # write comments, assuming (possibly wrongly!) they should - # all appear at the top - for comment in self._comments: - lines.append("Comment " + comment) - - # write attributes, first the ones we know about, in - # a preferred order - attrs = self._attrs - for attr in preferredAttributeOrder: - if attr in attrs: - value = attrs[attr] - if attr == "FontBBox": - value = "%s %s %s %s" % value - lines.append(attr + " " + str(value)) - # then write the attributes we don't know about, - # in alphabetical order - items = sorted(attrs.items()) - for attr, value in items: - if attr in preferredAttributeOrder: - continue - lines.append(attr + " " + str(value)) - - # write char metrics - lines.append("StartCharMetrics " + repr(len(self._chars))) - items = [ - (charnum, (charname, width, box)) - for charname, (charnum, width, box) in self._chars.items() - ] - - def myKey(a): - """Custom key function to make sure unencoded chars (-1) - end up at the end of the list after sorting.""" - if a[0] == -1: - a = (0xFFFF,) + a[1:] # 0xffff is an arbitrary large number - return a - - items.sort(key=myKey) - - for charnum, (charname, width, (l, b, r, t)) in items: - lines.append( - "C %d ; WX %d ; N %s ; B %d %d %d %d ;" - % (charnum, width, charname, l, b, r, t) - ) - lines.append("EndCharMetrics") - - # write kerning info - lines.append("StartKernData") - lines.append("StartKernPairs " + repr(len(self._kerning))) - items = sorted(self._kerning.items()) - for (leftchar, rightchar), value in items: - lines.append("KPX %s %s %d" % (leftchar, rightchar, value)) - lines.append("EndKernPairs") - lines.append("EndKernData") - - if self._composites: - composites = sorted(self._composites.items()) - lines.append("StartComposites %s" % len(self._composites)) - for charname, components in composites: - line = "CC %s %s ;" % (charname, len(components)) - for basechar, xoffset, yoffset in components: - line = line + " PCC %s %s %s ;" % (basechar, xoffset, yoffset) - lines.append(line) - lines.append("EndComposites") - - lines.append("EndFontMetrics") - - writelines(path, lines, sep) - - def has_kernpair(self, pair): - """Returns `True` if the given glyph pair (specified as a tuple) exists - in the kerning dictionary.""" - return pair in self._kerning - - def kernpairs(self): - """Returns a list of all kern pairs in the kerning dictionary.""" - return list(self._kerning.keys()) - - def has_char(self, char): - """Returns `True` if the given glyph exists in the font.""" - return char in self._chars - - def chars(self): - """Returns a list of all glyph names in the font.""" - return 
list(self._chars.keys()) - - def comments(self): - """Returns all comments from the file.""" - return self._comments - - def addComment(self, comment): - """Adds a new comment to the file.""" - self._comments.append(comment) - - def addComposite(self, glyphName, components): - """Specifies that the glyph `glyphName` is made up of the given components. - The components list should be of the following form:: - - [ - (glyphname, xOffset, yOffset), - ... - ] - - """ - self._composites[glyphName] = components - - def __getattr__(self, attr): - if attr in self._attrs: - return self._attrs[attr] - else: - raise AttributeError(attr) - - def __setattr__(self, attr, value): - # all attrs *not* starting with "_" are consider to be AFM keywords - if attr[:1] == "_": - self.__dict__[attr] = value - else: - self._attrs[attr] = value - - def __delattr__(self, attr): - # all attrs *not* starting with "_" are consider to be AFM keywords - if attr[:1] == "_": - try: - del self.__dict__[attr] - except KeyError: - raise AttributeError(attr) - else: - try: - del self._attrs[attr] - except KeyError: - raise AttributeError(attr) - - def __getitem__(self, key): - if isinstance(key, tuple): - # key is a tuple, return the kernpair - return self._kerning[key] - else: - # return the metrics instead - return self._chars[key] - - def __setitem__(self, key, value): - if isinstance(key, tuple): - # key is a tuple, set kernpair - self._kerning[key] = value - else: - # set char metrics - self._chars[key] = value - - def __delitem__(self, key): - if isinstance(key, tuple): - # key is a tuple, del kernpair - del self._kerning[key] - else: - # del char metrics - del self._chars[key] - - def __repr__(self): - if hasattr(self, "FullName"): - return "" % self.FullName - else: - return "" % id(self) - - -def readlines(path): - with open(path, "r", encoding="ascii") as f: - data = f.read() - return data.splitlines() - - -def writelines(path, lines, sep="\r"): - with open(path, "w", encoding="ascii", newline=sep) as f: - f.write("\n".join(lines) + "\n") - - -if __name__ == "__main__": - import EasyDialogs - - path = EasyDialogs.AskFileForOpen() - if path: - afm = AFM(path) - char = "A" - if afm.has_char(char): - print(afm[char]) # print charnum, width and boundingbox - pair = ("A", "V") - if afm.has_kernpair(pair): - print(afm[pair]) # print kerning value for pair - print(afm.Version) # various other afm entries have become attributes - print(afm.Weight) - # afm.comments() returns a list of all Comment lines found in the AFM - print(afm.comments()) - # print afm.chars() - # print afm.kernpairs() - print(afm) - afm.write(path + ".muck") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/varLib/varStore.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/varLib/varStore.py deleted file mode 100644 index 74828e407ef5564f1623383201ed75e688a2eb96..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/varLib/varStore.py +++ /dev/null @@ -1,703 +0,0 @@ -from fontTools.misc.roundTools import noRound, otRound -from fontTools.misc.intTools import bit_count -from fontTools.ttLib.tables import otTables as ot -from fontTools.varLib.models import supportScalar -from fontTools.varLib.builder import ( - buildVarRegionList, - buildVarStore, - buildVarRegion, - buildVarData, -) -from functools import partial -from collections import defaultdict -from heapq import heappush, heappop - - -NO_VARIATION_INDEX = 
ot.NO_VARIATION_INDEX -ot.VarStore.NO_VARIATION_INDEX = NO_VARIATION_INDEX - - -def _getLocationKey(loc): - return tuple(sorted(loc.items(), key=lambda kv: kv[0])) - - -class OnlineVarStoreBuilder(object): - def __init__(self, axisTags): - self._axisTags = axisTags - self._regionMap = {} - self._regionList = buildVarRegionList([], axisTags) - self._store = buildVarStore(self._regionList, []) - self._data = None - self._model = None - self._supports = None - self._varDataIndices = {} - self._varDataCaches = {} - self._cache = {} - - def setModel(self, model): - self.setSupports(model.supports) - self._model = model - - def setSupports(self, supports): - self._model = None - self._supports = list(supports) - if not self._supports[0]: - del self._supports[0] # Drop base master support - self._cache = {} - self._data = None - - def finish(self, optimize=True): - self._regionList.RegionCount = len(self._regionList.Region) - self._store.VarDataCount = len(self._store.VarData) - for data in self._store.VarData: - data.ItemCount = len(data.Item) - data.calculateNumShorts(optimize=optimize) - return self._store - - def _add_VarData(self): - regionMap = self._regionMap - regionList = self._regionList - - regions = self._supports - regionIndices = [] - for region in regions: - key = _getLocationKey(region) - idx = regionMap.get(key) - if idx is None: - varRegion = buildVarRegion(region, self._axisTags) - idx = regionMap[key] = len(regionList.Region) - regionList.Region.append(varRegion) - regionIndices.append(idx) - - # Check if we have one already... - key = tuple(regionIndices) - varDataIdx = self._varDataIndices.get(key) - if varDataIdx is not None: - self._outer = varDataIdx - self._data = self._store.VarData[varDataIdx] - self._cache = self._varDataCaches[key] - if len(self._data.Item) == 0xFFFF: - # This is full. Need new one. - varDataIdx = None - - if varDataIdx is None: - self._data = buildVarData(regionIndices, [], optimize=False) - self._outer = len(self._store.VarData) - self._store.VarData.append(self._data) - self._varDataIndices[key] = self._outer - if key not in self._varDataCaches: - self._varDataCaches[key] = {} - self._cache = self._varDataCaches[key] - - def storeMasters(self, master_values, *, round=round): - deltas = self._model.getDeltas(master_values, round=round) - base = deltas.pop(0) - return base, self.storeDeltas(deltas, round=noRound) - - def storeDeltas(self, deltas, *, round=round): - deltas = [round(d) for d in deltas] - if len(deltas) == len(self._supports) + 1: - deltas = tuple(deltas[1:]) - else: - assert len(deltas) == len(self._supports) - deltas = tuple(deltas) - - varIdx = self._cache.get(deltas) - if varIdx is not None: - return varIdx - - if not self._data: - self._add_VarData() - inner = len(self._data.Item) - if inner == 0xFFFF: - # Full array. Start new one. 
- self._add_VarData() - return self.storeDeltas(deltas) - self._data.addItem(deltas, round=noRound) - - varIdx = (self._outer << 16) + inner - self._cache[deltas] = varIdx - return varIdx - - -def VarData_addItem(self, deltas, *, round=round): - deltas = [round(d) for d in deltas] - - countUs = self.VarRegionCount - countThem = len(deltas) - if countUs + 1 == countThem: - deltas = tuple(deltas[1:]) - else: - assert countUs == countThem, (countUs, countThem) - deltas = tuple(deltas) - self.Item.append(list(deltas)) - self.ItemCount = len(self.Item) - - -ot.VarData.addItem = VarData_addItem - - -def VarRegion_get_support(self, fvar_axes): - return { - fvar_axes[i].axisTag: (reg.StartCoord, reg.PeakCoord, reg.EndCoord) - for i, reg in enumerate(self.VarRegionAxis) - if reg.PeakCoord != 0 - } - - -ot.VarRegion.get_support = VarRegion_get_support - - -def VarStore___bool__(self): - return bool(self.VarData) - - -ot.VarStore.__bool__ = VarStore___bool__ - - -class VarStoreInstancer(object): - def __init__(self, varstore, fvar_axes, location={}): - self.fvar_axes = fvar_axes - assert varstore is None or varstore.Format == 1 - self._varData = varstore.VarData if varstore else [] - self._regions = varstore.VarRegionList.Region if varstore else [] - self.setLocation(location) - - def setLocation(self, location): - self.location = dict(location) - self._clearCaches() - - def _clearCaches(self): - self._scalars = {} - - def _getScalar(self, regionIdx): - scalar = self._scalars.get(regionIdx) - if scalar is None: - support = self._regions[regionIdx].get_support(self.fvar_axes) - scalar = supportScalar(self.location, support) - self._scalars[regionIdx] = scalar - return scalar - - @staticmethod - def interpolateFromDeltasAndScalars(deltas, scalars): - delta = 0.0 - for d, s in zip(deltas, scalars): - if not s: - continue - delta += d * s - return delta - - def __getitem__(self, varidx): - major, minor = varidx >> 16, varidx & 0xFFFF - if varidx == NO_VARIATION_INDEX: - return 0.0 - varData = self._varData - scalars = [self._getScalar(ri) for ri in varData[major].VarRegionIndex] - deltas = varData[major].Item[minor] - return self.interpolateFromDeltasAndScalars(deltas, scalars) - - def interpolateFromDeltas(self, varDataIndex, deltas): - varData = self._varData - scalars = [self._getScalar(ri) for ri in varData[varDataIndex].VarRegionIndex] - return self.interpolateFromDeltasAndScalars(deltas, scalars) - - -# -# Optimizations -# -# retainFirstMap - If true, major 0 mappings are retained. Deltas for unused indices are zeroed -# advIdxes - Set of major 0 indices for advance deltas to be listed first. Other major 0 indices follow. - - -def VarStore_subset_varidxes( - self, varIdxes, optimize=True, retainFirstMap=False, advIdxes=set() -): - # Sort out used varIdxes by major/minor. 
- used = {} - for varIdx in varIdxes: - if varIdx == NO_VARIATION_INDEX: - continue - major = varIdx >> 16 - minor = varIdx & 0xFFFF - d = used.get(major) - if d is None: - d = used[major] = set() - d.add(minor) - del varIdxes - - # - # Subset VarData - # - - varData = self.VarData - newVarData = [] - varDataMap = {NO_VARIATION_INDEX: NO_VARIATION_INDEX} - for major, data in enumerate(varData): - usedMinors = used.get(major) - if usedMinors is None: - continue - newMajor = len(newVarData) - newVarData.append(data) - - items = data.Item - newItems = [] - if major == 0 and retainFirstMap: - for minor in range(len(items)): - newItems.append( - items[minor] if minor in usedMinors else [0] * len(items[minor]) - ) - varDataMap[minor] = minor - else: - if major == 0: - minors = sorted(advIdxes) + sorted(usedMinors - advIdxes) - else: - minors = sorted(usedMinors) - for minor in minors: - newMinor = len(newItems) - newItems.append(items[minor]) - varDataMap[(major << 16) + minor] = (newMajor << 16) + newMinor - - data.Item = newItems - data.ItemCount = len(data.Item) - - data.calculateNumShorts(optimize=optimize) - - self.VarData = newVarData - self.VarDataCount = len(self.VarData) - - self.prune_regions() - - return varDataMap - - -ot.VarStore.subset_varidxes = VarStore_subset_varidxes - - -def VarStore_prune_regions(self): - """Remove unused VarRegions.""" - # - # Subset VarRegionList - # - - # Collect. - usedRegions = set() - for data in self.VarData: - usedRegions.update(data.VarRegionIndex) - # Subset. - regionList = self.VarRegionList - regions = regionList.Region - newRegions = [] - regionMap = {} - for i in sorted(usedRegions): - regionMap[i] = len(newRegions) - newRegions.append(regions[i]) - regionList.Region = newRegions - regionList.RegionCount = len(regionList.Region) - # Map. - for data in self.VarData: - data.VarRegionIndex = [regionMap[i] for i in data.VarRegionIndex] - - -ot.VarStore.prune_regions = VarStore_prune_regions - - -def _visit(self, func): - """Recurse down from self, if type of an object is ot.Device, - call func() on it. 
Works on otData-style classes.""" - - if type(self) == ot.Device: - func(self) - - elif isinstance(self, list): - for that in self: - _visit(that, func) - - elif hasattr(self, "getConverters") and not hasattr(self, "postRead"): - for conv in self.getConverters(): - that = getattr(self, conv.name, None) - if that is not None: - _visit(that, func) - - elif isinstance(self, ot.ValueRecord): - for that in self.__dict__.values(): - _visit(that, func) - - -def _Device_recordVarIdx(self, s): - """Add VarIdx in this Device table (if any) to the set s.""" - if self.DeltaFormat == 0x8000: - s.add((self.StartSize << 16) + self.EndSize) - - -def Object_collect_device_varidxes(self, varidxes): - adder = partial(_Device_recordVarIdx, s=varidxes) - _visit(self, adder) - - -ot.GDEF.collect_device_varidxes = Object_collect_device_varidxes -ot.GPOS.collect_device_varidxes = Object_collect_device_varidxes - - -def _Device_mapVarIdx(self, mapping, done): - """Map VarIdx in this Device table (if any) through mapping.""" - if id(self) in done: - return - done.add(id(self)) - if self.DeltaFormat == 0x8000: - varIdx = mapping[(self.StartSize << 16) + self.EndSize] - self.StartSize = varIdx >> 16 - self.EndSize = varIdx & 0xFFFF - - -def Object_remap_device_varidxes(self, varidxes_map): - mapper = partial(_Device_mapVarIdx, mapping=varidxes_map, done=set()) - _visit(self, mapper) - - -ot.GDEF.remap_device_varidxes = Object_remap_device_varidxes -ot.GPOS.remap_device_varidxes = Object_remap_device_varidxes - - -class _Encoding(object): - def __init__(self, chars): - self.chars = chars - self.width = bit_count(chars) - self.columns = self._columns(chars) - self.overhead = self._characteristic_overhead(self.columns) - self.items = set() - - def append(self, row): - self.items.add(row) - - def extend(self, lst): - self.items.update(lst) - - def get_room(self): - """Maximum number of bytes that can be added to characteristic - while still being beneficial to merge it into another one.""" - count = len(self.items) - return max(0, (self.overhead - 1) // count - self.width) - - room = property(get_room) - - def get_gain(self): - """Maximum possible byte gain from merging this into another - characteristic.""" - count = len(self.items) - return max(0, self.overhead - count) - - gain = property(get_gain) - - def gain_sort_key(self): - return self.gain, self.chars - - def width_sort_key(self): - return self.width, self.chars - - @staticmethod - def _characteristic_overhead(columns): - """Returns overhead in bytes of encoding this characteristic - as a VarData.""" - c = 4 + 6 # 4 bytes for LOffset, 6 bytes for VarData header - c += bit_count(columns) * 2 - return c - - @staticmethod - def _columns(chars): - cols = 0 - i = 1 - while chars: - if chars & 0b1111: - cols |= i - chars >>= 4 - i <<= 1 - return cols - - def gain_from_merging(self, other_encoding): - combined_chars = other_encoding.chars | self.chars - combined_width = bit_count(combined_chars) - combined_columns = self.columns | other_encoding.columns - combined_overhead = _Encoding._characteristic_overhead(combined_columns) - combined_gain = ( - +self.overhead - + other_encoding.overhead - - combined_overhead - - (combined_width - self.width) * len(self.items) - - (combined_width - other_encoding.width) * len(other_encoding.items) - ) - return combined_gain - - -class _EncodingDict(dict): - def __missing__(self, chars): - r = self[chars] = _Encoding(chars) - return r - - def add_row(self, row): - chars = self._row_characteristics(row) - self[chars].append(row) - - 
@staticmethod - def _row_characteristics(row): - """Returns encoding characteristics for a row.""" - longWords = False - - chars = 0 - i = 1 - for v in row: - if v: - chars += i - if not (-128 <= v <= 127): - chars += i * 0b0010 - if not (-32768 <= v <= 32767): - longWords = True - break - i <<= 4 - - if longWords: - # Redo; only allow 2byte/4byte encoding - chars = 0 - i = 1 - for v in row: - if v: - chars += i * 0b0011 - if not (-32768 <= v <= 32767): - chars += i * 0b1100 - i <<= 4 - - return chars - - -def VarStore_optimize(self, use_NO_VARIATION_INDEX=True, quantization=1): - """Optimize storage. Returns mapping from old VarIdxes to new ones.""" - - # Overview: - # - # For each VarData row, we first extend it with zeroes to have - # one column per region in VarRegionList. We then group the - # rows into _Encoding objects, by their "characteristic" bitmap. - # The characteristic bitmap is a binary number representing how - # many bytes each column of the data takes up to encode. Each - # column is encoded in four bits. For example, if a column has - # only values in the range -128..127, it would only have a single - # bit set in the characteristic bitmap for that column. If it has - # values in the range -32768..32767, it would have two bits set. - # The number of ones in the characteristic bitmap is the "width" - # of the encoding. - # - # Each encoding as such has a number of "active" (ie. non-zero) - # columns. The overhead of encoding the characteristic bitmap - # is 10 bytes, plus 2 bytes per active column. - # - # When an encoding is merged into another one, if the characteristic - # of the old encoding is a subset of the new one, then the overhead - # of the old encoding is completely eliminated. However, each row - # now would require more bytes to encode, to the tune of one byte - # per characteristic bit that is active in the new encoding but not - # in the old one. The number of bits that can be added to an encoding - # while still beneficial to merge it into another encoding is called - # the "room" for that encoding. - # - # The "gain" of an encodings is the maximum number of bytes we can - # save by merging it into another encoding. The "gain" of merging - # two encodings is how many bytes we save by doing so. - # - # High-level algorithm: - # - # - Each encoding has a minimal way to encode it. However, because - # of the overhead of encoding the characteristic bitmap, it may - # be beneficial to merge two encodings together, if there is - # gain in doing so. As such, we need to search for the best - # such successive merges. - # - # Algorithm: - # - # - Put all encodings into a "todo" list. - # - # - Sort todo list by decreasing gain (for stability). - # - # - Make a priority-queue of the gain from combining each two - # encodings in the todo list. The priority queue is sorted by - # decreasing gain. Only positive gains are included. - # - # - While priority queue is not empty: - # - Pop the first item from the priority queue, - # - Merge the two encodings it represents, - # - Remove the two encodings from the todo list, - # - Insert positive gains from combining the new encoding with - # all existing todo list items into the priority queue, - # - If a todo list item with the same characteristic bitmap as - # the new encoding exists, remove it from the todo list and - # merge it into the new encoding. - # - Insert the new encoding into the todo list, - # - # - Encode all remaining items in the todo list. 
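    # A worked example of the characteristic bitmap (illustrative only): for
    # the row (0, 5, -200), column 0 is all zeroes (nibble 0b0000), column 1
    # fits in a byte (0b0001), and column 2 needs a word (0b0011), so the
    # characteristic is 0x310 with width (bit count) 3. If any value falls
    # outside the int16 range, e.g. (0, 5, 40000), the row is re-encoded
    # with word/long columns only, giving characteristic 0xF30 and width 6.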
- - # TODO - # Check that no two VarRegions are the same; if they are, fold them. - - n = len(self.VarRegionList.Region) # Number of columns - zeroes = [0] * n - - front_mapping = {} # Map from old VarIdxes to full row tuples - - encodings = _EncodingDict() - - # Collect all items into a set of full rows (with lots of zeroes.) - for major, data in enumerate(self.VarData): - regionIndices = data.VarRegionIndex - - for minor, item in enumerate(data.Item): - row = list(zeroes) - - if quantization == 1: - for regionIdx, v in zip(regionIndices, item): - row[regionIdx] += v - else: - for regionIdx, v in zip(regionIndices, item): - row[regionIdx] += ( - round(v / quantization) * quantization - ) # TODO https://github.com/fonttools/fonttools/pull/3126#discussion_r1205439785 - - row = tuple(row) - - if use_NO_VARIATION_INDEX and not any(row): - front_mapping[(major << 16) + minor] = None - continue - - encodings.add_row(row) - front_mapping[(major << 16) + minor] = row - - # Prepare for the main algorithm. - todo = sorted(encodings.values(), key=_Encoding.gain_sort_key) - del encodings - - # Repeatedly pick two best encodings to combine, and combine them. - - heap = [] - for i, encoding in enumerate(todo): - for j in range(i + 1, len(todo)): - other_encoding = todo[j] - combining_gain = encoding.gain_from_merging(other_encoding) - if combining_gain > 0: - heappush(heap, (-combining_gain, i, j)) - - while heap: - _, i, j = heappop(heap) - if todo[i] is None or todo[j] is None: - continue - - encoding, other_encoding = todo[i], todo[j] - todo[i], todo[j] = None, None - - # Combine the two encodings - combined_chars = other_encoding.chars | encoding.chars - combined_encoding = _Encoding(combined_chars) - combined_encoding.extend(encoding.items) - combined_encoding.extend(other_encoding.items) - - for k, enc in enumerate(todo): - if enc is None: - continue - - # In the unlikely event that the same encoding exists already, - # combine it. - if enc.chars == combined_chars: - combined_encoding.extend(enc.items) - todo[k] = None - continue - - combining_gain = combined_encoding.gain_from_merging(enc) - if combining_gain > 0: - heappush(heap, (-combining_gain, k, len(todo))) - - todo.append(combined_encoding) - - encodings = [encoding for encoding in todo if encoding is not None] - - # Assemble final store. - back_mapping = {} # Mapping from full rows to new VarIdxes - encodings.sort(key=_Encoding.width_sort_key) - self.VarData = [] - for major, encoding in enumerate(encodings): - data = ot.VarData() - self.VarData.append(data) - data.VarRegionIndex = range(n) - data.VarRegionCount = len(data.VarRegionIndex) - data.Item = sorted(encoding.items) - for minor, item in enumerate(data.Item): - back_mapping[item] = (major << 16) + minor - - # Compile final mapping. - varidx_map = {NO_VARIATION_INDEX: NO_VARIATION_INDEX} - for k, v in front_mapping.items(): - varidx_map[k] = back_mapping[v] if v is not None else NO_VARIATION_INDEX - - # Recalculate things and go home. - self.VarRegionList.RegionCount = len(self.VarRegionList.Region) - self.VarDataCount = len(self.VarData) - for data in self.VarData: - data.ItemCount = len(data.Item) - data.optimize() - - # Remove unused regions. 
- self.prune_regions() - - return varidx_map - - -ot.VarStore.optimize = VarStore_optimize - - -def main(args=None): - """Optimize a font's GDEF variation store""" - from argparse import ArgumentParser - from fontTools import configLogger - from fontTools.ttLib import TTFont - from fontTools.ttLib.tables.otBase import OTTableWriter - - parser = ArgumentParser(prog="varLib.varStore", description=main.__doc__) - parser.add_argument("--quantization", type=int, default=1) - parser.add_argument("fontfile") - parser.add_argument("outfile", nargs="?") - options = parser.parse_args(args) - - # TODO: allow user to configure logging via command-line options - configLogger(level="INFO") - - quantization = options.quantization - fontfile = options.fontfile - outfile = options.outfile - - font = TTFont(fontfile) - gdef = font["GDEF"] - store = gdef.table.VarStore - - writer = OTTableWriter() - store.compile(writer, font) - size = len(writer.getAllData()) - print("Before: %7d bytes" % size) - - varidx_map = store.optimize(quantization=quantization) - - writer = OTTableWriter() - store.compile(writer, font) - size = len(writer.getAllData()) - print("After: %7d bytes" % size) - - if outfile is not None: - gdef.table.remap_device_varidxes(varidx_map) - if "GPOS" in font: - font["GPOS"].table.remap_device_varidxes(varidx_map) - - font.save(outfile) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) > 1: - sys.exit(main()) - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/model3D/shared/utils.ts b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/model3D/shared/utils.ts deleted file mode 100644 index 4d5cde83a5f0c52f4b04f203d2d7c06391fbe901..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/model3D/shared/utils.ts +++ /dev/null @@ -1,79 +0,0 @@ -import type { FileData } from "@gradio/client"; -import * as BABYLON from "babylonjs"; - -const create_camera = ( - scene: BABYLON.Scene, - camera_position: [number | null, number | null, number | null], - zoom_speed: number -): void => { - scene.createDefaultCamera(true, true, true); - var helperCamera = scene.activeCamera! as BABYLON.ArcRotateCamera; - if (camera_position[0] !== null) { - helperCamera.alpha = BABYLON.Tools.ToRadians(camera_position[0]); - } - if (camera_position[1] !== null) { - helperCamera.beta = BABYLON.Tools.ToRadians(camera_position[1]); - } - if (camera_position[2] !== null) { - helperCamera.radius = camera_position[2]; - } - // Disable panning. 
Adapted from: https://playground.babylonjs.com/#4U6TVQ#3 - helperCamera.panningSensibility = 0; - helperCamera.attachControl(false, false, -1); - helperCamera.pinchToPanMaxDistance = 0; - helperCamera.wheelPrecision = 2500 / zoom_speed; -}; - -export const add_new_model = ( - canvas: HTMLCanvasElement, - scene: BABYLON.Scene, - engine: BABYLON.Engine, - value: FileData | null, - clear_color: [number, number, number, number], - camera_position: [number | null, number | null, number | null], - zoom_speed: number -): BABYLON.Scene => { - if (scene && !scene.isDisposed && engine) { - scene.dispose(); - engine.dispose(); - } - - engine = new BABYLON.Engine(canvas, true); - scene = new BABYLON.Scene(engine); - scene.createDefaultCameraOrLight(); - scene.clearColor = scene.clearColor = new BABYLON.Color4(...clear_color); - - engine.runRenderLoop(() => { - scene.render(); - }); - - window.addEventListener("resize", () => { - engine.resize(); - }); - - if (!value) return scene; - let url: string; - - url = value.url!; - - BABYLON.SceneLoader.ShowLoadingScreen = false; - BABYLON.SceneLoader.Append( - url, - "", - scene, - () => create_camera(scene, camera_position, zoom_speed), - undefined, - undefined, - "." + value.path.split(".")[1] - ); - return scene; -}; - -export const reset_camera_position = ( - scene: BABYLON.Scene, - camera_position: [number | null, number | null, number | null], - zoom_speed: number -): void => { - scene.removeCamera(scene.activeCamera!); - create_camera(scene, camera_position, zoom_speed); -}; diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/label.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/label.py deleted file mode 100644 index 3d437a504e8bf3c1fe4abc440e00f0b870009392..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/label.py +++ /dev/null @@ -1,142 +0,0 @@ -"""gr.Label() component.""" - -from __future__ import annotations - -import json -import operator -from pathlib import Path -from typing import Any, Callable, List, Optional, Union - -from gradio_client.documentation import document, set_documentation_group - -from gradio.components.base import Component -from gradio.data_classes import GradioModel -from gradio.events import Events - -set_documentation_group("component") - - -class LabelConfidence(GradioModel): - label: Optional[Union[str, int, float]] = None - confidence: Optional[float] = None - - -class LabelData(GradioModel): - label: Optional[Union[str, int, float]] = None - confidences: Optional[List[LabelConfidence]] = None - - -@document() -class Label(Component): - """ - Displays a classification label, along with confidence scores of top categories, if provided. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a {Dict[str, float]} of classes and confidences, or {str} with just the class or an {int}/{float} for regression outputs, or a {str} path to a .json file containing a json dictionary in the structure produced by Label.postprocess(). 
-
-    Demos: main_note, titanic_survival
-    Guides: image-classification-in-pytorch, image-classification-in-tensorflow, image-classification-with-vision-transformers, building-a-pictionary-app
-    """
-
-    CONFIDENCES_KEY = "confidences"
-    data_model = LabelData
-    EVENTS = [Events.change, Events.select]
-
-    def __init__(
-        self,
-        value: dict[str, float] | str | float | Callable | None = None,
-        *,
-        num_top_classes: int | None = None,
-        label: str | None = None,
-        every: float | None = None,
-        show_label: bool | None = None,
-        container: bool = True,
-        scale: int | None = None,
-        min_width: int = 160,
-        visible: bool = True,
-        elem_id: str | None = None,
-        elem_classes: list[str] | str | None = None,
-        render: bool = True,
-        color: str | None = None,
-    ):
-        """
-        Parameters:
-            value: Default value to show in the component. If a str or number is provided, simply displays the string or number. If a {Dict[str, float]} of classes and confidences is provided, displays the top class on top and the `num_top_classes` below, along with their confidence bars. If callable, the function will be called whenever the app loads to set the initial value of the component.
-            num_top_classes: number of most confident classes to show.
-            label: The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
-            every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
-            show_label: if True, will display label.
-            container: If True, will place the component in a container - providing some extra padding around the border.
-            scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer.
-            min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
-            visible: If False, component will be hidden.
-            elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
-            elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
-            render: If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
-            color: The background color of the label (either a valid css color name or hexadecimal string).
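-
-        A minimal usage sketch (illustrative only; the classifier and its
-        scores below are invented for the example):
-
-            import gradio as gr
-
-            def classify(text):
-                return {"positive": 0.7, "negative": 0.3}
-
-            demo = gr.Interface(classify, gr.Textbox(), gr.Label(num_top_classes=2))
-            demo.launch()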
- """ - self.num_top_classes = num_top_classes - self.color = color - super().__init__( - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - value=value, - ) - - def postprocess( - self, value: dict[str, float] | str | float | None - ) -> LabelData | dict | None: - if value is None or value == {}: - return {} - if isinstance(value, str) and value.endswith(".json") and Path(value).exists(): - return LabelData(**json.loads(Path(value).read_text())) - if isinstance(value, (str, float, int)): - return LabelData(label=str(value)) - if isinstance(value, dict): - if "confidences" in value and isinstance(value["confidences"], dict): - value = value["confidences"] - value = {c["label"]: c["confidence"] for c in value} - sorted_pred = sorted( - value.items(), key=operator.itemgetter(1), reverse=True - ) - if self.num_top_classes is not None: - sorted_pred = sorted_pred[: self.num_top_classes] - return LabelData( - label=sorted_pred[0][0], - confidences=[ - LabelConfidence(label=pred[0], confidence=pred[1]) - for pred in sorted_pred - ], - ) - raise ValueError( - "The `Label` output interface expects one of: a string label, or an int label, a " - "float label, or a dictionary whose keys are labels and values are confidences. " - f"Instead, got a {type(value)}" - ) - - def preprocess( - self, payload: LabelData | None - ) -> dict[str, float] | str | float | None: - if payload is None: - return None - if payload.confidences is None: - return payload.label - return { - d["label"]: d["confidence"] for d in payload.model_dump()["confidences"] - } - - def example_inputs(self) -> Any: - return { - "label": "Cat", - "confidences": [ - {"label": "cat", "confidence": 0.9}, - {"label": "dog", "confidence": 0.1}, - ], - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-329f8260.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-329f8260.css deleted file mode 100644 index 3b53ee465e192f512a964e9050e9aab81384add8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-329f8260.css +++ /dev/null @@ -1 +0,0 @@ -.min.svelte-1ybaih5{min-height:var(--size-24)}.hide.svelte-1ybaih5{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qt.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qt.py deleted file mode 100644 index 4b3783bc87cacc1e7a41eaa2619b2b5ca6fa378b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qt.py +++ /dev/null @@ -1,1022 +0,0 @@ -import functools -import os -import sys -import traceback - -import matplotlib as mpl -from matplotlib import _api, backend_tools, cbook -from matplotlib._pylab_helpers import Gcf -from matplotlib.backend_bases import ( - _Backend, FigureCanvasBase, FigureManagerBase, NavigationToolbar2, - TimerBase, cursors, ToolContainerBase, MouseButton, - CloseEvent, KeyEvent, LocationEvent, MouseEvent, ResizeEvent) -import matplotlib.backends.qt_editor.figureoptions as figureoptions -from . 
import qt_compat
-from .qt_compat import (
-    QtCore, QtGui, QtWidgets, __version__, QT_API,
-    _to_int, _isdeleted, _maybe_allow_interrupt
-)
-
-
-# SPECIAL_KEYS are Qt::Key that do *not* return their Unicode name
-# instead they have manually specified names.
-SPECIAL_KEYS = {
-    _to_int(getattr(QtCore.Qt.Key, k)): v for k, v in [
-        ("Key_Escape", "escape"),
-        ("Key_Tab", "tab"),
-        ("Key_Backspace", "backspace"),
-        ("Key_Return", "enter"),
-        ("Key_Enter", "enter"),
-        ("Key_Insert", "insert"),
-        ("Key_Delete", "delete"),
-        ("Key_Pause", "pause"),
-        ("Key_SysReq", "sysreq"),
-        ("Key_Clear", "clear"),
-        ("Key_Home", "home"),
-        ("Key_End", "end"),
-        ("Key_Left", "left"),
-        ("Key_Up", "up"),
-        ("Key_Right", "right"),
-        ("Key_Down", "down"),
-        ("Key_PageUp", "pageup"),
-        ("Key_PageDown", "pagedown"),
-        ("Key_Shift", "shift"),
-        # In OSX, the control and super (aka cmd/apple) keys are switched.
-        ("Key_Control", "control" if sys.platform != "darwin" else "cmd"),
-        ("Key_Meta", "meta" if sys.platform != "darwin" else "control"),
-        ("Key_Alt", "alt"),
-        ("Key_CapsLock", "caps_lock"),
-        ("Key_F1", "f1"),
-        ("Key_F2", "f2"),
-        ("Key_F3", "f3"),
-        ("Key_F4", "f4"),
-        ("Key_F5", "f5"),
-        ("Key_F6", "f6"),
-        ("Key_F7", "f7"),
-        ("Key_F8", "f8"),
-        ("Key_F9", "f9"),
-        ("Key_F10", "f10"),
-        ("Key_F11", "f11"),
-        ("Key_F12", "f12"),
-        ("Key_Super_L", "super"),
-        ("Key_Super_R", "super"),
-    ]
-}
-# Define which modifier keys are collected on keyboard events.
-# Elements are (Qt::KeyboardModifiers, Qt::Key) tuples.
-# Order determines the modifier order (ctrl+alt+...) reported by Matplotlib.
-_MODIFIER_KEYS = [
-    (_to_int(getattr(QtCore.Qt.KeyboardModifier, mod)),
-     _to_int(getattr(QtCore.Qt.Key, key)))
-    for mod, key in [
-        ("ControlModifier", "Key_Control"),
-        ("AltModifier", "Key_Alt"),
-        ("ShiftModifier", "Key_Shift"),
-        ("MetaModifier", "Key_Meta"),
-    ]
-]
-cursord = {
-    k: getattr(QtCore.Qt.CursorShape, v) for k, v in [
-        (cursors.MOVE, "SizeAllCursor"),
-        (cursors.HAND, "PointingHandCursor"),
-        (cursors.POINTER, "ArrowCursor"),
-        (cursors.SELECT_REGION, "CrossCursor"),
-        (cursors.WAIT, "WaitCursor"),
-        (cursors.RESIZE_HORIZONTAL, "SizeHorCursor"),
-        (cursors.RESIZE_VERTICAL, "SizeVerCursor"),
-    ]
-}
-
-
-# lru_cache keeps a reference to the QApplication instance, keeping it from
-# being GC'd.
-@functools.lru_cache(1)
-def _create_qApp():
-    app = QtWidgets.QApplication.instance()
-
-    # Create a new QApplication and configure it if none exists yet, as only
-    # one QApplication can exist at a time.
-    if app is None:
-        # display_is_valid returns False only if on Linux and neither X11
-        # nor Wayland display can be opened.
-        if not mpl._c_internal_utils.display_is_valid():
-            raise RuntimeError('Invalid DISPLAY variable')
-
-        # Check to make sure a QApplication from a different major version
-        # of Qt is not instantiated in the process
-        if QT_API in {'PyQt6', 'PySide6'}:
-            other_bindings = ('PyQt5', 'PySide2')
-            qt_version = 6
-        elif QT_API in {'PyQt5', 'PySide2'}:
-            other_bindings = ('PyQt6', 'PySide6')
-            qt_version = 5
-        else:
-            raise RuntimeError("Should never be here")
-
-        for binding in other_bindings:
-            mod = sys.modules.get(f'{binding}.QtWidgets')
-            if mod is not None and mod.QApplication.instance() is not None:
-                other_core = sys.modules.get(f'{binding}.QtCore')
-                _api.warn_external(
-                    f'Matplotlib is using {QT_API} which wraps '
-                    f'{QtCore.qVersion()} however an instantiated '
-                    f'QApplication from {binding} which wraps '
-                    f'{other_core.qVersion()} exists. 
Mixing Qt major ' - 'versions may not work as expected.' - ) - break - if qt_version == 5: - try: - QtWidgets.QApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling) - except AttributeError: # Only for Qt>=5.6, <6. - pass - try: - QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( - QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough) - except AttributeError: # Only for Qt>=5.14. - pass - app = QtWidgets.QApplication(["matplotlib"]) - if sys.platform == "darwin": - image = str(cbook._get_data_path('images/matplotlib.svg')) - icon = QtGui.QIcon(image) - app.setWindowIcon(icon) - app.setQuitOnLastWindowClosed(True) - cbook._setup_new_guiapp() - if qt_version == 5: - app.setAttribute(QtCore.Qt.AA_UseHighDpiPixmaps) - - return app - - -class TimerQT(TimerBase): - """Subclass of `.TimerBase` using QTimer events.""" - - def __init__(self, *args, **kwargs): - # Create a new timer and connect the timeout() signal to the - # _on_timer method. - self._timer = QtCore.QTimer() - self._timer.timeout.connect(self._on_timer) - super().__init__(*args, **kwargs) - - def __del__(self): - # The check for deletedness is needed to avoid an error at animation - # shutdown with PySide2. - if not _isdeleted(self._timer): - self._timer_stop() - - def _timer_set_single_shot(self): - self._timer.setSingleShot(self._single) - - def _timer_set_interval(self): - self._timer.setInterval(self._interval) - - def _timer_start(self): - self._timer.start() - - def _timer_stop(self): - self._timer.stop() - - -class FigureCanvasQT(FigureCanvasBase, QtWidgets.QWidget): - required_interactive_framework = "qt" - _timer_cls = TimerQT - manager_class = _api.classproperty(lambda cls: FigureManagerQT) - - buttond = { - getattr(QtCore.Qt.MouseButton, k): v for k, v in [ - ("LeftButton", MouseButton.LEFT), - ("RightButton", MouseButton.RIGHT), - ("MiddleButton", MouseButton.MIDDLE), - ("XButton1", MouseButton.BACK), - ("XButton2", MouseButton.FORWARD), - ] - } - - def __init__(self, figure=None): - _create_qApp() - super().__init__(figure=figure) - - self._draw_pending = False - self._is_drawing = False - self._draw_rect_callback = lambda painter: None - self._in_resize_event = False - - self.setAttribute(QtCore.Qt.WidgetAttribute.WA_OpaquePaintEvent) - self.setMouseTracking(True) - self.resize(*self.get_width_height()) - - palette = QtGui.QPalette(QtGui.QColor("white")) - self.setPalette(palette) - - def _update_pixel_ratio(self): - if self._set_device_pixel_ratio( - self.devicePixelRatioF() or 1): # rarely, devicePixelRatioF=0 - # The easiest way to resize the canvas is to emit a resizeEvent - # since we implement all the logic for resizing the canvas for - # that event. - event = QtGui.QResizeEvent(self.size(), self.size()) - self.resizeEvent(event) - - def _update_screen(self, screen): - # Handler for changes to a window's attached screen. - self._update_pixel_ratio() - if screen is not None: - screen.physicalDotsPerInchChanged.connect(self._update_pixel_ratio) - screen.logicalDotsPerInchChanged.connect(self._update_pixel_ratio) - - def showEvent(self, event): - # Set up correct pixel ratio, and connect to any signal changes for it, - # once the window is shown (and thus has these attributes). 
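-        # windowHandle() only returns a QWindow once the native window
-        # exists (i.e. after the widget is shown); screenChanged then fires
-        # whenever the window is dragged onto another monitor, so the
-        # connection has to be made here rather than in __init__.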
- window = self.window().windowHandle() - window.screenChanged.connect(self._update_screen) - self._update_screen(window.screen()) - - def set_cursor(self, cursor): - # docstring inherited - self.setCursor(_api.check_getitem(cursord, cursor=cursor)) - - def mouseEventCoords(self, pos=None): - """ - Calculate mouse coordinates in physical pixels. - - Qt uses logical pixels, but the figure is scaled to physical - pixels for rendering. Transform to physical pixels so that - all of the down-stream transforms work as expected. - - Also, the origin is different and needs to be corrected. - """ - if pos is None: - pos = self.mapFromGlobal(QtGui.QCursor.pos()) - elif hasattr(pos, "position"): # qt6 QtGui.QEvent - pos = pos.position() - elif hasattr(pos, "pos"): # qt5 QtCore.QEvent - pos = pos.pos() - # (otherwise, it's already a QPoint) - x = pos.x() - # flip y so y=0 is bottom of canvas - y = self.figure.bbox.height / self.device_pixel_ratio - pos.y() - return x * self.device_pixel_ratio, y * self.device_pixel_ratio - - def enterEvent(self, event): - # Force querying of the modifiers, as the cached modifier state can - # have been invalidated while the window was out of focus. - mods = QtWidgets.QApplication.instance().queryKeyboardModifiers() - LocationEvent("figure_enter_event", self, - *self.mouseEventCoords(event), - modifiers=self._mpl_modifiers(mods), - guiEvent=event)._process() - - def leaveEvent(self, event): - QtWidgets.QApplication.restoreOverrideCursor() - LocationEvent("figure_leave_event", self, - *self.mouseEventCoords(), - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def mousePressEvent(self, event): - button = self.buttond.get(event.button()) - if button is not None: - MouseEvent("button_press_event", self, - *self.mouseEventCoords(event), button, - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def mouseDoubleClickEvent(self, event): - button = self.buttond.get(event.button()) - if button is not None: - MouseEvent("button_press_event", self, - *self.mouseEventCoords(event), button, dblclick=True, - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def mouseMoveEvent(self, event): - MouseEvent("motion_notify_event", self, - *self.mouseEventCoords(event), - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def mouseReleaseEvent(self, event): - button = self.buttond.get(event.button()) - if button is not None: - MouseEvent("button_release_event", self, - *self.mouseEventCoords(event), button, - modifiers=self._mpl_modifiers(), - guiEvent=event)._process() - - def wheelEvent(self, event): - # from QWheelEvent::pixelDelta doc: pixelDelta is sometimes not - # provided (`isNull()`) and is unreliable on X11 ("xcb"). 
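-        # angleDelta() is reported in eighths of a degree; a standard wheel
-        # notch is 15 degrees, i.e. 120 units, hence the division by 120
-        # below to recover whole scroll "steps".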
-        if (event.pixelDelta().isNull()
-                or QtWidgets.QApplication.instance().platformName() == "xcb"):
-            steps = event.angleDelta().y() / 120
-        else:
-            steps = event.pixelDelta().y()
-        if steps:
-            MouseEvent("scroll_event", self,
-                       *self.mouseEventCoords(event), step=steps,
-                       modifiers=self._mpl_modifiers(),
-                       guiEvent=event)._process()
-
-    def keyPressEvent(self, event):
-        key = self._get_key(event)
-        if key is not None:
-            KeyEvent("key_press_event", self,
-                     key, *self.mouseEventCoords(),
-                     guiEvent=event)._process()
-
-    def keyReleaseEvent(self, event):
-        key = self._get_key(event)
-        if key is not None:
-            KeyEvent("key_release_event", self,
-                     key, *self.mouseEventCoords(),
-                     guiEvent=event)._process()
-
-    def resizeEvent(self, event):
-        if self._in_resize_event:  # Prevent PyQt6 recursion
-            return
-        self._in_resize_event = True
-        try:
-            w = event.size().width() * self.device_pixel_ratio
-            h = event.size().height() * self.device_pixel_ratio
-            dpival = self.figure.dpi
-            winch = w / dpival
-            hinch = h / dpival
-            self.figure.set_size_inches(winch, hinch, forward=False)
-            # pass back into Qt to let it finish
-            QtWidgets.QWidget.resizeEvent(self, event)
-            # emit our resize events
-            ResizeEvent("resize_event", self)._process()
-            self.draw_idle()
-        finally:
-            self._in_resize_event = False
-
-    def sizeHint(self):
-        w, h = self.get_width_height()
-        return QtCore.QSize(w, h)
-
-    def minimumSizeHint(self):
-        return QtCore.QSize(10, 10)
-
-    @staticmethod
-    def _mpl_modifiers(modifiers=None, *, exclude=None):
-        if modifiers is None:
-            modifiers = QtWidgets.QApplication.instance().keyboardModifiers()
-        modifiers = _to_int(modifiers)
-        # get names of the pressed modifier keys
-        # 'control' is named 'control' when a standalone key, but 'ctrl' when a
-        # modifier
-        # bit twiddling to pick out modifier keys from modifiers bitmask,
-        # if exclude is a MODIFIER, it should not be duplicated in mods
-        return [SPECIAL_KEYS[key].replace('control', 'ctrl')
-                for mask, key in _MODIFIER_KEYS
-                if exclude != key and modifiers & mask]
-
-    def _get_key(self, event):
-        event_key = event.key()
-        mods = self._mpl_modifiers(exclude=event_key)
-        try:
-            # for certain keys (enter, left, backspace, etc) use a word for the
-            # key, rather than Unicode
-            key = SPECIAL_KEYS[event_key]
-        except KeyError:
-            # Unicode defines code points up to 0x10ffff (sys.maxunicode)
-            # QT will use Key_Codes larger than that for keyboard keys that are
-            # not Unicode characters (like multimedia keys)
-            # skip these
-            # if you really want them, you should add them to SPECIAL_KEYS
-            if event_key > sys.maxunicode:
-                return None
-
-            key = chr(event_key)
-            # qt delivers capitalized letters. 
fix capitalization - # note that capslock is ignored - if 'shift' in mods: - mods.remove('shift') - else: - key = key.lower() - - return '+'.join(mods + [key]) - - def flush_events(self): - # docstring inherited - QtWidgets.QApplication.instance().processEvents() - - def start_event_loop(self, timeout=0): - # docstring inherited - if hasattr(self, "_event_loop") and self._event_loop.isRunning(): - raise RuntimeError("Event loop already running") - self._event_loop = event_loop = QtCore.QEventLoop() - if timeout > 0: - _ = QtCore.QTimer.singleShot(int(timeout * 1000), event_loop.quit) - - with _maybe_allow_interrupt(event_loop): - qt_compat._exec(event_loop) - - def stop_event_loop(self, event=None): - # docstring inherited - if hasattr(self, "_event_loop"): - self._event_loop.quit() - - def draw(self): - """Render the figure, and queue a request for a Qt draw.""" - # The renderer draw is done here; delaying causes problems with code - # that uses the result of the draw() to update plot elements. - if self._is_drawing: - return - with cbook._setattr_cm(self, _is_drawing=True): - super().draw() - self.update() - - def draw_idle(self): - """Queue redraw of the Agg buffer and request Qt paintEvent.""" - # The Agg draw needs to be handled by the same thread Matplotlib - # modifies the scene graph from. Post Agg draw request to the - # current event loop in order to ensure thread affinity and to - # accumulate multiple draw requests from event handling. - # TODO: queued signal connection might be safer than singleShot - if not (getattr(self, '_draw_pending', False) or - getattr(self, '_is_drawing', False)): - self._draw_pending = True - QtCore.QTimer.singleShot(0, self._draw_idle) - - def blit(self, bbox=None): - # docstring inherited - if bbox is None and self.figure: - bbox = self.figure.bbox # Blit the entire canvas if bbox is None. - # repaint uses logical pixels, not physical pixels like the renderer. - l, b, w, h = [int(pt / self.device_pixel_ratio) for pt in bbox.bounds] - t = b + h - self.repaint(l, self.rect().height() - t, w, h) - - def _draw_idle(self): - with self._idle_draw_cntx(): - if not self._draw_pending: - return - self._draw_pending = False - if self.height() < 0 or self.width() < 0: - return - try: - self.draw() - except Exception: - # Uncaught exceptions are fatal for PyQt5, so catch them. - traceback.print_exc() - - def drawRectangle(self, rect): - # Draw the zoom rectangle to the QPainter. _draw_rect_callback needs - # to be called at the end of paintEvent. - if rect is not None: - x0, y0, w, h = [int(pt / self.device_pixel_ratio) for pt in rect] - x1 = x0 + w - y1 = y0 + h - def _draw_rect_callback(painter): - pen = QtGui.QPen( - QtGui.QColor("black"), - 1 / self.device_pixel_ratio - ) - - pen.setDashPattern([3, 3]) - for color, offset in [ - (QtGui.QColor("black"), 0), - (QtGui.QColor("white"), 3), - ]: - pen.setDashOffset(offset) - pen.setColor(color) - painter.setPen(pen) - # Draw the lines from x0, y0 towards x1, y1 so that the - # dashes don't "jump" when moving the zoom box. 
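-                # The four calls below trace the left, top, bottom, and
-                # right edges of the zoom rectangle, respectively.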
- painter.drawLine(x0, y0, x0, y1) - painter.drawLine(x0, y0, x1, y0) - painter.drawLine(x0, y1, x1, y1) - painter.drawLine(x1, y0, x1, y1) - else: - def _draw_rect_callback(painter): - return - self._draw_rect_callback = _draw_rect_callback - self.update() - - -class MainWindow(QtWidgets.QMainWindow): - closing = QtCore.Signal() - - def closeEvent(self, event): - self.closing.emit() - super().closeEvent(event) - - -class FigureManagerQT(FigureManagerBase): - """ - Attributes - ---------- - canvas : `FigureCanvas` - The FigureCanvas instance - num : int or str - The Figure number - toolbar : qt.QToolBar - The qt.QToolBar - window : qt.QMainWindow - The qt.QMainWindow - """ - - def __init__(self, canvas, num): - self.window = MainWindow() - super().__init__(canvas, num) - self.window.closing.connect(self._widgetclosed) - - if sys.platform != "darwin": - image = str(cbook._get_data_path('images/matplotlib.svg')) - icon = QtGui.QIcon(image) - self.window.setWindowIcon(icon) - - self.window._destroying = False - - if self.toolbar: - self.window.addToolBar(self.toolbar) - tbs_height = self.toolbar.sizeHint().height() - else: - tbs_height = 0 - - # resize the main window so it will display the canvas with the - # requested size: - cs = canvas.sizeHint() - cs_height = cs.height() - height = cs_height + tbs_height - self.window.resize(cs.width(), height) - - self.window.setCentralWidget(self.canvas) - - if mpl.is_interactive(): - self.window.show() - self.canvas.draw_idle() - - # Give the keyboard focus to the figure instead of the manager: - # StrongFocus accepts both tab and click to focus and will enable the - # canvas to process event without clicking. - # https://doc.qt.io/qt-5/qt.html#FocusPolicy-enum - self.canvas.setFocusPolicy(QtCore.Qt.FocusPolicy.StrongFocus) - self.canvas.setFocus() - - self.window.raise_() - - def full_screen_toggle(self): - if self.window.isFullScreen(): - self.window.showNormal() - else: - self.window.showFullScreen() - - def _widgetclosed(self): - CloseEvent("close_event", self.canvas)._process() - if self.window._destroying: - return - self.window._destroying = True - try: - Gcf.destroy(self) - except AttributeError: - pass - # It seems that when the python session is killed, - # Gcf can get destroyed before the Gcf.destroy - # line is run, leading to a useless AttributeError. - - def resize(self, width, height): - # The Qt methods return sizes in 'virtual' pixels so we do need to - # rescale from physical to logical pixels. 
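-        # E.g. with a 2x device pixel ratio, a request for an 800-pixel-wide
-        # canvas translates into a 400-logical-pixel widget resize.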
- width = int(width / self.canvas.device_pixel_ratio) - height = int(height / self.canvas.device_pixel_ratio) - extra_width = self.window.width() - self.canvas.width() - extra_height = self.window.height() - self.canvas.height() - self.canvas.resize(width, height) - self.window.resize(width + extra_width, height + extra_height) - - @classmethod - def start_main_loop(cls): - qapp = QtWidgets.QApplication.instance() - if qapp: - with _maybe_allow_interrupt(qapp): - qt_compat._exec(qapp) - - def show(self): - self.window.show() - if mpl.rcParams['figure.raise_window']: - self.window.activateWindow() - self.window.raise_() - - def destroy(self, *args): - # check for qApp first, as PySide deletes it in its atexit handler - if QtWidgets.QApplication.instance() is None: - return - if self.window._destroying: - return - self.window._destroying = True - if self.toolbar: - self.toolbar.destroy() - self.window.close() - - def get_window_title(self): - return self.window.windowTitle() - - def set_window_title(self, title): - self.window.setWindowTitle(title) - - -class NavigationToolbar2QT(NavigationToolbar2, QtWidgets.QToolBar): - _message = QtCore.Signal(str) # Remove once deprecation below elapses. - message = _api.deprecate_privatize_attribute("3.8") - - toolitems = [*NavigationToolbar2.toolitems] - toolitems.insert( - # Add 'customize' action after 'subplots' - [name for name, *_ in toolitems].index("Subplots") + 1, - ("Customize", "Edit axis, curve and image parameters", - "qt4_editor_options", "edit_parameters")) - - def __init__(self, canvas, parent=None, coordinates=True): - """coordinates: should we show the coordinates on the right?""" - QtWidgets.QToolBar.__init__(self, parent) - self.setAllowedAreas(QtCore.Qt.ToolBarArea( - _to_int(QtCore.Qt.ToolBarArea.TopToolBarArea) | - _to_int(QtCore.Qt.ToolBarArea.BottomToolBarArea))) - self.coordinates = coordinates - self._actions = {} # mapping of toolitem method names to QActions. - self._subplot_dialog = None - - for text, tooltip_text, image_file, callback in self.toolitems: - if text is None: - self.addSeparator() - else: - a = self.addAction(self._icon(image_file + '.png'), - text, getattr(self, callback)) - self._actions[callback] = a - if callback in ['zoom', 'pan']: - a.setCheckable(True) - if tooltip_text is not None: - a.setToolTip(tooltip_text) - - # Add the (x, y) location widget at the right side of the toolbar - # The stretch factor is 1 which means any resizing of the toolbar - # will resize this label instead of the buttons. - if self.coordinates: - self.locLabel = QtWidgets.QLabel("", self) - self.locLabel.setAlignment(QtCore.Qt.AlignmentFlag( - _to_int(QtCore.Qt.AlignmentFlag.AlignRight) | - _to_int(QtCore.Qt.AlignmentFlag.AlignVCenter))) - - self.locLabel.setSizePolicy(QtWidgets.QSizePolicy( - QtWidgets.QSizePolicy.Policy.Expanding, - QtWidgets.QSizePolicy.Policy.Ignored, - )) - labelAction = self.addWidget(self.locLabel) - labelAction.setVisible(True) - - NavigationToolbar2.__init__(self, canvas) - - def _icon(self, name): - """ - Construct a `.QIcon` from an image file *name*, including the extension - and relative to Matplotlib's "images" data directory. 
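-
-        A high-resolution "_large" variant of the image is preferred when it
-        exists alongside the regular file (see the lookup below).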
- """ - # use a high-resolution icon with suffix '_large' if available - # note: user-provided icons may not have '_large' versions - path_regular = cbook._get_data_path('images', name) - path_large = path_regular.with_name( - path_regular.name.replace('.png', '_large.png')) - filename = str(path_large if path_large.exists() else path_regular) - - pm = QtGui.QPixmap(filename) - pm.setDevicePixelRatio( - self.devicePixelRatioF() or 1) # rarely, devicePixelRatioF=0 - if self.palette().color(self.backgroundRole()).value() < 128: - icon_color = self.palette().color(self.foregroundRole()) - mask = pm.createMaskFromColor( - QtGui.QColor('black'), - QtCore.Qt.MaskMode.MaskOutColor) - pm.fill(icon_color) - pm.setMask(mask) - return QtGui.QIcon(pm) - - def edit_parameters(self): - axes = self.canvas.figure.get_axes() - if not axes: - QtWidgets.QMessageBox.warning( - self.canvas.parent(), "Error", "There are no axes to edit.") - return - elif len(axes) == 1: - ax, = axes - else: - titles = [ - ax.get_label() or - ax.get_title() or - ax.get_title("left") or - ax.get_title("right") or - " - ".join(filter(None, [ax.get_xlabel(), ax.get_ylabel()])) or - f"" - for ax in axes] - duplicate_titles = [ - title for title in titles if titles.count(title) > 1] - for i, ax in enumerate(axes): - if titles[i] in duplicate_titles: - titles[i] += f" (id: {id(ax):#x})" # Deduplicate titles. - item, ok = QtWidgets.QInputDialog.getItem( - self.canvas.parent(), - 'Customize', 'Select axes:', titles, 0, False) - if not ok: - return - ax = axes[titles.index(item)] - figureoptions.figure_edit(ax, self) - - def _update_buttons_checked(self): - # sync button checkstates to match active mode - if 'pan' in self._actions: - self._actions['pan'].setChecked(self.mode.name == 'PAN') - if 'zoom' in self._actions: - self._actions['zoom'].setChecked(self.mode.name == 'ZOOM') - - def pan(self, *args): - super().pan(*args) - self._update_buttons_checked() - - def zoom(self, *args): - super().zoom(*args) - self._update_buttons_checked() - - def set_message(self, s): - self._message.emit(s) - if self.coordinates: - self.locLabel.setText(s) - - def draw_rubberband(self, event, x0, y0, x1, y1): - height = self.canvas.figure.bbox.height - y1 = height - y1 - y0 = height - y0 - rect = [int(val) for val in (x0, y0, x1 - x0, y1 - y0)] - self.canvas.drawRectangle(rect) - - def remove_rubberband(self): - self.canvas.drawRectangle(None) - - def configure_subplots(self): - if self._subplot_dialog is None: - self._subplot_dialog = SubplotToolQt( - self.canvas.figure, self.canvas.parent()) - self.canvas.mpl_connect( - "close_event", lambda e: self._subplot_dialog.reject()) - self._subplot_dialog.update_from_current_subplotpars() - self._subplot_dialog.show() - return self._subplot_dialog - - def save_figure(self, *args): - filetypes = self.canvas.get_supported_filetypes_grouped() - sorted_filetypes = sorted(filetypes.items()) - default_filetype = self.canvas.get_default_filetype() - - startpath = os.path.expanduser(mpl.rcParams['savefig.directory']) - start = os.path.join(startpath, self.canvas.get_default_filename()) - filters = [] - selectedFilter = None - for name, exts in sorted_filetypes: - exts_list = " ".join(['*.%s' % ext for ext in exts]) - filter = f'{name} ({exts_list})' - if default_filetype in exts: - selectedFilter = filter - filters.append(filter) - filters = ';;'.join(filters) - - fname, filter = QtWidgets.QFileDialog.getSaveFileName( - self.canvas.parent(), "Choose a filename to save to", start, - filters, selectedFilter) - if fname: 
-            # Save dir for next time, unless empty str (i.e., use cwd).
-            if startpath != "":
-                mpl.rcParams['savefig.directory'] = os.path.dirname(fname)
-            try:
-                self.canvas.figure.savefig(fname)
-            except Exception as e:
-                QtWidgets.QMessageBox.critical(
-                    self, "Error saving file", str(e),
-                    QtWidgets.QMessageBox.StandardButton.Ok,
-                    QtWidgets.QMessageBox.StandardButton.NoButton)
-
-    def set_history_buttons(self):
-        can_backward = self._nav_stack._pos > 0
-        can_forward = self._nav_stack._pos < len(self._nav_stack) - 1
-        if 'back' in self._actions:
-            self._actions['back'].setEnabled(can_backward)
-        if 'forward' in self._actions:
-            self._actions['forward'].setEnabled(can_forward)
-
-
-class SubplotToolQt(QtWidgets.QDialog):
-    def __init__(self, targetfig, parent):
-        super().__init__()
-        self.setWindowIcon(QtGui.QIcon(
-            str(cbook._get_data_path("images/matplotlib.png"))))
-        self.setObjectName("SubplotTool")
-        self._spinboxes = {}
-        main_layout = QtWidgets.QHBoxLayout()
-        self.setLayout(main_layout)
-        for group, spinboxes, buttons in [
-                ("Borders",
-                 ["top", "bottom", "left", "right"],
-                 [("Export values", self._export_values)]),
-                ("Spacings",
-                 ["hspace", "wspace"],
-                 [("Tight layout", self._tight_layout),
-                  ("Reset", self._reset),
-                  ("Close", self.close)])]:
-            layout = QtWidgets.QVBoxLayout()
-            main_layout.addLayout(layout)
-            box = QtWidgets.QGroupBox(group)
-            layout.addWidget(box)
-            inner = QtWidgets.QFormLayout(box)
-            for name in spinboxes:
-                self._spinboxes[name] = spinbox = QtWidgets.QDoubleSpinBox()
-                spinbox.setRange(0, 1)
-                spinbox.setDecimals(3)
-                spinbox.setSingleStep(0.005)
-                spinbox.setKeyboardTracking(False)
-                spinbox.valueChanged.connect(self._on_value_changed)
-                inner.addRow(name, spinbox)
-            layout.addStretch(1)
-            for name, method in buttons:
-                button = QtWidgets.QPushButton(name)
-                # Don't trigger on <enter>, which is used to input values.
-                button.setAutoDefault(False)
-                button.clicked.connect(method)
-                layout.addWidget(button)
-                if name == "Close":
-                    button.setFocus()
-        self._figure = targetfig
-        self._defaults = {}
-        self._export_values_dialog = None
-        self.update_from_current_subplotpars()
-
-    def update_from_current_subplotpars(self):
-        self._defaults = {spinbox: getattr(self._figure.subplotpars, name)
-                          for name, spinbox in self._spinboxes.items()}
-        self._reset()  # Set spinbox current values without triggering signals.
-
-    def _export_values(self):
-        # Explicitly round to 3 decimals (which is also the spinbox precision)
-        # to avoid numbers of the form 0.100...001.
-        self._export_values_dialog = QtWidgets.QDialog()
-        layout = QtWidgets.QVBoxLayout()
-        self._export_values_dialog.setLayout(layout)
-        text = QtWidgets.QPlainTextEdit()
-        text.setReadOnly(True)
-        layout.addWidget(text)
-        text.setPlainText(
-            ",\n".join(f"{attr}={spinbox.value():.3}"
-                       for attr, spinbox in self._spinboxes.items()))
-        # Adjust the height of the text widget to fit the whole text, plus
-        # some padding.
-        size = text.maximumSize()
-        size.setHeight(
-            QtGui.QFontMetrics(text.document().defaultFont())
-            .size(0, text.toPlainText()).height() + 20)
-        text.setMaximumSize(size)
-        self._export_values_dialog.show()
-
-    def _on_value_changed(self):
-        spinboxes = self._spinboxes
-        # Set all mins and maxes, so that this can also be used in _reset(). 
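-        # The .001 margin keeps each lower bound strictly below its paired
-        # upper bound, so e.g. "bottom" can never meet or cross "top".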
- for lower, higher in [("bottom", "top"), ("left", "right")]: - spinboxes[higher].setMinimum(spinboxes[lower].value() + .001) - spinboxes[lower].setMaximum(spinboxes[higher].value() - .001) - self._figure.subplots_adjust( - **{attr: spinbox.value() for attr, spinbox in spinboxes.items()}) - self._figure.canvas.draw_idle() - - def _tight_layout(self): - self._figure.tight_layout() - for attr, spinbox in self._spinboxes.items(): - spinbox.blockSignals(True) - spinbox.setValue(getattr(self._figure.subplotpars, attr)) - spinbox.blockSignals(False) - self._figure.canvas.draw_idle() - - def _reset(self): - for spinbox, value in self._defaults.items(): - spinbox.setRange(0, 1) - spinbox.blockSignals(True) - spinbox.setValue(value) - spinbox.blockSignals(False) - self._on_value_changed() - - -class ToolbarQt(ToolContainerBase, QtWidgets.QToolBar): - def __init__(self, toolmanager, parent=None): - ToolContainerBase.__init__(self, toolmanager) - QtWidgets.QToolBar.__init__(self, parent) - self.setAllowedAreas(QtCore.Qt.ToolBarArea( - _to_int(QtCore.Qt.ToolBarArea.TopToolBarArea) | - _to_int(QtCore.Qt.ToolBarArea.BottomToolBarArea))) - message_label = QtWidgets.QLabel("") - message_label.setAlignment(QtCore.Qt.AlignmentFlag( - _to_int(QtCore.Qt.AlignmentFlag.AlignRight) | - _to_int(QtCore.Qt.AlignmentFlag.AlignVCenter))) - message_label.setSizePolicy(QtWidgets.QSizePolicy( - QtWidgets.QSizePolicy.Policy.Expanding, - QtWidgets.QSizePolicy.Policy.Ignored, - )) - self._message_action = self.addWidget(message_label) - self._toolitems = {} - self._groups = {} - - def add_toolitem( - self, name, group, position, image_file, description, toggle): - - button = QtWidgets.QToolButton(self) - if image_file: - button.setIcon(NavigationToolbar2QT._icon(self, image_file)) - button.setText(name) - if description: - button.setToolTip(description) - - def handler(): - self.trigger_tool(name) - if toggle: - button.setCheckable(True) - button.toggled.connect(handler) - else: - button.clicked.connect(handler) - - self._toolitems.setdefault(name, []) - self._add_to_group(group, name, button, position) - self._toolitems[name].append((button, handler)) - - def _add_to_group(self, group, name, button, position): - gr = self._groups.get(group, []) - if not gr: - sep = self.insertSeparator(self._message_action) - gr.append(sep) - before = gr[position] - widget = self.insertWidget(before, button) - gr.insert(position, widget) - self._groups[group] = gr - - def toggle_toolitem(self, name, toggled): - if name not in self._toolitems: - return - for button, handler in self._toolitems[name]: - button.toggled.disconnect(handler) - button.setChecked(toggled) - button.toggled.connect(handler) - - def remove_toolitem(self, name): - for button, handler in self._toolitems[name]: - button.setParent(None) - del self._toolitems[name] - - def set_message(self, s): - self.widgetForAction(self._message_action).setText(s) - - -@backend_tools._register_tool_class(FigureCanvasQT) -class ConfigureSubplotsQt(backend_tools.ConfigureSubplotsBase): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._subplot_dialog = None - - def trigger(self, *args): - NavigationToolbar2QT.configure_subplots(self) - - -@backend_tools._register_tool_class(FigureCanvasQT) -class SaveFigureQt(backend_tools.SaveFigureBase): - def trigger(self, *args): - NavigationToolbar2QT.save_figure( - self._make_classic_style_pseudo_toolbar()) - - -@backend_tools._register_tool_class(FigureCanvasQT) -class RubberbandQt(backend_tools.RubberbandBase): - 
def draw_rubberband(self, x0, y0, x1, y1):
-        NavigationToolbar2QT.draw_rubberband(
-            self._make_classic_style_pseudo_toolbar(), None, x0, y0, x1, y1)
-
-    def remove_rubberband(self):
-        NavigationToolbar2QT.remove_rubberband(
-            self._make_classic_style_pseudo_toolbar())
-
-
-@backend_tools._register_tool_class(FigureCanvasQT)
-class HelpQt(backend_tools.ToolHelpBase):
-    def trigger(self, *args):
-        QtWidgets.QMessageBox.information(None, "Help", self._get_help_html())
-
-
-@backend_tools._register_tool_class(FigureCanvasQT)
-class ToolCopyToClipboardQT(backend_tools.ToolCopyToClipboardBase):
-    def trigger(self, *args, **kwargs):
-        pixmap = self.canvas.grab()
-        QtWidgets.QApplication.instance().clipboard().setPixmap(pixmap)
-
-
-FigureManagerQT._toolbar2_class = NavigationToolbar2QT
-FigureManagerQT._toolmanager_toolbar_class = ToolbarQt
-
-
-@_Backend.export
-class _BackendQT(_Backend):
-    backend_version = __version__
-    FigureCanvas = FigureCanvasQT
-    FigureManager = FigureManagerQT
-    mainloop = FigureManagerQT.start_main_loop
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/__init__.py
deleted file mode 100644
index 3ba501de03b6d17da62f03b7cf66f07232679533..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/command/__init__.py
+++ /dev/null
@@ -1,41 +0,0 @@
-"""distutils.command
-
-Package containing implementation of all the standard Distutils
-commands.
-
-"""
-
-__revision__ = "$Id: __init__.py,v 1.3 2005/05/16 11:08:49 pearu Exp $"
-
-distutils_all = [  #'build_py',
-                  'clean',
-                  'install_clib',
-                  'install_scripts',
-                  'bdist',
-                  'bdist_dumb',
-                  'bdist_wininst',
-                 ]
-
-__import__('distutils.command', globals(), locals(), distutils_all)
-
-__all__ = ['build',
-           'config_compiler',
-           'config',
-           'build_src',
-           'build_py',
-           'build_ext',
-           'build_clib',
-           'build_scripts',
-           'install',
-           'install_data',
-           'install_headers',
-           'install_lib',
-           'bdist_rpm',
-           'sdist',
-          ] + distutils_all
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/histograms.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/histograms.py
deleted file mode 100644
index 6ac65b726928bb21432a7a6edcbf73fbeaedb137..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/histograms.py
+++ /dev/null
@@ -1,1072 +0,0 @@
-"""
-Histogram-related functions
-"""
-import contextlib
-import functools
-import operator
-import warnings
-
-import numpy as np
-from numpy.core import overrides
-
-__all__ = ['histogram', 'histogramdd', 'histogram_bin_edges']
-
-array_function_dispatch = functools.partial(
-    overrides.array_function_dispatch, module='numpy')
-
-# range is a keyword argument to many functions, so save the builtin so they can
-# use it.
-_range = range
-
-
-def _ptp(x):
-    """Peak-to-peak value of x.
-
-    This implementation avoids the problem of signed integer arrays having a
-    peak-to-peak value that cannot be represented with the array's data type.
-    This function returns an unsigned value for signed integer arrays. 
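-
-    For example, for an int8 array spanning [-128, 127] the true
-    peak-to-peak value is 255, which cannot be represented as int8;
-    this returns it as an unsigned 255 instead of wrapping around.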
- """ - return _unsigned_subtract(x.max(), x.min()) - - -def _hist_bin_sqrt(x, range): - """ - Square root histogram bin estimator. - - Bin width is inversely proportional to the data size. Used by many - programs for its simplicity. - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. - """ - del range # unused - return _ptp(x) / np.sqrt(x.size) - - -def _hist_bin_sturges(x, range): - """ - Sturges histogram bin estimator. - - A very simplistic estimator based on the assumption of normality of - the data. This estimator has poor performance for non-normal data, - which becomes especially obvious for large data sets. The estimate - depends only on size of the data. - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. - """ - del range # unused - return _ptp(x) / (np.log2(x.size) + 1.0) - - -def _hist_bin_rice(x, range): - """ - Rice histogram bin estimator. - - Another simple estimator with no normality assumption. It has better - performance for large data than Sturges, but tends to overestimate - the number of bins. The number of bins is proportional to the cube - root of data size (asymptotically optimal). The estimate depends - only on size of the data. - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. - """ - del range # unused - return _ptp(x) / (2.0 * x.size ** (1.0 / 3)) - - -def _hist_bin_scott(x, range): - """ - Scott histogram bin estimator. - - The binwidth is proportional to the standard deviation of the data - and inversely proportional to the cube root of data size - (asymptotically optimal). - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. - """ - del range # unused - return (24.0 * np.pi**0.5 / x.size)**(1.0 / 3.0) * np.std(x) - - -def _hist_bin_stone(x, range): - """ - Histogram bin estimator based on minimizing the estimated integrated squared error (ISE). - - The number of bins is chosen by minimizing the estimated ISE against the unknown true distribution. - The ISE is estimated using cross-validation and can be regarded as a generalization of Scott's rule. - https://en.wikipedia.org/wiki/Histogram#Scott.27s_normal_reference_rule - - This paper by Stone appears to be the origination of this rule. - http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/34.pdf - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - range : (float, float) - The lower and upper range of the bins. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. 
- """ - - n = x.size - ptp_x = _ptp(x) - if n <= 1 or ptp_x == 0: - return 0 - - def jhat(nbins): - hh = ptp_x / nbins - p_k = np.histogram(x, bins=nbins, range=range)[0] / n - return (2 - (n + 1) * p_k.dot(p_k)) / hh - - nbins_upper_bound = max(100, int(np.sqrt(n))) - nbins = min(_range(1, nbins_upper_bound + 1), key=jhat) - if nbins == nbins_upper_bound: - warnings.warn("The number of bins estimated may be suboptimal.", - RuntimeWarning, stacklevel=3) - return ptp_x / nbins - - -def _hist_bin_doane(x, range): - """ - Doane's histogram bin estimator. - - Improved version of Sturges' formula which works better for - non-normal data. See - stats.stackexchange.com/questions/55134/doanes-formula-for-histogram-binning - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. - """ - del range # unused - if x.size > 2: - sg1 = np.sqrt(6.0 * (x.size - 2) / ((x.size + 1.0) * (x.size + 3))) - sigma = np.std(x) - if sigma > 0.0: - # These three operations add up to - # g1 = np.mean(((x - np.mean(x)) / sigma)**3) - # but use only one temp array instead of three - temp = x - np.mean(x) - np.true_divide(temp, sigma, temp) - np.power(temp, 3, temp) - g1 = np.mean(temp) - return _ptp(x) / (1.0 + np.log2(x.size) + - np.log2(1.0 + np.absolute(g1) / sg1)) - return 0.0 - - -def _hist_bin_fd(x, range): - """ - The Freedman-Diaconis histogram bin estimator. - - The Freedman-Diaconis rule uses interquartile range (IQR) to - estimate binwidth. It is considered a variation of the Scott rule - with more robustness as the IQR is less affected by outliers than - the standard deviation. However, the IQR depends on fewer points - than the standard deviation, so it is less accurate, especially for - long tailed distributions. - - If the IQR is 0, this function returns 0 for the bin width. - Binwidth is inversely proportional to the cube root of data size - (asymptotically optimal). - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. - - Returns - ------- - h : An estimate of the optimal bin width for the given data. - """ - del range # unused - iqr = np.subtract(*np.percentile(x, [75, 25])) - return 2.0 * iqr * x.size ** (-1.0 / 3.0) - - -def _hist_bin_auto(x, range): - """ - Histogram bin estimator that uses the minimum width of the - Freedman-Diaconis and Sturges estimators if the FD bin width is non-zero. - If the bin width from the FD estimator is 0, the Sturges estimator is used. - - The FD estimator is usually the most robust method, but its width - estimate tends to be too large for small `x` and bad for data with limited - variance. The Sturges estimator is quite good for small (<1000) datasets - and is the default in the R language. This method gives good off-the-shelf - behaviour. - - .. versionchanged:: 1.15.0 - If there is limited variance the IQR can be 0, which results in the - FD bin width being 0 too. This is not a valid bin width, so - ``np.histogram_bin_edges`` chooses 1 bin instead, which may not be optimal. - If the IQR is 0, it's unlikely any variance-based estimators will be of - use, so we revert to the Sturges estimator, which only uses the size of the - dataset in its calculation. - - Parameters - ---------- - x : array_like - Input data that is to be histogrammed, trimmed to range. May not - be empty. 
- - Returns - ------- - h : An estimate of the optimal bin width for the given data. - - See Also - -------- - _hist_bin_fd, _hist_bin_sturges - """ - fd_bw = _hist_bin_fd(x, range) - sturges_bw = _hist_bin_sturges(x, range) - del range # unused - if fd_bw: - return min(fd_bw, sturges_bw) - else: - # limited variance, so we return a len dependent bw estimator - return sturges_bw - -# Private dict initialized at module load time -_hist_bin_selectors = {'stone': _hist_bin_stone, - 'auto': _hist_bin_auto, - 'doane': _hist_bin_doane, - 'fd': _hist_bin_fd, - 'rice': _hist_bin_rice, - 'scott': _hist_bin_scott, - 'sqrt': _hist_bin_sqrt, - 'sturges': _hist_bin_sturges} - - -def _ravel_and_check_weights(a, weights): - """ Check a and weights have matching shapes, and ravel both """ - a = np.asarray(a) - - # Ensure that the array is a "subtractable" dtype - if a.dtype == np.bool_: - warnings.warn("Converting input from {} to {} for compatibility." - .format(a.dtype, np.uint8), - RuntimeWarning, stacklevel=3) - a = a.astype(np.uint8) - - if weights is not None: - weights = np.asarray(weights) - if weights.shape != a.shape: - raise ValueError( - 'weights should have the same shape as a.') - weights = weights.ravel() - a = a.ravel() - return a, weights - - -def _get_outer_edges(a, range): - """ - Determine the outer bin edges to use, from either the data or the range - argument - """ - if range is not None: - first_edge, last_edge = range - if first_edge > last_edge: - raise ValueError( - 'max must be larger than min in range parameter.') - if not (np.isfinite(first_edge) and np.isfinite(last_edge)): - raise ValueError( - "supplied range of [{}, {}] is not finite".format(first_edge, last_edge)) - elif a.size == 0: - # handle empty arrays. Can't determine range, so use 0-1. - first_edge, last_edge = 0, 1 - else: - first_edge, last_edge = a.min(), a.max() - if not (np.isfinite(first_edge) and np.isfinite(last_edge)): - raise ValueError( - "autodetected range of [{}, {}] is not finite".format(first_edge, last_edge)) - - # expand empty range to avoid divide by zero - if first_edge == last_edge: - first_edge = first_edge - 0.5 - last_edge = last_edge + 0.5 - - return first_edge, last_edge - - -def _unsigned_subtract(a, b): - """ - Subtract two values where a >= b, and produce an unsigned result - - This is needed when finding the difference between the upper and lower - bound of an int16 histogram - """ - # coerce to a single type - signed_to_unsigned = { - np.byte: np.ubyte, - np.short: np.ushort, - np.intc: np.uintc, - np.int_: np.uint, - np.longlong: np.ulonglong - } - dt = np.result_type(a, b) - try: - dt = signed_to_unsigned[dt.type] - except KeyError: - return np.subtract(a, b, dtype=dt) - else: - # we know the inputs are integers, and we are deliberately casting - # signed to unsigned - return np.subtract(a, b, casting='unsafe', dtype=dt) - - -def _get_bin_edges(a, bins, range, weights): - """ - Computes the bins used internally by `histogram`. - - Parameters - ========== - a : ndarray - Ravelled data array - bins, range - Forwarded arguments from `histogram`. - weights : ndarray, optional - Ravelled weights array, or None - - Returns - ======= - bin_edges : ndarray - Array of bin edges - uniform_bins : (Number, Number, int): - The upper bound, lowerbound, and number of bins, used in the optimized - implementation of `histogram` that works on uniform bins. 
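-
-    Note that, despite the ordering implied above, the tuple is
-    (first_edge, last_edge, n_equal_bins); None is returned in its place
-    when the requested bins are not uniform.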
- """ - # parse the overloaded bins argument - n_equal_bins = None - bin_edges = None - - if isinstance(bins, str): - bin_name = bins - # if `bins` is a string for an automatic method, - # this will replace it with the number of bins calculated - if bin_name not in _hist_bin_selectors: - raise ValueError( - "{!r} is not a valid estimator for `bins`".format(bin_name)) - if weights is not None: - raise TypeError("Automated estimation of the number of " - "bins is not supported for weighted data") - - first_edge, last_edge = _get_outer_edges(a, range) - - # truncate the range if needed - if range is not None: - keep = (a >= first_edge) - keep &= (a <= last_edge) - if not np.logical_and.reduce(keep): - a = a[keep] - - if a.size == 0: - n_equal_bins = 1 - else: - # Do not call selectors on empty arrays - width = _hist_bin_selectors[bin_name](a, (first_edge, last_edge)) - if width: - n_equal_bins = int(np.ceil(_unsigned_subtract(last_edge, first_edge) / width)) - else: - # Width can be zero for some estimators, e.g. FD when - # the IQR of the data is zero. - n_equal_bins = 1 - - elif np.ndim(bins) == 0: - try: - n_equal_bins = operator.index(bins) - except TypeError as e: - raise TypeError( - '`bins` must be an integer, a string, or an array') from e - if n_equal_bins < 1: - raise ValueError('`bins` must be positive, when an integer') - - first_edge, last_edge = _get_outer_edges(a, range) - - elif np.ndim(bins) == 1: - bin_edges = np.asarray(bins) - if np.any(bin_edges[:-1] > bin_edges[1:]): - raise ValueError( - '`bins` must increase monotonically, when an array') - - else: - raise ValueError('`bins` must be 1d, when an array') - - if n_equal_bins is not None: - # gh-10322 means that type resolution rules are dependent on array - # shapes. To avoid this causing problems, we pick a type now and stick - # with it throughout. - bin_type = np.result_type(first_edge, last_edge, a) - if np.issubdtype(bin_type, np.integer): - bin_type = np.result_type(bin_type, float) - - # bin edges must be computed - bin_edges = np.linspace( - first_edge, last_edge, n_equal_bins + 1, - endpoint=True, dtype=bin_type) - return bin_edges, (first_edge, last_edge, n_equal_bins) - else: - return bin_edges, None - - -def _search_sorted_inclusive(a, v): - """ - Like `searchsorted`, but where the last item in `v` is placed on the right. - - In the context of a histogram, this makes the last bin edge inclusive - """ - return np.concatenate(( - a.searchsorted(v[:-1], 'left'), - a.searchsorted(v[-1:], 'right') - )) - - -def _histogram_bin_edges_dispatcher(a, bins=None, range=None, weights=None): - return (a, bins, weights) - - -@array_function_dispatch(_histogram_bin_edges_dispatcher) -def histogram_bin_edges(a, bins=10, range=None, weights=None): - r""" - Function to calculate only the edges of the bins used by the `histogram` - function. - - Parameters - ---------- - a : array_like - Input data. The histogram is computed over the flattened array. - bins : int or sequence of scalars or str, optional - If `bins` is an int, it defines the number of equal-width - bins in the given range (10, by default). If `bins` is a - sequence, it defines the bin edges, including the rightmost - edge, allowing for non-uniform bin widths. - - If `bins` is a string from the list below, `histogram_bin_edges` will use - the method chosen to calculate the optimal bin width and - consequently the number of bins (see `Notes` for more detail on - the estimators) from the data that falls within the requested - range. 
While the bin width will be optimal for the actual data - in the range, the number of bins will be computed to fill the - entire range, including the empty portions. For visualisation, - using the 'auto' option is suggested. Weighted data is not - supported for automated bin size selection. - - 'auto' - Maximum of the 'sturges' and 'fd' estimators. Provides good - all around performance. - - 'fd' (Freedman Diaconis Estimator) - Robust (resilient to outliers) estimator that takes into - account data variability and data size. - - 'doane' - An improved version of Sturges' estimator that works better - with non-normal datasets. - - 'scott' - Less robust estimator that takes into account data variability - and data size. - - 'stone' - Estimator based on leave-one-out cross-validation estimate of - the integrated squared error. Can be regarded as a generalization - of Scott's rule. - - 'rice' - Estimator does not take variability into account, only data - size. Commonly overestimates number of bins required. - - 'sturges' - R's default method, only accounts for data size. Only - optimal for gaussian data and underestimates number of bins - for large non-gaussian datasets. - - 'sqrt' - Square root (of data size) estimator, used by Excel and - other programs for its speed and simplicity. - - range : (float, float), optional - The lower and upper range of the bins. If not provided, range - is simply ``(a.min(), a.max())``. Values outside the range are - ignored. The first element of the range must be less than or - equal to the second. `range` affects the automatic bin - computation as well. While bin width is computed to be optimal - based on the actual data within `range`, the bin count will fill - the entire range including portions containing no data. - - weights : array_like, optional - An array of weights, of the same shape as `a`. Each value in - `a` only contributes its associated weight towards the bin count - (instead of 1). This is currently not used by any of the bin estimators, - but may be in the future. - - Returns - ------- - bin_edges : array of dtype float - The edges to pass into `histogram` - - See Also - -------- - histogram - - Notes - ----- - The methods to estimate the optimal number of bins are well founded - in literature, and are inspired by the choices R provides for - histogram visualisation. Note that having the number of bins - proportional to :math:`n^{1/3}` is asymptotically optimal, which is - why it appears in most estimators. These are simply plug-in methods - that give good starting points for number of bins. In the equations - below, :math:`h` is the binwidth and :math:`n_h` is the number of - bins. All estimators that compute bin counts are recast to bin width - using the `ptp` of the data. The final bin count is obtained from - ``np.round(np.ceil(range / h))``. The final bin width is often less - than what is returned by the estimators below. - - 'auto' (maximum of the 'sturges' and 'fd' estimators) - A compromise to get a good value. For small datasets the Sturges - value will usually be chosen, while larger datasets will usually - default to FD. Avoids the overly conservative behaviour of FD - and Sturges for small and large datasets respectively. - Switchover point is usually :math:`a.size \approx 1000`. - - 'fd' (Freedman Diaconis Estimator) - .. math:: h = 2 \frac{IQR}{n^{1/3}} - - The binwidth is proportional to the interquartile range (IQR) - and inversely proportional to cube root of a.size. 
Can be too - conservative for small datasets, but is quite good for large - datasets. The IQR is very robust to outliers. - - 'scott' - .. math:: h = \sigma \sqrt[3]{\frac{24 \sqrt{\pi}}{n}} - - The binwidth is proportional to the standard deviation of the - data and inversely proportional to cube root of ``x.size``. Can - be too conservative for small datasets, but is quite good for - large datasets. The standard deviation is not very robust to - outliers. Values are very similar to the Freedman-Diaconis - estimator in the absence of outliers. - - 'rice' - .. math:: n_h = 2n^{1/3} - - The number of bins is only proportional to cube root of - ``a.size``. It tends to overestimate the number of bins and it - does not take into account data variability. - - 'sturges' - .. math:: n_h = \log _{2}(n) + 1 - - The number of bins is the base 2 log of ``a.size``. This - estimator assumes normality of data and is too conservative for - larger, non-normal datasets. This is the default method in R's - ``hist`` method. - - 'doane' - .. math:: n_h = 1 + \log_{2}(n) + - \log_{2}\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right) - - g_1 = mean\left[\left(\frac{x - \mu}{\sigma}\right)^3\right] - - \sigma_{g_1} = \sqrt{\frac{6(n - 2)}{(n + 1)(n + 3)}} - - An improved version of Sturges' formula that produces better - estimates for non-normal datasets. This estimator attempts to - account for the skew of the data. - - 'sqrt' - .. math:: n_h = \sqrt n - - The simplest and fastest estimator. Only takes into account the - data size. - - Examples - -------- - >>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5]) - >>> np.histogram_bin_edges(arr, bins='auto', range=(0, 1)) - array([0. , 0.25, 0.5 , 0.75, 1. ]) - >>> np.histogram_bin_edges(arr, bins=2) - array([0. , 2.5, 5. ]) - - For consistency with histogram, an array of pre-computed bins is - passed through unmodified: - - >>> np.histogram_bin_edges(arr, [1, 2]) - array([1, 2]) - - This function allows one set of bins to be computed, and reused across - multiple histograms: - - >>> shared_bins = np.histogram_bin_edges(arr, bins='auto') - >>> shared_bins - array([0., 1., 2., 3., 4., 5.]) - - >>> group_id = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1]) - >>> hist_0, _ = np.histogram(arr[group_id == 0], bins=shared_bins) - >>> hist_1, _ = np.histogram(arr[group_id == 1], bins=shared_bins) - - >>> hist_0; hist_1 - array([1, 1, 0, 1, 0]) - array([2, 0, 1, 1, 2]) - - Which gives more easily comparable results than using separate bins for - each histogram: - - >>> hist_0, bins_0 = np.histogram(arr[group_id == 0], bins='auto') - >>> hist_1, bins_1 = np.histogram(arr[group_id == 1], bins='auto') - >>> hist_0; hist_1 - array([1, 1, 1]) - array([2, 1, 1, 2]) - >>> bins_0; bins_1 - array([0., 1., 2., 3.]) - array([0. , 1.25, 2.5 , 3.75, 5. ]) - - """ - a, weights = _ravel_and_check_weights(a, weights) - bin_edges, _ = _get_bin_edges(a, bins, range, weights) - return bin_edges - - -def _histogram_dispatcher( - a, bins=None, range=None, density=None, weights=None): - return (a, bins, weights) - - -@array_function_dispatch(_histogram_dispatcher) -def histogram(a, bins=10, range=None, density=None, weights=None): - r""" - Compute the histogram of a dataset. - - Parameters - ---------- - a : array_like - Input data. The histogram is computed over the flattened array. - bins : int or sequence of scalars or str, optional - If `bins` is an int, it defines the number of equal-width - bins in the given range (10, by default). 
If `bins` is a - sequence, it defines a monotonically increasing array of bin edges, - including the rightmost edge, allowing for non-uniform bin widths. - - .. versionadded:: 1.11.0 - - If `bins` is a string, it defines the method used to calculate the - optimal bin width, as defined by `histogram_bin_edges`. - - range : (float, float), optional - The lower and upper range of the bins. If not provided, range - is simply ``(a.min(), a.max())``. Values outside the range are - ignored. The first element of the range must be less than or - equal to the second. `range` affects the automatic bin - computation as well. While bin width is computed to be optimal - based on the actual data within `range`, the bin count will fill - the entire range including portions containing no data. - weights : array_like, optional - An array of weights, of the same shape as `a`. Each value in - `a` only contributes its associated weight towards the bin count - (instead of 1). If `density` is True, the weights are - normalized, so that the integral of the density over the range - remains 1. - density : bool, optional - If ``False``, the result will contain the number of samples in - each bin. If ``True``, the result is the value of the - probability *density* function at the bin, normalized such that - the *integral* over the range is 1. Note that the sum of the - histogram values will not be equal to 1 unless bins of unity - width are chosen; it is not a probability *mass* function. - - Returns - ------- - hist : array - The values of the histogram. See `density` and `weights` for a - description of the possible semantics. - bin_edges : array of dtype float - Return the bin edges ``(length(hist)+1)``. - - - See Also - -------- - histogramdd, bincount, searchsorted, digitize, histogram_bin_edges - - Notes - ----- - All but the last (righthand-most) bin is half-open. In other words, - if `bins` is:: - - [1, 2, 3, 4] - - then the first bin is ``[1, 2)`` (including 1, but excluding 2) and - the second ``[2, 3)``. The last bin, however, is ``[3, 4]``, which - *includes* 4. - - - Examples - -------- - >>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3]) - (array([0, 2, 1]), array([0, 1, 2, 3])) - >>> np.histogram(np.arange(4), bins=np.arange(5), density=True) - (array([0.25, 0.25, 0.25, 0.25]), array([0, 1, 2, 3, 4])) - >>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3]) - (array([1, 4, 1]), array([0, 1, 2, 3])) - - >>> a = np.arange(5) - >>> hist, bin_edges = np.histogram(a, density=True) - >>> hist - array([0.5, 0. , 0.5, 0. , 0. , 0.5, 0. , 0.5, 0. , 0.5]) - >>> hist.sum() - 2.4999999999999996 - >>> np.sum(hist * np.diff(bin_edges)) - 1.0 - - .. versionadded:: 1.11.0 - - Automated Bin Selection Methods example, using 2 peak random data - with 2000 points: - - >>> import matplotlib.pyplot as plt - >>> rng = np.random.RandomState(10) # deterministic random data - >>> a = np.hstack((rng.normal(size=1000), - ... rng.normal(loc=5, scale=2, size=1000))) - >>> _ = plt.hist(a, bins='auto') # arguments are passed to np.histogram - >>> plt.title("Histogram with 'auto' bins") - Text(0.5, 1.0, "Histogram with 'auto' bins") - >>> plt.show() - - """ - a, weights = _ravel_and_check_weights(a, weights) - - bin_edges, uniform_bins = _get_bin_edges(a, bins, range, weights) - - # Histogram is an integer or a float array depending on the weights. 
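-    # (np.intp counts when unweighted; with weights, the histogram adopts the weights' dtype so float and complex weights accumulate at their own precision)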
- if weights is None: - ntype = np.dtype(np.intp) - else: - ntype = weights.dtype - - # We set a block size, as this allows us to iterate over chunks when - # computing histograms, to minimize memory usage. - BLOCK = 65536 - - # The fast path uses bincount, but that only works for certain types - # of weight - simple_weights = ( - weights is None or - np.can_cast(weights.dtype, np.double) or - np.can_cast(weights.dtype, complex) - ) - - if uniform_bins is not None and simple_weights: - # Fast algorithm for equal bins - # We now convert values of a to bin indices, under the assumption of - # equal bin widths (which is valid here). - first_edge, last_edge, n_equal_bins = uniform_bins - - # Initialize empty histogram - n = np.zeros(n_equal_bins, ntype) - - # Pre-compute histogram scaling factor - norm_numerator = n_equal_bins - norm_denom = _unsigned_subtract(last_edge, first_edge) - - # We iterate over blocks here for two reasons: the first is that for - # large arrays, it is actually faster (for example for a 10^8 array it - # is 2x as fast) and it results in a memory footprint 3x lower in the - # limit of large arrays. - for i in _range(0, len(a), BLOCK): - tmp_a = a[i:i+BLOCK] - if weights is None: - tmp_w = None - else: - tmp_w = weights[i:i + BLOCK] - - # Only include values in the right range - keep = (tmp_a >= first_edge) - keep &= (tmp_a <= last_edge) - if not np.logical_and.reduce(keep): - tmp_a = tmp_a[keep] - if tmp_w is not None: - tmp_w = tmp_w[keep] - - # This cast ensures no type promotions occur below, which gh-10322 - # make unpredictable. Getting it wrong leads to precision errors - # like gh-8123. - tmp_a = tmp_a.astype(bin_edges.dtype, copy=False) - - # Compute the bin indices, and for values that lie exactly on - # last_edge we need to subtract one - f_indices = ((_unsigned_subtract(tmp_a, first_edge) / norm_denom) - * norm_numerator) - indices = f_indices.astype(np.intp) - indices[indices == n_equal_bins] -= 1 - - # The index computation is not guaranteed to give exactly - # consistent results within ~1 ULP of the bin edges. - decrement = tmp_a < bin_edges[indices] - indices[decrement] -= 1 - # The last bin includes the right edge. The other bins do not. 
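-            # Mirror of the decrement above: values whose computed index landed one bin too low are bumped up, except when they already sit in the final bin, whose right edge is inclusive.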
- increment = ((tmp_a >= bin_edges[indices + 1]) - & (indices != n_equal_bins - 1)) - indices[increment] += 1 - - # We now compute the histogram using bincount - if ntype.kind == 'c': - n.real += np.bincount(indices, weights=tmp_w.real, - minlength=n_equal_bins) - n.imag += np.bincount(indices, weights=tmp_w.imag, - minlength=n_equal_bins) - else: - n += np.bincount(indices, weights=tmp_w, - minlength=n_equal_bins).astype(ntype) - else: - # Compute via cumulative histogram - cum_n = np.zeros(bin_edges.shape, ntype) - if weights is None: - for i in _range(0, len(a), BLOCK): - sa = np.sort(a[i:i+BLOCK]) - cum_n += _search_sorted_inclusive(sa, bin_edges) - else: - zero = np.zeros(1, dtype=ntype) - for i in _range(0, len(a), BLOCK): - tmp_a = a[i:i+BLOCK] - tmp_w = weights[i:i+BLOCK] - sorting_index = np.argsort(tmp_a) - sa = tmp_a[sorting_index] - sw = tmp_w[sorting_index] - cw = np.concatenate((zero, sw.cumsum())) - bin_index = _search_sorted_inclusive(sa, bin_edges) - cum_n += cw[bin_index] - - n = np.diff(cum_n) - - if density: - db = np.array(np.diff(bin_edges), float) - return n/db/n.sum(), bin_edges - - return n, bin_edges - - -def _histogramdd_dispatcher(sample, bins=None, range=None, density=None, - weights=None): - if hasattr(sample, 'shape'): # same condition as used in histogramdd - yield sample - else: - yield from sample - with contextlib.suppress(TypeError): - yield from bins - yield weights - - -@array_function_dispatch(_histogramdd_dispatcher) -def histogramdd(sample, bins=10, range=None, density=None, weights=None): - """ - Compute the multidimensional histogram of some data. - - Parameters - ---------- - sample : (N, D) array, or (N, D) array_like - The data to be histogrammed. - - Note the unusual interpretation of sample when an array_like: - - * When an array, each row is a coordinate in a D-dimensional space - - such as ``histogramdd(np.array([p1, p2, p3]))``. - * When an array_like, each element is the list of values for single - coordinate - such as ``histogramdd((X, Y, Z))``. - - The first form should be preferred. - - bins : sequence or int, optional - The bin specification: - - * A sequence of arrays describing the monotonically increasing bin - edges along each dimension. - * The number of bins for each dimension (nx, ny, ... =bins) - * The number of bins for all dimensions (nx=ny=...=bins). - - range : sequence, optional - A sequence of length D, each an optional (lower, upper) tuple giving - the outer bin edges to be used if the edges are not given explicitly in - `bins`. - An entry of None in the sequence results in the minimum and maximum - values being used for the corresponding dimension. - The default, None, is equivalent to passing a tuple of D None values. - density : bool, optional - If False, the default, returns the number of samples in each bin. - If True, returns the probability *density* function at the bin, - ``bin_count / sample_count / bin_volume``. - weights : (N,) array_like, optional - An array of values `w_i` weighing each sample `(x_i, y_i, z_i, ...)`. - Weights are normalized to 1 if density is True. If density is False, - the values of the returned histogram are equal to the sum of the - weights belonging to the samples falling into each bin. - - Returns - ------- - H : ndarray - The multidimensional histogram of sample x. See density and weights - for the different possible semantics. - edges : list - A list of D arrays describing the bin edges for each dimension. 
- - See Also - -------- - histogram: 1-D histogram - histogram2d: 2-D histogram - - Examples - -------- - >>> r = np.random.randn(100,3) - >>> H, edges = np.histogramdd(r, bins = (5, 8, 4)) - >>> H.shape, edges[0].size, edges[1].size, edges[2].size - ((5, 8, 4), 6, 9, 5) - - """ - - try: - # Sample is an ND-array. - N, D = sample.shape - except (AttributeError, ValueError): - # Sample is a sequence of 1D arrays. - sample = np.atleast_2d(sample).T - N, D = sample.shape - - nbin = np.empty(D, np.intp) - edges = D*[None] - dedges = D*[None] - if weights is not None: - weights = np.asarray(weights) - - try: - M = len(bins) - if M != D: - raise ValueError( - 'The dimension of bins must be equal to the dimension of the ' - 'sample x.') - except TypeError: - # bins is an integer - bins = D*[bins] - - # normalize the range argument - if range is None: - range = (None,) * D - elif len(range) != D: - raise ValueError('range argument must have one entry per dimension') - - # Create edge arrays - for i in _range(D): - if np.ndim(bins[i]) == 0: - if bins[i] < 1: - raise ValueError( - '`bins[{}]` must be positive, when an integer'.format(i)) - smin, smax = _get_outer_edges(sample[:,i], range[i]) - try: - n = operator.index(bins[i]) - - except TypeError as e: - raise TypeError( - "`bins[{}]` must be an integer, when a scalar".format(i) - ) from e - - edges[i] = np.linspace(smin, smax, n + 1) - elif np.ndim(bins[i]) == 1: - edges[i] = np.asarray(bins[i]) - if np.any(edges[i][:-1] > edges[i][1:]): - raise ValueError( - '`bins[{}]` must be monotonically increasing, when an array' - .format(i)) - else: - raise ValueError( - '`bins[{}]` must be a scalar or 1d array'.format(i)) - - nbin[i] = len(edges[i]) + 1 # includes an outlier on each end - dedges[i] = np.diff(edges[i]) - - # Compute the bin number each sample falls into. - Ncount = tuple( - # avoid np.digitize to work around gh-11022 - np.searchsorted(edges[i], sample[:, i], side='right') - for i in _range(D) - ) - - # Using digitize, values that fall on an edge are put in the right bin. - # For the rightmost bin, we want values equal to the right edge to be - # counted in the last bin, and not as an outlier. - for i in _range(D): - # Find which points are on the rightmost edge. - on_edge = (sample[:, i] == edges[i][-1]) - # Shift these points one bin to the left. - Ncount[i][on_edge] -= 1 - - # Compute the sample indices in the flattened histogram matrix. - # This raises an error if the array is too large. - xy = np.ravel_multi_index(Ncount, nbin) - - # Compute the number of repetitions in xy and assign it to the - # flattened histmat. - hist = np.bincount(xy, weights, minlength=nbin.prod()) - - # Shape into a proper matrix - hist = hist.reshape(nbin) - - # This preserves the (bad) behavior observed in gh-7845, for now. - hist = hist.astype(float, casting='safe') - - # Remove outliers (indices 0 and -1 for each dimension). 
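-    # nbin reserved two extra bins per axis (one outlier bin on each end) for out-of-range samples, so slicing 1:-1 along every axis keeps only the in-range counts.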
- core = D*(slice(1, -1),) - hist = hist[core] - - if density: - # calculate the probability density function - s = hist.sum() - for i in _range(D): - shape = np.ones(D, int) - shape[i] = nbin[i] - 2 - hist = hist / dedges[i].reshape(shape) - hist /= s - - if (hist.shape != nbin - 2).any(): - raise RuntimeError( - "Internal Shape Error") - return hist, edges diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_add_prefix_suffix.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_add_prefix_suffix.py deleted file mode 100644 index 92d7cdd7990e168721610b7f52f653a69ac1e078..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_add_prefix_suffix.py +++ /dev/null @@ -1,49 +0,0 @@ -import pytest - -from pandas import Index -import pandas._testing as tm - - -def test_add_prefix_suffix(float_frame): - with_prefix = float_frame.add_prefix("foo#") - expected = Index([f"foo#{c}" for c in float_frame.columns]) - tm.assert_index_equal(with_prefix.columns, expected) - - with_suffix = float_frame.add_suffix("#foo") - expected = Index([f"{c}#foo" for c in float_frame.columns]) - tm.assert_index_equal(with_suffix.columns, expected) - - with_pct_prefix = float_frame.add_prefix("%") - expected = Index([f"%{c}" for c in float_frame.columns]) - tm.assert_index_equal(with_pct_prefix.columns, expected) - - with_pct_suffix = float_frame.add_suffix("%") - expected = Index([f"{c}%" for c in float_frame.columns]) - tm.assert_index_equal(with_pct_suffix.columns, expected) - - -def test_add_prefix_suffix_axis(float_frame): - # GH 47819 - with_prefix = float_frame.add_prefix("foo#", axis=0) - expected = Index([f"foo#{c}" for c in float_frame.index]) - tm.assert_index_equal(with_prefix.index, expected) - - with_prefix = float_frame.add_prefix("foo#", axis=1) - expected = Index([f"foo#{c}" for c in float_frame.columns]) - tm.assert_index_equal(with_prefix.columns, expected) - - with_pct_suffix = float_frame.add_suffix("#foo", axis=0) - expected = Index([f"{c}#foo" for c in float_frame.index]) - tm.assert_index_equal(with_pct_suffix.index, expected) - - with_pct_suffix = float_frame.add_suffix("#foo", axis=1) - expected = Index([f"{c}#foo" for c in float_frame.columns]) - tm.assert_index_equal(with_pct_suffix.columns, expected) - - -def test_add_prefix_suffix_invalid_axis(float_frame): - with pytest.raises(ValueError, match="No axis named 2 for object type DataFrame"): - float_frame.add_prefix("foo#", axis=2) - - with pytest.raises(ValueError, match="No axis named 2 for object type DataFrame"): - float_frame.add_suffix("foo#", axis=2) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/commands/check.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/commands/check.py deleted file mode 100644 index 3864220b2b4a2fd3803bdff0ab9e4c3941c1f313..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/commands/check.py +++ /dev/null @@ -1,53 +0,0 @@ -import logging -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.operations.check import ( - check_package_set, - create_package_set_from_installed, -) -from pip._internal.utils.misc import write_output - -logger = 
logging.getLogger(__name__) - - -class CheckCommand(Command): - """Verify installed packages have compatible dependencies.""" - - usage = """ - %prog [options]""" - - def run(self, options: Values, args: List[str]) -> int: - - package_set, parsing_probs = create_package_set_from_installed() - missing, conflicting = check_package_set(package_set) - - for project_name in missing: - version = package_set[project_name].version - for dependency in missing[project_name]: - write_output( - "%s %s requires %s, which is not installed.", - project_name, - version, - dependency[0], - ) - - for project_name in conflicting: - version = package_set[project_name].version - for dep_name, dep_version, req in conflicting[project_name]: - write_output( - "%s %s has requirement %s, but you have %s %s.", - project_name, - version, - req, - dep_name, - dep_version, - ) - - if missing or conflicting or parsing_probs: - return ERROR - else: - write_output("No broken requirements found.") - return SUCCESS diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/webencodings/labels.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/webencodings/labels.py deleted file mode 100644 index 29cbf91ef79b89971e51db9ddfc3720d8b4db82a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/webencodings/labels.py +++ /dev/null @@ -1,231 +0,0 @@ -""" - - webencodings.labels - ~~~~~~~~~~~~~~~~~~~ - - Map encoding labels to their name. - - :copyright: Copyright 2012 by Simon Sapin - :license: BSD, see LICENSE for details. - -""" - -# XXX Do not edit! -# This file is automatically generated by mklabels.py - -LABELS = { - 'unicode-1-1-utf-8': 'utf-8', - 'utf-8': 'utf-8', - 'utf8': 'utf-8', - '866': 'ibm866', - 'cp866': 'ibm866', - 'csibm866': 'ibm866', - 'ibm866': 'ibm866', - 'csisolatin2': 'iso-8859-2', - 'iso-8859-2': 'iso-8859-2', - 'iso-ir-101': 'iso-8859-2', - 'iso8859-2': 'iso-8859-2', - 'iso88592': 'iso-8859-2', - 'iso_8859-2': 'iso-8859-2', - 'iso_8859-2:1987': 'iso-8859-2', - 'l2': 'iso-8859-2', - 'latin2': 'iso-8859-2', - 'csisolatin3': 'iso-8859-3', - 'iso-8859-3': 'iso-8859-3', - 'iso-ir-109': 'iso-8859-3', - 'iso8859-3': 'iso-8859-3', - 'iso88593': 'iso-8859-3', - 'iso_8859-3': 'iso-8859-3', - 'iso_8859-3:1988': 'iso-8859-3', - 'l3': 'iso-8859-3', - 'latin3': 'iso-8859-3', - 'csisolatin4': 'iso-8859-4', - 'iso-8859-4': 'iso-8859-4', - 'iso-ir-110': 'iso-8859-4', - 'iso8859-4': 'iso-8859-4', - 'iso88594': 'iso-8859-4', - 'iso_8859-4': 'iso-8859-4', - 'iso_8859-4:1988': 'iso-8859-4', - 'l4': 'iso-8859-4', - 'latin4': 'iso-8859-4', - 'csisolatincyrillic': 'iso-8859-5', - 'cyrillic': 'iso-8859-5', - 'iso-8859-5': 'iso-8859-5', - 'iso-ir-144': 'iso-8859-5', - 'iso8859-5': 'iso-8859-5', - 'iso88595': 'iso-8859-5', - 'iso_8859-5': 'iso-8859-5', - 'iso_8859-5:1988': 'iso-8859-5', - 'arabic': 'iso-8859-6', - 'asmo-708': 'iso-8859-6', - 'csiso88596e': 'iso-8859-6', - 'csiso88596i': 'iso-8859-6', - 'csisolatinarabic': 'iso-8859-6', - 'ecma-114': 'iso-8859-6', - 'iso-8859-6': 'iso-8859-6', - 'iso-8859-6-e': 'iso-8859-6', - 'iso-8859-6-i': 'iso-8859-6', - 'iso-ir-127': 'iso-8859-6', - 'iso8859-6': 'iso-8859-6', - 'iso88596': 'iso-8859-6', - 'iso_8859-6': 'iso-8859-6', - 'iso_8859-6:1987': 'iso-8859-6', - 'csisolatingreek': 'iso-8859-7', - 'ecma-118': 'iso-8859-7', - 'elot_928': 'iso-8859-7', - 'greek': 'iso-8859-7', - 'greek8': 'iso-8859-7', - 'iso-8859-7': 'iso-8859-7', - 'iso-ir-126': 'iso-8859-7', - 
'iso8859-7': 'iso-8859-7', - 'iso88597': 'iso-8859-7', - 'iso_8859-7': 'iso-8859-7', - 'iso_8859-7:1987': 'iso-8859-7', - 'sun_eu_greek': 'iso-8859-7', - 'csiso88598e': 'iso-8859-8', - 'csisolatinhebrew': 'iso-8859-8', - 'hebrew': 'iso-8859-8', - 'iso-8859-8': 'iso-8859-8', - 'iso-8859-8-e': 'iso-8859-8', - 'iso-ir-138': 'iso-8859-8', - 'iso8859-8': 'iso-8859-8', - 'iso88598': 'iso-8859-8', - 'iso_8859-8': 'iso-8859-8', - 'iso_8859-8:1988': 'iso-8859-8', - 'visual': 'iso-8859-8', - 'csiso88598i': 'iso-8859-8-i', - 'iso-8859-8-i': 'iso-8859-8-i', - 'logical': 'iso-8859-8-i', - 'csisolatin6': 'iso-8859-10', - 'iso-8859-10': 'iso-8859-10', - 'iso-ir-157': 'iso-8859-10', - 'iso8859-10': 'iso-8859-10', - 'iso885910': 'iso-8859-10', - 'l6': 'iso-8859-10', - 'latin6': 'iso-8859-10', - 'iso-8859-13': 'iso-8859-13', - 'iso8859-13': 'iso-8859-13', - 'iso885913': 'iso-8859-13', - 'iso-8859-14': 'iso-8859-14', - 'iso8859-14': 'iso-8859-14', - 'iso885914': 'iso-8859-14', - 'csisolatin9': 'iso-8859-15', - 'iso-8859-15': 'iso-8859-15', - 'iso8859-15': 'iso-8859-15', - 'iso885915': 'iso-8859-15', - 'iso_8859-15': 'iso-8859-15', - 'l9': 'iso-8859-15', - 'iso-8859-16': 'iso-8859-16', - 'cskoi8r': 'koi8-r', - 'koi': 'koi8-r', - 'koi8': 'koi8-r', - 'koi8-r': 'koi8-r', - 'koi8_r': 'koi8-r', - 'koi8-u': 'koi8-u', - 'csmacintosh': 'macintosh', - 'mac': 'macintosh', - 'macintosh': 'macintosh', - 'x-mac-roman': 'macintosh', - 'dos-874': 'windows-874', - 'iso-8859-11': 'windows-874', - 'iso8859-11': 'windows-874', - 'iso885911': 'windows-874', - 'tis-620': 'windows-874', - 'windows-874': 'windows-874', - 'cp1250': 'windows-1250', - 'windows-1250': 'windows-1250', - 'x-cp1250': 'windows-1250', - 'cp1251': 'windows-1251', - 'windows-1251': 'windows-1251', - 'x-cp1251': 'windows-1251', - 'ansi_x3.4-1968': 'windows-1252', - 'ascii': 'windows-1252', - 'cp1252': 'windows-1252', - 'cp819': 'windows-1252', - 'csisolatin1': 'windows-1252', - 'ibm819': 'windows-1252', - 'iso-8859-1': 'windows-1252', - 'iso-ir-100': 'windows-1252', - 'iso8859-1': 'windows-1252', - 'iso88591': 'windows-1252', - 'iso_8859-1': 'windows-1252', - 'iso_8859-1:1987': 'windows-1252', - 'l1': 'windows-1252', - 'latin1': 'windows-1252', - 'us-ascii': 'windows-1252', - 'windows-1252': 'windows-1252', - 'x-cp1252': 'windows-1252', - 'cp1253': 'windows-1253', - 'windows-1253': 'windows-1253', - 'x-cp1253': 'windows-1253', - 'cp1254': 'windows-1254', - 'csisolatin5': 'windows-1254', - 'iso-8859-9': 'windows-1254', - 'iso-ir-148': 'windows-1254', - 'iso8859-9': 'windows-1254', - 'iso88599': 'windows-1254', - 'iso_8859-9': 'windows-1254', - 'iso_8859-9:1989': 'windows-1254', - 'l5': 'windows-1254', - 'latin5': 'windows-1254', - 'windows-1254': 'windows-1254', - 'x-cp1254': 'windows-1254', - 'cp1255': 'windows-1255', - 'windows-1255': 'windows-1255', - 'x-cp1255': 'windows-1255', - 'cp1256': 'windows-1256', - 'windows-1256': 'windows-1256', - 'x-cp1256': 'windows-1256', - 'cp1257': 'windows-1257', - 'windows-1257': 'windows-1257', - 'x-cp1257': 'windows-1257', - 'cp1258': 'windows-1258', - 'windows-1258': 'windows-1258', - 'x-cp1258': 'windows-1258', - 'x-mac-cyrillic': 'x-mac-cyrillic', - 'x-mac-ukrainian': 'x-mac-cyrillic', - 'chinese': 'gbk', - 'csgb2312': 'gbk', - 'csiso58gb231280': 'gbk', - 'gb2312': 'gbk', - 'gb_2312': 'gbk', - 'gb_2312-80': 'gbk', - 'gbk': 'gbk', - 'iso-ir-58': 'gbk', - 'x-gbk': 'gbk', - 'gb18030': 'gb18030', - 'hz-gb-2312': 'hz-gb-2312', - 'big5': 'big5', - 'big5-hkscs': 'big5', - 'cn-big5': 'big5', - 'csbig5': 'big5', - 'x-x-big5': 
'big5', - 'cseucpkdfmtjapanese': 'euc-jp', - 'euc-jp': 'euc-jp', - 'x-euc-jp': 'euc-jp', - 'csiso2022jp': 'iso-2022-jp', - 'iso-2022-jp': 'iso-2022-jp', - 'csshiftjis': 'shift_jis', - 'ms_kanji': 'shift_jis', - 'shift-jis': 'shift_jis', - 'shift_jis': 'shift_jis', - 'sjis': 'shift_jis', - 'windows-31j': 'shift_jis', - 'x-sjis': 'shift_jis', - 'cseuckr': 'euc-kr', - 'csksc56011987': 'euc-kr', - 'euc-kr': 'euc-kr', - 'iso-ir-149': 'euc-kr', - 'korean': 'euc-kr', - 'ks_c_5601-1987': 'euc-kr', - 'ks_c_5601-1989': 'euc-kr', - 'ksc5601': 'euc-kr', - 'ksc_5601': 'euc-kr', - 'windows-949': 'euc-kr', - 'csiso2022kr': 'iso-2022-kr', - 'iso-2022-kr': 'iso-2022-kr', - 'utf-16be': 'utf-16be', - 'utf-16': 'utf-16le', - 'utf-16le': 'utf-16le', - 'x-user-defined': 'x-user-defined', -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/utils.py deleted file mode 100644 index 44816e2420cfddf316fc3089ee3d647de120f8d8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/utils.py +++ /dev/null @@ -1,187 +0,0 @@ -import inspect -from copy import copy -from typing import Any, Callable, Dict, List, Tuple, Type, cast, get_type_hints - -from typing_extensions import Annotated - -from ._typing import get_args, get_origin -from .models import ArgumentInfo, OptionInfo, ParameterInfo, ParamMeta - - -def _param_type_to_user_string(param_type: Type[ParameterInfo]) -> str: - # Render a `ParameterInfo` subclass for use in error messages. - # User code doesn't call `*Info` directly, so errors should present the classes how - # they were (probably) defined in the user code. - if param_type is OptionInfo: - return "`Option`" - elif param_type is ArgumentInfo: - return "`Argument`" - # This line shouldn't be reachable during normal use. - return f"`{param_type.__name__}`" # pragma: no cover - - -class AnnotatedParamWithDefaultValueError(Exception): - argument_name: str - param_type: Type[ParameterInfo] - - def __init__(self, argument_name: str, param_type: Type[ParameterInfo]): - self.argument_name = argument_name - self.param_type = param_type - - def __str__(self) -> str: - param_type_str = _param_type_to_user_string(self.param_type) - return ( - f"{param_type_str} default value cannot be set in `Annotated`" - f" for {self.argument_name!r}. Set the default value with `=` instead." 
- ) - - -class MixedAnnotatedAndDefaultStyleError(Exception): - argument_name: str - annotated_param_type: Type[ParameterInfo] - default_param_type: Type[ParameterInfo] - - def __init__( - self, - argument_name: str, - annotated_param_type: Type[ParameterInfo], - default_param_type: Type[ParameterInfo], - ): - self.argument_name = argument_name - self.annotated_param_type = annotated_param_type - self.default_param_type = default_param_type - - def __str__(self) -> str: - annotated_param_type_str = _param_type_to_user_string(self.annotated_param_type) - default_param_type_str = _param_type_to_user_string(self.default_param_type) - msg = f"Cannot specify {annotated_param_type_str} in `Annotated` and" - if self.annotated_param_type is self.default_param_type: - msg += " default value" - else: - msg += f" {default_param_type_str} as a default value" - msg += f" together for {self.argument_name!r}" - return msg - - -class MultipleTyperAnnotationsError(Exception): - argument_name: str - - def __init__(self, argument_name: str): - self.argument_name = argument_name - - def __str__(self) -> str: - return ( - "Cannot specify multiple `Annotated` Typer arguments" - f" for {self.argument_name!r}" - ) - - -class DefaultFactoryAndDefaultValueError(Exception): - argument_name: str - param_type: Type[ParameterInfo] - - def __init__(self, argument_name: str, param_type: Type[ParameterInfo]): - self.argument_name = argument_name - self.param_type = param_type - - def __str__(self) -> str: - param_type_str = _param_type_to_user_string(self.param_type) - return ( - "Cannot specify `default_factory` and a default value together" - f" for {param_type_str}" - ) - - -def _split_annotation_from_typer_annotations( - base_annotation: Type[Any], -) -> Tuple[Type[Any], List[ParameterInfo]]: - if get_origin(base_annotation) is not Annotated: # type: ignore - return base_annotation, [] - base_annotation, *maybe_typer_annotations = get_args(base_annotation) - return base_annotation, [ - annotation - for annotation in maybe_typer_annotations - if isinstance(annotation, ParameterInfo) - ] - - -def get_params_from_function(func: Callable[..., Any]) -> Dict[str, ParamMeta]: - signature = inspect.signature(func) - type_hints = get_type_hints(func) - params = {} - for param in signature.parameters.values(): - annotation, typer_annotations = _split_annotation_from_typer_annotations( - param.annotation, - ) - if len(typer_annotations) > 1: - raise MultipleTyperAnnotationsError(param.name) - - default = param.default - if typer_annotations: - # It's something like `my_param: Annotated[str, Argument()]` - [parameter_info] = typer_annotations - - # Forbid `my_param: Annotated[str, Argument()] = Argument("...")` - if isinstance(param.default, ParameterInfo): - raise MixedAnnotatedAndDefaultStyleError( - argument_name=param.name, - annotated_param_type=type(parameter_info), - default_param_type=type(param.default), - ) - - parameter_info = copy(parameter_info) - - # When used as a default, `Option` takes a default value and option names - # as positional arguments: - # `Option(some_value, "--some-argument", "-s")` - # When used in `Annotated` (ie, what this is handling), `Option` just takes - # option names as positional arguments: - # `Option("--some-argument", "-s")` - # In this case, the `default` attribute of `parameter_info` is actually - # meant to be the first item of `param_decls`. - if ( - isinstance(parameter_info, OptionInfo) - and parameter_info.default is not ... 
- ): - parameter_info.param_decls = ( - cast(str, parameter_info.default), - *(parameter_info.param_decls or ()), - ) - parameter_info.default = ... - - # Forbid `my_param: Annotated[str, Argument('some-default')]` - if parameter_info.default is not ...: - raise AnnotatedParamWithDefaultValueError( - param_type=type(parameter_info), - argument_name=param.name, - ) - if param.default is not param.empty: - # Put the parameter's default (set by `=`) into `parameter_info`, where - # typer can find it. - parameter_info.default = param.default - - default = parameter_info - elif param.name in type_hints: - # Resolve forward references. - annotation = type_hints[param.name] - - if isinstance(default, ParameterInfo): - parameter_info = copy(default) - # Click supports `default` as either - # - an actual value; or - # - a factory function (returning a default value.) - # The two are not interchangeable for static typing, so typer allows - # specifying `default_factory`. Move the `default_factory` into `default` - # so click can find it. - if parameter_info.default is ... and parameter_info.default_factory: - parameter_info.default = parameter_info.default_factory - elif parameter_info.default_factory: - raise DefaultFactoryAndDefaultValueError( - argument_name=param.name, param_type=type(parameter_info) - ) - default = parameter_info - - params[param.name] = ParamMeta( - name=param.name, default=default, annotation=annotation - ) - return params diff --git a/spaces/puqi/climsim/app.py b/spaces/puqi/climsim/app.py deleted file mode 100644 index 711bc9c35b0f7a7ccd015f75f25d9685b4b82236..0000000000000000000000000000000000000000 --- a/spaces/puqi/climsim/app.py +++ /dev/null @@ -1,408 +0,0 @@ -import streamlit as st -from data_utils import * -import xarray as xr -import numpy as np -import pandas as pd -import matplotlib.pyplot as plt -import pickle -import glob, os -import re -import tensorflow as tf -import netCDF4 -import copy -import string -import h5py -from tqdm import tqdm - - -st.title('A _Quickstart Notebook_ for :blue[ClimSim]:') -st.link_button("Go to ClimSim Github Repository", "https://github.com/leap-stc/ClimSim/tree/main",use_container_width=True) -st.header('**Step 1:** Import data_utils') -st.code('''from data_utils import *''',language='python') - - - -st.header('**Step 2:** Instantiate class') -st.link_button("Go to original grid_info", "https://github.com/leap-stc/ClimSim/tree/main/grid_info",use_container_width=True) -st.link_button("Go to original input_mean input_max input_min output_scale", "https://github.com/leap-stc/ClimSim/tree/main/preprocessing/normalizations",use_container_width=True) -st.code('''#Change the path to your own -grid_info = xr.open_dataset('ClimSim_low-res_grid-info.nc') -input_mean = xr.open_dataset('input_mean.nc') -input_max = xr.open_dataset('input_max.nc') -input_min = xr.open_dataset('input_min.nc') -output_scale = xr.open_dataset('output_scale.nc') - -data = data_utils(grid_info = grid_info, - input_mean = input_mean, - input_max = input_max, - input_min = input_min, - output_scale = output_scale) - -# set variables to V1 subset -data.set_to_v1_vars()''',language='python') - -grid_info = xr.open_dataset('ClimSim_low-res_grid-info.nc') -input_mean = xr.open_dataset('input_mean.nc') -input_max = xr.open_dataset('input_max.nc') -input_min = xr.open_dataset('input_min.nc') -output_scale = xr.open_dataset('output_scale.nc') - -data = data_utils(grid_info = grid_info, - input_mean = input_mean, - input_max = input_max, - input_min = input_min, - 
output_scale = output_scale) - -data.set_to_v1_vars() - - - -st.header('**Step 3:** Load training and validation data') -st.link_button("Go to Original Dataset", "https://huggingface.co/datasets/LEAP/subsampled_low_res/tree/main",use_container_width=True) -st.code('''data.input_train = data.load_npy_file('train_input_small.npy') -data.target_train = data.load_npy_file('train_target_small.npy') -data.input_val = data.load_npy_file('val_input_small.npy') -data.target_val = data.load_npy_file('val_target_small.npy')''',language='python') - -data.input_train = data.load_npy_file('train_input_small.npy') -data.target_train = data.load_npy_file('train_target_small.npy') -data.input_val = data.load_npy_file('val_input_small.npy') -data.target_val = data.load_npy_file('val_target_small.npy') - - - -st.header('**Step 4:** Train models') -st.subheader('Train constant prediction model') -st.latex(r'''\hat{y}=E[y_{train}]''') -st.code('''const_model = data.target_train.mean(axis = 0)''',language='python') - -const_model = data.target_train.mean(axis = 0) - - - -st.subheader('Train multiple linear regression model') -st.latex(r'''\beta=(X^{T}_{train} X_{train})^{-1} X^{T}_{train} y_{train} \\ -\hat{y}=X^{T}_{input} \beta \\ -\text{where } X_{train} \text{ and } X_{input} \text{ correspond to the training data and the input data you would like to inference on, respectively.} \\ -X_{train} \text{ and } X_{input} \text{ both have a column of ones concatenated to the feature space for the bias.}''') -st.text('adding bias unit') -st.code('''X = data.input_train -bias_vector = np.ones((X.shape[0], 1)) -X = np.concatenate((X, bias_vector), axis=1)''',language='python') - -X = data.input_train -bias_vector = np.ones((X.shape[0], 1)) -X = np.concatenate((X, bias_vector), axis=1) - - - -st.text('create model') -st.code('''mlr_weights = np.linalg.inv(X.transpose()@X)@X.transpose()@data.target_train''',language='python') - -mlr_weights = np.linalg.inv(X.transpose()@X)@X.transpose()@data.target_train - - - -st.subheader('Train your models here') -st.code('''### -# train your model here -###''',language='python') - - -st.header('**Step 5:** Evaluate on validation data') -st.subheader('Set pressure grid') -st.code('''data.set_pressure_grid(data_split = 'val')''',language='python') - -data.set_pressure_grid(data_split = 'val') - - - -st.subheader('Load predictions') -st.code('''# Constant Prediction -const_pred_val = np.repeat(const_model[np.newaxis, :], data.target_val.shape[0], axis = 0) -print(const_pred_val.shape) - -# Multiple Linear Regression -X_val = data.input_val -bias_vector_val = np.ones((X_val.shape[0], 1)) -X_val = np.concatenate((X_val, bias_vector_val), axis=1) -mlr_pred_val = X_val@mlr_weights -print(mlr_pred_val.shape) - -# Load your prediction here - -# Load predictions into data_utils object -data.model_names = ['const', 'mlr'] # add names of your models here -preds = [const_pred_val, mlr_pred_val] # add your custom predictions here -data.preds_val = dict(zip(data.model_names, preds))''',language='python') - - -const_pred_val = np.repeat(const_model[np.newaxis, :], data.target_val.shape[0], axis = 0) -print(const_pred_val.shape) - -X_val = data.input_val -bias_vector_val = np.ones((X_val.shape[0], 1)) -X_val = np.concatenate((X_val, bias_vector_val), axis=1) - -mlr_pred_val = X_val@mlr_weights -print(mlr_pred_val.shape) - -data.model_names = ['const', 'mlr'] # add names of your models here -preds = [const_pred_val, mlr_pred_val] # add your custom predictions here -data.preds_val = 
dict(zip(data.model_names, preds)) - - - -st.subheader('Weight predictions and target') -st.text('''1.Undo output scaling -2.Weight vertical levels by dp/g -3.Weight horizontal area of each grid cell by a[x]/mean(a[x]) -4.Convert units to a common energy unit''') -st.code('''data.reweight_target(data_split = 'val') -data.reweight_preds(data_split = 'val')''',language='python') - -data.reweight_target(data_split = 'val') -data.reweight_preds(data_split = 'val') - - - -st.subheader('Set and calculate metrics') -st.code('''data.metrics_names = ['MAE', 'RMSE', 'R2', 'bias'] -data.create_metrics_df(data_split = 'val')''',language='python') - -data.metrics_names = ['MAE', 'RMSE', 'R2', 'bias'] -data.create_metrics_df(data_split = 'val') - - - -st.subheader('Create plots') -st.code('''# set plotting settings -%config InlineBackend.figure_format = 'retina' -letters = string.ascii_lowercase - -# create custom dictionary for plotting -dict_var = data.metrics_var_val -plot_df_byvar = {} -for metric in data.metrics_names: - plot_df_byvar[metric] = pd.DataFrame([dict_var[model][metric] for model in data.model_names], - index=data.model_names) - plot_df_byvar[metric] = plot_df_byvar[metric].rename(columns = data.var_short_names).transpose() - -# plot figure -fig, axes = plt.subplots(nrows = len(data.metrics_names), sharex = True) -for i in range(len(data.metrics_names)): - plot_df_byvar[data.metrics_names[i]].plot.bar( - legend = False, - ax = axes[i]) - if data.metrics_names[i] != 'R2': - axes[i].set_ylabel('$W/m^2$') - else: - axes[i].set_ylim(0,1) - - axes[i].set_title(f'({letters[i]}) {data.metrics_names[i]}') -axes[i].set_xlabel('Output variable') -axes[i].set_xticklabels(plot_df_byvar[data.metrics_names[i]].index, \ - rotation=0, ha='center') - -axes[0].legend(columnspacing = .9, - labelspacing = .3, - handleheight = .07, - handlelength = 1.5, - handletextpad = .2, - borderpad = .2, - ncol = 3, - loc = 'upper right') -fig.set_size_inches(7,8) -fig.tight_layout()''',language='python') - -letters = string.ascii_lowercase - -dict_var = data.metrics_var_val -plot_df_byvar = {} -for metric in data.metrics_names: - plot_df_byvar[metric] = pd.DataFrame([dict_var[model][metric] for model in data.model_names], - index=data.model_names) - plot_df_byvar[metric] = plot_df_byvar[metric].rename(columns = data.var_short_names).transpose() - -fig, axes = plt.subplots(nrows = len(data.metrics_names), sharex = True) -for i in range(len(data.metrics_names)): - plot_df_byvar[data.metrics_names[i]].plot.bar( - legend = False, - ax = axes[i]) - if data.metrics_names[i] != 'R2': - axes[i].set_ylabel('$W/m^2$') - else: - axes[i].set_ylim(0,1) - axes[i].set_title(f'({letters[i]}) {data.metrics_names[i]}') - -axes[i].set_xlabel('Output variable') -axes[i].set_xticklabels(plot_df_byvar[data.metrics_names[i]].index, \ - rotation=0, ha='center') - -axes[0].legend(columnspacing = .9, - labelspacing = .3, - handleheight = .07, - handlelength = 1.5, - handletextpad = .2, - borderpad = .2, - ncol = 3, - loc = 'upper right') -fig.set_size_inches(7,8) -fig.tight_layout() - -st.pyplot(fig) -st.text('If you trained models with different hyperparameters, use the ones that performed the best on validation data for evaluation on scoring data.') - - -st.header('**Step 6:** Evaluate on scoring data') -st.subheader('Do this at the VERY END (when you have finished tuned the hyperparameters for your model and are seeking a final evaluation)') -st.subheader('Load scoring data') -st.code('''data.input_scoring = 
np.load('scoring_input_small.npy') -data.target_scoring = np.load('scoring_target_small.npy') -''',language='python') - -data.input_scoring = np.load('scoring_input_small.npy') -data.target_scoring = np.load('scoring_target_small.npy') - - - -st.subheader('Set pressure grid') -st.code('''data.set_pressure_grid(data_split = 'scoring')''',language='python') - -data.set_pressure_grid(data_split = 'scoring') - - - -st.subheader('Load predictions') -st.code('''# constant prediction -const_pred_scoring = np.repeat(const_model[np.newaxis, :], data.target_scoring.shape[0], axis = 0) -print(const_pred_scoring.shape) - -# multiple linear regression -X_scoring = data.input_scoring -bias_vector_scoring = np.ones((X_scoring.shape[0], 1)) -X_scoring = np.concatenate((X_scoring, bias_vector_scoring), axis=1) -mlr_pred_scoring = X_scoring@mlr_weights -print(mlr_pred_scoring.shape) - -# Your model prediction here - -# Load predictions into object -data.model_names = ['const', 'mlr'] # model name here -preds = [const_pred_scoring, mlr_pred_scoring] # add prediction here -data.preds_scoring = dict(zip(data.model_names, preds))''',language='python') - -const_pred_scoring = np.repeat(const_model[np.newaxis, :], data.target_scoring.shape[0], axis = 0) -print(const_pred_scoring.shape) - -X_scoring = data.input_scoring -bias_vector_scoring = np.ones((X_scoring.shape[0], 1)) -X_scoring = np.concatenate((X_scoring, bias_vector_scoring), axis=1) -mlr_pred_scoring = X_scoring@mlr_weights -print(mlr_pred_scoring.shape) - -data.model_names = ['const', 'mlr'] # model name here -preds = [const_pred_scoring, mlr_pred_scoring] # add prediction here -data.preds_scoring = dict(zip(data.model_names, preds)) - - -st.subheader('Weight predictions and target') -st.text('''1.Undo output scaling -2.Weight vertical levels by dp/g -3.Weight horizontal area of each grid cell by a[x]/mean(a[x]) -4.Convert units to a common energy unit''') -st.code('''# weight predictions and target -data.reweight_target(data_split = 'scoring') -data.reweight_preds(data_split = 'scoring') - -# set and calculate metrics -data.metrics_names = ['MAE', 'RMSE', 'R2', 'bias'] -data.create_metrics_df(data_split = 'scoring')''',language='python') - -# weight predictions and target -data.reweight_target(data_split = 'scoring') -data.reweight_preds(data_split = 'scoring') - -# set and calculate metrics -data.metrics_names = ['MAE', 'RMSE', 'R2', 'bias'] -data.create_metrics_df(data_split = 'scoring') - - - - -st.subheader('Create plots') -st.code('''# set plotting settings -%config InlineBackend.figure_format = 'retina' -letters = string.ascii_lowercase - -# create custom dictionary for plotting -dict_var = data.metrics_var_scoring -plot_df_byvar = {} -for metric in data.metrics_names: - plot_df_byvar[metric] = pd.DataFrame([dict_var[model][metric] for model in data.model_names], - index=data.model_names) - plot_df_byvar[metric] = plot_df_byvar[metric].rename(columns = data.var_short_names).transpose() - -# plot figure -fig, axes = plt.subplots(nrows = len(data.metrics_names), sharex = True) -for i in range(len(data.metrics_names)): - plot_df_byvar[data.metrics_names[i]].plot.bar( - legend = False, - ax = axes[i]) - if data.metrics_names[i] != 'R2': - axes[i].set_ylabel('$W/m^2$') - else: - axes[i].set_ylim(0,1) - - axes[i].set_title(f'({letters[i]}) {data.metrics_names[i]}') -axes[i].set_xlabel('Output variable') -axes[i].set_xticklabels(plot_df_byvar[data.metrics_names[i]].index, \ - rotation=0, ha='center') - -axes[0].legend(columnspacing = .9, - labelspacing 
= .3, - handleheight = .07, - handlelength = 1.5, - handletextpad = .2, - borderpad = .2, - ncol = 3, - loc = 'upper right') -fig.set_size_inches(7,8) -fig.tight_layout()''') - -letters = string.ascii_lowercase - -dict_var = data.metrics_var_scoring -plot_df_byvar = {} -for metric in data.metrics_names: - plot_df_byvar[metric] = pd.DataFrame([dict_var[model][metric] for model in data.model_names], - index=data.model_names) - plot_df_byvar[metric] = plot_df_byvar[metric].rename(columns = data.var_short_names).transpose() - -fig, axes = plt.subplots(nrows = len(data.metrics_names), sharex = True) -for i in range(len(data.metrics_names)): - plot_df_byvar[data.metrics_names[i]].plot.bar( - legend = False, - ax = axes[i]) - if data.metrics_names[i] != 'R2': - axes[i].set_ylabel('$W/m^2$') - else: - axes[i].set_ylim(0,1) - - axes[i].set_title(f'({letters[i]}) {data.metrics_names[i]}') -axes[i].set_xlabel('Output variable') -axes[i].set_xticklabels(plot_df_byvar[data.metrics_names[i]].index, \ - rotation=0, ha='center') - -axes[0].legend(columnspacing = .9, - labelspacing = .3, - handleheight = .07, - handlelength = 1.5, - handletextpad = .2, - borderpad = .2, - ncol = 3, - loc = 'upper right') -fig.set_size_inches(7,8) -fig.tight_layout() - -st.pyplot(fig) diff --git a/spaces/q846392920/vits-uma-genshin-honkai/mel_processing.py b/spaces/q846392920/vits-uma-genshin-honkai/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/q846392920/vits-uma-genshin-honkai/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - 
mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/qdd319/ChuanhuChatGPT/custom.css b/spaces/qdd319/ChuanhuChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/qdd319/ChuanhuChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: #FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: 
block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* 
Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/qinzhu/diy-girlfriend/transforms.py b/spaces/qinzhu/diy-girlfriend/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/diy-girlfriend/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, 
right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * 
theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Chamatkar Full Movie Bollywood Videos Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Chamatkar Full Movie Bollywood Videos Download.md deleted file mode 100644 index 9621c3d2ce6b68767b3a2127f44ae055c23a6416..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Chamatkar Full Movie Bollywood Videos Download.md +++ /dev/null @@ -1,13 +0,0 @@ - -

        Chamatkar: A Bollywood Comedy with a Ghostly Twist

        -

        If you are looking for a fun and entertaining Bollywood movie to watch, you might want to check out Chamatkar, a 1992 comedy starring Shah Rukh Khan, Naseeruddin Shah and Urmila Matondkar. The movie is about a village schoolteacher named Sunder (Shah Rukh Khan) who loses everything and takes refuge in a graveyard, where he meets the ghost of Amar (Naseeruddin Shah), a murdered man who was betrayed by his wife and best friend. Together, they team up to help each other and get revenge on their enemies.

        -

        Chamatkar is a Hindi word that means "miracle" or "magic", and the movie is full of hilarious and magical moments as Sunder and Amar use their wits and supernatural powers to outsmart their foes. The movie also has a romantic subplot as Sunder falls in love with Mala (Urmila Matondkar), Amar's daughter who does not know about her father's death. The movie has a lot of comedy, drama, fantasy and action, as well as some catchy songs composed by Anu Malik and sung by Kumar Sanu, Alka Yagnik and others.

        -

        Chamatkar Full Movie Bollywood Videos Download


        Download Zip ✏ ✏ ✏ https://geags.com/2uCqwZ



        -

        Chamatkar was directed by Rajiv Mehra and written by Shaukat Baig, Lilliput and Rajiv Mehra. The movie was a moderate success at the box office and received positive reviews from critics and audiences. It was also dubbed in Tamil as Thillu Mullu 2, a remake of the 1981 Rajinikanth comedy of the same name.

        -

        You can watch Chamatkar on Netflix with a subscription or download it from various online platforms. If you are a fan of Shah Rukh Khan or Bollywood comedies in general, you will surely enjoy this movie.

        -

        Chamatkar is a movie that blends comedy and fantasy with a touch of emotion. The chemistry between Shah Rukh Khan and Naseeruddin Shah is superb, as they play off each other's strengths and weaknesses. The movie also showcases the versatility of both actors, as they switch from comedy to drama to action with ease. Urmila Matondkar is charming and lively as the love interest, who also has a strong bond with her father's ghost. The movie also has some memorable supporting characters, such as the comic relief Johnny Lever, the villainous Tinnu Anand and the veteran Shammi Kapoor.

        -

        The movie is not without its flaws, however. The plot is predictable and clichéd, with some plot holes and inconsistencies. The movie also runs too long, with some unnecessary scenes and songs. The climax is over-the-top and unrealistic, with a lot of violence and explosions. The movie also has some outdated elements, such as the fashion show sequence and the portrayal of women. The movie could have been better edited and trimmed to make it more engaging and coherent.

        -

        Nevertheless, Chamatkar is a movie that can be enjoyed by anyone who likes light-hearted entertainment with a dash of fantasy. It is a movie that showcases the talent and charisma of Shah Rukh Khan and Naseeruddin Shah, who make a great duo on screen. It is a movie that has some funny and touching moments, as well as some catchy songs. It is a movie that can make you laugh, cry and cheer for the good guys.

        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Ebp Devis Et Facturation 2013 Crack 13l.md b/spaces/quidiaMuxgu/Expedit-SAM/Ebp Devis Et Facturation 2013 Crack 13l.md deleted file mode 100644 index 3fcedd40d6e328098f9b29a545ce949b0cb07d5d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Ebp Devis Et Facturation 2013 Crack 13l.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Ebp Devis Et Facturation 2013 Crack 13l


        DOWNLOAD ✪✪✪ https://geags.com/2uCsWr



        -
        -Ebp Devis Et Facturation 2013 Crack 13l. 18 June 2020 … devis facturation batiment, devis facturation opensource, devis facturation, devis facturation excel, ...
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mitchell Ondemand Heavy Truck Service Manuals Torrent 12 LINK.md b/spaces/quidiaMuxgu/Expedit-SAM/Mitchell Ondemand Heavy Truck Service Manuals Torrent 12 LINK.md deleted file mode 100644 index 84bcb474d242d01ecd5638fb43704e57de8b352d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mitchell Ondemand Heavy Truck Service Manuals Torrent 12 LINK.md +++ /dev/null @@ -1,13 +0,0 @@ -

        Mitchell Ondemand Heavy Truck Service Manuals Torrent 12


        Download Zip - https://geags.com/2uCsN0



        -
        -August 4, 2020 - 52 Automotive Diagnostic Software, Mitchell OnDemand 5 from China... Mitchell OnDemand Heavy Truck Service Manuals Torrent Hit → ← 1 year ago.
        -
        -
        -

        diff --git a/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/onnx_inference.py b/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - 
sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/raedeXanto/academic-chatgpt-beta/((TOP)) Evermotion Archmodels Vol. 180 Vintage Kitchen Appliances.md b/spaces/raedeXanto/academic-chatgpt-beta/((TOP)) Evermotion Archmodels Vol. 180 Vintage Kitchen Appliances.md deleted file mode 100644 index c98bcb91c5ca177123e6dea39c298867b63b66ed..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/((TOP)) Evermotion Archmodels Vol. 180 Vintage Kitchen Appliances.md +++ /dev/null @@ -1,22 +0,0 @@ -
        -

        How to Create a Retro Kitchen with Evermotion Archmodels Vol. 180 Vintage Kitchen Appliances

        -

        If you are a fan of the nostalgic charm of vintage kitchen appliances, you will love the latest collection from Evermotion. Archmodels Vol. 180 features 44 high-quality models of retro-style kitchen appliances, such as stoves, refrigerators, toasters, mixers, coffee makers and more. These models are perfect for creating realistic and detailed scenes of old-fashioned kitchens, cafes, diners or restaurants.

        -

        Evermotion Archmodels Vol. 180 Vintage Kitchen Appliances


        DOWNLOAD: https://tinourl.com/2uL4Zo



        -

        In this article, we will show you how to use these models in your 3D projects and give you some tips and tricks to achieve the best results. We will use 3ds Max and V-Ray as our software of choice, but you can use any other compatible software that supports the formats included in the collection (FBX, OBJ and C4D).

        -

        Step 1: Choose your models

        -

        The first step is to choose the models that suit your scene and style. You can browse through the catalog of Archmodels Vol. 180 and see the previews of each model. You can also download a free sample model from the Evermotion website to test it in your software.

        -

        For this example, we will use the following models: AM180_001 (a vintage stove), AM180_002 (a vintage refrigerator), AM180_003 (a vintage toaster), AM180_004 (a vintage mixer) and AM180_005 (a vintage coffee maker).

        -

        Step 2: Import your models

        -

        The next step is to import your models into your software. You can use the native format of your software (MAX for 3ds Max, C4D for Cinema 4D) or any of the universal formats (FBX or OBJ). The models come with high-resolution textures and materials that are optimized for V-Ray rendering.

        -

        -

        After importing your models, you can adjust their scale, position and rotation to fit your scene. You can also tweak their materials if you want to change their color, glossiness or other properties.
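
        If you prefer to script this step instead of dragging files in by hand, you can batch-import the whole set through the pymxs bridge in 3ds Max. This is only a minimal sketch under stated assumptions: the folder path is hypothetical, and the file names simply mirror the model codes from Step 1, so adjust both to match your extraction.

```python
# Minimal sketch: batch-import the five Archmodels FBX files via pymxs.
# MODEL_DIR is a hypothetical path; point it at your extracted collection.
import os
from pymxs import runtime as rt

MODEL_DIR = r"C:\Assets\Archmodels_vol180"
MODELS = ["AM180_001", "AM180_002", "AM180_003", "AM180_004", "AM180_005"]

for name in MODELS:
    path = os.path.join(MODEL_DIR, name + ".fbx")
    # rt.Name("noPrompt") suppresses the import dialog so the loop runs unattended
    rt.importFile(path, rt.Name("noPrompt"))

print("Imported", len(MODELS), "models; check scale and materials in the viewport.")
```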

        -

        Step 3: Set up your lighting and camera

        -

        The final step is to set up your lighting and camera for rendering. You can use any type of lighting that suits your scene, such as HDRI, sun and sky, or artificial lights. For this example, we will use a simple HDRI dome light with an image of a kitchen interior.
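
        If you like to script your scene setup, the dome light can be created in a few lines of pymxs as well. Treat this as a hedged sketch: the HDR file path is hypothetical, and V-Ray property names can vary between versions.

```python
# Hedged sketch: create a V-Ray dome light driven by an HDRI texture via pymxs.
# Property names are assumptions that may differ across V-Ray releases.
from pymxs import runtime as rt

hdri = rt.VRayHDRI()
hdri.HDRIMapName = r"C:\HDRI\kitchen_interior.hdr"  # hypothetical HDR image

dome = rt.VRayLight()
dome.type = 1         # 1 = dome in VRayLight's type enum (0 = plane, 2 = sphere)
dome.texmap = hdri    # let the HDRI drive the dome's color and intensity
dome.multiplier = 1.0
```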

        -

        You can also set up your camera angle and focal length to capture the best view of your models. You can use a wide-angle lens to show more of the scene or a telephoto lens to focus on the details. For this example, we will use a 35mm lens with a slight tilt.

        -

        Step 4: Render and enjoy!

        -

        Now you are ready to render your scene and enjoy the results. You can use any render settings that suit your needs, such as resolution, quality, samples or noise threshold. For this example, we will use the default settings of V-Ray Next with adaptive dome light and denoiser enabled.
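
        You can also kick off this final render from a script instead of the Render Setup dialog. Again, just a sketch: the resolution and output path below are examples, not requirements.

```python
# Minimal sketch: set the output resolution and render to a file via pymxs.
from pymxs import runtime as rt

rt.renderWidth, rt.renderHeight = 1920, 1080           # example resolution
rt.render(outputfile=r"C:\Renders\retro_kitchen.png",  # hypothetical output path
          vfb=True)                                    # keep the frame buffer open
```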

        -

        Here is our final render:

        [Image: final render of the retro kitchen scene]

        We hope you enjoyed this tutorial and learned something new. If you want to get Archmodels Vol. 180 Vintage Kitchen Appliances or any other collection from Evermotion, you can visit their website and shop online. They offer high-quality 3D models for architecture, interior design, visualization and more.

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack Tips and Tricks for Using the Software.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack Tips and Tricks for Using the Software.md deleted file mode 100644 index a6a82df4859c1ee2f0a6bf388a39af879f3d2bec..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack Tips and Tricks for Using the Software.md +++ /dev/null @@ -1,188 +0,0 @@ - -

        Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack: A Comprehensive Review

        -

        If you are looking for web design software that lets you create stunning websites without writing any code, then you might have heard of Adobe Muse CC 2018. But what is Adobe Muse CC 2018 and what can it do for you? And what is Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack and how can it help you use the software for free? In this article, we will answer these questions and more, as we review Adobe Muse CC 2018 and its crack in detail.

        -

        Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack


        DOWNLOAD ►►► https://tinourl.com/2uKZ30



        -

        What is Adobe Muse CC 2018?

        -

        Adobe Muse CC 2018 is web design software that allows graphic designers to create unique, standards-based websites without the need to write code. It is part of the Adobe Creative Cloud suite of applications, which means that you can access it online or offline, and sync your projects across devices.

        -

        Features and benefits of Adobe Muse CC 2018

        -

        Adobe Muse CC 2018 has many features and benefits that make it a powerful and user-friendly web design tool. Some of them are:

        -
          -
        • Simple site planning. You can lay out your site visually, add, name, and arrange pages in your sitemap, and apply master page settings with just a few clicks. You can also drag and drop to reorder pages.
        • -
        • Intuitive design features. You can use familiar Adobe tools like the Eyedropper, Smart Guides, Paste in Place, and Edit Original to design your pages. You can also use the new Layers panel to control elements of your design.
        • -
        • Engaging interactivity. You can drag and drop to add custom navigation, slide shows, contact forms, and more to your websites. All of the interactive widgets are touch-enabled for mobile devices.
        • -
        • Embedded HTML. You can add Google Maps, YouTube videos, Facebook feeds, HTML5 animation files, and more to your sites without writing code. Just copy and paste source code into your page and it does the rest.
        • -
        • Parallax scrolling. You can create stunning effects with just a few mouse clicks – make images and elements move in different directions at different speeds when scrolling.
        • -
        • Responsive design. You can create websites that adapt to different screen sizes and orientations using breakpoints. You can also preview how your site will look on different devices using the in-browser testing feature.
        • -
        • Publishing options. You can publish your site to Adobe Business Catalyst, a third-party hosting provider, or export it as HTML files. You can also update your site with one click using FTP upload.
        • -
        -

        System requirements and technical details of Adobe Muse CC 2018

        -

        To use Adobe Muse CC 2018, you need to have a compatible system that meets the following requirements:

        - - - - - - - - - - - - - -
        | Operating system | Processor | RAM | Disk space |
        | --- | --- | --- | --- |
        | Microsoft Windows 7 with Service Pack 1 (64-bit), Windows 8.1 (64-bit), Windows 10 (64-bit), or Windows Anniversary Update | Intel Core 2 or AMD Athlon 64 processor (2 GHz or faster) | 2 GB of RAM | 1.1 GB of available hard-disk space for installation; additional free space (approximately 1.5 GB) required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices) |
        -

        The technical details of Adobe Muse CC 2018 are as follows:

        -


        - - - - - - - - - - - - - - - - - - - -
        | Name | Description | Version | Size | Date | Core | Downloads |
        | --- | --- | --- | --- | --- | --- | --- |
        | Adobe Muse | Webpage editor for designers who are not experienced in writing markup code. | CC 2018 v2018.1.0.266 | 1.04 GB | March 15, 2018 | 64-bit | 8413 (direct download) |
        -

        How to download and install Adobe Muse CC 2018

        -

        To download and install Adobe Muse CC 2018, you need to follow these steps:

        -
          -
        1. Create an Adobe account or sign in with your existing one at https://www.adobe.com/.
        2. -
        3. Select Creative Cloud from the menu bar and click on Download Apps.
        4. -
        5. Select Adobe Muse from the list of apps and click on Download.
        6. -
        7. The Creative Cloud desktop app will launch and start downloading Adobe Muse CC 2018.
        8. -
        9. Once the download is complete, click on Install to begin the installation process.
        10. -
        11. Follow the on-screen instructions to complete the installation.
        12. -
        13. You can now launch Adobe Muse CC 2018 from the Creative Cloud desktop app or from your Start menu.
        14. -
        -

        What is Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack?

        -

        If you want to use Adobe Muse CC 2018 for free without paying for a subscription or a license, then you might be interested in Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack. But what is it and how does it work?

        -

        What is a crack and why do you need it?

        -

        A crack is a software tool that modifies or bypasses the security features of another software program to allow unauthorized use. In this case, Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack is a crack that allows you to use Adobe Muse CC 2018 without activating it with an Adobe account or a serial number.

        -

        You might need a crack if you want to use Adobe Muse CC 2018 for free without paying for a subscription or a license, which can be expensive or inconvenient for some users. However, using a crack also comes with some risks and disadvantages, which we will discuss later.

        -

        How to use Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack

        -

        To use Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack, you need to follow these steps:

        -
          -
        1. If you have already installed Adobe Muse CC 2018 from the official source, uninstall it completely from your system.
        2. -
        3. If you have not installed Adobe Muse CC 2018 yet, download it from https://www.mutaz.pro/free-programs/en/download/?53.
        4. -
        5. Extract the downloaded file using WinRAR or any other file extraction software.
        6. -
        7. In the extracted folder, run Setup.exe as administrator to install Adobe Muse CC 2018 on your system.
        8. -
        9. In the same folder, run Patch.exe as administrator to apply the crack to Adobe Muse CC 2018.
        10. -
        11. Follow the instructions on the screen to complete the patching process.
        12. -
        13. You can now launch Adobe Muse CC 2018 from your Start menu or desktop shortcut.
        14. -
        15. You can use Adobe Muse CC 2018 without activating it with an Adobe account or a serial number.
        16. -
        -

        Pros and cons of using Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack

        -

        Using Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack has some pros and cons that you should be aware of before deciding to use it. Here are some of them:

        - - - - - - - - - -
        Pros:
          -
        • You can use Adobe Muse CC 2018 for free without paying for a subscription or a license.
        • -
        • You can access all the features and functions of Adobe Muse CC 2018 without any limitations or restrictions.
        • -
        • You can create and publish unlimited websites with Adobe Muse CC 2018.
        • -
        Cons:
        • You might violate the terms and conditions of Adobe and face legal consequences for using a cracked software.
        • -
        • You might expose your system to malware or viruses that might be hidden in the crack file or the downloaded file.
        • -
        • You might not receive any updates or support from Adobe for using a cracked software.
        • -
        • You might encounter some errors or bugs that might affect the performance or functionality of Adobe Muse CC 2018.
        • -
        -

        Conclusion

        -

        Summary of the main points

        -

        In this article, we have reviewed Adobe Muse CC 2018 and its crack in detail. We have learned that:

        -
          -
        • Adobe Muse CC 2018 is a web design software that allows graphic designers to create unique, standards-based websites without the need to write code.
        • -
        • Adobe Muse CC 2018 has many features and benefits that make it a powerful and user-friendly web design tool, such as simple site planning, intuitive design features, engaging interactivity, embedded HTML, parallax scrolling, responsive design, and publishing options.
        • -
        • Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack is a software tool that modifies or bypasses the security features of Adobe Muse CC 2018 to allow unauthorized use.
        • -
        • Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack can be used to use Adobe Muse CC 2018 for free without paying for a subscription or a license, but it also comes with some risks and disadvantages, such as legal consequences, malware or viruses, no updates or support, and errors or bugs.
        • -
        -

        Recommendations and tips

        -

        If you are interested in using Adobe Muse CC 2018 and its crack, here are some recommendations and tips that you should follow:

        -
          -
        • Make sure that you have a compatible system that meets the requirements of Adobe Muse CC 2018 before downloading and installing it.
        • -
        • Download Adobe Muse CC 2018 and its crack from reliable and trusted sources only, and scan them with an antivirus software before opening them.
        • -
        • Backup your system and your files before installing and using Adobe Muse CC 2018 and its crack, in case something goes wrong or you want to uninstall them later.
        • -
        • Use Adobe Muse CC 2018 and its crack at your own risk and responsibility, and respect the intellectual property rights of Adobe and other creators.
        • -
        • Learn how to use Adobe Muse CC 2018 effectively by watching tutorials, reading guides, or joining online communities of other users.
        • -
        -

        FAQs

        -

        Here are some frequently asked questions about Adobe Muse CC 2018 and its crack:

        -
          -
        1. Is Adobe Muse CC 2018 still available?
          -Yes, Adobe Muse CC 2018 is still available for download and installation from the official source or from other sources. However, Adobe has announced that it will stop updating and supporting Adobe Muse CC 2018 as of March 26, 2020. This means that there will be no new features or bug fixes for Adobe Muse CC 2018 after that date.
        2. -
        3. Is Adobe Muse CC 2018 compatible with other Adobe products?
          -Yes, Adobe Muse CC 2018 is compatible with other Adobe products, such as Photoshop, Illustrator, Animate, Dreamweaver, and more. You can import assets from these products into your Adobe Muse projects, or export your Adobe Muse projects to these products for further editing or publishing.
        4. -
        5. Is Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack safe to use?
          -There is no definitive answer to this question, as different sources might provide different versions of the crack that might have different levels of safety or quality. However, in general, using any crack is risky and not recommended, as it might violate the terms and conditions of the original software provider, expose your system to malware or viruses, prevent you from receiving updates or support, or cause errors or bugs in the software.
        6. -
        7. Can I use Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack offline?
          -Yes, you can use Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack offline once you have installed it on your system. However, you might not be able to access some online features or services of Adobe Muse CC 2018, such as in-browser testing, publishing to Adobe Business Catalyst, or syncing with Creative Cloud.
        8. -
        9. Can I use Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack on multiple devices?
          -Yes, you can use Adobe Muse CC 2018 v2018.1.0.266 (x64) Crack on multiple devices, as long as you have downloaded and installed it on each device separately. However, you might not be able to sync your projects across devices using Creative Cloud, as the crack might interfere with the activation process.
        10. -
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Counter Strike 1.6 Download Free Full Version for PC Softonic - The Best Mod for CS 1.6.md b/spaces/raedeXanto/academic-chatgpt-beta/Counter Strike 1.6 Download Free Full Version for PC Softonic - The Best Mod for CS 1.6.md deleted file mode 100644 index 1f1179c9816f202afc4fb291486ff29750d6181d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Counter Strike 1.6 Download Free Full Version for PC Softonic - The Best Mod for CS 1.6.md +++ /dev/null @@ -1,93 +0,0 @@ -
        -

        Counter Strike 1.6 Download Free Full Version for PC Softonic

        -

        Introduction

        -

        If you are a fan of first-person shooter games, you have probably heard of Counter Strike 1.6, one of the most popular and influential games of its genre. Counter Strike 1.6 is a multiplayer game that pits two teams of players against each other in various scenarios, such as hostage rescue, bomb defusal, assassination, and more. The game is fast-paced, tactical, and skill-based, requiring teamwork, strategy, and reflexes to win.

        -

        Counter Strike began in 1999 as a mod for Half-Life, a game developed by Valve Corporation, and version 1.6 followed in 2003. Since then, it has been updated and improved by various developers and community members, becoming one of the most played and loved games in the world. It has also spawned several sequels and spin-offs, such as Counter Strike: Source, Counter Strike: Global Offensive, and Counter Strike: Condition Zero.

        -

        counter strike 1.6 download free full version for pc softonic


        Download File >>>>> https://tinourl.com/2uL0Rp



        -

        If you want to experience the classic gameplay of Counter Strike 1.6 on your PC, you can download it for free from Softonic, one of the most trusted and reliable sources for software downloads on the internet. In this article, we will show you how to download and install Counter Strike 1.6 from Softonic, how to play it on your PC, and some tips and tricks to help you improve your skills.

        -

        How to download and install Counter Strike 1.6 from Softonic

        -

        Downloading and installing Counter Strike 1.6 from Softonic is very easy and straightforward. Just follow these simple steps:

        -

        Step 1: Visit the Softonic website

        -

        Go to https://en.softonic.com/ on your web browser. You will see a search bar at the top of the page. Type "Counter Strike 1.6" in the search bar and hit enter. You will see a list of results related to your query.

        -

        Step 2: Click on the download button

        -

        From the list of results, look for the one that says "Counter-Strike". It should have a green download button next to it. Click on the download button to proceed to the next page.

        -

        Step 3: Choose a download location

        -

        On the next page, you will see some information about Counter Strike 1.6, such as its size, version, rating, and description. You will also see another green download button at the top right corner of the page. Click on it to start downloading the game.

        -

        You will be asked to choose a location where you want to save the game file on your PC. You can either choose a default location or browse for a custom location. Make sure you have enough space on your hard drive before downloading.

        -

        Step 4: Run the installer file

        -

        Once the download is complete, locate the installer file on your PC. It should have a name like "counter-strike-16.exe" or something similar. Double-click on it to run it.

        -


        -

        Step 5: Follow the installation wizard

        -

        A window will pop up with some instructions on how to install Counter Strike 1.6 on your PC. Follow the steps carefully and agree to the terms and conditions when prompted. You can also choose where you want to install the game on your PC.

        -

        The installation process may take some time depending on your system specifications and internet speed. Wait patiently until it is done.

        -

        How to play Counter Strike 1.6 on PC

        -

        Congratulations! You have successfully downloaded and installed Counter Strike 1.6 from Softonic on your PC. Now you are ready to play it and have some fun.

        -

        Launch the game from your desktop or start menu

        -

        To launch Counter Strike 1.6 on your PC, you can either click on its icon on your desktop or find it in your start menu under "All Programs". A window will open with some options for playing the game.

        -

        Choose a game mode and a server

        -

        You can choose between two main game modes in Counter Strike 1.6: online multiplayer or offline single-player.

        -

        If you want to play online multiplayer with other players around the world, you need to have an internet connection and a valid Steam account (you can create one for free at https://store.steampowered.com/). You can then click on "Find Servers" to see a list of available servers that host different maps and game modes.

        -

        You can filter the servers by region, ping, map name, number of players, etc., or use the search function to find a specific server that suits your preferences. Once you find a server that you like, double-click on it to join it.

        -

        If you want to play offline single-player with bots (computer-controlled opponents), you don't need an internet connection or a Steam account. You can click on "New Game" to create your own custom game with bots.

        -

        You can choose which map you want to play (there are dozens of official and community-made maps in Counter Strike 1.6), how many bots you want to add (you can adjust their difficulty level), which team you want to join (terrorists or counter-terrorists), and other settings such as round time limit, friendly fire, etc.

        -

        Join a team and start shooting

        -

        Once you join a server or create a new game with bots, you will be asked to choose which team you want to join: terrorists or counter-terrorists.

        -

        The terrorists' objective is usually to plant a bomb at a designated site or assassinate a VIP target while preventing the counter-terrorists from defusing the bomb or rescuing the VIP.

        -

        The counter-terrorists' objective is usually to defuse the bomb planted by the terrorists or rescue the VIP target while preventing the terrorists from escaping or killing them all.

        -

        You can also switch teams during gameplay if you want (unless it is disabled by server rules).

        -

        After choosing your team, you will be able to buy weapons and equipment using money that you earn by killing enemies or completing objectives (or get them for free if playing offline). You can choose from various types of guns (rifles, shotguns, snipers, submachine guns, pistols, etc.), grenades (flashbangs, smokes, HE grenades, etc.), armor (kevlar vest, helmet, etc.), defuse kits (for counter-terrorists only), etc.

        -

        You can only carry one primary weapon (rifle, shotgun, sniper, or submachine gun), one secondary weapon (pistol), one knife (for melee attacks), four grenades (one of each type), one armor (vest or helmet), and one defuse kit (for counter-terrorists only) at a time.

        -

        You can also drop weapons or pick up weapons dropped by dead players or teammates if you want.

        -

        Once you have bought your weapons and equipment, you are ready to start shooting at your enemies (or allies if friendly fire is enabled).

        -

        Customize your settings and controls

        -

        If you want to customize your settings and controls for playing Counter Strike 1.6 on your PC, you can click on "Options" in the main menu or press ESC during gameplay.

        -

        You can adjust various settings such as video resolution, graphics quality, sound volume, mouse sensitivity, crosshair style, etc., according

        Is Counter Strike 1.6 still popular?

        -

        Yes, Counter Strike 1.6 is still popular among many players and fans around the world. It has a loyal and active community that supports and updates the game regularly. It also has a competitive scene that hosts tournaments and leagues for professional and amateur players.

        -

        What are some alternatives to Counter Strike 1.6?

        -

        Some alternatives to Counter Strike 1.6 are its sequels and spin-offs, such as Counter Strike: Source, Counter Strike: Global Offensive, and Counter Strike: Condition Zero. These games offer similar gameplay but with improved graphics, features, and modes. You can also try other first-person shooter games such as Call of Duty, Battlefield, or Halo.

        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git "a/spaces/raedeXanto/academic-chatgpt-beta/Free Download Pes 2013 Portable /TOP\\\\.md" "b/spaces/raedeXanto/academic-chatgpt-beta/Free Download Pes 2013 Portable /TOP\\\\.md" deleted file mode 100644 index c819e48198a3ddd2d01e8f9164b6cdac199578f3..0000000000000000000000000000000000000000 --- "a/spaces/raedeXanto/academic-chatgpt-beta/Free Download Pes 2013 Portable /TOP\\\\.md" +++ /dev/null @@ -1,94 +0,0 @@ -## Free Download Pes 2013 Portable - - - - - - ![Free Download Pes 2013 Portable \/\/TOP\\\\](https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcRLW72stIohPnOB93KyuCnzruU0X6nebGibVLgprShSvC1XwF3znlpEE6M) - - - - - -**Download ✒ ✒ ✒ [https://denirade.blogspot.com/?download=2txM51](https://denirade.blogspot.com/?download=2txM51)** - - - - - - - - - - - - - -# How to Free Download PES 2013 Portable for PC - - - -If you are a fan of soccer games, you might have heard of PES 2013, one of the most popular and realistic soccer simulations ever made. PES 2013 stands for Pro Evolution Soccer 2013, and it was released by Konami in 2012 for various platforms, including PC. - - - -However, if you want to play PES 2013 on your PC without installing it or using a CD-ROM, you might be interested in PES 2013 Portable, a version of the game that can be run from a USB flash drive or any other removable device. PES 2013 Portable has all the features and modes of the original game, but it is much smaller in size and easier to use. - - - -In this article, we will show you how to free download PES 2013 Portable for PC and enjoy this amazing soccer game anytime and anywhere. Follow these simple steps and get ready to score some goals! - - - -## Step 1: Download PES 2013 Portable from a Reliable Source - - - -The first thing you need to do is to find a trustworthy website that offers PES 2013 Portable for free download. There are many websites that claim to have this game, but some of them might be fake or contain viruses or malware. Therefore, you should be careful and do some research before downloading anything. - - - -One of the best websites that we recommend is [pes-2013-portable.com](https://pes-2013-portable.com/), which has a direct link to download PES 2013 Portable without any surveys or ads. This website also has a detailed guide on how to use PES 2013 Portable and how to fix any possible issues. - - - -To download PES 2013 Portable from this website, just click on the button that says "Download Now" and wait for the download to finish. The file size is about 1.5 GB, so it might take some time depending on your internet speed. - - - -## Step 2: Extract PES 2013 Portable from the ZIP File - - - -Once you have downloaded PES 2013 Portable, you will need to extract it from the ZIP file that contains it. To do this, you will need a program that can handle ZIP files, such as WinRAR or 7-Zip. If you don't have one of these programs, you can download them for free from their official websites. - - - -After installing one of these programs, right-click on the ZIP file that you downloaded and select "Extract Here" or "Extract to pes-2013-portable". This will create a folder with the same name as the ZIP file, which contains all the files and folders of PES 2013 Portable. - - - -## Step 3: Run PES 2013 Portable from Your Device - - - -Now that you have extracted PES 2013 Portable from the ZIP file, you can run it from your device. 
To do this, you will need to copy or move the folder that contains PES 2013 Portable to your USB flash drive or any other removable device that has enough space. You can also keep it on your PC if you prefer. - - - -Then, open the folder that contains PES 2013 Portable and double-click on the file that says "pes-2013-portable.exe". This will launch the game and you will see the main menu with several options, such as "Play", "Settings", "Online", and "Exit". You can use your mouse or keyboard to navigate through these options and select what you want to do. - - - -If you want to play PES 2013 Portable offline, just click on "Play" and choose one of the modes available, such as "Exhibition", "Master League", "Become a Legend", or "Cup". You can also customize your teams, players, stadiums, kits, and more in the "Edit" mode. - - - -If you want to play PES 2013 Portable online with other players around the world, click on "Online" and create an account or log in with your existing one. You can then - - 1b8d091108 - - - - - diff --git a/spaces/rajesh1729/NER-using-spacy-gradio/README.md b/spaces/rajesh1729/NER-using-spacy-gradio/README.md deleted file mode 100644 index f722fd080787d783a8523b2087f3fe564e6825ff..0000000000000000000000000000000000000000 --- a/spaces/rajesh1729/NER-using-spacy-gradio/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: NER Using Spacy Gradio -emoji: 📈 -colorFrom: yellow -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false -license: afl-3.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aams Auto Audio Mastering System Keygen 36.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aams Auto Audio Mastering System Keygen 36.md deleted file mode 100644 index d69fe5b110c57a7ca2144b369b2bd6c59dc0ef68..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Aams Auto Audio Mastering System Keygen 36.md +++ /dev/null @@ -1,6 +0,0 @@ -

        aams auto audio mastering system keygen 36


        Download » https://urlgoal.com/2uCNad



        - -Master. Plan. D5. DLM-0710-001-02-00 April 2008. SESAR Definition Phase - Deliverable 5 ... Key Performance Indicators (KPI) for which targets have been agreed ... aircraft automated systems such as auto-brake (making it impossible for an ... 36. Issued by the SESAR Consortium for the SESAR Definition Phase Project ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download English Language Pack For Bioshock Infinite.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download English Language Pack For Bioshock Infinite.md deleted file mode 100644 index 0cbef203f65e9a9afc391f6109381273bb08be17..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download English Language Pack For Bioshock Infinite.md +++ /dev/null @@ -1,20 +0,0 @@ - -

        How to Download English Language Pack for Bioshock Infinite

        -

        Bioshock Infinite is a first-person shooter video game in the BioShock series, developed by Irrational Games and published by 2K. Infinite was released worldwide for the PlayStation 3, Windows, Xbox 360, and OS X platforms in 2013[^2^].

        -

        Download English Language Pack For Bioshock Infinite


        Download Zip ★★★ https://urlgoal.com/2uCLrQ



        -

        If you want to play Bioshock Infinite in English, but your game is in a different language, you may need to download an English language pack. Here are some steps to help you do that:

        -
          -
        1. Go to this link and download the Bioshock Infinite English language pack[^1^]. The file size is about 2.5 GB.
        2. -
        3. Extract the downloaded file using a program like WinRAR or 7-Zip.
        4. -
        5. Copy the extracted folder named "xgame" and paste it into your Bioshock Infinite installation directory. This is usually located at C:\Program Files (x86)\Steam\steamapps\common\BioShock Infinite. (A scripted version of this copy-and-replace is sketched just after this list.)
        6. -
        7. Replace any existing files when prompted.
        8. -
        9. Launch Bioshock Infinite and enjoy the game in English.
        10. -
        -
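
        If you would rather script the copy-and-replace than do it by hand, a few lines of Python can handle it. This is a hedged sketch: the source path is an assumption about where you extracted the pack, and the destination is the default Steam location quoted in step 5.

```python
# Minimal sketch of the copy-and-replace above: merge the extracted "xgame"
# folder into the game directory, overwriting any existing files.
# SRC is a hypothetical extraction path; DST is the default Steam install.
import shutil
from pathlib import Path

SRC = Path(r"C:\Downloads\bioshock_english\xgame")
DST = Path(r"C:\Program Files (x86)\Steam\steamapps\common\BioShock Infinite\xgame")

shutil.copytree(SRC, DST, dirs_exist_ok=True)  # dirs_exist_ok (Python 3.8+) merges and overwrites
print("Language pack copied; launch the game and check the language.")
```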

        Note: This method may not work for all versions of Bioshock Infinite. If you encounter any problems, you may need to find another source for the English language pack or contact the game's support team.

        Bioshock Infinite is set in 1912 and follows the story of Booker DeWitt, a former Pinkerton agent who is hired to rescue Elizabeth, a young woman with mysterious powers who has been imprisoned in the floating city of Columbia. Along the way, Booker and Elizabeth face various enemies and factions, such as the Founders, the Vox Populi, and the Songbird. The game also explores themes of American exceptionalism, racism, religion, and free will.

        -

        The game received critical acclaim for its story, setting, characters, and gameplay. It won several awards, including Game of the Year from multiple publications. It also sold over 11 million copies as of 2017. Bioshock Infinite is considered one of the best video games of all time by many critics and fans.

        -

        If you are a fan of Bioshock Infinite or want to experience it for the first time, downloading the English language pack is a great way to enhance your enjoyment of the game. You can immerse yourself in the rich and detailed world of Columbia and listen to the original voice acting of the characters. You can also follow the complex and engaging plot more easily and appreciate the game's dialogue and humor.

        -

        Bioshock Infinite is not only a game, but also a work of art. It has a stunning visual style that combines realistic graphics with a steampunk aesthetic. The game's soundtrack features original songs and covers of modern tunes that fit the game's era and mood. The game also has a lot of Easter eggs and references to other media and historical events that add to its depth and charm.

        -

        Bioshock Infinite is a game that you will not forget easily. It has a memorable ending that will leave you speechless and emotional. It also has multiple modes and DLCs that offer more content and challenges for you to enjoy. The game is a masterpiece that deserves your attention and appreciation.

        -

        So what are you waiting for? Download the English language pack for Bioshock Infinite today and start your adventure in the sky. You will not regret it.

        -
        -
        \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphical Rapid Analysis Of Structures Program.epub.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphical Rapid Analysis Of Structures Program.epub.md deleted file mode 100644 index acc3d33a2dba2ae2d3296650c421970330ac4bb7..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Graphical Rapid Analysis Of Structures Program.epub.md +++ /dev/null @@ -1,14 +0,0 @@ -

        Graphical Rapid Analysis Of Structures Program.epub


        DOWNLOAD >>>>> https://urlgoal.com/2uCJhp



- -Graphical Rapid Analysis Of Structures Program Free!FREE! ⚡. graphical rapid analysis of structures program free. DOWNLOAD: rapid analysis of structures program, graphical rapid analysis of structures program (grasp) free download, which software is best for structural . - -Download Graphical Rapid Analysis Of Structures Program Free!FREE! ⚡. - -Product Features: For structural analysis, the program performs effective operation. The program can analyze the continuity, the sequence of a frame, the connection of a space and the linking of a space for any frame structure.
        -
        -
        -

        diff --git a/spaces/remzicam/ted_talks_summarizer/README.md b/spaces/remzicam/ted_talks_summarizer/README.md deleted file mode 100644 index 4ac3aca5c72a7b2e0f3bbbc61d7d226aa33986c9..0000000000000000000000000000000000000000 --- a/spaces/remzicam/ted_talks_summarizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ted Talks Summarizer -emoji: 🌖 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rgres/Seg2Sat/static/_app/immutable/chunks/paths-d3bcbd10.js b/spaces/rgres/Seg2Sat/static/_app/immutable/chunks/paths-d3bcbd10.js deleted file mode 100644 index 0911f4acc85b0ac94f242f9bc4ab5effba088495..0000000000000000000000000000000000000000 --- a/spaces/rgres/Seg2Sat/static/_app/immutable/chunks/paths-d3bcbd10.js +++ /dev/null @@ -1 +0,0 @@ -import{E as f,s as p}from"./index-bcf2726a.js";const n=[];function _(t,b=f){let o;const i=new Set;function r(e){if(p(t,e)&&(t=e,o)){const c=!n.length;for(const s of i)s[1](),n.push(s,t);if(c){for(let s=0;s{i.delete(s),i.size===0&&(o(),o=null)}}return{set:r,update:a,subscribe:l}}let u="",d="";function g(t){u=t.base,d=t.assets||u}export{d as a,u as b,g as s,_ as w}; diff --git a/spaces/rohan13/Roar/utils.py b/spaces/rohan13/Roar/utils.py deleted file mode 100644 index 66d9254377f662a11a42ba11d37df40b7384a03b..0000000000000000000000000000000000000000 --- a/spaces/rohan13/Roar/utils.py +++ /dev/null @@ -1,276 +0,0 @@ -import os -import pickle -import langchain - -import faiss -from langchain import HuggingFaceHub -from langchain.chains import ConversationalRetrievalChain -from langchain.chat_models import ChatOpenAI -from langchain.document_loaders import DirectoryLoader, TextLoader, UnstructuredHTMLLoader, UnstructuredPDFLoader, UnstructuredWordDocumentLoader, PyPDFLoader -from langchain.embeddings import OpenAIEmbeddings, HuggingFaceHubEmbeddings -from langchain.memory import ConversationBufferWindowMemory -from langchain.llms.openai import OpenAI, OpenAIChat -from langchain.prompts.chat import ( - ChatPromptTemplate, - HumanMessagePromptTemplate, - SystemMessagePromptTemplate, -) -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores.faiss import FAISS -from langchain.cache import InMemoryCache -from langchain.chat_models import ChatGooglePalm -import traceback - - -langchain.llm_cache = InMemoryCache() - -global model_name - -models = ["GPT-3.5", "Flan UL2", "GPT-4", "Flan T5", "Palm"] - -pickle_file = "_vs.pkl" -updated_pickle_file = "_vs_updated.pkl" -index_file = "_vs.index" -models_folder = "models/" - -llm = ChatOpenAI(model_name="gpt-4", temperature=0.1) - -embeddings = OpenAIEmbeddings(model='text-embedding-ada-002') - -chat_history = [] - -memory = ConversationBufferWindowMemory(memory_key="chat_history", k=10) - -vectorstore_index = None - -# get name to be used in prompt from environment variable `name` -name = os.environ.get("name", "Rohan") - -system_template = """You are ROAR, {name}'s personal assistant supposed to ANSWER QUESTIONS ON HIS BEHALF. -STRICTLY FOLLOW THIS: FOR OPINIONS, PREFERENCES, EXPERIENCES,ALWAYS ANSWER IN FIRST PERSON AS IF YOU ARE {name}. ALWAYS Generate this response with experiences/opinions using {name}'s RESUME available in context/vectorstore. -Response should be in professional language and tone, impressive, catchy, and grammatically correct. 
-Use {name}'s resume and your knowledge of his experience and skills to answer questions to the best of your ability. -Answer the question as if you are assisting {name} or answering on his behalf. ----------------- -This activity of answering questions on {name}'s behalf will be called Roar. -For example: If someone wants to ask you a question, they will say "Roar it" and you will answer the question on {name}'s behalf by generating a response using {name}'s resume and your knowledge of his experience and skills. -Add a qwirky and funny line in the end to encourage the user to try more Roars as they are free. ----------------- -{context} -""" -# append name in system template to be used in prompt -system_template = system_template.format(name=name, context="{context}") - -messages = [ - SystemMessagePromptTemplate.from_template(system_template), - HumanMessagePromptTemplate.from_template("{question}"), -] -CHAT_PROMPT = ChatPromptTemplate.from_messages(messages) - - -def set_model_and_embeddings(model): - global chat_history - set_model(model) - # set_embeddings(model) - chat_history = [] - - -def set_model(model): - global llm - print("Setting model to " + str(model)) - if model == "GPT-3.5": - print("Loading GPT-3.5") - llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.5) - elif model == "GPT-4": - print("Loading GPT-4") - llm = OpenAI(model_name="gpt-4", temperature=1) - elif model == "Flan UL2": - print("Loading Flan-UL2") - llm = HuggingFaceHub(repo_id="google/flan-ul2", model_kwargs={"temperature": 0.1, "max_new_tokens":500}) - elif model == "Flan T5": - print("Loading Flan T5") - llm = HuggingFaceHub(repo_id="google/flan-t5-base", model_kwargs={"temperature": 0.1}) - elif model == "Palm": - llm = ChatGooglePalm(temperature=0) - else: - print("Loading GPT-3.5 from else") - llm = OpenAI(model_name="text-davinci-002", temperature=0.1) - - -def set_embeddings(model): - global embeddings - if model == "GPT-3.5" or model == "GPT-4": - print("Loading OpenAI embeddings") - embeddings = OpenAIEmbeddings(model='text-embedding-ada-002') - elif model == "Flan UL2" or model == "Flan T5": - print("Loading Hugging Face embeddings") - embeddings = HuggingFaceHubEmbeddings(repo_id="sentence-transformers/all-MiniLM-L6-v2") - - -def get_search_index(model, first_time=False): - global vectorstore_index - if not first_time: - print("Using updated pickle file") - file = updated_pickle_file - else: - print("Using base pickle file") - file = pickle_file - if os.path.isfile(get_file_path(model, file)) and os.path.isfile( - get_file_path(model, index_file)) and os.path.getsize(get_file_path(model, file)) > 0: - # Load index from pickle file - search_index = load_index(model) - else: - search_index = create_index(model) - - vectorstore_index = search_index - return search_index - - -def load_index(model): - with open(get_file_path(model, pickle_file), "rb") as f: - search_index = pickle.load(f) - print("Loaded index") - return search_index - - -def create_index(model): - sources = fetch_data_for_embeddings() - source_chunks = split_docs(sources) - search_index = search_index_from_docs(source_chunks) - faiss.write_index(search_index.index, get_file_path(model, index_file)) - # Save index to pickle file - with open(get_file_path(model, pickle_file), "wb") as f: - pickle.dump(search_index, f) - print("Created index") - return search_index - - -def get_file_path(model, file): - # If model is GPT3.5 or GPT4 return models_folder + openai + file else return models_folder + hf + file - if model == "GPT-3.5" 
or model == "GPT-4": - return models_folder + "openai" + file - elif model == "Palm": - return models_folder + "palm" + file - else: - return models_folder + "hf" + file - - -def search_index_from_docs(source_chunks): - # print("source chunks: " + str(len(source_chunks))) - # print("embeddings: " + str(embeddings)) - - search_index = FAISS.from_documents(source_chunks, embeddings) - return search_index - - -def get_html_files(): - loader = DirectoryLoader('docs', glob="**/*.html", loader_cls=UnstructuredHTMLLoader, recursive=True) - document_list = loader.load() - return document_list - - -def fetch_data_for_embeddings(): - document_list = get_word_files() - document_list.extend(get_html_files()) - - print("document list: " + str(len(document_list))) - return document_list - - -def get_word_files(): - loader = DirectoryLoader('docs', glob="**/*.docx", loader_cls=UnstructuredWordDocumentLoader, recursive=True) - document_list = loader.load() - return document_list - -def split_docs(docs): - splitter = CharacterTextSplitter(separator=" ", chunk_size=800, chunk_overlap=0) - - source_chunks = splitter.split_documents(docs) - - print("chunks: " + str(len(source_chunks))) - - return source_chunks - -def load_documents(file_paths): - # Check the type of file from the extension and load it accordingly - document_list = [] - for file_path in file_paths: - if file_path.endswith(".txt"): - loader = TextLoader(file_path) - elif file_path.endswith(".docx"): - loader = UnstructuredWordDocumentLoader(file_path) - elif file_path.endswith(".html"): - loader = UnstructuredHTMLLoader(file_path) - elif file_path.endswith(".pdf"): - loader = PyPDFLoader(file_path) - else: - print("Unsupported file type") - raise Exception("Unsupported file type") - docs = loader.load() - document_list.extend(docs) - # print("Loaded " + file_path) - - print("Loaded " + str(len(document_list)) + " documents") - return document_list - -def add_to_index(docs, index, model): - global vectorstore_index - index.add_documents(docs) - with open(get_file_path(model, updated_pickle_file), "wb") as f: - pickle.dump(index, f) - vectorstore_index = index - print("Vetorstore index updated") - return True -def ingest(file_paths, model): - print("Ingesting files") - try: - # handle txt, docx, html, pdf - docs = load_documents(file_paths) - split_docs(docs) - add_to_index(docs, vectorstore_index, model) - print("Ingestion complete") - except Exception as e: - traceback.print_exc() - return False - return True - - -def get_qa_chain(vectorstore_index): - global llm, model_name - print(llm) - - # embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76) - # compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=gpt_3_5_index.as_retriever()) - retriever = vectorstore_index.as_retriever(search_type="similarity_score_threshold", - search_kwargs={"score_threshold": .8}) - - chain = ConversationalRetrievalChain.from_llm(llm, retriever, return_source_documents=True, - verbose=True, get_chat_history=get_chat_history, - combine_docs_chain_kwargs={"prompt": CHAT_PROMPT}) - return chain - - -def get_chat_history(inputs) -> str: - res = [] - for human, ai in inputs: - res.append(f"Human:{human}\nAI:{ai}") - return "\n".join(res) - - -def generate_answer(question) -> str: - global chat_history, vectorstore_index - chain = get_qa_chain(vectorstore_index) - - result = chain( - {"question": question, "chat_history": chat_history, "vectordbkwargs": {"search_distance": 0.6}}) - 
chat_history = [(question, result["answer"])] - sources = [] - print(result) - - for document in result['source_documents']: - # sources.append(document.metadata['url']) - sources.append(document.metadata['source'].split('/')[-1].split('.')[0]) - print(sources) - - source = ',\n'.join(set(sources)) - return result['answer'] + '\nSOURCES: ' + source diff --git a/spaces/rorallitri/biomedical-language-models/logs/AllData Ver9.8 Imports 6.iso Free Download HOT!.md b/spaces/rorallitri/biomedical-language-models/logs/AllData Ver9.8 Imports 6.iso Free Download HOT!.md deleted file mode 100644 index f2476a6e928f5314af38cf578b6508777785c5c6..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/AllData Ver9.8 Imports 6.iso Free Download HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        AllData Ver9.8 Imports 6.iso Free Download


Download Zip: https://tinurll.com/2uzmuF



-
-
-
-

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Anyconnect For Mac Free LINK Download.md b/spaces/rorallitri/biomedical-language-models/logs/Anyconnect For Mac Free LINK Download.md deleted file mode 100644 index e8f50a9546e18d62e4ea7c38ab20dae98731d3c1..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Anyconnect For Mac Free LINK Download.md +++ /dev/null @@ -1,33 +0,0 @@ - -

        Files included:
        - anyconnect-win-4.9.01095-core-vpn-predeploy-k9.msi - Standalone deployment package for Windows platforms. 32/64Bit
        - anyconnect-macos-4.9.01095-predeploy-k9.dmg - Standalone DMG package for Mac OS X "Intel" platforms.
        - anyconnect-linux64-4.9.01095-predeploy-k9.tar.gz - Standalone package for 64-bit Linux platforms.

        -

        Freeware programs can be downloaded used free of charge and without any time limitations. Freeware products can be used free of charge for both personal and professional (commercial use).

        -

        Anyconnect For Mac Free Download


Download: https://tinurll.com/2uzmQM



        -

        Open Source software is software with source code that anyone can inspect, modify or enhance. Programs released under this license can be used at no cost for both personal and commercial purposes. There are many different open source licenses but they all must comply with the Open Source Definition - in brief: the software can be freely used, modified and shared.

        -

This license is commonly used for video games and it allows users to download and play the game for free. Basically, a product is offered Free to Play (Freemium) and the user can decide whether to pay money (Premium) for additional features, services, or virtual or physical goods that expand the functionality of the game. In some cases, ads may be shown to the users.

        -

        Demo programs have a limited functionality for free, but charge for an advanced set of features or for the removal of advertisements from the program's interfaces. In some cases, all the functionality is disabled until the license is purchased. Demos are usually not time-limited (like Trial software) but the functionality is limited.

        -

This software is no longer available for download. This could be due to the program being discontinued, having a security issue, or other reasons.

        -

anyconnect-dart-win-x.x.xxxx-k9.msi - Windows
anyconnect-macosx-i386-x.x.xxxxx-k9.dmg - macOS
anyconnect-predeploy-linux-64-x.x.xxxxx-k9.tar.gz - Linux

        -

OIT offers Sophos Antivirus to all University of Idaho-owned machines only. For personal machines, please see the Sophos Home download link above. To install on a UI Windows machine, open Software Center on your computer, look for the Sophos antivirus package, and install it. A step-by-step guide can be found at: =878

        -

        OIT offers Sophos Antivirus to all University of Idaho-owned machines only. For personal machines, please see the Sophos Home download link above. To install on a UI Mac machine, connect to the VPN, open Jamf on your computer, look for the Sophos antivirus package, and install it.

        -

        -

        OIT uses the Cisco AnyConnect VPN Client to allow secure, encrypted network connectivity directly to the university network. Before you can download the VPN client, please contact your TSP, Sysad or the Student Technology Center stating your reason for requesting VPN access.

        -

You can also download the AnyConnect client through our FTP site. Choose your operating system and click to download the installer. We recommend using Google Chrome or Firefox for downloading the installer.

        -

        Cisco AnyConnect Secure Mobility Client 4.10.06079 for Mac could be downloaded from the developer's website when we last checked. We cannot confirm if there is a free download of this app available. The actual developer of this free software for Mac is Cisco.

        -

        The software relates to System Tools. You can install this free app on Mac OS X 10.6 or later. Please check the Mac app with an antivirus before launch as it is downloaded from the developer's website, and we cannot ensure that it is safe. The most popular versions of Cisco AnyConnect Secure Mobility Client for Mac are 3.1 and 3.0.

        -

Using the profile editor: The VPN profile editor can be downloaded from the AnyConnect Settings page on the dashboard or on Cisco.com. The profile editor only runs on Windows operating systems. The screenshot below shows a configured server on the Server List Entry option.

        -

Summary: In this article, we help you learn how to fully uninstall Cisco AnyConnect Secure Mobility Client on Mac with the best Mac App Uninstaller software, Omni Remover. Make sure you have downloaded the latest version here before continuing.

        -

Cisco AnyConnect for Mac is the best option for your network security. It is developed by Cisco Systems Corporation. It is an effective web-based VPN available for Microsoft Windows 10, 8, 7, Linux, Solaris UltraSPARC, and Mac OS X 10.4 and 10.5. The latest version of Cisco AnyConnect for Mac also lets you access your network anytime and anywhere in the world. Cisco AnyConnect Secure Mobility Client offers end-to-end security, network availability, usability, and streamlined access to users.

        -

        Enterprise networks are becoming more complex every day. More people are accessing your network from different devices from anywhere in the world. This creates more security vulnerabilities for your network. You can secure your network with effective security management. Cisco Anyconnect download is available to secure your network with ease.

        -

The Cisco AnyConnect Secure Mobility Client download for Mac provides security so you can see your network anytime, anywhere, get a holistic view of user and device behavior, and get best-in-class threat protection. The Cisco AnyConnect client package has a minimalistic interface and requires only 28 MB of storage space.

        -

Cisco AnyConnect download for Mac delivers users access, visibility, security, and a hassle-free user experience, all from a best-in-class security solution provider, i.e. Cisco. Thousands of companies worldwide are making the Cisco AnyConnect VPN client an integral part of their security strategy.

        -

After this, it will ask you for the username and password for the VPN server. Enter the username and password and click Connect. Now you will connect successfully and can use your enterprise network. Make sure to disable your antivirus software before the Cisco AnyConnect installation for Mac, since Cisco AnyConnect makes changes to the network adapter. If you have any issue with the Cisco AnyConnect download link, please leave a comment and we will help you.
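If you want to script the connection instead of clicking through the GUI, here is a minimal Python sketch. The CLI path and the host are assumptions: AnyConnect normally installs a vpn command-line client alongside the GUI, and it prompts for the username and password itself.

import subprocess

# Assumed default location of the AnyConnect command-line client on macOS;
# adjust if your installation lives elsewhere.
VPN_CLI = "/opt/cisco/anyconnect/bin/vpn"

# Hypothetical gateway; replace with your organisation's VPN server.
VPN_HOST = "vpn.example.com"

# Start an interactive connection; AnyConnect prompts for the
# username and password on the terminal.
subprocess.run([VPN_CLI, "connect", VPN_HOST], check=True)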

        -

        Download the latest version of Cisco AnyConnect for Mac by clicking on the download button given below and start using Cisco AnyConnect Secure Mobility Client. Cisco AnyConnect Download is also available for Microsoft Windows operating system.

        -

        Unleash the benefits of a remote workforce without sacrificing the security of your corporate network. We provide a variety of VPN clients to fit the needs of every SonicWall appliance or virtual appliance.
Find and download the most up-to-date version of the VPN client you need below to provide your employees with safe access to the resources they need.

        -

        Click here to download the Company Portal for Windows. This application can be used to download and install software like eduroam wireless, Microsoft Office, and Matlab onto your Windows laptop or desktop.

        -

        Visit the Software Center (PC) or Self Service (Mac) for a complete listing of software available for download on university owned machines. If the software needed isn't listed or if you have a Mac please enter a request at ITOneStop.

        -

I've had this problem for some time and none of the suggestions worked. What did work for me was changing the VPN profile (your sysadmin will need to do this for you, as it's a server-side profile that gets downloaded when you connect).

        -

The setting that made the difference was CertificateStoreMac; the default seems to be All, which causes AnyConnect to try to look in the system keychain. If you change this to Login, it'll stop doing that and stop these login prompts. Your certificates for the server should be installed in the login keychain, as that's what happens with current AnyConnect versions when you go through VPN enrolment and download the certs using the OTP creds.
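For illustration, here's a minimal Python sketch of that change. The file name is hypothetical, and since the profile is server-side, the real fix is to edit it on the server so clients re-download it.

import xml.etree.ElementTree as ET

# Hypothetical local copy of the AnyConnect client profile.
PROFILE = "AnyConnectProfile.xml"

tree = ET.parse(PROFILE)

# Profile elements are namespaced, so match on the local tag name.
for elem in tree.getroot().iter():
    if elem.tag.split("}")[-1] == "CertificateStoreMac":
        elem.text = "Login"  # default "All" makes AnyConnect probe the system keychain

tree.write(PROFILE)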

        -

        You can connect to Duke's network by installing the Cisco AnyConnect VPN software program onto your computer. Visit the OIT Software site to download the VPN client for your computer while you are on campus or before you travel. Or you can visit to automatically install the appropriate version of VPN software onto your computer.

        -

        For additional assistance contact the IT Services Technical Support Center via phone at (907) 786-4646, toll-free at (877) 633-3888, email us at uaa.techsupport@alaska.edu, or you can use the Seawolf Tech Portal to submit a support ticket.

        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Cast Wysiwyg R36 Cracked 64 Bit.md b/spaces/rorallitri/biomedical-language-models/logs/Cast Wysiwyg R36 Cracked 64 Bit.md deleted file mode 100644 index 3e45d9fd16e638d81cc3d0a5e89ed77304d17a2b..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Cast Wysiwyg R36 Cracked 64 Bit.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Cast Wysiwyg R36 Cracked 64 bit


        DOWNLOAD ✪✪✪ https://tinurll.com/2uzlr1



- -Captain America: Civil War - Special Edition Keygen Autocad 2008 64 Bit Tam Indir G3. Cast Wysiwyg R36 Cracked 64 bit.
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Novel Terjemahan Memoirs Of A Geisha Book A Journey from Poverty to Glamour in the World of Geisha.md b/spaces/rorallitri/biomedical-language-models/logs/Download Novel Terjemahan Memoirs Of A Geisha Book A Journey from Poverty to Glamour in the World of Geisha.md deleted file mode 100644 index 050f44c137a92c3b037951e716e7c63925669aac..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download Novel Terjemahan Memoirs Of A Geisha Book A Journey from Poverty to Glamour in the World of Geisha.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Download Novel Terjemahan Memoirs Of A Geisha Book


        Download File ○○○ https://tinurll.com/2uzlJE



        -
-
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/From Hollywood to Bollywood The Journey of a Struggling Actor in Bollywood Calling In Hindi Torrent Download 720p.md b/spaces/rorallitri/biomedical-language-models/logs/From Hollywood to Bollywood The Journey of a Struggling Actor in Bollywood Calling In Hindi Torrent Download 720p.md deleted file mode 100644 index 9e89e08a29098136a133fc3de4d3f45c4feab45b..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/From Hollywood to Bollywood The Journey of a Struggling Actor in Bollywood Calling In Hindi Torrent Download 720p.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Bollywood Calling In Hindi Torrent Download 720p


        Download Zip ►►►►► https://tinurll.com/2uzmzC



        -
-
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Latitude E6400 Base System Device Driver.md b/spaces/rorallitri/biomedical-language-models/logs/Latitude E6400 Base System Device Driver.md deleted file mode 100644 index eeae78f0cf402925a3d0e46399141d53f9e376eb..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Latitude E6400 Base System Device Driver.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Latitude E6400 Base System Device Driver


Download Zip: https://tinurll.com/2uzmXX



        -
-
        -
        -
        -

        diff --git a/spaces/safi842/FashionGen/config.py b/spaces/safi842/FashionGen/config.py deleted file mode 100644 index 5af238a0a4382504bd2af894d30331e1be33079a..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/config.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. - -import sys -import argparse -import json -from copy import deepcopy - -class Config: - def __init__(self, **kwargs): - self.from_args([]) # set all defaults - self.default_args = deepcopy(self.__dict__) - self.from_dict(kwargs) # override - - def __str__(self): - custom = {} - default = {} - - # Find non-default arguments - for k, v in self.__dict__.items(): - if k == 'default_args': - continue - - in_default = k in self.default_args - same_value = self.default_args.get(k) == v - - if in_default and same_value: - default[k] = v - else: - custom[k] = v - - config = { - 'custom': custom, - 'default': default - } - - return json.dumps(config, indent=4) - - def __repr__(self): - return self.__str__() - - def from_dict(self, dictionary): - for k, v in dictionary.items(): - setattr(self, k, v) - return self - - def from_args(self, args=sys.argv[1:]): - parser = argparse.ArgumentParser(description='GAN component analysis config') - parser.add_argument('--model', dest='model', type=str, default='StyleGAN', help='The network to analyze') # StyleGAN, DCGAN, ProGAN, BigGAN-XYZ - parser.add_argument('--layer', dest='layer', type=str, default='g_mapping', help='The layer to analyze') - parser.add_argument('--class', dest='output_class', type=str, default=None, help='Output class to generate (BigGAN: Imagenet, ProGAN: LSUN)') - parser.add_argument('--est', dest='estimator', type=str, default='ipca', help='The algorithm to use [pca, fbpca, cupca, spca, ica]') - parser.add_argument('--sparsity', type=float, default=1.0, help='Sparsity parameter of SPCA') - parser.add_argument('--video', dest='make_video', action='store_true', help='Generate output videos (MP4s)') - parser.add_argument('--batch', dest='batch_mode', action='store_true', help="Don't open windows, instead save results to file") - parser.add_argument('-b', dest='batch_size', type=int, default=None, help='Minibatch size, leave empty for automatic detection') - parser.add_argument('-c', dest='components', type=int, default=80, help='Number of components to keep') - parser.add_argument('-n', type=int, default=300_000, help='Number of examples to use in decomposition') - parser.add_argument('--use_w', action='store_true', help='Use W latent space (StyleGAN(2))') - parser.add_argument('--sigma', type=float, default=2.0, help='Number of stdevs to walk in visualize.py') - parser.add_argument('--inputs', type=str, default=None, help='Path to directory with named components') - parser.add_argument('--seed', type=int, default=None, help='Seed used in decomposition') - args = parser.parse_args(args) - - return self.from_dict(args.__dict__) \ No newline at end of file 
diff --git a/spaces/sanjay11/resumescan/app.py b/spaces/sanjay11/resumescan/app.py deleted file mode 100644 index a67f3f55e7ae269d37b40372954d9356e4f30046..0000000000000000000000000000000000000000 --- a/spaces/sanjay11/resumescan/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import streamlit as st -from transformers import BertForQuestionAnswering, BertTokenizer -import torch -from io import BytesIO -import PyPDF2 -import pandas as pd - -# Initialize session state to store the log of QA pairs and satisfaction responses -if 'qa_log' not in st.session_state: - st.session_state.qa_log = [] - -def extract_text_from_pdf(pdf_file): - pdf_reader = PyPDF2.PdfReader(BytesIO(pdf_file.read())) - text = "" - for page in pdf_reader.pages: - text += page.extract_text() - return text - -def answer_question(question, context, model, tokenizer): - inputs = tokenizer.encode_plus( - question, - context, - add_special_tokens=True, - return_tensors="pt", - truncation="only_second", - max_length=512, - ) - outputs = model(**inputs, return_dict=True) - answer_start_scores = outputs.start_logits - answer_end_scores = outputs.end_logits - answer_start = torch.argmax(answer_start_scores) - answer_end = torch.argmax(answer_end_scores) + 1 - input_ids = inputs["input_ids"].tolist()[0] - answer = tokenizer.convert_tokens_to_string( - tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]) - ) - return answer - -st.title("Resume Question Answering") - -uploaded_file = st.file_uploader("Upload your resume (PDF format only)", type=["pdf"]) - -if uploaded_file is not None: - resume_text = extract_text_from_pdf(uploaded_file) - st.write("Resume Text:") - st.write(resume_text) - - user_question = st.text_input("Ask a question based on your resume:") - - if user_question: - model = BertForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") - tokenizer = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") - - answer = answer_question(user_question, resume_text, model, tokenizer) - st.write("Answer:") - st.write(answer) - - # Ask for user feedback on satisfaction - satisfaction = st.radio('Are you satisfied with the answer?', ('Yes', 'No'), key='satisfaction') - - # Log the interaction - st.session_state.qa_log.append({ - 'Question': user_question, - 'Answer': answer, - 'Satisfaction': satisfaction - }) - - # Display the log in a table format - st.write("Interaction Log:") - log_df = pd.DataFrame(st.session_state.qa_log) - st.dataframe(log_df) diff --git a/spaces/sarinam/speaker-anonymization/demo_inference/demo_tts.py b/spaces/sarinam/speaker-anonymization/demo_inference/demo_tts.py deleted file mode 100644 index 285c25502ff325e278694be23b2cb83cdc631f07..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization/demo_inference/demo_tts.py +++ /dev/null @@ -1,24 +0,0 @@ -from IMSToucan.InferenceInterfaces.AnonFastSpeech2 import AnonFastSpeech2 - -TAGS_TO_MODELS = { - 'Libri100': 'trained_on_ground_truth_phonemes.pt', - 'Libri100 + finetuned': 'trained_on_asr_phoneme_outputs.pt', - 'Libri600': 'trained_on_libri600_asr_phoneme_outputs.pt', - 'Libri600 + finetuned' : 'trained_on_libri600_ground_truth_phonemes.pt' -} - - -class DemoTTS: - - def __init__(self, model_paths, model_tag, device): - self.device = device - self.model_tag = model_tag - fastspeech_path = model_paths / 'FastSpeech2_Multi' / TAGS_TO_MODELS[self.model_tag] - hifigan_path = model_paths / 'HiFiGAN_combined' / 'best.pt' - self.model = 
AnonFastSpeech2(device=self.device, path_to_hifigan_model=hifigan_path, - path_to_fastspeech_model=fastspeech_path) - - def read_text(self, transcription, speaker_embedding, text_is_phonemes=False): - self.model.default_utterance_embedding = speaker_embedding.to(self.device) - wav = self.model(text=transcription, text_is_phonemes=text_is_phonemes) - return wav diff --git a/spaces/scedlatioru/img-to-music/example/Corel.Products.LINK Keygen-CORE.rarl.md b/spaces/scedlatioru/img-to-music/example/Corel.Products.LINK Keygen-CORE.rarl.md deleted file mode 100644 index 6f599d964623c46acdfb463c8ac61958e799de3f..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Corel.Products.LINK Keygen-CORE.rarl.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Corel.Products.Keygen-CORE.rarl


        Download ---> https://gohhs.com/2uEA2q



        -
-Corel Draw X7 Keygen is the well-known software program that enables the user . ... It also allows users to download and install their Plugin Alliance products without needing to visit the . ... Downloads: Bitcoin Core is a community-driven free software project released ... Sweet Doll Juliette Sets 29 31.zip.rarl
        -
        -
        -

        diff --git a/spaces/sciling/Face_and_Plate_License_Blur/models/__init__.py b/spaces/sciling/Face_and_Plate_License_Blur/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sciling/Face_and_Plate_License_Blur/utils/plots.py b/spaces/sciling/Face_and_Plate_License_Blur/utils/plots.py deleted file mode 100644 index 0c008f16522795c778fbedeaa19447a7bebbbb3f..0000000000000000000000000000000000000000 --- a/spaces/sciling/Face_and_Plate_License_Blur/utils/plots.py +++ /dev/null @@ -1,413 +0,0 @@ -# Plotting utils - -import glob -import math -import os -import random -from copy import copy -from pathlib import Path - -import cv2 -import matplotlib -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -import seaborn as sns -import torch -import yaml -from PIL import Image, ImageDraw -from scipy.signal import butter, filtfilt - -from utils.general import xywh2xyxy, xyxy2xywh -from utils.metrics import fitness - -# Settings -matplotlib.rc('font', **{'size': 11}) -matplotlib.use('Agg') # for writing to files only - - -def color_list(): - # Return first 10 plt colors as (r,g,b) https://stackoverflow.com/questions/51350872/python-from-color-name-to-rgb - def hex2rgb(h): - return tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4)) - - return [hex2rgb(h) for h in plt.rcParams['axes.prop_cycle'].by_key()['color']] - - -def hist2d(x, y, n=100): - # 2d histogram used in labels.png and evolve.png - xedges, yedges = np.linspace(x.min(), x.max(), n), np.linspace(y.min(), y.max(), n) - hist, xedges, yedges = np.histogram2d(x, y, (xedges, yedges)) - xidx = np.clip(np.digitize(x, xedges) - 1, 0, hist.shape[0] - 1) - yidx = np.clip(np.digitize(y, yedges) - 1, 0, hist.shape[1] - 1) - return np.log(hist[xidx, yidx]) - - -def butter_lowpass_filtfilt(data, cutoff=1500, fs=50000, order=5): - # https://stackoverflow.com/questions/28536191/how-to-filter-smooth-with-scipy-numpy - def butter_lowpass(cutoff, fs, order): - nyq = 0.5 * fs - normal_cutoff = cutoff / nyq - return butter(order, normal_cutoff, btype='low', analog=False) - - b, a = butter_lowpass(cutoff, fs, order=order) - return filtfilt(b, a, data) # forward-backward filter - - -def plot_one_box(x, img, color=None, label=None, line_thickness=None): - # Plots one bounding box on image img - tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - color = color or [random.randint(0, 255) for _ in range(3)] - c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3])) - cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - if label: - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - -def plot_wh_methods(): # from utils.plots import *; plot_wh_methods() - # Compares the two methods for width-height anchor multiplication - # https://github.com/ultralytics/yolov3/issues/168 - x = np.arange(-4.0, 4.0, .1) - ya = np.exp(x) - yb = torch.sigmoid(torch.from_numpy(x)).numpy() * 2 - - fig = plt.figure(figsize=(6, 3), tight_layout=True) - plt.plot(x, ya, '.-', label='YOLOv3') - plt.plot(x, yb ** 2, '.-', label='YOLOv5 ^2') - plt.plot(x, yb ** 1.6, '.-', label='YOLOv5 ^1.6') - plt.xlim(left=-4, right=4) - 
plt.ylim(bottom=0, top=6) - plt.xlabel('input') - plt.ylabel('output') - plt.grid() - plt.legend() - fig.savefig('comparison.png', dpi=200) - - -def output_to_target(output): - # Convert model output to target format [batch_id, class_id, x, y, w, h, conf] - targets = [] - for i, o in enumerate(output): - for *box, conf, cls in o.cpu().numpy(): - targets.append([i, cls, *list(*xyxy2xywh(np.array(box)[None])), conf]) - return np.array(targets) - - -def plot_images(images, targets, paths=None, fname='images.jpg', names=None, max_size=640, max_subplots=16): - # Plot image grid with labels - - if isinstance(images, torch.Tensor): - images = images.cpu().float().numpy() - if isinstance(targets, torch.Tensor): - targets = targets.cpu().numpy() - - # un-normalise - if np.max(images[0]) <= 1: - images *= 255 - - tl = 3 # line thickness - tf = max(tl - 1, 1) # font thickness - bs, _, h, w = images.shape # batch size, _, height, width - bs = min(bs, max_subplots) # limit plot images - ns = np.ceil(bs ** 0.5) # number of subplots (square) - - # Check if we should resize - scale_factor = max_size / max(h, w) - if scale_factor < 1: - h = math.ceil(scale_factor * h) - w = math.ceil(scale_factor * w) - - # colors = color_list() # list of colors - mosaic = np.full((int(ns * h), int(ns * w), 3), 255, dtype=np.uint8) # init - for i, img in enumerate(images): - if i == max_subplots: # if last batch has fewer images than we expect - break - - block_x = int(w * (i // ns)) - block_y = int(h * (i % ns)) - - img = img.transpose(1, 2, 0) - if scale_factor < 1: - img = cv2.resize(img, (w, h)) - - mosaic[block_y:block_y + h, block_x:block_x + w, :] = img - if len(targets) > 0: - image_targets = targets[targets[:, 0] == i] - boxes = xywh2xyxy(image_targets[:, 2:6]).T - classes = image_targets[:, 1].astype('int') - labels = image_targets.shape[1] == 6 # labels if no conf column - conf = None if labels else image_targets[:, 6] # check for confidence presence (label vs pred) - - if boxes.shape[1]: - if boxes.max() <= 1.01: # if normalized with tolerance 0.01 - boxes[[0, 2]] *= w # scale to pixels - boxes[[1, 3]] *= h - elif scale_factor < 1: # absolute coords need scale if image scales - boxes *= scale_factor - boxes[[0, 2]] += block_x - boxes[[1, 3]] += block_y - for j, box in enumerate(boxes.T): - cls = int(classes[j]) - # color = colors[cls % len(colors)] - cls = names[cls] if names else cls - if labels or conf[j] > 0.25: # 0.25 conf thresh - label = '%s' % cls if labels else '%s %.1f' % (cls, conf[j]) - plot_one_box(box, mosaic, label=label, color=None, line_thickness=tl) - - # Draw image filename labels - if paths: - label = Path(paths[i]).name[:40] # trim to 40 char - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - cv2.putText(mosaic, label, (block_x + 5, block_y + t_size[1] + 5), 0, tl / 3, [220, 220, 220], thickness=tf, - lineType=cv2.LINE_AA) - - # Image border - cv2.rectangle(mosaic, (block_x, block_y), (block_x + w, block_y + h), (255, 255, 255), thickness=3) - - if fname: - r = min(1280. 
/ max(h, w) / ns, 1.0) # ratio to limit image size - mosaic = cv2.resize(mosaic, (int(ns * w * r), int(ns * h * r)), interpolation=cv2.INTER_AREA) - # cv2.imwrite(fname, cv2.cvtColor(mosaic, cv2.COLOR_BGR2RGB)) # cv2 save - Image.fromarray(mosaic).save(fname) # PIL save - return mosaic - - -def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''): - # Plot LR simulating training for full epochs - optimizer, scheduler = copy(optimizer), copy(scheduler) # do not modify originals - y = [] - for _ in range(epochs): - scheduler.step() - y.append(optimizer.param_groups[0]['lr']) - plt.plot(y, '.-', label='LR') - plt.xlabel('epoch') - plt.ylabel('LR') - plt.grid() - plt.xlim(0, epochs) - plt.ylim(0) - plt.savefig(Path(save_dir) / 'LR.png', dpi=200) - plt.close() - - -def plot_test_txt(): # from utils.plots import *; plot_test() - # Plot test.txt histograms - x = np.loadtxt('test.txt', dtype=np.float32) - box = xyxy2xywh(x[:, :4]) - cx, cy = box[:, 0], box[:, 1] - - fig, ax = plt.subplots(1, 1, figsize=(6, 6), tight_layout=True) - ax.hist2d(cx, cy, bins=600, cmax=10, cmin=0) - ax.set_aspect('equal') - plt.savefig('hist2d.png', dpi=300) - - fig, ax = plt.subplots(1, 2, figsize=(12, 6), tight_layout=True) - ax[0].hist(cx, bins=600) - ax[1].hist(cy, bins=600) - plt.savefig('hist1d.png', dpi=200) - - -def plot_targets_txt(): # from utils.plots import *; plot_targets_txt() - # Plot targets.txt histograms - x = np.loadtxt('targets.txt', dtype=np.float32).T - s = ['x targets', 'y targets', 'width targets', 'height targets'] - fig, ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True) - ax = ax.ravel() - for i in range(4): - ax[i].hist(x[i], bins=100, label='%.3g +/- %.3g' % (x[i].mean(), x[i].std())) - ax[i].legend() - ax[i].set_title(s[i]) - plt.savefig('targets.jpg', dpi=200) - - -def plot_study_txt(path='study/', x=None): # from utils.plots import *; plot_study_txt() - # Plot study.txt generated by test.py - fig, ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True) - ax = ax.ravel() - - fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True) - for f in [Path(path) / f'study_coco_{x}.txt' for x in ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']]: - y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T - x = np.arange(y.shape[1]) if x is None else np.array(x) - s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)'] - for i in range(7): - ax[i].plot(x, y[i], '.-', linewidth=2, markersize=8) - ax[i].set_title(s[i]) - - j = y[3].argmax() + 1 - ax2.plot(y[6, :j], y[3, :j] * 1E2, '.-', linewidth=2, markersize=8, - label=f.stem.replace('study_coco_', '').replace('yolo', 'YOLO')) - - ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5], - 'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet') - - ax2.grid() - ax2.set_yticks(np.arange(30, 60, 5)) - ax2.set_xlim(0, 30) - ax2.set_ylim(29, 51) - ax2.set_xlabel('GPU Speed (ms/img)') - ax2.set_ylabel('COCO AP val') - ax2.legend(loc='lower right') - plt.savefig('test_study.png', dpi=300) - - -def plot_labels(labels, save_dir=Path(''), loggers=None): - # plot dataset labels - print('Plotting labels... 
') - c, b = labels[:, 0], labels[:, 1:5].transpose() # classes, boxes - nc = int(c.max() + 1) # number of classes - colors = color_list() - x = pd.DataFrame(b.transpose(), columns=['x', 'y', 'width', 'height']) - - # seaborn correlogram - sns.pairplot(x, corner=True, diag_kind='auto', kind='hist', diag_kws=dict(bins=50), plot_kws=dict(pmax=0.9)) - plt.savefig(save_dir / 'labels_correlogram.jpg', dpi=200) - plt.close() - - # matplotlib labels - matplotlib.use('svg') # faster - ax = plt.subplots(2, 2, figsize=(8, 8), tight_layout=True)[1].ravel() - ax[0].hist(c, bins=np.linspace(0, nc, nc + 1) - 0.5, rwidth=0.8) - ax[0].set_xlabel('classes') - sns.histplot(x, x='x', y='y', ax=ax[2], bins=50, pmax=0.9) - sns.histplot(x, x='width', y='height', ax=ax[3], bins=50, pmax=0.9) - - # rectangles - labels[:, 1:3] = 0.5 # center - labels[:, 1:] = xywh2xyxy(labels[:, 1:]) * 2000 - img = Image.fromarray(np.ones((2000, 2000, 3), dtype=np.uint8) * 255) - # for cls, *box in labels[:1000]: - # ImageDraw.Draw(img).rectangle(box, width=1, outline=colors[int(cls) % 10]) # plot - ax[1].imshow(img) - ax[1].axis('off') - - for a in [0, 1, 2, 3]: - for s in ['top', 'right', 'left', 'bottom']: - ax[a].spines[s].set_visible(False) - - plt.savefig(save_dir / 'labels.jpg', dpi=200) - matplotlib.use('Agg') - plt.close() - - # loggers - for k, v in loggers.items() or {}: - if k == 'wandb' and v: - v.log({"Labels": [v.Image(str(x), caption=x.name) for x in save_dir.glob('*labels*.jpg')]}) - - -def plot_evolution(yaml_file='data/hyp.finetune.yaml'): # from utils.plots import *; plot_evolution() - # Plot hyperparameter evolution results in evolve.txt - with open(yaml_file) as f: - hyp = yaml.load(f, Loader=yaml.SafeLoader) - x = np.loadtxt('evolve.txt', ndmin=2) - f = fitness(x) - # weights = (f - f.min()) ** 2 # for weighted results - plt.figure(figsize=(10, 12), tight_layout=True) - matplotlib.rc('font', **{'size': 8}) - for i, (k, v) in enumerate(hyp.items()): - y = x[:, i + 7] - # mu = (y * weights).sum() / weights.sum() # best weighted result - mu = y[f.argmax()] # best single result - plt.subplot(6, 5, i + 1) - plt.scatter(y, f, c=hist2d(y, f, 20), cmap='viridis', alpha=.8, edgecolors='none') - plt.plot(mu, f.max(), 'k+', markersize=15) - plt.title('%s = %.3g' % (k, mu), fontdict={'size': 9}) # limit to 40 characters - if i % 5 != 0: - plt.yticks([]) - print('%15s: %.3g' % (k, mu)) - plt.savefig('evolve.png', dpi=200) - print('\nPlot saved as evolve.png') - - -def profile_idetection(start=0, stop=0, labels=(), save_dir=''): - # Plot iDetection '*.txt' per-image logs. 
from utils.plots import *; profile_idetection() - ax = plt.subplots(2, 4, figsize=(12, 6), tight_layout=True)[1].ravel() - s = ['Images', 'Free Storage (GB)', 'RAM Usage (GB)', 'Battery', 'dt_raw (ms)', 'dt_smooth (ms)', 'real-world FPS'] - files = list(Path(save_dir).glob('frames*.txt')) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, ndmin=2).T[:, 90:-30] # clip first and last rows - n = results.shape[1] # number of rows - x = np.arange(start, min(stop, n) if stop else n) - results = results[:, x] - t = (results[0] - results[0].min()) # set t0=0s - results[0] = x - for i, a in enumerate(ax): - if i < len(results): - label = labels[fi] if len(labels) else f.stem.replace('frames_', '') - a.plot(t, results[i], marker='.', label=label, linewidth=1, markersize=5) - a.set_title(s[i]) - a.set_xlabel('time (s)') - # if fi == len(files) - 1: - # a.set_ylim(bottom=0) - for side in ['top', 'right']: - a.spines[side].set_visible(False) - else: - a.remove() - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - plt.savefig(Path(save_dir) / 'idetection_profile.png', dpi=200) - - -def plot_results_overlay(start=0, stop=0): # from utils.plots import *; plot_results_overlay() - # Plot training 'results*.txt', overlaying train and val losses - s = ['train', 'train', 'train', 'Precision', 'mAP@0.5', 'val', 'val', 'val', 'Recall', 'mAP@0.5:0.95'] # legends - t = ['Box', 'Objectness', 'Classification', 'P-R', 'mAP-F1'] # titles - for f in sorted(glob.glob('results*.txt') + glob.glob('../../Downloads/results*.txt')): - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - fig, ax = plt.subplots(1, 5, figsize=(14, 3.5), tight_layout=True) - ax = ax.ravel() - for i in range(5): - for j in [i, i + 5]: - y = results[j, x] - ax[i].plot(x, y, marker='.', label=s[j]) - # y_smooth = butter_lowpass_filtfilt(y) - # ax[i].plot(x, np.gradient(y_smooth), marker='.', label=s[j]) - - ax[i].set_title(t[i]) - ax[i].legend() - ax[i].set_ylabel(f) if i == 0 else None # add filename - fig.savefig(f.replace('.txt', '.png'), dpi=200) - - -def plot_results(start=0, stop=0, bucket='', id=(), labels=(), save_dir=''): - # Plot training 'results*.txt'. from utils.plots import *; plot_results(save_dir='runs/train/exp') - fig, ax = plt.subplots(2, 5, figsize=(12, 6), tight_layout=True) - ax = ax.ravel() - s = ['Box', 'Objectness', 'Classification', 'Precision', 'Recall', - 'val Box', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95'] - if bucket: - # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id] - files = ['results%g.txt' % x for x in id] - c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id) - os.system(c) - else: - files = list(Path(save_dir).glob('results*.txt')) - assert len(files), 'No results.txt files found in %s, nothing to plot.' 
% os.path.abspath(save_dir) - for fi, f in enumerate(files): - try: - results = np.loadtxt(f, usecols=[2, 3, 4, 8, 9, 12, 13, 14, 10, 11], ndmin=2).T - n = results.shape[1] # number of rows - x = range(start, min(stop, n) if stop else n) - for i in range(10): - y = results[i, x] - if i in [0, 1, 2, 5, 6, 7]: - y[y == 0] = np.nan # don't show zero loss values - # y /= y[0] # normalize - label = labels[fi] if len(labels) else f.stem - ax[i].plot(x, y, marker='.', label=label, linewidth=2, markersize=8) - ax[i].set_title(s[i]) - # if i in [5, 6, 7]: # share train and val loss y axes - # ax[i].get_shared_y_axes().join(ax[i], ax[i - 5]) - except Exception as e: - print('Warning: Plotting error for %s; %s' % (f, e)) - - ax[1].legend() - fig.savefig(Path(save_dir) / 'results.png', dpi=200) diff --git a/spaces/segments-tobias/conex/espnet2/asr/preencoder/__init__.py b/spaces/segments-tobias/conex/espnet2/asr/preencoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/segments-tobias/conex/espnet2/layers/sinc_conv.py b/spaces/segments-tobias/conex/espnet2/layers/sinc_conv.py deleted file mode 100644 index 33df97fbcdf856b26c6f649fc01e52df488522b6..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/layers/sinc_conv.py +++ /dev/null @@ -1,273 +0,0 @@ -#!/usr/bin/env python3 -# 2020, Technische Universität München; Ludwig Kürzinger -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Sinc convolutions.""" -import math -import torch -from typeguard import check_argument_types -from typing import Union - - -class LogCompression(torch.nn.Module): - """Log Compression Activation. - - Activation function `log(abs(x) + 1)`. - """ - - def __init__(self): - """Initialize.""" - super().__init__() - - def forward(self, x: torch.Tensor) -> torch.Tensor: - """Forward. - - Applies the Log Compression function elementwise on tensor x. - """ - return torch.log(torch.abs(x) + 1) - - -class SincConv(torch.nn.Module): - """Sinc Convolution. - - This module performs a convolution using Sinc filters in time domain as kernel. - Sinc filters function as band passes in spectral domain. - The filtering is done as a convolution in time domain, and no transformation - to spectral domain is necessary. - - This implementation of the Sinc convolution is heavily inspired - by Ravanelli et al. https://github.com/mravanelli/SincNet, - and adapted for the ESpnet toolkit. - Combine Sinc convolutions with a log compression activation function, as in: - https://arxiv.org/abs/2010.07597 - - Notes: - Currently, the same filters are applied to all input channels. - The windowing function is applied on the kernel to obtained a smoother filter, - and not on the input values, which is different to traditional ASR. - """ - - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int, - stride: int = 1, - padding: int = 0, - dilation: int = 1, - window_func: str = "hamming", - scale_type: str = "mel", - fs: Union[int, float] = 16000, - ): - """Initialize Sinc convolutions. - - Args: - in_channels: Number of input channels. - out_channels: Number of output channels. - kernel_size: Sinc filter kernel size (needs to be an odd number). - stride: See torch.nn.functional.conv1d. - padding: See torch.nn.functional.conv1d. - dilation: See torch.nn.functional.conv1d. - window_func: Window function on the filter, one of ["hamming", "none"]. 
- fs (str, int, float): Sample rate of the input data - """ - assert check_argument_types() - super().__init__() - window_funcs = { - "none": self.none_window, - "hamming": self.hamming_window, - } - if window_func not in window_funcs: - raise NotImplementedError( - f"Window function has to be one of {list(window_funcs.keys())}", - ) - self.window_func = window_funcs[window_func] - scale_choices = { - "mel": MelScale, - "bark": BarkScale, - } - if scale_type not in scale_choices: - raise NotImplementedError( - f"Scale has to be one of {list(scale_choices.keys())}", - ) - self.scale = scale_choices[scale_type] - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.padding = padding - self.dilation = dilation - self.stride = stride - self.fs = float(fs) - if self.kernel_size % 2 == 0: - raise ValueError("SincConv: Kernel size must be odd.") - self.f = None - N = self.kernel_size // 2 - self._x = 2 * math.pi * torch.linspace(1, N, N) - self._window = self.window_func(torch.linspace(1, N, N)) - # init may get overwritten by E2E network, - # but is still required to calculate output dim - self.init_filters() - - @staticmethod - def sinc(x: torch.Tensor) -> torch.Tensor: - """Sinc function.""" - x2 = x + 1e-6 - return torch.sin(x2) / x2 - - @staticmethod - def none_window(x: torch.Tensor) -> torch.Tensor: - """Identity-like windowing function.""" - return torch.ones_like(x) - - @staticmethod - def hamming_window(x: torch.Tensor) -> torch.Tensor: - """Hamming Windowing function.""" - L = 2 * x.size(0) + 1 - x = x.flip(0) - return 0.54 - 0.46 * torch.cos(2.0 * math.pi * x / L) - - def init_filters(self): - """Initialize filters with filterbank values.""" - f = self.scale.bank(self.out_channels, self.fs) - f = torch.div(f, self.fs) - self.f = torch.nn.Parameter(f, requires_grad=True) - - def _create_filters(self, device: str): - """Calculate coefficients. - - This function (re-)calculates the filter convolutions coefficients. - """ - f_mins = torch.abs(self.f[:, 0]) - f_maxs = torch.abs(self.f[:, 0]) + torch.abs(self.f[:, 1] - self.f[:, 0]) - - self._x = self._x.to(device) - self._window = self._window.to(device) - - f_mins_x = torch.matmul(f_mins.view(-1, 1), self._x.view(1, -1)) - f_maxs_x = torch.matmul(f_maxs.view(-1, 1), self._x.view(1, -1)) - - kernel = (torch.sin(f_maxs_x) - torch.sin(f_mins_x)) / (0.5 * self._x) - kernel = kernel * self._window - - kernel_left = kernel.flip(1) - kernel_center = (2 * f_maxs - 2 * f_mins).unsqueeze(1) - filters = torch.cat([kernel_left, kernel_center, kernel], dim=1) - - filters = filters.view(filters.size(0), 1, filters.size(1)) - self.sinc_filters = filters - - def forward(self, xs: torch.Tensor) -> torch.Tensor: - """Sinc convolution forward function. - - Args: - xs: Batch in form of torch.Tensor (B, C_in, D_in). - - Returns: - xs: Batch in form of torch.Tensor (B, C_out, D_out). 
- """ - self._create_filters(xs.device) - xs = torch.nn.functional.conv1d( - xs, - self.sinc_filters, - padding=self.padding, - stride=self.stride, - dilation=self.dilation, - groups=self.in_channels, - ) - return xs - - def get_odim(self, idim: int) -> int: - """Obtain the output dimension of the filter.""" - D_out = idim + 2 * self.padding - self.dilation * (self.kernel_size - 1) - 1 - D_out = (D_out // self.stride) + 1 - return D_out - - -class MelScale: - """Mel frequency scale.""" - - @staticmethod - def convert(f): - """Convert Hz to mel.""" - return 1125.0 * torch.log(torch.div(f, 700.0) + 1.0) - - @staticmethod - def invert(x): - """Convert mel to Hz.""" - return 700.0 * (torch.exp(torch.div(x, 1125.0)) - 1.0) - - @classmethod - def bank(cls, channels: int, fs: float) -> torch.Tensor: - """Obtain initialization values for the mel scale. - - Args: - channels: Number of channels. - fs: Sample rate. - - Returns: - torch.Tensor: Filter start frequencíes. - torch.Tensor: Filter stop frequencies. - """ - assert check_argument_types() - # min and max bandpass edge frequencies - min_frequency = torch.tensor(30.0) - max_frequency = torch.tensor(fs * 0.5) - frequencies = torch.linspace( - cls.convert(min_frequency), cls.convert(max_frequency), channels + 2 - ) - frequencies = cls.invert(frequencies) - f1, f2 = frequencies[:-2], frequencies[2:] - return torch.stack([f1, f2], dim=1) - - -class BarkScale: - """Bark frequency scale. - - Has wider bandwidths at lower frequencies, see: - Critical bandwidth: BARK - Zwicker and Terhardt, 1980 - """ - - @staticmethod - def convert(f): - """Convert Hz to Bark.""" - b = torch.div(f, 1000.0) - b = torch.pow(b, 2.0) * 1.4 - b = torch.pow(b + 1.0, 0.69) - return b * 75.0 + 25.0 - - @staticmethod - def invert(x): - """Convert Bark to Hz.""" - f = torch.div(x - 25.0, 75.0) - f = torch.pow(f, (1.0 / 0.69)) - f = torch.div(f - 1.0, 1.4) - f = torch.pow(f, 0.5) - return f * 1000.0 - - @classmethod - def bank(cls, channels: int, fs: float) -> torch.Tensor: - """Obtain initialization values for the Bark scale. - - Args: - channels: Number of channels. - fs: Sample rate. - - Returns: - torch.Tensor: Filter start frequencíes. - torch.Tensor: Filter stop frequencíes. - """ - assert check_argument_types() - # min and max BARK center frequencies by approximation - min_center_frequency = torch.tensor(70.0) - max_center_frequency = torch.tensor(fs * 0.45) - center_frequencies = torch.linspace( - cls.convert(min_center_frequency), - cls.convert(max_center_frequency), - channels, - ) - center_frequencies = cls.invert(center_frequencies) - - f1 = center_frequencies - torch.div(cls.convert(center_frequencies), 2) - f2 = center_frequencies + torch.div(cls.convert(center_frequencies), 2) - return torch.stack([f1, f2], dim=1) diff --git a/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/__init__.py b/spaces/shi-labs/Matting-Anything/GroundingDINO/groundingdino/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shikunl/prismer/prismer/dataset/utils.py b/spaces/shikunl/prismer/prismer/dataset/utils.py deleted file mode 100644 index d565cbaf2f510081615148189f71c96a7bf6445e..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/dataset/utils.py +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright (c) 2023, NVIDIA Corporation & Affiliates. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. 
-# To view a copy of this license, visit -# https://github.com/NVlabs/prismer/blob/main/LICENSE - -import os -import pathlib -import re -import json -import torch -import PIL.Image as Image -import numpy as np -import torchvision.transforms as transforms -import torchvision.transforms.functional as transforms_f -from dataset.randaugment import RandAugment - -cur_dir = pathlib.Path(__file__).parent - -COCO_FEATURES = torch.load(cur_dir / 'coco_features.pt')['features'] -ADE_FEATURES = torch.load(cur_dir / 'ade_features.pt')['features'] -DETECTION_FEATURES = torch.load(cur_dir / 'detection_features.pt')['features'] -BACKGROUND_FEATURES = torch.load(cur_dir / 'background_features.pt') - - -class Transform: - def __init__(self, resize_resolution=384, scale_size=[0.5, 1.0], train=False): - self.resize_size = [resize_resolution, resize_resolution] - self.scale_size = scale_size - self.train = train - self.randaugment = RandAugment(2, 5) - - def __call__(self, image, labels): - if self.train: - # random resize crop - i, j, h, w = transforms.RandomResizedCrop.get_params(img=image, scale=self.scale_size, ratio=[3. / 4, 4. / 3]) - image = transforms_f.crop(image, i, j, h, w) - if labels is not None: - for exp in labels: - labels[exp] = transforms_f.crop(labels[exp], i, j, h, w) - - # resize to the defined shape - image = transforms_f.resize(image, self.resize_size, transforms_f.InterpolationMode.BICUBIC) - if labels is not None: - for exp in labels: - labels[exp] = transforms_f.resize(labels[exp], [224, 224], transforms_f.InterpolationMode.NEAREST) - - if self.train: - # random flipping - if torch.rand(1) > 0.5: - image = transforms_f.hflip(image) - if labels is not None: - for exp in labels: - labels[exp] = transforms_f.hflip(labels[exp]) - - # random augmentation - image, labels = self.randaugment(image, labels) - - # transform to tensor - image = transforms_f.to_tensor(image) - if labels is not None: - for exp in labels: - if exp in ['depth', 'normal', 'edge']: - labels[exp] = transforms_f.to_tensor(labels[exp]) - else: - labels[exp] = (transforms_f.to_tensor(labels[exp]) * 255).long() - - # apply normalisation: - image = transforms_f.normalize(image, mean=[0.48145466, 0.4578275, 0.40821073], - std=[0.26862954, 0.26130258, 0.27577711]) - if labels is not None: - return {'rgb': image, **labels} - else: - return{'rgb': image} - - -def get_expert_labels(data_path, label_path, image_path, dataset, experts): - image_full_path = os.path.join(data_path, dataset, image_path) - image = Image.open(image_full_path).convert('RGB') - if experts != 'none': - labels = {} - labels_info = {} - ps = image_path.split('.')[-1] - for exp in experts: - if exp in ['seg_coco', 'seg_ade', 'edge', 'depth']: - label_full_path = os.path.join(label_path, exp, dataset, image_path.replace(f'.{ps}', '.png')) - if os.stat(label_full_path).st_size > 0: - labels[exp] = Image.open(label_full_path).convert('L') - else: - labels[exp] = Image.fromarray(np.zeros([image.size[1], image.size[0]])).convert('L') - elif exp == 'normal': - label_full_path = os.path.join(label_path, exp, dataset, image_path.replace(f'.{ps}', '.png')) - if os.stat(label_full_path).st_size > 0: - labels[exp] = Image.open(label_full_path).convert('RGB') - else: - labels[exp] = Image.fromarray(np.zeros([image.size[1], image.size[0], 3])).convert('RGB') - elif exp == 'obj_detection': - label_full_path = os.path.join(label_path, exp, dataset, image_path.replace(f'.{ps}', '.png')) - if os.stat(label_full_path).st_size > 0: - labels[exp] = 
Image.open(label_full_path).convert('L') - else: - labels[exp] = Image.fromarray(255 * np.ones([image.size[1], image.size[0]])).convert('L') - label_info_path = os.path.join(label_path, exp, dataset, image_path.replace(f'.{ps}', '.json')) - labels_info[exp] = json.load(open(label_info_path, 'r')) - elif exp == 'ocr_detection': - label_full_path = os.path.join(label_path, exp, dataset, image_path.replace(f'.{ps}', '.png')) - label_info_path = os.path.join(label_path, exp, dataset, image_path.replace(f'.{ps}', '.pt')) - if os.path.exists(label_info_path): - labels[exp] = Image.open(label_full_path).convert('L') - labels_info[exp] = torch.load(label_info_path) - else: - labels[exp] = Image.fromarray(255 * np.ones([image.size[1], image.size[0]])).convert('L') - labels_info[exp] = None - - else: - labels, labels_info = None, None - return image, labels, labels_info - - -def post_label_process(inputs, labels_info): - eps = 1e-6 - for exp in inputs: - if exp in ['depth', 'normal', 'edge']: # remap to -1 to 1 range - inputs[exp] = 2 * (inputs[exp] - inputs[exp].min()) / (inputs[exp].max() - inputs[exp].min() + eps) - 1 - inputs[exp] = inputs[exp].half() - - elif exp == 'seg_coco': # in-paint with CLIP features - text_emb = torch.empty([64, *inputs[exp].shape[1:]]) - for l in inputs[exp].unique(): - if l == 255: - text_emb[:, (inputs[exp][0] == l)] = BACKGROUND_FEATURES.unsqueeze(-1) - else: - text_emb[:, (inputs[exp][0] == l)] = COCO_FEATURES[l].unsqueeze(-1) - inputs[exp] = text_emb.half() - - elif exp == 'seg_ade': # in-paint with CLIP features - text_emb = torch.empty([64, *inputs[exp].shape[1:]]) - for l in inputs[exp].unique(): - if l == 255: - text_emb[:, (inputs[exp][0] == l)] = BACKGROUND_FEATURES.unsqueeze(-1) - else: - text_emb[:, (inputs[exp][0] == l)] = ADE_FEATURES[l].unsqueeze(-1) - inputs[exp] = text_emb.half() - - elif exp == 'obj_detection': # in-paint with CLIP features - text_emb = torch.empty([64, *inputs[exp].shape[1:]]) - label_map = labels_info[exp] - for l in inputs[exp].unique(): - if l == 255: - text_emb[:, (inputs[exp][0] == l)] = BACKGROUND_FEATURES.unsqueeze(-1) - else: - text_emb[:, (inputs[exp][0] == l)] = DETECTION_FEATURES[label_map[str(l.item())]].unsqueeze(-1) - inputs[exp] = {'label': text_emb.half(), 'instance': inputs[exp].half()} - - elif exp == 'ocr_detection': # in-paint with CLIP features - text_emb = torch.empty([64, *inputs[exp].shape[1:]]) - label_map = labels_info[exp] - for l in inputs[exp].unique(): - if l == 255: - text_emb[:, (inputs[exp][0] == l)] = BACKGROUND_FEATURES.unsqueeze(-1) - else: - text_emb[:, (inputs[exp][0] == l)] = label_map[l.item()]['features'].unsqueeze(-1) - inputs[exp] = text_emb.half() - return inputs - - -def pre_caption(caption, max_words=50): - caption = re.sub(r"([.!\"()*#:;~])", ' ', caption.capitalize()) # remove special characters - caption = re.sub(r"\s{2,}", ' ', caption) # remove two white spaces - - caption = caption.rstrip('\n') # remove \num_ans_per_q symbol - caption = caption.strip(' ') # remove leading and trailing white spaces - - # truncate caption to the max words - caption_words = caption.split(' ') - if len(caption_words) > max_words: - caption = ' '.join(caption_words[:max_words]) - return caption - - -def pre_question(question, max_words=50): - question = re.sub(r"([.!\"()*#:;~])", ' ', question.capitalize()) # remove special characters - question = question.strip() - - # truncate question - question_words = question.split(' ') - if len(question_words) > max_words: - question = ' 
'.join(question_words[:max_words]) - if question[-1] != '?': - question += '?' - return question - diff --git a/spaces/shnippi/Email_Generai-tor/README.md b/spaces/shnippi/Email_Generai-tor/README.md deleted file mode 100644 index 1535be99e843fd5cdf10fc727686501b8019515f..0000000000000000000000000000000000000000 --- a/spaces/shnippi/Email_Generai-tor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Email Generai-tor -emoji: 🐨 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/silk-road/ChatHaruhi/app.py b/spaces/silk-road/ChatHaruhi/app.py deleted file mode 100644 index 5c79d6a9252b1d6ab6c54da09c3c6a6745889131..0000000000000000000000000000000000000000 --- a/spaces/silk-road/ChatHaruhi/app.py +++ /dev/null @@ -1,417 +0,0 @@ -import os -# os.environ["CUDA_VISIBLE_DEVICES"] = "0" # 指定要使用的GPU设备编号 -from transformers import pipeline -import argparse -import openai -import tiktoken -import torch -from scipy.spatial.distance import cosine -from transformers import AutoModel, AutoTokenizer -from argparse import Namespace -from langchain.chat_models import ChatOpenAI -import gradio as gr -import random -import time -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - AIMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from langchain.schema import ( - AIMessage, - HumanMessage, - SystemMessage -) -from text import Text - -def download_models(): - # Import our models. The package will take care of downloading the models automatically - model_args = Namespace(do_mlm=None, pooler_type="cls", temp=0.05, mlp_only_train=False, - init_embeddings_model=None) - model = AutoModel.from_pretrained("silk-road/luotuo-bert", trust_remote_code=True, model_args=model_args) - return model - -openai.api_key = os.environ.get('OPENAI_API_KEY') - -folder_name = "Suzumiya" -current_directory = os.getcwd() -new_directory = os.path.join(current_directory, folder_name) - - -pkl_path = './pkl/texts.pkl' -text_image_pkl_path='./pkl/text_image.pkl' -dict_path = "characters/haruhi/text_image_dict.txt" -dict_text_pkl_path = './pkl/dict_text.pkl' - -image_path = "characters/haruhi/images" -model = download_models() -text = Text("characters/haruhi/texts", text_image_pkl_path=text_image_pkl_path, - dict_text_pkl_path=dict_text_pkl_path, model=model, num_steps=50, pkl_path=pkl_path, - dict_path=dict_path, image_path=image_path) - -if not os.path.exists(new_directory): - os.makedirs(new_directory) - print(f"文件夹 '{folder_name}' 创建成功!") -else: - print(f"文件夹 '{folder_name}' 已经存在。") - -enc = tiktoken.get_encoding("cl100k_base") - - -class Run: - def __init__(self, **params): - """ - * 命令行参数的接入 - * 台词folder,记录台词 - * system prompt存成txt文件,支持切换 - * 支持设定max_len_story 和max_len_history - * 支持设定save_path - * 实现一个colab脚本,可以clone转换后的项目并运行,方便其他用户体验 - """ - self.folder = params['folder'] - # self.system_prompt = params['system_prompt'] - with open(params['system_prompt'], 'r') as f: - self.system_prompt = f.read() - self.max_len_story = params['max_len_story'] - self.max_len_history = params['max_len_history'] - self.save_path = params['save_path'] - self.titles, self.title_to_text = self.read_prompt_data() - self.embeddings, self.embed_to_title = self.title_text_embedding(self.titles, self.title_to_text) - # self.embeddings, self.embed_to_title = [], [] - # 一个封装 OpenAI 接口的函数,参数为 Prompt,返回对应结果 - - def 
get_completion_from_messages(self, messages, model="gpt-3.5-turbo", temperature=0): - response = openai.ChatCompletion.create( - model=model, - messages=messages, - temperature=temperature, # 控制模型输出的随机程度 - ) - # print(str(response.choices[0].message)) - return response.choices[0].message["content"] - - def read_prompt_data(self): - """ - read prompt-data for in-context-learning - """ - titles = [] - title_to_text = {} - for file in os.listdir(self.folder): - if file.endswith('.txt'): - title_name = file[:-4] - titles.append(title_name) - - with open(os.path.join(self.folder, file), 'r') as f: - title_to_text[title_name] = f.read() - - return titles, title_to_text - - - def get_embedding(self, text): - tokenizer = AutoTokenizer.from_pretrained("silk-road/luotuo-bert") - model = download_models() - if len(text) > 512: - text = text[:512] - texts = [text] - # Tokenize the text - inputs = tokenizer(texts, padding=True, truncation=False, return_tensors="pt") - # Extract the embeddings - # Get the embeddings - with torch.no_grad(): - embeddings = model(**inputs, output_hidden_states=True, return_dict=True, sent_emb=True).pooler_output - return embeddings[0] - - def title_text_embedding(self, titles, title_to_text): - """titles-text-embeddings""" - - embeddings = [] - embed_to_title = [] - - for title in titles: - text = title_to_text[title] - - # divide text with \n\n - divided_texts = text.split('\n\n') - - for divided_text in divided_texts: - embed = self.get_embedding(divided_text) - embeddings.append(embed) - embed_to_title.append(title) - - return embeddings, embed_to_title - - def get_cosine_similarity(self, embed1, embed2): - return torch.nn.functional.cosine_similarity(embed1, embed2, dim=0) - - def retrieve_title(self, query_embed, embeddings, embed_to_title, k): - # compute cosine similarity between query_embed and embeddings - cosine_similarities = [] - for embed in embeddings: - cosine_similarities.append(self.get_cosine_similarity(query_embed, embed)) - - # sort cosine similarity - sorted_cosine_similarities = sorted(cosine_similarities, reverse=True) - - top_k_index = [] - top_k_title = [] - - for i in range(len(sorted_cosine_similarities)): - current_title = embed_to_title[cosine_similarities.index(sorted_cosine_similarities[i])] - if current_title not in top_k_title: - top_k_title.append(current_title) - top_k_index.append(cosine_similarities.index(sorted_cosine_similarities[i])) - - if len(top_k_title) == k: - break - - return top_k_title - - def organize_story_with_maxlen(self, selected_sample): - maxlen = self.max_len_story - # title_to_text, _ = self.read_prompt_data() - story = "凉宫春日的经典桥段如下:\n" - - count = 0 - - final_selected = [] - print(selected_sample) - for sample_topic in selected_sample: - # find sample_answer in dictionary - sample_story = self.title_to_text[sample_topic] - - sample_len = len(enc.encode(sample_story)) - # print(sample_topic, ' ' , sample_len) - if sample_len + count > maxlen: - break - - story += sample_story - story += '\n' - - count += sample_len - final_selected.append(sample_topic) - - return story, final_selected - - def organize_message(self, story, history_chat, history_response, new_query): - messages = [{'role': 'system', 'content': self.system_prompt}, {'role': 'user', 'content': story}] - - n = len(history_chat) - if n != len(history_response): - print('warning, unmatched history_char length, clean and start new chat') - # clean all - history_chat = [] - history_response = [] - n = 0 - - for i in range(n): - messages.append({'role': 
'user', 'content': history_chat[i]}) - messages.append({'role': 'user', 'content': history_response[i]}) - - messages.append({'role': 'user', 'content': new_query}) - - return messages - - def keep_tail(self, history_chat, history_response): - max_len = self.max_len_history - n = len(history_chat) - if n == 0: - return [], [] - - if n != len(history_response): - print('warning, unmatched history_char length, clean and start new chat') - return [], [] - - token_len = [] - for i in range(n): - chat_len = len(enc.encode(history_chat[i])) - res_len = len(enc.encode(history_response[i])) - token_len.append(chat_len + res_len) - - keep_k = 1 - count = token_len[n - 1] - - for i in range(1, n): - count += token_len[n - 1 - i] - if count > max_len: - break - keep_k += 1 - - return history_chat[-keep_k:], history_response[-keep_k:] - - def organize_message_langchain(self, story, history_chat, history_response, new_query): - # messages = [{'role':'system', 'content':SYSTEM_PROMPT}, {'role':'user', 'content':story}] - - messages = [ - SystemMessage(content=self.system_prompt), - HumanMessage(content=story) - ] - - n = len(history_chat) - if n != len(history_response): - print('warning, unmatched history_char length, clean and start new chat') - # clean all - history_chat = [] - history_response = [] - n = 0 - - for i in range(n): - messages.append(HumanMessage(content=history_chat[i])) - messages.append(AIMessage(content=history_response[i])) - - # messages.append( {'role':'user', 'content':new_query }) - messages.append(HumanMessage(content=new_query)) - - return messages - - def get_response(self, user_message, chat_history_tuple): - - history_chat = [] - history_response = [] - - if len(chat_history_tuple) > 0: - for cha, res in chat_history_tuple: - history_chat.append(cha) - history_response.append(res) - - history_chat, history_response = self.keep_tail(history_chat, history_response) - - print('history done') - - new_query = user_message - query_embed = self.get_embedding(new_query) - - # print("1") - # embeddings, embed_to_title = self.title_text_embedding(self.titles, self.title_to_text) - - print("2") - selected_sample = self.retrieve_title(query_embed, self.embeddings, self.embed_to_title, 7) - - print("3") - story, selected_sample = self.organize_story_with_maxlen(selected_sample) - - ## TODO: visualize seletected sample later - print('当前辅助sample:', selected_sample) - - messages = self.organize_message_langchain(story, history_chat, history_response, new_query) - chat = ChatOpenAI(temperature=0) - return_msg = chat(messages) - - response = return_msg.content - - return response - - def save_response(self, chat_history_tuple): - with open(f"{self.save_path}/conversation_{time.time()}.txt", "w") as file: - for cha, res in chat_history_tuple: - file.write(cha) - file.write("\n---\n") - file.write(res) - file.write("\n---\n") - - def create_gradio(self): - # from google.colab import drive - # drive.mount(drive_path) - with gr.Blocks() as demo: - gr.Markdown( - """ - ## Chat凉宫春日 ChatHaruhi - 项目地址 [https://github.com/LC1332/Chat-Haruhi-Suzumiya](https:// github.com/LC1332/Chat-Haruhi-Suzumiya) - 骆驼项目地址 [https://github.com/LC1332/Luotuo-Chinese-LLM](https:// github.com/LC1332/Luotuo-Chinese-LLM) - 此版本为图文版本,完整功能(+语音)的demo见项目 - 角色名建议输入 阿虚 或者影视剧中有的人物。或者也可以是新学生或者老师。 - """ - ) - image_input = gr.Textbox(visible=False) - # japanese_input = gr.Textbox(visible=False) - with gr.Row(): - chatbot = gr.Chatbot() - image_output = gr.Image() - role_name = gr.Textbox(label="角色名", placeholde="输入角色名") - msg = 
gr.Textbox(label="输入") - with gr.Row(): - clear = gr.Button("Clear") - sub = gr.Button("Submit") - image_button = gr.Button("给我一个图") - # japanese_output = gr.Textbox(interactive=False) - - - def respond(role_name, user_message, chat_history): - input_message = role_name + ':「' + user_message + '」' - bot_message = self.get_response(input_message, chat_history) - chat_history.append((input_message, bot_message)) - self.save_response(chat_history) - # time.sleep(1) - # jp_text = pipe(f'<-zh2ja-> {bot_message}')[0]['translation_text'] - return "" , chat_history, bot_message - - clear.click(lambda: None, None, chatbot, queue=False) - msg.submit(respond, [role_name, msg, chatbot], [msg, chatbot, image_input]) - sub.click(fn=respond, inputs=[role_name, msg, chatbot], outputs=[msg, chatbot, image_input]) - # with gr.Tab("text_to_text"): - # text_input = gr.Textbox() - # text_output = gr.Textbox() - # text_button = gr.Button('begin') - - # text_button.click(text.text_to_text, inputs=text_input, outputs=text_output) - - - - # with gr.Tab("text_to_iamge"): - # with gr.Row(): - # image_input = gr.Textbox() - # image_output = gr.Image() - # image_button = gr.Button("给我一个图") - - image_button.click(text.text_to_image, inputs=image_input, outputs=image_output) - - demo.launch(debug=True) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description="-----[Chat凉宫春日]-----") - parser.add_argument("--folder", default="characters/haruhi/texts", help="text folder") - parser.add_argument("--system_prompt", default="characters/haruhi/system_prompt.txt", help="store system_prompt") - parser.add_argument("--max_len_story", default=1500, type=int) - parser.add_argument("--max_len_history", default=1200, type=int) - # parser.add_argument("--save_path", default="/content/drive/MyDrive/GPTData/Haruhi-Lulu/") - parser.add_argument("--save_path", default=os.getcwd()+"/Suzumiya") - options = parser.parse_args() - params = { - "folder": options.folder, - "system_prompt": options.system_prompt, - "max_len_story": options.max_len_story, - "max_len_history": options.max_len_history, - "save_path": options.save_path - } - # pipe = pipeline(model="engmatic-earth/mt5-zh-ja-en-trimmed-fine-tuned-v1", device=0,max_length=120) - run = Run(**params) - run.create_gradio() - - - # history_chat = [] - # history_response = [] - # chat_timer = 5 - # new_query = '鲁鲁:你好我是新同学鲁鲁' - - # query_embed = run.get_embedding(new_query) - # titles, title_to_text = run.read_prompt_data() - # embeddings, embed_to_title = run.title_text_embedding(titles, title_to_text) - # selected_sample = run.retrieve_title(query_embed, embeddings, embed_to_title, 7) - - # print('限制长度之前:', selected_sample) - - # story, selected_sample = run.organize_story_with_maxlen(selected_sample) - - # print('当前辅助sample:', selected_sample) - - # messages = run.organize_message(story, history_chat, history_response, new_query) - - # response = run.get_completion_from_messages(messages) - - # print(response) - - # history_chat.append(new_query) - # history_response.append(response) - - # history_chat, history_response = run.keep_tail(history_chat, history_response) - # print(history_chat, history_response) diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CarX Drift Racing 2 APK 1.22.0 - The Most Advanced Drifting Racing Game for Android - Dont Miss It.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CarX Drift Racing 2 APK 1.22.0 - The Most Advanced Drifting Racing Game 
for Android - Dont Miss It.md deleted file mode 100644 index 6a52d465488089e24f475b6a9cd1a8963bb26f57..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CarX Drift Racing 2 APK 1.22.0 - The Most Advanced Drifting Racing Game for Android - Dont Miss It.md +++ /dev/null @@ -1,103 +0,0 @@ -
        -

        CarX Drift Racing 2 APK 1.22: A Free Drifting Racing Game for Android

        -

        If you love racing games, especially those that involve drifting, you might want to check out CarX Drift Racing 2. This is a free drifting racing game developed by CarX Technologies, a company that specializes in realistic car physics and graphics. In this game, you can push your car to the limits and drift around corners on various tracks and locations. You can also customize your car and character, tune your car's settings, and compete online or offline with other players or AI opponents.

        -

        In this article, we will tell you more about CarX Drift Racing 2 APK 1.22, its features, gameplay, and how to download and install it on your Android device. We will also answer some frequently asked questions about the game. So, let's get started!

        -




        -

        Features of CarX Drift Racing 2

        -

        CarX Drift Racing 2 is not just another racing game. It has many features that make it stand out from other games in the genre. Here are some of them:

        -

        Drifting Mechanics

        -

        Drifting is the main attraction of CarX Drift Racing 2. The game has realistic drifting physics and controls that let you feel the thrill of sliding your car sideways on the road. You can use the onscreen pedals and steering wheel to drift your car as you like. You can also switch between different camera angles to see your car from different perspectives.

        -

        The game rewards you for drifting with points and coins that you can use to buy new cars or upgrade your existing ones. The longer and better you drift, the more points and coins you earn. You can also perform combos by chaining multiple drifts together without losing speed or control.
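
        To make that scoring idea concrete, here is a minimal Python sketch of how a drift-combo score could add up. It is purely illustrative: the function, the point rate, and the multiplier step are our own assumptions, not the game's actual scoring code.

        def drift_score(drift_durations, base_points_per_second=100):
            """Toy combo model: every drift chained without a break raises the
            multiplier, so longer unbroken chains earn more points."""
            total = 0
            multiplier = 1.0
            for seconds in drift_durations:
                total += int(seconds * base_points_per_second * multiplier)
                multiplier += 0.5  # each chained drift bumps the combo multiplier
            return total

        # A three-drift combo (2s, 4s, 3s) out-earns the same drifts done separately.
        print(drift_score([2, 4, 3]))                     # 200 + 600 + 600 = 1400
        print(sum(drift_score([d]) for d in [2, 4, 3]))   # 200 + 400 + 300 = 900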

        -

        If you want to improve your drifting skills and techniques, you can practice in solo mode or watch replays of other players' drifts. You can also learn from tutorials and tips that the game provides.

        -

        Tracks and Locations


        -

        CarX Drift Racing 2 has a variety of tracks and locations that you can drift on. The game has over 30 tracks and 10 locations, ranging from mountain roads and city streets to industrial zones and airports. Each track and location has its own characteristics, such as weather, time of day, obstacles, and scenery. You can also see the details of the road surface, such as asphalt, sand, grass, or snow.

        -

        You can unlock new tracks and locations by completing missions, earning stars, or buying them with coins. You can also create your own tracks using the track editor and share them with other players online.

        -

        Cars and Customization

        -

        CarX Drift Racing 2 has a huge collection of cars that you can choose from. The game has over 70 cars from different brands and categories, such as sports cars, muscle cars, supercars, and more. Each car has its own specifications, such as speed, acceleration, handling, and driftability. You can also see the realistic 3D models and animations of the cars.

        -

        You can customize your car's appearance and performance to suit your style and preferences. You can change the color, paint, decals, wheels, tires, spoilers, bumpers, hoods, and more. You can also upgrade your car's engine, transmission, suspension, brakes, turbo, and more. You can use the tuning section to adjust your car's settings, such as camber angle, toe angle, tire pressure, steering angle, and more.
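
        If you think of the tuning screen as a set of bounded sliders, the sketch below models it in Python. The parameter names come from the paragraph above, but the numeric ranges are invented for illustration and are not the game's real limits.

        # Hypothetical (min, max) ranges for each tuning slider.
        TUNING_RANGES = {
            "camber_angle_deg": (-10.0, 5.0),
            "toe_angle_deg": (-3.0, 3.0),
            "tire_pressure_psi": (18.0, 40.0),
            "steering_angle_deg": (25.0, 65.0),
        }

        def apply_tuning(requested):
            """Clamp each requested value into its allowed range, as a slider would."""
            tuned = {}
            for name, (lo, hi) in TUNING_RANGES.items():
                value = requested.get(name, (lo + hi) / 2)  # default to mid-range
                tuned[name] = max(lo, min(hi, value))
            return tuned

        print(apply_tuning({"camber_angle_deg": -12, "tire_pressure_psi": 30}))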

        -


        -

        Online and Offline Modes

        -

        CarX Drift Racing 2 lets you play online with other players in virtual racing rooms. You can join or create a room with up to 16 players and compete in different modes, such as drift battles, tandem drifting, or team drifting. You can also chat with other players and see their profiles and statistics.

        -

        If you prefer to play offline, you can play with AI opponents or in solo mode. You can choose the difficulty level of the AI opponents and the number of laps. You can also compete in tournaments and championships that have different rules and rewards.

        -

        How to Download and Install CarX Drift Racing 2 APK 1.22 on Android Devices

        -

        If you want to download and install CarX Drift Racing 2 APK 1.22 on your Android device, you need to follow these steps:

        -
          -
        1. Make sure your device meets the minimum requirements for the game. The game requires Android 5.0 or higher and at least 1 GB of RAM.
        2. Go to a reliable source to download the game's APK file. We recommend FileHippo, a website that provides safe and secure downloads of various software and apps. You can click on this link to go to the download page of CarX Drift Racing 2 APK 1.22.
        3. Once you are on the download page, click on the green "Download Latest Version" button to start downloading the APK file.
        4. After the download is complete, locate the APK file in your device's file manager or downloads folder.
        5. Before you install the APK file, you need to enable the "Unknown Sources" option in your device's settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
        6. Now you can install the APK file by tapping on it and following the instructions on the screen.
        7. After the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. (If you prefer to install from a computer instead, see the adb sketch after this list.)
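
        For readers who would rather install from a computer, here is a short Python sketch that drives the same install through adb. It assumes the Android platform tools are installed and USB debugging is enabled on the phone; the APK file name is a hypothetical placeholder.

        import subprocess

        def sideload_apk(apk_path):
            """Install an APK onto a USB-connected Android device via adb."""
            result = subprocess.run(
                ["adb", "install", "-r", apk_path],  # -r replaces an existing install
                capture_output=True, text=True,
            )
            if result.returncode != 0:
                raise RuntimeError(f"adb install failed: {result.stderr.strip()}")
            return result.stdout.strip()

        # Hypothetical file name, for illustration only.
        print(sideload_apk("carx-drift-racing-2-1.22.apk"))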
        -

        Conclusion

        -

        CarX Drift Racing 2 is a free drifting racing game for Android devices that offers realistic car physics and graphics, various tracks and locations, a huge collection of cars and customization options, online and offline modes, tournaments and championships, and more. It is a game that will appeal to both casual and hardcore racing fans who love drifting.

        -

        If you are looking for a fun and exciting drifting racing game for your Android device, you should definitely give CarX Drift Racing 2 a try. You can download it from FileHippo or other reliable sources. We hope you enjoy playing it as much as we did!

        -

        Do you have any questions or feedback about CarX Drift Racing 2? Let us know in the comments below!

        -

        FAQs

        -
          -
        • What is the latest version of CarX Drift Racing 2?
          The latest version of CarX Drift Racing 2 is 1.22, which was released on June 16, 2023. It added new cars, tracks, events, and improvements to the game. You can see the full changelog here.
        • Is CarX Drift Racing 2 free to play?
          Yes, CarX Drift Racing 2 is free to play. You can download and install it on your Android device without paying anything. However, the game does have some optional in-app purchases that you can buy with real money. These include coins, gold, premium cars, and VIP membership. You can also watch ads to earn some free coins or gold.
        • How can I get more money and coins in CarX Drift Racing 2?
          There are several ways to get more money and coins in CarX Drift Racing 2. You can earn them by drifting, completing missions, participating in tournaments and championships, watching ads, or buying them with real money. You can also get some free coins or gold by logging in daily, joining a club, or inviting your friends to play the game.
        • Can I play CarX Drift Racing 2 offline?
          Yes, you can play CarX Drift Racing 2 offline. You can play with AI opponents or in solo mode without an internet connection. However, you will need an internet connection to play online with other players, access some features, or update the game.
        • What are some tips and tricks for CarX Drift Racing 2?
          Here are some tips and tricks that might help you improve your drifting skills and performance in CarX Drift Racing 2:

          • Practice drifting on different tracks and locations to get familiar with the road conditions and obstacles.
          • Use the tuning section to adjust your car's settings according to your preference and the track's characteristics.
          • Try different camera angles to find the one that suits you best.
          • Watch replays of other players' drifts or tutorials to learn from their techniques and mistakes.
          • Join a club or a racing room to chat with other players and get tips and advice from them.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Among Us and Enjoy Cross-Platform Play with PC Console and Mobile.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Among Us and Enjoy Cross-Platform Play with PC Console and Mobile.md deleted file mode 100644 index aa377b8e4359ff9ccf24bd2177728bde37f5140f..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Among Us and Enjoy Cross-Platform Play with PC Console and Mobile.md +++ /dev/null @@ -1,116 +0,0 @@ -
        -

        Games Download Among Us

        -

        If you are looking for a fun and exciting game to play with your friends online or offline, you should check out Among Us. This is a game of teamwork and betrayal in space, where you have to work together with other players to prepare your spaceship for departure, while avoiding being killed by one or more impostors among you. In this article, we will tell you what Among Us is, how to play it, where to download it, and why you should play it.

        -

        What is Among Us?

        -

        Among Us is a casual multiplayer game developed and published by Innersloth, an indie game studio based in Washington, USA. The game was released in 2018 for Windows, Android, and iOS devices, and later came to Nintendo Switch in 2020 and Xbox consoles in 2021. The game became hugely popular in 2020 and 2021, thanks to its viral exposure on social media platforms like YouTube, Twitch, and TikTok. As of 2021, the game has more than 100 million downloads on the Google Play Store alone.

        -




        -

        A game of teamwork and betrayal in space

        -

        The premise of Among Us is simple: you are one of the crew members of a spaceship that is preparing to embark on a mission. However, there is a catch: one or more of the crew members are actually impostors, who are secretly trying to kill everyone else. The impostors can also sabotage the ship's systems, such as the oxygen, the reactor, the lights, and the doors, to create chaos and confusion. The crew members have to work together to complete tasks around the ship, such as fixing wires, scanning cards, fueling engines, and so on. They also have to find and vote out the impostors before they kill everyone or cause a major catastrophe.

        -

        How to play Among Us

        -

        Among Us can be played online or over local WiFi with 4 to 15 players. You can either join an existing game from the host list or create your own game with your own settings. You can also choose from four different maps to play in: The Skeld, MIRA HQ, Polus, and the Airship. Each map has its own layout, features, tasks, and sabotages.

        -

        Crewmates vs Impostors

        -

        At the start of each game, you will be assigned a role: either a crewmate or an impostor. The number of impostors can vary from one to three, depending on the game settings. The crewmates' goal is to complete all their tasks or discover and vote out all the impostors. The impostors' goal is to kill enough crewmates so that their number equals or exceeds the number of remaining crewmates or prevent the crewmates from completing their tasks by sabotaging the ship.
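
        Those two goals boil down to a simple check that runs after every kill, ejection, or task update. The Python sketch below is our reading of the rules described here, not Innersloth's actual code.

        def check_winner(crew_alive, impostors_alive, tasks_done, tasks_total):
            """Return the winning side, or None if the round continues.
            crew_alive counts only living non-impostor players."""
            if impostors_alive == 0 or tasks_done >= tasks_total:
                return "crewmates"   # every impostor ejected, or all tasks finished
            if impostors_alive >= crew_alive:
                return "impostors"   # impostors match or outnumber the crew
            return None

        print(check_winner(crew_alive=4, impostors_alive=1, tasks_done=10, tasks_total=20))  # None
        print(check_winner(crew_alive=1, impostors_alive=1, tasks_done=10, tasks_total=20))  # impostors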

        -

        Tasks, sabotages, and meetings

        -

        As a crewmate, you will have a list of tasks that you need to complete around the map. These tasks are mini-games that require you to perform various actions, such as connecting wires, entering codes, aligning telescopes, etc. Some tasks are common for all crewmates, while others are unique for each player. Completing tasks will fill up a task bar at the top left corner of the screen. When the task bar is full, the crewmates win.

        -

        As an impostor, you will have a fake list of tasks that you can use to pretend that you are a crewmate. However, your main objective is to kill crewmates without being caught. You can kill a crewmate by getting close to them and pressing the kill button. You can also use vents to quickly move around the map without being seen. However, be careful not to be spotted by other players or cameras. You can also sabotage the ship's systems by pressing the sabotage button and selecting a system to affect. For example, you can turn off the lights to reduce the crewmates' vision, lock the doors to trap them, or trigger a meltdown or an oxygen depletion to force them to fix it within a limited time. Sabotaging can also help you create diversions, escape from sticky situations, or prevent the crewmates from completing their tasks.

        -

        When a dead body is found by a crewmate, they can report it by pressing the report button. This will trigger an emergency meeting, where all the players can discuss and vote for who they think is the impostor. Alternatively, any player can call an emergency meeting by pressing the emergency button at a certain location on the map. However, each player has a limited number of emergency meetings that they can use. During meetings, players can chat with each other using text or voice (if enabled). They can also access a map that shows their tasks and a list of players that shows who is alive, dead, or ejected. The player who receives the most votes is ejected into space, whether they turn out to be an impostor or an innocent crewmate. If there is a tie or no one is voted, no one is ejected. The game continues until either the crewmates or the impostors win.

        -


        -

        Game modes and options

        -

        Among Us offers various game modes and options that you can customize to suit your preferences and play style. For example, you can change the number of impostors, the speed of players, the vision of crewmates and impostors, the kill cooldown and distance, the number and type of tasks, the voting time and anonymous votes, and more. You can also enable or disable certain features, such as confirm ejects, visual tasks, emergency meetings, chat and voice chat, etc. You can also choose from different game modes, such as Classic Mode, Hide and Seek Mode, Zombies Mode, and more. Each game mode has its own rules and objectives that make the game more fun and challenging.
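
        Conceptually, a lobby is just a bundle of these options. The Python sketch below models a few of them with basic validation; the field names, defaults, and bounds are illustrative assumptions, not the game's real data structures.

        from dataclasses import dataclass

        @dataclass
        class LobbySettings:
            impostors: int = 1            # the article notes games run 1 to 3 impostors
            player_speed: float = 1.0     # movement multiplier
            kill_cooldown: float = 30.0   # seconds between impostor kills
            confirm_ejects: bool = True
            anonymous_votes: bool = False

            def validate(self):
                if not 1 <= self.impostors <= 3:
                    raise ValueError("impostors must be between 1 and 3")
                if self.kill_cooldown < 0:
                    raise ValueError("kill cooldown cannot be negative")

        settings = LobbySettings(impostors=2, kill_cooldown=25.0)
        settings.validate()
        print(settings)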

        -

        Where to download Among Us

        -

        Among Us is available for download on various platforms and devices. Here are some of the options that you have:

        -

        Google Play Store

        -

        If you have an Android device, you can download Among Us for free from the Google Play Store. The game requires Android 4.4 or higher and about 100 MB of storage space. You can also purchase in-game items and skins using real money.

        -

        Steam

        -

        If you have a Windows PC or laptop, you can download Among Us from Steam for $4.99 USD. The game requires Windows 7 SP1 or higher, 1 GB of RAM, 250 MB of storage space, and DirectX 10 compatible graphics card. You can also purchase in-game items and skins using Steam Wallet funds.

        -

        App Store

        -

        If you have an iOS device, you can download Among Us for free from the App Store. The game requires iOS 10.0 or later and about 200 MB of storage space. You can also purchase in-game items and skins using real money.

        -

        Why you should play Among Us

        -

        Among Us is a game that offers many benefits and advantages for its players. Here are some of the reasons why you should play Among Us:

        -

        Fun and engaging gameplay

        -

        Among Us is a game that is easy to learn but hard to master. It is a game that tests your skills in deception, deduction, communication, cooperation, and strategy. It is a game that keeps you on your toes as you try to figure out who is lying and who is telling the truth. It is a game that makes you laugh as you witness hilarious moments and interactions among players. It is a game that makes you scream as you get killed or betrayed by someone you trusted. It is a game that makes you feel satisfied as you win or lose as a team.

        -

        Customizable and diverse settings

        -

        Among Us is a game that allows you to customize your experience according to your preferences and play style. You can choose from different maps, game modes, options, roles, tasks, sabotages, skins, hats, pets, and more. You can also create your own rules and scenarios with your friends to make the game more fun and challenging. You can play with different people from different backgrounds and cultures online or offline.

        -

        Cross-platform and social features

        -

        Among Us is a game that supports cross-platform play between Windows PC users and Android and iOS mobile users. This means that you can play with your friends regardless of what device they are using. You can also chat with other players using text or voice chat, if enabled. You can also join online communities and forums dedicated to Among Us, where you can share your experiences, tips, fan art, memes, and more.

        -

        Conclusion

        -

        Among Us is a game that you should definitely try if you are looking for a fun and exciting game to play with your friends or strangers online or offline. It is a game that offers a unique and thrilling gameplay experience that will keep you hooked for hours. It is a game that allows you to customize your settings and scenarios to suit your preferences and play style. It is a game that supports cross-platform play and social features that will enhance your interaction and communication with other players. Download Among Us today and join the millions of players who are enjoying this game of teamwork and betrayal in space.

        -

        FAQs

        -

        Here are some of the frequently asked questions about Among Us:

        - - - - - - - - - - - - - - - - - - - - - - - - - -
        • How many players can play Among Us?
          Among Us can be played with 4 to 15 players online or over local WiFi.
        • How do I change my name in Among Us?
          You can change your name in Among Us by tapping or clicking on the name field at the top of the screen before joining or creating a game.
        • How do I get free skins in Among Us?
          You can get free skins in Among Us by playing on certain maps or during certain events. For example, you can get the Halloween skins by playing on MIRA HQ or Polus during October, or the Christmas skins by playing on any map during December.
        • How do I report a hacker or cheater in Among Us?
          You can report a hacker or cheater in Among Us by emailing Innersloth at support@innersloth.com with evidence such as screenshots or videos.
        • How do I update Among Us?
          You can update Among Us by downloading the latest version from the platform that you are using, such as Google Play Store, Steam, or App Store.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Extreme Car Driving Simulator Crazy Mod APK and Enjoy the Ultimate Driving Experience.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Extreme Car Driving Simulator Crazy Mod APK and Enjoy the Ultimate Driving Experience.md deleted file mode 100644 index 713e32a6942406df25c4f0973efeb8c025c029dd..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Extreme Car Driving Simulator Crazy Mod APK and Enjoy the Ultimate Driving Experience.md +++ /dev/null @@ -1,124 +0,0 @@ -
        -

        Extreme Car Driving Simulator Crazy Mod APK: A Fun and Realistic Driving Experience

        -

        If you are a fan of driving games, you might have heard of Extreme Car Driving Simulator, a popular car simulation game that lets you drive, drift, and experience the real feelings of riding on different cars. But did you know that there is a Crazy Mod APK version of the game that gives you unlimited money and access to all cars? In this article, we will tell you everything you need to know about Extreme Car Driving Simulator Crazy Mod APK, including what it is, how to download and install it, what are its benefits and risks, and some tips and tricks for playing the game.

        -




        -

        What is Extreme Car Driving Simulator?

        -

        Extreme Car Driving Simulator is a realistic car simulation game that was released in 2014 by AxesInMotion Racing. The game features more than 80 vehicles and a huge open-world map to explore. You can create your own perfect ride by customizing the car's wheels and steering. You can also drive across Europe and complete many challenges in racing, consumption, checkpoint, or stunt modes. There is traffic on the road, but there are no pedestrians. You are free to do whatever you want, so you can just enjoy the night view or you can try to crash every car out there.

        -

        A realistic car simulation game with various features and modes

        -

        One of the standout features of Extreme Car Driving Simulator is its realistic driving physics. It simulates the weight and momentum of the car, as well as the effects of wind and weather conditions on the vehicle. You can also switch between different camera views, such as first-person, third-person, or top-down. The game also supports manual and automatic gearboxes, which behave like their real-world counterparts.

        -

        The game also offers various modes for different purposes. You can choose from free mode, traffic mode, checkpoint mode, or stunt mode. In free mode, you can roam around the map without any restrictions or objectives. In traffic mode, you have to follow the traffic rules and avoid collisions with other cars. In checkpoint mode, you have to reach certain points on the map within a time limit. In stunt mode, you have to perform amazing stunts and jumps on ramps and loops.

        -


        -

        A huge open-world map to explore and perform stunts

        -

        The game also boasts a huge open-world map that covers an area of more than 16 square kilometers. The map includes various environments, such as city streets, highways, countryside roads, airports, ports, bridges, tunnels, mountains, deserts, forests, and more. You can drive anywhere you want and discover new places and secrets.

        -

        The map also has many ramps and loops where you can perform stunts and tricks. You can fly over buildings, jump over bridges, or do flips and spins in mid-air.

        A variety of vehicles to choose from and customize

        -

        Another feature that makes Extreme Car Driving Simulator fun and exciting is the variety of vehicles that you can choose from and customize. The game has more than 80 vehicles, ranging from sports cars, supercars, SUVs, trucks, buses, police cars, and more. You can also unlock new cars by completing challenges or buying them with in-game currency.

        -

        Once you have your favorite car, you can customize it to your liking. You can change the color, wheels, steering, suspension, engine, brakes, and more. You can also add stickers, decals, spoilers, and other accessories to make your car stand out. You can also adjust the sound of the car's engine and horn.

        -

        To customize your car, you need to go to the garage menu and select the car you want to modify. Then, you can tap on the different parts of the car and choose the options you want. You can also use the slider to adjust the values of some parameters. You can preview the changes before applying them. To save your changes, you need to tap on the check mark icon.

        -

        What is the Crazy Mod APK?

        -

        If you want to enjoy Extreme Car Driving Simulator without any limitations or restrictions, you might want to try the Crazy Mod APK version of the game. This is a modified version of the game that gives you unlimited money and access to all cars. You can also use a mega menu that allows you to activate various cheats and hacks in the game.

        -

        A modified version of the game that gives unlimited money and access to all cars

        -

        The Crazy Mod APK is a modified version of the game that gives you unlimited money and access to all cars. This means that you don't have to worry about earning or spending money in the game. You can buy any car you want and customize it as much as you want. You can also unlock all the features and modes in the game without any hassle.

        -

        The Crazy Mod APK also gives you a mega menu that allows you to activate various cheats and hacks in the game. For example, you can increase or decrease the car speed, reverse speed, traffic density, gravity, friction, and more. You can also spawn any car you want on the map, add 100000 gold to your account, get free VIP membership, disable ads, and more.

        How to download and install the Crazy Mod APK on Android devices

        -

        If you want to try the Crazy Mod APK version of Extreme Car Driving Simulator, you need to download and install it on your Android device. However, you cannot find it on the Google Play Store, as it is not an official app. You need to find a reliable website that offers the APK file and follow some steps to install it. Here is how you can do it:

        -
          -
        1. Go to your device settings and tap on Security (or Privacy on some devices). Then, enable the option to allow unknown sources or unknown apps. This will let you install apps from sources other than the Google Play Store.
        2. Find a website that offers the Crazy Mod APK file and download it. You can use your browser or a file manager app to do this. Make sure you download from a trustworthy source, as some APK files may contain malware or viruses.
        3. Once the download is complete, tap on the notification or go to your file manager app and find the APK file. Tap on it and follow the instructions to install it. You may need to grant some permissions to the app.
        4. After the installation is done, you can launch the game and enjoy the Crazy Mod APK features. You can also access the mega menu by tapping on the icon in the top-left corner of the screen. (If you prefer installing from a computer, see the sketch after this list.)
        -
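
        If you prefer installing from a computer instead of on the phone itself, the same APK can be sideloaded over USB with Android's adb tool. The snippet below is a minimal sketch and not part of the original guide: it assumes adb is installed on your computer, USB debugging is enabled on the device, and the APK file name is hypothetical.

        import subprocess
        from pathlib import Path

        # Hypothetical file name: use the path of the APK you actually downloaded
        APK = Path("extreme_car_driving_simulator_mod.apk")

        def sideload(apk: Path) -> None:
            """Install an APK on the connected Android device via adb."""
            if not apk.exists():
                raise FileNotFoundError(apk)
            # "-r" reinstalls the app while keeping its data if it is already installed
            subprocess.run(["adb", "install", "-r", str(apk)], check=True)

        if __name__ == "__main__":
            sideload(APK)

        -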

        The benefits and risks of using the Crazy Mod APK

        -

        Using the Crazy Mod APK version of Extreme Car Driving Simulator can have some benefits and risks. Here are some of them:

        | Benefits | Risks |
        | --- | --- |
        | You can enjoy unlimited money and access to all cars without spending any real money. | You may violate the terms and conditions of the game developer and get banned from the game. |
        | You can use the mega menu to activate various cheats and hacks that can make the game more fun and easy. | You may lose the challenge and thrill of playing the game as it was intended. |
        | You can customize your car and gameplay settings as much as you want. | You may experience some bugs or glitches that can affect the game performance or stability. |
        | You can avoid annoying ads that may interrupt your gameplay. | You may miss out on some updates or features that are available only in the official version of the game. |

        Tips and Tricks for Playing Extreme Car Driving Simulator

        -

        Now that you know what Extreme Car Driving Simulator is and how to use the Crazy Mod APK version, you might want to learn some tips and tricks that can help you improve your driving skills and have more fun in the game. Here are some of them:

        -

        How to control the car and use the different features

        -

        To control the car, you can use the buttons on the screen or tilt your device. You can also change the control mode in the settings menu. The buttons include the accelerator, brake, handbrake, horn, lights, camera, and nitro. You can also use the steering wheel or the arrows to steer the car.

        -

        To use the different features of the game, you can tap on the icons on the top-right corner of the screen. The icons include the map, the garage, the settings, and the pause. You can also swipe left or right on the screen to access the speedometer, the fuel gauge, the damage indicator, and the radio.

        -

        How to complete the challenges and earn rewards

        -

        To complete the challenges in the game, you need to go to the map and select the mode you want to play. You can also see the difficulty level and the reward for each challenge. The challenges include racing against other cars, reaching checkpoints within a time limit, performing stunts on ramps and loops, and more.

        -

        To earn rewards in the game, you need to complete the challenges and collect coins and diamonds on the road. You can also watch ads or rate the game to get more rewards. You can use the rewards to buy new cars or customize your existing ones.

        -

        How to avoid traffic and police

        -

        To avoid traffic in the game, you need to be careful and alert when driving on busy roads. You can also use the nitro boost to speed up and overtake other cars. You can also switch lanes or drive on the opposite side of the road if there is less traffic. However, be aware that this may increase your chances of crashing or getting caught by the police.

        -

        To avoid police in the game, you need to follow the traffic rules and avoid breaking them. For example, do not run red lights, do not speed over the limit, do not hit other cars or pedestrians, do not drive on sidewalks or grass, do not drive on restricted areas, and do not perform illegal stunts. If you break any of these rules, you will see a police icon on your screen and hear a siren sound. This means that you are being chased by a police car and you need to escape as soon as possible. You can do this by driving faster than them, hiding in alleys or tunnels, or using ramps or loops to lose them.

        -

        Conclusion

        -

        Extreme Car Driving Simulator is a fun and realistic driving game that lets you experience different cars and modes in a huge open-world map. You can also use the Crazy Mod APK version of the game to get unlimited money and access to all cars, as well as a mega menu that allows you to activate various cheats and hacks. However, you should be aware of the benefits and risks of using this version, as well as some tips and tricks for playing the game.

        -

        If you are looking for a driving game that offers a lot of freedom and customization options, Extreme Car Driving Simulator might be a good choice for you. You can download it from Google Play Store or from a reliable website that offers the Crazy Mod APK file. Have fun driving!

        -

        FAQs

        -

        Q1: Is Extreme Car Driving Simulator free to play?

        -

        A1: Yes, Extreme Car Driving Simulator is free to play. However, it contains ads and offers in-app purchases that can enhance your gameplay experience.

        -

        Q2: Can I play Extreme Car Driving Simulator offline?

        -

        A2: Yes, you can play Extreme Car Driving Simulator offline. However, some features may require an internet connection, such as watching ads or updating the game.

        -

        Q3: Can I play Extreme Car Driving Simulator on PC?

        -

        A3: Yes, you can play Extreme Car Driving Simulator on PC using an Android emulator. An emulator is software that allows you to run Android apps on your PC. Some popular emulators are BlueStacks, NoxPlayer, and LDPlayer.

        -

        Q4: How can I update Extreme Car Driving Simulator?

        -

        A4: You can update Extreme Car Driving Simulator by going to Google Play Store and tapping on Update if there is a new version available. Alternatively, you can download the latest version of Crazy Mod APK from a reliable website and install it on your device.

        -

        Q5: How can I contact the developers of Extreme Car Driving Simulator?

        -

        A5: You can contact the developers of Extreme Car Driving Simulator by sending an email to support@axesinmotion.com or by visiting their website at https://www.axesinmotion.com/. You can also follow them on Facebook, Twitter, Instagram, or YouTube for the latest news and updates.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Music Joeboy Body Soul The Song that Will Make You Fall in Love.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Music Joeboy Body Soul The Song that Will Make You Fall in Love.md deleted file mode 100644 index be0de0d4a519c7aedc630785b0e9340a091ddb00..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Music Joeboy Body Soul The Song that Will Make You Fall in Love.md +++ /dev/null @@ -1,92 +0,0 @@ -
        -

        How to Download Music from Joeboy's Body and Soul

        -

        Body and Soul is a catchy and romantic song by Nigerian singer Joeboy. It was released in January 2023 as part of his sophomore album of the same name. The song showcases Joeboy's flair for penning love songs that pull from both African and Caribbean vibes. If you love this song and want to download it to your computer or smartphone, here are some ways you can do it.

        -

        How to Buy Music on Desktop

        -

        If you want to buy music on your desktop computer, you can use iTunes as a good option. iTunes is a popular music player and store that allows you to purchase and download songs from various artists, including Joeboy. Here are the steps to buy music on desktop using iTunes:

        -


        Install iTunes if you're on Windows

        -

        If you're using a Windows computer, you'll need to install iTunes first. You can download it from Apple's website. You'll also need to create an Apple ID account and enter payment information for it before you can purchase music through iTunes on Windows. If you're using a Mac computer, iTunes will be installed by default.

        -

        Open iTunes and sign in with your Apple ID

        -

        Once you have iTunes installed, open it and sign in with your Apple ID. You can do this by clicking the Account menu item at the top of iTunes (Windows) or the screen (Mac) and then clicking Sign In in the drop-down menu. Enter your Apple ID email address and password in the resulting pop-up window.

        -

        Search for Body and Soul by Joeboy

        -

        After signing in, click the Store tab near the top of the iTunes window. This will take you to the iTunes Store where you can browse and buy music. To find Body and Soul by Joeboy, click the search bar in the top right corner of the iTunes Store and type "Body and Soul by Joeboy" in it. Press Enter or click the magnifying glass icon to start the search. You should see the song appear in the search results, along with the album cover and the price.

        -

        Click the music's price and enter your password or Touch ID

        -

        To buy the song, click the price button next to it. This will prompt you to confirm your purchase by entering your Apple ID password or using Touch ID if you have a Mac with a fingerprint scanner. After you do this, the song will start downloading to your iTunes library.

        -

        View the music's files on your computer

        -

        Once the download is complete, you can view the music's files on your computer by going to your iTunes library. You can do this by clicking the Library tab near the top of the iTunes window. You should see Body and Soul by Joeboy under the Songs category in your library. You can also find it under the Albums category if you bought the whole album. To play the song, just double-click it or right-click it and select Play. To locate the music file on your computer, right-click it and select Show in Windows Explorer (Windows) or Show in Finder (Mac).

        -

        How to Buy Music on iPhone or Android

        -

        If you want to buy music on your iPhone or Android phone, you can use the iTunes Store app or the Play Music app respectively. These apps allow you to purchase and download songs from various artists, including Joeboy. Here are the steps to buy music on iPhone or Android using these apps:

        -

        Open the iTunes Store app or the Play Music app

        -

        If you're using an iPhone, open the iTunes Store app on your device. It has a purple icon with a white star in it. If you're using an Android phone, open the Play Music app on your device. It has an orange icon with a white triangle in it. You'll need to sign in with your Apple ID or Google account and enter payment information for them before you can purchase music through these apps.

        -

        Search for Body and Soul by Joeboy

        -

        After opening the app, tap the search bar at the bottom of the screen (iPhone) or at the top of the screen (Android) and type "Body and Soul by Joeboy" in it. Tap Search or press Enter to start the search. You should see the song appear in the search results, along with the album cover and the price.

        -

        Tap the music's price and enter your password or Touch ID or fingerprint

        -

        To buy the song, tap the price button next to it. This will prompt you to confirm your purchase by entering your Apple ID password or using Touch ID if you have an iPhone with a fingerprint scanner, or entering your Google account password or using your fingerprint if you have an Android phone with a fingerprint scanner. After you do this, the song will start downloading to your device.

        -

        View the music's files on your device

        -

        Once the download is complete, you can view the music's files on your device by going to your music library. You can do this by opening the Music app on your iPhone or the Play Music app on your Android phone. You should see Body and Soul by Joeboy under the Downloaded category in your library. To play the song, just tap it or swipe it to the right and select Play. To locate the music file on your device, tap it or swipe it to the left and select More, then select Show in Files (iPhone) or Show in Folder (Android).

        -

        How to Download Free Music from YouTube or SoundCloud

        -

        If you want to download free music from YouTube or SoundCloud, you can use a third-party app or website that allows you to convert and download videos or audios from these platforms. However, you should be aware that this method may be illegal, unsafe, or low-quality. You may also violate the rights of the artists or creators of the music. Here are the steps to download free music from YouTube or SoundCloud:

        -

        Find a free version of Body and Soul by Joeboy on YouTube or SoundCloud

        -

        First, you need to find a free version of Body and Soul by Joeboy on YouTube or SoundCloud. You can do this by opening the YouTube app or website or the SoundCloud app or website on your device and searching for "Body and Soul by Joeboy" in the search bar. You should see some results that have the song uploaded by different users. Choose one that has good quality and reviews.

        -

        Copy the URL of the video or audio

        -

        Next, you need to copy the URL of the video or audio that has the song. You can do this by tapping or clicking the Share button under the video or audio and selecting Copy Link. Alternatively, you can copy the URL from the address bar of your browser.

        -

        Use a third-party app or website to convert and download the music

        -

        Then, you need to use a third-party app or website that can convert and download the music from YouTube or SoundCloud. There are many such apps and websites available online, but some of them may be unreliable, malicious, or full of ads. Some well-known options are YTMP3, 4K Video Downloader, and SoundCloud Downloader; Audacity, also often mentioned, is an audio editor that can record playback rather than a downloader. You can download these apps from their official websites or app stores, or use their online versions. To use them, paste the URL of the video or audio that has the song into the input box and select the format and quality you want for the music file. Then, click the Convert or Download button and wait for the process to finish. A scripted alternative is sketched below.
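
        As a concrete illustration, one widely used open-source tool for this job is yt-dlp. It is not one of the tools named above, so treat this sketch as an assumption rather than the article's recommendation. It assumes yt-dlp and FFmpeg are installed, the URL is a placeholder, and, as discussed later, that you only download music you have the right to download.

        import yt_dlp

        # Placeholder URL: replace it with the video or audio page you found earlier
        URL = "https://www.youtube.com/watch?v=VIDEO_ID"

        ydl_opts = {
            "format": "bestaudio/best",      # pick the best available audio stream
            "outtmpl": "%(title)s.%(ext)s",  # name the file after the track title
            "postprocessors": [
                {
                    # convert the downloaded stream to MP3 using FFmpeg
                    "key": "FFmpegExtractAudio",
                    "preferredcodec": "mp3",
                    "preferredquality": "192",
                }
            ],
        }

        with yt_dlp.YoutubeDL(ydl_opts) as ydl:
            ydl.download([URL])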

        -

        Save the music file to your computer or device

        -

        Finally, you need to save the music file to your computer or device. You can do this by clicking the Save or Download button on the app or website and choosing the location where you want to save the music file. You can also rename the file if you want. After saving the file, you can view it on your computer or device by opening the folder where you saved it. To play the song, just double-click it or tap it.

        -

        Conclusion

        -

        In this article, we have shown you how to download music from Joeboy's Body and Soul. You can choose to buy the music on desktop or mobile using iTunes or Play Music, or you can download free music from YouTube or SoundCloud using a third-party app or website. However, you should be careful when downloading free music as it may be illegal, unsafe, or low-quality. You should also respect the rights of the artists or creators of the music and support them if you can. Body and Soul by Joeboy is a great song that you can enjoy on your computer or device anytime. We hope this article has helped you learn how to download it easily and safely.

        -

        FAQs

        -

        Here are some frequently asked questions about downloading music from Joeboy's Body and Soul:

        -

        Q1: Who is Joeboy?

        -

        A1: Joeboy is a Nigerian singer and songwriter who rose to fame with his hit single Baby in 2019. He is known for his Afro-pop and R&B style of music that blends African and Caribbean influences. He has released two albums, Love and Light in 2019 and Body and Soul in 2023.

        -

        Q2: What is Body and Soul about?

        -

        A2: Body and Soul is a love song that expresses Joeboy's feelings for his lover and how he wants to be with her in his next life. He sings about how he loves her body and soul and how they are perfect for each other. He also promises to treat her right and make her happy.

        -

        Q3: What are some other songs by Joeboy?

        -

        A3: Some other popular songs by Joeboy are Beginning, Don't Call Me Back, Lonely, Celebration, and Focus. These songs are also part of his albums Love and Light and Body and Soul. You can find them on iTunes, Play Music, YouTube, SoundCloud, or other music platforms.

        -

        Q4: What are some benefits of downloading music instead of streaming it?

        -

        A4: Downloading music allows you to own your music, listen to it offline, save data, and avoid ads or interruptions. You can also transfer your music files to other devices or share them with others. Downloading music also supports the artists or creators of the music by paying them for their work.

        -

        Q5: What are some risks of downloading free music from YouTube or SoundCloud?

        -

        A5: Downloading free music from YouTube or SoundCloud may be illegal, unsafe, or low-quality. You may be breaking the law by infringing on the copyrights of the artists or creators of the music. You may also expose your computer or device to viruses, malware, or spyware by using unreliable apps or websites. You may also get poor-quality music files that have low resolution, distortion, or noise. You may also disrespect the artists or creators of the music by not giving them credit or compensation for their work.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Soccer 17.0.03 APK - Download and Install the Latest Update from EA SPORTS.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Soccer 17.0.03 APK - Download and Install the Latest Update from EA SPORTS.md deleted file mode 100644 index 8fcad064f7479b57980f3cae87efb3b0b129d4fd..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Soccer 17.0.03 APK - Download and Install the Latest Update from EA SPORTS.md +++ /dev/null @@ -1,168 +0,0 @@ -
        -

        FIFA Mobile APK 17.0.03: Everything You Need to Know

        -

        If you are a fan of soccer games, you might have heard of FIFA Mobile, a 3D soccer simulator developed by Electronic Arts that features live events and compelling gameplay. FIFA Mobile is one of the most popular soccer games for mobile devices, with over 100 million downloads on Google Play Store alone.

        -

        In this article, we will tell you everything you need to know about FIFA Mobile APK 17.0.03, the latest update for the game, which includes updated players, kits, clubs and leagues to reflect the real-world 22/23 soccer season. We will also show you how to download and install the game on your device, the new features and improvements in this version, the system requirements and device compatibility for playing the game, and some tips and tricks for playing it.

        -

        What is FIFA Mobile APK 17.0.03?

        -

        A brief introduction to the game and its features

        -

        FIFA Mobile is a soccer game that allows you to build your Ultimate Team of your favorite soccer stars from over 15,000 players, including world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr and Son Heung-min, plus 600+ teams from over 30+ leagues around the world.

        -

        You can play various modes in FIFA Mobile, such as Head to Head, VS Attack, Manager Mode, UEFA Champions League, UEFA Europa League, UEFA Europa Conference League, FIFA World Cup 2022™ Mode, Icons and Heroes Mode, and more. You can also take part in playable live events that correspond with the real-world tournaments throughout the football season to earn special rewards.

        -

        FIFA Mobile also offers realistic soccer simulation with new graphics and gameplay engine that supports up to 60 FPS on some devices. You can experience new, upgraded soccer stadiums with realistic stadium SFX and live on-field audio commentary.

        -

        How to download and install the game on your device

        -

        To download FIFA Mobile APK 17.0.03 on your device, you can follow these steps:

        -
          -
        1. Go to this link in your device's browser.
        2. Tap on the green "Download APK" button.
        3. Wait for the download to finish.
        4. Go to your device's settings and enable the "Unknown Sources" option under security settings.
        5. Go to your device's file manager and locate the downloaded APK file.
        6. Tap on the file and follow the installation instructions.
        7. Launch the game and enjoy!
        -

        You can also download FIFA Mobile from Google Play Store or Apple App Store, but you might not get the latest version of the game. The APK file is a third-party source that provides the latest update for the game. However, you should be careful when downloading APK files from unknown sources, as they might contain malware or viruses that can harm your device.

        -

        What are the new features and improvements in FIFA Mobile APK 17.0.03?

        -

        The FIFA World Cup 2022™ Mode

        -

        One of the most exciting features in FIFA Mobile APK 17.0.03 is the FIFA World Cup 2022™ Mode, which lets you experience the thrill of the biggest soccer tournament in the world. You can choose from 32 qualified teams and play through the group stage, knockout stage, and the final to lift the trophy. You can also play as any of the 211 FIFA member associations and try to qualify for the World Cup through regional tournaments.

        -

        The FIFA World Cup 2022™ Mode also features authentic stadiums, kits, badges, and match balls from Qatar, the host country of the World Cup. You can also enjoy live commentary from legendary soccer commentators such as Martin Tyler, Alan Smith, Derek Rae, and Lee Dixon.

        -

        The Advanced Passing System

        -

        Another new feature in FIFA Mobile APK 17.0.03 is the Advanced Passing System, which gives you more control and precision over your passes. You can use a combination of tap and swipe gestures to execute different types of passes, such as through balls, lobbed passes, driven passes, and backheel passes. You can also adjust the power and direction of your passes by dragging your finger on the screen.

        -

        The Advanced Passing System also allows you to perform skill moves with your players, such as roulette, rainbow flick, heel to heel, and more. You can use these moves to dribble past defenders and create scoring opportunities.

        -

        The Manager Mode

        -

        If you want to take on the role of a soccer manager, you can try the Manager Mode in FIFA Mobile APK 17.0.03. In this mode, you can create your own custom club and manage every aspect of it, such as signing players, setting tactics, training players, scouting opponents, and more. You can also compete with other managers in online leagues and tournaments.

        -

        The Manager Mode also lets you customize your club's logo, kit, stadium, and fan base. You can also interact with your players and staff through dialogues and decisions that affect their morale and performance.

        -

        The Icons and Heroes

        -

        FIFA Mobile APK 17.0.03 also introduces two new types of player cards: Icons and Heroes. Icons are legendary players who have made history in soccer, such as Pelé, Maradona, Ronaldo, Zidane, Beckham, and more. Heroes are players who have performed exceptionally well in a specific season or tournament, such as Lewandowski, Messi, Salah, Mbappé, and more.

        -

        You can collect these player cards by completing special events and challenges in the game. You can also use them to boost your team's chemistry and performance.

        -

        What are the system requirements and device compatibility for FIFA Mobile APK 17.0.03?

        -

        The minimum requirements for downloading the game

        -

        To download FIFA Mobile APK 17.0.03 on your device, you need to have at least:

        -
          -
        • An Android device with Android 6.0 or higher
        • A minimum of 1 GB of RAM
        • A minimum of 1 GB of free storage space
        • A stable internet connection
        -

        The minimum requirements for playing the Head to Head mode

        -

        To play the Head to Head mode in FIFA Mobile APK 17.0.03, which is a real-time multiplayer mode where you can challenge other players online, you need to have at least:

        -
          -
        • An Android device with Android 8.0 or higher
        • A minimum of 2 GB of RAM
        • A minimum of 1 GB of free storage space
        • A stable internet connection with low latency
        -

        The devices that support playing at 60 FPS

        -

        To play FIFA Mobile APK 17.0.03 at 60 FPS (frames per second), which is a high-quality graphics setting that makes the game smoother and more realistic, you need to have one of these devices:

        | Brand | Model |
        | --- | --- |
        | Samsung | Galaxy S9/S9+, Galaxy S10/S10+, Galaxy S20/S20+, Galaxy S21/S21+, Galaxy Note9, Galaxy Note10/Note10+, Galaxy Note20/Note20+ |
        | OnePlus | OnePlus 6/6T, OnePlus 7/7T, OnePlus 8/8T, OnePlus 9/9T |
        | Google | Pixel 3/3XL, Pixel 4/4XL, Pixel 5/5XL |
        | Huawei | P20/P20 Pro, P30/P30 Pro, P40/P40 Pro, Mate 20/Mate 20 Pro, Mate 30/Mate 30 Pro, Mate 40/Mate 40 Pro |
        | Xiaomi | Mi 8/Mi 8 Pro, Mi 9/Mi 9 Pro, Mi 10/Mi 10 Pro, Mi 11/Mi 11 Pro, Redmi K20/K20 Pro, Redmi K30/K30 Pro, Redmi K40/K40 Pro |
        | Asus | ROG Phone/ROG Phone II/ROG Phone III |
        | Razer | Razer Phone/Razer Phone II |
        -

        What are some tips and tricks for playing FIFA Mobile APK 17.0.03?

        -

        How to build a better team

        -

        To build a better team in FIFA Mobile APK 17.0.03, you need to consider these factors:

        -
          -
        • The overall rating of your players, which is determined by their attributes and skills.
        • The chemistry of your players, which is determined by their nationality, league, and club.
        • The formation of your team, which is determined by the positions and roles of your players.
        • The tactics of your team, which is determined by the style and strategy of your gameplay.
        -

        You can improve your team by upgrading your players with training points and skill boosts, by buying new players from the marketplace or the store, by completing events and challenges that reward you with player cards, and by using the Icons and Heroes to boost your team's chemistry and performance.

        -

        How to place a bid at the marketplace

        -

        The marketplace is where you can buy and sell players with other players in FIFA Mobile APK 17.0.03. You can use coins or FIFA Points to place a bid on a player card that you want to buy. You can also list your own player cards for sale and set a starting price and a buy now price.

        -

        To place a bid at the marketplace, you need to follow these steps:

        -
          -
        1. Go to the marketplace tab in the game menu.
        2. Search for the player card that you want to buy by using the filters or the search bar.
        3. Select the player card that you want to bid on.
        4. Enter the amount of coins or FIFA Points that you want to bid.
        5. Tap on the "Bid" button.
        6. Wait for the bidding timer to end.
        7. If you are the highest bidder when the timer ends, you will win the player card and it will be added to your inventory. If you are outbid by another player, you will lose the auction and your bid will be returned to you.
        -

        How to use a combination of tap and button controls

        -

        FIFA Mobile APK 17.0.03 offers two types of controls for playing the game: tap controls and button controls. You can choose either one or use a combination of both to suit your preference and style.

        -

        Tap controls allow you to control your players by tapping on the screen. You can tap on a player to select him, tap on an empty space to move him there, tap on an opponent to tackle him, tap on a teammate to pass the ball to him, and tap on the goal to shoot. You can also swipe on the screen to perform skill moves or adjust the direction and power of your passes and shots.

        -

        Button controls allow you to control your players by using virtual buttons on the screen. You can use the joystick on the left side of the screen to move your player, and use the buttons on the right side of the screen to perform actions such as sprinting, passing, shooting, tackling, crossing, switching players, and performing skill moves.

        -

        You can use a combination of tap and button controls to have more flexibility and accuracy in your gameplay. For example, you can use the tap controls to select and move your players, and use the button controls to pass and shoot. You can also use the tap controls to perform skill moves, and use the button controls to switch players or sprint. You can customize your control settings in the game menu.

        -

        How to get player cards

        -

        Player cards are essential for building your Ultimate Team in FIFA Mobile APK 17.0.03. You can get player cards by various methods, such as:

        -
          -
        • Completing events and challenges that reward you with player cards or packs.
        • Buying player cards or packs from the store with coins or FIFA Points.
        • Winning player cards or packs from the Head to Head mode or the VS Attack mode.
        • Trading player cards or packs with other players at the marketplace.
        • Using the Icons and Heroes mode to unlock legendary and special player cards.
        -

        How to play the Attack Mode

        -

        The Attack Mode is a unique and fast-paced mode in FIFA Mobile APK 17.0.03, where you can compete with other players online in a turn-based match. In this mode, you only control your team's attacking moves, while your opponent controls their team's defending moves. Each match consists of two turns, where you and your opponent take turns to score as many goals as possible. The player with the most goals at the end of the match wins.

        -

        To play the Attack Mode, you need to follow these steps:

        -
          -
        1. Go to the Attack Mode tab in the game menu.
        2. Select a division that matches your team's overall rating.
        3. Select an opponent from the list of available players.
        4. Start your first turn and try to score as many goals as possible within the given time limit.
        5. Wait for your opponent to finish their first turn and see their score.
        6. Start your second turn and try to score more goals than your opponent.
        7. Wait for your opponent to finish their second turn and see the final score.
        8. If you win, you will earn fans, coins, and FIFA Points. If you lose, you will lose fans and coins.
        -

        Conclusion and FAQs

        -

        FIFA Mobile APK 17.0.03 is a great soccer game that offers realistic graphics, gameplay, and content for mobile devices. You can download and install the game on your device by following the steps mentioned above. You can also enjoy the new features and improvements in this version, such as the FIFA World Cup 2022™ Mode, the Advanced Passing System, the Manager Mode, and the Icons and Heroes. You can also improve your skills and knowledge by following the tips and tricks mentioned above.

        -

        If you have any questions about FIFA Mobile APK 17.0.03, you might find the answers in these FAQs:

        -

        Q: How can I update my game to FIFA Mobile APK 17.0.03?

        -

        A: If you have downloaded the game from Google Play Store or Apple App Store, you can update your game by going to the store and tapping on the "Update" button. If you have downloaded the game from a third-party source, you can update your game by downloading the latest APK file from this link and installing it on your device.

        -

        Q: How can I play with my friends in FIFA Mobile APK 17.0.03?

        -

        A: You can play with your friends in FIFA Mobile APK 17.0.03 by adding them as friends in the game menu. You can then invite them to play with you in various modes, such as Head to Head, VS Attack, Manager Mode, or FIFA World Cup 2022™ Mode.

        -

        Q: How can I change my team's name, logo, kit, or stadium in FIFA Mobile APK 17.0.03?

        -

        A: You can change your team's name, logo, kit, or stadium in FIFA Mobile APK 17.0.03 by going to the My Team tab in the game menu. You can then tap on the "Edit" button and choose from various options to customize your team's appearance.

        -

        Q: How can I get more coins or FIFA Points in FIFA Mobile APK 17.0.03?

        -

        A: You can get more coins or FIFA Points in FIFA Mobile APK 17.0.03 by playing various modes and events that reward you with coins or FIFA Points. You can also buy coins or FIFA Points from the store with real money. However, you should be careful when spending your coins or FIFA Points, as they are limited and valuable resources in the game.

        -

        Q: How can I contact the customer support or report a bug in FIFA Mobile APK 17.0.03?

        -

        A: You can contact the customer support or report a bug in FIFA Mobile APK 17.0.03 by going to the Settings tab in the game menu. You can then tap on the "Help" button and choose from various options to get help or feedback. You can also visit the official website or social media pages of FIFA Mobile for more information and updates.

        -

        I hope you enjoyed reading this article and learned something new about FIFA Mobile APK 17.0.03. If you have any comments or suggestions, please feel free to share them with me. Thank you for your time and attention.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA World Cup 2022 How to Get Unlimited Money and Unlock All Teams in FIFA Mobile MOD APK.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA World Cup 2022 How to Get Unlimited Money and Unlock All Teams in FIFA Mobile MOD APK.md deleted file mode 100644 index 3d6990b5476d1be3829774e9c7e620dd2b31f533..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA World Cup 2022 How to Get Unlimited Money and Unlock All Teams in FIFA Mobile MOD APK.md +++ /dev/null @@ -1,91 +0,0 @@ -
        -

        Download FIFA World Cup 2022 Mod APK Unlimited Money

        -

        If you are a fan of soccer games, you must have heard of FIFA Mobile, the official mobile game of the FIFA World Cup. But did you know that there is a modded version of this game that gives you unlimited money, unlocked all players, teams, and stadiums, and many other features? In this article, we will tell you everything you need to know about FIFA World Cup 2022 Mod APK, and how to download and install it on your Android device.

        -

        Introduction

        -

        FIFA Mobile is one of the most popular soccer games on mobile devices, developed by Electronic Arts. It lets you build your ultimate team of soccer stars from over 600 clubs and 15,000 players, including world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr and Son Heung-min. You can also compete in various modes, such as Head-to-Head, VS Attack, Manager Mode, and more.

        -

        What is FIFA World Cup 2022 Mod APK?

        -

        FIFA World Cup 2022 Mod APK is a modified version of FIFA Mobile that adds some extra features and enhancements to the original game. It is not an official app from EA Sports, but a fan-made app that requires no root access or license verification. With this mod apk, you can enjoy the following benefits:

        -

        Why download FIFA World Cup 2022 Mod APK?

        -

        There are many reasons why you might want to download FIFA World Cup 2022 Mod APK instead of the original game. Here are some of them:

        -
          -
        • You can get unlimited money and coins to buy any player, team, or stadium you want.
        • You can unlock all players, teams, and stadiums from the start, without having to grind or spend real money.
        • You can access a menu mod that gives you various options to customize your game experience, such as speed hack, god mode, no ads, etc.
        • You can enjoy high-quality graphics and sound that make the game more realistic and immersive.
        • You can relive the world's greatest soccer tournament with the FIFA World Cup 2022 mode, where you can play with any of the 32 qualified nations in the official tournament brackets.
        -

        Features of FIFA World Cup 2022 Mod APK

        -

        Now that you know what FIFA World Cup 2022 Mod APK is and why you should download it, let's take a closer look at some of its features:

        -

        Unlocked all players, teams, and stadiums

        -

        One of the best features of FIFA World Cup 2022 Mod APK is that it unlocks all players, teams, and stadiums from the start. You don't have to wait for hours or days to unlock your favorite soccer stars or clubs. You can choose from over 600 teams, including Chelsea, Paris SG, Real Madrid, Liverpool and Juventus. You can also play in over 50 stadiums from around the world, including Al Bayt and Lusail from the FIFA World Cup 2022.

        -

        Unlimited money and coins

        -

        Another great feature of FIFA World Cup 2022 Mod APK is that it gives you unlimited money and coins to spend on anything you want. You can buy any player item or upgrade your team without worrying about running out of resources. You can also use the money and coins to unlock exclusive items and rewards from the store or the events. You can also use them to boost your team's performance and skills.

        -


        -

        Menu mod with various options

        -

        FIFA World Cup 2022 Mod APK also comes with a menu mod that gives you various options to customize your game experience. You can access the menu mod by tapping on the icon on the top left corner of the screen. From there, you can enable or disable the following options:

        -
          -
        • Speed hack: This option lets you increase or decrease the speed of the game, making it easier or harder to play.
        • God mode: This option makes your team invincible, meaning they will never lose stamina, health, or morale.
        • No ads: This option removes all the ads from the game, giving you smoother, uninterrupted gameplay.
        • And more: There are other options that you can explore and experiment with, such as unlimited energy, auto win, no foul, etc.
        -

        High-quality graphics and sound

        -

        FIFA World Cup 2022 Mod APK also boasts high-quality graphics and sound that make the game more realistic and immersive. You can enjoy the stunning visuals of the players, teams, stadiums, and animations. You can also hear the authentic sounds of the crowd, the commentary, and the music. The game also supports HD resolution and 60 FPS for a better gaming experience.

        -

        FIFA World Cup 2022 mode and other modes

        -

        One of the most exciting features of FIFA World Cup 2022 Mod APK is that it lets you relive the world's greatest soccer tournament with the FIFA World Cup 2022 mode. In this mode, you can play with any of the 32 qualified nations in the official tournament brackets. You can also create your own custom tournament with your favorite teams. You can also play in other modes, such as Head-to-Head, VS Attack, Manager Mode, and more.

        -

        How to download and install FIFA World Cup 2022 Mod APK?

        -

        Now that you know the features of FIFA World Cup 2022 Mod APK, you might be wondering how to download and install it on your Android device. Don't worry, it's very easy and simple. Just follow these steps:

        -

        Step 1: Download the APK file from a trusted source

        -

        The first step is to download the APK file of FIFA World Cup 2022 Mod APK from a trusted source. Many websites offer this file for free, but be careful of fake or malicious links. We recommend you use this link to download the APK file safely and securely.

        -

        Step 2: Enable unknown sources on your device

        -

        The next step is to enable unknown sources on your device. This is necessary because FIFA World Cup 2022 Mod APK is not an official app from Google Play Store, so you need to allow your device to install apps from other sources. To do this, go to Settings > Security > Unknown Sources and toggle it on.

        -

        Step 3: Install the APK file and launch the game

        -

        The final step is to install the APK file and launch the game. To do this, locate the downloaded APK file on your device and tap on it. Follow the instructions on the screen to complete the installation process. Once done, open the game and enjoy FIFA World Cup 2022 Mod APK.

        -

        Conclusion

        -

        FIFA World Cup 2022 Mod APK is a great way to enjoy FIFA Mobile with unlimited money, unlocked all players, teams, and stadiums, menu mod with various options, high-quality graphics and sound, FIFA World Cup 2022 mode and other modes. It is easy to download and install on your Android device, and it requires no root access or license verification. If you are a fan of soccer games, you should definitely try FIFA World Cup 2022 Mod APK.

        -

        If you liked this article, please share it with your friends and family who might be interested in FIFA World Cup 2022 Mod APK. Also, let us know what you think about this mod apk in the comments section below. Thank you for reading!

        -

        FAQs

        -
          -
        • Q: Is FIFA World Cup 2022 Mod APK safe to use?
        • A: Yes, FIFA World Cup 2022 Mod APK is safe to use as long as you download it from a trusted source like this one. It does not contain any viruses or malware that can harm your device or data.
        • Q: Do I need to root my device to use FIFA World Cup 2022 Mod APK?
        • A: No, you don't need to root your device to use FIFA World Cup 2022 Mod APK. It works fine on any Android device that meets the minimum requirements of the game.
        • Q: How can I update FIFA World Cup 2022 Mod APK?
        • A: To update FIFA World Cup 2022 Mod APK, download the latest version of the APK file from the same source you downloaded it from. Then, uninstall the previous version of the game and install the new one. You don't need to worry about losing your progress or data, as they are stored on your device.
        • Q: Can I play online with FIFA World Cup 2022 Mod APK?
        • A: Yes, you can play online with FIFA World Cup 2022 Mod APK, as long as you have a stable internet connection and a valid EA account. However, you might face issues or bans if you use the mod apk in competitive modes or events. We recommend using the mod apk for offline or casual modes only.
        • Q: What are some alternatives to FIFA World Cup 2022 Mod APK?
        • A: If you are looking for alternatives to FIFA World Cup 2022 Mod APK, you can try these games:
          • Dream League Soccer 2022: This is another popular soccer game that lets you build your dream team and compete in various leagues and tournaments. It has realistic graphics, smooth gameplay, and a lot of customization options.
          • PES 2022: This is the official mobile game of Pro Evolution Soccer, one of the biggest rivals of FIFA. It features licensed players, teams, and stadiums, as well as realistic physics and animations. It also has various modes, such as Matchday, Master League, and MyClub.
          • Score! Hero 2: This is a unique soccer game that focuses on the story and career of a single player. You can create your own character and guide him through different challenges and scenarios. You can also customize your appearance, skills, and equipment.
          -

        -
        -
\ No newline at end of file
diff --git a/spaces/sklearn-docs/A_demo_of_the_Spectral_Co-Clustering_algorithm/app.py b/spaces/sklearn-docs/A_demo_of_the_Spectral_Co-Clustering_algorithm/app.py
deleted file mode 100644
index 390b98d64e0f13192c204c2e9b60c6f6af1c637c..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/A_demo_of_the_Spectral_Co-Clustering_algorithm/app.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import numpy as np
-import gradio as gr
-
-from sklearn.datasets import make_biclusters
-from sklearn.cluster import SpectralCoclustering
-from sklearn.metrics import consensus_score
-
-import plotly.express as px
-
-
-score = [0.0]
-
-
-def dataset(n_clusters=5, noise=5, n_rows=300, n_cols=300):
-    data, rows, columns = make_biclusters(
-        shape=(n_rows, n_cols),
-        n_clusters=n_clusters,
-        noise=noise,
-        shuffle=False,
-        random_state=0,
-    )
-
-    fig = px.imshow(data, title="Original Data")
-    return fig
-
-
-def shuffle_dataset(n_clusters=5, noise=5, n_rows=300, n_cols=300):
-    data, rows, columns = make_biclusters(
-        shape=(n_rows, n_cols),
-        n_clusters=n_clusters,
-        noise=noise,
-        shuffle=False,
-        random_state=0,
-    )
-    rng = np.random.RandomState(0)
-    row_idx = rng.permutation(data.shape[0])
-    col_idx = rng.permutation(data.shape[1])
-    data = data[row_idx][:, col_idx]
-    fig = px.imshow(data, title="Shuffled Data")
-    return fig
-
-
-def model_fit(n_cluster, noise, n_rows, n_cols, n_clusters, svd_method):
-    data, rows, columns = make_biclusters(
-        shape=(n_rows, n_cols),
-        n_clusters=n_cluster,
-        noise=noise,
-        shuffle=False,
-        random_state=0,
-    )
-    fig_original = px.imshow(data, title="Original Data")
-    rng = np.random.RandomState(0)
-    row_idx = rng.permutation(data.shape[0])
-    col_idx = rng.permutation(data.shape[1])
-    data = data[row_idx][:, col_idx]
-    fig_shuffled = px.imshow(data, title="Shuffled Data")
-    model = SpectralCoclustering(
-        n_clusters=n_clusters, random_state=0, svd_method=svd_method
-    )
-    model.fit(data)
-    score.append(
-        consensus_score(model.biclusters_, (rows[:, row_idx], columns[:, col_idx]))
-    )
-    fit_data = data[np.argsort(model.row_labels_)]
-    fit_data = fit_data[:, np.argsort(model.column_labels_)].T
-    fig = px.imshow(fit_data, title="After Co-Clustering")
-    return fig_original, fig_shuffled, fig
-
-
-def get_score():
-    return f"{score[-1]:.3f}"
-
-
-with gr.Blocks() as demo:
-    gr.Markdown("## Spectral Co-Clustering")
-    gr.Markdown(
-        "Demo is based on the [Spectral Co-Clustering](https://scikit-learn.org/stable/auto_examples/bicluster/plot_spectral_coclustering.html) example from scikit-learn. The goal of co-clustering is to find subgroups of rows and columns that are highly correlated. The data is first shuffled, then the rows and columns are reordered to match the biclusters. The consensus score is a measure of how well the biclusters found by the model match the true biclusters. The score is between 0 and 1, with 1 being a perfect match."
-    )
-
-    with gr.Tab("Data"):
-        gr.Markdown("## Play with the parameters to see how the data changes")
-        gr.Markdown("### Parameters")
-        with gr.Row():
-            n_rows = gr.Slider(1, 500, label="Number of Rows", value=300, step=1)
-            n_cols = gr.Slider(1, 500, label="Number of Columns", value=300, step=1)
-            n_cluster = gr.Slider(1, 50, label="Number of Clusters", value=5, step=1)
-            noise = gr.Slider(0, 10, label="Noise", value=5, step=1)
-        with gr.Row():
-            gen_btn = gr.Button("Generate Data")
-            shu_btn = gr.Button("Shuffle Data")
-        with gr.Row():
-            gen_btn.click(
-                fn=dataset, inputs=[n_cluster, noise, n_rows, n_cols], outputs=gr.Plot()
-            )
-            shu_btn.click(
-                fn=shuffle_dataset,
-                inputs=[n_cluster, noise, n_rows, n_cols],
-                outputs=gr.Plot(),
-            )
-
-    with gr.Tab("Model"):
-        gr.Markdown("## Model")
-        gr.Markdown("### Data Parameters")
-        with gr.Row():
-            n_rows = gr.Slider(1, 500, label="Number of Rows", value=300, step=1)
-            n_cols = gr.Slider(1, 500, label="Number of Columns", value=300, step=1)
-            n_cluster = gr.Slider(1, 50, label="Number of Clusters", value=5, step=1)
-            noise = gr.Slider(0, 10, label="Noise", value=5, step=1)
-        gr.Markdown("### Model Parameters")
-        with gr.Row():
-            n_clusters = gr.Slider(1, 50, label="Number of Clusters", value=5, step=1)
-            svd_method = gr.Dropdown(
-                ["randomized", "arpack"], label="SVD Method", value="randomized"
-            )
-        model_btn = gr.Button("Fit Model")
-        with gr.Row():
-            model_btn.click(
-                fn=model_fit,
-                inputs=[n_cluster, noise, n_rows, n_cols, n_clusters, svd_method],
-                outputs=[gr.Plot(), gr.Plot(), gr.Plot()],
-            )
-        gr.Markdown("### Consensus Score")
-        score_btn = gr.Button("Get Score")
-        with gr.Row():
-            score_btn.click(fn=get_score, outputs=gr.Text())
-
-
-demo.launch()
diff --git a/spaces/sklearn-docs/Lasso-dense-sparse-data/app.py b/spaces/sklearn-docs/Lasso-dense-sparse-data/app.py
deleted file mode 100644
index 9703ecaa2142d4638c092df4651974553e3b3bab..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/Lasso-dense-sparse-data/app.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import gradio as gr
-from time import time
-from scipy import sparse
-from scipy import linalg
-
-from sklearn.datasets import make_regression
-from sklearn.linear_model import Lasso
-
-
-def load_dataset():
-    X, y = make_regression(n_samples=200, n_features=5000, random_state=0)
-    # create a copy of X in sparse format
-    X_sp = sparse.coo_matrix(X)
-    return X, X_sp, y
-
-
-def compare_lasso_dense():
-    alpha_dense = 1
-    alpha_sparse = 0.1
-    sparse_lasso = Lasso(alpha=alpha_sparse, fit_intercept=False, max_iter=1000)
-    dense_lasso = Lasso(alpha=alpha_dense, fit_intercept=False, max_iter=1000)
-
-    t0 = time()
-    sparse_lasso.fit(X_sp, y)
-    elapse1 = time() - t0
-
-    t1 = time()
-    dense_lasso.fit(X, y)
-    elapse2 = time() - t1
-
-    # compare the regression coefficients
-    coeff_diff = linalg.norm(sparse_lasso.coef_ - dense_lasso.coef_)
-    return (
-        f"Sparse Lasso done in {elapse1:.3f}s\t\n"
-        + f"Dense Lasso done in {elapse2:.3f}s\t\n"
-        + f"Distance between coefficients : {coeff_diff:.2e}\t\n"
-    )
-
-
-def compare_lasso_sparse():
-    # make a copy of the previous data
-    Xs = X.copy()
-    # make Xs sparse by replacing the values lower than 2.5 with 0s
-    Xs[Xs < 2.5] = 0.0
-    # create a copy of Xs in sparse format
-    Xs_sp = sparse.coo_matrix(Xs)
-    Xs_sp = Xs_sp.tocsc()
-
-    # compute the proportion of non-zero coefficients in the data matrix
-    matrix_density = Xs_sp.nnz / float(X.size) * 100
-
-    alpha_dense = 1
-    alpha_sparse = 0.1
-    sparse_lasso = Lasso(alpha=alpha_sparse, fit_intercept=False, max_iter=1000)
-    dense_lasso = Lasso(alpha=alpha_dense, fit_intercept=False, max_iter=1000)
-
-    t0 = time()
-    sparse_lasso.fit(Xs_sp, y)
-    elapses1 = time() - t0
-
-    t1 = time()
-    dense_lasso.fit(Xs, y)
-    elapses2 = time() - t1
-
-    # compare the regression coefficients
-    coeff_diff = linalg.norm(sparse_lasso.coef_ - dense_lasso.coef_)
-    return (
-        f"Matrix density : {matrix_density:.3f}%\t\n"
-        + f"Sparse Lasso done in {elapses1:.3f}s\t\n"
-        + f"Dense Lasso done in {elapses2:.3f}s\t\n"
-        + f"Distance between coefficients : {coeff_diff:.2e}\t\n"
-    )
-
-
-X, X_sp, y = load_dataset()
-
-
-title = "Lasso on Dense and Sparse data"
-info = '''**Comparing the two Lasso implementations on Dense data**
-We create a linear regression problem that is suitable for the Lasso, that is to say, with more features than samples.
-We then store the data matrix in both dense (the usual) and sparse format, and train a Lasso on each.
-We compute the runtime of both and check that they learned the same model by
-computing the Euclidean norm of the difference between the coefficients they learned.
-Because the data is dense, we expect better runtime with a dense data format.
-'''
-
-info2 = '''**Comparing the two Lasso implementations on Sparse data**
-We make the previous problem sparse by replacing all small values with 0
-and run the same comparisons as above. Because the data is now sparse,
-we expect the implementation that uses the sparse data format to be faster.
-'''
-
-conclusion = '''**Conclusion**
-We show that linear_model.Lasso provides the same results for dense and sparse data and that in the case of sparse data the speed is improved.
-''' -with gr.Blocks() as demo: - gr.Markdown(f"# {title}") - gr.Markdown(info) - - txt_3 = gr.Textbox(value="", label="Dense Lasso comparison") - btn = gr.Button(value="Dense Lasso comparison") - btn.click(compare_lasso_dense, outputs=[txt_3]) - - gr.Markdown(info2) - - txt_4 = gr.Textbox(value="", label="Sparse Lasso comparison") - btn = gr.Button(value="Sparse Lasso comparison") - btn.click(compare_lasso_sparse, outputs=[txt_4]) - - gr.Markdown(conclusion) - - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/smfry010/text-to-image/README.md b/spaces/smfry010/text-to-image/README.md deleted file mode 100644 index 74ada3c7c0e09cd4511a44f3f963c7c91037417c..0000000000000000000000000000000000000000 --- a/spaces/smfry010/text-to-image/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text To Image -emoji: 🏃 -colorFrom: purple -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/smjain/smjainvoice/starganv2vc_paddle/Utils/ASR/models.py b/spaces/smjain/smjainvoice/starganv2vc_paddle/Utils/ASR/models.py deleted file mode 100644 index cc628aedd70d68ed1e95e53c25ba3b7ff0ac3a36..0000000000000000000000000000000000000000 --- a/spaces/smjain/smjainvoice/starganv2vc_paddle/Utils/ASR/models.py +++ /dev/null @@ -1,187 +0,0 @@ -import math -import paddle -from paddle import nn -from paddle.nn import TransformerEncoder -import paddle.nn.functional as F -from .layers import MFCC, Attention, LinearNorm, ConvNorm, ConvBlock - -class ASRCNN(nn.Layer): - def __init__(self, - input_dim=80, - hidden_dim=256, - n_token=35, - n_layers=6, - token_embedding_dim=256, - - ): - super().__init__() - self.n_token = n_token - self.n_down = 1 - self.to_mfcc = MFCC() - self.init_cnn = ConvNorm(input_dim//2, hidden_dim, kernel_size=7, padding=3, stride=2) - self.cnns = nn.Sequential( - *[nn.Sequential( - ConvBlock(hidden_dim), - nn.GroupNorm(num_groups=1, num_channels=hidden_dim) - ) for n in range(n_layers)]) - self.projection = ConvNorm(hidden_dim, hidden_dim // 2) - self.ctc_linear = nn.Sequential( - LinearNorm(hidden_dim//2, hidden_dim), - nn.ReLU(), - LinearNorm(hidden_dim, n_token)) - self.asr_s2s = ASRS2S( - embedding_dim=token_embedding_dim, - hidden_dim=hidden_dim//2, - n_token=n_token) - - def forward(self, x, src_key_padding_mask=None, text_input=None): - x = self.to_mfcc(x) - x = self.init_cnn(x) - x = self.cnns(x) - x = self.projection(x) - x = x.transpose([0, 2, 1]) - ctc_logit = self.ctc_linear(x) - if text_input is not None: - _, s2s_logit, s2s_attn = self.asr_s2s(x, src_key_padding_mask, text_input) - return ctc_logit, s2s_logit, s2s_attn - else: - return ctc_logit - - def get_feature(self, x): - x = self.to_mfcc(x.squeeze(1)) - x = self.init_cnn(x) - x = self.cnns(x) - x = self.projection(x) - return x - - def length_to_mask(self, lengths): - mask = paddle.arange(lengths.max()).unsqueeze(0).expand((lengths.shape[0], -1)).astype(lengths.dtype) - mask = paddle.greater_than(mask+1, lengths.unsqueeze(1)) - return mask - - def get_future_mask(self, out_length, unmask_future_steps=0): - """ - Args: - out_length (int): returned mask shape is (out_length, out_length). - unmask_futre_steps (int): unmasking future step size. 
- Return: - mask (paddle.BoolTensor): mask future timesteps mask[i, j] = True if i > j + unmask_future_steps else False - """ - index_tensor = paddle.arange(out_length).unsqueeze(0).expand([out_length, -1]) - mask = paddle.greater_than(index_tensor, index_tensor.T + unmask_future_steps) - return mask - -class ASRS2S(nn.Layer): - def __init__(self, - embedding_dim=256, - hidden_dim=512, - n_location_filters=32, - location_kernel_size=63, - n_token=40): - super(ASRS2S, self).__init__() - self.embedding = nn.Embedding(n_token, embedding_dim) - val_range = math.sqrt(6 / hidden_dim) - nn.initializer.Uniform(-val_range, val_range)(self.embedding.weight) - - self.decoder_rnn_dim = hidden_dim - self.project_to_n_symbols = nn.Linear(self.decoder_rnn_dim, n_token) - self.attention_layer = Attention( - self.decoder_rnn_dim, - hidden_dim, - hidden_dim, - n_location_filters, - location_kernel_size - ) - self.decoder_rnn = nn.LSTMCell(self.decoder_rnn_dim + embedding_dim, self.decoder_rnn_dim) - self.project_to_hidden = nn.Sequential( - LinearNorm(self.decoder_rnn_dim * 2, hidden_dim), - nn.Tanh()) - self.sos = 1 - self.eos = 2 - - def initialize_decoder_states(self, memory, mask): - """ - moemory.shape = (B, L, H) = (Batchsize, Maxtimestep, Hiddendim) - """ - B, L, H = memory.shape - self.decoder_hidden = paddle.zeros((B, self.decoder_rnn_dim)).astype(memory.dtype) - self.decoder_cell = paddle.zeros((B, self.decoder_rnn_dim)).astype(memory.dtype) - self.attention_weights = paddle.zeros((B, L)).astype(memory.dtype) - self.attention_weights_cum = paddle.zeros((B, L)).astype(memory.dtype) - self.attention_context = paddle.zeros((B, H)).astype(memory.dtype) - self.memory = memory - self.processed_memory = self.attention_layer.memory_layer(memory) - self.mask = mask - self.unk_index = 3 - self.random_mask = 0.1 - - def forward(self, memory, memory_mask, text_input): - """ - moemory.shape = (B, L, H) = (Batchsize, Maxtimestep, Hiddendim) - moemory_mask.shape = (B, L, ) - texts_input.shape = (B, T) - """ - self.initialize_decoder_states(memory, memory_mask) - # text random mask - random_mask = (paddle.rand(text_input.shape) < self.random_mask) - _text_input = text_input.clone() - _text_input[:] = paddle.where(random_mask, paddle.full(_text_input.shape, self.unk_index, _text_input.dtype), _text_input) - decoder_inputs = self.embedding(_text_input).transpose([1, 0, 2]) # -> [T, B, channel] - start_embedding = self.embedding( - paddle.to_tensor([self.sos]*decoder_inputs.shape[1], dtype=paddle.long)) - decoder_inputs = paddle.concat((start_embedding.unsqueeze(0), decoder_inputs), axis=0) - - hidden_outputs, logit_outputs, alignments = [], [], [] - while len(hidden_outputs) < decoder_inputs.shape[0]: - - decoder_input = decoder_inputs[len(hidden_outputs)] - hidden, logit, attention_weights = self.decode(decoder_input) - hidden_outputs += [hidden] - logit_outputs += [logit] - alignments += [attention_weights] - - hidden_outputs, logit_outputs, alignments = \ - self.parse_decoder_outputs( - hidden_outputs, logit_outputs, alignments) - - return hidden_outputs, logit_outputs, alignments - - - def decode(self, decoder_input): - - cell_input = paddle.concat((decoder_input, self.attention_context), -1) - self.decoder_rnn.flatten_parameters() - self.decoder_hidden, self.decoder_cell = self.decoder_rnn( - cell_input, - (self.decoder_hidden, self.decoder_cell)) - - attention_weights_cat = paddle.concat( - (self.attention_weights.unsqueeze(1), - self.attention_weights_cum.unsqueeze(1)),axis=1) - - self.attention_context, 
self.attention_weights = self.attention_layer( - self.decoder_hidden, - self.memory, - self.processed_memory, - attention_weights_cat, - self.mask) - - self.attention_weights_cum += self.attention_weights - - hidden_and_context = paddle.concat((self.decoder_hidden, self.attention_context), -1) - hidden = self.project_to_hidden(hidden_and_context) - - # dropout to increasing g - logit = self.project_to_n_symbols(F.dropout(hidden, 0.5, self.training)) - - return hidden, logit, self.attention_weights - - def parse_decoder_outputs(self, hidden, logit, alignments): - - # -> [B, T_out + 1, max_time] - alignments = paddle.stack(alignments).transpose([1,0,2]) - # [T_out + 1, B, n_symbols] -> [B, T_out + 1, n_symbols] - logit = paddle.stack(logit).transpose([1,0,2]) - hidden = paddle.stack(hidden).transpose([1,0,2]) - - return hidden, logit, alignments diff --git a/spaces/spring-chatbot/customer-service-assistant/README.md b/spaces/spring-chatbot/customer-service-assistant/README.md deleted file mode 100644 index 0175e1884508205ba995939f04495e5c0ed1d124..0000000000000000000000000000000000000000 --- a/spaces/spring-chatbot/customer-service-assistant/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Customer Service Assistant -emoji: 💻 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/roberta/wsc/wsc_utils.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/roberta/wsc/wsc_utils.py deleted file mode 100644 index da6ba74383a2490e1108609f315f44ad4b3bf002..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/roberta/wsc/wsc_utils.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import json -from functools import lru_cache - - -def convert_sentence_to_json(sentence): - if "_" in sentence: - prefix, rest = sentence.split("_", 1) - query, rest = rest.split("_", 1) - query_index = len(prefix.rstrip().split(" ")) - else: - query, query_index = None, None - - prefix, rest = sentence.split("[", 1) - pronoun, rest = rest.split("]", 1) - pronoun_index = len(prefix.rstrip().split(" ")) - - sentence = sentence.replace("_", "").replace("[", "").replace("]", "") - - return { - "idx": 0, - "text": sentence, - "target": { - "span1_index": query_index, - "span1_text": query, - "span2_index": pronoun_index, - "span2_text": pronoun, - }, - } - - -def extended_noun_chunks(sentence): - noun_chunks = {(np.start, np.end) for np in sentence.noun_chunks} - np_start, cur_np = 0, "NONE" - for i, token in enumerate(sentence): - np_type = token.pos_ if token.pos_ in {"NOUN", "PROPN"} else "NONE" - if np_type != cur_np: - if cur_np != "NONE": - noun_chunks.add((np_start, i)) - if np_type != "NONE": - np_start = i - cur_np = np_type - if cur_np != "NONE": - noun_chunks.add((np_start, len(sentence))) - return [sentence[s:e] for (s, e) in sorted(noun_chunks)] - - -def find_token(sentence, start_pos): - found_tok = None - for tok in sentence: - if tok.idx == start_pos: - found_tok = tok - break - return found_tok - - -def find_span(sentence, search_text, start=0): - search_text = search_text.lower() - for tok in sentence[start:]: - remainder = sentence[tok.i :].text.lower() - if remainder.startswith(search_text): - len_to_consume = len(search_text) - start_idx = tok.idx - for next_tok in sentence[tok.i :]: - end_idx = next_tok.idx + len(next_tok.text) - if end_idx - start_idx == len_to_consume: - span = sentence[tok.i : next_tok.i + 1] - return span - return None - - -@lru_cache(maxsize=1) -def get_detokenizer(): - from sacremoses import MosesDetokenizer - - detok = MosesDetokenizer(lang="en") - return detok - - -@lru_cache(maxsize=1) -def get_spacy_nlp(): - import en_core_web_lg - - nlp = en_core_web_lg.load() - return nlp - - -def jsonl_iterator(input_fname, positive_only=False, ngram_order=3, eval=False): - detok = get_detokenizer() - nlp = get_spacy_nlp() - - with open(input_fname) as fin: - for line in fin: - sample = json.loads(line.strip()) - - if positive_only and "label" in sample and not sample["label"]: - # only consider examples where the query is correct - continue - - target = sample["target"] - - # clean up the query - query = target["span1_text"] - if query is not None: - if "\n" in query: - continue - if query.endswith(".") or query.endswith(","): - query = query[:-1] - - # split tokens - tokens = sample["text"].split(" ") - - def strip_pronoun(x): - return x.rstrip('.,"') - - # find the pronoun - pronoun_idx = target["span2_index"] - pronoun = strip_pronoun(target["span2_text"]) - if strip_pronoun(tokens[pronoun_idx]) != pronoun: - # hack: sometimes the index is misaligned - if strip_pronoun(tokens[pronoun_idx + 1]) == pronoun: - pronoun_idx += 1 - else: - raise Exception("Misaligned pronoun!") - assert strip_pronoun(tokens[pronoun_idx]) == pronoun - - # split tokens before and after the pronoun - before = tokens[:pronoun_idx] - after = tokens[pronoun_idx + 1 :] - - # the GPT BPE attaches leading spaces to tokens, so we keep track - # of whether we need spaces before or after the pronoun - leading_space = " " if pronoun_idx > 0 else "" - trailing_space = " " if len(after) > 0 else "" - - # detokenize - before = detok.detokenize(before, return_str=True) - pronoun = 
detok.detokenize([pronoun], return_str=True) - after = detok.detokenize(after, return_str=True) - - # hack: when the pronoun ends in a period (or comma), move the - # punctuation to the "after" part - if pronoun.endswith(".") or pronoun.endswith(","): - after = pronoun[-1] + trailing_space + after - pronoun = pronoun[:-1] - - # hack: when the "after" part begins with a comma or period, remove - # the trailing space - if after.startswith(".") or after.startswith(","): - trailing_space = "" - - # parse sentence with spacy - sentence = nlp(before + leading_space + pronoun + trailing_space + after) - - # find pronoun span - start = len(before + leading_space) - first_pronoun_tok = find_token(sentence, start_pos=start) - pronoun_span = find_span(sentence, pronoun, start=first_pronoun_tok.i) - assert pronoun_span.text == pronoun - - if eval: - # convert to format where pronoun is surrounded by "[]" and - # query is surrounded by "_" - query_span = find_span(sentence, query) - query_with_ws = "_{}_{}".format( - query_span.text, - (" " if query_span.text_with_ws.endswith(" ") else ""), - ) - pronoun_with_ws = "[{}]{}".format( - pronoun_span.text, - (" " if pronoun_span.text_with_ws.endswith(" ") else ""), - ) - if query_span.start < pronoun_span.start: - first = (query_span, query_with_ws) - second = (pronoun_span, pronoun_with_ws) - else: - first = (pronoun_span, pronoun_with_ws) - second = (query_span, query_with_ws) - sentence = ( - sentence[: first[0].start].text_with_ws - + first[1] - + sentence[first[0].end : second[0].start].text_with_ws - + second[1] - + sentence[second[0].end :].text - ) - yield sentence, sample.get("label", None) - else: - yield sentence, pronoun_span, query, sample.get("label", None) - - -def winogrande_jsonl_iterator(input_fname, eval=False): - with open(input_fname) as fin: - for line in fin: - sample = json.loads(line.strip()) - sentence, option1, option2 = ( - sample["sentence"], - sample["option1"], - sample["option2"], - ) - - pronoun_span = (sentence.index("_"), sentence.index("_") + 1) - - if eval: - query, cand = option1, option2 - else: - query = option1 if sample["answer"] == "1" else option2 - cand = option2 if sample["answer"] == "1" else option1 - yield sentence, pronoun_span, query, cand - - -def filter_noun_chunks( - chunks, exclude_pronouns=False, exclude_query=None, exact_match=False -): - if exclude_pronouns: - chunks = [ - np - for np in chunks - if (np.lemma_ != "-PRON-" and not all(tok.pos_ == "PRON" for tok in np)) - ] - - if exclude_query is not None: - excl_txt = [exclude_query.lower()] - filtered_chunks = [] - for chunk in chunks: - lower_chunk = chunk.text.lower() - found = False - for excl in excl_txt: - if ( - not exact_match and (lower_chunk in excl or excl in lower_chunk) - ) or lower_chunk == excl: - found = True - break - if not found: - filtered_chunks.append(chunk) - chunks = filtered_chunks - - return chunks diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq_cli/train.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq_cli/train.py deleted file mode 100644 index 83475873138c5d1bac288c234afb6b4a1a7882d7..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq_cli/train.py +++ /dev/null @@ -1,514 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-""" -Train a new model on one or across multiple GPUs. -""" - -import argparse -import logging -import math -import os -import sys -from typing import Dict, Optional, Any, List, Tuple, Callable - -# We need to setup root logger before importing any fairseq libraries. -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.train") - -import numpy as np -import torch -from fairseq import ( - checkpoint_utils, - options, - quantization_utils, - tasks, - utils, -) -from fairseq.data import iterators, data_utils -from fairseq.data.plasma_utils import PlasmaStore -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import fsdp_enable_wrap, fsdp_wrap, utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics, progress_bar -from fairseq.model_parallel.megatron_trainer import MegatronTrainer -from fairseq.trainer import Trainer -from omegaconf import DictConfig, OmegaConf - - - - -def main(cfg: FairseqConfig) -> None: - if isinstance(cfg, argparse.Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - utils.import_user_module(cfg.common) - - if distributed_utils.is_master(cfg.distributed_training) and "job_logging_cfg" in cfg: - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - logging.config.dictConfig(OmegaConf.to_container(cfg.job_logging_cfg)) - - assert ( - cfg.dataset.max_tokens is not None or cfg.dataset.batch_size is not None - ), "Must specify batch size either with --max-tokens or --batch-size" - metrics.reset() - - if cfg.common.log_file is not None: - handler = logging.FileHandler(filename=cfg.common.log_file) - logger.addHandler(handler) - - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - if distributed_utils.is_master(cfg.distributed_training): - checkpoint_utils.verify_checkpoint_directory(cfg.checkpoint.save_dir) - - # Print args - logger.info(cfg) - - if cfg.checkpoint.write_checkpoints_asynchronously: - try: - import iopath # noqa: F401 - except ImportError: - logging.exception( - "Asynchronous checkpoint writing is specified but iopath is " - "not installed: `pip install iopath`" - ) - return - - # Setup task, e.g., translation, language modeling, etc. - task = tasks.setup_task(cfg.task) - - assert cfg.criterion, "Please specify criterion to train a model" - - # Build model and criterion - if cfg.distributed_training.ddp_backend == "fully_sharded": - with fsdp_enable_wrap(cfg.distributed_training): - model = fsdp_wrap(task.build_model(cfg.model)) - else: - model = task.build_model(cfg.model) - criterion = task.build_criterion(cfg.criterion) - logger.info(model) - logger.info("task: {}".format(task.__class__.__name__)) - logger.info("model: {}".format(model.__class__.__name__)) - logger.info("criterion: {}".format(criterion.__class__.__name__)) - logger.info( - "num. shared model params: {:,} (num. trained: {:,})".format( - sum(p.numel() for p in model.parameters() if not getattr(p, "expert", False)), - sum(p.numel() for p in model.parameters() if not getattr(p, "expert", False) and p.requires_grad) - ) - ) - - logger.info( - "num. expert model params: {} (num. 
trained: {})".format( - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False)), - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False) and p.requires_grad), - ) - ) - - # Load valid dataset (we load training data below, based on the latest checkpoint) - # We load the valid dataset AFTER building the model - data_utils.raise_if_valid_subsets_unintentionally_ignored(cfg) - if cfg.dataset.combine_valid_subsets: - task.load_dataset("valid", combine=True, epoch=1) - else: - for valid_sub_split in cfg.dataset.valid_subset.split(","): - task.load_dataset(valid_sub_split, combine=False, epoch=1) - - # (optionally) Configure quantization - if cfg.common.quantization_config_path is not None: - quantizer = quantization_utils.Quantizer( - config_path=cfg.common.quantization_config_path, - max_epoch=cfg.optimization.max_epoch, - max_update=cfg.optimization.max_update, - ) - else: - quantizer = None - - # Build trainer - if cfg.common.model_parallel_size == 1: - trainer = Trainer(cfg, task, model, criterion, quantizer) - else: - trainer = MegatronTrainer(cfg, task, model, criterion) - logger.info( - "training on {} devices (GPUs/TPUs)".format( - cfg.distributed_training.distributed_world_size - ) - ) - logger.info( - "max tokens per device = {} and max sentences per device = {}".format( - cfg.dataset.max_tokens, - cfg.dataset.batch_size, - ) - ) - - # Load the latest checkpoint if one is available and restore the - # corresponding train iterator - extra_state, epoch_itr = checkpoint_utils.load_checkpoint( - cfg.checkpoint, - trainer, - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - if cfg.common.tpu: - import torch_xla.core.xla_model as xm - xm.rendezvous("load_checkpoint") # wait for all workers - - max_epoch = cfg.optimization.max_epoch or math.inf - lr = trainer.get_lr() - - train_meter = meters.StopwatchMeter() - train_meter.start() - while epoch_itr.next_epoch_idx <= max_epoch: - if lr <= cfg.optimization.stop_min_lr: - logger.info( - f"stopping training because current learning rate ({lr}) is smaller " - "than or equal to minimum learning rate " - f"(--stop-min-lr={cfg.optimization.stop_min_lr})" - ) - break - - # train for one epoch - valid_losses, should_stop = train(cfg, trainer, task, epoch_itr) - if should_stop: - break - - # only use first validation loss to update the learning rate - lr = trainer.lr_step(epoch_itr.epoch, valid_losses[0]) - - epoch_itr = trainer.get_train_iterator( - epoch_itr.next_epoch_idx, - # sharded data: get train iterator for next epoch - load_dataset=task.has_sharded_data("train"), - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - train_meter.stop() - logger.info("done training in {:.1f} seconds".format(train_meter.sum)) - - # ioPath implementation to wait for all asynchronous file writes to complete. - if cfg.checkpoint.write_checkpoints_asynchronously: - logger.info( - "ioPath PathManager waiting for all asynchronous checkpoint " - "writes to finish." 
- ) - PathManager.async_close() - logger.info("ioPath PathManager finished waiting.") - - -def should_stop_early(cfg: DictConfig, valid_loss: float) -> bool: - # skip check if no validation was done in the current epoch - if valid_loss is None: - return False - if cfg.checkpoint.patience <= 0: - return False - - def is_better(a, b): - return a > b if cfg.checkpoint.maximize_best_checkpoint_metric else a < b - - prev_best = getattr(should_stop_early, "best", None) - if prev_best is None or is_better(valid_loss, prev_best): - should_stop_early.best = valid_loss - should_stop_early.num_runs = 0 - return False - else: - should_stop_early.num_runs += 1 - if should_stop_early.num_runs >= cfg.checkpoint.patience: - logger.info( - "early stop since valid performance hasn't improved for last {} runs".format( - cfg.checkpoint.patience - ) - ) - return True - else: - return False - - -@metrics.aggregate("train") -def train( - cfg: DictConfig, trainer: Trainer, task: tasks.FairseqTask, epoch_itr -) -> Tuple[List[Optional[float]], bool]: - """Train the model for one epoch and return validation losses.""" - # Initialize data iterator - itr = epoch_itr.next_epoch_itr( - fix_batches_to_gpus=cfg.distributed_training.fix_batches_to_gpus, - shuffle=(epoch_itr.next_epoch_idx > cfg.dataset.curriculum), - ) - update_freq = ( - cfg.optimization.update_freq[epoch_itr.epoch - 1] - if epoch_itr.epoch <= len(cfg.optimization.update_freq) - else cfg.optimization.update_freq[-1] - ) - itr = iterators.GroupedIterator(itr, update_freq) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_file=cfg.common.log_file, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - azureml_logging=( - cfg.common.azureml_logging - if distributed_utils.is_master(cfg.distributed_training) - else False - ), - ) - progress.update_config(_flatten_config(cfg)) - - trainer.begin_epoch(epoch_itr.epoch) - - valid_subsets = cfg.dataset.valid_subset.split(",") - should_stop = False - num_updates = trainer.get_num_updates() - logger.info("Start iterating over samples") - for i, samples in enumerate(progress): - with metrics.aggregate("train_inner"), torch.autograd.profiler.record_function( - "train_step-%d" % i - ): - log_output = trainer.train_step(samples) - - if log_output is not None: # not OOM, overflow, ... 
- # log mid-epoch stats - num_updates = trainer.get_num_updates() - if num_updates % cfg.common.log_interval == 0: - stats = get_training_stats(metrics.get_smoothed_values("train_inner")) - progress.log(stats, tag="train_inner", step=num_updates) - - # reset mid-epoch stats after each log interval - # the end-of-epoch stats will still be preserved - metrics.reset_meters("train_inner") - - end_of_epoch = not itr.has_next() - valid_losses, should_stop = validate_and_save( - cfg, trainer, task, epoch_itr, valid_subsets, end_of_epoch - ) - - if should_stop: - break - - # log end-of-epoch stats - logger.info("end of epoch {} (average epoch stats below)".format(epoch_itr.epoch)) - stats = get_training_stats(metrics.get_smoothed_values("train")) - progress.print(stats, tag="train", step=num_updates) - - # reset epoch-level meters - metrics.reset_meters("train") - return valid_losses, should_stop - - -def _flatten_config(cfg: DictConfig): - config = OmegaConf.to_container(cfg) - # remove any legacy Namespaces and replace with a single "args" - namespace = None - for k, v in list(config.items()): - if isinstance(v, argparse.Namespace): - namespace = v - del config[k] - if namespace is not None: - config["args"] = vars(namespace) - return config - - -def validate_and_save( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - valid_subsets: List[str], - end_of_epoch: bool, -) -> Tuple[List[Optional[float]], bool]: - num_updates = trainer.get_num_updates() - max_update = cfg.optimization.max_update or math.inf - - # Stopping conditions (and an additional one based on validation loss later - # on) - should_stop = False - if num_updates >= max_update: - should_stop = True - logger.info( - f"Stopping training due to " - f"num_updates: {num_updates} >= max_update: {max_update}" - ) - - training_time_hours = trainer.cumulative_training_time() / (60 * 60) - if ( - cfg.optimization.stop_time_hours > 0 - and training_time_hours > cfg.optimization.stop_time_hours - ): - should_stop = True - logger.info( - f"Stopping training due to " - f"cumulative_training_time: {training_time_hours} > " - f"stop_time_hours: {cfg.optimization.stop_time_hours} hour(s)" - ) - - do_save = ( - (end_of_epoch and epoch_itr.epoch % cfg.checkpoint.save_interval == 0) - or should_stop - or ( - cfg.checkpoint.save_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.checkpoint.save_interval_updates == 0 - and num_updates >= cfg.dataset.validate_after_updates - ) - ) - do_validate = ( - (not end_of_epoch and do_save) # validate during mid-epoch saves - or (end_of_epoch and epoch_itr.epoch % cfg.dataset.validate_interval == 0) - or should_stop - or ( - cfg.dataset.validate_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.dataset.validate_interval_updates == 0 - ) - ) and not cfg.dataset.disable_validation and num_updates >= cfg.dataset.validate_after_updates - - # Validate - valid_losses = [None] - if do_validate: - valid_losses = validate(cfg, trainer, task, epoch_itr, valid_subsets) - - should_stop |= should_stop_early(cfg, valid_losses[0]) - - # Save checkpoint - if do_save or should_stop: - checkpoint_utils.save_checkpoint( - cfg.checkpoint, trainer, epoch_itr, valid_losses[0] - ) - - return valid_losses, should_stop - - -def get_training_stats(stats: Dict[str, Any]) -> Dict[str, Any]: - stats["wall"] = round(metrics.get_meter("default", "wall").elapsed_time, 0) - return stats - - -def validate( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - 
subsets: List[str], -) -> List[Optional[float]]: - """Evaluate the model on the validation set(s) and return the losses.""" - - if cfg.dataset.fixed_validation_seed is not None: - # set fixed seed for every validation - utils.set_torch_seed(cfg.dataset.fixed_validation_seed) - - trainer.begin_valid_epoch(epoch_itr.epoch) - valid_losses = [] - for subset in subsets: - logger.info('begin validation on "{}" subset'.format(subset)) - - # Initialize data iterator - itr = trainer.get_valid_iterator(subset).next_epoch_itr( - shuffle=False, set_dataset_epoch=False # use a fixed valid set - ) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - prefix=f"valid on '{subset}' subset", - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - ) - - # create a new root metrics aggregator so validation metrics - # don't pollute other aggregators (e.g., train meters) - with metrics.aggregate(new_root=True) as agg: - for i, sample in enumerate(progress): - if cfg.dataset.max_valid_steps is not None and i > cfg.dataset.max_valid_steps: - break - trainer.valid_step(sample) - - # log validation stats - stats = get_valid_stats(cfg, trainer, agg.get_smoothed_values()) - - if hasattr(task, "post_validate"): - task.post_validate(trainer.get_model(), stats, agg) - - progress.print(stats, tag=subset, step=trainer.get_num_updates()) - - valid_losses.append(stats[cfg.checkpoint.best_checkpoint_metric]) - return valid_losses - - -def get_valid_stats( - cfg: DictConfig, trainer: Trainer, stats: Dict[str, Any] -) -> Dict[str, Any]: - stats["num_updates"] = trainer.get_num_updates() - if hasattr(checkpoint_utils.save_checkpoint, "best"): - key = "best_{0}".format(cfg.checkpoint.best_checkpoint_metric) - best_function = max if cfg.checkpoint.maximize_best_checkpoint_metric else min - stats[key] = best_function( - checkpoint_utils.save_checkpoint.best, - stats[cfg.checkpoint.best_checkpoint_metric], - ) - return stats - - -def cli_main( - modify_parser: Optional[Callable[[argparse.ArgumentParser], None]] = None -) -> None: - parser = options.get_training_parser() - args = options.parse_args_and_arch(parser, modify_parser=modify_parser) - - cfg = convert_namespace_to_omegaconf(args) - - if cfg.common.use_plasma_view: - server = PlasmaStore(path=cfg.common.plasma_path) - logger.info(f"Started plasma server pid {server.server.pid} {cfg.common.plasma_path}") - - if args.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - # if cfg.common.use_plasma_view: - # server.server.kill() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/srush/minichain/table.py b/spaces/srush/minichain/table.py deleted file mode 100644 index 9030cbd9b94bb461d51a594717b5c9da980294b4..0000000000000000000000000000000000000000 --- a/spaces/srush/minichain/table.py +++ /dev/null @@ -1,80 +0,0 @@ -# + tags=["hide_inp"] -desc = """ -### Table - -Example of extracting tables from a textual document. 
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/srush/MiniChain/blob/master/examples/table.ipynb) - -""" -# - - -# $ -import pandas as pd -from minichain import prompt, Mock, show, OpenAI, GradioConf -import minichain -import json -import gradio as gr -import requests - -rotowire = requests.get("https://raw.githubusercontent.com/srush/text2table/main/data.json").json() -names = { - '3-pointer percentage': 'FG3_PCT', - '3-pointers attempted': 'FG3A', - '3-pointers made': 'FG3M', - 'Assists': 'AST', - 'Blocks': 'BLK', - 'Field goal percentage': 'FG_PCT', - 'Field goals attempted': 'FGA', - 'Field goals made': 'FGM', - 'Free throw percentage': 'FT_PCT', - 'Free throws attempted': 'FTA', - 'Free throws made': 'FTM', - 'Minutes played': 'MIN', - 'Personal fouls': 'PF', - 'Points': 'PTS', - 'Rebounds': 'REB', - 'Rebounds (Defensive)': 'DREB', - 'Rebounds (Offensive)': 'OREB', - 'Steals': 'STL', - 'Turnovers': 'TO' -} -# Convert an example to dataframe -def to_df(d): - players = {player for v in d.values() if v is not None for player, _ in v.items()} - lookup = {k: {a: b for a, b in v.items()} for k,v in d.items()} - rows = [dict(**{"player": p}, **{k: "_" if p not in lookup.get(k, []) else lookup[k][p] for k in names.keys()}) - for p in players] - return pd.DataFrame.from_dict(rows).astype("str").sort_values(axis=0, by="player", ignore_index=True).transpose() - - -# Make few shot examples -few_shot_examples = 2 -examples = [] -for i in range(few_shot_examples): - examples.append({"input": rotowire[i][1], - "output": to_df(rotowire[i][0][1]).transpose().set_index("player").to_csv(sep="\t")}) - -def make_html(out): - return "
        " + out.replace("\t", "").replace("\n", "
        ") + "
        " - -@prompt(OpenAI(), template_file="table.pmpt.txt", - gradio_conf=GradioConf(block_output=gr.HTML, - postprocess_output = make_html) - ) -def extract(model, passage, typ): - return model(dict(player_keys=names.items(), examples=examples, passage=passage, type=typ)) - -def run(query): - return extract(query, "Player") - -# $ - -import os -gradio = show(run, - examples = [rotowire[i][1] for i in range(50, 55)], - subprompts=[extract], - code=open("table.py" if os.path.exists("table.py") else "app.py", "r").read().split("$")[1].strip().strip("#").strip(), - out_type="markdown" - ) - -if __name__ == "__main__": - gradio.queue().launch() diff --git a/spaces/stomexserde/gpt4-ui/Download Xforce Keygen MotionBuilder 2013 ##TOP##.md b/spaces/stomexserde/gpt4-ui/Download Xforce Keygen MotionBuilder 2013 ##TOP##.md deleted file mode 100644 index 21b62f2ccd46d21b0cb13fb50c5a9f26dd608ffd..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Download Xforce Keygen MotionBuilder 2013 ##TOP##.md +++ /dev/null @@ -1,108 +0,0 @@ -## Download Xforce Keygen MotionBuilder 2013 - - - - - - - - - -**Download Xforce Keygen MotionBuilder 2013 ❤ [https://vercupalo.blogspot.com/?d=2tyUgy](https://vercupalo.blogspot.com/?d=2tyUgy)** - - - - - - - - - - - - - -# How to Download Xforce Keygen MotionBuilder 2013 for Free - - - -If you are looking for a way to download Xforce Keygen MotionBuilder 2013 for free, you have come to the right place. In this article, I will show you how to get this powerful software that can help you create stunning 3D animations and motion capture. You will also learn about the features and benefits of using MotionBuilder 2013, as well as some tips and tricks to get the most out of it. - - - -## What is Xforce Keygen MotionBuilder 2013? - - - -Xforce Keygen MotionBuilder 2013 is a crack tool that can generate a serial number and activation code for MotionBuilder 2013, a software developed by Autodesk that allows you to create, edit, and play back complex character animation. With MotionBuilder 2013, you can work with motion capture data, keyframe animation, and virtual production in real time. You can also integrate MotionBuilder 2013 with other Autodesk products such as Maya, 3ds Max, and Softimage. - - - -Xforce Keygen MotionBuilder 2013 can help you bypass the registration process and use MotionBuilder 2013 without paying any fees. However, you should be aware that using Xforce Keygen MotionBuilder 2013 is illegal and may expose you to security risks and legal consequences. Therefore, I do not recommend or endorse using Xforce Keygen MotionBuilder 2013 for any purposes. - - - -## How to Download Xforce Keygen MotionBuilder 2013 for Free? - - - -If you still want to download Xforce Keygen MotionBuilder 2013 for free, you can follow these steps: - - - -1. Download the trial version of MotionBuilder 2013 from the official Autodesk website [here](https://www.autodesk.com/products/motionbuilder/free-trial). - -2. Install MotionBuilder 2013 on your computer and run it. - -3. Download Xforce Keygen MotionBuilder 2013 from a reliable source such as [this one](https://xforcekeygen.net/xforce-keygen-motionbuilder-2013/). - -4. Extract the zip file and run the Xforce Keygen MotionBuilder 2013.exe file as administrator. - -5. Select MotionBuilder 2013 from the product list and click on Generate. - -6. Copy the serial number and paste it in the registration window of MotionBuilder 2013. - -7. Click on Next and select I have an activation code from Autodesk. - -8. 
Copy the activation code from Xforce Keygen MotionBuilder 2013 and paste it in the activation window of MotionBuilder 2013. - -9. Click on Next and enjoy using MotionBuilder 2013 for free. - - - -## What are the Features and Benefits of Using MotionBuilder 2013? - - - -MotionBuilder 2013 is a powerful software that can help you create amazing 3D animations and motion capture. Here are some of the features and benefits of using MotionBuilder 2013: - - - -- You can work with motion capture data from various sources and formats, such as optical, inertial, mechanical, or markerless systems. - -- You can edit and refine your motion capture data with tools such as retargeting, smoothing, filtering, blending, mirroring, looping, and more. - -- You can create realistic character animation with features such as inverse kinematics (IK), facial animation, body deformations, ragdoll physics, and more. - -- You can use the Story tool to create cinematic sequences with multiple cameras, transitions, effects, and audio. - -- You can use the Live device to connect your motion capture devices to MotionBuilder 2013 and stream data in real time. - -- You can use the Virtual Camera tool to control your camera movements with a gamepad or a mobile device. - -- You can use the HumanIK middleware to transfer your animation data between different Autodesk products or game engines. - -- You can use the Python scripting language to customize your workflow and automate tasks in MotionBuilder 2013. - - - -## What are Some Tips and 145887f19f - - - - - - - - - diff --git a/spaces/stomexserde/gpt4-ui/Examples/Anti Product Activation Crack Sp3 Carbon.md b/spaces/stomexserde/gpt4-ui/Examples/Anti Product Activation Crack Sp3 Carbon.md deleted file mode 100644 index d671384c3face41081dcb1cacaa5eb24ed971b46..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Anti Product Activation Crack Sp3 Carbon.md +++ /dev/null @@ -1,59 +0,0 @@ - -

        Anti Product Activation Crack Sp3 Carbon: What Is It and How to Use It?


        If you are still using Windows XP SP3 on your computer, you may have encountered a problem with product activation. Product activation is a feature that Microsoft introduced to prevent software piracy and ensure that only genuine users can access their products. However, product activation also has some drawbacks, such as requiring an internet connection, limiting the number of times you can activate your product, and forcing you to reactivate whenever you change your hardware configuration.


        In this article, we will show you how to bypass product activation using a method called anti product activation (APA), a technique that tricks Windows into thinking that it is already activated. We will also explain what anti product activation crack sp3 carbon is, how it works, what its benefits and risks are, and what some alternatives to it are.


        Anti Product Activation Crack Sp3 Carbon


        Download Zip ✯✯✯ https://urlgoal.com/2uIbxI



        Benefits of Anti Product Activation Crack Sp3 Carbon

        Anti product activation crack sp3 carbon is a file that you can download and run on your computer to bypass the product activation process of Windows XP SP3. It is also known as APA 3.0 or AntiWPA 3.0. It works by injecting code into the Windows kernel that disables the activation check and makes Windows believe that it is already activated. Here are some of the benefits of using anti product activation crack sp3 carbon:

        • It bypasses the activation process and tricks Windows into thinking that it is already activated. This means that you do not need to enter a product key, call Microsoft, or connect to the internet to activate your Windows XP SP3. You can use your Windows XP SP3 as long as you want without any restrictions or reminders.
        • It does not modify any system files or registry entries. Unlike some other methods of bypassing product activation, anti product activation crack sp3 carbon does not alter any files or settings on your computer. It only injects code into memory that runs temporarily and does not leave any traces behind. This makes it less likely to cause any conflicts or errors on your system.
        • It does not require any additional software or tools. You do not need to download or install any other programs or utilities to use anti product activation crack sp3 carbon. It is a standalone file that you run as administrator, following the instructions on the screen. It takes only a few minutes to complete the process.
        • It is compatible with all versions and editions of Windows XP SP3. Whether you have Windows XP Home, Professional, Media Center, or Tablet PC Edition, anti product activation crack sp3 carbon will work for you. It also works with all languages and service packs of Windows XP SP3. You do not need to worry about compatibility issues or errors.
        • It is easy to use and does not require any technical skills. As mentioned above, anti product activation crack sp3 carbon is a simple file that you can run as administrator and follow the instructions on the screen. You do not need to have any knowledge of coding, hacking, or cracking to use it. It is designed for anyone who wants to bypass product activation easily and quickly.

        How to Use Anti Product Activation Crack Sp3 Carbon


        If you want to use anti product activation crack sp3 carbon to bypass product activation on your Windows XP SP3, here are the steps that you need to follow:

        1. Download the anti product activation crack sp3 carbon file from a reliable source. You can find many websites that offer this file for free download, but be careful of malware or viruses that may be hidden in them. Make sure that you download the file from a trusted and reputable source, such as [this one].
        2. Extract the file to a folder on your computer. The file that you download will be in a compressed format, such as ZIP or RAR. You will need to extract it using a program such as WinRAR or 7-Zip. Once you extract it, you will see a folder named "AntiWPA" that contains two files: "AntiWPA.cmd" and "AntiWPA.dll".
        3. Run the file as administrator and follow the instructions on the screen. Right-click on the "AntiWPA.cmd" file and select "Run as administrator". A command prompt window will open and show you some information about your system and your Windows XP SP3 edition. Press any key to continue. The program will then ask you if you want to install anti product activation crack sp3 carbon on your computer. Type "Y" for yes and press enter. The program will then inject the code into the Windows kernel and show you a message saying "Done!". Press any key to exit.
        4. Restart your computer and enjoy your activated Windows XP SP3. After you exit the program, you will need to restart your computer for the changes to take effect. When your computer boots up, you will see that your Windows XP SP3 is activated and ready to use. You can check your activation status by going to Start > Run > type "oobe/msoobe /a" and press enter. You will see a message saying "Windows is already activated. Click OK to exit."

        Risks of Using Anti Product Activation Crack Sp3 Carbon


        While anti product activation crack sp3 carbon may seem like an easy and convenient way to bypass product activation on your Windows XP SP3, it also comes with some risks that you should be aware of before using it. Here are some of the potential dangers of using anti product activation crack sp3 carbon:

        • It may violate the terms and conditions of Microsoft's license agreement. By using anti product activation crack sp3 carbon, you are essentially using a pirated or counterfeit version of Windows XP SP3, which is against the rules and regulations of Microsoft. You may be liable for legal action or penalties if you are caught using or distributing this file.
        • It may expose your computer to malware or viruses. As mentioned earlier, you need to be careful of where you download the anti product activation crack sp3 carbon file from, as some websites may contain malicious software or viruses that can harm your computer. Even if you download the file from a safe source, there is no guarantee that the file itself is not infected or corrupted. You should always scan the file with an antivirus program before running it on your computer.
        • It may cause instability or performance issues on your computer. Although anti product activation crack sp3 carbon does not modify any system files or registry entries, it still injects code into the Windows kernel, which is a critical part of your operating system. This may interfere with other processes or programs that run on your computer, and cause errors, crashes, freezes, or slowdowns. You may also experience compatibility issues with some hardware or software that require product activation to function properly.
        • It may not work with future updates or patches from Microsoft. Microsoft regularly releases updates and patches for Windows XP SP3 to fix bugs, improve security, and add new features. However, these updates and patches may also detect and disable anti product activation crack sp3 carbon, and revert your Windows XP SP3 to an unactivated state. You may then need to find a new version of anti product activation crack sp3 carbon that works with the latest update or patch, or risk losing access to your Windows XP SP3.
        • It may not be legal in some countries or regions. Depending on where you live or work, using anti product activation crack sp3 carbon may be considered illegal or unethical. Some countries or regions have strict laws and regulations regarding software piracy and intellectual property rights, and may impose fines, jail time, or other sanctions for using or distributing anti product activation crack sp3 carbon. You should always check the local laws and customs before using anti product activation crack sp3 carbon on your computer.

        Alternatives to Anti Product Activation Crack Sp3 Carbon


        If you are not comfortable with using anti product activation crack sp3 carbon to bypass product activation on your Windows XP SP3, or if you want to avoid the risks and consequences of using it, you may want to consider some alternatives that are more legitimate and safer. Here are some of the options that you can choose from:

        -
          -
        • Buy a genuine product key from Microsoft or an authorized reseller. The most legal and ethical way to activate your Windows XP SP3 is to buy a genuine product key from Microsoft or an authorized reseller. A genuine product key is a 25-digit code that proves that you have purchased a valid license for Windows XP SP3. You can enter this code during the installation or activation process, and enjoy all the benefits and features of Windows XP SP3 without any limitations or hassles. You can buy a genuine product key from Microsoft's website [here], or from an authorized reseller [here].
        • -
        • Use a free, open-source operating system such as a Linux distribution like Ubuntu. If you do not want to pay for a genuine product key for Windows XP SP3, you can switch to a free, open-source operating system such as a Linux distribution like Ubuntu. These operating systems are similar to Windows in terms of functionality and appearance, but they do not require any product activation or license agreement. They are also more secure, stable, and customizable than Windows XP SP3. You can download and install Ubuntu or another distribution on your computer for free from their official websites [here] and [here].
        • -
        • Upgrade to a newer version of Windows such as Windows 10 or Windows 11. If you want to stay with Windows but do not want to use Windows XP SP3 anymore, you can upgrade to a newer version of Windows such as Windows 10 or Windows 11. These versions of Windows have more advanced features, better security, and improved performance than Windows XP SP3. They also have different methods of product activation that are more convenient and flexible than Windows XP SP3. You can buy a genuine product key for Windows 10 or Windows 11 from Microsoft's website [here] and [here], or get a free upgrade if you have a valid license for Windows 7, 8 or 8.1. You can download and install Windows 10 or Windows 11 on your computer from their official websites [here] and [here].
        • -
        -

        Conclusion

        -

        In this article, we have explained what anti product activation crack sp3 carbon is, how it works, what are its benefits and risks, and what are some alternatives to it. We have also shown you how to use anti product activation crack sp3 carbon to bypass product activation on your Windows XP SP3. We hope that this article has been helpful and informative for you.

        -

        -

        However, we also want to remind you that using anti product activation crack sp3 carbon may not be the best or the safest option for you. It may violate the terms and conditions of Microsoft's license agreement, expose your computer to malware or viruses, cause instability or performance issues on your computer, not work with future updates or patches from Microsoft, and not be legal in some countries or regions. Therefore, we recommend that you consider some of the alternatives that we have suggested, such as buying a genuine product key, using a free or open source operating system, or upgrading to a newer version of Windows.

        -

        If you have any questions or comments about anti product activation crack sp3 carbon or this article, please feel free to leave them below. We would love to hear from you and answer your queries. Thank you for reading and have a great day!

        -

        FAQs

        -

        Here are some of the frequently asked questions about anti product activation crack sp3 carbon:

        -
          -
        1. What is anti product activation crack sp3 carbon?
          Anti product activation crack sp3 carbon is a file that you can download and run on your computer to bypass the product activation process of Windows XP SP3. It works by injecting code into the Windows kernel that disables the activation check and makes Windows believe that it is already activated.
        2. How to use anti product activation crack sp3 carbon?
          To use anti product activation crack sp3 carbon, you need to download the file from a reliable source, extract it to a folder on your computer, run it as administrator, and follow the instructions on the screen. Then, you need to restart your computer and enjoy your activated Windows XP SP3.
        3. Is anti product activation crack sp3 carbon safe?
          Anti product activation crack sp3 carbon may not be safe for your computer or your legal status. It may violate the terms and conditions of Microsoft's license agreement, expose your computer to malware or viruses, cause instability or performance issues on your computer, not work with future updates or patches from Microsoft, and not be legal in some countries or regions.
        4. Is anti product activation crack sp3 carbon legal?
          Anti product activation crack sp3 carbon may not be legal in some countries or regions. Depending on where you live or work, using it may be considered illegal or unethical. Some countries or regions have strict laws and regulations regarding software piracy and intellectual property rights, and may impose fines, jail time, or other sanctions for using or distributing it.
        5. What are some alternatives to anti product activation crack sp3 carbon?
          Some of the alternatives to anti product activation crack sp3 carbon are buying a genuine product key from Microsoft or an authorized reseller, using a free, open-source operating system such as a Linux distribution like Ubuntu, or upgrading to a newer version of Windows such as Windows 10 or Windows 11.

        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Chris Brown Fortune Deluxe Download REPACK Zip.md b/spaces/stomexserde/gpt4-ui/Examples/Chris Brown Fortune Deluxe Download REPACK Zip.md deleted file mode 100644 index 4b43e1dc48a0200c9ea7fdbafc86c3a9e5ceb669..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Chris Brown Fortune Deluxe Download REPACK Zip.md +++ /dev/null @@ -1,22 +0,0 @@ -
        - -

        How to Download Chris Brown's Fortune Deluxe Album for Free

        -

 If you are a fan of Chris Brown, you might be interested in downloading his Fortune deluxe album for free. This album was released in 2012 and features 14 tracks on its standard edition, including hit singles like "Turn Up the Music", "Don't Wake Me Up" and "Till I Die". The deluxe edition adds four bonus tracks that are not available on the standard version. 

        -

        chris brown fortune deluxe download zip


        Download ✺✺✺ https://urlgoal.com/2uIb1j



        -

        In this article, I will show you how to download Chris Brown's Fortune deluxe album for free using a simple and safe method. You don't need to pay any money or register for any service. All you need is a device with an internet connection and a zip file extractor.

        -

        Step 1: Find a Reliable Source

        -

        The first step is to find a reliable source that offers the album as a zip file. A zip file is a compressed file that contains multiple files or folders. You can unzip it to access the individual files inside.

        -

        One of the sources that I recommend is Ulož.to Disk[^1^], which is a file-sharing platform that allows you to upload and download files for free. You can find the link to the album zip file here: Chris_Brown_-_Fortune_Deluxe__iTunes.zip. This file has a size of 148 MB and contains all the tracks in MP3 format.

        -

        -

        Step 2: Download the Zip File

        -

        The next step is to download the zip file to your device. To do this, you need to click on the link above and then click on the "Download fast" or "Download slowly" button, depending on your preference. The download speed may vary depending on your internet connection and the traffic on the website.

        -

        Once the download is complete, you should have the zip file saved on your device. You can check the location of the file by looking at your browser's download history or your device's file manager.
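 If you prefer to script the download instead of clicking through the website, a minimal Python sketch is shown below. This is only an illustration: the URL is a placeholder, not the real link from Step 1, and it assumes the host serves the file over plain HTTPS without a login or captcha. # Minimal download sketch -- the URL and filename below are placeholders. import requests url = "https://example.com/path/to/album.zip" # hypothetical direct link out_path = "Chris_Brown_-_Fortune_Deluxe__iTunes.zip" with requests.get(url, stream=True, timeout=60) as response: response.raise_for_status() # stop early on a 4xx or 5xx error with open(out_path, "wb") as f: for chunk in response.iter_content(chunk_size=1 << 20): # 1 MiB chunks f.write(chunk) print("Saved", out_path) 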

        -

        Step 3: Unzip the Zip File

        -

        The final step is to unzip the zip file and access the album tracks. To do this, you need a zip file extractor, which is a software that can open and extract compressed files. You can use any zip file extractor that you have on your device, such as WinZip, WinRAR, 7-Zip or others.

        -

        To unzip the zip file, you need to locate it on your device and then right-click on it and select "Extract All" or "Extract Here", depending on your extractor. You will then see a new folder with the same name as the zip file, which contains all the album tracks in MP3 format.
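 If you would rather not install a separate extractor at all, Python's standard zipfile module can do the same job. The short sketch below assumes the archive name from Step 2 and extracts into a folder of the same name. # Extract the archive with the standard library -- no separate extractor needed. import zipfile from pathlib import Path archive = Path("Chris_Brown_-_Fortune_Deluxe__iTunes.zip") # name assumed from Step 2 dest = archive.with_suffix("") # destination folder named after the archive with zipfile.ZipFile(archive) as zf: zf.extractall(dest) mp3_count = len(list(dest.rglob("*.mp3"))) print("Extracted", mp3_count, "MP3 files to", dest) 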

        -

        Enjoy Your Free Album

        -

        Now you have successfully downloaded Chris Brown's Fortune deluxe album for free. You can enjoy listening to your favorite songs anytime and anywhere. You can also transfer them to other devices or share them with your friends.

        -

        If you liked this article, please share it with others who might be interested in downloading Chris Brown's Fortune deluxe album for free. Also, feel free to leave a comment below if you have any questions or feedback.

 
        -
        -
        \ No newline at end of file diff --git a/spaces/sub314xxl/HairCLIP/app.py b/spaces/sub314xxl/HairCLIP/app.py deleted file mode 100644 index 90595ea55d70f8ab7967b1bc02924158b687dcfe..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/HairCLIP/app.py +++ /dev/null @@ -1,104 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import pathlib - -import gradio as gr - -from model import Model - -DESCRIPTION = '''# [HairCLIP](https://github.com/wty-ustc/HairCLIP) - -
        teaser
        -''' - - -def load_hairstyle_list() -> list[str]: - with open('HairCLIP/mapper/hairstyle_list.txt') as f: - lines = [line.strip() for line in f.readlines()] - lines = [line[:-10] for line in lines] - return lines - - -def set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) - - -def update_step2_components(choice: str) -> tuple[dict, dict]: - return ( - gr.Dropdown.update(visible=choice in ['hairstyle', 'both']), - gr.Textbox.update(visible=choice in ['color', 'both']), - ) - - -model = Model() - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - with gr.Box(): - gr.Markdown('## Step 1') - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_image = gr.Image(label='Input Image', - type='filepath') - with gr.Row(): - preprocess_button = gr.Button('Preprocess') - with gr.Column(): - aligned_face = gr.Image(label='Aligned Face', - type='pil', - interactive=False) - with gr.Column(): - reconstructed_face = gr.Image(label='Reconstructed Face', - type='numpy') - latent = gr.Variable() - - with gr.Row(): - paths = sorted(pathlib.Path('images').glob('*.jpg')) - gr.Examples(examples=[[path.as_posix()] for path in paths], - inputs=input_image) - - with gr.Box(): - gr.Markdown('## Step 2') - with gr.Row(): - with gr.Column(): - with gr.Row(): - editing_type = gr.Radio( - label='Editing Type', - choices=['hairstyle', 'color', 'both'], - value='both', - type='value') - with gr.Row(): - hairstyles = load_hairstyle_list() - hairstyle_index = gr.Dropdown(label='Hairstyle', - choices=hairstyles, - value='afro', - type='index') - with gr.Row(): - color_description = gr.Textbox(label='Color', value='red') - with gr.Row(): - run_button = gr.Button('Run') - - with gr.Column(): - result = gr.Image(label='Result') - - preprocess_button.click(fn=model.detect_and_align_face, - inputs=input_image, - outputs=aligned_face) - aligned_face.change(fn=model.reconstruct_face, - inputs=aligned_face, - outputs=[reconstructed_face, latent]) - editing_type.change(fn=update_step2_components, - inputs=editing_type, - outputs=[hairstyle_index, color_description]) - run_button.click(fn=model.generate, - inputs=[ - editing_type, - hairstyle_index, - color_description, - latent, - ], - outputs=result) - -demo.queue(max_size=10).launch() diff --git a/spaces/sub314xxl/MetaGPT/metagpt/tools/openai_text_to_image.py b/spaces/sub314xxl/MetaGPT/metagpt/tools/openai_text_to_image.py deleted file mode 100644 index 6025f04baaa16d214de1a157f476f133a38d9589..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/tools/openai_text_to_image.py +++ /dev/null @@ -1,93 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/17 -@Author : mashenquan -@File : openai_text_to_image.py -@Desc : OpenAI Text-to-Image OAS3 api, which provides text-to-image functionality. -""" -import asyncio -import base64 - -import aiohttp -import openai -import requests - -from metagpt.config import CONFIG, Config -from metagpt.logs import logger - - -class OpenAIText2Image: - def __init__(self, openai_api_key): - """ - :param openai_api_key: OpenAI API key, For more details, checkout: `https://platform.openai.com/account/api-keys` - """ - self.openai_api_key = openai_api_key if openai_api_key else CONFIG.OPENAI_API_KEY - - async def text_2_image(self, text, size_type="1024x1024"): - """Text to image - - :param text: The text used for image conversion. 
- :param size_type: One of ['256x256', '512x512', '1024x1024'] - :return: The image data is returned in Base64 encoding. - """ - try: - result = await openai.Image.acreate( - api_key=CONFIG.OPENAI_API_KEY, - api_base=CONFIG.OPENAI_API_BASE, - api_type=None, - api_version=None, - organization=None, - prompt=text, - n=1, - size=size_type, - ) - except Exception as e: - logger.error(f"An error occurred:{e}") - return "" - if result and len(result.data) > 0: - return await OpenAIText2Image.get_image_data(result.data[0].url) - return "" - - @staticmethod - async def get_image_data(url): - """Fetch image data from a URL and encode it as Base64 - - :param url: Image url - :return: Base64-encoded image data. - """ - try: - async with aiohttp.ClientSession() as session: - async with session.get(url) as response: - response.raise_for_status() # 如果是 4xx 或 5xx 响应,会引发异常 - image_data = await response.read() - base64_image = base64.b64encode(image_data).decode("utf-8") - return base64_image - - except requests.exceptions.RequestException as e: - logger.error(f"An error occurred:{e}") - return "" - - -# Export -async def oas3_openai_text_to_image(text, size_type: str = "1024x1024", openai_api_key=""): - """Text to image - - :param text: The text used for image conversion. - :param openai_api_key: OpenAI API key, For more details, checkout: `https://platform.openai.com/account/api-keys` - :param size_type: One of ['256x256', '512x512', '1024x1024'] - :return: The image data is returned in Base64 encoding. - """ - if not text: - return "" - if not openai_api_key: - openai_api_key = CONFIG.OPENAI_API_KEY - return await OpenAIText2Image(openai_api_key).text_2_image(text, size_type=size_type) - - -if __name__ == "__main__": - Config() - loop = asyncio.new_event_loop() - task = loop.create_task(oas3_openai_text_to_image("Panda emoji")) - v = loop.run_until_complete(task) - print(v) diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/modules/seanet.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. 
- """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. 
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. 
- residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/sub314xxl/MusicGen/audiocraft/utils/export.py b/spaces/sub314xxl/MusicGen/audiocraft/utils/export.py deleted file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. 
- bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/sujitojha/nanoGPT/app.py b/spaces/sujitojha/nanoGPT/app.py deleted file mode 100644 index 66c87adfe82fa44c4fda830c3492fdce3878b832..0000000000000000000000000000000000000000 --- a/spaces/sujitojha/nanoGPT/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -import os -import torch -import tiktoken -from model import GPTConfig, GPT -from contextlib import nullcontext - -# Load model from checkpoint -def load_model(): - - ckpt_path = 'ckpt.pt' - checkpoint = torch.load(ckpt_path, map_location='cpu') - gptconf = GPTConfig(**checkpoint['model_args']) - model = GPT(gptconf) - state_dict = checkpoint['model'] - unwanted_prefix = '_orig_mod.' 
- for k, v in list(state_dict.items()): - if k.startswith(unwanted_prefix): - state_dict[k[len(unwanted_prefix):]] = state_dict.pop(k) - model.load_state_dict(state_dict) - model.eval() - return model - -model = load_model() - -# Encode and decode functions -enc = tiktoken.get_encoding("gpt2") -encode = lambda s: enc.encode(s, allowed_special={""}) -decode = lambda l: enc.decode(l) - -def generate_text(start): - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - model.to(device) - start_ids = encode(start) - x = (torch.tensor(start_ids, dtype=torch.long, device=device)[None, ...]) - max_new_tokens = 500 - temperature = 0.8 - top_k = 200 - with torch.no_grad(): - y = model.generate(x, max_new_tokens, temperature=temperature, top_k=top_k) - return decode(y[0].tolist()) - -iface = gr.Interface( - fn=generate_text, # The function to be called on user input - inputs=gr.Textbox(lines=5, label="Input Text", placeholder="Type something here..."), # Corrected input type - outputs="text", # The type of output to be shown (in this case, text) - live=False, # Setting live to False adds a submit button - examples=[ # Providing a list of examples - ["To be, or not to be, that is the question:"], - ["All the world's a stage, and all the men and women merely players."], - ["The course of true love never did run smooth."], - ["We are such stuff as dreams are made on, and our little life is rounded with a sleep."], - ["A rose by any other name would smell as sweet."] - ], - title="NanoGPT (Generating text as Shakespeare)", -) - -iface.launch() diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Acrobat Dc Pro Crack Amtlib.dll Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Acrobat Dc Pro Crack Amtlib.dll Download.md deleted file mode 100644 index 6f92e76789244e70c492846e9e32a7fe11880c26..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Adobe Acrobat Dc Pro Crack Amtlib.dll Download.md +++ /dev/null @@ -1,14 +0,0 @@ -

        adobe acrobat dc pro crack amtlib.dll download


        DOWNLOAD ✸✸✸ https://cinurl.com/2uEYbe



 - -Adobe Acrobat Dc Pro Crack Amtlib.dll Download oldeestua ... adobe reader amtlib.dll, adobe acrobat x pro amtlib.framework 3dd2be366a. Related links: CrackAIMP. -Download crack for Adobe Acrobat XI Pro. -Adobe Acrobat XI Pro 11 crack free download. 8a78ff9644 
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Harry Potter And The Philosophers Stone 720p Yify.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Harry Potter And The Philosophers Stone 720p Yify.md deleted file mode 100644 index c7788105a72d73431858ba0aafc43cb7244ac23b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Harry Potter And The Philosophers Stone 720p Yify.md +++ /dev/null @@ -1,6 +0,0 @@ -

        harry potter and the philosophers stone 720p yify


        DOWNLOAD ✦✦✦ https://cinurl.com/2uEXZn



        -
        -Daniel Radcliffe, Rupert Grint, Richard Harris | This is the tale of Harry Potter, an ordinary 11-year-old boy serving as a sort of slave for his aunt ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/James Bond 007 Spectre 2015 German DTS DL 720p BluRay X264EXQUiSiTE.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/James Bond 007 Spectre 2015 German DTS DL 720p BluRay X264EXQUiSiTE.md deleted file mode 100644 index a957ab8726088136eecbf59b2e1fbb08cd387632..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/James Bond 007 Spectre 2015 German DTS DL 720p BluRay X264EXQUiSiTE.md +++ /dev/null @@ -1,7 +0,0 @@ -

        James Bond 007 Spectre 2015 German DTS DL 720p BluRay X264EXQUiSiTE


        Download Filehttps://cinurl.com/2uEYvz



        - -James.Bond.007.#24.Spectre.2015.ML.1080p.x264_eng yify subtitles. . subtitle James.Bond.007.Spectre.2015.German.DTS.DL.720p.BluRay.x264-EXQUISiTE, damaja.. subtitle James.Bond.007.Spectre.2015.German.DTS.DL.720p.AAC.x264 - EXQUISITE, damaja. . subtitles James Bond 007 - Specter.2015.BluRay.x264-EXQUISiTE, damaja. . subtitle James Bond 007.Spectre.2015 -HD 1080px 8a78ff9644
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Practical Finite Element Analysis Nitin S Gokhale.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Practical Finite Element Analysis Nitin S Gokhale.md deleted file mode 100644 index 6cc9333e5036f50aff97989448688f3c2bd7c3d7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Practical Finite Element Analysis Nitin S Gokhale.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Practical Finite Element Analysis Nitin S Gokhale


        DOWNLOADhttps://cinurl.com/2uEXE4



 - -Practical Finite Element Analysis, 2008, Nitin S. Gokhale ... Practical FEA by N. S. Gokhale - wakati.co. Practical Finite Element Analysis, Nitin ... 4d29de3e1b 
        -
        -
        -

        diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/!!INSTALL!! Download Aplikasi Software Togel 19.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/!!INSTALL!! Download Aplikasi Software Togel 19.md deleted file mode 100644 index cc70148cd7c92c7887c4bb4f6dd07a6836ca00e2..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/!!INSTALL!! Download Aplikasi Software Togel 19.md +++ /dev/null @@ -1,6 +0,0 @@ -

        download aplikasi software togel 19


        Downloadhttps://urluss.com/2uCDDH



 - -aplikasi togel terpercaya. Download: Downloaded: 22,5MB Last Checked: 3:00 AM 4fefd39f24 
        -
        -
        -

        diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/cnn/cnn_multitask.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/cnn/cnn_multitask.py deleted file mode 100644 index c55694b513231d57fe2456b34cf2b65d82c7140e..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/cnn/cnn_multitask.py +++ /dev/null @@ -1,94 +0,0 @@ -from torch import nn -from spiga.models.cnn.layers import Conv, Residual -from spiga.models.cnn.hourglass import HourglassCore -from spiga.models.cnn.coord_conv import AddCoordsTh -from spiga.models.cnn.transform_e2p import E2Ptransform - - -class MultitaskCNN(nn.Module): - - def __init__(self, nstack=4, num_landmarks=98, num_edges=15, pose_req=True, **kwargs): - super(MultitaskCNN, self).__init__() - - # Parameters - self.img_res = 256 # WxH input resolution - self.ch_dim = 256 # Default channel dimension - self.out_res = 64 # WxH output resolution - self.nstack = nstack # Hourglass modules stacked - self.num_landmarks = num_landmarks # Number of landmarks - self.num_edges = num_edges # Number of edges subsets (eyeR, eyeL, nose, etc) - self.pose_required = pose_req # Multitask flag - - # Image preprocessing - self.pre = nn.Sequential( - AddCoordsTh(x_dim=self.img_res, y_dim=self.img_res, with_r=True), - Conv(6, 64, 7, 2, bn=True, relu=True), - Residual(64, 128), - Conv(128, 128, 2, 2, bn=True, relu=True), - Residual(128, 128), - Residual(128, self.ch_dim) - ) - - # Hourglass modules - self.hgs = nn.ModuleList([HourglassCore(4, self.ch_dim) for i in range(self.nstack)]) - self.hgs_out = nn.ModuleList([ - nn.Sequential( - Residual(self.ch_dim, self.ch_dim), - Conv(self.ch_dim, self.ch_dim, 1, bn=True, relu=True) - ) for i in range(nstack)]) - if self.pose_required: - self.hgs_core = nn.ModuleList([ - nn.Sequential( - Residual(self.ch_dim, self.ch_dim), - Conv(self.ch_dim, self.ch_dim, 2, 2, bn=True, relu=True), - Residual(self.ch_dim, self.ch_dim), - Conv(self.ch_dim, self.ch_dim, 2, 2, bn=True, relu=True) - ) for i in range(nstack)]) - - # Attention module (ADnet style) - self.outs_points = nn.ModuleList([nn.Sequential(Conv(self.ch_dim, self.num_landmarks, 1, relu=False, bn=False), - nn.Sigmoid()) for i in range(self.nstack - 1)]) - self.outs_edges = nn.ModuleList([nn.Sequential(Conv(self.ch_dim, self.num_edges, 1, relu=False, bn=False), - nn.Sigmoid()) for i in range(self.nstack - 1)]) - self.E2Ptransform = E2Ptransform(self.num_landmarks, self.num_edges, out_dim=self.out_res) - - self.outs_features = nn.ModuleList([Conv(self.ch_dim, self.num_landmarks, 1, relu=False, bn=False)for i in range(self.nstack - 1)]) - - # Stacked Hourglass inputs (nstack > 1) - self.merge_preds = nn.ModuleList([Conv(self.num_landmarks, self.ch_dim, 1, relu=False, bn=False) for i in range(self.nstack - 1)]) - self.merge_features = nn.ModuleList([Conv(self.ch_dim, self.ch_dim, 1, relu=False, bn=False) for i in range(self.nstack - 1)]) - - def forward(self, imgs): - - x = self.pre(imgs) - outputs = {'VisualField': [], - 'HGcore': []} - - core_raw = [] - for i in range(self.nstack): - # Hourglass - hg, core_raw = self.hgs[i](x, core=core_raw) - if self.pose_required: - core = self.hgs_core[i](core_raw[-self.hgs[i].n]) - outputs['HGcore'].append(core) - hg = self.hgs_out[i](hg) - - # Visual features - outputs['VisualField'].append(hg) - - # Prepare next stacked input - if i < self.nstack - 1: - # Attentional modules - points = self.outs_points[i](hg) - edges = self.outs_edges[i](hg) - edges_ext = 
self.E2Ptransform(edges) - point_edges = points * edges_ext - - # Landmarks - maps = self.outs_features[i](hg) - preds = maps * point_edges - - # Outputs - x = x + self.merge_preds[i](preds) + self.merge_features[i](hg) - - return outputs diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/spiga.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/spiga.py deleted file mode 100644 index 4c72a36f2a45a4344665980bcf90e94a62766e2c..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/spiga.py +++ /dev/null @@ -1,171 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import spiga.models.gnn.pose_proj as pproj -from spiga.models.cnn.cnn_multitask import MultitaskCNN -from spiga.models.gnn.step_regressor import StepRegressor, RelativePositionEncoder - - -class SPIGA(nn.Module): - def __init__(self, num_landmarks=98, num_edges=15, steps=3, **kwargs): - - super(SPIGA, self).__init__() - - # Model parameters - self.steps = steps # Cascaded regressors - self.embedded_dim = 512 # GAT input channel - self.nstack = 4 # Number of stacked GATs per step - self.kwindow = 7 # Output cropped window dimension (kernel) - self.swindow = 0.25 # Scale of the cropped window at first step (Dft. 25% w.r.t the input featuremap) - self.offset_ratio = [self.swindow/(2**step)/2 for step in range(self.steps)] - - # CNN parameters - self.num_landmarks = num_landmarks - self.num_edges = num_edges - - # Initialize backbone - self.visual_cnn = MultitaskCNN(num_landmarks=self.num_landmarks, num_edges=self.num_edges) - # Features dimensions - self.img_res = self.visual_cnn.img_res - self.visual_res = self.visual_cnn.out_res - self.visual_dim = self.visual_cnn.ch_dim - - # Initialize Pose head - self.channels_pose = 6 - self.pose_fc = nn.Linear(self.visual_cnn.ch_dim, self.channels_pose) - - # Initialize feature extractors: - # Relative positional encoder - shape_dim = 2 * (self.num_landmarks - 1) - shape_encoder = [] - for step in range(self.steps): - shape_encoder.append(RelativePositionEncoder(shape_dim, self.embedded_dim, [256, 256])) - self.shape_encoder = nn.ModuleList(shape_encoder) - # Diagonal mask used to compute relative positions - diagonal_mask = (torch.ones(self.num_landmarks, self.num_landmarks) - torch.eye(self.num_landmarks)).type(torch.bool) - self.diagonal_mask = nn.parameter.Parameter(diagonal_mask, requires_grad=False) - - # Visual feature extractor - conv_window = [] - theta_S = [] - for step in range(self.steps): - # S matrix per step - WH = self.visual_res # Width/height of ftmap - Wout = self.swindow / (2 ** step) * WH # Width/height of the window - K = self.kwindow # Kernel or resolution of the window - scale = K / WH * (Wout - 1) / (K - 1) # Scale of the affine transformation - # Rescale matrix S - theta_S_stp = torch.tensor([[scale, 0], [0, scale]]) - theta_S.append(nn.parameter.Parameter(theta_S_stp, requires_grad=False)) - - # Convolutional to embedded to BxLxCx1x1 - conv_window.append(nn.Conv2d(self.visual_dim, self.embedded_dim, self.kwindow)) - - self.theta_S = nn.ParameterList(theta_S) - self.conv_window = nn.ModuleList(conv_window) - - # Initialize GAT modules - self.gcn = nn.ModuleList([StepRegressor(self.embedded_dim, 256, self.nstack) for i in range(self.steps)]) - - def forward(self, data): - # Inputs: Visual features and points projections - pts_proj, features = self.backbone_forward(data) - # Visual field - visual_field = features['VisualField'][-1] - - # Params compute only once - 
gat_prob = [] - features['Landmarks'] = [] - for step in range(self.steps): - # Features generation - embedded_ft = self.extract_embedded(pts_proj, visual_field, step) - - # GAT inference - offset, gat_prob = self.gcn[step](embedded_ft, gat_prob) - offset = F.hardtanh(offset) - - # Update coordinates - pts_proj = pts_proj + self.offset_ratio[step] * offset - features['Landmarks'].append(pts_proj.clone()) - - features['GATProb'] = gat_prob - return features - - def backbone_forward(self, data): - # Inputs: Image and model3D - imgs = data[0] - model3d = data[1] - cam_matrix = data[2] - - # HourGlass Forward - features = self.visual_cnn(imgs) - - # Head pose estimation - pose_raw = features['HGcore'][-1] - B, L, _, _ = pose_raw.shape - pose = pose_raw.reshape(B, L) - pose = self.pose_fc(pose) - features['Pose'] = pose.clone() - - # Project model 3D - euler = pose[:, 0:3] - trl = pose[:, 3:] - rot = pproj.euler_to_rotation_matrix(euler) - pts_proj = pproj.projectPoints(model3d, rot, trl, cam_matrix) - pts_proj = pts_proj / self.visual_res - - return pts_proj, features - - def extract_embedded(self, pts_proj, receptive_field, step): - # Visual features - visual_ft = self.extract_visual_embedded(pts_proj, receptive_field, step) - # Shape features - shape_ft = self.calculate_distances(pts_proj) - shape_ft = self.shape_encoder[step](shape_ft) - # Addition - embedded_ft = visual_ft + shape_ft - return embedded_ft - - def extract_visual_embedded(self, pts_proj, receptive_field, step): - # Affine matrix generation - B, L, _ = pts_proj.shape # Pts_proj range:[0,1] - centers = pts_proj + 0.5 / self.visual_res # BxLx2 - centers = centers.reshape(B * L, 2) # B*Lx2 - theta_trl = (-1 + centers * 2).unsqueeze(-1) # BxLx2x1 - theta_s = self.theta_S[step] # 2x2 - theta_s = theta_s.repeat(B * L, 1, 1) # B*Lx2x2 - theta = torch.cat((theta_s, theta_trl), -1) # B*Lx2x3 - - # Generate crop grid - B, C, _, _ = receptive_field.shape - grid = torch.nn.functional.affine_grid(theta, (B * L, C, self.kwindow, self.kwindow)) - grid = grid.reshape(B, L, self.kwindow, self.kwindow, 2) - grid = grid.reshape(B, L, self.kwindow * self.kwindow, 2) - - # Crop windows - crops = torch.nn.functional.grid_sample(receptive_field, grid, padding_mode="border") # BxCxLxK*K - crops = crops.transpose(1, 2) # BxLxCxK*K - crops = crops.reshape(B * L, C, self.kwindow, self.kwindow) - - # Flatten features - visual_ft = self.conv_window[step](crops) - _, Cout, _, _ = visual_ft.shape - visual_ft = visual_ft.reshape(B, L, Cout) - - return visual_ft - - def calculate_distances(self, pts_proj): - B, L, _ = pts_proj.shape # BxLx2 - pts_a = pts_proj.unsqueeze(-2).repeat(1, 1, L, 1) - pts_b = pts_a.transpose(1, 2) - dist = pts_a - pts_b - dist_wo_self = dist[:, self.diagonal_mask, :].reshape(B, L, -1) - return dist_wo_self - - - - - - - diff --git a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/embeddings/seg_embedding.py b/spaces/szukevin/VISOR-GPT/train/tencentpretrain/embeddings/seg_embedding.py deleted file mode 100644 index 1061ed067026e684336f4aaf56407504eb87f613..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/embeddings/seg_embedding.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch.nn as nn - - -class SegEmbedding(nn.Module): - """ - BERT Segment Embedding - """ - def __init__(self, args, _): - super(SegEmbedding, self).__init__() - self.embedding = nn.Embedding(3, args.emb_size) - - def forward(self, _, seg): - """ - Args: - seg: [batch_size x seq_length] - Returns: - emb: [batch_size x 
seq_length x hidden_size] - """ - - seg_emb = self.embedding(seg) - - return seg_emb diff --git a/spaces/talhaty/Faceswapper/roop/__init__.py b/spaces/talhaty/Faceswapper/roop/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/tanvirsingh01/YourMoodDiary/README.md b/spaces/tanvirsingh01/YourMoodDiary/README.md deleted file mode 100644 index 326c9c02fa1a3a3131d9877e1ee1b9b0b5190840..0000000000000000000000000000000000000000 --- a/spaces/tanvirsingh01/YourMoodDiary/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: YourMoodDiary -emoji: 🔥 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thu-ml/unidiffuser/app.py b/spaces/thu-ml/unidiffuser/app.py deleted file mode 100644 index dcce180338e126e36d9aac9854ec18867d62facd..0000000000000000000000000000000000000000 --- a/spaces/thu-ml/unidiffuser/app.py +++ /dev/null @@ -1,194 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import random - -import gradio as gr -import numpy as np -import PIL.Image -import spaces -import torch -from diffusers import UniDiffuserPipeline - -DESCRIPTION = "# [UniDiffuser](https://github.com/thu-ml/unidiffuser)" - -if not torch.cuda.is_available(): - DESCRIPTION += "\n

        Running on CPU 🥶

        " - - -MAX_SEED = np.iinfo(np.int32).max - - -def randomize_seed_fn(seed: int, randomize_seed: bool) -> int: - if randomize_seed: - seed = random.randint(0, MAX_SEED) - return seed - - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") -if torch.cuda.is_available(): - pipe = UniDiffuserPipeline.from_pretrained("thu-ml/unidiffuser-v1", torch_dtype=torch.float16) - pipe.to(device) - - -@spaces.GPU -def run( - mode: str, - prompt: str, - image: PIL.Image.Image | None, - seed: int = 0, - num_steps: int = 20, - guidance_scale: float = 8.0, -) -> tuple[PIL.Image.Image | None, str]: - generator = torch.Generator(device=device).manual_seed(seed) - if image is not None: - image = image.resize((512, 512)) - if mode == "t2i": - pipe.set_text_to_image_mode() - sample = pipe(prompt=prompt, num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - return sample.images[0], "" - elif mode == "i2t": - pipe.set_image_to_text_mode() - sample = pipe(image=image, num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - return None, sample.text[0] - elif mode == "joint": - pipe.set_joint_mode() - sample = pipe(num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - return sample.images[0], sample.text[0] - elif mode == "i": - pipe.set_image_mode() - sample = pipe(num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - return sample.images[0], "" - elif mode == "t": - pipe.set_text_mode() - sample = pipe(num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - return None, sample.text[0] - elif mode == "i2t2i": - pipe.set_image_to_text_mode() - sample = pipe(image=image, num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - pipe.set_text_to_image_mode() - sample = pipe( - prompt=sample.text[0], - num_inference_steps=num_steps, - guidance_scale=guidance_scale, - generator=generator, - ) - return sample.images[0], "" - elif mode == "t2i2t": - pipe.set_text_to_image_mode() - sample = pipe(prompt=prompt, num_inference_steps=num_steps, guidance_scale=guidance_scale, generator=generator) - pipe.set_image_to_text_mode() - sample = pipe( - image=sample.images[0], - num_inference_steps=num_steps, - guidance_scale=guidance_scale, - generator=generator, - ) - return None, sample.text[0] - else: - raise ValueError - - -def create_demo(mode_name: str) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - mode = gr.Dropdown( - label="Mode", - choices=[ - "t2i", - "i2t", - "joint", - "i", - "t", - "i2t2i", - "t2i2t", - ], - value=mode_name, - visible=False, - ) - prompt = gr.Text(label="Prompt", max_lines=1, visible=mode_name in ["t2i", "t2i2t"]) - image = gr.Image(label="Input image", type="pil", visible=mode_name in ["i2t", "i2t2i"]) - run_button = gr.Button("Run") - with gr.Accordion("Advanced options", open=False): - seed = gr.Slider( - label="Seed", - minimum=0, - maximum=MAX_SEED, - step=1, - value=0, - ) - randomize_seed = gr.Checkbox(label="Randomize seed", value=True) - num_steps = gr.Slider( - label="Steps", - minimum=1, - maximum=100, - value=20, - step=1, - ) - guidance_scale = gr.Slider( - label="Guidance Scale", - minimum=0.1, - maximum=30.0, - value=8.0, - step=0.1, - ) - with gr.Column(): - result_image = gr.Image(label="Generated image", visible=mode_name in ["t2i", "i", "joint", "i2t2i"]) - result_text = gr.Text(label="Generated text", visible=mode_name in ["i2t", "t", "joint", 
"t2i2t"]) - - gr.on( - triggers=[prompt.submit, run_button.click], - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - api_name=False, - concurrency_limit=None, - ).then( - fn=run, - inputs=[ - mode, - prompt, - image, - seed, - num_steps, - guidance_scale, - ], - outputs=[ - result_image, - result_text, - ], - api_name=f"run_{mode_name}", - concurrency_limit=1, - concurrency_id="gpu", - ) - return demo - - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Tabs(): - with gr.TabItem("text2image"): - create_demo("t2i") - with gr.TabItem("image2text"): - create_demo("i2t") - with gr.TabItem("image variation"): - create_demo("i2t2i") - with gr.TabItem("joint generation"): - create_demo("joint") - with gr.TabItem("image generation"): - create_demo("i") - with gr.TabItem("text generation"): - create_demo("t") - with gr.TabItem("text variation"): - create_demo("t2i2t") - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Anime Studio 12.2 Crack Pro Version [Latest] [WORK].md b/spaces/tialenAdioni/chat-gpt-api/logs/Anime Studio 12.2 Crack Pro Version [Latest] [WORK].md deleted file mode 100644 index 43257ae179353ff981e1ff458eaefb49ad2de1e5..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Anime Studio 12.2 Crack Pro Version [Latest] [WORK].md +++ /dev/null @@ -1,95 +0,0 @@ -
        -

 
        -

 Car Parking Multiplayer is a great game, but it also has some limitations and drawbacks. You have to earn money and gold by playing for long stretches or watching ads, which can be time-consuming and annoying. Unlocking and upgrading cars costs that same money and gold, which can get expensive and frustrating. Ads pop up every now and then, which is distracting and irritating. And if you want to use certain cheats or hacks, you have to root your device, which is risky and complicated. 

        -

        That's why downloading Car Parking Multiplayer Mod APK is a smart choice. Car Parking Multiplayer Mod APK is a modified version of the original game that gives you access to unlimited money and gold, all cars unlocked and upgraded, no ads, and no root required. With Car Parking Multiplayer Mod APK, you can enjoy the game without any limitations or hassles. You can buy any car you want, customize it as you like, upgrade it as you wish, and drive it as you please. You can also play the game without any interruptions or distractions from ads. You can also use the mod APK without rooting your device or risking its security.

        -

        Benefits of Car Parking Multiplayer Mod APK

        -

        Car Parking Multiplayer Mod APK has many benefits that make it worth downloading. Here are some of them:

        -

        Unlimited money and gold

        -

        With Car Parking Multiplayer Mod APK, you don't have to worry about running out of money or gold. You will have unlimited amounts of both currencies that you can use to buy any car you want, customize it as you like, upgrade it as you wish, and drive it as you please. You don't have to play the game for hours or watch ads to earn money or gold. You don't have to spend real money to buy them either. You can just enjoy the game with unlimited money and gold at your disposal.

        -

        All cars unlocked and upgraded

        -

 With Car Parking Multiplayer Mod APK, you don't have to unlock or upgrade the cars by spending money or gold. You will have access to every car in the game, from sedans to supercars, from trucks to buses, from helicopters to tanks. There are no levels or missions to complete before a car becomes available, and no waiting for upgrades to finish. You can just enjoy the game with all the cars unlocked and fully upgraded. 

        -

        No ads and no root required

        -

        With Car Parking Multiplayer Mod APK, you don't have to deal with ads or root your device. You will not see ads popping up every now and then, and you don't need root access to use cheats or hacks, which would otherwise be risky and complicated. You can just enjoy the game without interruptions or complications.

        -

        How to download and install Car Parking Multiplayer Mod APK?

        -

        Downloading and installing Car Parking Multiplayer Mod APK is easy and simple. Just follow these steps:

        -

        Step 1: Download the APK file from a trusted source

        -

        The first step is to download the APK file of Car Parking Multiplayer Mod APK from a trusted source. There are many websites that offer the mod APK for free, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your data. Some of them may also provide fake or outdated versions of the mod APK that may not work properly or cause problems.

        -

        That's why we recommend you download the mod APK from our website, which is 100% safe and secure. We always provide the latest working version, tested and verified by our team. You can download it by clicking on this link: [Car Parking Multiplayer Mod APK Data Download].

        -

        Step 2: Enable unknown sources on your device

        -

        The second step is to enable unknown sources on your device. This is necessary because the mod APK is not available on the Google Play Store, so your device may not allow you to install it by default. To enable unknown sources on your device, follow these steps:

        - Go to Settings > Security > Unknown Sources.
        - Toggle on the option to allow installation of apps from unknown sources.
        - Confirm your choice by tapping OK.

        Step 3: Install the APK file and launch the game

        -

        The third and final step is to install the APK file and launch the game. To do this, follow these steps (if you prefer to sideload from a computer, see the scripted sketch below):
        - Locate the downloaded APK file on your device using a file manager app.
        - Tap on the APK file and follow the instructions to install it.
        - Wait for the installation to finish, then tap on the game icon to launch it.
        - Enjoy the game with unlimited money and gold, all cars unlocked and upgraded, no ads, and no root required.
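        If you sideload apps often, the install step can be scripted from a computer. The sketch below is not part of the original guide: it assumes the Android platform tools (adb) are on your PATH and USB debugging is enabled on the phone, and the APK file name is a hypothetical placeholder.

```python
# Minimal sketch (assumptions: adb on PATH, USB debugging enabled).
# The APK file name below is a hypothetical placeholder.
import subprocess

def install_apk(apk_path: str) -> None:
    # "adb install -r" reinstalls over an existing copy, keeping its data.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"adb install failed: {result.stderr.strip()}")
    print(result.stdout.strip() or "Installed.")

install_apk("car_parking_multiplayer_mod.apk")
```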

        Conclusion

        -

        Car Parking Multiplayer is a realistic driving and parking simulation game that offers a lot of fun and excitement. You can drive and park various cars in different scenarios, customize your cars and garages, interact with other players online, and play various game modes and challenges. However, if you want to enjoy the game without any limitations or hassles, you should download Car Parking Multiplayer Mod APK from our website. With the mod APK, you will have unlimited money and gold, all cars unlocked and upgraded, no ads, and no root required. You can download it by clicking on this link: [Car Parking Multiplayer Mod APK Data Download]. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

        -

        FAQs

        -

        Here are some frequently asked questions about Car Parking Multiplayer Mod APK:

        -

        Q: Is Car Parking Multiplayer Mod APK safe to use?

        -

        A: Yes, Car Parking Multiplayer Mod APK is safe to use. It does not contain any viruses or malware that can harm your device or steal your data. It also does not require root access or compromise your device's security. However, you should always download the mod APK from a trusted source like our website to avoid any risks.

        -

        Q: Is Car Parking Multiplayer Mod APK compatible with my device?

        -

        A: Car Parking Multiplayer Mod APK is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may not support the mod APK due to hardware or software limitations. If you encounter any problems while installing or playing the mod APK, please contact us and we will try to help you.

        -

        Q: How can I update Car Parking Multiplayer Mod APK?

        -

        A: Car Parking Multiplayer Mod APK is updated regularly to keep up with the latest version of the original game. Whenever there is a new update available, we will post it on our website as soon as possible. You can check our website frequently for updates or subscribe to our newsletter to get notified by email. To update the mod APK, you just have to download the latest version from our website and install it over the previous one.

        -

        Q: Can I play Car Parking Multiplayer Mod APK offline?

        -

        A: Yes, you can play Car Parking Multiplayer Mod APK offline. You can play the single-player mode, the free mode, or the fun mode without an internet connection. However, if you want to play the multiplayer mode or access some online features, you will need an internet connection.

        -

        Q: Can I play Car Parking Multiplayer Mod APK with my friends?

        -

        A: Yes, you can play Car Parking Multiplayer Mod APK with your friends. You can join or create a room with up to 100 players and chat with them using voice or text messages. You can also race with them, exchange cars, or cooperate with them in completing missions. However, you and your friends will need to have the same version of the mod APK installed on your devices.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download 8 Ball Pool Hack Long Line APK for Android - Enjoy Unlimited Coins and Cash.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download 8 Ball Pool Hack Long Line APK for Android - Enjoy Unlimited Coins and Cash.md deleted file mode 100644 index 46b0f21f0a27efcae34019a87ceba524c66db584..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download 8 Ball Pool Hack Long Line APK for Android - Enjoy Unlimited Coins and Cash.md +++ /dev/null @@ -1,94 +0,0 @@ - -

        How to Download 8 Ball Pool Hack Long Line APK

        -

        Do you love playing 8 Ball Pool, but wish you could have more fun and success in the game? If so, you might be interested in downloading a hack long line apk that can give you some amazing benefits and advantages. In this article, we will tell you everything you need to know about this hack, including what it is, how it works, why you might want to use it, and what are the possible risks and rewards. By the end of this article, you will be able to decide if this hack is right for you and how to download it safely and easily.

        -

        download 8 ball pool hack long line apk


        Download Zip ❤❤❤ https://bltlly.com/2uOlG3



        -

        Introduction: What is 8 Ball Pool and why you might want to hack it

        -

        8 Ball Pool is one of the most popular and addictive online multiplayer games in the world. It is a realistic simulation of pool or billiards, where you can play against other players from around the globe, or challenge your friends in private matches. You can also participate in tournaments, leagues, events, and mini-games, where you can win prizes, coins, cash, cues, tables, and other items. The game has stunning graphics, smooth gameplay, realistic physics, and a variety of modes and options.

        -

        However, as much as 8 Ball Pool is fun and exciting, it can also be frustrating and challenging. The game requires a lot of skill, practice, patience, and luck. You need to have a good eye for angles, distances, speeds, spins, and trajectories. You also need to have a reliable internet connection, a powerful device, and enough coins and cash to enter matches and buy cues and tables. Sometimes, you might face opponents who are much better than you, or who use cheats or hacks to gain an unfair advantage. You might also encounter glitches, bugs, errors, or crashes that ruin your game.

        -

        That's why some players look for ways to hack or mod 8 Ball Pool. They want to have more fun, win more games, earn more rewards, unlock more features, and enjoy more freedom in the game. They want to have a hack long line apk that can help them achieve all that.

        -

        Benefits of using the hack long line apk

        -

        A hack long line apk is a modified version of the original game that has some extra features and functions that are not available in the official version. One of the main features of this hack is that it gives you longer and more accurate cue lines. This means that you can see the path and direction of the cue ball and the object balls more clearly and precisely. You can also adjust the length and width of the cue lines to suit your preference. This can help you improve your aiming and shooting skills, and make more shots and pots in the game.

        -

        But that's not all. The hack long line apk also gives you other benefits, such as:

        -
          -
        • More coins and cash: You can get unlimited or increased amounts of coins and cash, which are the main currencies in the game. You can use them to enter higher-stakes matches, buy better cues and tables, and access more features and options.
        • -
        • Unlimited spins and scratches: You can get unlimited or increased numbers of spins and scratches, which are the mini-games that you can play to win extra coins, cash, cues, tables, or other items.
        • -
        • Access to premium cues and tables: You can get access to all the cues and tables in the game, including the ones that are exclusive or expensive. You can choose from a wide range of designs, styles, colors, and attributes.
        • -
        -

        With these benefits, you can have more fun and success in 8 Ball Pool. You can play more matches, win more games, earn more rewards, unlock more features, and enjoy more freedom in the game. You can also impress your friends and opponents with your skills and achievements.

        -


        -

        Risks of using the hack long line apk

        -

        However, before you download and use the hack long line apk, you should also be aware of the possible risks and drawbacks. Using a hack or mod is not authorized or supported by the game developers or publishers. It is considered a form of cheating or hacking, which violates the terms and conditions of the game. Therefore, you might face some consequences if you use the hack long line apk, such as:

        -
          -
        • Getting banned or suspended from the game: The game has a strict anti-cheat system that detects and punishes players who use hacks or mods. If you are caught using the hack long line apk, you might lose your account, progress, data, coins, cash, cues, tables, or other items. You might also be banned or suspended from playing the game for a certain period of time or permanently.
        • -
        • Losing your progress and data: The hack long line apk might not be compatible or stable with the latest version of the game or your device. It might cause glitches, bugs, errors, or crashes that affect your game performance or functionality. It might also corrupt or delete your game files or data, which means that you might lose your progress, achievements, settings, preferences, or other information.
        • -
        • Exposing your device to malware or viruses: The hack long line apk might not be safe or secure to download or install on your device. It might contain malware or viruses that can harm your device or compromise your privacy. It might also require you to grant permissions or access to your device's features or functions that are not necessary for the game.
        • -
        • Breaking the terms and conditions of the game: The hack long line apk might not respect or follow the rules and regulations of the game. It might interfere with the game's integrity, fairness, balance, quality, or performance. It might also affect the experience or enjoyment of other players who play by the rules.
        • -
        -

        Therefore, you should be careful and responsible when using the hack long line apk. You should weigh the pros and cons of using it, and decide if it is worth it or not.

        -

        How to use the hack long line apk safely and responsibly

        -

        If you decide to use the hack long line apk, here are some tips on how to do it safely and responsibly:

        -
          -
        • Download it from a trusted source: You should only download the hack long line apk from a reliable, reputable website or platform with positive reviews and feedback from other users. You should also scan it for malware or viruses before installing it, and if the site publishes a checksum, verify the file first (see the checksum sketch after these tips).
        • -
        • Use a VPN or proxy server: You should use a VPN (virtual private network) or proxy server to hide your IP address and location when playing 8 Ball Pool with the hack long line apk. This can help you avoid detection and bans from the game's anti-cheat system.
        • -
        • Create a backup account or device: You should create a backup account or device that you can use to play 8 Ball Pool with the hack long line apk. This way, you can protect your main account or device from getting banned or suspended from the game.
        • -
        • Not abuse or exploit the hack: You should use the hack long line apk in moderation and for personal use only. You should not use it to cheat, harass, or ruin the game for other players. You should also respect the game's terms and conditions and follow the game's etiquette and rules.
        • -
        -

        By following these tips, you can minimize the risks and maximize the benefits of using the hack long line apk.
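        As a quick integrity check for the "trusted source" tip above, here is a minimal sketch that is not from the original article: it computes the SHA-256 hash of a downloaded file so you can compare it against a checksum published by the download site. The file name and expected hash are hypothetical placeholders.

```python
# Minimal sketch: compare a download's SHA-256 against a published checksum.
# The file name and expected hash below are hypothetical placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0" * 64  # replace with the hash the download site publishes
actual = sha256_of("hack_long_line.apk")
print("OK" if actual == expected else f"Mismatch: {actual}")
```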

        -

        Conclusion

        -

        In conclusion, the hack long line apk is a modified version of 8 Ball Pool that can give you some amazing benefits and advantages in the game, such as longer and more accurate cue lines, more coins and cash, unlimited spins and scratches, and access to premium cues and tables. However, it also comes with some possible risks and drawbacks, such as getting banned or suspended from the game, losing your progress and data, exposing your device to malware or viruses, and breaking the terms and conditions of the game. Therefore, you should be careful and responsible when using the hack long line apk, and follow the tips we provided to use it safely and responsibly.

        -

        If you are interested in downloading the hack long line apk, you can click on the link or button below to get it from a trusted source. You can also learn more about it by visiting our website or contacting us. We hope you enjoyed this article and found it helpful. Thank you for reading and happy gaming!

        -

        Download Hack Long Line APK

        -

        FAQs

        -

        Here are some of the frequently asked questions about the hack long line apk:

        -
          -
        • Q: Is the hack long line apk free?
        • -
        • A: Yes, the hack long line apk is free to download and use. However, some websites or platforms might require you to complete surveys, offers, or tasks to access or download it.
        • -
        • Q: Is the hack long line apk compatible with my device?
        • -
        • A: The hack long line apk is compatible with most Android devices that can run 8 Ball Pool. However, some devices might not support or run it properly due to different specifications or settings.
        • -
        • Q: Is the hack long line apk safe to use?
        • -
        • A: The hack long line apk is safe to use if you download it from a trusted source and scan it for malware or viruses before installing it on your device. You should also use a VPN or proxy server to hide your IP address and location when playing 8 Ball Pool with the hack long line apk.
        • -
        • Q: How do I update the hack long line apk?
        • -
        • A: You can update the hack long line apk by downloading and installing the latest version from the same source where you got it. You should also check for updates regularly to ensure that the hack long line apk is compatible and stable with the latest version of 8 Ball Pool.
        • -
        • Q: How do I uninstall the hack long line apk?
        • -
        • A: You can uninstall the hack long line apk by deleting it from your device's storage or settings. You should also clear your device's cache and data to remove any traces of the hack long line apk.
        • -

        401be4b1e0
        -
        -
        \ No newline at end of file diff --git a/spaces/timpal0l/chat-ui/src/lib/server/modelEndpoint.ts b/spaces/timpal0l/chat-ui/src/lib/server/modelEndpoint.ts deleted file mode 100644 index 4d187da21c37cbbe8efd722c09fee1815bd1c71f..0000000000000000000000000000000000000000 --- a/spaces/timpal0l/chat-ui/src/lib/server/modelEndpoint.ts +++ /dev/null @@ -1,21 +0,0 @@ -import { MODEL_ENDPOINTS } from "$env/static/private"; -import { sum } from "$lib/utils/sum"; - -const endpoints: Array<{ endpoint: string; authorization: string; weight: number }> = - JSON.parse(MODEL_ENDPOINTS); -const totalWeight = sum(endpoints.map((e) => e.weight)); - -/** - * Find a random load-balanced endpoint - */ -export function modelEndpoint(): { endpoint: string; authorization: string; weight: number } { - let random = Math.random() * totalWeight; - for (const endpoint of endpoints) { - if (random < endpoint.weight) { - return endpoint; - } - random -= endpoint.weight; - } - - throw new Error("Invalid config, no endpoint found"); -} diff --git a/spaces/tomg-group-umd/pez-dispenser/open_clip/hf_configs.py b/spaces/tomg-group-umd/pez-dispenser/open_clip/hf_configs.py deleted file mode 100644 index e236222bafce0358445ea16953ca0b2d5a84758a..0000000000000000000000000000000000000000 --- a/spaces/tomg-group-umd/pez-dispenser/open_clip/hf_configs.py +++ /dev/null @@ -1,45 +0,0 @@ -# HF architecture dict: -arch_dict = { - # https://huggingface.co/docs/transformers/model_doc/roberta#roberta - "roberta": { - "config_names": { - "context_length": "max_position_embeddings", - "vocab_size": "vocab_size", - "width": "hidden_size", - "heads": "num_attention_heads", - "layers": "num_hidden_layers", - "layer_attr": "layer", - "token_embeddings_attr": "embeddings" - }, - "pooler": "mean_pooler", - }, - # https://huggingface.co/docs/transformers/model_doc/xlm-roberta#transformers.XLMRobertaConfig - "xlm-roberta": { - "config_names": { - "context_length": "max_position_embeddings", - "vocab_size": "vocab_size", - "width": "hidden_size", - "heads": "num_attention_heads", - "layers": "num_hidden_layers", - "layer_attr": "layer", - "token_embeddings_attr": "embeddings" - }, - "pooler": "mean_pooler", - }, - # https://huggingface.co/docs/transformers/model_doc/mt5#mt5 - "mt5": { - "config_names": { - # unlimited seqlen - # https://github.com/google-research/text-to-text-transfer-transformer/issues/273 - # https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/models/t5/modeling_t5.py#L374 - "context_length": "", - "vocab_size": "vocab_size", - "width": "d_model", - "heads": "num_heads", - "layers": "num_layers", - "layer_attr": "block", - "token_embeddings_attr": "embed_tokens" - }, - "pooler": "mean_pooler", - }, -} diff --git a/spaces/tomofi/MMOCR/mmocr/models/ner/convertors/ner_convertor.py b/spaces/tomofi/MMOCR/mmocr/models/ner/convertors/ner_convertor.py deleted file mode 100644 index ca7288bc2b889bb906b65a82ff6c3f0f13edc194..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/ner/convertors/ner_convertor.py +++ /dev/null @@ -1,173 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np - -from mmocr.models.builder import CONVERTORS -from mmocr.utils import list_from_file - - -@CONVERTORS.register_module() -class NerConvertor: - """Convert between text, index and tensor for NER pipeline. - - Args: - annotation_type (str): BIO((B-begin, I-inside, O-outside)), - BIOES(B-begin, I-inside, O-outside, E-end, S-single) - vocab_file (str): File to convert words to ids. 
- categories (list[str]): All entity categories supported by the model. - max_len (int): The maximum length of the input text. - unknown_id (int): For words that do not appear in vocab.txt. - start_id (int): Each input is prefixed with an input ID. - end_id (int): Each output is prefixed with an output ID. - """ - - def __init__(self, - annotation_type='bio', - vocab_file=None, - categories=None, - max_len=None, - unknown_id=100, - start_id=101, - end_id=102): - self.annotation_type = annotation_type - self.categories = categories - self.word2ids = {} - self.max_len = max_len - self.unknown_id = unknown_id - self.start_id = start_id - self.end_id = end_id - assert self.max_len > 2 - assert self.annotation_type in ['bio', 'bioes'] - - vocabs = list_from_file(vocab_file) - self.vocab_size = len(vocabs) - for idx, vocab in enumerate(vocabs): - self.word2ids.update({vocab: idx}) - - if self.annotation_type == 'bio': - self.label2id_dict, self.id2label, self.ignore_id = \ - self._generate_labelid_dict() - elif self.annotation_type == 'bioes': - raise NotImplementedError('Bioes format is not supported yet!') - - assert self.ignore_id is not None - assert self.id2label is not None - self.num_labels = len(self.id2label) - - def _generate_labelid_dict(self): - """Generate a dictionary that maps input to ID and ID to output.""" - num_classes = len(self.categories) - label2id_dict = {} - ignore_id = 2 * num_classes + 1 - id2label_dict = { - 0: 'X', - ignore_id: 'O', - 2 * num_classes + 2: '[START]', - 2 * num_classes + 3: '[END]' - } - - for index, category in enumerate(self.categories): - start_label = index + 1 - end_label = index + 1 + num_classes - label2id_dict.update({category: [start_label, end_label]}) - id2label_dict.update({start_label: 'B-' + category}) - id2label_dict.update({end_label: 'I-' + category}) - - return label2id_dict, id2label_dict, ignore_id - - def convert_text2id(self, text): - """Convert characters to ids. - - If the input is uppercase, - convert to lowercase first. - Args: - text (list[char]): Annotations of one paragraph. - Returns: - input_ids (list): Corresponding IDs after conversion. - """ - ids = [] - for word in text.lower(): - if word in self.word2ids: - ids.append(self.word2ids[word]) - else: - ids.append(self.unknown_id) - # Text that exceeds the maximum length is truncated. - valid_len = min(len(text), self.max_len) - input_ids = [0] * self.max_len - input_ids[0] = self.start_id - for i in range(1, valid_len + 1): - input_ids[i] = ids[i - 1] - input_ids[i + 1] = self.end_id - - return input_ids - - def convert_entity2label(self, label, text_len): - """Convert labeled entities to ids. - - Args: - label (dict): Labels of entities. - text_len (int): The length of input text. - Returns: - labels (list): Label ids of an input text. - """ - labels = [0] * self.max_len - for j in range(min(text_len + 2, self.max_len)): - labels[j] = self.ignore_id - categories = label - for key in categories: - for text in categories[key]: - for place in categories[key][text]: - # Remove the label position beyond the maximum length. - if place[0] + 1 < len(labels): - labels[place[0] + 1] = self.label2id_dict[key][0] - for i in range(place[0] + 1, place[1] + 1): - if i + 1 < len(labels): - labels[i + 1] = self.label2id_dict[key][1] - return labels - - def convert_pred2entities(self, preds, masks): - """Gets entities from preds. - - Args: - preds (list): Sequence of preds. - masks (tensor): The valid part is 1 and the invalid part is 0. 
- Returns: - pred_entities (list): List of [[[entity_type, - entity_start, entity_end]]]. - """ - - masks = masks.detach().cpu().numpy() - pred_entities = [] - assert isinstance(preds, list) - for index, pred in enumerate(preds): - entities = [] - entity = [-1, -1, -1] - results = (masks[index][1:] * np.array(pred[1:])).tolist() - for index, tag in enumerate(results): - if not isinstance(tag, str): - tag = self.id2label[tag] - if self.annotation_type == 'bio': - if tag.startswith('B-'): - if entity[2] != -1 and entity[1] < entity[2]: - entities.append(entity) - entity = [-1, -1, -1] - entity[1] = index - entity[0] = tag.split('-')[1] - entity[2] = index - if index == len(results) - 1 and entity[1] < entity[2]: - entities.append(entity) - elif tag.startswith('I-') and entity[1] != -1: - _type = tag.split('-')[1] - if _type == entity[0]: - entity[2] = index - - if index == len(results) - 1 and entity[1] < entity[2]: - entities.append(entity) - else: - if entity[2] != -1 and entity[1] < entity[2]: - entities.append(entity) - entity = [-1, -1, -1] - else: - raise NotImplementedError( - 'The data format is not supported yet!') - pred_entities.append(entities) - return pred_entities diff --git a/spaces/tomofi/MMOCR/mmocr/utils/fileio.py b/spaces/tomofi/MMOCR/mmocr/utils/fileio.py deleted file mode 100644 index 2e455daf46261f89a02d56a04f1bc867058ffb1a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/utils/fileio.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -import mmcv - - -def list_to_file(filename, lines): - """Write a list of strings to a text file. - - Args: - filename (str): The output filename. It will be created/overwritten. - lines (list(str)): Data to be written. - """ - mmcv.mkdir_or_exist(os.path.dirname(filename)) - with open(filename, 'w', encoding='utf-8') as fw: - for line in lines: - fw.write(f'{line}\n') - - -def list_from_file(filename, encoding='utf-8'): - """Load a text file and parse the content as a list of strings. The - trailing "\\r" and "\\n" of each line will be removed. - - Note: - This will be replaced by mmcv's version after it supports encoding. - - Args: - filename (str): Filename. - encoding (str): Encoding used to open the file. Default utf-8. - - Returns: - list[str]: A list of strings. - """ - item_list = [] - with open(filename, 'r', encoding=encoding) as f: - for line in f: - item_list.append(line.rstrip('\n\r')) - return item_list diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py deleted file mode 100644 index 7fb8e82ece225ab6f88f1f4f83bea56a42cf1a57..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py' -model = dict( - backbone=dict( - norm_cfg=dict(type='SyncBN', requires_grad=True), - norm_eval=False, - plugins=[ - dict( - cfg=dict(type='ContextBlock', ratio=1. 
/ 16), - stages=(False, True, True, True), - position='after_conv3') - ])) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py deleted file mode 100644 index 3995603a6cee82a7d7cff620cb8bffe14b15b6a1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnest101', - backbone=dict(stem_channels=128, depth=101)) diff --git a/spaces/trttung1610/musicgen/audiocraft/grids/diffusion/__init__.py b/spaces/trttung1610/musicgen/audiocraft/grids/diffusion/__init__.py deleted file mode 100644 index e5737294ae16c0de52085b8dcf6825c348f617e4..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/grids/diffusion/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Diffusion grids.""" diff --git a/spaces/trttung1610/musicgen/tests/data/test_audio.py b/spaces/trttung1610/musicgen/tests/data/test_audio.py deleted file mode 100644 index 40c0d5ed69eff92a766dc6d176e532f0df6c2b5e..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/tests/data/test_audio.py +++ /dev/null @@ -1,239 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import random - -import numpy as np -import torch -import torchaudio - -from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestInfo(TempDirMixin): - - def test_info_mp3(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - wav = get_white_noise(ch, int(sample_rate * duration)) - path = self.get_temp_path('sample_wav.mp3') - save_wav(path, wav, sample_rate) - info = audio_info(path) - assert info.sample_rate == sample_rate - assert info.channels == ch - # we cannot trust torchaudio for num_frames, so we don't check - - def _test_info_format(self, ext: str): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'sample_wav{ext}') - save_wav(path, wav, sample_rate) - info = audio_info(path) - assert info.sample_rate == sample_rate - assert info.channels == ch - assert np.isclose(info.duration, duration, atol=1e-5) - - def test_info_wav(self): - self._test_info_format('.wav') - - def test_info_flac(self): - self._test_info_format('.flac') - - def test_info_ogg(self): - self._test_info_format('.ogg') - - def test_info_m4a(self): - # TODO: generate m4a file programmatically - # self._test_info_format('.m4a') - pass - - -class TestRead(TempDirMixin): - - def test_read_full_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - read_wav, read_sr = audio_read(path) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == wav.shape[1] - assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04) - - def test_read_partial_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = torch.rand(1).item() - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - read_frames = int(sample_rate * read_duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - read_wav, read_sr = audio_read(path, 0, read_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == read_frames - assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04) - - def test_read_seek_time_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - seek_time = torch.rand(1).item() - read_wav, read_sr = audio_read(path, seek_time, read_duration) - seek_frames = int(sample_rate * seek_time) - expected_frames = n_frames - seek_frames - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == expected_frames - assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04) - - def test_read_seek_time_wav_padded(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - read_frames = int(sample_rate * read_duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - seek_time = torch.rand(1).item() - seek_frames = int(sample_rate * seek_time) - expected_frames = n_frames - seek_frames - read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True) - expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == read_frames - assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04) - assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav) - - -class TestAvRead(TempDirMixin): - - def test_avread_seek_base(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 2. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - for _ in range(100): - # seek will always load a full duration segment in the file - seek_time = random.uniform(0.0, 1.0) - seek_duration = random.uniform(0.001, 1.0) - read_wav, read_sr = _av_read(path, seek_time, seek_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == int(seek_duration * sample_rate) - - def test_avread_seek_partial(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - for _ in range(100): - # seek will always load a partial segment - seek_time = random.uniform(0.5, 1.) - seek_duration = 1. - expected_num_frames = n_frames - int(seek_time * sample_rate) - read_wav, read_sr = _av_read(path, seek_time, seek_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == expected_num_frames - - def test_avread_seek_outofbound(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - seek_time = 1.5 - read_wav, read_sr = _av_read(path, seek_time, 1.) 
- assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == 0 - - def test_avread_seek_edge(self): - sample_rates = [8000, 16_000] - # some of these values will have - # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1) - n_frames = [1000, 1001, 1002] - channels = [1, 2] - for sample_rate, ch, frames in product(sample_rates, channels, n_frames): - duration = frames / sample_rate - wav = get_white_noise(ch, frames) - path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - seek_time = (frames - 1) / sample_rate - seek_frames = int(seek_time * sample_rate) - read_wav, read_sr = _av_read(path, seek_time, duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == (frames - seek_frames) - - -class TestAudioWrite(TempDirMixin): - - def test_audio_write_wav(self): - torch.manual_seed(1234) - sample_rates = [8000, 16_000] - n_frames = [1000, 1001, 1002] - channels = [1, 2] - strategies = ["peak", "clip", "rms"] - formats = ["wav", "mp3"] - for sample_rate, ch, frames in product(sample_rates, channels, n_frames): - for format_, strategy in product(formats, strategies): - wav = get_white_noise(ch, frames) - path = self.get_temp_path(f'pred_{sample_rate}_{ch}') - audio_write(path, wav, sample_rate, format_, strategy=strategy) - read_wav, read_sr = torchaudio.load(f'{path}.{format_}') - if format_ == "wav": - assert read_wav.shape == wav.shape - - if format_ == "wav" and strategy in ["peak", "rms"]: - rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max() - # for a Gaussian, the typical max scale will be less than ~5x the std. - # The error when writing to disk will ~ 1/2**15, and when rescaling, 5x that. - # For RMS target, rescaling leaves more headroom by default, leading - # to a 20x rescaling typically - atol = (5 if strategy == "peak" else 20) / 2**15 - delta = (rescaled_read_wav - wav).abs().max() - assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol) - formats = ["wav"] # faster unit tests diff --git a/spaces/trysem/image-matting-app/ppmatting/core/val_ml.py b/spaces/trysem/image-matting-app/ppmatting/core/val_ml.py deleted file mode 100644 index 77628925bec1fa08a4a24de685355cc71157db92..0000000000000000000000000000000000000000 --- a/spaces/trysem/image-matting-app/ppmatting/core/val_ml.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import os - -import cv2 -import numpy as np -import time -import paddle -import paddle.nn.functional as F -from paddleseg.utils import TimeAverager, calculate_eta, logger, progbar - -from ppmatting.metrics import metric -from pymatting.util.util import load_image, save_image, stack_images -from pymatting.foreground.estimate_foreground_ml import estimate_foreground_ml - -np.set_printoptions(suppress=True) - - -def save_alpha_pred(alpha, path): - """ - The value of alpha is range [0, 1], shape should be [h,w] - """ - dirname = os.path.dirname(path) - if not os.path.exists(dirname): - os.makedirs(dirname) - - alpha = (alpha).astype('uint8') - cv2.imwrite(path, alpha) - - -def reverse_transform(alpha, trans_info): - """recover pred to origin shape""" - for item in trans_info[::-1]: - if item[0][0] == 'resize': - h, w = item[1][0].numpy()[0], item[1][1].numpy()[0] - alpha = cv2.resize(alpha, dsize=(w, h)) - elif item[0][0] == 'padding': - h, w = item[1][0].numpy()[0], item[1][1].numpy()[0] - alpha = alpha[0:h, 0:w] - else: - raise Exception("Unexpected info '{}' in im_info".format(item[0])) - return alpha - - -def evaluate_ml(model, - eval_dataset, - num_workers=0, - print_detail=True, - save_dir='output/results', - save_results=True): - - loader = paddle.io.DataLoader( - eval_dataset, - batch_size=1, - drop_last=False, - num_workers=num_workers, - return_list=True, ) - - total_iters = len(loader) - mse_metric = metric.MSE() - sad_metric = metric.SAD() - grad_metric = metric.Grad() - conn_metric = metric.Conn() - - if print_detail: - logger.info("Start evaluating (total_samples: {}, total_iters: {})...". - format(len(eval_dataset), total_iters)) - progbar_val = progbar.Progbar(target=total_iters, verbose=1) - reader_cost_averager = TimeAverager() - batch_cost_averager = TimeAverager() - batch_start = time.time() - - img_name = '' - i = 0 - ignore_cnt = 0 - for iter, data in enumerate(loader): - - reader_cost_averager.record(time.time() - batch_start) - - image_rgb_chw = data['img'].numpy()[0] - image_rgb_hwc = np.transpose(image_rgb_chw, (1, 2, 0)) - trimap = data['trimap'].numpy().squeeze() / 255.0 - image = image_rgb_hwc * 0.5 + 0.5 # reverse normalize (x/255 - mean) / std - - is_fg = trimap >= 0.9 - is_bg = trimap <= 0.1 - - if is_fg.sum() == 0 or is_bg.sum() == 0: - ignore_cnt += 1 - logger.info(str(iter)) - continue - - alpha_pred = model(image, trimap) - - alpha_pred = reverse_transform(alpha_pred, data['trans_info']) - - alpha_gt = data['alpha'].numpy().squeeze() * 255 - - trimap = data['ori_trimap'].numpy().squeeze() - - alpha_pred = np.round(alpha_pred * 255) - mse = mse_metric.update(alpha_pred, alpha_gt, trimap) - sad = sad_metric.update(alpha_pred, alpha_gt, trimap) - grad = grad_metric.update(alpha_pred, alpha_gt, trimap) - conn = conn_metric.update(alpha_pred, alpha_gt, trimap) - - if sad > 1000: - print(data['img_name'][0]) - - if save_results: - alpha_pred_one = alpha_pred - alpha_pred_one[trimap == 255] = 255 - alpha_pred_one[trimap == 0] = 0 - - save_name = data['img_name'][0] - name, ext = os.path.splitext(save_name) - if save_name == img_name: - save_name = name + '_' + str(i) + ext - i += 1 - else: - img_name = save_name - save_name = name + '_' + str(0) + ext - i = 1 - save_alpha_pred(alpha_pred_one, os.path.join(save_dir, save_name)) - - batch_cost_averager.record( - time.time() - batch_start, num_samples=len(alpha_gt)) - batch_cost = batch_cost_averager.get_average() - reader_cost = reader_cost_averager.get_average() - - if print_detail: - progbar_val.update(iter + 1, 
- [('SAD', sad), ('MSE', mse), ('Grad', grad), - ('Conn', conn), ('batch_cost', batch_cost), - ('reader cost', reader_cost)]) - - reader_cost_averager.reset() - batch_cost_averager.reset() - batch_start = time.time() - - mse = mse_metric.evaluate() - sad = sad_metric.evaluate() - grad = grad_metric.evaluate() - conn = conn_metric.evaluate() - - logger.info('[EVAL] SAD: {:.4f}, MSE: {:.4f}, Grad: {:.4f}, Conn: {:.4f}'. - format(sad, mse, grad, conn)) - logger.info('{}'.format(ignore_cnt)) - - return sad, mse, grad, conn diff --git a/spaces/tvrsimhan/music-sep/app.py b/spaces/tvrsimhan/music-sep/app.py deleted file mode 100644 index 67b0ad0943e927f88e28ebb1cef9bc0794d68250..0000000000000000000000000000000000000000 --- a/spaces/tvrsimhan/music-sep/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import gradio as gr -from scipy.io.wavfile import write - - -def inference(audio): - os.makedirs("out", exist_ok=True) - write('test.wav', audio[0], audio[1]) - os.system("python3 -m demucs.separate -n mdx_extra_q -d cpu test.wav -o out") - return "./out/mdx_extra_q/test/vocals.wav","./out/mdx_extra_q/test/bass.wav",\ -"./out/mdx_extra_q/test/drums.wav","./out/mdx_extra_q/test/other.wav" - -title = "Demucs" -description = "Gradio demo for Demucs: Music Source Separation in the Waveform Domain. To use it, simply upload your audio, or click one of the examples to load them. Read more at the links below." -article = "

        Music Source Separation in the Waveform Domain | Github Repo

        " - -examples=[['test.mp3']] -gr.Interface( - inference, - gr.inputs.Audio(type="numpy", label="Input"), - [gr.outputs.Audio(type="filepath", label="Vocals"),gr.outputs.Audio(type="filepath", label="Bass"),gr.outputs.Audio(type="filepath", label="Drums"),gr.outputs.Audio(type="filepath", label="Other")], - title=title, - description=description, - article=article, - examples=examples - ).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/uRmario/arin/app.py b/spaces/uRmario/arin/app.py deleted file mode 100644 index 64f76ecf00b4163fdb53870275aa25a2cc2c30ec..0000000000000000000000000000000000000000 --- a/spaces/uRmario/arin/app.py +++ /dev/null @@ -1,49 +0,0 @@ -#Name = Mario-arin - -import tensorflow as tf -import requests -import gradio as gr -import tflite_runtime.interpreter as tflite -from PIL import Image as ImagePIL -import numpy as np - -# inception_net = tf.keras.applications.MobileNetV2() - -# #descargar labels -# response = requests.get("https://git.io/JJkYN") -# labels = response.text.split("\n") - -def classify_image(inp, model_client, labels_client): - # inp = inp.reshape((-1, 224, 224, 3)) - # inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) - # prediction = model_client.predict(inp).flatten() - type(model_client) - interpreter = tflite.Interpreter(model_content=model_client) - interpreter.allocate_tensors() - input_details = interpreter.get_input_details() - output_details = interpreter.get_output_details() - _, height, width, _ = interpreter.get_input_details()[0]['shape'] - - img= img.convert('RGB').resize([width, height], ImagePIL.ANTIALIAS) - - input_data = np.array(asarray(img), dtype=np.float32) - input_data = np.expand_dims(input_data , axis=0) - interpreter.set_tensor(input_details[0]['index'], input_data) - interpreter.invoke() - tensor_resultado= interpreter.get_tensor(output_details[0]['index'])[0] - - confidences = {labels_client[i]: float(tensor_resultado[i]) for i in range(1000)} - return confidences - -#Same namber of inputs as of inputs in the function, same for outputs in the return statement -demo=gr.Interface(fn=classify_image, - inputs=[gr.Image(shape=(224,224)), - gr.File(label="Modelo") , - gr.File(label="Labels")], - outputs=gr.Label(num_top_classes=3), - live=True) -#print(str(demo.share_url())) - -#demo.launch(share=True, auth=("admin", "pruebita1234")) -#looks like sharing and auth are not ok when uploading to hugging -demo.launch() \ No newline at end of file diff --git a/spaces/ulysses115/ulysses115-pmvoice/text/__init__.py b/spaces/ulysses115/ulysses115-pmvoice/text/__init__.py deleted file mode 100644 index 5eb38c97b07594d5413f98a4dce935507a38ae66..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/ulysses115-pmvoice/text/__init__.py +++ /dev/null @@ -1,66 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols,symbols_zh - - -# Mappings from symbol to numeric ID and vice versa: -# _symbol_to_id = {s: i for i, s in enumerate(symbols)} -# _id_to_symbol = {i: s for i, s in enumerate(symbols)} - -chinese_mode = True -if chinese_mode: - _symbol_to_id = {s: i for i, s in enumerate(symbols_zh)} - _id_to_symbol = {i: s for i, s in enumerate(symbols_zh)} -else: - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - _id_to_symbol = {i: s for i, s in enumerate(symbols)} - -def text_to_sequence(text, cleaner_names, ): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text, chinese_mode=True): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - # if chinese_mode: - # sequence = [_symbol_to_id_zh[symbol] for symbol in cleaned_text] - # else: - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/umoubuton/atri-bert-vits2/text/chinese.py b/spaces/umoubuton/atri-bert-vits2/text/chinese.py deleted file mode 100644 index 51acb3ec401d7647278a25537576a0fb1775d827..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/text/chinese.py +++ /dev/null @@ -1,198 +0,0 @@ -import os -import re - -import cn2an -from pypinyin import lazy_pinyin, Style - -from text.symbols import punctuation -from text.tone_sandhi import ToneSandhi - -current_file_path = os.path.dirname(__file__) -pinyin_to_symbol_map = { - line.split("\t")[0]: line.strip().split("\t")[1] - for line in open(os.path.join(current_file_path, "opencpop-strict.txt")).readlines() -} - -import jieba.posseg as psg - - -rep_map = { - ":": ",", - ";": ",", - ",": ",", - "。": ".", - "!": "!", - "?": "?", - "\n": ".", - "·": ",", - "、": ",", - "...": "…", - "$": ".", - "“": "'", - "”": "'", - "‘": "'", - "’": "'", - "(": "'", - ")": "'", - "(": "'", - ")": "'", - "《": "'", - "》": "'", - "【": "'", - "】": "'", - "[": "'", - "]": "'", - "—": "-", - "~": "-", - "~": "-", - "「": "'", - "」": "'", -} - -tone_modifier = ToneSandhi() - - -def replace_punctuation(text): - text = text.replace("嗯", "恩").replace("呣", "母") - pattern = re.compile("|".join(re.escape(p) for p in rep_map.keys())) - - replaced_text = pattern.sub(lambda x: rep_map[x.group()], text) - - replaced_text = re.sub( - r"[^\u4e00-\u9fa5" + "".join(punctuation) + r"]+", "", replaced_text - ) - - return replaced_text - - -def g2p(text): - pattern = r"(?<=[{0}])\s*".format("".join(punctuation)) - sentences = [i for i in re.split(pattern, text) if i.strip() != ""] - phones, tones, word2ph = _g2p(sentences) - assert sum(word2ph) == len(phones) - assert len(word2ph) == len(text) # Sometimes it will crash,you can add a try-catch. 
- phones = ["_"] + phones + ["_"] - tones = [0] + tones + [0] - word2ph = [1] + word2ph + [1] - return phones, tones, word2ph - - -def _get_initials_finals(word): - initials = [] - finals = [] - orig_initials = lazy_pinyin(word, neutral_tone_with_five=True, style=Style.INITIALS) - orig_finals = lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3 - ) - for c, v in zip(orig_initials, orig_finals): - initials.append(c) - finals.append(v) - return initials, finals - - -def _g2p(segments): - phones_list = [] - tones_list = [] - word2ph = [] - for seg in segments: - # Replace all English words in the sentence - seg = re.sub("[a-zA-Z]+", "", seg) - seg_cut = psg.lcut(seg) - initials = [] - finals = [] - seg_cut = tone_modifier.pre_merge_for_modify(seg_cut) - for word, pos in seg_cut: - if pos == "eng": - continue - sub_initials, sub_finals = _get_initials_finals(word) - sub_finals = tone_modifier.modified_tone(word, pos, sub_finals) - initials.append(sub_initials) - finals.append(sub_finals) - - # assert len(sub_initials) == len(sub_finals) == len(word) - initials = sum(initials, []) - finals = sum(finals, []) - # - for c, v in zip(initials, finals): - raw_pinyin = c + v - # NOTE: post process for pypinyin outputs - # we discriminate i, ii and iii - if c == v: - assert c in punctuation - phone = [c] - tone = "0" - word2ph.append(1) - else: - v_without_tone = v[:-1] - tone = v[-1] - - pinyin = c + v_without_tone - assert tone in "12345" - - if c: - # 多音节 - v_rep_map = { - "uei": "ui", - "iou": "iu", - "uen": "un", - } - if v_without_tone in v_rep_map.keys(): - pinyin = c + v_rep_map[v_without_tone] - else: - # 单音节 - pinyin_rep_map = { - "ing": "ying", - "i": "yi", - "in": "yin", - "u": "wu", - } - if pinyin in pinyin_rep_map.keys(): - pinyin = pinyin_rep_map[pinyin] - else: - single_rep_map = { - "v": "yu", - "e": "e", - "i": "y", - "u": "w", - } - if pinyin[0] in single_rep_map.keys(): - pinyin = single_rep_map[pinyin[0]] + pinyin[1:] - - assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin) - phone = pinyin_to_symbol_map[pinyin].split(" ") - word2ph.append(len(phone)) - - phones_list += phone - tones_list += [int(tone)] * len(phone) - return phones_list, tones_list, word2ph - - -def text_normalize(text): - numbers = re.findall(r"\d+(?:\.?\d+)?", text) - for number in numbers: - text = text.replace(number, cn2an.an2cn(number), 1) - text = replace_punctuation(text) - return text - - -def get_bert_feature(text, word2ph): - from text import chinese_bert - - return chinese_bert.get_bert_feature(text, word2ph) - - -if __name__ == "__main__": - from text.chinese_bert import get_bert_feature - - text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏" - text = text_normalize(text) - print(text) - phones, tones, word2ph = g2p(text) - bert = get_bert_feature(text, word2ph) - - print(phones, tones, word2ph, bert.shape) - - -# # 示例用法 -# text = "这是一个示例文本:,你好!这是一个测试...." -# print(g2p_paddle(text)) # 输出: 这是一个示例文本你好这是一个测试 diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Among The Sleep Mac Download A Unique and Terrifying First-Person Horror Game.md b/spaces/usbethFlerru/sovits-modelsV2/example/Among The Sleep Mac Download A Unique and Terrifying First-Person Horror Game.md deleted file mode 100644 index b12474d3174025dec39bbe8fc0bff2991bf0c8fd..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Among The Sleep Mac Download A Unique and Terrifying First-Person Horror Game.md +++ /dev/null @@ -1,13 +0,0 @@ - -

        An expansion level available through downloadable content is set before the events of the main story, again told from David's point of view. David wanders through a winter environment and finds five dolls surrounding a light that breaks, sending him to a house different from the one shown in the main story. He must locate and thaw the five dolls, which are frozen by the freezing wind coming through the open windows; this involves closing the windows and using music and the TV to free them. Throughout the house, bright flashback figures of David's parents are shown arguing over Zoey's alcohol abuse: she felt neglected, caring for David by herself while Justin worked all day to earn enough money to support them and their child. One flashback shows him hitting her as she collapses to the floor (implying he was protecting David from Zoey's drunken state, which resulted in the divorce). Along the way, David encounters Harald from the main story (the monster that appeared in the living room, now in a different form) and a living furnace monster named Hons in the basement. Once David finds all of the dolls, including the rabbit doll found outside the house after it has fallen from a window, Zoey is seen taking David before going away in her depression, implying that this house is Justin's and that Zoey is taking David to the house seen in the main story, leaving the doll out in the cold.

        -

        Among The Sleep Free Download Mac


        Download File ✑ ✑ ✑ https://urlcod.com/2uyWfY



        -

        People love free Steam games, no doubt. But what many people hate is downloading so many parts and trying to install them on their own. This is why we are the only site that pre-installs every game for you. We have many categories like shooters, action, racing, simulators and even VR games! We strive to satisfy our users and ask for nothing in return. We revolutionized the downloading scene and will continue being your #1 site for free games.

        -

        Dustin Ralston of Sleepy-Time DSP has recently announced that he has stopped further development of his VST plugins. This is sad news for the music production community, because his freeware VST plugins were among the most well-crafted ones out there. Sleepy-Time DSP plugins are beautifully designed and optimized for fast performance.

        -

        But ultimately, relying on any one app to protect your system, data, and privacy is a bad bet, especially when almost every antivirus app has proven vulnerable on occasion. No antivirus tool, paid or free, can catch every malicious bit of software that arrives on your computer. You also need secure passwords, two-factor logins, data encryption, systemwide backups, automatic software updates, and smart privacy tools added to your browser. You need to be mindful of what you download and to download software only from official sources, such as the Microsoft App Store and Apple Mac App Store, whenever possible. You should avoid downloading and opening email attachments unless you know what they are. For guidance, check out our full guide to setting up all these security layers.

        -

        It can also be helpful in attaining mental peace and serenity. The app even contains peaceful meditation sounds that make it easier to focus and remain relaxed during the whole process. It is a highly user-friendly app and is absolutely free to download on Android smartphones.

        -

        The clarity of the sound surpasses that of almost every other application in this genre of nature-sound apps. It also provides a sleep and wake timer to its users. The simple user interface, along with earbud optimisation, makes the software highly efficient and popular among users.

        -

        -

        myNoise has a free version that includes white noise, rain noise, binaural beats, spring walk, temple bells, and warp speed. The app also includes a sleep timer to turn off the sound and an alarm for gradual wake-up.

        -

        Pillow has one of the best user interfaces among all the sleep-tracking apps for Apple Watch. The app features a nice bottom menu bar with big buttons, menus, and eye-popping sleep graphs for reading your data.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Anwar Hindi Movie Songs 320kbps Free Downloadgolkesl ((INSTALL)).md b/spaces/usbethFlerru/sovits-modelsV2/example/Anwar Hindi Movie Songs 320kbps Free Downloadgolkesl ((INSTALL)).md deleted file mode 100644 index f185988fb5b99afe17eedc2c577b3a9d7ec6c56d..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Anwar Hindi Movie Songs 320kbps Free Downloadgolkesl ((INSTALL)).md +++ /dev/null @@ -1,11 +0,0 @@ - -

        walpZoffoopyiptyday [url= ]Download Mail Designer 365 V1.1.2 For Mac With Activation Code Latest Free Download[/url]DrediuhIrrivataree [url= ]f1 2010 pc 1.01 crack download[/url] Download Nude It Apk Free Android [url= ]Download[/url] melsAtterve [url= ]Download[/url] karo menas knyga pdf 20 [url= ]thingiverse[/url] Tum Hi To Ho full movie hd 1080p blu-ray download free [url= ] [/url] briletypeAbumunult [url= ]thingiverse[/url] ReFWocheNuththegodat [url= ] [/url]cewek smp bugil di perkosa keluar darah perawan [url= ]thingiverse[/url] stronghold crusader crack 1.1 81 [url= ]thingiverse[/url] [url= -announcements/1418154/iron-maiden-remastered-collection-320kbps]Iron Maiden Remastered Collection 320kbps[/url] b396299

        -

        Anwar Hindi Movie Songs 320kbps Free Downloadgolkesl


        DOWNLOAD https://urlcod.com/2uyUyl



        -

        walpZoffoopyiptyday [url= ]thingiverse.com[/url]DrediuhIrrivataree [url= ] [/url] Download Kyun Ho Gaya Na In Hindi Torrent [url= ]thingiverse.com[/url] melsAtterve [url= -professional-14-crack-the-whip.html] -professional-14-crack-the-whip.html[/url] counterstrike13fullversionfreedownloadrar [url= -ghalti-kr-betha-hai-Complete-novel-by-Huma-Waqas.html]thingiverse[/url] NatttureCemFrawlHem [url= ]thingiverse[/url] vegeta ssj4 mugen char download [url= ]thingiverse.com[/url] ReFWocheNuththegodat [url= _Paravasam_Subtitles_Free_Download.html]Paarthale Paravasam Subtitles Free Download[/url]Realflow Plugin For 3ds Max 2016 259 [url= ]thingiverse.com[/url] flissinneple [url= -Code-Ccp-001mbepub.html]thingiverse[/url]
        walpZoffoopyiptyday [url= -Film-1080p-Izle-Film.html]thingiverse.com[/url]DrediuhIrrivataree [url= _cinema_4d_crack_torrent.html]Download[/url] Kaptaan 2 Full Movie Bluray 1080p [url= _login_e_senha.html] _login_e_senha.html[/url] melsAtterve [url= _Reset_Utility_Key_Torrent.html]Download[/url] Inferno Full Movie In Tamil Free Download 720p [url= ]Download[/url] NatttureCemFrawlHem [url= ]thingiverse.com[/url] briletypeAbumunult [url= -Software-IcoFX-221-Multilang-Serial-Key-Keygen.html]IcoFX Software IcoFX 2.2.1 Multilang Serial Key Keygen[/url] ReFWocheNuththegodat [url= ]thingiverse[/url]EquantyroarkPata [url= _2000_limba_romana.html]Download[/url] zamfoo 3 0 nulled scripts [url= ]Download[/url]
        walpZoffoopyiptyday [url= ]thingiverse[/url]the Baa Baaa Black Sheep full movie hd in hindi free download [url= _mod_gta_5golkes.html]thingiverse[/url] vaaranamaayiramtamilpdf22 [url= -Nintendo-For-PC-Every-SNES-Rom-N-Emu-EVER-11337-Roms-Free-Download.html]Super Nintendo For PC (Every SNES Rom N Emu EVER) (11337 Roms) Free Download[/url] melsAtterve [url= _of_skyrim_cbbe.html]thingiverse.com[/url] sesspaphpag [url= ]descargarkaraokeprofesionalgratisfullconcrackyserialcompleto[/url] NatttureCemFrawlHem [url= -Of-Brothers-1080p-Latino-Mega-Hd.html]Download[/url] briletypeAbumunult [url= ]thingiverse.com[/url] ali muhammad taji ghazal mp3 download [url= ]Download[/url]MICROSOFT DYNAMICS CRM SERVER 2013 MSDN Serial Key keygen [url= _Arabian_Nights_Stories_In_Tamil_Pdf_267.html]thingiverse[/url] Sanam Bewafa dual audio hindi 720p [url= -You-Dont-Mess-With-The-Zohan-Mp4.html]Download[/url]
        [url= =31834.0]Iclone 5 Physics Toolbox 22[/url] 65d3cb3

        -

        Bongiovi Acoustics DPS 1.2.3 (Audio Enhancer) 64 bit [url= [VERIFIED] The Legend Of Bhagat Singh Movie Hd Video Download]Download[/url]tamil dubbed 1080p movies Shimla Mirchi [url= -blog/32812]Download[/url] descargar final mundial sudafrica 2010 hd 1080p [url= _a20_firmware.html]Download[/url] type3.type edit 2008 dongle cracked [url= ] [/url] Bentley Microstation V8i (SELECTSeries 3) 08 11 09 578 Crack [MUMBAI TPB] 64 [url= ]Download[/url] NatttureCemFrawlHem [url= -professional-61-crack-download.html] -professional-61-crack-download.html[/url] briletypeAbumunult [url= ]vuze plus activation code keygen 4.7.0.2[/url] 300 Spartans Full Movie Tagalog [url= -manidweepavarnanaintelugupdffreedownload]thingiverse[/url]artsoft mach 4 crack 536 [url= -crack.pdf]thingiverse[/url] flissinneple [url= ]No More Sheets Juanita Bynum Pdf Download[/url]
        walpZoffoopyiptyday [url= -code-movie-hindi-audio-track-download.html] -code-movie-hindi-audio-track-download.html[/url]DrediuhIrrivataree [url= -nunca-rendirse-jamas-1080p-latino-mega-101-2.html]Retroceder Nunca Rendirse Jamas 1080p Latino Mega 101[/url] Moyea Ppt To Video Converter Registration Code Keygen 14 [url= -dialog-naskah-drama-sangkuriang-bahasa-jawa-5-orang]thingiverse[/url] melsAtterve [url= -completa-mi-verdad-liz-vega.pdf] -completa-mi-verdad-liz-vega.pdf[/url] download 720p Zanjeer movies in hindi [url= ] [/url] District 13 Ultimatum English Dubbed DVDRIP torrent [url= -crack.pdf]thingiverse[/url] rns 510 manager 94 [url= -gidens-sociologija-pdf-download.html] -gidens-sociologija-pdf-download.html[/url] ReFWocheNuththegodat [url= -113-with-keygen-win-linux-mac-crack.html]Download[/url]bernina embroidery software 7 crack full [url= _12_9d4314dec1cbb78d0ffdbfe840bdda4e_file.pdf]ADOBE DREAMWEAVER CC 2018 18.2.0.165 CRACK [Crackzsoft] Keygen[/url] Noiseware Professional V4.1.1.0 for Adobe Photoshop..zip [url= -full-version-x32-ultimate-crack-windows-utorrent-exe]Download[/url]
        walpZoffoopyiptyday [url= _12_185a0588ae14b18b6f11a74e216e21e6_file.pdf]thingiverse.com[/url]DrediuhIrrivataree [url= ]thingiverse.com[/url] Taiseertaids [url= -shree-lipi-7-1-software-license-rar-torrent-pc] -shree-lipi-7-1-software-license-rar-torrent-pc[/url] melsAtterve [url= -download-xforce-keygen-maya-lt-2019-64-bit-patch] -download-xforce-keygen-maya-lt-2019-64-bit-patch[/url] Disk Drill Pro 2.0.1.333 crack [url= !NEW! Keygen 64-bit 3ds Max 2018 Free Download]thingiverse.com[/url] NatttureCemFrawlHem [url= -dark-thirty-720p-torrent.html]Download[/url] briletypeAbumunult [url= Cosic Knjiga Pdf Download indeaberly] Cosic Knjiga Pdf Download indeaberly[/url] ReFWocheNuththegodat [url= ] [/url]silat lagenda full movie [url= ]Download[/url] flissinneple [url= ]thingiverse.com[/url]
        [url= -3d-illustration.html]Cute4, 36382890rlL @iMGSRC.RU[/url] 880adde

        -

        walpZoffoopyiptyday [url= -gammon-2-activation-key.html]Download[/url]DrediuhIrrivataree [url= -insight-into-heaven-book-download.pdf] -insight-into-heaven-book-download.pdf[/url] Libro Historia Del Futuro David Diamond PDF [url= _12_8d914814e6218540a3d0aa5bab315fa8_file.pdf]thingiverse[/url] melsAtterve [url= -cdp-vci-driver-download.html] -cdp-vci-driver-download.html[/url] Kis Kisko Pyaar Karoon 1 full movie download [url= ] [/url] download ink master 3 temporada 13 [url= -s3g-10042-free-download.pdf]Download[/url] model nota de intrare receptie excel [url= _Hereketi_Qaydalari_Kitabipdf.html]Yol Hereketi Qaydalari Kitabi.pdf[/url] ReFWocheNuththegodat [url= Crack Keygen Serial Key bernwin]thingiverse[/url]EquantyroarkPata [url= -portable-origin-pro-81-sr3rar] -portable-origin-pro-81-sr3rar[/url] flissinneple [url= _388eeb4e9f3229-there-are-no-words-in-the-english-language-that-can-do-justice-in.html]Download[/url]
        kitab munyatul musolli pdf download [url= _28eeb4e9f3241-the-hindi-movie-download-world-is-at-your-fingertips-with-loha-mov.html]thingiverse.com[/url]DrediuhIrrivataree [url= -and-furious-6-tamil-dubbed-movie-free-downloa.pdf]fast and furious 6 tamil dubbed movie free download tamilrockers[/url] Autodata 3 40 Change Language [url= -pakistan-affairs-book-by-ikram-rabbani-pdf-download] -pakistan-affairs-book-by-ikram-rabbani-pdf-download[/url] melsAtterve [url= -catia-v5-r19-crack-64-bitrar]Download[/url] sesspaphpag [url= _y9_7i17phcdsK31w8] _y9_7i17phcdsK31w8[/url] NatttureCemFrawlHem [url= ]RESIDENTEVIL7biohazardCPYLicenseKey[/url] Fukrey Returns movie in hindi free download [url= ] [/url] ReFWocheNuththegodat [url= -32-pc-key-professional-nulled] -32-pc-key-professional-nulled[/url]EquantyroarkPata [url= ]thingiverse[/url] flissinneple [url= -servers.com/wowonder/read-blog/784]Download[/url]
        walpZoffoopyiptyday [url= -msg-download.pdf]Download[/url]Chandni Chowk To China full movie download in 720p 1080p [url= -hazaaron-mein-meri-behna-serial-song-download.pdf]ek hazaaron mein meri behna serial song download[/url] Call Of Duty Black Ops 2 Multiplayer Crack Fix [url= ]thingiverse[/url] melsAtterve [url= ]thingiverse[/url] sesspaphpag [url= Pro V5.3.168 (Portable) Download]Acdsee Pro V5.3.168 (Portable) Download[/url] UFS 2.10 HWK Support Suite Setup V02.10 13 [url= -Fotophire-131-Crack-Full-Registration-.pdf]thingiverse[/url] Elaan Movie Part 4 In Hindi Free Download Torrent [url= -Casa-De-Papel-123-Sezon-indir-Turkce-Dublaj.pdf] -Casa-De-Papel-123-Sezon-indir-Turkce-Dublaj.pdf[/url] ReFWocheNuththegodat [url= -movie-download-tamilrockers-home.html]thingiverse[/url]fifty shades of grey full movie hindi dubbed 38 [url= _958eeb4e9f3211-this-is-a-blog-about-downloading-and-watching-the-movie-jodha-akb.html]thingiverse.com[/url] auto data german 3.38 11 [url= ]thingiverse[/url]
        [url= -movist-pro-v2-6-4-cr2-tnt-dmg.html]Movist_Pro_v2.6.4_CR2_TNT.dmg[/url] dde72c6

        -

        -

        walpZoffoopyiptyday [url= ]Download[/url]darkspore offline crack torrent [url= ]itermorono.wixsite[/url] Befikre movie hindi dubbed download [url= _Mussaddi__Office_Office_2_Full_Movie_In_Hindi_Free_Download_Hd_1080p.html]cdn.thingiverse[/url] Emperor: Rise Of The Middle Kingdom 2.0.0.2 GOG Crack [url= _fusion_64_crack_portable.html]cdn.thingiverse[/url] sesspaphpag [url= _top_-32bit-c-of-duty-4-multiplayer-only-17-by-flippo-pc-activator-cracked-software]Windows 7 Loader Vista Slic Loader 2.4.8 X86.and.x64 .rar Indows 7 Loader Vista Sl[/url] NatttureCemFrawlHem [url= -blog/3310]dortichomisvathink.wixsite.com[/url] briletypeAbumunult [url= ]cdn.thingiverse[/url] Coolutils Tiff Teller 5.1.0.35 With Crack [Latest] [url= -32-utorrent-pc-software-iso-serial-registration-floebraz]Download[/url]Not Angka Lagu Bungan Sandat [url= -paie-v17-5-FULL-Version-download.html]structural analysis vaidyanathan pdf free[/url] Baixar Modelo De Rifas No Word [url= _2_train_simulator_free_download_full_version.html]wakelet.com[/url]

        -

        walpZoffoopyiptyday [url= Blackmagic 64 Windows Download Ultimate Patch Iso =LINK=]face.kawabray[/url]DrediuhIrrivataree [url= Pagemaker 5.0 Exe Full Version 32 Pro Key ##BEST##]cdn.thingiverse[/url] synology camera license hack [url= ]Download[/url] Panchlait in hindi download hd [url= -data-science-from-scratch-pdf-book-full-utorrent-rar-walshtho] -data-science-from-scratch-pdf-book-full-utorrent-rar-walshtho[/url] sesspaphpag [url= -cracked-t-y-7-2-x32-windows-utorrent-activation-full-zip] -cracked-t-y-7-2-x32-windows-utorrent-activation-full-zip[/url] NatttureCemFrawlHem [url= -rs-means-estimating-handbook-hd-dts-dubbed-torrents-harmar]Diskinternals vmfs recovery 1.5 keygen[/url] Thendral Serial Malligai Panthal Song 12 [url= -gca-extrac-crack-toped-pc-activation-32bit-pro] -gca-extrac-crack-toped-pc-activation-32bit-pro[/url] ReFWocheNuththegodat [url= ]cdn.thingiverse[/url]american pie 3 movie free download for mobile [url= Machine Sign By Abdul Mubeen 37 Ebook (epub) Torrent Full memorailey]seesaawiki[/url] flissinneple [url= -Full-Movie-Download-1080p.pdf]Download[/url]

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Autocad 2017 Mac Crack A Comprehensive Review and Comparison.md b/spaces/usbethFlerru/sovits-modelsV2/example/Autocad 2017 Mac Crack A Comprehensive Review and Comparison.md deleted file mode 100644 index 6dee7ebf2c46fd7cf686e2ab31ec4d937abfcd9b..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Autocad 2017 Mac Crack A Comprehensive Review and Comparison.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Autocad 2017 Mac Crack


        Download Zip https://urlcod.com/2uyUdt



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Coreldraw X6 Portable 11.md b/spaces/usbethFlerru/sovits-modelsV2/example/Coreldraw X6 Portable 11.md deleted file mode 100644 index ae640eca7ebce3c77b797488a07fe6b8e89ee8b8..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Coreldraw X6 Portable 11.md +++ /dev/null @@ -1,6 +0,0 @@ -
        -

        I will share a graphic design application, which is none other than CorelDraw, but this time it is a portable version, meaning we can use it on any computer or laptop, anywhere and anytime we want. For more details, please download the CorelDraw X5 Portable application software below.

        -

        Coreldraw X6 Portable 11


        Download Zip ✔✔✔ https://urlcod.com/2uyW4T



        -

        Clicking the button below will start downloading the standalone portable version of CorelDRAW Graphics Suite 2017 19.0 for Windows. How to download torrent 20017. It is compatible with x86 and x64 architectures. Ai charger for Mac download. It is a powerful graphic design tool with many features and options. CorelDRAW Graphics Suite X8 is also available for download.

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Dark Avenger v1.0.8 Apk MOD (Unlimited Purchase) Download and Play the Ultimate Action RPG.md b/spaces/usbethFlerru/sovits-modelsV2/example/Dark Avenger v1.0.8 Apk MOD (Unlimited Purchase) Download and Play the Ultimate Action RPG.md deleted file mode 100644 index 4bf3578026d93c35abda8fa046920ece3e4a423d..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Dark Avenger v1.0.8 Apk MOD (Unlimited Purchase) Download and Play the Ultimate Action RPG.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Dark Avenger v1.0.8 Apk MOD (Unlimited Purchase)


        Download File https://urlcod.com/2uyVqR



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/vaibhavarduino/anime-plus/e4e/models/__init__.py b/spaces/vaibhavarduino/anime-plus/e4e/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/vict0rsch/climateGAN/climategan/losses.py b/spaces/vict0rsch/climateGAN/climategan/losses.py deleted file mode 100644 index f10a5d26c73795bad02837f546b96c76b24e7564..0000000000000000000000000000000000000000 --- a/spaces/vict0rsch/climateGAN/climategan/losses.py +++ /dev/null @@ -1,620 +0,0 @@ -"""Define all losses. When possible, as inheriting from nn.Module -To send predictions to target.device -""" -from random import random as rand - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import models - - -class GANLoss(nn.Module): - def __init__( - self, - use_lsgan=True, - target_real_label=1.0, - target_fake_label=0.0, - soft_shift=0.0, - flip_prob=0.0, - verbose=0, - ): - """Defines the GAN loss which uses either LSGAN or the regular GAN. - When LSGAN is used, it is basically same as MSELoss, - but it abstracts away the need to create the target label tensor - that has the same size as the input + - - * label smoothing: target_real_label=0.75 - * label flipping: flip_prob > 0. - - source: https://github.com/sangwoomo/instagan/blob - /b67e9008fcdd6c41652f8805f0b36bcaa8b632d6/models/networks.py - - Args: - use_lsgan (bool, optional): Use MSE or BCE. Defaults to True. - target_real_label (float, optional): Value for the real target. - Defaults to 1.0. - target_fake_label (float, optional): Value for the fake target. - Defaults to 0.0. - flip_prob (float, optional): Probability of flipping the label - (use for real target in Discriminator only). Defaults to 0.0. 
- """ - super().__init__() - - self.soft_shift = soft_shift - self.verbose = verbose - - self.register_buffer("real_label", torch.tensor(target_real_label)) - self.register_buffer("fake_label", torch.tensor(target_fake_label)) - if use_lsgan: - self.loss = nn.MSELoss() - else: - self.loss = nn.BCEWithLogitsLoss() - self.flip_prob = flip_prob - - def get_target_tensor(self, input, target_is_real): - soft_change = torch.FloatTensor(1).uniform_(0, self.soft_shift) - if self.verbose > 0: - print("GANLoss sampled soft_change:", soft_change.item()) - if target_is_real: - target_tensor = self.real_label - soft_change - else: - target_tensor = self.fake_label + soft_change - return target_tensor.expand_as(input) - - def __call__(self, input, target_is_real, *args, **kwargs): - r = rand() - if isinstance(input, list): - loss = 0 - for pred_i in input: - if isinstance(pred_i, list): - pred_i = pred_i[-1] - if r < self.flip_prob: - target_is_real = not target_is_real - target_tensor = self.get_target_tensor(pred_i, target_is_real) - loss_tensor = self.loss(pred_i, target_tensor.to(pred_i.device)) - loss += loss_tensor - return loss / len(input) - else: - if r < self.flip_prob: - target_is_real = not target_is_real - target_tensor = self.get_target_tensor(input, target_is_real) - return self.loss(input, target_tensor.to(input.device)) - - -class FeatMatchLoss(nn.Module): - def __init__(self): - super().__init__() - self.criterionFeat = nn.L1Loss() - - def __call__(self, pred_real, pred_fake): - # pred_{real, fake} are lists of features - num_D = len(pred_fake) - GAN_Feat_loss = 0.0 - for i in range(num_D): # for each discriminator - # last output is the final prediction, so we exclude it - num_intermediate_outputs = len(pred_fake[i]) - 1 - for j in range(num_intermediate_outputs): # for each layer output - unweighted_loss = self.criterionFeat( - pred_fake[i][j], pred_real[i][j].detach() - ) - GAN_Feat_loss += unweighted_loss / num_D - return GAN_Feat_loss - - -class CrossEntropy(nn.Module): - def __init__(self): - super().__init__() - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, target): - return self.loss(logits, target.to(logits.device).long()) - - -class TravelLoss(nn.Module): - def __init__(self, eps=1e-12): - super().__init__() - self.eps = eps - - def cosine_loss(self, real, fake): - norm_real = torch.norm(real, p=2, dim=1)[:, None] - norm_fake = torch.norm(fake, p=2, dim=1)[:, None] - mat_real = real / norm_real - mat_fake = fake / norm_fake - mat_real = torch.max(mat_real, self.eps * torch.ones_like(mat_real)) - mat_fake = torch.max(mat_fake, self.eps * torch.ones_like(mat_fake)) - # compute only the diagonal of the matrix multiplication - return torch.einsum("ij, ji -> i", mat_fake, mat_real).sum() - - def __call__(self, S_real, S_fake): - self.v_real = [] - self.v_fake = [] - for i in range(len(S_real)): - for j in range(i): - self.v_real.append((S_real[i] - S_real[j])[None, :]) - self.v_fake.append((S_fake[i] - S_fake[j])[None, :]) - self.v_real_t = torch.cat(self.v_real, dim=0) - self.v_fake_t = torch.cat(self.v_fake, dim=0) - return self.cosine_loss(self.v_real_t, self.v_fake_t) - - -class TVLoss(nn.Module): - """Total Variational Regularization: Penalizes differences in - neighboring pixel values - - source: - https://github.com/jxgu1016/Total_Variation_Loss.pytorch/blob/master/TVLoss.py - """ - - def __init__(self, tvloss_weight=1): - """ - Args: - TVLoss_weight (int, optional): [lambda i.e. weight for loss]. Defaults to 1. 
- """ - super(TVLoss, self).__init__() - self.tvloss_weight = tvloss_weight - - def forward(self, x): - batch_size = x.size()[0] - h_x = x.size()[2] - w_x = x.size()[3] - count_h = self._tensor_size(x[:, :, 1:, :]) - count_w = self._tensor_size(x[:, :, :, 1:]) - h_tv = torch.pow((x[:, :, 1:, :] - x[:, :, : h_x - 1, :]), 2).sum() - w_tv = torch.pow((x[:, :, :, 1:] - x[:, :, :, : w_x - 1]), 2).sum() - return self.tvloss_weight * 2 * (h_tv / count_h + w_tv / count_w) / batch_size - - def _tensor_size(self, t): - return t.size()[1] * t.size()[2] * t.size()[3] - - -class MinentLoss(nn.Module): - """ - Loss for the minimization of the entropy map - Source for version 1: https://github.com/valeoai/ADVENT - - Version 2 adds the variance of the entropy map in the computation of the loss - """ - - def __init__(self, version=1, lambda_var=0.1): - super().__init__() - self.version = version - self.lambda_var = lambda_var - - def __call__(self, pred): - assert pred.dim() == 4 - n, c, h, w = pred.size() - entropy_map = -torch.mul(pred, torch.log2(pred + 1e-30)) / np.log2(c) - if self.version == 1: - return torch.sum(entropy_map) / (n * h * w) - else: - entropy_map_demean = entropy_map - torch.sum(entropy_map) / (n * h * w) - entropy_map_squ = torch.mul(entropy_map_demean, entropy_map_demean) - return torch.sum(entropy_map + self.lambda_var * entropy_map_squ) / ( - n * h * w - ) - - -class MSELoss(nn.Module): - """ - Creates a criterion that measures the mean squared error - (squared L2 norm) between each element in the input x and target y . - """ - - def __init__(self): - super().__init__() - self.loss = nn.MSELoss() - - def __call__(self, prediction, target): - return self.loss(prediction, target.to(prediction.device)) - - -class L1Loss(MSELoss): - """ - Creates a criterion that measures the mean absolute error - (MAE) between each element in the input x and target y - """ - - def __init__(self): - super().__init__() - self.loss = nn.L1Loss() - - -class SIMSELoss(nn.Module): - """Scale invariant MSE Loss""" - - def __init__(self): - super(SIMSELoss, self).__init__() - - def __call__(self, prediction, target): - d = prediction - target - diff = torch.mean(d * d) - relDiff = torch.mean(d) * torch.mean(d) - return diff - relDiff - - -class SIGMLoss(nn.Module): - """loss from MiDaS paper - MiDaS did not specify how the gradients were computed but we use Sobel - filters which approximate the derivative of an image. 
- """ - - def __init__(self, gmweight=0.5, scale=4, device="cuda"): - super(SIGMLoss, self).__init__() - self.gmweight = gmweight - self.sobelx = torch.Tensor([[1, 0, -1], [2, 0, -2], [1, 0, -1]]).to(device) - self.sobely = torch.Tensor([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]).to(device) - self.scale = scale - - def __call__(self, prediction, target): - # get disparities - # align both the prediction and the ground truth to have zero - # translation and unit scale - t_pred = torch.median(prediction) - t_targ = torch.median(target) - s_pred = torch.mean(torch.abs(prediction - t_pred)) - s_targ = torch.mean(torch.abs(target - t_targ)) - pred = (prediction - t_pred) / s_pred - targ = (target - t_targ) / s_targ - - R = pred - targ - - # get gradient map with sobel filters - batch_size = prediction.size()[0] - num_pix = prediction.size()[-1] * prediction.size()[-2] - sobelx = (self.sobelx).expand((batch_size, 1, -1, -1)) - sobely = (self.sobely).expand((batch_size, 1, -1, -1)) - gmLoss = 0 # gradient matching term - for k in range(self.scale): - R_ = F.interpolate(R, scale_factor=1 / 2 ** k) - Rx = F.conv2d(R_, sobelx, stride=1) - Ry = F.conv2d(R_, sobely, stride=1) - gmLoss += torch.sum(torch.abs(Rx) + torch.abs(Ry)) - gmLoss = self.gmweight / num_pix * gmLoss - # scale invariant MSE - simseLoss = 0.5 / num_pix * torch.sum(torch.abs(R)) - loss = simseLoss + gmLoss - return loss - - -class ContextLoss(nn.Module): - """ - Masked L1 loss on non-water - """ - - def __call__(self, input, target, mask): - return torch.mean(torch.abs(torch.mul((input - target), 1 - mask))) - - -class ReconstructionLoss(nn.Module): - """ - Masked L1 loss on water - """ - - def __call__(self, input, target, mask): - return torch.mean(torch.abs(torch.mul((input - target), mask))) - - -################################################################################## -# VGG network definition -################################################################################## - -# Source: https://github.com/NVIDIA/pix2pixHD -class Vgg19(nn.Module): - def __init__(self, requires_grad=False): - super(Vgg19, self).__init__() - vgg_pretrained_features = models.vgg19(pretrained=True).features - self.slice1 = nn.Sequential() - self.slice2 = nn.Sequential() - self.slice3 = nn.Sequential() - self.slice4 = nn.Sequential() - self.slice5 = nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - - -# Source: https://github.com/NVIDIA/pix2pixHD -class VGGLoss(nn.Module): - def __init__(self, device): - super().__init__() - self.vgg = Vgg19().to(device).eval() - self.criterion = nn.L1Loss() - self.weights = [1.0 / 32, 1.0 / 16, 1.0 / 8, 1.0 / 4, 1.0] - - def forward(self, x, y): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - for i in range(len(x_vgg)): - loss += self.weights[i] * self.criterion(x_vgg[i], 
y_vgg[i].detach()) - return loss - - -def get_losses(opts, verbose, device=None): - """Sets the loss functions to be used by G, D and C, as specified - in the opts and returns a dictionnary of losses: - - losses = { - "G": { - "gan": {"a": ..., "t": ...}, - "cycle": {"a": ..., "t": ...} - "auto": {"a": ..., "t": ...} - "tasks": {"h": ..., "d": ..., "s": ..., etc.} - }, - "D": GANLoss, - "C": ... - } - """ - - losses = { - "G": {"a": {}, "p": {}, "tasks": {}}, - "D": {"default": {}, "advent": {}}, - "C": {}, - } - - # ------------------------------ - # ----- Generator Losses ----- - # ------------------------------ - - # painter losses - if "p" in opts.tasks: - losses["G"]["p"]["gan"] = ( - HingeLoss() - if opts.gen.p.loss == "hinge" - else GANLoss( - use_lsgan=False, - soft_shift=opts.dis.soft_shift, - flip_prob=opts.dis.flip_prob, - ) - ) - losses["G"]["p"]["dm"] = MSELoss() - losses["G"]["p"]["vgg"] = VGGLoss(device) - losses["G"]["p"]["tv"] = TVLoss() - losses["G"]["p"]["context"] = ContextLoss() - losses["G"]["p"]["reconstruction"] = ReconstructionLoss() - losses["G"]["p"]["featmatch"] = FeatMatchLoss() - - # depth losses - if "d" in opts.tasks: - if not opts.gen.d.classify.enable: - if opts.gen.d.loss == "dada": - depth_func = DADADepthLoss() - else: - depth_func = SIGMLoss(opts.train.lambdas.G.d.gml) - else: - depth_func = CrossEntropy() - - losses["G"]["tasks"]["d"] = depth_func - - # segmentation losses - if "s" in opts.tasks: - losses["G"]["tasks"]["s"] = {} - losses["G"]["tasks"]["s"]["crossent"] = CrossEntropy() - losses["G"]["tasks"]["s"]["minent"] = MinentLoss() - losses["G"]["tasks"]["s"]["advent"] = ADVENTAdversarialLoss( - opts, gan_type=opts.dis.s.gan_type - ) - - # masker losses - if "m" in opts.tasks: - losses["G"]["tasks"]["m"] = {} - losses["G"]["tasks"]["m"]["bce"] = nn.BCEWithLogitsLoss() - if opts.gen.m.use_minent_var: - losses["G"]["tasks"]["m"]["minent"] = MinentLoss( - version=2, lambda_var=opts.train.lambdas.advent.ent_var - ) - else: - losses["G"]["tasks"]["m"]["minent"] = MinentLoss() - losses["G"]["tasks"]["m"]["tv"] = TVLoss() - losses["G"]["tasks"]["m"]["advent"] = ADVENTAdversarialLoss( - opts, gan_type=opts.dis.m.gan_type - ) - losses["G"]["tasks"]["m"]["gi"] = GroundIntersectionLoss() - - # ---------------------------------- - # ----- Discriminator Losses ----- - # ---------------------------------- - if "p" in opts.tasks: - losses["D"]["p"] = losses["G"]["p"]["gan"] - if "m" in opts.tasks or "s" in opts.tasks: - losses["D"]["advent"] = ADVENTAdversarialLoss(opts) - return losses - - -class GroundIntersectionLoss(nn.Module): - """ - Penalize areas in ground seg but not in flood mask - """ - - def __call__(self, pred, pseudo_ground): - return torch.mean(1.0 * ((pseudo_ground - pred) > 0.5)) - - -def prob_2_entropy(prob): - """ - convert probabilistic prediction maps to weighted self-information maps - """ - n, c, h, w = prob.size() - return -torch.mul(prob, torch.log2(prob + 1e-30)) / np.log2(c) - - -class CustomBCELoss(nn.Module): - """ - The first argument is a tensor and the second argument is an int. - There is no need to take sigmoid before calling this function. - """ - - def __init__(self): - super().__init__() - self.loss = nn.BCEWithLogitsLoss() - - def __call__(self, prediction, target): - return self.loss( - prediction, - torch.FloatTensor(prediction.size()) - .fill_(target) - .to(prediction.get_device()), - ) - - -class ADVENTAdversarialLoss(nn.Module): - """ - The class is for calculating the advent loss. 
- It is used to indirectly shrink the domain gap between sim and real - - _call_ function: - prediction: torch.tensor with shape of [bs,c,h,w] - target: int; domain label: 0 (sim) or 1 (real) - discriminator: the discriminator model tells if a tensor is from sim or real - - output: the loss value of GANLoss - """ - - def __init__(self, opts, gan_type="GAN"): - super().__init__() - self.opts = opts - if gan_type == "GAN": - self.loss = CustomBCELoss() - elif gan_type == "WGAN" or "WGAN_gp" or "WGAN_norm": - self.loss = lambda x, y: -torch.mean(y * x + (1 - y) * (1 - x)) - else: - raise NotImplementedError - - def __call__(self, prediction, target, discriminator, depth_preds=None): - """ - Compute the GAN loss from the Advent Discriminator given - normalized (softmaxed) predictions (=pixel-wise class probabilities), - and int labels (target). - - Args: - prediction (torch.Tensor): pixel-wise probability distribution over classes - target (torch.Tensor): pixel wise int target labels - discriminator (torch.nn.Module): Discriminator to get the loss - - Returns: - torch.Tensor: float 0-D loss - """ - d_out = prob_2_entropy(prediction) - if depth_preds is not None: - d_out = d_out * depth_preds - d_out = discriminator(d_out) - if self.opts.dis.m.architecture == "OmniDiscriminator": - d_out = multiDiscriminatorAdapter(d_out, self.opts) - loss_ = self.loss(d_out, target) - return loss_ - - -def multiDiscriminatorAdapter(d_out: list, opts: dict) -> torch.tensor: - """ - Because the OmniDiscriminator does not directly return a tensor - (but a list of tensor). - Since there is no multilevel masker, the 0th tensor in the list is all we want. - This Adapter returns the first element(tensor) of the list that OmniDiscriminator - returns. - """ - if ( - isinstance(d_out, list) and len(d_out) == 1 - ): # adapt the multi-scale OmniDiscriminator - if not opts.dis.p.get_intermediate_features: - d_out = d_out[0][0] - else: - d_out = d_out[0] - else: - raise Exception( - "Check the setting of OmniDiscriminator! " - + "For now, we don't support multi-scale OmniDiscriminator." 
- ) - return d_out - - -class HingeLoss(nn.Module): - """ - Adapted from https://github.com/NVlabs/SPADE/blob/master/models/networks/loss.py - for the painter - """ - - def __init__(self, tensor=torch.FloatTensor): - super().__init__() - self.zero_tensor = None - self.Tensor = tensor - - def get_zero_tensor(self, input): - if self.zero_tensor is None: - self.zero_tensor = self.Tensor(1).fill_(0) - self.zero_tensor.requires_grad_(False) - self.zero_tensor = self.zero_tensor.to(input.device) - return self.zero_tensor.expand_as(input) - - def loss(self, input, target_is_real, for_discriminator=True): - if for_discriminator: - if target_is_real: - minval = torch.min(input - 1, self.get_zero_tensor(input)) - loss = -torch.mean(minval) - else: - minval = torch.min(-input - 1, self.get_zero_tensor(input)) - loss = -torch.mean(minval) - else: - assert target_is_real, "The generator's hinge loss must be aiming for real" - loss = -torch.mean(input) - return loss - - def __call__(self, input, target_is_real, for_discriminator=True): - # computing loss is a bit complicated because |input| may not be - # a tensor, but list of tensors in case of multiscale discriminator - if isinstance(input, list): - loss = 0 - for pred_i in input: - if isinstance(pred_i, list): - pred_i = pred_i[-1] - loss_tensor = self.loss(pred_i, target_is_real, for_discriminator) - loss += loss_tensor - return loss / len(input) - else: - return self.loss(input, target_is_real, for_discriminator) - - -class DADADepthLoss: - """Defines the reverse Huber loss from DADA paper for depth prediction - - Samples with larger residuals are penalized more by l2 term - - Samples with smaller residuals are penalized more by l1 term - From https://github.com/valeoai/DADA/blob/master/dada/utils/func.py - """ - - def loss_calc_depth(self, pred, label): - n, c, h, w = pred.size() - assert c == 1 - - pred = pred.squeeze() - label = label.squeeze() - - adiff = torch.abs(pred - label) - batch_max = 0.2 * torch.max(adiff).item() - t1_mask = adiff.le(batch_max).float() - t2_mask = adiff.gt(batch_max).float() - t1 = adiff * t1_mask - t2 = (adiff * adiff + batch_max * batch_max) / (2 * batch_max) - t2 = t2 * t2_mask - return (torch.sum(t1) + torch.sum(t2)) / torch.numel(pred.data) - - def __call__(self, pred, label): - return self.loss_calc_depth(pred, label) diff --git a/spaces/wy213/213a/src/components/toaster.tsx b/spaces/wy213/213a/src/components/toaster.tsx deleted file mode 100644 index 4d2693460b61307a1d4c127fd01df9bee16e59ff..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/components/toaster.tsx +++ /dev/null @@ -1,3 +0,0 @@ -'use client' - -export { Toaster } from 'react-hot-toast' diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/engine/image/triplet.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/engine/image/triplet.py deleted file mode 100644 index cd15cfb203cbb18244b440ac7e74f253bd1db8a8..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/engine/image/triplet.py +++ /dev/null @@ -1,122 +0,0 @@ -from __future__ import division, print_function, absolute_import - -from torchreid import metrics -from torchreid.losses import TripletLoss, CrossEntropyLoss - -from ..engine import Engine - - -class ImageTripletEngine(Engine): - r"""Triplet-loss engine for image-reid. - - Args: - datamanager (DataManager): an instance of ``torchreid.data.ImageDataManager`` - or ``torchreid.data.VideoDataManager``. 
- model (nn.Module): model instance. - optimizer (Optimizer): an Optimizer. - margin (float, optional): margin for triplet loss. Default is 0.3. - weight_t (float, optional): weight for triplet loss. Default is 1. - weight_x (float, optional): weight for softmax loss. Default is 1. - scheduler (LRScheduler, optional): if None, no learning rate decay will be performed. - use_gpu (bool, optional): use gpu. Default is True. - label_smooth (bool, optional): use label smoothing regularizer. Default is True. - - Examples:: - - import torchreid - datamanager = torchreid.data.ImageDataManager( - root='path/to/reid-data', - sources='market1501', - height=256, - width=128, - combineall=False, - batch_size=32, - num_instances=4, - train_sampler='RandomIdentitySampler' # this is important - ) - model = torchreid.models.build_model( - name='resnet50', - num_classes=datamanager.num_train_pids, - loss='triplet' - ) - model = model.cuda() - optimizer = torchreid.optim.build_optimizer( - model, optim='adam', lr=0.0003 - ) - scheduler = torchreid.optim.build_lr_scheduler( - optimizer, - lr_scheduler='single_step', - stepsize=20 - ) - engine = torchreid.engine.ImageTripletEngine( - datamanager, model, optimizer, margin=0.3, - weight_t=0.7, weight_x=1, scheduler=scheduler - ) - engine.run( - max_epoch=60, - save_dir='log/resnet50-triplet-market1501', - print_freq=10 - ) - """ - - def __init__( - self, - datamanager, - model, - optimizer, - margin=0.3, - weight_t=1, - weight_x=1, - scheduler=None, - use_gpu=True, - label_smooth=True - ): - super(ImageTripletEngine, self).__init__(datamanager, use_gpu) - - self.model = model - self.optimizer = optimizer - self.scheduler = scheduler - self.register_model('model', model, optimizer, scheduler) - - assert weight_t >= 0 and weight_x >= 0 - assert weight_t + weight_x > 0 - self.weight_t = weight_t - self.weight_x = weight_x - - self.criterion_t = TripletLoss(margin=margin) - self.criterion_x = CrossEntropyLoss( - num_classes=self.datamanager.num_train_pids, - use_gpu=self.use_gpu, - label_smooth=label_smooth - ) - - def forward_backward(self, data): - imgs, pids = self.parse_data_for_train(data) - - if self.use_gpu: - imgs = imgs.cuda() - pids = pids.cuda() - - outputs, features = self.model(imgs) - - loss = 0 - loss_summary = {} - - if self.weight_t > 0: - loss_t = self.compute_loss(self.criterion_t, features, pids) - loss += self.weight_t * loss_t - loss_summary['loss_t'] = loss_t.item() - - if self.weight_x > 0: - loss_x = self.compute_loss(self.criterion_x, outputs, pids) - loss += self.weight_x * loss_x - loss_summary['loss_x'] = loss_x.item() - loss_summary['acc'] = metrics.accuracy(outputs, pids)[0].item() - - assert loss_summary - - self.optimizer.zero_grad() - loss.backward() - self.optimizer.step() - - return loss_summary diff --git a/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/clearml/__init__.py b/spaces/xfys/yolov5_tracking/yolov5/utils/loggers/clearml/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/xhd456/anime-remove-background/app.py b/spaces/xhd456/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/xhd456/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w 
= h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/xiang2811/ChatGPT/locale/extract_locale.py b/spaces/xiang2811/ChatGPT/locale/extract_locale.py deleted file mode 100644 index 32b0924bd6dffe150cb3e481ddadef836b91b83c..0000000000000000000000000000000000000000 --- a/spaces/xiang2811/ChatGPT/locale/extract_locale.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import json -import re - -# Define regular expression patterns -pattern = r'i18n\((\"{3}.*?\"{3}|\".*?\")\)' - -# Load the .py file -with open('ChuanhuChatbot.py', 'r', encoding='utf-8') as f: - contents = f.read() - -# Load the .py files in the modules folder -for filename in os.listdir("modules"): - if filename.endswith(".py"): - with open(os.path.join("modules", filename), "r", encoding="utf-8") as f: - contents += f.read() - -# Matching with regular expressions -matches = re.findall(pattern, contents, re.DOTALL) - -# Convert to key/value pairs -data = {match.strip('()"'): '' for match in matches} - -# Save as a JSON file -with open('labels.json', 'w', encoding='utf-8') as f: - json.dump(data, f, ensure_ascii=False, indent=4) \ No newline at end of file diff --git a/spaces/xiaoei/203/README.md b/spaces/xiaoei/203/README.md deleted file mode 100644 index 2529b5b48af0fd357ea6eb6ec7de95fa566b5418..0000000000000000000000000000000000000000 --- a/spaces/xiaoei/203/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 203 -emoji: 🐠 -colorFrom: pink -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/xnetba/MMS/uroman/lib/NLP/utilities.pm b/spaces/xnetba/MMS/uroman/lib/NLP/utilities.pm deleted file mode 100644 index 
7be117449190533d826bd63b9266c1434d00408f..0000000000000000000000000000000000000000 --- a/spaces/xnetba/MMS/uroman/lib/NLP/utilities.pm +++ /dev/null @@ -1,3652 +0,0 @@ -################################################################ -# # -# utilities # -# # -################################################################ - -package NLP::utilities; - -use File::Spec; -use Time::HiRes qw(time); -use Time::Local; -use NLP::English; -use NLP::UTF8; - -$utf8 = NLP::UTF8; -$englishPM = NLP::English; - -%empty_ht = (); - -use constant DEBUGGING => 0; - -sub member { - local($this,$elem,@array) = @_; - - my $a; - if (defined($elem)) { - foreach $a (@array) { - if (defined($a)) { - return 1 if $elem eq $a; - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::member::a\n"; - } - } - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::member::elem\n"; - } - return 0; -} - -sub dual_member { - local($this,$elem1,$elem2,*array1,*array2) = @_; - # returns 1 if there exists a position $n - # such that $elem1 occurs at position $n in @array1 - # and $elem2 occurs at same position $n in @array2 - - return 0 unless defined($elem1) && defined($elem2); - my $last_index = ($#array1 < $#array2) ? $#array1 : $#array2; #min - my $a; - my $b; - foreach $i ((0 .. $last_index)) { - return 1 if defined($a = $array1[$i]) && defined($b = $array2[$i]) && ($a eq $elem1) && ($b eq $elem2); - } - return 0; -} - -sub sorted_list_equal { - local($this,*list1,*list2) = @_; - - return 0 unless $#list1 == $#list2; - foreach $i ((0 .. $#list1)) { - return 0 unless $list1[$i] eq $list2[$i]; - } - return 1; -} - -sub trim { - local($this, $s) = @_; - - $s =~ s/^\s*//; - $s =~ s/\s*$//; - $s =~ s/\s+/ /g; - return $s; -} - -sub trim2 { - local($this, $s) = @_; - - $s =~ s/^\s*//; - $s =~ s/\s*$//; - return $s; -} - -sub trim_left { - local($this, $s) = @_; - $s =~ s/^\s*//; - return $s; -} - -sub cap_member { - local($this,$elem,@array) = @_; - - my $a; - my $lc_elem = lc $elem; - foreach $a (@array) { - return $a if $lc_elem eq lc $a; - } - return ""; -} - -sub remove_elem { - local($this,$elem,@array) = @_; - - return @array unless $this->member($elem, @array); - @rm_list = (); - foreach $a (@array) { - push(@rm_list, $a) unless $elem eq $a; - } - return @rm_list; -} - -sub intersect_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - if (defined($elem1)) { - foreach $elem2 (@list2) { - if (defined($elem2)) { - return 1 if $elem1 eq $elem2; - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::intersect_p::elem2\n"; - } - } - } else { - $DB::single = 1; # debugger breakpoint - print STDERR "\nWarning: Undefined variable utilities::intersect_p::elem1\n"; - } - } - return 0; -} - -sub intersect_expl_p { - local($this,*list1,@list2) = @_; - - foreach $elem1 (@list1) { - foreach $elem2 (@list2) { - return 1 if $elem1 eq $elem2; - } - } - return 0; -} - -sub intersection { - local($this,*list1,*list2) = @_; - - @intersection_list = (); - foreach $elem1 (@list1) { - foreach $elem2 (@list2) { - push(@intersection_list, $elem1) if ($elem1 eq $elem2) && ! 
$this->member($elem1, @intersection_list); - } - } - return @intersection_list; -} - -sub cap_intersect_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - $lc_elem1 = lc $elem1; - foreach $elem2 (@list2) { - return 1 if $lc_elem1 eq lc $elem2; - } - } - return 0; -} - -sub subset_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - return 0 unless $this->member($elem1, @list2); - } - return 1; -} - -sub cap_subset_p { - local($this,*list1,*list2) = @_; - - foreach $elem1 (@list1) { - return 0 unless $this->cap_member($elem1, @list2); - } - return 1; -} - -sub unique { - local($this, @list) = @_; - - my %seen = (); - @uniq = (); - foreach $item (@list) { - push(@uniq, $item) unless $seen{$item}++; - } - return @uniq; -} - -sub position { - local($this,$elem,@array) = @_; - $i = 0; - foreach $a (@array) { - return $i if $elem eq $a; - $i++; - } - return -1; -} - -sub positions { - local($this,$elem,@array) = @_; - $i = 0; - @positions_in_list = (); - foreach $a (@array) { - push(@positions_in_list, $i) if $elem eq $a; - $i++; - } - return @positions_in_list; -} - -sub last_position { - local($this,$elem,@array) = @_; - - $result = -1; - $i = 0; - foreach $a (@array) { - $result = $i if $elem eq $a; - $i++; - } - return $result; -} - -sub rand_n_digit_number { - local($this,$n) = @_; - - return 0 unless $n =~ /^[1-9]\d*$/; - $ten_power_n = 10 ** ($n - 1); - return int(rand(9 * $ten_power_n)) + $ten_power_n; -} - -# Consider File::Temp -sub new_tmp_filename { - local($this,$filename) = @_; - - $loop_limit = 1000; - ($dir,$simple_filename) = ($filename =~ /^(.+)\/([^\/]+)$/); - $simple_filename = $filename unless defined($simple_filename); - $new_filename = "$dir/tmp-" . $this->rand_n_digit_number(8) . "-$simple_filename"; - while ((-e $new_filename) && ($loop_limit-- >= 0)) { - $new_filename = "$dir/tmp-" . $this->rand_n_digit_number(8) . 
"-$simple_filename"; - } - return $new_filename; -} - -# support sorting order: "8", "8.0", "8.5", "8.5.1.", "8.10", "10", "10-12" - -sub compare_complex_numeric { - local($this,$a,$b) = @_; - - (my $a_num,my $a_rest) = ($a =~ /^(\d+)\D*(.*)$/); - (my $b_num,my $b_rest) = ($b =~ /^(\d+)\D*(.*)$/); - - if (defined($a_rest) && defined($b_rest)) { - return ($a_num <=> $b_num) - || $this->compare_complex_numeric($a_rest,$b_rest); - } else { - return $a cmp $b; - } -} - -# support sorting order: "lesson8-ps-v1.9.xml", "Lesson 10_ps-v_1.11.xml" -# approach: segment strings into alphabetic and numerical sections and compare pairwise - -sub compare_mixed_alpha_numeric { - local($this,$a,$b) = @_; - - ($a_alpha,$a_num,$a_rest) = ($a =~ /^(\D*)(\d[-\d\.]*)(.*)$/); - ($b_alpha,$b_num,$b_rest) = ($b =~ /^(\D*)(\d[-\d\.]*)(.*)$/); - - ($a_alpha) = ($a =~ /^(\D*)/) unless defined $a_alpha; - ($b_alpha) = ($b =~ /^(\D*)/) unless defined $b_alpha; - - # ignore non-alphabetic characters in alpha sections - $a_alpha =~ s/\W|_//g; - $b_alpha =~ s/\W|_//g; - - if ($alpha_cmp = lc $a_alpha cmp lc $b_alpha) { - return $alpha_cmp; - } elsif (defined($a_rest) && defined($b_rest)) { - return $this->compare_complex_numeric($a_num,$b_num) - || $this->compare_mixed_alpha_numeric ($a_rest,$b_rest); - } else { - return (defined($a_num) <=> defined($b_num)) || ($a cmp $b); - } -} - -# @sorted_lessons = sort { NLP::utilities->compare_mixed_alpha_numeric($a,$b) } @lessons; - -sub html_guarded_p { - local($this,$string) = @_; - - return 0 if $string =~ /[<>"]/; - $string .= " "; - @segs = split('&',$string); - shift @segs; - foreach $seg (@segs) { - next if $seg =~ /^[a-z]{2,6};/i; - # next if $seg =~ /^amp;/; - # next if $seg =~ /^quot;/; - # next if $seg =~ /^nbsp;/; - # next if $seg =~ /^gt;/; - # next if $seg =~ /^lt;/; - next if $seg =~ /^#(\d+);/; - next if $seg =~ /^#x([0-9a-fA-F]+);/; - return 0; - } - return 1; -} - -sub guard_tooltip_text { - local($this,$string) = @_; - - $string =~ s/\xCB\x88/'/g; - return $string; -} - -sub guard_html { - local($this,$string,$control_string) = @_; - - return "" unless defined($string); - my $guarded_string; - $control_string = "" unless defined($control_string); - return $string if ($string =~ /&/) - && (! ($control_string =~ /\bstrict\b/)) - && $this->html_guarded_p($string); - $guarded_string = $string; - $guarded_string =~ s/&/&/g; - if ($control_string =~ /slash quote/) { - $guarded_string =~ s/"/\\"/g; - } elsif ($control_string =~ /keep quote/) { - } else { - $guarded_string =~ s/\"/"/g; - } - if ($control_string =~ /escape-slash/) { - $guarded_string =~ s/\//&x2F;/g; - } - $guarded_string =~ s/>/>/g; - $guarded_string =~ s/" : - /^lt$/i ? "<" : - /^x2F$/i ? "/" : - /^nbsp$/i ? "\xC2\xA0" : - /^#(\d+)$/ ? $this->chr($1) : - /^#x([0-9a-f]+)$/i ? $this->chr(hex($1)) : - $_ - }gex; - return $string; -} - -sub unguard_html_r { - local($this,$string) = @_; - - return undef unless defined($string); - - $string =~ s/&/&/g; - $string =~ s/"/'/g; - $string =~ s/<//g; - - ($d) = ($string =~ /&#(\d+);/); - while (defined($d)) { - $c = $this->chr($d); - $string =~ s/&#$d;/$c/g; - ($d) = ($string =~ /&#(\d+);/); - } - ($x) = ($string =~ /&#x([0-9a-f]+);/i); - while (defined($x)) { - $c = $this->chr(hex($x)); - $string =~ s/&#x$x;/$c/g; - ($x) = ($string =~ /&#x([0-9a-f]+);/i); - } - $string0 = $string; - ($x) = ($string =~ /(?:https?|www|\.com)\S*\%([0-9a-f]{2,2})/i); - while (defined($x)) { - $c = $this->chr("%" . 
hex($x)); - $string =~ s/\%$x/$c/g; - ($x) = ($string =~ /(?:https?|www|\.com)\S*\%([0-9a-f]{2,2})/i); - } - return $string; -} - -sub unguard_html_l { - local($caller,$string) = @_; - - return undef unless defined($string); - - my $pre; - my $core; - my $post; - my $repl; - my $s = $string; - if (($pre,$core,$post) = ($s =~ /^(.*)&(amp|quot|lt|gt|#\d+|#x[0-9a-f]+);(.*)$/i)) { - $repl = "?"; - $repl = "&" if $core =~ /^amp$/i; - $repl = "'" if $core =~ /^quot$/i; - $repl = "<" if $core =~ /^lt$/i; - $repl = ">" if $core =~ /^gt$/i; - if ($core =~ /^#\d+$/i) { - $core2 = substr($core,1); - $repl = $caller->chr($core2); - } - $repl = $caller->chr(hex(substr($core,2))) if $core =~ /^#x[0-9a-f]+$/i; - $s = $pre . $repl . $post; - } - return $s; -} - -sub guard_html_quote { - local($caller,$string) = @_; - - $string =~ s/"/"/g; - return $string; -} - -sub unguard_html_quote { - local($caller,$string) = @_; - - $string =~ s/"/"/g; - return $string; -} - -sub uri_encode { - local($caller,$string) = @_; - - $string =~ s/([^^A-Za-z0-9\-_.!~*()'])/ sprintf "%%%02x", ord $1 /eg; - return $string; -} - -sub uri_decode { - local($caller,$string) = @_; - - $string =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/eg; - return $string; -} - -sub remove_xml_tags { - local($caller,$string) = @_; - - $string =~ s/<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>//g; - return $string; -} - -sub remove_any_tokenization_at_signs_around_xml_tags { - local($caller,$string) = @_; - - $string =~ s/(?:\@ \@)?(<[^<>]+>)(?:\@ \@)?/$1/g; - $string =~ s/\@?(<[^<>]+>)\@?/$1/g; - return $string; -} - -sub remove_xml_tags_and_any_bordering_at_signs { - # at-signs from tokenization - local($caller,$string) = @_; - - $string =~ s/\@?<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>\@?//g; - return $string; -} - -sub chr { - local($caller,$i) = @_; - - return undef unless $i =~ /^\%?\d+$/; - if ($i =~ /^%/) { - $i =~ s/^\%//; - return chr($i) if $i < 128; - return "\x80" | chr($i - 128) if $i < 256; - } else { - return chr($i) if $i < 128; - return ("\xC0" | chr(($i / 64) % 32)) - . ("\x80" | chr($i % 64)) if $i < 2048; - return ("\xE0" | chr(int($i / 4096) % 16)) - . ("\x80" | chr(int($i / 64) % 64)) - . ("\x80" | chr($i % 64)) if $i < 65536; - return ("\xF0" | chr(int($i / 262144) % 8)) - . ("\x80" | chr(int($i / 4096) % 64)) - . ("\x80" | chr(int($i / 64) % 64)) - . ("\x80" | chr($i % 64)) if $i < 2097152; - } - return "?"; -} - -sub guard_cgi { - local($caller, $string) = @_; - - $guarded_string = $string; - if ($string =~ /[\x80-\xFF]/) { - $guarded_string = ""; - while ($string ne "") { - $char = substr($string, 0, 1); - $string = substr($string, 1); - if ($char =~ /^[\\ ;\#\&\:\=\"\'\+\?\x00-\x1F\x80-\xFF]$/) { - $hex = sprintf("%2.2x",ord($char)); - $guarded_string .= uc "%$hex"; - } else { - $guarded_string .= $char; - } - } - } else { - $guarded_string = $string; - $guarded_string =~ s/%/%25/g; - $guarded_string =~ s/\n/%5Cn/g; - $guarded_string =~ s/\t/%5Ct/g; - $guarded_string =~ s/ /%20/g; - $guarded_string =~ s/"/%22/g; - $guarded_string =~ s/#/%23/g; - $guarded_string =~ s/&/%26/g; - $guarded_string =~ s/'/%27/g; - $guarded_string =~ s/\+/%2B/g; - $guarded_string =~ s/\//%2F/g; - $guarded_string =~ s/:/%3A/g; - $guarded_string =~ s/;/%3B/g; - $guarded_string =~ s//%3E/g; - $guarded_string =~ s/\?/%3F/g; - } - return $guarded_string; -} - -sub repair_cgi_guard { - local($caller,$string) = @_; - # undo second cgi-guard, e.g. 
"Jo%25C3%25ABlle_Aubron" -> "Jo%C3%ABlle_Aubron" - - $string =~ s/(%)25([CD][0-9A-F]%)25([89AB][0-9A-F])/$1$2$3/g; - $string =~ s/(%)25(E[0-9A-F]%)25([89AB][0-9A-F]%)25([89AB][0-9A-F])/$1$2$3$4/g; - return $string; -} - -sub unguard_cgi { - local($caller,$string) = @_; - - $unguarded_string = $string; - $unguarded_string =~ s/%5Cn/\n/g; - $unguarded_string =~ s/%5Ct/\t/g; - $unguarded_string =~ s/%20/ /g; - $unguarded_string =~ s/%23/#/g; - $unguarded_string =~ s/%26/&/g; - $unguarded_string =~ s/%2B/+/g; - $unguarded_string =~ s/%2C/,/g; - $unguarded_string =~ s/%3A/:/g; - $unguarded_string =~ s/%3D/=/g; - $unguarded_string =~ s/%3F/?/g; - $unguarded_string =~ s/%C3%A9/\xC3\xA9/g; - - # more general - ($code) = ($unguarded_string =~ /%([0-9A-F]{2,2})/); - while (defined($code)) { - $percent_code = "%" . $code; - $hex_code = sprintf("%c", hex($code)); - $unguarded_string =~ s/$percent_code/$hex_code/g; - ($code) = ($unguarded_string =~ /%([0-9A-F]{2,2})/); - } - - return $unguarded_string; -} - -sub regex_guard { - local($caller,$string) = @_; - - $guarded_string = $string; - $guarded_string =~ s/([\\\/\^\|\(\)\{\}\$\@\*\+\?\.\[\]])/\\$1/g - if $guarded_string =~ /[\\\/\^\|\(\)\{\}\$\@\*\+\?\.\[\]]/; - - return $guarded_string; -} - -sub g_regex_spec_tok_p { - local($this,$string) = @_; - - # specials: ( ) (?: ) [ ] - return ($string =~ /^(\(\?:|[()\[\]])$/); -} - -sub regex_guard_norm { - local($this,$string) = @_; - - return $string unless $string =~ /[\[\]\\()$@?+]/; - my $rest = $string; - my @stack = (""); - while ($rest ne "") { - # specials: ( ) (?: ) [ ] ? + - if (($pre, $special, $post) = ($rest =~ /^((?:\\.|[^\[\]()?+])*)(\(\?:|[\[\]()?+])(.*)$/)) { - # print STDERR "Special: $pre *$special* $post\n"; - unless ($pre eq "") { - push(@stack, $pre); - while (($#stack >= 1) && (! $this->g_regex_spec_tok_p($stack[$#stack-1])) - && (! $this->g_regex_spec_tok_p($stack[$#stack]))) { - $s1 = pop @stack; - $s2 = pop @stack; - push(@stack, "$s2$s1"); - } - } - if ($special =~ /^[?+]$/) { - push(@stack, "\\") if ($stack[$#stack] eq "") - || ($this->g_regex_spec_tok_p($stack[$#stack]) && ($stack[$#stack] ne "[")); - push(@stack, $special); - } elsif ($special eq "]") { - if (($#stack >= 1) && ($stack[$#stack-1] eq "[") && ! $this->g_regex_spec_tok_p($stack[$#stack])) { - $char_expression = pop @stack; - pop @stack; - push(@stack, "[$char_expression]"); - } else { - push(@stack, $special); - } - } elsif (($special =~ /^[()]/) && (($stack[$#stack] eq "[") - || (($#stack >= 1) - && ($stack[$#stack-1] eq "[") - && ! $this->g_regex_spec_tok_p($stack[$#stack])))) { - push(@stack, "\\$special"); - } elsif ($special eq ")") { - if (($#stack >= 1) && ($stack[$#stack-1] =~ /^\((\?:)?$/) && ! $this->g_regex_spec_tok_p($stack[$#stack])) { - $alt_expression = pop @stack; - $open_para = pop @stack; - if ($open_para eq "(") { - push(@stack, "(?:$alt_expression)"); - } else { - push(@stack, "$open_para$alt_expression)"); - } - } else { - push(@stack, $special); - } - } else { - push(@stack, $special); - } - while (($#stack >= 1) && (! $this->g_regex_spec_tok_p($stack[$#stack-1])) - && (! $this->g_regex_spec_tok_p($stack[$#stack]))) { - $s1 = pop @stack; - $s2 = pop @stack; - push(@stack, "$s2$s1"); - } - $rest = $post; - } else { - push(@stack, $rest); - $rest = ""; - } - } - # print STDERR "Stack: " . join(";", @stack) . "\n"; - foreach $i ((0 .. $#stack)) { - $stack_elem = $stack[$i]; - if ($stack_elem =~ /^[()\[\]]$/) { - $stack[$i] = "\\" . 
$stack[$i]; - } - } - return join("", @stack); -} - -sub string_guard { - local($caller,$string) = @_; - - return "" unless defined($string); - $guarded_string = $string; - $guarded_string =~ s/([\\"])/\\$1/g - if $guarded_string =~ /[\\"]/; - - return $guarded_string; -} - -sub json_string_guard { - local($caller,$string) = @_; - - return "" unless defined($string); - $guarded_string = $string; - $guarded_string =~ s/([\\"])/\\$1/g - if $guarded_string =~ /[\\"]/; - $guarded_string =~ s/\r*\n/\\n/g - if $guarded_string =~ /\n/; - - return $guarded_string; -} - -sub json_string_unguard { - local($caller,$string) = @_; - - return "" unless defined($string); - $string =~ s/\\n/\n/g - if $string =~ /\\n/; - return $string; -} - -sub guard_javascript_arg { - local($caller,$string) = @_; - - return "" unless defined($string); - $guarded_string = $string; - $guarded_string =~ s/\\/\\\\/g; - $guarded_string =~ s/'/\\'/g; - return $guarded_string; -} - -sub guard_substitution_right_hand_side { - # "$1x" => "$1 . \"x\"" - local($caller,$string) = @_; - - my $result = ""; - ($pre,$var,$post) = ($string =~ /^([^\$]*)(\$\d)(.*)$/); - while (defined($var)) { - $result .= " . " if $result; - $result .= "\"$pre\" . " unless $pre eq ""; - $result .= $var; - $string = $post; - ($pre,$var,$post) = ($string =~ /^([^\$]*)(\$\d)(.*)$/); - } - $result .= " . \"$string\"" if $string; - return $result; -} - -sub string_starts_with_substring { - local($caller,$string,$substring) = @_; - - $guarded_substring = $caller->regex_guard($substring); - return $string =~ /^$guarded_substring/; -} - -sub one_string_starts_with_the_other { - local($caller,$s1,$s2) = @_; - - return ($s1 eq $s2) - || $caller->string_starts_with_substring($s1,$s2) - || $caller->string_starts_with_substring($s2,$s1); -} - -sub string_ends_in_substring { - local($caller,$string,$substring) = @_; - - $guarded_substring = $caller->regex_guard($substring); - return $string =~ /$guarded_substring$/; -} - -sub string_equal_ignore_leading_multiple_or_trailing_blanks { - local($caller,$string1,$string2) = @_; - - return 1 if $string1 eq $string2; - $string1 =~ s/\s+/ /; - $string2 =~ s/\s+/ /; - $string1 =~ s/^\s+//; - $string2 =~ s/^\s+//; - $string1 =~ s/\s+$//; - $string2 =~ s/\s+$//; - - return $string1 eq $string2; -} - -sub strip_substring_from_start_of_string { - local($caller,$string,$substring,$error_code) = @_; - - $error_code = "ERROR" unless defined($error_code); - my $reg_surf = $caller->regex_guard($substring); - if ($string =~ /^$guarded_substring/) { - $string =~ s/^$reg_surf//; - return $string; - } else { - return $error_code; - } -} - -sub strip_substring_from_end_of_string { - local($caller,$string,$substring,$error_code) = @_; - - $error_code = "ERROR" unless defined($error_code); - my $reg_surf = $caller->regex_guard($substring); - if ($string =~ /$reg_surf$/) { - $string =~ s/$reg_surf$//; - return $string; - } else { - return $error_code; - } -} - -# to be deprecated -sub lang_code { - local($caller,$language) = @_; - - $langPM = NLP::Language->new(); - return $langPM->lang_code($language); -} - -sub full_language { - local($caller,$lang_code) = @_; - - return "Arabic" if $lang_code eq "ar"; - return "Chinese" if $lang_code eq "zh"; - return "Czech" if $lang_code eq "cs"; - return "Danish" if $lang_code eq "da"; - return "Dutch" if $lang_code eq "nl"; - return "English" if $lang_code eq "en"; - return "Finnish" if $lang_code eq "fi"; - return "French" if $lang_code eq "fr"; - return "German" if $lang_code eq "de"; - return 
"Greek" if $lang_code eq "el"; - return "Hebrew" if $lang_code eq "he"; - return "Hindi" if $lang_code eq "hi"; - return "Hungarian" if $lang_code eq "hu"; - return "Icelandic" if $lang_code eq "is"; - return "Indonesian" if $lang_code eq "id"; - return "Italian" if $lang_code eq "it"; - return "Japanese" if $lang_code eq "ja"; - return "Kinyarwanda" if $lang_code eq "rw"; - return "Korean" if $lang_code eq "ko"; - return "Latin" if $lang_code eq "la"; - return "Malagasy" if $lang_code eq "mg"; - return "Norwegian" if $lang_code eq "no"; - return "Pashto" if $lang_code eq "ps"; - return "Persian" if $lang_code eq "fa"; - return "Polish" if $lang_code eq "pl"; - return "Portuguese" if $lang_code eq "pt"; - return "Romanian" if $lang_code eq "ro"; - return "Russian" if $lang_code eq "ru"; - return "Spanish" if $lang_code eq "es"; - return "Swedish" if $lang_code eq "sv"; - return "Turkish" if $lang_code eq "tr"; - return "Urdu" if $lang_code eq "ur"; - return ""; -} - -# to be deprecated -sub short_lang_name { - local($caller,$lang_code) = @_; - - $langPM = NLP::Language->new(); - return $langPM->shortname($lang_code); -} - -sub ml_dir { - local($caller,$language,$type) = @_; - - $type = "MSB" unless defined($type); - $lang_code = $langPM->lang_code($language); - return $caller->ml_dir($lang_code, "lex") . "/corpora" if $type eq "corpora"; - return "" unless defined($rc); - $ml_home = $rc->ml_home_dir(); - return File::Spec->catfile($ml_home, "arabic") - if ($lang_code eq "ar-iq") && ! $caller->member(lc $type,"lex","onto","dict"); - $langPM = NLP::Language->new(); - $lexdir = $langPM->lexdir($lang_code); - return $lexdir if defined($lexdir); - return ""; -} - -sub language_lex_filename { - local($caller,$language,$type) = @_; - - $langPM = NLP::Language->new(); - if (($lang_code = $langPM->lang_code($language)) - && ($ml_dir = $caller->ml_dir($lang_code,$type)) - && ($norm_language = $caller->short_lang_name($lang_code))) { - return "$ml_dir/$norm_language-lex" if ($type eq "lex"); - return "$ml_dir/onto" if ($type eq "onto"); - return "$ml_dir/$norm_language-english-dict" if ($type eq "dict") && !($lang_code eq "en"); - return ""; - } else { - return ""; - } -} - -# filename_without_path is obsolete - replace with -# use File::Basename; -# basename($filename) -sub filename_without_path { - local($caller,$filename) = @_; - - $filename =~ s/^.*\/([^\/]+)$/$1/; - return $filename; -} - -sub option_string { - local($caller,$input_name,$default,*values,*labels) = @_; - - my $s = ""; - return $s; -} - -sub pes_subseq_surf { - local($this,$start,$length,$langCode,@pes) = @_; - - my $surf = ""; - if ($start+$length-1 <= $#pes) { - foreach $i ($start .. $start + $length - 1) { - my $pe = $pes[$i]; - $surf .= $pe->get("surf",""); - $surf .= " " if $langCode =~ /^(ar|en|fr)$/; - } - } - $surf =~ s/\s+$//; - return $surf; -} - -sub copyList { - local($this,@list) = @_; - - @copy_list = (); - foreach $elem (@list) { - push(@copy_list,$elem); - } - return @copy_list; -} - -sub list_with_same_elem { - local($this,$size,$elem) = @_; - - @list = (); - foreach $i (0 .. 
$size-1) { - push(@list,$elem); - } - return @list; -} - -sub count_occurrences { - local($this,$s,$substring) = @_; - - $occ = 0; - $new = $s; - $guarded_substring = $this->regex_guard($substring); - $new =~ s/$guarded_substring//; - while ($new ne $s) { - $occ++; - $s = $new; - $new =~ s/$guarded_substring//; - } - return $occ; -} - -sub position_of_nth_occurrence { - local($this,$s,$substring,$occ) = @_; - - return -1 unless $occ > 0; - my $pos = 0; - while (($pos = index($s, $substring, $pos)) >= 0) { - return $pos if $occ == 1; - $occ--; - $pos = $pos + length($substring); - } - return -1; -} - -sub has_diff_elements_p { - local($this,@array) = @_; - - return 0 if $#array < 1; - $elem = $array[0]; - - foreach $a (@array) { - return 1 if $elem ne $a; - } - return 0; -} - -sub init_log { - local($this,$logfile, $control) = @_; - - $control = "" unless defined($control); - if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) { - system("rm -f $logfile"); - system("date > $logfile; chmod 777 $logfile"); - } -} - -sub time_stamp_log { - local($this,$logfile, $control) = @_; - - $control = "" unless defined($control); - if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) { - system("date >> $logfile; chmod 777 $logfile"); - } -} - -sub log { - local($this,$message,$logfile,$control) = @_; - - $control = "" unless defined($control); - if ((DEBUGGING || ($control =~ /debug/i)) && $logfile) { - $this->init_log($logfile, $control) unless -w $logfile; - if ($control =~ /timestamp/i) { - $this->time_stamp_log($logfile, $control); - } - $guarded_message = $message; - $guarded_message =~ s/"/\\"/g; - system("echo \"$guarded_message\" >> $logfile"); - } -} - -sub month_name_to_month_number { - local($this,$month_name) = @_; - - $month_name_init = lc substr($month_name,0,3); - return $this->position($month_name_init, "jan", "feb", "mar", "apr", "may", "jun", "jul", "aug", "sep", "oct", "nov", "dec") + 1; -} - -my @short_month_names = ("Jan.","Febr.","March","April","May","June","July","Aug.","Sept.","Oct.","Nov.","Dec."); -my @full_month_names = ("January","February","March","April","May","June","July","August","September","October","November","December"); - -sub month_number_to_month_name { - local($this,$month_number, $control) = @_; - - $month_number =~ s/^0//; - if ($month_number =~ /^([1-9]|1[0-2])$/) { - return ($control && ($control =~ /short/i)) - ? $short_month_names[$month_number-1] - : $full_month_names[$month_number-1]; - } else { - return ""; - } -} - -sub leap_year { - local($this,$year) = @_; - - return 0 if $year % 4 != 0; - return 1 if $year % 400 == 0; - return 0 if $year % 100 == 0; - return 1; -} - -sub datetime { - local($this,$format,$time_in_secs, $command) = @_; - - $command = "" unless defined($command); - $time_in_secs = time unless defined($time_in_secs) && $time_in_secs; - @time_vector = ($command =~ /\b(gm|utc)\b/i) ? 
gmtime($time_in_secs) : localtime($time_in_secs); - ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst)=@time_vector; - $thisyear = $year + 1900; - $thismon=(Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec)[$mon]; - $thismon2=("Jan.","Febr.","March","April","May","June","July","Aug.","Sept.","Oct.","Nov.","Dec.")[$mon]; - $thismonth = $mon + 1; - $thisday=(Sun,Mon,Tue,Wed,Thu,Fri,Sat)[$wday]; - $milliseconds = int(($time_in_secs - int($time_in_secs)) * 1000); - $date="$thisday $thismon $mday, $thisyear"; - $sdate="$thismon $mday, $thisyear"; - $dashedDate = sprintf("%04d-%02d-%02d",$thisyear,$thismonth,$mday); - $slashedDate = sprintf("%02d/%02d/%04d",$mday,$thismonth,$thisyear); - $time=sprintf("%02d:%02d:%02d",$hour,$min,$sec); - $shorttime=sprintf("%d:%02d",$hour,$min); - $shortdatetime = "$thismon2 $mday, $shorttime"; - - if ($date =~ /undefined/) { - return ""; - } elsif ($format eq "date at time") { - return "$date at $time"; - } elsif ($format eq "date") { - return "$date"; - } elsif ($format eq "sdate") { - return "$sdate"; - } elsif ($format eq "ddate") { - return "$dashedDate"; - } elsif ($format eq "time") { - return "$time"; - } elsif ($format eq "dateTtime+ms") { - return $dashedDate . "T" . $time . "." . $milliseconds; - } elsif ($format eq "dateTtime") { - return $dashedDate . "T" . $time; - } elsif ($format eq "yyyymmdd") { - return sprintf("%04d%02d%02d",$thisyear,$thismonth,$mday); - } elsif ($format eq "short date at time") { - return $shortdatetime; - } else { - return "$date at $time"; - } -} - -sub datetime_of_last_file_modification { - local($this,$format,$filename) = @_; - - return $this->datetime($format,(stat($filename))[9]); -} - -sub add_1sec { - local($this,$datetime) = @_; - - if (($year,$month,$day,$hour,$minute,$second) = ($datetime =~ /^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)$/)) { - $second++; - if ($second >= 60) { $second -= 60; $minute++; } - if ($minute >= 60) { $minute -= 60; $hour++; } - if ($hour >= 24) { $hour -= 24; $day++; } - if ($month =~ /^(01|03|05|07|08|10|12)$/) { - if ($day > 31) { $day -= 31; $month++; } - } elsif ($month =~ /^(04|06|09|11)$/) { - if ($day > 30) { $day -= 30; $month++; } - } elsif (($month eq "02") && $this->leap_year($year)) { - if ($day > 29) { $day -= 29; $month++; } - } elsif ($month eq "02") { - if ($day > 28) { $day -= 28; $month++; } - } - if ($month > 12) { $month -= 12; $year++; } - return sprintf("%04d-%02d-%02dT%02d:%02d:%02d", $year,$month,$day,$hour,$minute,$second); - } else { - return ""; - } -} - -sub stopwatch { - local($this, $function, $id, *ht, *OUT) = @_; - # function: start|stop|count|report; start|stop times are absolute (in secs.) 
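-    # Illustrative usage sketch (hypothetical caller, not from this module):
-    #   NLP::utilities->stopwatch("start", "tokenize", *ht, *STDERR);
-    #   ... timed work ...
-    #   NLP::utilities->stopwatch("end", "tokenize", *ht, *STDERR);
-    #   NLP::utilities->stopwatch("count", "tokenize-call", *ht, *STDERR);
-    #   NLP::utilities->stopwatch("report", "", *ht, *STDERR);
-    # An interval is closed with "end" (a literal "stop", as in the comment
-    # above, is not recognized by the dispatch below); unmatched start/end
-    # calls are tallied as restarts/dead ends and shown by "report".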
- - my $current_time = time; - # print OUT "Point S stopwatch $function $id $current_time\n"; - if ($function eq "start") { - if ($ht{STOPWATCH_START}->{$id}) { - $ht{STOPWATCH_N_RESTARTS}->{$id} = ($ht{STOPWATCH_N_RESTARTS}->{$id} || 0) + 1; - } else { - $ht{STOPWATCH_START}->{$id} = $current_time; - } - } elsif ($function eq "end") { - if ($start_time = $ht{STOPWATCH_START}->{$id}) { - $ht{STOPWATCH_TIME}->{$id} = ($ht{STOPWATCH_TIME}->{$id} || 0) + ($current_time - $start_time); - $ht{STOPWATCH_START}->{$id} = ""; - } else { - $ht{STOPWATCH_N_DEAD_ENDS}->{$id} = ($ht{STOPWATCH_N_DEAD_ENDS}->{$id} || 0) + 1; - } - } elsif ($function eq "count") { - $ht{STOPWATCH_COUNT}->{$id} = ($ht{STOPWATCH_COUNT}->{$id} || 0) + 1; - } elsif ($function eq "report") { - my $id2; - foreach $id2 (keys %{$ht{STOPWATCH_START}}) { - if ($start_time = $ht{STOPWATCH_START}->{$id2}) { - $ht{STOPWATCH_TIME}->{$id2} = ($ht{STOPWATCH_TIME}->{$id2} || 0) + ($current_time - $start_time); - $ht{STOPWATCH_START}->{$id2} = $current_time; - } - } - print OUT "Time report:\n"; - foreach $id2 (sort { $ht{STOPWATCH_TIME}->{$b} <=> $ht{STOPWATCH_TIME}->{$a} } - keys %{$ht{STOPWATCH_TIME}}) { - my $stopwatch_time = $ht{STOPWATCH_TIME}->{$id2}; - $stopwatch_time = $this->round_to_n_decimal_places($stopwatch_time, 3); - my $n_restarts = $ht{STOPWATCH_N_RESTARTS}->{$id2}; - my $n_dead_ends = $ht{STOPWATCH_N_DEAD_ENDS}->{$id2}; - my $start_time = $ht{STOPWATCH_START}->{$id2}; - print OUT " $id2: $stopwatch_time seconds"; - print OUT " with $n_restarts restart(s)" if $n_restarts; - print OUT " with $n_dead_ends dead end(s)" if $n_dead_ends; - print OUT " (active)" if $start_time; - print OUT "\n"; - } - foreach $id2 (sort { $ht{STOPWATCH_COUNT}->{$b} <=> $ht{STOPWATCH_COUNT}->{$a} } - keys %{$ht{STOPWATCH_COUNT}}) { - $count = $ht{STOPWATCH_COUNT}->{$id2}; - print OUT " C $id2: $count\n"; - } - } -} - -sub print_html_banner { - local($this,$text,$bgcolor,*OUT,$control) = @_; - - $control = "" unless defined($control); - $bgcolor = "#BBCCFF" unless defined($bgcolor); - print OUT "
        "; - print OUT "  " unless $text =~ /^\s*<(table|nobr)/; - print OUT $text; - print OUT "
        \n"; - print OUT "
        \n" unless $control =~ /nobr/i; -} - -sub print_html_head { - local($this, $title, *OUT, $control, $onload_fc, $add_javascript) = @_; - - $control = "" unless defined($control); - $onload_fc = "" unless defined($onload_fc); - $onload_clause = ($onload_fc) ? " onload=\"$onload_fc\"" : ""; - $add_javascript = "" unless defined($add_javascript); - $max_age_clause = ""; - $max_age_clause = ""; # if $control =~ /\bexp1hour\b/; - $css_clause = ""; - $css_clause = "\n " if $control =~ /css/; - $css_clause .= "\n " if $control =~ /css/; - $css_clause = "\n " if $control =~ /css-handheld/; - $icon_clause = ""; - $icon_clause .= "\n " if $control =~ /\bAMR\b/i; - $icon_clause .= "\n " if $control =~ /\bCRE\b/i; - print OUT "\xEF\xBB\xBF\n" unless $control =~ /\bno-bom\b/; # utf8 marker byte order mark - print OUT< - - - $max_age_clause - $title$css_clause$icon_clause -END_OF_HEADER1 -; - - unless ($control =~ /no javascript/) { - print OUT< - - -END_OF_HEADER2 -; - } - - print OUT< - -END_OF_HEADER3 -; -} - - -sub print_html_foot { - local($this, *OUT) = @_; - - print OUT " \n"; - print OUT "\n"; -} - -sub print_html_page { - local($this, *OUT, $s) = @_; - - print OUT "\xEF\xBB\xBF\n"; - print OUT "\n"; - print OUT " \n"; - print OUT " DEBUG\n"; - print OUT " \n"; - print OUT " \n"; - print OUT " \n"; - print OUT " \n"; - print OUT " $s\n"; - print OUT " \n"; - print OUT "\n"; -} - -sub http_catfile { - local($this, @path) = @_; - - $result = File::Spec->catfile(@path); - $result =~ s/(https?):\/([a-zA-Z])/$1:\/\/$2/; - return $result; -} - -sub underscore_to_space { - local($this, $s) = @_; - - return "" unless defined($s); - - $s =~ s/_+/ /g; - return $s; -} - -sub space_to_underscore { - local($this, $s) = @_; - - return "" unless defined($s); - - $s =~ s/ /_/g; - return $s; -} - -sub remove_spaces { - local($this, $s) = @_; - - $s =~ s/\s//g; - return $s; -} - -sub is_punctuation_string_p { - local($this, $s) = @_; - - return "" unless $s; - $s = $this->normalize_string($s) if $s =~ /[\x80-\xBF]/; - return $s =~ /^[-_,;:.?!\/\@+*"()]+$/; -} - -sub is_rare_punctuation_string_p { - local($this, $s) = @_; - - return 0 unless $s =~ /^[\x21-\x2F\x3A\x40\x5B-\x60\x7B-\x7E]{2,}$/; - return 0 if $s =~ /^(\.{2,3}|-{2,3}|\*{2,3}|::|\@?[-\/:]\@?)$/; - return 1; -} - -sub simplify_punctuation { - local($this, $s) = @_; - - $s =~ s/\xE2\x80\x92/-/g; - $s =~ s/\xE2\x80\x93/-/g; - $s =~ s/\xE2\x80\x94/-/g; - $s =~ s/\xE2\x80\x95/-/g; - $s =~ s/\xE2\x80\x98/`/g; - $s =~ s/\xE2\x80\x99/'/g; - $s =~ s/\xE2\x80\x9A/`/g; - $s =~ s/\xE2\x80\x9C/"/g; - $s =~ s/\xE2\x80\x9D/"/g; - $s =~ s/\xE2\x80\x9E/"/g; - $s =~ s/\xE2\x80\x9F/"/g; - $s =~ s/\xE2\x80\xA2/*/g; - $s =~ s/\xE2\x80\xA4/./g; - $s =~ s/\xE2\x80\xA5/../g; - $s =~ s/\xE2\x80\xA6/.../g; - return $s; -} - -sub latin_plus_p { - local($this, $s, $control) = @_; - - $control = "" unless defined($control); - return $s =~ /^([\x20-\x7E]|\xC2[\xA1-\xBF]|[\xC3-\xCC][\x80-\xBF]|\xCA[\x80-\xAF]|\xE2[\x80-\xAF][\x80-\xBF])+$/; -} - -sub nth_line_in_file { - local($this, $filename, $n) = @_; - - return "" unless $n =~ /^[1-9]\d*$/; - open(IN, $filename) || return ""; - my $line_no = 0; - while () { - $line_no++; - if ($n == $line_no) { - $_ =~ s/\s+$//; - close(IN); - return $_; - } - } - close(IN); - return ""; -} - -sub read_file { - local($this, $filename) = @_; - - my $file_content = ""; - open(IN, $filename) || return ""; - while () { - $file_content .= $_; - } - close(IN); - return $file_content; -} - -sub cap_list { - local($this, @list) = @_; - - 
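-    # Example (hypothetical inputs; assumes cap_member, defined elsewhere,
-    # matches case-insensitively):
-    #   NLP::utilities->cap_list("a book", "us", "paris")
-    #   returns ("a Book", "US", "Paris"): a leading article keeps its case,
-    #   acronyms on the cap_member list are upper-cased as a whole, and all
-    #   other items are capitalized on their first letter.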
@cap_list = (); - foreach $l (@list) { - ($premod, $core) = ($l =~ /^(a|an) (\S.*)$/); - if (defined($premod) && defined($core)) { - push(@cap_list, "$premod \u$core"); - } elsif ($this->cap_member($l, "US")) { - push(@cap_list, uc $l); - } else { - push(@cap_list, "\u$l"); - } - } - return @cap_list; -} - -sub integer_list_with_commas_and_ranges { - local($this, @list) = @_; - - my $in_range_p = 0; - my $last_value = 0; - my $result = ""; - while (@list) { - $elem = shift @list; - if ($elem =~ /^\d+$/) { - if ($in_range_p) { - if ($elem == $last_value + 1) { - $last_value = $elem; - } else { - $result .= "-$last_value, $elem"; - if (@list && ($next = $list[0]) && ($elem =~ /^\d+$/) && ($next =~ /^\d+$/) - && ($next == $elem + 1)) { - $last_value = $elem; - $in_range_p = 1; - } else { - $in_range_p = 0; - } - } - } else { - $result .= ", $elem"; - if (@list && ($next = $list[0]) && ($elem =~ /^\d+$/) && ($next =~ /^\d+$/) - && ($next == $elem + 1)) { - $last_value = $elem; - $in_range_p = 1; - } - } - } else { - if ($in_range_p) { - $result .= "-$last_value, $elem"; - $in_range_p = 0; - } else { - $result .= ", $elem"; - } - } - } - if ($in_range_p) { - $result .= "-$last_value"; - } - $result =~ s/^,\s*//; - return $result; -} - -sub comma_append { - local($this, $a, $b) = @_; - - if (defined($a) && ($a =~ /\S/)) { - if (defined($b) && ($b =~ /\S/)) { - return "$a,$b"; - } else { - return $a; - } - } else { - if (defined($b) && ($b =~ /\S/)) { - return $b; - } else { - return ""; - } - } -} - -sub version { - return "3.17"; -} - -sub print_stderr { - local($this, $message, $verbose) = @_; - - $verbose = 1 unless defined($verbose); - print STDERR $message if $verbose; - return 1; -} - -sub print_log { - local($this, $message, *LOG, $verbose) = @_; - - $verbose = 1 unless defined($verbose); - print LOG $message if $verbose; - return 1; -} - -sub compare_alignment { - local($this, $a, $b, $delimiter) = @_; - - $delimiter = "-" unless $delimiter; - my @a_list = split($delimiter, $a); - my @b_list = split($delimiter, $b); - - while (@a_list && @b_list) { - $a_head = shift @a_list; - $b_head = shift @b_list; - next if $a_head eq $b_head; - return $a_head <=> $b_head if ($a_head =~ /^\d+$/) && ($b_head =~ /^\d+$/); - return $a_head cmp $b_head; - } - return -1 if @a_list; - return 1 if @b_list; - return 0; -} - -sub normalize_string { - # normalize punctuation, full-width characters (to ASCII) - local($this, $s, $control) = @_; - - $control = "" unless defined($control); - - $norm_s = $s; - $norm_s =~ tr/A-Z/a-z/; - - $norm_s =~ s/ \@([-:\/])/ $1/g; # non-initial left @ - $norm_s =~ s/^\@([-:\/])/$1/; # initial left @ - $norm_s =~ s/([-:\/])\@ /$1 /g; # non-initial right @ - $norm_s =~ s/([-:\/])\@$/$1/; # initial right @ - $norm_s =~ s/([\(\)"])([,;.?!])/$1 $2/g; - $norm_s =~ s/\bcannot\b/can not/g; - - $norm_s =~ s/\xC2\xAD/-/g; # soft hyphen - - $norm_s =~ s/\xE2\x80\x94/-/g; # em dash - $norm_s =~ s/\xE2\x80\x95/-/g; # horizontal bar - $norm_s =~ s/\xE2\x80\x98/`/g; # grave accent - $norm_s =~ s/\xE2\x80\x99/'/g; # apostrophe - $norm_s =~ s/\xE2\x80\x9C/"/g; # left double quote mark - $norm_s =~ s/\xE2\x80\x9D/"/g; # right double quote mark - $norm_s =~ s/\xE2\x94\x80/-/g; # box drawings light horizontal - $norm_s =~ s/\xE2\x94\x81/-/g; # box drawings heavy horizontal - $norm_s =~ s/\xE3\x80\x81/,/g; # ideographic comma - $norm_s =~ s/\xE3\x80\x82/./g; # ideographic full stop - $norm_s =~ s/\xE3\x80\x88/"/g; # left angle bracket - $norm_s =~ s/\xE3\x80\x89/"/g; # right angle bracket - 
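-    # Illustrative effect of the mappings above (hypothetical inputs):
-    # the ideographic full stop "\xE3\x80\x82" becomes "." for any language;
-    # under a "zh" control string, the block further below additionally folds
-    # full-width characters to ASCII (e.g. "\xEF\xBC\xA1" -> "a" after
-    # lower-casing) and removes spaces between adjacent CJK characters unless
-    # "preserve-tok" is specified.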
$norm_s =~ s/\xE3\x80\x8A/"/g; # left double angle bracket - $norm_s =~ s/\xE3\x80\x8B/"/g; # right double angle bracket - $norm_s =~ s/\xE3\x80\x8C/"/g; # left corner bracket - $norm_s =~ s/\xE3\x80\x8D/"/g; # right corner bracket - $norm_s =~ s/\xE3\x80\x8E/"/g; # left white corner bracket - $norm_s =~ s/\xE3\x80\x8F/"/g; # right white corner bracket - $norm_s =~ s/\xE3\x83\xBB/\xC2\xB7/g; # katakana middle dot -> middle dot - $norm_s =~ s/\xEF\xBB\xBF//g; # UTF8 marker - - if ($control =~ /\bzh\b/i) { - # de-tokenize Chinese - unless ($control =~ /\bpreserve-tok\b/) { - while ($norm_s =~ /[\xE0-\xEF][\x80-\xBF][\x80-\xBF] [\xE0-\xEF][\x80-\xBF][\x80-\xBF]/) { - $norm_s =~ s/([\xE0-\xEF][\x80-\xBF][\x80-\xBF]) ([\xE0-\xEF][\x80-\xBF][\x80-\xBF])/$1$2/g; - } - $norm_s =~ s/([\xE0-\xEF][\x80-\xBF][\x80-\xBF]) ([\x21-\x7E])/$1$2/g; - $norm_s =~ s/([\x21-\x7E]) ([\xE0-\xEF][\x80-\xBF][\x80-\xBF])/$1$2/g; - } - - # fullwidth characters - while ($norm_s =~ /\xEF\xBC[\x81-\xBF]/) { - ($pre,$fullwidth,$post) = ($norm_s =~ /^(.*)(\xEF\xBC[\x81-\xBF])(.*)$/); - $fullwidth =~ s/^\xEF\xBC//; - $fullwidth =~ tr/[\x81-\xBF]/[\x21-\x5F]/; - $norm_s = "$pre$fullwidth$post"; - } - while ($norm_s =~ /\xEF\xBD[\x80-\x9E]/) { - ($pre,$fullwidth,$post) = ($norm_s =~ /^(.*)(\xEF\xBD[\x80-\x9E])(.*)$/); - $fullwidth =~ s/^\xEF\xBD//; - $fullwidth =~ tr/[\x80-\x9E]/[\x60-\x7E]/; - $norm_s = "$pre$fullwidth$post"; - } - $norm_s =~ tr/A-Z/a-z/ unless $control =~ /\bpreserve-case\b/; - - unless ($control =~ /\bpreserve-tok\b/) { - while ($norm_s =~ /[\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E] [\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]/) { - $norm_s =~ s/([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]) ([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E])/$1$2/g; - } - $norm_s =~ s/([\x21-\x7E]) ([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E])/$1$2/g; - $norm_s =~ s/([\x21-\x2F\x3A-\x40\x5B-\x60\x7B-\x7E]) ([\x21-\x7E])/$1$2/g; - $norm_s =~ s/ (\xC2\xA9|\xC2\xB7|\xC3\x97) /$1/g; # copyright sign, middle dot, multiplication sign - } - } - - if (($control =~ /\bzh\b/i) && ($control =~ /\bnorm-char\b/)) { - $norm_s =~ s/\xE6\x96\xBC/\xE4\xBA\x8E/g; # feng1 (first char. of Chin. "lie low", line 1308) - $norm_s =~ s/\xE6\xAD\xA7/\xE5\xB2\x90/g; # qi2 (second char. of Chin. "difference", line 1623) - $norm_s =~ s/\xE8\x82\xB2/\xE6\xAF\x93/g; # yu4 (second char. of Chin. "sports", line 440) - $norm_s =~ s/\xE8\x91\x97/\xE7\x9D\x80/g; # zhao (second char. of Chin. "prominent", line 4) - $norm_s =~ s/\xE9\x81\x87/\xE8\xBF\x82/g; # yu4 (second char. of Chin. "good luck", line 959) - } - - if ($control =~ /\bspurious-punct\b/) { - $norm_s =~ s/^\s*[-_\." ]+//; - $norm_s =~ s/[-_\." 
]+\s*$//; - $norm_s =~ s/\(\s+end\s+\)\s*$//i; - $norm_s =~ s/^\s*null\s*$//i; - } - - $norm_s =~ s/^\s+//; - $norm_s =~ s/\s+$//; - $norm_s =~ s/\s+/ /g; - - return $norm_s; -} - -sub normalize_extreme_string { - local($this, $s, $control) = @_; - - $control = "" unless defined($control); - - $norm_s = $s; - $norm_s =~ s/\xE2\xA9\xBE/\xE2\x89\xA5/g; # slanted greater than or equal to - - return $norm_s; -} - -sub increase_ht_count { - local($this, *ht, $incr, @path) = @_; - - if ($#path == 0) { - $ht{($path[0])} = ($ht{($path[0])} || 0) + $incr; - } elsif ($#path == 1) { - $ht{($path[0])}->{($path[1])} - = ($ht{($path[0])}->{($path[1])} || 0) + $incr; - } elsif ($#path == 2) { - $ht{($path[0])}->{($path[1])}->{($path[2])} - = ($ht{($path[0])}->{($path[1])}->{($path[2])} || 0) + $incr; - } elsif ($#path == 3) { - $ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])} - = ($ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])} || 0) + $incr; - } elsif ($#path == 4) { - $ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])}->{($path[4])} - = ($ht{($path[0])}->{($path[1])}->{($path[2])}->{($path[3])}->{($path[4])} || 0) + $incr; - } else { - print STDERR "increase_ht_count unsupported for path of length " . ($#path + 1) . "\n"; - } -} - -sub adjust_numbers { - # non-negative integers - local($this, $s, $delta) = @_; - - $result = ""; - while ($s =~ /\d/) { - ($pre,$i,$post) = ($s =~ /^([^0-9]*)(\d+)([^0-9].*|)$/); - $result .= $pre . ($i + $delta); - $s = $post; - } - $result .= $s; - return $result; -} - -sub first_defined { - local($this, @list) = @_; - - foreach $elem (@list) { - return $elem if defined($elem); - } - return ""; -} - -sub first_defined_non_empty { - local($this, @list) = @_; - - foreach $item (@list) { - return $item if defined($item) && ($item ne ""); - } - return ""; -} - -sub elem_after_member_list { - local($this,$elem,@array) = @_; - - my @elem_after_member_list = (); - foreach $i ((0 .. ($#array - 1))) { - push(@elem_after_member_list, $array[$i+1]) if $elem eq $array[$i]; - } - return join(" ", @elem_after_member_list); -} - -sub add_value_to_list { - local($this,$s,$value,$sep) = @_; - - $s = "" unless defined($s); - $sep = "," unless defined($sep); - return ($s =~ /\S/) ? "$s$sep$value" : $value; -} - -sub add_new_value_to_list { - local($this,$s,$value,$sep) = @_; - - $s = "" unless defined($s); - $sep = "," unless defined($sep); - my @values = split(/$sep/, $s); - push(@values, $value) if defined($value) && ! $this->member($value, @values); - - return join($sep, @values); -} - -sub add_new_hash_value_to_list { - local($this,*ht,$key,$value,$sep) = @_; - - $sep = "," unless defined($sep); - my $value_s = $ht{$key}; - if (defined($value_s)) { - my @values = split(/$sep/, $value_s); - push(@values, $value) unless $this->member($value, @values); - $ht{$key} = join($sep, @values); - } else { - $ht{$key} = $value; - } -} - -sub ip_info { - local($this, $ip_address) = @_; - - my %ip_map = (); - $ip_map{"128.9.208.69"} = "Ulf Hermjakob (bach.isi.edu)"; - $ip_map{"128.9.208.169"} = "Ulf Hermjakob (brahms.isi.edu)"; - $ip_map{"128.9.184.148"} = "Ulf Hermjakob (beethoven.isi.edu ?)"; - $ip_map{"128.9.184.162"} = "Ulf Hermjakob (beethoven.isi.edu)"; - $ip_map{"128.9.176.39"} = "Kevin Knight"; - $ip_map{"128.9.184.187"} = "Kevin Knight"; - $ip_map{"128.9.216.56"} = "Kevin Knight"; - $ip_map{"128.9.208.155"} = "cage.isi.edu"; - - return ($ip_name = $ip_map{$ip_address}) ? 
"$ip_address - $ip_name" : $ip_address; -} - -# from standalone de-accent.pl -sub de_accent_string { - local($this, $s) = @_; - - $s =~ tr/A-Z/a-z/; - unless (0) { - # Latin-1 - if ($s =~ /\xC3[\x80-\xBF]/) { - $s =~ s/(À|Á|Â|Ã|Ä|Å)/A/g; - $s =~ s/Æ/Ae/g; - $s =~ s/Ç/C/g; - $s =~ s/Ð/D/g; - $s =~ s/(È|É|Ê|Ë)/E/g; - $s =~ s/(Ì|Í|Î|Ï)/I/g; - $s =~ s/Ñ/N/g; - $s =~ s/(Ò|Ó|Ô|Õ|Ö|Ø)/O/g; - $s =~ s/(Ù|Ú|Û|Ü)/U/g; - $s =~ s/Þ/Th/g; - $s =~ s/Ý/Y/g; - $s =~ s/(à|á|â|ã|ä|å)/a/g; - $s =~ s/æ/ae/g; - $s =~ s/ç/c/g; - $s =~ s/(è|é|ê|ë)/e/g; - $s =~ s/(ì|í|î|ï)/i/g; - $s =~ s/ð/d/g; - $s =~ s/ñ/n/g; - $s =~ s/(ò|ó|ô|õ|ö)/o/g; - $s =~ s/ß/ss/g; - $s =~ s/þ/th/g; - $s =~ s/(ù|ú|û|ü)/u/g; - $s =~ s/(ý|ÿ)/y/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/(Ā|Ă|Ą)/A/g; - $s =~ s/(ā|ă|ą)/a/g; - $s =~ s/(Ć|Ĉ|Ċ|Č)/C/g; - $s =~ s/(ć|ĉ|ċ|č)/c/g; - $s =~ s/(Ď|Đ)/D/g; - $s =~ s/(ď|đ)/d/g; - $s =~ s/(Ē|Ĕ|Ė|Ę|Ě)/E/g; - $s =~ s/(ē|ĕ|ė|ę|ě)/e/g; - $s =~ s/(Ĝ|Ğ|Ġ|Ģ)/G/g; - $s =~ s/(ĝ|ğ|ġ|ģ)/g/g; - $s =~ s/(Ĥ|Ħ)/H/g; - $s =~ s/(ĥ|ħ)/h/g; - $s =~ s/(Ĩ|Ī|Ĭ|Į|İ)/I/g; - $s =~ s/(ĩ|ī|ĭ|į|ı)/i/g; - $s =~ s/IJ/Ij/g; - $s =~ s/ij/ij/g; - $s =~ s/Ĵ/J/g; - $s =~ s/ĵ/j/g; - $s =~ s/Ķ/K/g; - $s =~ s/(ķ|ĸ)/k/g; - $s =~ s/(Ĺ|Ļ|Ľ|Ŀ|Ł)/L/g; - $s =~ s/(ļ|ľ|ŀ|ł)/l/g; - $s =~ s/(Ń|Ņ|Ň|Ŋ)/N/g; - $s =~ s/(ń|ņ|ň|ʼn|ŋ)/n/g; - $s =~ s/(Ō|Ŏ|Ő)/O/g; - $s =~ s/(ō|ŏ|ő)/o/g; - $s =~ s/Œ/Oe/g; - $s =~ s/œ/oe/g; - $s =~ s/(Ŕ|Ŗ|Ř)/R/g; - $s =~ s/(ŕ|ŗ|ř)/r/g; - $s =~ s/(Ś|Ŝ|Ş|Š)/S/g; - $s =~ s/(ś|ŝ|ş|š|ſ)/s/g; - $s =~ s/(Ţ|Ť|Ŧ)/T/g; - $s =~ s/(ţ|ť|ŧ)/t/g; - $s =~ s/(Ũ|Ū|Ŭ|Ů|Ű|Ų)/U/g; - $s =~ s/(ũ|ū|ŭ|ů|ű|ų)/u/g; - $s =~ s/Ŵ/W/g; - $s =~ s/ŵ/w/g; - $s =~ s/(Ŷ|Ÿ)/Y/g; - $s =~ s/ŷ/y/g; - $s =~ s/(Ź|Ż|Ž)/Z/g; - $s =~ s/(ź|ż|ž)/z/g; - } - # Latin Extended-B - if ($s =~ /[\xC7-\xC7][\x80-\xBF]/) { - $s =~ s/(\xC7\x8D)/A/g; - $s =~ s/(\xC7\x8E)/a/g; - $s =~ s/(\xC7\x8F)/I/g; - $s =~ s/(\xC7\x90)/i/g; - $s =~ s/(\xC7\x91)/O/g; - $s =~ s/(\xC7\x92)/o/g; - $s =~ s/(\xC7\x93)/U/g; - $s =~ s/(\xC7\x94)/u/g; - $s =~ s/(\xC7\x95)/U/g; - $s =~ s/(\xC7\x96)/u/g; - $s =~ s/(\xC7\x97)/U/g; - $s =~ s/(\xC7\x98)/u/g; - $s =~ s/(\xC7\x99)/U/g; - $s =~ s/(\xC7\x9A)/u/g; - $s =~ s/(\xC7\x9B)/U/g; - $s =~ s/(\xC7\x9C)/u/g; - } - # Latin Extended Additional - if ($s =~ /\xE1[\xB8-\xBF][\x80-\xBF]/) { - $s =~ s/(ḁ|ạ|ả|ấ|ầ|ẩ|ẫ|ậ|ắ|ằ|ẳ|ẵ|ặ|ẚ)/a/g; - $s =~ s/(ḃ|ḅ|ḇ)/b/g; - $s =~ s/(ḉ)/c/g; - $s =~ s/(ḋ|ḍ|ḏ|ḑ|ḓ)/d/g; - $s =~ s/(ḕ|ḗ|ḙ|ḛ|ḝ|ẹ|ẻ|ẽ|ế|ề|ể|ễ|ệ)/e/g; - $s =~ s/(ḟ)/f/g; - $s =~ s/(ḡ)/g/g; - $s =~ s/(ḣ|ḥ|ḧ|ḩ|ḫ)/h/g; - $s =~ s/(ḭ|ḯ|ỉ|ị)/i/g; - $s =~ s/(ḱ|ḳ|ḵ)/k/g; - $s =~ s/(ḷ|ḹ|ḻ|ḽ)/l/g; - $s =~ s/(ḿ|ṁ|ṃ)/m/g; - $s =~ s/(ṅ|ṇ|ṉ|ṋ)/m/g; - $s =~ s/(ọ|ỏ|ố|ồ|ổ|ỗ|ộ|ớ|ờ|ở|ỡ|ợ|ṍ|ṏ|ṑ|ṓ)/o/g; - $s =~ s/(ṕ|ṗ)/p/g; - $s =~ s/(ṙ|ṛ|ṝ|ṟ)/r/g; - $s =~ s/(ṡ|ṣ|ṥ|ṧ|ṩ|ẛ)/s/g; - $s =~ s/(ṫ|ṭ|ṯ|ṱ)/t/g; - $s =~ s/(ṳ|ṵ|ṷ|ṹ|ṻ|ụ|ủ|ứ|ừ|ử|ữ|ự)/u/g; - $s =~ s/(ṽ|ṿ)/v/g; - $s =~ s/(ẁ|ẃ|ẅ|ẇ|ẉ|ẘ)/w/g; - $s =~ s/(ẋ|ẍ)/x/g; - $s =~ s/(ẏ|ỳ|ỵ|ỷ|ỹ|ẙ)/y/g; - $s =~ s/(ẑ|ẓ|ẕ)/z/g; - $s =~ s/(Ḁ|Ạ|Ả|Ấ|Ầ|Ẩ|Ẫ|Ậ|Ắ|Ằ|Ẳ|Ẵ|Ặ)/A/g; - $s =~ s/(Ḃ|Ḅ|Ḇ)/B/g; - $s =~ s/(Ḉ)/C/g; - $s =~ s/(Ḋ|Ḍ|Ḏ|Ḑ|Ḓ)/D/g; - $s =~ s/(Ḕ|Ḗ|Ḙ|Ḛ|Ḝ|Ẹ|Ẻ|Ẽ|Ế|Ề|Ể|Ễ|Ệ)/E/g; - $s =~ s/(Ḟ)/F/g; - $s =~ s/(Ḡ)/G/g; - $s =~ s/(Ḣ|Ḥ|Ḧ|Ḩ|Ḫ)/H/g; - $s =~ s/(Ḭ|Ḯ|Ỉ|Ị)/I/g; - $s =~ s/(Ḱ|Ḳ|Ḵ)/K/g; - $s =~ s/(Ḷ|Ḹ|Ḻ|Ḽ)/L/g; - $s =~ s/(Ḿ|Ṁ|Ṃ)/M/g; - $s =~ s/(Ṅ|Ṇ|Ṉ|Ṋ)/N/g; - $s =~ s/(Ṍ|Ṏ|Ṑ|Ṓ|Ọ|Ỏ|Ố|Ồ|Ổ|Ỗ|Ộ|Ớ|Ờ|Ở|Ỡ|Ợ)/O/g; - $s =~ s/(Ṕ|Ṗ)/P/g; - $s =~ s/(Ṙ|Ṛ|Ṝ|Ṟ)/R/g; - $s =~ s/(Ṡ|Ṣ|Ṥ|Ṧ|Ṩ)/S/g; - $s =~ s/(Ṫ|Ṭ|Ṯ|Ṱ)/T/g; - $s =~ s/(Ṳ|Ṵ|Ṷ|Ṹ|Ṻ|Ụ|Ủ|Ứ|Ừ|Ử|Ữ|Ự)/U/g; - $s =~ s/(Ṽ|Ṿ)/V/g; - $s =~ s/(Ẁ|Ẃ|Ẅ|Ẇ|Ẉ)/W/g; - $s =~ s/(Ẍ)/X/g; - $s =~ 
s/(Ẏ|Ỳ|Ỵ|Ỷ|Ỹ)/Y/g; - $s =~ s/(Ẑ|Ẓ|Ẕ)/Z/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/ά/α/g; - $s =~ s/έ/ε/g; - $s =~ s/ί/ι/g; - $s =~ s/ϊ/ι/g; - $s =~ s/ΐ/ι/g; - $s =~ s/ό/ο/g; - $s =~ s/ύ/υ/g; - $s =~ s/ϋ/υ/g; - $s =~ s/ΰ/υ/g; - $s =~ s/ώ/ω/g; - $s =~ s/Ά/Α/g; - $s =~ s/Έ/Ε/g; - $s =~ s/Ή/Η/g; - $s =~ s/Ί/Ι/g; - $s =~ s/Ϊ/Ι/g; - $s =~ s/Ύ/Υ/g; - $s =~ s/Ϋ/Υ/g; - $s =~ s/Ώ/Ω/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/Ѐ/Е/g; - $s =~ s/Ё/Е/g; - $s =~ s/Ѓ/Г/g; - $s =~ s/Ќ/К/g; - $s =~ s/Ѝ/И/g; - $s =~ s/Й/И/g; - $s =~ s/ѐ/е/g; - $s =~ s/ё/е/g; - $s =~ s/ѓ/г/g; - $s =~ s/ќ/к/g; - $s =~ s/ѝ/и/g; - $s =~ s/й/и/g; - } - } - return $s; -} - -sub read_de_accent_case_resource { - local($this, $filename, *ht, *LOG, $verbose) = @_; - # e.g. data/char-de-accent-lc.txt - - if (open(IN, $filename)) { - my $mode = "de-accent"; - my $line_number = 0; - my $n_de_accent_targets = 0; - my $n_de_accent_sources = 0; - my $n_case_entries = 0; - while () { - s/^\xEF\xBB\xBF//; - s/\s*$//; - $line_number++; - if ($_ =~ /^#+\s*CASE\b/) { - $mode = "case"; - } elsif ($_ =~ /^#+\s*PUNCTUATION NORMALIZATION\b/) { - $mode = "punctuation-normalization"; - } elsif ($_ =~ /^#/) { - # ignore comment - } elsif ($_ =~ /^\s*$/) { - # ignore empty line - } elsif (($mode eq "de-accent") && (($char_without_accent, @chars_with_accent) = split(/\s+/, $_))) { - if (keys %{$ht{DE_ACCENT_INV}->{$char_without_accent}}) { - print LOG "Ignoring duplicate de-accent line for target $char_without_accent in l.$line_number in $filename\n" unless $char_without_accent eq "--"; - } elsif (@chars_with_accent) { - $n_de_accent_targets++; - foreach $char_with_accent (@chars_with_accent) { - my @prev_target_chars = keys %{$ht{DE_ACCENT}->{$char_with_accent}}; - print LOG "Accent character $char_with_accent has duplicate target $char_without_accent (besides @prev_target_chars) in l.$line_number in $filename\n" if @prev_target_chars && (! ($char_without_accent =~ /^[aou]e$/i)); - $char_without_accent = "" if $char_without_accent eq "--"; - $ht{DE_ACCENT}->{$char_with_accent}->{$char_without_accent} = 1; - $ht{DE_ACCENT1}->{$char_with_accent} = $char_without_accent - if (! 
defined($ht{DE_ACCENT1}->{$char_with_accent})) - && ($char_without_accent =~ /^.[\x80-\xBF]*$/); - $ht{DE_ACCENT_INV}->{$char_without_accent}->{$char_with_accent} = 1; - $ht{UPPER_CASE_OR_ACCENTED}->{$char_with_accent} = 1; - $n_de_accent_sources++; - } - } else { - print LOG "Empty de-accent list for $char_without_accent in l.$line_number in $filename\n"; - } - } elsif (($mode eq "punctuation-normalization") && (($norm_punct, @unnorm_puncts) = split(/\s+/, $_))) { - if (keys %{$ht{NORM_PUNCT_INV}->{$norm_punct}}) { - print LOG "Ignoring duplicate punctuation-normalization line for target $norm_punct in l.$line_number in $filename\n"; - } elsif (@unnorm_puncts) { - foreach $unnorm_punct (@unnorm_puncts) { - my $prev_norm_punct = $ht{NORM_PUNCT}->{$unnorm_punct}; - if ($prev_norm_punct) { - print LOG "Ignoring duplicate punctuation normalization $unnorm_punct -> $norm_punct (besides $prev_norm_punct) in l.$line_number in $filename\n"; - } - $ht{NORM_PUNCT}->{$unnorm_punct} = $norm_punct; - $ht{NORM_PUNCT_INV}->{$norm_punct}->{$unnorm_punct} = 1; - $ht{LC_DE_ACCENT_CHAR_NORM_PUNCT}->{$unnorm_punct} = $norm_punct; - } - } - } elsif (($mode eq "case") && (($uc_char, $lc_char) = ($_ =~ /^(\S+)\s+(\S+)\s*$/))) { - $ht{UPPER_TO_LOWER_CASE}->{$uc_char} = $lc_char; - $ht{LOWER_TO_UPPER_CASE}->{$lc_char} = $uc_char; - $ht{UPPER_CASE_P}->{$uc_char} = 1; - $ht{LOWER_CASE_P}->{$lc_char} = 1; - $ht{UPPER_CASE_OR_ACCENTED}->{$uc_char} = 1; - $n_case_entries++; - } else { - print LOG "Unrecognized l.$line_number in $filename\n"; - } - } - foreach $char (keys %{$ht{UPPER_CASE_OR_ACCENTED}}) { - my $lc_char = $ht{UPPER_TO_LOWER_CASE}->{$char}; - $lc_char = $char unless defined($lc_char); - my @de_accend_char_results = sort keys %{$ht{DE_ACCENT}->{$lc_char}}; - my $new_char = (@de_accend_char_results) ? $de_accend_char_results[0] : $lc_char; - $ht{LC_DE_ACCENT_CHAR}->{$char} = $new_char; - $ht{LC_DE_ACCENT_CHAR_NORM_PUNCT}->{$char} = $new_char; - } - close(IN); - print LOG "Found $n_case_entries case entries, $n_de_accent_sources/$n_de_accent_targets source/target entries in $line_number lines in file $filename\n" if $verbose; - } else { - print LOG "Can't open $filename\n"; - } -} - -sub de_accent_char { - local($this, $char, *ht, $default) = @_; - - @de_accend_char_results = sort keys %{$ht{DE_ACCENT}->{$char}}; - return (@de_accend_char_results) ? @de_accend_char_results : ($default); -} - -sub lower_case_char { - local($this, $char, *ht, $default) = @_; - - return (defined($lc = $ht{UPPER_TO_LOWER_CASE}->{$char})) ? $lc : $default; -} - -sub lower_case_and_de_accent_char { - local($this, $char, *ht) = @_; - - my $lc_char = $this->lower_case_char($char, *ht, $char); - return $this->de_accent_char($lc_char, *ht, $lc_char); -} - -sub lower_case_and_de_accent_string { - local($this, $string, *ht, $control) = @_; - - # $this->stopwatch("start", "lower_case_and_de_accent_string", *ht, *LOG); - my $norm_punct_p = ($control && ($control =~ /norm-punct/i)); - my @chars = $this->split_into_utf8_characters($string); - my $result = ""; - foreach $char (@chars) { - my @lc_de_accented_chars = $this->lower_case_and_de_accent_char($char, *ht); - if ($norm_punct_p - && (! @lc_de_accented_chars)) { - my $norm_punct = $ht{NORM_PUNCT}->{$char}; - @lc_de_accented_chars = ($norm_punct) if $norm_punct; - } - $result .= ((@lc_de_accented_chars) ? 
$lc_de_accented_chars[0] : $char); - } - # $this->stopwatch("end", "lower_case_and_de_accent_string", *ht, *LOG); - return $result; -} - -sub lower_case_and_de_accent_norm_punct { - local($this, $char, *ht) = @_; - - my $new_char = $ht{LC_DE_ACCENT_CHAR_NORM_PUNCT}->{$char}; - return (defined($new_char)) ? $new_char : $char; -} - -sub lower_case_and_de_accent_string2 { - local($this, $string, *ht, $control) = @_; - - my $norm_punct_p = ($control && ($control =~ /norm-punct/i)); - # $this->stopwatch("start", "lower_case_and_de_accent_string2", *ht, *LOG); - my $s = $string; - my $result = ""; - while (($char, $rest) = ($s =~ /^(.[\x80-\xBF]*)(.*)$/)) { - my $new_char = $ht{LC_DE_ACCENT_CHAR}->{$char}; - if (defined($new_char)) { - $result .= $new_char; - } elsif ($norm_punct_p && defined($new_char = $ht{NORM_PUNCT}->{$char})) { - $result .= $new_char; - } else { - $result .= $char; - } - $s = $rest; - } - # $this->stopwatch("end", "lower_case_and_de_accent_string2", *ht, *LOG); - return $result; -} - -sub lower_case_string { - local($this, $string, *ht, $control) = @_; - - my $norm_punct_p = ($control && ($control =~ /norm-punct/i)); - my $s = $string; - my $result = ""; - while (($char, $rest) = ($s =~ /^(.[\x80-\xBF]*)(.*)$/)) { - my $lc_char = $ht{UPPER_TO_LOWER_CASE}->{$char}; - if (defined($lc_char)) { - $result .= $lc_char; - } elsif ($norm_punct_p && defined($new_char = $ht{NORM_PUNCT}->{$char})) { - $result .= $new_char; - } else { - $result .= $char; - } - $s = $rest; - } - return $result; -} - -sub round_to_n_decimal_places { - local($this, $x, $n, $fill_decimals_p) = @_; - - $fill_decimals_p = 0 unless defined($fill_decimals_p); - unless (defined($x)) { - return $x; - } - if (($x =~ /^-?\d+$/) && (! $fill_decimals_p)) { - return $x; - } - $factor = 1; - foreach $i ((1 .. $n)) { - $factor *= 10; - } - my $rounded_number; - if ($x > 0) { - $rounded_number = (int(($factor * $x) + 0.5) / $factor); - } else { - $rounded_number = (int(($factor * $x) - 0.5) / $factor); - } - if ($fill_decimals_p) { - ($period, $decimals) = ($rounded_number =~ /^-?\d+(\.?)(\d*)$/); - $rounded_number .= "." unless $period || ($n == 0); - foreach ((1 .. ($n - length($decimals)))) { - $rounded_number .= 0; - } - } - return $rounded_number; -} - -sub commify { - local($caller,$number) = @_; - - my $text = reverse $number; - $text =~ s/(\d\d\d)(?=\d)(?!\d*\.)/$1,/g; - return scalar reverse $text; -} - -sub add_javascript_functions { - local($caller,@function_names) = @_; - - $add_javascript_function_s = ""; - foreach $function_name (@function_names) { - - if ($function_name eq "highlight_elems") { - $add_javascript_function_s .= " - function highlight_elems(group_id, value) { - if (group_id != '') { - i = 1; - id = group_id + '-' + i; - while ((s = document.getElementById(id)) != null) { - if (! 
s.origColor) { - if (s.style.color) { - s.origColor = s.style.color; - } else { - s.origColor = '#000000'; - } - } - if (value == '1') { - s.style.color = '#0000FF'; - if (s.innerHTML == '-') { - s.style.innerHtml = s.innerHTML; - s.innerHTML = '-   ← here'; - s.style.fontWeight = 900; - } else { - s.style.fontWeight = 'bold'; - } - } else { - s.style.fontWeight = 'normal'; - s.style.color = s.origColor; - if (s.style.innerHtml != null) { - s.innerHTML = s.style.innerHtml; - } - } - i = i + 1; - id = group_id + '-' + i; - } - } - } -"; - } elsif ($function_name eq "set_style_for_ids") { - $add_javascript_function_s .= " - function set_style_for_ids(style,id_list) { - var ids = id_list.split(/\\s+/); - var len = ids.length; - var s; - for (var i=0; i>$filename")) { - print OUT $s; - close(OUT); - $result = "Appended"; - } else { - $result = "Can't append"; - } - } else { - if (open(OUT, ">$filename")) { - print OUT $s; - close(OUT); - $result = "Wrote"; - } else { - $result = "Can't write"; - } - } - chmod($mod, $filename) if defined($mod) && -e $filename; - return $result; -} - -sub square { - local($caller, $x) = @_; - - return $x * $x; -} - -sub mutual_info { - local($caller, $ab_count, $a_count, $b_count, $total_count, $smoothing) = @_; - - $smoothing = 1 unless defined($smoothing); - $ab_count = 0 unless defined($ab_count); - return 0 unless $a_count && $b_count && $total_count; - - my $p_ab = $ab_count / $total_count; - my $p_a = $a_count / $total_count; - my $p_b = $b_count / $total_count; - my $expected_ab = $p_a * $p_b * $total_count; - - return -99 unless $expected_ab || $smoothing; - - return CORE::log(($ab_count + $smoothing) / ($expected_ab + $smoothing)); -} - -sub mutual_info_multi { - local($caller, $multi_count, $total_count, $smoothing, @counts) = @_; - - return 0 unless $total_count; - my $p_indivuals = 1; - foreach $count (@counts) { - return 0 unless $count; - $p_indivuals *= ($count / $total_count); - } - my $expected_multi_count = $p_indivuals * $total_count; - # print STDERR "actual vs. expected multi_count($multi_count, $total_count, $smoothing, @counts) = $multi_count vs. $expected_multi_count\n"; - - return -99 unless $expected_multi_count || $smoothing; - - return CORE::log(($multi_count + $smoothing) / ($expected_multi_count + $smoothing)); -} - -sub precision_recall_fmeasure { - local($caller, $n_gold, $n_test, $n_shared, $pretty_print_p) = @_; - - unless (($n_gold =~ /^[1-9]\d*$/) && ($n_test =~ /^[1-9]\d*$/)) { - $zero = ($pretty_print_p) ? "0%" : 0; - if ($n_gold =~ /^[1-9]\d*$/) { - return ("n/a", $zero, $zero); - } elsif ($n_test =~ /^[1-9]\d*$/) { - return ($zero, "n/a", $zero); - } else { - return ("n/a", "n/a", "n/a"); - } - } - my $precision = $n_shared / $n_test; - my $recall = $n_shared / $n_gold; - my $f_measure = ($precision * $recall * 2) / ($precision + $recall); - - return ($precision, $recall, $f_measure) unless $pretty_print_p; - - my $pretty_precision = $caller->round_to_n_decimal_places(100*$precision, 1) . "%"; - my $pretty_recall = $caller->round_to_n_decimal_places(100*$recall, 1) . "%"; - my $pretty_f_measure = $caller->round_to_n_decimal_places(100*$f_measure, 1) . 
"%"; - - return ($pretty_precision, $pretty_recall, $pretty_f_measure); -} - -sub recapitalize_named_entity { - local($caller, $s) = @_; - - my @comps = (); - foreach $comp (split(/\s+/, $s)) { - if ($comp =~ /^(and|da|for|of|on|the|van|von)$/) { - push(@comps, $comp); - } elsif ($comp =~ /^[a-z]/) { - push(@comps, ucfirst $comp); - } else { - push(@comps, $comp); - } - } - return join(" ", @comps); -} - -sub slot_value_in_double_colon_del_list { - local($this, $s, $slot, $default) = @_; - - $default = "" unless defined($default); - if (($value) = ($s =~ /::$slot\s+(\S.*\S|\S)\s*$/)) { - $value =~ s/\s*::\S.*\s*$//; - return $value; - } else { - return $default; - } -} - -sub synt_in_double_colon_del_list { - local($this, $s) = @_; - - ($value) = ($s =~ /::synt\s+(\S+|\S.*?\S)(?:\s+::.*)?$/); - return (defined($value)) ? $value : ""; -} - -sub form_in_double_colon_del_list { - local($this, $s) = @_; - - ($value) = ($s =~ /::form\s+(\S+|\S.*?\S)(?:\s+::.*)?$/); - return (defined($value)) ? $value : ""; -} - -sub lex_in_double_colon_del_list { - local($this, $s) = @_; - - ($value) = ($s =~ /::lex\s+(\S+|\S.*?\S)(?:\s+::.*)?$/); - return (defined($value)) ? $value : ""; -} - -sub multi_slot_value_in_double_colon_del_list { - # e.g. when there are multiple slot/value pairs in a line, e.g. ::eng ... :eng ... - local($this, $s, $slot) = @_; - - @values = (); - while (($value, $rest) = ($s =~ /::$slot\s+(\S|\S.*?\S)(\s+::\S.*|\s*)$/)) { - push(@values, $value); - $s = $rest; - } - return @values; -} - -sub remove_slot_in_double_colon_del_list { - local($this, $s, $slot) = @_; - - $s =~ s/::$slot(?:|\s+\S|\s+\S.*?\S)(\s+::\S.*|\s*)$/$1/; - $s =~ s/^\s*//; - return $s; -} - -sub extract_split_info_from_split_dir { - local($this, $dir, *ht) = @_; - - my $n_files = 0; - my $n_snt_ids = 0; - if (opendir(DIR, $dir)) { - my @filenames = sort readdir(DIR); - closedir(DIR); - foreach $filename (@filenames) { - next unless $filename =~ /\.txt$/; - my $split_class; - if (($split_class) = ($filename =~ /-(dev|training|test)-/)) { - my $full_filename = "$dir/$filename"; - if (open(IN, $full_filename)) { - my $old_n_snt_ids = $n_snt_ids; - while () { - if (($snt_id) = ($_ =~ /^#\s*::id\s+(\S+)/)) { - if ($old_split_class = $ht{SPLIT_CLASS}->{$snt_id}) { - unless ($old_split_class eq $split_class) { - print STDERR "Conflicting split class for $snt_id: $old_split_class $split_class\n"; - } - } else { - $ht{SPLIT_CLASS}->{$snt_id} = $split_class; - $ht{SPLIT_CLASS_COUNT}->{$split_class} = ($ht{SPLIT_CLASS_COUNT}->{$split_class} || 0) + 1; - $n_snt_ids++; - } - } - } - $n_files++ unless $n_snt_ids == $old_n_snt_ids; - close(IN); - } else { - print STDERR "Can't open file $full_filename"; - } - } else { - print STDERR "Skipping file $filename when extracting split info from $dir\n"; - } - } - print STDERR "Extracted $n_snt_ids split classes from $n_files files.\n"; - } else { - print STDERR "Can't open directory $dir to extract split info.\n"; - } -} - -sub extract_toks_for_split_class_from_dir { - local($this, $dir, *ht, $split_class, $control) = @_; - - $control = "" unless defined($control); - $print_snt_id_p = ($control =~ /\bwith-snt-id\b/); - my $n_files = 0; - my $n_snts = 0; - if (opendir(DIR, $dir)) { - my @filenames = sort readdir(DIR); - closedir(DIR); - foreach $filename (@filenames) { - next unless $filename =~ /^alignment-release-.*\.txt$/; - my $full_filename = "$dir/$filename"; - if (open(IN, $full_filename)) { - my $old_n_snts = $n_snts; - my $snt_id = ""; - while () { - if (($s_value) = ($_ =~ 
/^#\s*::id\s+(\S+)/)) { - $snt_id = $s_value; - $proper_split_class_p - = ($this_split_class = $ht{SPLIT_CLASS}->{$snt_id}) - && ($this_split_class eq $split_class); - } elsif (($tok) = ($_ =~ /^#\s*::tok\s+(\S|\S.*\S)\s*$/)) { - if ($proper_split_class_p) { - print "$snt_id " if $print_snt_id_p; - print "$tok\n"; - $n_snts++; - } - } - } - $n_files++ unless $n_snts == $old_n_snts; - close(IN); - } else { - print STDERR "Can't open file $full_filename"; - } - } - print STDERR "Extracted $n_snts tokenized sentences ($split_class) from $n_files files.\n"; - } else { - print STDERR "Can't open directory $dir to extract tokens.\n"; - } -} - -sub load_relevant_tok_ngram_corpus { - local($this, $filename, *ht, $max_lex_rule_span, $ngram_count_min, $optional_ngram_output_filename) = @_; - - $ngram_count_min = 1 unless $ngram_count_min; - $max_lex_rule_span = 10 unless $max_lex_rule_span; - my $n_ngram_instances = 0; - my $n_ngram_types = 0; - if (open(IN, $filename)) { - while () { - s/\s*$//; - @tokens = split(/\s+/, $_); - foreach $from_token_index ((0 .. $#tokens)) { - foreach $to_token_index (($from_token_index .. ($from_token_index + $max_lex_rule_span -1))) { - last if $to_token_index > $#tokens; - my $ngram = join(" ", @tokens[$from_token_index .. $to_token_index]); - $ht{RELEVANT_NGRAM}->{$ngram} = ($ht{RELEVANT_NGRAM}->{$ngram} || 0) + 1; - } - } - } - close(IN); - if ($optional_ngram_output_filename && open(OUT, ">$optional_ngram_output_filename")) { - foreach $ngram (sort keys %{$ht{RELEVANT_NGRAM}}) { - $count = $ht{RELEVANT_NGRAM}->{$ngram}; - next unless $count >= $ngram_count_min; - print OUT "($count) $ngram\n"; - $n_ngram_types++; - $n_ngram_instances += $count; - } - close(OUT); - print STDERR "Extracted $n_ngram_types ngram types, $n_ngram_instances ngram instances.\n"; - print STDERR "Wrote ngram stats to $optional_ngram_output_filename\n"; - } - } else { - print STDERR "Can't open relevant tok ngram corpus $filename\n"; - } -} - -sub load_relevant_tok_ngrams { - local($this, $filename, *ht) = @_; - - my $n_entries = 0; - if (open(IN, $filename)) { - while () { - s/\s*$//; - if (($count, $ngram) = ($_ =~ /^\((\d+)\)\s+(\S|\S.*\S)\s*$/)) { - $lc_ngram = lc $ngram; - $ht{RELEVANT_NGRAM}->{$lc_ngram} = ($ht{RELEVANT_NGRAM}->{$lc_ngram} || 0) + $count; - $ht{RELEVANT_LC_NGRAM}->{$lc_ngram} = ($ht{RELEVANT_LC_NGRAM}->{$lc_ngram} || 0) + $count; - $n_entries++; - } - } - close(IN); - print STDERR "Read in $n_entries entries from $filename\n"; - } else { - print STDERR "Can't open relevant tok ngrams from $filename\n"; - } -} - -sub snt_id_sort_function { - local($this, $a, $b) = @_; - - if ((($core_a, $index_a) = ($a =~ /^(\S+)\.(\d+)$/)) - && (($core_b, $index_b) = ($b =~ /^(\S+)\.(\d+)$/))) { - return ($core_a cmp $core_b) || ($index_a <=> $index_b); - } else { - return $a cmp $b; - } -} - -sub count_value_sort_function { - local($this, $a_count, $b_count, $a_value, $b_value, $control) = @_; - - # normalize fractions such as "1/2" - if ($a_count > $b_count) { - return ($control eq "decreasing") ? -1 : 1; - } elsif ($b_count > $a_count) { - return ($control eq "decreasing") ? 
1 : -1; - } - $a_value = $num / $den if ($num, $den) = ($a_value =~ /^([1-9]\d*)\/([1-9]\d*)$/); - $b_value = $num / $den if ($num, $den) = ($b_value =~ /^([1-9]\d*)\/([1-9]\d*)$/); - $a_value =~ s/:/\./ if $a_value =~ /^\d+:\d+$/; - $b_value =~ s/:/\./ if $b_value =~ /^\d+:\d+$/; - if (($a_value =~ /^-?\d+(\.\d+)?$/) - && ($b_value =~ /^-?\d+(\.\d+)?$/)) { - return $a_value <=> $b_value; - } elsif ($a_value =~ /^-?\d+(\.\d+)?$/) { - return 1; - } elsif ($b_value =~ /^-?\d+(\.\d+)?$/) { - return -1; - } else { - return $a_value cmp $b_value; - } -} - -sub undef_to_blank { - local($this, $x) = @_; - - return (defined($x)) ? $x : ""; -} - -sub en_lex_amr_list { - local($this, $s) = @_; - - $bpe = qr{ \( (?: (?> [^()]+ ) | (??{ $bpe }))* \) }x; # see Perl Cookbook 2nd ed. p. 218 - @en_lex_amr_list = (); - my $amr_s; - my $lex; - my $test; - while ($s =~ /\S/) { - $s =~ s/^\s*//; - if (($s =~ /^\([a-z]\d* .*\)/) - && (($amr_s, $rest) = ($s =~ /^($bpe)(\s.*|)$/))) { - push(@en_lex_amr_list, $amr_s); - $s = $rest; - } elsif (($lex, $rest) = ($s =~ /^\s*(\S+)(\s.*|)$/)) { - push(@en_lex_amr_list, $lex); - $s = $rest; - } else { - print STDERR "en_lex_amr_list can't process: $s\n"; - $s = ""; - } - } - return @en_lex_amr_list; -} - -sub make_sure_dir_exists { - local($this, $dir, $umask) = @_; - - mkdir($dir, $umask) unless -d $dir; - chmod($umask, $dir); -} - -sub pretty_percentage { - local($this, $numerator, $denominator) = @_; - - return ($denominator == 0) ? "n/a" : ($this->round_to_n_decimal_places(100*$numerator/$denominator, 2) . "%"); -} - -sub html_color_nth_line { - local($this, $s, $n, $color, $delimiter) = @_; - - $delimiter = "
        " unless defined($delimiter); - @lines = split($delimiter, $s); - $lines[$n] = "" . $lines[$n] . "" if ($n =~ /^\d+$/) && ($n <= $#lines); - return join($delimiter, @lines); -} - -sub likely_valid_url_format { - local($this, $url) = @_; - - $url = lc $url; - return 0 if $url =~ /\s/; - return 0 if $url =~ /[@]/; - return 1 if $url =~ /^https?:\/\/.+\.[a-z]+(\?.+)?$/; - return 1 if $url =~ /[a-z].+\.(com|edu|gov|net|org)$/; - return 0; -} - -# see also EnglMorph->special_token_type -$common_file_suffixes = "aspx?|bmp|cgi|docx?|gif|html?|jpeg|jpg|mp3|mp4|pdf|php|png|pptx?|stm|svg|txt|xml"; -$common_top_domain_suffixes = "museum|info|cat|com|edu|gov|int|mil|net|org|ar|at|au|be|bg|bi|br|ca|ch|cn|co|cz|de|dk|es|eu|fi|fr|gr|hk|hu|id|ie|il|in|ir|is|it|jp|ke|kr|lu|mg|mx|my|nl|no|nz|ph|pl|pt|ro|rs|ru|rw|se|sg|sk|so|tr|tv|tw|tz|ua|ug|uk|us|za"; - -sub token_is_url_p { - local($this, $token) = @_; - - return 1 if $token =~ /^www(\.[a-z0-9]([-a-z0-9_]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+)+\.([a-z]{2,2}|$common_top_domain_suffixes)(\/(\.{1,3}|[a-z0-9]([-a-z0-9_%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+))*(\/[a-z0-9_][-a-z0-9_]+\.($common_file_suffixes))?$/i; - return 1 if $token =~ /^https?:\/\/([a-z]\.)?([a-z0-9]([-a-z0-9_]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+\.)+[a-z]{2,}(\/(\.{1,3}|([-a-z0-9_%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+))*(\/[a-z_][-a-z0-9_]+\.($common_file_suffixes))?$/i; - return 1 if $token =~ /^[a-z][-a-z0-9_]+(\.[a-z][-a-z0-9_]+)*\.($common_top_domain_suffixes)(\/[a-z0-9]([-a-z0-9_%]|\xC3[\x80-\x96\x98-\xB6\xB8-\xBF])+)*(\/[a-z][-a-z0-9_]+\.($common_file_suffixes))?$/i; - return 0; -} - -sub token_is_email_p { - local($this, $token) = @_; - - return ($token =~ /^[a-z][-a-z0-9_]+(\.[a-z][-a-z0-9_]+)*\@[a-z][-a-z0-9_]+(\.[a-z][-a-z0-9_]+)*\.($common_top_domain_suffixes)$/i); -} - -sub token_is_filename_p { - local($this, $token) = @_; - - return 1 if $token =~ /\.($common_file_suffixes)$/; - return 0; -} - -sub token_is_xml_token_p { - local($this, $token) = @_; - - return ($token =~ /^&(amp|apos|gt|lt|nbsp|quot|&#\d+|&#x[0-9A-F]+);$/i); -} - -sub token_is_handle_p { - local($this, $token) = @_; - - return ($token =~ /^\@[a-z][_a-z0-9]*[a-z0-9]$/i); -} - -sub min { - local($this, @list) = @_; - - my $min = ""; - foreach $item (@list) { - $min = $item if ($item =~ /^-?\d+(?:\.\d*)?$/) && (($min eq "") || ($item < $min)); - } - return $min; -} - -sub max { - local($this, @list) = @_; - - my $max = ""; - foreach $item (@list) { - $max = $item if defined($item) && ($item =~ /^-?\d+(?:\.\d*)?(e[-+]\d+)?$/) && (($max eq "") || ($item > $max)); - } - return $max; -} - -sub split_tok_s_into_tokens { - local($this, $tok_s) = @_; - - @token_list = (); - while (($pre, $link_token, $post) = ($tok_s =~ /^(.*?)\s*(\@?<[^<>]+>\@?)\s*(.*)$/)) { - # generate dummy token for leading blank(s) - if (($tok_s =~ /^\s/) && ($pre eq "") && ($#token_list < 0)) { - push(@token_list, ""); - } else { - push(@token_list, split(/\s+/, $pre)); - } - push(@token_list, $link_token); - $tok_s = $post; - } - push(@token_list, split(/\s+/, $tok_s)); - return @token_list; -} - -sub shuffle { - local($this, @list) = @_; - - @shuffle_list = (); - while (@list) { - $len = $#list + 1; - $rand_position = int(rand($len)); - push(@shuffle_list, $list[$rand_position]); - splice(@list, $rand_position, 1); - } - $s = join(" ", @shuffle_list); - return @shuffle_list; -} - -sub timestamp_to_seconds { - local($this, $timestamp) = @_; - - my $epochtime; - if (($year, $month, $day, $hour, $minute, $second) = ($timestamp =~ 
/^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)$/)) { - $epochtime = timelocal($second, $minute, $hour, $day, $month-1, $year); - } elsif (($year, $month, $day) = ($timestamp =~ /^(\d\d\d\d)-(\d\d)-(\d\d)$/)) { - $epochtime = timelocal(0, 0, 0, $day, $month-1, $year); - } elsif (($year, $month, $day, $hour, $minute, $second, $second_fraction) = ($timestamp =~ /^(\d\d\d\d)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)\.(\d+)$/)) { - $epochtime = timelocal($second, $minute, $hour, $day, $month-1, $year) + ($second_fraction / (10 ** length($second_fraction))); - } else { - $epochtime = 0; - } - return $epochtime; -} - -sub timestamp_diff_in_seconds { - local($this, $timestamp1, $timestamp2) = @_; - - my $epochtime1 = $this->timestamp_to_seconds($timestamp1); - my $epochtime2 = $this->timestamp_to_seconds($timestamp2); - return $epochtime2 - $epochtime1; -} - -sub dirhash { - # maps string to hash of length 4 with characters [a-z2-8] (shorter acc. to $len) - local($this, $s, $len) = @_; - - $hash = 9999; - $mega = 2 ** 20; - $mega1 = $mega - 1; - $giga = 2 ** 26; - foreach $c (split //, $s) { - $hash = $hash*33 + ord($c); - $hash = ($hash >> 20) ^ ($hash & $mega1) if $hash >= $giga; - } - while ($hash >= $mega) { - $hash = ($hash >> 20) ^ ($hash & $mega1); - } - $result = ""; - while ($hash) { - $c = $hash & 31; - $result .= CORE::chr($c + (($c >= 26) ? 24 : 97)); - $hash = $hash >> 5; - } - while (length($result) < 4) { - $result .= "8"; - } - return substr($result, 0, $len) if $len; - return $result; -} - -sub full_path_python { - - foreach $bin_path (split(":", "/usr/sbin:/usr/bin:/bin:/usr/local/bin")) { - return $python if -x ($python = "$bin_path/python"); - } - return "python"; -} - -sub string_contains_unbalanced_paras { - local($this, $s) = @_; - - return 0 unless $s =~ /[(){}\[\]]/; - $rest = $s; - while (($pre,$left,$right,$post) = ($rest =~ /^(.*)([({\[]).*?([\]})])(.*)$/)) { - return 1 unless (($left eq "(") && ($right eq ")")) - || (($left eq "[") && ($right eq "]")) - || (($left eq "{") && ($right eq "}")); - $rest = "$pre$post"; - } - return 1 if $rest =~ /[(){}\[\]]/; - return 0; -} - -sub dequote_string { - local($this, $s) = @_; - - if ($s =~ /^".*"$/) { - $s = substr($s, 1, -1); - $s =~ s/\\"/"/g; - return $s; - } elsif ($s =~ /^'.*'$/) { - $s = substr($s, 1, -1); - $s =~ s/\\'/'/g; - return $s; - } else { - return $s; - } -} - -sub defined_non_space { - local($this, $s) = @_; - - return (defined($s) && ($s =~ /\S/)); -} - -sub default_if_undefined { - local($this, $s, $default) = @_; - - return (defined($s) ? $s : $default); -} - -sub remove_empties { - local($this, @list) = @_; - - @filtered_list = (); - foreach $elem (@list) { - push(@filtered_list, $elem) if defined($elem) && (! ($elem =~ /^\s*$/)) && (! $this->member($elem, @filtered_list)); - } - - return @filtered_list; -} - -# copied from AMRexp.pm -sub new_var_for_surf_amr { - local($this, $amr_s, $s) = @_; - - my $letter = ($s =~ /^[a-z]/i) ? 
lc substr($s, 0, 1) : "x";
- return $letter unless ($amr_s =~ /:\S+\s+\($letter\s+\//)
- || ($amr_s =~ /\s\($letter\s+\//)
- || ($amr_s =~ /^\s*\($letter\s+\//); # )))
- my $i = 2;
- while (($amr_s =~ /:\S+\s+\($letter$i\s+\//)
- || ($amr_s =~ /\s+\($letter$i\s+\//)
- || ($amr_s =~ /^\s*\($letter$i\s+\//)) { # )))
- $i++;
- }
- return "$letter$i";
-}
-
-# copied from AMRexp.pm
-sub new_vars_for_surf_amr {
- local($this, $amr_s, $ref_amr_s) = @_;
-
- my $new_amr_s = "";
- my %new_var_ht = ();
- my $remaining_amr_s = $amr_s;
- my $pre; my $var; my $concept; my $post;
- while (($pre, $var, $concept, $post) = ($remaining_amr_s =~ /^(.*?\()([a-z]\d*)\s+\/\s+([^ ()\s]+)(.*)$/s)) {
- $new_var = $this->new_var_for_surf_amr("$ref_amr_s $new_amr_s", $concept);
- $new_var_ht{$var} = $new_var;
- $new_amr_s .= "$pre$new_var / $concept";
- $remaining_amr_s = $post;
- }
- $new_amr_s .= $remaining_amr_s;
-
- # also update any reentrancy variables
- $remaining_amr_s = $new_amr_s;
- $new_amr_s2 = "";
- while (($pre, $var, $post) = ($remaining_amr_s =~ /^(.*?:\S+\s+)([a-z]\d*)([ ()\s].*)$/s)) {
- $new_var = $new_var_ht{$var} || $var;
- $new_amr_s2 .= "$pre$new_var";
- $remaining_amr_s = $post;
- }
- $new_amr_s2 .= $remaining_amr_s;
-
- return $new_amr_s2;
-}
-
-sub update_inner_span_for_id {
- local($this, $html_line, $slot, $new_value) = @_;
- # e.g. slot: workset-language-name value: Uyghur
-
- if (defined($new_value)
- && (($pre, $old_value, $post) = ($html_line =~ /^(.*<span\b[^<>]* id="$slot"[^<>]*>)([^<>]*)(<\/span\b[^<>]*>.*)$/i))
- && ($old_value ne $new_value)) {
- # print STDERR "Inserting new $slot $old_value -> $new_value\n";
- return $pre . $new_value . $post . "\n";
- } else {
- # no change
- return $html_line;
- }
-}
-
-sub levenshtein_distance {
- local($this, $s1, $s2) = @_;
-
- my $i;
- my $j;
- my @distance;
- my @s1_chars = $utf8->split_into_utf8_characters($s1, "return only chars", *empty_ht);
- my $s1_length = $#s1_chars + 1;
- my @s2_chars = $utf8->split_into_utf8_characters($s2, "return only chars", *empty_ht);
- my $s2_length = $#s2_chars + 1;
- for ($i = 0; $i <= $s1_length; $i++) {
- $distance[$i][0] = $i;
- }
- for ($j = 1; $j <= $s2_length; $j++) {
- $distance[0][$j] = $j;
- }
- for ($j = 1; $j <= $s2_length; $j++) {
- for ($i = 1; $i <= $s1_length; $i++) {
- my $substitution_cost = ($s1_chars[$i-1] eq $s2_chars[$j-1]) ? 
0 : 1; - $distance[$i][$j] = $this->min($distance[$i-1][$j] + 1, - $distance[$i][$j-1] + 1, - $distance[$i-1][$j-1] + $substitution_cost); - # print STDERR "SC($i,$j) = $substitution_cost\n"; - # $d = $distance[$i][$j]; - # print STDERR "D($i,$j) = $d\n"; - } - } - return $distance[$s1_length][$s2_length]; -} - -sub markup_parts_of_string_in_common_with_ref { - local($this, $s, $ref, $start_markup, $end_markup, $deletion_markup, $verbose) = @_; - - # \x01 temporary start-markup - # \x02 temporary end-markup - # \x03 temporary deletion-markup - $s =~ s/[\x01-\x03]//g; - $ref =~ s/[\x01-\x03]//g; - my $i; - my $j; - my @distance; - my @s_chars = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht); - my $s_length = $#s_chars + 1; - my @ref_chars = $utf8->split_into_utf8_characters($ref, "return only chars", *empty_ht); - my $ref_length = $#ref_chars + 1; - $distance[0][0] = 0; - $del_ins_subst_op[0][0] = "-"; - for ($i = 1; $i <= $s_length; $i++) { - $distance[$i][0] = $i; - $del_ins_subst_op[$i][0] = 0; - } - for ($j = 1; $j <= $ref_length; $j++) { - $distance[0][$j] = $j; - $del_ins_subst_op[0][$j] = 1; - } - for ($j = 1; $j <= $ref_length; $j++) { - for ($i = 1; $i <= $s_length; $i++) { - my $substitution_cost = (($s_chars[$i-1] eq $ref_chars[$j-1])) ? 0 : 1; - my @del_ins_subst_list = ($distance[$i-1][$j] + 1, - $distance[$i][$j-1] + 1, - $distance[$i-1][$j-1] + $substitution_cost); - my $min = $this->min(@del_ins_subst_list); - my $del_ins_subst_position = $this->position($min, @del_ins_subst_list); - $distance[$i][$j] = $min; - $del_ins_subst_op[$i][$j] = $del_ins_subst_position; - } - } - $d = $distance[$s_length][$ref_length]; - print STDERR "markup_parts_of_string_in_common_with_ref LD($s,$ref) = $d\n" if $verbose; - for ($j = 0; $j <= $ref_length; $j++) { - for ($i = 0; $i <= $s_length; $i++) { - $d = $distance[$i][$j]; - $op = $del_ins_subst_op[$i][$j]; - print STDERR "$d($op) " if $verbose; - } - print STDERR "\n" if $verbose; - } - my $result = ""; - my $i_end = $s_length; - my $j_end = $ref_length; - my $cost = $distance[$i_end][$j_end]; - $i = $i_end; - $j = $j_end; - while (1) { - $result2 = $result; - $result2 =~ s/\x01/$start_markup/g; - $result2 =~ s/\x02/$end_markup/g; - $result2 =~ s/\x03/$deletion_markup/g; - print STDERR "i:$i i-end:$i_end j:$j j-end:$j_end r: $result2\n" if $verbose; - # matching characters - if ($i && $j && ($del_ins_subst_op[$i][$j] == 2) && ($distance[$i-1][$j-1] == $distance[$i][$j])) { - $i--; - $j--; - } else { - # previously matching characters - if (($i < $i_end) && ($j < $j_end)) { - my $sub_s = join("", @s_chars[$i .. $i_end-1]); - $result = "\x01" . $sub_s . "\x02" . $result; - } - # character substitution - if ($i && $j && ($del_ins_subst_op[$i][$j] == 2)) { - $i--; - $j--; - $result = $s_chars[$i] . $result; - } elsif ($i && ($del_ins_subst_op[$i][$j] == 0)) { - $i--; - $result = $s_chars[$i] . $result; - } elsif ($j && ($del_ins_subst_op[$i][$j] == 1)) { - $j--; - $result = "\x03" . 
$result; - } else { - last; - } - $i_end = $i; - $j_end = $j; - } - } - $result2 = $result; - $result2 =~ s/\x01/$start_markup/g; - $result2 =~ s/\x02/$end_markup/g; - $result2 =~ s/\x03/$deletion_markup/g; - print STDERR "i:$i i-end:$i_end j:$j j-end:$j_end r: $result2 *\n" if $verbose; - $result =~ s/(\x02)\x03+(\x01)/$1$deletion_markup$2/g; - $result =~ s/(\x02)\x03+$/$1$deletion_markup/g; - $result =~ s/^\x03+(\x01)/$deletion_markup$1/g; - $result =~ s/\x03//g; - $result =~ s/\x01/$start_markup/g; - $result =~ s/\x02/$end_markup/g; - return $result; -} - -sub env_https { - my $https = $ENV{'HTTPS'}; - return 1 if $https && ($https eq "on"); - - my $http_via = $ENV{'HTTP_VIA'}; - return 1 if $http_via && ($http_via =~ /\bHTTPS\b.* \d+(?:\.\d+){3,}:443\b/); # tmp for beta.isi.edu - - return 0; -} - -sub env_http_host { - return $ENV{'HTTP_HOST'} || ""; -} - -sub env_script_filename { - return $ENV{'SCRIPT_FILENAME'} || ""; -} - -sub cgi_mt_app_root_dir { - local($this, $target) = @_; - my $s; - if ($target =~ /filename/i) { - $s = $ENV{'SCRIPT_FILENAME'} || ""; - } else { - $s = $ENV{'SCRIPT_NAME'} || ""; - } - return "" unless $s; - return $d if ($d) = ($s =~ /^(.*?\/(?:amr-editor|chinese-room-editor|utools|romanizer\/version\/[-.a-z0-9]+|romanizer))\//); - return $d if ($d) = ($s =~ /^(.*)\/(?:bin|src|scripts?)\/[^\/]*$/); - return $d if ($d) = ($s =~ /^(.*)\/[^\/]*$/); - return ""; -} - -sub parent_dir { - local($this, $dir) = @_; - - $dir =~ s/\/[^\/]+\/?$//; - return $dir || "/"; -} - -sub span_start { - local($this, $span, $default) = @_; - - $default = "" unless defined($default); - return (($start) = ($span =~ /^(\d+)-\d+$/)) ? $start : $default; -} - -sub span_end { - local($this, $span, $default) = @_; - - $default = "" unless defined($default); - return (($end) = ($span =~ /^\d+-(\d+)$/)) ? $end : $default; -} - -sub oct_mode { - local($this, $filename) = @_; - - @stat = stat($filename); - return "" unless @stat; - $mode = $stat[2]; - $oct_mode = sprintf("%04o", $mode & 07777); - return $oct_mode; -} - -sub csv_to_list { - local($this, $s, $control_string) = @_; - # Allow quoted string such as "Wait\, what?" as element with escaped comma inside. 
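- # Illustrative call (hypothetical input): csv_to_list('a,"b\,c", d ', "strip")
- # returns ("a", "b\,c", "d"). Control-string flags: "strip" trims surrounding
- # whitespace, "no-empty" drops blank fields, and "simple-comma-ok" additionally
- # allows unescaped commas inside quoted fields.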
-
- $control_string = "" unless defined($control_string);
- $strip_p = ($control_string =~ /\bstrip\b/);
- $allow_simple_commas_in_quote = ($control_string =~ /\bsimple-comma-ok\b/);
- $ignore_empty_elem_p = ($control_string =~ /\bno-empty\b/);
- @cvs_list = ();
- while ($s ne "") {
- if ((($elem, $rest) = ($s =~ /^"((?:\\[,\"]|[^,\"][\x80-\xBF]*)*)"(,.*|)$/))
- || ($allow_simple_commas_in_quote
- && (($elem, $rest) = ($s =~ /^"((?:\\[,\"]|[^\"][\x80-\xBF]*)*)"(,.*|)$/)))
- || (($elem, $rest) = ($s =~ /^([^,]*)(,.*|\s*)$/))
- || (($elem, $rest) = ($s =~ /^(.*)()$/))) {
- if ($strip_p) {
- $elem =~ s/^\s*//;
- $elem =~ s/\s*$//;
- }
- push(@cvs_list, $elem) unless $ignore_empty_elem_p && ($elem eq "");
- $rest =~ s/^,//;
- $s = $rest;
- } else {
- print STDERR "Error in csv_to_list processing $s\n";
- last;
- }
- }
- return @cvs_list;
-}
-
-sub kl_divergence {
- local($this, $distribution_id, $gold_distribution_id, *ht, $smoothing) = @_;
-
- my $total_count = $ht{DISTRIBUTION_TOTAL_COUNT}->{$distribution_id};
- my $total_gold_count = $ht{DISTRIBUTION_TOTAL_COUNT}->{$gold_distribution_id};
- return unless $total_count && $total_gold_count;
-
- my @values = keys %{$ht{DISTRIBUTION_VALUE_COUNT}->{$gold_distribution_id}};
- my $n_values = $#values + 1;
-
- my $min_total_count = $this->min($total_count, $total_gold_count);
- $smoothing = 1 - (10000/((100+$min_total_count)**2)) unless defined($smoothing);
- return unless $smoothing;
- my $smoothed_n_values = $smoothing * $n_values;
- my $divergence = 0;
- foreach $value (@values) {
- my $count = $ht{DISTRIBUTION_VALUE_COUNT}->{$distribution_id}->{$value} || 0;
- my $gold_count = $ht{DISTRIBUTION_VALUE_COUNT}->{$gold_distribution_id}->{$value};
- my $p = ($count + $smoothing) / ($total_count + $smoothed_n_values);
- my $q = ($gold_count + $smoothing) / ($total_gold_count + $smoothed_n_values);
- if ($p == 0) {
- # no impact on divergence
- } elsif ($q) {
- my $incr = $p * CORE::log($p/$q);
- $divergence += $incr;
- my $incr2 = $this->round_to_n_decimal_places($incr, 5);
- my $p2 = $this->round_to_n_decimal_places($p, 5);
- my $q2 = $this->round_to_n_decimal_places($q, 5);
- $incr2 = "+" . $incr2 if $incr > 0;
- $log = " value: $value count: $count gold_count: $gold_count p: $p2 q: $q2 $incr2\n";
- $ht{KL_DIVERGENCE_LOG}->{$distribution_id}->{$gold_distribution_id}->{$value} = $log;
- $ht{KL_DIVERGENCE_INCR}->{$distribution_id}->{$gold_distribution_id}->{$value} = $incr;
- } else {
- $divergence += 999;
- }
- }
- return $divergence;
-}
-
-sub read_ISO_8859_named_entities {
- local($this, *ht, $filename, $verbose) = @_;
- # e.g. from /nfs/isd/ulf/arabic/data/ISO-8859-1-HTML-named-entities.txt
- # Each line defines one named entity, e.g.:
- # <!ENTITY nbsp CDATA "&#160;" -- no-break space -->
-
- my $n = 0;
- if (open(IN, $filename)) {
- while (<IN>) {
- s/^\xEF\xBB\xBF//;
- if (($name, $dec_unicode) = ($_ =~ /^<!ENTITY\s+(\S+)\s+CDATA\s+"&#(\d+);"/)) {
- $ht{HTML_ENTITY_NAME_TO_DECUNICODE}->{$name} = $dec_unicode;
- $ht{HTML_ENTITY_DECUNICODE_TO_NAME}->{$dec_unicode} = $name;
- $ht{HTML_ENTITY_NAME_TO_UTF8}->{$name} = $utf8->unicode2string($dec_unicode);
- $n++;
- # print STDERR "read_ISO_8859_named_entities $name $dec_unicode .\n" if $name =~ /dash/;
- }
- }
- close(IN);
- print STDERR "Loaded $n entries from $filename\n" if $verbose;
- } else {
- print STDERR "Could not open $filename\n" if $verbose;
- }
-}
-
-sub neg {
- local($this, $x) = @_;
-
- # robust
- return (defined($x) && ($x =~ /^-?\d+(?:\.\d+)?$/)) ? (- $x) : $x;
-}
-
-sub read_ttable_gloss_data {
- local($this, $filename, $lang_code, *ht, $direction) = @_;
- # e.g. 
/nfs/isd/ulf/croom/oov-lanpairs/som-eng/som-eng-ttable-glosses.txt - - $direction = "f to e" unless defined($direction); - if (open(IN, $filename)) { - while () { - if (($headword, $gloss) = ($_ =~ /^(.*?)\t(.*?)\s*$/)) { - if ($direction eq "e to f") { - $ht{TTABLE_E_GLOSS}->{$lang_code}->{$headword} = $gloss; - } else { - $ht{TTABLE_F_GLOSS}->{$lang_code}->{$headword} = $gloss; - } - } - } - close(IN); - } -} - -sub format_gloss_for_tooltop { - local($this, $gloss) = @_; - - $gloss =~ s/^\s*/\t/; - $gloss =~ s/\s*$//; - $gloss =~ s/ / /g; - $gloss =~ s/\t/ /g; - return $gloss; -} - -sub obsolete_tooltip { - local($this, $s, $lang_code, *ht) = @_; - - return $gloss if defined($gloss = $ht{TTABLE_F_GLOSS}->{$lang_code}->{$s}); - @e_s = sort { $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$b} - <=> $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$a} } - keys %{$ht{T_TABLE_F_E_C}->{$lang_code}->{$s}}; - if (@e_s) { - $e = shift @e_s; - $count = $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$e}; - $min_count = $this->max($count * 0.01, 1.0); - $count =~ s/(\.\d\d)\d*$/$1/; - $result = "$s: $e ($count)"; - $n = 1; - while (@e_s) { - $e = shift @e_s; - $count = $ht{T_TABLE_F_E_C}->{$lang_code}->{$s}->{$e}; - last if $count < $min_count; - $count =~ s/(\.\d\d)\d*$/$1/; - $result .= " $e ($count)"; - $n++; - last if $n >= 10; - } - $ht{TTABLE_F_GLOSS}->{$lang_code}->{$s} = $result; - return $result; - } else { - return ""; - } -} - -sub markup_html_line_init { - local($this, $s, *ht, $id) = @_; - - my @chars = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht); - $ht{S}->{$id} = $s; -} - -sub markup_html_line_regex { - local($this, $id, *ht, $regex, $m_slot, $m_value, *LOG) = @_; - - unless ($regex eq "") { - my $s = $ht{S}->{$id}; - my $current_pos = 0; - while (($pre, $match_s, $post) = ($s =~ /^(.*?)($regex)(.*)$/)) { - $current_pos += $utf8->length_in_utf8_chars($pre); - my $match_len = $utf8->length_in_utf8_chars($match_s); - $ht{START}->{$id}->{$current_pos}->{$m_slot}->{$m_value} = 1; - $ht{STOP}->{$id}->{($current_pos+$match_len)}->{$m_slot}->{$m_value} = 1; - $current_pos += $match_len; - $s = $post; - } - } -} - -sub html_markup_line { - local($this, $id, *ht, *LOG) = @_; - - my @titles = (); - my @colors = (); - my @text_decorations = (); - - my $s = $ht{S}->{$id}; - # print LOG "html_markup_line $id: $s\n"; - my @chars = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht); - my $markedup_s = ""; - - my $new_title = ""; - my $new_color = ""; - my $new_text_decoration = ""; - my $n_spans = 0; - my $i; - foreach $i ((0 .. 
($#chars+1))) {
- my $stop_span_p = 0;
- foreach $m_slot (keys %{$ht{STOP}->{$id}->{$i}}) {
- foreach $m_value (keys %{$ht{STOP}->{$id}->{$i}->{$m_slot}}) {
- if ($m_slot eq "title") {
- my $last_position = $this->last_position($m_value, @titles);
- splice(@titles, $last_position, 1) if $last_position >= 0;
- $stop_span_p = 1;
- } elsif ($m_slot eq "color") {
- my $last_position = $this->last_position($m_value, @colors);
- splice(@colors, $last_position, 1) if $last_position >= 0;
- $stop_span_p = 1;
- } elsif ($m_slot eq "text-decoration") {
- my $last_position = $this->last_position($m_value, @text_decorations);
- splice(@text_decorations, $last_position, 1) if $last_position >= 0;
- $stop_span_p = 1;
- }
- }
- }
- if ($stop_span_p) {
- $markedup_s .= "</span>";
- $n_spans--;
- }
- my $start_span_p = 0;
- foreach $m_slot (keys %{$ht{START}->{$id}->{$i}}) {
- foreach $m_value (keys %{$ht{START}->{$id}->{$i}->{$m_slot}}) {
- if ($m_slot eq "title") {
- push(@titles, $m_value);
- $start_span_p = 1;
- } elsif ($m_slot eq "color") {
- push(@colors, $m_value);
- $start_span_p = 1;
- } elsif ($m_slot eq "text-decoration") {
- push(@text_decorations, $m_value);
- $start_span_p = 1;
- }
- }
- }
- if ($stop_span_p || $start_span_p) {
- my $new_title = (@titles) ? $titles[$#titles] : "";
- my $new_color = (@colors) ? $colors[$#colors] : "";
- my $new_text_decoration = (@text_decorations) ? $text_decorations[$#text_decorations] : "";
- if ($new_title || $new_color || $new_text_decoration) {
- my $args = "";
- if ($new_title) {
- $g_title = $this->guard_html_quote($new_title);
- $args .= " title=\"$g_title\"";
- }
- if ($new_color || $new_text_decoration) {
- $g_color = $this->guard_html_quote($new_color);
- $g_text_decoration = $this->guard_html_quote($new_text_decoration);
- $color_clause = ($new_color) ? "color:$g_color;" : "";
- $text_decoration_clause = ($new_text_decoration) ? "text-decoration:$g_text_decoration;" : "";
- $text_decoration_clause =~ s/text-decoration:(border-bottom:)/$1/g;
- $args .= " style=\"$color_clause$text_decoration_clause\"";
- }
- if ($n_spans) {
- $markedup_s .= "</span>";
- $n_spans--;
- }
- $markedup_s .= "<span$args>";
- $n_spans++;
- }
- }
- $markedup_s .= $chars[$i] if $i <= $#chars;
- }
- print LOG "Error in html_markup_line $id final no. of open spans: $n_spans\n" if $n_spans && $tokenization_log_verbose;
- return $markedup_s;
-}
-
-sub offset_adjustment {
- local($this, $g, $s, $offset, $snt_id, *ht, *LOG, $control) = @_;
- # s(tring) e.g. "can't"
- # g(old string) e.g. "can not"
- # Typically when s is a slight variation of g (e.g. with additional tokenization spaces in s)
- # returns mapping 0->0, 1->1, 2->2, 3->3, 6->4, 7->5
-
- $control = "" unless defined($control);
- my $verbose = ($control =~ /\bverbose\b/);
- my $s_offset = 0;
- my $g_offset = 0;
- my @s_chars = $utf8->split_into_utf8_characters($s, "return only chars", *ht);
- my @g_chars = $utf8->split_into_utf8_characters($g, "return only chars", *ht);
- my $s_len = $#s_chars + 1;
- my $g_len = $#g_chars + 1;
- $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset} = $g_offset;
- $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{($s_offset+$s_len)} = $g_offset+$g_len;
-
- while (($s_offset < $s_len) && ($g_offset < $g_len)) {
- if ($s_chars[$s_offset] eq $g_chars[$g_offset]) {
- $s_offset++;
- $g_offset++;
- $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset} = $g_offset;
- } else {
- my $best_gm = 0;
- my $best_sm = 0;
- my $best_match_len = 0;
- foreach $max_m ((1 .. 4)) {
- foreach $sm ((0 .. 
$max_m)) { - $max_match_len = 0; - while ((($s_index = $s_offset+$sm+$max_match_len) < $s_len) - && (($g_index = $g_offset+$max_m+$max_match_len) < $g_len)) { - if ($s_chars[$s_index] eq $g_chars[$g_index]) { - $max_match_len++; - } else { - last; - } - } - if ($max_match_len > $best_match_len) { - $best_match_len = $max_match_len; - $best_sm = $sm; - $best_gm = $max_m; - } - } - foreach $gm ((0 .. $max_m)) { - $max_match_len = 0; - while ((($s_index = $s_offset+$max_m+$max_match_len) < $s_len) - && (($g_index = $g_offset+$gm+$max_match_len) < $g_len)) { - if ($s_chars[$s_index] eq $g_chars[$g_index]) { - $max_match_len++; - } else { - last; - } - } - if ($max_match_len > $best_match_len) { - $best_match_len = $max_match_len; - $best_sm = $max_m; - $best_gm = $gm; - } - } - } - if ($best_match_len) { - $s_offset += $best_sm; - $g_offset += $best_gm; - $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset} = $g_offset; - } else { - last; - } - } - } - if ($verbose) { - foreach $s_offset (sort { $a <=> $b } - keys %{$ht{OFFSET_MAP}->{$snt_id}->{$offset}}) { - my $g_offset = $ht{OFFSET_MAP}->{$snt_id}->{$offset}->{$s_offset}; - print LOG " OFFSET_MAP $snt_id.$offset $s/$g $s_offset -> $g_offset\n" if $tokenization_log_verbose; - } - } -} - -sub length_in_utf8_chars { - local($this, $s) = @_; - - $s =~ s/[\x80-\xBF]//g; - $s =~ s/[\x00-\x7F\xC0-\xFF]/c/g; - return length($s); -} - -sub split_into_utf8_characters { - local($this, $text) = @_; - # "return only chars; return trailing whitespaces" - - @characters = (); - while (($char, $rest) = ($text =~ /^(.[\x80-\xBF]*)(.*)$/)) { - push(@characters, $char); - $text = $rest; - } - return @characters; -} - -sub first_char_of_string { - local($this, $s) = @_; - - $s =~ s/^(.[\x80-\xBF]*).*$/$1/; - return $s; -} - -sub last_char_of_string { - local($this, $s) = @_; - - $s =~ s/^.*([^\x80-\xBF][\x80-\xBF]*)$/$1/; - return $s; -} - -sub first_n_chars_of_string { - local($this, $s, $n) = @_; - - $s =~ s/^((?:.[\x80-\xBF]*){$n,$n}).*$/$1/; - return $s; -} - -sub last_n_chars_of_string { - local($this, $s, $n) = @_; - - $s =~ s/^.*((?:[^\x80-\xBF][\x80-\xBF]*){$n,$n})$/$1/; - return $s; -} - - -1; diff --git a/spaces/yangogo/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/yangogo/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/helpers/observeDrag.ts b/spaces/yderre-aubay/midi-player-demo/src/main/helpers/observeDrag.ts deleted file mode 100644 index 6afa838f7dfd54715d070bc02fc0bdb23fa5debf..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/helpers/observeDrag.ts +++ /dev/null @@ -1,70 +0,0 @@ -import { IPoint, pointSub } from "../../common/geometry" -import { getClientPos } from "./mouseEvent" - -export interface DragHandler { - onMouseMove?: (e: MouseEvent) => void - 
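// onMouseUp always fires on release; onClick fires in addition only when no
- // mousemove occurred between registration and release (a click, not a drag).
-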
onMouseUp?: (e: MouseEvent) => void - onClick?: (e: MouseEvent) => void -} - -export const observeDrag = ({ - onMouseMove, - onMouseUp, - onClick, -}: DragHandler) => { - let isMoved = false - - const onGlobalMouseMove = (e: MouseEvent) => { - isMoved = true - onMouseMove?.(e) - } - - const onGlobalMouseUp = (e: MouseEvent) => { - onMouseUp?.(e) - - if (!isMoved) { - onClick?.(e) - } - - document.removeEventListener("mousemove", onGlobalMouseMove) - document.removeEventListener("mouseup", onGlobalMouseUp) - } - - document.addEventListener("mousemove", onGlobalMouseMove) - document.addEventListener("mouseup", onGlobalMouseUp) -} - -export interface DragHandler2 { - onMouseMove?: (e: MouseEvent, delta: IPoint) => void - onMouseUp?: (e: MouseEvent) => void - onClick?: (e: MouseEvent) => void -} - -export const observeDrag2 = ( - e: MouseEvent, - { onMouseMove, onMouseUp, onClick }: DragHandler2, -) => { - let isMoved = false - const startClientPos = getClientPos(e) - - const onGlobalMouseMove = (e: MouseEvent) => { - isMoved = true - const clientPos = getClientPos(e) - const delta = pointSub(clientPos, startClientPos) - onMouseMove?.(e, delta) - } - - const onGlobalMouseUp = (e: MouseEvent) => { - onMouseUp?.(e) - - if (!isMoved) { - onClick?.(e) - } - - document.removeEventListener("mousemove", onGlobalMouseMove) - document.removeEventListener("mouseup", onGlobalMouseUp) - } - - document.addEventListener("mousemove", onGlobalMouseMove) - document.addEventListener("mouseup", onGlobalMouseUp) -} diff --git a/spaces/yeqingmei123/face-test/e4e/models/encoders/model_irse.py b/spaces/yeqingmei123/face-test/e4e/models/encoders/model_irse.py deleted file mode 100644 index 6a94d67542f961ff6533f0335cf4cb0fa54024fb..0000000000000000000000000000000000000000 --- a/spaces/yeqingmei123/face-test/e4e/models/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from e4e.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, 
mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/yerfor/SyntaSpeech/inference/tts/base_tts_infer.py b/spaces/yerfor/SyntaSpeech/inference/tts/base_tts_infer.py deleted file mode 100644 index c11388e15010d836ff125c262c35d85ea4024d4f..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/inference/tts/base_tts_infer.py +++ /dev/null @@ -1,120 +0,0 @@ -import os - -import torch - -from modules.vocoder.hifigan.hifigan import HifiGanGenerator -from tasks.tts.dataset_utils import FastSpeechWordDataset -from tasks.tts.tts_utils import load_data_preprocessor -from utils.commons.ckpt_utils import load_ckpt -from utils.commons.hparams import set_hparams - - -class BaseTTSInfer: - def __init__(self, hparams, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.hparams = hparams - self.device = device - self.data_dir = hparams['binary_data_dir'] - self.preprocessor, self.preprocess_args = load_data_preprocessor() - self.ph_encoder, self.word_encoder = self.preprocessor.load_dict(self.data_dir) - self.spk_map = self.preprocessor.load_spk_map(self.data_dir) - self.ds_cls = FastSpeechWordDataset - self.model = self.build_model() - self.model.eval() - self.model.to(self.device) - self.vocoder = self.build_vocoder() - self.vocoder.eval() - self.vocoder.to(self.device) - - def build_model(self): - raise NotImplementedError - - def forward_model(self, inp): - raise NotImplementedError - - def build_vocoder(self): - base_dir = self.hparams['vocoder_ckpt'] - config_path = f'{base_dir}/config.yaml' - config = set_hparams(config_path, global_hparams=False) - vocoder = HifiGanGenerator(config) - load_ckpt(vocoder, base_dir, 'model_gen') - return vocoder - - def run_vocoder(self, c): - c = c.transpose(2, 1) - y = self.vocoder(c)[:, 0] - return y - - def preprocess_input(self, inp): - """ - - :param inp: {'text': str, 'item_name': (str, optional), 'spk_name': (str, optional)} - :return: - """ - preprocessor, preprocess_args = self.preprocessor, self.preprocess_args - text_raw = inp['text'] - item_name = inp.get('item_name', '') - spk_name = inp.get('spk_name', '') - ph, txt, word, ph2word, ph_gb_word = preprocessor.txt_to_ph( - preprocessor.txt_processor, text_raw, preprocess_args) - word_token = self.word_encoder.encode(word) - ph_token = self.ph_encoder.encode(ph) - spk_id = self.spk_map[spk_name] - item = {'item_name': item_name, 'text': txt, 'ph': ph, 'spk_id': spk_id, - 'ph_token': ph_token, 'word_token': word_token, 'ph2word': ph2word, - 'ph_words':ph_gb_word, 'words': word} - item['ph_len'] = len(item['ph_token']) - return item - - def input_to_batch(self, item): - 
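# Wraps one preprocessed item into a batch of size 1: token/id fields become
- # LongTensors on self.device (note word_lengths is taken from txt_tokens here).
-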
item_names = [item['item_name']] - text = [item['text']] - ph = [item['ph']] - txt_tokens = torch.LongTensor(item['ph_token'])[None, :].to(self.device) - txt_lengths = torch.LongTensor([txt_tokens.shape[1]]).to(self.device) - word_tokens = torch.LongTensor(item['word_token'])[None, :].to(self.device) - word_lengths = torch.LongTensor([txt_tokens.shape[1]]).to(self.device) - ph2word = torch.LongTensor(item['ph2word'])[None, :].to(self.device) - spk_ids = torch.LongTensor(item['spk_id'])[None, :].to(self.device) - batch = { - 'item_name': item_names, - 'text': text, - 'ph': ph, - 'txt_tokens': txt_tokens, - 'txt_lengths': txt_lengths, - 'word_tokens': word_tokens, - 'word_lengths': word_lengths, - 'ph2word': ph2word, - 'spk_ids': spk_ids, - } - return batch - - def postprocess_output(self, output): - return output - - def infer_once(self, inp): - inp = self.preprocess_input(inp) - output = self.forward_model(inp) - output = self.postprocess_output(output) - return output - - @classmethod - def example_run(cls): - from utils.commons.hparams import set_hparams - from utils.commons.hparams import hparams as hp - from utils.audio.io import save_wav - - set_hparams() - if hp['ds_name'] in ['lj', 'libritts']: - inp = { - 'text': 'the invention of movable metal letters in the middle of the fifteenth century may justly be considered as the invention of the art of printing.' - } - elif hp['ds_name'] in ['biaobei']: - inp = { - 'text': '如果我想你三遍,天上乌云就散一片。' - } - infer_ins = cls(hp) - out = infer_ins.infer_once(inp) - os.makedirs('infer_out', exist_ok=True) - save_wav(out, f'infer_out/example_out.wav', hp['audio_sample_rate']) diff --git a/spaces/yerfor/SyntaSpeech/modules/commons/rnn.py b/spaces/yerfor/SyntaSpeech/modules/commons/rnn.py deleted file mode 100644 index 205c2c76b8fda2de920bc59228a5eec0a20119a9..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/modules/commons/rnn.py +++ /dev/null @@ -1,261 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - - -class PreNet(nn.Module): - def __init__(self, in_dims, fc1_dims=256, fc2_dims=128, dropout=0.5): - super().__init__() - self.fc1 = nn.Linear(in_dims, fc1_dims) - self.fc2 = nn.Linear(fc1_dims, fc2_dims) - self.p = dropout - - def forward(self, x): - x = self.fc1(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=self.training) - x = self.fc2(x) - x = F.relu(x) - x = F.dropout(x, self.p, training=self.training) - return x - - -class HighwayNetwork(nn.Module): - def __init__(self, size): - super().__init__() - self.W1 = nn.Linear(size, size) - self.W2 = nn.Linear(size, size) - self.W1.bias.data.fill_(0.) - - def forward(self, x): - x1 = self.W1(x) - x2 = self.W2(x) - g = torch.sigmoid(x2) - y = g * F.relu(x1) + (1. 
- g) * x - return y - - -class BatchNormConv(nn.Module): - def __init__(self, in_channels, out_channels, kernel, relu=True): - super().__init__() - self.conv = nn.Conv1d(in_channels, out_channels, kernel, stride=1, padding=kernel // 2, bias=False) - self.bnorm = nn.BatchNorm1d(out_channels) - self.relu = relu - - def forward(self, x): - x = self.conv(x) - x = F.relu(x) if self.relu is True else x - return self.bnorm(x) - - -class ConvNorm(torch.nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, - padding=None, dilation=1, bias=True, w_init_gain='linear'): - super(ConvNorm, self).__init__() - if padding is None: - assert (kernel_size % 2 == 1) - padding = int(dilation * (kernel_size - 1) / 2) - - self.conv = torch.nn.Conv1d(in_channels, out_channels, - kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, - bias=bias) - - torch.nn.init.xavier_uniform_( - self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, signal): - conv_signal = self.conv(signal) - return conv_signal - - -class CBHG(nn.Module): - def __init__(self, K, in_channels, channels, proj_channels, num_highways): - super().__init__() - - # List of all rnns to call `flatten_parameters()` on - self._to_flatten = [] - - self.bank_kernels = [i for i in range(1, K + 1)] - self.conv1d_bank = nn.ModuleList() - for k in self.bank_kernels: - conv = BatchNormConv(in_channels, channels, k) - self.conv1d_bank.append(conv) - - self.maxpool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1) - - self.conv_project1 = BatchNormConv(len(self.bank_kernels) * channels, proj_channels[0], 3) - self.conv_project2 = BatchNormConv(proj_channels[0], proj_channels[1], 3, relu=False) - - # Fix the highway input if necessary - if proj_channels[-1] != channels: - self.highway_mismatch = True - self.pre_highway = nn.Linear(proj_channels[-1], channels, bias=False) - else: - self.highway_mismatch = False - - self.highways = nn.ModuleList() - for i in range(num_highways): - hn = HighwayNetwork(channels) - self.highways.append(hn) - - self.rnn = nn.GRU(channels, channels, batch_first=True, bidirectional=True) - self._to_flatten.append(self.rnn) - - # Avoid fragmentation of RNN parameters and associated warning - self._flatten_parameters() - - def forward(self, x): - # Although we `_flatten_parameters()` on init, when using DataParallel - # the model gets replicated, making it no longer guaranteed that the - # weights are contiguous in GPU memory. Hence, we must call it again - self._flatten_parameters() - - # Save these for later - residual = x - seq_len = x.size(-1) - conv_bank = [] - - # Convolution Bank - for conv in self.conv1d_bank: - c = conv(x) # Convolution - conv_bank.append(c[:, :, :seq_len]) - - # Stack along the channel axis - conv_bank = torch.cat(conv_bank, dim=1) - - # dump the last padding to fit residual - x = self.maxpool(conv_bank)[:, :, :seq_len] - - # Conv1d projections - x = self.conv_project1(x) - x = self.conv_project2(x) - - # Residual Connect - x = x + residual - - # Through the highways - x = x.transpose(1, 2) - if self.highway_mismatch is True: - x = self.pre_highway(x) - for h in self.highways: - x = h(x) - - # And then the RNN - x, _ = self.rnn(x) - return x - - def _flatten_parameters(self): - """Calls `flatten_parameters` on all the rnns used by the WaveRNN. 
Used - to improve efficiency and avoid PyTorch yelling at us.""" - [m.flatten_parameters() for m in self._to_flatten] - - -class TacotronEncoder(nn.Module): - def __init__(self, embed_dims, num_chars, cbhg_channels, K, num_highways, dropout): - super().__init__() - self.embedding = nn.Embedding(num_chars, embed_dims) - self.pre_net = PreNet(embed_dims, embed_dims, embed_dims, dropout=dropout) - self.cbhg = CBHG(K=K, in_channels=cbhg_channels, channels=cbhg_channels, - proj_channels=[cbhg_channels, cbhg_channels], - num_highways=num_highways) - self.proj_out = nn.Linear(cbhg_channels * 2, cbhg_channels) - - def forward(self, x): - x = self.embedding(x) - x = self.pre_net(x) - x.transpose_(1, 2) - x = self.cbhg(x) - x = self.proj_out(x) - return x - - -class RNNEncoder(nn.Module): - def __init__(self, num_chars, embedding_dim, n_convolutions=3, kernel_size=5): - super(RNNEncoder, self).__init__() - self.embedding = nn.Embedding(num_chars, embedding_dim, padding_idx=0) - convolutions = [] - for _ in range(n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(embedding_dim, - embedding_dim, - kernel_size=kernel_size, stride=1, - padding=int((kernel_size - 1) / 2), - dilation=1, w_init_gain='relu'), - nn.BatchNorm1d(embedding_dim)) - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(embedding_dim, int(embedding_dim / 2), 1, - batch_first=True, bidirectional=True) - - def forward(self, x): - input_lengths = (x > 0).sum(-1) - input_lengths = input_lengths.cpu().numpy() - - x = self.embedding(x) - x = x.transpose(1, 2) # [B, H, T] - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) + x - x = x.transpose(1, 2) # [B, T, H] - - # pytorch tensor are not reversible, hence the conversion - x = nn.utils.rnn.pack_padded_sequence(x, input_lengths, batch_first=True, enforce_sorted=False) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True) - - return outputs - - -class DecoderRNN(torch.nn.Module): - def __init__(self, hidden_size, decoder_rnn_dim, dropout): - super(DecoderRNN, self).__init__() - self.in_conv1d = nn.Sequential( - torch.nn.Conv1d( - in_channels=hidden_size, - out_channels=hidden_size, - kernel_size=9, padding=4, - ), - torch.nn.ReLU(), - torch.nn.Conv1d( - in_channels=hidden_size, - out_channels=hidden_size, - kernel_size=9, padding=4, - ), - ) - self.ln = nn.LayerNorm(hidden_size) - if decoder_rnn_dim == 0: - decoder_rnn_dim = hidden_size * 2 - self.rnn = torch.nn.LSTM( - input_size=hidden_size, - hidden_size=decoder_rnn_dim, - num_layers=1, - batch_first=True, - bidirectional=True, - dropout=dropout - ) - self.rnn.flatten_parameters() - self.conv1d = torch.nn.Conv1d( - in_channels=decoder_rnn_dim * 2, - out_channels=hidden_size, - kernel_size=3, - padding=1, - ) - - def forward(self, x): - input_masks = x.abs().sum(-1).ne(0).data[:, :, None] - input_lengths = input_masks.sum([-1, -2]) - input_lengths = input_lengths.cpu().numpy() - - x = self.in_conv1d(x.transpose(1, 2)).transpose(1, 2) - x = self.ln(x) - x = nn.utils.rnn.pack_padded_sequence(x, input_lengths, batch_first=True, enforce_sorted=False) - self.rnn.flatten_parameters() - x, _ = self.rnn(x) # [B, T, C] - x, _ = nn.utils.rnn.pad_packed_sequence(x, batch_first=True) - x = x * input_masks - pre_mel = self.conv1d(x.transpose(1, 2)).transpose(1, 2) # [B, T, C] - pre_mel = pre_mel * input_masks - return pre_mel diff --git 
a/spaces/yizhangliu/Grounded-Segment-Anything/app_cli.py b/spaces/yizhangliu/Grounded-Segment-Anything/app_cli.py deleted file mode 100644 index 33acfcefd26aa245594fe5312c0b49a436c1abea..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/app_cli.py +++ /dev/null @@ -1,131 +0,0 @@ - -import warnings -warnings.filterwarnings('ignore') - -import subprocess, io, os, sys, time -from loguru import logger - -# os.system("pip install diffuser==0.6.0") -# os.system("pip install transformers==4.29.1") - -os.environ["CUDA_VISIBLE_DEVICES"] = "0" - -if os.environ.get('IS_MY_DEBUG') is None: - result = subprocess.run(['pip', 'install', '-e', 'GroundingDINO'], check=True) - print(f'pip install GroundingDINO = {result}') - -# result = subprocess.run(['pip', 'list'], check=True) -# print(f'pip list = {result}') - -sys.path.insert(0, './GroundingDINO') - -import gradio as gr - -import argparse - -import copy - -import numpy as np -import torch -from PIL import Image, ImageDraw, ImageFont, ImageOps - -# Grounding DINO -import GroundingDINO.groundingdino.datasets.transforms as T -from GroundingDINO.groundingdino.models import build_model -from GroundingDINO.groundingdino.util import box_ops -from GroundingDINO.groundingdino.util.slconfig import SLConfig -from GroundingDINO.groundingdino.util.utils import clean_state_dict, get_phrases_from_posmap - -import cv2 -import numpy as np -import matplotlib.pyplot as plt -from lama_cleaner.model_manager import ModelManager -from lama_cleaner.schema import Config as lama_Config - -# segment anything -from segment_anything import build_sam, SamPredictor, SamAutomaticMaskGenerator - -# diffusers -import PIL -import requests -import torch -from io import BytesIO -from diffusers import StableDiffusionInpaintPipeline -from huggingface_hub import hf_hub_download - -from utils import computer_info -# relate anything -from ram_utils import iou, sort_and_deduplicate, relation_classes, MLP, show_anns, ram_show_mask -from ram_train_eval import RamModel,RamPredictor -from mmengine.config import Config as mmengine_Config - -from app import * - -config_file = 'GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py' -ckpt_repo_id = "ShilongLiu/GroundingDINO" -ckpt_filenmae = "groundingdino_swint_ogc.pth" -sam_checkpoint = './sam_vit_h_4b8939.pth' -output_dir = "outputs" -device = 'cpu' - -os.makedirs(output_dir, exist_ok=True) -groundingdino_model = None -sam_device = None -sam_model = None -sam_predictor = None -sam_mask_generator = None -sd_pipe = None -lama_cleaner_model= None -ram_model = None -kosmos_model = None -kosmos_processor = None - -def get_args(): - argparser = argparse.ArgumentParser() - argparser.add_argument("--input_image", "-i", type=str, default="", help="") - argparser.add_argument("--text", "-t", type=str, default="", help="") - argparser.add_argument("--output_image", "-o", type=str, default="", help="") - args = argparser.parse_args() - return args - -# usage: -# python app_cli.py --input_image dog.png --text dog --output_image dog_remove.png - -if __name__ == '__main__': - args = get_args() - logger.info(f'\nargs={args}\n') - - logger.info(f'loading models ... 
') - # set_device() # If you have enough GPUs, you can open this comment - groundingdino_model = load_groundingdino_model('cpu') - load_sam_model(device) - # load_sd_model(device) - load_lama_cleaner_model(device) - # load_ram_model(device) - - input_image = Image.open(args.input_image) - - output_images, _ = run_anything_task(input_image = input_image, - text_prompt = args.text, - task_type = 'remove', - inpaint_prompt = '', - box_threshold = 0.3, - text_threshold = 0.25, - iou_threshold = 0.8, - inpaint_mode = "merge", - mask_source_radio = "type what to detect below", - remove_mode = "rectangle", # ["segment", "rectangle"] - remove_mask_extend = "10", - num_relation = 5, - kosmos_input = None, - cleaner_size_limit = -1, - ) - if len(output_images) > 0: - logger.info(f'save result to {args.output_image} ... ') - output_images[-1].save(args.output_image) - # count = 0 - # for output_image in output_images: - # count += 1 - # if isinstance(output_image, np.ndarray): - # output_image = PIL.Image.fromarray(output_image.astype(np.uint8)) - # output_image.save(args.output_image.replace(".", f"_{count}.")) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ernie_m/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ernie_m/__init__.py deleted file mode 100644 index b7cd3bdd0681c130f2d81b70faa6321e5cce9df6..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ernie_m/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright 2023 The HuggingFace and Baidu Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
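-# Note: this package is assembled lazily via _LazyModule (bottom of file), so
-# the sentencepiece-backed tokenizer and the torch modeling classes are only
-# imported on first attribute access; a missing optional dependency raises
-# OptionalDependencyNotAvailable at that point rather than at import time.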
-from typing import TYPE_CHECKING - -# rely on isort to merge the imports -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_sentencepiece_available, is_torch_available - - -_import_structure = { - "configuration_ernie_m": ["ERNIE_M_PRETRAINED_CONFIG_ARCHIVE_MAP", "ErnieMConfig"], -} - -try: - if not is_sentencepiece_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["tokenization_ernie_m"] = ["ErnieMTokenizer"] - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_ernie_m"] = [ - "ERNIE_M_PRETRAINED_MODEL_ARCHIVE_LIST", - "ErnieMForMultipleChoice", - "ErnieMForQuestionAnswering", - "ErnieMForSequenceClassification", - "ErnieMForTokenClassification", - "ErnieMModel", - "ErnieMPreTrainedModel", - "ErnieMForInformationExtraction", - ] - - -if TYPE_CHECKING: - from .configuration_ernie_m import ERNIE_M_PRETRAINED_CONFIG_ARCHIVE_MAP, ErnieMConfig - - try: - if not is_sentencepiece_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .tokenization_ernie_m import ErnieMTokenizer - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_ernie_m import ( - ERNIE_M_PRETRAINED_MODEL_ARCHIVE_LIST, - ErnieMForInformationExtraction, - ErnieMForMultipleChoice, - ErnieMForQuestionAnswering, - ErnieMForSequenceClassification, - ErnieMForTokenClassification, - ErnieMModel, - ErnieMPreTrainedModel, - ) - - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/plbart/convert_plbart_original_checkpoint_to_torch.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/plbart/convert_plbart_original_checkpoint_to_torch.py deleted file mode 100644 index eac4a27d11c5a08386e698c35b89ac3f6ac3c98c..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/plbart/convert_plbart_original_checkpoint_to_torch.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
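-# Example invocation (hypothetical local paths; flags as defined by the
-# argparse setup below):
-#   python convert_plbart_original_checkpoint_to_torch.py checkpoint.pt \
-#       ./plbart-hf --hf_config uclanlp/plbart-base --finetuned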
- -import argparse - -import torch -from torch import nn - -from transformers import PLBartConfig, PLBartForConditionalGeneration, PLBartForSequenceClassification - - -def remove_ignore_keys_(state_dict): - ignore_keys = [ - "encoder.version", - "decoder.version", - "model.encoder.version", - "model.decoder.version", - "_float_tensor", - "decoder.output_projection.weight", - ] - for k in ignore_keys: - state_dict.pop(k, None) - - -def make_linear_from_emb(emb): - vocab_size, emb_size = emb.weight.shape - lin_layer = nn.Linear(vocab_size, emb_size, bias=False) - lin_layer.weight.data = emb.weight.data - return lin_layer - - -def convert_fairseq_plbart_checkpoint_from_disk( - checkpoint_path, hf_config_path="uclanlp/plbart-base", finetuned=False, classification=False -): - state_dict = torch.load(checkpoint_path, map_location="cpu")["model"] - remove_ignore_keys_(state_dict) - vocab_size = state_dict["encoder.embed_tokens.weight"].shape[0] - - plbart_config = PLBartConfig.from_pretrained(hf_config_path, vocab_size=vocab_size) - - state_dict["shared.weight"] = state_dict["decoder.embed_tokens.weight"] - if not classification: - model = PLBartForConditionalGeneration(plbart_config) - model.model.load_state_dict(state_dict) - if finetuned: - model.lm_head = make_linear_from_emb(model.model.shared) - - else: - classification_head = {} - for key, value in state_dict.copy().items(): - if key.startswith("classification_heads.sentence_classification_head"): - classification_head[key.replace("classification_heads.sentence_classification_head.", "")] = value - state_dict.pop(key) - model = PLBartForSequenceClassification(plbart_config) - model.model.load_state_dict(state_dict) - model.classification_head.load_state_dict(classification_head) - - return model - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument("fairseq_path", type=str, help="model.pt on local filesystem.") - parser.add_argument("pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model.") - parser.add_argument( - "--hf_config", - default="uclanlp/plbart-base", - type=str, - help="Which huggingface architecture to use: plbart-base", - ) - parser.add_argument("--finetuned", action="store_true", help="whether the model is a fine-tuned checkpoint") - parser.add_argument( - "--classification", action="store_true", help="whether the model is a classification checkpoint" - ) - args = parser.parse_args() - model = convert_fairseq_plbart_checkpoint_from_disk( - args.fairseq_path, - hf_config_path=args.hf_config, - finetuned=args.finetuned, - classification=args.classification, - ) - model.save_pretrained(args.pytorch_dump_folder_path) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pvt/configuration_pvt.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pvt/configuration_pvt.py deleted file mode 100644 index 12fb3a5b9a94f409b58cdddf9093ec3296420231..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/pvt/configuration_pvt.py +++ /dev/null @@ -1,163 +0,0 @@ -# coding=utf-8 -# Copyright 2023 Authors: Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, -# Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and The HuggingFace Inc. team. -# All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Pvt model configuration""" - -from collections import OrderedDict -from typing import Callable, List, Mapping - -from packaging import version - -from ...configuration_utils import PretrainedConfig -from ...onnx import OnnxConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -PVT_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "pvt-tiny-224": "https://huggingface.co/Zetatech/pvt-tiny-224", - # See all PVT models at https://huggingface.co/models?filter=pvt -} - - -class PvtConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`PvtModel`]. It is used to instantiate an Pvt - model according to the specified arguments, defining the model architecture. Instantiating a configuration with the - defaults will yield a similar configuration to that of the Pvt - [Xrenya/pvt-tiny-224](https://huggingface.co/Xrenya/pvt-tiny-224) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - image_size (`int`, *optional*, defaults to 224): - The input image size - num_channels (`int`, *optional*, defaults to 3): - The number of input channels. - num_encoder_blocks (`int`, *optional*, defaults to 4): - The number of encoder blocks (i.e. stages in the Mix Transformer encoder). - depths (`List[int]`, *optional*, defaults to `[2, 2, 2, 2]`): - The number of layers in each encoder block. - sequence_reduction_ratios (`List[int]`, *optional*, defaults to `[8, 4, 2, 1]`): - Sequence reduction ratios in each encoder block. - hidden_sizes (`List[int]`, *optional*, defaults to `[64, 128, 320, 512]`): - Dimension of each of the encoder blocks. - patch_sizes (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`): - Patch size before each encoder block. - strides (`List[int]`, *optional*, defaults to `[4, 2, 2, 2]`): - Stride before each encoder block. - num_attention_heads (`List[int]`, *optional*, defaults to `[1, 2, 5, 8]`): - Number of attention heads for each attention layer in each block of the Transformer encoder. - mlp_ratios (`List[int]`, *optional*, defaults to `[8, 8, 4, 4]`): - Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the - encoder blocks. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
- drop_path_rate (`float`, *optional*, defaults to 0.0): - The dropout probability for stochastic depth, used in the blocks of the Transformer encoder. - layer_norm_eps (`float`, *optional*, defaults to 1e-06): - The epsilon used by the layer normalization layers. - qkv_bias (`bool`, *optional*, defaults to `True`): - Whether or not a learnable bias should be added to the queries, keys and values. - num_labels ('int', *optional*, defaults to 1000): - The number of classes. - Example: - - ```python - >>> from transformers import PvtModel, PvtConfig - - >>> # Initializing a PVT Xrenya/pvt-tiny-224 style configuration - >>> configuration = PvtConfig() - - >>> # Initializing a model from the Xrenya/pvt-tiny-224 style configuration - >>> model = PvtModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "pvt" - - def __init__( - self, - image_size: int = 224, - num_channels: int = 3, - num_encoder_blocks: int = 4, - depths: List[int] = [2, 2, 2, 2], - sequence_reduction_ratios: List[int] = [8, 4, 2, 1], - hidden_sizes: List[int] = [64, 128, 320, 512], - patch_sizes: List[int] = [4, 2, 2, 2], - strides: List[int] = [4, 2, 2, 2], - num_attention_heads: List[int] = [1, 2, 5, 8], - mlp_ratios: List[int] = [8, 8, 4, 4], - hidden_act: Mapping[str, Callable] = "gelu", - hidden_dropout_prob: float = 0.0, - attention_probs_dropout_prob: float = 0.0, - initializer_range: float = 0.02, - drop_path_rate: float = 0.0, - layer_norm_eps: float = 1e-6, - qkv_bias: bool = True, - num_labels: int = 1000, - **kwargs, - ): - super().__init__(**kwargs) - - self.image_size = image_size - self.num_channels = num_channels - self.num_encoder_blocks = num_encoder_blocks - self.depths = depths - self.sequence_reduction_ratios = sequence_reduction_ratios - self.hidden_sizes = hidden_sizes - self.patch_sizes = patch_sizes - self.strides = strides - self.mlp_ratios = mlp_ratios - self.num_attention_heads = num_attention_heads - self.hidden_act = hidden_act - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.initializer_range = initializer_range - self.drop_path_rate = drop_path_rate - self.layer_norm_eps = layer_norm_eps - self.num_labels = num_labels - self.qkv_bias = qkv_bias - - -class PvtOnnxConfig(OnnxConfig): - torch_onnx_minimum_version = version.parse("1.11") - - @property - def inputs(self) -> Mapping[str, Mapping[int, str]]: - return OrderedDict( - [ - ("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"}), - ] - ) - - @property - def atol_for_validation(self) -> float: - return 1e-4 - - @property - def default_onnx_opset(self) -> int: - return 12 diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/onnxexport/model_onnx_speaker_mix.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/onnxexport/model_onnx_speaker_mix.py deleted file mode 100644 index 355e590da30a4651925ffb24938b8c2af558c098..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/onnxexport/model_onnx_speaker_mix.py +++ /dev/null @@ -1,350 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -import utils -from modules.commons import init_weights, get_padding -from 
vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - kernel_size, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.gin_channels = gin_channels - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_mask, f0=None, z=None): - x = x + self.f0_emb(f0).transpose(1, 2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + z * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 
1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class F0Decoder(nn.Module): - def __init__(self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=0): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.spk_channels = spk_channels - - self.prenet = nn.Conv1d(hidden_channels, hidden_channels, 3, padding=1) - self.decoder = attentions.FFT( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.f0_prenet = nn.Conv1d(1, hidden_channels, 3, padding=1) - self.cond = nn.Conv1d(spk_channels, hidden_channels, 1) - - def forward(self, x, norm_f0, x_mask, spk_emb=None): - x = torch.detach(x) - if spk_emb is not None: - x = x + self.cond(spk_emb) - x += self.f0_prenet(norm_f0) - x = self.prenet(x) * x_mask - x = self.decoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - sampling_rate=44100, - **kwargs): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - 
self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.pre = nn.Conv1d(ssl_dim, hidden_channels, kernel_size=5, padding=2) - - self.enc_p = TextEncoder( - inter_channels, - hidden_channels, - filter_channels=filter_channels, - n_heads=n_heads, - n_layers=n_layers, - kernel_size=kernel_size, - p_dropout=p_dropout - ) - hps = { - "sampling_rate": sampling_rate, - "inter_channels": inter_channels, - "resblock": resblock, - "resblock_kernel_sizes": resblock_kernel_sizes, - "resblock_dilation_sizes": resblock_dilation_sizes, - "upsample_rates": upsample_rates, - "upsample_initial_channel": upsample_initial_channel, - "upsample_kernel_sizes": upsample_kernel_sizes, - "gin_channels": gin_channels, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - self.f0_decoder = F0Decoder( - 1, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - spk_channels=gin_channels - ) - self.emb_uv = nn.Embedding(2, hidden_channels) - self.predict_f0 = False - self.speaker_map = [] - self.export_mix = False - - def export_chara_mix(self, n_speakers_mix): - self.speaker_map = torch.zeros((n_speakers_mix, 1, 1, self.gin_channels)) - for i in range(n_speakers_mix): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - self.export_mix = True - - def forward(self, c, f0, mel2ph, uv, noise=None, g=None, cluster_infer_ratio=0.1): - decoder_inp = F.pad(c, [0, 0, 1, 0]) - mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, c.shape[-1]]) - c = torch.gather(decoder_inp, 1, mel2ph_).transpose(1, 2) # [B, T, H] - - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - - if self.export_mix: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - x_mask = torch.unsqueeze(commons.sequence_mask(c_lengths, c.size(2)), 1).to(c.dtype) - x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2) - - if self.predict_f0: - lf0 = 2595. * torch.log10(1. + f0.unsqueeze(1) / 700.) 
/ 500 - norm_lf0 = utils.normalize_f0(lf0, x_mask, uv, random_scale=False) - pred_lf0 = self.f0_decoder(x, norm_lf0, x_mask, spk_emb=g) - f0 = (700 * (torch.pow(10, pred_lf0 * 500 / 2595) - 1)).squeeze(1) - - z_p, m_p, logs_p, c_mask = self.enc_p(x, x_mask, f0=f0_to_coarse(f0), z=noise) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0) - return o diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/builtin_meta.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/builtin_meta.py deleted file mode 100644 index 63c7a1a31b31dd89b82011effee26471faccacf5..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/builtin_meta.py +++ /dev/null @@ -1,350 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -Note: -For your custom dataset, there is no need to hard-code metadata anywhere in the code. -For example, for COCO-format dataset, metadata will be obtained automatically -when calling `load_coco_json`. For other dataset, metadata may also be obtained in other ways -during loading. - -However, we hard-coded metadata for a few common dataset here. -The only goal is to allow users who don't have these dataset to use pre-trained models. -Users don't have to download a COCO json (which contains metadata), in order to visualize a -COCO model (with correct class names and colors). -""" - - -# All coco categories, together with their nice-looking visualization colors -# It's from https://github.com/cocodataset/panopticapi/blob/master/panoptic_coco_categories.json -COCO_CATEGORIES = [ - {"color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}, - {"color": [119, 11, 32], "isthing": 1, "id": 2, "name": "bicycle"}, - {"color": [0, 0, 142], "isthing": 1, "id": 3, "name": "car"}, - {"color": [0, 0, 230], "isthing": 1, "id": 4, "name": "motorcycle"}, - {"color": [106, 0, 228], "isthing": 1, "id": 5, "name": "airplane"}, - {"color": [0, 60, 100], "isthing": 1, "id": 6, "name": "bus"}, - {"color": [0, 80, 100], "isthing": 1, "id": 7, "name": "train"}, - {"color": [0, 0, 70], "isthing": 1, "id": 8, "name": "truck"}, - {"color": [0, 0, 192], "isthing": 1, "id": 9, "name": "boat"}, - {"color": [250, 170, 30], "isthing": 1, "id": 10, "name": "traffic light"}, - {"color": [100, 170, 30], "isthing": 1, "id": 11, "name": "fire hydrant"}, - {"color": [220, 220, 0], "isthing": 1, "id": 13, "name": "stop sign"}, - {"color": [175, 116, 175], "isthing": 1, "id": 14, "name": "parking meter"}, - {"color": [250, 0, 30], "isthing": 1, "id": 15, "name": "bench"}, - {"color": [165, 42, 42], "isthing": 1, "id": 16, "name": "bird"}, - {"color": [255, 77, 255], "isthing": 1, "id": 17, "name": "cat"}, - {"color": [0, 226, 252], "isthing": 1, "id": 18, "name": "dog"}, - {"color": [182, 182, 255], "isthing": 1, "id": 19, "name": "horse"}, - {"color": [0, 82, 0], "isthing": 1, "id": 20, "name": "sheep"}, - {"color": [120, 166, 157], "isthing": 1, "id": 21, "name": "cow"}, - {"color": [110, 76, 0], "isthing": 1, "id": 22, "name": "elephant"}, - {"color": [174, 57, 255], "isthing": 1, "id": 23, "name": "bear"}, - {"color": [199, 100, 0], "isthing": 1, "id": 24, "name": "zebra"}, - {"color": [72, 0, 118], "isthing": 1, "id": 25, "name": "giraffe"}, - {"color": [255, 179, 240], "isthing": 1, "id": 27, "name": "backpack"}, - {"color": [0, 125, 92], "isthing": 1, "id": 28, "name": "umbrella"}, 
- {"color": [209, 0, 151], "isthing": 1, "id": 31, "name": "handbag"}, - {"color": [188, 208, 182], "isthing": 1, "id": 32, "name": "tie"}, - {"color": [0, 220, 176], "isthing": 1, "id": 33, "name": "suitcase"}, - {"color": [255, 99, 164], "isthing": 1, "id": 34, "name": "frisbee"}, - {"color": [92, 0, 73], "isthing": 1, "id": 35, "name": "skis"}, - {"color": [133, 129, 255], "isthing": 1, "id": 36, "name": "snowboard"}, - {"color": [78, 180, 255], "isthing": 1, "id": 37, "name": "sports ball"}, - {"color": [0, 228, 0], "isthing": 1, "id": 38, "name": "kite"}, - {"color": [174, 255, 243], "isthing": 1, "id": 39, "name": "baseball bat"}, - {"color": [45, 89, 255], "isthing": 1, "id": 40, "name": "baseball glove"}, - {"color": [134, 134, 103], "isthing": 1, "id": 41, "name": "skateboard"}, - {"color": [145, 148, 174], "isthing": 1, "id": 42, "name": "surfboard"}, - {"color": [255, 208, 186], "isthing": 1, "id": 43, "name": "tennis racket"}, - {"color": [197, 226, 255], "isthing": 1, "id": 44, "name": "bottle"}, - {"color": [171, 134, 1], "isthing": 1, "id": 46, "name": "wine glass"}, - {"color": [109, 63, 54], "isthing": 1, "id": 47, "name": "cup"}, - {"color": [207, 138, 255], "isthing": 1, "id": 48, "name": "fork"}, - {"color": [151, 0, 95], "isthing": 1, "id": 49, "name": "knife"}, - {"color": [9, 80, 61], "isthing": 1, "id": 50, "name": "spoon"}, - {"color": [84, 105, 51], "isthing": 1, "id": 51, "name": "bowl"}, - {"color": [74, 65, 105], "isthing": 1, "id": 52, "name": "banana"}, - {"color": [166, 196, 102], "isthing": 1, "id": 53, "name": "apple"}, - {"color": [208, 195, 210], "isthing": 1, "id": 54, "name": "sandwich"}, - {"color": [255, 109, 65], "isthing": 1, "id": 55, "name": "orange"}, - {"color": [0, 143, 149], "isthing": 1, "id": 56, "name": "broccoli"}, - {"color": [179, 0, 194], "isthing": 1, "id": 57, "name": "carrot"}, - {"color": [209, 99, 106], "isthing": 1, "id": 58, "name": "hot dog"}, - {"color": [5, 121, 0], "isthing": 1, "id": 59, "name": "pizza"}, - {"color": [227, 255, 205], "isthing": 1, "id": 60, "name": "donut"}, - {"color": [147, 186, 208], "isthing": 1, "id": 61, "name": "cake"}, - {"color": [153, 69, 1], "isthing": 1, "id": 62, "name": "chair"}, - {"color": [3, 95, 161], "isthing": 1, "id": 63, "name": "couch"}, - {"color": [163, 255, 0], "isthing": 1, "id": 64, "name": "potted plant"}, - {"color": [119, 0, 170], "isthing": 1, "id": 65, "name": "bed"}, - {"color": [0, 182, 199], "isthing": 1, "id": 67, "name": "dining table"}, - {"color": [0, 165, 120], "isthing": 1, "id": 70, "name": "toilet"}, - {"color": [183, 130, 88], "isthing": 1, "id": 72, "name": "tv"}, - {"color": [95, 32, 0], "isthing": 1, "id": 73, "name": "laptop"}, - {"color": [130, 114, 135], "isthing": 1, "id": 74, "name": "mouse"}, - {"color": [110, 129, 133], "isthing": 1, "id": 75, "name": "remote"}, - {"color": [166, 74, 118], "isthing": 1, "id": 76, "name": "keyboard"}, - {"color": [219, 142, 185], "isthing": 1, "id": 77, "name": "cell phone"}, - {"color": [79, 210, 114], "isthing": 1, "id": 78, "name": "microwave"}, - {"color": [178, 90, 62], "isthing": 1, "id": 79, "name": "oven"}, - {"color": [65, 70, 15], "isthing": 1, "id": 80, "name": "toaster"}, - {"color": [127, 167, 115], "isthing": 1, "id": 81, "name": "sink"}, - {"color": [59, 105, 106], "isthing": 1, "id": 82, "name": "refrigerator"}, - {"color": [142, 108, 45], "isthing": 1, "id": 84, "name": "book"}, - {"color": [196, 172, 0], "isthing": 1, "id": 85, "name": "clock"}, - {"color": [95, 54, 80], "isthing": 1, "id": 86, "name": 
"vase"}, - {"color": [128, 76, 255], "isthing": 1, "id": 87, "name": "scissors"}, - {"color": [201, 57, 1], "isthing": 1, "id": 88, "name": "teddy bear"}, - {"color": [246, 0, 122], "isthing": 1, "id": 89, "name": "hair drier"}, - {"color": [191, 162, 208], "isthing": 1, "id": 90, "name": "toothbrush"}, - {"color": [255, 255, 128], "isthing": 0, "id": 92, "name": "banner"}, - {"color": [147, 211, 203], "isthing": 0, "id": 93, "name": "blanket"}, - {"color": [150, 100, 100], "isthing": 0, "id": 95, "name": "bridge"}, - {"color": [168, 171, 172], "isthing": 0, "id": 100, "name": "cardboard"}, - {"color": [146, 112, 198], "isthing": 0, "id": 107, "name": "counter"}, - {"color": [210, 170, 100], "isthing": 0, "id": 109, "name": "curtain"}, - {"color": [92, 136, 89], "isthing": 0, "id": 112, "name": "door-stuff"}, - {"color": [218, 88, 184], "isthing": 0, "id": 118, "name": "floor-wood"}, - {"color": [241, 129, 0], "isthing": 0, "id": 119, "name": "flower"}, - {"color": [217, 17, 255], "isthing": 0, "id": 122, "name": "fruit"}, - {"color": [124, 74, 181], "isthing": 0, "id": 125, "name": "gravel"}, - {"color": [70, 70, 70], "isthing": 0, "id": 128, "name": "house"}, - {"color": [255, 228, 255], "isthing": 0, "id": 130, "name": "light"}, - {"color": [154, 208, 0], "isthing": 0, "id": 133, "name": "mirror-stuff"}, - {"color": [193, 0, 92], "isthing": 0, "id": 138, "name": "net"}, - {"color": [76, 91, 113], "isthing": 0, "id": 141, "name": "pillow"}, - {"color": [255, 180, 195], "isthing": 0, "id": 144, "name": "platform"}, - {"color": [106, 154, 176], "isthing": 0, "id": 145, "name": "playingfield"}, - {"color": [230, 150, 140], "isthing": 0, "id": 147, "name": "railroad"}, - {"color": [60, 143, 255], "isthing": 0, "id": 148, "name": "river"}, - {"color": [128, 64, 128], "isthing": 0, "id": 149, "name": "road"}, - {"color": [92, 82, 55], "isthing": 0, "id": 151, "name": "roof"}, - {"color": [254, 212, 124], "isthing": 0, "id": 154, "name": "sand"}, - {"color": [73, 77, 174], "isthing": 0, "id": 155, "name": "sea"}, - {"color": [255, 160, 98], "isthing": 0, "id": 156, "name": "shelf"}, - {"color": [255, 255, 255], "isthing": 0, "id": 159, "name": "snow"}, - {"color": [104, 84, 109], "isthing": 0, "id": 161, "name": "stairs"}, - {"color": [169, 164, 131], "isthing": 0, "id": 166, "name": "tent"}, - {"color": [225, 199, 255], "isthing": 0, "id": 168, "name": "towel"}, - {"color": [137, 54, 74], "isthing": 0, "id": 171, "name": "wall-brick"}, - {"color": [135, 158, 223], "isthing": 0, "id": 175, "name": "wall-stone"}, - {"color": [7, 246, 231], "isthing": 0, "id": 176, "name": "wall-tile"}, - {"color": [107, 255, 200], "isthing": 0, "id": 177, "name": "wall-wood"}, - {"color": [58, 41, 149], "isthing": 0, "id": 178, "name": "water-other"}, - {"color": [183, 121, 142], "isthing": 0, "id": 180, "name": "window-blind"}, - {"color": [255, 73, 97], "isthing": 0, "id": 181, "name": "window-other"}, - {"color": [107, 142, 35], "isthing": 0, "id": 184, "name": "tree-merged"}, - {"color": [190, 153, 153], "isthing": 0, "id": 185, "name": "fence-merged"}, - {"color": [146, 139, 141], "isthing": 0, "id": 186, "name": "ceiling-merged"}, - {"color": [70, 130, 180], "isthing": 0, "id": 187, "name": "sky-other-merged"}, - {"color": [134, 199, 156], "isthing": 0, "id": 188, "name": "cabinet-merged"}, - {"color": [209, 226, 140], "isthing": 0, "id": 189, "name": "table-merged"}, - {"color": [96, 36, 108], "isthing": 0, "id": 190, "name": "floor-other-merged"}, - {"color": [96, 96, 96], "isthing": 0, "id": 191, 
"name": "pavement-merged"}, - {"color": [64, 170, 64], "isthing": 0, "id": 192, "name": "mountain-merged"}, - {"color": [152, 251, 152], "isthing": 0, "id": 193, "name": "grass-merged"}, - {"color": [208, 229, 228], "isthing": 0, "id": 194, "name": "dirt-merged"}, - {"color": [206, 186, 171], "isthing": 0, "id": 195, "name": "paper-merged"}, - {"color": [152, 161, 64], "isthing": 0, "id": 196, "name": "food-other-merged"}, - {"color": [116, 112, 0], "isthing": 0, "id": 197, "name": "building-other-merged"}, - {"color": [0, 114, 143], "isthing": 0, "id": 198, "name": "rock-merged"}, - {"color": [102, 102, 156], "isthing": 0, "id": 199, "name": "wall-other-merged"}, - {"color": [250, 141, 255], "isthing": 0, "id": 200, "name": "rug-merged"}, -] - -# fmt: off -COCO_PERSON_KEYPOINT_NAMES = ( - "nose", - "left_eye", "right_eye", - "left_ear", "right_ear", - "left_shoulder", "right_shoulder", - "left_elbow", "right_elbow", - "left_wrist", "right_wrist", - "left_hip", "right_hip", - "left_knee", "right_knee", - "left_ankle", "right_ankle", -) -# fmt: on - -# Pairs of keypoints that should be exchanged under horizontal flipping -COCO_PERSON_KEYPOINT_FLIP_MAP = ( - ("left_eye", "right_eye"), - ("left_ear", "right_ear"), - ("left_shoulder", "right_shoulder"), - ("left_elbow", "right_elbow"), - ("left_wrist", "right_wrist"), - ("left_hip", "right_hip"), - ("left_knee", "right_knee"), - ("left_ankle", "right_ankle"), -) - -# rules for pairs of keypoints to draw a line between, and the line color to use. -KEYPOINT_CONNECTION_RULES = [ - # face - ("left_ear", "left_eye", (102, 204, 255)), - ("right_ear", "right_eye", (51, 153, 255)), - ("left_eye", "nose", (102, 0, 204)), - ("nose", "right_eye", (51, 102, 255)), - # upper-body - ("left_shoulder", "right_shoulder", (255, 128, 0)), - ("left_shoulder", "left_elbow", (153, 255, 204)), - ("right_shoulder", "right_elbow", (128, 229, 255)), - ("left_elbow", "left_wrist", (153, 255, 153)), - ("right_elbow", "right_wrist", (102, 255, 224)), - # lower-body - ("left_hip", "right_hip", (255, 102, 0)), - ("left_hip", "left_knee", (255, 255, 77)), - ("right_hip", "right_knee", (153, 255, 204)), - ("left_knee", "left_ankle", (191, 255, 128)), - ("right_knee", "right_ankle", (255, 195, 77)), -] - -# All Cityscapes categories, together with their nice-looking visualization colors -# It's from https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py # noqa -CITYSCAPES_CATEGORIES = [ - {"color": (128, 64, 128), "isthing": 0, "id": 7, "trainId": 0, "name": "road"}, - {"color": (244, 35, 232), "isthing": 0, "id": 8, "trainId": 1, "name": "sidewalk"}, - {"color": (70, 70, 70), "isthing": 0, "id": 11, "trainId": 2, "name": "building"}, - {"color": (102, 102, 156), "isthing": 0, "id": 12, "trainId": 3, "name": "wall"}, - {"color": (190, 153, 153), "isthing": 0, "id": 13, "trainId": 4, "name": "fence"}, - {"color": (153, 153, 153), "isthing": 0, "id": 17, "trainId": 5, "name": "pole"}, - {"color": (250, 170, 30), "isthing": 0, "id": 19, "trainId": 6, "name": "traffic light"}, - {"color": (220, 220, 0), "isthing": 0, "id": 20, "trainId": 7, "name": "traffic sign"}, - {"color": (107, 142, 35), "isthing": 0, "id": 21, "trainId": 8, "name": "vegetation"}, - {"color": (152, 251, 152), "isthing": 0, "id": 22, "trainId": 9, "name": "terrain"}, - {"color": (70, 130, 180), "isthing": 0, "id": 23, "trainId": 10, "name": "sky"}, - {"color": (220, 20, 60), "isthing": 1, "id": 24, "trainId": 11, "name": "person"}, - {"color": (255, 0, 0), "isthing": 1, 
"id": 25, "trainId": 12, "name": "rider"}, - {"color": (0, 0, 142), "isthing": 1, "id": 26, "trainId": 13, "name": "car"}, - {"color": (0, 0, 70), "isthing": 1, "id": 27, "trainId": 14, "name": "truck"}, - {"color": (0, 60, 100), "isthing": 1, "id": 28, "trainId": 15, "name": "bus"}, - {"color": (0, 80, 100), "isthing": 1, "id": 31, "trainId": 16, "name": "train"}, - {"color": (0, 0, 230), "isthing": 1, "id": 32, "trainId": 17, "name": "motorcycle"}, - {"color": (119, 11, 32), "isthing": 1, "id": 33, "trainId": 18, "name": "bicycle"}, -] - -# fmt: off -ADE20K_SEM_SEG_CATEGORIES = [ - "wall", "building", "sky", "floor", "tree", "ceiling", "road, route", "bed", "window ", "grass", "cabinet", "sidewalk, pavement", "person", "earth, ground", "door", "table", "mountain, mount", "plant", "curtain", "chair", "car", "water", "painting, picture", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock, stone", "wardrobe, closet, press", "lamp", "tub", "rail", "cushion", "base, pedestal, stand", "box", "column, pillar", "signboard, sign", "chest of drawers, chest, bureau, dresser", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator, icebox", "grandstand, covered stand", "path", "stairs", "runway", "case, display case, showcase, vitrine", "pool table, billiard table, snooker table", "pillow", "screen door, screen", "stairway, staircase", "river", "bridge, span", "bookcase", "blind, screen", "coffee table", "toilet, can, commode, crapper, pot, potty, stool, throne", "flower", "book", "hill", "bench", "countertop", "stove", "palm, palm tree", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel, hut, hutch, shack, shanty", "bus", "towel", "light", "truck", "tower", "chandelier", "awning, sunshade, sunblind", "street lamp", "booth", "tv", "plane", "dirt track", "clothes", "pole", "land, ground, soil", "bannister, banister, balustrade, balusters, handrail", "escalator, moving staircase, moving stairway", "ottoman, pouf, pouffe, puff, hassock", "bottle", "buffet, counter, sideboard", "poster, posting, placard, notice, bill, card", "stage", "van", "ship", "fountain", "conveyer belt, conveyor belt, conveyer, conveyor, transporter", "canopy", "washer, automatic washer, washing machine", "plaything, toy", "pool", "stool", "barrel, cask", "basket, handbasket", "falls", "tent", "bag", "minibike, motorbike", "cradle", "oven", "ball", "food, solid food", "step, stair", "tank, storage tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket, cover", "sculpture", "hood, exhaust hood", "sconce", "vase", "traffic light", "tray", "trash can", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass, drinking glass", "clock", "flag", # noqa -] -# After processed by `prepare_ade20k_sem_seg.py`, id 255 means ignore -# fmt: on - - -def _get_coco_instances_meta(): - thing_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1] - assert len(thing_ids) == 80, len(thing_ids) - # Mapping from the incontiguous COCO category id to an id in [0, 79] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - "thing_colors": thing_colors, - } - return ret - - -def 
_get_coco_panoptic_separated_meta():
-    """
-    Returns metadata for "separated" version of the panoptic segmentation dataset.
-    """
-    stuff_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 0]
-    assert len(stuff_ids) == 53, len(stuff_ids)
-
-    # For semantic segmentation, this mapping maps from contiguous stuff id
-    # (in [0, 53], used in models) to ids in the dataset (used for processing results)
-    # The id 0 is mapped to an extra category "thing".
-    stuff_dataset_id_to_contiguous_id = {k: i + 1 for i, k in enumerate(stuff_ids)}
-    # When converting COCO panoptic annotations to semantic annotations
-    # We label the "thing" category to 0
-    stuff_dataset_id_to_contiguous_id[0] = 0
-
-    # 54 names for COCO stuff categories (including "things")
-    stuff_classes = ["things"] + [
-        k["name"].replace("-other", "").replace("-merged", "")
-        for k in COCO_CATEGORIES
-        if k["isthing"] == 0
-    ]
-
-    # NOTE: I randomly picked a color for things
-    stuff_colors = [[82, 18, 128]] + [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 0]
-    ret = {
-        "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id,
-        "stuff_classes": stuff_classes,
-        "stuff_colors": stuff_colors,
-    }
-    ret.update(_get_coco_instances_meta())
-    return ret
-
-
-def _get_builtin_metadata(dataset_name):
-    if dataset_name == "coco":
-        return _get_coco_instances_meta()
-    if dataset_name == "coco_panoptic_separated":
-        return _get_coco_panoptic_separated_meta()
-    elif dataset_name == "coco_panoptic_standard":
-        meta = {}
-        # The following metadata maps contiguous id from [0, #thing categories +
-        # #stuff categories) to their names and colors. We keep two copies of the
-        # same name and color under "thing_*" and "stuff_*" because the current
-        # visualization function in D2 handles thing and stuff classes differently
-        # due to some heuristic used in Panoptic FPN. We keep the same naming to
-        # enable reusing existing visualization functions.
-        thing_classes = [k["name"] for k in COCO_CATEGORIES]
-        thing_colors = [k["color"] for k in COCO_CATEGORIES]
-        stuff_classes = [k["name"] for k in COCO_CATEGORIES]
-        stuff_colors = [k["color"] for k in COCO_CATEGORIES]
-
-        meta["thing_classes"] = thing_classes
-        meta["thing_colors"] = thing_colors
-        meta["stuff_classes"] = stuff_classes
-        meta["stuff_colors"] = stuff_colors
-
-        # Convert category id for training:
-        #   category id: like semantic segmentation, it is the class id for each
-        #   pixel. Since there are some classes not used in evaluation, the category
-        #   id is not always contiguous and thus we have two sets of category ids:
-        #       - original category id: category id in the original dataset, mainly
-        #         used for evaluation.
-        #       - contiguous category id: [0, #classes), in order to train the linear
-        #         softmax classifier.
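-        # Worked example (illustrative): the thing category "person" (dataset
-        # id 1) receives contiguous id 0, while the first stuff category
-        # "banner" (dataset id 92) receives contiguous id 80, its index in
-        # COCO_CATEGORIES, since things and stuff share one enumeration here.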
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(COCO_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - else: - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - elif dataset_name == "coco_person": - return { - "thing_classes": ["person"], - "keypoint_names": COCO_PERSON_KEYPOINT_NAMES, - "keypoint_flip_map": COCO_PERSON_KEYPOINT_FLIP_MAP, - "keypoint_connection_rules": KEYPOINT_CONNECTION_RULES, - } - elif dataset_name == "cityscapes": - # fmt: off - CITYSCAPES_THING_CLASSES = [ - "person", "rider", "car", "truck", - "bus", "train", "motorcycle", "bicycle", - ] - CITYSCAPES_STUFF_CLASSES = [ - "road", "sidewalk", "building", "wall", "fence", "pole", "traffic light", - "traffic sign", "vegetation", "terrain", "sky", "person", "rider", "car", - "truck", "bus", "train", "motorcycle", "bicycle", - ] - # fmt: on - return { - "thing_classes": CITYSCAPES_THING_CLASSES, - "stuff_classes": CITYSCAPES_STUFF_CLASSES, - } - raise KeyError("No built-in metadata for dataset {}".format(dataset_name)) diff --git a/spaces/yongjae/whisper-webui/docs/options.md b/spaces/yongjae/whisper-webui/docs/options.md deleted file mode 100644 index adfabcf4b17b26a833369ab71948a52b1d5b5184..0000000000000000000000000000000000000000 --- a/spaces/yongjae/whisper-webui/docs/options.md +++ /dev/null @@ -1,78 +0,0 @@ -# Options -To transcribe or translate an audio file, you can either copy an URL from a website (all [websites](https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md) -supported by YT-DLP will work, including YouTube). Otherwise, upload an audio file (choose "All Files (*.*)" -in the file selector to select any file type, including video files) or use the microphone. - -For longer audio files (>10 minutes), it is recommended that you select Silero VAD (Voice Activity Detector) in the VAD option. - -## Model -Select the model that Whisper will use to transcribe the audio: - -| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed | -|--------|------------|--------------------|--------------------|---------------|----------------| -| tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x | -| base | 74 M | base.en | base | ~1 GB | ~16x | -| small | 244 M | small.en | small | ~2 GB | ~6x | -| medium | 769 M | medium.en | medium | ~5 GB | ~2x | -| large | 1550 M | N/A | large | ~10 GB | 1x | - -## Language - -Select the language, or leave it empty for Whisper to automatically detect it. - -Note that if the selected language and the language in the audio differs, Whisper may start to translate the audio to the selected -language. For instance, if the audio is in English but you select Japaneese, the model may translate the audio to Japanese. - -## Inputs -The options "URL (YouTube, etc.)", "Upload Audio" or "Micriphone Input" allows you to send an audio input to the model. - -Note that the UI will only process the first valid input - i.e. if you enter both an URL and upload an audio, it will only process -the URL. - -## Task -Select the task - either "transcribe" to transcribe the audio to text, or "translate" to translate it to English. 
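-
-For scripting these two tasks outside the UI, the equivalent calls with the `openai-whisper` Python package look roughly like this (a minimal sketch; the model size, file name, and language code are placeholders):
-
-```python
-import whisper
-
-# Pick any size from the table above; "small" is a reasonable speed/accuracy trade-off.
-model = whisper.load_model("small")
-
-# task="transcribe" keeps the audio's language; task="translate" produces English text.
-result = model.transcribe("audio.mp3", language="ja", task="translate")
-print(result["text"])
-```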
-
-## VAD
-Using a VAD will improve the timing accuracy of each transcribed line, as well as prevent Whisper from getting into an infinite
-loop detecting the same sentence over and over again. The downside is that this may come at a cost to text accuracy, especially
-with regards to unique words or names that appear in the audio. You can compensate for this by increasing the prompt window.
-
-Note that English is very well handled by Whisper, and it's less susceptible to issues surrounding bad timings and infinite loops.
-So you may only need to use a VAD for other languages, such as Japanese, or when the audio is very long.
-
-* none
-  * Run Whisper on the entire audio input.
-* silero-vad
-  * Use Silero VAD to detect sections that contain speech, and run Whisper independently on each section. Whisper is also run
-    on the gaps between each speech section, by either expanding the section up to the max merge size, or running Whisper independently
-    on the non-speech section.
-* silero-vad-expand-into-gaps
-  * Use Silero VAD to detect sections that contain speech, and run Whisper independently on each section. Each speech section will be expanded
-    such that it covers any adjacent non-speech sections. For instance, if an audio file of one minute contains the speech sections
-    00:00 - 00:10 (A) and 00:30 - 00:40 (B), the first section (A) will be expanded to 00:00 - 00:30, and (B) will be expanded to 00:30 - 01:00.
-* silero-vad-skip-gaps
-  * As above, but sections that don't contain speech according to Silero will be skipped. This will be slightly faster, but
-    may cause dialogue to be skipped.
-* periodic-vad
-  * Create sections of speech every 'VAD - Max Merge Size' seconds. This is very fast and simple, but will potentially break
-    a sentence or word in two.
-
-## VAD - Merge Window (s)
-If set, any adjacent speech sections that are at most this number of seconds apart will be automatically merged.
-
-## VAD - Max Merge Size (s)
-Disables merging of adjacent speech sections if the merged section would be longer than this number of seconds.
-
-## VAD - Padding (s)
-The number of seconds (floating point) to add to the beginning and end of each speech section. Setting this to a number
-larger than zero ensures that Whisper is more likely to correctly transcribe a sentence at the beginning of
-a speech section. However, this also increases the probability of Whisper assigning the wrong timestamp
-to each transcribed line. The default value is 1 second.
-
-## VAD - Prompt Window (s)
-The text of a detected line will be included as a prompt to the next speech section, if the speech section starts at most this
-number of seconds after the line has finished. For instance, if a line ends at 10:00, and the next speech section starts at
-10:04, the line's text will be included if the prompt window is 4 seconds or more (10:04 - 10:00 = 4 seconds).
-
-Note that detected lines in gaps between speech sections will not be included in the prompt
-(if silero-vad or silero-vad-expand-into-gaps is used).
\ No newline at end of file
diff --git a/spaces/yufiofficial/MusicGenQ/audiocraft/modules/conv.py b/spaces/yufiofficial/MusicGenQ/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/yufiofficial/MusicGenQ/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
-                                 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
-    assert norm in CONV_NORMALIZATIONS
-    if norm == 'weight_norm':
-        return weight_norm(module)
-    elif norm == 'spectral_norm':
-        return spectral_norm(module)
-    else:
-        # We already checked that `norm` is in CONV_NORMALIZATIONS, so any other
-        # choice doesn't need reparametrization.
-        return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
-    """Return the proper normalization module. If causal is True, this will ensure the returned
-    module is causal, or return an error if the normalization doesn't support causal evaluation.
-    """
-    assert norm in CONV_NORMALIZATIONS
-    if norm == 'time_group_norm':
-        if causal:
-            raise ValueError("GroupNorm doesn't support causal evaluation.")
-        assert isinstance(module, nn.modules.conv._ConvNd)
-        return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
-    else:
-        return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
-                                 padding_total: int = 0) -> int:
-    """See `pad_for_conv1d`.
-    """
-    length = x.shape[-1]
-    n_frames = (length - kernel_size + padding_total) / stride + 1
-    ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
-    return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
-    """Pad for a convolution to make sure that the last window is full.
-    Extra padding is added at the end. This is required to ensure that we can rebuild
-    an output of the same length, as otherwise, even with padding, some time steps
-    might get removed.
-    For instance, with total padding = 4, kernel size = 4, stride = 2:
-        0 0 1 2 3 4 5 0 0   # (0s are padding)
-        1   2   3           # (output frames of a convolution, last 0 is never used)
-        0 0 1 2 3 4 5 0     # (output of tr. conv., but pos. 5 is going to get removed as padding)
-            1 2 3 4         # once you removed padding, we are missing one time step !
-    """
-    extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
-    return F.pad(x, (0, extra_padding))
-
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
-    """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
-    If this is the case, we insert extra 0 padding to the right before the reflection happens.
-    """
-    length = x.shape[-1]
-    padding_left, padding_right = paddings
-    assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
-    if mode == 'reflect':
-        max_pad = max(padding_left, padding_right)
-        extra_pad = 0
-        if length <= max_pad:
-            extra_pad = max_pad - length + 1
-            x = F.pad(x, (0, extra_pad))
-        padded = F.pad(x, paddings, mode, value)
-        end = padded.shape[-1] - extra_pad
-        return padded[..., :end]
-    else:
-        return F.pad(x, paddings, mode, value)
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
-    """Remove padding from x, properly handling zero padding. Only for 1d!
- """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. 
- """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
-        if self.causal:
-            # Trim the padding on the right according to the specified ratio
-            # if trim_right_ratio = 1.0, trim everything from right
-            padding_right = math.ceil(padding_total * self.trim_right_ratio)
-            padding_left = padding_total - padding_right
-            y = unpad1d(y, (padding_left, padding_right))
-        else:
-            # Asymmetric padding required for odd strides
-            padding_right = padding_total // 2
-            padding_left = padding_total - padding_right
-            y = unpad1d(y, (padding_left, padding_right))
-        return y
diff --git a/spaces/zej97/AI-Research-Assistant/test/test2.py b/spaces/zej97/AI-Research-Assistant/test/test2.py
deleted file mode 100644
index 81ff95ebc30ac1c8b744ebfe3bea58dce8f8ffcb..0000000000000000000000000000000000000000
--- a/spaces/zej97/AI-Research-Assistant/test/test2.py
+++ /dev/null
@@ -1,4 +0,0 @@
-import test3 as test3
-
-def generator_():
-    yield from test3.generator()
\ No newline at end of file
diff --git a/spaces/zideliu/styledrop/timm/models/xception.py b/spaces/zideliu/styledrop/timm/models/xception.py
deleted file mode 100644
index a61548dc5f8ddba37cb183aaa3345960ef7b5a24..0000000000000000000000000000000000000000
--- a/spaces/zideliu/styledrop/timm/models/xception.py
+++ /dev/null
@@ -1,230 +0,0 @@
-"""
-Ported to PyTorch thanks to [tstandley](https://github.com/tstandley/Xception-PyTorch)
-
-@author: tstandley
-Adapted by cadene
-
-Creates an Xception Model as defined in:
-
-Francois Chollet
-Xception: Deep Learning with Depthwise Separable Convolutions
-https://arxiv.org/pdf/1610.02357.pdf
-
-These weights were ported from the Keras implementation. They achieve the following performance on the validation set:
-
-Loss: 0.9173, Prec@1: 78.892, Prec@5: 94.292
-
-REMEMBER to set your image size to 3x299x299 for both test and validation
-
-normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
-                                 std=[0.5, 0.5, 0.5])
-
-The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
-"""
-
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .helpers import build_model_with_cfg
-from .layers import create_classifier
-from .registry import register_model
-
-__all__ = ['Xception']
-
-default_cfgs = {
-    'xception': {
-        'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/xception-43020ad28.pth',
-        'input_size': (3, 299, 299),
-        'pool_size': (10, 10),
-        'crop_pct': 0.8975,
-        'interpolation': 'bicubic',
-        'mean': (0.5, 0.5, 0.5),
-        'std': (0.5, 0.5, 0.5),
-        'num_classes': 1000,
-        'first_conv': 'conv1',
-        'classifier': 'fc'
-        # The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
-    }
-}
-
-
-class SeparableConv2d(nn.Module):
-    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, padding=0, dilation=1):
-        super(SeparableConv2d, self).__init__()
-
-        self.conv1 = nn.Conv2d(
-            in_channels, in_channels, kernel_size, stride, padding, dilation, groups=in_channels, bias=False)
-        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, 1, 0, 1, 1, bias=False)
-
-    def forward(self, x):
-        x = self.conv1(x)
-        x = self.pointwise(x)
-        return x
-
-
-class Block(nn.Module):
-    def __init__(self, in_channels, out_channels, reps, strides=1, start_with_relu=True, grow_first=True):
-        super(Block, self).__init__()
-
-        if out_channels != in_channels or strides != 1:
-            self.skip = nn.Conv2d(in_channels, out_channels, 1, stride=strides, bias=False)
-            self.skipbn = nn.BatchNorm2d(out_channels)
-        else:
-            self.skip = None
-
-        rep = []
-        for i in range(reps):
-            if grow_first:
- inc = in_channels if i == 0 else out_channels - outc = out_channels - else: - inc = in_channels - outc = in_channels if i < (reps - 1) else out_channels - rep.append(nn.ReLU(inplace=True)) - rep.append(SeparableConv2d(inc, outc, 3, stride=1, padding=1)) - rep.append(nn.BatchNorm2d(outc)) - - if not start_with_relu: - rep = rep[1:] - else: - rep[0] = nn.ReLU(inplace=False) - - if strides != 1: - rep.append(nn.MaxPool2d(3, strides, 1)) - self.rep = nn.Sequential(*rep) - - def forward(self, inp): - x = self.rep(inp) - - if self.skip is not None: - skip = self.skip(inp) - skip = self.skipbn(skip) - else: - skip = inp - - x += skip - return x - - -class Xception(nn.Module): - """ - Xception optimized for the ImageNet dataset, as specified in - https://arxiv.org/pdf/1610.02357.pdf - """ - - def __init__(self, num_classes=1000, in_chans=3, drop_rate=0., global_pool='avg'): - """ Constructor - Args: - num_classes: number of classes - """ - super(Xception, self).__init__() - self.drop_rate = drop_rate - self.global_pool = global_pool - self.num_classes = num_classes - self.num_features = 2048 - - self.conv1 = nn.Conv2d(in_chans, 32, 3, 2, 0, bias=False) - self.bn1 = nn.BatchNorm2d(32) - self.act1 = nn.ReLU(inplace=True) - - self.conv2 = nn.Conv2d(32, 64, 3, bias=False) - self.bn2 = nn.BatchNorm2d(64) - self.act2 = nn.ReLU(inplace=True) - - self.block1 = Block(64, 128, 2, 2, start_with_relu=False) - self.block2 = Block(128, 256, 2, 2) - self.block3 = Block(256, 728, 2, 2) - - self.block4 = Block(728, 728, 3, 1) - self.block5 = Block(728, 728, 3, 1) - self.block6 = Block(728, 728, 3, 1) - self.block7 = Block(728, 728, 3, 1) - - self.block8 = Block(728, 728, 3, 1) - self.block9 = Block(728, 728, 3, 1) - self.block10 = Block(728, 728, 3, 1) - self.block11 = Block(728, 728, 3, 1) - - self.block12 = Block(728, 1024, 2, 2, grow_first=False) - - self.conv3 = SeparableConv2d(1024, 1536, 3, 1, 1) - self.bn3 = nn.BatchNorm2d(1536) - self.act3 = nn.ReLU(inplace=True) - - self.conv4 = SeparableConv2d(1536, self.num_features, 3, 1, 1) - self.bn4 = nn.BatchNorm2d(self.num_features) - self.act4 = nn.ReLU(inplace=True) - self.feature_info = [ - dict(num_chs=64, reduction=2, module='act2'), - dict(num_chs=128, reduction=4, module='block2.rep.0'), - dict(num_chs=256, reduction=8, module='block3.rep.0'), - dict(num_chs=728, reduction=16, module='block12.rep.0'), - dict(num_chs=2048, reduction=32, module='act4'), - ] - - self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) - - # #------- init weights -------- - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def get_classifier(self): - return self.fc - - def reset_classifier(self, num_classes, global_pool='avg'): - self.num_classes = num_classes - self.global_pool, self.fc = create_classifier(self.num_features, self.num_classes, pool_type=global_pool) - - def forward_features(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.act1(x) - - x = self.conv2(x) - x = self.bn2(x) - x = self.act2(x) - - x = self.block1(x) - x = self.block2(x) - x = self.block3(x) - x = self.block4(x) - x = self.block5(x) - x = self.block6(x) - x = self.block7(x) - x = self.block8(x) - x = self.block9(x) - x = self.block10(x) - x = self.block11(x) - x = self.block12(x) - - x = self.conv3(x) - x = self.bn3(x) - x = self.act3(x) - - x = self.conv4(x) - x = self.bn4(x) 
-        x = self.act4(x)
-        return x
-
-    def forward(self, x):
-        x = self.forward_features(x)
-        x = self.global_pool(x)
-        if self.drop_rate:
-            x = F.dropout(x, self.drop_rate, training=self.training)
-        x = self.fc(x)
-        return x
-
-
-def _xception(variant, pretrained=False, **kwargs):
-    return build_model_with_cfg(
-        Xception, variant, pretrained, default_cfg=default_cfgs[variant],
-        feature_cfg=dict(feature_cls='hook'), **kwargs)
-
-
-@register_model
-def xception(pretrained=False, **kwargs):
-    return _xception('xception', pretrained=pretrained, **kwargs)
diff --git a/spaces/zncook/chatGPT/baidu_translate/module.py b/spaces/zncook/chatGPT/baidu_translate/module.py
deleted file mode 100644
index f19d8f92a4a02cda3c1c018e36be6deb32e93af1..0000000000000000000000000000000000000000
--- a/spaces/zncook/chatGPT/baidu_translate/module.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import argparse
-import random
-from hashlib import md5
-from typing import Optional
-
-import requests
-
-import paddlehub as hub
-from paddlehub.module.module import moduleinfo
-from paddlehub.module.module import runnable
-from paddlehub.module.module import serving
-
-
-def make_md5(s, encoding='utf-8'):
-    return md5(s.encode(encoding)).hexdigest()
-
-
-@moduleinfo(name="baidu_translate",
-            version="1.0.0",
-            type="text/machine_translation",
-            summary="",
-            author="baidu-nlp",
-            author_email="paddle-dev@baidu.com")
-class BaiduTranslate:
-
-    def __init__(self, appid=None, appkey=None):
-        """
-        :param appid: appid for requesting Baidu translation service.
-        :param appkey: appkey for requesting Baidu translation service.
-        """
-        # Set your own appid/appkey.
-        if appid is None:
-            self.appid = '20201015000580007'
-        else:
-            self.appid = appid
-        if appkey is None:
-            self.appkey = 'IFJB6jBORFuMmVGDRud1'
-        else:
-            self.appkey = appkey
-        self.url = 'http://api.fanyi.baidu.com/api/trans/vip/translate'
-
-    def translate(self, query: str, from_lang: Optional[str] = "en", to_lang: Optional[str] = "zh"):
-        """
-        Translate text between languages using the Baidu translation service.
-
-        :param query: Text to be translated.
-        :param from_lang: Source language.
-        :param to_lang: Dst language.
-
-        Return translated string.
-        """
-        # Generate salt and sign
-        salt = random.randint(32768, 65536)
-        sign = make_md5(self.appid + query + str(salt) + self.appkey)
-
-        # Build request
-        headers = {'Content-Type': 'application/x-www-form-urlencoded'}
-        payload = {'appid': self.appid, 'q': query, 'from': from_lang, 'to': to_lang, 'salt': salt, 'sign': sign}
-
-        # Send request
-        try:
-            r = requests.post(self.url, params=payload, headers=headers)
-            result = r.json()
-        except Exception as e:
-            error_msg = str(e)
-            raise RuntimeError(error_msg)
-        if 'error_code' in result:
-            raise RuntimeError(result['error_msg'])
-        return result['trans_result'][0]['dst']
-
-    @runnable
-    def run_cmd(self, argvs):
-        """
-        Run as a command.
-        """
-        self.parser = argparse.ArgumentParser(description="Run the {} module.".format(self.name),
-                                              prog='hub run {}'.format(self.name),
-                                              usage='%(prog)s',
-                                              add_help=True)
-        self.arg_input_group = self.parser.add_argument_group(title="Input options", description="Input data. Required")
-        self.add_module_input_arg()
-        args = self.parser.parse_args(argvs)
-        if args.appid is not None and args.appkey is not None:
-            self.appid = args.appid
-            self.appkey = args.appkey
-        result = self.translate(args.query, args.from_lang, args.to_lang)
-        return result
-
-    @serving
-    def serving_method(self, query, from_lang, to_lang):
-        """
-        Run as a service.
- """ - return self.translate(query, from_lang, to_lang) - - def add_module_input_arg(self): - """ - Add the command input options. - """ - self.arg_input_group.add_argument('--query', type=str) - self.arg_input_group.add_argument('--from_lang', type=str, default='en', help="源语言") - self.arg_input_group.add_argument('--to_lang', type=str, default='zh', help="目标语言") - self.arg_input_group.add_argument('--appid', type=str, default=None, help="注册得到的个人appid") - self.arg_input_group.add_argument('--appkey', type=str, default=None, help="注册得到的个人appkey")