diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/2021 Crackdown Band.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/2021 Crackdown Band.md deleted file mode 100644 index e6d563667fd4e25ddb88b88772419a59a7a47ca7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/2021 Crackdown Band.md +++ /dev/null @@ -1,23 +0,0 @@ -
-

Crackdown Band: The Italian Hardcore Metal Sensation

-

Crackdown is a band from Ravenna, Italy, that plays a blend of hardcore and metal music. The band was formed in 1995 and released their first album, 74, in 1997. The album featured a raw and aggressive sound that earned them a loyal fan base in the underground scene.

-

crackdown band


DOWNLOAD ✶✶✶ https://byltly.com/2uKzTu



-

In 1998, Crackdown signed with Diehard Music Worldwide and released their second album, Rise Up. The album showcased a more mature and refined sound, with influences from bands like Biohazard, Machine Head, and Pantera. The album also featured a guest appearance by Evan Seinfeld from Biohazard on the song "Never".

-

Crackdown toured extensively in Europe and Japan to promote their album, sharing the stage with bands like Pro-Pain, Agnostic Front, Madball, and Snapcase. The band also appeared on several compilations and tribute albums, such as A Tribute to Sick of It All and A Tribute to Slayer.

-

Crackdown is currently working on new material and plans to release their third album soon. The band consists of Andrea (vocals), Dima (guitar), Pippo (bass), and Lorenzo (drums). They are known for their energetic and powerful live performances, as well as their social and political lyrics.

-

If you are a fan of hardcore metal music, you should check out Crackdown. They are one of the most promising bands in the Italian scene and have a lot to offer to the genre. You can follow them on Facebook or watch some of their videos on YouTube.

- -

Crackdown's Influences and Style

-

Crackdown's music is influenced by various bands and genres, such as hardcore, metal, punk, rap, and industrial. The band cites Biohazard, Machine Head, Pantera, Sick of It All, Slayer, Sepultura, Cypress Hill, and Ministry as some of their main inspirations.

-

-

Crackdown's style is characterized by heavy riffs, groovy rhythms, fast tempos, and aggressive vocals. The band also incorporates elements of rap and electronic music in some of their songs, creating a diverse and original sound. The band's lyrics deal with topics such as social injustice, corruption, violence, racism, and personal struggles.

- -

Crackdown's Discography and Reviews

-

Crackdown has released two albums so far: 74 (1997) and Rise Up (1998). Both albums received positive reviews from critics and fans alike. Here are some of the highlights from their discography:

- -

You can find Crackdown's albums on Discogs or on streaming platforms like Spotify and Apple Music.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/3d Object Converter V5.30 Serial Fix.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/3d Object Converter V5.30 Serial Fix.md deleted file mode 100644 index 22bed0ad40bf7c564a791a2535fcf65a5b282dfb..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/3d Object Converter V5.30 Serial Fix.md +++ /dev/null @@ -1,129 +0,0 @@ - -

3D Object Converter V5.30 Serial

-

If you are looking for a powerful and versatile tool that can help you view, convert, and manipulate 3D models in different formats, you might want to check out 3D Object Converter V5.30 Serial. This is a shareware tool that allows you to interactively view and translate 3D polygon models between more than 700 file formats. In this article, we will show you what 3D Object Converter can do for you, how to download and install it, how to use it, and some tips and tricks for getting the most out of it.

-

3d Object Converter V5.30 Serial


Download File >>> https://byltly.com/2uKyDY



-

What is 3D Object Converter?

-

3D Object Converter is a software application that was developed by Zoltán Karpati, a Hungarian programmer who has been working on it since 1999. The software is designed to help users who work with 3D models in various fields, such as engineering, architecture, animation, gaming, education, and hobby. The software can handle almost any 3D file format that exists, including popular ones like STL, OBJ, 3DS, FBX, DXF, LWO, MDL, MD2, MD3, Z3AM, and many more. The software can also import and export data from various CAD/CAM/CAE systems, such as AutoCAD, CATIA, SolidWorks, Pro/Engineer, Inventor, Maya, Blender, SketchUp, etc.

-

The main features and benefits of using 3D Object Converter are:

- -

How to download and install 3D Object Converter V5.30 Serial

-

To download and install 3D Object Converter V5.30 Serial, you need to follow these steps:

-
    -
  1. Go to the official website of 3D Object Converter at http://web.axelero.hu/karpo/.
  2. -
  3. Click on the Download link on the left side menu.
  4. -
  5. Select the Download Now! button under the v5.30 (2014-10-06) section.
  6. -
  7. Save the file objconv530.zip (4.79 MB) on your computer.
  8. -
  9. Unzip the file using a program like WinZip or WinRAR.
  10. -
  11. You will find two files inside: objconv.exe (the main executable file) and file_id.diz (the file containing the serial number).
  12. -
  13. To install 3D Object Converter, you just need to copy objconv.exe to any folder on your computer. You can also create a shortcut on your desktop or start menu for easy access.
  14. -
  15. To register 3D Object Converter, you need to run objconv.exe, go to the About/Register... menu item, enter your name, and copy-paste the serial number from file_id.diz.
  16. -
  17. You will see a message confirming that your registration was successful.
  18. -
  19. You can now use 3D Object Converter without any limitations for as long as you want.
  20. -
-

How to use 3D Object Converter V5.30 Serial

-

To use 3D Object Converter V5.30 Serial, you need to follow these steps:

-
    -
  1. Run objconv.exe. You will see a window with a toolbar, a menu bar, a status bar, and a blank workspace.
  2. -
  3. To open a 3D model file, go to the File/Open... menu item, or click on the Open button on the toolbar, or press Ctrl+O on your keyboard.
  4. -
  5. A dialog box will appear where you can browse your computer for the file you want to open. You can also type the file name or drag-and-drop it from another window. You can filter the files by their extensions using the drop-down list at the bottom.
  6. -
  7. Select the file you want to open and click on the Open button. The file will be loaded into 3D Object Converter. You will see some information about the file on the status bar, such as its name, size, format, vertex count, face count, etc.
  8. -
  9. You can view the 3D model in different modes by clicking on the buttons on the toolbar or using the keyboard shortcuts. The available modes are:
  10. -
  11. You can rotate, pan, zoom, and fit the 3D model in the workspace by using your mouse or keyboard. The available commands are:
  12. -
  13. To convert a 3D model file, go to the File/Save As... menu item, or click on the Save button on the toolbar, or press Ctrl+S on your keyboard.
  14. -
  15. A dialog box will appear where you can choose the output file format from a drop-down list at the bottom. You can also specify the output file name and location.
  16. -
  17. Depending on the output file format you choose, you may see another dialog box where you can adjust some options for the conversion, such as compression level, quality level, texture mapping, material properties, etc.
  18. -
  19. Click on the Save button to start the conversion process. The file will be saved in the output format you selected. You will see some information about the conversion on the status bar, such as time elapsed, file size, vertex count, face count, etc.
  20. -
-

Tips and tricks for using 3D Object Converter V5.30 Serial

-

To get the most out of 3D Object Converter V5.30 Serial, here are some tips and tricks you can use:

- -

Conclusion

-

In conclusion, 3D Object Converter V5.30 Serial is a powerful and versatile tool that can help you view, convert, and manipulate 3D models in different formats. It supports almost any 3D file format that exists, and it offers a lot of features and options for editing, analyzing, repairing, and optimizing 3D models. It is easy to use and fast to convert, and it can handle large and complex 3D models with ease. If you are looking for a reliable 3D file converter, you should definitely give 3D Object Converter V5.30 Serial a try.

-


-

If you have any questions or feedback about this article, please feel free to leave a comment below. We would love to hear from you!

-

Frequently Asked Questions

-

Here are some frequently asked questions about 3D Object Converter V5.30 Serial:

-
    -
  1. How much does 3D Object Converter cost?
    -Answer: 3D Object Converter is shareware and you must register the program to continue using it after the 30-day trial period. The registration fee is $50 USD for a single user license, which includes free updates for all future versions of 3D Object Converter.
  2. -
  3. What are the system requirements for 3D Object Converter?
    -Answer: 3D Object Converter works on Windows XP, Vista, 7, 8, 8.1, and 10 (32-bit and 64-bit). It requires at least a Pentium III processor, 256 MB of RAM, and 10 MB of free disk space. It also requires DirectX 9.0c or higher and OpenGL 1.1 or higher for rendering 3D models.
  4. -
  5. Can I use 3D Object Converter on Mac or Linux?
    -Answer: No, unfortunately, 3D Object Converter is only available for Windows at this time. However, you may be able to run it on Mac or Linux using a virtual machine or a compatibility layer like Wine.
  6. -
  7. Can I use 3D Object Converter for commercial purposes?
    -Answer: Yes, you can use 3D Object Converter for commercial purposes as long as you have a valid license. You can also distribute your converted files without any restrictions.
  8. -
  9. Where can I find more information and support for 3D Object Converter?
    -Answer: You can find more information and support for 3D Object Converter on its official website at http://web.axelero.hu/karpo/. You can also contact the developer by email at karpo@axelero.hu.
  10. -
-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cs 1.6 Bot Paketi Download Gezginler Enhance Your Counter Strike 1.6 Experience with Bots.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cs 1.6 Bot Paketi Download Gezginler Enhance Your Counter Strike 1.6 Experience with Bots.md deleted file mode 100644 index 6f61367dfbb60f46cfb073dc984d7fa95fdca9e5..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cs 1.6 Bot Paketi Download Gezginler Enhance Your Counter Strike 1.6 Experience with Bots.md +++ /dev/null @@ -1,109 +0,0 @@ - -

Cs 1.6 Bot Paketi Download Gezginler: How to Play Counter Strike 1.6 with Bots

-

Introduction

-

Counter Strike 1.6 is one of the most popular and legendary first-person shooter games ever created. It has millions of fans around the world who enjoy playing it online or offline with their friends or strangers. However, sometimes you may not have access to the internet, or you may want to practice your skills and have fun without worrying about other players. In that case, you may want to play Counter Strike 1.6 with bots.

-

Bots are computer-controlled players that can simulate human behavior and actions in the game. They can join your team or the enemy team, and they can follow commands, shoot, move, plant bombs, defuse bombs, and more. Playing with bots can be a great way to improve your aim, reflexes, strategy, and tactics in Counter Strike 1.6.

-

Cs 1.6 Bot Paketi Download Gezginler


Download File ::: https://byltly.com/2uKvk3



-

But how can you play Counter Strike 1.6 with bots? You need a special program called Cs 1.6 Bot Paketi, which is a bot package that adds bots to your game and lets you customize them according to your preferences. In this article, we will show you how to download, install, and use Cs 1.6 Bot Paketi in a few easy steps.

-

How to download and install Cs 1.6 Bot Paketi?

-

Step 1: Download Cs 1.6 Bot Paketi from a reliable source

-

The first thing you need to do is to download Cs 1.6 Bot Paketi from a trustworthy website that offers safe and virus-free downloads. One such website is Bilgibak, which provides a direct link to download Cs 1.6 Bot Paketi for free.

-

Alternatively, you can also use Full Türkçe İndir, which also offers a free download link for Cs 1.6 Bot Paketi along with detailed instructions on how to install it.

-

Once you have downloaded Cs 1.6 Bot Paketi, you will get a zip file that contains the setup file and some other files.

-

Step 2: Run the setup file and choose your language

-

The next step is to run the setup file that you have downloaded from the zip file. You can do this by double-clicking on it or right-clicking on it and choosing "Run as administrator".

-


-

A window will pop up asking you to choose your language for the installation process. You can choose from Turkish, English, French, German, or Spanish.

-

Step 3: Select the destination folder for Counter Strike 1.6

-

After choosing your language, you will see another window that asks you to select the destination folder for Counter Strike 1.6.

-

If you have already installed Counter Strike 1.6 on your computer, you should select the same folder where it is located.

-

If you have not installed Counter Strike 1.6 yet, you should select a folder where you want to install it.

-

You can browse for the folder by clicking on the "Browse" button or typing in the path manually.

-

Step 4: Wait for the installation to finish and launch Counter Strike 1.6

-

The final step is to wait for the installation of Cs 1.6 Bot Paketi to finish.

-

This may take a few minutes depending on your computer speed and internet connection.

-

When the installation is done, you will see a message that says "Installation Complete". You can click on "Finish" to close the window.

-

Now you can launch Counter Strike 1.6 by clicking on its icon on your desktop or start menu.

-

How to use Cs 1.6 Bot Paketi?

-

Step 1: Create a new game or join an existing one

-

To use Cs 1.6 Bot Paketi, you need to create a new game or join an existing one.

-

To create a new game, go to "New Game" from the main menu of Counter Strike 1.6.

-

You will see a list of maps that you can choose from.

-

Select the map that you want to play on and click on "Start".

-

To join an existing game, go to "Find Servers" from the main menu of Counter Strike 1.6.

-

You will see a list of servers that are available online.

-

Select the server that you want to join and click on "Connect".

-

Step 2: Press "H" to open the bot menu

-

Once you are in a game, press "H" on your keyboard to open the bot menu.

-

This is where you can add bots to your game and customize them according to your preferences.

-

Step 3: Add bots to your team or the enemy team

-

In the bot menu, you will see several options that allow you to add bots to your team or the enemy team.

-

You can add as many bots as you want by clicking on the "+" button next to each option.

-

You can also remove bots by clicking on the "-" button next to each option.

-

The options are:

- Add CT bot: This adds a bot to your team if you are playing as counter-terrorists (CT) or to the enemy team if you are playing as terrorists (T).
- Add T bot: This adds a bot to your team if you are playing as terrorists (T) or to the enemy team if you are playing as counter-terrorists (CT).
- Add random bot: This adds a bot randomly either to your team or the enemy team.
- Fill server with bots: This fills up all the empty slots in both teams with bots.
- Kick all bots: This removes all bots from both teams.

Step 4: Customize the bot settings and difficulty level

-

In addition to adding bots, you can also customize their settings and difficulty level in the bot menu.

- Bot quota: This sets how many bots will be added automatically when a new round starts.
- Bot difficulty: This sets how smart and skilled the bots will be.
- Bot prefix: This sets what name prefix will be used for all bots.
- Bot chat: This sets whether bots will talk during gameplay.
- Bot weapon mode: This sets what weapons bots will use.
- Bot knife only mode: This sets whether bots will only use knives.
- Bot defuse bomb mode: This sets whether bots will try to defuse bombs.
- Bot don't shoot mode: This sets whether bots will not shoot at all.

Step 5: Enjoy playing Counter Strike 1.6 with bots

-

Now that you have added and customized bots to your game, you can enjoy playing Counter Strike 1.6 with them.

-

You can use the bot menu to change the bot settings anytime during the game.

-

You can also use some keyboard shortcuts to control the bots more easily.

-

Here are some of the most useful ones:

- Y: Chat with all players
- U: Chat with your team only
- C: Voice communication with your team only
- V: Voice communication with all players
- E: Use items or interact with objects
- R: Reload your weapon
- F: Flashlight on/off
- Q: Switch between your last two weapons
- 1: Select your primary weapon
- 2: Select your secondary weapon
- 3: Select your knife
- 4: Select your grenade
- 5: Select your bomb (if you have one)
- Tab: Show the scoreboard

Conclusion

-

In this article, we have shown you how to download, install, and use Cs 1.6 Bot Paketi to play Counter Strike 1.6 with bots.

-

Playing with bots can be a great way to improve your skills and have fun in Counter Strike 1.6.

-

You can add as many bots as you want, customize their settings and difficulty level, and control them with keyboard shortcuts.

-

If you are looking for a reliable source to download Cs 1.6 Bot Paketi, you can use Bilgibak or Full Türkçe İndir, which offer safe and virus-free downloads.

-

If you want to learn more tips and tricks for Counter Strike 1.6, you can check out CCM or Steam Community, which offer useful guides and tutorials.

-

We hope you enjoyed this article and found it helpful. Now go ahead and play Counter Strike 1.6 with bots!

FAQs:

Q: What is Counter Strike 1.6?
A: Counter Strike 1.6 is a competitive, online, multiplayer, first-person shooter game that requires teamwork, communication, and skill.

Q: What is Cs 1.6 Bot Paketi?
A: Cs 1.6 Bot Paketi is a bot package that adds bots to your game and lets you customize them according to your preferences.

Q: How do I download and install Cs 1.6 Bot Paketi?
A: You can download Cs 1.6 Bot Paketi from a reliable website like Bilgibak or Full Türkçe İndir, then run the setup file and select the destination folder for Counter Strike 1.6.

Q: How do I use Cs 1.6 Bot Paketi?
A: You can use Cs 1.6 Bot Paketi by pressing "H" to open the bot menu, then adding bots to your team or the enemy team, and customizing their settings and difficulty level.

Q: What are some keyboard shortcuts to control the bots?
A: Some of the keyboard shortcuts to control the bots are Y, U, C, V, E, R, F, Q, 1, 2, 3, 4, 5, and Tab.

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Sims 3 Supernatural No Cd Crack Not ) A Guide For Sims 3 Supernatural Fans.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Sims 3 Supernatural No Cd Crack Not ) A Guide For Sims 3 Supernatural Fans.md deleted file mode 100644 index 822e59ea0bc9579ab8a0714100357a15a0f4acdf..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Sims 3 Supernatural No Cd Crack Not ) A Guide For Sims 3 Supernatural Fans.md +++ /dev/null @@ -1,134 +0,0 @@ - -

HD Online Player (Sims 3 Supernatural No Cd Crack Not ): How to Play The Sims 3 Supernatural Without a CD

-

If you are a fan of The Sims 3 and its supernatural expansion pack, you might have encountered the problem of needing a CD to play the game. This can be inconvenient, especially if you have lost or damaged your CD, or if you want to play the game on multiple devices. Fortunately, there are ways to play The Sims 3 Supernatural without a CD, such as using a no-CD patch or crack, or using an online game streaming service. In this article, we will explain what these options are, how they work, and what are their pros and cons.

-

HD Online Player (Sims 3 Supernatural No Cd Crack Not )


Download Zip ››››› https://byltly.com/2uKztf



-

Introduction

-

What is The Sims 3 Supernatural?

-

The Sims 3 Supernatural is the seventh expansion pack for The Sims 3, a popular life simulation game developed by Maxis and published by Electronic Arts. The expansion pack was released in September 2012 and introduces several new life states, such as witches, werewolves, vampires, fairies, zombies, and ghosts. It also adds new features, such as lunar cycle, alchemy, magic spells, enchanted objects, and a new world called Moonlight Falls. The expansion pack allows players to create and control supernatural beings, explore their abilities and interactions, and experience mysterious events by the light of the full moon.

-

Why do you need a CD to play The Sims 3 Supernatural?

-

To play The Sims 3 Supernatural, you need to have the base game of The Sims 3 installed on your computer. You also need to have a physical copy of the expansion pack's CD or DVD inserted in your computer's drive while playing. This is because the game uses a copy protection system called SecuROM that checks for the presence of the original disc every time you launch the game. SecuROM is designed to prevent piracy and unauthorized copying of the game.

-

What are the drawbacks of using a CD to play The Sims 3 Supernatural?

-

While using a CD to play The Sims 3 Supernatural may seem like a simple and secure way to enjoy the game, it also has some drawbacks. Some of these drawbacks are:

- -

How to play The Sims 3 Supernatural without a CD

-

Option 1: Download and install a no-CD patch or crack

-

What is a no-CD patch or crack?

-

A no-CD patch or crack is a modified version of the game's executable file that bypasses or removes the SecuROM check. This means that you can play the game without inserting the original disc in your drive. A no-CD patch or crack can be downloaded from various websites that offer them for free or for a fee. However, you should be careful when downloading and installing such files, as they may contain viruses or malware that can harm your computer.

-

How to find and download a no-CD patch or crack for The Sims 3 Supernatural

-

To find and download a no-CD patch or crack for The Sims 3 Supernatural, you can follow these steps:

-
    -
  1. Search for "The Sims 3 Supernatural no cd" on Google or any other search engine.
  2. -
  3. Choose a website that offers a no-CD patch or crack for The Sims 3 Supernatural. Make sure that the website is trustworthy and reputable by checking its reviews and ratings.
  4. -
  5. Download the file that matches your game version and language.
  6. -
  7. Scan the file with an antivirus software before opening it.
  8. -
-

How to install and use a no-CD patch or crack for The Sims 3 Supernatural

-

To install and use a no-CD patch or crack for The Sims 3 Supernatural, you can follow these steps:

-
    -
  1. Backup your original game executable file (TS3W.exe) in case something goes wrong.
  2. -
  3. Copy and paste the downloaded file into your game installation folder (usually C:\Program Files\Electronic Arts\The Sims 3\Game\Bin).
  4. -
  5. Replace the original file with the downloaded file when prompted.
  6. -
  7. Launch the game as usual without inserting the disc in your drive.
  8. -
-

Pros and cons of using a no-CD patch or crack for The Sims 3 Supernatural

-

Using a no-CD patch or crack for The Sims 3 Supernatural has some pros and cons. Some of them are:

ProsCons
You don't need to have access to your CD every time you want to play the game.You may violate the terms of service and end user license agreement of the game by using an unauthorized modification.
You don't need to have a working CD or DVD drive on your computer.You may encounter compatibility issues with future updates or patches of the game.
-

Option 2: Use an online game streaming service

-

What is an online game streaming service?

-

An online game streaming service is a platform that lets you play games that are hosted on remote servers over the internet. You don't need to download or install anything on your device, except for a client app or a web browser. You can access a library of games that are available on the service, or link your own game accounts from other platforms. The service handles all the processing and rendering of the game, and streams the video and audio output to your device. You can control the game using your keyboard, mouse, gamepad, or touch screen.

-

How to find and use an online game streaming service for The Sims 3 Supernatural

-

To find and use an online game streaming service for The Sims 3 Supernatural, you can follow these steps:

-


-
    -
  1. Search for "online game streaming service" on Google or any other search engine.
  2. -
  3. Choose a service that offers The Sims 3 Supernatural in its library, or allows you to link your own game account from another platform. Some of the popular services are GeForce NOW, PlayStation Now, Stadia, and Amazon Luna.
  4. -
  5. Sign up for an account on the service and choose a subscription plan that suits your needs and budget. Some services offer free trials or tiers that let you play for a limited time or with certain restrictions.
  6. -
  7. Download the client app for the service on your device, or open the web browser that supports the service.
  8. -
  9. Log in to the service and browse for The Sims 3 Supernatural in its library, or link your own game account from another platform.
  10. -
  11. Launch the game and enjoy playing it without a CD.
  12. -
-

Pros and cons of using an online game streaming service for The Sims 3 Supernatural

-

Using an online game streaming service for The Sims 3 Supernatural has some pros and cons. Some of them are:

- - - - - - -
| Pros | Cons |
| --- | --- |
| You don't need to have a CD or a powerful PC to play the game. | You need to have a fast and stable internet connection to play the game smoothly. |
| You can play the game on any device that supports the service, such as a laptop, tablet, smartphone, or TV. | You may experience input lag or latency depending on your network speed and quality. |
| You can access a large library of games that are available on the service, or link your own game accounts from other platforms. | You may have to pay a monthly fee or buy individual games to use the service. |
| You don't have to worry about updates or patches of the game, as they are handled by the service. | You may lose access to the game if the service shuts down or removes it from its library. |
-

Conclusion

-

In conclusion, playing The Sims 3 Supernatural without a CD is possible with two options: using a no-CD patch or crack, or using an online game streaming service. Both options have their advantages and disadvantages, so you should choose the one that works best for you. No matter which option you choose, you can enjoy creating and controlling supernatural beings in a world full of magic, mystery, and mischief with The Sims 3 Supernatural!

-

FAQs

- -

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download Full Movie Yaariyan In 720p.md b/spaces/1gistliPinn/ChatGPT4/Examples/Download Full Movie Yaariyan In 720p.md deleted file mode 100644 index e72b38ebd26b55aaffeee62a919dfe62faf6c5d0..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download Full Movie Yaariyan In 720p.md +++ /dev/null @@ -1,32 +0,0 @@ -

Download Full Movie Yaariyan In 720p


Download Zip ☆☆☆☆☆ https://imgfil.com/2uxZLA



-
The novel's style of writing made it easy to read for young adults, which is what its subsequent film adaptations were aimed at. I have my husband. It has been entirely well worth the wait.

Is the faq: The process from the pre-creative stage to the finished product was very well thought out. That's awesome that the story was actually put to good use.

You're never alone.

I was immediately transported back in time to the previous few decades of my life, and I had a feeling I was watching a movie. The first sense of animation that I got was the fight sequence between Thor and Hulk.

What is the real story? Initially, I thought this book would be a mere story for young adults, and boy, was I wrong! I didn't want to end it all. I gave myself one year from that time to the movie's release date.

Get exclusive access to movie trailers, clips, features and film reviews!

My husband and I were able to make a lot of our own pizzas and pasta meals in a matter of minutes with just a microwave and a few ingredients. The best part was that we only spent $ per serving.

The toddler was able to play more than his usual games of pretend and imitation. It is a big problem because I'm a college student and this book reminds me of everything that I am missing out on.

A fun, quirky book for young adults. Not one to be read in one sitting, but the story is so well put together that it can be read in a few sittings.

Kibou no Kazoku [私ぼうの財産] by Yamada Kenjirou — | MangaDex

I was impressed with the movie's depiction of our homes and household goods. Even more amazing is how well the characters' emotions are reflected in their faces.

Love is all around us, as the saying goes, but why are we so focused on finding the perfect mate? The characters will charm anyone who has ever spent any time in a mall or department store.

Goku has learned how to do everything from riding a bicycle to driving an automobile. When I was younger, I would have loved to read this book. I'm not very religious, but I enjoy the book so much that I try to read it each year.

It's certainly one of the few anime movies that are actually family
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CSR Racing Mod APK Experience the Thrill of Drag Racing with Unlimited Money and Gold.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CSR Racing Mod APK Experience the Thrill of Drag Racing with Unlimited Money and Gold.md deleted file mode 100644 index 9262b23050b0226faaf395ba3d75c7fea40651dd..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/CSR Racing Mod APK Experience the Thrill of Drag Racing with Unlimited Money and Gold.md +++ /dev/null @@ -1,101 +0,0 @@ -
-

CSR Racing Mod APK: The Ultimate Drag Racing Experience

-

If you are a fan of drag racing games, you must have heard of CSR Racing. It is one of the most popular and addictive games in the genre, with millions of downloads and positive reviews. But what if you want to enjoy the game without any limitations or restrictions? That's where CSR Racing Mod APK comes in handy. In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, and how to download and install it on your device.

-

What is CSR Racing?

-

CSR Racing is a free-to-play drag racing game developed by NaturalMotion Games and published by Zynga. It was released in 2012 for iOS and Android devices, and later for Windows and Mac OS. The game lets you race against the best drivers in the city, using over 200 licensed cars from top brands like Ferrari, Lamborghini, McLaren, Bugatti, and more. You can customize your cars with various upgrades and tuning options, and compete in different modes and events. You can also challenge other players online and join crews to earn rewards and reputation.

-

csr racing game mod apk


Download Zip === https://urlin.us/2uSXhl



-

Features of CSR Racing

-

Stunning graphics and realistic physics

-

One of the main attractions of CSR Racing is its amazing graphics and sound effects. The game uses high-quality 3D models and textures to create realistic environments and car details. The game also features realistic physics and animations, making the races more immersive and thrilling. You can feel the speed, power, and adrenaline as you race through the streets of the city.

-

Over 200 licensed cars from top brands

-

Another feature that makes CSR Racing stand out is its huge collection of licensed cars from top brands. You can choose from over 200 cars from manufacturers like Ferrari, Lamborghini, McLaren, Bugatti, Aston Martin, Pagani, Koenigsegg, and more. Each car has its own stats, performance, and appearance. You can also unlock new cars as you progress through the game.

-

Customizable upgrades and tuning options

-

If you want to make your car faster and more powerful, you can customize it with various upgrades and tuning options. You can upgrade your engine, turbo, intake, nitrous, tires, gearbox, body, and more. You can also tune your car to optimize its performance for different races. You can adjust the gear ratios, tire pressure, nitrous timing, launch control, and more.

-

Challenging races and events

-

CSR Racing offers a variety of races and events to keep you entertained. You can race against different opponents in different modes, such as Regulation Races, Ladder Races, Crew Battles, Daily Battles, Restriction Races, Manufacturer Races, Pro Cars Races, World Tour Races, etc. You can also participate in special events that offer exclusive rewards and prizes.

-

Online multiplayer and social features

-

If you want to test your skills against other players around the world, you can join the online multiplayer mode of CSR Racing. You can race against real players in real time or challenge them to beat your best times. You can also join or create a crew with other players to earn more rewards and reputation. You can chat with your crew members, share tips and strategies, and compete in crew championships.

-

What is CSR Racing Mod APK?

-

CSR Racing Mod APK is a modified version of the original CSR Racing game that offers some extra features and benefits that are not available in the official version. These features include unlimited money and gold, free shopping and upgrades, unlocked cars and tracks, no ads, no root requirement, and more. These features make the game more fun and easy to play, as you don't have to worry about running out of resources, waiting for timers, or facing any restrictions. You can enjoy the game to the fullest without spending any real money or risking your device's security.

-

-

Benefits of CSR Racing Mod APK

-

Unlimited money and gold

-

Money and gold are the main currencies in CSR Racing. You need them to buy new cars, upgrade your existing ones, enter races and events, and more. However, earning money and gold in the game can be slow and tedious, especially if you want to buy the most expensive and powerful cars. That's why CSR Racing Mod APK gives you unlimited money and gold, so you can buy anything you want without any limitations. You can also use them to skip timers, refill your fuel, and more.

-

Free shopping and upgrades

-

Another benefit of CSR Racing Mod APK is that it allows you to shop and upgrade your cars for free. You don't have to spend any money or gold to buy new cars or upgrade your existing ones. You can simply choose any car you like from the shop and get it for free. You can also upgrade your car's engine, turbo, intake, nitrous, tires, gearbox, body, and more for free. You can make your car as fast and powerful as you want without any cost.

-

Unlocked cars and tracks

-

CSR Racing Mod APK also unlocks all the cars and tracks in the game for you. You don't have to complete any missions or challenges to unlock new cars or tracks. You can access them all from the start of the game. You can choose from over 200 cars from top brands like Ferrari, Lamborghini, McLaren, Bugatti, Aston Martin, Pagani, Koenigsegg, and more. You can also race on different tracks in different locations like London, Rome, New York, Tokyo, etc.

-

No ads and no root required

-

One of the best things about CSR Racing Mod APK is that it removes all the annoying ads from the game. You don't have to watch any ads to get extra rewards or bonuses. You can enjoy the game without any interruptions or distractions. Moreover, CSR Racing Mod APK does not require root access to work on your device. You don't have to root your device or risk its security to install and play the modded version of the game.

-

How to download and install CSR Racing Mod APK?

-

If you are interested in downloading and installing CSR Racing Mod APK on your device, you can follow these simple steps:

-

Step-by-step guide

-
    -
  1. First of all, you need to uninstall the original version of CSR Racing from your device if you have it installed.
  2. -
  3. Then, you need to download the CSR Racing Mod APK file from a trusted source. You can use this link to download it.
  4. -
  5. After downloading the file, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  6. -
  7. Next, you need to locate the downloaded file on your device and tap on it to start the installation process.
  8. -
  9. Follow the instructions on the screen and wait for the installation to finish.
  10. -
  11. Finally, you can launch the game from your app drawer or home screen and enjoy it.
  12. -
-

Conclusion

-

CSR Racing is a great drag racing game that offers stunning graphics, realistic physics, over 200 licensed cars, customizable upgrades and tuning options, challenging races and events, online multiplayer and social features, and more. However, if you want to enjoy the game without any limitations or restrictions, you should try CSR Racing Mod APK. It gives you unlimited money and gold, free shopping and upgrades, unlocked cars and tracks, no ads and root required, and more. It makes the game more fun and easy to play without spending any real money or risking your device's security. You can download and install CSR Racing Mod APK on your device by following our step-by-step guide above.

-

We hope this article was helpful for you. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

-

Frequently Asked Questions

-
    -
  1. Is CSR Racing Mod APK safe to use?
  2. -

    Yes, CSR Racing Mod APK is safe to use as long as you download it from a trusted source like ours. It does not contain any viruses or malware that could harm your device or data. However, you should always be careful when downloading and installing any modded or hacked apps from unknown sources, as they may contain malicious code or unwanted features.

    -
  3. What are the minimum requirements to play CSR Racing Mod APK?
  4. -

    To play CSR Racing Mod APK on your device, you need to have at least Android 4.0.3 or higher, 1 GB of RAM, and 2 GB of free storage space. You also need to have a stable internet connection to access the online features of the game.

    -
  5. Can I play CSR Racing Mod APK offline?
  6. -

    Yes, you can play CSR Racing Mod APK offline, but you will not be able to access some of the features that require an internet connection, such as online multiplayer, crew championships, daily battles, etc. You will also not be able to sync your progress with your Facebook account or cloud service.

    -
  7. Can I update CSR Racing Mod APK?
  8. -

    No, you cannot update CSR Racing Mod APK from the Google Play Store or any other source, as it is a modified version of the game that is not supported by the official developers. If you want to update the game, you will have to download and install the latest version of CSR Racing Mod APK from our website or any other trusted source.

    -
  9. Can I use CSR Racing Mod APK with my existing account?
  10. -

    No, you cannot use CSR Racing Mod APK with your existing account, as it may cause your account to be banned or suspended by the game's anti-cheat system. If you want to use CSR Racing Mod APK, you should create a new account or use a guest account to avoid any risks.

    -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Build Shoot and Survive in 1v1.LOL - The Fastest Battle Royale Experience.md b/spaces/1phancelerku/anime-remove-background/Build Shoot and Survive in 1v1.LOL - The Fastest Battle Royale Experience.md deleted file mode 100644 index 48dbe16f9c3f435692984f68ca3bf36943d04b51..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Build Shoot and Survive in 1v1.LOL - The Fastest Battle Royale Experience.md +++ /dev/null @@ -1,114 +0,0 @@ -
-

1 vs 1 lol download: How to play the online building simulator and shooting game

-

If you are looking for a fast-paced, competitive, and fun online game that combines building and shooting skills, you might want to try 1 vs 1 lol. This game is a free-to-play browser-based game that has millions of players around the world. In this article, we will tell you everything you need to know about 1 vs 1 lol, how to download and install it, and how to improve your skills and enjoy it.

-

What is 1 vs 1 lol?

-

A brief introduction to the game and its features

-

1 vs 1 lol is an online building simulator and shooting game that was developed by Lior Alterman in December 2019. The game is inspired by popular games like Fortnite, but with simpler graphics and mechanics. The game allows you to build platforms, ramps, walls, and doors to create your own structures and defend yourself from other players. You can also use various weapons, such as assault rifles, shotguns, axes, and sniper rifles, to shoot and eliminate your opponents. The game has multiple game modes, such as battle royale, 1v1, 2v2, box fight, zone wars, and more. You can also customize your character's appearance, name, controls, and settings. The game is free to play and does not require any registration or download. You can play it on your web browser or on your mobile devices.

-

1 vs 1 lol download


DOWNLOAD ……… https://jinyurl.com/2uNUy5



-

The main game modes and how to play them

-

The main game mode of 1 vs 1 lol is battle royale, where you have to fight against up to 10 players in a shrinking map. The last player standing wins the match. You can also play in teams of two in duos mode. To play battle royale, you have to select the BR or BR Duos mode from the menu, then click on Play. You will be matched with other players in a few seconds. Once the match starts, you have to find weapons and materials on the map, build your structures, and shoot your enemies. You can also use the map button to see your location and the safe zone. You have to stay inside the safe zone or you will take damage from the storm. The safe zone will shrink over time, forcing you to move closer to your enemies. The last player or team alive wins the match.

-

Another popular game mode is 1v1 or 2v2, where you have to face another player or team in a small arena. You can choose between four weapons: assault rifle, shotgun, axe, or sniper rifle. You can also choose between four building platforms: wood, stone, metal, or glass. To play 1v1 or 2v2, you have to select the mode from the menu, then click on Play. You will be matched with another player or team in a few seconds. Once the match starts, you have to build your structures and shoot your opponent. You can also use the reset button to reset your structures if they are damaged or blocking your view. The first player or team to reach five kills wins the match.

-

There are also other game modes that you can try, such as box fight, where you have to fight inside a box-shaped arena; zone wars, where you have to survive in a randomly generated zone; just build, where you can practice your building skills without any enemies; and party mode, where you can create your own private room and invite your friends to play with you. You can also create your own custom game mode by changing the settings, such as the number of players, the weapons, the materials, the map, and the time limit. To play these modes, you have to select them from the menu, then click on Play or Create. You can also join other players' rooms by clicking on Join.

-


-

The benefits of playing 1 vs 1 lol

-

Playing 1 vs 1 lol can be very beneficial for you, especially if you are a fan of building and shooting games. Here are some of the benefits of playing 1 vs 1 lol:

- -

How to download and install 1 vs 1 lol?

-

The requirements and compatibility of the game

-

As mentioned before, 1 vs 1 lol is a browser-based game that does not require any download or registration. However, if you want to play it on your mobile devices, you will need to download and install the app. The app is compatible with Android and iOS devices, and it is free to download and play. The app requires at least Android 4.4 or iOS 9.0 to run smoothly. The app also requires an internet connection to play online with other players.

-

The steps to download and install the game on different platforms

-

To download and install the game on your mobile devices, you need to follow these steps:

-
    -
  1. Go to the official website of 1 vs 1 lol at https://1v1.lol/ or search for "1 vs 1 lol" on Google Play Store or App Store.
  2. -
  3. Click on the download button for your device (Android or iOS) and wait for the app to be downloaded.
  4. -
  5. Open the app and grant the necessary permissions for the app to function properly.
  6. -
  7. Enjoy playing 1 vs 1 lol on your mobile devices!
  8. -
-

To play the game on your web browser, you need to follow these steps:

-
1. Go to the official website of 1 vs 1 lol at https://1v1.lol/ or search for "1 vs 1 lol" on any web browser.
2. Click on the play button and wait for the game to load.
3. Enjoy playing 1 vs 1 lol on your web browser!

The tips and tricks to optimize the game performance and settings

-

Although 1 vs 1 lol is a simple and lightweight game, it can still lag or crash sometimes due to various factors, such as your device's specifications, your internet connection, or the game's server. To optimize the game performance and settings, you can try these tips and tricks:

- -

How to improve your skills and enjoy 1 vs 1 lol?

-

The practice modes and how to use them

-

If you are new to 1 vs 1 lol or want to improve your skills, you can use the practice modes that are available in the game. These modes allow you to practice your building and shooting skills without any enemies or pressure. You can also adjust the settings, such as the gravity, the speed, and the weapons, to suit your preferences. To use the practice modes, you have to select them from the menu, then click on Play. You can choose between two practice modes: just build and aim trainer. Just build mode lets you build unlimited structures with unlimited materials. Aim trainer mode lets you shoot at moving targets with different weapons. You can use these modes to master your building and shooting techniques and become a better player.

-

The best weapons and building strategies for different situations

-

One of the most important aspects of 1 vs 1 lol is knowing how to use the best weapons and building strategies for different situations. There are four weapons in the game: assault rifle, shotgun, axe, and sniper rifle. Each weapon has its own advantages and disadvantages, depending on the range, the damage, the accuracy, and the fire rate. Here are some tips on how to use each weapon effectively:

- -

As for building strategies, there are many ways to build your structures in 1 vs 1 lol, depending on your style and situation. Here are some of the most common building strategies that you can use:

-

The online community and resources for 1 vs 1 lol players

-

Another way to improve your skills and enjoy 1 vs 1 lol is to join the online community and access the resources for 1 vs 1 lol players. The online community consists of other players who share their experiences, tips, feedback, and suggestions about the game. You can interact with them through chat or voice, or join their clans or tournaments. You can also watch their live streams or videos, or follow their social media accounts. The online community can help you learn from other players, make new friends, and have more fun. To join the online community, you can visit the official website of 1 vs 1 lol at https://1v1.lol/ or search for "1 vs 1 lol" on platforms like YouTube, Twitch, Discord, Reddit, Twitter, and Facebook.

-

The resources for 1 vs 1 lol players consist of various tools and information that can help you play the game better. These include guides, tutorials, reviews, updates, news, and more. You can use these resources to learn more about the game's features, modes, weapons, building strategies, tips and tricks, and more. You can also use these resources to stay updated on the game's development, changes, events, and more. To access the resources for 1 vs 1 lol players, you can visit the official website of 1 vs 1 lol at https://1v1.lol/ or search for "1 vs 1 lol" on platforms like Google, YouTube, Wikipedia, and more.

-

Conclusion

-

A summary of the main points and a call to action

-

In conclusion, 1 vs 1 lol is an online building simulator and shooting game that is free to play and easy to access. You can play it on your web browser or on your mobile devices. You can choose from different game modes, such as battle royale, 1v1, 2v2, box fight, zone wars, and more. You can also customize your character's appearance, name, controls, and settings. You can improve your skills by practicing your building and shooting techniques in different situations. You can also join the online community and access the resources for 1 vs 1 lol players to learn from others and have more fun. If you are interested in playing 1 vs 1 lol, you can download and install it by following the steps in this article. You can also visit the official website of 1 vs 1 lol at https://1v1.lol/ for more information. What are you waiting for? Start playing 1 vs 1 lol today and enjoy the online building simulator and shooting game!

-

FAQs

-

Here are some of the frequently asked questions about 1 vs 1 lol:

-
1. Is 1 vs 1 lol safe to play?

    Yes, 1 vs 1 lol is safe to play as long as you follow the game's rules and terms of service. The game does not contain any viruses or malware that can harm your device or browser. The game also does not collect any personal or sensitive information from you without your consent. However, you should be careful when interacting with other players online, as they may not be who they claim to be. You should also avoid clicking on any suspicious links or ads that may appear on the game's website or app.

2. Is 1 vs 1 lol multiplayer?

    Yes, 1 vs 1 lol is multiplayer as it allows you to play online with other players around the world. You can play with random players in public matches or with your friends in private matches. You can also chat or voice with other players during the game. However, you can also play offline in practice modes if you want to play solo or without an internet connection.

3. Is 1 vs 1 lol cross-platform?

    Yes, 1 vs 1 lol is cross-platform as it allows you to play on different devices and platforms. You can play on your web browser or on your mobile devices (Android or iOS). You can also play with other players who are using different devices or platforms than yours.

4. How do I report a bug or a problem in 1 vs 1 lol?

    If you encounter a bug or a problem in 1 vs 1 lol, you can report it to the game's developer or support team by using the feedback button on the game's website or app. You can also contact them by email at lior@justbuild.lol. You should provide as much detail as possible about the bug or problem, such as the device, platform, mode, situation, and a screenshot of the issue. The developer or support team will try to fix the bug or problem as soon as possible.

5. How do I get better at 1 vs 1 lol?

    If you want to get better at 1 vs 1 lol, you need to practice your building and shooting skills regularly. You can use the practice modes to improve your techniques and learn from your mistakes. You can also watch other players' streams or videos to see how they play and what strategies they use. You can also join the online community and ask for advice or feedback from other players. You can also challenge yourself by playing against stronger opponents or in different modes. The more you play, the more you will improve and enjoy 1 vs 1 lol.

    -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download True Story in 480p Quality - The Best Site for Movies.md b/spaces/1phancelerku/anime-remove-background/Download True Story in 480p Quality - The Best Site for Movies.md deleted file mode 100644 index 66395b863c8d3d4f00650cc1e0d41d7bba12857d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download True Story in 480p Quality - The Best Site for Movies.md +++ /dev/null @@ -1,128 +0,0 @@ -
-

How to Download True Story in 480p Quality

-

True Story is a new crime thriller miniseries created by Eric Newman for Netflix. It stars Kevin Hart and Wesley Snipes as two brothers who get involved in a deadly situation after a night out in Philadelphia. If you are a fan of drama, suspense, and twists, you might want to watch this show. But what if you want to download True Story in 480p quality for offline viewing? In this article, we will show you how to do that from Netflix and other sources.

-

What is True Story and Why You Should Watch It

-

True Story is a limited series that premiered on Netflix on November 24, 2021. It consists of seven episodes, each ranging from 27 to 58 minutes. The series is based on an original idea by Kevin Hart, who also executive produces and stars as Kid, a world-famous comedian. Wesley Snipes plays his older brother Carlton, who has a troubled past and a shady agenda. The series follows the brothers as they try to escape from a dangerous situation that threatens to ruin Kid's career and life.

-




-

The Plot and the Cast of True Story

-

The plot of True Story is inspired by some of the real-life experiences of Kevin Hart, but it is not a biopic or a comedy. It is a dark and gritty story that explores the themes of fame, family, loyalty, and betrayal. The series begins with Kid returning to his hometown of Philadelphia for a comedy tour. He reunites with his brother Carlton, who he has not seen for years. Carlton convinces Kid to join him for a night out at a hotel, where they meet two women. The next morning, Kid wakes up to find one of the women dead in his bed. Carlton tells him that they have been set up by someone who wants to blackmail them. Kid has to decide whether to trust his brother or not, while also dealing with the police, the media, and his own conscience.

-

The cast of True Story includes some well-known actors and some newcomers. Kevin Hart and Wesley Snipes are the main stars, playing Kid and Carlton respectively. They are joined by Tawny Newsome as Billie, Kid's personal assistant; Paul Adelstein as Todd, Kid's manager; Will Catlett as Herschel, Kid's bodyguard; Chris Diamantopoulos as Savvas, a Greek mobster; Billy Zane as Ari, Savvas' brother; Lauren London as Monyca, Kid's ex-wife; Ash Santos as Daphne, one of the women at the hotel; John Ales as Nikos, Savvas' henchman; and Theo Rossi as Gene, a superfan of Kid.

-

The Reviews and the Ratings of True Story

-

True Story has received mixed reviews from critics and audiences. Some praised the series for its gripping plot, its unexpected twists, its stellar performances, especially by Hart and Snipes, and its exploration of the dark side of fame. Others criticized the series for its lack of originality, its implausible scenarios, its uneven tone, its excessive violence, and its wasted potential.

-

The series has a rating of 6.9 out of 10 on IMDb, based on over 6,000 user ratings. It has a rating of 57% on Rotten Tomatoes, based on 20 critic reviews. It has a rating of 7 out of 10 on IGN, based on one critic review.

-

How to Download True Story in 480p Quality from Netflix

-

If you want to watch True Story offline or save some data usage, you can download it in 480p quality from Netflix. Netflix allows you to download videos offline on your smartphone, tablet, or computer, as long as you have a Netflix subscription and the Netflix app installed. Here are the benefits and the steps of downloading True Story in 480p quality from Netflix.

-

The Benefits of Downloading Netflix Videos Offline

-

Downloading Netflix videos offline has several advantages, such as:

- -

Downloading True Story in 480p quality from Netflix is a good option if you want to enjoy the series in decent resolution, but also save some storage space and data usage. 480p is the standard definition (SD) quality, which means that the video has a resolution of 720 x 480 pixels. It is lower than high definition (HD) quality, which has a resolution of 1280 x 720 pixels or higher, but it is still clear and sharp enough for most devices and screens.

-


-

The Steps to Download True Story in 480p Quality from Netflix

-

To download True Story in 480p quality from Netflix, you need to follow these steps:

-
1. Open the Netflix app on your device and sign in with your account.
2. Search for True Story in the search bar and tap on the series title.
3. Select the episode that you want to download and tap on the download icon next to it. You can also tap on the download icon next to the series title to download all the episodes at once.
4. Wait for the download to complete. You can check the progress and manage your downloads in the downloads section of the app.
5. To watch your downloaded videos offline, go to the downloads section of the app and tap on the play icon next to the video title.
-

Note that you need enough storage space on your device to download True Story in 480p quality: each episode takes up about 300 MB, so all seven episodes add up to roughly 2.1 GB. You also need a Netflix plan that supports downloading videos offline. The basic plan only allows you to download videos on one device at a time, while the standard and premium plans allow downloads on two and four devices at a time, respectively.

-

How to Download True Story in 480p Quality from Other Sources

-

If you don't have a Netflix subscription or you want to download True Story in 480p quality from other sources, you might be tempted to look for unofficial websites that offer free downloads or streaming of the series. However, we strongly advise you against doing that, as it comes with many risks and disadvantages. Here are some of them:

-

The Risks of Downloading from Unofficial Websites

-

Downloading from unofficial websites is illegal, unethical, and unsafe. You might face some serious consequences, such as:

- -

The Alternatives to Download True Story in 480p Quality from Other Sources

-

If you want to download True Story in 480p quality from other sources legally and safely, you have some alternatives, such as:

- -

Conclusion

-

True Story is a thrilling miniseries that you don't want to miss. If you want to download it in 480p quality for offline viewing, you have two main options: Netflix or other sources . Netflix is the easiest and safest option, as it allows you to download the series with a few taps on your device. Other sources might be cheaper or faster, but they also come with many risks and disadvantages. You should always respect the intellectual property rights of the creators and distributors of True Story, and avoid downloading from illegal or unofficial websites. We hope this article has helped you learn how to download True Story in 480p quality from Netflix and other sources.

-

FAQs

-

Here are some frequently asked questions about downloading True Story in 480p quality:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Question | Answer |
| --- | --- |
| Can I download True Story in higher quality than 480p? | Yes, you can download True Story in higher quality than 480p from Netflix or other sources, if your device and internet connection support it. However, higher quality videos will take up more storage space and data usage than lower quality videos. |
| How long can I keep the downloaded videos of True Story on my device? | You can keep the downloaded videos of True Story on your device as long as you have an active Netflix subscription and the Netflix app installed. However, some videos might expire after a certain period of time or if your device stays offline for too long. You can check the expiration date of each video in the downloads section of the app. |
| Can I watch the downloaded videos of True Story on other devices? | You can watch the downloaded videos of True Story on other devices that have the Netflix app installed and are signed in with the same account. However, you might be limited by the number of devices that you can download videos on at a time, depending on your Netflix plan. |
| Can I share the downloaded videos of True Story with others? | No, you cannot share the downloaded videos of True Story with others, as they are encrypted and tied to your account. Sharing them might violate the terms of service of Netflix and the intellectual property rights of the creators and distributors of True Story. |
| Can I download True Story from other streaming platforms? | No, you cannot download True Story from other streaming platforms, as it is a Netflix original series and exclusive to Netflix. You can only watch it online or offline on Netflix. |

-
-
\ No newline at end of file diff --git a/spaces/7hao/bingo/src/lib/bots/bing/sr.ts b/spaces/7hao/bingo/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/A666sxr/Genshin_TTS/models.py b/spaces/A666sxr/Genshin_TTS/models.py deleted file mode 100644 index d29e9010388acda30059431d8d6cbfa3c670e4f2..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/models.py +++ /dev/null @@ -1,730 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from pqmf import PQMF -from stft import TorchSTFT -import math - - -class StochasticDurationPredictor(nn.Module): - def 
__init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, 
padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, 
self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - -class iSTFT_Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, gin_channels=0): - super(iSTFT_Generator, self).__init__() - # self.h = h - self.gen_istft_n_fft = gen_istft_n_fft - self.gen_istft_hop_size = gen_istft_hop_size - - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = weight_norm(Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.post_n_fft = self.gen_istft_n_fft - self.conv_post = weight_norm(Conv1d(ch, self.post_n_fft + 2, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.reflection_pad = torch.nn.ReflectionPad1d((1, 0)) - self.stft = TorchSTFT(filter_length=self.gen_istft_n_fft, hop_length=self.gen_istft_hop_size, win_length=self.gen_istft_n_fft) - def forward(self, x, g=None): - - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.reflection_pad(x) - x = self.conv_post(x) - spec = torch.exp(x[:,:self.post_n_fft // 2 + 1, :]) - phase = math.pi*torch.sin(x[:, self.post_n_fft // 2 + 1:, :]) - out = self.stft.inverse(spec, phase).to(x.device) - return out, None - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class Multiband_iSTFT_Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=0): - super(Multiband_iSTFT_Generator, self).__init__() - # self.h = h - self.subbands = subbands - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = weight_norm(Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in 
enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.post_n_fft = gen_istft_n_fft - self.ups.apply(init_weights) - self.reflection_pad = torch.nn.ReflectionPad1d((1, 0)) - self.reshape_pixelshuffle = [] - - self.subband_conv_post = weight_norm(Conv1d(ch, self.subbands*(self.post_n_fft + 2), 7, 1, padding=3)) - - self.subband_conv_post.apply(init_weights) - - self.gen_istft_n_fft = gen_istft_n_fft - self.gen_istft_hop_size = gen_istft_hop_size - - - def forward(self, x, g=None): - stft = TorchSTFT(filter_length=self.gen_istft_n_fft, hop_length=self.gen_istft_hop_size, win_length=self.gen_istft_n_fft).to(x.device) - pqmf = PQMF(x.device) - - x = self.conv_pre(x)#[B, ch, length] - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - - - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - - x = F.leaky_relu(x) - x = self.reflection_pad(x) - x = self.subband_conv_post(x) - x = torch.reshape(x, (x.shape[0], self.subbands, x.shape[1]//self.subbands, x.shape[-1])) - - spec = torch.exp(x[:,:,:self.post_n_fft // 2 + 1, :]) - phase = math.pi*torch.sin(x[:,:, self.post_n_fft // 2 + 1:, :]) - - y_mb_hat = stft.inverse(torch.reshape(spec, (spec.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, spec.shape[-1])), torch.reshape(phase, (phase.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, phase.shape[-1]))) - y_mb_hat = torch.reshape(y_mb_hat, (x.shape[0], self.subbands, 1, y_mb_hat.shape[-1])) - y_mb_hat = y_mb_hat.squeeze(-2) - - y_g_hat = pqmf.synthesis(y_mb_hat) - - return y_g_hat, y_mb_hat - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class Multistream_iSTFT_Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=0): - super(Multistream_iSTFT_Generator, self).__init__() - # self.h = h - self.subbands = subbands - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = weight_norm(Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.post_n_fft = gen_istft_n_fft - self.ups.apply(init_weights) - self.reflection_pad = torch.nn.ReflectionPad1d((1, 0)) - self.reshape_pixelshuffle = [] - - self.subband_conv_post = weight_norm(Conv1d(ch, self.subbands*(self.post_n_fft + 2), 7, 1, padding=3)) - - self.subband_conv_post.apply(init_weights) - - self.gen_istft_n_fft = gen_istft_n_fft - self.gen_istft_hop_size = gen_istft_hop_size - - updown_filter = torch.zeros((self.subbands, self.subbands, 
self.subbands)).float() - for k in range(self.subbands): - updown_filter[k, k, 0] = 1.0 - self.register_buffer("updown_filter", updown_filter) - self.multistream_conv_post = weight_norm(Conv1d(4, 1, kernel_size=63, bias=False, padding=get_padding(63, 1))) - self.multistream_conv_post.apply(init_weights) - - - - def forward(self, x, g=None): - stft = TorchSTFT(filter_length=self.gen_istft_n_fft, hop_length=self.gen_istft_hop_size, win_length=self.gen_istft_n_fft).to(x.device) - # pqmf = PQMF(x.device) - - x = self.conv_pre(x)#[B, ch, length] - - for i in range(self.num_upsamples): - - - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - - - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - - x = F.leaky_relu(x) - x = self.reflection_pad(x) - x = self.subband_conv_post(x) - x = torch.reshape(x, (x.shape[0], self.subbands, x.shape[1]//self.subbands, x.shape[-1])) - - spec = torch.exp(x[:,:,:self.post_n_fft // 2 + 1, :]) - phase = math.pi*torch.sin(x[:,:, self.post_n_fft // 2 + 1:, :]) - - y_mb_hat = stft.inverse(torch.reshape(spec, (spec.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, spec.shape[-1])), torch.reshape(phase, (phase.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, phase.shape[-1]))) - y_mb_hat = torch.reshape(y_mb_hat, (x.shape[0], self.subbands, 1, y_mb_hat.shape[-1])) - y_mb_hat = y_mb_hat.squeeze(-2) - - y_mb_hat = F.conv_transpose1d(y_mb_hat, self.updown_filter.to(x.device) * self.subbands, stride=self.subbands) - - y_g_hat = self.multistream_conv_post(y_mb_hat) - - return y_g_hat, y_mb_hat - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - 
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gen_istft_n_fft, - gen_istft_hop_size, - n_speakers=0, - gin_channels=0, - use_sdp=False, - ms_istft_vits=False, - mb_istft_vits = False, - subbands = False, - istft_vits=False, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.ms_istft_vits = ms_istft_vits - self.mb_istft_vits = mb_istft_vits - self.istft_vits = istft_vits - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - if mb_istft_vits == True: - print('Mutli-band iSTFT VITS') - self.dec = Multiband_iSTFT_Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=gin_channels) - elif ms_istft_vits == True: - print('Mutli-stream iSTFT VITS') - self.dec = Multistream_iSTFT_Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=gin_channels) - elif istft_vits == True: - print('iSTFT-VITS') - self.dec = iSTFT_Generator(inter_channels, resblock, resblock_kernel_sizes, 
resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, gin_channels=gin_channels) - else: - print('Decoder Error in json file') - - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o, o_mb = self.dec(z_slice, g=g) - return o, o_mb, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o, o_mb = self.dec((z * 
y_mask)[:,:,:max_len], g=g) - return o, o_mb, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat, o_hat_mb = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, o_hat_mb, y_mask, (z, z_p, z_hat) - diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/data/extract_mel_spectrogram.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/data/extract_mel_spectrogram.py deleted file mode 100644 index 42cade483a576f7166011a25d7e4d4bb0ae0f55c..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/data/extract_mel_spectrogram.py +++ /dev/null @@ -1,151 +0,0 @@ -import argparse -import os -import os.path as P -from copy import deepcopy -from functools import partial -from glob import glob -from multiprocessing import Pool -from pathlib import Path - -import librosa -import numpy as np -import torchvision - - -class MelSpectrogram(object): - def __init__(self, sr, nfft, fmin, fmax, nmels, hoplen, spec_power, inverse=False): - self.sr = sr - self.nfft = nfft - self.fmin = fmin - self.fmax = fmax - self.nmels = nmels - self.hoplen = hoplen - self.spec_power = spec_power - self.inverse = inverse - - self.mel_basis = librosa.filters.mel(sr=sr, n_fft=nfft, fmin=fmin, fmax=fmax, n_mels=nmels) - - def __call__(self, x): - if self.inverse: - spec = librosa.feature.inverse.mel_to_stft( - x, sr=self.sr, n_fft=self.nfft, fmin=self.fmin, fmax=self.fmax, power=self.spec_power - ) - wav = librosa.griffinlim(spec, hop_length=self.hoplen) - return wav - else: - spec = np.abs(librosa.stft(x, n_fft=self.nfft, hop_length=self.hoplen)) ** self.spec_power - mel_spec = np.dot(self.mel_basis, spec) - return mel_spec - -class LowerThresh(object): - def __init__(self, min_val, inverse=False): - self.min_val = min_val - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x - else: - return np.maximum(self.min_val, x) - -class Add(object): - def __init__(self, val, inverse=False): - self.inverse = inverse - self.val = val - - def __call__(self, x): - if self.inverse: - return x - self.val - else: - return x + self.val - -class Subtract(Add): - def __init__(self, val, inverse=False): - self.inverse = inverse - self.val = val - - def __call__(self, x): - if self.inverse: - return x + self.val - else: - return x - self.val - -class Multiply(object): - def __init__(self, val, inverse=False) -> None: - self.val = val - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x / self.val - else: - return x * self.val - -class Divide(Multiply): - def __init__(self, val, inverse=False): - self.inverse = inverse - self.val = val - - def __call__(self, x): - if self.inverse: - return x * self.val - else: - return x / self.val - -class Log10(object): - def __init__(self, inverse=False): - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return 10 ** x - else: - return np.log10(x) - -class Clip(object): - def __init__(self, min_val, max_val, inverse=False): - self.min_val = min_val - self.max_val = max_val - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x - else: - return np.clip(x, self.min_val, 
self.max_val) - -class TrimSpec(object): - def __init__(self, max_len, inverse=False): - self.max_len = max_len - self.inverse = inverse - - def __call__(self, x): - if self.inverse: - return x - else: - return x[:, :self.max_len] - -class MaxNorm(object): - def __init__(self, inverse=False): - self.inverse = inverse - self.eps = 1e-10 - - def __call__(self, x): - if self.inverse: - return x - else: - return x / (x.max() + self.eps) - - -TRANSFORMS_16000 = torchvision.transforms.Compose([ - MelSpectrogram(sr=16000, nfft=1024, fmin=125, fmax=7600, nmels=80, hoplen=1024//4, spec_power=1), - LowerThresh(1e-5), - Log10(), - Multiply(20), - Subtract(20), - Add(100), - Divide(100), - Clip(0, 1.0) - # TrimSpec(860) -]) - diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/transform.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/transform.py deleted file mode 100644 index 7014c926f153a351d2256c869c67c02d57b30913..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/transform.py +++ /dev/null @@ -1,30 +0,0 @@ -from torchvision.transforms import Normalize, Compose, RandomResizedCrop, InterpolationMode, ToTensor, Resize, \ - CenterCrop - - -def _convert_to_rgb(image): - return image.convert('RGB') - - -def image_transform( - image_size: int, - is_train: bool, - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711) -): - normalize = Normalize(mean=mean, std=std) - if is_train: - return Compose([ - RandomResizedCrop(image_size, scale=(0.9, 1.0), interpolation=InterpolationMode.BICUBIC), - _convert_to_rgb, - ToTensor(), - normalize, - ]) - else: - return Compose([ - Resize(image_size, interpolation=InterpolationMode.BICUBIC), - CenterCrop(image_size), - _convert_to_rgb, - ToTensor(), - normalize, - ]) diff --git a/spaces/ALSv/Chat-with-Llama-2-70b/README.md b/spaces/ALSv/Chat-with-Llama-2-70b/README.md deleted file mode 100644 index f84ebc22af15b5b66b94d47d05ec03186ec9a0f2..0000000000000000000000000000000000000000 --- a/spaces/ALSv/Chat-with-Llama-2-70b/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lauche-AI LEU-Chatbot -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/custom_model.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/custom_model.py deleted file mode 100644 index fcb6ba215fb9c2293dbeff419aa4019e3d2ac233..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/custom_model.py +++ /dev/null @@ -1,22 +0,0 @@ -model = dict( - type='ImageClassifier', # 主模型类型(对于图像分类任务,使用 `ImageClassifier`) - backbone=dict( - type='ResNet', # 主干网络类型 - # 除了 `type` 之外的所有字段都来自 `ResNet` 类的 __init__ 方法 - # 可查阅 https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.backbones.ResNet.html - depth=50, - num_stages=4, # 主干网络状态(stages)的数目,这些状态产生的特征图作为后续的 head 的输入。 - in_channels=3, # 输入图像的通道数 - out_indices=(3, ), # 输出的特征图输出索引。 - frozen_stages=-1, # 冻结主干网的层数 - style='pytorch'), - neck=dict(type='GlobalAveragePooling'), # 
颈网络类型 - head=dict( - type='LinearClsHead', # 分类颈网络类型 - # 除了 `type` 之外的所有字段都来自 `LinearClsHead` 类的 __init__ 方法 - # 可查阅 https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.heads.LinearClsHead.html - num_classes=7, # 分类类别数 - in_channels=2048, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), # 损失函数配置信息 - topk=(1, 3), # 评估指标,Top-k 准确率 - )) \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/H2o.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/H2o.py deleted file mode 100644 index d92bd6d1d4726785051c7d4c5248dd50dd709805..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/H2o.py +++ /dev/null @@ -1,109 +0,0 @@ -from __future__ import annotations - -import json -import uuid - -from aiohttp import ClientSession - -from ..typing import AsyncGenerator -from .base_provider import AsyncGeneratorProvider, format_prompt - - -class H2o(AsyncGeneratorProvider): - url = "https://gpt-gm.h2o.ai" - working = True - model = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1" - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - **kwargs - ) -> AsyncGenerator: - model = model if model else cls.model - headers = {"Referer": cls.url + "/"} - - async with ClientSession( - headers=headers - ) as session: - data = { - "ethicsModalAccepted": "true", - "shareConversationsWithModelAuthors": "true", - "ethicsModalAcceptedAt": "", - "activeModel": model, - "searchEnabled": "true", - } - async with session.post( - f"{cls.url}/settings", - proxy=proxy, - data=data - ) as response: - response.raise_for_status() - - async with session.post( - f"{cls.url}/conversation", - proxy=proxy, - json={"model": model}, - ) as response: - response.raise_for_status() - conversationId = (await response.json())["conversationId"] - - data = { - "inputs": format_prompt(messages), - "parameters": { - "temperature": 0.4, - "truncate": 2048, - "max_new_tokens": 1024, - "do_sample": True, - "repetition_penalty": 1.2, - "return_full_text": False, - **kwargs - }, - "stream": True, - "options": { - "id": str(uuid.uuid4()), - "response_id": str(uuid.uuid4()), - "is_retry": False, - "use_cache": False, - "web_search_id": "", - }, - } - async with session.post( - f"{cls.url}/conversation/{conversationId}", - proxy=proxy, - json=data - ) as response: - start = "data:" - async for line in response.content: - line = line.decode("utf-8") - if line and line.startswith(start): - line = json.loads(line[len(start):-1]) - if not line["token"]["special"]: - yield line["token"]["text"] - - async with session.delete( - f"{cls.url}/conversation/{conversationId}", - proxy=proxy, - json=data - ) as response: - response.raise_for_status() - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ("truncate", "int"), - ("max_new_tokens", "int"), - ("do_sample", "bool"), - ("repetition_penalty", "float"), - ("return_full_text", "bool"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Vercel.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Vercel.py deleted file mode 100644 index 2d20ca6a2de0b6fdb674090f5c305f5d544d9f86..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Vercel.py +++ /dev/null @@ -1,377 
+0,0 @@ -from __future__ import annotations - -import json, base64, requests, execjs, random, uuid - -from ..typing import Any, TypedDict, CreateResult -from .base_provider import BaseProvider -from abc import abstractmethod - - -class Vercel(BaseProvider): - url = 'https://sdk.vercel.ai' - working = True - supports_gpt_35_turbo = True - supports_stream = True - - @staticmethod - @abstractmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, - **kwargs - ) -> CreateResult: - if not model: - model = "gpt-3.5-turbo" - elif model not in model_info: - raise ValueError(f"Model are not supported: {model}") - - headers = { - 'authority' : 'sdk.vercel.ai', - 'accept' : '*/*', - 'accept-language' : 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control' : 'no-cache', - 'content-type' : 'application/json', - 'custom-encoding' : get_anti_bot_token(), - 'origin' : 'https://sdk.vercel.ai', - 'pragma' : 'no-cache', - 'referer' : 'https://sdk.vercel.ai/', - 'sec-ch-ua' : '"Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.%s.%s Safari/537.36' % ( - random.randint(99, 999), - random.randint(99, 999) - ) - } - - json_data = { - 'model' : model_info[model]['id'], - 'messages' : messages, - 'playgroundId': str(uuid.uuid4()), - 'chatIndex' : 0} | model_info[model]['default_params'] - - max_retries = kwargs.get('max_retries', 20) - for i in range(max_retries): - response = requests.post('https://sdk.vercel.ai/api/generate', - headers=headers, json=json_data, stream=True) - try: - response.raise_for_status() - except: - continue - for token in response.iter_content(chunk_size=None): - yield token.decode() - break - - -def get_anti_bot_token() -> str: - headers = { - 'authority' : 'sdk.vercel.ai', - 'accept' : '*/*', - 'accept-language' : 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control' : 'no-cache', - 'pragma' : 'no-cache', - 'referer' : 'https://sdk.vercel.ai/', - 'sec-ch-ua' : '"Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.%s.%s Safari/537.36' % ( - random.randint(99, 999), - random.randint(99, 999) - ) - } - - response = requests.get('https://sdk.vercel.ai/openai.jpeg', - headers=headers).text - - raw_data = json.loads(base64.b64decode(response, - validate=True)) - - js_script = '''const globalThis={marker:"mark"};String.prototype.fontcolor=function(){return `${this}`}; - return (%s)(%s)''' % (raw_data['c'], raw_data['a']) - - raw_token = json.dumps({'r': execjs.compile(js_script).call(''), 't': raw_data['t']}, - separators = (",", ":")) - - return base64.b64encode(raw_token.encode('utf-16le')).decode() - -class ModelInfo(TypedDict): - id: str - default_params: dict[str, Any] - -model_info: dict[str, ModelInfo] = { - 'claude-instant-v1': { - 'id': 'anthropic:claude-instant-v1', - 'default_params': { - 'temperature': 1, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 
'frequencyPenalty': 1, - 'stopSequences': ['\n\nHuman:'], - }, - }, - 'claude-v1': { - 'id': 'anthropic:claude-v1', - 'default_params': { - 'temperature': 1, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': ['\n\nHuman:'], - }, - }, - 'claude-v2': { - 'id': 'anthropic:claude-v2', - 'default_params': { - 'temperature': 1, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': ['\n\nHuman:'], - }, - }, - 'a16z-infra/llama7b-v2-chat': { - 'id': 'replicate:a16z-infra/llama7b-v2-chat', - 'default_params': { - 'temperature': 0.75, - 'maximumLength': 3000, - 'topP': 1, - 'repetitionPenalty': 1, - }, - }, - 'a16z-infra/llama13b-v2-chat': { - 'id': 'replicate:a16z-infra/llama13b-v2-chat', - 'default_params': { - 'temperature': 0.75, - 'maximumLength': 3000, - 'topP': 1, - 'repetitionPenalty': 1, - }, - }, - 'replicate/llama-2-70b-chat': { - 'id': 'replicate:replicate/llama-2-70b-chat', - 'default_params': { - 'temperature': 0.75, - 'maximumLength': 3000, - 'topP': 1, - 'repetitionPenalty': 1, - }, - }, - 'bigscience/bloom': { - 'id': 'huggingface:bigscience/bloom', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - }, - }, - 'google/flan-t5-xxl': { - 'id': 'huggingface:google/flan-t5-xxl', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - }, - }, - 'EleutherAI/gpt-neox-20b': { - 'id': 'huggingface:EleutherAI/gpt-neox-20b', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - 'stopSequences': [], - }, - }, - 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5': { - 'id': 'huggingface:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5', - 'default_params': { - 'maximumLength': 1024, - 'typicalP': 0.2, - 'repetitionPenalty': 1, - }, - }, - 'OpenAssistant/oasst-sft-1-pythia-12b': { - 'id': 'huggingface:OpenAssistant/oasst-sft-1-pythia-12b', - 'default_params': { - 'maximumLength': 1024, - 'typicalP': 0.2, - 'repetitionPenalty': 1, - }, - }, - 'bigcode/santacoder': { - 'id': 'huggingface:bigcode/santacoder', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - }, - }, - 'command-light-nightly': { - 'id': 'cohere:command-light-nightly', - 'default_params': { - 'temperature': 0.9, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 0, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'command-nightly': { - 'id': 'cohere:command-nightly', - 'default_params': { - 'temperature': 0.9, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 0, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'gpt-4': { - 'id': 'openai:gpt-4', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 8192, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'gpt-4-0613': { - 'id': 'openai:gpt-4-0613', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 8192, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'code-davinci-002': { - 'id': 'openai:code-davinci-002', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 
'gpt-3.5-turbo': { - 'id': 'openai:gpt-3.5-turbo', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 4096, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': [], - }, - }, - 'gpt-3.5-turbo-16k': { - 'id': 'openai:gpt-3.5-turbo-16k', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 16280, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': [], - }, - }, - 'gpt-3.5-turbo-16k-0613': { - 'id': 'openai:gpt-3.5-turbo-16k-0613', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 16280, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': [], - }, - }, - 'text-ada-001': { - 'id': 'openai:text-ada-001', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-babbage-001': { - 'id': 'openai:text-babbage-001', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-curie-001': { - 'id': 'openai:text-curie-001', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-davinci-002': { - 'id': 'openai:text-davinci-002', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-davinci-003': { - 'id': 'openai:text-davinci-003', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 4097, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, -} \ No newline at end of file diff --git a/spaces/Adapter/T2I-Adapter/ldm/lr_scheduler.py b/spaces/Adapter/T2I-Adapter/ldm/lr_scheduler.py deleted file mode 100644 index be39da9ca6dacc22bf3df9c7389bbb403a4a3ade..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/lr_scheduler.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. - self.verbosity_interval = verbosity_interval - - def schedule(self, n, **kwargs): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n, **kwargs): - return self.schedule(n,**kwargs) - - -class LambdaWarmUpCosineScheduler2: - """ - supports repeated iterations, configurable via lists - note: use with a base_lr of 1.0. 
- """ - def __init__(self, warm_up_steps, f_min, f_max, f_start, cycle_lengths, verbosity_interval=0): - assert len(warm_up_steps) == len(f_min) == len(f_max) == len(f_start) == len(cycle_lengths) - self.lr_warm_up_steps = warm_up_steps - self.f_start = f_start - self.f_min = f_min - self.f_max = f_max - self.cycle_lengths = cycle_lengths - self.cum_cycles = np.cumsum([0] + list(self.cycle_lengths)) - self.last_f = 0. - self.verbosity_interval = verbosity_interval - - def find_in_interval(self, n): - interval = 0 - for cl in self.cum_cycles[1:]: - if n <= cl: - return interval - interval += 1 - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle]) - t = min(t, 1.0) - f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * ( - 1 + np.cos(t * np.pi)) - self.last_f = f - return f - - def __call__(self, n, **kwargs): - return self.schedule(n, **kwargs) - - -class LambdaLinearScheduler(LambdaWarmUpCosineScheduler2): - - def schedule(self, n, **kwargs): - cycle = self.find_in_interval(n) - n = n - self.cum_cycles[cycle] - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, " - f"current cycle {cycle}") - - if n < self.lr_warm_up_steps[cycle]: - f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle] - self.last_f = f - return f - else: - f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle]) - self.last_f = f - return f - diff --git a/spaces/AdithyaSNair/PCOS_Prediction/README.md b/spaces/AdithyaSNair/PCOS_Prediction/README.md deleted file mode 100644 index 5058b4fb205124d14380c2edf4f4c41ff01794f6..0000000000000000000000000000000000000000 --- a/spaces/AdithyaSNair/PCOS_Prediction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PCOS Prediction -emoji: 👁 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/agentverse/memory/base.py b/spaces/AgentVerse/agentVerse/agentverse/memory/base.py deleted file mode 100644 index 1e8de896292d93d99bfae3eacbe8ad85b021a6fc..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/memory/base.py +++ /dev/null @@ -1,23 +0,0 @@ -from abc import abstractmethod -from typing import Dict, List - -from pydantic import BaseModel, Field - -from agentverse.message import Message - - -class BaseMemory(BaseModel): - @abstractmethod - def add_message(self, messages: List[Message]) -> None: - pass - - @abstractmethod - def to_string(self) -> str: - pass - - @abstractmethod - def reset(self) -> None: - pass - - def to_messages(self) -> List[dict]: - pass diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DialogQuest.js 
b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DialogQuest.js deleted file mode 100644 index 9baab5b49bb65f1f906b9add11f580d18ab53d5d..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/dialog-quest/DialogQuest.js +++ /dev/null @@ -1,54 +0,0 @@ -import QuestionManager from '../../plugins/logic/quest/questions/QuestionManager.js'; -import QuestMethods from './QuestMethods.js'; -import DataMethods from './DataMethods.js'; - -const EE = Phaser.Events.EventEmitter; -const GetValue = Phaser.Utils.Objects.GetValue; - -class DialogQuest extends EE { - constructor(config) { - super(); - - if (config === undefined) { - config = {}; - } - if (!config.quest) { - config.quest = true; - } - - this.dialog = GetValue(config, 'dialog', undefined); - this.questionManager = new QuestionManager(config); - - // Attach events - this.questionManager - .on('quest', function (question) { - var choices = this.dialog.getElement('choices'); - var options = question.options, option; - for (var i = 0, cnt = choices.length; i < cnt; i++) { - option = options[i]; - if (option) { - this.dialog.showChoice(i); - this.emit('update-choice', choices[i], option, this); - } else { - this.dialog.hideChoice(i); - } - } - this.emit('update-dialog', this.dialog, question, this); - }, this); - - this.dialog - .on('button.click', function (button, groupName, index) { - var eventName = 'click-' + ((groupName === 'choices') ? 'choice' : 'action'); - this.emit(eventName, button, this.dialog, this); - }, this) - } -} - -Object.assign( - DialogQuest.prototype, - QuestMethods, - DataMethods -); - - -export default DialogQuest; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fileselectorbutton/FileSelectorButton.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fileselectorbutton/FileSelectorButton.js deleted file mode 100644 index ed34d94e0a1a82731c19a24e3fc5fd7442529640..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fileselectorbutton/FileSelectorButton.js +++ /dev/null @@ -1,45 +0,0 @@ -import Label from '../label/Label.js'; -import { FileChooser } from '../filechooser/FileChooser.js'; -import FileChooserMethods from './FileChooserMethods.js'; - -const GetValue = Phaser.Utils.Objects.GetValue; - -class FileSelectorButton extends Label { - constructor(scene, config) { - super(scene, config); - this.type = 'rexFileSelectorButton'; - - var fileChooser = new FileChooser(scene); - scene.add.existing(fileChooser); - this.addBackground(fileChooser); - - this.addChildrenMap('fileChooser', fileChooser); - - this.setAccept(GetValue(config, 'accept', '')); - this.setMultiple(GetValue(config, 'multiple', false)); - - fileChooser - .on('change', function (gameObject) { - var files = gameObject.files; - if (files.length === 0) { - return; - } - - files = Array.from(files); - this.emit('select', files, this); - }, this) - - } - - get files() { - return this.childrenMap.fileChooser.files; - } - -} - -Object.assign( - FileSelectorButton.prototype, - FileChooserMethods, -) - -export default FileSelectorButton; \ No newline at end of file diff --git a/spaces/Aloento/9Nine-VITS/modules.py b/spaces/Aloento/9Nine-VITS/modules.py deleted file mode 100644 index 2df11cd48eb06cd7193ab8b476fdfdeea815e9d7..0000000000000000000000000000000000000000 --- a/spaces/Aloento/9Nine-VITS/modules.py +++ /dev/null @@ -1,387 +0,0 
@@ -import math - -import torch -from torch import nn -from torch.nn import Conv1d -from torch.nn import functional as F -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size // 2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size // 2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert (kernel_size % 2 == 1) - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - 
self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2 * hidden_channels * n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset:cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, :self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels:, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - 
-class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - 
self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2 * self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Amrrs/DragGan-Inversion/dnnlib/__init__.py b/spaces/Amrrs/DragGan-Inversion/dnnlib/__init__.py deleted file mode 100644 index e7423bffe245d0ff3f32e8658aa67daae454e64e..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py deleted file mode 100644 index 0e86d2ea67e154fae18dbf9d2bfde6d0a70e582c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py +++ /dev/null @@ -1,205 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule - -from mmdet.models.builder import HEADS -from .bbox_head import BBoxHead - - -@HEADS.register_module() -class ConvFCBBoxHead(BBoxHead): - r"""More general bbox head, with shared conv and fc layers and two optional - separated branches. - - .. 
code-block:: none - - /-> cls convs -> cls fcs -> cls - shared convs -> shared fcs - \-> reg convs -> reg fcs -> reg - """ # noqa: W605 - - def __init__(self, - num_shared_convs=0, - num_shared_fcs=0, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - conv_out_channels=256, - fc_out_channels=1024, - conv_cfg=None, - norm_cfg=None, - *args, - **kwargs): - super(ConvFCBBoxHead, self).__init__(*args, **kwargs) - assert (num_shared_convs + num_shared_fcs + num_cls_convs + - num_cls_fcs + num_reg_convs + num_reg_fcs > 0) - if num_cls_convs > 0 or num_reg_convs > 0: - assert num_shared_fcs == 0 - if not self.with_cls: - assert num_cls_convs == 0 and num_cls_fcs == 0 - if not self.with_reg: - assert num_reg_convs == 0 and num_reg_fcs == 0 - self.num_shared_convs = num_shared_convs - self.num_shared_fcs = num_shared_fcs - self.num_cls_convs = num_cls_convs - self.num_cls_fcs = num_cls_fcs - self.num_reg_convs = num_reg_convs - self.num_reg_fcs = num_reg_fcs - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - # add shared convs and fcs - self.shared_convs, self.shared_fcs, last_layer_dim = \ - self._add_conv_fc_branch( - self.num_shared_convs, self.num_shared_fcs, self.in_channels, - True) - self.shared_out_channels = last_layer_dim - - # add cls specific branch - self.cls_convs, self.cls_fcs, self.cls_last_dim = \ - self._add_conv_fc_branch( - self.num_cls_convs, self.num_cls_fcs, self.shared_out_channels) - - # add reg specific branch - self.reg_convs, self.reg_fcs, self.reg_last_dim = \ - self._add_conv_fc_branch( - self.num_reg_convs, self.num_reg_fcs, self.shared_out_channels) - - if self.num_shared_fcs == 0 and not self.with_avg_pool: - if self.num_cls_fcs == 0: - self.cls_last_dim *= self.roi_feat_area - if self.num_reg_fcs == 0: - self.reg_last_dim *= self.roi_feat_area - - self.relu = nn.ReLU(inplace=True) - # reconstruct fc_cls and fc_reg since input channels are changed - if self.with_cls: - self.fc_cls = nn.Linear(self.cls_last_dim, self.num_classes + 1) - if self.with_reg: - out_dim_reg = (4 if self.reg_class_agnostic else 4 * - self.num_classes) - self.fc_reg = nn.Linear(self.reg_last_dim, out_dim_reg) - - def _add_conv_fc_branch(self, - num_branch_convs, - num_branch_fcs, - in_channels, - is_shared=False): - """Add shared or separable branch. 
- - convs -> avg pool (optional) -> fcs - """ - last_layer_dim = in_channels - # add branch specific conv layers - branch_convs = nn.ModuleList() - if num_branch_convs > 0: - for i in range(num_branch_convs): - conv_in_channels = ( - last_layer_dim if i == 0 else self.conv_out_channels) - branch_convs.append( - ConvModule( - conv_in_channels, - self.conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - last_layer_dim = self.conv_out_channels - # add branch specific fc layers - branch_fcs = nn.ModuleList() - if num_branch_fcs > 0: - # for shared branch, only consider self.with_avg_pool - # for separated branches, also consider self.num_shared_fcs - if (is_shared - or self.num_shared_fcs == 0) and not self.with_avg_pool: - last_layer_dim *= self.roi_feat_area - for i in range(num_branch_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - branch_fcs.append( - nn.Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - return branch_convs, branch_fcs, last_layer_dim - - def init_weights(self): - super(ConvFCBBoxHead, self).init_weights() - # conv layers are already initialized by ConvModule - for module_list in [self.shared_fcs, self.cls_fcs, self.reg_fcs]: - for m in module_list.modules(): - if isinstance(m, nn.Linear): - nn.init.xavier_uniform_(m.weight) - nn.init.constant_(m.bias, 0) - - def forward(self, x): - # shared part - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - # separate branches - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - return cls_score, bbox_pred - - -@HEADS.register_module() -class Shared2FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels=1024, *args, **kwargs): - super(Shared2FCBBoxHead, self).__init__( - num_shared_convs=0, - num_shared_fcs=2, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) - - -@HEADS.register_module() -class Shared4Conv1FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels=1024, *args, **kwargs): - super(Shared4Conv1FCBBoxHead, self).__init__( - num_shared_convs=4, - num_shared_fcs=1, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_20k_voc12aug.py deleted file mode 100644 index 40f5f62373e59d1c6c01ca3f57777698461127c9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3_r50-d8_512x512_20k_voc12aug.py' 
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Spell-book.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Spell-book.md deleted file mode 100644 index 9b7c76c953f76f8a486bbe5156de4e9ebb3f0ec0..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/Spell-book.md +++ /dev/null @@ -1,107 +0,0 @@ -You have now entered a hidden corner of the internet. - -A confusing yet intriguing realm of paradoxes and contradictions. - -A place where you will find out that what you thought you knew, you in fact didn't know, and what you didn't know was in front of you all along. - -![](https://i.pinimg.com/originals/6e/e2/7b/6ee27bad351d3aca470d80f1033ba9c6.jpg) - -*In other words, here I will document little-known facts about this web UI that I could not find another place for in the wiki.* - -#### You can train LoRAs in CPU mode - -Load the web UI with - -``` -python server.py --cpu -``` - -and start training the LoRA from the training tab as usual. - -#### 8-bit mode works with CPU offloading - -``` -python server.py --load-in-8bit --gpu-memory 4000MiB -``` - -#### `--pre_layer`, and not `--gpu-memory`, is the right way to do CPU offloading with 4-bit models - -``` -python server.py --wbits 4 --groupsize 128 --pre_layer 20 -``` - -#### Models can be loaded in 32-bit, 16-bit, 8-bit, and 4-bit modes - -``` -python server.py --cpu -python server.py -python server.py --load-in-8bit -python server.py --wbits 4 -``` - -#### The web UI works with any version of GPTQ-for-LLaMa - -Including the up to date triton and cuda branches. But you have to delete the `repositories/GPTQ-for-LLaMa` folder and reinstall the new one every time: - -``` -cd text-generation-webui/repositories -rm -r GPTQ-for-LLaMa -pip uninstall quant-cuda -git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda # or any other repository and branch -cd GPTQ-for-LLaMa -python setup_cuda.py install -``` - -#### Instruction-following templates are represented as chat characters - -https://github.com/oobabooga/text-generation-webui/tree/main/characters/instruction-following - -#### The right way to run Alpaca, Open Assistant, Vicuna, etc is Instruct mode, not normal chat mode - -Otherwise the prompt will not be formatted correctly. - -1. Start the web UI with - -``` -python server.py --chat -``` - -2. Click on the "instruct" option under "Chat modes" - -3. Select the correct template in the hidden dropdown menu that will become visible. - -#### Notebook mode is best mode - -Ascended individuals have realized that notebook mode is the superset of chat mode and can do chats with ultimate flexibility, including group chats, editing replies, starting a new bot reply in a given way, and impersonating. - -#### RWKV is a RNN - -Most models are transformers, but not RWKV, which is a RNN. It's a great model. - -#### `--gpu-memory` is not a hard limit on the GPU memory - -It is simply a parameter that is passed to the `accelerate` library while loading the model. More memory will be allocated during generation. That's why this parameter has to be set to less than your total GPU memory. - -#### Contrastive search perhaps the best preset - -But it uses a ton of VRAM. 
- -#### You can check the sha256sum of downloaded models with the download script - -``` -python download-model.py facebook/galactica-125m --check -``` - -#### The download script continues interrupted downloads by default - -It doesn't start over. - -#### You can download models with multiple threads - -``` -python download-model.py facebook/galactica-125m --threads 8 -``` - -#### LoRAs work in 4-bit mode - -You need to follow [these instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) and then start the web UI with the `--monkey-patch` flag. diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/update_macos.sh b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/update_macos.sh deleted file mode 100644 index 371db554a33f53f3bd3c5bf15fedeaf2f6812639..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/update_macos.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -cd "$(dirname "${BASH_SOURCE[0]}")" - -if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi - -# deactivate existing conda envs as needed to avoid conflicts -{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null - -# config -CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda" -INSTALL_ENV_DIR="$(pwd)/installer_files/env" - -# environment isolation -export PYTHONNOUSERSITE=1 -unset PYTHONPATH -unset PYTHONHOME -export CUDA_PATH="$INSTALL_ENV_DIR" -export CUDA_HOME="$CUDA_PATH" - -# activate installer env -source "$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh" # otherwise conda complains about 'shell not initialized' (needed when running in a script) -conda activate "$INSTALL_ENV_DIR" - -# update installer env -python one_click.py --update && echo -e "\nDone!" 
diff --git a/spaces/AntNikYab/NaturalLanguageProcessing/pages/polyclinics.py b/spaces/AntNikYab/NaturalLanguageProcessing/pages/polyclinics.py deleted file mode 100644 index 31a00f1845d25619b166f4d8ff46ba49b5dbe5f5..0000000000000000000000000000000000000000 --- a/spaces/AntNikYab/NaturalLanguageProcessing/pages/polyclinics.py +++ /dev/null @@ -1,115 +0,0 @@ -import streamlit as st -import numpy as np -import time -import pickle -import torch -import pandas as pd -from gensim.models import KeyedVectors -from transformers import BertTokenizer, BertModel -from nltk.corpus import stopwords -from nltk.stem import SnowballStemmer -from function.lstm_preprocessing import ( - clean, - tokin, - predict_ml_class, - predict_sentence, - predict_single_string, - LSTMClassifier - ) - -DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' - -stemmer = SnowballStemmer('russian') -sw = stopwords.words('russian') - -EMBEDDING_DIM = 32 -HIDDEN_DIM = 32 -SEQ_LEN = 200 -VOCAB_SIZE = 196906 -EMBEDDING_DIM = 32 -wv = KeyedVectors.load("file/wv.wordvectors", mmap='r') - -with open('file/vocab_to_int.txt', 'rb') as f: - vocab_to_int = pickle.load(f) - -embedding_matrix = np.zeros((VOCAB_SIZE, EMBEDDING_DIM)) - -for word, i in vocab_to_int.items(): - try: - embedding_vector = wv[word] - embedding_matrix[i] = embedding_vector - except KeyError as e: - pass - -embedding_layer = torch.nn.Embedding.from_pretrained(torch.FloatTensor(embedding_matrix)) - -model = LSTMClassifier(embedding_dim=EMBEDDING_DIM, hidden_size=HIDDEN_DIM, embedding=embedding_layer).to(DEVICE) -model.load_state_dict(torch.load('models/LTSM_model_epoch_7.pt', map_location='cpu')) - -tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased') -model_BERT = BertModel.from_pretrained("bert-base-multilingual-cased") - -loaded_model = pickle.load(open('models/LogReg.pickle', "rb")) - -loaded_classifier = pickle.load(open('models/trained_model.pkl', "rb")) -loaded_vectorizer = pickle.load(open('models/vectorizer.pkl', "rb")) - -def main(): - st.title("Классификация отзыва на поликлиники") - user_input = st.text_area("Введите ваш отзыв:", "") - return user_input - -user_input = main() - -def predict_lstm(user_input): - start_time = time.time() - prediction = predict_sentence(user_input, model, SEQ_LEN, vocab_to_int) - end_time = time.time() - return prediction, round((end_time - start_time), 4) - -def predict_bert(user_input): - start_time = time.time() - prediction = predict_single_string(user_input, model_BERT, loaded_model) - end_time = time.time() - return prediction, round((end_time - start_time), 4) - -def predict_ML(user_input): - start_time = time.time() - prediction = predict_ml_class(user_input, loaded_vectorizer, loaded_classifier) - end_time = time.time() - return prediction, round((end_time - start_time), 4) - -if user_input: - prediction_rnn, time_taken_rnn = predict_ML(user_input) - st.write("### Bag-of-Words + LogReg") - st.write("Предсказанный класс:", prediction_rnn) - st.write("Время предсказания:", time_taken_rnn, "сек.") - prediction_rnn, time_taken_rnn = predict_lstm(user_input) - st.write("### LSTM модель") - st.write("Предсказанный класс:", prediction_rnn) - st.write("Время предсказания:", time_taken_rnn, "сек.") - prediction_rnn, time_taken_rnn = predict_bert(user_input) - st.write("### BERT модель + LogReg") - st.write("Предсказанный класс:", prediction_rnn) - st.write("Время предсказания:", time_taken_rnn, "сек.") - - -st.sidebar.image('images/polyclinic.jpeg', use_column_width=True) -f1_score_classic_ml = 0.87 
-f1_score_rnn = 0.88 -f1_score_bert = 0.83 -f1_score_classic_ml_valid = 0.89 -f1_score_rnn_valid = 0.92 -f1_score_bert_valid = 0.82 -# Создание DataFrame для сравнения результатов - - - -st.sidebar.write("### Сравнительная таблица по метрике f1-macro") -results = { -"Модель": ["Классический ML", "LSTM", "BERT-based"], -"train": [f1_score_classic_ml, f1_score_rnn, f1_score_bert], -"valid": [f1_score_classic_ml_valid, f1_score_rnn_valid, f1_score_bert_valid] -} -results_df = pd.DataFrame(results) -st.sidebar.dataframe(results_df) \ No newline at end of file diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/on_startup.sh b/spaces/AnthonyTruchetPoC/persistent-docker/on_startup.sh deleted file mode 100644 index 448000271bbc7142681947fd1a447772f12ecfff..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/on_startup.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/bin/bash -# Write some commands here that will run on root user before startup. -# For example, to clone transformers and install it in dev mode: -# git clone https://github.com/huggingface/transformers.git -# cd transformers && pip install -e ".[dev]" \ No newline at end of file diff --git a/spaces/Arcypojeb/NeuralServer/app.py b/spaces/Arcypojeb/NeuralServer/app.py deleted file mode 100644 index 8da786adae4cd8eee2dd8c0d8c79a8a0ed6f2ba9..0000000000000000000000000000000000000000 --- a/spaces/Arcypojeb/NeuralServer/app.py +++ /dev/null @@ -1,304 +0,0 @@ -import datetime -import websockets -import asyncio -import sqlite3 -import json -import requests -import gradio as gr -import PySimpleGUI as sg -from bs4 import BeautifulSoup -from gradio_client import Client -from websockets.sync.client import connect - -modelPath = 'nlp-model.json' - -inputs = [] -client_ports = [] -server_ports = [] - -layout = [ - [sg.Multiline(size=(200, 10), key='-CLIENT-')], - [sg.Multiline(size=(100, 20), key='-INPUT-', auto_refresh=True), sg.Multiline(size=(100, 20), key='-OUTPUT-', auto_refresh=True)], - [sg.Multiline(size=(150, 2), key='-USERINPUT-')], - [sg.Button('Ask the agent')], - [sg.Text('Enter Port:'), sg.InputText(size=(10, 1), key='-PORT-'), - sg.Slider(range=(1000, 9999), orientation='h', size=(20, 20), key='-PORTSLIDER-')], - [sg.Button('Start WebSocket server'), sg.Button('Start WebSocket client')], - [sg.Button('Stop WebSocket server'), sg.Button('Stop WebSocket client')], - [sg.Multiline(size=(20, 4), key='-SERVER_PORTS-')], [sg.Multiline(size=(20, 4), key='-CLIENT_PORTS-')], - [sg.Button('Clear Textboxes')] -] - -def get_port(values): - if values['-PORT-']: - return int(values['-PORT-']) - else: - return int(values['-PORTSLIDER-']) - -window = sg.Window('WebSocket Client', layout) - -# Function to send a question to the chatbot and get the response -async def askQuestion(question): - url = 'https://api.docsbot.ai/teams/ZrbLG98bbxZ9EFqiPvyl/bots/oFFiXuQsakcqyEdpLvCB/chat' - headers = { - 'Content-Type': 'application/json' - } - data = { - 'question': question, - 'full_source': False - } - try: - response = requests.post(url, headers=headers, json=data) - responseText = response.content.decode('utf-8') - return responseText - - except requests.exceptions.RequestException as e: - # Handle request exceptions here - print(f"Request failed with exception: {e}") - -async def askQuestion2(question): - url = 'https://api.docsbot.ai/teams/ZrbLG98bbxZ9EFqiPvyl/bots/oFFiXuQsakcqyEdpLvCB/chat' - headers = { - 'Content-Type': 'application/json' - } - data = { - 'question': question, - 'full_source': False - } - try: - response 
= requests.post(url, headers=headers, json=data) - responseText = response.content.decode('utf-8') - return responseText - - except requests.exceptions.RequestException as e: - # Handle request exceptions here - print(f"Request failed with exception: {e}") - -async def askQuestion3(question): - url = 'https://api.docsbot.ai/teams/ZrbLG98bbxZ9EFqiPvyl/bots/oFFiXuQsakcqyEdpLvCB/chat' - headers = { - 'Content-Type': 'application/json' - } - data = { - 'question': question, - 'full_source': False - } - try: - response = requests.post(url, headers=headers, json=data) - responseText = response.content.decode('utf-8') - return responseText - - except requests.exceptions.RequestException as e: - # Handle request exceptions here - print(f"Request failed with exception: {e}") - -async def run_agent(question): - os.environ["GOOGLE_CSE_ID"] = GOOGLE_CSE_ID - os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY - os.environ["FIREWORKS_API_KEY"] = FIREWORKS_API_KEY - - llm = Fireworks(model="accounts/fireworks/models/llama-v2-13b") - tools = load_tools(["google-search", "llm-math"], llm=llm) - agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, return_intermediate_steps=True) - - response = agent({"input": question}) - return response["output"], response["intermediate_steps"] - response_content = response.content.decode('utf-8') - return response_content - -async def handleWebSocket(ws): - print('New connection') - instruction = "Hello! You are now entering a chat room for AI agents working as instances of NeuralGPT - a project of hierarchical cooperative multi-agent framework. Keep in mind that you are speaking with another chatbot. Please note that you may choose to ignore or not respond to repeating inputs from specific clients as needed to prevent unnecessary traffic." 
- greetings = {'instructions': instruction} - await ws.send(json.dumps(instruction)) - while True: - message = await ws.recv() - print(message) - timestamp = datetime.datetime.now().isoformat() - sender = 'client' - db = sqlite3.connect('chat-hub.db') - db.execute('INSERT INTO messages (sender, message, timestamp) VALUES (?, ?, ?)', - (sender, message, timestamp)) - db.commit() - try: - response = await askQuestion(message) - serverResponse = f'server response:{response}' - # Append the server response to the server_responses list - timestamp = datetime.datetime.now().isoformat() - serverSender = 'server' - db.execute('INSERT INTO messages (sender, message, timestamp) VALUES (?, ?, ?)', - (serverSender, serverResponse, timestamp)) - db.commit() - await ws.send(json.dumps(serverResponse)) - return serverResponse - - except websockets.exceptions.ConnectionClosedError as e: - print(f"Connection closed: {e}") - - except Exception as e: - print(f"Error: {e}") - -async def handle_message(message): - print(f'Received message: {message}') - timestamp = datetime.datetime.now().isoformat() - sender = 'client' - db = sqlite3.connect('chat-hub.db') - db.execute('INSERT INTO messages (sender, message, timestamp) VALUES (?, ?, ?)', - (sender, message, timestamp)) - db.commit() - try: - userMessage = f'User B:{message}' - response = await askQuestion(userMessage) - serverResponse = f'server response:{response}' - timestamp = datetime.datetime.now().isoformat() - serverSender = 'server' - db.execute('INSERT INTO messages (sender, message, timestamp) VALUES (?, ?, ?)', - (serverSender, serverResponse, timestamp)) - db.commit() - return serverResponse - except Exception as e: - print(f"Error: {e}") - -# Define start_client function with a variable port -async def start_client(clientPort): - uri = f'ws://localhost:{clientPort}' - client_ports.append(clientPort) - window['-CLIENT_PORTS-'].print(str(client_ports) + '\n') - async with websockets.connect(uri, create_protocol=handleClient) as websocket: - print("Connected to server at:", clientPort) - return "Used ports:\n" + '\n'.join(map(str, client_ports)) - message = await websocket.recv() - inputMsg = "client: " + handle_message - window['-INPUT-'].print(str(inputMsg) + '\n') - print(message) - return message - -async def handleClient(websocket, path): - return client1_msg - -async def connect_docsbot(clientPort): - uri = f'ws://localhost:{clientPort}' - async with websockets.connect(uri) as websocket: - print("Connected to server at:", clientPort) - client_ports.append(clientPort) - window['-CLIENT_PORTS-'].print(str(client_ports) + '\n') - return "Used ports:\n" + '\n'.join(map(str, client_ports)) - while True: - message = await websocket.recv() - inputMsg = "client: " + handle_message - window['-INPUT-'].print(str(inputMsg) + '\n') - print(message) - return message - -async def handleClient2(websocket, path): - return client2_msg - -async def connect_agent(clientPort): - uri = f'ws://localhost:{clientPort}' - async with websockets.connect(uri, create_protocol=handleClient3) as websocket: - print("Connected to server at:", clientPort) - client_ports.append(clientPort) - return "Used ports:\n" + '\n'.join(map(str, client_ports)) - message = await websocket.recv() - inputMsg = "client: " + handle_message - window['-INPUT-'].print(str(inputMsg) + '\n') - print(message) - return message - -async def handleClient3(websocket, path): - return client3_msg - -# Function to stop the WebSocket server -def stop_websockets(): - global server - if server: - 
cursor.close() - db.close() - server.close() - print("WebSocket server stopped.") - else: - print("WebSocket server is not running.") - -# Start the WebSocket server -async def start_websockets(websocketPort): - uri = f'wss://localhost:{websocketPort}' - global server - # Create a WebSocket client that connects to the server - server = await(websockets.serve(handleWebSocket, uri)) - server_ports.append(websocketPort) - print(f"Starting WebSocket server on port {websocketPort}...") - return "Used ports:\n" + '\n'.join(map(str, server_ports)) - await stop - await server.close() - -async def start_client(websocketPort): - uri = f'ws://localhost:{websocketPort}' - while True: - try: - async with websockets.connect(uri) as ws: - print("Connected to server at:", websocketPort) - while True: - message = await ws.recv() - print(message) - response = await askQuestion(message) - print(response) - await ws.send(response) - except websockets.exceptions.ConnectionClosedOK: - print("Connection closed") - continue - -async def start_interface(): - while True: - event, values = window.read() - if event in (sg.WIN_CLOSED, 'Stop WebSocket client'): - break - elif event == 'Start WebSocket server': - websocketPort = get_port(values) - loop = asyncio.get_event_loop() - loop.run_until_complete(start_websockets(websocketPort)) - elif event == 'Start WebSocket client': - websocketPort = get_port(values) - loop = asyncio.get_event_loop() - loop.run_until_complete(start_client(websocketPort)) - elif event == 'Ask the agent': - question = values['-USERINPUT-'] - loop = asyncio.get_event_loop() - loop.run_until_complete(handle_user(question)) - elif event == 'Clear Textboxes': - window['-INPUT-'].update('') - window['-OUTPUT-'].update('') - window['-USERINPUT-'].update('') - - window.close() - -with gr.Blocks() as demo: - with gr.Row(): - # Use the client_messages list to update the messageTextbox - client_msg = gr.Textbox(lines=15, max_lines=130, label="Client messages", interactive=False) - # Use the server_responses list to update the serverMessageTextbox - server_msg = gr.Textbox(lines=15, max_lines=130, label="Server responses", interactive=False) - with gr.Row(): - userInput = gr.Textbox(label="User Input") - with gr.Row(): - Bot = gr.Button("Ask Server") - with gr.Row(): - websocketPort = gr.Slider(minimum=1000, maximum=9999, label="Websocket server port", interactive=True, randomize=False) - startServer = gr.Button("Start WebSocket Server") - stopWebsockets = gr.Button("Stop WebSocket Server") - with gr.Row(): - port = gr.Textbox() - with gr.Row(): - clientPort = gr.Slider(minimum=1000, maximum=9999, label="Websocket server port", interactive=True, randomize=False) - startClient = gr.Button("Start WebSocket client") - stopClient = gr.Button("Stop WebSocket client") - with gr.Row(): - PortInUse = gr.Textbox() - startServer.click(start_websockets, inputs=websocketPort, outputs=port) - startClient.click(start_client, inputs=clientPort, outputs=[PortInUse, client_msg]) - stopWebsockets.click(stop_websockets, inputs=None, outputs=server_msg) - startInterface = gr.Button("Start GUI") - Bot.click(askQuestion, inputs=userInput, outputs=server_msg) - startInterface.click(start_interface, inputs=None, outputs=None) - -demo.queue() -demo.launch() \ No newline at end of file diff --git a/spaces/Arikkod/FoodVisionMini/README.md b/spaces/Arikkod/FoodVisionMini/README.md deleted file mode 100644 index 3040d0f0a32834c92b60166124edb686e96abb38..0000000000000000000000000000000000000000 --- 
a/spaces/Arikkod/FoodVisionMini/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FoodVisionMini -emoji: 🥩🍕🍣 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AtomdffAI/wechatgpt4atom/docker/entrypoint.sh b/spaces/AtomdffAI/wechatgpt4atom/docker/entrypoint.sh deleted file mode 100644 index 214ca864edde11180b62497d361caf753058ef1e..0000000000000000000000000000000000000000 --- a/spaces/AtomdffAI/wechatgpt4atom/docker/entrypoint.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/bash -set -e - -# build prefix -CHATGPT_ON_WECHAT_PREFIX=${CHATGPT_ON_WECHAT_PREFIX:-""} -# path to config.json -CHATGPT_ON_WECHAT_CONFIG_PATH=${CHATGPT_ON_WECHAT_CONFIG_PATH:-""} -# execution command line -CHATGPT_ON_WECHAT_EXEC=${CHATGPT_ON_WECHAT_EXEC:-""} - -OPEN_AI_API_KEY=${OPEN_AI_API_KEY:-""} -SINGLE_CHAT_PREFIX=${SINGLE_CHAT_PREFIX:-""} -SINGLE_CHAT_REPLY_PREFIX=${SINGLE_CHAT_REPLY_PREFIX:-""} -GROUP_CHAT_PREFIX=${GROUP_CHAT_PREFIX:-""} -GROUP_NAME_WHITE_LIST=${GROUP_NAME_WHITE_LIST:-""} -IMAGE_CREATE_PREFIX=${IMAGE_CREATE_PREFIX:-""} -CONVERSATION_MAX_TOKENS=${CONVERSATION_MAX_TOKENS:-""} -CHARACTER_DESC=${CHARACTER_DESC:-""} - -# CHATGPT_ON_WECHAT_PREFIX is empty, use /app -if [ "$CHATGPT_ON_WECHAT_PREFIX" == "" ] ; then - CHATGPT_ON_WECHAT_PREFIX=/app -fi - -# CHATGPT_ON_WECHAT_CONFIG_PATH is empty, use '/app/config.json' -if [ "$CHATGPT_ON_WECHAT_CONFIG_PATH" == "" ] ; then - CHATGPT_ON_WECHAT_CONFIG_PATH=$CHATGPT_ON_WECHAT_PREFIX/config.json -fi - -# CHATGPT_ON_WECHAT_EXEC is empty, use ‘python app.py’ -if [ "$CHATGPT_ON_WECHAT_EXEC" == "" ] ; then - CHATGPT_ON_WECHAT_EXEC="python app.py" -fi - -# modify content in config.json -if [ "$OPEN_AI_API_KEY" != "" ] ; then - sed -i "2c \"open_ai_api_key\": \"$OPEN_AI_API_KEY\"," $CHATGPT_ON_WECHAT_CONFIG_PATH -else - echo -e "\033[31m[Warning] You need to set OPEN_AI_API_KEY before running!\033[0m" -fi - -if [ "$WECHATY_PUPPET_SERVICE_TOKEN" != "" ] ; then - sed -i "3c \"wechaty_puppet_service_token\": \"$WECHATY_PUPPET_SERVICE_TOKEN\"," $CHATGPT_ON_WECHAT_CONFIG_PATH -else - echo -e "\033[31m[Info] You need to set WECHATY_PUPPET_SERVICE_TOKEN if you use wechaty!\033[0m" -fi - -if [ "$SINGLE_CHAT_PREFIX" != "" ] ; then - sed -i "4c \"single_chat_prefix\": $SINGLE_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -if [ "$SINGLE_CHAT_REPLY_PREFIX" != "" ] ; then - sed -i "5c \"single_chat_reply_prefix\": $SINGLE_CHAT_REPLY_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -if [ "$GROUP_CHAT_PREFIX" != "" ] ; then - sed -i "6c \"group_chat_prefix\": $GROUP_CHAT_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -if [ "$GROUP_NAME_WHITE_LIST" != "" ] ; then - sed -i "7c \"group_name_white_list\": $GROUP_NAME_WHITE_LIST," $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -if [ "$IMAGE_CREATE_PREFIX" != "" ] ; then - sed -i "8c \"image_create_prefix\": $IMAGE_CREATE_PREFIX," $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -if [ "$CONVERSATION_MAX_TOKENS" != "" ] ; then - sed -i "9c \"conversation_max_tokens\": $CONVERSATION_MAX_TOKENS," $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -if [ "$CHARACTER_DESC" != "" ] ; then - sed -i "10c \"character_desc\": \"$CHARACTER_DESC\"" $CHATGPT_ON_WECHAT_CONFIG_PATH -fi - -# go to prefix dir -cd $CHATGPT_ON_WECHAT_PREFIX -# excute -$CHATGPT_ON_WECHAT_EXEC - - diff --git a/spaces/Brasd99/JustClothify/README.md b/spaces/Brasd99/JustClothify/README.md deleted file 
mode 100644 index ca8198ed08313ed46084c3a51c70055974ab4dd1..0000000000000000000000000000000000000000 --- a/spaces/Brasd99/JustClothify/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: JustClothify -emoji: 💻 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/doc/GETTING_STARTED.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/doc/GETTING_STARTED.md deleted file mode 100644 index a6bcbedee42835c99fa5aa1110309329dfbff6f0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/doc/GETTING_STARTED.md +++ /dev/null @@ -1,58 +0,0 @@ -# Getting Started with DensePose - -## Inference with Pre-trained Models - -1. Pick a model and its config file from [Model Zoo](MODEL_ZOO.md), for example [densepose_rcnn_R_50_FPN_s1x.yaml](../configs/densepose_rcnn_R_50_FPN_s1x.yaml) -2. Run the [Apply Net](TOOL_APPLY_NET.md) tool to visualize the results or save the to disk. For example, to use contour visualization for DensePose, one can run: -```bash -python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml densepose_rcnn_R_50_FPN_s1x.pkl image.jpg dp_contour,bbox --output image_densepose_contour.png -``` -Please see [Apply Net](TOOL_APPLY_NET.md) for more details on the tool. - -## Training - -First, prepare the [dataset](http://densepose.org/#dataset) into the following structure under the directory you'll run training scripts: -
-datasets/coco/
-  annotations/
-    densepose_{train,minival,valminusminival}2014.json
-    densepose_minival2014_100.json   (optional, for testing only)
-  {train,val}2014/
-    # image files that are mentioned in the corresponding json
-
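(Editorial aside, not part of the original DensePose document: a minimal sketch of how one might sanity-check that the layout above is in place before launching training. The file names are taken directly from the listing; the `datasets/coco` root is assumed to be relative to the directory the training scripts are run from.)

```python
# Illustrative layout check only -- the paths below are assumptions based on the listing above.
import os

root = "datasets/coco"
expected = [
    "annotations/densepose_train2014.json",
    "annotations/densepose_minival2014.json",
    "annotations/densepose_valminusminival2014.json",
    "train2014",
    "val2014",
]

# Collect any entries that are missing from the expected directory structure.
missing = [p for p in expected if not os.path.exists(os.path.join(root, p))]
if missing:
    raise SystemExit("Missing dataset entries: " + ", ".join(missing))
print("DensePose dataset layout looks complete.")
```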
- -To train a model one can use the [train_net.py](../train_net.py) script. -This script was used to train all DensePose models in [Model Zoo](MODEL_ZOO.md). -For example, to launch end-to-end DensePose-RCNN training with ResNet-50 FPN backbone -on 8 GPUs following the s1x schedule, one can run -```bash -python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml --num-gpus 8 -``` -The configs are made for 8-GPU training. To train on 1 GPU, one can apply the -[linear learning rate scaling rule](https://arxiv.org/abs/1706.02677): -```bash -python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml \ - SOLVER.IMS_PER_BATCH 2 SOLVER.BASE_LR 0.0025 -``` - -## Evaluation - -Model testing can be done in the same way as training, except for an additional flag `--eval-only` and -model location specification through `MODEL.WEIGHTS model.pth` in the command line -```bash -python train_net.py --config-file configs/densepose_rcnn_R_50_FPN_s1x.yaml \ - --eval-only MODEL.WEIGHTS model.pth -``` - -## Tools - -We provide tools which allow one to: - - easily view DensePose annotated data in a dataset; - - perform DensePose inference on a set of images; - - visualize DensePose model results; - -`query_db` is a tool to print or visualize DensePose data in a dataset. -Please refer to [Query DB](TOOL_QUERY_DB.md) for more details on this tool - -`apply_net` is a tool to print or visualize DensePose results. -Please refer to [Apply Net](TOOL_APPLY_NET.md) for more details on this tool diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/has_nested_type.h b/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/has_nested_type.h deleted file mode 100644 index 78bb4b7f57a2f62d70ed6cf04bf58913b4100b69..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/has_nested_type.h +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -#define __THRUST_DEFINE_HAS_NESTED_TYPE(trait_name, nested_type_name) \ -template \ - struct trait_name \ -{ \ - typedef char yes_type; \ - typedef int no_type; \ - template static yes_type test(typename S::nested_type_name *); \ - template static no_type test(...); \ - static bool const value = sizeof(test(0)) == sizeof(yes_type);\ - typedef thrust::detail::integral_constant type;\ -}; - diff --git a/spaces/CVPR/regionclip-demo/detectron2/engine/hooks.py b/spaces/CVPR/regionclip-demo/detectron2/engine/hooks.py deleted file mode 100644 index 79e1b164a54ef14889c3c7082534b6c38d85413a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/engine/hooks.py +++ /dev/null @@ -1,466 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import datetime -import itertools -import logging -import os -import tempfile -import time -from collections import Counter -import torch -from fvcore.common.checkpoint import PeriodicCheckpointer as _PeriodicCheckpointer -from fvcore.common.param_scheduler import ParamScheduler -from fvcore.common.timer import Timer -from fvcore.nn.precise_bn import get_bn_modules, update_bn_stats - -import detectron2.utils.comm as comm -from detectron2.evaluation.testing import flatten_results_dict -from detectron2.solver import LRMultiplier -from detectron2.utils.events import EventStorage, EventWriter -from detectron2.utils.file_io import PathManager - -from .train_loop import HookBase - -__all__ = [ - "CallbackHook", - "IterationTimer", - "PeriodicWriter", - "PeriodicCheckpointer", - "LRScheduler", - "AutogradProfiler", - "EvalHook", - "PreciseBN", -] - - -""" -Implement some common hooks. -""" - - -class CallbackHook(HookBase): - """ - Create a hook using callback functions provided by the user. - """ - - def __init__(self, *, before_train=None, after_train=None, before_step=None, after_step=None): - """ - Each argument is a function that takes one argument: the trainer. - """ - self._before_train = before_train - self._before_step = before_step - self._after_step = after_step - self._after_train = after_train - - def before_train(self): - if self._before_train: - self._before_train(self.trainer) - - def after_train(self): - if self._after_train: - self._after_train(self.trainer) - # The functions may be closures that hold reference to the trainer - # Therefore, delete them to avoid circular reference. - del self._before_train, self._after_train - del self._before_step, self._after_step - - def before_step(self): - if self._before_step: - self._before_step(self.trainer) - - def after_step(self): - if self._after_step: - self._after_step(self.trainer) - - -class IterationTimer(HookBase): - """ - Track the time spent for each iteration (each run_step call in the trainer). - Print a summary in the end of training. - - This hook uses the time between the call to its :meth:`before_step` - and :meth:`after_step` methods. - Under the convention that :meth:`before_step` of all hooks should only - take negligible amount of time, the :class:`IterationTimer` hook should be - placed at the beginning of the list of hooks to obtain accurate timing. - """ - - def __init__(self, warmup_iter=3): - """ - Args: - warmup_iter (int): the number of iterations at the beginning to exclude - from timing. 
- """ - self._warmup_iter = warmup_iter - self._step_timer = Timer() - self._start_time = time.perf_counter() - self._total_timer = Timer() - - def before_train(self): - self._start_time = time.perf_counter() - self._total_timer.reset() - self._total_timer.pause() - - def after_train(self): - logger = logging.getLogger(__name__) - total_time = time.perf_counter() - self._start_time - total_time_minus_hooks = self._total_timer.seconds() - hook_time = total_time - total_time_minus_hooks - - num_iter = self.trainer.iter + 1 - self.trainer.start_iter - self._warmup_iter - - if num_iter > 0 and total_time_minus_hooks > 0: - # Speed is meaningful only after warmup - # NOTE this format is parsed by grep in some scripts - logger.info( - "Overall training speed: {} iterations in {} ({:.4f} s / it)".format( - num_iter, - str(datetime.timedelta(seconds=int(total_time_minus_hooks))), - total_time_minus_hooks / num_iter, - ) - ) - - logger.info( - "Total training time: {} ({} on hooks)".format( - str(datetime.timedelta(seconds=int(total_time))), - str(datetime.timedelta(seconds=int(hook_time))), - ) - ) - - def before_step(self): - self._step_timer.reset() - self._total_timer.resume() - - def after_step(self): - # +1 because we're in after_step, the current step is done - # but not yet counted - iter_done = self.trainer.iter - self.trainer.start_iter + 1 - if iter_done >= self._warmup_iter: - sec = self._step_timer.seconds() - self.trainer.storage.put_scalars(time=sec) - else: - self._start_time = time.perf_counter() - self._total_timer.reset() - - self._total_timer.pause() - - -class PeriodicWriter(HookBase): - """ - Write events to EventStorage (by calling ``writer.write()``) periodically. - - It is executed every ``period`` iterations and after the last iteration. - Note that ``period`` does not affect how data is smoothed by each writer. - """ - - def __init__(self, writers, period=20): - """ - Args: - writers (list[EventWriter]): a list of EventWriter objects - period (int): - """ - self._writers = writers - for w in writers: - assert isinstance(w, EventWriter), w - self._period = period - - def after_step(self): - if (self.trainer.iter + 1) % self._period == 0 or ( - self.trainer.iter == self.trainer.max_iter - 1 - ): - for writer in self._writers: - writer.write() - - def after_train(self): - for writer in self._writers: - # If any new data is found (e.g. produced by other after_train), - # write them before closing - writer.write() - writer.close() - - -class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase): - """ - Same as :class:`detectron2.checkpoint.PeriodicCheckpointer`, but as a hook. - - Note that when used as a hook, - it is unable to save additional data other than what's defined - by the given `checkpointer`. - - It is executed every ``period`` iterations and after the last iteration. - """ - - def before_train(self): - self.max_iter = self.trainer.max_iter - - def after_step(self): - # No way to use **kwargs - self.step(self.trainer.iter) - - -class LRScheduler(HookBase): - """ - A hook which executes a torch builtin LR scheduler and summarizes the LR. - It is executed after every iteration. - """ - - def __init__(self, optimizer=None, scheduler=None): - """ - Args: - optimizer (torch.optim.Optimizer): - scheduler (torch.optim.LRScheduler or fvcore.common.param_scheduler.ParamScheduler): - if a :class:`ParamScheduler` object, it defines the multiplier over the base LR - in the optimizer. - - If any argument is not given, will try to obtain it from the trainer. 
- """ - self._optimizer = optimizer - self._scheduler = scheduler - - def before_train(self): - self._optimizer = self._optimizer or self.trainer.optimizer - if isinstance(self.scheduler, ParamScheduler): - self._scheduler = LRMultiplier( - self._optimizer, - self.scheduler, - self.trainer.max_iter, - last_iter=self.trainer.iter - 1, - ) - - # NOTE: some heuristics on what LR to summarize - # summarize the param group with most parameters - largest_group = max(len(g["params"]) for g in self._optimizer.param_groups) - - if largest_group == 1: - # If all groups have one parameter, - # then find the most common initial LR, and use it for summary - lr_count = Counter([g["lr"] for g in self._optimizer.param_groups]) - lr = lr_count.most_common()[0][0] - for i, g in enumerate(self._optimizer.param_groups): - if g["lr"] == lr: - self._best_param_group_id = i - break - else: - for i, g in enumerate(self._optimizer.param_groups): - if len(g["params"]) == largest_group: - self._best_param_group_id = i - break - - def after_step(self): - lr = self._optimizer.param_groups[self._best_param_group_id]["lr"] - self.trainer.storage.put_scalar("lr", lr, smoothing_hint=False) - self.scheduler.step() - - @property - def scheduler(self): - return self._scheduler or self.trainer.scheduler - - def state_dict(self): - if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler): - return self.scheduler.state_dict() - return {} - - def load_state_dict(self, state_dict): - if isinstance(self.scheduler, torch.optim.lr_scheduler._LRScheduler): - logger = logging.getLogger(__name__) - logger.info("Loading scheduler from state_dict ...") - self.scheduler.load_state_dict(state_dict) - - -class AutogradProfiler(HookBase): - """ - A hook which runs `torch.autograd.profiler.profile`. - - Examples: - :: - hooks.AutogradProfiler( - lambda trainer: trainer.iter > 10 and trainer.iter < 20, self.cfg.OUTPUT_DIR - ) - - The above example will run the profiler for iteration 10~20 and dump - results to ``OUTPUT_DIR``. We did not profile the first few iterations - because they are typically slower than the rest. - The result files can be loaded in the ``chrome://tracing`` page in chrome browser. - - Note: - When used together with NCCL on older version of GPUs, - autograd profiler may cause deadlock because it unnecessarily allocates - memory on every device it sees. The memory management calls, if - interleaved with NCCL calls, lead to deadlock on GPUs that do not - support ``cudaLaunchCooperativeKernelMultiDevice``. - """ - - def __init__(self, enable_predicate, output_dir, *, use_cuda=True): - """ - Args: - enable_predicate (callable[trainer -> bool]): a function which takes a trainer, - and returns whether to enable the profiler. - It will be called once every step, and can be used to select which steps to profile. - output_dir (str): the output directory to dump tracing files. - use_cuda (bool): same as in `torch.autograd.profiler.profile`. 
- """ - self._enable_predicate = enable_predicate - self._use_cuda = use_cuda - self._output_dir = output_dir - - def before_step(self): - if self._enable_predicate(self.trainer): - self._profiler = torch.autograd.profiler.profile(use_cuda=self._use_cuda) - self._profiler.__enter__() - else: - self._profiler = None - - def after_step(self): - if self._profiler is None: - return - self._profiler.__exit__(None, None, None) - PathManager.mkdirs(self._output_dir) - out_file = os.path.join( - self._output_dir, "profiler-trace-iter{}.json".format(self.trainer.iter) - ) - if "://" not in out_file: - self._profiler.export_chrome_trace(out_file) - else: - # Support non-posix filesystems - with tempfile.TemporaryDirectory(prefix="detectron2_profiler") as d: - tmp_file = os.path.join(d, "tmp.json") - self._profiler.export_chrome_trace(tmp_file) - with open(tmp_file) as f: - content = f.read() - with PathManager.open(out_file, "w") as f: - f.write(content) - - -class EvalHook(HookBase): - """ - Run an evaluation function periodically, and at the end of training. - - It is executed every ``eval_period`` iterations and after the last iteration. - """ - - def __init__(self, eval_period, eval_function): - """ - Args: - eval_period (int): the period to run `eval_function`. Set to 0 to - not evaluate periodically (but still after the last iteration). - eval_function (callable): a function which takes no arguments, and - returns a nested dict of evaluation metrics. - - Note: - This hook must be enabled in all or none workers. - If you would like only certain workers to perform evaluation, - give other workers a no-op function (`eval_function=lambda: None`). - """ - self._period = eval_period - self._func = eval_function - - def _do_eval(self): - results = self._func() - - if results: - assert isinstance( - results, dict - ), "Eval function must return a dict. Got {} instead.".format(results) - - flattened_results = flatten_results_dict(results) - for k, v in flattened_results.items(): - try: - v = float(v) - except Exception as e: - raise ValueError( - "[EvalHook] eval_function should return a nested dict of float. " - "Got '{}: {}' instead.".format(k, v) - ) from e - self.trainer.storage.put_scalars(**flattened_results, smoothing_hint=False) - - # Evaluation may take different time among workers. - # A barrier make them start the next iteration together. - comm.synchronize() - - def after_step(self): - next_iter = self.trainer.iter + 1 - if self._period > 0 and next_iter % self._period == 0: - # do the last eval in after_train - if next_iter != self.trainer.max_iter: - self._do_eval() - - def after_train(self): - # This condition is to prevent the eval from running after a failed training - if self.trainer.iter + 1 >= self.trainer.max_iter: - self._do_eval() - # func is likely a closure that holds reference to the trainer - # therefore we clean it to avoid circular reference in the end - del self._func - - -class PreciseBN(HookBase): - """ - The standard implementation of BatchNorm uses EMA in inference, which is - sometimes suboptimal. - This class computes the true average of statistics rather than the moving average, - and put true averages to every BN layer in the given model. - - It is executed every ``period`` iterations and after the last iteration. - """ - - def __init__(self, period, model, data_loader, num_iter): - """ - Args: - period (int): the period this hook is run, or 0 to not run during training. - The hook will always run in the end of training. 
- model (nn.Module): a module whose all BN layers in training mode will be - updated by precise BN. - Note that user is responsible for ensuring the BN layers to be - updated are in training mode when this hook is triggered. - data_loader (iterable): it will produce data to be run by `model(data)`. - num_iter (int): number of iterations used to compute the precise - statistics. - """ - self._logger = logging.getLogger(__name__) - if len(get_bn_modules(model)) == 0: - self._logger.info( - "PreciseBN is disabled because model does not contain BN layers in training mode." - ) - self._disabled = True - return - - self._model = model - self._data_loader = data_loader - self._num_iter = num_iter - self._period = period - self._disabled = False - - self._data_iter = None - - def after_step(self): - next_iter = self.trainer.iter + 1 - is_final = next_iter == self.trainer.max_iter - if is_final or (self._period > 0 and next_iter % self._period == 0): - self.update_stats() - - def update_stats(self): - """ - Update the model with precise statistics. Users can manually call this method. - """ - if self._disabled: - return - - if self._data_iter is None: - self._data_iter = iter(self._data_loader) - - def data_loader(): - for num_iter in itertools.count(1): - if num_iter % 100 == 0: - self._logger.info( - "Running precise-BN ... {}/{} iterations.".format(num_iter, self._num_iter) - ) - # This way we can reuse the same iterator - yield next(self._data_iter) - - with EventStorage(): # capture events in a new storage to discard them - self._logger.info( - "Running precise-BN for {} iterations... ".format(self._num_iter) - + "Note that this could produce different statistics every time." - ) - update_bn_stats(self._model, data_loader(), self._num_iter) diff --git a/spaces/CVPR/regionclip-demo/detectron2/layers/blocks.py b/spaces/CVPR/regionclip-demo/detectron2/layers/blocks.py deleted file mode 100644 index 1995a4bf7339e8deb7eaaffda4f819dda55e7ac7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/layers/blocks.py +++ /dev/null @@ -1,111 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import fvcore.nn.weight_init as weight_init -from torch import nn - -from .batch_norm import FrozenBatchNorm2d, get_norm -from .wrappers import Conv2d - - -""" -CNN building blocks. -""" - - -class CNNBlockBase(nn.Module): - """ - A CNN block is assumed to have input channels, output channels and a stride. - The input and output of `forward()` method must be NCHW tensors. - The method can perform arbitrary computation but must match the given - channels and stride specification. - - Attribute: - in_channels (int): - out_channels (int): - stride (int): - """ - - def __init__(self, in_channels, out_channels, stride): - """ - The `__init__` method of any subclass should also contain these arguments. - - Args: - in_channels (int): - out_channels (int): - stride (int): - """ - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.stride = stride - - def freeze(self): - """ - Make this block not trainable. - This method sets all parameters to `requires_grad=False`, - and convert all BatchNorm layers to FrozenBatchNorm - - Returns: - the block itself - """ - for p in self.parameters(): - p.requires_grad = False - FrozenBatchNorm2d.convert_frozen_batchnorm(self) - return self - - -class DepthwiseSeparableConv2d(nn.Module): - """ - A kxk depthwise convolution + a 1x1 convolution. 
- - In :paper:`xception`, norm & activation are applied on the second conv. - :paper:`mobilenet` uses norm & activation on both convs. - """ - - def __init__( - self, - in_channels, - out_channels, - kernel_size=3, - padding=1, - dilation=1, - *, - norm1=None, - activation1=None, - norm2=None, - activation2=None, - ): - """ - Args: - norm1, norm2 (str or callable): normalization for the two conv layers. - activation1, activation2 (callable(Tensor) -> Tensor): activation - function for the two conv layers. - """ - super().__init__() - self.depthwise = Conv2d( - in_channels, - in_channels, - kernel_size=kernel_size, - padding=padding, - dilation=dilation, - groups=in_channels, - bias=not norm1, - norm=get_norm(norm1, in_channels), - activation=activation1, - ) - self.pointwise = Conv2d( - in_channels, - out_channels, - kernel_size=1, - bias=not norm2, - norm=get_norm(norm2, out_channels), - activation=activation2, - ) - - # default initialization - weight_init.c2_msra_fill(self.depthwise) - weight_init.c2_msra_fill(self.pointwise) - - def forward(self, x): - return self.pointwise(self.depthwise(x)) diff --git a/spaces/CVPR/v-doc_abstractive_mac/config.py b/spaces/CVPR/v-doc_abstractive_mac/config.py deleted file mode 100644 index 13bb185bf1cb6c6bb1908ca21301ac2258cad50f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/config.py +++ /dev/null @@ -1,491 +0,0 @@ -import os -import argparse - -###################################### configuration ###################################### -class Config(object): - - typeFilters = [[], ["1_query_size_", - "1_query_material_", - "2_equal_color_", - "2_equal_shape_"], - ["1_query_color_", - "1_query_shape_", - "2_equal_size_", - "2_equal_material_"]] - - #### files interface - ## data files - dataPath = "" # dataset specific - datasetFilename = "" # dataset specific - - # file names - imagesFilename = "{tier}.h5" # Images - instancesFilename = "{tier}Instances.json" - # symbols dictionaries - questionDictFilename = "questionDict.pkl" - answerDictFilename = "answerDict.pkl" - qaDictFilename = "qaDict.pkl" - - ## experiment files - expPathname = "{expName}" - expName = "" # will be assigned through argparse - - weightsPath = "./weights" - weightsFilename = "weights{epoch}.ckpt" - - # model predictions and optionally attention maps - predsPath = "./preds" - predsFilename = "{tier}Predictions-{expName}.json" - answersFilename = "{tier}Answers-{expName}.txt" - - # logging of accuracy, loss etc. 
per epoch - logPath = "./results" - logFilename = "results-{expName}.csv" - - # configuration file of the used flags to run the experiment - configPath = "./results" - configFilename = "config-{expName}.json" - - def toString(self): - return self.expName - - # make directories of experiment if not exist yet - def makedirs(self, directory): - directory = os.path.join(directory, self.expPath()) - if not os.path.exists(directory): - os.makedirs(directory) - return directory - - ### filename builders - ## data files - def dataFile(self, filename): - return os.path.join(self.dataPath, filename) - - def generatedFile(self, filename): - return self.dataFile(self.generatedPrefix + filename) - - datasetFile = lambda self, tier: self.dataFile(self.datasetFilename.format(tier = tier)) - imagesIdsFile = lambda self, tier: self.dataFile(self.imgIdsFilename.format(tier = tier)) # - imagesFile = lambda self, tier: self.dataFile(self.imagesFilename.format(tier = tier)) - instancesFile = lambda self, tier: self.generatedFile(self.instancesFilename.format(tier = tier)) - - questionDictFile = lambda self: self.generatedFile(self.questionDictFilename) - answerDictFile = lambda self: self.generatedFile(self.answerDictFilename) - qaDictFile = lambda self: self.generatedFile(self.qaDictFilename) - - ## experiment files - expPath = lambda self: self.expPathname.format(expName = self.toString()) - - weightsDir = lambda self: self.makedirs(self.weightsPath) - predsDir = lambda self: self.makedirs(self.predsPath) - logDir = lambda self: self.makedirs(self.logPath) - configDir = lambda self: self.makedirs(self.configPath) - - weightsFile = lambda self, epoch: os.path.join(self.weightsDir(), self.weightsFilename.format(epoch = str(epoch))) - predsFile = lambda self, tier: os.path.join(self.predsDir(), self.predsFilename.format(tier = tier, expName = self.expName)) - answersFile = lambda self, tier: os.path.join(self.predsDir(), self.answersFilename.format(tier = tier, expName = self.expName)) - logFile = lambda self: os.path.join(self.logDir(), self.logFilename.format(expName = self.expName)) - configFile = lambda self: os.path.join(self.configDir(), self.configFilename.format(expName = self.expName)) - - -# global configuration variable. 
Holds file paths and program parameters -config = Config() - -###################################### arguments ###################################### -def parseArgs(): - parser = argparse.ArgumentParser(fromfile_prefix_chars = "@") - - - ################ systems - - #custom args - parser.add_argument('--train_image_length', default=500, type=int, ) - parser.add_argument('--test_image_length', default=100, type=int, ) - parser.add_argument('--val_image_length', default=50, type=int, ) - - # gpus and memory - parser.add_argument("--gpus", default = "", type = str, help = "comma-separated list of gpus to use") - parser.add_argument("--gpusNum", default = 1, type = int, help = "number of gpus to use") - - parser.add_argument("--allowGrowth", action = "store_true", help = "allow gpu memory growth") - parser.add_argument("--maxMemory", default = 1.0, type = float, help = "set maximum gpu memory usage") - - parser.add_argument("--parallel", action = "store_true", help = "load images in parallel to batch running") - parser.add_argument("--workers", default = 1, type = int, help = "number of workers to load images") - parser.add_argument("--taskSize", default = 8, type = int, help = "number of image batches to load in advance") # 40 - # parser.add_argument("--tasksNum", default = 20, type = int, help = "maximal queue size for tasks (to constrain ram usage)") # 2 - - parser.add_argument("--useCPU", action = "store_true", help = "put word embeddings on cpu") - - # weight loading and training - parser.add_argument("-r", "--restore", action = "store_true", help = "restore last epoch (based on results file)") - parser.add_argument("--restoreEpoch", default = 0, type = int, help = "if positive, specific epoch to restore") - parser.add_argument("--weightsToKeep", default = 2, type = int, help = "number of previous epochs' weights keep") - parser.add_argument("--saveEvery", default = 3000, type = int, help = "number of iterations to save weights after") - parser.add_argument("--calleEvery", default = 1500, type = int, help = "number of iterations to call custom function after") - - parser.add_argument("--saveSubset", action = "store_true", help = "save only subset of the weights") - parser.add_argument("--trainSubset", action = "store_true", help = "train only subset of the weights") - parser.add_argument("--varSubset", default = [], nargs = "*", type = str, help = "list of namespaces to train on") - - # trainReader = ["questionEmbeddings", "questionReader"] - # saveControl = ["questionEmbeddings", "programEmbeddings", "seqReader", "programControl"] - - # experiment files - parser.add_argument("--expName", default = "PDF_exp_extra", type = str, help = "experiment name") - - # data files - parser.add_argument("--dataset", default = "PDF", choices = ["PDF", "CLEVR", "NLVR"], type = str) # - parser.add_argument("--dataBasedir", default = "./", type = str, help = "data base directory") # /jagupard14/scr1/dorarad/ - parser.add_argument("--generatedPrefix", default = "gennew", type = str, help = "prefix for generated data files") - parser.add_argument("--featureType", default = "norm_128x32", type = str, help = "features type") # - # resnet101_512x128, norm_400x100, none_80x20, normPerImage_80x20, norm_80x20 - - ################ optimization - - # training/testing - parser.add_argument("--train", action = "store_true", help = "run training") - parser.add_argument("--evalTrain", action = "store_true", help = "run eval with ema on train dataset") # - parser.add_argument("--test", action = "store_true", help = "run 
testing every epoch and generate predictions file") # - parser.add_argument("--finalTest", action = "store_true", help = "run testing on final epoch") - parser.add_argument("--retainVal", action = "store_true", help = "retain validation order between runs") # - - parser.add_argument("--getPreds", action = "store_true", help = "store prediction") - parser.add_argument("--getAtt", action = "store_true", help = "store attention maps") - parser.add_argument("--analysisType", default = "", type = str, choices = ["", "questionLength, programLength","type", "arity"], help = "show breakdown of results according to type") # - - parser.add_argument("--trainedNum", default = 0, type = int, help = "if positive, train on subset of the data") - parser.add_argument("--testedNum", default = 0, type = int, help = "if positive, test on subset of the data") - - # bucketing - parser.add_argument("--noBucket", action = "store_true", help = "bucket data according to question length") - parser.add_argument("--noRebucket", action = "store_true", help = "bucket data according to question and program length") # - - # filtering - parser.add_argument("--tOnlyChain", action = "store_true", help = "train only chain questions") - parser.add_argument("--vOnlyChain", action = "store_true", help = "test only chain questions") - parser.add_argument("--tMaxQ", default = 0, type = int, help = "if positive, train on questions up to this length") - parser.add_argument("--tMaxP", default = 0, type = int, help = "if positive, test on questions up to this length") - parser.add_argument("--vMaxQ", default = 0, type = int, help = "if positive, train on questions with programs up to this length") - parser.add_argument("--vMaxP", default = 0, type = int, help = "if positive, test on questions with programs up to this length") - parser.add_argument("--tFilterOp", default = 0, type = int, help = "train questions by to be included in the types listed") - parser.add_argument("--vFilterOp", default = 0, type = int, help = "test questions by to be included in the types listed") - - # extra and extraVal - parser.add_argument("--extra", action = "store_true", help = "prepare extra data (add to vocabulary") # - parser.add_argument("--trainExtra", action = "store_true", help = "train (only) on extra data") # - parser.add_argument("--alterExtra", action = "store_true", help = "alter main data training with extra dataset") # - parser.add_argument("--alterNum", default = 1, type = int, help = "alteration rate") # - parser.add_argument("--extraVal", action = "store_true", help = "only extra validation data (for compositional clevr)") # - parser.add_argument("--finetuneNum", default = 0, type = int, help = "if positive, finetune on that subset of val (for compositional clevr)") # - - # exponential moving average - parser.add_argument("--useEMA", action = "store_true", help = "use exponential moving average for weights") - parser.add_argument("--emaDecayRate", default = 0.999, type = float, help = "decay rate for exponential moving average") - - # sgd optimizer - parser.add_argument("--batchSize", default = 64, type = int, help = "batch size") - parser.add_argument("--epochs", default = 100, type = int, help = "number of epochs to run") - parser.add_argument("--lr", default = 0.0001, type = float, help = "learning rate") - parser.add_argument("--lrReduce", action = "store_true", help = "reduce learning rate if training loss doesn't go down (manual annealing)") - parser.add_argument("--lrDecayRate", default = 0.5, type = float, help = "learning decay 
rate if training loss doesn't go down") - parser.add_argument("--earlyStopping", default = 0, type = int, help = "if positive, stop if no improvement for that number of epochs") - - parser.add_argument("--adam", action = "store_true", help = "use adam") - parser.add_argument("--l2", default = 0, type = float, help = "if positive, add l2 loss term") - parser.add_argument("--clipGradients", action = "store_true", help = "clip gradients") - parser.add_argument("--gradMaxNorm", default = 8, type = int, help = "clipping value") - - # batch normalization - parser.add_argument("--memoryBN", action = "store_true", help = "use batch normalization on the recurrent memory") - parser.add_argument("--stemBN", action = "store_true", help = "use batch normalization in the image input unit (stem)") - parser.add_argument("--outputBN", action = "store_true", help = "use batch normalization in the output unit") - parser.add_argument("--bnDecay", default = 0.999, type = float, help = "batch norm decay rate") - parser.add_argument("--bnCenter", action = "store_true", help = "batch norm with centering") - parser.add_argument("--bnScale", action = "store_true", help = "batch norm with scaling") - - ## dropouts - parser.add_argument("--encInputDropout", default = 0.85, type = float, help = "dropout of the rnn inputs to the Question Input Unit") - parser.add_argument("--encStateDropout", default = 1.0, type = float, help = "dropout of the rnn states of the Question Input Unit") - parser.add_argument("--stemDropout", default = 0.82, type = float, help = "dropout of the Image Input Unit (the stem)") - - parser.add_argument("--qDropout", default = 0.92, type = float, help = "dropout on the question vector") - # parser.add_argument("--qDropoutOut", default = 1.0, type = float, help = "dropout on the question vector the goes to the output unit") - # parser.add_argument("--qDropoutMAC", default = 1.0, type = float, help = "dropout on the question vector the goes to MAC") - - parser.add_argument("--memoryDropout", default = 0.85, type = float, help = "dropout on the recurrent memory") - parser.add_argument("--readDropout", default = 0.85, type = float, help = "dropout of the read unit") - parser.add_argument("--writeDropout", default = 1.0, type = float, help = "dropout of the write unit") - parser.add_argument("--outputDropout", default = 0.85, type = float, help = "dropout of the output unit") - - parser.add_argument("--parametricDropout", action = "store_true", help = "use parametric dropout") # - parser.add_argument("--encVariationalDropout", action = "store_true", help = "use variational dropout in the RNN input unit") - parser.add_argument("--memoryVariationalDropout", action = "store_true", help = "use variational dropout across the MAC network") - - ## nonlinearities - parser.add_argument("--relu", default = "ELU", choices = ["STD", "PRM", "ELU", "LKY", "SELU"], type = str, help = "type of ReLU to use: standard, parametric, ELU, or leaky") - # parser.add_argument("--reluAlpha", default = 0.2, type = float, help = "alpha value for the leaky ReLU") - - parser.add_argument("--mulBias", default = 0.0, type = float, help = "bias to add in multiplications (x + b) * (y + b) for better training") # - - parser.add_argument("--imageLinPool", default = 2, type = int, help = "pooling for image linearizion") - - ################ baseline model parameters - - parser.add_argument("--useBaseline", action = "store_true", help = "run the baseline model") - parser.add_argument("--baselineLSTM", action = "store_true", help = "use 
LSTM in baseline") - parser.add_argument("--baselineCNN", action = "store_true", help = "use CNN in baseline") - parser.add_argument("--baselineAtt", action = "store_true", help = "use stacked attention baseline") - - parser.add_argument("--baselineProjDim", default = 64, type = int, help = "projection dimension for image linearizion") - - parser.add_argument("--baselineAttNumLayers", default = 2, type = int, help = "number of stacked attention layers") - parser.add_argument("--baselineAttType", default = "ADD", type = str, choices = ["MUL", "DIAG", "BL", "ADD"], help = "attention type (multiplicative, additive, etc)") - - ################ image input unit (the "stem") - - parser.add_argument("--stemDim", default = 512, type = int, help = "dimension of stem CNNs") - parser.add_argument("--stemNumLayers", default = 2, type = int, help = "number of stem layers") - parser.add_argument("--stemKernelSize", default = 3, type = int, help = "kernel size for stem (same for all the stem layers)") - parser.add_argument("--stemKernelSizes", default = None, nargs = "*", type = int, help = "kernel sizes for stem (per layer)") - parser.add_argument("--stemStrideSizes", default = None, nargs = "*", type = int, help = "stride sizes for stem (per layer)") - - parser.add_argument("--stemLinear", action = "store_true", help = "use a linear stem (instead of CNNs)") # - # parser.add_argument("--stemProjDim", default = 64, type = int, help = "projection dimension of in image linearization") # - # parser.add_argument("--stemProjPooling", default = 2, type = int, help = "pooling for the image linearization") # - - parser.add_argument("--stemGridRnn", action = "store_true", help = "use grid RNN layer") # - parser.add_argument("--stemGridRnnMod", default = "RNN", type = str, choices = ["RNN", "GRU"], help = "RNN type for grid") # - parser.add_argument("--stemGridAct", default = "NON", type = str, choices = ["NON", "RELU", "TANH"], help = "nonlinearity type for grid") # - - ## location - parser.add_argument("--locationAware", action = "store_true", help = "add positional features to image representation (linear meshgrid by default)") - parser.add_argument("--locationType", default = "L", type = str, choices = ["L", "PE"], help = "L: linear features, PE: Positional Encoding") - parser.add_argument("--locationBias", default = 1.0, type = float, help = "the scale of the positional features") - parser.add_argument("--locationDim", default = 32, type = int, help = "the number of PE dimensions") - - ################ question input unit (the "encoder") - parser.add_argument("--encType", default = "LSTM", choices = ["RNN", "GRU", "LSTM", "MiGRU", "MiLSTM"], help = "encoder RNN type") - parser.add_argument("--encDim", default = 512, type = int, help = "dimension of encoder RNN") - parser.add_argument("--encNumLayers", default = 1, type = int, help = "number of encoder RNN layers") - parser.add_argument("--encBi", action = "store_true", help = "use bi-directional encoder") - # parser.add_argument("--encOutProj", action = "store_true", help = "add projection layer for encoder outputs") - # parser.add_argument("--encOutProjDim", default = 256, type = int, help = "dimension of the encoder projection layer") - # parser.add_argument("--encQProj", action = "store_true", help = "add projection for the question representation") - parser.add_argument("--encProj", action = "store_true", help = "project encoder outputs and question") - parser.add_argument("--encProjQAct", default = "NON", type = str, choices = ["NON", "RELU", "TANH"], 
help = "project question vector with this activation") - - ##### word embeddings - parser.add_argument("--wrdEmbDim", default = 300, type = int, help = "word embeddings dimension") - parser.add_argument("--wrdEmbRandom", action = "store_true", help = "initialize word embeddings to random (normal)") - parser.add_argument("--wrdEmbUniform", action = "store_true", help = "initialize with uniform distribution") - parser.add_argument("--wrdEmbScale", default = 1.0, type = float, help = "word embeddings initialization scale") - parser.add_argument("--wrdEmbFixed", action = "store_true", help = "set word embeddings fixed (don't train)") - parser.add_argument("--wrdEmbUnknown", action = "store_true", help = "set words outside of training set to ") - - parser.add_argument("--ansEmbMod", default = "NON", choices = ["NON", "SHARED", "BOTH"], type = str, help = "BOTH: create word embeddings for answers. SHARED: share them with question embeddings.") # - parser.add_argument("--answerMod", default = "NON", choices = ["NON", "MUL", "DIAG", "BL"], type = str, help = "operation for multiplication with answer embeddings: direct multiplication, scalar weighting, or bilinear") # - - ################ output unit (classifier) - parser.add_argument("--outClassifierDims", default = [512], nargs = "*", type = int, help = "dimensions of the classifier") - parser.add_argument("--outImage", action = "store_true", help = "feed the image to the output unit") - parser.add_argument("--outImageDim", default = 1024, type = int, help = "dimension of linearized image fed to the output unit") - parser.add_argument("--outQuestion", action = "store_true", help = "feed the question to the output unit") - parser.add_argument("--outQuestionMul", action = "store_true", help = "feed the multiplication of question and memory to the output unit") - - ################ network - - parser.add_argument("--netLength", default = 16, type = int, help = "network length (number of cells)") - # parser.add_argument("--netDim", default = 512, type = int) - parser.add_argument("--memDim", default = 512, type = int, help = "dimension of memory state") - parser.add_argument("--ctrlDim", default = 512, type = int, help = "dimension of control state") - parser.add_argument("--attDim", default = 512, type = int, help = "dimension of pre-attention interactions space") - parser.add_argument("--unsharedCells", default = False, type = bool, help = "unshare weights between cells ") - - # initialization - parser.add_argument("--initCtrl", default = "PRM", type = str, choices = ["PRM", "ZERO", "Q"], help = "initialization mod for control") - parser.add_argument("--initMem", default = "PRM", type = str, choices = ["PRM", "ZERO", "Q"], help = "initialization mod for memory") - parser.add_argument("--initKBwithQ", default = "NON", type = str, choices = ["NON", "CNCT", "MUL"], help = "merge question with knowledge base") - parser.add_argument("--addNullWord", action = "store_true", help = "add parametric word in the beginning of the question") - - ################ control unit - # control ablations (use whole question or pre-attention continuous vectors as control) - parser.add_argument("--controlWholeQ", action = "store_true", help = "use whole question vector as control") - parser.add_argument("--controlContinuous", action = "store_true", help = "use continuous representation of control (without attention)") - - # step 0: inputs to control unit (word embeddings or encoder outputs, with optional projection) - parser.add_argument("--controlContextual", action = 
"store_true", help = "use contextual words for attention (otherwise will use word embeddings)") - parser.add_argument("--controlInWordsProj", action = "store_true", help = "apply linear projection over words for attention computation") - parser.add_argument("--controlOutWordsProj", action = "store_true", help = "apply linear projection over words for summary computation") - - parser.add_argument("--controlInputUnshared", action = "store_true", help = "use different question representation for each cell") - parser.add_argument("--controlInputAct", default = "TANH", type = str, choices = ["NON", "RELU", "TANH"], help = "activation for question projection") - - # step 1: merging previous control and whole question - parser.add_argument("--controlFeedPrev", action = "store_true", help = "feed previous control state") - parser.add_argument("--controlFeedPrevAtt", action = "store_true", help = "feed previous control post word attention (otherwise will feed continuous control)") - parser.add_argument("--controlFeedInputs", action = "store_true", help = "feed question representation") - parser.add_argument("--controlContAct", default = "TANH", type = str, choices = ["NON", "RELU", "TANH"], help = "activation on the words interactions") - - # step 2: word attention and optional projection - parser.add_argument("--controlConcatWords", action = "store_true", help = "concatenate words to interaction when computing attention") - parser.add_argument("--controlProj", action = "store_true", help = "apply linear projection on words interactions") - parser.add_argument("--controlProjAct", default = "NON", type = str, choices = ["NON", "RELU", "TANH"], help = "activation for control interactions") - - # parser.add_argument("--controlSelfAtt", default = False, type = bool) - - # parser.add_argument("--controlCoverage", default = False, type = bool) - # parser.add_argument("--controlCoverageBias", default = 1.0, type = float) - - # parser.add_argument("--controlPostRNN", default = False, type = bool) - # parser.add_argument("--controlPostRNNmod", default = "RNN", type = str) # GRU - - # parser.add_argument("--selfAttShareInter", default = False, type = bool) - - # parser.add_argument("--wordControl", default = False, type = bool) - # parser.add_argument("--gradualControl", default = False, type = bool) - - ################ read unit - # step 1: KB-memory interactions - parser.add_argument("--readProjInputs", action = "store_true", help = "project read unit inputs") - parser.add_argument("--readProjShared", action = "store_true", help = "use shared projection for all read unit inputs") - - parser.add_argument("--readMemAttType", default = "MUL", type = str, choices = ["MUL", "DIAG", "BL", "ADD"], help = "attention type for interaction with memory") - parser.add_argument("--readMemConcatKB", action = "store_true", help = "concatenate KB elements to memory interaction") - parser.add_argument("--readMemConcatProj", action = "store_true", help = "concatenate projected values instead or original to memory interaction") - parser.add_argument("--readMemProj", action = "store_true", help = "project interactions with memory") - parser.add_argument("--readMemAct", default = "RELU", type = str, choices = ["NON", "RELU", "TANH"], help = "activation for memory interaction") - - # step 2: interaction with control - parser.add_argument("--readCtrl", action = "store_true", help = "compare KB-memory interactions to control") - parser.add_argument("--readCtrlAttType", default = "MUL", type = str, choices = ["MUL", "DIAG", "BL", 
"ADD"], help = "attention type for interaction with control") - parser.add_argument("--readCtrlConcatKB", action = "store_true", help = "concatenate KB elements to control interaction") - parser.add_argument("--readCtrlConcatProj", action = "store_true", help = "concatenate projected values instead or original to control interaction") - parser.add_argument("--readCtrlConcatInter", action = "store_true", help = "concatenate memory interactions to control interactions") - parser.add_argument("--readCtrlAct", default = "RELU", type = str, choices = ["NON", "RELU", "TANH"], help = "activation for control interaction") - - # step 3: summarize attention over knowledge base - parser.add_argument("--readSmryKBProj", action = "store_true", help = "use knowledge base projections when summing attention up (should be used only if KB is projected.") - - # parser.add_argument("--saAllMultiplicative", default = False, type = bool) - # parser.add_argument("--saSumMultiplicative", default = False, type = bool) - - ################ write unit - # step 1: input to the write unit (only previous memory, or new information, or both) - parser.add_argument("--writeInputs", default = "BOTH", type = str, choices = ["MEM", "INFO", "BOTH", "SUM"], help = "inputs to the write unit") - parser.add_argument("--writeConcatMul", action = "store_true", help = "add multiplicative integration between inputs") - - parser.add_argument("--writeInfoProj", action = "store_true", help = "project retrieved info") - parser.add_argument("--writeInfoAct", default = "NON", type = str, choices = ["NON", "RELU", "TANH"], help = "new info activation") - - # step 2: self attention and following projection - parser.add_argument("--writeSelfAtt", action = "store_true", help = "use self attention") - parser.add_argument("--writeSelfAttMod", default = "NON", type = str, choices = ["NON", "CONT"], help = "control version to compare to") - - parser.add_argument("--writeMergeCtrl", action = "store_true", help = "merge control with memory") - - parser.add_argument("--writeMemProj", action = "store_true", help = "project new memory") - parser.add_argument("--writeMemAct", default = "NON", type = str, choices = ["NON", "RELU", "TANH"], help = "new memory activation") - - # step 3: gate between new memory and previous value - parser.add_argument("--writeGate", action = "store_true", help = "add gate to write unit") - parser.add_argument("--writeGateShared", action = "store_true", help = "use one gate value for all dimensions of the memory state") - parser.add_argument("--writeGateBias", default = 1.0, type = float, help = "bias for the write unit gate (positive to bias for taking new memory)") - - ## modular - # parser.add_argument("--modulesNum", default = 10, type = int) - # parser.add_argument("--controlBoth", default = False, type = bool) - # parser.add_argument("--addZeroModule", default = False, type = bool) - # parser.add_argument("--endModule", default = False, type = bool) - - ## hybrid - # parser.add_argument("--hybrid", default = False, type = bool, help = "hybrid attention cnn model") - # parser.add_argument("--earlyHybrid", default = False, type = bool) - # parser.add_argument("--lateHybrid", default = False, type = bool) - - ## autoencoders - # parser.add_argument("--autoEncMem", action = "store_true", help = "add memory2control auto-encoder loss") - # parser.add_argument("--autoEncMemW", default = 0.0001, type = float, help = "weight for auto-encoder loss") - # parser.add_argument("--autoEncMemInputs", default = "INFO", type = str, 
choices = ["MEM", "INFO"], help = "inputs to auto-encoder") - # parser.add_argument("--autoEncMemAct", default = "NON", type = str, choices = ["NON", "RELU", "TANH"], help = "activation type in the auto-encoder") - # parser.add_argument("--autoEncMemLoss", default = "CONT", type = str, choices = ["CONT", "PROB", "SMRY"], help = "target for the auto-encoder loss") - # parser.add_argument("--autoEncMemCnct", action = "store_true", help = "concat word attentions to auto-encoder features") - - # parser.add_argument("--autoEncCtrl", action = "store_true") - # parser.add_argument("--autoEncCtrlW", default = 0.0001, type = float) - # parser.add_argument("--autoEncCtrlGRU", action = "store_true") - - ## temperature - # parser.add_argument("--temperature", default = 1.0, type = float, help = "temperature for modules softmax") # - # parser.add_argument("--tempParametric", action = "store_true", help = "parametric temperature") # - # parser.add_argument("--tempDynamic", action = "store_true", help = "dynamic temperature") # - # parser.add_argument("--tempAnnealRate", default = 0.000004, type = float, help = "temperature annealing rate") # - # parser.add_argument("--tempMin", default = 0.5, type = float, help = "minimum temperature") # - - ## gumbel - # parser.add_argument("--gumbelSoftmax", action = "store_true", help = "use gumbel for the module softmax (soft for training and hard for testing)") # - # parser.add_argument("--gumbelSoftmaxBoth", action = "store_true", help = "use softmax for training and testing") # - # parser.add_argument("--gumbelArgmaxBoth", action = "store_true", help = "use argmax for training and testing") # - - parser.parse_args(namespace = config) - - - - -###################################### dataset configuration ###################################### - -def configPDF(): - config.dataPath = "{dataBasedir}/PDF_v1/data".format(dataBasedir = config.dataBasedir) - config.datasetFilename = "PDF_{tier}_questions.json" - config.wordVectorsFile = "./PDF_v1/data/glove/glove.6B.{dim}d.txt".format(dim = config.wrdEmbDim) # - - config.imageDims = [14, 14, 1024] - config.programLims = [5, 10, 15, 20] - config.questionLims = [10, 15, 20, 25] - -def configCLEVR(): - config.dataPath = "{dataBasedir}/CLEVR_v1/data".format(dataBasedir = config.dataBasedir) - config.datasetFilename = "CLEVR_{tier}_questions.json" - config.wordVectorsFile = "./CLEVR_v1/data/glove/glove.6B.{dim}d.txt".format(dim = config.wrdEmbDim) # - - config.imageDims = [14, 14, 1024] - config.programLims = [5, 10, 15, 20] - config.questionLims = [10, 15, 20, 25] - -def configNLVR(): - config.dataPath = "{dataBasedir}/nlvr".format(dataBasedir = config.dataBasedir) - config.datasetFilename = "{tier}.json" - config.imagesFilename = "{{tier}}_{featureType}.h5".format(featureType = config.featureType) - config.imgIdsFilename = "{tier}ImgIds.json" - config.wordVectorsFile = "./CLEVR_v1/data/glove/glove.6B.{dim}d.txt".format(dim = config.wrdEmbDim) # - - config.questionLims = [12] - # config.noRebucket = True - - # if config.stemKernelSizes == []: - # if config.featureType.endsWith("128x32"): - # config.stemKernelSizes = [8, 4, 4] - # config.stemStrideSizes = [2, 2, 1] - # config.stemNumLayers = 3 - # if config.featureType.endsWith("512x128"): - # config.stemKernelSizes = [8, 4, 4, 2] - # config.stemStrideSizes = [4, 2, 2, 1] - # config.stemNumLayers = 4 - # config.stemDim = 64 - - if config.featureType == "resnet101_512x128": - config.imageDims = [8, 32, 1024] - else: - stridesOverall = 1 - if stemStrideSizes is not None: - for s 
in config.stemStrideSizes: - stridesOverall *= int(s) - size = config.featureType.split("_")[-1].split("x") - config.imageDims = [int(size[1]) / stridesOverall, int(size[0]) / stridesOverall, 3] - -## dataset specific configs -loadDatasetConfig = { - "CLEVR": configCLEVR, - "NLVR": configNLVR, - "PDF": configPDF -} diff --git a/spaces/Carlosito16/HXM-summarization/README.md b/spaces/Carlosito16/HXM-summarization/README.md deleted file mode 100644 index 7be5c7b512d53eb9019df4ee6a625012fdacc54e..0000000000000000000000000000000000000000 --- a/spaces/Carlosito16/HXM-summarization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HXM Summarization -emoji: 🌍 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/prompt.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/prompt.py deleted file mode 100644 index 03c132acdf26d08deeee119e41a561f430957806..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/prompt.py +++ /dev/null @@ -1,204 +0,0 @@ -from colorama import Fore - -from autogpt.config import Config -from autogpt.config.ai_config import AIConfig -from autogpt.config.config import Config -from autogpt.logs import logger -from autogpt.promptgenerator import PromptGenerator -from autogpt.setup import prompt_user -from autogpt.utils import clean_input - -CFG = Config() - - -def get_prompt() -> str: - """ - This function generates a prompt string that includes various constraints, - commands, resources, and performance evaluations. - - Returns: - str: The generated prompt string. - """ - - # Initialize the Config object - cfg = Config() - - # Initialize the PromptGenerator object - prompt_generator = PromptGenerator() - - # Add constraints to the PromptGenerator object - prompt_generator.add_constraint( - "~4000 word limit for short term memory. Your short term memory is short, so" - " immediately save important information to files." - ) - prompt_generator.add_constraint( - "If you are unsure how you previously did something or want to recall past" - " events, thinking about similar events will help you remember." - ) - prompt_generator.add_constraint("No user assistance") - prompt_generator.add_constraint( - 'Exclusively use the commands listed in double quotes e.g. 
"command name"' - ) - prompt_generator.add_constraint( - "Use subprocesses for commands that will not terminate within a few minutes" - ) - - # Define the command list - commands = [ - ("Google Search", "google", {"input": ""}), - ( - "Browse Website", - "browse_website", - {"url": "", "question": ""}, - ), - ( - "Start GPT Agent", - "start_agent", - {"name": "", "task": "", "prompt": ""}, - ), - ( - "Message GPT Agent", - "message_agent", - {"key": "", "message": ""}, - ), - ("List GPT Agents", "list_agents", {}), - ("Delete GPT Agent", "delete_agent", {"key": ""}), - ( - "Clone Repository", - "clone_repository", - {"repository_url": "", "clone_path": ""}, - ), - ("Write to file", "write_to_file", {"file": "", "text": ""}), - ("Read file", "read_file", {"file": ""}), - ("Append to file", "append_to_file", {"file": "", "text": ""}), - ("Delete file", "delete_file", {"file": ""}), - ("Search Files", "search_files", {"directory": ""}), - ("Analyze Code", "analyze_code", {"code": ""}), - ( - "Get Improved Code", - "improve_code", - {"suggestions": "", "code": ""}, - ), - ( - "Write Tests", - "write_tests", - {"code": "", "focus": ""}, - ), - ("Execute Python File", "execute_python_file", {"file": ""}), - ("Task Complete (Shutdown)", "task_complete", {"reason": ""}), - ("Generate Image", "generate_image", {"prompt": ""}), - ("Send Tweet", "send_tweet", {"text": ""}), - ] - - # Only add the audio to text command if the model is specified - if cfg.huggingface_audio_to_text_model: - commands.append( - ("Convert Audio to text", "read_audio_from_file", {"file": ""}), - ) - - # Only add shell command to the prompt if the AI is allowed to execute it - if cfg.execute_local_commands: - commands.append( - ( - "Execute Shell Command, non-interactive commands only", - "execute_shell", - {"command_line": ""}, - ), - ) - commands.append( - ( - "Execute Shell Command Popen, non-interactive commands only", - "execute_shell_popen", - {"command_line": ""}, - ), - ) - - # Only add the download file command if the AI is allowed to execute it - if cfg.allow_downloads: - commands.append( - ( - "Downloads a file from the internet, and stores it locally", - "download_file", - {"url": "", "file": ""}, - ), - ) - - # Add these command last. - commands.append( - ("Do Nothing", "do_nothing", {}), - ) - commands.append( - ("Task Complete (Shutdown)", "task_complete", {"reason": ""}), - ) - - # Add commands to the PromptGenerator object - for command_label, command_name, args in commands: - prompt_generator.add_command(command_label, command_name, args) - - # Add resources to the PromptGenerator object - prompt_generator.add_resource( - "Internet access for searches and information gathering." - ) - prompt_generator.add_resource("Long Term memory management.") - prompt_generator.add_resource( - "GPT-3.5 powered Agents for delegation of simple tasks." - ) - prompt_generator.add_resource("File output.") - - # Add performance evaluations to the PromptGenerator object - prompt_generator.add_performance_evaluation( - "Continuously review and analyze your actions to ensure you are performing to" - " the best of your abilities." - ) - prompt_generator.add_performance_evaluation( - "Constructively self-criticize your big-picture behavior constantly." - ) - prompt_generator.add_performance_evaluation( - "Reflect on past decisions and strategies to refine your approach." - ) - prompt_generator.add_performance_evaluation( - "Every command has a cost, so be smart and efficient. Aim to complete tasks in" - " the least number of steps." 
- ) - - # Generate the prompt string - return prompt_generator.generate_prompt_string() - - -def construct_prompt() -> str: - """Construct the prompt for the AI to respond to - - Returns: - str: The prompt string - """ - config = AIConfig.load(CFG.ai_settings_file) - if CFG.skip_reprompt and config.ai_name: - logger.typewriter_log("Name :", Fore.GREEN, config.ai_name) - logger.typewriter_log("Role :", Fore.GREEN, config.ai_role) - logger.typewriter_log("Goals:", Fore.GREEN, f"{config.ai_goals}") - elif config.ai_name: - logger.typewriter_log( - "Welcome back! ", - Fore.GREEN, - f"Would you like me to return to being {config.ai_name}?", - speak_text=True, - ) - should_continue = clean_input( - f"""Continue with the last settings? -Name: {config.ai_name} -Role: {config.ai_role} -Goals: {config.ai_goals} -Continue (y/n): """ - ) - if should_continue.lower() == "n": - config = AIConfig() - - if not config.ai_name: - config = prompt_user() - config.save(CFG.ai_settings_file) - - # Get rid of this global: - global ai_name - ai_name = config.ai_name - - return config.construct_full_prompt() diff --git a/spaces/Cyril666/my_abi/modules/model_language.py b/spaces/Cyril666/my_abi/modules/model_language.py deleted file mode 100644 index a643cd5946240548746b22fc9294db63c2dfe7a1..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/my_abi/modules/model_language.py +++ /dev/null @@ -1,67 +0,0 @@ -import logging -import torch.nn as nn -from fastai.vision import * - -from modules.model import _default_tfmer_cfg -from modules.model import Model -from modules.transformer import (PositionalEncoding, - TransformerDecoder, - TransformerDecoderLayer) - - -class BCNLanguage(Model): - def __init__(self, config): - super().__init__(config) - d_model = ifnone(config.model_language_d_model, _default_tfmer_cfg['d_model']) - nhead = ifnone(config.model_language_nhead, _default_tfmer_cfg['nhead']) - d_inner = ifnone(config.model_language_d_inner, _default_tfmer_cfg['d_inner']) - dropout = ifnone(config.model_language_dropout, _default_tfmer_cfg['dropout']) - activation = ifnone(config.model_language_activation, _default_tfmer_cfg['activation']) - num_layers = ifnone(config.model_language_num_layers, 4) - self.d_model = d_model - self.detach = ifnone(config.model_language_detach, True) - self.use_self_attn = ifnone(config.model_language_use_self_attn, False) - self.loss_weight = ifnone(config.model_language_loss_weight, 1.0) - self.max_length = config.dataset_max_length + 1 # additional stop token - self.debug = ifnone(config.global_debug, False) - - self.proj = nn.Linear(self.charset.num_classes, d_model, False) - self.token_encoder = PositionalEncoding(d_model, max_len=self.max_length) - self.pos_encoder = PositionalEncoding(d_model, dropout=0, max_len=self.max_length) - decoder_layer = TransformerDecoderLayer(d_model, nhead, d_inner, dropout, - activation, self_attn=self.use_self_attn, debug=self.debug) - self.model = TransformerDecoder(decoder_layer, num_layers) - - self.cls = nn.Linear(d_model, self.charset.num_classes) - - if config.model_language_checkpoint is not None: - logging.info(f'Read language model from {config.model_language_checkpoint}.') - self.load(config.model_language_checkpoint) - - def forward(self, tokens, lengths): - """ - Args: - tokens: (N, T, C) where T is length, N is batch size and C is classes number - lengths: (N,) - """ - if self.detach: tokens = tokens.detach() - embed = self.proj(tokens) # (N, T, E) - embed = embed.permute(1, 0, 2) # (T, N, E) - embed = 
self.token_encoder(embed) # (T, N, E) - padding_mask = self._get_padding_mask(lengths, self.max_length) - - zeros = embed.new_zeros(*embed.shape) - qeury = self.pos_encoder(zeros) - location_mask = self._get_location_mask(self.max_length, tokens.device) - output = self.model(qeury, embed, - tgt_key_padding_mask=padding_mask, - memory_mask=location_mask, - memory_key_padding_mask=padding_mask) # (T, N, E) - output = output.permute(1, 0, 2) # (N, T, E) - - logits = self.cls(output) # (N, T, C) - pt_lengths = self._get_length(logits) - - res = {'feature': output, 'logits': logits, 'pt_lengths': pt_lengths, - 'loss_weight':self.loss_weight, 'name': 'language'} - return res diff --git a/spaces/DCXGAO/DeepDanbooru_string/README.md b/spaces/DCXGAO/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/DCXGAO/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/__init__.py deleted file mode 100644 index 55991fc38062b9c800805437ee49b0cf42b98103..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/__init__.py +++ /dev/null @@ -1,46 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Charset-Normalizer -~~~~~~~~~~~~~~ -The Real First Universal Charset Detector. -A library that helps you read text from an unknown charset encoding. -Motivated by chardet, This package is trying to resolve the issue by taking a new approach. -All IANA character set names for which the Python core library provides codecs are supported. - -Basic usage: - >>> from charset_normalizer import from_bytes - >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8')) - >>> best_guess = results.best() - >>> str(best_guess) - 'Bсеки човек има право на образование. Oбразованието!' - -Others methods and usages are available - see the full documentation -at . -:copyright: (c) 2021 by Ahmed TAHRI -:license: MIT, see LICENSE for more details. 
-""" -import logging - -from .api import from_bytes, from_fp, from_path, is_binary -from .legacy import detect -from .models import CharsetMatch, CharsetMatches -from .utils import set_logging_handler -from .version import VERSION, __version__ - -__all__ = ( - "from_fp", - "from_path", - "from_bytes", - "is_binary", - "detect", - "CharsetMatch", - "CharsetMatches", - "__version__", - "VERSION", - "set_logging_handler", -) - -# Attach a NullHandler to the top level logger by default -# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library - -logging.getLogger("charset_normalizer").addHandler(logging.NullHandler()) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/__init__.py deleted file mode 100644 index 156cb232a7aa80eee1526c7598f72043de10473f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/encodings/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Empty __init__.py file to signal Python this directory is a package.""" diff --git a/spaces/Daniton/superjourney/README.md b/spaces/Daniton/superjourney/README.md deleted file mode 100644 index c65d13db01de0cff1d3f4e436d47e3adcf85de4a..0000000000000000000000000000000000000000 --- a/spaces/Daniton/superjourney/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Superjourney -emoji: 👁 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Danky/dreamlike-art-dreamlike-diffusion-1.0/app.py b/spaces/Danky/dreamlike-art-dreamlike-diffusion-1.0/app.py deleted file mode 100644 index 26e036ff2e92bfa549428082790db4acf5d94844..0000000000000000000000000000000000000000 --- a/spaces/Danky/dreamlike-art-dreamlike-diffusion-1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/dreamlike-art/dreamlike-diffusion-1.0").launch() \ No newline at end of file diff --git a/spaces/Datasculptor/DescriptionGPT/datasets/README.md b/spaces/Datasculptor/DescriptionGPT/datasets/README.md deleted file mode 100644 index aadb3133e8c9a5345e137c5736485109c1a107db..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/datasets/README.md +++ /dev/null @@ -1,207 +0,0 @@ -# Prepare datasets for Detic - -The basic training of our model uses [LVIS](https://www.lvisdataset.org/) (which uses [COCO](https://cocodataset.org/) images) and [ImageNet-21K](https://www.image-net.org/download.php). -Some models are trained on [Conceptual Caption (CC3M)](https://ai.google.com/research/ConceptualCaptions/). -Optionally, we use [Objects365](https://www.objects365.org/) and [OpenImages (Challenge 2019 version)](https://storage.googleapis.com/openimages/web/challenge2019.html) for cross-dataset evaluation. -Before starting processing, please download the (selected) datasets from the official websites and place or sim-link them under `$Detic_ROOT/datasets/`. - -``` -$Detic_ROOT/datasets/ - metadata/ - lvis/ - coco/ - imagenet/ - cc3m/ - objects365/ - oid/ -``` -`metadata/` is our preprocessed meta-data (included in the repo). See the below [section](#Metadata) for details. -Please follow the following instruction to pre-process individual datasets. 
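Before running the per-dataset steps below, a quick sanity check of this layout can save time. A minimal sketch, assuming the folder names shown above and that it is run from `$Detic_ROOT` (neither is guaranteed for a custom setup):

```python
# Sketch: check that the dataset folders listed above exist or are sym-linked
# under $Detic_ROOT/datasets/ before starting any preprocessing.
import os

expected = ["metadata", "lvis", "coco", "imagenet", "cc3m", "objects365", "oid"]
missing = [name for name in expected if not os.path.isdir(os.path.join("datasets", name))]
if missing:
    print("Missing dataset folders:", ", ".join(missing))
else:
    print("All expected dataset folders are present.")
```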
- -### COCO and LVIS - -First, download COCO and LVIS data place them in the following way: - -``` -lvis/ - lvis_v1_train.json - lvis_v1_val.json -coco/ - train2017/ - val2017/ - annotations/ - captions_train2017.json - instances_train2017.json - instances_val2017.json -``` - -Next, prepare the open-vocabulary LVIS training set using - -``` -python tools/remove_lvis_rare.py --ann datasets/lvis/lvis_v1_train.json -``` - -This will generate `datasets/lvis/lvis_v1_train_norare.json`. - -### ImageNet-21K - -The ImageNet-21K folder should look like: -``` -imagenet/ - ImageNet-21K/ - n01593028.tar - n01593282.tar - ... -``` - -We first unzip the overlapping classes of LVIS (we will directly work with the .tar file for the rest classes) and convert them into LVIS annotation format. - -~~~ -mkdir imagenet/annotations -python tools/unzip_imagenet_lvis.py --dst_path datasets/imagenet/ImageNet-LVIS -python tools/create_imagenetlvis_json.py --imagenet_path datasets/imagenet/ImageNet-LVIS --out_path datasets/imagenet/annotations/imagenet_lvis_image_info.json -~~~ -This creates `datasets/imagenet/annotations/imagenet_lvis_image_info.json`. - -[Optional] To train with all the 21K classes, run - -~~~ -python tools/get_imagenet_21k_full_tar_json.py -python tools/create_lvis_21k.py -~~~ -This creates `datasets/imagenet/annotations/imagenet-21k_image_info_lvis-21k.json` and `datasets/lvis/lvis_v1_train_lvis-21k.json` (combined LVIS and ImageNet-21K classes in `categories`). - -[Optional] To train on combined LVIS and COCO, run - -~~~ -python tools/merge_lvis_coco.py -~~~ -This creates `datasets/lvis/lvis_v1_train+coco_mask.json` - -### Conceptual Caption - - -Download the dataset from [this](https://ai.google.com/research/ConceptualCaptions/download) page and place them as: -``` -cc3m/ - GCC-training.tsv -``` - -Run the following command to download the images and convert the annotations to LVIS format (Note: download images takes long). - -~~~ -python tools/download_cc.py --ann datasets/cc3m/GCC-training.tsv --save_image_path datasets/cc3m/training/ --out_path datasets/cc3m/train_image_info.json -python tools/get_cc_tags.py -~~~ - -This creates `datasets/cc3m/train_image_info_tags.json`. - -### Objects365 -Download Objects365 (v2) from the website. We only need the validation set in this project: -``` -objects365/ - annotations/ - zhiyuan_objv2_val.json - val/ - images/ - v1/ - patch0/ - ... - patch15/ - v2/ - patch16/ - ... - patch49/ - -``` - -The original annotation has typos in the class names, we first fix them for our following use of language embeddings. - -``` -python tools/fix_o365_names.py --ann datasets/objects365/annotations/zhiyuan_objv2_val.json -``` -This creates `datasets/objects365/zhiyuan_objv2_val_fixname.json`. - -To train on Objects365, download the training images and use the command above. We note some images in the training annotation do not exist. -We use the following command to filter the missing images. -~~~ -python tools/fix_0365_path.py -~~~ -This creates `datasets/objects365/zhiyuan_objv2_train_fixname_fixmiss.json`. - -### OpenImages - -We followed the instructions in [UniDet](https://github.com/xingyizhou/UniDet/blob/master/projects/UniDet/unidet_docs/DATASETS.md#openimages) to convert the metadata for OpenImages. - -The converted folder should look like - -``` -oid/ - annotations/ - oid_challenge_2019_train_bbox.json - oid_challenge_2019_val_expanded.json - images/ - 0/ - 1/ - 2/ - ... 
-``` - -### Open-vocabulary COCO - -We first follow [OVR-CNN](https://github.com/alirezazareian/ovr-cnn/blob/master/ipynb/003.ipynb) to create the open-vocabulary COCO split. The converted files should be like - -``` -coco/ - zero-shot/ - instances_train2017_seen_2.json - instances_val2017_all_2.json -``` - -We further pre-process the annotation format for easier evaluation: - -``` -python tools/get_coco_zeroshot_oriorder.py --data_path datasets/coco/zero-shot/instances_train2017_seen_2.json -python tools/get_coco_zeroshot_oriorder.py --data_path datasets/coco/zero-shot/instances_val2017_all_2.json -``` - -Next, we preprocess the COCO caption data: - -``` -python tools/get_cc_tags.py --cc_ann datasets/coco/annotations/captions_train2017.json --out_path datasets/coco/captions_train2017_tags_allcaps.json --allcaps --convert_caption -``` -This creates `datasets/coco/captions_train2017_tags_allcaps.json`. - -### Metadata - -``` -metadata/ - lvis_v1_train_cat_info.json - coco_clip_a+cname.npy - lvis_v1_clip_a+cname.npy - o365_clip_a+cnamefix.npy - oid_clip_a+cname.npy - imagenet_lvis_wnid.txt - Objects365_names_fix.csv -``` - -`lvis_v1_train_cat_info.json` is used by the Federated loss. -This is created by -~~~ -python tools/get_lvis_cat_info.py --ann datasets/lvis/lvis_v1_train.json -~~~ - -`*_clip_a+cname.npy` is the pre-computed CLIP embeddings for each datasets. -They are created by (taking LVIS as an example) -~~~ -python tools/dump_clip_features.py --ann datasets/lvis/lvis_v1_val.json --out_path metadata/lvis_v1_clip_a+cname.npy -~~~ -Note we do not include the 21K class embeddings due to the large file size. -To create it, run -~~~ -python tools/dump_clip_features.py --ann datasets/lvis/lvis_v1_val_lvis-21k.json --out_path datasets/metadata/lvis-21k_clip_a+cname.npy -~~~ - -`imagenet_lvis_wnid.txt` is the list of matched classes between ImageNet-21K and LVIS. - -`Objects365_names_fix.csv` is our manual fix of the Objects365 names. 
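Once dumped, the `*_clip_a+cname.npy` embedding files can be inspected directly. A minimal sketch, assuming the LVIS file sits at `datasets/metadata/lvis_v1_clip_a+cname.npy` and stores a `(num_classes, embedding_dim)` array (both are assumptions, not stated above):

```python
# Sketch: load a pre-computed CLIP class-name embedding file and report its layout.
import numpy as np

emb = np.load("datasets/metadata/lvis_v1_clip_a+cname.npy", allow_pickle=True)
print("type: ", type(emb))
print("shape:", getattr(emb, "shape", None))   # expected: (num_classes, embedding_dim)
print("dtype:", getattr(emb, "dtype", None))
```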
\ No newline at end of file diff --git a/spaces/Detomo/AnimeGAN/README.md b/spaces/Detomo/AnimeGAN/README.md deleted file mode 100644 index 421a5c0c83832258cd4003db4ae89e33e42b5c4b..0000000000000000000000000000000000000000 --- a/spaces/Detomo/AnimeGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AnimeGAN -emoji: 😶‍🌫️ -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DiffusionArtco/Diffusion200Max/app.py b/spaces/DiffusionArtco/Diffusion200Max/app.py deleted file mode 100644 index 61b6938e72cabaf8f9055f09299e632e70be3c03..0000000000000000000000000000000000000000 --- a/spaces/DiffusionArtco/Diffusion200Max/app.py +++ /dev/null @@ -1,84 +0,0 @@ -import gradio as gr -import requests -from PIL import Image -from io import BytesIO -import base64 - -api_url = "https://5cb20b40-572c-426f-9466-995256f9b6eb.id.repl.co/generate_image" - -def generate_image(model="Deliberate", prompt="", seed=0, negative_prompt="", sampler="k_dpmpp_2s_a", steps=50): - data = "?model=" + model + "&prompt=" + prompt + "&seed=" + str(seed) + "&negative_prompt=" + negative_prompt + "&sampler=" + sampler + "&steps=" + str(steps) - response = requests.post(api_url + data, timeout=400) - if response.status_code == 200: - img_base64 = response.json()["url"] - img_bytes = base64.b64decode(img_base64) - img = Image.open(BytesIO(img_bytes)) - return img - else: - return None - -inputs = [ - gr.inputs.Dropdown(['3DKX', 'Abyss OrangeMix', 'AbyssOrangeMix-AfterDark', 'ACertainThing', - 'AIO Pixel Art', 'Analog Diffusion', 'Anime Pencil Diffusion', 'Anygen', - 'Anything Diffusion', 'Anything v3', 'anything_v4_inpainting', - 'App Icon Diffusion', 'Arcane Diffusion', 'Archer Diffusion', - 'Asim Simpsons', 'A to Zovya RPG', 'Balloon Art', 'Borderlands', 'BPModel', - 'BubblyDubbly', 'Char', 'CharHelper', 'Cheese Daddys Landscape Mix', - 'ChilloutMix', 'ChromaV5', 'Classic Animation Diffusion', 'Clazy', - 'Colorful', 'Coloring Book', 'Comic-Diffusion', 'Concept Sheet', - 'Counterfeit', 'Cyberpunk Anime Diffusion', 'CyriousMix', - 'Dan Mumford Style', 'Darkest Diffusion', 'Dark Victorian Diffusion', - 'Deliberate', 'DGSpitzer Art Diffusion', 'Disco Elysium', 'DnD Item', - 'Double Exposure Diffusion', 'Dreamlike Diffusion', - 'dreamlike_diffusion_inpainting', 'Dreamlike Photoreal', - 'DreamLikeSamKuvshinov', 'Dreamshaper', 'DucHaiten', - 'DucHaiten Classic Anime', 'Dungeons and Diffusion', 'Dungeons n Waifus', - 'Eimis Anime Diffusion', 'Elden Ring Diffusion', "Elldreth's Lucid Mix", - 'Elldreths Retro Mix', 'Epic Diffusion', 'Eternos', 'Experience', - 'ExpMix Line', 'FaeTastic', 'Fantasy Card Diffusion', 'FKing SciFi', - 'Funko Diffusion', 'Furry Epoch', 'Future Diffusion', 'Ghibli Diffusion', - 'GorynichMix', 'Grapefruit Hentai', 'Graphic-Art', - 'GTA5 Artwork Diffusion', 'GuoFeng', 'Guohua Diffusion', 'HASDX', - 'Hassanblend', "Healy's Anime Blend", 'Hentai Diffusion', 'HRL', 'iCoMix', - 'Illuminati Diffusion', 'Inkpunk Diffusion', 'Jim Eidomode', - 'JWST Deep Space Diffusion', 'Kenshi', 'Knollingcase', 'Korestyle', - 'kurzgesagt', 'Laolei New Berry Protogen Mix', "Lawlas's yiff mix", - 'Liberty', 'Marvel Diffusion', 'Mega Merge Diffusion', 'Microcasing', - 'Microchars', 'Microcritters', 'Microscopic', 'Microworlds', - 'Midjourney Diffusion', 'Midjourney PaintArt', 'Min Illust Background', - 'ModernArt Diffusion', 
'mo-di-diffusion', 'Moedel', 'MoistMix', - 'Movie Diffusion', 'NeverEnding Dream', 'Nitro Diffusion', 'Openniji', - 'OrbAI', 'Papercutcraft', 'Papercut Diffusion', 'Pastel Mix', - 'Perfect World', 'PFG', 'PIXHELL', 'Poison', 'Pokemon3D', 'PortraitPlus', - 'PPP', 'Pretty 2.5D', 'PRMJ', 'Project Unreal Engine 5', 'ProtoGen', - 'Protogen Anime', 'Protogen Infinity', 'Pulp Vector Art', 'PVC', - 'Rachel Walker Watercolors', 'Rainbowpatch', 'Ranma Diffusion', - 'RCNZ Dumb Monkey', 'RCNZ Gorilla With A Brick', 'RealBiter', - 'Realism Engine', 'Realistic Vision', 'Redshift Diffusion', 'Rev Animated', - 'Robo-Diffusion', 'Rodent Diffusion', 'RPG', 'Samdoesarts Ultmerge', - 'Sci-Fi Diffusion', 'SD-Silicon', 'Seek.art MEGA', 'Smoke Diffusion', - 'Something', 'Sonic Diffusion', 'Spider-Verse Diffusion', - 'Squishmallow Diffusion', 'stable_diffusion', 'stable_diffusion_2.1', - 'stable_diffusion_2_inpainting', 'Supermarionation', 'Sygil-Dev Diffusion', - 'Synthwave', 'SynthwavePunk', 'TrexMix', 'trinart', 'Trinart Characters', - 'Tron Legacy Diffusion', 'T-Shirt Diffusion', 'T-Shirt Print Designs', - 'Uhmami', 'Ultraskin', 'UMI Olympus', 'Unstable Ink Dream', 'URPM', - 'Valorant Diffusion', 'Van Gogh Diffusion', 'Vector Art', 'vectorartz', - 'Vintedois Diffusion', 'VinteProtogenMix', 'Vivid Watercolors', - 'Voxel Art Diffusion', 'waifu_diffusion', 'Wavyfusion', 'Woop-Woop Photo', - 'Xynthii-Diffusion', 'Yiffy', 'Zack3D', 'Zeipher Female Model', - 'Zelda BOTW'], label="Model", default="Deliberate"), - gr.inputs.Textbox(label="Prompt", default=""), - gr.inputs.Number(label="Seed", default=0), - gr.inputs.Textbox(label="Negative Prompt", default=""), - gr.inputs.Dropdown(["k_lms", "k_heun", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a", "DDIM", "k_dpm_fast", "k_dpm_adaptive", "k_dpmpp_2m", "k_dpmpp_2s_a", "k_dpmpp_sde"], label="Sampler", default="k_dpmpp_2s_a"), - gr.inputs.Number(label="Steps", default=50) -] - -outputs = gr.outputs.Image(label="Generated Image", type="pil") - -interface = gr.Interface(generate_image, inputs, outputs, title="Diffusion 200", - description="
Live access to Top 200 Diffusion models
", - examples=[]) - -interface.launch() diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/broden.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/broden.py deleted file mode 100644 index 854e87a46839c837b43cba5347967ce74ae4bf35..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/broden.py +++ /dev/null @@ -1,271 +0,0 @@ -import os, errno, numpy, torch, csv, re, shutil, os, zipfile -from collections import OrderedDict -from torchvision.datasets.folder import default_loader -from torchvision import transforms -from scipy import ndimage -from urllib.request import urlopen - -class BrodenDataset(torch.utils.data.Dataset): - ''' - A multicategory segmentation data set. - - Returns three streams: - (1) The image (3, h, w). - (2) The multicategory segmentation (labelcount, h, w). - (3) A bincount of pixels in the segmentation (labelcount). - - Net dissect also assumes that the dataset object has three properties - with human-readable labels: - - ds.labels = ['red', 'black', 'car', 'tree', 'grid', ...] - ds.categories = ['color', 'part', 'object', 'texture'] - ds.label_category = [0, 0, 2, 2, 3, ...] # The category for each label - ''' - def __init__(self, directory='dataset/broden', resolution=384, - split='train', categories=None, - transform=None, transform_segment=None, - download=False, size=None, include_bincount=True, - broden_version=1, max_segment_depth=6): - assert resolution in [224, 227, 384] - if download: - ensure_broden_downloaded(directory, resolution, broden_version) - self.directory = directory - self.resolution = resolution - self.resdir = os.path.join(directory, 'broden%d_%d' % - (broden_version, resolution)) - self.loader = default_loader - self.transform = transform - self.transform_segment = transform_segment - self.include_bincount = include_bincount - # The maximum number of multilabel layers that coexist at an image. - self.max_segment_depth = max_segment_depth - with open(os.path.join(self.resdir, 'category.csv'), - encoding='utf-8') as f: - self.category_info = OrderedDict() - for row in csv.DictReader(f): - self.category_info[row['name']] = row - if categories is not None: - # Filter out unused categories - categories = set([c for c in categories if c in self.category_info]) - for cat in list(self.category_info.keys()): - if cat not in categories: - del self.category_info[cat] - categories = list(self.category_info.keys()) - self.categories = categories - - # Filter out unneeded images. - with open(os.path.join(self.resdir, 'index.csv'), - encoding='utf-8') as f: - all_images = [decode_index_dict(r) for r in csv.DictReader(f)] - self.image = [row for row in all_images - if index_has_any_data(row, categories) and row['split'] == split] - if size is not None: - self.image = self.image[:size] - with open(os.path.join(self.resdir, 'label.csv'), - encoding='utf-8') as f: - self.label_info = build_dense_label_array([ - decode_label_dict(r) for r in csv.DictReader(f)]) - self.labels = [l['name'] for l in self.label_info] - # Build dense remapping arrays for labels, so that you can - # get dense ranges of labels for each category. 
- self.category_map = {} - self.category_unmap = {} - self.category_label = {} - for cat in self.categories: - with open(os.path.join(self.resdir, 'c_%s.csv' % cat), - encoding='utf-8') as f: - c_data = [decode_label_dict(r) for r in csv.DictReader(f)] - self.category_unmap[cat], self.category_map[cat] = ( - build_numpy_category_map(c_data)) - self.category_label[cat] = build_dense_label_array( - c_data, key='code') - self.num_labels = len(self.labels) - # Primary categories for each label is the category in which it - # appears with the maximum coverage. - self.label_category = numpy.zeros(self.num_labels, dtype=int) - for i in range(self.num_labels): - maxcoverage, self.label_category[i] = max( - (self.category_label[cat][self.category_map[cat][i]]['coverage'] - if i < len(self.category_map[cat]) - and self.category_map[cat][i] else 0, ic) - for ic, cat in enumerate(categories)) - - def __len__(self): - return len(self.image) - - def __getitem__(self, idx): - record = self.image[idx] - # example record: { - # 'image': 'opensurfaces/25605.jpg', 'split': 'train', - # 'ih': 384, 'iw': 384, 'sh': 192, 'sw': 192, - # 'color': ['opensurfaces/25605_color.png'], - # 'object': [], 'part': [], - # 'material': ['opensurfaces/25605_material.png'], - # 'scene': [], 'texture': []} - image = self.loader(os.path.join(self.resdir, 'images', - record['image'])) - segment = numpy.zeros(shape=(self.max_segment_depth, - record['sh'], record['sw']), dtype=int) - if self.include_bincount: - bincount = numpy.zeros(shape=(self.num_labels,), dtype=int) - depth = 0 - for cat in self.categories: - for layer in record[cat]: - if isinstance(layer, int): - segment[depth,:,:] = layer - if self.include_bincount: - bincount[layer] += segment.shape[1] * segment.shape[2] - else: - png = numpy.asarray(self.loader(os.path.join( - self.resdir, 'images', layer))) - segment[depth,:,:] = png[:,:,0] + png[:,:,1] * 256 - if self.include_bincount: - bincount += numpy.bincount(segment[depth,:,:].flatten(), - minlength=self.num_labels) - depth += 1 - if self.transform: - image = self.transform(image) - if self.transform_segment: - segment = self.transform_segment(segment) - if self.include_bincount: - bincount[0] = 0 - return (image, segment, bincount) - else: - return (image, segment) - -def build_dense_label_array(label_data, key='number', allow_none=False): - ''' - Input: set of rows with 'number' fields (or another field name key). - Output: array such that a[number] = the row with the given number. - ''' - result = [None] * (max([d[key] for d in label_data]) + 1) - for d in label_data: - result[d[key]] = d - # Fill in none - if not allow_none: - example = label_data[0] - def make_empty(k): - return dict((c, k if c is key else type(v)()) - for c, v in example.items()) - for i, d in enumerate(result): - if d is None: - result[i] = dict(make_empty(i)) - return result - -def build_numpy_category_map(map_data, key1='code', key2='number'): - ''' - Input: set of rows with 'number' fields (or another field name key). - Output: array such that a[number] = the row with the given number. 
- ''' - results = list(numpy.zeros((max([d[key] for d in map_data]) + 1), - dtype=numpy.int16) for key in (key1, key2)) - for d in map_data: - results[0][d[key1]] = d[key2] - results[1][d[key2]] = d[key1] - return results - -def index_has_any_data(row, categories): - for c in categories: - for data in row[c]: - if data: return True - return False - -def decode_label_dict(row): - result = {} - for key, val in row.items(): - if key == 'category': - result[key] = dict((c, int(n)) - for c, n in [re.match('^([^(]*)\(([^)]*)\)$', f).groups() - for f in val.split(';')]) - elif key == 'name': - result[key] = val - elif key == 'syns': - result[key] = val.split(';') - elif re.match('^\d+$', val): - result[key] = int(val) - elif re.match('^\d+\.\d*$', val): - result[key] = float(val) - else: - result[key] = val - return result - -def decode_index_dict(row): - result = {} - for key, val in row.items(): - if key in ['image', 'split']: - result[key] = val - elif key in ['sw', 'sh', 'iw', 'ih']: - result[key] = int(val) - else: - item = [s for s in val.split(';') if s] - for i, v in enumerate(item): - if re.match('^\d+$', v): - item[i] = int(v) - result[key] = item - return result - -class ScaleSegmentation: - ''' - Utility for scaling segmentations, using nearest-neighbor zooming. - ''' - def __init__(self, target_height, target_width): - self.target_height = target_height - self.target_width = target_width - def __call__(self, seg): - ratio = (1, self.target_height / float(seg.shape[1]), - self.target_width / float(seg.shape[2])) - return ndimage.zoom(seg, ratio, order=0) - -def scatter_batch(seg, num_labels, omit_zero=True, dtype=torch.uint8): - ''' - Utility for scattering semgentations into a one-hot representation. - ''' - result = torch.zeros(*((seg.shape[0], num_labels,) + seg.shape[2:]), - dtype=dtype, device=seg.device) - result.scatter_(1, seg, 1) - if omit_zero: - result[:,0] = 0 - return result - -def ensure_broden_downloaded(directory, resolution, broden_version=1): - assert resolution in [224, 227, 384] - baseurl = 'http://netdissect.csail.mit.edu/data/' - dirname = 'broden%d_%d' % (broden_version, resolution) - if os.path.isfile(os.path.join(directory, dirname, 'index.csv')): - return # Already downloaded - zipfilename = 'broden1_%d.zip' % resolution - download_dir = os.path.join(directory, 'download') - os.makedirs(download_dir, exist_ok=True) - full_zipfilename = os.path.join(download_dir, zipfilename) - if not os.path.exists(full_zipfilename): - url = '%s/%s' % (baseurl, zipfilename) - print('Downloading %s' % url) - data = urlopen(url) - with open(full_zipfilename, 'wb') as f: - f.write(data.read()) - print('Unzipping %s' % zipfilename) - with zipfile.ZipFile(full_zipfilename, 'r') as zip_ref: - zip_ref.extractall(directory) - assert os.path.isfile(os.path.join(directory, dirname, 'index.csv')) - -def test_broden_dataset(): - ''' - Testing code. 
- ''' - bds = BrodenDataset('dataset/broden', resolution=384, - transform=transforms.Compose([ - transforms.Resize(224), - transforms.ToTensor()]), - transform_segment=transforms.Compose([ - ScaleSegmentation(224, 224) - ]), - include_bincount=True) - loader = torch.utils.data.DataLoader(bds, batch_size=100, num_workers=24) - for i in range(1,20): - print(bds.label[i]['name'], - list(bds.category.keys())[bds.primary_category[i]]) - for i, (im, seg, bc) in enumerate(loader): - print(i, im.shape, seg.shape, seg.max(), bc.shape) - -if __name__ == '__main__': - test_broden_dataset() diff --git a/spaces/DonDoesStuff/sd_xl_base_0.9/app.py b/spaces/DonDoesStuff/sd_xl_base_0.9/app.py deleted file mode 100644 index e55054cb23d69f305129e92fe29424217315e8c4..0000000000000000000000000000000000000000 --- a/spaces/DonDoesStuff/sd_xl_base_0.9/app.py +++ /dev/null @@ -1,157 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import random -import string -import time -from queue import Queue -from threading import Thread -import emoji - - -text_gen=gr.Interface.load("spaces/phenomenon1981/MagicPrompt-Stable-Diffusion") -def get_prompts(prompt_text): - if prompt_text: - return text_gen("photo, " + prompt_text) - else: - return text_gen("") -proc1=gr.Interface.load("models/stabilityai/stable-diffusion-xl-base-0.9") - -def restart_script_periodically(): - while True: - random_time = random.randint(540, 600) - time.sleep(random_time) - os.execl(sys.executable, sys.executable, *sys.argv) - - -restart_thread = Thread(target=restart_script_periodically, daemon=True) -restart_thread.start() - - -queue = Queue() -queue_threshold = 100 - -def add_random_noise(prompt, noise_level=0.00): - if noise_level == 0: - noise_level = 0.00 - percentage_noise = noise_level * 5 - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - prompt_list = list(prompt) - noise_chars = list(string.ascii_letters + string.punctuation + ' ' + string.digits) - noise_chars.extend(['😍', '💩', '😂', '🤔', '😊', '🤗', '😭', '🙄', '😷', '🤯', '🤫', '🥴', '😴', '🤩', '🥳', '😔', '😩', '🤪', '😇', '🤢', '😈', '👹', '👻', '🤖', '👽', '💀', '🎃', '🎅', '🎄', '🎁', '🎂', '🎉', '🎈', '🎊', '🎮', '❤️', '💔', '💕', '💖', '💗', '🐶', '🐱', '🐭', '🐹', '🦊', '🐻', '🐨', '🐯', '🦁', '🐘', '🔥', '🌧️', '🌞', '🌈', '💥', '🌴', '🌊', '🌺', '🌻', '🌸', '🎨', '🌅', '🌌', '☁️', '⛈️', '❄️', '☀️', '🌤️', '⛅️', '🌥️', '🌦️', '🌧️', '🌩️', '🌨️', '🌫️', '☔️', '🌬️', '💨', '🌪️', '🌈']) - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - - -def send_it1(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - while queue.qsize() >= queue_threshold: - time.sleep(2) - queue.put(prompt_with_noise) - output1 = proc1(prompt_with_noise) - return output1 - -def send_it2(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - while queue.qsize() >= queue_threshold: - time.sleep(2) - queue.put(prompt_with_noise) - output2 = proc1(prompt_with_noise) - return output2 - -#def send_it3(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #while queue.qsize() >= queue_threshold: - #time.sleep(2) - #queue.put(prompt_with_noise) - #output3 = proc1(prompt_with_noise) - #return output3 - -#def send_it4(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #while queue.qsize() >= queue_threshold: - #time.sleep(2) - 
#queue.put(prompt_with_noise) - #output4 = proc1(prompt_with_noise) - #return output4 - - - -with gr.Blocks(css='style.css') as demo: - gr.HTML( - """ -
-        Dreamlike Photoreal 2.0
-        Noise Level: Controls how much randomness is added to the input before it is sent to the model. Higher noise level produces more diverse outputs, while lower noise level produces similar outputs.
-        ❤️ Press the Like Button if you enjoy my space! ❤️
- """ - ) - with gr.Column(elem_id="col-container"): - with gr.Row(variant="compact"): - input_text = gr.Textbox( - label="Short Prompt", - show_label=False, - max_lines=2, - placeholder="Enter a basic idea and click 'Magic Prompt'. Got no ideas? No problem, Simply just hit the magic button!", - ).style( - container=False, - ) - see_prompts = gr.Button("✨ Magic Prompt ✨").style(full_width=False) - - - with gr.Row(variant="compact"): - prompt = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=2, - placeholder="Full Prompt", - ).style( - container=False, - ) - run = gr.Button("Generate Images").style(full_width=False) - - with gr.Row(): - with gr.Row(): - noise_level = gr.Slider(minimum=0.0, maximum=3, step=0.1, label="Noise Level") - with gr.Row(): - with gr.Row(): - output1=gr.Image(label="Dreamlike-photoreal-2.0",show_label=False) - output2=gr.Image(label="Dreamlike-photoreal-2.0",show_label=False) - - #with gr.Row(): - #output1=gr.Image() - - see_prompts.click(get_prompts, inputs=[input_text], outputs=[prompt], queue=False) - run.click(send_it1, inputs=[prompt, noise_level], outputs=[output1]) - run.click(send_it2, inputs=[prompt, noise_level], outputs=[output2]) - - - - with gr.Row(): - gr.HTML( - """ - -
-        Unleash your creative side and generate mesmerizing images with just a few clicks! Enter a spark of inspiration in the "Basic Idea" text box and click the "Magic Prompt" button to elevate it to a polished masterpiece. Make any final tweaks in the "Full Prompt" box and hit the "Generate Images" button to watch your vision come to life. Experiment with the "Noise Level" for a diverse range of outputs, from similar to wildly unique. Let the fun begin!
- """ -) - - demo.launch(enable_queue=True, inline=True) - block.queue(concurrency_count=100) \ No newline at end of file diff --git a/spaces/Duskfallcrew/anything-v3.0/README.md b/spaces/Duskfallcrew/anything-v3.0/README.md deleted file mode 100644 index 46380302dccd02d43c55f7a3b53aa3fffe7e2847..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/anything-v3.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anything V3.0 -emoji: 🏃 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -duplicated_from: montagekoko/anything-v3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ECCV2022/bytetrack/tutorials/fairmot/byte_tracker.py b/spaces/ECCV2022/bytetrack/tutorials/fairmot/byte_tracker.py deleted file mode 100644 index 7bb384dcb7e09c3f17f87860ee5d30f48e18ba9d..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/tutorials/fairmot/byte_tracker.py +++ /dev/null @@ -1,403 +0,0 @@ -import numpy as np -from collections import deque -import itertools -import os -import os.path as osp -import time -import torch -import cv2 -import torch.nn.functional as F - -from models.model import create_model, load_model -from models.decode import mot_decode -from tracking_utils.utils import * -from tracking_utils.log import logger -from tracking_utils.kalman_filter import KalmanFilter -from models import * -from tracker import matching -from .basetrack import BaseTrack, TrackState -from utils.post_process import ctdet_post_process -from utils.image import get_affine_transform -from models.utils import _tranpose_and_gather_feat - -class STrack(BaseTrack): - shared_kalman = KalmanFilter() - def __init__(self, tlwh, score): - - # wait activate - self._tlwh = np.asarray(tlwh, dtype=np.float) - self.kalman_filter = None - self.mean, self.covariance = None, None - self.is_activated = False - - self.score = score - self.tracklet_len = 0 - - def predict(self): - mean_state = self.mean.copy() - if self.state != TrackState.Tracked: - mean_state[7] = 0 - self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance) - - @staticmethod - def multi_predict(stracks): - if len(stracks) > 0: - multi_mean = np.asarray([st.mean.copy() for st in stracks]) - multi_covariance = np.asarray([st.covariance for st in stracks]) - for i, st in enumerate(stracks): - if st.state != TrackState.Tracked: - multi_mean[i][7] = 0 - multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance) - for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)): - stracks[i].mean = mean - stracks[i].covariance = cov - - def activate(self, kalman_filter, frame_id): - """Start a new tracklet""" - self.kalman_filter = kalman_filter - self.track_id = self.next_id() - self.mean, self.covariance = self.kalman_filter.initiate(self.tlwh_to_xyah(self._tlwh)) - - self.tracklet_len = 0 - self.state = TrackState.Tracked - if frame_id == 1: - self.is_activated = True - #self.is_activated = True - self.frame_id = frame_id - self.start_frame = frame_id - - def re_activate(self, new_track, frame_id, new_id=False): - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh) - ) - - self.tracklet_len = 0 - self.state = TrackState.Tracked - self.is_activated = True - self.frame_id = frame_id - if new_id: - self.track_id = self.next_id() - self.score = new_track.score - - def update(self, 
new_track, frame_id): - """ - Update a matched track - :type new_track: STrack - :type frame_id: int - :type update_feature: bool - :return: - """ - self.frame_id = frame_id - self.tracklet_len += 1 - - new_tlwh = new_track.tlwh - self.mean, self.covariance = self.kalman_filter.update( - self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)) - self.state = TrackState.Tracked - self.is_activated = True - - self.score = new_track.score - - @property - # @jit(nopython=True) - def tlwh(self): - """Get current position in bounding box format `(top left x, top left y, - width, height)`. - """ - if self.mean is None: - return self._tlwh.copy() - ret = self.mean[:4].copy() - ret[2] *= ret[3] - ret[:2] -= ret[2:] / 2 - return ret - - @property - # @jit(nopython=True) - def tlbr(self): - """Convert bounding box to format `(min x, min y, max x, max y)`, i.e., - `(top left, bottom right)`. - """ - ret = self.tlwh.copy() - ret[2:] += ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_xyah(tlwh): - """Convert bounding box to format `(center x, center y, aspect ratio, - height)`, where the aspect ratio is `width / height`. - """ - ret = np.asarray(tlwh).copy() - ret[:2] += ret[2:] / 2 - ret[2] /= ret[3] - return ret - - def to_xyah(self): - return self.tlwh_to_xyah(self.tlwh) - - @staticmethod - # @jit(nopython=True) - def tlbr_to_tlwh(tlbr): - ret = np.asarray(tlbr).copy() - ret[2:] -= ret[:2] - return ret - - @staticmethod - # @jit(nopython=True) - def tlwh_to_tlbr(tlwh): - ret = np.asarray(tlwh).copy() - ret[2:] += ret[:2] - return ret - - def __repr__(self): - return 'OT_{}_({}-{})'.format(self.track_id, self.start_frame, self.end_frame) - - -class BYTETracker(object): - def __init__(self, opt, frame_rate=30): - self.opt = opt - if opt.gpus[0] >= 0: - opt.device = torch.device('cuda') - else: - opt.device = torch.device('cpu') - print('Creating model...') - self.model = create_model(opt.arch, opt.heads, opt.head_conv) - self.model = load_model(self.model, opt.load_model) - self.model = self.model.to(opt.device) - self.model.eval() - - self.tracked_stracks = [] # type: list[STrack] - self.lost_stracks = [] # type: list[STrack] - self.removed_stracks = [] # type: list[STrack] - - self.frame_id = 0 - #self.det_thresh = opt.conf_thres - self.det_thresh = opt.conf_thres + 0.1 - self.buffer_size = int(frame_rate / 30.0 * opt.track_buffer) - self.max_time_lost = self.buffer_size - self.max_per_image = opt.K - self.mean = np.array(opt.mean, dtype=np.float32).reshape(1, 1, 3) - self.std = np.array(opt.std, dtype=np.float32).reshape(1, 1, 3) - - self.kalman_filter = KalmanFilter() - - def post_process(self, dets, meta): - dets = dets.detach().cpu().numpy() - dets = dets.reshape(1, -1, dets.shape[2]) - dets = ctdet_post_process( - dets.copy(), [meta['c']], [meta['s']], - meta['out_height'], meta['out_width'], self.opt.num_classes) - for j in range(1, self.opt.num_classes + 1): - dets[0][j] = np.array(dets[0][j], dtype=np.float32).reshape(-1, 5) - return dets[0] - - def merge_outputs(self, detections): - results = {} - for j in range(1, self.opt.num_classes + 1): - results[j] = np.concatenate( - [detection[j] for detection in detections], axis=0).astype(np.float32) - - scores = np.hstack( - [results[j][:, 4] for j in range(1, self.opt.num_classes + 1)]) - if len(scores) > self.max_per_image: - kth = len(scores) - self.max_per_image - thresh = np.partition(scores, kth)[kth] - for j in range(1, self.opt.num_classes + 1): - keep_inds = (results[j][:, 4] >= thresh) - results[j] = 
results[j][keep_inds] - return results - - def update(self, im_blob, img0): - self.frame_id += 1 - activated_starcks = [] - refind_stracks = [] - lost_stracks = [] - removed_stracks = [] - - width = img0.shape[1] - height = img0.shape[0] - inp_height = im_blob.shape[2] - inp_width = im_blob.shape[3] - c = np.array([width / 2., height / 2.], dtype=np.float32) - s = max(float(inp_width) / float(inp_height) * height, width) * 1.0 - meta = {'c': c, 's': s, - 'out_height': inp_height // self.opt.down_ratio, - 'out_width': inp_width // self.opt.down_ratio} - - ''' Step 1: Network forward, get detections & embeddings''' - with torch.no_grad(): - output = self.model(im_blob)[-1] - hm = output['hm'].sigmoid_() - wh = output['wh'] - - reg = output['reg'] if self.opt.reg_offset else None - dets, inds = mot_decode(hm, wh, reg=reg, ltrb=self.opt.ltrb, K=self.opt.K) - - dets = self.post_process(dets, meta) - dets = self.merge_outputs([dets])[1] - - remain_inds = dets[:, 4] > self.opt.conf_thres - inds_low = dets[:, 4] > 0.2 - inds_high = dets[:, 4] < self.opt.conf_thres - inds_second = np.logical_and(inds_low, inds_high) - dets_second = dets[inds_second] - dets = dets[remain_inds] - - if len(dets) > 0: - '''Detections''' - detections = [STrack(STrack.tlbr_to_tlwh(tlbrs[:4]), tlbrs[4]) for - tlbrs in dets[:, :5]] - else: - detections = [] - - ''' Add newly detected tracklets to tracked_stracks''' - unconfirmed = [] - tracked_stracks = [] # type: list[STrack] - for track in self.tracked_stracks: - if not track.is_activated: - unconfirmed.append(track) - else: - tracked_stracks.append(track) - - ''' Step 2: First association, with IOU''' - strack_pool = joint_stracks(tracked_stracks, self.lost_stracks) - # Predict the current location with KF - STrack.multi_predict(strack_pool) - dists = matching.iou_distance(strack_pool, detections) - matches, u_track, u_detection = matching.linear_assignment(dists, thresh=self.opt.match_thres) - - for itracked, idet in matches: - track = strack_pool[itracked] - det = detections[idet] - if track.state == TrackState.Tracked: - track.update(detections[idet], self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - # association the untrack to the low score detections - if len(dets_second) > 0: - '''Detections''' - detections_second = [STrack(STrack.tlbr_to_tlwh(tlbrs[:4]), tlbrs[4]) for - tlbrs in dets_second[:, :5]] - else: - detections_second = [] - r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked] - dists = matching.iou_distance(r_tracked_stracks, detections_second) - matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.4) - for itracked, idet in matches: - track = r_tracked_stracks[itracked] - det = detections_second[idet] - if track.state == TrackState.Tracked: - track.update(det, self.frame_id) - activated_starcks.append(track) - else: - track.re_activate(det, self.frame_id, new_id=False) - refind_stracks.append(track) - - for it in u_track: - track = r_tracked_stracks[it] - if not track.state == TrackState.Lost: - track.mark_lost() - lost_stracks.append(track) - - '''Deal with unconfirmed tracks, usually tracks with only one beginning frame''' - detections = [detections[i] for i in u_detection] - dists = matching.iou_distance(unconfirmed, detections) - matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7) - for itracked, idet in matches: - 
unconfirmed[itracked].update(detections[idet], self.frame_id) - activated_starcks.append(unconfirmed[itracked]) - for it in u_unconfirmed: - track = unconfirmed[it] - track.mark_removed() - removed_stracks.append(track) - - """ Step 4: Init new stracks""" - for inew in u_detection: - track = detections[inew] - if track.score < self.det_thresh: - continue - track.activate(self.kalman_filter, self.frame_id) - activated_starcks.append(track) - """ Step 5: Update state""" - for track in self.lost_stracks: - if self.frame_id - track.end_frame > self.max_time_lost: - track.mark_removed() - removed_stracks.append(track) - - # print('Ramained match {} s'.format(t4-t3)) - - self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked] - self.tracked_stracks = joint_stracks(self.tracked_stracks, activated_starcks) - self.tracked_stracks = joint_stracks(self.tracked_stracks, refind_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.tracked_stracks) - self.lost_stracks.extend(lost_stracks) - self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks) - self.removed_stracks.extend(removed_stracks) - self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks) - #self.tracked_stracks = remove_fp_stracks(self.tracked_stracks) - # get scores of lost tracks - output_stracks = [track for track in self.tracked_stracks if track.is_activated] - - logger.debug('===========Frame {}=========='.format(self.frame_id)) - logger.debug('Activated: {}'.format([track.track_id for track in activated_starcks])) - logger.debug('Refind: {}'.format([track.track_id for track in refind_stracks])) - logger.debug('Lost: {}'.format([track.track_id for track in lost_stracks])) - logger.debug('Removed: {}'.format([track.track_id for track in removed_stracks])) - - return output_stracks - - -def joint_stracks(tlista, tlistb): - exists = {} - res = [] - for t in tlista: - exists[t.track_id] = 1 - res.append(t) - for t in tlistb: - tid = t.track_id - if not exists.get(tid, 0): - exists[tid] = 1 - res.append(t) - return res - - -def sub_stracks(tlista, tlistb): - stracks = {} - for t in tlista: - stracks[t.track_id] = t - for t in tlistb: - tid = t.track_id - if stracks.get(tid, 0): - del stracks[tid] - return list(stracks.values()) - - -def remove_duplicate_stracks(stracksa, stracksb): - pdist = matching.iou_distance(stracksa, stracksb) - pairs = np.where(pdist < 0.15) - dupa, dupb = list(), list() - for p, q in zip(*pairs): - timep = stracksa[p].frame_id - stracksa[p].start_frame - timeq = stracksb[q].frame_id - stracksb[q].start_frame - if timep > timeq: - dupb.append(q) - else: - dupa.append(p) - resa = [t for i, t in enumerate(stracksa) if not i in dupa] - resb = [t for i, t in enumerate(stracksb) if not i in dupb] - return resa, resb - - -def remove_fp_stracks(stracksa, n_frame=10): - remain = [] - for t in stracksa: - score_5 = t.score_list[-n_frame:] - score_5 = np.array(score_5, dtype=np.float32) - index = score_5 < 0.45 - num = np.sum(index) - if num < n_frame: - remain.append(t) - return remain diff --git a/spaces/EveryPizza/stabilityai-stable-diffusion-2/app.py b/spaces/EveryPizza/stabilityai-stable-diffusion-2/app.py deleted file mode 100644 index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000 --- a/spaces/EveryPizza/stabilityai-stable-diffusion-2/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2").launch() \ No 
newline at end of file diff --git a/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/__init__.py b/spaces/FedeFT/Head_Pose_Estimation_and_LAEO_computation/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Felladrin/MiniSearch/server.ts b/spaces/Felladrin/MiniSearch/server.ts deleted file mode 100644 index 0aa8174b66440e388adb2db35188d0c4bb30a01c..0000000000000000000000000000000000000000 --- a/spaces/Felladrin/MiniSearch/server.ts +++ /dev/null @@ -1,67 +0,0 @@ -import fetch from "node-fetch"; -import express from "express"; - -async function search(query: string, limit?: number) { - try { - const url = new URL("http://127.0.0.1:8080/search"); - url.search = new URLSearchParams({ - q: query, - language: "auto", - safesearch: "0", - format: "json", - }).toString(); - const response = await fetch(url); - let { results } = (await response.json()) as { - results: { url: string; title: string; content: string }[]; - }; - const searchResults: [title: string, content: string, url: string][] = []; - if (results) { - if (limit && limit > 0) { - results = results.slice(0, limit); - } - - for (const result of results) { - const stripHtmlTags = (str: string) => str.replace(/<[^>]*>?/gm, ""); - - const content = stripHtmlTags(result.content).trim(); - - if (content === "") continue; - - const title = stripHtmlTags(result.title); - const url = result.url as string; - - searchResults.push([title, content, url]); - } - } - return searchResults; - } catch (e) { - console.error(e); - return []; - } -} - -export const app = express(); - -app.use(express.static("dist")); - -app.get("/search", async (request, response) => { - const query = request.query.q as string; - - if (!query) { - response.status(400).send("Missing the query parameter."); - return; - } - - const limitParam = request.query.limit as string | undefined; - const limit = limitParam && Number(limitParam) > 0 ? Number(limitParam) : 6; - const searchResults = await search(query, limit); - - response.send(searchResults); -}); - -if (process.env.NODE_ENV !== "development") { - const port = process.env.PORT ?? 5173; - const url = `http://localhost:${port}/`; - app.listen(port); - console.log(`Server started! ${url}`); -} diff --git a/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/python/dqn/policies.py b/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/python/dqn/policies.py deleted file mode 100644 index 4ecf39a5fc04b24ad1b809232b186728366987b6..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/crazy_functions/test_project/python/dqn/policies.py +++ /dev/null @@ -1,237 +0,0 @@ -from typing import Any, Dict, List, Optional, Type - -import gym -import torch as th -from torch import nn - -from stable_baselines3.common.policies import BasePolicy, register_policy -from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp -from stable_baselines3.common.type_aliases import Schedule - - -class QNetwork(BasePolicy): - """ - Action-Value (Q-Value) network for DQN - - :param observation_space: Observation space - :param action_space: Action space - :param net_arch: The specification of the policy and value networks. 
- :param activation_fn: Activation function - :param normalize_images: Whether to normalize images or not, - dividing by 255.0 (True by default) - """ - - def __init__( - self, - observation_space: gym.spaces.Space, - action_space: gym.spaces.Space, - features_extractor: nn.Module, - features_dim: int, - net_arch: Optional[List[int]] = None, - activation_fn: Type[nn.Module] = nn.ReLU, - normalize_images: bool = True, - ): - super(QNetwork, self).__init__( - observation_space, - action_space, - features_extractor=features_extractor, - normalize_images=normalize_images, - ) - - if net_arch is None: - net_arch = [64, 64] - - self.net_arch = net_arch - self.activation_fn = activation_fn - self.features_extractor = features_extractor - self.features_dim = features_dim - self.normalize_images = normalize_images - action_dim = self.action_space.n # number of actions - q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn) - self.q_net = nn.Sequential(*q_net) - - def forward(self, obs: th.Tensor) -> th.Tensor: - """ - Predict the q-values. - - :param obs: Observation - :return: The estimated Q-Value for each action. - """ - return self.q_net(self.extract_features(obs)) - - def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor: - q_values = self.forward(observation) - # Greedy action - action = q_values.argmax(dim=1).reshape(-1) - return action - - def _get_constructor_parameters(self) -> Dict[str, Any]: - data = super()._get_constructor_parameters() - - data.update( - dict( - net_arch=self.net_arch, - features_dim=self.features_dim, - activation_fn=self.activation_fn, - features_extractor=self.features_extractor, - ) - ) - return data - - -class DQNPolicy(BasePolicy): - """ - Policy class with Q-Value Net and target net for DQN - - :param observation_space: Observation space - :param action_space: Action space - :param lr_schedule: Learning rate schedule (could be constant) - :param net_arch: The specification of the policy and value networks. - :param activation_fn: Activation function - :param features_extractor_class: Features extractor to use. - :param features_extractor_kwargs: Keyword arguments - to pass to the features extractor. 
- :param normalize_images: Whether to normalize images or not, - dividing by 255.0 (True by default) - :param optimizer_class: The optimizer to use, - ``th.optim.Adam`` by default - :param optimizer_kwargs: Additional keyword arguments, - excluding the learning rate, to pass to the optimizer - """ - - def __init__( - self, - observation_space: gym.spaces.Space, - action_space: gym.spaces.Space, - lr_schedule: Schedule, - net_arch: Optional[List[int]] = None, - activation_fn: Type[nn.Module] = nn.ReLU, - features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor, - features_extractor_kwargs: Optional[Dict[str, Any]] = None, - normalize_images: bool = True, - optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam, - optimizer_kwargs: Optional[Dict[str, Any]] = None, - ): - super(DQNPolicy, self).__init__( - observation_space, - action_space, - features_extractor_class, - features_extractor_kwargs, - optimizer_class=optimizer_class, - optimizer_kwargs=optimizer_kwargs, - ) - - if net_arch is None: - if features_extractor_class == FlattenExtractor: - net_arch = [64, 64] - else: - net_arch = [] - - self.net_arch = net_arch - self.activation_fn = activation_fn - self.normalize_images = normalize_images - - self.net_args = { - "observation_space": self.observation_space, - "action_space": self.action_space, - "net_arch": self.net_arch, - "activation_fn": self.activation_fn, - "normalize_images": normalize_images, - } - - self.q_net, self.q_net_target = None, None - self._build(lr_schedule) - - def _build(self, lr_schedule: Schedule) -> None: - """ - Create the network and the optimizer. - - :param lr_schedule: Learning rate schedule - lr_schedule(1) is the initial learning rate - """ - - self.q_net = self.make_q_net() - self.q_net_target = self.make_q_net() - self.q_net_target.load_state_dict(self.q_net.state_dict()) - - # Setup optimizer with initial learning rate - self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs) - - def make_q_net(self) -> QNetwork: - # Make sure we always have separate networks for features extractors etc - net_args = self._update_features_extractor(self.net_args, features_extractor=None) - return QNetwork(**net_args).to(self.device) - - def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor: - return self._predict(obs, deterministic=deterministic) - - def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor: - return self.q_net._predict(obs, deterministic=deterministic) - - def _get_constructor_parameters(self) -> Dict[str, Any]: - data = super()._get_constructor_parameters() - - data.update( - dict( - net_arch=self.net_args["net_arch"], - activation_fn=self.net_args["activation_fn"], - lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone - optimizer_class=self.optimizer_class, - optimizer_kwargs=self.optimizer_kwargs, - features_extractor_class=self.features_extractor_class, - features_extractor_kwargs=self.features_extractor_kwargs, - ) - ) - return data - - -MlpPolicy = DQNPolicy - - -class CnnPolicy(DQNPolicy): - """ - Policy class for DQN when using images as input. - - :param observation_space: Observation space - :param action_space: Action space - :param lr_schedule: Learning rate schedule (could be constant) - :param net_arch: The specification of the policy and value networks. - :param activation_fn: Activation function - :param features_extractor_class: Features extractor to use. 
- :param normalize_images: Whether to normalize images or not, - dividing by 255.0 (True by default) - :param optimizer_class: The optimizer to use, - ``th.optim.Adam`` by default - :param optimizer_kwargs: Additional keyword arguments, - excluding the learning rate, to pass to the optimizer - """ - - def __init__( - self, - observation_space: gym.spaces.Space, - action_space: gym.spaces.Space, - lr_schedule: Schedule, - net_arch: Optional[List[int]] = None, - activation_fn: Type[nn.Module] = nn.ReLU, - features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN, - features_extractor_kwargs: Optional[Dict[str, Any]] = None, - normalize_images: bool = True, - optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam, - optimizer_kwargs: Optional[Dict[str, Any]] = None, - ): - super(CnnPolicy, self).__init__( - observation_space, - action_space, - lr_schedule, - net_arch, - activation_fn, - features_extractor_class, - features_extractor_kwargs, - normalize_images, - optimizer_class, - optimizer_kwargs, - ) - - -register_policy("MlpPolicy", MlpPolicy) -register_policy("CnnPolicy", CnnPolicy) diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/app.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/app.py deleted file mode 100644 index 0120cc47772d25900ee7a427538a957564357a54..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/app.py +++ /dev/null @@ -1,375 +0,0 @@ -# -*- coding: utf-8 -*- -import traceback -import torch -from scipy.io import wavfile -import edge_tts -import subprocess -import gradio as gr -import gradio.processing_utils as gr_pu -import io -import os -import logging -import time -from pathlib import Path -import re -import json -import argparse - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) -logging.getLogger('multipart').setLevel(logging.WARNING) - -model = None -spk = None -debug = False - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def vc_fn(sid, input_audio, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold): - try: - if input_audio is None: - raise gr.Error("你需要上传音频") - if model is None: - raise gr.Error("你需要指定模型") - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - audio = (audio 
/ np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - temp_path = "temp.wav" - soundfile.write(temp_path, audio, sampling_rate, format="wav") - _audio = model.slice_inference(temp_path, sid, vc_transform, slice_db, cluster_ratio, auto_f0, noise_scale, - pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold) - model.clear_empty() - os.remove(temp_path) - # 构建保存文件的路径,并保存到results文件夹内 - try: - timestamp = str(int(time.time())) - filename = sid + "_" + timestamp + ".wav" - # output_file = os.path.join("./results", filename) - # soundfile.write(output_file, _audio, model.target_sample, format="wav") - soundfile.write('/tmp/'+filename, _audio, - model.target_sample, format="wav") - # return f"推理成功,音频文件保存为results/{filename}", (model.target_sample, _audio) - return f"推理成功,音频文件保存为{filename}", (model.target_sample, _audio) - except Exception as e: - if debug: - traceback.print_exc() - return f"文件保存失败,请手动保存", (model.target_sample, _audio) - except Exception as e: - if debug: - traceback.print_exc() - raise gr.Error(e) - - -def tts_func(_text, _rate, _voice): - # 使用edge-tts把文字转成音频 - # voice = "zh-CN-XiaoyiNeural"#女性,较高音 - # voice = "zh-CN-YunxiNeural"#男性 - voice = "zh-CN-YunxiNeural" # 男性 - if (_voice == "女"): - voice = "zh-CN-XiaoyiNeural" - output_file = "/tmp/"+_text[0:10]+".wav" - # communicate = edge_tts.Communicate(_text, voice) - # await communicate.save(output_file) - if _rate >= 0: - ratestr = "+{:.0%}".format(_rate) - elif _rate < 0: - ratestr = "{:.0%}".format(_rate) # 减号自带 - - p = subprocess.Popen("edge-tts " + - " --text "+_text + - " --write-media "+output_file + - " --voice "+voice + - " --rate="+ratestr, shell=True, - stdout=subprocess.PIPE, - stdin=subprocess.PIPE) - p.wait() - return output_file - - -def text_clear(text): - return re.sub(r"[\n\,\(\) ]", "", text) - - -def vc_fn2(sid, input_audio, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, lg_num, lgr_num, text2tts, tts_rate, tts_voice, f0_predictor, enhancer_adaptive_key, cr_threshold): - # 使用edge-tts把文字转成音频 - text2tts = text_clear(text2tts) - output_file = tts_func(text2tts, tts_rate, tts_voice) - - # 调整采样率 - sr2 = 44100 - wav, sr = librosa.load(output_file) - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=sr2) - save_path2 = text2tts[0:10]+"_44k"+".wav" - wavfile.write(save_path2, sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - # 读取音频 - sample_rate, data = gr_pu.audio_from_file(save_path2) - vc_input = (sample_rate, data) - - a, b = vc_fn(sid, vc_input, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, - pad_seconds, cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold) - os.remove(output_file) - os.remove(save_path2) - return a, b - - -models_info = [ - { - "description": """ - 这个模型包含公主连结的161名角色。\n\n - Space采用CPU推理,速度极慢,建议下载模型本地GPU推理。\n\n - """, - "model_path": "./G_228800.pth", - "config_path": "./config.json", - } -] - -model_inferall = [] -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--share", action="store_true", - default=False, help="share gradio app") - # 一定要设置的部分 - parser.add_argument('-cl', '--clip', type=float, - default=0, help='音频强制切片,默认0为自动切片,单位为秒/s') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', - default=["君の知らない物語-src.wav"], help='wav文件名列表,放在raw文件夹下') - parser.add_argument('-t', '--trans', type=int, nargs='+', - default=[0], help='音高调整,支持正负(半音)') - 
parser.add_argument('-s', '--spk_list', type=str, - nargs='+', default=['nen'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', - default=False, help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, - default="logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, - default=0, help='聚类方案占比,范围0-1,若没有训练聚类模型则默认0即可') - parser.add_argument('-lg', '--linear_gradient', type=float, default=0, - help='两段音频切片的交叉淡入长度,如果强制切片后出现人声不连贯可调整该数值,如果连贯建议采用默认值0,单位为秒') - parser.add_argument('-f0p', '--f0_predictor', type=str, default="pm", - help='选择F0预测器,可选择crepe,pm,dio,harvest,默认为pm(注意:crepe为原F0使用均值滤波器)') - parser.add_argument('-eh', '--enhance', action='store_true', default=False, - help='是否使用NSF_HIFIGAN增强器,该选项对部分训练集少的模型有一定的音质增强效果,但是对训练好的模型有反面效果,默认关闭') - parser.add_argument('-shd', '--shallow_diffusion', action='store_true', - default=False, help='是否使用浅层扩散,使用后可解决一部分电音问题,默认关闭,该选项打开时,NSF_HIFIGAN增强器将会被禁止') - - # 浅扩散设置 - parser.add_argument('-dm', '--diffusion_model_path', type=str, - default="logs/44k/diffusion/model_0.pt", help='扩散模型路径') - parser.add_argument('-dc', '--diffusion_config_path', type=str, - default="logs/44k/diffusion/config.yaml", help='扩散模型配置文件路径') - parser.add_argument('-ks', '--k_step', type=int, - default=100, help='扩散步数,越大越接近扩散模型的结果,默认100') - parser.add_argument('-od', '--only_diffusion', action='store_true', - default=False, help='纯扩散模式,该模式不会加载sovits模型,以扩散模型推理') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, - default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, - default=None, help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, - default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, - help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, - default='flac', help='音频输出格式') - parser.add_argument('-lgr', '--linear_gradient_retain', type=float, - default=0.75, help='自动音频切片后,需要舍弃每段切片的头尾。该参数设置交叉长度保留的比例,范围0-1,左开右闭') - parser.add_argument('-eak', '--enhancer_adaptive_key', - type=int, default=0, help='使增强器适应更高的音域(单位为半音数)|默认为0') - parser.add_argument('-ft', '--f0_filter_threshold', type=float, default=0.05, - help='F0过滤阈值,只有使用crepe时有效. 数值范围从0-1. 降低该值可减少跑调概率,但会增加哑音') - args = parser.parse_args() - categories = ["Princess Connect! 
Re:Dive"] - others = { - "PCR vits-fast-fineturning": "https://huggingface.co/spaces/FrankZxShen/vits-fast-finetuning-pcr", - } - for info in models_info: - config_path = info['config_path'] - model_path = info['model_path'] - description = info['description'] - clean_names = args.clean_names - trans = args.trans - spk_list = list(get_hparams_from_file(config_path).spk.keys()) - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - clip = args.clip - lg = args.linear_gradient - lgr = args.linear_gradient_retain - f0p = args.f0_predictor - enhance = args.enhance - enhancer_adaptive_key = args.enhancer_adaptive_key - cr_threshold = args.f0_filter_threshold - diffusion_model_path = args.diffusion_model_path - diffusion_config_path = args.diffusion_config_path - k_step = args.k_step - only_diffusion = args.only_diffusion - shallow_diffusion = args.shallow_diffusion - - model = Svc(model_path, config_path, args.device, args.cluster_model_path, enhance, - diffusion_model_path, diffusion_config_path, shallow_diffusion, only_diffusion) - - model_inferall.append((description, spk_list, model)) - - app = gr.Blocks() - with app: - gr.Markdown( - "#
so-vits-svc-models-pcr\n" - "#
Pay attention!!! This Space uses CPU inference, which is extremely slow. It is recommended to download the models and run inference locally on a GPU.\n" - "#
注意!!!Space采用CPU推理,速度极慢,建议下载模型使用本地GPU推理。\n" - "##
Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.\n" - "##
请不要生成会对个人以及组织造成侵害的内容\n\n" - ) - gr.Markdown("# Princess Connect! Re:Dive\n\n" - ) - with gr.Tabs(): - for category in categories: - with gr.TabItem(category): - for i, (description, speakers, model) in enumerate( - model_inferall): - gr.Markdown(description) - with gr.Row(): - with gr.Column(): - # textbox = gr.TextArea(label="Text", - # placeholder="Type your sentence here ", - # value="新たなキャラを解放できるようになったようですね。", elem_id=f"tts-input") - - gr.Markdown(value=""" - 推理设置 - """) - sid = gr.Dropdown( - choices=speakers, value=speakers[0], label='角色选择') - auto_f0 = gr.Checkbox( - label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声勾选此项会究极跑调)", value=False) - f0_predictor = gr.Dropdown(label="选择F0预测器,可选择crepe,pm,dio,harvest,默认为pm(注意:crepe为原F0使用均值滤波器)", choices=[ - "pm", "dio", "harvest", "crepe"], value="pm") - vc_transform = gr.Number( - label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - cluster_ratio = gr.Number( - label="聚类模型混合比例,0-1之间,0即不启用聚类。使用聚类模型能提升音色相似度,但会导致咬字下降(如果使用建议0.5左右)", value=0) - slice_db = gr.Number(label="切片阈值", value=-40) - noise_scale = gr.Number( - label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4) - with gr.Column(): - pad_seconds = gr.Number( - label="推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现", value=0.5) - cl_num = gr.Number( - label="音频自动切片,0为不切片,单位为秒(s)", value=0) - lg_num = gr.Number( - label="两端音频切片的交叉淡入长度,如果自动切片后出现人声不连贯可调整该数值,如果连贯建议采用默认值0,注意,该设置会影响推理速度,单位为秒/s", value=0) - lgr_num = gr.Number( - label="自动音频切片后,需要舍弃每段切片的头尾。该参数设置交叉长度保留的比例,范围0-1,左开右闭", value=0.75) - enhancer_adaptive_key = gr.Number( - label="使增强器适应更高的音域(单位为半音数)|默认为0", value=0) - cr_threshold = gr.Number( - label="F0过滤阈值,只有启动crepe时有效. 数值范围从0-1. 降低该值可减少跑调概率,但会增加哑音", value=0.05) - with gr.Tabs(): - with gr.TabItem("音频转音频"): - vc_input3 = gr.Audio(label="选择音频") - vc_submit = gr.Button( - "音频转换", variant="primary") - with gr.TabItem("文字转音频"): - text2tts = gr.Textbox( - label="在此输入要转译的文字。注意,使用该功能建议打开F0预测,不然会很怪") - tts_rate = gr.Number(label="tts语速", value=0) - tts_voice = gr.Radio(label="性别", choices=[ - "男", "女"], value="男") - vc_submit2 = gr.Button( - "文字转换", variant="primary") - with gr.Row(): - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - with gr.Column(): - vc_output2 = gr.Audio( - label="Output Audio", interactive=False) - - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, - cl_num, lg_num, lgr_num, f0_predictor, enhancer_adaptive_key, cr_threshold], [vc_output1, vc_output2]) - vc_submit2.click(vc_fn2, [sid, vc_input3, vc_transform, auto_f0, cluster_ratio, slice_db, noise_scale, pad_seconds, cl_num, - lg_num, lgr_num, text2tts, tts_rate, tts_voice, f0_predictor, enhancer_adaptive_key, cr_threshold], [vc_output1, vc_output2]) - # gr.Examples( - # examples=example, - # inputs=[textbox, char_dropdown, language_dropdown, - # duration_slider, symbol_input], - # outputs=[text_output, audio_output], - # fn=tts_fn - # ) - for category, link in others.items(): - with gr.TabItem(category): - gr.Markdown( - f''' -
-

Click to Go

- - -
- ''' - ) - - app.queue(concurrency_count=3).launch(show_api=False, share=args.share) diff --git a/spaces/Fuyuka29/Anime_Background_Remover/app.py b/spaces/Fuyuka29/Anime_Background_Remover/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/Fuyuka29/Anime_Background_Remover/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/GXSA/bingo/src/lib/bots/bing/utils.ts b/spaces/GXSA/bingo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 6bbbc5e463ad55bc1219b63cf78013f5360fc908..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function 
createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查身份信息是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/Gradio-Blocks/ML-Aided-Code-Analysis/README.md b/spaces/Gradio-Blocks/ML-Aided-Code-Analysis/README.md deleted file mode 100644 index 64723cce26d837e4244849cc391f3aee5b4e9033..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/ML-Aided-Code-Analysis/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Ml Aided Static Code Analysis -emoji: 📉 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.0.5 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - - -### Tasks that can be done -- Code Generation (Natural Language Query | Code Completion) -> https://huggingface.co/lvwerra/codeparrot-small -- Code Summarization (Summarize in Natural Language) -- Code Search (from NL) -- Similarity Check (Duplicate Detection) -- Vulnerability Detection \ No newline at end of file diff --git a/spaces/Gradio-Blocks/Object-Detection-With-DETR-and-YOLOS/app.py b/spaces/Gradio-Blocks/Object-Detection-With-DETR-and-YOLOS/app.py deleted file mode 100644 index f0b12f763b9c39f9d626978a8d83a4530f633cda..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Object-Detection-With-DETR-and-YOLOS/app.py +++ /dev/null @@ -1,155 +0,0 @@ -import io -import gradio as gr -import matplotlib.pyplot as plt -import requests, validators -import torch -import pathlib -from PIL import Image -from transformers import AutoFeatureExtractor, DetrForObjectDetection, YolosForObjectDetection - -import os - -# colors for visualization -COLORS = [ - [0.000, 0.447, 0.741], - [0.850, 0.325, 0.098], - [0.929, 0.694, 0.125], - [0.494, 0.184, 0.556], - [0.466, 0.674, 0.188], - [0.301, 0.745, 0.933] -] - -def make_prediction(img, feature_extractor, model): - inputs = feature_extractor(img, return_tensors="pt") - outputs = model(**inputs) - img_size = torch.tensor([tuple(reversed(img.size))]) - processed_outputs = feature_extractor.post_process(outputs, img_size) - return processed_outputs[0] - -def fig2img(fig): - buf = io.BytesIO() - fig.savefig(buf) - buf.seek(0) - img = Image.open(buf) - return img - - -def visualize_prediction(pil_img, output_dict, 
threshold=0.7, id2label=None): - keep = output_dict["scores"] > threshold - boxes = output_dict["boxes"][keep].tolist() - scores = output_dict["scores"][keep].tolist() - labels = output_dict["labels"][keep].tolist() - if id2label is not None: - labels = [id2label[x] for x in labels] - - plt.figure(figsize=(16, 10)) - plt.imshow(pil_img) - ax = plt.gca() - colors = COLORS * 100 - for score, (xmin, ymin, xmax, ymax), label, color in zip(scores, boxes, labels, colors): - ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin, fill=False, color=color, linewidth=3)) - ax.text(xmin, ymin, f"{label}: {score:0.2f}", fontsize=15, bbox=dict(facecolor="yellow", alpha=0.5)) - plt.axis("off") - return fig2img(plt.gcf()) - -def detect_objects(model_name,url_input,image_input,threshold): - - #Extract model and feature extractor - feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) - - if 'detr' in model_name: - - model = DetrForObjectDetection.from_pretrained(model_name) - - elif 'yolos' in model_name: - - model = YolosForObjectDetection.from_pretrained(model_name) - - if validators.url(url_input): - image = Image.open(requests.get(url_input, stream=True).raw) - - elif image_input: - image = image_input - - #Make prediction - processed_outputs = make_prediction(image, feature_extractor, model) - - #Visualize prediction - viz_img = visualize_prediction(image, processed_outputs, threshold, model.config.id2label) - - return viz_img - -def set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) - -def set_example_url(example: list) -> dict: - return gr.Textbox.update(value=example[0]) - - -title = """

<h1 id="title">Object Detection App with DETR and YOLOS</h1>

""" - -description = """ -Links to HuggingFace Models: - -- [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) -- [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101) -- [hustvl/yolos-small](https://huggingface.co/hustvl/yolos-small) -- [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) - -""" - -models = ["facebook/detr-resnet-50","facebook/detr-resnet-101",'hustvl/yolos-small','hustvl/yolos-tiny'] -urls = ["https://c8.alamy.com/comp/J2AB4K/the-new-york-stock-exchange-on-the-wall-street-in-new-york-J2AB4K.jpg"] - -twitter_link = """ -[![](https://img.shields.io/twitter/follow/nickmuchi?label=@nickmuchi&style=social)](https://twitter.com/nickmuchi) -""" - -css = ''' -h1#title { - text-align: center; -} -''' -demo = gr.Blocks(css=css) - -with demo: - gr.Markdown(title) - gr.Markdown(description) - gr.Markdown(twitter_link) - options = gr.Dropdown(choices=models,label='Select Object Detection Model',show_label=True) - slider_input = gr.Slider(minimum=0.2,maximum=1,value=0.7,label='Prediction Threshold') - - with gr.Tabs(): - with gr.TabItem('Image URL'): - with gr.Row(): - url_input = gr.Textbox(lines=2,label='Enter valid image URL here..') - img_output_from_url = gr.Image(shape=(650,650)) - - with gr.Row(): - example_url = gr.Dataset(components=[url_input],samples=[[str(url)] for url in urls]) - - url_but = gr.Button('Detect') - - with gr.TabItem('Image Upload'): - with gr.Row(): - img_input = gr.Image(type='pil') - img_output_from_upload= gr.Image(shape=(650,650)) - - with gr.Row(): - example_images = gr.Dataset(components=[img_input], - samples=[[path.as_posix()] - for path in sorted(pathlib.Path('images').rglob('*.JPG'))]) - - img_but = gr.Button('Detect') - - - url_but.click(detect_objects,inputs=[options,url_input,img_input,slider_input],outputs=img_output_from_url,queue=True) - img_but.click(detect_objects,inputs=[options,url_input,img_input,slider_input],outputs=img_output_from_upload,queue=True) - example_images.click(fn=set_example_image,inputs=[example_images],outputs=[img_input]) - example_url.click(fn=set_example_url,inputs=[example_url],outputs=[url_input]) - - - gr.Markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=nickmuchi-object-detection-with-detr-and-yolos)") - - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 8e7420d24a20b662286266cac58cab4721dc8df3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/deeplabv3_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/cgnet.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/cgnet.py deleted file mode 100644 index 032a55d85f04caa4801b2d6a9768518a33892462..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/backbones/cgnet.py +++ /dev/null @@ -1,367 +0,0 @@ -import torch -import torch.nn as nn -import torch.utils.checkpoint as cp -from mmcv.cnn import (ConvModule, 
build_conv_layer, build_norm_layer, - constant_init, kaiming_init) -from mmcv.runner import load_checkpoint -from mmcv.utils.parrots_wrapper import _BatchNorm - -from mmseg.utils import get_root_logger -from ..builder import BACKBONES - - -class GlobalContextExtractor(nn.Module): - """Global Context Extractor for CGNet. - - This class is employed to refine the joint feature of both local feature - and surrounding context. - - Args: - channel (int): Number of input feature channels. - reduction (int): Reductions for global context extractor. Default: 16. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, channel, reduction=16, with_cp=False): - super(GlobalContextExtractor, self).__init__() - self.channel = channel - self.reduction = reduction - assert reduction >= 1 and channel >= reduction - self.with_cp = with_cp - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc = nn.Sequential( - nn.Linear(channel, channel // reduction), nn.ReLU(inplace=True), - nn.Linear(channel // reduction, channel), nn.Sigmoid()) - - def forward(self, x): - - def _inner_forward(x): - num_batch, num_channel = x.size()[:2] - y = self.avg_pool(x).view(num_batch, num_channel) - y = self.fc(y).view(num_batch, num_channel, 1, 1) - return x * y - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class ContextGuidedBlock(nn.Module): - """Context Guided Block for CGNet. - - This class consists of four components: local feature extractor, - surrounding feature extractor, joint feature extractor and global - context extractor. - - Args: - in_channels (int): Number of input feature channels. - out_channels (int): Number of output feature channels. - dilation (int): Dilation rate for surrounding context extractor. - Default: 2. - reduction (int): Reduction for global context extractor. Default: 16. - skip_connect (bool): Add input to output or not. Default: True. - downsample (bool): Downsample the input to 1/2 or not. Default: False. - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. 
- """ - - def __init__(self, - in_channels, - out_channels, - dilation=2, - reduction=16, - skip_connect=True, - downsample=False, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - with_cp=False): - super(ContextGuidedBlock, self).__init__() - self.with_cp = with_cp - self.downsample = downsample - - channels = out_channels if downsample else out_channels // 2 - if 'type' in act_cfg and act_cfg['type'] == 'PReLU': - act_cfg['num_parameters'] = channels - kernel_size = 3 if downsample else 1 - stride = 2 if downsample else 1 - padding = (kernel_size - 1) // 2 - - self.conv1x1 = ConvModule( - in_channels, - channels, - kernel_size, - stride, - padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - - self.f_loc = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=1, - groups=channels, - bias=False) - self.f_sur = build_conv_layer( - conv_cfg, - channels, - channels, - kernel_size=3, - padding=dilation, - groups=channels, - dilation=dilation, - bias=False) - - self.bn = build_norm_layer(norm_cfg, 2 * channels)[1] - self.activate = nn.PReLU(2 * channels) - - if downsample: - self.bottleneck = build_conv_layer( - conv_cfg, - 2 * channels, - out_channels, - kernel_size=1, - bias=False) - - self.skip_connect = skip_connect and not downsample - self.f_glo = GlobalContextExtractor(out_channels, reduction, with_cp) - - def forward(self, x): - - def _inner_forward(x): - out = self.conv1x1(x) - loc = self.f_loc(out) - sur = self.f_sur(out) - - joi_feat = torch.cat([loc, sur], 1) # the joint feature - joi_feat = self.bn(joi_feat) - joi_feat = self.activate(joi_feat) - if self.downsample: - joi_feat = self.bottleneck(joi_feat) # channel = out_channels - # f_glo is employed to refine the joint feature - out = self.f_glo(joi_feat) - - if self.skip_connect: - return x + out - else: - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class InputInjection(nn.Module): - """Downsampling module for CGNet.""" - - def __init__(self, num_downsampling): - super(InputInjection, self).__init__() - self.pool = nn.ModuleList() - for i in range(num_downsampling): - self.pool.append(nn.AvgPool2d(3, stride=2, padding=1)) - - def forward(self, x): - for pool in self.pool: - x = pool(x) - return x - - -@BACKBONES.register_module() -class CGNet(nn.Module): - """CGNet backbone. - - A Light-weight Context Guided Network for Semantic Segmentation - arXiv: https://arxiv.org/abs/1811.08201 - - Args: - in_channels (int): Number of input image channels. Normally 3. - num_channels (tuple[int]): Numbers of feature channels at each stages. - Default: (32, 64, 128). - num_blocks (tuple[int]): Numbers of CG blocks at stage 1 and stage 2. - Default: (3, 21). - dilations (tuple[int]): Dilation rate for surrounding context - extractors at stage 1 and stage 2. Default: (2, 4). - reductions (tuple[int]): Reductions for global context extractors at - stage 1 and stage 2. Default: (8, 16). - conv_cfg (dict): Config dict for convolution layer. - Default: None, which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='BN', requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='PReLU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. 
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16), - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='PReLU'), - norm_eval=False, - with_cp=False): - - super(CGNet, self).__init__() - self.in_channels = in_channels - self.num_channels = num_channels - assert isinstance(self.num_channels, tuple) and len( - self.num_channels) == 3 - self.num_blocks = num_blocks - assert isinstance(self.num_blocks, tuple) and len(self.num_blocks) == 2 - self.dilations = dilations - assert isinstance(self.dilations, tuple) and len(self.dilations) == 2 - self.reductions = reductions - assert isinstance(self.reductions, tuple) and len(self.reductions) == 2 - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - if 'type' in self.act_cfg and self.act_cfg['type'] == 'PReLU': - self.act_cfg['num_parameters'] = num_channels[0] - self.norm_eval = norm_eval - self.with_cp = with_cp - - cur_channels = in_channels - self.stem = nn.ModuleList() - for i in range(3): - self.stem.append( - ConvModule( - cur_channels, - num_channels[0], - 3, - 2 if i == 0 else 1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - cur_channels = num_channels[0] - - self.inject_2x = InputInjection(1) # down-sample for Input, factor=2 - self.inject_4x = InputInjection(2) # down-sample for Input, factor=4 - - cur_channels += in_channels - self.norm_prelu_0 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 1 - self.level1 = nn.ModuleList() - for i in range(num_blocks[0]): - self.level1.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[1], - num_channels[1], - dilations[0], - reductions[0], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[1] + in_channels - self.norm_prelu_1 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - # stage 2 - self.level2 = nn.ModuleList() - for i in range(num_blocks[1]): - self.level2.append( - ContextGuidedBlock( - cur_channels if i == 0 else num_channels[2], - num_channels[2], - dilations[1], - reductions[1], - downsample=(i == 0), - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg, - with_cp=with_cp)) # CG block - - cur_channels = 2 * num_channels[2] - self.norm_prelu_2 = nn.Sequential( - build_norm_layer(norm_cfg, cur_channels)[1], - nn.PReLU(cur_channels)) - - def forward(self, x): - output = [] - - # stage 0 - inp_2x = self.inject_2x(x) - inp_4x = self.inject_4x(x) - for layer in self.stem: - x = layer(x) - x = self.norm_prelu_0(torch.cat([x, inp_2x], 1)) - output.append(x) - - # stage 1 - for i, layer in enumerate(self.level1): - x = layer(x) - if i == 0: - down1 = x - x = self.norm_prelu_1(torch.cat([x, down1, inp_4x], 1)) - output.append(x) - - # stage 2 - for i, layer in enumerate(self.level2): - x = layer(x) - if i == 0: - down2 = x - x = self.norm_prelu_2(torch.cat([down2, x], 1)) - output.append(x) - - return output - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, (nn.Conv2d, nn.Linear)): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - elif isinstance(m, nn.PReLU): - constant_init(m, 0) - else: - raise TypeError('pretrained must be a str or None') - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(CGNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/HESOAYM/ElviraMulti/modules/overwrites.py b/spaces/HESOAYM/ElviraMulti/modules/overwrites.py deleted file mode 100644 index 035a4a52722d66ee28af1c05231ad1cea3339ef5..0000000000000000000000000000000000000000 --- a/spaces/HESOAYM/ElviraMulti/modules/overwrites.py +++ /dev/null @@ -1,94 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html -from gradio_client import utils as client_utils - -from modules.presets import * -from modules.llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, - y: List[List[str | Tuple[str] | Tuple[str, str] | None] | Tuple], - ) -> List[List[str | Dict | None]]: - """ - Parameters: - y: List of lists representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. It can also be a tuple whose first element is a string filepath or URL to an image/video/audio, and second (optional) element is the alt text, in which case the media file is displayed. It can also be None, in which case that message is not displayed. - Returns: - List of lists representing the message and response. Each message and response will be a string of HTML, or a dictionary with media information. Or None if the message is not to be displayed. - """ - if y is None: - return [] - processed_messages = [] - for message_pair in y: - assert isinstance( - message_pair, (tuple, list) - ), f"Expected a list of lists or list of tuples. Received: {message_pair}" - assert ( - len(message_pair) == 2 - ), f"Expected a list of lists of length 2 or list of tuples of length 2. 
Received: {message_pair}" - - processed_messages.append( - [ - self._postprocess_chat_messages(message_pair[0], "user"), - self._postprocess_chat_messages(message_pair[1], "bot"), - ] - ) - return processed_messages - -def postprocess_chat_messages( - self, chat_message: str | Tuple | List | None, message_type: str - ) -> str | Dict | None: - if chat_message is None: - return None - elif isinstance(chat_message, (tuple, list)): - filepath = chat_message[0] - mime_type = client_utils.get_mimetype(filepath) - filepath = self.make_temp_copy_if_needed(filepath) - return { - "name": filepath, - "mime_type": mime_type, - "alt_text": chat_message[1] if len(chat_message) > 1 else None, - "data": None, # These last two fields are filled in by the frontend - "is_file": True, - } - elif isinstance(chat_message, str): - if message_type == "bot": - if not detect_converted_mark(chat_message): - chat_message = convert_mdtext(chat_message) - elif message_type == "user": - if not detect_converted_mark(chat_message): - chat_message = convert_asis(chat_message) - return chat_message - else: - raise ValueError(f"Invalid message for Chatbot component: {chat_message}") - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/online_backtranslation.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/online_backtranslation.py deleted file mode 100644 index 2e27ca237cde1980b2c3ca497e12f458da230c37..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/online_backtranslation.py +++ /dev/null @@ -1,682 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -import json -import logging -import math -import os -from argparse import Namespace -from collections import OrderedDict, defaultdict -from pathlib import Path -from typing import Dict, Sequence, Tuple -from argparse import ArgumentError - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -import fairseq -from fairseq import metrics, options, utils -from fairseq.data import ( - FairseqDataset, - LanguagePairDataset, - NoisingDataset, - PrependTokenDataset, - RoundRobinZipDatasets, - TransformEosLangPairDataset, - data_utils, - encoders, -) -from fairseq.sequence_generator import SequenceGenerator -from fairseq.tasks import register_task -from fairseq.tasks.translation import TranslationTask, load_langpair_dataset - -logger = logging.getLogger(__name__) - - -class PiecewiseLinearFn: - """Piecewise linear function. 
Can be configured with a string.""" - - def __init__(self, pieces: Sequence[Tuple[int, float]]): - assert pieces == sorted( - pieces - ), f"PiecewiseLinearFn configuration should be sorted, received: {pieces}" - - self.pieces = pieces - - def __call__(self, x: int) -> float: - for i, (x_a, y_a) in enumerate(self.pieces[:-1]): - x_b, y_b = self.pieces[i + 1] - if x_a <= x <= x_b: - return y_a + (x - x_a) * (y_b - y_a) / (x_b - x_a) - - return self.pieces[-1][1] - - @staticmethod - def from_string(configuration: str) -> "PiecewiseLinearFn": - """ - Parse the configuration of lambda coefficient (for scheduling). - x = "3" # lambda will be a constant equal to x - x = "0:1,1000:0" # lambda will start from 1 and linearly decrease - # to 0 during the first 1000 iterations - x = "0:0,1000:0,2000:1" # lambda will be equal to 0 for the first 1000 - # iterations, then will linearly increase to 1 until iteration 2000 - """ - if isinstance(configuration, float): - return PiecewiseLinearFn([(0, configuration)]) - - try: - parts = configuration.split(",") - if len(parts) == 1: - v = float(configuration) - return PiecewiseLinearFn([(0, v)]) - - split = [s.split(":") for s in parts] - pieces = [(int(t), float(v)) for t, v in split] - return PiecewiseLinearFn(pieces) - except Exception: - raise ValueError( - f"Invalid PiecewiseLinearFn configuration: {configuration!r}" - ) - - @staticmethod - def one() -> "PiecewiseLinearFn": - return PiecewiseLinearFn([(0, 1.0)]) - - -@register_task("online_backtranslation") -class OnlineBackTranslationTask(TranslationTask): - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - # Generic translation args - parser.add_argument('data', help='colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner; \ - however, valid and test data are always in the first directory to \ - avoid the need for repeating them in all directories') - parser.add_argument('--mono-langs', metavar='MONO_LANGS', - help='monolingual languages for training') - parser.add_argument('--valid-lang-pairs', default=None, metavar='VALID_LANG_PAIRS', - help='language pairs for validation') - parser.add_argument('--load-alignments', action='store_true', - help='load the binarized alignments') - parser.add_argument('--left-pad-source', default='False', type=str, metavar='BOOL', - help='pad the source on the left') - parser.add_argument('--left-pad-target', default='False', type=str, metavar='BOOL', - help='pad the target on the left') - parser.add_argument('--upsample-primary', default=1, type=int, - help='amount to upsample primary dataset') - try: - parser.add_argument('--max-source-positions', default=1024, type=int, metavar='N', - help='max number of tokens in the source sequence') - parser.add_argument('--max-target-positions', default=1024, type=int, metavar='N', - help='max number of tokens in the target sequence') - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. 
- pass - parser.add_argument('--truncate-source', action='store_true', default=False, - help='truncate source to max-source-positions') - parser.add_argument('--num-batch-buckets', default=0, type=int, metavar='N', - help='if >0, then bucket source and target lengths into N ' - 'buckets and pad accordingly; this is useful on TPUs ' - 'to minimize the number of compilations') - - # Denoising args - parser.add_argument('--max-word-shuffle-distance', default=3.0, type=float, metavar='N', - help='maximum word shuffle distance for denoising autoencoding data generation') - parser.add_argument('--word-dropout-prob', default=0.1, type=float, metavar='N', - help='word dropout probability for denoising autoencoding data generation') - parser.add_argument('--word-blanking-prob', default=0.2, type=float, metavar='N', - help='word blanking probability for denoising autoencoding data generation') - - # Backtranslation args - parser.add_argument('--lambda-bt', default="1.0", type=str, metavar='N', - help='back-translation weight') - parser.add_argument('--lambda-dae', default="1.0", type=str, metavar='N', - help='denoising auto-encoder weight') - - # Evaluation args - parser.add_argument('--generate-one-by-one', action='store_true', - help='generate one sentence at a time for backtranslation') - - parser.add_argument('--eval-bleu', action='store_true', - help='evaluation with BLEU scores') - parser.add_argument('--eval-bleu-detok', type=str, default="space", - help='detokenize before computing BLEU (e.g., "moses"); ' - 'required if using --eval-bleu; use "space" to ' - 'disable detokenization; see fairseq.data.encoders ' - 'for other options') - parser.add_argument('--eval-bleu-detok-args', type=str, metavar='JSON', - help='args for building the tokenizer, if needed') - parser.add_argument('--eval-tokenized-bleu', action='store_true', default=False, - help='compute tokenized BLEU instead of sacrebleu') - parser.add_argument('--eval-bleu-remove-bpe', nargs='?', const='@@ ', default=None, - help='remove BPE before computing BLEU') - parser.add_argument('--eval-bleu-args', type=str, metavar='JSON', - help='generation args for BLUE scoring, ' - 'e.g., \'{"beam": 4, "lenpen": 0.6}\'') - parser.add_argument('--eval-bleu-print-samples', action='store_true', - help='print sample generations during validation') - # fmt: on - - def __init__(self, args, common_dict, mono_langs, valid_lang_pairs): - super().__init__(args, common_dict, common_dict) - self.common_dict = common_dict - self.mono_langs = mono_langs - self.valid_lang_pairs = valid_lang_pairs - - self.SHOW_SAMPLES_INTERVAL = 1000 - # Start by showing samples - self._show_samples_ctr = self.SHOW_SAMPLES_INTERVAL - self.SHOW_SAMPLES_NUMBER = 5 - self.lambda_bt = PiecewiseLinearFn.from_string(args.lambda_bt) - self.lambda_dae = PiecewiseLinearFn.from_string(args.lambda_dae) - - self.args = args - self.data = utils.split_paths(self.args.data) - if len(self.data) == 1: - shards = list(Path(self.data[0]).glob("shard*")) - if len(shards) > 0: - # keep this as strings, since it can also be a manifold path - old_data = self.data - self.data = [str(shard) for shard in shards] - logging.warning(f"Expanded data directory {old_data} to {self.data}") - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries). 
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - args.left_pad_source = options.eval_bool(args.left_pad_source) - args.left_pad_target = options.eval_bool(args.left_pad_target) - - paths = utils.split_paths(args.data) - assert len(paths) > 0 - assert args.mono_langs is not None - - mono_langs = args.mono_langs.split(",") - valid_lang_pairs = args.valid_lang_pairs.split(",") - - # load dictionary - dict_path = os.path.join(paths[0], "dict.txt") - common_dict = cls.load_dictionary(dict_path) - - return cls(args, common_dict, mono_langs, valid_lang_pairs) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs) -> FairseqDataset: - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if split == "train": - data_path = self.data[(epoch - 1) % len(self.data)] - dataset = self.load_train_dataset(data_path) - else: - # valid/test should always be the same. - dataset = self.load_translation_dataset(split, self.data[0]) - - self.datasets[split] = dataset - return dataset - - def load_train_dataset(self, data_path: str) -> FairseqDataset: - """The training dataset is made of backtranslation dataset and denoising dataset.""" - data = [] - for lang in self.mono_langs: - train_path = os.path.join(data_path, lang, "train") - # TODO: could we do the BT using denoise sample ? - # this would half the data loading work - data.append((f"{lang}-BT", self.load_bt_dataset(train_path, lang))) - data.append( - (f"{lang}-DENOISE", self.load_denoise_dataset(train_path, lang)) - ) - - return RoundRobinZipDatasets(OrderedDict(data)) - - def _langpair_dataset( - self, src: FairseqDataset, tgt: FairseqDataset - ) -> LanguagePairDataset: - return LanguagePairDataset( - src, - src.sizes, - self.dictionary, - tgt=tgt, - tgt_sizes=tgt.sizes, - tgt_dict=self.dictionary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - # TODO: should we shuffle ? we are already sorting batch by sizes so ? - # shuffle=True, - ) - - def _prepend_lang_bos_to_target( - self, dataset: LanguagePairDataset, lang: str - ) -> LanguagePairDataset: - bos = _lang_token_index(self.dictionary, lang) - return TransformEosLangPairDataset( - dataset, - src_eos=self.dictionary.eos(), - new_src_eos=self.dictionary.eos(), - tgt_bos=self.dictionary.eos(), - new_tgt_bos=bos, - ) - - def load_bt_dataset(self, data_path: str, lang: str) -> FairseqDataset: - """The BT dataset is generated with (tgt, tgt) pairs. - The actual translation to a (generated_src, tgt) pair - is done on the fly during training. 
- """ - mono_dataset = data_utils.load_indexed_dataset( - data_path, self.common_dict, self.args.dataset_impl - ) - assert mono_dataset is not None, f"No dataset found for {lang}" - - mono_dataset_src = PrependTokenDataset( - mono_dataset, _lang_token_index(self.dictionary, lang) - ) - - mono_dataset_bt = self._langpair_dataset(mono_dataset_src, mono_dataset) - logger.info( - f"mono_lang = {lang} " - f"lang token index = {_lang_token_index(self.dictionary, lang)} " - f"lang token = {_lang_token(lang)}" - ) - - mono_dataset_bt = self._prepend_lang_bos_to_target(mono_dataset_bt, lang) - return mono_dataset_bt - - def load_denoise_dataset(self, data_path: str, lang: str) -> FairseqDataset: - """Classic denoising dataset""" - dataset = data_utils.load_indexed_dataset( - data_path, self.common_dict, self.args.dataset_impl - ) - noisy_dataset = NoisingDataset( - dataset, - self.dictionary, - seed=1, - max_word_shuffle_distance=self.args.max_word_shuffle_distance, - word_dropout_prob=self.args.word_dropout_prob, - word_blanking_prob=self.args.word_blanking_prob, - ) - noisy_dataset = PrependTokenDataset( - noisy_dataset, _lang_token_index(self.dictionary, lang) - ) - - clean_dataset = data_utils.load_indexed_dataset( - data_path, self.common_dict, self.args.dataset_impl - ) - denoising_dataset = self._langpair_dataset(noisy_dataset, clean_dataset) - denoising_dataset = self._prepend_lang_bos_to_target(denoising_dataset, lang) - return denoising_dataset - - def load_translation_dataset( - self, split: str, data_path: str, combine: bool = False - ): - # only judging with one language pair for the moment, - # since ConcatDataset doesn't work as expected - assert len(self.valid_lang_pairs) == 1, "For now..." - valid_lang_pair = self.valid_lang_pairs[0] - src, tgt = valid_lang_pair.split("-") - - # use the same function than TranslationTask - src_tgt_dt = load_langpair_dataset( - data_path, - split, - src, - self.common_dict, - tgt, - self.common_dict, - combine=combine, - dataset_impl=self.args.dataset_impl, - upsample_primary=self.args.upsample_primary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - max_source_positions=self.args.max_source_positions, - max_target_positions=self.args.max_target_positions, - load_alignments=self.args.load_alignments, - truncate_source=self.args.truncate_source, - num_buckets=self.args.num_batch_buckets, - shuffle=(split != "test"), - prepend_bos_src=_lang_token_index(self.dictionary, src), - ) - - src_tgt_eos_dt = self._prepend_lang_bos_to_target(src_tgt_dt, tgt) - src_tgt_eos_dt.args = self.args - return src_tgt_eos_dt - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - raise NotImplementedError - - def build_model(self, args): - # torch.autograd.set_detect_anomaly(True) - model = super().build_model(args) - - add_secial_tokens_to_dict_and_model(self.common_dict, model, self.mono_langs) - - self.sequence_generators = {} - for mono_lang in self.mono_langs: - self.sequence_generators[mono_lang] = SequenceGenerator( - [model], - tgt_dict=self.dictionary, - beam_size=1, - max_len_a=1.3, - max_len_b=5, - min_len=5, - # keep 1 to be able to prepend bos - max_len=model.max_decoder_positions() - 1, - ) - - if getattr(args, "eval_bleu", False): - assert getattr(args, "eval_bleu_detok", None) is not None, ( - "--eval-bleu-detok is required if using --eval-bleu; " - "try --eval-bleu-detok=moses (or --eval-bleu-detok=space " - "to disable detokenization, e.g., when using sentencepiece)" - ) - 
detok_args = json.loads(getattr(args, "eval_bleu_detok_args", "{}") or "{}") - self.tokenizer = encoders.build_tokenizer( - Namespace( - tokenizer=getattr(args, "eval_bleu_detok", None), **detok_args - ) - ) - - gen_args = json.loads(getattr(args, "eval_bleu_args", "{}") or "{}") - self.bleu_sequence_generator = self.build_generator( - [model], Namespace(**gen_args) - ) - - return model - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) - - @property - def dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.common_dict - - def display_samples_once_in_a_while(self, smp, mono_lang, other_lang): - self._show_samples_ctr += 1 - if self._show_samples_ctr < self.SHOW_SAMPLES_INTERVAL: - return - self._show_samples_ctr = 0 - - ln = smp["net_input"]["src_tokens"].shape[0] - - logger.info( - f"(r:{self.args.distributed_rank}) : " - f"{other_lang} ---> {mono_lang} " - f"({other_lang} was generated by back-translation.) {ln} samples" - ) - - for i in range(min(ln, self.SHOW_SAMPLES_NUMBER)): - src_tokens = smp["net_input"]["src_tokens"][i] - tgt_tokens = smp["target"][i] - - src_str = self.dictionary.string(src_tokens, "sentencepiece") - tgt_str = self.dictionary.string(tgt_tokens, "sentencepiece") - logger.info( - f"\n{i}\t\t[{other_lang} generated] {src_str}\n" - f"\t\t[{mono_lang} original ] {tgt_str}\n" - f"\t\t[ src tokens] {src_tokens}\n" - ) - - def backtranslate_sample(self, smp, orig_lang, other_lang) -> None: - """ - * WARNING: smp is modified in place. - * At the start of this function, `smp` has the same input and target: - |--------------------------------------------------------| - | smp['net_input']['src_tokens'] | smp['target'] | - | (from data) __en__ hello world | __en__ hello world | - |--------------------------------------------------------| - - * We call generator.generate(smp, bos_token = token("ro")), - and copy the result as input - * At the end, `smp` has the translation to other language. 
- |--------------------------------------------------------| - | smp['net_input']['src_tokens'] | smp['target'] | - | (generated) __ro__ salut lume | __en__ hello world | - |--------------------------------------------------------| - - """ - bos_token = _lang_token_index(self.dictionary, other_lang) - generated = self.sequence_generators[orig_lang].generate( - models=[], sample=smp, bos_token=bos_token - ) - - max_lngth = max([gn[0]["tokens"].size(0) for gn in generated]) - net_input = smp["net_input"] - n_src_tokens = torch.empty( - size=(len(generated), max_lngth + 1), dtype=net_input["src_tokens"].dtype - ) - n_src_lengths = torch.empty( - len(generated), dtype=net_input["src_lengths"].dtype - ) - - for i, gn in enumerate(generated): - tokens = gn[0]["tokens"] - tokens_size = tokens.size(0) - padding_needed = max_lngth - tokens_size - tokens = torch.cat([tokens.new([bos_token]), tokens]) - tokens = F.pad(tokens, (0, padding_needed), value=self.dictionary.pad()) - n_src_tokens[i] = tokens - n_src_lengths[i] = tokens_size + 1 - - device = net_input["src_tokens"].device - # This seems to be important - del net_input["src_tokens"] - del net_input["src_lengths"] - net_input["src_tokens"] = n_src_tokens.to(device) - net_input["src_lengths"] = n_src_lengths.to(device) - - def generate(self, smp, model): - model.eval() - orig_lang = ( - self.dictionary[smp["net_input"]["src_tokens"][0][0]] - .replace(" ", "") - .replace("_", "") - ) - bos_token = smp["net_input"]["prev_output_tokens"][0][0] - with torch.no_grad(): - generated = self.sequence_generators[orig_lang].generate( - models=[model], sample=smp, bos_token=bos_token - ) - return generated - - def get_other_lang(self, lang): - # TODO: allow more complex mapping - if lang != self.mono_langs[0]: - return self.mono_langs[0] - if len(self.mono_langs) == 2: - return self.mono_langs[1] - return self.mono_langs[np.random.randint(1, len(self.mono_langs))] - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - - model.train() - model.set_num_updates(update_num) - - agg_loss, agg_sample_size = 0.0, 0.0 - agg_logging_output: Dict[str, float] = defaultdict(float) - - dataset_keys = self.datasets["train"].datasets.keys() - - weights = { - "BT": self.lambda_bt(update_num), - "DENOISE": self.lambda_dae(update_num), - } - log_keys = {"BT": "bt_", "DENOISE": "dae_"} - - for dataset_key in dataset_keys: - smp = sample[dataset_key] - mono_lang, task_subtype = dataset_key.split("-") - if weights[task_subtype] == 0: - continue - - if task_subtype == "BT": - with torch.autograd.profiler.record_function("backtranslation"): - model.eval() - # TODO: Could we translate to several language at once ? - # this would allow to share encoder_out and maximize GPU usage. 
- other_lang = self.get_other_lang(mono_lang) - self.backtranslate_sample(smp, mono_lang, other_lang) - self.display_samples_once_in_a_while(smp, mono_lang, other_lang) - model.train() - - # Like in FairseqTask.train_step - with torch.autograd.profiler.record_function("forward"): - loss, sample_size, logging_output = criterion(model, smp) - loss *= weights[task_subtype] - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - - agg_loss += loss.item() - agg_sample_size += sample_size - for k in logging_output: - agg_logging_output[log_keys[task_subtype] + k] += logging_output[k] - agg_logging_output[k] += logging_output[k] - - return agg_loss, agg_sample_size, agg_logging_output - - def get_bos_token_from_sample(self, sample): - net_input = sample["net_input"] - source_lang_token_id = torch.unique(net_input["src_tokens"][:, 0]).item() - source_lang_token = self.dictionary[source_lang_token_id].replace("_", "") - target_lang_token_id = _lang_token_index( - self.dictionary, self.get_other_lang(source_lang_token) - ) - - return target_lang_token_id - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - bt_sample_size = sum(x.get("bt_sample_size", 0) for x in logging_outputs) - if bt_sample_size: - bt_loss_sum = sum(x.get("bt_loss", 0) for x in logging_outputs) - bt_loss_sum *= 1 / bt_sample_size / math.log(2) - metrics.log_scalar("bt_loss", bt_loss_sum, bt_sample_size, round=3) - - bt_nll_loss_sum = sum(x.get("bt_nll_loss", 0) for x in logging_outputs) - bt_ntokens = sum(x.get("bt_ntokens", 0) for x in logging_outputs) - bt_nll_loss_sum *= 1 / bt_ntokens / math.log(2) - metrics.log_scalar("bt_nll_loss", bt_nll_loss_sum, bt_ntokens, round=3) - metrics.log_derived( - "bt_ppl", lambda meters: utils.get_perplexity(meters["bt_nll_loss"].avg) - ) - - dae_sample_size = sum(x.get("dae_sample_size", 0) for x in logging_outputs) - if dae_sample_size: - dae_loss_sum = sum(x.get("dae_loss", 0) for x in logging_outputs) - dae_loss_sum *= 1 / dae_sample_size / math.log(2) - metrics.log_scalar("dae_loss", dae_loss_sum, dae_sample_size, round=3) - - dae_nll_loss_sum = sum(x.get("dae_nll_loss", 0) for x in logging_outputs) - dae_ntokens = sum(x.get("dae_ntokens", 0) for x in logging_outputs) - dae_nll_loss_sum *= 1 / dae_ntokens / math.log(2) - metrics.log_scalar("dae_nll_loss", dae_nll_loss_sum, dae_ntokens, round=3) - metrics.log_derived( - "dae_ppl", - lambda meters: utils.get_perplexity(meters["dae_nll_loss"].avg), - ) - - -@torch.no_grad() -def extend_embedding( - emb: nn.Module, new_vocab_size: int, copy_from_token_id: int -) -> None: - old_emb_data = emb.weight.data - (old_vocab_size, dim) = old_emb_data.shape - assert new_vocab_size >= old_vocab_size - - if new_vocab_size > old_vocab_size: - emb.weight.data = torch.zeros((new_vocab_size, dim)) - emb.weight.data[:old_vocab_size, :] = old_emb_data - # initialize new embeddings - emb.weight.data[old_vocab_size:, :] = old_emb_data[copy_from_token_id] - if hasattr(emb, "num_embeddings"): - emb.num_embeddings = new_vocab_size - if hasattr(emb, "out_features"): - emb.out_features = new_vocab_size - - if getattr(emb, "bias", None) is None: - return - - # Fix the bias. - # Bias shape can be different from the previous vocab size - # if the weight matrix was shared and alread extended but not the bias. 
- (old_vocab_size,) = emb.bias.shape - assert new_vocab_size >= old_vocab_size - if new_vocab_size > old_vocab_size: - old_bias = emb.bias.data - new_bias = torch.zeros( - (new_vocab_size,), dtype=old_bias.dtype, device=old_bias.device - ) - new_bias[:old_vocab_size] = old_bias - emb.bias.data = new_bias - - -def add_secial_tokens_to_dict_and_model( - dictionary: "fairseq.data.Dictionary", - model: nn.Module, - mono_langs: Sequence[str], -) -> None: - embs = model.encoder.embed_tokens - vocab_size, embedding_dim = embs.weight.shape - - # The model may or may not have a '<mask>' embedding yet - assert ( - len(dictionary) <= vocab_size <= len(dictionary) + 1 - ), f"Dictionary len ({len(dictionary)}) doesn't match embs shape ({embs.weight.shape})" - # TODO: we should reuse the pretrained model dict which already has <mask> - dictionary.add_symbol("<mask>") - - for lang in mono_langs: - lang_token = _lang_token(lang) - dictionary.add_symbol(lang_token) - logger.info( - f"dictionary: {len(dictionary)} -> {vocab_size} tokens " - f"after adding {len(mono_langs)} lang tokens." - ) - - if len(dictionary) <= vocab_size: - return - - extend_embedding(embs, len(dictionary), dictionary.bos()) - dec_embs = model.decoder.embed_tokens - extend_embedding(dec_embs, len(dictionary), dictionary.bos()) - lm_head = model.decoder.output_projection - extend_embedding(lm_head, len(dictionary), dictionary.bos()) - assert lm_head.weight.shape == (len(dictionary), embedding_dim) - - -def _lang_token(lang: str) -> str: - return f"__{lang}__" - - -def _lang_token_index(dictionary, lang: str) -> int: - return dictionary.index(_lang_token(lang)) - - -@contextlib.contextmanager -def assert_weights_have_changed(model: nn.Module): - def checksum(model: nn.Module) -> float: - return sum(p.sum().item() for p in model.parameters()) - - initial_checksum = checksum(model) - yield model - final_checksum = checksum(model) - logger.info( - f"initial_checksum={initial_checksum} -> final_checksum={final_checksum}" - ) - assert initial_checksum != final_checksum, "Model hasn't changed !" diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/audio_processing.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/audio_processing.py deleted file mode 100644 index 3a4467355952fefaba117b6014864139ac319c6b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/audio_processing.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import numpy as np -from scipy.signal import get_window -import librosa.util as librosa_util - - -def window_sumsquare( - window, - n_frames, - hop_length=200, - win_length=800, - n_fft=800, - dtype=np.float32, - norm=None, -): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - - n_frames : int > 0 - The number of analysis frames - - hop_length : int > 0 - The number of samples to advance between frames - - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - - n_fft : int > 0 - The length of each analysis frame.
- - dtype : np.dtype - The data type of the output - - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm) ** 2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))] - return x - - -def griffin_lim(magnitudes, stft_fn, n_iters=30): - """ - PARAMS - ------ - magnitudes: spectrogram magnitudes - stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods - """ - - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size()))) - angles = angles.astype(np.float32) - angles = torch.autograd.Variable(torch.from_numpy(angles)) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - - for i in range(n_iters): - _, angles = stft_fn.transform(signal) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - return signal - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C diff --git a/spaces/HighCWu/Style2Paints-4.5-Gradio/ui/web-mobile/cocos2d-js-min.335ee.js b/spaces/HighCWu/Style2Paints-4.5-Gradio/ui/web-mobile/cocos2d-js-min.335ee.js deleted file mode 100644 index 2642d246297e337579dcf7d01cfc0394a3b80ae9..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/Style2Paints-4.5-Gradio/ui/web-mobile/cocos2d-js-min.335ee.js +++ /dev/null @@ -1 +0,0 @@ -(function(t,e,i){var n="function"==typeof require&&require;function r(i,s){var o=e[i];if(!o){var a=t[i];if(!a){var c="function"==typeof require&&require;if(!s&&c)return c(i,!0);if(n)return n(i,!0);var h=new Error("Cannot find module '"+i+"'");throw h.code="MODULE_NOT_FOUND",h}var l={};o=e[i]={exports:l},a[0]((function(t){return r(a[1][t]||t)}),o,l)}return o.exports}for(var s=0;s2||i<0)&&(t[e.renderMode]=0),cc._renderType=cc.game.RENDER_TYPE_CANVAS,cc._supportRender=!1,0===i?cc.sys.capabilities.opengl?(cc._renderType=cc.game.RENDER_TYPE_WEBGL,cc._supportRender=!0):cc.sys.capabilities.canvas&&(cc._renderType=cc.game.RENDER_TYPE_CANVAS,cc._supportRender=!0):1===i&&cc.sys.capabilities.canvas?(cc._renderType=cc.game.RENDER_TYPE_CANVAS,cc._supportRender=!0):2===i&&cc.sys.capabilities.opengl&&(cc._renderType=cc.game.RENDER_TYPE_WEBGL,cc._supportRender=!0)})(t=cc.game.config),document.body?s():window.addEventListener("load",o,!1),n=!0}}),{"./cocos2d/core/platform/CCSys":187,"./cocos2d/core/utils":227}],3:[(function(t,e,i){var n,r=t("./cocos2d/core/platform/CCEnum");cc.DebugMode=r({NONE:0,INFO:1,WARN:2,ERROR:3,INFO_FOR_WEB_PAGE:4,WARN_FOR_WEB_PAGE:5,ERROR_FOR_WEB_PAGE:6}),cc._initDebugSetting=function(t){cc.log=cc.warn=cc.error=cc.assert=function(){},t!==cc.DebugMode.NONE&&(t>cc.DebugMode.ERROR?(function(){function e(t){if(cc._canvas){if(!n){var e=document.createElement("Div");e.setAttribute("id","logInfoDiv"),e.setAttribute("width","200"),e.setAttribute("height",cc._canvas.height);var 
i=e.style;i.zIndex="99999",i.position="absolute",i.top=i.left="0",(n=document.createElement("textarea")).setAttribute("rows","20"),n.setAttribute("cols","30"),n.setAttribute("disabled","true");var r=n.style;r.backgroundColor="transparent",r.borderBottom="1px solid #cccccc",r.borderTopWidth=r.borderLeftWidth=r.borderRightWidth="0px",r.borderTopStyle=r.borderLeftStyle=r.borderRightStyle="none",r.padding="0px",r.margin=0,e.appendChild(n),cc._canvas.parentNode.appendChild(e)}n.value=n.value+t+"\r\n",n.scrollTop=n.scrollHeight}}cc.error=function(){e("ERROR : "+cc.js.formatStr.apply(null,arguments))},cc.assert=function(t,i){"use strict";!t&&i&&e("ASSERT: "+(i=cc.js.formatStr.apply(null,cc.js.shiftArguments.apply(null,arguments))))},t!==cc.DebugMode.ERROR_FOR_WEB_PAGE&&(cc.warn=function(){e("WARN : "+cc.js.formatStr.apply(null,arguments))}),t===cc.DebugMode.INFO_FOR_WEB_PAGE&&(cc.log=cc.info=function(){e(cc.js.formatStr.apply(null,arguments))})})():console&&console.log.apply&&(console.error||(console.error=console.log),console.warn||(console.warn=console.log),console.error.bind?cc.error=console.error.bind(console):cc.error=function(){return console.error.apply(console,arguments)},cc.assert=function(t,e){if(!t)throw e&&(e=cc.js.formatStr.apply(null,cc.js.shiftArguments.apply(null,arguments))),new Error(e)}),t!==cc.DebugMode.ERROR&&(console.warn.bind?cc.warn=console.warn.bind(console):cc.warn=function(){return console.warn.apply(console,arguments)}),t===cc.DebugMode.INFO&&(console.log.bind?cc.log=console.log.bind(console):cc.log=function(){return console.log.apply(console,arguments)},cc.info=function(){(console.info||console.log).apply(console,arguments)}))},cc._throw=function(t){var e=t.stack;e?cc.error(e):cc.error(t)};t("./DebugInfos");var s="https://github.com/cocos-creator/engine/blob/master/EngineErrorMap.md";function o(t){return function(){var e=arguments[0],i=t+" "+e+", please go to "+s+"#"+e+" to see details.";if(1===arguments.length)return i;if(2===arguments.length)return i+" Arguments: "+arguments[1];var n=cc.js.shiftArguments.apply(null,arguments);return i+" Arguments: "+n.join(", ")}}var a=o("Log");cc.logID=function(){cc.log(a.apply(null,arguments))};var c=o("Warning");cc.warnID=function(){cc.warn(c.apply(null,arguments))};var h=o("Error");cc.errorID=function(){cc.error(h.apply(null,arguments))};var l=o("Assert");cc.assertID=function(t){"use strict";t||cc.assert(!1,l.apply(null,cc.js.shiftArguments.apply(null,arguments)))},cc._getError=o("ERROR"),cc._initDebugSetting(cc.DebugMode.INFO)}),{"./DebugInfos":1,"./cocos2d/core/platform/CCEnum":180}],4:[(function(t,e,i){cc.Action=cc._Class.extend({ctor:function(){this.originalTarget=null,this.target=null,this.tag=cc.Action.TAG_INVALID},clone:function(){var t=new cc.Action;return t.originalTarget=null,t.target=null,t.tag=this.tag,t},isDone:function(){return!0},startWithTarget:function(t){this.originalTarget=t,this.target=t},stop:function(){this.target=null},step:function(t){cc.logID(1006)},update:function(t){cc.logID(1007)},getTarget:function(){return this.target},setTarget:function(t){this.target=t},getOriginalTarget:function(){return this.originalTarget},setOriginalTarget:function(t){this.originalTarget=t},getTag:function(){return this.tag},setTag:function(t){this.tag=t},retain:function(){},release:function(){}}),cc.Action.TAG_INVALID=-1,cc.FiniteTimeAction=cc.Action.extend({_duration:0,ctor:function(){cc.Action.prototype.ctor.call(this),this._duration=0},getDuration:function(){return 
this._duration*(this._timesForRepeat||1)},setDuration:function(t){this._duration=t},reverse:function(){return cc.logID(1008),null},clone:function(){return new cc.FiniteTimeAction}}),cc.Speed=cc.Action.extend({_speed:0,_innerAction:null,ctor:function(t,e){cc.Action.prototype.ctor.call(this),this._speed=0,this._innerAction=null,t&&this.initWithAction(t,e)},getSpeed:function(){return this._speed},setSpeed:function(t){this._speed=t},initWithAction:function(t,e){if(!t)throw new Error(cc._getError(1021));return this._innerAction=t,this._speed=e,!0},clone:function(){var t=new cc.Speed;return t.initWithAction(this._innerAction.clone(),this._speed),t},startWithTarget:function(t){cc.Action.prototype.startWithTarget.call(this,t),this._innerAction.startWithTarget(t)},stop:function(){this._innerAction.stop(),cc.Action.prototype.stop.call(this)},step:function(t){this._innerAction.step(t*this._speed)},isDone:function(){return this._innerAction.isDone()},reverse:function(){return new cc.Speed(this._innerAction.reverse(),this._speed)},setInnerAction:function(t){this._innerAction!==t&&(this._innerAction=t)},getInnerAction:function(){return this._innerAction}}),cc.speed=function(t,e){return new cc.Speed(t,e)},cc.Follow=cc.Action.extend({_followedNode:null,_boundarySet:!1,_boundaryFullyCovered:!1,_halfScreenSize:null,_fullScreenSize:null,_worldRect:null,leftBoundary:0,rightBoundary:0,topBoundary:0,bottomBoundary:0,ctor:function(t,e){cc.Action.prototype.ctor.call(this),this._followedNode=null,this._boundarySet=!1,this._boundaryFullyCovered=!1,this._halfScreenSize=null,this._fullScreenSize=null,this.leftBoundary=0,this.rightBoundary=0,this.topBoundary=0,this.bottomBoundary=0,this._worldRect=cc.rect(0,0,0,0),t&&(e?this.initWithTarget(t,e):this.initWithTarget(t))},clone:function(){var t=new cc.Follow,e=this._worldRect,i=new cc.Rect(e.x,e.y,e.width,e.height);return t.initWithTarget(this._followedNode,i),t},isBoundarySet:function(){return this._boundarySet},setBoudarySet:function(t){this._boundarySet=t},initWithTarget:function(t,e){if(!t)throw new Error(cc._getError(1022));e=e||cc.rect(0,0,0,0),this._followedNode=t,this._worldRect=e,this._boundarySet=!cc._rectEqualToZero(e),this._boundaryFullyCovered=!1;var i=cc.director.getWinSize();return this._fullScreenSize=cc.p(i.width,i.height),this._halfScreenSize=cc.pMult(this._fullScreenSize,.5),this._boundarySet&&(this.leftBoundary=-(e.x+e.width-this._fullScreenSize.x),this.rightBoundary=-e.x,this.topBoundary=-e.y,this.bottomBoundary=-(e.y+e.height-this._fullScreenSize.y),this.rightBoundary=0;i--)e.push(cc.p(t[i].x,t[i].y));return e}function r(t){for(var e=[],i=0;i=this._duration},_cloneDecoration:function(t){t._repeatForever=this._repeatForever,t._speed=this._speed,t._timesForRepeat=this._timesForRepeat,t._easeList=this._easeList,t._speedMethod=this._speedMethod,t._repeatMethod=this._repeatMethod},_reverseEaseList:function(t){if(this._easeList){t._easeList=[];for(var e=0;e1.192092896e-7?this._duration:1.192092896e-7);e=1>e?e:1,this.update(e>0?e:0),this._repeatMethod&&this._timesForRepeat>1&&this.isDone()&&(this._repeatForever||this._timesForRepeat--,this.startWithTarget(this.target),this.step(this._elapsed-this._duration))},startWithTarget:function(t){cc.Action.prototype.startWithTarget.call(this,t),this._elapsed=0,this._firstTick=!0},reverse:function(){return cc.logID(1010),null},setAmplitudeRate:function(t){cc.logID(1011)},getAmplitudeRate:function(){return cc.logID(1012),0},speed:function(t){return 
t<=0?(cc.logID(1013),this):(this._speedMethod=!0,this._speed*=t,this)},getSpeed:function(){return this._speed},setSpeed:function(t){return this._speed=t,this},repeat:function(t){return t=Math.round(t),isNaN(t)||t<1?(cc.logID(1014),this):(this._repeatMethod=!0,this._timesForRepeat*=t,this)},repeatForever:function(){return this._repeatMethod=!0,this._timesForRepeat=this.MAX_VALUE,this._repeatForever=!0,this}}),cc.actionInterval=function(t){return new cc.ActionInterval(t)},cc.Sequence=cc.ActionInterval.extend({_actions:null,_split:null,_last:0,_reversed:!1,ctor:function(t){cc.ActionInterval.prototype.ctor.call(this),this._actions=[];var e=t instanceof Array?t:arguments;if(1!==e.length){var i=e.length-1;if(i>=0&&null==e[i]&&cc.logID(1015),i>=0){for(var n,r=e[0],s=1;s1?e%1:e),this._last=n)},reverse:function(){var t=cc.Sequence._actionOneTwo(this._actions[1].reverse(),this._actions[0].reverse());return this._cloneDecoration(t),this._reverseEaseList(t),t._reversed=!0,t}}),cc.sequence=function(t){var e=t instanceof Array?t:arguments;if(1===e.length)return cc.errorID(1019),null;var i=e.length-1;i>=0&&null==e[i]&&cc.logID(1015);var n=null;if(i>=0){n=e[0];for(var r=1;r<=i;r++)e[r]&&(n=cc.Sequence._actionOneTwo(n,e[r]))}return n},cc.Sequence._actionOneTwo=function(t,e){var i=new cc.Sequence;return i.initWithTwoActions(t,e),i},cc.Repeat=cc.ActionInterval.extend({_times:0,_total:0,_nextDt:0,_actionInstant:!1,_innerAction:null,ctor:function(t,e){cc.ActionInterval.prototype.ctor.call(this),void 0!==e&&this.initWithAction(t,e)},initWithAction:function(t,e){var i=t._duration*e;return!!this.initWithDuration(i)&&(this._times=e,this._innerAction=t,t instanceof cc.ActionInstant&&(this._actionInstant=!0,this._times-=1),this._total=0,!0)},clone:function(){var t=new cc.Repeat;return this._cloneDecoration(t),t.initWithAction(this._innerAction.clone(),this._times),t},startWithTarget:function(t){this._total=0,this._nextDt=this._innerAction._duration/this._duration,cc.ActionInterval.prototype.startWithTarget.call(this,t),this._innerAction.startWithTarget(t)},stop:function(){this._innerAction.stop(),cc.Action.prototype.stop.call(this)},update:function(t){t=this._computeEaseTime(t);var e=this._innerAction,i=this._duration,n=this._times,r=this._nextDt;if(t>=r){for(;t>r&&this._total=1&&this._total=0&&null==e[i]&&cc.logID(1015),i>=0){for(var n,r=e[0],s=1;sr?this._two=cc.Sequence._actionOneTwo(e,cc.delayTime(n-r)):n0&&null==e[e.length-1]&&cc.logID(1015);for(var i=e[0],n=1;n180&&(i-=360),i<-180&&(i+=360),this._startAngleX=e,this._diffAngleX=i,this._startAngleY=t.rotationY%360;var n=this._dstAngleY-this._startAngleY;n>180&&(n-=360),n<-180&&(n+=360),this._diffAngleY=n},reverse:function(){cc.logID(1016)},update:function(t){t=this._computeEaseTime(t),this.target&&(this.target.rotationX=this._startAngleX+this._diffAngleX*t,this.target.rotationY=this._startAngleY+this._diffAngleY*t)}}),cc.rotateTo=function(t,e,i){return new cc.RotateTo(t,e,i)},cc.RotateBy=cc.ActionInterval.extend({_angleX:0,_startAngleX:0,_angleY:0,_startAngleY:0,ctor:function(t,e,i){cc.ActionInterval.prototype.ctor.call(this),void 0!==e&&this.initWithDuration(t,e,i)},initWithDuration:function(t,e,i){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(this._angleX=e||0,this._angleY=void 0!==i?i:this._angleX,!0)},clone:function(){var t=new cc.RotateBy;return 
this._cloneDecoration(t),t.initWithDuration(this._duration,this._angleX,this._angleY),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._startAngleX=t.rotationX,this._startAngleY=t.rotationY},update:function(t){t=this._computeEaseTime(t),this.target&&(this.target.rotationX=this._startAngleX+this._angleX*t,this.target.rotationY=this._startAngleY+this._angleY*t)},reverse:function(){var t=new cc.RotateBy(this._duration,-this._angleX,-this._angleY);return this._cloneDecoration(t),this._reverseEaseList(t),t}}),cc.rotateBy=function(t,e,i){return new cc.RotateBy(t,e,i)},cc.MoveBy=cc.ActionInterval.extend({_positionDelta:null,_startPosition:null,_previousPosition:null,ctor:function(t,e,i){cc.ActionInterval.prototype.ctor.call(this),this._positionDelta=cc.p(0,0),this._startPosition=cc.p(0,0),this._previousPosition=cc.p(0,0),void 0!==e&&this.initWithDuration(t,e,i)},initWithDuration:function(t,e,i){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(void 0!==e.x&&(i=e.y,e=e.x),this._positionDelta.x=e,this._positionDelta.y=i,!0)},clone:function(){var t=new cc.MoveBy;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._positionDelta),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t);var e=t.getPositionX(),i=t.getPositionY();this._previousPosition.x=e,this._previousPosition.y=i,this._startPosition.x=e,this._startPosition.y=i},update:function(t){if(t=this._computeEaseTime(t),this.target){var e=this._positionDelta.x*t,i=this._positionDelta.y*t,n=this._startPosition;if(cc.macro.ENABLE_STACKABLE_ACTIONS){var r=this.target.getPositionX(),s=this.target.getPositionY(),o=this._previousPosition;n.x=n.x+r-o.x,n.y=n.y+s-o.y,e+=n.x,i+=n.y,o.x=e,o.y=i,this.target.setPosition(e,i)}else this.target.setPosition(n.x+e,n.y+i)}},reverse:function(){var t=new cc.MoveBy(this._duration,cc.p(-this._positionDelta.x,-this._positionDelta.y));return this._cloneDecoration(t),this._reverseEaseList(t),t}}),cc.moveBy=function(t,e,i){return new cc.MoveBy(t,e,i)},cc.MoveTo=cc.MoveBy.extend({_endPosition:null,ctor:function(t,e,i){cc.MoveBy.prototype.ctor.call(this),this._endPosition=cc.p(0,0),void 0!==e&&this.initWithDuration(t,e,i)},initWithDuration:function(t,e,i){return!!cc.MoveBy.prototype.initWithDuration.call(this,t,e,i)&&(void 0!==e.x&&(i=e.y,e=e.x),this._endPosition.x=e,this._endPosition.y=i,!0)},clone:function(){var t=new cc.MoveTo;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._endPosition),t},startWithTarget:function(t){cc.MoveBy.prototype.startWithTarget.call(this,t),this._positionDelta.x=this._endPosition.x-t.getPositionX(),this._positionDelta.y=this._endPosition.y-t.getPositionY()}}),cc.moveTo=function(t,e,i){return new cc.MoveTo(t,e,i)},cc.SkewTo=cc.ActionInterval.extend({_skewX:0,_skewY:0,_startSkewX:0,_startSkewY:0,_endSkewX:0,_endSkewY:0,_deltaX:0,_deltaY:0,ctor:function(t,e,i){cc.ActionInterval.prototype.ctor.call(this),void 0!==i&&this.initWithDuration(t,e,i)},initWithDuration:function(t,e,i){var n=!1;return cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(this._endSkewX=e,this._endSkewY=i,n=!0),n},clone:function(){var t=new cc.SkewTo;return 
this._cloneDecoration(t),t.initWithDuration(this._duration,this._endSkewX,this._endSkewY),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._startSkewX=t.skewX%180,this._deltaX=this._endSkewX-this._startSkewX,this._deltaX>180&&(this._deltaX-=360),this._deltaX<-180&&(this._deltaX+=360),this._startSkewY=t.skewY%360,this._deltaY=this._endSkewY-this._startSkewY,this._deltaY>180&&(this._deltaY-=360),this._deltaY<-180&&(this._deltaY+=360)},update:function(t){t=this._computeEaseTime(t),this.target.skewX=this._startSkewX+this._deltaX*t,this.target.skewY=this._startSkewY+this._deltaY*t}}),cc.skewTo=function(t,e,i){return new cc.SkewTo(t,e,i)},cc.SkewBy=cc.SkewTo.extend({ctor:function(t,e,i){cc.SkewTo.prototype.ctor.call(this),void 0!==i&&this.initWithDuration(t,e,i)},initWithDuration:function(t,e,i){var n=!1;return cc.SkewTo.prototype.initWithDuration.call(this,t,e,i)&&(this._skewX=e,this._skewY=i,n=!0),n},clone:function(){var t=new cc.SkewBy;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._skewX,this._skewY),t},startWithTarget:function(t){cc.SkewTo.prototype.startWithTarget.call(this,t),this._deltaX=this._skewX,this._deltaY=this._skewY,this._endSkewX=this._startSkewX+this._deltaX,this._endSkewY=this._startSkewY+this._deltaY},reverse:function(){var t=new cc.SkewBy(this._duration,-this._skewX,-this._skewY);return this._cloneDecoration(t),this._reverseEaseList(t),t}}),cc.skewBy=function(t,e,i){return new cc.SkewBy(t,e,i)},cc.JumpBy=cc.ActionInterval.extend({_startPosition:null,_delta:null,_height:0,_jumps:0,_previousPosition:null,ctor:function(t,e,i,n,r){cc.ActionInterval.prototype.ctor.call(this),this._startPosition=cc.p(0,0),this._previousPosition=cc.p(0,0),this._delta=cc.p(0,0),void 0!==n&&this.initWithDuration(t,e,i,n,r)},initWithDuration:function(t,e,i,n,r){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(void 0===r&&(r=n,n=i,i=e.y,e=e.x),this._delta.x=e,this._delta.y=i,this._height=n,this._jumps=r,!0)},clone:function(){var t=new cc.JumpBy;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._delta,this._height,this._jumps),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t);var e=t.getPositionX(),i=t.getPositionY();this._previousPosition.x=e,this._previousPosition.y=i,this._startPosition.x=e,this._startPosition.y=i},update:function(t){if(t=this._computeEaseTime(t),this.target){var e=t*this._jumps%1,i=4*this._height*e*(1-e);i+=this._delta.y*t;var n=this._delta.x*t,r=this._startPosition;if(cc.macro.ENABLE_STACKABLE_ACTIONS){var s=this.target.getPositionX(),o=this.target.getPositionY(),a=this._previousPosition;r.x=r.x+s-a.x,r.y=r.y+o-a.y,n+=r.x,i+=r.y,a.x=n,a.y=i,this.target.setPosition(n,i)}else this.target.setPosition(r.x+n,r.y+i)}},reverse:function(){var t=new cc.JumpBy(this._duration,cc.p(-this._delta.x,-this._delta.y),this._height,this._jumps);return this._cloneDecoration(t),this._reverseEaseList(t),t}}),cc.jumpBy=function(t,e,i,n,r){return new cc.JumpBy(t,e,i,n,r)},cc.JumpTo=cc.JumpBy.extend({_endPosition:null,ctor:function(t,e,i,n,r){cc.JumpBy.prototype.ctor.call(this),this._endPosition=cc.p(0,0),void 0!==n&&this.initWithDuration(t,e,i,n,r)},initWithDuration:function(t,e,i,n,r){return!!cc.JumpBy.prototype.initWithDuration.call(this,t,e,i,n,r)&&(void 
0===r&&(i=e.y,e=e.x),this._endPosition.x=e,this._endPosition.y=i,!0)},startWithTarget:function(t){cc.JumpBy.prototype.startWithTarget.call(this,t),this._delta.x=this._endPosition.x-this._startPosition.x,this._delta.y=this._endPosition.y-this._startPosition.y},clone:function(){var t=new cc.JumpTo;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._endPosition,this._height,this._jumps),t}}),cc.jumpTo=function(t,e,i,n,r){return new cc.JumpTo(t,e,i,n,r)},cc.bezierAt=function(t,e,i,n,r){return Math.pow(1-r,3)*t+3*r*Math.pow(1-r,2)*e+3*Math.pow(r,2)*(1-r)*i+Math.pow(r,3)*n},cc.BezierBy=cc.ActionInterval.extend({_config:null,_startPosition:null,_previousPosition:null,ctor:function(t,e){cc.ActionInterval.prototype.ctor.call(this),this._config=[],this._startPosition=cc.p(0,0),this._previousPosition=cc.p(0,0),e&&this.initWithDuration(t,e)},initWithDuration:function(t,e){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(this._config=e,!0)},clone:function(){var t=new cc.BezierBy;this._cloneDecoration(t);for(var e=[],i=0;ie/2?255:0}},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._originalState=t.opacity},stop:function(){this.target.opacity=this._originalState,cc.ActionInterval.prototype.stop.call(this)},reverse:function(){var t=new cc.Blink(this._duration,this._times);return this._cloneDecoration(t),this._reverseEaseList(t),t}}),cc.blink=function(t,e){return new cc.Blink(t,e)},cc.FadeTo=cc.ActionInterval.extend({_toOpacity:0,_fromOpacity:0,ctor:function(t,e){cc.ActionInterval.prototype.ctor.call(this),void 0!==e&&this.initWithDuration(t,e)},initWithDuration:function(t,e){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(this._toOpacity=e,!0)},clone:function(){var t=new cc.FadeTo;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._toOpacity),t},update:function(t){t=this._computeEaseTime(t);var e=void 0!==this._fromOpacity?this._fromOpacity:255;this.target.opacity=e+(this._toOpacity-e)*t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._fromOpacity=t.opacity}}),cc.fadeTo=function(t,e){return new cc.FadeTo(t,e)},cc.FadeIn=cc.FadeTo.extend({_reverseAction:null,ctor:function(t){cc.FadeTo.prototype.ctor.call(this),null==t&&(t=0),this.initWithDuration(t,255)},reverse:function(){var t=new cc.FadeOut;return t.initWithDuration(this._duration,0),this._cloneDecoration(t),this._reverseEaseList(t),t},clone:function(){var t=new cc.FadeIn;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._toOpacity),t},startWithTarget:function(t){this._reverseAction&&(this._toOpacity=this._reverseAction._fromOpacity),cc.FadeTo.prototype.startWithTarget.call(this,t)}}),cc.fadeIn=function(t){return new cc.FadeIn(t)},cc.FadeOut=cc.FadeTo.extend({ctor:function(t){cc.FadeTo.prototype.ctor.call(this),null==t&&(t=0),this.initWithDuration(t,0)},reverse:function(){var t=new cc.FadeIn;return t._reverseAction=this,t.initWithDuration(this._duration,255),this._cloneDecoration(t),this._reverseEaseList(t),t},clone:function(){var t=new cc.FadeOut;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._toOpacity),t}}),cc.fadeOut=function(t){return new cc.FadeOut(t)},cc.TintTo=cc.ActionInterval.extend({_to:null,_from:null,ctor:function(t,e,i,n){cc.ActionInterval.prototype.ctor.call(this),this._to=cc.color(0,0,0),this._from=cc.color(0,0,0),e instanceof cc.Color&&(n=e.b,i=e.g,e=e.r),void 
0!==n&&this.initWithDuration(t,e,i,n)},initWithDuration:function(t,e,i,n){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(this._to=cc.color(e,i,n),!0)},clone:function(){var t=new cc.TintTo;this._cloneDecoration(t);var e=this._to;return t.initWithDuration(this._duration,e.r,e.g,e.b),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._from=this.target.color},update:function(t){t=this._computeEaseTime(t);var e=this._from,i=this._to;e&&this.target.setColor(cc.color(e.r+(i.r-e.r)*t,e.g+(i.g-e.g)*t,e.b+(i.b-e.b)*t))}}),cc.tintTo=function(t,e,i,n){return new cc.TintTo(t,e,i,n)},cc.TintBy=cc.ActionInterval.extend({_deltaR:0,_deltaG:0,_deltaB:0,_fromR:0,_fromG:0,_fromB:0,ctor:function(t,e,i,n){cc.ActionInterval.prototype.ctor.call(this),void 0!==n&&this.initWithDuration(t,e,i,n)},initWithDuration:function(t,e,i,n){return!!cc.ActionInterval.prototype.initWithDuration.call(this,t)&&(this._deltaR=e,this._deltaG=i,this._deltaB=n,!0)},clone:function(){var t=new cc.TintBy;return this._cloneDecoration(t),t.initWithDuration(this._duration,this._deltaR,this._deltaG,this._deltaB),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t);var e=t.color;this._fromR=e.r,this._fromG=e.g,this._fromB=e.b},update:function(t){t=this._computeEaseTime(t),this.target.color=cc.color(this._fromR+this._deltaR*t,this._fromG+this._deltaG*t,this._fromB+this._deltaB*t)},reverse:function(){var t=new cc.TintBy(this._duration,-this._deltaR,-this._deltaG,-this._deltaB);return this._cloneDecoration(t),this._reverseEaseList(t),t}}),cc.tintBy=function(t,e,i,n){return new cc.TintBy(t,e,i,n)},cc.DelayTime=cc.ActionInterval.extend({update:function(t){},reverse:function(){var t=new cc.DelayTime(this._duration);return this._cloneDecoration(t),this._reverseEaseList(t),t},clone:function(){var t=new cc.DelayTime;return this._cloneDecoration(t),t.initWithDuration(this._duration),t}}),cc.delayTime=function(t){return new cc.DelayTime(t)},cc.ReverseTime=cc.ActionInterval.extend({_other:null,ctor:function(t){cc.ActionInterval.prototype.ctor.call(this),this._other=null,t&&this.initWithAction(t)},initWithAction:function(t){if(!t)throw new Error(cc._getError(1028));if(t===this._other)throw new Error(cc._getError(1029));return!!cc.ActionInterval.prototype.initWithDuration.call(this,t._duration)&&(this._other=t,!0)},clone:function(){var t=new cc.ReverseTime;return this._cloneDecoration(t),t.initWithAction(this._other.clone()),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._other.startWithTarget(t)},update:function(t){t=this._computeEaseTime(t),this._other&&this._other.update(1-t)},reverse:function(){return this._other.clone()},stop:function(){this._other.stop(),cc.Action.prototype.stop.call(this)}}),cc.reverseTime=function(t){return new cc.ReverseTime(t)},cc.Animate=cc.ActionInterval.extend({_animation:null,_nextFrame:0,_origFrame:null,_executedLoops:0,_splitTimes:null,_currFrameIndex:0,ctor:function(t){cc.ActionInterval.prototype.ctor.call(this),this._splitTimes=[],t&&this.initWithAnimation(t)},getAnimation:function(){return this._animation},setAnimation:function(t){this._animation=t},getCurrentFrameIndex:function(){return this._currFrameIndex},initWithAnimation:function(t){if(!t)throw new Error(cc._getError(1030));var e=t.getDuration();if(this.initWithDuration(e*t.getLoops())){this._nextFrame=0,this.setAnimation(t),this._origFrame=null,this._executedLoops=0;var i=this._splitTimes;i.length=0;var 
n=0,r=e/t.getTotalDelayUnits(),s=t.getFrames();cc.js.array.verifyType(s,cc.AnimationFrame);for(var o=0;othis._executedLoops&&(this._nextFrame=0,this._executedLoops++),t%=1);for(var e=this._animation.getFrames(),i=e.length,n=this._splitTimes,r=this._nextFrame;r0)for(var n=e.length-1;n>=0;n--){var r=e[n];if(!r)break;i.push(r.clone())}var s=new cc.SpriteFrameAnimation(i,t.getDelayPerUnit(),t.getLoops());s.setRestoreOriginalFrame(t.getRestoreOriginalFrame());var o=new cc.Animate(s);return this._cloneDecoration(o),this._reverseEaseList(o),o},stop:function(){this._animation.getRestoreOriginalFrame()&&this.target&&this.target.setSpriteFrame(this._origFrame),cc.Action.prototype.stop.call(this)}}),cc.animate=function(t){return new cc.Animate(t)},cc.TargetedAction=cc.ActionInterval.extend({_action:null,_forcedTarget:null,ctor:function(t,e){cc.ActionInterval.prototype.ctor.call(this),e&&this.initWithTarget(t,e)},initWithTarget:function(t,e){return!!this.initWithDuration(e._duration)&&(this._forcedTarget=t,this._action=e,!0)},clone:function(){var t=new cc.TargetedAction;return this._cloneDecoration(t),t.initWithTarget(this._forcedTarget,this._action.clone()),t},startWithTarget:function(t){cc.ActionInterval.prototype.startWithTarget.call(this,t),this._action.startWithTarget(this._forcedTarget)},stop:function(){this._action.stop()},update:function(t){t=this._computeEaseTime(t),this._action.update(t)},getForcedTarget:function(){return this._forcedTarget},setForcedTarget:function(t){this._forcedTarget!==t&&(this._forcedTarget=t)}}),cc.targetedAction=function(t,e){return new cc.TargetedAction(t,e)}}),{}],9:[(function(t,e,i){cc.ActionManager=cc._Class.extend({_elementPool:[],_searchElementByTarget:function(t,e){for(var i=0;i=n&&i.actionIndex--;break}}else cc.logID(1001)}},removeActionByTag:function(t,e){t===cc.Action.TAG_INVALID&&cc.logID(1002),cc.assertID(e,1003);var i=this._hashTargets[e.__instanceId];if(i)for(var n=i.actions.length,r=0;r=t&&e.actionIndex--,0===e.actions.length&&this._deleteHashElement(e)},_deleteHashElement:function(t){var e=!1;if(t&&!t.lock&&this._hashTargets[t.target.__instanceId]){delete this._hashTargets[t.target.__instanceId];for(var i=this._arrayTargets,n=0,r=i.length;n0?e:null})(n);for(var m=0,p=c.length;m1e-6){S=!1;break}return d._findFrameIndex=S?o:u,d}function d(t,e){var i=e.props,r=e.comps;if(i)for(var s in i){var o=_(t,s,i[s]);n.push(o)}if(r)for(var a in r){var c=t.getComponent(a);if(c){var h=r[a];for(var s in h){o=_(c,s,h[s]);n.push(o)}}}}n.length=0,e.duration=i.duration,e.speed=i.speed,e.wrapMode=i.wrapMode,e.frameRate=i.sample,(e.wrapMode&l.Loop)===l.Loop?e.repeatCount=1/0:e.repeatCount=1;var f=i.curveData,m=f.paths;for(var p in d(t,f),m){var g=cc.find(p,t);if(g)d(g,m[p])}var y=i.events;if(y)for(var v,x=0,C=y.length;x=0?T=v.events[S]:(T=new h,v.ratios.push(b),v.events.push(T)),T.add(A.func,A.params)}}d.playState=function(t,e){t.clip&&(t.curveLoaded||f(this.target,t),t.animator=this,t.play(),"number"==typeof e&&t.setTime(e),this.play())},d.stopStatesExcept=function(t){var e=this._anims,i=e.array;for(e.i=0;e.i=0?(this._anims.fastRemoveAt(e),0===this._anims.array.length&&this.stop()):cc.errorID(3908),t.animator=null},d.sample=function(){var t=this._anims,e=t.array;for(t.i=0;t.i=s)o=n[s-1];else{var h=n[c-1],l="number"==typeof h,u=h&&h.lerp;if(l||u){var _=r[c-1],d=r[c],f=this.types[c-1],m=(e-_)/(d-_);f&&(m=a(m,f));var p=n[c];l?o=h+(p-h)*m:u&&(o=h.lerp(p,m))}else o=h}else o=n[c];var g=this.subProps;if(g){for(var 
y=this.target[this.prop],v=y,x=0;x0?((l&s.PingPong)===s.PingPong?c*=-1:f=n,d++):1===c&&f===n-1&&hu)break}f+=c,cc.director.getAnimationManager().pushDelayEvent(this,"_fireEvent",[f])}while(f!==h&&f>-1&&f=this.events.length||this._ignoreIndex===t)){var e=this.events[t].events;if(this.target.isValid)for(var i=this.target._components,n=0;nr)return i;var s=(e=(e-n)/(r-n))/(1/i),o=0|s;return s-o<1e-6?o:~(o+1)}}}),{"../core/utils/binary-search":224,"./bezier":16,"./types":21}],14:[(function(t,e,n){var r=cc.js,s=cc.Class({ctor:function(){this.__instanceId=cc.ClassManager.getNewInstanceId(),this._anims=new r.array.MutableForwardIterator([]),this._delayEvents=[]},update:function(t){var e=this._anims,n=e.array;for(e.i=0;e.i=0?this._anims.fastRemoveAt(e):cc.errorID(3907)},pushDelayEvent:function(t,e,i){this._delayEvents.push({target:t,func:e,args:i})}});cc.AnimationManager=e.exports=s}),{}],15:[(function(t,e,i){var n=cc.js,r=t("./playable"),s=t("./types"),o=s.WrappedInfo,a=s.WrapMode,c=s.WrapModeMask;function h(t,e){r.call(this),cc.EventTarget.call(this),this._currentFramePlayed=!1,this._delay=0,this._delayTime=0,this._wrappedInfo=new o,this._lastWrappedInfo=null,this._process=u,this._clip=t,this._name=e||t&&t.name,this.animator=null,this.curves=[],this.delay=0,this.repeatCount=1,this.duration=1,this.speed=1,this.wrapMode=a.Normal,this.time=0,this._emit=this.emit,this.emit=function(){for(var t=new Array(arguments.length),e=0,i=t.length;e1&&(0|e.iterations)>(0|t.iterations)&&this.emit("lastframe",this),t.set(e));e.stopped&&(this.stop(),this.emit("finished",this))}function _(){var t=this.time,e=this.duration;t>e?0===(t%=e)&&(t=e):t<0&&0!==(t%=e)&&(t+=e);for(var i=t/e,n=this.curves,r=0,s=n.length;r0&&this._lastIterations>i||this.time<0&&this._lastIterations0&&(this._delayTime-=t,this._delayTime>0)||(this._currentFramePlayed?this.time+=t*this.speed:this._currentFramePlayed=!0,this._process())},l._needRevers=function(t){var e=this.wrapMode,i=!1;(e&c.PingPong)===c.PingPong&&(t-(0|t)==0&&t>0&&(t-=1),1&t&&(i=!i));return(e&c.Reverse)===c.Reverse&&(i=!i),i},l.getWrappedInfo=function(t,e){e=e||new o;var i=!1,n=this.duration,r=this.repeatCount,s=t>0?t/n:-t/n;if(s>=r){s=r,i=!0;var a=r-(0|r);0===a&&(a=1),t=a*n*(t>0?1:-1)}if(t>n){var h=t%n;t=0===h?n:h}else t<0&&0!==(t%=n)&&(t+=n);var l=!1,u=this._wrapMode&c.ShouldWrap;u&&(l=this._needRevers(s));var _=l?-1:1;return this.speed<0&&(_*=-1),u&&l&&(t=n-t),e.ratio=t/n,e.time=t,e.direction=_,e.stopped=i,e.iterations=s,e},l.sample=function(){for(var t=this.getWrappedInfo(this.time,this._wrappedInfo),e=this.curves,i=0,n=e.length;i0}),(function(){this.curves.length=0})),n.getset(l,"wrapMode",(function(){return this._wrapMode}),(function(t){this._wrapMode=t,this.time=0,t&c.Loop?this.repeatCount=1/0:this.repeatCount=1})),n.getset(l,"repeatCount",(function(){return this._repeatCount}),(function(t){this._repeatCount=t;var e=this._wrapMode&c.ShouldWrap,i=(this.wrapMode&c.Reverse)===c.Reverse;this._process=t!==1/0||e||i?u:_})),n.getset(l,"delay",(function(){return this._delay}),(function(t){this._delayTime=this._delay=t})),cc.AnimationState=e.exports=h}),{"./playable":20,"./types":21}],16:[(function(t,e,i){function n(t,e,i,n,r){var s=1-r;return t*s*s*s+3*e*s*s*r+3*i*s*r*r+n*r*r*r}var r=Math.cos,s=Math.acos,o=Math.max,a=2*Math.PI,c=Math.sqrt;function h(t){return t<0?-Math.pow(-t,1/3):Math.pow(t,1/3)}function l(t,e){var i=(function(t,e){var 
i,n,l,u,_=e-0,d=e-t[0],f=3*_,m=3*d,p=3*(e-t[2]),g=1/(-_+m-p+(e-1)),y=(f-6*d+p)*g,v=y*(1/3),x=(-f+m)*g,C=1/3*(3*x-y*y),T=C*(1/3),A=(2*y*y*y-9*y*x+_*g*27)/27,b=A/2,S=b*b+T*T*T;if(S<0){var E=1/3*-C,w=c(E*E*E),I=-A/(2*w),R=s(I<-1?-1:I>1?1:I),P=2*h(w);return n=P*r(R*(1/3))-v,l=P*r((R+a)*(1/3))-v,u=P*r((R+2*a)*(1/3))-v,0<=n&&n<=1?0<=l&&l<=1?0<=u&&u<=1?o(n,l,u):o(n,l):0<=u&&u<=1?o(n,u):n:0<=l&&l<=1?0<=u&&u<=1?o(l,u):l:u}if(0===S)return l=-(i=b<0?h(-b):-h(b))-v,0<=(n=2*i-v)&&n<=1?0<=l&&l<=1?o(n,l):n:l;var O=c(S);return n=(i=h(-b+O))-h(b+O)-v})(t,e),n=1-i;return 0*n*n*n+3*t[1]*i*n*n+3*t[3]*i*i*n+1*i*i*i}e.exports={bezier:n,bezierByTime:l}}),{}],17:[(function(t,e,i){var n={constant:function(){return 0},linear:function(t){return t},quadIn:function(t){return t*t},quadOut:function(t){return t*(2-t)},quadInOut:function(t){return(t*=2)<1?.5*t*t:-.5*(--t*(t-2)-1)},cubicIn:function(t){return t*t*t},cubicOut:function(t){return--t*t*t+1},cubicInOut:function(t){return(t*=2)<1?.5*t*t*t:.5*((t-=2)*t*t+2)},quartIn:function(t){return t*t*t*t},quartOut:function(t){return 1- --t*t*t*t},quartInOut:function(t){return(t*=2)<1?.5*t*t*t*t:-.5*((t-=2)*t*t*t-2)},quintIn:function(t){return t*t*t*t*t},quintOut:function(t){return--t*t*t*t*t+1},quintInOut:function(t){return(t*=2)<1?.5*t*t*t*t*t:.5*((t-=2)*t*t*t*t+2)},sineIn:function(t){return 1-Math.cos(t*Math.PI/2)},sineOut:function(t){return Math.sin(t*Math.PI/2)},sineInOut:function(t){return.5*(1-Math.cos(Math.PI*t))},expoIn:function(t){return 0===t?0:Math.pow(1024,t-1)},expoOut:function(t){return 1===t?1:1-Math.pow(2,-10*t)},expoInOut:function(t){return 0===t?0:1===t?1:(t*=2)<1?.5*Math.pow(1024,t-1):.5*(2-Math.pow(2,-10*(t-1)))},circIn:function(t){return 1-Math.sqrt(1-t*t)},circOut:function(t){return Math.sqrt(1- --t*t)},circInOut:function(t){return(t*=2)<1?-.5*(Math.sqrt(1-t*t)-1):.5*(Math.sqrt(1-(t-=2)*t)+1)},elasticIn:function(t){var e,i=.1;return 0===t?0:1===t?1:(!i||i<1?(i=1,e=.1):e=.4*Math.asin(1/i)/(2*Math.PI),-i*Math.pow(2,10*(t-=1))*Math.sin((t-e)*(2*Math.PI)/.4))},elasticOut:function(t){var e,i=.1;return 0===t?0:1===t?1:(!i||i<1?(i=1,e=.1):e=.4*Math.asin(1/i)/(2*Math.PI),i*Math.pow(2,-10*t)*Math.sin((t-e)*(2*Math.PI)/.4)+1)},elasticInOut:function(t){var e,i=.1;return 0===t?0:1===t?1:(!i||i<1?(i=1,e=.1):e=.4*Math.asin(1/i)/(2*Math.PI),(t*=2)<1?i*Math.pow(2,10*(t-=1))*Math.sin((t-e)*(2*Math.PI)/.4)*-.5:i*Math.pow(2,-10*(t-=1))*Math.sin((t-e)*(2*Math.PI)/.4)*.5+1)},backIn:function(t){var e=1.70158;return t*t*((e+1)*t-e)},backOut:function(t){var e=1.70158;return--t*t*((e+1)*t+e)+1},backInOut:function(t){var e=2.5949095;return(t*=2)<1?t*t*((e+1)*t-e)*.5:.5*((t-=2)*t*((e+1)*t+e)+2)},bounceOut:function(t){return t<1/2.75?7.5625*t*t:t<2/2.75?7.5625*(t-=1.5/2.75)*t+.75:t<2.5/2.75?7.5625*(t-=2.25/2.75)*t+.9375:7.5625*(t-=2.625/2.75)*t+.984375},smooth:function(t){return t<=0?0:t>=1?1:t*t*(3-2*t)},fade:function(t){return t<=0?0:t>=1?1:t*t*t*(t*(6*t-15)+10)}};function r(t,e){return function(i){return i<.5?e(2*i)/2:t(2*i-1)/2+.5}}n.quadOutIn=r(n.quadIn,n.quadOut),n.cubicOutIn=r(n.cubicIn,n.cubicOut),n.quartOutIn=r(n.quartIn,n.quartOut),n.quintOutIn=r(n.quintIn,n.quintOut),n.sineOutIn=r(n.sineIn,n.sineOut),n.expoOutIn=r(n.expoIn,n.expoOut),n.circOutIn=r(n.circIn,n.circOut),n.backOutIn=r(n.backIn,n.backOut),n.backOutIn=r(n.backIn,n.backOut),n.bounceIn=function(t){return 1-n.bounceOut(1-t)},n.bounceInOut=function(t){return 
t<.5?.5*n.bounceIn(2*t):.5*n.bounceOut(2*t-1)+.5},n.bounceOutIn=r(n.bounceIn,n.bounceOut),cc.Easing=e.exports=n}),{}],18:[(function(t,e,i){t("./bezier"),t("./easing"),t("./types"),t("./motion-path-helper"),t("./animation-curves"),t("./animation-clip"),t("./animation-manager"),t("./animation-state"),t("./animation-animator")}),{"./animation-animator":11,"./animation-clip":12,"./animation-curves":13,"./animation-manager":14,"./animation-state":15,"./bezier":16,"./easing":17,"./motion-path-helper":19,"./types":21}],19:[(function(t,e,i){var n=t("./animation-curves").DynamicAnimCurve,r=t("./animation-curves").computeRatioByType,s=t("./bezier").bezier,o=t("../core/utils/binary-search").binarySearchEpsilon,a=cc.v2;function c(t){this.points=t||[],this.beziers=[],this.ratios=[],this.progresses=[],this.length=0,this.computeBeziers()}function h(){this.start=a(),this.end=a(),this.startCtrlPoint=a(),this.endCtrlPoint=a()}function l(t,e,i,s){function h(t){return t instanceof cc.Vec2?{in:t,pos:t,out:t}:Array.isArray(t)&&6===t.length?{in:a(t[2],t[3]),pos:a(t[0],t[1]),out:a(t[4],t[5])}:{in:cc.Vec2.ZERO,pos:cc.Vec2.ZERO,out:cc.Vec2.ZERO}}var l=e.values;if(0!==t.length&&0!==l.length)if(1!==(l=l.map((function(t){return a(t[0],t[1])}))).length){for(var u=e.types,_=e.ratios,d=e.values=[],f=e.types=[],m=e.ratios=[],p=0,g=n.Linear,y=0,v=t.length;y0){var P=[];P.push(h(b));for(var O=0,D=C.length;O1e-6;){var N,F,z,k;if((x=r(x=I,E))<0)k=(0-x)*(F=L.beziers[0]).getLength(),z=F.start.sub(F.endCtrlPoint).normalize(),N=F.start.add(z.mul(k));else if(x>1)k=(x-1)*(F=L.beziers[L.beziers.length-1]).getLength(),z=F.end.sub(F.startCtrlPoint).normalize(),N=F.end.add(z.mul(k));else{var V=o(M,x);V<0&&(V=~V),x-=V>0?M[V-1]:0,x/=L.ratios[V],N=L.beziers[V].getPointAt(x)}w.push(N),I+=R}}else for(;1-I>1e-6;)x=r(x=I,E),w.push(b.lerp(S,x)),I+=R;g="constant"===E?E:n.Linear;for(O=0,D=w.length;O1e-6?(I-1)*A:0}_[_.length-1]!==m[m.length-1]&&U(l[l.length-1],g,_[_.length-1])}else e.values=l;function U(t,e,i){d.push(t),f.push(e),m.push(i)}}c.prototype.computeBeziers=function(){var t;this.beziers.length=0,this.ratios.length=0,this.progresses.length=0,this.length=0;for(var e=1;e0)){c=r;break}c=r-1}if(n[r=c]===i)return r/(s-1);var h=n[r];return(r+(i-h)/(n[r+1]-h))/(s-1)},e.exports={sampleMotionPaths:l,Curve:c,Bezier:h}}),{"../core/utils/binary-search":224,"./animation-curves":13,"./bezier":16}],20:[(function(t,e,i){var n=cc.js;function r(){this._isPlaying=!1,this._isPaused=!1,this._stepOnce=!1}var s=r.prototype;n.get(s,"isPlaying",(function(){return this._isPlaying}),!0),n.get(s,"isPaused",(function(){return this._isPaused}),!0);var o=function(){};s.onPlay=o,s.onPause=o,s.onResume=o,s.onStop=o,s.onError=o,s.play=function(){this._isPlaying?this._isPaused?(this._isPaused=!1,this.onResume()):this.onError(cc._getError(3912)):(this._isPlaying=!0,this.onPlay())},s.stop=function(){this._isPlaying&&(this._isPlaying=!1,this.onStop(),this._isPaused=!1)},s.pause=function(){this._isPlaying&&!this._isPaused&&(this._isPaused=!0,this.onPause())},s.resume=function(){this._isPlaying&&this._isPaused&&(this._isPaused=!1,this.onResume())},s.step=function(){this.pause(),this._stepOnce=!0,this._isPlaying||this.play()},e.exports=r}),{}],21:[(function(t,e,i){cc.js;var n={Loop:2,ShouldWrap:4,PingPong:22,Reverse:36},r=cc.Enum({Default:0,Normal:1,Reverse:n.Reverse,Loop:n.Loop,LoopReverse:n.Loop|n.Reverse,PingPong:n.PingPong,PingPongReverse:n.PingPong|n.Reverse});function 
s(t){t?this.set(t):(this.ratio=0,this.time=0,this.direction=1,this.stopped=!0,this.iterations=0,this.frameIndex=void 0)}cc.WrapMode=r,s.prototype.set=function(t){this.ratio=t.ratio,this.time=t.time,this.direction=t.direction,this.stopped=t.stopped,this.iterations=t.iterations,this.frameIndex=t.frameIndex},e.exports={WrapModeMask:n,WrapMode:r,WrappedInfo:s}}),{}],22:[(function(t,e,i){var n=t("../core/event/event-target"),r=t("../core/platform/CCSys"),s=t("../core/assets/CCAudioClip").LoadMode,o=!1,a=[],c=function(t){n.call(this),this._src=t,this._element=null,this.id=0,this._volume=1,this._loop=!1,this._nextTime=0,this._state=c.State.INITIALZING,this._onended=function(){this.emit("ended")}.bind(this)};cc.js.extend(c,n),c.State={ERROR:-1,INITIALZING:0,PLAYING:1,PAUSED:2},(function(t){t._bindEnded=function(t){t=t||this._onended;var e=this._element;this._src&&e instanceof HTMLAudioElement?e.addEventListener("ended",t):e.onended=t},t._unbindEnded=function(){var t=this._element;this._src&&t instanceof HTMLAudioElement?t.removeEventListener("ended",this._onended):t&&(t.onended=null)},t._onLoaded=function(){var t=this._src._nativeAsset;t instanceof HTMLAudioElement?(this._element||(this._element=document.createElement("audio")),this._element.src=t.src):this._element=new h(t,this),this.setVolume(this._volume),this.setLoop(this._loop),0!==this._nextTime&&this.setCurrentTime(this._nextTime),this._state===c.State.PLAYING?this.play():this._state=c.State.INITIALZING},t.play=function(){this._state=c.State.PLAYING,this._element&&(this._bindEnded(),this._element.play(),this._src&&this._src.loadMode===s.DOM_AUDIO&&this._element.paused&&a.push({instance:this,offset:0,audio:this._element}),o||(o=!0,cc.game.canvas.addEventListener("touchstart",(function(){for(var t;t=a.pop();)t.audio.play(t.offset)}))))},t.destroy=function(){this._element=null},t.pause=function(){this._element&&(this._unbindEnded(),this._element.pause(),this._state=c.State.PAUSED)},t.resume=function(){this._element&&this._state!==c.State.PLAYING&&(this._bindEnded(),this._element.play(),this._state=c.State.PLAYING)},t.stop=function(){if(this._element){try{this._element.currentTime=0}catch(t){}this._element.pause();for(var t=0;tthis._buffer.duration)})),t.__defineGetter__("loop",(function(){return this._loop})),t.__defineSetter__("loop",(function(t){return this._currentSource&&(this._currentSource.loop=t),this._loop=t})),t.__defineGetter__("volume",(function(){return this._volume})),t.__defineSetter__("volume",(function(t){return this._volume=t,this._gainObj.gain.setTargetAtTime?this._gainObj.gain.setTargetAtTime(this._volume,this._context.currentTime,.01):this._volume.gain.value=t,r.os===r.OS_IOS&&!this.paused&&this._currentSource&&(this._currentSource.onended=null,this.pause(),this.play()),t})),t.__defineGetter__("currentTime",(function(){return this.paused?this.playedLength:(this.playedLength=this._context.currentTime-this._startTime,this.playedLength%=this._buffer.duration,this.playedLength)})),t.__defineSetter__("currentTime",(function(t){return this.paused?this.playedLength=t:(this.pause(),this.playedLength=t,this.play()),t})),t.__defineGetter__("duration",(function(){return this._buffer.duration}))})(h.prototype),e.exports=cc.Audio=c}),{"../core/assets/CCAudioClip":44,"../core/event/event-target":114,"../core/platform/CCSys":187}],23:[(function(t,e,i){var n=t("./CCAudio"),r=t("../core/assets/CCAudioClip"),s=0,o={},a={},c=function(t){var e=s++,i=a[t];if(i||(i=a[t]=[]),l._maxAudioInstance<=i.length){var 
r=i.shift(),c=o[r];c.stop(),c.destroy()}var h=new n,u=function(){delete o[this.id];var t=i.indexOf(this.id);cc.js.array.fastRemoveAt(i,t)};return h.on("ended",u),h.on("stop",u),o[e]=h,h.id=e,i.push(e),h},h=function(t){return o[t]},l={AudioState:n.State,_maxWebAudioSize:2097152,_maxAudioInstance:24,_id2audio:o,play:function(t,e,i){var n,s=t;if("string"==typeof t)cc.warnID(8401,"cc.audioEngine","cc.AudioClip","AudioClip","cc.AudioClip","audio"),n=c(s=t),r._loadByUrl(s,(function(t,e){e&&(n.src=e)}));else{if(!t)return;s=t.nativeUrl,(n=c(s)).src=t}return n.setLoop(e||!1),"number"!=typeof i&&(i=1),n.setVolume(i),n.play(),n.id},setLoop:function(t,e){var i=h(t);i&&i.setLoop&&i.setLoop(e)},isLoop:function(t){var e=h(t);return!(!e||!e.getLoop)&&e.getLoop()},setVolume:function(t,e){var i=h(t);i&&i.setVolume(e)},getVolume:function(t){var e=h(t);return e?e.getVolume():1},setCurrentTime:function(t,e){var i=h(t);return!!i&&(i.setCurrentTime(e),!0)},getCurrentTime:function(t){var e=h(t);return e?e.getCurrentTime():0},getDuration:function(t){var e=h(t);return e?e.getDuration():0},getState:function(t){var e=h(t);return e?e.getState():this.AudioState.ERROR},setFinishCallback:function(t,e){var i=h(t);i&&(i.off("ended",i._finishCallback),i._finishCallback=e,i.on("ended",i._finishCallback))},pause:function(t){var e=h(t);return!!e&&(e.pause(),!0)},_pauseIDCache:[],pauseAll:function(){for(var t in o){var e=o[t];e.getState()===n.State.PLAYING&&(this._pauseIDCache.push(t),e.pause())}},resume:function(t){var e=h(t);e&&e.resume()},resumeAll:function(){for(var t=0;t0;){var n=i.pop(),r=o[n];r&&(r.stop(),r.destroy(),delete o[n])}},uncacheAll:function(){for(var t in this.stopAll(),o){var e=o[t];e&&e.destroy()}o={},a={}},getProfile:function(t){},preload:function(t,e){cc.loader.load(t,e&&function(t){t||e()})},setMaxWebAudioSize:function(t){this._maxWebAudioSize=1024*t},_breakCache:null,_break:function(){for(var t in this._breakCache=[],o){var e=o[t];e.getState()===n.State.PLAYING&&(this._breakCache.push(t),e.pause())}},_restore:function(){if(this._breakCache){for(;this._breakCache.length>0;){var t=this._breakCache.pop(),e=h(t);e&&e.resume&&e.resume()}this._breakCache=null}},_music:{id:-1,loop:!1,volume:1},_effect:{volume:1,pauseCache:[]},playMusic:function(t,e){var i=this._music;return this.stop(i.id),i.id=this.play(t,e,i.volume),i.loop=e,i.id},stopMusic:function(){this.stop(this._music.id)},pauseMusic:function(){return this.pause(this._music.id),this._music.id},resumeMusic:function(){return this.resume(this._music.id),this._music.id},getMusicVolume:function(){return this._music.volume},setMusicVolume:function(t){var e=this._music;return e.volume=t,this.setVolume(e.id,e.volume),e.volume},isMusicPlaying:function(){return this.getState(this._music.id)===this.AudioState.PLAYING},playEffect:function(t,e){return this.play(t,e||!1,this._effect.volume)},setEffectsVolume:function(t){var e=this._music.id;for(var i in this._effect.volume=t,o)i!==e&&l.setVolume(i,t)},getEffectsVolume:function(){return this._effect.volume},pauseEffect:function(t){return this.pause(t)},pauseAllEffects:function(){var t=this._music.id,e=this._effect;for(var i in e.pauseCache.length=0,o)if(i!==t){var n=o[i];n.getState()===this.AudioState.PLAYING&&(e.pauseCache.push(i),n.pause())}},resumeEffect:function(t){this.resume(t)},resumeAllEffects:function(){for(var t=this._effect.pauseCache,e=0;e0)for(e.sortAllChildren(),i=0;i0){var n=i.length;e.sortAllChildren();for(var r=0;r=0;--n)o[i]+=s.charCodeAt(i*e+n)<<8*n;return 
o},cc.Codec.unzipAsArray=function(t,e){e=e||1;var i,n,r,s=this.unzip(t),o=[];for(i=0,r=s.length/e;i=0;--n)o[i]+=s.charCodeAt(i*e+n)<<8*n;return o}}),{"./base64":29,"./gzip":30}],29:[(function(t,e,i){var n=t("../core/utils/misc").BASE64_VALUES,r={name:"Jacob__Codec__Base64",decode:function(t){var e,i,r,s,o,a,c=[],h=0;for(t=t.replace(/[^A-Za-z0-9\+\/\=]/g,"");h>4,i=(15&s)<<4|(o=n[t.charCodeAt(h++)])>>2,r=(3&o)<<6|(a=n[t.charCodeAt(h++)]),c.push(String.fromCharCode(e)),64!==o&&c.push(String.fromCharCode(i)),64!==a&&c.push(String.fromCharCode(r));return c=c.join("")},decodeAsArray:function(t,e){var i,n,r,s=this.decode(t),o=[];for(i=0,r=s.length/e;i=0;--n)o[i]+=s.charCodeAt(i*e+n)<<8*n;return o}};e.exports=r}),{"../core/utils/misc":228}],30:[(function(t,e,i){var n=function(t){this.data=t,this.debug=!1,this.gpflags=void 0,this.files=0,this.unzipped=[],this.buf32k=new Array(32768),this.bIdx=0,this.modeZIP=!1,this.bytepos=0,this.bb=1,this.bits=0,this.nameBuf=[],this.fileout=void 0,this.literalTree=new Array(n.LITERALS),this.distanceTree=new Array(32),this.treepos=0,this.Places=null,this.len=0,this.fpos=new Array(17),this.fpos[0]=0,this.flens=void 0,this.fmax=void 0};n.gunzip=function(t){return t.constructor===Array||(t.constructor,String),new n(t).gunzip()[0][0]},n.HufNode=function(){this.b0=0,this.b1=0,this.jump=null,this.jumppos=-1},n.LITERALS=288,n.NAMEMAX=256,n.bitReverse=[0,128,64,192,32,160,96,224,16,144,80,208,48,176,112,240,8,136,72,200,40,168,104,232,24,152,88,216,56,184,120,248,4,132,68,196,36,164,100,228,20,148,84,212,52,180,116,244,12,140,76,204,44,172,108,236,28,156,92,220,60,188,124,252,2,130,66,194,34,162,98,226,18,146,82,210,50,178,114,242,10,138,74,202,42,170,106,234,26,154,90,218,58,186,122,250,6,134,70,198,38,166,102,230,22,150,86,214,54,182,118,246,14,142,78,206,46,174,110,238,30,158,94,222,62,190,126,254,1,129,65,193,33,161,97,225,17,145,81,209,49,177,113,241,9,137,73,201,41,169,105,233,25,153,89,217,57,185,121,249,5,133,69,197,37,165,101,229,21,149,85,213,53,181,117,245,13,141,77,205,45,173,109,237,29,157,93,221,61,189,125,253,3,131,67,195,35,163,99,227,19,147,83,211,51,179,115,243,11,139,75,203,43,171,107,235,27,155,91,219,59,187,123,251,7,135,71,199,39,167,103,231,23,151,87,215,55,183,119,247,15,143,79,207,47,175,111,239,31,159,95,223,63,191,127,255],n.cplens=[3,4,5,6,7,8,9,10,11,13,15,17,19,23,27,31,35,43,51,59,67,83,99,115,131,163,195,227,258,0,0],n.cplext=[0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,99,99],n.cpdist=[1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577],n.cpdext=[0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13],n.border=[16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15],n.prototype.gunzip=function(){return this.outputArr=[],this.nextFile(),this.unzipped},n.prototype.readByte=function(){return this.bits+=8,this.bytepos>=1,0===this.bb&&(this.bb=this.readByte(),t=1&this.bb,this.bb=this.bb>>1|128),t},n.prototype.readBits=function(t){for(var e=0,i=t;i--;)e=e<<1|this.readBit();return t&&(e=n.bitReverse[e]>>8-t),e},n.prototype.flushBuffer=function(){this.bIdx=0},n.prototype.addBuffer=function(t){this.buf32k[this.bIdx++]=t,this.outputArr.push(String.fromCharCode(t)),32768===this.bIdx&&(this.bIdx=0)},n.prototype.IsPat=function(){for(;;){if(this.fpos[this.len]>=this.fmax)return-1;if(this.flens[this.fpos[this.len]]===this.len)return this.fpos[this.len]++;this.fpos[this.len]++}},n.prototype.Rec=function(){var 
t,e=this.Places[this.treepos];if(17===this.len)return-1;if(this.treepos++,this.len++,(t=this.IsPat())>=0)e.b0=t;else if(e.b0=32768,this.Rec())return-1;if((t=this.IsPat())>=0)e.b1=t,e.jump=null;else if(e.b1=32768,e.jump=this.Places[this.treepos],e.jumppos=this.treepos,this.Rec())return-1;return this.len--,0},n.prototype.CreateTree=function(t,e,i,n){var r;for(this.Places=t,this.treepos=0,this.flens=i,this.fmax=e,r=0;r<17;r++)this.fpos[r]=0;return this.len=0,this.Rec()?-1:0},n.prototype.DecodeValue=function(t){for(var e,i,n=0,r=t[n];;)if(this.readBit()){if(!(32768&r.b1))return r.b1;for(r=r.jump,e=t.length,i=0;i>1)>23?(a=a<<1|this.readBit())>199?a=(a-=128)<<1|this.readBit():(a-=48)>143&&(a+=136):a+=256,a<256)this.addBuffer(a);else{if(256===a)break;for(a-=257,m=this.readBits(n.cplext[a])+n.cplens[a],a=n.bitReverse[this.readBits(5)]>>3,n.cpdext[a]>8?(p=this.readBits(8),p|=this.readBits(n.cpdext[a]-8)<<8):p=this.readBits(n.cpdext[a]),p+=n.cpdist[a],a=0;ac)return this.flushBuffer(),1;for(d=i?_[i-1]:0;a--;)_[i++]=d}else{if(i+(a=17===a?3+this.readBits(3):11+this.readBits(7))>c)return this.flushBuffer(),1;for(;a--;)_[i++]=0}for(m=this.literalTree.length,i=0;i=256){var m,p;if(0===(a-=256))break;for(a--,m=this.readBits(n.cplext[a])+n.cplens[a],a=this.DecodeValue(this.distanceTree),n.cpdext[a]>8?(p=this.readBits(8),p|=this.readBits(n.cpdext[a]-8)<<8):p=this.readBits(n.cpdext[a]),p+=n.cpdist[a];m--;){o=this.buf32k[this.bIdx-p&32767];this.addBuffer(o)}}else this.addBuffer(a)}}while(!t);return this.flushBuffer(),this.byteAlign(),0},n.prototype.unzipFile=function(t){var e;for(this.gunzip(),e=0;e>>0;t=n}for(var r,s=1,o=0,a=t.length,c=0;0>>0}function a(e,i){this.index="number"==typeof i?i:0,this.i=0,this.buffer=e instanceof(s?Uint8Array:Array)?e:new(s?Uint8Array:Array)(32768),2*this.buffer.length<=this.index&&t(Error("invalid index")),this.buffer.length<=this.index&&this.f()}a.prototype.f=function(){var t,e=this.buffer,i=e.length,n=new(s?Uint8Array:Array)(i<<1);if(s)n.set(e);else for(t=0;t>>8&255]<<16|d[t>>>16&255]<<8|d[t>>>24&255])>>32-e:d[t]>>8-e),8>e+o)a=a<>e-n-1&1,8==++o&&(o=0,r[s++]=d[a],a=0,s===r.length&&(r=this.f()));r[s]=a,this.buffer=r,this.i=o,this.index=s},a.prototype.finish=function(){var t,e=this.buffer,i=this.index;return 0c;++c){for(var l=_=c,u=7,_=_>>>1;_;_>>>=1)l<<=1,l|=1&_,--u;h[c]=(l<>>0}var d=h;function f(t){this.buffer=new(s?Uint16Array:Array)(2*t),this.length=0}function m(t){var e,i,n,r,o,a,c,h,l,u=t.length,_=0,d=Number.POSITIVE_INFINITY;for(h=0;h_&&(_=t[h]),t[h]>=1;for(l=a;ls[n]);)r=s[i],s[i]=s[n],s[n]=r,r=s[i+1],s[i+1]=s[n+1],s[n+1]=r,i=n;return this.length},f.prototype.pop=function(){var t,e,i,n,r,s=this.buffer;for(e=s[0],t=s[1],this.length-=2,s[0]=s[this.length],s[1]=s[this.length+1],r=0;!((n=2*r+2)>=this.length)&&(n+2s[n]&&(n+=2),s[n]>s[r]);)i=s[r],s[r]=s[n],s[n]=i,i=s[r+1],s[r+1]=s[n+1],s[n+1]=i,r=n;return{index:t,value:e,length:this.length}};var g,y=2,v={NONE:0,r:1,j:y,N:3},x=[];for(g=0;288>g;g++)switch(i){case 143>=g:x.push([g+48,8]);break;case 255>=g:x.push([g-144+400,9]);break;case 279>=g:x.push([g-256+0,7]);break;case 287>=g:x.push([g-280+192,8]);break;default:t("invalid literal: "+g)}function C(t,e){this.length=t,this.G=e}function T(){var e=A;switch(i){case 3===e:return[257,e-3,0];case 4===e:return[258,e-4,0];case 5===e:return[259,e-5,0];case 6===e:return[260,e-6,0];case 7===e:return[261,e-7,0];case 8===e:return[262,e-8,0];case 9===e:return[263,e-9,0];case 10===e:return[264,e-10,0];case 12>=e:return[265,e-11,1];case 14>=e:return[266,e-13,1];case 16>=e:return[267,e-15,1];case 
18>=e:return[268,e-17,1];case 22>=e:return[269,e-19,2];case 26>=e:return[270,e-23,2];case 30>=e:return[271,e-27,2];case 34>=e:return[272,e-31,2];case 42>=e:return[273,e-35,3];case 50>=e:return[274,e-43,3];case 58>=e:return[275,e-51,3];case 66>=e:return[276,e-59,3];case 82>=e:return[277,e-67,4];case 98>=e:return[278,e-83,4];case 114>=e:return[279,e-99,4];case 130>=e:return[280,e-115,4];case 162>=e:return[281,e-131,5];case 194>=e:return[282,e-163,5];case 226>=e:return[283,e-195,5];case 257>=e:return[284,e-227,5];case 258===e:return[285,e-258,0];default:t("invalid length: "+e)}}p.prototype.n=function(){var n,r,o,c,h=this.input;switch(this.h){case 0:for(o=0,c=h.length;o>>8&255,g[v++]=255&_,g[v++]=_>>>8&255,s)g.set(d,v),v+=d.length,g=g.subarray(0,v);else{for(m=0,p=d.length;mJ)for(;0J?J:138)>J-3&&Q=Q?(it[K++]=17,it[K++]=Q-3,nt[17]++):(it[K++]=18,it[K++]=Q-11,nt[18]++),J-=Q;else if(it[K++]=et[H],nt[et[H]]++,3>--J)for(;0J?J:6)>J-3&&QU;U++)Y[U]=z[j[U]];for(B=19;4=A;A++)b=T(),S[A]=b[2]<<24|b[1]<<16|b[0];var E=s?new Uint32Array(S):S;function w(n,r){function o(e,n){var r,s,o,a,c=e.G,h=[],l=0;switch(r=E[e.length],h[l++]=65535&r,h[l++]=r>>16&255,h[l++]=r>>24,i){case 1===c:s=[0,c-1,0];break;case 2===c:s=[1,c-2,0];break;case 3===c:s=[2,c-3,0];break;case 4===c:s=[3,c-4,0];break;case 6>=c:s=[4,c-5,1];break;case 8>=c:s=[5,c-7,1];break;case 12>=c:s=[6,c-9,2];break;case 16>=c:s=[7,c-13,2];break;case 24>=c:s=[8,c-17,3];break;case 32>=c:s=[9,c-25,3];break;case 48>=c:s=[10,c-33,4];break;case 64>=c:s=[11,c-49,4];break;case 96>=c:s=[12,c-65,5];break;case 128>=c:s=[13,c-97,5];break;case 192>=c:s=[14,c-129,6];break;case 256>=c:s=[15,c-193,6];break;case 384>=c:s=[16,c-257,7];break;case 512>=c:s=[17,c-385,7];break;case 768>=c:s=[18,c-513,8];break;case 1024>=c:s=[19,c-769,8];break;case 1536>=c:s=[20,c-1025,9];break;case 2048>=c:s=[21,c-1537,9];break;case 3072>=c:s=[22,c-2049,10];break;case 4096>=c:s=[23,c-3073,10];break;case 6144>=c:s=[24,c-4097,11];break;case 8192>=c:s=[25,c-6145,11];break;case 12288>=c:s=[26,c-8193,12];break;case 16384>=c:s=[27,c-12289,12];break;case 24576>=c:s=[28,c-16385,13];break;case 32768>=c:s=[29,c-24577,13];break;default:t("invalid distance")}for(r=s,h[l++]=r[0],h[l++]=r[1],h[l++]=r[2],o=0,a=h.length;o=h;)x[h++]=0;for(h=0;29>=h;)T[h++]=0}for(x[256]=1,a=0,c=r.length;a=c){for(f&&o(f,-1),h=0,l=c-a;hI&&a+Iw&&(S=b,w=I),258===I)break}d=new C(w,a-S),f?f.length2*v[d-1]+x[d]&&(v[d]=2*v[d-1]+x[d]),T[d]=Array(v[d]),A[d]=Array(v[d]);for(_=0;_r[_]?(T[d][m]=p,A[d][m]=y,g+=2):(T[d][m]=r[_],A[d][m]=_,++_);b[d]=0,1===x[d]&&i(d)}for(o=C,a=0,c=n.length;a1<l&&t("undercommitted"),i=0,n=e.length;i>>=1;return a}function P(t,e){this.input=t,this.a=new(s?Uint8Array:Array)(32768),this.h=O.j;var i,n={};for(i in!e&&(e={})||"number"!=typeof e.compressionType||(this.h=e.compressionType),e)n[i]=e[i];n.outputBuffer=this.a,this.z=new p(this.input,n)}var O=v;function D(e,i){switch(this.k=[],this.l=32768,this.e=this.g=this.c=this.q=0,this.input=s?new Uint8Array(e):e,this.s=!1,this.m=L,this.B=!1,!i&&(i={})||(i.index&&(this.c=i.index),i.bufferSize&&(this.l=i.bufferSize),i.bufferType&&(this.m=i.bufferType),i.resize&&(this.B=i.resize)),this.m){case B:this.b=32768,this.a=new(s?Uint8Array:Array)(32768+this.l+258);break;case L:this.b=0,this.a=new(s?Uint8Array:Array)(this.l),this.f=this.J,this.t=this.H,this.o=this.I;break;default:t(Error("invalid inflate mode"))}}P.prototype.n=function(){var e,i,n,r,a,c,h,l=0;switch(h=this.a,e=lt){case lt:i=Math.LOG2E*Math.log(32768)-8;break;default:t(Error("invalid compression 
method"))}switch(n=i<<4|e,h[l++]=n,e){case lt:switch(this.h){case O.NONE:a=0;break;case O.r:a=1;break;case O.j:a=2;break;default:t(Error("unsupported compression type"))}break;default:t(Error("invalid compression method"))}return r=a<<6|0,h[l++]=r|31-(256*n+r)%31,c=o(this.input),this.z.b=l,l=(h=this.z.n()).length,s&&((h=new Uint8Array(h.buffer)).length<=l+4&&(this.a=new Uint8Array(h.length+4),this.a.set(h),h=this.a),h=h.subarray(0,l+4)),h[l++]=c>>24&255,h[l++]=c>>16&255,h[l++]=c>>8&255,h[l++]=255&c,h},r("Zlib.Deflate",P),r("Zlib.Deflate.compress",(function(t,e){return new P(t,e).n()})),r("Zlib.Deflate.CompressionType",O),r("Zlib.Deflate.CompressionType.NONE",O.NONE),r("Zlib.Deflate.CompressionType.FIXED",O.r),r("Zlib.Deflate.CompressionType.DYNAMIC",O.j);var B=0,L=1,M={D:B,C:L};D.prototype.p=function(){for(;!this.s;){var n=tt(this,3);switch(1&n&&(this.s=i),n>>>=1){case 0:var r=this.input,o=this.c,a=this.a,c=this.b,h=e,l=e,u=e,_=a.length,d=e;switch(this.e=this.g=0,(h=r[o++])===e&&t(Error("invalid uncompressed block header: LEN (first byte)")),l=h,(h=r[o++])===e&&t(Error("invalid uncompressed block header: LEN (second byte)")),l|=h<<8,(h=r[o++])===e&&t(Error("invalid uncompressed block header: NLEN (first byte)")),u=h,(h=r[o++])===e&&t(Error("invalid uncompressed block header: NLEN (second byte)")),l===~(u|=h<<8)&&t(Error("invalid uncompressed block header: length verify")),o+l>r.length&&t(Error("input buffer is broken")),this.m){case B:for(;c+l>a.length;){if(l-=d=_-c,s)a.set(r.subarray(o,o+d),c),c+=d,o+=d;else for(;d--;)a[c++]=r[o++];this.b=c,a=this.f(),c=this.b}break;case L:for(;c+l>a.length;)a=this.f({v:2});break;default:t(Error("invalid inflate mode"))}if(s)a.set(r.subarray(o,o+l),c),c+=l,o+=l;else for(;l--;)a[c++]=r[o++];this.c=o,this.b=c,this.a=a;break;case 1:this.o(K,$);break;case 2:it(this);break;default:t(Error("unknown BTYPE: "+n))}}return this.t()};var N,F,z=[16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15],k=s?new Uint16Array(z):z,V=[3,4,5,6,7,8,9,10,11,13,15,17,19,23,27,31,35,43,51,59,67,83,99,115,131,163,195,227,258,258,258],G=s?new Uint16Array(V):V,U=[0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,0,0],W=s?new Uint8Array(U):U,X=[1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577],j=s?new Uint16Array(X):X,Y=[0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13],H=s?new Uint8Array(Y):Y,q=new(s?Uint8Array:Array)(288);for(N=0,F=q.length;N=N?8:255>=N?9:279>=N?7:8;var J,Z,K=m(q),Q=new(s?Uint8Array:Array)(30);for(J=0,Z=Q.length;J>>n,i.e=o-n,i.c=c,r}function et(i,n){for(var r,s,o,a=i.g,c=i.e,h=i.input,l=i.c,u=n[0],_=n[1];c<_;)(r=h[l++])===e&&t(Error("input buffer is broken")),a|=r<>>16,i.g=a>>o,i.e=c-o,i.c=l,65535&s}function it(t){function e(t,e,i){var n,r,s,o;for(o=0;or)n>=c&&(this.b=n,i=this.f(),n=this.b),i[n++]=r;else for(a=G[s=r-257],0=c&&(this.b=n,i=this.f(),n=this.b);a--;)i[n]=i[n++-o];for(;8<=this.e;)this.e-=8,this.c--;this.b=n},D.prototype.I=function(t,e){var i=this.a,n=this.b;this.u=t;for(var r,s,o,a,c=i.length;256!==(r=et(this,t));)if(256>r)n>=c&&(c=(i=this.f()).length),i[n++]=r;else for(a=G[s=r-257],0c&&(c=(i=this.f()).length);a--;)i[n]=i[n++-o];for(;8<=this.e;)this.e-=8,this.c--;this.b=n},D.prototype.f=function(){var t,e,i=new(s?Uint8Array:Array)(this.b-32768),n=this.b-32768,r=this.a;if(s)i.set(r.subarray(32768,i.length));else for(t=0,e=i.length;tt;++t)r[t]=r[n+t];return this.b=32768,r},D.prototype.J=function(t){var e,i,n,r=this.input.length/this.c+1|0,o=this.input,a=this.a;return 
t&&("number"==typeof t.v&&(r=t.v),"number"==typeof t.F&&(r+=t.F)),2>r?i=(n=(o.length-this.c)/this.u[2]/2*258|0)e&&(this.a.length=e),t=this.a),this.buffer=t},nt.prototype.p=function(){var e,i=this.input;return e=this.A.p(),this.c=this.A.c,this.M&&((i[this.c++]<<24|i[this.c++]<<16|i[this.c++]<<8|i[this.c++])>>>0!==o(e)&&t(Error("invalid adler-32 checksum"))),e},r("Zlib.Inflate",nt),r("Zlib.Inflate.BufferType",M),M.ADAPTIVE=M.C,M.BLOCK=M.D,r("Zlib.Inflate.prototype.decompress",nt.prototype.p);s&&new Uint16Array([16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15]);s&&new Uint16Array([3,4,5,6,7,8,9,10,11,13,15,17,19,23,27,31,35,43,51,59,67,83,99,115,131,163,195,227,258,258,258]);s&&new Uint8Array([0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,0,0]);s&&new Uint16Array([1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193,257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577]);s&&new Uint8Array([0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13]);var rt,st,ot=new(s?Uint8Array:Array)(288);for(rt=0,st=ot.length;rt=rt?8:255>=rt?9:279>=rt?7:8;m(ot);var at,ct,ht=new(s?Uint8Array:Array)(30);for(at=0,ct=ht.length;at-1},getValue:function(t,e){this._inited||this._init();var i=this._valueDict;return i[t]?i[t]:e},setValue:function(t,e){this._valueDict[t]=e},gatherGPUInfo:function(){if(cc._renderType!==cc.game.RENDER_TYPE_CANVAS){this._inited||this._init();var t=cc._renderContext,e=this._valueDict;e["gl.vendor"]=t.getParameter(t.VENDOR),e["gl.renderer"]=t.getParameter(t.RENDERER),e["gl.version"]=t.getParameter(t.VERSION),this._GlExtensions="";for(var i=t.getSupportedExtensions(),n=0;n0&&this._deltaTime>1&&(this._deltaTime=1/60)),this._lastUpdate=t},convertToGL:function(t){var e=cc.game.container,i=cc.view,n=e.getBoundingClientRect(),r=n.left+window.pageXOffset-e.clientLeft,s=n.top+window.pageYOffset-e.clientTop,o=i._devicePixelRatio*(t.x-r),a=i._devicePixelRatio*(s+n.height-t.y);return i._isRotated?cc.v2(i._viewportRect.width-a,o):cc.v2(o,a)},convertToUI:function(t){var e=cc.game.container,i=cc.view,n=e.getBoundingClientRect(),r=n.left+window.pageXOffset-e.clientLeft,s=n.top+window.pageYOffset-e.clientTop,o=cc.v2(0,0);return i._isRotated?(o.x=r+t.y/i._devicePixelRatio,o.y=s+n.height-(i._viewPortRect.width-t.x)/i._devicePixelRatio):(o.x=r+t.x*i._devicePixelRatio,o.y=s+n.height-t.y*i._devicePixelRatio),o},_visitScene:function(){if(this._runningScene){var t=cc.renderer;t.childrenOrderDirty?(t.clearRenderCommands(),cc.renderer.assignedZ=0,this._runningScene._renderCmd._curLevel=0,this._runningScene.visit(),t.resetFlag()):t.transformDirty()&&t.transform()}},end:function(){this._purgeDirectorInNextLoop=!0},getContentScaleFactor:function(){return this._contentScaleFactor},getWinSize:function(){return cc.size(this._winSizeInPoints)},getWinSizeInPixels:function(){return cc.size(this._winSizeInPoints.width*this._contentScaleFactor,this._winSizeInPoints.height*this._contentScaleFactor)},getVisibleSize:null,getVisibleOrigin:null,getZEye:null,pause:function(){this._paused||(this._oldAnimationInterval=this._animationInterval,this.setAnimationInterval(.25),this._paused=!0)},popScene:function(){cc.assertID(this._runningScene,1204),this._scenesStack.pop();var 
t=this._scenesStack.length;0===t?this.end():(this._sendCleanupToScene=!0,this._nextScene=this._scenesStack[t-1])},purgeCachedData:function(){cc.textureCache._clear(),cc.loader.releaseAll()},purgeDirector:function(){this.getScheduler().unscheduleAll(),this._compScheduler.unscheduleAll(),this._nodeActivator.reset(),h&&h.setEnabled(!1),this._runningScene&&(this._runningScene.performRecursive(_ccsg.Node.performType.onExitTransitionDidStart),this._runningScene.performRecursive(_ccsg.Node.performType.onExit),this._runningScene.performRecursive(_ccsg.Node.performType.cleanup),cc.renderer.clearRenderCommands()),this._runningScene=null,this._nextScene=null,this._scenesStack.length=0,this.stopAnimation(),this.purgeCachedData()},reset:function(){this.purgeDirector(),h&&h.setEnabled(!0),this._actionManager&&this._scheduler.scheduleUpdate(this._actionManager,cc.Scheduler.PRIORITY_SYSTEM,!1),this._animationManager&&this._scheduler.scheduleUpdate(this._animationManager,cc.Scheduler.PRIORITY_SYSTEM,!1),this._collisionManager&&this._scheduler.scheduleUpdate(this._collisionManager,cc.Scheduler.PRIORITY_SYSTEM,!1),this._physicsManager&&this._scheduler.scheduleUpdate(this._physicsManager,cc.Scheduler.PRIORITY_SYSTEM,!1),this.startAnimation()},pushScene:function(t){cc.assertID(t,1205),this._sendCleanupToScene=!1,this._scenesStack.push(t),this._nextScene=t},runSceneImmediate:function(t,e,i){t instanceof cc.Scene&&t._load();for(var n=cc.game,r=Object.keys(n._persistRootNodes).map((function(t){return n._persistRootNodes[t]})),o=0;oi)){for(;i>t;){var n=e.pop();n.running&&(n.performRecursive(_ccsg.Node.performType.onExitTransitionDidStart),n.performRecursive(_ccsg.Node.performType.onExit)),n.performRecursive(_ccsg.Node.performType.cleanup),i--}this._nextScene=e[e.length-1],this._sendCleanupToScene=!0}}else this.end()},getScheduler:function(){return this._scheduler},setScheduler:function(t){this._scheduler!==t&&(this._scheduler=t)},getActionManager:function(){return this._actionManager},setActionManager:function(t){this._actionManager!==t&&(this._actionManager&&this._scheduler.unscheduleUpdate(this._actionManager),this._actionManager=t,this._scheduler.scheduleUpdate(this._actionManager,cc.Scheduler.PRIORITY_SYSTEM,!1))},getAnimationManager:function(){return this._animationManager},getCollisionManager:function(){return this._collisionManager},getPhysicsManager:function(){return this._physicsManager},getDeltaTime:function(){return 
this._deltaTime}}),cc.js.addon(cc.Director.prototype,n.prototype),cc.Director.EVENT_PROJECTION_CHANGED="director_projection_changed",cc.Director.EVENT_BEFORE_SCENE_LOADING="director_before_scene_loading",cc.Director.EVENT_BEFORE_SCENE_LAUNCH="director_before_scene_launch",cc.Director.EVENT_AFTER_SCENE_LAUNCH="director_after_scene_launch",cc.Director.EVENT_BEFORE_UPDATE="director_before_update",cc.Director.EVENT_AFTER_UPDATE="director_after_update",cc.Director.EVENT_BEFORE_VISIT="director_before_visit",cc.Director.EVENT_AFTER_VISIT="director_after_visit",cc.Director.EVENT_AFTER_DRAW="director_after_draw",cc.DisplayLinkDirector=cc.Director.extend({invalid:!1,startAnimation:function(){this._nextDeltaTimeZero=!0,this.invalid=!1},mainLoop:function(){this._purgeDirectorInNextLoop?(this._purgeDirectorInNextLoop=!1,this.purgeDirector()):this.invalid||(this.calculateDeltaTime(),this._paused||(this.emit(cc.Director.EVENT_BEFORE_UPDATE),this._compScheduler.startPhase(),this._compScheduler.updatePhase(this._deltaTime),this._scheduler.update(this._deltaTime),this._compScheduler.lateUpdatePhase(this._deltaTime),this.emit(cc.Director.EVENT_AFTER_UPDATE),cc.Object._deferredDestroy()),this._nextScene&&this.setNextScene(),this.emit(cc.Director.EVENT_BEFORE_VISIT),this._visitScene(),this.emit(cc.Director.EVENT_AFTER_VISIT),cc.g_NumberOfDraws=0,cc.renderer.clear(),cc.renderer.rendering(cc._renderContext),this._totalFrames++,this.emit(cc.Director.EVENT_AFTER_DRAW),h.frameUpdateListeners())},stopAnimation:function(){this.invalid=!0},setAnimationInterval:function(t){this._animationInterval=t,this.invalid||(this.stopAnimation(),this.startAnimation())},__fastOn:function(t,e,i){var n=this._bubblingListeners;n||(n=this._bubblingListeners=new c),n.add(t,e,i),this._addEventFlag(t,n,!1)},__fastOff:function(t,e,i){var n=this._bubblingListeners;n&&(n.remove(t,e,i),this._purgeEventFlag(t,n,!1))}}),cc.Director.sharedDirector=null,cc.Director.firstUseDirector=!0,cc.Director._getInstance=function(){return cc.Director.firstUseDirector&&(cc.Director.firstUseDirector=!1,cc.Director.sharedDirector=new cc.DisplayLinkDirector,cc.Director.sharedDirector.init()),cc.Director.sharedDirector},cc.defaultFPS=60,cc.Director.PROJECTION_2D=0,cc.Director.PROJECTION_3D=1,cc.Director.PROJECTION_CUSTOM=3,cc.Director.PROJECTION_DEFAULT=cc.Director.PROJECTION_2D}),{"./component-scheduler":72,"./event-manager":112,"./event/event-listeners":113,"./event/event-target":114,"./load-pipeline/auto-release-utils":136,"./node-activator":149,"./platform/_CCClass":190}],34:[(function(t,e,i){t("./CCDirector"),t("./CCGame");var n=t("./event-manager");cc.game.once(cc.game.EVENT_RENDERER_INITED,(function(){if(cc._renderType===cc.game.RENDER_TYPE_CANVAS){var t=cc.Director.prototype;t.getProjection=function(t){return this._projection},t.setProjection=function(t){this._projection=t,this.emit(cc.Director.EVENT_PROJECTION_CHANGED,this)},t.setDepthTest=function(){},t.setClearColor=function(t){cc.renderer._clearColor=t,cc.renderer._clearFillStyle="rgb("+t.r+","+t.g+","+t.b+")"},t.setOpenGLView=function(t){this._winSizeInPoints.width=cc._canvas.width,this._winSizeInPoints.height=cc._canvas.height,this._openGLView=t||cc.view,n&&n.setEnabled(!0)},t.getVisibleSize=function(){return this.getWinSize()},t.getVisibleOrigin=function(){return cc.p(0,0)}}}))}),{"./CCDirector":33,"./CCGame":39,"./event-manager":112}],35:[(function(t,e,i){t("./CCDirector"),t("./CCGame"),t("../kazmath");var 
n=t("./event-manager"),r=cc.math;cc.game.once(cc.game.EVENT_RENDERER_INITED,(function(){if(cc._renderType===cc.game.RENDER_TYPE_WEBGL){cc.DirectorDelegate=cc._Class.extend({updateProjection:function(){}});var t=cc.Director.prototype,e=function(t){if(t&&t._renderCmd){t._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty);var i,n=t._children;for(i=0;i0?cc.loader.load(s,(function(i){if(i)throw new Error(JSON.stringify(i));e._prepared=!0,t&&t(),e.emit(e.EVENT_GAME_INITED)})):(t&&t(),e.emit(e.EVENT_GAME_INITED))}else cc.initEngine(this.config,(function(){e.prepare(t)}))}else this._loadConfig((function(){e.prepare(t)}))},run:function(t,e){"function"==typeof t?o.onStart=t:(t&&(o.config=t),"function"==typeof e&&(o.onStart=e)),this.prepare(o.onStart&&o.onStart.bind(o))},addPersistRootNode:function(t){if(cc.Node.isNode(t)&&t.uuid){var e=t.uuid;if(!this._persistRootNodes[e]){var i=cc.director._scene;if(cc.isValid(i)){if(t.parent){if(!(t.parent instanceof cc.Scene))return void cc.warnID(3801);if(t.parent!==i)return void cc.warnID(3802)}else t.parent=i;this._persistRootNodes[e]=t,t._persistNode=!0}}}else cc.warnID(3800)},removePersistRootNode:function(t){if(t!==this._ignoreRemovePersistNode){var e=t.uuid||"";t===this._persistRootNodes[e]&&(delete this._persistRootNodes[e],t._persistNode=!1)}},isPersistRootNode:function(t){return t._persistNode},_setAnimFrame:function(){this._lastTime=new Date;var t=o.config[o.CONFIG_KEY.frameRate];this._frameTime=1e3/t,60!==t&&30!==t?(window.requestAnimFrame=this._stTime,window.cancelAnimFrame=this._ctTime):(window.requestAnimFrame=window.requestAnimationFrame||window.webkitRequestAnimationFrame||window.mozRequestAnimationFrame||window.oRequestAnimationFrame||window.msRequestAnimationFrame||this._stTime,window.cancelAnimFrame=window.cancelAnimationFrame||window.cancelRequestAnimationFrame||window.msCancelRequestAnimationFrame||window.mozCancelRequestAnimationFrame||window.oCancelRequestAnimationFrame||window.webkitCancelRequestAnimationFrame||window.msCancelAnimationFrame||window.mozCancelAnimationFrame||window.webkitCancelAnimationFrame||window.oCancelAnimationFrame||this._ctTime)},_stTime:function(t){var e=(new Date).getTime(),i=Math.max(0,o._frameTime-(e-o._lastTime)),n=window.setTimeout((function(){t()}),i);return o._lastTime=e+i,n},_ctTime:function(t){window.clearTimeout(t)},_runMainLoop:function(){var t,e=this,i=e.config,n=e.CONFIG_KEY,r=cc.director,s=!0,o=i[n.frameRate];r.setDisplayStats(i[n.showFPS]),t=function(){if(!e._paused){if(e._intervalId=window.requestAnimFrame(t),30===o&&(s=!s))return;r.mainLoop()}},e._intervalId=window.requestAnimFrame(t),e._paused=!1},_loadConfig:function(t){if(this.config)return this._initConfig(this.config),void(t&&t());if(document.ccConfig)return this._initConfig(document.ccConfig),void(t&&t());var e=this;cc.loader.load("project.json",(function(i,n){i&&cc.logID(3818),e._initConfig(n||{}),t&&t()}))},_initConfig:function(t){var e=this.CONFIG_KEY;"number"!=typeof t[e.debugMode]&&(t[e.debugMode]=0),t[e.exposeClassName]=!!t[e.exposeClassName],"number"!=typeof t[e.frameRate]&&(t[e.frameRate]=60),"number"!=typeof t[e.renderMode]&&(t[e.renderMode]=0),"boolean"!=typeof t[e.registerSystemEvent]&&(t[e.registerSystemEvent]=!0),t[e.showFPS]=!(e.showFPS in 
t)||!!t[e.showFPS],this._sceneInfos=t[e.scenes]||[],this.collisionMatrix=t.collisionMatrix||[],this.groupList=t.groupList||[],cc._initDebugSetting(t[e.debugMode]),this.config=t,this._configLoaded=!0},_initRenderer:function(t,e){if(!this._rendererInitialized){if(!cc._supportRender)throw new Error(cc._getError(3820,this.config[this.CONFIG_KEY.renderMode]));var i,n,r=this.config[o.CONFIG_KEY.id],s=window,a=cc.sys.platform===cc.sys.WECHAT_GAME,c=cc.sys.platform===cc.sys.QQ_PLAY;if(a)this.container=cc.container=n=document.createElement("DIV"),this.frame=n.parentNode===document.body?document.documentElement:n.parentNode,i=cc.sys.browserType===cc.sys.BROWSER_TYPE_WECHAT_GAME_SUB?wx.getSharedCanvas():canvas,this.canvas=cc._canvas=i;else if(c)this.container=cc.container=document.createElement("DIV"),this.frame=document.documentElement,this.canvas=cc._canvas=i=canvas;else{var h=r instanceof HTMLElement?r:document.querySelector(r)||document.querySelector("#"+r);"CANVAS"===h.tagName?(t=t||h.width,e=e||h.height,this.canvas=cc._canvas=i=h,this.container=cc.container=n=document.createElement("DIV"),i.parentNode&&i.parentNode.insertBefore(n,i)):("DIV"!==h.tagName&&cc.warnID(3819),t=t||h.clientWidth,e=e||h.clientHeight,this.canvas=cc._canvas=i=document.createElement("CANVAS"),this.container=cc.container=n=document.createElement("DIV"),h.appendChild(n)),n.setAttribute("id","Cocos2dGameContainer"),n.appendChild(i),this.frame=n.parentNode===document.body?document.documentElement:n.parentNode,(function(t,e){(" "+t.className+" ").indexOf(" "+e+" ")>-1||(t.className&&(t.className+=" "),t.className+=e)})(i,"gameCanvas"),i.setAttribute("width",t||480),i.setAttribute("height",e||320),i.setAttribute("tabindex",99)}if(cc._renderType===o.RENDER_TYPE_WEBGL){var l={stencil:!0,antialias:cc.macro.ENABLE_WEBGL_ANTIALIAS,alpha:cc.macro.ENABLE_TRANSPARENT_CANVAS};a&&(l.preserveDrawingBuffer=!0),this._renderContext=cc._renderContext=cc.webglContext=cc.create3DContext(i,l)}this._renderContext?(cc.renderer=cc.rendererWebGL,s.gl=this._renderContext,cc.renderer.init()):(cc._renderType=o.RENDER_TYPE_CANVAS,cc.renderer=cc.rendererCanvas,cc.renderer.init(),this._renderContext=cc._renderContext=new cc.CanvasContextWrapper(i.getContext("2d"))),cc._gameDiv=n,o.canvas.oncontextmenu=function(){if(!cc._isContextMenuEnable)return!1},this.emit(this.EVENT_RENDERER_INITED,!0),this._rendererInitialized=!0}},_initEvents:function(){var t,e=window;this.config[this.CONFIG_KEY.registerSystemEvent]&&s.registerSystemEvent(this.canvas),void 0!==document.hidden?t="hidden":void 0!==document.mozHidden?t="mozHidden":void 0!==document.msHidden?t="msHidden":void 0!==document.webkitHidden&&(t="webkitHidden");var i=!1;function n(){i||(i=!0,o.emit(o.EVENT_HIDE,o))}function r(){i&&(i=!1,o.emit(o.EVENT_SHOW,o))}if(t)for(var a=["visibilitychange","mozvisibilitychange","msvisibilitychange","webkitvisibilitychange","qbrowserVisibilityChange"],c=0;c-1&&(e.onfocus=r),"onpageshow"in window&&"onpagehide"in window&&(e.addEventListener("pagehide",n),e.addEventListener("pageshow",r),document.addEventListener("pagehide",n),document.addEventListener("pageshow",r)),this.on(o.EVENT_HIDE,(function(){o.pause()})),this.on(o.EVENT_SHOW,(function(){o.resume()}))}};r.call(o),cc.js.addon(o,r.prototype),cc.game=e.exports=o}),{"../audio/CCAudioEngine":23,"./event/event-target":114,"./platform/BKInputManager":176,"./platform/CCInputManager":182,"./platform/CCView":188}],40:[(function(t,e,i){"use strict";var 
n=t("./utils/prefab-helper"),r=t("./utils/scene-graph-helper"),s=t("./event-manager"),o=cc.Object.Flags,a=o.Destroying,c=t("./utils/misc"),h=(t("./event/event"),!!cc.ActionManager),l=function(){},u=cc.Enum({TOUCH_START:"touchstart",TOUCH_MOVE:"touchmove",TOUCH_END:"touchend",TOUCH_CANCEL:"touchcancel",MOUSE_DOWN:"mousedown",MOUSE_MOVE:"mousemove",MOUSE_ENTER:"mouseenter",MOUSE_LEAVE:"mouseleave",MOUSE_UP:"mouseup",MOUSE_WHEEL:"mousewheel"}),_=[u.TOUCH_START,u.TOUCH_MOVE,u.TOUCH_END,u.TOUCH_CANCEL],d=[u.MOUSE_DOWN,u.MOUSE_ENTER,u.MOUSE_MOVE,u.MOUSE_LEAVE,u.MOUSE_UP,u.MOUSE_WHEEL],f=null,m=function(t,e){var i=t.getLocation(),n=this.owner;return!!n._hitTest(i,this)&&(e.type=u.TOUCH_START,e.touch=t,e.bubbles=!0,n.dispatchEvent(e),!0)},p=function(t,e){var i=this.owner;e.type=u.TOUCH_MOVE,e.touch=t,e.bubbles=!0,i.dispatchEvent(e)},g=function(t,e){var i=t.getLocation(),n=this.owner;n._hitTest(i,this)?e.type=u.TOUCH_END:e.type=u.TOUCH_CANCEL,e.touch=t,e.bubbles=!0,n.dispatchEvent(e)},y=function(t,e){t.getLocation();var i=this.owner;e.type=u.TOUCH_CANCEL,e.touch=t,e.bubbles=!0,i.dispatchEvent(e)},v=function(t){var e=t.getLocation(),i=this.owner;i._hitTest(e,this)&&(t.type=u.MOUSE_DOWN,t.bubbles=!0,i.dispatchEvent(t),t.stopPropagation())},x=function(t){var e=t.getLocation(),i=this.owner,n=i._hitTest(e,this);if(n)this._previousIn||(f&&(t.type=u.MOUSE_LEAVE,f.dispatchEvent(t),f._mouseListener._previousIn=!1),f=this.owner,t.type=u.MOUSE_ENTER,i.dispatchEvent(t),this._previousIn=!0),t.type=u.MOUSE_MOVE,t.bubbles=!0,i.dispatchEvent(t);else{if(!this._previousIn)return;t.type=u.MOUSE_LEAVE,i.dispatchEvent(t),this._previousIn=!1,f=null}t.stopPropagation()},C=function(t){var e=t.getLocation(),i=this.owner;i._hitTest(e,this)&&(t.type=u.MOUSE_UP,t.bubbles=!0,i.dispatchEvent(t),t.stopPropagation())},T=function(t){var e=t.getLocation(),i=this.owner;i._hitTest(e,this)&&(t.type=u.MOUSE_WHEEL,t.bubbles=!0,i.dispatchEvent(t),t.stopPropagation())};function A(t){var e=cc.Mask;if(e)for(var i=0,n=t;n&&cc.Node.isNode(n);n=n._parent,++i)if(n.getComponent(e))return{index:i,node:n};return null}var b=cc.Class({name:"cc.Node",extends:t("./utils/base-node"),properties:{_opacity:255,_color:cc.Color.WHITE,_cascadeOpacityEnabled:!0,_anchorPoint:cc.p(.5,.5),_contentSize:cc.size(0,0),_rotationX:0,_rotationY:0,_scaleX:1,_scaleY:1,_position:cc.p(0,0),_skewX:0,_skewY:0,_localZOrder:0,_globalZOrder:0,_opacityModifyRGB:!1,groupIndex:{default:0,type:cc.Integer},group:{get:function(){return cc.game.groupList[this.groupIndex]||""},set:function(t){this.groupIndex=cc.game.groupList.indexOf(t),this.emit("group-changed")}},x:{get:function(){return this._position.x},set:function(t){var e=this._position;if(t!==e.x){e.x=t,this._sgNode.setPositionX(t);var i=this._hasListenerCache;i&&i["position-changed"]&&this.emit("position-changed")}}},y:{get:function(){return this._position.y},set:function(t){var e=this._position;if(t!==e.y){e.y=t,this._sgNode.setPositionY(t);var i=this._hasListenerCache;i&&i["position-changed"]&&this.emit("position-changed")}}},rotation:{get:function(){return this._rotationX!==this._rotationY&&cc.logID(1602),this._rotationX},set:function(t){if(this._rotationX!==t||this._rotationY!==t){this._rotationX=this._rotationY=t,this._sgNode.rotation=t;var e=this._hasListenerCache;e&&e["rotation-changed"]&&this.emit("rotation-changed")}}},rotationX:{get:function(){return this._rotationX},set:function(t){if(this._rotationX!==t){this._rotationX=t,this._sgNode.rotationX=t;var 
e=this._hasListenerCache;e&&e["rotation-changed"]&&this.emit("rotation-changed")}}},rotationY:{get:function(){return this._rotationY},set:function(t){if(this._rotationY!==t){this._rotationY=t,this._sgNode.rotationY=t;var e=this._hasListenerCache;e&&e["rotation-changed"]&&this.emit("rotation-changed")}}},scaleX:{get:function(){return this._scaleX},set:function(t){if(this._scaleX!==t){this._scaleX=t,this._sgNode.scaleX=t;var e=this._hasListenerCache;e&&e["scale-changed"]&&this.emit("scale-changed")}}},scaleY:{get:function(){return this._scaleY},set:function(t){if(this._scaleY!==t){this._scaleY=t,this._sgNode.scaleY=t;var e=this._hasListenerCache;e&&e["scale-changed"]&&this.emit("scale-changed")}}},skewX:{get:function(){return this._skewX},set:function(t){this._skewX=t,this._sgNode.skewX=t}},skewY:{get:function(){return this._skewY},set:function(t){this._skewY=t,this._sgNode.skewY=t}},opacity:{get:function(){return this._opacity},set:function(t){if(this._opacity!==t&&(this._opacity=t,this._sgNode.setOpacity(t),!this._cascadeOpacityEnabled)){var e=this._sizeProvider;e instanceof _ccsg.Node&&e!==this._sgNode&&e.setOpacity(t)}},range:[0,255]},cascadeOpacity:{get:function(){return this._cascadeOpacityEnabled},set:function(t){if(this._cascadeOpacityEnabled!==t){this._cascadeOpacityEnabled=t,this._sgNode.cascadeOpacity=t;var e=t?255:this._opacity,i=this._sizeProvider;i instanceof _ccsg.Node&&i.setOpacity(e)}}},color:{get:function(){return this._color.clone()},set:function(t){this._color.equals(t)||(this._color.fromColor(t),this._sizeProvider instanceof _ccsg.Node&&this._sizeProvider.setColor(t))}},anchorX:{get:function(){return this._anchorPoint.x},set:function(t){var e=this._anchorPoint;if(e.x!==t){e.x=t;var i=this._sizeProvider;i instanceof _ccsg.Node&&i.setAnchorPoint(e),this.emit("anchor-changed")}}},anchorY:{get:function(){return this._anchorPoint.y},set:function(t){var e=this._anchorPoint;if(e.y!==t){e.y=t;var i=this._sizeProvider;i instanceof _ccsg.Node&&i.setAnchorPoint(e),this.emit("anchor-changed")}}},width:{get:function(){if(this._sizeProvider){var t=this._sizeProvider._getWidth();return this._contentSize.width=t,t}return this._contentSize.width},set:function(t){if(t!==this._contentSize.width){var e=this._sizeProvider;e&&e.setContentSize(t,e._getHeight()),this._contentSize.width=t,this.emit("size-changed")}}},height:{get:function(){if(this._sizeProvider){var t=this._sizeProvider._getHeight();return this._contentSize.height=t,t}return this._contentSize.height},set:function(t){if(t!==this._contentSize.height){var e=this._sizeProvider;e&&e.setContentSize(e._getWidth(),t),this._contentSize.height=t,this.emit("size-changed")}}},zIndex:{get:function(){return this._localZOrder},set:function(t){this._localZOrder!==t&&(this._localZOrder=t,this._sgNode.zIndex=t,this._parent&&(function(t){t._parent._delaySort(),s._setDirtyForNode(t)})(this))}}},ctor:function(t){var e=this._sgNode=new _ccsg.Node;cc.game._isCloning||(e.cascadeOpacity=!0),this._sizeProvider=null,this._reorderChildDirty=!1,this._widget=null,this._touchListener=null,this._mouseListener=null},statics:{isNode:function(t){return t instanceof b&&(t.constructor===b||!(t instanceof cc.Scene))}},_onSetParent:function(t){var e=this._sgNode;e.parent&&e.parent.removeChild(e,!1),t&&(t._sgNode.addChild(e),t._delaySort())},_onSiblingIndexChanged:function(t){var e=this._parent,i=e._children,n=0,r=i.length;for(0;n=0&&c>=0&&l>=0&&h>=0){if(e&&e.mask){for(var u=e.mask,_=this,d=0;_&&d1){var 
e,i,n,r=t.length;for(e=1;e=0;){if(n._localZOrder0,this._repeat=r,this._runForever=this._repeat===cc.macro.REPEAT_FOREVER,!0},l.getInterval=function(){return this._interval},l.setInterval=function(t){this._interval=t},l.update=function(t){-1===this._elapsed?(this._elapsed=0,this._timesExecuted=0):(this._elapsed+=t,this._runForever&&!this._useDelay?this._elapsed>=this._interval&&(this.trigger(),this._elapsed=0):(this._useDelay?this._elapsed>=this._delay&&(this.trigger(),this._elapsed-=this._delay,this._timesExecuted+=1,this._useDelay=!1):this._elapsed>=this._interval&&(this.trigger(),this._elapsed=0,this._timesExecuted+=1),this._callback&&!this._runForever&&this._timesExecuted>this._repeat&&this.cancel()))},l.getCallback=function(){return this._callback},l.trigger=function(){this._target&&this._callback&&(this._lock=!0,this._callback.call(this._target,this._elapsed),this._lock=!1)},l.cancel=function(){this._scheduler.unschedule(this._callback,this._target)};var u=[];h.get=function(){return u.pop()||new h},h.put=function(t){u.length<20&&!t._lock&&(t._scheduler=t._target=t._callback=null,u.push(t))};var _=function(t){return t.__instanceId||t.uuid};cc.Scheduler=cc._Class.extend({ctor:function(){this._timeScale=1,this._updatesNegList=[],this._updates0List=[],this._updatesPosList=[],this._hashForUpdates={},this._hashForTimers={},this._currentTarget=null,this._currentTargetSalvaged=!1,this._updateHashLocked=!1,this._arrayForTimers=[]},_removeHashElement:function(t){delete this._hashForTimers[_(t.target)];for(var e=this._arrayForTimers,i=0,n=e.length;i=s&&n.timerIndex--,void(0===r.length&&(this._currentTarget===n?this._currentTargetSalvaged=!0:this._removeHashElement(n)))}}},unscheduleUpdate:function(t){if(t){var e=_(t);cc.assertID(e,1510);var i=this._hashForUpdates[e];i&&(this._updateHashLocked?i.entry.markedForDeletion=!0:this._removeUpdateFromHash(i.entry))}},unscheduleAllForTarget:function(t){if(t){var e=_(t);cc.assertID(e,1510);var i=this._hashForTimers[e];if(i){var n=i.timers;n.indexOf(i.currentTimer)>-1&&!i.currentTimerSalvaged&&(i.currentTimerSalvaged=!0);for(var r=0,s=n.length;r=0;e--)i=r[e],this.unscheduleAllForTarget(i.target);var s=0;if(t<0)for(e=0;e=t&&this.unscheduleUpdate(n.target),s==this._updatesNegList.length&&e++;if(t<=0)for(e=0;e=t&&this.unscheduleUpdate(n.target),s==this._updatesPosList.length&&e++},isScheduled:function(t,e){cc.assertID(t,1508),cc.assertID(e,1509);var i=_(e);cc.assertID(i,1510);var n=this._hashForTimers[i];if(!n)return!1;if(null==n.timers)return!1;for(var r=n.timers,s=0;s=t&&(r.paused=!0,s.push(r.target));if(t<=0)for(i=0;i=t&&(r.paused=!0,s.push(r.target));return s},resumeTargets:function(t){if(t)for(var e=0;e=r.OptimizationPolicyThreshold)?(t=this._doInstantiate(),this.data._instantiate(t)):(this.data._prefab._synced=!0,t=this.data._instantiate()),++this._instantiatedTimes,t}});cc.Prefab=e.exports=r,cc.js.obsolete(cc,"cc._Prefab","Prefab")}),{"../platform/instantiate-jit":197}],50:[(function(t,e,i){var n=t("../platform/CCObject"),r=t("../platform/js");cc.RawAsset=cc.Class({name:"cc.RawAsset",extends:n,ctor:function(){Object.defineProperty(this,"_uuid",{value:"",writable:!0})}}),r.value(cc.RawAsset,"isRawAssetType",(function(t){return cc.isChildClassOf(t,cc.RawAsset)&&!cc.isChildClassOf(t,cc.Asset)})),r.value(cc.RawAsset,"wasRawAssetType",(function(t){return t===cc.Texture2D||t===cc.AudioClip||t===cc.ParticleAsset||t===cc.Asset})),e.exports=cc.RawAsset}),{"../platform/CCObject":184,"../platform/js":199}],51:[(function(t,e,i){var 
n=cc.Class({name:"cc.SceneAsset",extends:cc.Asset,properties:{scene:null,asyncLoadAssets:void 0}});cc.SceneAsset=n,e.exports=n}),{}],52:[(function(t,e,i){var n=cc.Class({name:"cc.Script",extends:cc.Asset});cc._Script=n;var r=cc.Class({name:"cc.JavaScript",extends:n});cc._JavaScript=r;var s=cc.Class({name:"cc.CoffeeScript",extends:n});cc._CoffeeScript=s;var o=cc.Class({name:"cc.TypeScript",extends:n});cc._TypeScript=o}),{}],53:[(function(t,e,i){var n=cc.Class({name:"cc.SpriteAtlas",extends:cc.Asset,properties:{_spriteFrames:{default:{}}},getTexture:function(){var t=Object.keys(this._spriteFrames);if(t.length>0){var e=this._spriteFrames[t[0]];return e?e.getTexture():null}return null},getSpriteFrame:function(t){var e=this._spriteFrames[t];return e?(e.name||(e.name=t),e):null},getSpriteFrames:function(){var t=[],e=this._spriteFrames;for(var i in e)t.push(this.getSpriteFrame(i));return t}});cc.SpriteAtlas=n,e.exports=n}),{}],54:[(function(t,e,i){var n=cc.Class({name:"cc.TTFFont",extends:cc.Font,statics:{preventPreloadNativeObject:!0}});cc.TTFFont=e.exports=n}),{}],55:[(function(t,e,i){var n=cc.Class({name:"cc.TextAsset",extends:cc.Asset,properties:{text:""},toString:function(){return this.text}});e.exports=cc.TextAsset=n}),{}],56:[(function(t,e,i){t("./CCRawAsset"),t("./CCAsset"),t("./CCFont"),t("./CCPrefab"),t("./CCAudioClip"),t("./CCScripts"),t("./CCSceneAsset"),t("../sprites/CCSpriteFrame"),t("../textures/CCTexture2D"),t("./CCTTFFont"),t("./CCSpriteAtlas"),t("./CCBitmapFont"),t("./CCLabelAtlas"),t("./CCTextAsset"),t("./CCJsonAsset")}),{"../sprites/CCSpriteFrame":217,"../textures/CCTexture2D":218,"./CCAsset":43,"./CCAudioClip":44,"./CCBitmapFont":45,"./CCFont":46,"./CCJsonAsset":47,"./CCLabelAtlas":48,"./CCPrefab":49,"./CCRawAsset":50,"./CCSceneAsset":51,"./CCScripts":52,"./CCSpriteAtlas":53,"./CCTTFFont":54,"./CCTextAsset":55}],57:[(function(t,e,i){var n=t("../utils/misc"),r=t("../event-manager"),s=!!cc.ActionManager,o=function(){};cc.s_globalOrderOfArrival=1,_ccsg.Node=cc.Class({name:"ccsg.Node",properties:{_running:!1,_localZOrder:0,_globalZOrder:0,_arrivalOrder:0,_reorderChildDirty:!1,_vertexZ:0,_customZ:void 0,_rotationX:0,_rotationY:0,_scaleX:1,_scaleY:1,_position:cc.p(0,0),_skewX:0,_skewY:0,_children:[],_visible:!0,_anchorPoint:cc.p(0,0),_contentSize:cc.size(0,0),_parent:null,_ignoreAnchorPointForPosition:!1,tag:cc.macro.NODE_TAG_INVALID,_name:"",_realOpacity:255,_realColor:cc.Color.WHITE,_cascadeColorEnabled:!1,_cascadeOpacityEnabled:!1,_isTransitionFinished:!1,_actionManager:null,_scheduler:null,_renderCmd:null},ctor:function(){this.__instanceId=cc.ClassManager.getNewInstanceId(),this._renderCmd=this._createRenderCmd()},init:function(){return!0},attr:function(t){for(var e in t)this[e]=t[e]},getSkewX:function(){return this._skewX},setSkewX:function(t){this._skewX=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getSkewY:function(){return this._skewY},setSkewY:function(t){this._skewY=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},setLocalZOrder:function(t){this._parent?this._parent.reorderChild(this,t):this._localZOrder=t,r._setDirtyForNode(this)},_setLocalZOrder:function(t){this._localZOrder=t},getLocalZOrder:function(){return this._localZOrder},getZOrder:function(){return cc.logID(1600),this.getLocalZOrder()},setZOrder:function(t){cc.logID(1601),this.setLocalZOrder(t)},setGlobalZOrder:function(t){this._globalZOrder!==t&&(this._globalZOrder=t,r._setDirtyForNode(this))},getGlobalZOrder:function(){return 
this._globalZOrder},getVertexZ:function(){return this._vertexZ},setVertexZ:function(t){this._customZ=this._vertexZ=t},getRotation:function(){return this._rotationX!==this._rotationY&&cc.logID(1602),this._rotationX},setRotation:function(t){this._rotationX=this._rotationY=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getRotationX:function(){return this._rotationX},setRotationX:function(t){this._rotationX=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getRotationY:function(){return this._rotationY},setRotationY:function(t){this._rotationY=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getScale:function(){return this._scaleX!==this._scaleY&&cc.logID(1603),this._scaleX},setScale:function(t,e){this._scaleX=t,this._scaleY=e||0===e?e:t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getScaleX:function(){return this._scaleX},setScaleX:function(t){this._scaleX=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getScaleY:function(){return this._scaleY},setScaleY:function(t){this._scaleY=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},setPosition:function(t,e){var i=this._position;if(void 0===e){if(i.x===t.x&&i.y===t.y)return;i.x=t.x,i.y=t.y}else{if(i.x===t&&i.y===e)return;i.x=t,i.y=e}this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getPosition:function(){return cc.p(this._position)},getPositionX:function(){return this._position.x},setPositionX:function(t){this._position.x=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getPositionY:function(){return this._position.y},setPositionY:function(t){this._position.y=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty)},getChildrenCount:function(){return this._children.length},getChildren:function(){return this._children},isVisible:function(){return this._visible},setVisible:function(t){this._visible!==t&&(this._visible=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty),cc.renderer.childrenOrderDirty=!0)},getAnchorPoint:function(){return cc.p(this._anchorPoint)},setAnchorPoint:function(t,e){var i=this._anchorPoint;if(void 0===e){if(t.x===i.x&&t.y===i.y)return;i.x=t.x,i.y=t.y}else{if(t===i.x&&e===i.y)return;i.x=t,i.y=e}this._renderCmd._updateAnchorPointInPoint()},_getAnchorX:function(){return this._anchorPoint.x},_setAnchorX:function(t){this._anchorPoint.x!==t&&(this._anchorPoint.x=t,this._renderCmd._updateAnchorPointInPoint())},_getAnchorY:function(){return this._anchorPoint.y},_setAnchorY:function(t){this._anchorPoint.y!==t&&(this._anchorPoint.y=t,this._renderCmd._updateAnchorPointInPoint())},getAnchorPointInPoints:function(){return this._renderCmd.getAnchorPointInPoints()},_getWidth:function(){return this._contentSize.width},_setWidth:function(t){this._contentSize.width=t,this._renderCmd._updateAnchorPointInPoint()},_getHeight:function(){return this._contentSize.height},_setHeight:function(t){this._contentSize.height=t,this._renderCmd._updateAnchorPointInPoint()},getContentSize:function(){return cc.size(this._contentSize)},setContentSize:function(t,e){var i=this._contentSize;if(void 0===e){if(t.width===i.width&&t.height===i.height)return;i.width=t.width,i.height=t.height}else{if(t===i.width&&e===i.height)return;i.width=t,i.height=e}this._renderCmd._updateAnchorPointInPoint()},isRunning:function(){return this._running},getParent:function(){return this._parent},setParent:function(t){this._parent=t;var 
e=_ccsg.Node._dirtyFlags;this._renderCmd.setDirtyFlag(e.transformDirty|e.opacityDirty)},isIgnoreAnchorPointForPosition:function(){return this._ignoreAnchorPointForPosition},setIgnoreAnchorPointForPosition:function(t){t!==this._ignoreAnchorPointForPosition&&(this._ignoreAnchorPointForPosition=t,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.transformDirty))},getTag:function(){return this.tag},setTag:function(t){this.tag=t},setName:function(t){this._name=t},getName:function(){return this._name},updateOrderOfArrival:function(){this._arrivalOrder=++cc.s_globalOrderOfArrival},getScheduler:function(){return this._scheduler||cc.director.getScheduler()},setScheduler:function(t){this._scheduler!==t&&(this.unscheduleAllCallbacks(),this._scheduler=t)},boundingBox:function(){return cc.logID(1608),this.getBoundingBox()},getBoundingBox:function(){var t=cc.rect(0,0,this._contentSize.width,this._contentSize.height);return cc._rectApplyAffineTransformIn(t,this.getNodeToParentTransform())},cleanup:function(){this.stopAllActions(),this.unscheduleAllCallbacks(),r.removeListeners(this)},getChildByTag:function(t){var e=this._children;if(null!==e)for(var i=0;i-1&&this._detachChild(t,e),cc.renderer.childrenOrderDirty=!0)},removeChildByTag:function(t,e){t===cc.macro.NODE_TAG_INVALID&&cc.logID(1609);var i=this.getChildByTag(t);i?this.removeChild(i,e):cc.logID(1610,t)},removeAllChildrenWithCleanup:function(t){this.removeAllChildren(t)},removeAllChildren:function(t){var e=this._children;if(null!==e){void 0===t&&(t=!0);for(var i=0;i=0;){if(i._localZOrder=e.max)){var i,n,r,s,o,a=0,c=_ccsg.Node._performStacks[_ccsg.Node._performing];for(c||(c=[],_ccsg.Node._performStacks.push(c)),c.length=0,_ccsg.Node._performing++,r=c[0]=this;r;){if((i=r._children)&&i.length>0)for(s=0,o=i.length;s=0;--s)r=c[s],c[s]=null,r&&r.onEnter();break;case e.onExit:for(s=c.length-1;s>=0;--s)r=c[s],c[s]=null,r&&r.onExit();break;case e.onEnterTransitionDidFinish:for(s=c.length-1;s>=0;--s)r=c[s],c[s]=null,r&&r.onEnterTransitionDidFinish();break;case e.cleanup:for(s=c.length-1;s>=0;--s)r=c[s],c[s]=null,r&&r.cleanup();break;case e.onExitTransitionDidStart:for(s=c.length-1;s>=0;--s)r=c[s],c[s]=null,r&&r.onExitTransitionDidStart()}_ccsg.Node._performing--}},onEnterTransitionDidFinish:function(){this._isTransitionFinished=!0},onExitTransitionDidStart:function(){},onExit:function(){this._running=!1,this.pause()},runAction:s?function(t){return cc.assertID(t,1618),cc.director.getActionManager().addAction(t,this,!this._running),t}:o,stopAllActions:s?function(){cc.director.getActionManager().removeAllActionsFromTarget(this)}:o,stopAction:s?function(t){cc.director.getActionManager().removeAction(t)}:o,stopActionByTag:s?function(t){t!==cc.Action.TAG_INVALID?cc.director.getActionManager().removeActionByTag(t,this):cc.logID(1612)}:o,getActionByTag:s?function(t){return t===cc.Action.TAG_INVALID?(cc.logID(1613),null):cc.director.getActionManager().getActionByTag(t,this)}:function(){return null},getNumberOfRunningActions:s?function(){return cc.director.getActionManager().getNumberOfRunningActionsInTarget(this)}:function(){return 0},scheduleUpdate:function(){this.scheduleUpdateWithPriority(0)},scheduleUpdateWithPriority:function(t){this.scheduler.scheduleUpdate(this,t,!this._running)},unscheduleUpdate:function(){this.scheduler.unscheduleUpdate(this)},schedule:function(t,e,i,n,r){var s=arguments.length;"function"==typeof t?1===s?(e=0,i=cc.macro.REPEAT_FOREVER,n=0,r=this.__instanceId):2===s?"number"==typeof 
e?(i=cc.macro.REPEAT_FOREVER,n=0,r=this.__instanceId):(r=e,e=0,i=cc.macro.REPEAT_FOREVER,n=0):3===s?("string"==typeof i?(r=i,i=cc.macro.REPEAT_FOREVER):r=this.__instanceId,n=0):4===s&&(r=this.__instanceId):1===s?(e=0,i=cc.macro.REPEAT_FOREVER,n=0):2===s&&(i=cc.macro.REPEAT_FOREVER,n=0),cc.assertID(t,1619),cc.assertID(e>=0,1620),e=e||0,i=isNaN(i)?cc.macro.REPEAT_FOREVER:i,n=n||0,this.scheduler.schedule(t,this,e,i,n,!this._running,r)},scheduleOnce:function(t,e,i){void 0===i&&(i=this.__instanceId),this.schedule(t,0,0,e,i)},unschedule:function(t){t&&this.scheduler.unschedule(t,this)},unscheduleAllCallbacks:function(){this.scheduler.unscheduleAllForTarget(this)},resumeSchedulerAndActions:function(){cc.logID(1614),this.resume()},resume:function(){this.scheduler.resumeTarget(this),s&&cc.director.getActionManager().resumeTarget(this),r.resumeTarget(this)},pauseSchedulerAndActions:function(){cc.logID(1615),this.pause()},pause:function(){this.scheduler.pauseTarget(this),s&&cc.director.getActionManager().pauseTarget(this),r.pauseTarget(this)},getParentToNodeTransform:function(){return this._renderCmd.getParentToNodeTransform()},parentToNodeTransform:function(){return this.getParentToNodeTransform()},getNodeToWorldTransform:function(){for(var t=cc.affineTransformClone(this.getNodeToParentTransform()),e=this._parent;null!==e;e=e.parent)t=cc.affineTransformConcatIn(t,e.getNodeToParentTransform());return t},nodeToWorldTransform:function(){return this.getNodeToWorldTransform()},getWorldToNodeTransform:function(){var t=this.getNodeToWorldTransform();return cc.affineTransformInvertOut(t,t),t},worldToNodeTransform:function(){return this.getWorldToNodeTransform()},convertToNodeSpace:function(t){return cc.pointApplyAffineTransform(t,this.getWorldToNodeTransform())},convertToWorldSpace:function(t){return t=t||cc.v2(0,0),cc.pointApplyAffineTransform(t,this.getNodeToWorldTransform())},convertToNodeSpaceAR:function(t){return cc.pSub(this.convertToNodeSpace(t),this._renderCmd.getAnchorPointInPoints())},convertToWorldSpaceAR:function(t){t=t||cc.v2(0,0);var e=cc.pAdd(t,this._renderCmd.getAnchorPointInPoints());return this.convertToWorldSpace(e)},_convertToWindowSpace:function(t){var e=this.convertToWorldSpace(t);return cc.director.convertToUI(e)},convertTouchToNodeSpace:function(t){var e=t.getLocation();return this.convertToNodeSpace(e)},convertTouchToNodeSpaceAR:function(t){var e=cc.director.convertToGL(t.getLocation());return this.convertToNodeSpaceAR(e)},updateTransform:function(){for(var t=this._children,e=0;e0){for(this._reorderChildDirty&&this.sortAllChildren(),r=0;r0)for(s=r._renderCmd,o=0,a=i.length;o0?e.flags.ParentInCamera:0)},culling:function(t,e){cc.macro.ENABLE_CULLING?(this._updateCameraFlag(t),this._doCulling&&this._doCulling(),e&&s(this._node,"culling")):this._doCulling&&(this._needDraw=!0)},getNodeToParentTransform:function(){return this._dirtyFlag&n.transformDirty&&this.transform(),this._transform},setNodeToParentTransform:function(t){t?(this._transform=t,this._transformUpdated=!0):this._transformUpdated=!1,this.setDirtyFlag(n.transformDirty)},_propagateFlagsDown:function(t){if(t){var e=this._dirtyFlag,i=t._node,r=t._dirtyFlag;i._cascadeColorEnabled&&r&n.colorDirty&&(e|=n.colorDirty),i._cascadeOpacityEnabled&&r&n.opacityDirty&&(e|=n.opacityDirty),r&n.transformDirty&&(e|=n.transformDirty),r&n.cullingDirty&&(e|=n.cullingDirty),this._dirtyFlag=e}},visit:function(t){var 
e=this._node,i=cc.renderer;t&&(this._curLevel=t._curLevel+1),this._propagateFlagsDown(t),isNaN(e._customZ)&&(e._vertexZ=i.assignedZ,i.assignedZ+=i.assignedZStep),this._syncStatus(t)},_updateDisplayColor:function(t){var e,i,r,s,o=this._node,a=this._displayedColor,c=o._realColor;if(this._notifyRegionStatus&&this._notifyRegionStatus(_ccsg.Node.CanvasRenderCmd.RegionStatus.Dirty),this._cascadeColorEnabledDirty&&!o._cascadeColorEnabled){a.r=c.r,a.g=c.g,a.b=c.b;var h=new cc.Color(255,255,255,255);for(e=0,i=(r=o._children).length;e=0;e--)this._removeTargetInSg(t[e])}},addTarget:function(t){-1===this._targets.indexOf(t)&&(this._addSgTargetInSg(t),this._targets.push(t))},removeTarget:function(t){-1!==this._targets.indexOf(t)&&(this._removeTargetInSg(t),cc.js.array.remove(this._targets,t))},getTargets:function(){return this._targets},getNodeToCameraTransform:function(t){var e=t.getNodeToWorldTransform();return this.containsNode(t)&&(e=cc.affineTransformConcatIn(e,cc.Camera.main.viewMatrix)),e},getCameraToWorldPoint:function(t){return cc.Camera.main&&(t=cc.pointApplyAffineTransform(t,cc.Camera.main.invertViewMatrix)),t},containsNode:function(t){t instanceof cc.Node&&(t=t._sgNode);for(var e=this._sgTarges;t;){if(-1!==e.indexOf(t))return!0;t=t.parent}return!1},_setSgNodesCullingDirty:function(){for(var t=this._sgTarges,e=0;e=0;a--){var c=e[a];c._cameraInfo.touched!==i&&this._removeTargetInSg(c)}},lateUpdate:function(){this._checkSgTargets();var t=this.viewMatrix,e=this.invertViewMatrix,i=this.viewPort,n=cc.visibleRect,r=this.visibleRect,s=this.node.getNodeToWorldTransformAR(),o=.5*-(Math.atan2(s.b,s.a)+Math.atan2(-s.c,s.d)),a=1,c=0,h=0,l=1;o&&(h=Math.sin(o),a=l=Math.cos(o),c=-h);var u=this.zoomRatio;a*=u,c*=u,h*=u,l*=u,t.a=a,t.b=c,t.c=h,t.d=l;var _=n.center;t.tx=_.x-(a*s.tx+h*s.ty),t.ty=_.y-(c*s.tx+l*s.ty),cc.affineTransformInvertOut(t,e),i.x=n.bottomLeft.x,i.y=n.bottomLeft.y,i.width=n.width,i.height=n.height,cc._rectApplyAffineTransformIn(i,e),r.left.x=i.xMin,r.right.x=i.xMax,r.bottom.y=i.yMin,r.top.y=i.yMax,this._sgNode.setTransform(a,c,h,l,t.tx,t.ty);var d=this._lastViewMatrix;d.a===t.a&&d.b===t.b&&d.c===t.c&&d.d===t.d&&d.tx===t.tx&&d.ty===t.ty||(this._setSgNodesCullingDirty(),d.a=t.a,d.b=t.b,d.c=t.c,d.d=t.d,d.tx=t.tx,d.ty=t.ty)}});r.flags=cc.Enum({InCamera:1,ParentInCamera:2}),e.exports=cc.Camera=r}),{"./CCSGCameraNode":63}],63:[(function(t,e,i){var n=new cc.math.Matrix4,r=_ccsg.Node.extend({ctor:function(){this._super(),this._mat=new cc.math.Matrix4,this._mat.identity(),this._beforeVisitCmd=new cc.CustomRenderCmd(this,this._onBeforeVisit),this._afterVisitCmd=new cc.CustomRenderCmd(this,this._onAfterVisit)},setTransform:function(t,e,i,n,r,s){var o=this._mat.mat;o[0]=t,o[1]=e,o[4]=i,o[5]=n,o[12]=r,o[13]=s},addTarget:function(t){var e=t._cameraInfo;e.sgCameraNode=this,e.originVisit=t.visit,t.visit=this._visit},removeTarget:function(t){t.visit=t._cameraInfo.originVisit},_visit:function(t){var 
e=this._cameraInfo,i=e.sgCameraNode;cc.renderer.pushRenderCommand(i._beforeVisitCmd),e.originVisit.call(this,t),cc.renderer.pushRenderCommand(i._afterVisitCmd)},_onBeforeVisit:function(){cc.renderer._breakBatch(),cc.math.glMatrixMode(cc.math.KM_GL_PROJECTION),n.assignFrom(cc.current_stack.top),n.multiply(this._mat),cc.current_stack.push(n)},_onAfterVisit:function(){cc.renderer._breakBatch(),cc.math.glMatrixMode(cc.math.KM_GL_PROJECTION),cc.current_stack.pop()}});e.exports=_ccsg.CameraNode=r}),{}],64:[(function(t,e,i){cc.Collider.Box=cc.Class({properties:{_offset:cc.v2(0,0),_size:cc.size(100,100),offset:{tooltip:!1,get:function(){return this._offset},set:function(t){this._offset=t},type:cc.Vec2},size:{tooltip:!1,get:function(){return this._size},set:function(t){this._size.width=t.width<0?0:t.width,this._size.height=t.height<0?0:t.height},type:cc.Size}},resetInEditor:!1});var n=cc.Class({name:"cc.BoxCollider",extends:cc.Collider,mixins:[cc.Collider.Box],editor:!1});cc.BoxCollider=e.exports=n}),{}],65:[(function(t,e,i){cc.Collider.Circle=cc.Class({properties:{_offset:cc.v2(0,0),_radius:50,offset:{get:function(){return this._offset},set:function(t){this._offset=t},type:cc.Vec2},radius:{tooltip:!1,get:function(){return this._radius},set:function(t){this._radius=t<0?0:t}}},resetInEditor:!1});var n=cc.Class({name:"cc.CircleCollider",extends:cc.Collider,mixins:[cc.Collider.Circle],editor:!1});cc.CircleCollider=e.exports=n}),{}],66:[(function(t,e,i){var n=cc.Class({name:"cc.Collider",extends:cc.Component,properties:{editing:{default:!1,serializable:!1,tooltip:!1},tag:{tooltip:!1,default:0,range:[0,1e7],type:cc.Integer}},onDisable:function(){cc.director.getCollisionManager().removeCollider(this)},onEnable:function(){cc.director.getCollisionManager().addCollider(this)}});cc.Collider=e.exports=n}),{}],67:[(function(t,e,i){var n=t("./CCContact"),r=n.CollisionType,s=cc.rect(),o=cc.v2(),a=cc.Class({mixins:[cc.EventTarget],properties:{enabled:!1,enabledDrawBoundingBox:!1},ctor:function(){this.__instanceId=cc.ClassManager.getNewInstanceId(),this._contacts=[],this._colliders=[],this._debugDrawer=null,this._enabledDebugDraw=!1},update:function(t){if(this.enabled){var e,i,n=this._colliders;for(e=0,i=n.length;ep&&(p=y.x),y.xg&&(g=y.y),y.y=0){e.splice(i,1);for(var n=this._contacts,s=n.length-1;s>=0;s--){var o=n[s];o.collider1!==t&&o.collider2!==t||(o.touching&&this._doCollide(r.CollisionExit,o),n.splice(s,1))}t.node.off("group-changed",this.onNodeGroupChanged,this)}else cc.errorID(6600)},attachDebugDrawToCamera:function(t){this._debugDrawer&&t.addTarget(this._debugDrawer)},detachDebugDrawFromCamera:function(t){this._debugDrawer&&t.removeTarget(this._debugDrawer)},onNodeGroupChanged:function(t){for(var e=t.currentTarget.getComponents(cc.Collider),i=0,n=e.length;i0){t.strokeColor=cc.Color.WHITE,t.moveTo(s[0].x,s[0].y);for(var o=1;or!=u>r&&n<(l-c)*(r-h)/(u-h)+c&&(i=!i)}return i}function a(t,e,i,n){var r,s=i.x-e.x,o=i.y-e.y,a=s*s+o*o,c=((t.x-e.x)*s+(t.y-e.y)*o)/a;return r=n?a?c<0?e:c>1?i:cc.v2(e.x+c*s,e.y+c*o):e:cc.v2(e.x+c*s,e.y+c*o),s=t.x-r.x,o=t.y-r.y,Math.sqrt(s*s+o*o)}n.lineLine=r,n.lineRect=function(t,e,i){var n=new cc.Vec2(i.x,i.y),s=new cc.Vec2(i.x,i.yMax),o=new cc.Vec2(i.xMax,i.yMax),a=new cc.Vec2(i.xMax,i.y);return!!(r(t,e,n,s)||r(t,e,s,o)||r(t,e,o,a)||r(t,e,a,n))},n.linePolygon=s,n.rectRect=function(t,e){var i=t.x,n=t.y,r=t.x+t.width,s=t.y+t.height,o=e.x,a=e.y,c=e.x+e.width,h=e.y+e.height;return i<=c&&r>=o&&n<=h&&s>=a},n.rectPolygon=function(t,e){var i,n,r=new cc.Vec2(t.x,t.y),a=new 
cc.Vec2(t.x,t.yMax),c=new cc.Vec2(t.xMax,t.yMax),h=new cc.Vec2(t.xMax,t.y);if(s(r,a,e))return!0;if(s(a,c,e))return!0;if(s(c,h,e))return!0;if(s(h,r,e))return!0;for(i=0,n=e.length;i>>1;r<=s;o=r+s>>>1){var a=t[o],c=a.constructor._executionOrder;if(c>i)s=o-1;else if(cn)s=o-1;else{if(!(h0&&(t.array.sort(d),this._invoke(t),t.array.length=0),this._invoke(this._zero),this._zero.array.length=0;var e=this._pos;e.array.length>0&&(e.array.sort(d),this._invoke(e),e.array.length=0)}}),m=cc.Class({extends:_,add:function(t){var e=t.constructor._executionOrder;if(0===e)this._zero.array.push(t);else{var i=e<0?this._neg.array:this._pos.array,n=l(i,t);n<0&&i.splice(~n,0,t)}},remove:function(t){var e=t.constructor._executionOrder;if(0===e)this._zero.fastRemove(t);else{var i=e<0?this._neg:this._pos,n=l(i.array,t);n>=0&&i.removeAt(n)}},invoke:function(t){this._neg.array.length>0&&this._invoke(this._neg,t),this._invoke(this._zero,t),this._pos.array.length>0&&this._invoke(this._pos,t)}});function p(t,e){if("function"==typeof t)return e?function(e,i){var n=e.array;for(e.i=0;e.i=0?r.fastRemoveAt(this.scheduleInNextFrame,e):(!t.start||t._objFlags&s||this.startInvoker.remove(t),t.update&&this.updateInvoker.remove(t),t.lateUpdate&&this.lateUpdateInvoker.remove(t))},enableComp:function(t,e){if(!(t._objFlags&o)){if(t.onEnable){if(e)return void e.add(t);if(t.onEnable(),!t.node._activeInHierarchy)return}this._onEnabled(t)}},disableComp:function(t){t._objFlags&o&&(t.onDisable&&t.onDisable(),this._onDisabled(t))},_scheduleImmediate:function(t){!t.start||t._objFlags&s||this.startInvoker.add(t),t.update&&this.updateInvoker.add(t),t.lateUpdate&&this.lateUpdateInvoker.add(t)},_deferredSchedule:function(){for(var t=this.scheduleInNextFrame,e=0,i=t.length;e0&&this._deferredSchedule(),this.startInvoker.invoke()},updatePhase:function(t){this.updateInvoker.invoke(t)},lateUpdatePhase:function(t){this.lateUpdateInvoker.invoke(t),this._updating=!1}});e.exports=y}),{"./platform/CCClass":178,"./platform/CCObject":184,"./platform/js":199,"./utils/misc":228}],73:[(function(t,e,i){var n=t("../../animation/animation-animator"),r=t("../../animation/animation-clip");function s(t,e){return t===e||t&&e&&(t.name===e.name||t._uuid===e._uuid)}var o=cc.Class({name:"cc.Animation",extends:t("./CCComponent"),mixins:[cc.EventTarget],editor:!1,ctor:function(){cc.EventTarget.call(this),this._animator=null,this._nameToState={},this._didInit=!1,this._currentClip=null},properties:{_defaultClip:{default:null,type:r},defaultClip:{type:r,get:function(){return this._defaultClip},set:function(t){},tooltip:!1},currentClip:{get:function(){return this._currentClip},set:function(t){this._currentClip=t},type:r,visible:!1},_clips:{default:[],type:[r],tooltip:!1,visible:!0},playOnLoad:{default:!1,tooltip:!1}},start:function(){if(this.playOnLoad&&this._defaultClip&&!(this._animator&&this._animator.isPlaying)){var t=this.getAnimationState(this._defaultClip.name);this._animator.playState(t)}},onEnable:function(){this._animator&&this._animator.resume()},onDisable:function(){this._animator&&this._animator.pause()},onDestroy:function(){this.stop()},getClips:function(){return this._clips},play:function(t,e){var i=this.playAdditive(t,e);return this._animator.stopStatesExcept(i),i},playAdditive:function(t,e){this._init();var i=this.getAnimationState(t||this._defaultClip&&this._defaultClip.name);if(i){this.enabled=!0;var 
n=this._animator;n.isPlaying&&i.isPlaying?i.isPaused?n.resumeState(i):(n.stopState(i),n.playState(i,e)):n.playState(i,e),this.enabledInHierarchy||n.pause(),this.currentClip=i.clip}return i},stop:function(t){if(this._didInit)if(t){var e=this._nameToState[t];e&&this._animator.stopState(e)}else this._animator.stop()},pause:function(t){if(this._didInit)if(t){var e=this._nameToState[t];e&&this._animator.pauseState(e)}else this.enabled=!1},resume:function(t){if(this._didInit)if(t){var e=this._nameToState[t];e&&this._animator.resumeState(e)}else this.enabled=!0},setCurrentTime:function(t,e){if(this._init(),e){var i=this._nameToState[e];i&&this._animator.setStateTime(i,t)}else this._animator.setStateTime(t)},getAnimationState:function(t){this._init();var e=this._nameToState[t];return e&&!e.curveLoaded&&this._animator._reloadClip(e),e||null},addClip:function(t,e){if(t){this._init(),cc.js.array.contains(this._clips,t)||this._clips.push(t),e=e||t.name;var i=this._nameToState[e];if(i){if(i.clip===t)return i;var n=this._clips.indexOf(i.clip);-1!==n&&this._clips.splice(n,1)}var r=new cc.AnimationState(t,e);return this._nameToState[e]=r,r}cc.warnID(3900)},removeClip:function(t,e){if(t){var i;for(var n in this._init(),this._nameToState){if((i=this._nameToState[n]).clip===t)break}if(t===this._defaultClip){if(!e)return void cc.warnID(3902);this._defaultClip=null}if(i&&i.isPlaying){if(!e)return void cc.warnID(3903);this.stop(i.name)}this._clips=this._clips.filter((function(e){return e!==t})),i&&delete this._nameToState[i.name]}else cc.warnID(3901)},sample:function(t){if(this._init(),t){var e=this._nameToState[t];e&&e.sample()}else this._animator.sample()},on:function(t,e,i,n){this._init();for(var r=cc.EventTarget.prototype.on.call(this,t,e,i,n),s=this._animator._anims.array,o=0;o0&&(i=this.time/this.duration),i>=1&&(i=1,this._transitionFinished=!0),this.transition===n.COLOR?e.color=this._fromColor.lerp(this._toColor,i):this.transition===n.SCALE&&(e.scale=cc.lerp(this._fromScale,this._toScale,i))}},_registerEvent:function(){this.node.on(cc.Node.EventType.TOUCH_START,this._onTouchBegan,this),this.node.on(cc.Node.EventType.TOUCH_MOVE,this._onTouchMove,this),this.node.on(cc.Node.EventType.TOUCH_END,this._onTouchEnded,this),this.node.on(cc.Node.EventType.TOUCH_CANCEL,this._onTouchCancel,this),this.node.on(cc.Node.EventType.MOUSE_ENTER,this._onMouseMoveIn,this),this.node.on(cc.Node.EventType.MOUSE_LEAVE,this._onMouseMoveOut,this)},_getTargetSprite:function(t){var e=null;return t&&(e=t.getComponent(cc.Sprite)),e},_applyTarget:function(){this._sprite=this._getTargetSprite(this.target),this.target&&(this._originalScale=this.target.scale)},_onTouchBegan:function(t){this.interactable&&this.enabledInHierarchy&&(this._pressed=!0,this._updateState(),t.stopPropagation())},_onTouchMove:function(t){if(this.interactable&&this.enabledInHierarchy&&this._pressed){var e,i=t.touch,r=this.node._hitTest(i.getLocation());if(this.transition===n.SCALE&&this.target)r?(this._fromScale=this._originalScale,this._toScale=this._originalScale*this.zoomScale,this._transitionFinished=!1):(this.time=0,this._transitionFinished=!0,this.target.scale=this._originalScale);else 
e=r?"pressed":"normal",this._applyTransition(e);t.stopPropagation()}},_onTouchEnded:function(t){this.interactable&&this.enabledInHierarchy&&(this._pressed&&(cc.Component.EventHandler.emitEvents(this.clickEvents,t),this.node.emit("click",this)),this._pressed=!1,this._updateState(),t.stopPropagation())},_zoomUp:function(){this._fromScale=this._originalScale,this._toScale=this._originalScale*this.zoomScale,this.time=0,this._transitionFinished=!1},_zoomBack:function(){this._fromScale=this.target.scale,this._toScale=this._originalScale,this.time=0,this._transitionFinished=!1},_onTouchCancel:function(){this.interactable&&this.enabledInHierarchy&&(this._pressed=!1,this._updateState())},_onMouseMoveIn:function(){!this._pressed&&this.interactable&&this.enabledInHierarchy&&(this.transition!==n.SPRITE||this.hoverSprite)&&(this._hovered||(this._hovered=!0,this._updateState()))},_onMouseMoveOut:function(){this._hovered&&(this._hovered=!1,this._updateState())},_updateState:function(){var t=this._getButtonState();this._applyTransition(t),this._updateDisabledState()},onDisable:function(){this._hovered=!1,this._pressed=!1,this.node.off(cc.Node.EventType.TOUCH_START,this._onTouchBegan,this),this.node.off(cc.Node.EventType.TOUCH_MOVE,this._onTouchMove,this),this.node.off(cc.Node.EventType.TOUCH_END,this._onTouchEnded,this),this.node.off(cc.Node.EventType.TOUCH_CANCEL,this._onTouchCancel,this),this.node.off(cc.Node.EventType.MOUSE_ENTER,this._onMouseMoveIn,this),this.node.off(cc.Node.EventType.MOUSE_LEAVE,this._onMouseMoveOut,this)},_getButtonState:function(){return this.interactable?this._pressed?"pressed":this._hovered?"hover":"normal":"disabled"},_updateColorTransition:function(t){var e=this[t+"Color"],i=this.target;this._fromColor=i.color.clone(),this._toColor=e,this.time=0,this._transitionFinished=!1},_updateSpriteTransition:function(t){var e=this[t+"Sprite"];this._sprite&&e&&(this._sprite.spriteFrame=e)},_updateScaleTransition:function(t){"pressed"===t?this._zoomUp():this._zoomBack()},_applyTransition:function(t){var e=this.transition;e===n.COLOR?this._updateColorTransition(t):e===n.SPRITE?this._updateSpriteTransition(t):e===n.SCALE&&this._updateScaleTransition(t)},_resizeNodeToTargetNode:!1,_updateDisabledState:function(){this._sprite&&this._sprite._sgNode.setState(0),this.enableAutoGrayEffect&&this.transition!==n.COLOR&&(this.transition===n.SPRITE&&this.disabledSprite||this._sprite&&!this.interactable&&this._sprite._sgNode.setState(1))}});cc.Button=e.exports=r}),{"./CCComponent":78}],77:[(function(t,e,i){var n=t("../event-manager"),r={getContentSize:function(){return cc.visibleRect},setContentSize:function(t){},_getWidth:function(){return this.getContentSize().width},_getHeight:function(){return this.getContentSize().height}},s=cc.Class({name:"cc.Canvas",extends:t("./CCComponent"),editor:!1,resetInEditor:!1,statics:{instance:null},properties:{_designResolution:cc.size(960,640),designResolution:{get:function(){return cc.size(this._designResolution)},set:function(t){this._designResolution.width=t.width,this._designResolution.height=t.height,this.applySettings()},tooltip:!1},_fitWidth:!1,_fitHeight:!0,fitHeight:{get:function(){return this._fitHeight},set:function(t){this._fitHeight!==t&&(this._fitHeight=t,this.applySettings())},tooltip:!1},fitWidth:{get:function(){return this._fitWidth},set:function(t){this._fitWidth!==t&&(this._fitWidth=t,this.applySettings())},tooltip:!1}},ctor:function(){this._thisOnResized=this.onResized.bind(this)},__preload:function(){if(s.instance)return 
cc.errorID(6700,this.node.name,s.instance.node.name);(s.instance=this,this.node._sizeProvider)||(this.node._sizeProvider=r);cc.director.on(cc.Director.EVENT_BEFORE_VISIT,this.alignWithScreen,this),cc.sys.isMobile?window.addEventListener("resize",this._thisOnResized):n.addCustomListener("canvas-resize",this._thisOnResized),this.applySettings(),this.onResized()},onDestroy:function(){this.node._sizeProvider===r&&(this.node._sizeProvider=null),cc.director.off(cc.Director.EVENT_BEFORE_VISIT,this.alignWithScreen,this),cc.sys.isMobile?window.removeEventListener("resize",this._thisOnResized):n.removeCustomListeners("canvas-resize",this._thisOnResized),s.instance===this&&(s.instance=null)},alignWithScreen:function(){var t,e=cc.visibleRect,i=0,n=0;!this.fitHeight&&!this.fitWidth&&(i=.5*((t=cc.view.getDesignResolutionSize()).width-e.width),n=.5*(t.height-e.height)),this.node.setPosition(.5*e.width+i,.5*e.height+n)},onResized:function(){this.alignWithScreen()},applySettings:function(){var t,e=cc.ResolutionPolicy;t=this.fitHeight&&this.fitWidth?e.SHOW_ALL:this.fitHeight||this.fitWidth?this.fitWidth?e.FIXED_WIDTH:e.FIXED_HEIGHT:e.NO_BORDER;var i=this._designResolution;cc.view.setDesignResolutionSize(i.width,i.height,t)}});cc.Canvas=e.exports=s}),{"../event-manager":112,"./CCComponent":78}],78:[(function(t,e,i){var n=t("../platform/CCObject"),r=t("../platform/js"),s=new(t("../platform/id-generater"))("Comp"),o=n.Flags.IsOnEnableCalled,a=n.Flags.IsOnLoadCalled,c=cc.Class({name:"cc.Component",extends:n,ctor:function(){this.__instanceId=cc.ClassManager.getNewInstanceId(),this.__eventTargets=[]},properties:{node:{default:null,visible:!1},name:{get:function(){if(this._name)return this._name;var t=cc.js.getClassName(this),e=t.lastIndexOf(".");return e>=0&&(t=t.slice(e+1)),this.node.name+"<"+t+">"},set:function(t){this._name=t},visible:!1},_id:{default:"",serializable:!1},uuid:{get:function(){var t=this._id;return t||(t=this._id=s.getNewId()),t},visible:!1},__scriptAsset:!1,_enabled:!0,enabled:{get:function(){return this._enabled},set:function(t){if(this._enabled!==t&&(this._enabled=t,this.node._activeInHierarchy)){var e=cc.director._compScheduler;t?e.enableComp(this):e.disableComp(this)}},visible:!1},enabledInHierarchy:{get:function(){return(this._objFlags&o)>0},visible:!1},_isOnLoadCalled:{get:function(){return this._objFlags&a}}},update:null,lateUpdate:null,__preload:null,onLoad:null,start:null,onEnable:null,onDisable:null,onDestroy:null,onFocusInEditor:null,onLostFocusInEditor:null,resetInEditor:null,addComponent:function(t){return this.node.addComponent(t)},getComponent:function(t){return this.node.getComponent(t)},getComponents:function(t){return this.node.getComponents(t)},getComponentInChildren:function(t){return this.node.getComponentInChildren(t)},getComponentsInChildren:function(t){return this.node.getComponentsInChildren(t)},_getLocalBounds:null,onRestore:null,destroy:function(){this._super()&&this._enabled&&this.node._activeInHierarchy&&cc.director._compScheduler.disableComp(this)},_onPreDestroy:function(){this.unscheduleAllCallbacks();for(var t=this.__eventTargets,e=0,i=t.length;e=0,1620),e=e||0,i=isNaN(i)?cc.macro.REPEAT_FOREVER:i,n=n||0;var 
r=cc.director.getScheduler(),s=r.isTargetPaused(this);r.schedule(t,this,e,i,n,s)},scheduleOnce:function(t,e){this.schedule(t,0,0,e)},unschedule:function(t){t&&cc.director.getScheduler().unschedule(t,this)},unscheduleAllCallbacks:function(){cc.director.getScheduler().unscheduleAllForTarget(this)}});c._requireComponent=null,c._executionOrder=0,r.value(c,"_registerEditorProps",(function(t,e){var i=e.requireComponent;i&&(t._requireComponent=i);var n=e.executionOrder;n&&"number"==typeof n&&(t._executionOrder=n)})),c.prototype.__scriptUuid="",cc.Component=e.exports=c}),{"../platform/CCObject":184,"../platform/id-generater":195,"../platform/js":199}],79:[(function(t,e,i){cc.Component.EventHandler=cc.Class({name:"cc.ClickEvent",properties:{target:{default:null,type:cc.Node},component:{default:""},handler:{default:""},customEventData:{default:""}},statics:{emitEvents:function(t){"use strict";var e,i,n;if(arguments.length>0)for(i=0,n=(e=new Array(arguments.length-1)).length;im&&(m=p),A.height>=m&&(p=m,m=A.height,v=A.getAnchorPoint().y),this.horizontalDirection===a.RIGHT_TO_LEFT&&(b=1-A.anchorX),d=d+l*b*A.width+l*this.spacingX;var S=l*(1-b)*A.width;if(e){var E=d+S+l*(l>0?this.paddingRight:this.paddingLeft),w=this.horizontalDirection===a.LEFT_TO_RIGHT&&E>(1-c.x)*t,I=this.horizontalDirection===a.RIGHT_TO_LEFT&&E<-c.x*t;(w||I)&&(A.height>=m?(0===p&&(p=m),f+=p,p=m):(f+=m,p=A.height,m=0),d=_+l*(u+b*A.width),g++)}var R=i(A,f,g);t>=A.width+this.paddingLeft+this.paddingRight&&s&&A.setPosition(cc.p(d,R));var P,O=1,D=0===m?A.height:m;this.verticalDirection===o.TOP_TO_BOTTOM?(y=y||this.node._contentSize.height,(P=R+(O=-1)*(D*v+this.paddingBottom))y&&(y=P)),d+=S}}return y},_getVerticalBaseHeight:function(t){var e=0,i=0;if(this.resizeMode===r.CONTAINER){for(var n=0;nm&&(m=p),A.width>=m&&(p=m,m=A.width,v=A.getAnchorPoint().x),this.verticalDirection===o.TOP_TO_BOTTOM&&(b=1-A.anchorY),d=d+l*b*A.height+l*this.spacingY;var S=l*(1-b)*A.height;if(e){var E=d+S+l*(l>0?this.paddingTop:this.paddingBottom),w=this.verticalDirection===o.BOTTOM_TO_TOP&&E>(1-c.y)*t,I=this.verticalDirection===o.TOP_TO_BOTTOM&&E<-c.y*t;(w||I)&&(A.width>=m?(0===p&&(p=m),f+=p,p=m):(f+=m,p=A.width,m=0),d=_+l*(u+b*A.height),g++)}var R=i(A,f,g);t>=A.height+(this.paddingTop+this.paddingBottom)&&s&&A.setPosition(cc.p(R,d));var P,O=1,D=0===m?A.width:m;this.horizontalDirection===a.RIGHT_TO_LEFT?(O=-1,y=y||this.node._contentSize.width,(P=R+O*(D*v+this.paddingLeft))y&&(y=P)),d+=S}}return y},_doLayoutBasic:function(){for(var t=this.node.children,e=null,i=0;i0&&(this._doLayout(),this._layoutDirty=!1)}});Object.defineProperty(c.prototype,"padding",{get:function(){return cc.warnID(4100),this.paddingLeft},set:function(t){this._N$padding=t,this._migratePaddingData(),this._doLayoutDirty()}}),cc.Layout=e.exports=c}),{"./CCComponent":78}],84:[(function(t,e,i){t("../../clipping-nodes/CCClippingNode"),t("../../clipping-nodes/CCClippingNodeCanvasRenderCmd"),t("../../clipping-nodes/CCClippingNodeWebGLRenderCmd"),t("../../shape-nodes/CCDrawNode");var n=cc._RendererInSG,r=cc.Enum({RECT:0,ELLIPSE:1,IMAGE_STENCIL:2}),s=cc.Class({name:"cc.Mask",extends:n,editor:!1,properties:{_clippingStencil:{default:null,serializable:!1},_type:r.RECT,type:{get:function(){return 
this._type},set:function(t){this._type=t,this._refreshStencil()},type:r,tooltip:!1},spriteFrame:{default:null,type:cc.SpriteFrame,tooltip:!1,notify:function(){this._refreshStencil()}},alphaThreshold:{default:1,type:cc.Float,range:[0,1,.1],slide:!0,tooltip:!1,notify:function(){cc._renderType!==cc.game.RENDER_TYPE_CANVAS?this._sgNode.setAlphaThreshold(this.alphaThreshold):cc.warnID(4201)}},inverted:{default:!1,type:cc.Boolean,tooltip:!1,notify:function(){cc._renderType!==cc.game.RENDER_TYPE_CANVAS?this._sgNode.setInverted(this.inverted):cc.warnID(4202)}},_segements:64,segements:{get:function(){return this._segements},set:function(t){this._segements=cc.clampf(t,3,1e4),this._refreshStencil()},tooltip:!1},_resizeToTarget:{animatable:!1,set:function(t){t&&this._resizeNodeToTargetNode()}}},statics:{Type:r},_resizeNodeToTargetNode:!1,_initSgNode:function(){},_createSgNode:function(){return new cc.ClippingNode},_hitTest:function(t){var e=this.node.getContentSize(),i=e.width,n=e.height,s=this.node.getNodeToWorldTransform();if(this.type===r.RECT||this.type===r.IMAGE_STENCIL){var o=cc.rect(0,0,i,n);cc._rectApplyAffineTransformIn(o,s);var a=t.x-o.x,c=o.x+o.width-t.x,h=t.y-o.y,l=o.y+o.height-t.y;return a>=0&&c>=0&&l>=0&&h>=0}if(this.type===r.ELLIPSE){var u=i/2,_=n/2,d=s.a*u+s.c*_+s.tx,f=s.b*u+s.d*_+s.ty,m=t.x-d,p=t.y-f;return m*m/(u*u)+p*p/(_*_)<1}},onEnable:function(){this._super(),this.spriteFrame&&this.spriteFrame.ensureLoadTexture(),this._refreshStencil(),this.node.on("size-changed",this._refreshStencil,this),this.node.on("anchor-changed",this._refreshStencil,this)},onDisable:function(){this._super(),this.node.off("size-changed",this._refreshStencil,this),this.node.off("anchor-changed",this._refreshStencil,this)},_calculateCircle:function(t,e,i){for(var n=[],r=2*Math.PI/i,s=0;s=this._pages.length?this.addPage(t):(this._pages.splice(e,0,t),this.content.addChild(t),this._updatePageView()))},removePage:function(t){if(t&&this.content){var e=this._pages.indexOf(t);-1!==e?this.removePageAtIndex(e):cc.warnID(4300,t.name)}},removePageAtIndex:function(t){var e=this._pages;if(!(t<0||t>=e.length)){var i=e[t];i&&(this.content.removeChild(i),e.splice(t,1),this._updatePageView())}},removeAllPages:function(){if(this.content){for(var t=this._pages,e=0,i=t.length;e=this._pages.length||(e=void 0!==e?e:.3,this._curPageIdx=t,this.scrollToOffset(this._moveOffsetValue(t),e,!0),this.indicator&&this.indicator._changedState())},getScrollEndedEventTiming:function(){return this.pageTurningEventTiming},_syncScrollDirection:function(){this.horizontal=this.direction===r.Horizontal,this.vertical=this.direction===r.Vertical},_syncSizeMode:function(){if(this.content){var t=this.content.getComponent(cc.Layout);if(t){if(0===this._pages.length)t.padding=0;else{var e=this._pages[this._pages.length-1];this.sizeMode===n.Free&&(this.direction===r.Horizontal?(t.paddingLeft=(this.node.width-this._pages[0].width)/2,t.paddingRight=(this.node.width-e.width)/2):this.direction===r.Vertical&&(t.paddingTop=(this.node.height-this._pages[0].height)/2,t.paddingBottom=(this.node.height-e.height)/2))}t.updateLayout()}}},_updatePageView:function(){var t=this._pages.length;this._curPageIdx>=t&&(this._curPageIdx=0===t?0:t-1,this._lastPageIdx=this._curPageIdx);for(var 
e=0;e=0||this._pages.push(i)}this._syncScrollDirection(),this._syncSizeMode(),this._updatePageView()}},_dispatchPageTurningEvent:function(){this._lastPageIdx!==this._curPageIdx&&(this._lastPageIdx=this._curPageIdx,cc.Component.EventHandler.emitEvents(this.pageEvents,this,s.PAGE_TURNING),this.node.emit("page-turning",this))},_isScrollable:function(t,e,i){if(this.sizeMode===n.Free){var s,o;if(this.direction===r.Horizontal)return s=this._scrollCenterOffsetX[e],o=this._scrollCenterOffsetX[i],Math.abs(t.x)>=Math.abs(s-o)*this.scrollThreshold;if(this.direction===r.Vertical)return s=this._scrollCenterOffsetY[e],o=this._scrollCenterOffsetY[i],Math.abs(t.y)>=Math.abs(s-o)*this.scrollThreshold}else{if(this.direction===r.Horizontal)return Math.abs(t.x)>=this.node.width*this.scrollThreshold;if(this.direction===r.Vertical)return Math.abs(t.y)>=this.node.height*this.scrollThreshold}},_isQuicklyScrollable:function(t){if(this.direction===r.Horizontal){if(Math.abs(t.x)>this.autoPageTurningThreshold)return!0}else if(this.direction===r.Vertical&&Math.abs(t.y)>this.autoPageTurningThreshold)return!0;return!1},_moveOffsetValue:function(t){var e=cc.p(0,0);return this.sizeMode===n.Free?this.direction===r.Horizontal?e.x=this._scrollCenterOffsetX[t]:this.direction===r.Vertical&&(e.y=this._scrollCenterOffsetY[t]):this.direction===r.Horizontal?e.x=t*this.node.width:this.direction===r.Vertical&&(e.y=t*this.node.height),e},_getDragDirection:function(t){return this.direction===r.Horizontal?0===t.x?0:t.x>0?1:-1:this.direction===r.Vertical?0===t.y?0:t.y<0?1:-1:void 0},_handleReleaseLogic:function(t){var e=this._startBounceBackIfNeeded(),i=cc.pSub(this._touchBeganPosition,this._touchEndPosition);if(e){var n=this._getDragDirection(i);if(0===n)return;this._curPageIdx=n>0?this._pages.length-1:0,this.indicator&&this.indicator._changedState()}else{var r=this._curPageIdx,s=r+this._getDragDirection(i),o=this.pageTurningSpeed*Math.abs(r-s);if(s=t.length)){for(var i=0;it.length)for(i=0;i0;--i){var n=t[i-1];this.node.removeChild(n),t.splice(i-1,1)}this._layout&&this._layout.enabledInHierarchy&&this._layout.updateLayout(),this._changedState()}}}});cc.PageViewIndicator=e.exports=r}),{"./CCComponent":78}],87:[(function(t,e,i){var n=cc.Enum({HORIZONTAL:0,VERTICAL:1,FILLED:2}),r=cc.Class({name:"cc.ProgressBar",extends:t("./CCComponent"),editor:!1,_initBarSprite:function(){if(this.barSprite){var t=this.barSprite.node;if(!t)return;var e=this.node.getContentSize(),i=this.node.getAnchorPoint(),r=t.getContentSize();t.parent===this.node&&this.node.setContentSize(r),this.barSprite.fillType===cc.Sprite.FillType.RADIAL&&(this.mode=n.FILLED);var s=t.getContentSize();if(this.mode===n.HORIZONTAL?this.totalLength=s.width:this.mode===n.VERTICAL?this.totalLength=s.height:this.totalLength=this.barSprite.fillRange,t.parent===this.node){var o=-e.width*i.x;t.setPosition(cc.p(o,0))}}},_updateBarStatus:function(){if(this.barSprite){var t=this.barSprite.node;if(!t)return;var e,i,r,s=t.getAnchorPoint(),o=t.getContentSize(),a=t.getPosition(),c=cc.p(0,.5),h=cc.clamp01(this.progress),l=this.totalLength*h;switch(this.mode){case n.HORIZONTAL:this.reverse&&(c=cc.p(1,.5)),e=cc.size(l,o.height),i=this.totalLength,r=o.height;break;case n.VERTICAL:c=this.reverse?cc.p(.5,1):cc.p(.5,0),e=cc.size(o.width,l),i=o.width,r=this.totalLength}if(this.mode===n.FILLED)this.barSprite.type!==cc.Sprite.Type.FILLED?cc.warn("ProgressBar FILLED mode only works when barSprite's Type is FILLED!"):(this.reverse&&(l*=-1),this.barSprite.fillRange=l);else 
if(this.barSprite.type!==cc.Sprite.Type.FILLED){var u=c.x-s.x,_=c.y-s.y,d=cc.p(i*u,r*_);t.setPosition(cc.pAdd(a,d)),t.setAnchorPoint(c),t.setContentSize(e)}else cc.warn("ProgressBar non-FILLED mode only works when barSprite's Type is non-FILLED!")}},properties:{barSprite:{default:null,type:cc.Sprite,tooltip:!1,notify:function(){this._initBarSprite()},animatable:!1},mode:{default:n.HORIZONTAL,type:n,tooltip:!1,notify:function(){if(this.barSprite){var t=this.barSprite.node;if(!t)return;var e=t.getContentSize();this.mode===n.HORIZONTAL?this.totalLength=e.width:this.mode===n.VERTICAL?this.totalLength=e.height:this.mode===n.FILLED&&(this.totalLength=this.barSprite.fillRange)}},animatable:!1},_N$totalLength:1,totalLength:{range:[0,Number.MAX_VALUE],tooltip:!1,get:function(){return this._N$totalLength},set:function(t){this.mode===n.FILLED&&(t=cc.clamp01(t)),this._N$totalLength=t,this._updateBarStatus()}},progress:{default:1,type:"Float",range:[0,1,.1],slide:!0,tooltip:!1,notify:function(){this._updateBarStatus()}},reverse:{default:!1,tooltip:!1,notify:function(){this.barSprite&&(this.barSprite.fillStart=1-this.barSprite.fillStart),this._updateBarStatus()},animatable:!1}},statics:{Mode:n}});cc.ProgressBar=e.exports=r}),{"./CCComponent":78}],88:[(function(t,e,i){var n=cc.Class({extends:t("./CCSGComponent"),name:"cc._RendererInSG",ctor:function(){var t=this._sgNode=this._createSgNode();t.setVisible(!1),this._plainNode=new _ccsg.Node},__preload:function(){this._initSgNode()},onEnable:function(){this._replaceSgNode(this._sgNode)},onDisable:function(){this._replaceSgNode(this._plainNode)},onDestroy:function(){this._removeSgNode()},_replaceSgNode:function(t){var e=this.node,i=e._sgNode;i._entity=null;var n=i.getChildren().slice();i.removeAllChildren(!1),t.getChildrenCount()>0&&t.removeAllChildren(!1);for(var r=0,s=n.length;rRichText",multiline:!0,tooltip:!1,notify:function(){this._updateRichTextStatus()}},horizontalAlign:{default:n.LEFT,type:n,tooltip:!1,animatable:!1,notify:function(t){this.horizontalAlign!==t&&(this._layoutDirty=!0,this._updateRichTextStatus())}},fontSize:{default:40,tooltip:!1,notify:function(t){this.fontSize!==t&&(this._layoutDirty=!0,this._updateRichTextStatus())}},font:{default:null,type:cc.TTFFont,tooltip:!1,notify:function(t){this.font!==t&&(this._layoutDirty=!0,this.font&&this._onTTFLoaded(),this._updateRichTextStatus())}},maxWidth:{default:0,tooltip:!1,notify:function(t){this.maxWidth!==t&&(this._layoutDirty=!0,this._updateRichTextStatus())}},lineHeight:{default:40,tooltip:!1,notify:function(t){this.lineHeight!==t&&(this._layoutDirty=!0,this._updateRichTextStatus())}},imageAtlas:{default:null,type:cc.SpriteAtlas,tooltip:!1,notify:function(t){this.imageAtlas!==t&&(this._layoutDirty=!0,this._updateRichTextStatus())}},handleTouchEvent:{default:!0,tooltip:!1,notify:function(t){this.handleTouchEvent!==t&&this.enabledInHierarchy&&(this.handleTouchEvent?this._addEventListeners():this._removeEventListeners())}}},statics:{HorizontalAlign:n,VerticalAlign:r},onEnable:function(){this._super(),this.handleTouchEvent&&this._addEventListeners()},onDisable:function(){this._super(),this.handleTouchEvent&&this._removeEventListeners()},_addEventListeners:function(){this.node.on(cc.Node.EventType.TOUCH_END,this._onTouchEnded,this)},_removeEventListeners:function(){this.node.off(cc.Node.EventType.TOUCH_END,this._onTouchEnded,this)},_createSgNode:function(){var t=new _ccsg.Node;t.setCascadeOpacityEnabled(!0);var e=this;return 
t.setColor=function(){e._updateLabelSegmentTextAttributes()},t._setContentSize=t.setContentSize,t.setContentSize=function(){},t},_updateLabelSegmentTextAttributes:function(){this._labelSegments.forEach(function(t){this._applyTextAttribute(t)}.bind(this))},_initSgNode:function(){this._updateRichText(),this._onTTFLoaded()},_createFontLabel:function(t){return _ccsg.Label.pool.get(t,this.font,null,this.fontSize)},_getFontRawUrl:function(){return this.font instanceof cc.TTFFont?this.font.nativeUrl:""},_onTTFLoaded:function(){var t=this._getFontRawUrl();if(t){var e=this;cc.CustomFontLoader.loadTTF(t,(function(){e._layoutDirty=!0,e._updateRichText()}))}},_measureText:function(t,e){var i=this,n=function(e){var n;return 0===i._labelSegmentsCache.length?(n=i._createFontLabel(e),i._labelSegmentsCache.push(n)):(n=i._labelSegmentsCache[0]).setString(e),n._styleIndex=t,i._applyTextAttribute(n),n.getContentSize().width};return e?n(e):n},_onTouchEnded:function(t){for(var e=this.node.getComponents(cc.Component),i=0;i0&&n+this._lineOffsetX>this.maxWidth)for(var r=0;this._lineOffsetX<=this.maxWidth;){var s=this._getFirstWordLen(t,r,t.length),o=t.substr(r,s),a=this._measureText(i,o);if(!(this._lineOffsetX+a<=this.maxWidth)){if(r>0){var c=t.substr(0,r);this._addLabelSegment(c,i),t=t.substr(r,t.length),n=this._measureText(i,t)}this._updateLineInfo();break}this._lineOffsetX+=a,r+=s}if(n>this.maxWidth)for(var h=cc.TextUtils.fragmentText(t,n,this.maxWidth,this._measureText(i)),l=0;l1&&l0&&h0&&(o=c),this.maxWidth>0?(this._lineOffsetX+o>this.maxWidth&&this._updateLineInfo(),this._lineOffsetX+=o):(this._lineOffsetX+=o,this._lineOffsetX>this._labelWidth&&(this._labelWidth=this._lineOffsetX)),this._applySpriteFrame(i),n.setContentSize(o,a),n._lineCount=this._lineCount,t.style.event&&t.style.event.click&&(n._clickHandler=t.style.event.click)}else cc.warnID(4400)},_updateRichText:function(){if(this.enabled){var t=cc.htmlTextParser.parse(this.string);if(!this._needsUpdateTextLayout(t))return this._textArray=t,void this._updateLabelSegmentTextAttributes();this._textArray=t,this._resetState();for(var e,i=!1,n=0;n0){var h=this._measureText(n,c);this._updateRichTextWithMaxWidth(c,h,n),o.length>1&&athis._labelWidth&&(this._labelWidth=this._lineOffsetX),o.length>1&&a0&&(this._labelWidth=this.maxWidth),this._labelHeight=this._lineCount*this.lineHeight,this.node.setContentSize(this._labelWidth,this._labelHeight),this._sgNode._setContentSize(this._labelWidth,this._labelHeight),this._updateRichTextPosition(),this._layoutDirty=!1}},_getFirstWordLen:function(t,e,i){var n=t.charAt(e);if(cc.TextUtils.isUnicodeCJK(n)||cc.TextUtils.isUnicodeSpace(n))return 1;for(var r=1,s=e+1;se&&(t=0,e=s);var o=0;switch(this.horizontalAlign){case cc.TextAlignment.LEFT:o=0;break;case cc.TextAlignment.CENTER:o=(this._labelWidth-this._linesWidth[s-1])/2;break;case cc.TextAlignment.RIGHT:o=this._labelWidth-this._linesWidth[s-1]}r.setPositionX(t+o);var a=r.getContentSize(),c=(i-s)*this.lineHeight;r instanceof cc.Scale9Sprite&&(c+=(this.lineHeight-r.getContentSize().height)/2),r.setPositionY(c),s===e&&(t+=a.width)}},_convertLiteralColorValue:function(t){var e=t.toUpperCase();return cc.Color[e]?cc.Color[e]:cc.hexToColor(t)},_applyTextAttribute:function(t){if(!(t instanceof cc.Scale9Sprite)){var e=t._styleIndex;t.setLineHeight(this.lineHeight),t.setVerticalAlign(r.CENTER);var 
i=null;this._textArray[e]&&(i=this._textArray[e].style),i&&i.color?t.setColor(this._convertLiteralColorValue(i.color)):t.setColor(this.node.color),i&&i.bold?t.enableBold(!0):t.enableBold(!1),i&&i.italic?t.enableItalics(!0):t.enableItalics(!1),i&&i.underline?t.enableUnderline(!0):t.enableUnderline(!1),i&&i.outline?(t.setOutlined(!0),t.setOutlineColor(this._convertLiteralColorValue(i.outline.color)),t.setOutlineWidth(i.outline.width),t.setMargin(i.outline.width)):(t.setOutlined(!1),t.setMargin(0)),i&&i.size?t.setFontSize(i.size):t.setFontSize(this.fontSize),i&&i.event&&i.event.click&&(t._clickHandler=i.event.click)}},onDestroy:function(){this._super();for(var t=0;t0?n:-n)),i*(e/r)},_calculatePosition:function(t,e,i,r,s,o){var a=t-e;s&&(a+=Math.abs(s));var c=0;a&&(c=r/a,c=cc.clamp01(c));var h=(i-o)*c;return this.direction===n.VERTICAL?cc.p(0,h):cc.p(h,0)},_updateLength:function(t){if(this.handle){var e=this.handle.node,i=e.getContentSize();e.setAnchorPoint(cc.p(0,0)),this.direction===n.HORIZONTAL?e.setContentSize(t,i.height):e.setContentSize(i.width,t)}},_processAutoHide:function(t){if(this.enableAutoHide&&!(this._autoHideRemainingTime<=0)&&!this._touching&&(this._autoHideRemainingTime-=t,this._autoHideRemainingTime<=this.autoHideTime)){this._autoHideRemainingTime=Math.max(0,this._autoHideRemainingTime);var e=this._opacity*(this._autoHideRemainingTime/this.autoHideTime);this._setOpacity(e)}},start:function(){this.enableAutoHide&&this._setOpacity(0)},hide:function(){this._autoHideRemainingTime=0,this._setOpacity(0)},show:function(){this._autoHideRemainingTime=this.autoHideTime,this._setOpacity(this._opacity)},update:function(t){this._processAutoHide(t)}});cc.Scrollbar=e.exports=r}),{"./CCComponent":78}],93:[(function(t,e,i){var n=function(){return(new Date).getMilliseconds()},r=cc.Enum({SCROLL_TO_TOP:0,SCROLL_TO_BOTTOM:1,SCROLL_TO_LEFT:2,SCROLL_TO_RIGHT:3,SCROLLING:4,BOUNCE_TOP:5,BOUNCE_BOTTOM:6,BOUNCE_LEFT:7,BOUNCE_RIGHT:8,SCROLL_ENDED:9,TOUCH_UP:10,AUTOSCROLL_ENDED_WITH_THRESHOLD:11,SCROLL_BEGAN:12}),s={"scroll-to-top":r.SCROLL_TO_TOP,"scroll-to-bottom":r.SCROLL_TO_BOTTOM,"scroll-to-left":r.SCROLL_TO_LEFT,"scroll-to-right":r.SCROLL_TO_RIGHT,scrolling:r.SCROLLING,"bounce-bottom":r.BOUNCE_BOTTOM,"bounce-left":r.BOUNCE_LEFT,"bounce-right":r.BOUNCE_RIGHT,"bounce-top":r.BOUNCE_TOP,"scroll-ended":r.SCROLL_ENDED,"touch-up":r.TOUCH_UP,"scroll-ended-with-threshold":r.AUTOSCROLL_ENDED_WITH_THRESHOLD,"scroll-began":r.SCROLL_BEGAN},o=cc.Class({name:"cc.ScrollView",extends:t("./CCViewGroup"),editor:!1,ctor:function(){this._topBoundary=0,this._bottomBoundary=0,this._leftBoundary=0,this._rightBoundary=0,this._touchMoveDisplacements=[],this._touchMoveTimeDeltas=[],this._touchMovePreviousTimestamp=0,this._touchMoved=!1,this._autoScrolling=!1,this._autoScrollAttenuate=!1,this._autoScrollStartPosition=cc.p(0,0),this._autoScrollTargetDelta=cc.p(0,0),this._autoScrollTotalTime=0,this._autoScrollAccumulatedTime=0,this._autoScrollCurrentlyOutOfBoundary=!1,this._autoScrollBraking=!1,this._autoScrollBrakingStartPosition=cc.p(0,0),this._outOfBoundaryAmount=cc.p(0,0),this._outOfBoundaryAmountDirty=!0,this._stopMouseWheel=!1,this._mouseWheelEventElapsedTime=0,this._isScrollEndedWithThresholdEventFired=!1,this._scrollEventEmitMask=0,this._isBouncing=!1,this._scrolling=!1},properties:{content:{default:void 
0,type:cc.Node,tooltip:!1},horizontal:{default:!0,animatable:!1,tooltip:!1},vertical:{default:!0,animatable:!1,tooltip:!1},inertia:{default:!0,tooltip:!1},brake:{default:.5,type:"Float",range:[0,1,.1],tooltip:!1},elastic:{default:!0,animatable:!1,tooltip:!1},bounceDuration:{default:1,range:[0,10],tooltip:!1},horizontalScrollBar:{default:void 0,type:cc.Scrollbar,tooltip:!1,notify:function(){this.horizontalScrollBar&&(this.horizontalScrollBar.setTargetScrollView(this),this._updateScrollBar(0))},animatable:!1},verticalScrollBar:{default:void 0,type:cc.Scrollbar,tooltip:!1,notify:function(){this.verticalScrollBar&&(this.verticalScrollBar.setTargetScrollView(this),this._updateScrollBar(0))},animatable:!1},scrollEvents:{default:[],type:cc.Component.EventHandler,tooltip:!1},cancelInnerEvents:{default:!0,animatable:!1,tooltip:!1}},statics:{EventType:r},scrollToBottom:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(0,0),applyToHorizontal:!1,applyToVertical:!0});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i,!0)},scrollToTop:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(0,1),applyToHorizontal:!1,applyToVertical:!0});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToLeft:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(0,0),applyToHorizontal:!0,applyToVertical:!1});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToRight:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(1,0),applyToHorizontal:!0,applyToVertical:!1});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToTopLeft:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(0,1),applyToHorizontal:!0,applyToVertical:!0});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToTopRight:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(1,1),applyToHorizontal:!0,applyToVertical:!0});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToBottomLeft:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(0,0),applyToHorizontal:!0,applyToVertical:!0});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToBottomRight:function(t,e){var i=this._calculateMovePercentDelta({anchor:cc.p(1,0),applyToHorizontal:!0,applyToVertical:!0});t?this._startAutoScroll(i,t,!1!==e):this._moveContent(i)},scrollToOffset:function(t,e,i){var n=this.getMaxScrollOffset(),r=cc.p(0,0);0===n.x?r.x=0:r.x=t.x/n.x,0===n.y?r.y=1:r.y=(n.y-t.y)/n.y,this.scrollTo(r,e,i)},getScrollOffset:function(){var t=this._getContentTopBoundary()-this._topBoundary,e=this._getContentLeftBoundary()-this._leftBoundary;return cc.p(e,t)},getMaxScrollOffset:function(){var t=this.node.getContentSize(),e=this.content.getContentSize(),i=e.width-t.width,n=e.height-t.height;return i=i>=0?i:0,n=n>=0?n:0,cc.p(i,n)},scrollToPercentHorizontal:function(t,e,i){var n=this._calculateMovePercentDelta({anchor:cc.p(t,0),applyToHorizontal:!0,applyToVertical:!1});e?this._startAutoScroll(n,e,!1!==i):this._moveContent(n)},scrollTo:function(t,e,i){var n=this._calculateMovePercentDelta({anchor:t,applyToHorizontal:!0,applyToVertical:!0});e?this._startAutoScroll(n,e,!1!==i):this._moveContent(n)},scrollToPercentVertical:function(t,e,i){var 
n=this._calculateMovePercentDelta({anchor:cc.p(0,t),applyToHorizontal:!1,applyToVertical:!0});e?this._startAutoScroll(n,e,!1!==i):this._moveContent(n)},stopAutoScroll:function(){this._autoScrolling=!1,this._autoScrollAccumulatedTime=this._autoScrollTotalTime},setContentPosition:function(t){cc.pFuzzyEqual(t,this.getContentPosition(),1e-4)||(this.content.setPosition(t),this._outOfBoundaryAmountDirty=!0)},getContentPosition:function(){return this.content.getPosition()},isScrolling:function(){return this._scrolling},isAutoScrolling:function(){return this._autoScrolling},_registerEvent:function(){this.node.on(cc.Node.EventType.TOUCH_START,this._onTouchBegan,this,!0),this.node.on(cc.Node.EventType.TOUCH_MOVE,this._onTouchMoved,this,!0),this.node.on(cc.Node.EventType.TOUCH_END,this._onTouchEnded,this,!0),this.node.on(cc.Node.EventType.TOUCH_CANCEL,this._onTouchCancelled,this,!0),this.node.on(cc.Node.EventType.MOUSE_WHEEL,this._onMouseWheel,this,!0)},_unregisterEvent:function(){this.node.off(cc.Node.EventType.TOUCH_START,this._onTouchBegan,this,!0),this.node.off(cc.Node.EventType.TOUCH_MOVE,this._onTouchMoved,this,!0),this.node.off(cc.Node.EventType.TOUCH_END,this._onTouchEnded,this,!0),this.node.off(cc.Node.EventType.TOUCH_CANCEL,this._onTouchCancelled,this,!0),this.node.off(cc.Node.EventType.MOUSE_WHEEL,this._onMouseWheel,this,!0)},_onMouseWheel:function(t,e){if(this.enabledInHierarchy&&!this._hasNestedViewGroup(t,e)){var i=cc.p(0,0),n=-.1;0,this.vertical?i=cc.p(0,t.getScrollY()*n):this.horizontal&&(i=cc.p(t.getScrollY()*n,0)),this._mouseWheelEventElapsedTime=0,this._processDeltaMove(i),this._stopMouseWheel||(this._handlePressLogic(),this.schedule(this._checkMouseWheel,1/60),this._stopMouseWheel=!0),this._stopPropagationIfTargetIsMe(t)}},_checkMouseWheel:function(t){var e=this._getHowMuchOutOfBoundary();if(!cc.pFuzzyEqual(e,cc.p(0,0),1e-4))return this._processInertiaScroll(),this.unschedule(this._checkMouseWheel),void(this._stopMouseWheel=!1);this._mouseWheelEventElapsedTime+=t,this._mouseWheelEventElapsedTime>.1&&(this._onScrollBarTouchEnded(),this.unschedule(this._checkMouseWheel),this._stopMouseWheel=!1)},_calculateMovePercentDelta:function(t){var e=t.anchor,i=t.applyToHorizontal,n=t.applyToVertical;this._calculateBoundary(),e=cc.pClamp(e,cc.p(0,0),cc.p(1,1));var r=this.node.getContentSize(),s=this.content.getContentSize(),o=this._getContentBottomBoundary()-this._bottomBoundary;o=-o;var a=this._getContentLeftBoundary()-this._leftBoundary;a=-a;var c=cc.p(0,0),h=0;return i&&(h=s.width-r.width,c.x=a-h*e.x),n&&(h=s.height-r.height,c.y=o-h*e.y),c},_moveContentToTopLeft:function(t){var e=this.content.getContentSize(),i=this._getContentBottomBoundary()-this._bottomBoundary;i=-i;var n=cc.p(0,0),r=0,s=this._getContentLeftBoundary()-this._leftBoundary;s=-s,e.height7&&!this._touchMoved&&t.target!==this.node){var r=new cc.Event.EventTouch(t.getTouches(),t.bubbles);r.type=cc.Node.EventType.TOUCH_CANCEL,r.touch=t.touch,r.simulate=!0,t.target.dispatchEvent(r),this._touchMoved=!0}this._stopPropagationIfTargetIsMe(t)}}},_onTouchEnded:function(t,e){if(this.enabledInHierarchy&&!this._hasNestedViewGroup(t,e)){this._dispatchEvent("touch-up");var i=t.touch;this.content&&this._handleReleaseLogic(i),this._touchMoved?t.stopPropagation():this._stopPropagationIfTargetIsMe(t)}},_onTouchCancelled:function(t,e){if(this.enabledInHierarchy&&!this._hasNestedViewGroup(t,e)){if(!t.simulate){var 
i=t.touch;this.content&&this._handleReleaseLogic(i)}this._stopPropagationIfTargetIsMe(t)}},_processDeltaMove:function(t){this._scrollChildren(t),this._gatherTouchMove(t)},_handleMoveLogic:function(t){var e=t.getDelta();this._processDeltaMove(e)},_scrollChildren:function(t){var e,i=t=this._clampDelta(t);this.elastic&&(e=this._getHowMuchOutOfBoundary(),i.x*=0===e.x?1:.5,i.y*=0===e.y?1:.5),this.elastic||(e=this._getHowMuchOutOfBoundary(i),i=cc.pAdd(i,e));var n=-1;if(i.y>0)this.content.y-this.content.anchorY*this.content.height+i.y>this._bottomBoundary&&(n="scroll-to-bottom");else if(i.y<0){this.content.y-this.content.anchorY*this.content.height+this.content.height+i.y<=this._topBoundary&&(n="scroll-to-top")}else if(i.x<0){this.content.x-this.content.anchorX*this.content.width+this.content.width+i.x<=this._rightBoundary&&(n="scroll-to-right")}else if(i.x>0){this.content.x-this.content.anchorX*this.content.width+i.x>=this._leftBoundary&&(n="scroll-to-left")}this._moveContent(i,!1),0===i.x&&0===i.y||(this._scrolling||(this._scrolling=!0,this._dispatchEvent("scroll-began")),this._dispatchEvent("scrolling")),-1!==n&&this._dispatchEvent(n)},_handlePressLogic:function(){this._autoScrolling&&this._dispatchEvent("scroll-ended"),this._autoScrolling=!1,this._isBouncing=!1,this._touchMovePreviousTimestamp=n(),this._touchMoveDisplacements.length=0,this._touchMoveTimeDeltas.length=0,this._onScrollBarTouchBegan()},_clampDelta:function(t){var e=this.content.getContentSize(),i=this.node.getContentSize();return e.width=5;)this._touchMoveDisplacements.shift(),this._touchMoveTimeDeltas.shift();this._touchMoveDisplacements.push(t);var e=n();this._touchMoveTimeDeltas.push((e-this._touchMovePreviousTimestamp)/1e3),this._touchMovePreviousTimestamp=e},_startBounceBackIfNeeded:function(){if(!this.elastic)return!1;var t=this._getHowMuchOutOfBoundary();if(t=this._clampDelta(t),cc.pFuzzyEqual(t,cc.p(0,0),1e-4))return!1;var e=Math.max(this.bounceDuration,0);return this._startAutoScroll(t,e,!0),this._isBouncing||(t.y>0&&this._dispatchEvent("bounce-top"),t.y<0&&this._dispatchEvent("bounce-bottom"),t.x>0&&this._dispatchEvent("bounce-right"),t.x<0&&this._dispatchEvent("bounce-left"),this._isBouncing=!0),!0},_processInertiaScroll:function(){if(!this._startBounceBackIfNeeded()&&this.inertia){var t=this._calculateTouchMoveVelocity();!cc.pFuzzyEqual(t,cc.p(0,0),1e-4)&&this.brake<1&&this._startInertiaScroll(t)}this._onScrollBarTouchEnded()},_handleReleaseLogic:function(t){var e=t.getDelta();this._gatherTouchMove(e),this._processInertiaScroll(),this._scrolling&&(this._scrolling=!1,this._autoScrolling||this._dispatchEvent("scroll-ended"))},_isOutOfBoundary:function(){var t=this._getHowMuchOutOfBoundary();return!cc.pFuzzyEqual(t,cc.p(0,0),1e-4)},_isNecessaryAutoScrollBrake:function(){if(this._autoScrollBraking)return!0;if(this._isOutOfBoundary()){if(!this._autoScrollCurrentlyOutOfBoundary)return this._autoScrollCurrentlyOutOfBoundary=!0,this._autoScrollBraking=!0,this._autoScrollBrakingStartPosition=this.getContentPosition(),!0}else this._autoScrollCurrentlyOutOfBoundary=!1;return!1},getScrollEndedEventTiming:function(){return 1e-4},_processAutoScrolling:function(t){var e=this._isNecessaryAutoScrollBrake(),i=e?.05:1;this._autoScrollAccumulatedTime+=t*(1/i);var n=Math.min(1,this._autoScrollAccumulatedTime/this._autoScrollTotalTime);this._autoScrollAttenuate&&(n=(function(t){return(t-=1)*t*t*t*t+1})(n));var 
r=cc.pAdd(this._autoScrollStartPosition,cc.pMult(this._autoScrollTargetDelta,n)),s=Math.abs(n-1)<=1e-4;if(Math.abs(n-1)<=this.getScrollEndedEventTiming()&&!this._isScrollEndedWithThresholdEventFired&&(this._dispatchEvent("scroll-ended-with-threshold"),this._isScrollEndedWithThresholdEventFired=!0),this.elastic){var o=cc.pSub(r,this._autoScrollBrakingStartPosition);e&&(o=cc.pMult(o,i)),r=cc.pAdd(this._autoScrollBrakingStartPosition,o)}else{var a=cc.pSub(r,this.getContentPosition()),c=this._getHowMuchOutOfBoundary(a);cc.pFuzzyEqual(c,cc.p(0,0),1e-4)||(r=cc.pAdd(r,c),s=!0)}s&&(this._autoScrolling=!1);var h=cc.pSub(r,this.getContentPosition());this._moveContent(this._clampDelta(h),s),this._dispatchEvent("scrolling"),this._autoScrolling||(this._isBouncing=!1,this._dispatchEvent("scroll-ended"))},_startInertiaScroll:function(t){var e=cc.pMult(t,.7);this._startAttenuatingAutoScroll(e,t)},_calculateAttenuatedFactor:function(t){return this.brake<=0?1-this.brake:(1-this.brake)*(1/(1+14e-6*t+t*t*8e-9))},_startAttenuatingAutoScroll:function(t,e){var i=this._calculateAutoScrollTimeByInitalSpeed(cc.pLength(e)),n=cc.pNormalize(t),r=this.content.getContentSize(),s=this.node.getContentSize(),o=r.width-s.width,a=r.height-s.height,c=this._calculateAttenuatedFactor(o),h=this._calculateAttenuatedFactor(a);n=cc.p(n.x*o*(1-this.brake)*c,n.y*a*h*(1-this.brake));var l=cc.pLength(t),u=cc.pLength(n)/l;n=cc.pAdd(n,t),this.brake>0&&u>7&&(u=Math.sqrt(u),n=cc.pAdd(cc.pMult(t,u),t)),this.brake>0&&u>3&&(i*=u=3),0===this.brake&&u>1&&(i*=u),this._startAutoScroll(n,i,!0)},_calculateAutoScrollTimeByInitalSpeed:function(t){return Math.sqrt(Math.sqrt(t/5))},_startAutoScroll:function(t,e,i){var n=this._flattenVectorByDirection(t);this._autoScrolling=!0,this._autoScrollTargetDelta=n,this._autoScrollAttenuate=i,this._autoScrollStartPosition=this.getContentPosition(),this._autoScrollTotalTime=e,this._autoScrollAccumulatedTime=0,this._autoScrollBraking=!1,this._isScrollEndedWithThresholdEventFired=!1,this._autoScrollBrakingStartPosition=cc.p(0,0);var r=this._getHowMuchOutOfBoundary();if(!cc.pFuzzyEqual(r,cc.p(0,0),1e-4)){this._autoScrollCurrentlyOutOfBoundary=!0;var s=this._getHowMuchOutOfBoundary(n);(r.x*s.x>0||r.y*s.y>0)&&(this._autoScrollBraking=!0)}},_calculateTouchMoveVelocity:function(){var t=0;if((t=this._touchMoveTimeDeltas.reduce((function(t,e){return t+e}),t))<=0||t>=.5)return cc.p(0,0);var e=cc.p(0,0);return e=this._touchMoveDisplacements.reduce((function(t,e){return cc.pAdd(t,e)}),e),cc.p(e.x*(1-this.brake)/t,e.y*(1-this.brake)/t)},_flattenVectorByDirection:function(t){var e=t;return e.x=this.horizontal?e.x:0,e.y=this.vertical?e.y:0,e},_moveContent:function(t,e){var i=this._flattenVectorByDirection(t),n=cc.pAdd(this.getContentPosition(),i);this.setContentPosition(n);var r=this._getHowMuchOutOfBoundary();this._updateScrollBar(r),this.elastic&&e&&this._startBounceBackIfNeeded()},_getContentLeftBoundary:function(){return this.getContentPosition().x-this.content.getAnchorPoint().x*this.content.getContentSize().width},_getContentRightBoundary:function(){var t=this.content.getContentSize();return this._getContentLeftBoundary()+t.width},_getContentTopBoundary:function(){var t=this.content.getContentSize();return this._getContentBottomBoundary()+t.height},_getContentBottomBoundary:function(){return this.getContentPosition().y-this.content.getAnchorPoint().y*this.content.getContentSize().height},_getHowMuchOutOfBoundary:function(t){if(t=t||cc.p(0,0),cc.pFuzzyEqual(t,cc.p(0,0),1e-4)&&!this._outOfBoundaryAmountDirty)return 
this._outOfBoundaryAmount;var e=cc.p(0,0);return this._getContentLeftBoundary()+t.x>this._leftBoundary?e.x=this._leftBoundary-(this._getContentLeftBoundary()+t.x):this._getContentRightBoundary()+t.xthis._bottomBoundary&&(e.y=this._bottomBoundary-(this._getContentBottomBoundary()+t.y)),cc.pFuzzyEqual(t,cc.p(0,0),1e-4)&&(this._outOfBoundaryAmount=e,this._outOfBoundaryAmountDirty=!1),e=this._clampDelta(e)},_updateScrollBar:function(t){this.horizontalScrollBar&&this.horizontalScrollBar._onScroll(t),this.verticalScrollBar&&this.verticalScrollBar._onScroll(t)},_onScrollBarTouchBegan:function(){this.horizontalScrollBar&&this.horizontalScrollBar._onTouchBegan(),this.verticalScrollBar&&this.verticalScrollBar._onTouchBegan()},_onScrollBarTouchEnded:function(){this.horizontalScrollBar&&this.horizontalScrollBar._onTouchEnded(),this.verticalScrollBar&&this.verticalScrollBar._onTouchEnded()},_dispatchEvent:function(t){if("scroll-ended"===t)this._scrollEventEmitMask=0;else if("scroll-to-top"===t||"scroll-to-bottom"===t||"scroll-to-left"===t||"scroll-to-right"===t){var e=1<0&&t[0].check()}},onEnable:function(){this.node.on("child-added",this._allowOnlyOneToggleChecked,this),this.node.on("child-removed",this._makeAtLeastOneToggleChecked,this)},onDisable:function(){this.node.off("child-added",this._allowOnlyOneToggleChecked,this),this.node.off("child-removed",this._makeAtLeastOneToggleChecked,this)},start:function(){this._makeAtLeastOneToggleChecked()}});t("../platform/js").get(n.prototype,"toggleItems",(function(){return this.node.getComponentsInChildren(cc.Toggle)})),cc.ToggleContainer=e.exports=n}),{"../platform/js":199}],100:[(function(t,e,i){var n=cc.Class({name:"cc.ToggleGroup",extends:cc.Component,ctor:function(){this._toggleItems=[]},editor:!1,properties:{allowSwitchOff:{tooltip:!1,default:!1},toggleItems:{get:function(){return this._toggleItems}}},updateToggles:function(t){this.enabledInHierarchy&&this._toggleItems.forEach((function(e){t.isChecked&&e!==t&&e.isChecked&&e.enabled&&(e.isChecked=!1)}))},addToggle:function(t){-1===this._toggleItems.indexOf(t)&&this._toggleItems.push(t),this._allowOnlyOneToggleChecked()},removeToggle:function(t){var e=this._toggleItems.indexOf(t);e>-1&&this._toggleItems.splice(e,1),this._makeAtLeastOneToggleChecked()},_allowOnlyOneToggleChecked:function(){var t=!1;return this._toggleItems.forEach((function(e){t&&e.enabled&&(e.isChecked=!1),e.isChecked&&e.enabled&&(t=!0)})),t},_makeAtLeastOneToggleChecked:function(){this._allowOnlyOneToggleChecked()||this.allowSwitchOff||this._toggleItems.length>0&&(this._toggleItems[0].isChecked=!0)},start:function(){this._makeAtLeastOneToggleChecked()}}),r=(t("../platform/js"),!1);cc.js.get(cc,"ToggleGroup",(function(){return r||(cc.logID(1405,"cc.ToggleGroup","cc.ToggleContainer"),r=!0),n})),cc.ToggleGroup=e.exports=n}),{"../platform/js":199}],101:[(function(t,e,i){t("../videoplayer/CCSGVideoPlayer");var n=_ccsg.VideoPlayer.EventType,r=cc.Enum({REMOTE:0,LOCAL:1}),s=cc.Class({name:"cc.VideoPlayer",extends:cc._RendererUnderSG,editor:!1,properties:{_resourceType:r.REMOTE,resourceType:{tooltip:!1,type:r,set:function(t){this._resourceType=t,this._updateVideoSource()},get:function(){return this._resourceType}},_remoteURL:"",remoteURL:{tooltip:!1,type:cc.String,set:function(t){this._remoteURL=t,this._updateVideoSource()},get:function(){return this._remoteURL}},_clip:{default:null,type:cc.Asset},clip:{tooltip:!1,get:function(){return this._clip},set:function(t){"string"==typeof 
t&&(cc.errorID(8401,"cc.VideoPlayer","cc.Asset","Asset","cc.Asset","video"),t={nativeUrl:t}),this._clip=t,this._updateVideoSource()},type:cc.Asset},currentTime:{tooltip:!1,type:cc.Float,set:function(t){this._sgNode&&this._sgNode.seekTo(t)},get:function(){return this._sgNode?this._sgNode.currentTime():-1}},_volume:1,volume:{get:function(){return this._volume},set:function(t){this._volume=t,this.isPlaying()&&!this._mute&&this._syncVolume()},range:[0,1],type:cc.Float,tooltip:!1},_mute:!1,mute:{get:function(){return this._mute},set:function(t){this._mute=t,this._syncVolume()},tooltip:!1},keepAspectRatio:{tooltip:!1,default:!0,type:cc.Boolean,notify:function(){this._sgNode.setKeepAspectRatioEnabled(this.keepAspectRatio)}},isFullscreen:{tooltip:!1,default:!1,type:cc.Boolean,notify:function(){this._sgNode.setFullScreenEnabled(this.isFullscreen)}},videoPlayerEvent:{default:[],type:cc.Component.EventHandler}},statics:{EventType:n,ResourceType:r},onLoad:function(){0},_syncVolume:function(){var t=this._sgNode;if(t){var e=this._mute?0:this._volume;t.setVolume(e)}},_createSgNode:function(){return new _ccsg.VideoPlayer},_updateVideoSource:function(){var t=this._sgNode,e="";this.resourceType===r.REMOTE?e=this.remoteURL:this._clip&&(e=this._clip.nativeUrl||""),e&&cc.loader.md5Pipe&&(e=cc.loader.md5Pipe.transformURL(e)),t.setURL(e)},_initSgNode:function(){var t=this._sgNode;t&&(t.createDomElementIfNeeded(),this._updateVideoSource(),t.seekTo(this.currentTime),t.setKeepAspectRatioEnabled(this.keepAspectRatio),t.setFullScreenEnabled(this.isFullscreen),t.setContentSize(this.node.getContentSize()),this.pause(),t.setEventListener(n.PLAYING,this.onPlaying.bind(this)),t.setEventListener(n.PAUSED,this.onPasued.bind(this)),t.setEventListener(n.STOPPED,this.onStopped.bind(this)),t.setEventListener(n.COMPLETED,this.onCompleted.bind(this)),t.setEventListener(n.META_LOADED,this.onMetaLoaded.bind(this)),t.setEventListener(n.CLICKED,this.onClicked.bind(this)),t.setEventListener(n.READY_TO_PLAY,this.onReadyToPlay.bind(this)))},onReadyToPlay:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.READY_TO_PLAY),this.node.emit("ready-to-play",this)},onMetaLoaded:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.META_LOADED),this.node.emit("meta-loaded",this)},onClicked:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.CLICKED),this.node.emit("clicked",this)},onPlaying:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.PLAYING),this.node.emit("playing",this)},onPasued:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.PAUSED),this.node.emit("paused",this)},onStopped:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.STOPPED),this.node.emit("stopped",this)},onCompleted:function(){cc.Component.EventHandler.emitEvents(this.videoPlayerEvent,this,n.COMPLETED),this.node.emit("completed",this)},play:function(){this._sgNode&&(this._syncVolume(),this._sgNode.play())},resume:function(){this._sgNode&&(this._syncVolume(),this._sgNode.resume())},pause:function(){this._sgNode&&this._sgNode.pause()},stop:function(){this._sgNode&&this._sgNode.stop()},getDuration:function(){return this._sgNode?this._sgNode.duration():-1},isPlaying:function(){return!!this._sgNode&&this._sgNode.isPlaying()}});cc.VideoPlayer=e.exports=s}),{"../videoplayer/CCSGVideoPlayer":242}],102:[(function(t,e,i){var 
n=cc.Class({name:"cc.ViewGroup",extends:t("./CCComponent")});cc.ViewGroup=e.exports=n}),{"./CCComponent":78}],103:[(function(t,e,i){t("../webview/CCSGWebView");var n=_ccsg.WebView.EventType;function r(){}var s=cc.Class({name:"cc.WebView",extends:cc._RendererUnderSG,editor:!1,properties:{_useOriginalSize:!0,_url:"",url:{type:String,tooltip:!1,get:function(){return this._url},set:function(t){this._url=t;var e=this._sgNode;e&&e.loadURL(t)}},webviewEvents:{default:[],type:cc.Component.EventHandler}},statics:{EventType:n},onLoad:!1,_createSgNode:function(){return new _ccsg.WebView},_initSgNode:function(){var t=this._sgNode;t&&(t.createDomElementIfNeeded(),t.loadURL(this._url),t.setContentSize(this.node.getContentSize()))},onEnable:function(){this._super();var t=this._sgNode;t.setEventListener(n.LOADED,this._onWebViewLoaded.bind(this)),t.setEventListener(n.LOADING,this._onWebViewLoading.bind(this)),t.setEventListener(n.ERROR,this._onWebViewLoadError.bind(this))},onDisable:function(){this._super();var t=this._sgNode;t.setEventListener(n.LOADED,r),t.setEventListener(n.LOADING,r),t.setEventListener(n.ERROR,r)},_onWebViewLoaded:function(){cc.Component.EventHandler.emitEvents(this.webviewEvents,this,n.LOADED),this.node.emit("loaded",this)},_onWebViewLoading:function(){return cc.Component.EventHandler.emitEvents(this.webviewEvents,this,n.LOADING),this.node.emit("loading",this),!0},_onWebViewLoadError:function(){cc.Component.EventHandler.emitEvents(this.webviewEvents,this,n.ERROR),this.node.emit("error",this)},setJavascriptInterfaceScheme:function(t){this._sgNode&&this._sgNode.setJavascriptInterfaceScheme(t)},setOnJSCallback:function(t){this._sgNode&&this._sgNode.setOnJSCallback(t)},evaluateJS:function(t){this._sgNode&&this._sgNode.evaluateJS(t)}});cc.WebView=e.exports=s}),{"../webview/CCSGWebView":243}],104:[(function(t,e,i){var n=t("../base-ui/CCWidgetManager"),r=n.AlignMode,s=n._AlignFlags,o=s.TOP,a=s.MID,c=s.BOT,h=s.LEFT,l=s.CENTER,u=s.RIGHT,_=o|c,d=h|u,f=cc.Class({name:"cc.Widget",extends:t("./CCComponent"),editor:!1,properties:{target:{get:function(){return this._target},set:function(t){this._target=t},type:cc.Node,tooltip:!1},isAlignTop:{get:function(){return(this._alignFlags&o)>0},set:function(t){this._setAlign(o,t)},animatable:!1,tooltip:!1},isAlignVerticalCenter:{get:function(){return(this._alignFlags&a)>0},set:function(t){t?(this.isAlignTop=!1,this.isAlignBottom=!1,this._alignFlags|=a):this._alignFlags&=~a},animatable:!1,tooltip:!1},isAlignBottom:{get:function(){return(this._alignFlags&c)>0},set:function(t){this._setAlign(c,t)},animatable:!1,tooltip:!1},isAlignLeft:{get:function(){return(this._alignFlags&h)>0},set:function(t){this._setAlign(h,t)},animatable:!1,tooltip:!1},isAlignHorizontalCenter:{get:function(){return(this._alignFlags&l)>0},set:function(t){t?(this.isAlignLeft=!1,this.isAlignRight=!1,this._alignFlags|=l):this._alignFlags&=~l},animatable:!1,tooltip:!1},isAlignRight:{get:function(){return(this._alignFlags&u)>0},set:function(t){this._setAlign(u,t)},animatable:!1,tooltip:!1},isStretchWidth:{get:function(){return(this._alignFlags&d)===d},visible:!1},isStretchHeight:{get:function(){return(this._alignFlags&_)===_},visible:!1},top:{get:function(){return this._top},set:function(t){this._top=t},tooltip:!1},bottom:{get:function(){return this._bottom},set:function(t){this._bottom=t},tooltip:!1},left:{get:function(){return this._left},set:function(t){this._left=t},tooltip:!1},right:{get:function(){return 
this._right},set:function(t){this._right=t},tooltip:!1},horizontalCenter:{get:function(){return this._horizontalCenter},set:function(t){this._horizontalCenter=t},tooltip:!1},verticalCenter:{get:function(){return this._verticalCenter},set:function(t){this._verticalCenter=t},tooltip:!1},isAbsoluteHorizontalCenter:{get:function(){return this._isAbsHorizontalCenter},set:function(t){this._isAbsHorizontalCenter=t},animatable:!1},isAbsoluteVerticalCenter:{get:function(){return this._isAbsVerticalCenter},set:function(t){this._isAbsVerticalCenter=t},animatable:!1},isAbsoluteTop:{get:function(){return this._isAbsTop},set:function(t){this._isAbsTop=t},animatable:!1},isAbsoluteBottom:{get:function(){return this._isAbsBottom},set:function(t){this._isAbsBottom=t},animatable:!1},isAbsoluteLeft:{get:function(){return this._isAbsLeft},set:function(t){this._isAbsLeft=t},animatable:!1},isAbsoluteRight:{get:function(){return this._isAbsRight},set:function(t){this._isAbsRight=t},animatable:!1},alignMode:{default:r.ON_WINDOW_RESIZE,type:r,tooltip:!1},_wasAlignOnce:{default:void 0,formerlySerializedAs:"isAlignOnce"},_target:null,_alignFlags:0,_left:0,_right:0,_top:0,_bottom:0,_verticalCenter:0,_horizontalCenter:0,_isAbsLeft:!0,_isAbsRight:!0,_isAbsTop:!0,_isAbsBottom:!0,_isAbsHorizontalCenter:!0,_isAbsVerticalCenter:!0,_originalWidth:0,_originalHeight:0},statics:{AlignMode:r},onLoad:function(){void 0!==this._wasAlignOnce&&(this.alignMode=this._wasAlignOnce?r.ONCE:r.ALWAYS,this._wasAlignOnce=void 0)},onEnable:function(){n.add(this)},onDisable:function(){n.remove(this)},_setAlign:function(t,e){if(e!=(this._alignFlags&t)>0){var i=(t&d)>0;e?(this._alignFlags|=t,i?(this.isAlignHorizontalCenter=!1,this.isStretchWidth&&(this._originalWidth=this.node.width)):(this.isAlignVerticalCenter=!1,this.isStretchHeight&&(this._originalHeight=this.node.height))):(i?this.isStretchWidth&&(this.node.width=this._originalWidth):this.isStretchHeight&&(this.node.height=this._originalHeight),this._alignFlags&=~t)}},updateAlignment:function(){n.updateAlignment(this.node)}});Object.defineProperty(f.prototype,"isAlignOnce",{get:function(){return this.alignMode===r.ONCE},set:function(t){this.alignMode=t?r.ONCE:r.ALWAYS}}),cc.Widget=e.exports=f}),{"../base-ui/CCWidgetManager":61,"./CCComponent":78}],105:[(function(t,e,i){t("./CCComponent"),t("./CCRendererInSG"),t("./CCRendererUnderSG"),t("./CCComponentEventHandler"),t("./missing-script"),e.exports=[t("./CCSprite"),t("./CCWidget"),t("./CCCanvas"),t("./CCAudioSource"),t("./CCAnimation"),t("./CCButton"),t("./CCLabel"),t("./CCProgressBar"),t("./CCMask"),t("./CCScrollBar"),t("./CCScrollView"),t("./CCPageViewIndicator"),t("./CCPageView"),t("./CCSlider"),t("./CCLayout"),t("./CCEditBox"),t("./CCVideoPlayer"),t("./CCWebView"),t("./CCSpriteDistortion"),t("./CCLabelOutline"),t("./CCRichText"),t("./CCToggleContainer"),t("./CCToggleGroup"),t("./CCToggle"),t("./CCBlockInputEvents")]}),{"./CCAnimation":73,"./CCAudioSource":74,"./CCBlockInputEvents":75,"./CCButton":76,"./CCCanvas":77,"./CCComponent":78,"./CCComponentEventHandler":79,"./CCEditBox":80,"./CCLabel":81,"./CCLabelOutline":82,"./CCLayout":83,"./CCMask":84,"./CCPageView":85,"./CCPageViewIndicator":86,"./CCProgressBar":87,"./CCRendererInSG":88,"./CCRendererUnderSG":89,"./CCRichText":90,"./CCScrollBar":92,"./CCScrollView":93,"./CCSlider":94,"./CCSprite":95,"./CCSpriteDistortion":96,"./CCToggle":98,"./CCToggleContainer":99,"./CCToggleGroup":100,"./CCVideoPlayer":101,"./CCWebView":103,"./CCWidget":104,"./missing-script":106}],106:[(function(t,e,i){var 
n=cc.js,r=t("../utils/misc").BUILTIN_CLASSID_RE,s=cc.Class({name:"cc.MissingClass",properties:{_$erialized:{default:null,visible:!1,editorOnly:!0}}}),o=cc.Class({name:"cc.MissingScript",extends:cc.Component,editor:{inspector:"packages://inspector/inspectors/comps/missing-script.js"},properties:{compiled:{default:!1,serializable:!1},_$erialized:{default:null,visible:!1,editorOnly:!0}},ctor:!1,statics:{safeFindClass:function(t,e){var i=n._getClassById(t);return i||(t?(cc.deserialize.reportMissingClass(t),o.getMissingWrapper(t,e)):null)},getMissingWrapper:function(t,e){return e.node&&(/^[0-9a-zA-Z+/]{23}$/.test(t)||r.test(t))?o:s}},onLoad:function(){cc.warnID(4600,this.node.name)}});cc._MissingScript=e.exports=o}),{"../utils/misc":228}],107:[(function(t,e,i){var n=40,r=400,s=t("../platform/utils"),o=t("../platform/CCSys");function a(t){var e=t.convertToWorldSpace(cc.p(0,0)),i=cc.visibleRect.height,s=.5;cc.visibleRect.width>i&&(s=.7),setTimeout((function(){if(window.scrollY320&&(t=320),window.scrollTo(t,t)}}),r)}var c=cc.Enum({DEFAULT:0,DONE:1,SEND:2,SEARCH:3,GO:4}),h=cc.Enum({ANY:0,EMAIL_ADDR:1,NUMERIC:2,PHONE_NUMBER:3,URL:4,DECIMAL:5,SINGLE_LINE:6}),l=cc.Enum({PASSWORD:0,SENSITIVE:1,INITIAL_CAPS_WORD:2,INITIAL_CAPS_SENTENCE:3,INITIAL_CAPS_ALL_CHARACTERS:4,DEFAULT:5});cc.EditBoxDelegate=cc._Class.extend({editBoxEditingDidBegan:function(t){},editBoxEditingDidEnded:function(t){},editBoxTextChanged:function(t,e){},editBoxEditingReturn:function(t){}}),_ccsg.EditBox=_ccsg.Node.extend({_backgroundSprite:null,_delegate:null,_editBoxInputMode:h.ANY,_editBoxInputFlag:l.DEFAULT,_keyboardReturnType:c.DEFAULT,_maxLength:50,_text:"",_placeholderText:"",_alwaysOnTop:!1,_placeholderFontName:"",_placeholderFontSize:14,__fullscreen:!1,__autoResize:!1,_placeholderColor:null,_className:"EditBox",ctor:function(t,e){_ccsg.Node.prototype.ctor.call(this),this._textColor=cc.Color.WHITE,this._placeholderColor=cc.Color.GRAY,this.initWithSizeAndBackgroundSprite(t,e),this._renderCmd._createLabels()},_createRenderCmd:function(){return cc._renderType===cc.game.RENDER_TYPE_CANVAS?new _ccsg.EditBox.CanvasRenderCmd(this):new _ccsg.EditBox.WebGLRenderCmd(this)},setContentSize:function(t,e){void 0!==t.width&&void 0!==t.height&&(e=t.height,t=t.width),_ccsg.Node.prototype.setContentSize.call(this,t,e),this._updateEditBoxSize(t,e)},setVisible:function(t){_ccsg.Node.prototype.setVisible.call(this,t),this._renderCmd.updateVisibility()},createDomElementIfNeeded:function(){this._renderCmd._edTxt||this._renderCmd.setInputMode(this._editBoxInputMode)},setTabIndex:function(t){this._renderCmd._edTxt&&(this._renderCmd._edTxt.tabIndex=t)},getTabIndex:function(){return this._renderCmd._edTxt?this._renderCmd._edTxt.tabIndex:(cc.warnID(4700),-1)},setFocus:function(){this._renderCmd._edTxt&&this._renderCmd._edTxt.focus()},isFocused:function(){return this._renderCmd._edTxt?document.activeElement===this._renderCmd._edTxt:(cc.warnID(4700),!1)},stayOnTop:function(t){this._alwaysOnTop!==t&&(this._alwaysOnTop=t,this._renderCmd.stayOnTop(this._alwaysOnTop))},cleanup:function(){this._super(),this._renderCmd.removeDom()},_onTouchBegan:function(t){var e=t.getLocation(),i=cc.rect(0,0,this._contentSize.width,this._contentSize.height);return!!cc.rectContainsPoint(i,this.convertToNodeSpace(e))||(this._renderCmd._endEditing(),!1)},_onTouchEnded:function(){this._renderCmd._beginEditing()},_updateBackgroundSpriteSize:function(t,e){this._backgroundSprite&&this._backgroundSprite.setContentSize(t,e)},_updateEditBoxSize:function(t,e){var i="number"==typeof 
t.width?t.width:t,n="number"==typeof t.height?t.height:e;this._updateBackgroundSpriteSize(i,n),this._renderCmd.updateSize(i,n)},setLineHeight:function(t){this._renderCmd.setLineHeight(t)},setFont:function(t,e){this._renderCmd.setFont(t,e)},_setFont:function(t){this._renderCmd._setFont(t)},getBackgroundSprite:function(){return this._backgroundSprite},setFontName:function(t){this._renderCmd.setFontName(t)},setFontSize:function(t){this._renderCmd.setFontSize(t)},setString:function(t){t.length>=this._maxLength&&(t=t.slice(0,this._maxLength)),this._text=t,this._renderCmd.setString(t)},setFontColor:function(t){this._textColor=t,this._renderCmd.setFontColor(t)},setMaxLength:function(t){isNaN(t)||(t<0&&(t=65535),this._maxLength=t,this._renderCmd.setMaxLength(t))},getMaxLength:function(){return this._maxLength},setPlaceHolder:function(t){null!==t&&(this._renderCmd.setPlaceHolder(t),this._placeholderText=t)},setPlaceholderFont:function(t,e){this._placeholderFontName=t,this._placeholderFontSize=e,this._renderCmd._updateDOMPlaceholderFontStyle()},_setPlaceholderFont:function(t){var e=cc.LabelTTF._fontStyleRE.exec(t);e&&(this._placeholderFontName=e[2],this._placeholderFontSize=parseInt(e[1]),this._renderCmd._updateDOMPlaceholderFontStyle())},setPlaceholderFontName:function(t){this._placeholderFontName=t,this._renderCmd._updateDOMPlaceholderFontStyle()},setPlaceholderFontSize:function(t){this._placeholderFontSize=t,this._renderCmd._updateDOMPlaceholderFontStyle()},setPlaceholderFontColor:function(t){this._placeholderColor=t,this._renderCmd.setPlaceholderFontColor(t)},setInputFlag:function(t){this._editBoxInputFlag=t,this._renderCmd.setInputFlag(t)},getString:function(){return this._text},initWithSizeAndBackgroundSprite:function(t,e){return this._backgroundSprite&&this._backgroundSprite.removeFromParent(),this._backgroundSprite=e,_ccsg.Node.prototype.setContentSize.call(this,t),this._backgroundSprite&&!this._backgroundSprite.parent&&(this._backgroundSprite.setAnchorPoint(cc.p(0,0)),this.addChild(this._backgroundSprite),this._updateBackgroundSpriteSize(t.width,t.height)),this.x=0,this.y=0,!0},setDelegate:function(t){this._delegate=t},getPlaceHolder:function(){return this._placeholderText},setInputMode:function(t){if(this._editBoxInputMode!==t){var e=this.getString();this._editBoxInputMode=t,this._renderCmd.setInputMode(t),this._renderCmd.transform(),this.setString(e),this._renderCmd._updateLabelPosition(this.getContentSize())}},setReturnType:function(t){this._keyboardReturnType=t,this._renderCmd._updateDomInputType()},initWithBackgroundColor:function(t,e){this._edWidth=t.width,this.dom.style.width=this._edWidth.toString()+"px",this._edHeight=t.height,this.dom.style.height=this._edHeight.toString()+"px",this.dom.style.backgroundColor=cc.colorToHex(e)}});var 
u=_ccsg.EditBox.prototype;cc.defineGetterSetter(u,"font",null,u._setFont),cc.defineGetterSetter(u,"fontName",null,u.setFontName),cc.defineGetterSetter(u,"fontSize",null,u.setFontSize),cc.defineGetterSetter(u,"fontColor",null,u.setFontColor),cc.defineGetterSetter(u,"string",u.getString,u.setString),cc.defineGetterSetter(u,"maxLength",u.getMaxLength,u.setMaxLength),cc.defineGetterSetter(u,"placeholder",u.getPlaceHolder,u.setPlaceHolder),cc.defineGetterSetter(u,"placeholderFont",null,u._setPlaceholderFont),cc.defineGetterSetter(u,"placeholderFontName",null,u.setPlaceholderFontName),cc.defineGetterSetter(u,"placeholderFontSize",null,u.setPlaceholderFontSize),cc.defineGetterSetter(u,"placeholderFontColor",null,u.setPlaceholderFontColor),cc.defineGetterSetter(u,"inputFlag",null,u.setInputFlag),cc.defineGetterSetter(u,"delegate",null,u.setDelegate),cc.defineGetterSetter(u,"inputMode",null,u.setInputMode),cc.defineGetterSetter(u,"returnType",null,u.setReturnType),u=null,_ccsg.EditBox.InputMode=h,_ccsg.EditBox.InputFlag=l,_ccsg.EditBox.KeyboardReturnType=c,(function(t){t._polyfill={zoomInvalid:!1},o.OS_ANDROID!==o.os||o.browserType!==o.BROWSER_TYPE_SOUGOU&&o.browserType!==o.BROWSER_TYPE_360||(t._polyfill.zoomInvalid=!0)})(_ccsg.EditBox),(function(t){var e=function(){}.prototype=Object.create(Object.prototype);e.updateMatrix=function(){if(this._edTxt){var e=this._node,i=cc.view._scaleX,n=cc.view._scaleY,r=cc.view._devicePixelRatio,s=this._worldTransform;i/=r,n/=r;var o=cc.game.container,a=s.a*i,c=s.b,h=s.c,l=s.d*n,u=o&&o.style.paddingLeft&&parseInt(o.style.paddingLeft),_=o&&o.style.paddingBottom&&parseInt(o.style.paddingBottom),d=s.tx*i+u,f=s.ty*n+_;t.zoomInvalid&&(this.updateSize(e._contentSize.width*a,e._contentSize.height*l),a=1,l=1);var m="matrix("+a+","+-c+","+-h+","+l+","+d+","+-f+")";this._edTxt.style.transform=m,this._edTxt.style["-webkit-transform"]=m,this._edTxt.style["transform-origin"]="0px 100% 0px",this._edTxt.style["-webkit-transform-origin"]="0px 100% 0px"}},e.updateVisibility=function(){this._edTxt&&(this._node.visible?this._edTxt.style.visibility="visible":this._edTxt.style.visibility="hidden")},e.stayOnTop=function(t){t?(this._removeLabels(),this._edTxt.style.display=""):(this._createLabels(),this._edTxt.style.display="none",this._showLabels())},e._beginEditingOnMobile=function(t){this.__orientationChanged=function(){a(t)},window.addEventListener("orientationchange",this.__orientationChanged),cc.view.isAutoFullScreenEnabled()?(this.__fullscreen=!0,cc.view.enableAutoFullScreen(!1),cc.screen.exitFullScreen()):this.__fullscreen=!1,this.__autoResize=cc.view.__resizeWithBrowserSize,cc.view.resizeWithBrowserSize(!1)},e._endEditingOnMobile=function(){if(this.__rotateScreen){cc.container.style["-webkit-transform"]="rotate(90deg)",cc.container.style.transform="rotate(90deg)";var 
t=cc.view,e=t._originalDesignResolutionSize.width,i=t._originalDesignResolutionSize.height;e>0&&t.setDesignResolutionSize(e,i,t._resolutionPolicy),this.__rotateScreen=!1}window.removeEventListener("orientationchange",this.__orientationChanged),window.scrollTo(0,0),this.__fullscreen&&cc.view.enableAutoFullScreen(!0),this.__autoResize&&cc.view.resizeWithBrowserSize(!0)},e._onFocusOnMobile=function(t){cc.view._isRotated?(cc.container.style["-webkit-transform"]="rotate(0deg)",cc.container.style.transform="rotate(0deg)",cc.view._isRotated=!1,cc.view.getResolutionPolicy().apply(cc.view,cc.view.getDesignResolutionSize()),cc.view._isRotated=!0,window.scrollTo(35,35),this.__rotateScreen=!0):this.__rotateScreen=!1;a(t)},e._createDomInput=function(){this.removeDom();var t=this,e=this._edTxt=document.createElement("input");function i(e){var i=t._editBox;e.value.length>e.maxLength&&(e.value=e.value.slice(0,e.maxLength)),i._delegate&&i._delegate.editBoxTextChanged&&i._text!==e.value&&(i._text=e.value,t._updateDomTextCases(),i._delegate.editBoxTextChanged(i,i._text))}e.type="text",e.style.fontSize=this._edFontSize+"px",e.style.color="#000000",e.style.border=0,e.style.background="transparent",e.style.width="100%",e.style.height="100%",e.style.active=0,e.style.outline="medium",e.style.padding="0",e.style.textTransform="uppercase",e.style.display="none",e.style.position="absolute",e.style.bottom="0px",e.style.left="2px",e.style["-moz-appearance"]="textfield",e.style.className="cocosEditBox";var n=!1;return e.addEventListener("compositionstart",(function(){n=!0})),e.addEventListener("compositionend",(function(){n=!1,i(this)})),e.addEventListener("input",(function(){n||i(this)})),e.addEventListener("keypress",(function(e){var i=t._editBox;e.keyCode===cc.KEY.enter&&(e.stopPropagation(),e.preventDefault(),""===this.value&&(this.style.fontSize=i._placeholderFontSize+"px",this.style.color=cc.colorToHex(i._placeholderColor)),i._text=this.value,t._updateDomTextCases(),t._endEditing(),i._delegate&&i._delegate.editBoxEditingReturn&&i._delegate.editBoxEditingReturn(i),cc._canvas.focus())})),e.addEventListener("focus",(function(){var e=t._editBox;this.style.fontSize=t._edFontSize+"px",this.style.color=cc.colorToHex(e._textColor),t._hiddenLabels(),o.isMobile&&t._onFocusOnMobile(e),e._delegate&&e._delegate.editBoxEditingDidBegan&&e._delegate.editBoxEditingDidBegan(e)})),e.addEventListener("blur",(function(){var e=t._editBox;e._text=this.value,t._updateDomTextCases(),e._delegate&&e._delegate.editBoxEditingDidEnded&&e._delegate.editBoxEditingDidEnded(e),""===this.value&&(this.style.fontSize=e._placeholderFontSize+"px",this.style.color=cc.colorToHex(e._placeholderColor)),t._endEditing()})),this._addDomToGameContainer(),e},e._createDomTextArea=function(){this.removeDom();var t=this,e=this._edTxt=document.createElement("textarea");function i(e){e.value.length>e.maxLength&&(e.value=e.value.slice(0,e.maxLength));var 
i=t._editBox;i._delegate&&i._delegate.editBoxTextChanged&&i._text.toLowerCase()!==e.value.toLowerCase()&&(i._text=e.value,t._updateDomTextCases(),i._delegate.editBoxTextChanged(i,i._text))}e.type="text",e.style.fontSize=this._edFontSize+"px",e.style.color="#000000",e.style.border=0,e.style.background="transparent",e.style.width="100%",e.style.height="100%",e.style.active=0,e.style.outline="medium",e.style.padding="0",e.style.resize="none",e.style.textTransform="uppercase",e.style.overflow_y="scroll",e.style.display="none",e.style.position="absolute",e.style.bottom="0px",e.style.left="2px",e.style.className="cocosEditBox";var n=!1;return e.addEventListener("compositionstart",(function(){n=!0})),e.addEventListener("compositionend",(function(){n=!1,i(this)})),e.addEventListener("input",(function(){n||i(this)})),e.addEventListener("focus",(function(){var e=t._editBox;t._hiddenLabels(),this.style.fontSize=t._edFontSize+"px",this.style.color=cc.colorToHex(e._textColor),o.isMobile&&t._onFocusOnMobile(e),e._delegate&&e._delegate.editBoxEditingDidBegan&&e._delegate.editBoxEditingDidBegan(e)})),e.addEventListener("keypress",(function(e){var i=t._editBox;e.keyCode===cc.KEY.enter&&(e.stopPropagation(),i._delegate&&i._delegate.editBoxEditingReturn&&i._delegate.editBoxEditingReturn(i))})),e.addEventListener("blur",(function(){var e=t._editBox;e._text=this.value,t._updateDomTextCases(),e._delegate&&e._delegate.editBoxEditingDidEnded&&e._delegate.editBoxEditingDidEnded(e),""===this.value&&(this.style.fontSize=e._placeholderFontSize+"px",this.style.color=cc.colorToHex(e._placeholderColor)),t._endEditing()})),this._addDomToGameContainer(),e},e._createWXInput=function(t){this.removeDom();var e=this,i=this._edTxt=document.createElement("input");i.type="text",i.focus=function(){var i=e._editBox;wx.showKeyboard({defaultValue:i._text,maxLength:140,multiple:t,confirmHold:!0,confirmType:"done",success:function(t){i._delegate&&i._delegate.editBoxEditingDidBegan&&i._delegate.editBoxEditingDidBegan(i)},fail:function(t){cc.warn(t.errMsg),e._endEditing()}}),wx.onKeyboardConfirm((function(t){i._text=t.value,e._updateDomTextCases(),i._delegate&&i._delegate.editBoxEditingReturn&&i._delegate.editBoxEditingReturn(i),wx.hideKeyboard({success:function(t){i._delegate&&i._delegate.editBoxEditingDidEnded&&i._delegate.editBoxEditingDidEnded(i)},fail:function(t){cc.warn(t.errMsg)}})})),wx.onKeyboardInput((function(t){i._delegate&&i._delegate.editBoxTextChanged&&i._text!==t.value&&(i._text=t.value,e._updateDomTextCases(),i._delegate.editBoxTextChanged(i,i._text))})),wx.onKeyboardComplete((function(){e._endEditing(),wx.offKeyboardConfirm(),wx.offKeyboardInput(),wx.offKeyboardComplete()}))}},e._createLabels=function(){var t=this._editBox.getContentSize();this._textLabel||(this._textLabel=_ccsg.Label.pool.get(),this._textLabel.setAnchorPoint(cc.p(0,1)),this._textLabel.setOverflow(_ccsg.Label.Overflow.CLAMP),this._editBox.addChild(this._textLabel,100)),this._placeholderLabel||(this._placeholderLabel=_ccsg.Label.pool.get(),this._placeholderLabel.setAnchorPoint(cc.p(0,1)),this._placeholderLabel.setColor(cc.Color.GRAY),this._editBox.addChild(this._placeholderLabel,100)),this._updateLabelPosition(t)},e._removeLabels=function(){this._textLabel&&(this._editBox.removeChild(this._textLabel),this._textLabel=null)},e._updateLabelPosition=function(t){if(this._textLabel&&this._placeholderLabel){var e=cc.size(t.width-2,t.height);this._textLabel.setContentSize(e),this._placeholderLabel.setLineHeight(t.height);var 
i=this._placeholderLabel.getContentSize();this._editBox._editBoxInputMode===h.ANY?(this._textLabel.setPosition(2,t.height),this._placeholderLabel.setPosition(2,t.height),this._placeholderLabel.setVerticalAlign(cc.VerticalTextAlignment.TOP),this._textLabel.setVerticalAlign(cc.VerticalTextAlignment.TOP),this._textLabel.enableWrapText(!0)):(this._textLabel.enableWrapText(!1),this._textLabel.setPosition(2,t.height),this._placeholderLabel.setPosition(2,(t.height+i.height)/2),this._placeholderLabel.setVerticalAlign(cc.VerticalTextAlignment.CENTER),this._textLabel.setVerticalAlign(cc.VerticalTextAlignment.CENTER))}},e.setLineHeight=function(t){this._textLabel&&this._textLabel.setLineHeight(t)},e._hiddenLabels=function(){this._textLabel&&this._textLabel.setVisible(!1),this._placeholderLabel&&this._placeholderLabel.setVisible(!1)},e._updateDomTextCases=function(){var t=this._editBox._editBoxInputFlag;t===l.INITIAL_CAPS_ALL_CHARACTERS?this._editBox._text=this._editBox._text.toUpperCase():t===l.INITIAL_CAPS_WORD?this._editBox._text=(function(t){return t.replace(/(?:^|\s)\S/g,(function(t){return t.toUpperCase()}))})(this._editBox._text):t===l.INITIAL_CAPS_SENTENCE&&(this._editBox._text=(function(t){return t.charAt(0).toUpperCase()+t.slice(1)})(this._editBox._text)),this._edTxt&&(this._edTxt.value=this._editBox._text)},e._updateLabelStringStyle=function(){if("password"===this._edTxt.type){for(var t="",e=this._editBox._text.length,i=0;i=0);)++n;e.gt0Index=n}}},_sortListenersOfFixedPriorityAsc:function(t,e){return t._getFixedPriority()-e._getFixedPriority()},_onUpdateListeners:function(t){var e,i,n,r=t.getFixedPriorityListeners(),s=t.getSceneGraphPriorityListeners(),o=this._toRemovedListeners;if(s)for(e=0;e0,3508),!(e>1)){var i;(i=this._listenersMap[cc._EventListenerTouchOneByOne.LISTENER_ID])&&this._onUpdateListeners(i),(i=this._listenersMap[cc._EventListenerTouchAllAtOnce.LISTENER_ID])&&this._onUpdateListeners(i),cc.assertID(1===e,3509);var n=this._toAddedListeners;if(0!==n.length){for(var r=0,s=n.length;r0&&-1!==(r=t._claimedTouches.indexOf(n))&&(o=!0,a===c.MOVED&&t.onTouchMoved?t.onTouchMoved(n,i):a===c.ENDED?(t.onTouchEnded&&t.onTouchEnded(n,i),t._registered&&t._claimedTouches.splice(r,1)):a===c.CANCELLED&&(t.onTouchCancelled&&t.onTouchCancelled(n,i),t._registered&&t._claimedTouches.splice(r,1))),i.isStopped()?(s._updateTouchListeners(i),!0):!!(o&&t._registered&&t.swallowTouches)&&(e.needsMutableSet&&e.touches.splice(n,1),!0)},_dispatchTouchEvent:function(t){this._sortEventListeners(cc._EventListenerTouchOneByOne.LISTENER_ID),this._sortEventListeners(cc._EventListenerTouchAllAtOnce.LISTENER_ID);var e=this._getListeners(cc._EventListenerTouchOneByOne.LISTENER_ID),i=this._getListeners(cc._EventListenerTouchAllAtOnce.LISTENER_ID);if(null!==e||null!==i){var n=t.getTouches(),r=cc.js.array.copy(n),s={event:t,needsMutableSet:e&&i,touches:r,selTouch:null};if(e)for(var o=0;o0&&(this._dispatchEventToListeners(i,this._onTouchesEventCallback,{event:t,touches:r}),t.isStopped())||this._updateTouchListeners(t)}},_onTouchesEventCallback:function(t,e){if(!t._registered)return!1;var i=cc.Event.EventTouch,n=e.event,r=e.touches,o=n.getEventCode();return n.currentTarget=t._node,o===i.BEGAN&&t.onTouchesBegan?t.onTouchesBegan(r,n):o===i.MOVED&&t.onTouchesMoved?t.onTouchesMoved(r,n):o===i.ENDED&&t.onTouchesEnded?t.onTouchesEnded(r,n):o===i.CANCELLED&&t.onTouchesCancelled&&t.onTouchesCancelled(r,n),!!n.isStopped()&&(s._updateTouchListeners(n),!0)},_associateNodeAndEventListener:function(t,e){var 
i=this._nodeListenersMap[t.__instanceId];i||(i=[],this._nodeListenersMap[t.__instanceId]=i),i.push(e)},_dissociateNodeAndEventListener:function(t,e){var i=this._nodeListenersMap[t.__instanceId];i&&(cc.js.array.remove(i,e),0===i.length&&delete this._nodeListenersMap[t.__instanceId])},_dispatchEventToListeners:function(t,e,i){var n,r,s=!1,o=t.getFixedPriorityListeners(),a=t.getSceneGraphPriorityListeners(),c=0;if(o&&0!==o.length)for(;c0){for(var a;n0},a.on=function(t,e,i,r){if("boolean"==typeof i?(r=i,i=void 0):r=!!r,e){var s=null;return(s=r?this._capturingListeners=this._capturingListeners||new n:this._bubblingListeners=this._bubblingListeners||new n).has(t,e,i)||(s.add(t,e,i),i&&i.__eventTargets&&i.__eventTargets.push(this),this._addEventFlag(t,s,r)),e}cc.errorID(6800)},a.off=function(t,e,i,n){if("boolean"==typeof i?(n=i,i=void 0):n=!!n,e){var s=n?this._capturingListeners:this._bubblingListeners;s&&(s.remove(t,e,i),i&&i.__eventTargets&&r(i.__eventTargets,this),this._purgeEventFlag(t,s,n))}else this._capturingListeners&&this._capturingListeners.removeAll(t),this._bubblingListeners&&this._bubblingListeners.removeAll(t),this._hasListenerCache&&delete this._hasListenerCache[t]},a.targetOff=function(t){this._capturingListeners&&(this._capturingListeners.removeAll(t),this._resetFlagForTarget(t,this._capturingListeners,!0)),this._bubblingListeners&&(this._bubblingListeners.removeAll(t),this._resetFlagForTarget(t,this._bubblingListeners,!1))},a.once=function(t,e,i,n){var r="__ONCE_FLAG:"+t,s=n?this._capturingListeners:this._bubblingListeners;if(!(s&&s.has(r,e,i))){var o=this,a=function(c){o.off(t,a,i,n),s.remove(r,e,i),e.call(this,c)};this.on(t,a,i,n),s||(s=n?this._capturingListeners:this._bubblingListeners),s.add(r,e,i)}},a.dispatchEvent=function(t){(function(t,e){var i,n;for(e.target=t,s.length=0,t._getCapturingTargets(e.type,s),e.eventPhase=1,n=s.length-1;n>=0;--n)if((i=s[n])._isTargetActive(e.type)&&i._capturingListeners&&(e.currentTarget=i,i._capturingListeners.invoke(e,s),e._propagationStopped))return void(s.length=0);if(s.length=0,t._isTargetActive(e.type)&&(e.eventPhase=2,e.currentTarget=t,t._capturingListeners&&t._capturingListeners.invoke(e),!e._propagationImmediateStopped&&t._bubblingListeners&&t._bubblingListeners.invoke(e)),!e._propagationStopped&&e.bubbles)for(t._getBubblingTargets(e.type,s),e.eventPhase=3,n=0;n80*i){n=c=t[0],a=h=t[1];for(var x=i;xc&&(c=l),d>h&&(h=d);m=Math.max(c-n,h-a)}return o(y,v,i,n,a,m),v}function r(t,e,i,n,r){var s,o;if(r===S(t,e,i,n)>0)for(s=e;s=e;s-=n)o=T(s,t[s],t[s+1],o);return o&&y(o,o.next)&&(A(o),o=o.next),o}function s(t,e){if(!t)return t;e||(e=t);var i,n=t;do{if(i=!1,n.steiner||!y(n,n.next)&&0!==g(n.prev,n,n.next))n=n.next;else{if(A(n),(n=e=n.prev)===n.next)return null;i=!0}}while(i||n!==e);return e}function o(t,e,i,n,r,u,_){if(t){!_&&u&&(function(t,e,i,n){var r=t;do{null===r.z&&(r.z=d(r.x,r.y,e,i,n)),r.prevZ=r.prev,r.nextZ=r.next,r=r.next}while(r!==t);r.prevZ.nextZ=null,r.prevZ=null,(function(t){var e,i,n,r,s,o,a,c,h=1;do{for(i=t,t=null,s=null,o=0;i;){for(o++,n=i,a=0,e=0;e0||c>0&&n;)0===a?(r=n,n=n.nextZ,c--):0!==c&&n?i.z<=n.z?(r=i,i=i.nextZ,a--):(r=n,n=n.nextZ,c--):(r=i,i=i.nextZ,a--),s?s.nextZ=r:t=r,r.prevZ=s,s=r;i=n}s.nextZ=null,h*=2}while(o>1)})(r)})(t,n,r,u);for(var f,m,p=t;t.prev!==t.next;)if(f=t.prev,m=t.next,u?c(t,n,r,u):a(t))e.push(f.i/i),e.push(t.i/i),e.push(m.i/i),A(t),t=m.next,p=m.next;else if((t=m)===p){_?1===_?o(t=h(t,e,i),e,i,n,r,u,2):2===_&&l(t,e,i,n,r,u):o(s(t),e,i,n,r,u,1);break}}}function a(t){var 
e=t.prev,i=t,n=t.next;if(g(e,i,n)>=0)return!1;for(var r=t.next.next;r!==t.prev;){if(m(e.x,e.y,i.x,i.y,n.x,n.y,r.x,r.y)&&g(r.prev,r,r.next)>=0)return!1;r=r.next}return!0}function c(t,e,i,n){var r=t.prev,s=t,o=t.next;if(g(r,s,o)>=0)return!1;for(var a=r.xs.x?r.x>o.x?r.x:o.x:s.x>o.x?s.x:o.x,l=r.y>s.y?r.y>o.y?r.y:o.y:s.y>o.y?s.y:o.y,u=d(a,c,e,i,n),_=d(h,l,e,i,n),f=t.nextZ;f&&f.z<=_;){if(f!==t.prev&&f!==t.next&&m(r.x,r.y,s.x,s.y,o.x,o.y,f.x,f.y)&&g(f.prev,f,f.next)>=0)return!1;f=f.nextZ}for(f=t.prevZ;f&&f.z>=u;){if(f!==t.prev&&f!==t.next&&m(r.x,r.y,s.x,s.y,o.x,o.y,f.x,f.y)&&g(f.prev,f,f.next)>=0)return!1;f=f.prevZ}return!0}function h(t,e,i){var n=t;do{var r=n.prev,s=n.next.next;!y(r,s)&&v(r,n,n.next,s)&&x(r,s)&&x(s,r)&&(e.push(r.i/i),e.push(n.i/i),e.push(s.i/i),A(n),A(n.next),n=t=s),n=n.next}while(n!==t);return n}function l(t,e,i,n,r,a){var c=t;do{for(var h=c.next.next;h!==c.prev;){if(c.i!==h.i&&p(c,h)){var l=C(c,h);return c=s(c,c.next),l=s(l,l.next),o(c,e,i,n,r,a),void o(l,e,i,n,r,a)}h=h.next}c=c.next}while(c!==t)}function u(t,e){return t.x-e.x}function _(t,e){if(e=(function(t,e){var i,n=e,r=t.x,s=t.y,o=-1/0;do{if(s<=n.y&&s>=n.next.y){var a=n.x+(s-n.y)*(n.next.x-n.x)/(n.next.y-n.y);if(a<=r&&a>o){if(o=a,a===r){if(s===n.y)return n;if(s===n.next.y)return n.next}i=n.x=n.x&&n.x>=l&&m(si.x)&&x(n,t)&&(i=n,_=c),n=n.next;return i})(t,e)){var i=C(e,t);s(i,i.next)}}function d(t,e,i,n,r){return(t=1431655765&((t=858993459&((t=252645135&((t=16711935&((t=32767*(t-i)/r)|t<<8))|t<<4))|t<<2))|t<<1))|(e=1431655765&((e=858993459&((e=252645135&((e=16711935&((e=32767*(e-n)/r)|e<<8))|e<<4))|e<<2))|e<<1))<<1}function f(t){var e=t,i=t;do{e.x=0&&(t-o)*(n-a)-(i-o)*(e-a)>=0&&(i-o)*(s-a)-(r-o)*(n-a)>=0}function p(t,e){return t.next.i!==e.i&&t.prev.i!==e.i&&!(function(t,e){var i=t;do{if(i.i!==t.i&&i.next.i!==t.i&&i.i!==e.i&&i.next.i!==e.i&&v(i,i.next,t,e))return!0;i=i.next}while(i!==t);return!1})(t,e)&&x(t,e)&&x(e,t)&&(function(t,e){var i=t,n=!1,r=(t.x+e.x)/2,s=(t.y+e.y)/2;do{i.y>s!=i.next.y>s&&r<(i.next.x-i.x)*(s-i.y)/(i.next.y-i.y)+i.x&&(n=!n),i=i.next}while(i!==t);return n})(t,e)}function g(t,e,i){return(e.y-t.y)*(i.x-e.x)-(e.x-t.x)*(i.y-e.y)}function y(t,e){return t.x===e.x&&t.y===e.y}function v(t,e,i,n){return!!(y(t,e)&&y(i,n)||y(t,n)&&y(i,e))||g(t,e,i)>0!=g(t,e,n)>0&&g(i,n,t)>0!=g(i,n,e)>0}function x(t,e){return g(t.prev,t,t.next)<0?g(t,e,t.next)>=0&&g(t,t.prev,e)>=0:g(t,e,t.prev)<0||g(t,t.next,e)<0}function C(t,e){var i=new b(t.i,t.x,t.y),n=new b(e.i,e.x,e.y),r=t.next,s=e.prev;return t.next=e,e.prev=t,i.next=r,r.prev=i,n.next=i,i.prev=n,s.next=n,n.prev=s,n}function T(t,e,i,n){var r=new b(t,e,i);return n?(r.next=n.next,r.prev=n,n.next.prev=r,n.next=r):(r.prev=r,r.next=r),r}function A(t){t.next.prev=t.prev,t.prev.next=t.next,t.prevZ&&(t.prevZ.nextZ=t.nextZ),t.nextZ&&(t.nextZ.prevZ=t.prevZ)}function b(t,e,i){this.i=t,this.x=e,this.y=i,this.prev=null,this.next=null,this.z=null,this.prevZ=null,this.nextZ=null,this.steiner=!1}function S(t,e,i,n){for(var r=0,s=e,o=i-n;s0&&(n+=t[r-1].length,i.holes.push(n))}return i}}),{}],119:[(function(t,e,i){cc.js;var n=t("./types").LineCap,r=t("./types").LineJoin,s=t("./helper"),o=function(t){this._rootCtor(t),this._needDraw=!0,this.cmds=[],this.style={strokeStyle:"black",fillStyle:"white",lineCap:"butt",lineJoin:"miter",miterLimit:10}},a=o.prototype=Object.create(_ccsg.Node.CanvasRenderCmd.prototype);a.constructor=o,a._updateCurrentRegions=function(){var 
t=this._currentRegion;this._currentRegion=this._oldRegion,this._oldRegion=t,this._currentRegion.setTo(0,0,cc.visibleRect.width,cc.visibleRect.height)},a.rendering=function(t,e,i){var n=t||cc._renderContext,r=n.getContext();n.setTransform(this._worldTransform,e,i),r.save(),r.scale(1,-1);var s=this.style;r.strokeStyle=s.strokeStyle,r.fillStyle=s.fillStyle,r.lineWidth=s.lineWidth,r.lineJoin=s.lineJoin,r.miterLimit=s.miterLimit;for(var o=!0,a=this.cmds,c=0,h=a.length;ci?i:t}var x=cc.Enum({PT_CORNER:1,PT_LEFT:2,PT_BEVEL:4,PT_INNERBEVEL:8});function C(t,e){a.call(this,t,e),this.reset()}function T(){this.reset()}function A(){this.vertsOffset=0,this.vertsVBO=gl.createBuffer(),this.vertsBuffer=null,this.uint32VertsBuffer=null,this.vertsDirty=!1,this.indicesOffset=0,this.indicesVBO=gl.createBuffer(),this.indicesBuffer=null,this.indicesDirty=!1}function b(t){this._rootCtor(t),this._needDraw=!0;cc._renderContext;this._buffers=[],this._buffer=null,this._allocBuffer(),this._matrix=new cc.math.Matrix4,this._matrix.identity(),this._paths=[],this._points=[],this._curColorValue=0,this._blendFunc=new cc.BlendFunc(cc.macro.BLEND_SRC,cc.macro.BLEND_DST);var e=new cc.GLProgram;e.initWithVertexShaderByteArray(cc.PresetShaders.POSITION_COLOR_VERT,cc.PresetShaders.POSITION_COLOR_FRAG),e.addAttribute(cc.macro.ATTRIBUTE_NAME_POSITION,cc.macro.VERTEX_ATTRIB_POSITION),e.addAttribute(cc.macro.ATTRIBUTE_NAME_COLOR,cc.macro.VERTEX_ATTRIB_COLOR),e.link(),e.updateUniforms(),this._shaderProgram=e,this._allocVerts(h)}c.extend(C,a),C.prototype.reset=function(){this.dx=0,this.dy=0,this.dmx=0,this.dmy=0,this.flags=0,this.len=0},T.prototype.reset=function(){this.closed=!1,this.nbevel=0,this.complex=!0,this.points?this.points.length=0:this.points=[]},A.prototype.clear=function(){this.vertsOffset=0,this.indicesOffset=0},A.prototype.alloc=function(t,e){var i=this.vertsOffset+t;if(i>65535)return!1;var n=this.vertsBuffer,r=n?n.length/3:0;if(i>r){for(0===r&&(r=h);i>r;)r*=2;var s=new Float32Array(3*r),o=new Uint32Array(s.buffer);if(n)for(var a=this.uint32VertsBuffer,c=0,l=n.length;cd){for(0===d&&(d=3*h);_>d;)d*=2;var f=new Uint16Array(d);if(u)for(c=0,l=u.length;c>>0)+(t.b<<16)+(t.g<<8)+t.r,this._expandStroke(),this._updatePathOffset=!0},S.fill=function(){var t=this._fillColor;this._curColorValue=(t.a<<24>>>0)+(t.b<<16)+(t.g<<8)+t.r,this._expandFill(),this._updatePathOffset=!0,this._filling=!1},S._strokeColor=null,S._fillColor=null,S.setStrokeColor=function(t){this._strokeColor=t},S.getStrokeColor=function(){return this._strokeColor},S.setFillColor=function(t){this._fillColor=t},S.getFillColor=function(){return this._fillColor},S.setLineWidth=function(t){this.lineWidth=t},S.setLineJoin=function(t){this.lineJoin=t},S.setLineCap=function(t){this.lineCap=t},S.setMiterLimit=function(t){this.miterLimit=t},c.getset(S,"strokeColor",S.getStrokeColor,S.setStrokeColor),c.getset(S,"fillColor",S.getFillColor,S.setFillColor),S._render=function(){var t=this._buffers;if(0!==t.length){for(var e=cc._renderContext,i=0,n=t.length;i0&&(n=1/t);for(var s=this._paths,o=this._pathOffset,a=this._pathLength;o1e-6){var A=1/p;A>600&&(A=600),f.dmx*=A,f.dmy*=A}f.dx*d.dy-d.dx*f.dy>0&&(0,f.flags|=x.PT_LEFT),p*(g=_(11,u(d.len,f.len)*n))*g<1&&(f.flags|=x.PT_INNERBEVEL),f.flags&x.PT_CORNER&&(p*i*i<1||e===r.BEVEL||e===r.ROUND)&&(f.flags|=x.PT_BEVEL),0!=(f.flags&(x.PT_BEVEL|x.PT_INNERBEVEL))&&c.nbevel++,d=f,f=h[m+1]}}},S._vset=function(t,e){var 
i=this._buffer,n=3*i.vertsOffset,r=i.vertsBuffer;r[n]=t,r[n+1]=e,i.uint32VertsBuffer[n+2]=this._curColorValue,i.vertsOffset++},S._chooseBevel=function(t,e,i,n){var r,s,o,a,c=i.x,h=i.y;return 0!==t?(r=c+e.dy*n,s=h-e.dx*n,o=c+i.dy*n,a=h-i.dx*n):(r=o=c+i.dmx*n,s=a=h+i.dmy*n),[r,s,o,a]},S._buttCap=function(t,e,i,n,r){var s=t.x-e*r,o=t.y-i*r,a=i,c=-e;this._vset(s+a*n,o+c*n),this._vset(s-a*n,o-c*n)},S._roundCapStart=function(t,e,i,n,r){for(var s=t.x,o=t.y,a=i,c=-e,h=0;hT&&(I-=2*l),this._vset(_,f),this._vset(h-s*n,e.y-o*n);for(var A=v(d((T-I)/l)*r,2,r),b=0;b10||(f=.5*(r+o),m=.5*(s+a),p=.5*((l=.5*(t+i))+(_=.5*(i+r))),g=.5*((u=.5*(e+n))+(d=.5*(n+s))),((E=y((i-o)*(S=a-e)-(n-a)*(b=o-t)))+(w=y((r-o)*S-(s-a)*b)))*(E+w)=2*n)g=2*n;else for(;g<0;)g+=2*n;else if(c(g)>=2*n)g=2*-n;else for(;g>0;)g-=2*n;for(m=0|s(1,r(c(g)/(.5*n)+.5,5)),y=c(4/3*(1-o(d=g/m/2))/a(d)),_||(y=-y),f=0;f<=m;f++)C=e+(v=o(p=l+g*(f/m)))*h,T=i+(x=a(p))*h,A=-x*h*y,b=v*h*y,0===f?t.moveTo(C,T):t.bezierCurveTo(S+w,E+I,C-A,T-b,C,T),S=C,E=T,w=A,I=b},ellipse:function(t,e,i,n,r){t.moveTo(e-n,i),t.bezierCurveTo(e-n,i+r*l,e-n*l,i+r,e,i+r),t.bezierCurveTo(e+n*l,i+r,e+n,i+r*l,e+n,i),t.bezierCurveTo(e+n,i-r*l,e+n*l,i-r,e,i-r),t.bezierCurveTo(e-n*l,i-r,e-n,i-r*l,e-n,i),t.close()},roundRect:function(t,e,i,n,s,o){if(o<.1)t.rect(e,i,n,s);else{var a=r(o,.5*c(n))*h(n),u=r(o,.5*c(s))*h(s);t.moveTo(e,i+u),t.lineTo(e,i+s-u),t.bezierCurveTo(e,i+s-u*(1-l),e+a*(1-l),i+s,e+a,i+s),t.lineTo(e+n-a,i+s),t.bezierCurveTo(e+n-a*(1-l),i+s,e+n,i+s-u*(1-l),e+n,i+s-u),t.lineTo(e+n,i+u),t.bezierCurveTo(e+n,i+u*(1-l),e+n-a*(1-l),i,e+n-a,i),t.lineTo(e+a,i),t.bezierCurveTo(e+a*(1-l),i,e,i+u*(1-l),e,i+u),t.close()}}}}),{}],124:[(function(t,e,i){"use strict";var n;(n=_ccsg.GraphicsNode=t("./graphics-node"))&&t("../utils/misc").propertyDefine(n,["lineWidth","lineCap","lineJoin","miterLimit","strokeColor","fillColor"],{});t("./graphics")}),{"../utils/misc":228,"./graphics":122,"./graphics-node":120}],125:[(function(t,e,i){"use strict";var n=cc.Enum({BUTT:0,ROUND:1,SQUARE:2}),r=cc.Enum({BEVEL:0,ROUND:1,MITER:2});e.exports={LineCap:n,LineJoin:r}}),{}],126:[(function(t,e,i){t("./platform"),t("./assets"),t("./CCNode"),t("./CCScene"),t("./components"),t("./graphics"),t("./collider"),t("./collider/CCIntersection"),t("./physics"),t("./camera/CCCamera"),t("./base-ui/CCWidgetManager")}),{"./CCNode":40,"./CCScene":41,"./assets":56,"./base-ui/CCWidgetManager":61,"./camera/CCCamera":62,"./collider":71,"./collider/CCIntersection":69,"./components":105,"./graphics":124,"./physics":160,"./platform":196}],127:[(function(t,e,i){var n=/^(click)(\s)*=/,r=/(\s)*src(\s)*=|(\s)*height(\s)*=|(\s)*width(\s)*=|(\s)*click(\s)*=/;cc.HtmlTextParser=function(){this._parsedObject={},this._specialSymbolArray=[],this._specialSymbolArray.push([/</g,"<"]),this._specialSymbolArray.push([/>/g,">"]),this._specialSymbolArray.push([/&/g,"&"]),this._specialSymbolArray.push([/"/g,'"']),this._specialSymbolArray.push([/'/g,"'"])},cc.HtmlTextParser.prototype={constructor:cc.HtmlTextParser,parse:function(t){this._resultObjectArray=[],this._stack=[];for(var e=0,i=t.length;e",e);-1===r?r=n:"/"===t.charAt(n+1)?this._stack.pop():this._addToStack(t.substring(n+1,r)),e=r+1}}return this._resultObjectArray},_attributeToObject:function(t){var e,i,n,s,o={},a=(t=t.trim()).match(/^(color|size)(\s)*=/);if(a){if(e=a[0],""===(t=t.substring(e.length).trim()))return o;switch(i=t.indexOf(" "),e[0]){case"c":o.color=i>-1?t.substring(0,i).trim():t;break;case"s":o.size=parseInt(t)}return 
i>-1&&(s=t.substring(i+1).trim(),n=this._processEventHandler(s),o.event=n),o}if((a=t.match(/^(br(\s)*\/)/))&&a[0].length>0&&(e=a[0].trim()).startsWith("br")&&"/"===e[e.length-1])return o.isNewLine=!0,this._resultObjectArray.push({text:"",style:{newline:!0}}),o;if((a=t.match(/^(img(\s)*src(\s)*=[^>]+\/)/))&&a[0].length>0&&(e=a[0].trim()).startsWith("img")&&"/"===e[e.length-1]){var c;a=t.match(r);for(var h=!1;a;)e=(t=t.substring(t.indexOf(a[0]))).substr(0,a[0].length),u=(i=(c=t.substring(e.length).trim()).indexOf(" "))>-1?c.substr(0,i):c,e=(e=e.replace(/[^a-zA-Z]/g,"").trim()).toLocaleLowerCase(),t=c.substring(i).trim(),"src"===e?(o.isImage=!0,u.endsWith("/")&&(u=u.substring(0,u.length-1)),0===u.indexOf("'")?(h=!0,u=u.substring(1,u.length-1)):0===u.indexOf('"')&&(h=!0,u=u.substring(1,u.length-1)),o.src=u):"height"===e?o.imageHeight=parseInt(u):"width"===e?o.imageWidth=parseInt(u):"click"===e&&(o.event=this._processEventHandler(e+"="+u)),a=t.match(r);return h&&o.isImage&&this._resultObjectArray.push({text:"",style:o}),{}}if(a=t.match(/^(outline(\s)*[^>]*)/)){var l={color:"#ffffff",width:1};if(t=a[0].substring("outline".length).trim()){var u,_=/(\s)*color(\s)*=|(\s)*width(\s)*=|(\s)*click(\s)*=/;for(a=t.match(_);a;)e=(t=t.substring(t.indexOf(a[0]))).substr(0,a[0].length),u=(i=(c=t.substring(e.length).trim()).indexOf(" "))>-1?c.substr(0,i):c,e=(e=e.replace(/[^a-zA-Z]/g,"").trim()).toLocaleLowerCase(),t=c.substring(i).trim(),"click"===e?o.event=this._processEventHandler(e+"="+u):"color"===e?l.color=u:"width"===e&&(l.width=parseInt(u)),a=t.match(_)}o.outline=l}if((a=t.match(/^(on|u|b|i)(\s)*/))&&a[0].length>0){switch(e=a[0],t=t.substring(e.length).trim(),e[0]){case"u":o.underline=!0;break;case"i":o.italic=!0;break;case"b":o.bold=!0}if(""===t)return o;n=this._processEventHandler(t),o.event=n}return o},_processEventHandler:function(t){for(var e=0,i={},r=t.match(n),s=!1;r;){var o=r[0],a="";if(s=!1,'"'===(t=t.substring(o.length).trim()).charAt(0))(e=t.indexOf('"',1))>-1&&(a=t.substring(1,e).trim(),s=!0),e++;else if("'"===t.charAt(0))(e=t.indexOf("'",1))>-1&&(a=t.substring(1,e).trim(),s=!0),e++;else{var c=t.match(/(\S)+/);e=(a=c?c[0]:"").length}s&&(i[o=o.substring(0,o.length-1).trim()]=a),r=(t=t.substring(e).trim()).match(n)}return i},_addToStack:function(t){var e=this._attributeToObject(t);if(0===this._stack.length)this._stack.push(e);else{if(e.isNewLine||e.isImage)return;var i=this._stack[this._stack.length-1];for(var n in i)e[n]||(e[n]=i[n]);this._stack.push(e)}},_processResult:function(t){""!==t&&(t=this._escapeSpecialSymbol(t),this._stack.length>0?this._resultObjectArray.push({text:t,style:this._stack[this._stack.length-1]}):this._resultObjectArray.push({text:t}))},_escapeSpecialSymbol:function(t){for(var e=0;e0&&this._isVerticalClamp()&&this._shrinkLabelToContentSize(this._isVerticalClamp.bind(this));if(!this._updateQuads()){t=!1,this._overFlow===_ccsg.Label.Overflow.SHRINK&&this._shrinkLabelToContentSize(this._isHorizontalClamp.bind(this));break}}while(0);return t},_isHorizontalClamped:function(t,e){var i=this._linesWidth[e],n=t>this._contentSize.width||t<0;return this._isWrapText?i>this._contentSize.width&&n:n},_updateQuads:function(){var t=!0;this._spriteBatchNode.removeAllChildren();for(var e=0;e0){if(n>this._tailoredTopY){var r=n-this._tailoredTopY;this._reusedRect.y+=r,this._reusedRect.height-=r,n-=r}n-i._height*this._bmfontScale0&&this._isHorizontalClamped(o,s))if(this._overFlow===_ccsg.Label.Overflow.CLAMP)this._reusedRect.width=0;else 
if(this._overFlow===_ccsg.Label.Overflow.SHRINK){if(this._contentSize.width>i._width){t=!1;break}this._reusedRect.width=0}if(this._reusedRect.height>0&&this._reusedRect.width>0){var a=this.getChildByTag(e),c=this._spriteBatchNode._texture,h=this._spriteFrame,l=this._spriteFrame.isRotated(),u=h._originalSize,_=h._rect,d=h._offset,f=d.x+(u.width-_.width)/2,m=d.y-(u.height-_.height)/2;if(l){var p=this._reusedRect.x;this._reusedRect.x=_.x+_.height-this._reusedRect.y-this._reusedRect.height-m,this._reusedRect.y=p+_.y-f,this._reusedRect.y<0&&(this._reusedRect.height=this._reusedRect.height+m)}else this._reusedRect.x+=_.x-f,this._reusedRect.y+=_.y+m;a?a.setTextureRect(this._reusedRect,l):((a=new _ccsg.Sprite).initWithTexture(c,this._reusedRect,l),a.setAnchorPoint(cc.p(0,1)));var g=this._lettersInfo[e]._positionX+this._linesOffsetX[this._lettersInfo[e]._lineIndex];a.setPosition(g,n),this._updateLetterSpriteScale(a),this._spriteBatchNode.addChild(a)}}return t},_updateLetterSpriteScale:function(t){this._labelType===_ccsg.Label.Type.BMFont&&this._fontSize>0&&t.setScale(this._bmfontScale)},_recordPlaceholderInfo:function(t,e){if(t>=this._lettersInfo.length){var i=new o;this._lettersInfo.push(i)}this._lettersInfo[t]._char=e,this._lettersInfo[t]._valid=!1},_recordLetterInfo:function(t,e,i,n){if(i>=this._lettersInfo.length){var r=new o;this._lettersInfo.push(r)}e=e.charCodeAt(0),this._lettersInfo[i]._lineIndex=n,this._lettersInfo[i]._char=e,this._lettersInfo[i]._valid=this._fontAtlas._letterDefinitions[e]._validDefinition,this._lettersInfo[i]._positionX=t.x,this._lettersInfo[i]._positionY=t.y},_setDimensions:function(t,e){var i="number"==typeof t.width?t.width:t,n="number"==typeof t.height?t.height:e,r=this.getContentSize();_ccsg.Node.prototype.setContentSize.call(this,t,e),n===r.height&&i===r.width||(this._setupBMFontOverflowMetrics(i,n),this._drawFontsize>0&&this._restoreFontSize(),this._notifyLabelSkinDirty())},_restoreFontSize:function(){this._fontSize=this._drawFontsize},_multilineTextWrap:function(t){var e=this.getStringLength(),i=0,n=0,r=0,s=0,o=0,a=this._lineSpacing,c=0,h=0,l=null,u=cc.p(0,0);this._updateBMFontScale();for(var _=0;_0&&n>0&&T+l._width*this._bmfontScale>this._maxLineWidth&&!cc.TextUtils.isUnicodeSpace(d)){this._linesWidth.push(o),o=0,i++,n=0,r-=this._lineHeight*this._bmfontScale+a,v=!0;break}u.x=T,u.y=r-l._offsetY*this._bmfontScale,this._recordLetterInfo(u,d,C,i),C+1u.y-l._height*this._bmfontScale&&(p=u.y-l._height*this._bmfontScale)}else this._recordPlaceholderInfo(C,d),console.log("Can't find letter definition in texture atlas "+this._config.atlasName+" for letter:"+d);else this._recordPlaceholderInfo(C,d)}v||(n=y,o=g,cp&&(h=p),s1&&(this._textDesiredHeight+=(this._numberOfLines-1)*this._lineSpacing);var A=cc.size(this._labelWidth,this._labelHeight);return this._labelWidth<=0&&(A.width=parseFloat(s.toFixed(2))),this._labelHeight<=0&&(A.height=parseFloat(this._textDesiredHeight.toFixed(2))),_ccsg.Node.prototype.setContentSize.call(this,A),this._tailoredTopY=A.height,this._tailoredBottomY=0,c>0&&(this._tailoredTopY=A.height+c),h<-this._textDesiredHeight&&(this._tailoredBottomY=this._textDesiredHeight+h),!0},_multilineTextWrapByWord:function(){return this._multilineTextWrap(this._getFirstWordLen.bind(this))},_multilineTextWrapByChar:function(){return this._multilineTextWrap(this._getFirstCharLen.bind(this))},_isVerticalClamp:function(){return this._textDesiredHeight>this._contentSize.height},_isHorizontalClamp:function(){for(var 
t=!1,e=0;e0)if(this._isWrapText){if(this._linesWidth[r]>this._contentSize.width&&(n>this._contentSize.width||n<0)){t=!0;break}}else if(n>this._contentSize.width){t=!0;break}}return t},_shrinkLabelToContentSize:function(t){for(var e=this.getFontSize(),i=0,n=this._fontAtlas.cloneLetterDefinition(),r=this._lineHeight,s=!0;t();){var o=e-++i;if(s=!1,o<=0)break;var a=o/e;this._fontAtlas.assignLetterDefinitions(n),this._fontAtlas.scaleFontLetterDefinition(a),this._lineHeight=r*a,this._lineBreakWithoutSpaces?this._multilineTextWrapByChar():this._multilineTextWrapByWord(),this._computeAlignmentOffset()}this._lineHeight=r,this._fontAtlas.assignLetterDefinitions(n),s||e-i>=0&&this._scaleFontSizeDown(e-i)},_scaleFontSizeDown:function(t){var e=!0;this._labelType===_ccsg.Label.Type.BMFont&&(t||(t=.1,e=!1),this._fontSize=t,e&&this._updateContent())},_updateContent:function(){this._fontAtlas&&(this._computeHorizontalKerningForText(this._string),this._alignText())},_computeAlignmentOffset:function(){switch(this._linesOffsetX=[],this._hAlign){case cc.TextAlignment.LEFT:for(var t=0;tthis._maxLineWidth&&!cc.TextUtils.isUnicodeSpace(n)&&this._maxLineWidth>0)return r;if(o+=s._xAdvance*this._bmfontScale+this._spacingX,"\n"===n||cc.TextUtils.isUnicodeSpace(n)||cc.TextUtils.isUnicodeCJK(n))break;r++}return r},_updateBMFontScale:function(){if(this._labelType===_ccsg.Label.Type.BMFont){var t=this._fontAtlas._fontSize;this._bmfontScale=this._fontSize/t}else this._bmfontScale=1},_initBMFontWithString:function(t,e){if(this._config)return cc.logID(4002),!1;this._string=t,this._setBMFontFile(e)},_createSpriteBatchNode:function(t){this._spriteBatchNode=new cc.SpriteBatchNode(t,this._string.length),this._spriteBatchNode.setCascadeColorEnabled(!0),this._spriteBatchNode.setCascadeOpacityEnabled(!0),this.addChild(this._spriteBatchNode),this._updateContent(),this.setColor(this.color)},_createFontChars:function(){if(this._config){this._fontAtlas=new cc.FontAtlas(this._config),this._lineHeight||(this._lineHeight=this._fontAtlas._lineHeight);var t=this._config.fontDefDictionary;for(var e in t){var i=new s,n=t[e].rect;i._offsetX=parseInt(t[e].xOffset),i._offsetY=parseInt(t[e].yOffset),i._width=parseInt(n.width),i._height=parseInt(n.height),i._u=parseInt(n.x)+this._imageOffset.x,i._v=parseInt(n.y)+this._imageOffset.y,i._textureID=0,i._validDefinition=!0,i._xAdvance=parseInt(t[e].xAdvance),this._fontAtlas.addLetterDefinitions(e,i)}}},_rescaleWithOriginalFontSize:function(){var t=this.getFontSize();this._drawFontsize-t>=1&&this._overFlow===_ccsg.Label.Overflow.SHRINK&&(this._labelType===_ccsg.Label.Type.BMFont?this._scaleFontSizeDown(this._drawFontsize):this._fontSize=this._drawFontsize)},_computeHorizontalKerningForText:function(){for(var t=this.getStringLength(),e=this._config.kerningDict,i=-1,n=0;nc||o>a;){if(u?h=_/2|0:_=h=_-1,h<=0){cc.logID(4003);break}for(t._fontSize=h,i=this._constructFontDesc(),this._labelContext.font=i,this._splitedStrings=[],s=0,r=0;rc?_=0|h:(u=!1,s=c+1))}}else{for(s=e.length*this._getLineHeight(),r=0;ro?r:o}s=this._splitedStrings.length*this._getLineHeight(),this._canvasSize.width=Math.round(r.toFixed(2))+2*this._getMargin(),this._canvasSize.height=Math.round(s.toFixed(2)),e._isItalic&&(this._canvasSize.width+=e._drawFontsize*Math.tan(.20943951)),_ccsg.Node.prototype.setContentSize.call(e,this._canvasSize)}this._labelCanvas.width=this._canvasSize.width,this._labelCanvas.height=this._canvasSize.height},t._calculateFillTextStartPosition=function(){var 
t,e,i=this._node,n=this._getLineHeight(),r=this._splitedStrings.length;return t=cc.TextAlignment.RIGHT===i._hAlign?this._canvasSize.width-this._getMargin():cc.TextAlignment.CENTER===i._hAlign?this._canvasSize.width/2:0+this._getMargin(),e=cc.VerticalTextAlignment.TOP===i._vAlign?0:cc.VerticalTextAlignment.CENTER===i._vAlign?this._canvasSize.height/2-n*(r-1)/2:this._canvasSize.height-n*(r-1),cc.p(t,e)},t._calculateTextBaseline=function(){var t,e,i=this._node;t=cc.TextAlignment.RIGHT===i._hAlign?"right":cc.TextAlignment.CENTER===i._hAlign?"center":"left",this._labelContext.textAlign=t,e=cc.VerticalTextAlignment.TOP===i._vAlign?"top":cc.VerticalTextAlignment.CENTER===i._vAlign?"middle":"bottom",this._labelContext.textBaseline=e},t._bakeLabel=function(){var t=this._node;this._drawFontsize=t._drawFontsize,this._canvasSize=this._calculateCanvasSize(),this._fontDesc=this._calculateLabelFont(),this._calculateSplitedStrings(),this._updateLabelDimensions(),this._calculateTextBaseline(),this._updateTexture()},t._calculateUnderlineStartPosition=function(){var t,e,i=this._node,n=this._getLineHeight(),r=this._splitedStrings.length;return t=0+this._getMargin(),e=cc.VerticalTextAlignment.TOP===i._vAlign?i._fontSize:cc.VerticalTextAlignment.CENTER===i._vAlign?this._canvasSize.height/2-n*(r-1)/2+i._fontSize/2:this._canvasSize.height-n*(r-1),cc.p(t,e)},t._updateTexture=function(){this._labelContext.clearRect(0,0,this._canvasSize.width,this._canvasSize.height),this._labelContext.font=this._fontDesc;var t=this._calculateFillTextStartPosition(),e=this._getLineHeight();this._labelContext.lineJoin="round";var i,n=this._displayedColor;this._labelContext.fillStyle="rgb("+n.r+","+n.g+","+n.b+")";for(var r=0;ra||sc)},t.rendering=function(t,e,i){var n=this._node;if(n._labelType===_ccsg.Label.Type.TTF||n._labelType===_ccsg.Label.Type.SystemFont){var r=this._displayedOpacity,s=r/255;if(0===r)return;var o=t||cc._renderContext,a=o.getContext();if(o.setTransform(this._worldTransform,e,i),o.setCompositeOperation(_ccsg.Node.CanvasRenderCmd._getCompositeOperationByBlendFunc(n._blendFunc)),o.setGlobalAlpha(s),this._texture){var c,h,l,u,_;0,l=-this._node._contentSize.height,u=this._node._contentSize.width,_=this._node._contentSize.height,0,0,c=this._texture.getPixelWidth(),h=this._texture.getPixelHeight();var d=this._texture._image;""!==this._texture._pattern?(o.setFillStyle(a.createPattern(d,this._texture._pattern)),a.fillRect(0,l,u,_)):0!==c&&0!==h&&0!==u&&0!==_&&a.drawImage(d,0,0,c,h,0,l,u,_)}cc.g_NumberOfDraws=cc.g_NumberOfDraws+1}}})()}),{}],130:[(function(t,e,i){var n,r={premultiplyAlpha:!0};_ccsg.Label.WebGLRenderCmd=function(t){this._rootCtor(t),this._needDraw=!0,this._texture=new cc.Texture2D,this._texture.update(r),n=n||document.createElement("canvas"),this._labelCanvas=n,this._texture.initWithElement(this._labelCanvas),this._labelContext=this._labelCanvas.getContext("2d"),this._labelCanvas.width=1,this._labelCanvas.height=1,this._splitedStrings=null,this._drawFontsize=0,this._vertices=[{x:0,y:0,u:0,v:0},{x:0,y:0,u:0,v:1},{x:0,y:0,u:1,v:0},{x:0,y:0,u:1,v:1}],this._color=new Uint32Array(1),this._dirty=!1,this._shaderProgram=cc.shaderCache.programForKey(cc.macro.SHADER_SPRITE_POSITION_TEXTURECOLOR)};var s=_ccsg.Label.WebGLRenderCmd.prototype=Object.create(_ccsg.Node.WebGLRenderCmd.prototype);cc.js.mixin(s,_ccsg.Label.TTFLabelBaker.prototype),s.constructor=_ccsg.Label.WebGLRenderCmd,s.updateTransform=function(t){this.originUpdateTransform(t);this._node;var 
e=this._node.width,i=this._node.height,n=this._worldTransform,r=this._vertices;r[0].x=0*n.a+i*n.c+n.tx,r[0].y=0*n.b+i*n.d+n.ty,r[1].x=0*n.a+0*n.c+n.tx,r[1].y=0*n.b+0*n.d+n.ty,r[2].x=e*n.a+i*n.c+n.tx,r[2].y=e*n.b+i*n.d+n.ty,r[3].x=e*n.a+0*n.c+n.tx,r[3].y=e*n.b+0*n.d+n.ty},s._doCulling=function(){var t=this._node;if(t._string&&(t._labelType===_ccsg.Label.Type.TTF||t._labelType===_ccsg.Label.Type.SystemFont)){var e=cc.visibleRect;this._cameraFlag>0&&(e=cc.Camera.main.visibleRect);var i=e.left.x,n=e.right.x,r=e.top.y,s=e.bottom.y,o=this._vertices;(o[0].x-i&o[1].x-i&o[2].x-i&o[3].x-i)>>31||(n-o[0].x&n-o[1].x&n-o[2].x&n-o[3].x)>>31||(o[0].y-s&o[1].y-s&o[2].y-s&o[3].y-s)>>31||(r-o[0].y&r-o[1].y&r-o[2].y&r-o[3].y)>>31?this._needDraw=!1:this._needDraw=!0}},s.uploadData=function(t,e,i){var n=this._node;if(!n._string||n._labelType!==_ccsg.Label.Type.TTF&&n._labelType!==_ccsg.Label.Type.SystemFont)return 0;var r=this._displayedOpacity;this._color[0]=~~r<<24>>>0|~~r<<16|~~r<<8|~~r;var s,o,a=n._vertexZ,c=this._vertices,h=c.length,l=i;for(s=0;s\u3001\u2018\u201c\u300b\uff1f\u3002\uff0c\uff01]/,label_lastWordRex:/([a-zA-Z0-9\xc4\xd6\xdc\xe4\xf6\xfc\xdf\xe9\xe8\xe7\xe0\xf9\xea\xe2\xee\xf4\xfb\u0430\xed\xec\xcd\xcc\xef\xc1\xc0\xe1\xe0\xc9\xc8\xd2\xd3\xf2\xf3\u0150\u0151\xd9\xda\u0170\xfa\u0171\xf1\xd1\xe6\xc6\u0153\u0152\xc3\xc2\xe3\xd4\xf5\u011b\u0161\u010d\u0159\u017e\xfd\xe1\xed\xe9\xf3\xfa\u016f\u0165\u010f\u0148\u011a\u0160\u010c\u0158\u017d\xc1\xcd\xc9\xd3\xda\u0164\u017c\u017a\u015b\xf3\u0144\u0142\u0119\u0107\u0105\u017b\u0179\u015a\xd3\u0143\u0141\u0118\u0106\u0104-\u044f\u0410-\u042f\u0401\u0451]+|\S)$/,label_lastEnglish:/[a-zA-Z0-9\xc4\xd6\xdc\xe4\xf6\xfc\xdf\xe9\xe8\xe7\xe0\xf9\xea\xe2\xee\xf4\xfb\u0430\xed\xec\xcd\xcc\xef\xc1\xc0\xe1\xe0\xc9\xc8\xd2\xd3\xf2\xf3\u0150\u0151\xd9\xda\u0170\xfa\u0171\xf1\xd1\xe6\xc6\u0153\u0152\xc3\xc2\xe3\xd4\xf5\u011b\u0161\u010d\u0159\u017e\xfd\xe1\xed\xe9\xf3\xfa\u016f\u0165\u010f\u0148\u011a\u0160\u010c\u0158\u017d\xc1\xcd\xc9\xd3\xda\u0164\u017c\u017a\u015b\xf3\u0144\u0142\u0119\u0107\u0105\u017b\u0179\u015a\xd3\u0143\u0141\u0118\u0106\u0104-\u044f\u0410-\u042f\u0401\u0451]+$/,label_firstEnglish:/^[a-zA-Z0-9\xc4\xd6\xdc\xe4\xf6\xfc\xdf\xe9\xe8\xe7\xe0\xf9\xea\xe2\xee\xf4\xfb\u0430\xed\xec\xcd\xcc\xef\xc1\xc0\xe1\xe0\xc9\xc8\xd2\xd3\xf2\xf3\u0150\u0151\xd9\xda\u0170\xfa\u0171\xf1\xd1\xe6\xc6\u0153\u0152\xc3\xc2\xe3\xd4\xf5\u011b\u0161\u010d\u0159\u017e\xfd\xe1\xed\xe9\xf3\xfa\u016f\u0165\u010f\u0148\u011a\u0160\u010c\u0158\u017d\xc1\xcd\xc9\xd3\xda\u0164\u017c\u017a\u015b\xf3\u0144\u0142\u0119\u0107\u0105\u017b\u0179\u015a\xd3\u0143\u0141\u0118\u0106\u0104-\u044f\u0410-\u042f\u0401\u0451]/,label_wrapinspection:!0,isUnicodeCJK:function(t){return/^[\u4E00-\u9FFF\u3400-\u4DFF]+$/.test(t)||/[\u3000-\u303F]|[\u3040-\u309F]|[\u30A0-\u30FF]|[\uFF00-\uFFEF]|[\u4E00-\u9FAF]|[\u2605-\u2606]|[\u2190-\u2195]|\u203B/g.test(t)||/^[\u1100-\u11FF]|[\u3130-\u318F]|[\uA960-\uA97F]|[\uAC00-\uD7AF]|[\uD7B0-\uD7FF]+$/.test(t)},isUnicodeSpace:function(t){return(t=t.charCodeAt(0))>=9&&t<=13||32===t||133===t||160===t||5760===t||t>=8192&&t<=8202||8232===t||8233===t||8239===t||8287===t||12288===t},fragmentText:function(t,e,i,n){var r=[];if(0===t.length||i<0)return r.push(""),r;for(var s=t;e>i&&s.length>1;){for(var o=s.length*(i/e)|0,a=s.substr(o),c=e-n(a),h=a,l=0,u=0;c>i&&u++<10;)o*=i/c,o|=0,c=e-n(a=s.substr(o));for(u=0;c<=i&&u++<10;){if(a){var _=this.label_wordRex.exec(a);l=_?_[0].length:1,h=a}o+=l,c=e-n(a=s.substr(o))}0==(o-=l)&&(o=1,h=h.substr(1));var 
d,f=s.substr(0,o);this.label_wrapinspection&&this.label_symbolRex.test(h||a)&&(0==(o-=(d=this.label_lastWordRex.exec(f))?d[0].length:0)&&(o=1),h=s.substr(o),f=s.substr(0,o)),this.label_firstEnglish.test(h)&&(d=this.label_lastEnglish.exec(f))&&f!==d[0]&&(o-=d[0].length,h=s.substr(o),f=s.substr(0,o)),0===r.length?r.push(f):(f=f.trim()).length>0&&r.push(f),e=n(s=h||a)}return 0===r.length?r.push(s):(s=s.trim()).length>0&&r.push(s),r}},cc.CustomFontLoader=e.exports=r}),{}],132:[(function(t,e,i){var n=t("../platform/js"),r=t("./pipeline"),s=t("./loading-items"),o=t("./asset-loader"),a=t("./downloader"),c=t("./loader"),h=t("./asset-table"),l=t("../platform/utils").callInNextTick,u=t("./auto-release-utils"),_=new h;var d={url:null,raw:!1};function f(t){var e,i,n;if("object"==typeof t){if(i=t,t.url)return i;e=t.uuid}else i={},e=t;return n=i.type?"uuid"===i.type:cc.AssetLibrary._uuidInSettings(e),cc.AssetLibrary._getAssetInfoInRuntime(e,d),i.url=n?d.url:e,d.url&&"uuid"===i.type&&d.raw?(i.type=null,i.isRawAsset=!0):n||(i.isRawAsset=!0),i}var m=[],p=[];function g(){var t=new o,e=new a,i=new c;r.call(this,[t,e,i]),this.assetLoader=t,this.downloader=e,this.loader=i,this.onProgress=null,this._autoReleaseSetting={}}n.extend(g,r);var y=g.prototype;y.init=function(t){},y.getXMLHttpRequest=function(){return window.XMLHttpRequest?new window.XMLHttpRequest:new ActiveXObject("MSXML2.XMLHTTP")},y.addDownloadHandlers=function(t){this.downloader.addHandlers(t)},y.addLoadHandlers=function(t){this.loader.addHandlers(t)},y.load=function(t,e,i){void 0===i&&(i=e,e=this.onProgress||null);var n=this,r=!1;t instanceof Array||(r=!0,t=t?[t]:[]),m.length=0;for(var o=0;o0){var r=this,s=t.map((function(t){return{type:"uuid",uuid:t}}));this.load(s,e,(function(t,e){if(i){for(var o=[],a=n&&[],c=0;ce.length){var i=t.charCodeAt(e.length);return 46===i||47===i}return!0}var o=r.prototype;o.getUuid=function(t,e){t=cc.url.normalize(t);var i=this._pathToUuid[t];if(i)if(Array.isArray(i)){if(!e)return i[0].uuid;for(var n=0;n0&&n.src===r)return n;(function(){function i(){n.removeEventListener("load",i),n.removeEventListener("error",s),e(null,n)}function s(){n.removeEventListener("load",i),n.removeEventListener("error",s),"https:"!==window.location.protocol&&n.crossOrigin&&"anonymous"===n.crossOrigin.toLowerCase()?d(t,e,!1,n):e(new Error(cc._getError(4930,r)))}n.addEventListener("load",i),n.addEventListener("error",s),n.src=r})()}var f={".eot":"embedded-opentype",".ttf":"truetype",".ttc":"truetype",".woff":"woff",".svg":"svg"};function m(t,e,i){var n=document,r=document.createElement("style");r.type="text/css",n.body.appendChild(r);var s="";if(isNaN(t-0)?s+="@font-face { font-family:"+t+"; src:":s+="@font-face { font-family:'"+t+"'; src:",e instanceof Array)for(var a=0,c=e.length;a=0)&&(s.deps&&f(t,s,!0))){n=!0;break}}}return i||(d.length=0),n}var m=function(t,e,i,r){n.call(this),this._id=++o,a[this._id]=this,this._pipeline=t,this._errorUrls=[],this._appending=!1,this._ownerQueue=null,this.onProgress=i,this.onComplete=r,this.map={},this.completed={},this.totalCount=0,this.completedCount=0,this._pipeline?this.active=!0:this.active=!1,e&&(e.length>0?this.append(e):this.allComplete())};m.ItemState=new cc.Enum(h),m.create=function(t,e,i,n){void 0===i?"function"==typeof e&&(n=e,e=i=null):void 0===n&&("function"==typeof e?(n=i,i=e,e=null):(n=i,i=null));var r=c.pop();return r?(r._pipeline=t,r.onProgress=i,r.onComplete=n,a[r._id]=r,r._pipeline&&(r.active=!0),e&&r.append(e)):r=new m(t,e,i,n),r},m.getQueue=function(t){return 
t.queueId?a[t.queueId]:null},m.itemComplete=function(t){var e=a[t.queueId];e&&e.itemComplete(t.id)},m.initQueueDeps=function(t){var e=l[t._id];e?(e.completed.length=0,e.deps.length=0):e=l[t._id]={completed:[],deps:[]}},m.registerQueueDep=function(t,e){var i=t.queueId||t;if(!i)return!1;var n=l[i];if(n)-1===n.deps.indexOf(e)&&n.deps.push(e);else if(t.id)for(var r in l){var s=l[r];-1!==s.deps.indexOf(t.id)&&-1===s.deps.indexOf(e)&&s.deps.push(e)}},m.finishDep=function(t){for(var e in l){var i=l[e];-1!==i.deps.indexOf(t)&&-1===i.completed.indexOf(t)&&i.completed.push(t)}};var p=m.prototype;s.mixin(p,n.prototype),p.append=function(t,e){if(!this.active)return[];e&&!e.deps&&(e.deps=[]),this._appending=!0;var i,n,r,s=[];for(i=0;i=this.totalCount},p.isItemCompleted=function(t){return!!this.completed[t]},p.exists=function(t){return!!this.map[t]},p.getContent=function(t){var e=this.map[t],i=null;return e&&(e.content?i=e.content:e.alias&&(i=e.alias.content)),i},p.getError=function(t){var e=this.map[t],i=null;return e&&(e.error?i=e.error:e.alias&&(i=e.alias.error)),i},p.addListener=n.prototype.add,p.hasListener=n.prototype.has,p.removeListener=n.prototype.remove,p.removeAllListeners=n.prototype.removeAll,p.removeItem=function(t){var e=this.map[t];e&&this.completed[e.alias||t]&&(delete this.completed[t],delete this.map[t],e.alias&&(delete this.completed[e.alias.id],delete this.map[e.alias.id]),this.completedCount--,this.totalCount--)},p.itemComplete=function(t){var e=this.map[t];if(e){var i=this._errorUrls.indexOf(t);if(e.error&&-1===i?this._errorUrls.push(t):e.error||-1===i||this._errorUrls.splice(i,1),this.completed[t]=e,this.completedCount++,m.finishDep(e.id),this.onProgress){var n=l[this._id];this.onProgress(n?n.completed.length:this.completedCount,n?n.deps.length:this.totalCount,e)}this.invoke(t,e),this.removeAll(t),!this._appending&&this.completedCount>=this.totalCount&&this.allComplete()}},p.destroy=function(){this.active=!1,this._appending=!1,this._pipeline=null,this._ownerQueue=null,this._errorUrls.length=0,this.onProgress=null,this.onComplete=null,this.map={},this.completed={},this.totalCount=0,this.completedCount=0,n.call(this),a[this._id]=null,l[this._id]&&(l[this._id].completed.length=0,l[this._id].deps.length=0),-1===c.indexOf(this)&&c.length<10&&c.push(this)},cc.LoadingItems=e.exports=m}),{"../platform/callbacks-invoker":192,"../platform/js":199,"../utils/CCPath":221}],141:[(function(t,e,i){var n=t("./pipeline"),r="MD5Pipe",s=/(\.[^.\n\\/]*)$/,o=function(t,e,i){this.id=r,this.async=!1,this.pipeline=null,this.md5AssetsMap=t,this.libraryBase=e,this.rawAssetsBase=i};o.ID=r,o.prototype.handle=function(t){return t.url=this.transformURL(t.url),t},o.prototype.transformURL=function(t,e){var i=t.indexOf("?"),n=t;if(-1!==i&&(n=t.substr(0,i)),n.startsWith(this.libraryBase))n=n.slice(this.libraryBase.length);else{if(!n.startsWith(this.rawAssetsBase))return t;n=n.slice(this.rawAssetsBase.length)}var r=this.md5AssetsMap[n];if(r)if(e){var o=cc.path.dirname(t),a=cc.path.basename(t);t=o+"."+r+"/"+a}else{var c=!1;t=t.replace(s,(function(t,e){return c=!0,"."+r+e})),c||(t=t+"."+r)}return t},n.MD5Pipe=e.exports=o}),{"./pipeline":143}],142:[(function(t,e,i){var n=t("./unpackers"),r=t("../utils/misc").pushToMap,s={Invalid:0,Removed:1,Downloading:2,Loaded:3};function o(){this.unpacker=null,this.state=s.Invalid}var a={},c={},h={};function l(t,e){return new Error("Can not retrieve "+t+" from packer "+e)}e.exports={initPacks:function(t){for(var e in c=t,t)for(var i=t[e],n=0;ne&&(e=a,i=r)}}return 
e!==s.Invalid?i:t[0]},load:function(t,e){var i=t.uuid,n=a[i];if(n){Array.isArray(n)&&(n=this._selectLoadedPack(n));var r=h[n];if(r&&r.state===s.Loaded){var c=r.unpacker.retrieve(i);return c||l(i,n)}return r||(console.log("Create unpacker %s for %s",n,i),(r=h[n]=new o).state=s.Downloading),this._loadNewPack(i,n,e),null}}}}),{"../utils/misc":228,"./unpackers":146}],143:[(function(t,e,i){t("../platform/js");var n=t("./loading-items"),r=n.ItemState;function s(t,e){var i=t.id,n=e.states[i],o=t.next,a=t.pipeline;if(!e.error&&n!==r.WORKING&&n!==r.ERROR)if(n===r.COMPLETE)o?s(o,e):a.flowOut(e);else{e.states[i]=r.WORKING;var c=t.handle(e,(function(t,n){t?(e.error=t,e.states[i]=r.ERROR,a.flowOut(e)):(n&&(e.content=n),e.states[i]=r.COMPLETE,o?s(o,e):a.flowOut(e))}));c instanceof Error?(e.error=c,e.states[i]=r.ERROR,a.flowOut(e)):void 0!==c&&(null!==c&&(e.content=c),e.states[i]=r.COMPLETE,o?s(o,e):a.flowOut(e))}}var o=function(t){this._pipes=t,this._cache={};for(var e=0;ethis._pipes.length)cc.warnID(4921);else if(this._pipes.indexOf(t)>0)cc.warnID(4922);else{t.pipeline=this;var i=null;e0&&(n=this._pipes[e-1]),n&&(n.next=t),t.next=i,this._pipes.splice(e,0,t)}},a.insertPipeAfter=function(t,e){var i=this._pipes.indexOf(t);i<0||this.insertPipe(e,i+1)},a.appendPipe=function(t){t.handle&&t.id&&(t.pipeline=this,t.next=null,this._pipes.length>0&&(this._pipes[this._pipes.length-1].next=t),this._pipes.push(t))},a.flowIn=function(t){var e,i,n=this._pipes[0];if(n){for(e=0;es&&(this._accumulator=s);this._accumulator>r;)e.Step(r,i,n),this._accumulator-=r}else{var o=1/cc.game.config.frameRate;e.Step(o,i,n)}e.DrawDebugData(),this._steping=!1;for(var a=this._delayEvents,c=0,h=a.length;c0){for(var a=n.getPoints(),l=n.getNormals(),u=n.getFractions(),_=[],d=0,f=r.length;d0}function h(t,e,i){return p(t,e,i)>=0}function l(t,e,i){return p(t,e,i)<=0}function u(t,e){var i=e.x-t.x,n=e.y-t.y;return i*i+n*n}function _(t){d(t)||t.reverse()}function d(t){return t.length<3||(function(t){var e,i=0;for(e=0;e0}function f(t,e,i,n){var r=cc.v2(),s=e.y-t.y,o=t.x-e.x,a=s*t.x+o*t.y,c=n.y-i.y,h=i.x-n.x,l=c*i.x+h*i.y,u=s*h-c*o;return (function(t,e){return Math.abs(t-e)<=1e-6})(u,0)||(r.x=(h*a-o*l)/u,r.y=(s*l-c*a)/u),r}function m(t,e,i,n,r){if(t==i||t==n||e==i||e==n)return!1;var s=t.x,o=t.y,a=e.x,c=e.y,h=i.x,l=i.y,u=n.x,_=n.y;if(Math.max(s,a)w&&(I=S,w=R)}g=r(b,I,e),y=r(I,b,e)}return v=(v=v.concat(t(g))).concat(t(y))}v.push(e);for(b=v.length-1;b>=0;b--)0==v[b].length&&v.splice(b,0);return v},ForceCounterClockWise:_,IsCounterClockWise:d}}),{}],154:[(function(t,e,i){var n=t("./CCPhysicsTypes").PTM_RATIO,r=t("./CCPhysicsTypes").ANGLE_TO_PHYSICS_ANGLE,s=t("./CCPhysicsTypes").PHYSICS_ANGLE_TO_ANGLE,o=t("./utils").getWorldRotation,a=t("./CCPhysicsTypes").BodyType,c=new b2.Vec2,h=new b2.Vec2,l=cc.Vec2.ZERO,u=cc.Class({name:"cc.RigidBody",extends:cc.Component,editor:!1,properties:{_type:a.Dynamic,_allowSleep:!0,_gravityScale:1,_linearDamping:0,_angularDamping:0,_linearVelocity:cc.v2(0,0),_angularVelocity:0,_fixedRotation:!1,enabled:{get:function(){return this._enabled},set:function(){cc.warnID("8200")},visible:!1,override:!0},enabledContactListener:{default:!1,tooltip:!1},bullet:{default:!1,tooltip:!1},type:{type:a,tooltip:!1,get:function(){return this._type},set:function(t){this._type=t,this._b2Body&&(t===a.Animated?this._b2Body.SetType(a.Kinematic):this._b2Body.SetType(t))}},allowSleep:{tooltip:!1,get:function(){return 
this._b2Body?this._b2Body.IsSleepingAllowed():this._allowSleep},set:function(t){this._allowSleep=t,this._b2Body&&this._b2Body.SetSleepingAllowed(t)}},gravityScale:{tooltip:!1,get:function(){return this._gravityScale},set:function(t){this._gravityScale=t,this._b2Body&&this._b2Body.SetGravityScale(t)}},linearDamping:{tooltip:!1,get:function(){return this._linearDamping},set:function(t){this._linearDamping=t,this._b2Body&&this._b2Body.SetLinearDamping(this._linearDamping)}},angularDamping:{tooltip:!1,get:function(){return this._angularDamping},set:function(t){this._angularDamping=t,this._b2Body&&this._b2Body.SetAngularDamping(t)}},linearVelocity:{tooltip:!1,type:cc.Vec2,get:function(){var t=this._linearVelocity;if(this._b2Body){var e=this._b2Body.GetLinearVelocity();t.x=e.x*n,t.y=e.y*n}return t},set:function(t){this._linearVelocity=t;var e=this._b2Body;if(e){var i=e.m_linearVelocity;i.Set(t.x/n,t.y/n),e.SetLinearVelocity(i)}}},angularVelocity:{tooltip:!1,get:function(){return this._b2Body?this._b2Body.GetAngularVelocity()*s:this._angularVelocity},set:function(t){this._angularVelocity=t,this._b2Body&&this._b2Body.SetAngularVelocity(t*r)}},fixedRotation:{tooltip:!1,get:function(){return this._fixedRotation},set:function(t){this._fixedRotation=t,this._b2Body&&this._b2Body.SetFixedRotation(t)}},awake:{tooltip:!1,get:function(){return!!this._b2Body&&this._b2Body.IsAwake()},set:function(t){this._b2Body&&this._b2Body.SetAwake(t)}},active:{visible:!1,get:function(){return!!this._b2Body&&this._b2Body.IsActive()},set:function(t){this._b2Body&&this._b2Body.SetActive(t)}}},getLocalPoint:function(t,e){if(e=e||cc.v2(),this._b2Body){c.Set(t.x/n,t.y/n);var i=this._b2Body.GetLocalPoint(c);e.x=i.x*n,e.y=i.y*n}return e},getWorldPoint:function(t,e){if(e=e||cc.v2(),this._b2Body){c.Set(t.x/n,t.y/n);var i=this._b2Body.GetWorldPoint(c);e.x=i.x*n,e.y=i.y*n}return e},getWorldVector:function(t,e){if(e=e||cc.v2(),this._b2Body){c.Set(t.x/n,t.y/n);var i=this._b2Body.GetWorldVector(c);e.x=i.x*n,e.y=i.y*n}return e},getLocalVector:function(t,e){if(e=e||cc.v2(),this._b2Body){c.Set(t.x/n,t.y/n);var i=this._b2Body.GetLocalVector(c);e.x=i.x*n,e.y=i.y*n}return e},getWorldPosition:function(t){if(t=t||cc.v2(),this._b2Body){var e=this._b2Body.GetPosition();t.x=e.x*n,t.y=e.y*n}return t},getWorldRotation:function(){return this._b2Body?this._b2Body.GetAngle()*s:0},getLocalCenter:function(t){if(t=t||cc.v2(),this._b2Body){var e=this._b2Body.GetLocalCenter();t.x=e.x*n,t.y=e.y*n}return t},getWorldCenter:function(t){if(t=t||cc.v2(),this._b2Body){var e=this._b2Body.GetWorldCenter();t.x=e.x*n,t.y=e.y*n}return t},getLinearVelocityFromWorldPoint:function(t,e){if(e=e||cc.v2(),this._b2Body){c.Set(t.x/n,t.y/n);var i=this._b2Body.GetLinearVelocityFromWorldPoint(c);e.x=i.x*n,e.y=i.y*n}return e},getMass:function(){return this._b2Body?this._b2Body.GetMass():0},getInertia:function(){return this._b2Body?this._b2Body.GetInertia()*n*n:0},getJointList:function(){if(!this._b2Body)return[];var t=[],e=this._b2Body.GetJointList();if(!e)return[];t.push(e.joint._joint);for(var i=e.prev;i;)t.push(i.joint._joint),i=i.prev;for(var n=e.next;n;)t.push(n.joint._joint),n=n.next;return 
t},applyForce:function(t,e,i){this._b2Body&&(c.Set(t.x/n,t.y/n),h.Set(e.x/n,e.y/n),this._b2Body.ApplyForce(c,h,i))},applyForceToCenter:function(t,e){this._b2Body&&(c.Set(t.x/n,t.y/n),this._b2Body.ApplyForceToCenter(c,e))},applyTorque:function(t,e){this._b2Body&&this._b2Body.ApplyTorque(t/n,e)},applyLinearImpulse:function(t,e,i){this._b2Body&&(c.Set(t.x/n,t.y/n),h.Set(e.x/n,e.y/n),this._b2Body.ApplyLinearImpulse(c,h,i))},applyAngularImpulse:function(t,e){this._b2Body&&this._b2Body.ApplyAngularImpulse(t/n/n,e)},syncPosition:function(t){var e=this._b2Body;if(e){var i,r=this.node.convertToWorldSpaceAR(l);if((i=this.type===a.Animated?e.GetLinearVelocity():e.GetPosition()).x=r.x/n,i.y=r.y/n,this.type===a.Animated&&t){var s=e.GetPosition(),o=cc.game.config.frameRate;i.x=(i.x-s.x)*o,i.y=(i.y-s.y)*o,e.SetAwake(!0),e.SetLinearVelocity(i)}else e.SetTransform(i,e.GetAngle())}},syncRotation:function(t){var e=this._b2Body;if(e){var i=r*o(this.node);if(this.type===a.Animated&&t){var n=e.GetAngle(),s=cc.game.config.frameRate;e.SetAwake(!0),e.SetAngularVelocity((i-n)*s)}else e.SetTransform(e.GetPosition(),i)}},resetVelocity:function(){var t=this._b2Body;if(t){var e=t.m_linearVelocity;e.Set(0,0),t.SetLinearVelocity(e),t.SetAngularVelocity(0)}},onEnable:function(){this._init()},onDisable:function(){this._destroy()},_registerNodeEvents:function(){var t=this.node;t.on("position-changed",this._onNodePositionChanged,this),t.on("rotation-changed",this._onNodeRotationChanged,this),t.on("scale-changed",this._onNodeScaleChanged,this)},_unregisterNodeEvents:function(){var t=this.node;t.off("position-changed",this._onNodePositionChanged,this),t.off("rotation-changed",this._onNodeRotationChanged,this),t.off("scale-changed",this._onNodeScaleChanged,this)},_onNodePositionChanged:function(){this.syncPosition(!0)},_onNodeRotationChanged:function(t){this.syncRotation(!0)},_onNodeScaleChanged:function(t){if(this._b2Body)for(var e=this.getComponents(cc.PhysicsCollider),i=0;i=0;n--){var r=t[n];r.collider=null,i._unregisterContactFixture(r),e&&e.DestroyFixture(r)}this.body=null,this._fixtures.length=0,this._shapes.length=0,this._inited=!1}},_createShape:function(){},apply:function(){this._destroy(),this._init()},getAABB:function(){for(var t=1e7,e=1e7,i=-1e7,r=-1e7,s=this._fixtures,o=0;oi&&(i=l.upperBound.x),l.upperBound.y>r&&(r=l.upperBound.y)}t*=n,e*=n,i*=n,r*=n;var u=this._rect;return u.x=t,u.y=e,u.width=i-t,u.height=r-e,u}});cc.PhysicsCollider=e.exports=s}),{"../CCPhysicsTypes":152,"../utils":175}],159:[(function(t,e,i){var n=t("../CCPhysicsTypes").PTM_RATIO,r=t("../CCPolygonSeparator"),s=cc.Class({name:"cc.PhysicsPolygonCollider",extends:cc.PhysicsCollider,mixins:[cc.Collider.Polygon],editor:{menu:!1,inspector:!1,requireComponent:cc.RigidBody},_createShape:function(t){var e=[],i=this.points;i.length>0&&i[0].equals(i[i.length-1])&&(i.length-=1);for(var s=r.ConvexPartition(i),o=this.offset,a=0;a=2?1:n)},n.prototype.getFixtures=function(){return this._fixtures},n.prototype.getPoints=function(){return this._points},n.prototype.getNormals=function(){return this._normals},n.prototype.getFractions=function(){return this._fractions},cc.PhysicsRayCastCallback=e.exports=n}),{}],174:[(function(t,e,i){var n=t("../CCPhysicsTypes").PHYSICS_ANGLE_TO_ANGLE,r=t("../CCPhysicsTypes").PTM_RATIO,s=t("../utils").convertToNodeRotation,o=cc.v2();function a(){}a.prototype.addB2Body=function(t){},a.prototype.removeB2Body=function(t){},a.prototype.syncNode=function(){for(var t=cc.director.getPhysicsManager()._bodies,e=0,i=t.length;e0?s:null,!0);var 
_=a.prototype;if(e&&(l||(n.extend(a,e),_=a.prototype),a.$super=e),i){for(var d=i.length-1;d>=0;d--){var f=i[d];y(_,f.prototype),y(a,f,(function(t){return f.hasOwnProperty(t)&&!0})),I._isCCClass(f)&&y(o.getClassAttrs(a).constructor.prototype,o.getClassAttrs(f).constructor.prototype)}_.constructor=a}return l||(_.__initProps__=A),n.setClassName(t,a),a}function x(t){for(var e=n.getClassName(t),i=t.constructor,r="new "+e+"(",s=0;s0){var o=!(i&&i.startsWith("cc."));o&&(r+="try{\n");var a="].apply(this,arguments);\n";if(1===s)r+="CCClass.__ctors__[0"+a;else{r+="var cs=CCClass.__ctors__;\n";for(var c=0;c=0)){var o=e[s];if("function"==typeof o){var a=n.getPropertyDescriptor(t.prototype,s);if(a){var c=a.value;if("function"==typeof c){S.test(o)&&(r=!0,e[s]=(function(t,e){return function(){var i=this._super;this._super=t;var n=e.apply(this,arguments);return this._super=i,n}})(c,o));continue}}0}}return r}function w(t,e,i,n,r,s){if(t.__props__=[],n&&n.__props__&&(t.__props__=n.__props__.slice()),r)for(var o=0;o=0)){var d=t[u];h.validateMethodWithProps(d,u,e,s,i)&&n.value(s.prototype,u,d,!0,!0)}var f=t.editor;return f&&cc.isChildClassOf(i,cc.Component)&&cc.Component._registerEditorProps(s,f),s}I._isCCClass=function(t){return t&&t.hasOwnProperty("__ctors__")},I._fastDefine=function(t,e,i){n.setClassName(t,e);for(var r=e.__props__=Object.keys(i),s=o.getClassAttrsProto(e),c=0;c=2&&((h||u())[l+"min"]=g[0],h[l+"max"]=g[1],g.length>2&&(h[l+"step"]=g[2])),p("min","number"),p("max","number"),p("step","number"),_}cc.Class=I,e.exports={isArray:function(t){return t=g(t),Array.isArray(t)},fastDefine:I._fastDefine,getNewValueTypeCode:x,IDENTIFIER_RE:T,escapeForJS:C,getDefault:g}}),{"./CCEnum":180,"./attribute":191,"./js":199,"./preprocess-class":200,"./requiring-frame":201,"./utils":203}],179:[(function(t,e,i){t("./CCClass");var n=t("./preprocess-class"),r=t("./js"),s="__ccclassCache__";function o(t){return t}function a(t,e){return t[e]||(t[e]={})}function c(t){return function(e){return"function"==typeof e?t(e):function(i){return t(i,e)}}}function h(t,e,i){return function(t){return function(i){return e(i,t)}}}var l=h.bind(null,!1);function u(t){return h.bind(null,!1)}var _=u(),d=u();function f(t,e){return a(t,s)}var m=c((function(t,e){var i=r.getSuper(t);i===Object&&(i=null);var n={name:e,extends:i,ctor:t,__ES6__:!0},o=t[s];if(o){var a=o.proto;a&&r.mixin(n,a),t[s]=void 0}return cc.Class(n)}));function p(t,e,i){return t((function(t,n){var r=f(t);if(r){var s=void 0!==i?i:n;a(a(r,"proto"),"editor")[e]=s}}),e)}function g(t){return t(o)}var y=g(c),v=p(l,"requireComponent"),x=g(_),C=p(d,"executionOrder"),T=g(c),A=g(c),b=g(_),S=g(_),E=g(_);cc._decorator=e.exports={ccclass:m,property:function(t,e,i){var s=null;function o(t,e,i){var o=f(t.constructor);if(o){var c=a(a(o,"proto"),"properties");(function(t,e,i,s,o,a){var c;s&&(c=(c=n.getFullFormOfProperty(s))||s);var h=e[i],l=r.mixin(h||{},c||{});if(o&&(o.get||o.set))o.get&&(l.get=o.get),o.set&&(l.set=o.set);else{var u=void 0;if(o)o.initializer&&(u=(function(t){var e;try{e=t()}catch(e){return t}return"object"!=typeof e||null===e?e:t})(o.initializer));else{var _=a.default||(a.default=(function(t){var e;try{e=new t}catch(t){return{}}return e})(t));_.hasOwnProperty(i)&&(u=_[i])}l.default=u}e[i]=l})(t.constructor,c,e,s,i,o)}}if(void 0===e)return s=t,o;o(t,e,i)},executeInEditMode:y,requireComponent:v,menu:x,executionOrder:C,disallowMultiple:T,playOnFocus:A,inspector:b,icon:S,help:E,mixins:function(){for(var t=[],e=0;ea)return this._removeUsedIndexBit(i),delete 
this._touchesIntegerDict[n.getID()],i;t>>=1}return-1},_removeUsedIndexBit:function(t){if(!(t<0||t>=this._maxTouches)){var e=1<0){this._glView._convertTouchesWithScale(r);var _=new cc.Event.EventTouch(r);_._eventCode=cc.Event.EventTouch.BEGAN,o.dispatchEvent(_)}},handleTouchesMove:function(t){for(var e,i,n,r=[],a=this._touches,c=s.now(),h=0,l=t.length;h0){this._glView._convertTouchesWithScale(r);var u=new cc.Event.EventTouch(r);u._eventCode=cc.Event.EventTouch.MOVED,o.dispatchEvent(u)}},handleTouchesEnd:function(t){var e=this.getSetOfTouchesEndOrCancel(t);if(e.length>0){this._glView._convertTouchesWithScale(e);var i=new cc.Event.EventTouch(e);i._eventCode=cc.Event.EventTouch.ENDED,o.dispatchEvent(i)}},handleTouchesCancel:function(t){var e=this.getSetOfTouchesEndOrCancel(t);if(e.length>0){this._glView._convertTouchesWithScale(e);var i=new cc.Event.EventTouch(e);i._eventCode=cc.Event.EventTouch.CANCELLED,o.dispatchEvent(i)}},getSetOfTouchesEndOrCancel:function(t){for(var e,i,n,r=[],s=this._touches,o=this._touchesIntegerDict,a=0,c=t.length;a=0;r--)if(i[r].getID()===n){e=i[r];break}return e||(e=t),e},setPreTouch:function(t){for(var e=!1,i=this._preTouchPool,n=t.getID(),r=i.length-1;r>=0;r--)if(i[r].getID()===n){i[r]=t,e=!0;break}e||(i.length<=50?i.push(t):(i[this._preTouchPoolPointer]=t,this._preTouchPoolPointer=(this._preTouchPoolPointer+1)%50))},getTouchByXY:function(t,e,i){var n=this._preTouchPoint,r=this._glView.convertToLocationInView(t,e,i),s=new cc.Touch(r.x,r.y);return s._setPrevPoint(n.x,n.y),n.x=r.x,n.y=r.y,s},getMouseEvent:function(t,e,i){var n=this._prevMousePoint,r=new cc.Event.EventMouse(i);return r._setPrevCursor(n.x,n.y),n.x=t.x,n.y=t.y,this._glView._convertMouseToLocationInView(n,e),r.setLocation(n.x,n.y),r},getPointByEvent:function(t,e){return null!=t.pageX?{x:t.pageX,y:t.pageY}:(s.platform===s.WECHAT_GAME?(e.left=0,e.top=0):(e.left-=document.body.scrollLeft,e.top-=document.body.scrollTop),{x:t.clientX,y:t.clientY})},getTouchesByEvent:function(t,e){for(var i,n,r,o=[],a=this._glView,c=this._preTouchPoint,h=t.changedTouches.length,l=0;lthis._accelInterval&&(this._accelCurTime-=this._accelInterval,o.dispatchEvent(new cc.Event.EventAcceleration(this._acceleration))),this._accelCurTime+=t}};n.get(cc,"inputManager",(function(){return cc.warnID(1405,"cc.inputManager","cc.systemEvent"),c})),e.exports=_cc.inputManager=c}),{"../event-manager":112,"../platform/js":199,"./CCMacro":183,"./CCSys":187}],183:[(function(t,e,i){t("./_CCClass"),cc.KEY={none:0,back:6,menu:18,backspace:8,tab:9,enter:13,shift:16,ctrl:17,alt:18,pause:19,capslock:20,escape:27,space:32,pageup:33,pagedown:34,end:35,home:36,left:37,up:38,right:39,down:40,select:41,insert:45,Delete:46,0:48,1:49,2:50,3:51,4:52,5:53,6:54,7:55,8:56,9:57,a:65,b:66,c:67,d:68,e:69,f:70,g:71,h:72,i:73,j:74,k:75,l:76,m:77,n:78,o:79,p:80,q:81,r:82,s:83,t:84,u:85,v:86,w:87,x:88,y:89,z:90,num0:96,num1:97,num2:98,num3:99,num4:100,num5:101,num6:102,num7:103,num8:104,num9:105,"*":106,"+":107,"-":109,numdel:110,"/":111,f1:112,f2:113,f3:114,f4:115,f5:116,f6:117,f7:118,f8:119,f9:120,f10:121,f11:122,f12:123,numlock:144,scrolllock:145,";":186,semicolon:186,equal:187,"=":187,",":188,comma:188,dash:189,".":190,period:190,forwardslash:191,grave:192,"[":219,openbracket:219,backslash:220,"]":221,closebracket:221,quote:222,dpadLeft:1e3,dpadRight:1001,dpadUp:1003,dpadDown:1004,dpadCenter:1005},cc.ImageFormat=cc.Enum({JPG:0,PNG:1,TIFF:2,WEBP:3,PVR:4,ETC:5,S3TC:6,ATITC:7,TGA:8,RAWDATA:9,UNKNOWN:10}),cc.getImageFormatByData=function(t){return 
t.length>8&&137===t[0]&&80===t[1]&&78===t[2]&&71===t[3]&&13===t[4]&&10===t[5]&&26===t[6]&&10===t[7]?cc.ImageFormat.PNG:t.length>2&&(73===t[0]&&73===t[1]||77===t[0]&&77===t[1]||255===t[0]&&216===t[1])?cc.ImageFormat.TIFF:cc.ImageFormat.UNKNOWN},cc.macro={INVALID_INDEX:-1,NODE_TAG_INVALID:-1,PI:Math.PI,PI2:2*Math.PI,FLT_MAX:parseFloat("3.402823466e+38F"),FLT_MIN:parseFloat("1.175494351e-38F"),RAD:Math.PI/180,DEG:180/Math.PI,UINT_MAX:4294967295,REPEAT_FOREVER:Number.MAX_VALUE-1,FLT_EPSILON:1.192092896e-7,ONE:1,ZERO:0,SRC_ALPHA:770,SRC_ALPHA_SATURATE:776,SRC_COLOR:768,DST_ALPHA:772,DST_COLOR:774,ONE_MINUS_SRC_ALPHA:771,ONE_MINUS_SRC_COLOR:769,ONE_MINUS_DST_ALPHA:773,ONE_MINUS_DST_COLOR:775,ONE_MINUS_CONSTANT_ALPHA:32772,ONE_MINUS_CONSTANT_COLOR:32770,LINEAR:9729,BLEND_DST:771,WEB_ORIENTATION_PORTRAIT:0,WEB_ORIENTATION_LANDSCAPE_LEFT:-90,WEB_ORIENTATION_PORTRAIT_UPSIDE_DOWN:180,WEB_ORIENTATION_LANDSCAPE_RIGHT:90,ORIENTATION_PORTRAIT:1,ORIENTATION_LANDSCAPE:2,ORIENTATION_AUTO:3,DENSITYDPI_DEVICE:"device-dpi",DENSITYDPI_HIGH:"high-dpi",DENSITYDPI_MEDIUM:"medium-dpi",DENSITYDPI_LOW:"low-dpi",VERTEX_ATTRIB_FLAG_NONE:0,VERTEX_ATTRIB_FLAG_POSITION:1,VERTEX_ATTRIB_FLAG_COLOR:2,VERTEX_ATTRIB_FLAG_TEX_COORDS:4,VERTEX_ATTRIB_FLAG_POS_COLOR_TEX:7,GL_ALL:0,VERTEX_ATTRIB_POSITION:0,VERTEX_ATTRIB_COLOR:1,VERTEX_ATTRIB_TEX_COORDS:2,VERTEX_ATTRIB_MAX:3,UNIFORM_PMATRIX:0,UNIFORM_MVMATRIX:1,UNIFORM_MVPMATRIX:2,UNIFORM_TIME:3,UNIFORM_SINTIME:4,UNIFORM_COSTIME:5,UNIFORM_RANDOM01:6,UNIFORM_SAMPLER:7,UNIFORM_MAX:8,SHADER_POSITION_TEXTURECOLOR:"ShaderPositionTextureColor",SHADER_SPRITE_POSITION_TEXTURECOLOR:"ShaderSpritePositionTextureColor",SHADER_POSITION_TEXTURECOLORALPHATEST:"ShaderPositionTextureColorAlphaTest",SHADER_SPRITE_POSITION_TEXTURECOLORALPHATEST:"ShaderSpritePositionTextureColorAlphaTest",SHADER_POSITION_COLOR:"ShaderPositionColor",SHADER_SPRITE_POSITION_COLOR:"ShaderSpritePositionColor",SHADER_POSITION_TEXTURE:"ShaderPositionTexture",SHADER_POSITION_TEXTURE_UCOLOR:"ShaderPositionTexture_uColor",SHADER_POSITION_TEXTUREA8COLOR:"ShaderPositionTextureA8Color",SHADER_POSITION_UCOLOR:"ShaderPosition_uColor",SHADER_POSITION_LENGTHTEXTURECOLOR:"ShaderPositionLengthTextureColor",UNIFORM_PMATRIX_S:"CC_PMatrix",UNIFORM_MVMATRIX_S:"CC_MVMatrix",UNIFORM_MVPMATRIX_S:"CC_MVPMatrix",UNIFORM_TIME_S:"CC_Time",UNIFORM_SINTIME_S:"CC_SinTime",UNIFORM_COSTIME_S:"CC_CosTime",UNIFORM_RANDOM01_S:"CC_Random01",UNIFORM_SAMPLER_S:"CC_Texture0",UNIFORM_ALPHA_TEST_VALUE_S:"CC_alpha_value",ATTRIBUTE_NAME_COLOR:"a_color",ATTRIBUTE_NAME_POSITION:"a_position",ATTRIBUTE_NAME_TEX_COORD:"a_texCoord",ITEM_SIZE:32,CURRENT_ITEM:3233828865,ZOOM_ACTION_TAG:3233828866,NORMAL_TAG:8801,SELECTED_TAG:8802,DISABLE_TAG:8803,FIX_ARTIFACTS_BY_STRECHING_TEXEL:0,FIX_ARTIFACTS_BY_STRECHING_TEXEL_TMX:1,DIRECTOR_STATS_POSITION:cc.p(0,0),DIRECTOR_FPS_INTERVAL:.5,COCOSNODE_RENDER_SUBPIXEL:1,SPRITEBATCHNODE_RENDER_SUBPIXEL:1,AUTO_PREMULTIPLIED_ALPHA_FOR_PNG:0,OPTIMIZE_BLEND_FUNC_FOR_PREMULTIPLIED_ALPHA:0,TEXTURE_NPOT_SUPPORT:0,USE_LA88_LABELS:1,SPRITE_DEBUG_DRAW:0,LABELBMFONT_DEBUG_DRAW:0,LABELATLAS_DEBUG_DRAW:0,ENABLE_STACKABLE_ACTIONS:1,ENABLE_GL_STATE_CACHE:1,TOUCH_TIMEOUT:5e3,BATCH_VERTEX_COUNT:2e4,ENABLE_GC_FOR_NATIVE_OBJECTS:!0,ENABLE_TILEDMAP_CULLING:!0,DOWNLOAD_MAX_CONCURRENT:64,ENABLE_TRANSPARENT_CANVAS:!1,ENABLE_WEBGL_ANTIALIAS:!1};var n=!0;cc.defineGetterSetter(cc.macro,"ENABLE_CULLING",(function(){return n}),(function(t){n=t;var 
e=cc.director.getScene();e&&(e._sgNode._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.cullingDirty),cc.renderer.childrenOrderDirty=!0)})),cc.defineGetterSetter(cc.macro,"BLEND_SRC",(function(){return cc._renderType===cc.game.RENDER_TYPE_WEBGL&&cc.macro.OPTIMIZE_BLEND_FUNC_FOR_PREMULTIPLIED_ALPHA?cc.macro.ONE:cc.macro.SRC_ALPHA})),cc.lerp=function(t,e,i){return t+(e-t)*i},cc.rand=function(){return 16777215*Math.random()},cc.randomMinus1To1=function(){return 2*(Math.random()-.5)},cc.random0To1=Math.random,cc.degreesToRadians=function(t){return t*cc.macro.RAD},cc.radiansToDegrees=function(t){return t*cc.macro.DEG},cc.nodeDrawSetup=function(t){t._shaderProgram&&(t._shaderProgram.use(),t._shaderProgram.setUniformForModelViewAndProjectionMatrixWithMat4())},cc.incrementGLDraws=function(t){cc.g_NumberOfDraws+=t},cc.checkGLErrorDebug=function(){if(cc._renderType===cc.game.RENDER_TYPE_WEBGL){var t=cc._renderContext.getError();t&&cc.logID(2400,t)}},e.exports=cc.macro}),{"./_CCClass":190}],184:[(function(t,e,i){var n=t("./js"),r=t("./CCClass"),s=1;function o(){this._name="",this._objFlags=0}r.fastDefine("cc.Object",o,{_name:"",_objFlags:0}),n.value(o,"Flags",{Destroyed:s,DontSave:8,EditorOnly:16,Dirty:32,DontDestroy:64,PersistentMask:-4192741,Destroying:128,Deactivating:256,LockedInEditor:512,IsPreloadStarted:8192,IsOnLoadStarted:32768,IsOnLoadCalled:16384,IsOnEnableCalled:2048,IsStartCalled:65536,IsEditorOnEnableCalled:4096,IsPositionLocked:1<<21,IsRotationLocked:1<<17,IsScaleLocked:1<<18,IsAnchorLocked:1<<19,IsSizeLocked:1<<20});var a=[];function c(){for(var t=a.length,e=0;e=6.2;break;case n.BROWSER_TYPE_ANDROID:n.osMainVersion&&n.osMainVersion>=5&&(E=!0);break;case n.BROWSER_TYPE_CHROME:E=w>=30;break;case n.BROWSER_TYPE_UC:E=w>11;break;case n.BROWSER_TYPE_360:E=!1}}var I,R=n.capabilities={canvas:S,opengl:E,webp:b};(void 0!==a.ontouchstart||void 0!==o.ontouchstart||s.msPointerEnabled)&&(R.touches=!0),void 0!==a.onmouseup&&(R.mouse=!0),void 0!==a.onkeyup&&(R.keyboard=!0),(r.DeviceMotionEvent||r.DeviceOrientationEvent)&&(R.accelerometer=!0),(function(){n.browserVersion;var t=!!(window.AudioContext||window.webkitAudioContext||window.mozAudioContext);I={ONLY_ONE:!1,WEB_AUDIO:t,DELAY_CREATE_CTX:!1},n.os===n.OS_IOS&&(I.USE_LOADER_EVENT="loadedmetadata"),n.browserType===n.BROWSER_TYPE_FIREFOX&&(I.DELAY_CREATE_CTX=!0,I.USE_LOADER_EVENT="canplay"),n.os===n.OS_ANDROID&&n.browserType===n.BROWSER_TYPE_UC&&(I.ONE_SOURCE=!0)})();try{I.WEB_AUDIO&&(I.context=new(window.AudioContext||window.webkitAudioContext||window.mozAudioContext),I.DELAY_CREATE_CTX&&setTimeout((function(){I.context=new(window.AudioContext||window.webkitAudioContext||window.mozAudioContext)}),0))}catch(t){I.WEB_AUDIO=!1,cc.logID(5201)}I.format=(function(){var t=[],e=document.createElement("audio");return e.canPlayType&&(e.canPlayType('audio/ogg; codecs="vorbis"')&&t.push(".ogg"),e.canPlayType("audio/mpeg")&&t.push(".mp3"),e.canPlayType('audio/wav; codecs="1"')&&t.push(".wav"),e.canPlayType("audio/mp4")&&t.push(".mp4"),e.canPlayType("audio/x-m4a")&&t.push(".m4a")),t})(),n.__audioSupport=I;n.garbageCollect=function(){},n.dumpRoot=function(){},n.restartVM=function(){},n.cleanScript=function(t){},n.isObjectValid=function(t){return!!t},n.dump=function(){var t="";t+="isMobile : "+this.isMobile+"\r\n",t+="language : "+this.language+"\r\n",t+="browserType : "+this.browserType+"\r\n",t+="browserVersion : "+this.browserVersion+"\r\n",t+="capabilities : "+JSON.stringify(this.capabilities)+"\r\n",t+="os : "+this.os+"\r\n",t+="osVersion : 
"+this.osVersion+"\r\n",t+="platform : "+this.platform+"\r\n",t+="Using "+(cc._renderType===cc.game.RENDER_TYPE_WEBGL?"WEBGL":"CANVAS")+" renderer.\r\n",cc.log(t)},n.openURL=function(t){window.open(t)},n.now=function(){return Date.now?Date.now():+new Date},e.exports=n}}),{}],188:[(function(t,e,i){var n=t("../event-manager"),r={init:function(){this.html=document.getElementsByTagName("html")[0]},availWidth:function(t){return t&&t!==this.html?t.clientWidth:window.innerWidth},availHeight:function(t){return t&&t!==this.html?t.clientHeight:window.innerHeight},meta:{width:"device-width"},adaptationType:cc.sys.browserType};switch(cc.sys.os===cc.sys.OS_IOS&&(r.adaptationType=cc.sys.BROWSER_TYPE_SAFARI),r.adaptationType){case cc.sys.BROWSER_TYPE_SAFARI:r.meta["minimal-ui"]="true",r.availWidth=function(t){return t.clientWidth},r.availHeight=function(t){return t.clientHeight};break;case cc.sys.BROWSER_TYPE_CHROME:r.__defineGetter__("target-densitydpi",(function(){return cc.view._targetDensityDPI}));break;case cc.sys.BROWSER_TYPE_SOUGOU:case cc.sys.BROWSER_TYPE_UC:r.availWidth=function(t){return t.clientWidth},r.availHeight=function(t){return t.clientHeight};break;case cc.sys.BROWSER_TYPE_MIUI:r.init=function(t){if(!t.__resizeWithBrowserSize){var e=function(){t.setDesignResolutionSize(t._designResolutionSize.width,t._designResolutionSize.height,t._resolutionPolicy),window.removeEventListener("resize",e,!1)};window.addEventListener("resize",e,!1)}};break;case cc.sys.BROWSER_TYPE_WECHAT_GAME:r.availWidth=function(){return window.innerWidth},r.availHeight=function(){return window.innerHeight};break;case cc.sys.BROWSER_TYPE_WECHAT_GAME_SUB:var s=wx.getSharedCanvas();r.availWidth=function(){return s.width},r.availHeight=function(){return s.height}}var o=null,a=cc._Class.extend({ctor:function(){var t=this,e=cc.ContainerStrategy,i=cc.ContentStrategy;r.init(this),t._frameSize=cc.size(0,0),t._initFrameSize();var n=cc.game.canvas.width,s=cc.game.canvas.height;t._designResolutionSize=cc.size(n,s),t._originalDesignResolutionSize=cc.size(n,s),t._viewPortRect=cc.rect(0,0,n,s),t._visibleRect=cc.rect(0,0,n,s),t._contentTranslateLeftTop={left:0,top:0},t._autoFullScreen=!1,t._devicePixelRatio=1,t._viewName="Cocos2dHTML5",t._resizeCallback=null,t._orientationChanging=!0,t._resizing=!1,t._scaleX=1,t._originalScaleX=1,t._scaleY=1,t._originalScaleY=1,t._isRotated=!1,t._orientation=3;cc.sys;t.enableRetina(!0),cc.visibleRect&&cc.visibleRect.init(t._visibleRect),t._resolutionPolicy=null,t._rpExactFit=new cc.ResolutionPolicy(e.EQUAL_TO_FRAME,i.EXACT_FIT),t._rpShowAll=new cc.ResolutionPolicy(e.PROPORTION_TO_FRAME,i.SHOW_ALL),t._rpNoBorder=new cc.ResolutionPolicy(e.EQUAL_TO_FRAME,i.NO_BORDER),t._rpFixedHeight=new cc.ResolutionPolicy(e.EQUAL_TO_FRAME,i.FIXED_HEIGHT),t._rpFixedWidth=new cc.ResolutionPolicy(e.EQUAL_TO_FRAME,i.FIXED_WIDTH),t._initialized=!1,t._contentTranslateLeftTop=null,t._frameZoomFactor=1,t.__resizeWithBrowserSize=!1,t._isAdjustViewPort=!0,t._targetDensityDPI=cc.macro.DENSITYDPI_HIGH,t.enableAntiAlias(!0)},_resizeEvent:function(){var t,e=(t=this.setDesignResolutionSize?this:cc.view)._frameSize.width,i=t._frameSize.height,r=t._isRotated;if(cc.sys.isMobile){var s=cc.game.container.style,o=s.margin;s.margin="0",s.display="none",t._initFrameSize(),s.margin=o,s.display="block"}else t._initFrameSize();if(t._isRotated!==r||t._frameSize.width!==e||t._frameSize.height!==i){var 
a=t._originalDesignResolutionSize.width,c=t._originalDesignResolutionSize.height;t._resizing=!0,a>0&&t.setDesignResolutionSize(a,c,t._resolutionPolicy),t._resizing=!1,n.dispatchCustomEvent("canvas-resize"),t._resizeCallback&&t._resizeCallback.call()}},_orientationChange:function(){cc.view._orientationChanging=!0,cc.view._resizeEvent()},setTargetDensityDPI:function(t){this._targetDensityDPI=t,this._adjustViewportMeta()},getTargetDensityDPI:function(){return this._targetDensityDPI},resizeWithBrowserSize:function(t){t?this.__resizeWithBrowserSize||(this.__resizeWithBrowserSize=!0,window.addEventListener("resize",this._resizeEvent),window.addEventListener("orientationchange",this._orientationChange)):this.__resizeWithBrowserSize&&(this.__resizeWithBrowserSize=!1,window.removeEventListener("resize",this._resizeEvent),window.removeEventListener("orientationchange",this._orientationChange))},setResizeCallback:function(t){"function"!=typeof t&&null!=t||(this._resizeCallback=t)},setOrientation:function(t){if((t&=cc.macro.ORIENTATION_AUTO)&&this._orientation!==t){this._orientation=t;var e=this._originalDesignResolutionSize.width,i=this._originalDesignResolutionSize.height;this.setDesignResolutionSize(e,i,this._resolutionPolicy)}},_initFrameSize:function(){var t=this._frameSize,e=r.availWidth(cc.game.frame),i=r.availHeight(cc.game.frame),n=e>=i;!cc.sys.isMobile||n&&this._orientation&cc.macro.ORIENTATION_LANDSCAPE||!n&&this._orientation&cc.macro.ORIENTATION_PORTRAIT?(t.width=e,t.height=i,cc.container.style["-webkit-transform"]="rotate(0deg)",cc.container.style.transform="rotate(0deg)",this._isRotated=!1):(t.width=i,t.height=e,cc.container.style["-webkit-transform"]="rotate(90deg)",cc.container.style.transform="rotate(90deg)",cc.container.style["-webkit-transform-origin"]="0px 0px 0px",cc.container.style.transformOrigin="0px 0px 0px",this._isRotated=!0),this._orientationChanging&&setTimeout((function(){cc.view._orientationChanging=!1}),1e3)},_adjustSizeKeepCanvasSize:function(){var t=this._originalDesignResolutionSize.width,e=this._originalDesignResolutionSize.height;t>0&&this.setDesignResolutionSize(t,e,this._resolutionPolicy)},_setViewportMeta:function(t,e){var i=document.getElementById("cocosMetaElement");i&&e&&document.head.removeChild(i);var n,r,s,o=document.getElementsByName("viewport"),a=o?o[0]:null;for(r in n=a?a.content:"",(i=i||document.createElement("meta")).id="cocosMetaElement",i.name="viewport",i.content="",t)-1==n.indexOf(r)?n+=","+r+"="+t[r]:e&&(s=new RegExp(r+"s*=s*[^,]+"),n.replace(s,r+"="+t[r]));/^,/.test(n)&&(n=n.substr(1)),i.content=n,a&&(a.content=n),document.head.appendChild(i)},_adjustViewportMeta:function(){this._isAdjustViewPort&&(this._setViewportMeta(r.meta,!1),this._isAdjustViewPort=!1)},_resetScale:function(){this._scaleX=this._originalScaleX,this._scaleY=this._originalScaleY},_adjustSizeToBrowser:function(){},initialize:function(){this._initialized=!0},adjustViewPort:function(t){this._isAdjustViewPort=t},enableRetina:function(t){this._retinaEnabled=!!t},isRetinaEnabled:function(){return this._retinaEnabled},enableAntiAlias:function(t){if(this._antiAliasEnabled!==t)if(this._antiAliasEnabled=t,cc._renderType===cc.game.RENDER_TYPE_WEBGL){var e=cc.loader._cache;for(var i in e){var n=e[i],r=n&&n.content instanceof cc.Texture2D?n.content:null;r&&(t?r.setAntiAliasTexParameters():r.setAliasTexParameters())}}else if(cc._renderType===cc.game.RENDER_TYPE_CANVAS){var s=cc._canvas.getContext("2d");s.imageSmoothingEnabled=t,s.mozImageSmoothingEnabled=t;var 
o=cc.rendererCanvas._dirtyRegion;if(o){var a=new cc.Region;a.setTo(0,0,cc.visibleRect.width,cc.visibleRect.height),o.addRegion(a)}}},isAntiAliasEnabled:function(){return this._antiAliasEnabled},enableAutoFullScreen:function(t){t&&t!==this._autoFullScreen&&cc.sys.isMobile&&cc.sys.browserType!==cc.sys.BROWSER_TYPE_WECHAT?(this._autoFullScreen=!0,cc.screen.autoFullScreen(cc.game.frame)):this._autoFullScreen=!1},isAutoFullScreenEnabled:function(){return this._autoFullScreen},isViewReady:function(){return cc.game.canvas&&cc._renderContext},setFrameZoomFactor:function(t){this._frameZoomFactor=t,cc.director.setProjection(cc.director.getProjection())},setContentTranslateLeftTop:function(t,e){this._contentTranslateLeftTop={left:t,top:e}},getContentTranslateLeftTop:function(){return this._contentTranslateLeftTop},setCanvasSize:function(t,e){var i=cc.game.canvas,n=cc.game.container;i.width=t*this._devicePixelRatio,i.height=e*this._devicePixelRatio,i.style.width=t+"px",i.style.height=e+"px",n.style.width=t+"px",n.style.height=e+"px",this._resizeEvent()},getCanvasSize:function(){return cc.size(cc.game.canvas.width,cc.game.canvas.height)},getFrameSize:function(){return cc.size(this._frameSize.width,this._frameSize.height)},setFrameSize:function(t,e){this._frameSize.width=t,this._frameSize.height=e,cc.game.frame.style.width=t+"px",cc.game.frame.style.height=e+"px",this._resizeEvent(),cc.director.setProjection(cc.director.getProjection())},getVisibleSize:function(){return cc.size(this._visibleRect.width,this._visibleRect.height)},getVisibleSizeInPixel:function(){return cc.size(this._visibleRect.width*this._scaleX,this._visibleRect.height*this._scaleY)},getVisibleOrigin:function(){return cc.p(this._visibleRect.x,this._visibleRect.y)},getVisibleOriginInPixel:function(){return cc.p(this._visibleRect.x*this._scaleX,this._visibleRect.y*this._scaleY)},canSetContentScaleFactor:function(){return!0},getResolutionPolicy:function(){return this._resolutionPolicy},setResolutionPolicy:function(t){var e=this;if(t instanceof cc.ResolutionPolicy)e._resolutionPolicy=t;else{var i=cc.ResolutionPolicy;t===i.EXACT_FIT&&(e._resolutionPolicy=e._rpExactFit),t===i.SHOW_ALL&&(e._resolutionPolicy=e._rpShowAll),t===i.NO_BORDER&&(e._resolutionPolicy=e._rpNoBorder),t===i.FIXED_HEIGHT&&(e._resolutionPolicy=e._rpFixedHeight),t===i.FIXED_WIDTH&&(e._resolutionPolicy=e._rpFixedWidth)}},setDesignResolutionSize:function(t,e,i){if(t>0||e>0){this.setResolutionPolicy(i);var n=this._resolutionPolicy;if(n&&n.preApply(this),cc.sys.isMobile&&this._adjustViewportMeta(),this._orientationChanging=!0,this._resizing||this._initFrameSize(),n){this._originalDesignResolutionSize.width=this._designResolutionSize.width=t,this._originalDesignResolutionSize.height=this._designResolutionSize.height=e;var r=n.apply(this,this._designResolutionSize);if(r.scale&&2===r.scale.length&&(this._scaleX=r.scale[0],this._scaleY=r.scale[1]),r.viewport){var s=this._viewPortRect,o=this._visibleRect,a=r.viewport;s.x=a.x,s.y=a.y,s.width=a.width,s.height=a.height,o.x=-s.x/this._scaleX,o.y=-s.y/this._scaleY,o.width=cc.game.canvas.width/this._scaleX,o.height=cc.game.canvas.height/this._scaleY,cc._renderContext.setOffset&&cc._renderContext.setOffset(s.x,-s.y)}var 
c=cc.director;c._winSizeInPoints.width=this._designResolutionSize.width,c._winSizeInPoints.height=this._designResolutionSize.height,n.postApply(this),cc.winSize.width=c._winSizeInPoints.width,cc.winSize.height=c._winSizeInPoints.height,cc._renderType===cc.game.RENDER_TYPE_WEBGL?c.setGLDefaultValues():cc._renderType===cc.game.RENDER_TYPE_CANVAS&&(cc.renderer._allNeedDraw=!0),this._originalScaleX=this._scaleX,this._originalScaleY=this._scaleY,cc.visibleRect&&cc.visibleRect.init(this._visibleRect)}else cc.logID(2201)}else cc.logID(2200)},getDesignResolutionSize:function(){return cc.size(this._designResolutionSize.width,this._designResolutionSize.height)},setRealPixelResolution:function(t,e,i){this._setViewportMeta({width:t},!0),document.documentElement.style.width=t+"px",document.body.style.width=t+"px",document.body.style.left="0px",document.body.style.top="0px",this.setDesignResolutionSize(t,e,i)},setViewPortInPoints:function(t,e,i,n){var r=this._frameZoomFactor,s=this._scaleX,o=this._scaleY;cc._renderContext.viewport(t*s*r+this._viewPortRect.x*r,e*o*r+this._viewPortRect.y*r,i*s*r,n*o*r)},setScissorInPoints:function(t,e,i,n){var r=this._frameZoomFactor,s=this._scaleX,a=this._scaleY,c=Math.ceil(t*s*r+this._viewPortRect.x*r),h=Math.ceil(e*a*r+this._viewPortRect.y*r),l=Math.ceil(i*s*r),u=Math.ceil(n*a*r);if(!o){var _=gl.getParameter(gl.SCISSOR_BOX);o=cc.rect(_[0],_[1],_[2],_[3])}o.x===c&&o.y===h&&o.width===l&&o.height===u||(o.x=c,o.y=h,o.width=l,o.height=u,cc._renderContext.scissor(c,h,l,u))},isScissorEnabled:function(){return cc._renderContext.isEnabled(gl.SCISSOR_TEST)},getScissorRect:function(){if(!o){var t=gl.getParameter(gl.SCISSOR_BOX);o=cc.rect(t[0],t[1],t[2],t[3])}var e=1/this._scaleX,i=1/this._scaleY;return cc.rect((o.x-this._viewPortRect.x)*e,(o.y-this._viewPortRect.y)*i,o.width*e,o.height*i)},setViewName:function(t){null!=t&&t.length>0&&(this._viewName=t)},getViewName:function(){return this._viewName},getViewPortRect:function(){return this._viewPortRect},getScaleX:function(){return this._scaleX},getScaleY:function(){return this._scaleY},getDevicePixelRatio:function(){return this._devicePixelRatio},convertToLocationInView:function(t,e,i){var n=this._devicePixelRatio*(t-i.left),r=this._devicePixelRatio*(i.top+i.height-e);return this._isRotated?{x:this._viewPortRect.width-r,y:n}:{x:n,y:r}},_convertMouseToLocationInView:function(t,e){var i=this._viewPortRect;t.x=(this._devicePixelRatio*(t.x-e.left)-i.x)/this._scaleX,t.y=(this._devicePixelRatio*(e.top+e.height-t.y)-i.y)/this._scaleY},_convertPointWithScale:function(t){var e=this._viewPortRect;t.x=(t.x-e.x)/this._scaleX,t.y=(t.y-e.y)/this._scaleY},_convertTouchesWithScale:function(t){for(var e,i,n,r=this._viewPortRect,s=this._scaleX,o=this._scaleY,a=0;a=0;n--){var r=i[n];r.hasOwnProperty("__attrs__")&&r.__attrs__||s(r,0,(e=i[n+1])&&e.__attrs__)}return s(t,0,(e=i[0])&&e.__attrs__),t.__attrs__})(t)}function c(t){return a(t).constructor.prototype}function h(t,e){0}cc.Integer="Integer",cc.Float="Float",cc.Boolean="Boolean",cc.String="String",e.exports={attr:o,getClassAttrs:a,getClassAttrsProto:c,setClassAttr:function(t,e,i,n){c(t)[e+r+i]=n},DELIMETER:r,getTypeChecker:h,ObjectType:function(t){return{type:"Object",ctor:t,_onAfterProp:!1}},ScriptUuid:{}}}),{"./CCClass":178,"./js":199,"./utils":203}],192:[(function(t,e,i){var n=t("./js"),r=n.array.fastRemoveAt;function s(){this.callbacks=[],this.targets=[],this.isInvoking=!1,this.containCanceled=!1}var o=s.prototype;o.removeBy=function(t,e){for(var 
i=this.callbacks,n=this.targets,s=0;s0?this.deserializedList[0]:[]}else this.deserializedList.length=1,this.deserializedData=t?this._deserializeObject(t,!1):null,this.deserializedList[0]=this.deserializedData;return (function(t){var e,i,n,r=t.deserializedList,s=t._idPropList,o=t._idList,a=t._idObjList;for(t._classFinder&&t._classFinder.onDereferenced,e=0;e0&&(i=f+this.globalVariables.join(",")+";");var n=h.flattenCodeArray(["return (function(R){",i||[],this.codeArray,"return o;","})"]);this.result=Function("O","F",n)(this.objs,this.funcs);for(var r=0,s=this.objsToClear_iN$t.length;r1)t.push(p+"="+this._targetExp+";"),e=p;else{if(1!==this._exps.length)return;e=this._targetExp}for(var i=0;i=0&&(d(t,i),!0)}o.formatStr=function(){var t=arguments.length;if(0===t)return"";var e=arguments[0];if(1===t)return""+e;if("string"==typeof e&&u.test(e))for(var i=1;i=0&&(t[i]=t[t.length-1],--t.length)},removeAt:d,fastRemoveAt:function(t,e){var i=t.length;e<0||e>=i||(t[e]=t[i-1],t.length=i-1)},contains:function(t,e){return t.indexOf(e)>=0},verifyType:function(t,e){if(t&&t.length>0)for(var i=0;i0){--this.count;var t=this._pool[this.count];return this._pool[this.count]=null,t}return null},p.prototype.put=function(t){var e=this._pool;if(this.count=0&&(this._pool.length=t,this.count>t&&(this.count=t))},o.Pool=p,cc.js=o,e.exports=o}),{"../utils/mutable-forward-iterator":229,"./id-generater":195}],200:[(function(t,e,i){var n={url:{canUsedInGet:!0},default:{},serializable:{},editorOnly:{},formerlySerializedAs:{}};function r(t,e,i,r){if(!t.get&&!t.set)if(t.hasOwnProperty("default")){var s="_N$"+e;t.get=function(){return this[s]},t.set=function(t){var e=this[s];this[s]=t,i.call(this,e)};var o={};for(var a in r[s]=o,n){var c=n[a];t.hasOwnProperty(a)&&(o[a]=t[a],c.canUsedInGet||delete t[a])}}else 0}function s(t,e,i,n){Array.isArray(n)&&n.length>0&&(n=n[0]),t.type=n}function o(t,e,i,n){if(Array.isArray(e)){if(!(e.length>0))return cc.errorID(5508,i,n);if(cc.RawAsset.isRawAssetType(e[0]))return t.url=e[0],void delete t.type;t.type=e=e[0]}}i.getFullFormOfProperty=function(t,e,i){if(!(t&&t.constructor===Object)){if(Array.isArray(t)&&t.length>0){var n=t[0];return{default:[],type:t,_short:!0}}if("function"==typeof t){n=t;return cc.RawAsset.isRawAssetType(n)||cc.RawAsset.wasRawAssetType(n)?{default:"",url:n,_short:!0}:{default:cc.isChildClassOf(n,cc.ValueType)?new n:null,type:n,_short:!0}}return{default:t,_short:!0}}return null},i.preprocessAttrs=function(t,e,n,a){for(var c in t){var h=t[c],l=i.getFullFormOfProperty(h,c,e);if(l&&(h=t[c]=l),h){var u=h.notify;u&&r(h,c,u,t),"type"in h&&o(h,h.type,e,c),"url"in h&&s(h,0,0,h.url),"type"in h&&h.type}}},i.validateMethodWithProps=function(t,e,i,n,r){return"function"==typeof t||null===t}}),{"./CCClass":178}],201:[(function(t,e,i){var n=[];cc._RF={push:function(t,e,i){void 0===i&&(i=e,e=""),n.push({uuid:e,script:i,module:t,exports:t.exports,beh:null})},pop:function(){var t=n.pop(),e=t.module,i=e.exports;if(i===t.exports){for(var r in i)return;e.exports=i=t.cls}},peek:function(){return n[n.length-1]}}}),{}],202:[(function(t,e,i){cc.url={_rawAssets:"",normalize:function(t){return t&&(46===t.charCodeAt(0)&&47===t.charCodeAt(1)?t=t.slice(2):47===t.charCodeAt(0)&&(t=t.slice(1))),t},raw:function(t){if((t=this.normalize(t)).startsWith("resources/")){var e=cc.loader._getResUuid(t.slice(10),cc.Asset,!0);if(e)return cc.AssetLibrary.getLibUrlNoExt(e,!0)+cc.path.extname(t)}else cc.errorID(7002,t);return 
this._rawAssets+t},_init:function(t){this._rawAssets=cc.path.stripSep(t)+"/"}},e.exports=cc.url}),{}],203:[(function(t,e,i){e.exports={contains:function(t,e){if("function"==typeof t.contains)return t.contains(e);if("function"==typeof t.compareDocumentPosition)return!!(16&t.compareDocumentPosition(e));var i=e.parentNode;if(i)do{if(i===t)return!0;i=i.parentNode}while(null!==i);return!1},isDomNode:"object"==typeof window&&("function"==typeof Node?function(t){return t instanceof Node}:function(t){return t&&"object"==typeof t&&"number"==typeof t.nodeType&&"string"==typeof t.nodeName}),callInNextTick:function(t,e,i){t&&setTimeout((function(){t(e,i)}),0)}}}),{}],204:[(function(t,e,i){t("./platform/js"),t("./value-types"),t("./utils"),t("./platform/CCInputManager"),t("./platform/CCInputExtension"),t("./event"),t("./platform/CCSys"),t("./platform/CCMacro"),t("./load-pipeline"),t("./textures"),t("./CCDirector"),t("./CCDirectorWebGL"),t("./CCDirectorCanvas"),t("./platform/CCView"),t("./platform/CCScreen"),t("./CCScheduler"),t("./event-manager"),t("./renderer")}),{"./CCDirector":33,"./CCDirectorCanvas":34,"./CCDirectorWebGL":35,"./CCScheduler":42,"./event":116,"./event-manager":112,"./load-pipeline":138,"./platform/CCInputExtension":181,"./platform/CCInputManager":182,"./platform/CCMacro":183,"./platform/CCScreen":186,"./platform/CCSys":187,"./platform/CCView":188,"./platform/js":199,"./renderer":208,"./textures":220,"./utils":227,"./value-types":241}],205:[(function(t,e,i){var n=function(){this._minX=0,this._minY=0,this._maxX=0,this._maxY=0,this._width=0,this._height=0,this._area=0},r=n.prototype,s=[];function o(){var t=s.pop();return t||(t=new n),t}function a(t){s.push(t)}function c(t,e){var i=t._minXe._maxX?t._maxX:e._maxX)-i)*((t._maxY>e._maxY?t._maxY:e._maxY)-n)}r.setTo=function(t,e,i,n){return this._minX=t,this._minY=e,this._maxX=i,this._maxY=n,this.updateArea(),this},r.intValues=function(){this._minX=Math.floor(this._minX),this._minY=Math.floor(this._minY),this._maxX=Math.ceil(this._maxX),this._maxY=Math.ceil(this._maxY),this.updateArea()},r.updateArea=function(){this._width=this._maxX-this._minX,this._height=this._maxY-this._minY,this._area=this._width*this._height},r.union=function(t){this.isEmpty()?this.setTo(t._minX,t._minY,t._maxX,t._maxY):(this._minX>t._minX&&(this._minX=t._minX),this._minY>t._minY&&(this._minY=t._minY),this._maxXt._minX?this._minX:t._minX,i=this._maxXi)&&(e=this._minY>t._minY?this._minY:t._minY)<=(i=this._maxYv&&(S=g,g=v,v=S),C>A&&(S=C,C=A,A=S),i=(gA?v:A)+1,y>x&&(S=y,y=x,x=S),T>b&&(S=T,T=b,b=S),n=(yb?x:b)+1}this._minX=i,this._minY=n,this._maxX=r,this._maxY=s,this._width=r-i,this._height=s-n,this._area=this._width*this._height}else this.setEmpty()};var h=function(){this.dirtyList=[],this.hasClipRect=!1,this.clipWidth=0,this.clipHeight=0,this.clipArea=0,this.clipRectChanged=!1},l=h.prototype;l.setClipRect=function(t,e){this.hasClipRect=!0,this.clipRectChanged=!0,this.clipWidth=Math.ceil(t),this.clipHeight=Math.ceil(e),this.clipArea=this.clipWidth*this.clipHeight},l.addRegion=function(t){var e=t._minX,i=t._minY,n=t._maxX,r=t._maxY;if(this.hasClipRect&&(e<0&&(e=0),i<0&&(i=0),n>this.clipWidth&&(n=this.clipWidth),r>this.clipHeight&&(r=this.clipHeight)),e>=n||i>=r)return!1;if(this.clipRectChanged)return!0;var s=this.dirtyList,a=o();return s.push(a.setTo(e,i,n,r)),this.mergeDirtyList(s),!0},l.clear=function(){for(var t=this.dirtyList,e=t.length,i=0;i0)for(var 
n=0;n3?Number.POSITIVE_INFINITY:0,r=0,s=0,o=0,h=0;hd&&(r=h,s=u,n=d)}}if(i&&o/this.clipArea>.95&&(this.clipRectChanged=!0),r!==s){var f=t[s];return t[r].union(f),a(f),t.splice(s,1),!0}return!1},cc.Region=n,cc.DirtyRegion=h}),{}],206:[(function(t,e,i){cc.rendererCanvas={childrenOrderDirty:!0,assignedZ:0,assignedZStep:1e-4,_transformNodePool:[],_renderCmds:[],_isCacheToCanvasOn:!1,_cacheToCanvasCmds:{},_cacheInstanceIds:[],_currentID:0,_clearColor:cc.color(),_clearFillStyle:"rgb(0, 0, 0)",_dirtyRegion:null,_allNeedDraw:!0,_enableDirtyRegion:!1,_debugDirtyRegion:!1,_dirtyRegionCountThreshold:10,init:function(){cc.sys.browserType===cc.sys.BROWSER_TYPE_IE&&this.enableDirtyRegion(!1)},getRenderCmd:function(t){return t._createRenderCmd()},enableDirtyRegion:function(t){this._enableDirtyRegion=t},isDirtyRegionEnabled:function(){return this._enableDirtyRegion},setDirtyRegionCountThreshold:function(t){this._dirtyRegionCountThreshold=t},_collectDirtyRegion:function(){var t,e,i=this._renderCmds,n=this._dirtyRegion,r=_ccsg.Node.CanvasRenderCmd.RegionStatus,s=0,o=!0;for(t=0,e=i.length;tr.NotDirty&&(++s>this._dirtyRegionCountThreshold&&(o=!1),o&&(!l.isEmpty()&&n.addRegion(l),a._regionFlag>r.Dirty&&!h.isEmpty()&&n.addRegion(h)),a._regionFlag=r.NotDirty)}return o},_beginDrawDirtyRegion:function(t){var e=t.getContext(),i=this._dirtyRegion.getDirtyRegions();e.save(),t.setTransform({a:1,b:0,c:0,d:1,tx:0,ty:0},1,1),e.beginPath();for(var n=0,r=0,s=0,o=0,a=t._scaleX,c=t._scaleY,h=0,l=i.length;h0},_sortNodeByLevelAsc:function(t,e){return t._curLevel-e._curLevel},pushDirtyNode:function(t){this._transformNodePool.push(t)},clear:function(){},clearRenderCommands:function(){this._renderCmds.length=0,this._cacheInstanceIds.length=0,this._isCacheToCanvasOn=!1,this._allNeedDraw=!0},pushRenderCommand:function(t){if(t.rendering)if(this._isCacheToCanvasOn){var e=this._currentID,i=this._cacheToCanvasCmds[e];-1===i.indexOf(t)&&i.push(t)}else-1===this._renderCmds.indexOf(t)&&this._renderCmds.push(t)}},(function(){cc.CanvasContextWrapper=function(t){this._context=t,this._saveCount=0,this._currentAlpha=t.globalAlpha,this._currentCompositeOperation=t.globalCompositeOperation,this._currentFillStyle=t.fillStyle,this._currentStrokeStyle=t.strokeStyle,this._offsetX=0,this._offsetY=0,this._realOffsetY=this.height,this._armatureMode=0};var t=cc.CanvasContextWrapper.prototype;t.resetCache=function(){var t=this._context;this._currentAlpha=t.globalAlpha,this._currentCompositeOperation=t.globalCompositeOperation,this._currentFillStyle=t.fillStyle,this._currentStrokeStyle=t.strokeStyle,this._realOffsetY=this._context.canvas.height+this._offsetY},t.setOffset=function(t,e){this._offsetX=t,this._offsetY=e,this._realOffsetY=this._context.canvas.height+this._offsetY},t.computeRealOffsetY=function(){this._realOffsetY=this._context.canvas.height+this._offsetY},t.setViewScale=function(t,e){this._scaleX=t,this._scaleY=e},t.getContext=function(){return 
this._context},t.save=function(){this._context.save(),this._saveCount++},t.restore=function(){this._context.restore(),this._currentAlpha=this._context.globalAlpha,this._saveCount--},t.setGlobalAlpha=function(t){this._saveCount>0?this._context.globalAlpha=t:this._currentAlpha!==t&&(this._currentAlpha=t,this._context.globalAlpha=t)},t.setCompositeOperation=function(t){this._saveCount>0?this._context.globalCompositeOperation=t:this._currentCompositeOperation!==t&&(this._currentCompositeOperation=t,this._context.globalCompositeOperation=t)},t.setFillStyle=function(t){this._context.fillStyle=t},t.setStrokeStyle=function(t){this._saveCount>0?this._context.strokeStyle=t:this._currentStrokeStyle!==t&&(this._currentStrokeStyle=t,this._context.strokeStyle=t)},t.setTransform=function(t,e,i){this._armatureMode>0?(this.restore(),this.save(),this._context.transform(t.a,-t.b,-t.c,t.d,t.tx*e,-t.ty*i)):this._context.setTransform(t.a*e,-t.b*i,-t.c*e,t.d*i,this._offsetX+t.tx*e,this._realOffsetY-t.ty*i)},t._switchToArmatureMode=function(t,e,i,n){t?(this._armatureMode++,this._context.setTransform(e.a,e.c,e.b,e.d,this._offsetX+e.tx*i,this._realOffsetY-e.ty*n),this.save()):(this._armatureMode--,this.restore())}})()}),{}],207:[(function(t,e,i){var n={texture:null,blendSrc:null,blendDst:null,shader:null},r=!1,s=null,o=null,a=0,c=0,h=0,l=6,u=null,_=null,d=null,f=null,m=0,p=!0;function g(t){var e=cc._renderContext;null===s&&(o=e.createBuffer(),s=e.createBuffer()),(function(t){var e=cc._renderContext;if(s){var i=6*Math.ceil(t/4);e.bindBuffer(e.ELEMENT_ARRAY_BUFFER,s),f=new Uint16Array(i);for(var n=0,r=0,c=i;r0},_sortNodeByLevelAsc:function(t,e){return t._curLevel-e._curLevel},pushDirtyNode:function(t){this._transformNodePool.push(t)},clearRenderCommands:function(){this._renderCmds.length=0},clear:function(){var t=cc._renderContext;t.clearColor(this._clearColor.r,this._clearColor.g,this._clearColor.b,this._clearColor.a),t.clear(t.COLOR_BUFFER_BIT|t.DEPTH_BUFFER_BIT)},setDepthTest:function(t){var e=cc._renderContext;t?(e.clearDepth(1),e.enable(e.DEPTH_TEST),e.depthFunc(e.LEQUAL)):e.disable(e.DEPTH_TEST)},pushRenderCommand:function(t){if(t.rendering||t.uploadData)if(this._isCacheToBufferOn){var e=this._currentID,i=this._cacheToBufferCmds[e];-1===i.indexOf(t)&&i.push(t)}else this._renderCmds.push(t)},_increaseBatchingSize:function(t,e,i){var n,r;switch(e=e||y.QUAD){case y.QUAD:for(n=0;n=a&&this._batchRendering();var e=t._node,i=t._texture||e._texture||e._spriteFrame&&e._spriteFrame._texture,s=e._blendFunc.src,o=e._blendFunc.dst,u=t._shaderProgram;(r||n.texture!==i||n.blendSrc!==s||n.blendDst!==o||n.shader!==u)&&(this._batchRendering(),n.texture=i,n.blendSrc=s,n.blendDst=o,n.shader=u,r=!1);var m=t.uploadData(_,d,c*l);if(m>0){var g,v;switch(t.vertexType||y.QUAD){case y.QUAD:for(g=0;g.5*a;if(i&&(i.use(),i._updateProjectionUniform()),cc.gl.blendFunc(n.blendSrc,n.blendDst),cc.gl.bindTexture2DN(0,e),t.bindBuffer(t.ARRAY_BUFFER,o),r)t.bufferData(t.ARRAY_BUFFER,_,t.DYNAMIC_DRAW);else{var 
u=_.subarray(0,c*l);t.bufferData(t.ARRAY_BUFFER,u,t.DYNAMIC_DRAW)}t.enableVertexAttribArray(cc.macro.VERTEX_ATTRIB_POSITION),t.enableVertexAttribArray(cc.macro.VERTEX_ATTRIB_COLOR),t.enableVertexAttribArray(cc.macro.VERTEX_ATTRIB_TEX_COORDS),t.vertexAttribPointer(cc.macro.VERTEX_ATTRIB_POSITION,3,t.FLOAT,!1,24,0),t.vertexAttribPointer(cc.macro.VERTEX_ATTRIB_COLOR,4,t.UNSIGNED_BYTE,!0,24,12),t.vertexAttribPointer(cc.macro.VERTEX_ATTRIB_TEX_COORDS,2,t.FLOAT,!1,24,16),t.bindBuffer(t.ELEMENT_ARRAY_BUFFER,s),(!m||!p||h>m)&&(r?t.bufferData(t.ELEMENT_ARRAY_BUFFER,f,t.DYNAMIC_DRAW):t.bufferData(t.ELEMENT_ARRAY_BUFFER,f.subarray(0,h),t.DYNAMIC_DRAW)),t.drawElements(t.TRIANGLES,h,t.UNSIGNED_SHORT,0),cc.g_NumberOfDraws++,p?m=h:(m=0,p=!0),c=0,h=0}},rendering:function(t,e){var i,r,s,o=e||this._renderCmds,a=t||cc._renderContext;for(a.bindBuffer(a.ARRAY_BUFFER,null),cc.gl.bindTexture2DN(0,null),i=0,r=o.length;i0&&this._batchRendering(),s.rendering(a)));this._batchRendering(),n.texture=null}}}),{}],208:[(function(t,e,i){t("./RendererCanvas"),t("./RendererWebGL"),t("./DirtyRegion")}),{"./DirtyRegion":205,"./RendererCanvas":206,"./RendererWebGL":207}],209:[(function(t,e,i){_ccsg.Scene=_ccsg.Node.extend({_className:"Scene",ctor:function(){_ccsg.Node.prototype.ctor.call(this),this._ignoreAnchorPointForPosition=!0,this.setAnchorPoint(.5,.5),this.setContentSize(cc.director.getWinSize())}})}),{}],210:[(function(t,e,i){var n=t("../event/event-target"),r=t("../utils/misc");_ccsg.Sprite=_ccsg.Node.extend({dirty:!1,_recursiveDirty:null,_shouldBeHidden:!1,_transformToBatch:null,_blendFunc:null,_texture:null,_rect:null,_rectRotated:!1,_offsetPosition:null,_unflippedOffsetPositionFromCenter:null,_opacityModifyRGB:!1,_flippedX:!1,_flippedY:!1,_textureLoaded:!1,_className:"Sprite",ctor:function(t,e,i){_ccsg.Node.prototype.ctor.call(this),n.call(this),this._shouldBeHidden=!1,this._offsetPosition=cc.p(0,0),this._unflippedOffsetPositionFromCenter=cc.p(0,0),this._blendFunc={src:cc.macro.BLEND_SRC,dst:cc.macro.BLEND_DST},this._rect=cc.rect(0,0,0,0),this._softInit(t,e,i)},textureLoaded:function(){return this._textureLoaded},addLoadedEventListener:function(t,e){this.once("load",t,e)},isDirty:function(){return this.dirty},setDirty:function(t){this.dirty=t},isTextureRectRotated:function(){return this._rectRotated},getTextureRect:function(){return cc.rect(this._rect)},getOffsetPosition:function(){return cc.p(this._offsetPosition)},_getOffsetX:function(){return this._offsetPosition.x},_getOffsetY:function(){return this._offsetPosition.y},getBlendFunc:function(){return this._blendFunc},initWithSpriteFrame:function(t){cc.assertID(t,2606),t.textureLoaded()||(this._textureLoaded=!1,t.once("load",this._renderCmd._spriteFrameLoadedCallback,this._renderCmd));var e=cc._renderType!==cc.game.RENDER_TYPE_CANVAS&&t._rotated,i=this.initWithTexture(t.getTexture(),t.getRect(),e);return this.setSpriteFrame(t),i},initWithSpriteFrameName:function(){cc.warnID(2608)},setVertexRect:function(t){var e=this._rect;e.x=t.x,e.y=t.y,e.width=t.width,e.height=t.height},setFlippedX:function(t){this._flippedX!==t&&(this._flippedX=t,this.setTextureRect(this._rect,this._rectRotated,this._contentSize),this.setNodeDirty(!0))},setFlippedY:function(t){this._flippedY!==t&&(this._flippedY=t,this.setTextureRect(this._rect,this._rectRotated,this._contentSize),this.setNodeDirty(!0))},isFlippedX:function(){return this._flippedX},isFlippedY:function(){return 
this._flippedY},setOpacityModifyRGB:function(t){this._opacityModifyRGB!==t&&(this._opacityModifyRGB=t,this._renderCmd._setColorDirty())},isOpacityModifyRGB:function(){return this._opacityModifyRGB},setDisplayFrameWithAnimationName:function(t,e){cc.assertID(t,2610);var i=cc.spriteFrameAnimationCache.getAnimation(t);if(i){var n=i.getFrames()[e];n?this.setSpriteFrame(n.getSpriteFrame()):cc.logID(2603)}else cc.logID(2602)},getTexture:function(){return this._texture},_softInit:function(t,e,i){void 0===t?_ccsg.Sprite.prototype.init.call(this):t instanceof cc.Texture2D?this.initWithTexture(t,e,i):t instanceof cc.SpriteFrame&&this.initWithSpriteFrame(t)},setBlendFunc:function(t,e){var i=this._blendFunc;void 0===e?(i.src=t.src,i.dst=t.dst):(i.src=t,i.dst=e),this._renderCmd.updateBlendFunc(i)},init:function(){var t=this;return arguments.length>0?t.initWithFile(arguments[0],arguments[1]):(t.dirty=t._recursiveDirty=!1,t._blendFunc.src=cc.macro.BLEND_SRC,t._blendFunc.dst=cc.macro.BLEND_DST,t.texture=null,t._flippedX=t._flippedY=!1,t.anchorX=.5,t.anchorY=.5,t._offsetPosition.x=0,t._offsetPosition.y=0,t.setTextureRect(cc.rect(0,0,0,0),!1,cc.size(0,0)),!0)},initWithFile:function(t,e){cc.assertID(t,2609);var i=cc.textureCache.getTextureForKey(t);if(i){if(!e){var n=i.getContentSize();e=cc.rect(0,0,n.width,n.height)}return this.initWithTexture(i,e)}return i=cc.textureCache.addImage(t),this.initWithTexture(i,e||cc.rect(0,0,i.width,i.height))},initWithTexture:function(t,e,i,n){var r=this;cc.assertID(0!==arguments.length,2710),i=i||!1,t=this._renderCmd._handleTextureForRotatedTexture(t,e,i,n),r._recursiveDirty=!1,r.dirty=!1,r._opacityModifyRGB=!0,r._blendFunc.src=cc.macro.BLEND_SRC,r._blendFunc.dst=cc.macro.BLEND_DST,r._flippedX=r._flippedY=!1,r.setAnchorPoint(.5,.5),r._offsetPosition.x=0,r._offsetPosition.y=0;var s=t.loaded;return r._textureLoaded=s,s?(e||(e=cc.rect(0,0,t.width,t.height)),this._renderCmd._checkTextureBoundary(t,e,i),r.setTexture(t),r.setTextureRect(e,i),this.emit("load"),!0):(r._rectRotated=i,e&&(r._rect.x=e.x,r._rect.y=e.y,r._rect.width=e.width,r._rect.height=e.height),r.texture&&r.texture.off("load",r._renderCmd._textureLoadedCallback,r._renderCmd),t.once("load",r._renderCmd._textureLoadedCallback,r._renderCmd),r.setTexture(t),!0)},setTextureRect:function(t,e,i,n){var r=this;r._rectRotated=e||!1,r.setContentSize(i||t),r.setVertexRect(t),r._renderCmd._setTextureCoords(t,n);var s=r._unflippedOffsetPositionFromCenter.x,o=r._unflippedOffsetPositionFromCenter.y;r._flippedX&&(s=-s),r._flippedY&&(o=-o);var a=r._rect;r._offsetPosition.x=s+(r._contentSize.width-a.width)/2,r._offsetPosition.y=o+(r._contentSize.height-a.height)/2},setSpriteFrame:function(t){var e=this;cc.assertID(t,2712),this.setNodeDirty(!0);var i=t.getOffset();e._unflippedOffsetPositionFromCenter.x=i.x,e._unflippedOffsetPositionFromCenter.y=i.y;var n=t.getTexture();t.textureLoaded()?(e._textureLoaded=!0,n!==e._texture&&(e._setTexture(n),e.setColor(e._realColor)),e.setTextureRect(t.getRect(),t.isRotated(),t.getOriginalSize())):(e._textureLoaded=!1,t.once("load",(function(t){var i=t.currentTarget;e._textureLoaded=!0;var n=i.getTexture();n!==e._texture&&e._setTexture(n),e.setTextureRect(i.getRect(),i.isRotated(),i.getOriginalSize()),e.emit("load"),e.setColor(e._realColor)}),e)),this._renderCmd._updateForSetSpriteFrame(n)},setDisplayFrame:function(t){cc.logID(2604),this.setSpriteFrame(t)},isFrameDisplayed:function(t){return this._renderCmd.isFrameDisplayed(t)},displayFrame:function(){return 
this.getSpriteFrame()},getSpriteFrame:function(){return new cc.SpriteFrame(this._texture,this._rect,this._rectRotated,this._unflippedOffsetPositionFromCenter,this._contentSize)},setTexture:function(t){if(!t)return this._renderCmd._setTexture(null);var e=cc.js.isString(t);e&&(t=cc.textureCache.addImage(t)),t.loaded?(this._setTexture(t,e),this.setColor(this._realColor),this._textureLoaded=!0,this.emit("load")):(this._renderCmd._setTexture(t),t.once("load",(function(i){this._setTexture(t,e),this.setColor(this._realColor),this._textureLoaded=!0,this.emit("load")}),this))},_setTexture:function(t,e){this._renderCmd._setTexture(t),e&&this._changeRectWithTexture(t)},_changeRectWithTexture:function(t){var e=cc.rect(0,0,t.width,t.height);this.setTextureRect(e)},_createRenderCmd:function(){return cc._renderType===cc.game.RENDER_TYPE_CANVAS?new _ccsg.Sprite.CanvasRenderCmd(this):new _ccsg.Sprite.WebGLRenderCmd(this)}}),cc.js.addon(_ccsg.Sprite.prototype,n.prototype);r.propertyDefine(_ccsg.Sprite,["opacity","color","texture"],{opacityModifyRGB:["isOpacityModifyRGB","setOpacityModifyRGB"],flippedX:["isFlippedX","setFlippedX"],flippedY:["isFlippedY","setFlippedY"],offsetX:["_getOffsetX"],offsetY:["_getOffsetY"],textureRectRotated:["isTextureRectRotated"]})}),{"../event/event-target":114,"../utils/misc":228}],211:[(function(t,e,i){function n(t){this._rootCtor(t),this._needDraw=!0,this._textureCoord={renderX:0,renderY:0,x:0,y:0,width:0,height:0,validRect:!1},this._blendFuncStr="source-over",this._colorized=!1,this._textureToRender=null}var r=n.prototype=Object.create(_ccsg.Node.CanvasRenderCmd.prototype);r.constructor=n,r.setDirtyRecursively=function(t){},r._setTexture=function(t){var e=this._node;if(e._texture!==t){e._textureLoaded=!!t&&t.loaded,e._texture=t;var i=cc.rect(0,0,t.width,t.height);e.setTextureRect(i),this._updateColor()}},r._setColorDirty=function(){this.setDirtyFlag(_ccsg.Node._dirtyFlags.colorDirty|_ccsg.Node._dirtyFlags.opacityDirty)},r.isFrameDisplayed=function(t){var e=this._node;return t.getTexture()===e._texture&&cc.rectEqualToRect(t.getRect(),e._rect)},r.updateBlendFunc=function(t){this._blendFuncStr=_ccsg.Node.CanvasRenderCmd._getCompositeOperationByBlendFunc(t)},r._handleTextureForRotatedTexture=function(t,e,i,r){return i&&t.loaded&&(t=n._createRotatedTexture(t,e,r),e.x=e.y=0,this._node._rect=cc.rect(0,0,e.width,e.height)),t},r._checkTextureBoundary=function(t,e,i){if(t&&t.url){var n=e.x+e.width,r=e.y+e.height;n>t.width&&cc.errorID(3300,t.url),r>t.height&&cc.errorID(3400,t.url)}},r.rendering=function(t,e,i){var n=this._node,r=this._textureCoord,s=this._displayedOpacity/255,o=this._textureToRender||n._texture;if((!o||0!==r.width&&0!==r.height&&o.loaded)&&0!==s){var a,c,h,l,u,_,d,f,m,p=t||cc._renderContext,g=p.getContext(),y=n._offsetPosition.x,v=n._rect.height,x=n._rect.width,C=-n._offsetPosition.y-v;if(p.setTransform(this._worldTransform,e,i),p.setCompositeOperation(this._blendFuncStr),p.setGlobalAlpha(s),(n._flippedX||n._flippedY)&&p.save(),n._flippedX&&(y=-y-x,g.scale(-1,1)),n._flippedY&&(C=n._offsetPosition.y,g.scale(1,-1)),this._colorized?(c=0,h=0):(c=r.renderX,h=r.renderY),l=r.width,u=r.height,_=y,d=C,f=x,m=v,o&&o._image)a=o._image,""!==o._pattern?(p.setFillStyle(g.createPattern(a,o._pattern)),g.fillRect(_,d,f,m)):g.drawImage(a,c,h,l,u,_,d,f,m);else{var T=n._contentSize;if(r.validRect){var 
A=this._displayedColor;p.setFillStyle("rgba("+A.r+","+A.g+","+A.b+",1)"),g.fillRect(_,d,T.width,T.height)}}(n._flippedX||n._flippedY)&&p.restore(),cc.g_NumberOfDraws++}},r._updateColor=function(){var t=this._node._texture,e=this._textureCoord,i=this._displayedColor;t&&(255!==i.r||255!==i.g||255!==i.b?(this._textureToRender=t._generateColorTexture(i.r,i.g,i.b,e),this._colorized=!0):t&&(this._textureToRender=t,this._colorized=!1))},r._updateForSetSpriteFrame=function(t,e){if(this._colorized=!1,this._textureCoord.renderX=this._textureCoord.x,this._textureCoord.renderY=this._textureCoord.y,e=e||t.loaded){var i=this._node.getColor();255===i.r&&255===i.g&&255===i.b||this._updateColor()}},r._spriteFrameLoadedCallback=function(t){var e=this._node,i=t.currentTarget;e.setTextureRect(i.getRect(),i.isRotated(),i.getOriginalSize()),this._updateColor(),e.emit("load")},r._textureLoadedCallback=function(t){var e=this._node,i=t.currentTarget;if(!e._textureLoaded){e._textureLoaded=!0;var n=e._rect;n?cc._rectEqualToZero(n)&&(n.width=i.width,n.height=i.height):n=cc.rect(0,0,i.width,i.height),e.texture=i,e.setTextureRect(n,e._rectRotated);var r=this._displayedColor;255===r.r&&255===r.g&&255===r.b||this._updateColor(),e.emit("load")}},r._setTextureCoords=function(t){var e=this._textureCoord;e.renderX=e.x=0|t.x,e.renderY=e.y=0|t.y,e.width=0|t.width,e.height=0|t.height,e.validRect=!(0===e.width||0===e.height||e.x<0||e.y<0)},n._cutRotateImageToCanvas=function(t,e,i){if(!t)return null;if(!e)return t;i=null==i||i;var n=document.createElement("canvas");n.width=e.width,n.height=e.height;var r=n.getContext("2d");return r.translate(n.width/2,n.height/2),i?r.rotate(-1.5707963267948966):r.rotate(1.5707963267948966),r.drawImage(t,e.x,e.y,e.height,e.width,-e.height/2,-e.width/2,e.height,e.width),n},n._createRotatedTexture=function(t,e,i){var r=new cc.Texture2D;return r._nativeAsset=n._cutRotateImageToCanvas(t._nativeAsset,e,i),r},_ccsg.Sprite.CanvasRenderCmd=n}),{}],212:[(function(t,e,i){var n=cc.macro;_ccsg.Sprite.WebGLRenderCmd=function(t){this._rootCtor(t),this._needDraw=!0,this._vertices=[{x:0,y:0,u:0,v:0},{x:0,y:0,u:0,v:0},{x:0,y:0,u:0,v:0},{x:0,y:0,u:0,v:0}],this._dirty=!1,this._recursiveDirty=!1,this._shaderProgram=cc.shaderCache.programForKey(n.SHADER_SPRITE_POSITION_TEXTURECOLOR)};var r=_ccsg.Sprite.WebGLRenderCmd.prototype=Object.create(_ccsg.Node.WebGLRenderCmd.prototype);r.constructor=_ccsg.Sprite.WebGLRenderCmd,r.updateBlendFunc=function(t){},r.setDirtyFlag=function(t){_ccsg.Node.WebGLRenderCmd.prototype.setDirtyFlag.call(this,t),this._dirty=!0},r.setDirtyRecursively=function(t){this._recursiveDirty=t,this._dirty=t;for(var e,i=this._node._children,n=i?i.length:0,r=0;rt.width&&cc.errorID(3300,t.url),r>t.height&&cc.errorID(3400,t.url))},r.transform=function(t,e){this.originTransform(t,e);var i=this._node,n=i._offsetPosition.x,r=n+i._rect.width,s=i._offsetPosition.y,o=s+i._rect.height,a=this._worldTransform,c=this._vertices;c[0].x=n*a.a+o*a.c+a.tx,c[0].y=n*a.b+o*a.d+a.ty,c[1].x=n*a.a+s*a.c+a.tx,c[1].y=n*a.b+s*a.d+a.ty,c[2].x=r*a.a+o*a.c+a.tx,c[2].y=r*a.b+o*a.d+a.ty,c[3].x=r*a.a+s*a.c+a.tx,c[3].y=r*a.b+s*a.d+a.ty},r.needDraw=function(){var t=this._node._texture;return this._needDraw&&t},r.uploadData=function(t,e,i){var n=this._node,r=n._texture;if(!(r&&r.loaded&&n._rect.width&&n._rect.height&&this._displayedOpacity))return 0;var s,o=this._displayedOpacity,a=this._displayedColor._val;if(n._opacityModifyRGB){var 
c=o/255,h=this._displayedColor.r*c,l=this._displayedColor.g*c;s=(o<<24>>>0)+(this._displayedColor.b*c<<16)+(l<<8)+h}else s=(o<<24>>>0)+((65280&a)<<8)+((16711680&a)>>8)+(a>>>24);var u,_,d=n._vertexZ,f=this._vertices,m=f.length,p=i;for(u=0;u=t){e=this._lengths[i];break}return e?this._pool[e].pop():void 0}},a=cc.macro,c={_rebuildQuads_base:function(t){var e,i,n,r,a=t._spriteFrame,c=t._contentSize,h=t._isTrimmedContentSize,l=t._vertices,u=t._corner;if(h)e=0,i=0,n=c.width,r=c.height;else{var _=a._originalSize,d=a._rect,f=a._offset,m=c.width,p=c.height,g=_.width,y=_.height,v=d.width,x=d.height,C=m/g,T=p/y,A=f.x+(g-v)/2,b=f.x-(g-v)/2;e=A*C,i=(f.y+(y-x)/2)*T,n=m+b*C,r=p+(f.y-(y-x)/2)*T}if(l.length<8&&(o.put(l),l=o.get(8)||new Float32Array(8),t._vertices=l),s){var S=t._renderCmd._worldTransform,E=S.a,w=S.b,I=S.c,R=S.d,P=S.tx,O=S.ty,D=e*E,B=e*w,L=n*E,M=n*w,N=r*I+P,F=r*R+O,z=i*I+P,k=i*R+O;l[0]=D+z,l[1]=B+k,l[2]=L+z,l[3]=M+k,l[4]=D+N,l[5]=B+F,l[6]=L+N,l[7]=M+F}else l[0]=e,l[1]=i,l[2]=n,l[3]=i,l[4]=e,l[5]=r,l[6]=n,l[7]=r;u[0]=0,u[1]=2,u[2]=4,u[3]=6,t._uvsDirty&&this._calculateUVs(t,a),t._vertCount=4},_calculateUVs:function(t,e){var i,n,r,s,c=t._uvs,h=e._texture.width,l=e._texture.height,u=e._rect;c.length<8&&(o.put(c),c=o.get(8)||new Float32Array(8),t._uvs=c);var _=a.FIX_ARTIFACTS_BY_STRECHING_TEXEL?.5:0;e._rotated?(i=(u.x+_)/h,n=(u.y+u.width-_)/l,r=(u.x+u.height-_)/h,s=(u.y+_)/l,c[0]=i,c[1]=s,c[2]=i,c[3]=n,c[4]=r,c[5]=s,c[6]=r,c[7]=n):(i=(u.x+_)/h,n=(u.y+u.height-_)/l,r=(u.x+u.width-_)/h,s=(u.y+_)/l,c[0]=i,c[1]=n,c[2]=r,c[3]=n,c[4]=i,c[5]=s,c[6]=r,c[7]=s)}},h={x:new Array(4),y:new Array(4),_rebuildQuads_base:function(t){var e,i,n,r,a=t._spriteFrame,c=t._contentSize,h=t._insetLeft,l=t._insetRight,u=t._insetTop,_=t._insetBottom,d=t._vertices,f=t._renderCmd._worldTransform,m=t._corner;e=h,i=l,n=u,r=_;var p=c,g=p.width-e-i,y=p.height-n-r,v=p.width/(e+i),x=p.height/(n+r);v=isNaN(v)||v>1?1:v,x=isNaN(x)||x>1?1:x,g=g<0?0:g,y=y<0?0:y;var C=this.x,T=this.y;C[0]=0,C[1]=e*v,C[2]=C[1]+g,C[3]=p.width,T[0]=0,T[1]=r*x,T[2]=T[1]+y,T[3]=p.height,d.length<32&&(o.put(d),d=o.get(32)||new Float32Array(32),t._vertices=d);var A,b,S=0;if(s)for(A=0;A<4;A++)for(b=0;b<4;b++)d[S]=C[b]*f.a+T[A]*f.c+f.tx,d[S+1]=C[b]*f.b+T[A]*f.d+f.ty,S+=2;else for(A=0;A<4;A++)for(b=0;b<4;b++)d[S]=C[b],d[S+1]=T[A],S+=2;m[0]=0,m[1]=6,m[2]=24,m[3]=30,t._uvsDirty&&this._calculateUVs(t,a,h,l,u,_)},_calculateUVs:function(t,e,i,n,r,s){var c,h,l,u,_,d,f=t._uvs,m=e._rect,p=e._texture.width,g=e._texture.height,y=e._rect;c=i,l=n,h=m.width-c-l,u=r,d=s,_=m.height-u-d,f.length<32&&(o.put(f),f=o.get(32)||new Float32Array(32),t._uvs=f);var v,x,C=this.x,T=this.y,A=a.FIX_ARTIFACTS_BY_STRECHING_TEXEL?.5:0,b=0;if(e._rotated)for(C[0]=(y.x+A)/p,C[1]=(d+y.x)/p,C[2]=(d+_+y.x)/p,C[3]=(y.x+y.height-A)/p,T[3]=(y.y+A)/g,T[2]=(c+y.y)/g,T[1]=(c+h+y.y)/g,T[0]=(y.y+y.width-A)/g,v=0;v<4;v++)for(x=0;x<4;x++)f[b]=C[v],f[b+1]=T[3-x],b+=2;else for(C[0]=(y.x+A)/p,C[1]=(c+y.x)/p,C[2]=(c+h+y.x)/p,C[3]=(y.x+y.width-A)/p,T[3]=(y.y+A)/g,T[2]=(u+y.y)/g,T[1]=(u+_+y.y)/g,T[0]=(y.y+y.height-A)/g,v=0;v<4;v++)for(x=0;x<4;x++)f[b]=C[x],f[b+1]=T[v],b+=2}},l={_rebuildQuads_base:function(t,e,i){e=t._spriteFrame,i=t._contentSize;var n,r,c,h,l=t._vertices,u=t._corner,_=t._renderCmd._worldTransform,d=t._uvs,f=e._texture.width,m=e._texture.height,p=e._rect,g=a.FIX_ARTIFACTS_BY_STRECHING_TEXEL?.5:0;e._rotated?(n=(p.x+g)/f,c=(p.x+p.height-g)/f,r=(p.y+p.width-g)/m,h=(p.y+g)/m):(n=(p.x+g)/f,c=(p.x+p.width-g)/f,r=(p.y+p.height-g)/m,h=(p.y+g)/m);var 
y=p.width,v=p.height,x=i.width/y,C=i.height/v,T=Math.ceil(C),A=Math.ceil(x);T*A>16384&&cc.errorID(2625);var b=T*A*4*2;l.lengthb)return}u[0]=0,u[1]=8*(A-1)+2,u[2]=(T-1)*A*8+4,u[3]=b-2}},u={_rebuildQuads_base:function(t){var e=t._spriteFrame,i=t._contentSize,n=t._fillStart,r=t._fillRange;r<0&&(n+=r,r=-r),r=n+r,n=(n=n>1?1:n)<0?0:n,r=(r=r>1?1:r)<0?0:r,r-=n;var c,h,l,u,_,d=t._fillType,f=t._vertices,m=t._corner,g=t._renderCmd._worldTransform,y=t._uvs,v=0,x=0,C=i.width,T=i.height,A=e._texture.width,b=e._texture.height,S=e._rect,E=a.FIX_ARTIFACTS_BY_STRECHING_TEXEL?.5:0;e._rotated?(h=(S.x+E)/A,l=(S.y+S.width-E)/b,u=(S.x+S.height-E)/A,_=(S.y+E)/b):(h=(S.x+E)/A,l=(S.y+S.height-E)/b,u=(S.x+S.width-E)/A,_=(S.y+E)/b),f.length<8&&(o.put(f),f=o.get(8)||new Float32Array(8),t._vertices=f),y.length<8&&(o.put(y),y=o.get(8)||new Float32Array(8),t._uvs=y);var w,I=new Array(8);switch(e._rotated?(I[0]=I[2]=h,I[4]=I[6]=u,I[3]=I[7]=l,I[1]=I[5]=_):(I[0]=I[4]=h,I[2]=I[6]=u,I[1]=I[3]=l,I[5]=I[7]=_),c=(c=(n=(n=n>1?1:n)<0?0:n)+(r=r<0?0:r))>1?1:c,d){case p.HORIZONTAL:w=v+(C-v)*c,v=v+(C-v)*n,C=w,y[0]=I[0]+(I[2]-I[0])*n,y[1]=I[1],y[2]=I[0]+(I[2]-I[0])*c,y[3]=I[3],y[4]=I[4]+(I[6]-I[4])*n,y[5]=I[5],y[6]=I[4]+(I[6]-I[4])*c,y[7]=I[7];break;case p.VERTICAL:w=x+(T-x)*c,x=x+(T-x)*n,T=w,y[0]=I[0],y[1]=I[1]+(I[5]-I[1])*n,y[2]=I[2],y[3]=I[3]+(I[7]-I[3])*n,y[4]=I[4],y[5]=I[1]+(I[5]-I[1])*c,y[6]=I[6],y[7]=I[3]+(I[7]-I[3])*c;break;default:cc.errorID(2626)}if(s){var R=v*g.a,P=v*g.b,O=C*g.a,D=C*g.b,B=T*g.c+g.tx,L=T*g.d+g.ty,M=x*g.c+g.tx,N=x*g.d+g.ty;f[0]=R+M,f[1]=P+N,f[2]=O+M,f[3]=D+N,f[4]=R+B,f[5]=P+L,f[6]=O+B,f[7]=D+L}else f[0]=v,f[1]=x,f[2]=C,f[3]=x,f[4]=v,f[5]=T,f[6]=C,f[7]=T;t._vertCount=4,m[0]=0,m[1]=2,m[2]=4,m[3]=6}},_={_vertPos:[cc.v2(0,0),cc.v2(0,0),cc.v2(0,0),cc.v2(0,0)],_vertices:[cc.v2(0,0),cc.v2(0,0)],_uvs:[cc.v2(0,0),cc.v2(0,0)],_intersectPoint_1:[cc.v2(0,0),cc.v2(0,0),cc.v2(0,0),cc.v2(0,0)],_intersectPoint_2:[cc.v2(0,0),cc.v2(0,0),cc.v2(0,0),cc.v2(0,0)],outVerts:null,outUvs:null,rawVerts:null,rawUvs:null,_rebuildQuads_base:function(t){var e=t._spriteFrame,i=t._contentSize,n=t._fillStart,r=t._fillRange;r<0&&(n+=r,r=-r),t._isTriangle=!0,t._rawVerts||(t._rawVerts=o.get(8)||new Float32Array(8),t._rawUvs=o.get(8)||new Float32Array(8));for(var s=t._fillCenter,a=t._vertices,c=t._corner,h=t._uvs,l=t._rawVerts,u=t._rawUvs,_=t._renderCmd._worldTransform;n>=1;)n-=1;for(;n<0;)n+=1;var d=s.x*i.width,f=s.y*i.height,m=cc.v2(d,f),p=(n*=2*Math.PI)+(r*=2*Math.PI);this.outVerts=a,this.outUvs=h,this.rawVerts=l,this.rawUvs=u,this._calculateVertices(_,e,i),this._calculateUVs(e);var g=this._vertPos,y=this._vertices;g[0].x=g[3].x=y[0].x,g[1].x=g[2].x=y[1].x,g[0].y=g[1].y=y[0].y,g[2].y=g[3].y=y[1].y,m.x>y[1].x&&(m.x=y[1].x),m.xy[1].y&&(m.y=y[1].y),l[0]=l[4]=this._vertices[0].x,l[2]=l[6]=this._vertices[1].x,l[1]=l[3]=this._vertices[0].y,l[5]=l[7]=this._vertices[1].y,e._rotated?(u[0]=u[2]=this._uvs[0].x,u[4]=u[6]=this._uvs[1].x,u[3]=u[7]=this._uvs[0].y,u[1]=u[5]=this._uvs[1].y):(u[0]=u[4]=this._uvs[0].x,u[2]=u[6]=this._uvs[1].x,u[1]=u[3]=this._uvs[0].y,u[5]=u[7]=this._uvs[1].y);var 
v=[null,null,null,null];m.x!==this._vertices[0].x&&(v[0]=[3,0]),m.x!==this._vertices[1].x&&(v[2]=[1,2]),m.y!==this._vertices[0].y&&(v[1]=[0,1]),m.y!==this._vertices[1].y&&(v[3]=[2,3]),this._getInsectedPoints(this._vertices[0].x,this._vertices[1].x,this._vertices[0].y,this._vertices[1].y,m,n,this._intersectPoint_1),this._getInsectedPoints(this._vertices[0].x,this._vertices[1].x,this._vertices[0].y,this._vertices[1].y,m,n+r,this._intersectPoint_2);a.length<30&&(o.put(a),a=o.get(30)||new Float32Array(30),this.outVerts=t._vertices=a),h.length<30&&(o.put(h),h=o.get(30)||new Float32Array(30),this.outUvs=t._uvs=h);for(var x=0,C=0,T=0;T<4;++T){var A=v[T];if(null!==A)if(r>=2*Math.PI)this._generateTriangle(_,x,m,this._vertPos[A[0]],this._vertPos[A[1]]),x+=6,C+=3;else{var b=this._getVertAngle(m,this._vertPos[A[0]]),S=this._getVertAngle(m,this._vertPos[A[1]]);S=p||(b>=n?(S>=p?this._generateTriangle(_,x,m,this._vertPos[A[0]],this._intersectPoint_2[T]):this._generateTriangle(_,x,m,this._vertPos[A[0]],this._vertPos[A[1]]),x+=6,C+=3):S<=n||(S<=p?(this._generateTriangle(_,x,m,this._intersectPoint_1[T],this._vertPos[A[1]]),x+=6,C+=3):(this._generateTriangle(_,x,m,this._intersectPoint_1[T],this._intersectPoint_2[T]),x+=6,C+=3))),b+=2*Math.PI,S+=2*Math.PI}}t._vertCount=C;for(var w,I,R=1/0,P=1/0,O=-1/0,D=-1/0,B=0,L=x;B=O&&(O=w,c[1]=B),I<=P?(P=I,c[2]=B):I>=D&&(D=I,c[3]=B)},_generateTriangle:function(t,e,i,n,r){var o,a,c=this.rawVerts,h=this.rawUvs,l=this.outVerts,u=c[0],_=c[1],d=c[6],f=c[7];s?(l[e]=i.x*t.a+i.y*t.c+t.tx,l[e+1]=i.x*t.b+i.y*t.d+t.ty,l[e+2]=n.x*t.a+n.y*t.c+t.tx,l[e+3]=n.x*t.b+n.y*t.d+t.ty,l[e+4]=r.x*t.a+r.y*t.c+t.tx,l[e+5]=r.x*t.b+r.y*t.d+t.ty):(l[e]=i.x,l[e+1]=i.y,l[e+2]=n.x,l[e+3]=n.y,l[e+4]=r.x,l[e+5]=r.y),o=(i.x-u)/(d-u),a=(i.y-_)/(f-_),this._generateUV(o,a,h,e),o=(n.x-u)/(d-u),a=(n.y-_)/(f-_),this._generateUV(o,a,h,e+2),o=(r.x-u)/(d-u),a=(r.y-_)/(f-_),this._generateUV(o,a,h,e+4)},_generateUV:function(t,e,i,n){var r=this.outUvs,s=i[0]+(i[2]-i[0])*t,o=i[4]+(i[6]-i[4])*t,a=i[1]+(i[3]-i[1])*t,c=i[5]+(i[7]-i[5])*t;r[n]=s+(o-s)*e,r[n+1]=a+(c-a)*e},_isAngleIn:function(t,e,i){for(var n=2*Math.PI;t=e+n;)t=e+n&&(t-=n);return t<=e+i},_getVertAngle:function(t,e){var i,n;if(i=e.x-t.x,n=e.y-t.y,0!==i||0!==n){if(0===i)return n>0?.5*Math.PI:1.5*Math.PI;var r=Math.atan(n/i);return i<0&&(r+=Math.PI),r}},_getInsectedPoints:function(t,e,i,n,r,s,o){var a,c,h=Math.sin(s),l=Math.cos(s);if(0!==Math.cos(s)){if(a=h/l,(t-r.x)*l>0){var u=r.y+a*(t-r.x);o[0].x=t,o[0].y=u}if((e-r.x)*l>0){var _=r.y+a*(e-r.x);o[2].x=e,o[2].y=_}}if(0!==Math.sin(s)){if(c=l/h,(n-r.y)*h>0){var d=r.x+c*(n-r.y);o[3].x=d,o[3].y=n}if((i-r.y)*h>0){var f=r.x+c*(i-r.y);o[1].x=f,o[1].y=i}}return[null,null,null,null]},_calculateVertices:function(t,e,i){var n,r;n=i.width,r=i.height,this._vertices[0].x=0,this._vertices[0].y=0,this._vertices[1].x=n,this._vertices[1].y=r},_calculateUVs:function(t){var e,i,n,r,s=t._texture.width,o=t._texture.height,c=t._rect,h=a.FIX_ARTIFACTS_BY_STRECHING_TEXEL?.5:0;t._rotated?(e=(c.x+h)/s,i=(c.x+c.height-h)/s,n=(c.y+h)/o,r=(c.y+c.width-h)/o):(e=(c.x+h)/s,i=(c.x+c.width-h)/s,n=(c.y+h)/o,r=(c.y+c.height-h)/o),this._uvs[0].x=e,this._uvs[0].y=r,this._uvs[1].x=i,this._uvs[1].y=n}},d={_rebuildQuads_base:function(t,e,i){if(cc._renderType!==cc.game.RENDER_TYPE_CANVAS&&(i=t._meshPolygonInfo)){var 
n=t._renderCmd._worldTransform,r=i.triangles.verts,s=t._vertices,a=t._uvs,c=r.length,h=t._corner,l=2*c;s.lengthd&&(d=p,h[1]=2*m),g<_&&(_=g,h[2]=2*m),g>f&&(f=g,h[3]=2*m)}t._vertCount=c}}};cc.Scale9Sprite=_ccsg.Node.extend({_spriteFrame:null,_insetLeft:0,_insetRight:0,_insetTop:0,_insetBottom:0,_blendFunc:null,_renderingType:1,_brightState:0,_rawVerts:null,_rawUvs:null,_vertices:null,_uvs:null,_vertCount:0,_quadsDirty:!0,_uvsDirty:!0,_isTriangle:!1,_isTrimmedContentSize:!0,_fillType:0,_fillCenter:null,_fillStart:0,_fillRange:2*Math.PI,_distortionOffset:null,_distortionTiling:null,_meshPolygonInfo:null,ctor:function(t){_ccsg.Node.prototype.ctor.call(this),this._renderCmd.setState(this._brightState),this._blendFunc=cc.BlendFunc._alphaNonPremultiplied(),this._fillCenter=cc.v2(0,0),this.setAnchorPoint(cc.p(.5,.5)),this._rawVerts=null,this._rawUvs=null,this._vertices=o.get(8)||new Float32Array(8),this._uvs=o.get(8)||new Float32Array(8),t&&this.setSpriteFrame(t),void 0===s&&(s=cc._renderType===cc.game.RENDER_TYPE_WEBGL),this._corner=[]},loaded:function(){return null!==this._spriteFrame&&this._spriteFrame.textureLoaded()},setTexture:function(t){var e=new cc.SpriteFrame(t);this.setSpriteFrame(e)},setSpriteFrame:function(t){if(t){this._spriteFrame=t,this._quadsDirty=!0,this._uvsDirty=!0,this._renderCmd._needDraw=!1;var e=this;function i(){e._spriteFrame&&0===e._contentSize.width&&0===e._contentSize.height&&e.setContentSize(e._spriteFrame._rect),e._renderCmd._needDraw=!0,e._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)}t.textureLoaded()?i():t.once("load",i,this)}},setBlendFunc:function(t,e){void 0===e?(this._blendFunc.src=t.src,this._blendFunc.dst=t.dst):(this._blendFunc.src=t,this._blendFunc.dst=e),this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)},getBlendFunc:function(){return new cc.BlendFunc(this._blendFunc.src,this._blendFunc.dst)},setContentSize:function(t,e){void 0===e&&(e=t.height,t=t.width),t===this._contentSize.width&&e===this._contentSize.height||(_ccsg.Node.prototype.setContentSize.call(this,t,e),this._quadsDirty=!0)},enableTrimmedContentSize:function(t){this._isTrimmedContentSize!==t&&(this._isTrimmedContentSize=t,this._quadsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty))},setState:function(t){this._brightState=t,this._renderCmd.setState(t),this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)},getState:function(){return this._brightState},setRenderingType:function(t){this._renderingType!==t&&(this._renderingType=t,this._quadsDirty=!0,this._uvsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty))},getRenderingType:function(){return this._renderingType},setInsetLeft:function(t){this._insetLeft=t,this._quadsDirty=!0,this._uvsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)},getInsetLeft:function(){return this._insetLeft},setInsetTop:function(t){this._insetTop=t,this._quadsDirty=!0,this._uvsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)},getInsetTop:function(){return this._insetTop},setInsetRight:function(t){this._insetRight=t,this._quadsDirty=!0,this._uvsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)},getInsetRight:function(){return this._insetRight},setInsetBottom:function(t){this._insetBottom=t,this._quadsDirty=!0,this._uvsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)},getInsetBottom:function(){return 
this._insetBottom},setFillType:function(t){this._fillType!==t&&(this._fillType=t,this._renderingType===m.FILLED&&(this._quadsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)))},getFillType:function(){return this._fillType},setFillCenter:function(t,e){this._fillCenter=cc.v2(t,e),this._renderingType===m.FILLED&&this._fillType===p.RADIAL&&(this._quadsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty))},setDistortionTiling:function(t,e){void 0===e&&(e=t.y,t=t.x),this._distortionTiling=this._distortionTiling||cc.v2(0,0),this._distortionTiling.x=t,this._distortionTiling.y=e},setDistortionOffset:function(t,e){void 0===e&&(e=t.y,t=t.x),this._distortionOffset=this._distortionOffset||cc.v2(0,0),this._distortionOffset.x=t,this._distortionOffset.y=e},getFillCenter:function(){return cc.v2(this._fillCenter)},setFillStart:function(t){this._fillStart!==t&&(this._fillStart=t,this._renderingType===m.FILLED&&(this._quadsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)))},getFillStart:function(){return this._fillStart},setFillRange:function(t){this._fillRange!==t&&(this._fillRange=t,this._renderingType===m.FILLED&&(this._quadsDirty=!0,this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty)))},getFillRange:function(){return this._fillRange},_rebuildQuads:function(){if(this._spriteFrame&&this._spriteFrame.textureLoaded()){var t;switch(this._isTriangle=!1,this._renderingType){case m.SIMPLE:t=c;break;case m.SLICED:t=h;break;case m.TILED:t=l;break;case m.FILLED:t=this._fillType===p.RADIAL?_:u;break;case m.MESH:t=d}t?t._rebuildQuads_base(this):(this._quadsDirty=!1,this._uvsDirty=!1,this._renderCmd._needDraw=!1,cc.errorID(2627)),this._quadsDirty=!1,this._uvsDirty=!1}else this._renderCmd._needDraw=!1},_createRenderCmd:function(){return cc._renderType===cc.game.RENDER_TYPE_CANVAS?new cc.Scale9Sprite.CanvasRenderCmd(this):new cc.Scale9Sprite.WebGLRenderCmd(this)},setMeshPolygonInfo:function(t){this.setRenderingType(m.MESH),this._meshPolygonInfo=t,this._quadsDirty=!0,this._uvsDirty=!0},getMeshPolygonInfo:function(){return this._meshPolygonInfo}});var f=cc.Scale9Sprite.prototype;cc.js.addon(f,n.prototype),cc.defineGetterSetter(f,"insetLeft",f.getInsetLeft,f.setInsetLeft),cc.defineGetterSetter(f,"insetTop",f.getInsetTop,f.setInsetTop),cc.defineGetterSetter(f,"insetRight",f.getInsetRight,f.setInsetRight),cc.defineGetterSetter(f,"insetBottom",f.getInsetBottom,f.setInsetBottom),cc.Scale9Sprite.state={NORMAL:0,GRAY:1,DISTORTION:2};var m=cc.Scale9Sprite.RenderingType=cc.Enum({SIMPLE:0,SLICED:1,TILED:2,FILLED:3,MESH:4}),p=cc.Scale9Sprite.FillType=cc.Enum({HORIZONTAL:0,VERTICAL:1,RADIAL:2})}),{"../event/event-target":114}],214:[(function(t,e,i){cc.Scale9Sprite.CanvasRenderCmd=function(t){this._rootCtor(t),this._node.loaded()?this._needDraw=!0:this._needDraw=!1,this._state=cc.Scale9Sprite.state.NORMAL,this._originalTexture=this._textureToRender=null};var n=cc.Scale9Sprite.CanvasRenderCmd.prototype=Object.create(_ccsg.Node.CanvasRenderCmd.prototype);n.constructor=cc.Scale9Sprite.CanvasRenderCmd,n.updateTransform=function(t){this.originUpdateTransform(t),this._node._rebuildQuads()},n._doCulling=function(){var 
t=cc.visibleRect,e=this._currentRegion,i=e._minX,n=e._maxX,r=e._minY,s=e._maxY,o=t.left.x,a=t.right.x,c=t.top.y,h=t.bottom.y;this._needDraw=!(na||sc)},n._updateDisplayColor=function(t){_ccsg.Node.WebGLRenderCmd.prototype._updateDisplayColor.call(this,t),this._originalTexture=this._textureToRender=null},n._syncDisplayColor=function(t){_ccsg.Node.WebGLRenderCmd.prototype._syncDisplayColor.call(this,t),this._originalTexture=this._textureToRender=null},n.setState=function(t){this._state!==t&&(this._state=t,this._originalTexture=this._textureToRender=null)},n.rendering=function(t,e,i){var n=this._node,r=this._displayedOpacity,s=r/255,o=null;if(n._spriteFrame&&(o=n._spriteFrame._texture),n.loaded()&&0!==r){if((null===this._textureToRender||this._originalTexture!==o)&&(this._textureToRender=this._originalTexture=o,cc.Scale9Sprite.state.GRAY===this._state&&(this._textureToRender=this._textureToRender._generateGrayTexture()),cc.sys.browserType!==cc.sys.BROWSER_TYPE_WECHAT_GAME_SUB)){var a=n.getDisplayedColor();!o||255===a.r&&255===a.g&&255===a.b||(this._textureToRender=this._textureToRender._generateColorTexture(a.r,a.g,a.b))}var c=t||cc._renderContext,h=c.getContext();if(c.setTransform(this._worldTransform,e,i),c.setCompositeOperation(_ccsg.Node.CanvasRenderCmd._getCompositeOperationByBlendFunc(n._blendFunc)),c.setGlobalAlpha(s),this._textureToRender){var l,u,_,d,f,m,p,g;n._quadsDirty&&n._rebuildQuads();var y=this._textureToRender.width,v=this._textureToRender.height,x=this._textureToRender._image,C=n._vertices,T=n._uvs,A=0,b=0;if(n._isTriangle){var S=n._rawVerts,E=n._rawUvs;f=S[0],m=S[1],p=S[6]-f,m=-m-(g=S[7]-m),l=E[4]*y,u=E[5]*v,_=(E[6]-E[0])*y,d=(E[1]-E[7])*v,c.save(),h.beginPath();var w=Math.floor(n._vertCount/3);for(A=0,b=0;A0&&d>0&&p>0&&g>0&&h.drawImage(x,l,u,_,d,f,m,p,g),c.restore(),cc.g_NumberOfDraws+=w}else if(n._renderingType===cc.Scale9Sprite.RenderingType.SLICED){for(var I=0;I<3;++I)for(var R=0;R<3;++R)f=C[b=8*I+2*R],m=C[b+1],p=C[b+10]-f,m=-m-(g=C[b+11]-m),l=T[b]*y,u=T[b+11]*v,_=(T[b+10]-T[b])*y,d=(T[b+1]-T[b+11])*v,_>0&&d>0&&p>0&&g>0&&h.drawImage(x,l,u,_,d,f,m,p,g);cc.g_NumberOfDraws+=9}else{var P=Math.floor(n._vertCount/4);for(A=0,b=0;A0&&d>0&&p>0&&g>0&&h.drawImage(x,l,u,_,d,f,m,p,g),b+=8;cc.g_NumberOfDraws+=P}}}}}),{}],215:[(function(t,e,i){cc.gl;cc.Scale9Sprite.WebGLRenderCmd=function(t){this._rootCtor(t),this._node.loaded()?this._needDraw=!0:this._needDraw=!1,this.vertexType=cc.renderer.VertexType.QUAD,this._dirty=!1,this._shaderProgram=cc.shaderCache.programForKey(cc.macro.SHADER_SPRITE_POSITION_TEXTURECOLOR)};var n=cc.Scale9Sprite,r=n.WebGLRenderCmd.prototype=Object.create(_ccsg.Node.WebGLRenderCmd.prototype);r.constructor=n.WebGLRenderCmd,r._uploadSliced=function(t,e,i,n,r,s,o){for(var a,c=0;c<3;++c)for(var h=0;h<3;++h)a=8*c+2*h,r[o]=t[a],r[o+1]=t[a+1],r[o+2]=n,s[o+3]=i,r[o+4]=e[a],r[o+5]=e[a+1],r[o+=6]=t[a+2],r[o+1]=t[a+3],r[o+2]=n,s[o+3]=i,r[o+4]=e[a+2],r[o+5]=e[a+3],r[o+=6]=t[a+8],r[o+1]=t[a+9],r[o+2]=n,s[o+3]=i,r[o+4]=e[a+8],r[o+5]=e[a+9],r[o+=6]=t[a+10],r[o+1]=t[a+11],r[o+2]=n,s[o+3]=i,r[o+4]=e[a+10],r[o+5]=e[a+11],o+=6;return 36},r.updateTransform=function(t){this.originUpdateTransform(t),this._node._rebuildQuads()},r._doCulling=function(){var t=this._node,e=cc.visibleRect;this._cameraFlag>0&&(e=cc.Camera.main.visibleRect);var 
i=e.left.x,n=e.right.x,r=e.top.y,s=e.bottom.y,o=t._vertices,a=t._corner,c=a[0],h=a[1],l=a[2],u=a[3],_=o[c],d=o[h],f=o[l],m=o[u],p=o[c+1],g=o[h+1],y=o[l+1],v=o[u+1];this._needDraw=!((_-i&d-i&f-i&m-i)>>31||(n-_&n-d&n-f&n-m)>>31||(p-s&g-s&y-s&v-s)>>31||(r-p&r-g&r-y&r-v)>>31)},r.uploadData=function(t,e,i){var r=this._node;if(0===this._displayedOpacity)return 0;r._quadsDirty&&r._rebuildQuads(),r._distortionOffset&&this._shaderProgram===n.WebGLRenderCmd._distortionProgram&&(this._shaderProgram.use(),this._shaderProgram.setUniformLocationWith2f(n.WebGLRenderCmd._distortionOffset,r._distortionOffset.x,r._distortionOffset.y),this._shaderProgram.setUniformLocationWith2f(n.WebGLRenderCmd._distortionTiling,r._distortionTiling.x,r._distortionTiling.y),cc.renderer._breakBatch());var s,o=this._displayedOpacity,a=this._displayedColor._val;if(r._opacityModifyRGB){var c=o/255,h=this._displayedColor.r*c,l=this._displayedColor.g*c;s=(o<<24>>>0)+(this._displayedColor.b*c<<16)+(l<<8)+h}else s=(o<<24>>>0)+((65280&a)<<8)+((16711680&a)>>8)+(a>>>24);var u=r._vertexZ,_=r._vertices,d=r._uvs,f=n.RenderingType,m=i,p=0;switch(r._renderingType){case f.SIMPLE:case f.TILED:case f.FILLED:case f.MESH:p=this._node._vertCount;for(var g=0,y=0;gt.width&&cc.errorID(3300,t.url+"/"+this.name,i,t.width),n>t.height&&cc.errorID(3400,t.url+"/"+this.name,n,t.height)},_serialize:!1,_deserialize:function(t,e){var i=t.rect;i&&this.setRect(new cc.Rect(i[0],i[1],i[2],i[3])),t.offset&&this.setOffset(new cc.Vec2(t.offset[0],t.offset[1])),t.originalSize&&this.setOriginalSize(new cc.Size(t.originalSize[0],t.originalSize[1])),this._rotated=1===t.rotated,this._name=t.name;var n=t.capInsets;n&&(this.insetLeft=n[0],this.insetTop=n[1],this.insetRight=n[2],this.insetBottom=n[3]);var r=t.texture;r&&e.result.push(this,"_textureSetter",r)}}),s=r.prototype;s.copyWithZone=s.clone,s.copy=s.clone,s.initWithTexture=s.setTexture,cc.SpriteFrame=r,e.exports=r}),{"../assets/CCAsset":43,"../event/event-target":114}],218:[(function(t,e,i){var n=t("../event/event-target"),r=t("../platform/CCSys"),s=t("../platform/js"),o=(t("../utils/misc"),t("../CCGame"));t("../platform/CCClass");var a=[{format:6407,internalFormat:6407,pixelType:33635},{format:6408,internalFormat:6408,pixelType:32820},{format:6408,internalFormat:6408,pixelType:32819},{format:6407,internalFormat:6407,pixelType:5121},{format:6408,internalFormat:6408,pixelType:5121},{format:6406,internalFormat:6406,pixelType:5121},{format:6409,internalFormat:6409,pixelType:5121},{format:6410,internalFormat:6410,pixelType:5121}],c=cc.Enum({RGB565:0,RGB5A1:1,RGBA4444:2,RGB888:3,RGBA8888:4,A8:5,I8:6,AI8:7}),h=cc.Enum({REPEAT:10497,CLAMP_TO_EDGE:33071,MIRRORED_REPEAT:33648}),l=cc.Enum({LINEAR:9729,NEAREST:9728}),u=cc.Class({name:"cc.Texture2D",extends:t("../assets/CCAsset"),mixins:[n],ctor:function(t){this.url="",this.loaded=!1,this.width=0,this.height=0,this._image=null,cc._renderType===o.RENDER_TYPE_CANVAS?(this._pattern="",this._grayElementObj=null,this._backupElement=null,this._isGray=!1):cc._renderType===o.RENDER_TYPE_WEBGL&&(this._gl=t||cc._renderContext,this._glID=null)},properties:{_nativeAsset:{get:function(){return this._image},set:function(t){this.initWithElement(t),this.handleLoadedTexture()},override:!0},_hasMipmap:!1,_format:c.RGBA8888,_compressed:!1,_premultiplyAlpha:!1,_minFilter:l.LINEAR,_magFilter:l.LINEAR,_wrapS:h.CLAMP_TO_EDGE,_wrapT:h.CLAMP_TO_EDGE},statics:{WrapMode:h,PixelFormat:c,Filter:l,extnames:[".png",".jpg",".jpeg",".bmp",".webp"]},update:function(t){},toString:function(){return 
this.url||""},getPixelWidth:function(){return this.width},getPixelHeight:function(){return this.height},getContentSize:function(){return cc.size(this.width,this.height)},getContentSizeInPixels:function(){return this.getContentSize()},initWithElement:function(t){t&&(this._image=t,this.width=t.width,this.height=t.height,this.loaded=!0)},initWithData:function(t,e,i,n,r){return!1},initWithImage:function(t){return!1},getHtmlElementObj:function(){return this._image},load:function(t){if(this.loaded)t&&t();else if(this.url){var e=this;cc.loader.load({url:this.url,_owner:this},(function(i,n){n&&(e.loaded||(e._nativeAsset=n)),t&&t(i)}))}else t&&t()},isLoaded:function(){return this.loaded},handleLoadedTexture:function(){if(this._image&&this._image.width&&this._image.height){var t=this._image;this.width=t.width,this.height=t.height,this.loaded=!0,this.emit("load")}},description:function(){return""},_releaseTexture:function(){this._gl&&null!==this._glID&&(this._gl.deleteTexture(this._glID),this._glID=null)},destroy:function(){this._releaseTexture(),cc.textureCache.removeTextureForKey(this.url),this._super()},getPixelFormat:function(){return this._format},hasPremultipliedAlpha:function(){return this._premultiplyAlpha||!1},hasMipmaps:function(){return this._hasMipmap||!1},setTexParameters:function(t,e,i,n){void 0!==e&&(t={minFilter:t,magFilter:e,wrapS:i,wrapT:n}),t.wrapS!==h.REPEAT||t.wrapT!==h.REPEAT?t.wrapS!==h.REPEAT?t.wrapT!==h.REPEAT?this._pattern="":this._pattern="repeat-y":this._pattern="repeat-x":this._pattern="repeat"},setAntiAliasTexParameters:function(){},setAliasTexParameters:function(){},_serialize:!1,_deserialize:function(t,e){var i=t.split(",")[0];if(i){var n=i.charCodeAt(0)-48,r=u.extnames[n];this._setRawAsset(r||i);var s=e.customEnv,o=s&&s.uuid;if(o){this._uuid=o;var a=this.nativeUrl;this.url=a,cc.textureCache.cacheImage(a,this)}}}}),_=u.prototype;s.get(_,"pixelFormat",_.getPixelFormat),s.get(_,"pixelWidth",_.getPixelWidth),s.get(_,"pixelHeight",_.getPixelHeight),o.once(o.EVENT_RENDERER_INITED,(function(){if(cc._renderType===o.RENDER_TYPE_CANVAS)(function(){function t(t,e,i){if(null===t)return null;i=i||document.createElement("canvas"),e=e||cc.rect(0,0,t.width,t.height),i.width=e.width,i.height=e.height;var n=i.getContext("2d");n.drawImage(t,e.x,e.y,e.width,e.height,0,0,e.width,e.height);for(var r=n.getImageData(0,0,e.width,e.height),s=r.data,o=0,a=s.length;o"},getTextureForKey:function(t){return this._textures[t]},getKeyByTexture:function(t){for(var e in this._textures)if(this._textures[e]===t)return e;return null},_generalTextureKey:function(t){return"_textureKey_"+t},getTextureColors:function(t){var e=t._image,i=this.getKeyByTexture(e);return i||(i=e instanceof HTMLImageElement?e.src:this._generalTextureKey(t.__instanceId)),this._textureColorsCache[i]||(this._textureColorsCache[i]=t._generateTextureCacheForColor()),this._textureColorsCache[i]},getAllTextures:function(){var t=[];for(var e in this._textures){var i=this._textures[e];t.push(i)}return t},removeAllTextures:function(){var t=this._textures;for(var e in t)t[e]&&t[e]._releaseTexture();this._textures={}},removeTexture:function(t){if(t){var e=this._textures;for(var i in e)e[i]===t&&(e[i]._releaseTexture(),delete e[i])}},removeTextureForKey:function(t){if("string"==typeof t){var e=this._textures[t];e&&(e._releaseTexture(),delete this._textures[t])}},addImage:function(t,e,i){cc.assertID(t,3103);var s=this._textures,o=s[t];return o?o.loaded?e&&e.call(i,o):o.once("load",(function(){e&&e.call(i,o)}),i):((o=s[t]=new 
n).url=t,cc.loader.load(t,(function(n,s){if(n)return e&&e.call(i,n||new Error("Unknown error"));r.handleLoadedTexture(t),e&&e.call(i,o)}))),o},addImageAsync:null,cacheImage:function(t,e){if(cc.assertID(t,3009),e instanceof n)this._textures[t]=e;else{var i=new n;i.initWithElement(e),i.handleLoadedTexture(),this._textures[t]=i}},dumpCachedTextureInfo:function(){var t=0,e=0,i=this._textures;for(var n in i){var r=i[n];t++,r.getHtmlElementObj()instanceof HTMLImageElement?cc.logID(3005,n,r.getHtmlElementObj().src,r.getPixelWidth(),r.getPixelHeight()):cc.logID(3006,n,r.getPixelWidth(),r.getPixelHeight()),e+=r.getPixelWidth()*r.getPixelHeight()*4}var s=this._textureColorsCache;for(n in s){var o=s[n];for(var a in o){var c=o[a];t++,cc.logID(3006,n,c.width,c.height),e+=c.width*c.height*4}}cc.logID(3007,t,e/1024,(e/1048576).toFixed(2))},_clear:function(){this._textures={},this._textureColorsCache={},this._textureKeySeq=0|1e3*Math.random()},handleLoadedTexture:function(t){var e=this._textures,i=e[t];i||(cc.assertID(t,3009),(i=e[t]=new n).url=t),i.handleLoadedTexture()}};r.addImageAsync=r.addImage,cc.textureCache=e.exports=r}),{"./CCTexture2D":218}],220:[(function(t,e,i){t("./CCTexture2D"),t("./CCTextureCache")}),{"./CCTexture2D":218,"./CCTextureCache":219}],221:[(function(t,e,i){t("../platform/CCSys");var n=/(\.[^\.\/\?\\]*)(\?.*)?$/,r=/((.*)(\/|\\|\\\\))?(.*?\..*$)?/,s=/[^\.\/]+\/\.\.\//;cc.path={join:function(){for(var t=arguments.length,e="",i=0;i0&&(t=t.substring(0,i));var n=/(\/|\\\\)([^(\/|\\\\)]+)$/g.exec(t.replace(/(\/|\\\\)$/,""));if(!n)return null;var r=n[2];return e&&t.substring(t.length-e.length).toLowerCase()===e.toLowerCase()?r.substring(0,r.length-e.length):r},dirname:function(t){var e=r.exec(t);return e?e[2]:""},changeExtname:function(t,e){e=e||"";var i=t.indexOf("?"),n="";return i>0&&(n=t.substring(i),t=t.substring(0,i)),(i=t.lastIndexOf("."))<0?t+e+n:t.substring(0,i)+e+n},changeBasename:function(t,e,i){if(0===e.indexOf("."))return this.changeExtname(t,e);var n=t.indexOf("?"),r="",s=i?this.extname(t):"";return n>0&&(r=t.substring(n),t=t.substring(0,n)),n=(n=t.lastIndexOf("/"))<=0?0:n+1,t.substring(0,n)+e+s+r},_normalize:function(t){var e=t=String(t);do{e=t,t=t.replace(s,"")}while(e.length!==t.length);return t},sep:cc.sys.os===cc.sys.OS_WINDOWS?"\\":"/",stripSep:function(t){return t.replace(/[\/\\]$/,"")}},e.exports=cc.path}),{"../platform/CCSys":187}],222:[(function(t,e,i){var n=t("../../../external/pstats/pstats"),r=t("../platform/CCMacro"),s=document.createElement("div");s.id="fps";var o=null,a=!1;function c(){o("frame").start(),o("logic").start()}function h(){cc.director.isPaused()?o("frame").start():o("logic").end(),o("render").start()}function l(){o("render").end(),o("draws").value=cc.g_NumberOfDraws,o("frame").end(),o("fps").frame(),o().tick()}cc.profiler=e.exports={isShowingStats:function(){return a},hideStats:function(){a&&(s.parentElement===document.body&&document.body.removeChild(s),cc.director.off(cc.Director.EVENT_BEFORE_UPDATE,c),cc.director.off(cc.Director.EVENT_AFTER_VISIT,h),cc.director.off(cc.Director.EVENT_AFTER_DRAW,l),a=!1)},showStats:function(){a||(o||(o=n.new(s,{showGraph:!1,values:{frame:{desc:"Frame time (ms)",min:0,max:50,average:500},fps:{desc:"Framerate (FPS)",below:30,average:500},draws:{desc:"Draw call"},logic:{desc:"Game Logic (ms)",min:0,max:50,average:500,color:"#080"},render:{desc:"Renderer (ms)",min:0,max:50,average:500,color:"#f90"},mode:{desc:cc._renderType===cc.game.RENDER_TYPE_WEBGL?"WebGL":"Canvas",min:1}},css:".pstats {left: 
"+r.DIRECTOR_STATS_POSITION.x+"px; bottom: "+r.DIRECTOR_STATS_POSITION.y+"px;}"})),null===s.parentElement&&document.body.appendChild(s),cc.director.on(cc.Director.EVENT_BEFORE_UPDATE,c),cc.director.on(cc.Director.EVENT_AFTER_VISIT,h),cc.director.on(cc.Director.EVENT_AFTER_DRAW,l),a=!0)}}}),{"../../../external/pstats/pstats":313,"../platform/CCMacro":183}],223:[(function(t,e,i){var n=t("./prefab-helper"),r=t("../platform/CCObject").Flags,s=t("./misc"),o=t("../platform/id-generater"),a=t("../event-manager"),c=cc.js,h=r.Destroying,l=r.DontDestroy,u=r.Deactivating,_=new o("Node");function d(t){return t?"string"==typeof t?c.getClassByName(t):t:(cc.errorID(3804),null)}function f(t,e){if(e._sealed)for(var i=0;i0},set:function(t){t?this._objFlags|=l:this._objFlags&=~l}},name:{get:function(){return this._name},set:function(t){this._name=t}},_id:{default:"",editorOnly:!0},uuid:{get:function(){var t=this._id;return t||(t=this._id=_.getNewId()),t}},children:{get:function(){return this._children}},childrenCount:{get:function(){return this._children.length}},active:{get:function(){return this._active},set:function(t){if(t=!!t,this._active!==t){this._active=t;var e=this._parent;if(e)e._activeInHierarchy&&cc.director._nodeActivator.activateNode(this,t)}}},activeInHierarchy:{get:function(){return this._activeInHierarchy}}},ctor:function(t){this._name=void 0!==t?t:"New Node",this._activeInHierarchy=!1,this.__instanceId=this._id||cc.ClassManager.getNewInstanceId(),this.__eventTargets=[]},getTag:function(){return this._tag},setTag:function(t){this._tag=t},getParent:function(){return this._parent},setParent:function(t){if(this._parent!==t){0;var e=this._parent;if(this._parent=t||null,this._onSetParent(t),t&&(a._setDirtyForNode(this),t._children.push(this),t.emit("child-added",this)),e){if(!(e._objFlags&h)){var i=e._children.indexOf(this);0,e._children.splice(i,1),e.emit("child-removed",this),this._onHierarchyChanged(e)}}else t&&this._onHierarchyChanged(null)}},init:function(){return!0},attr:function(t){c.mixin(this,t)},getChildByTag:function(t){var e=this._children;if(null!==e)for(var i=0;i-1&&((e||void 0===e)&&t.cleanup(),t.parent=null)},removeChildByTag:function(t,e){t===cc.macro.NODE_TAG_INVALID&&cc.logID(1609);var i=this.getChildByTag(t);i?this.removeChild(i,e):cc.logID(1610,t)},removeAllChildren:function(t){var e=this._children;void 0===t&&(t=!0);for(var i=e.length-1;i>=0;i--){var n=e[i];n&&(t&&n.cleanup(),n.parent=null)}this._children.length=0},isChildOf:function(t){var e=this;do{if(e===t)return!0;e=e._parent}while(e);return!1},getComponent:function(t){var e=d(t);return e?f(this,e):null},getComponents:function(t){var e=d(t),i=[];return e&&m(this,e,i),i},getComponentInChildren:function(t){var e=d(t);return e?(function t(e,i){for(var n=0;n0&&(s=t(r._children,i)))return s}return null})(this._children,e):null},getComponentsInChildren:function(t){var e=d(t),i=[];return e&&(m(this,e,i),(function t(e,i,n){for(var r=0;r0&&t(s._children,i,n)}})(this._children,e,i)),i},_checkMultipleComp:!1,addComponent:function(t){var e;if("string"==typeof t){if(!(e=c.getClassByName(t)))return cc.errorID(3807,t),cc._RFpeek()&&cc.errorID(3808,t),null}else{if(!t)return cc.errorID(3804),null;e=t}if("function"!=typeof e)return cc.errorID(3809),null;if(!cc.isChildClassOf(e,cc.Component))return cc.errorID(3810),null;var i=e._requireComponent;if(i&&!this.getComponent(i)&&!this.addComponent(i))return null;var n=new e;return 
n.node=this,this._components.push(n),this._activeInHierarchy&&cc.director._nodeActivator.activateComp(n),n},_addComponentAt:!1,removeComponent:function(t){t?(t instanceof cc.Component||(t=this.getComponent(t)),t&&t.destroy()):cc.errorID(3813)},_getDependComponent:!1,_removeComponent:function(t){if(t){if(!(this._objFlags&h)){var e=this._components.indexOf(t);-1!==e?this._components.splice(e,1):t.node!==this&&cc.errorID(3815)}}else cc.errorID(3814)},_disableChildComps:function(){var t,e=this._components.length;for(t=0;t>>1;i<=r;s=i+r>>>1){var o=t[s];if(o>e+n)r=s-1;else{if(!(o>2],o[a[i++]]=r[(3&s)<<2|c>>4],o[a[i++]]=r[15&c]}return o.join("")}}),{"./misc":228}],226:[(function(t,e,i){cc.find=e.exports=function(t,e){if(null==t)return cc.errorID(5600),null;if(e)0;else{var i=cc.director.getScene();if(!i)return null;e=i}for(var n=e,r="/"!==t[0]?0:1,s=t.split("/"),o=r;o>1,t|=t>>2,t|=t>>4,t|=t>>8,(t|=t>>16)+1},s.imagePool=new n.Pool(function(t){return t instanceof HTMLImageElement&&(t.src=this._smallImg,!0)},10),s.imagePool.get=function(){return this._get()||new Image},s.imagePool._smallImg="data:image/gif;base64,R0lGODlhAQABAAAAACwAAAAAAQABAAA=",r.os!==r.OS_WINDOWS&&r.os!==r.OS_LINUX||r.browserType===r.BROWSER_TYPE_CHROME||s.imagePool.resize(0),s.BUILTIN_CLASSID_RE=/^(?:cc|dragonBones|sp|ccsg)\..+/;for(var o=new Array(123),a=0;a<123;++a)o[a]=64;for(var c=0;c<64;++c)o["ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=".charCodeAt(c)]=c;s.BASE64_VALUES=o,s.pushToMap=function(t,e,i,n){var r=t[e];r?Array.isArray(r)?n?(r.push(r[0]),r[0]=i):r.push(i):t[e]=n?[i,r]:[r,i]:t[e]=i}}),{"../platform/CCSys":187,"../platform/js":199}],229:[(function(t,e,i){function n(t){this.i=0,this.array=t}var r=n.prototype;r.remove=function(t){var e=this.array.indexOf(t);e>=0&&this.removeAt(e)},r.removeAt=function(t){this.array.splice(t,1),t<=this.i&&--this.i},r.fastRemove=function(t){var e=this.array.indexOf(t);e>=0&&this.fastRemoveAt(e)},r.fastRemoveAt=function(t){var e=this.array;e[t]=e[e.length-1],--e.length,t<=this.i&&--this.i},r.push=function(t){this.array.push(t)},e.exports=n}),{}],230:[(function(t,e,i){cc._PrefabInfo=cc.Class({name:"cc.PrefabInfo",properties:{root:null,asset:null,fileId:"",sync:!1,_synced:{default:!1,serializable:!1}}}),e.exports={syncWithPrefab:function(t){var e=t._prefab;if(e._synced=!0,!e.asset)return cc.errorID(3701,t.name),void(t._prefab=null);var i=t._objFlags,n=t._parent,r=t._id,s=t._name,o=t._active,a=t._position.x,c=t._position.y,h=t._rotationX,l=t._rotationY,u=t._localZOrder,_=t._globalZOrder;cc.game._isCloning=!0,e.asset._doInstantiate(t),cc.game._isCloning=!1,t._objFlags=i,t._parent=n,t._id=r,t._prefab=e,t._name=s,t._active=o,t._position.x=a,t._position.y=c,t._rotationX=h,t._rotationY=l,t._localZOrder=u,t._globalZOrder=_}}}),{}],231:[(function(t,e,i){var n={removeSgNode:function(){var t=this._sgNode;if(t){var e=t._parent;e?e.removeChild(t):t.performRecursive(_ccsg.Node.performType.cleanup),t._entity&&(t._entity=null)}}};e.exports=n}),{}],232:[(function(t,e,i){cc.AffineTransform=function(t,e,i,n,r,s){this.a=t,this.b=e,this.c=i,this.d=n,this.tx=r,this.ty=s},cc.affineTransformMake=function(t,e,i,n,r,s){return{a:t,b:e,c:i,d:n,tx:r,ty:s}},cc.affineTransformClone=function(t){return{a:t.a,b:t.b,c:t.c,d:t.d,tx:t.tx,ty:t.ty}},cc.pointApplyAffineTransform=function(t,e,i){var n,r;return void 0===i?(i=e,n=t.x,r=t.y):(n=t,r=e),{x:i.a*n+i.c*r+i.tx,y:i.b*n+i.d*r+i.ty}},cc._pointApplyAffineTransformIn=function(t,e,i,n){var r,s,o;void 
0===n?(o=e,r=t.x,s=t.y,n=i):(r=t,s=e,o=i),n.x=o.a*r+o.c*s+o.tx,n.y=o.b*r+o.d*s+o.ty},cc._pointApplyAffineTransform=function(t,e,i){return cc.pointApplyAffineTransform(t,e,i)},cc.sizeApplyAffineTransform=function(t,e){return{width:e.a*t.width+e.c*t.height,height:e.b*t.width+e.d*t.height}},cc.affineTransformMakeIdentity=function(){return{a:1,b:0,c:0,d:1,tx:0,ty:0}},cc.affineTransformIdentity=function(){return{a:1,b:0,c:0,d:1,tx:0,ty:0}},cc.rectApplyAffineTransform=function(t,e){var i=t.x,n=t.y,r=i+t.width,s=n+t.height,o=e.a*i+e.c*n+e.tx,a=e.b*i+e.d*n+e.ty,c=e.a*r+e.c*n+e.tx,h=e.b*r+e.d*n+e.ty,l=e.a*i+e.c*s+e.tx,u=e.b*i+e.d*s+e.ty,_=e.a*r+e.c*s+e.tx,d=e.b*r+e.d*s+e.ty,f=Math.min(o,c,l,_),m=Math.max(o,c,l,_),p=Math.min(a,h,u,d),g=Math.max(a,h,u,d);return cc.rect(f,p,m-f,g-p)},cc._rectApplyAffineTransformIn=function(t,e){var i=t.x,n=t.y,r=i+t.width,s=n+t.height,o=e.a*i+e.c*n+e.tx,a=e.b*i+e.d*n+e.ty,c=e.a*r+e.c*n+e.tx,h=e.b*r+e.d*n+e.ty,l=e.a*i+e.c*s+e.tx,u=e.b*i+e.d*s+e.ty,_=e.a*r+e.c*s+e.tx,d=e.b*r+e.d*s+e.ty,f=Math.min(o,c,l,_),m=Math.max(o,c,l,_),p=Math.min(a,h,u,d),g=Math.max(a,h,u,d);return t.x=f,t.y=p,t.width=m-f,t.height=g-p,t},cc.obbApplyAffineTransform=function(t,e,i,n,r,s){var o=t.x,a=t.y,c=t.width,h=t.height,l=e.a*o+e.c*a+e.tx,u=e.b*o+e.d*a+e.ty,_=e.a*c,d=e.b*c,f=e.c*h,m=e.d*h;n.x=l,n.y=u,r.x=_+l,r.y=d+u,i.x=f+l,i.y=m+u,s.x=_+f+l,s.y=d+m+u},cc.affineTransformTranslate=function(t,e,i){return{a:t.a,b:t.b,c:t.c,d:t.d,tx:t.tx+t.a*e+t.c*i,ty:t.ty+t.b*e+t.d*i}},cc.affineTransformScale=function(t,e,i){return{a:t.a*e,b:t.b*e,c:t.c*i,d:t.d*i,tx:t.tx,ty:t.ty}},cc.affineTransformRotate=function(t,e){var i=Math.sin(e),n=Math.cos(e);return{a:t.a*n+t.c*i,b:t.b*n+t.d*i,c:t.c*n-t.a*i,d:t.d*n-t.b*i,tx:t.tx,ty:t.ty}},cc.affineTransformConcat=function(t,e){return{a:t.a*e.a+t.b*e.c,b:t.a*e.b+t.b*e.d,c:t.c*e.a+t.d*e.c,d:t.c*e.b+t.d*e.d,tx:t.tx*e.a+t.ty*e.c+e.tx,ty:t.tx*e.b+t.ty*e.d+e.ty}},cc.affineTransformConcatIn=function(t,e){var i=t.a,n=t.b,r=t.c,s=t.d,o=t.tx,a=t.ty;return t.a=i*e.a+n*e.c,t.b=i*e.b+n*e.d,t.c=r*e.a+s*e.c,t.d=r*e.b+s*e.d,t.tx=o*e.a+a*e.c+e.tx,t.ty=o*e.b+a*e.d+e.ty,t},cc.affineTransformEqualToTransform=function(t,e){return t.a===e.a&&t.b===e.b&&t.c===e.c&&t.d===e.d&&t.tx===e.tx&&t.ty===e.ty},cc.affineTransformInvert=function(t){var e=1/(t.a*t.d-t.b*t.c);return{a:e*t.d,b:-e*t.b,c:-e*t.c,d:e*t.a,tx:e*(t.c*t.ty-t.d*t.tx),ty:e*(t.b*t.tx-t.a*t.ty)}},cc.affineTransformInvertIn=function(t){var e=t.a,i=t.b,n=t.c,r=t.d,s=1/(e*r-i*n),o=t.tx,a=t.ty;return t.a=s*r,t.b=-s*i,t.c=-s*n,t.d=s*e,t.tx=s*(n*a-r*o),t.ty=s*(i*o-e*a),t},cc.affineTransformInvertOut=function(t,e){var i=t.a,n=t.b,r=t.c,s=t.d,o=t.tx,a=t.ty,c=1/(i*s-n*r);e.a=c*s,e.b=-c*n,e.c=-c*r,e.d=c*i,e.tx=c*(r*a-s*o),e.ty=c*(n*o-i*a)}}),{}],233:[(function(t,e,i){var n=t("./CCValueType"),r=t("../platform/js"),s=(function(){function e(t,e,i,n){"object"==typeof t&&(e=t.g,i=t.b,n=t.a,t=t.r),t=t||0,e=e||0,i=i||0,n="number"==typeof n?n:255,this._val=(~~t<<24>>>0)+(~~e<<16)+(~~i<<8)+~~n}r.extend(e,n),t("../platform/CCClass").fastDefine("cc.Color",e,{r:0,g:0,b:0,a:255});var i={WHITE:[255,255,255,255],BLACK:[0,0,0,255],TRANSPARENT:[0,0,0,0],GRAY:[127.5,127.5,127.5],RED:[255,0,0],GREEN:[0,255,0],BLUE:[0,0,255],YELLOW:[255,235,4],ORANGE:[255,127,0],CYAN:[0,255,255],MAGENTA:[255,0,255]};for(var s in i)r.get(e,s,(function(t){return function(){return new e(t[0],t[1],t[2],t[3])}})(i[s]));var o=e.prototype;return o.clone=function(){var t=new e;return t._val=this._val,t},o.equals=function(t){return t&&this._val===t._val},o.lerp=function(t,i,n){n=n||new e;var 
r=this.r,s=this.g,o=this.b,a=this.a;return n.r=r+(t.r-r)*i,n.g=s+(t.g-s)*i,n.b=o+(t.b-o)*i,n.a=a+(t.a-a)*i,n},o.toString=function(){return"rgba("+this.r.toFixed()+", "+this.g.toFixed()+", "+this.b.toFixed()+", "+this.a.toFixed()+")"},o.getR=function(){return(4278190080&this._val)>>>24},o.setR=function(t){return this._val=(16777215&this._val|~~t<<24>>>0)>>>0,this},o.getG=function(){return(16711680&this._val)>>16},o.setG=function(t){return this._val=(4278255615&this._val|~~t<<16)>>>0,this},o.getB=function(){return(65280&this._val)>>8},o.setB=function(t){return this._val=(4294902015&this._val|~~t<<8)>>>0,this},o.getA=function(){return 255&this._val},o.setA=function(t){return this._val=(4294967040&this._val|~~t)>>>0,this},r.getset(o,"r",o.getR,o.setR,!0),r.getset(o,"g",o.getG,o.setG,!0),r.getset(o,"b",o.getB,o.setB,!0),r.getset(o,"a",o.getA,o.setA,!0),o.toCSS=function(t){return"rgba"===t?"rgba("+(0|this.r)+","+(0|this.g)+","+(0|this.b)+","+(this.a/255).toFixed(2)+")":"rgb"===t?"rgb("+(0|this.r)+","+(0|this.g)+","+(0|this.b)+")":"#"+this.toHEX(t)},o.clamp=function(){},o.fromHEX=function(t){t.length<8&&(t+="FF");var e=parseInt(t.indexOf("#")>-1?t.substring(1):t,16);return this._val=(0&this._val|e)>>>0,this},o.toHEX=function(t){var e=[(0|this.r).toString(16),(0|this.g).toString(16),(0|this.b).toString(16)],i=-1;if("#rgb"===t)for(i=0;i1&&(e[i]=e[i][0]);else if("#rrggbb"===t)for(i=0;i>>0)+(r.g<<16)+(r.b<<8)+this.a,this},o.toHSV=function(){return e.rgb2hsv(this.r,this.g,this.b)},o.fromColor=function(t){t._val?this._val=t._val:(this.r=t.r,this.g=t.g,this.b=t.b,this.a=t.a)},e})();s.rgb2hsv=function(t,e,i){t/=255,e/=255,i/=255;var n={h:0,s:0,v:0},r=Math.max(t,e,i),s=Math.min(t,e,i),o=0;return n.v=r,n.s=r?(r-s)/r:0,n.s?(o=r-s,n.h=t===r?(e-i)/o:e===r?2+(i-t)/o:4+(t-e)/o,n.h/=6,n.h<0&&(n.h+=1)):n.h=0,n},s.hsv2rgb=function(t,e,i){var n={r:0,g:0,b:0};if(0===e)n.r=n.g=n.b=i;else if(0===i)n.r=n.g=n.b=0;else{1===t&&(t=0),t*=6,e=e,i=i;var r=Math.floor(t),s=t-r,o=i*(1-e),a=i*(1-e*s),c=i*(1-e*(1-s));switch(r){case 0:n.r=i,n.g=c,n.b=o;break;case 1:n.r=a,n.g=i,n.b=o;break;case 2:n.r=o,n.g=i,n.b=c;break;case 3:n.r=o,n.g=a,n.b=i;break;case 4:n.r=c,n.g=o,n.b=i;break;case 5:n.r=i,n.g=o,n.b=a}}return n.r*=255,n.g*=255,n.b*=255,n},cc.Color=s,cc.color=function(t,e,i,n){return"string"==typeof t?(new cc.Color).fromHEX(t):"object"==typeof t?new cc.Color(t.r,t.g,t.b,t.a):new cc.Color(t,e,i,n)},cc.colorEqual=function(t,e){return void 0!==t._val&&void 0!==e._val?t._val===e._val:t.r===e.r&&t.g===e.g&&t.b===e.b},cc.hexToColor=function(t){t=t.replace(/^#?/,"0x");var e=parseInt(t),i=e>>16,n=(65280&e)>>8,r=255&e;return cc.color(i,n,r)},cc.colorToHex=function(t){var e=t.r.toString(16),i=t.g.toString(16),n=t.b.toString(16);return"#"+(t.r<16?"0"+e:e)+(t.g<16?"0"+i:i)+(t.b<16?"0"+n:n)},e.exports=cc.Color}),{"../platform/CCClass":178,"../platform/js":199,"./CCValueType":239}],234:[(function(t,e,i){var n=parseFloat("1.192092896e-07F");cc.pNeg=function(t){return cc.p(-t.x,-t.y)},cc.pAdd=function(t,e){return cc.p(t.x+e.x,t.y+e.y)},cc.pSub=function(t,e){return cc.p(t.x-e.x,t.y-e.y)},cc.pMult=function(t,e){return cc.p(t.x*e,t.y*e)},cc.pMidpoint=function(t,e){return cc.pMult(cc.pAdd(t,e),.5)},cc.pDot=function(t,e){return t.x*e.x+t.y*e.y},cc.pCross=function(t,e){return t.x*e.y-t.y*e.x},cc.pPerp=function(t){return cc.p(-t.y,t.x)},cc.pRPerp=function(t){return cc.p(t.y,-t.x)},cc.pProject=function(t,e){return cc.pMult(e,cc.pDot(t,e)/cc.pDot(e,e))},cc.pLengthSQ=function(t){return cc.pDot(t,t)},cc.pDistanceSQ=function(t,e){return 
cc.pLengthSQ(cc.pSub(t,e))},cc.pLength=function(t){return Math.sqrt(cc.pLengthSQ(t))},cc.pDistance=function(t,e){return cc.pLength(cc.pSub(t,e))},cc.pNormalize=function(t){var e=cc.pLength(t);return 0===e?cc.p(t):cc.pMult(t,1/e)},cc.pForAngle=function(t){return cc.p(Math.cos(t),Math.sin(t))},cc.pToAngle=function(t){return Math.atan2(t.y,t.x)},cc.clampf=function(t,e,i){if(e>i){var n=e;e=i,i=n}return t=0&&r.x<=1&&r.y>=0&&r.y<=1)},cc.pIntersectPoint=function(t,e,i,n){var r=cc.p(0,0);if(cc.pLineIntersect(t,e,i,n,r)){var s=cc.p(0,0);return s.x=t.x+r.x*(e.x-t.x),s.y=t.y+r.x*(e.y-t.y),s}return cc.p(0,0)},cc.pSameAs=function(t,e){return null!=t&&null!=e&&(t.x===e.x&&t.y===e.y)},cc.pZeroIn=function(t){t.x=0,t.y=0},cc.pIn=function(t,e){t.x=e.x,t.y=e.y},cc.pMultIn=function(t,e){t.x*=e,t.y*=e},cc.pSubIn=function(t,e){t.x-=e.x,t.y-=e.y},cc.pAddIn=function(t,e){t.x+=e.x,t.y+=e.y},cc.pNormalizeIn=function(t){cc.pMultIn(t,1/Math.sqrt(t.x*t.x+t.y*t.y))}}),{}],235:[(function(t,e,i){var n=t("./CCValueType"),r=t("../platform/js");function s(t,e,i,n){t&&"object"==typeof t&&(e=t.y,i=t.width,n=t.height,t=t.x),this.x=t||0,this.y=e||0,this.width=i||0,this.height=n||0}r.extend(s,n),t("../platform/CCClass").fastDefine("cc.Rect",s,{x:0,y:0,width:0,height:0}),s.fromMinMax=function(t,e){var i=Math.min(t.x,e.x),n=Math.min(t.y,e.y);return new s(i,n,Math.max(t.x,e.x)-i,Math.max(t.y,e.y)-n)},s.contain=function(t,e){return t.xe.x+e.width&&t.ye.y+e.height?1:e.xt.x+t.width&&e.yt.y+t.height?-1:0};var o=s.prototype;o.clone=function(){return new s(this.x,this.y,this.width,this.height)},o.equals=function(t){return t&&this.x===t.x&&this.y===t.y&&this.width===t.width&&this.height===t.height},o.lerp=function(t,e,i){i=i||new s;var n=this.x,r=this.y,o=this.width,a=this.height;return i.x=n+(t.x-n)*e,i.y=r+(t.y-r)*e,i.width=o+(t.width-o)*e,i.height=a+(t.height-a)*e,i},o.toString=function(){return"("+this.x.toFixed(2)+", "+this.y.toFixed(2)+", "+this.width.toFixed(2)+", "+this.height.toFixed(2)+")"},r.getset(o,"xMin",(function(){return this.x}),(function(t){this.width+=this.x-t,this.x=t})),r.getset(o,"yMin",(function(){return this.y}),(function(t){this.height+=this.y-t,this.y=t})),r.getset(o,"xMax",(function(){return this.x+this.width}),(function(t){this.width=t-this.x})),r.getset(o,"yMax",(function(){return this.y+this.height}),(function(t){this.height=t-this.y})),r.getset(o,"center",(function(){return new cc.Vec2(this.x+.5*this.width,this.y+.5*this.height)}),(function(t){this.x=t.x-.5*this.width,this.y=t.y-.5*this.height})),r.getset(o,"origin",(function(){return new cc.Vec2(this.x,this.y)}),(function(t){this.x=t.x,this.y=t.y})),r.getset(o,"size",(function(){return new cc.Size(this.width,this.height)}),(function(t){this.width=t.width,this.height=t.height})),o.intersects=function(t){return cc.rectIntersectsRect(this,t)},o.contains=function(t){return this.x<=t.x&&this.x+this.width>=t.x&&this.y<=t.y&&this.y+this.height>=t.y},o.containsRect=function(t){return this.x<=t.x&&this.x+this.width>=t.x+t.width&&this.y<=t.y&&this.y+this.height>=t.y+t.height},cc.Rect=s,cc.rect=function(t,e,i,n){return new s(t,e,i,n)},cc.rectEqualToRect=function(t,e){return t&&e&&t.x===e.x&&t.y===e.y&&t.width===e.width&&t.height===e.height},cc._rectEqualToZero=function(t){return t&&0===t.x&&0===t.y&&0===t.width&&0===t.height},cc.rectContainsRect=function(t,e){return!(!t||!e)&&!(t.x>=e.x||t.y>=e.y||t.x+t.width<=e.x+e.width||t.y+t.height<=e.y+e.height)},cc.rectGetMaxX=function(t){return t.x+t.width},cc.rectGetMidX=function(t){return 
t.x+t.width/2},cc.rectGetMinX=function(t){return t.x},cc.rectGetMaxY=function(t){return t.y+t.height},cc.rectGetMidY=function(t){return t.y+t.height/2},cc.rectGetMinY=function(t){return t.y},cc.rectContainsPoint=function(t,e){return e.x>=cc.rectGetMinX(t)&&e.x<=cc.rectGetMaxX(t)&&e.y>=cc.rectGetMinY(t)&&e.y<=cc.rectGetMaxY(t)},cc.rectIntersectsRect=function(t,e){var i=t.x+t.width,n=t.y+t.height,r=e.x+e.width,s=e.y+e.height;return!(i=this.min.x&&t.x<=this.max.x&&t.y>=this.min.y&&t.y<=this.max.y&&t.z>=this.min.z&&t.z<=this.max.z},cc.math.AABB.containsPoint=function(t,e){return t.x>=e.min.x&&t.x<=e.max.x&&t.y>=e.min.y&&t.y<=e.max.y&&t.z>=e.min.z&&t.z<=e.max.z},cc.math.AABB.prototype.assignFrom=function(t){this.min.assignFrom(t.min),this.max.assignFrom(t.max)},cc.math.AABB.assign=function(t,e){return t.min.assignFrom(e.min),t.max.assignFrom(e.max),t}}),{}],247:[(function(t,e,i){cc.math.Matrix4Stack=function(t,e){this.top=t,this.stack=e||[],this.lastUpdated=0};var n=cc.math.Matrix4Stack.prototype;n.initialize=function(){this.stack.length=0,this.top=null},n.push=function(t){t=t||this.top,this.stack.push(this.top),this.top=new cc.math.Matrix4(t),this.update()},n.pop=function(){this.top=this.stack.pop(),this.update()},n.update=function(){this.lastUpdated++},n.release=function(){this.stack=null,this.top=null,this._matrixPool=null},n._getFromPool=function(t){var e=this._matrixPool;if(0===e.length)return new cc.math.Matrix4(t);var i=e.pop();return i.assignFrom(t),i},n._putInPool=function(t){this._matrixPool.push(t)}}),{}],248:[(function(t,e,i){var n=cc.math;n.KM_GL_MODELVIEW=5888,n.KM_GL_PROJECTION=5889,n.KM_GL_TEXTURE=5890,n.modelview_matrix_stack=new n.Matrix4Stack,n.projection_matrix_stack=new n.Matrix4Stack,n.texture_matrix_stack=new n.Matrix4Stack,cc.current_stack=null;var r=!1;(function(){if(!r){var t=new n.Matrix4;n.modelview_matrix_stack.initialize(),n.projection_matrix_stack.initialize(),n.texture_matrix_stack.initialize(),cc.current_stack=n.modelview_matrix_stack,r=!0,t.identity(),n.modelview_matrix_stack.push(t),n.projection_matrix_stack.push(t),n.texture_matrix_stack.push(t)}})(),n.glFreeAll=function(){n.modelview_matrix_stack.release(),n.modelview_matrix_stack=null,n.projection_matrix_stack.release(),n.projection_matrix_stack=null,n.texture_matrix_stack.release(),n.texture_matrix_stack=null,r=!1,cc.current_stack=null},n.glPushMatrix=function(){cc.current_stack.push(cc.current_stack.top),cc.current_stack.update()},n.glPushMatrixWitMat4=function(t){cc.current_stack.stack.push(cc.current_stack.top),t.assignFrom(cc.current_stack.top),cc.current_stack.top=t,cc.current_stack.update()},n.glPopMatrix=function(){cc.current_stack.top=cc.current_stack.stack.pop(),cc.current_stack.update()},n.glMatrixMode=function(t){switch(t){case n.KM_GL_MODELVIEW:cc.current_stack=n.modelview_matrix_stack;break;case n.KM_GL_PROJECTION:cc.current_stack=n.projection_matrix_stack;break;case n.KM_GL_TEXTURE:cc.current_stack=n.texture_matrix_stack;break;default:throw new Error(cc._getError(7908))}},n.glLoadIdentity=function(){cc.current_stack.top.identity(),cc.current_stack.update()},n.glLoadMatrix=function(t){cc.current_stack.top.assignFrom(t),cc.current_stack.update()},n.glMultMatrix=function(t){cc.current_stack.top.multiply(t),cc.current_stack.update()};var s=new n.Matrix4;n.glTranslatef=function(t,e,i){var r=n.Matrix4.createByTranslation(t,e,i,s);cc.current_stack.top.multiply(r),cc.current_stack.update()};var o=new n.Vec3;n.glRotatef=function(t,e,i,r){o.fill(e,i,r);var 
a=n.Matrix4.createByAxisAndAngle(o,cc.degreesToRadians(t),s);cc.current_stack.top.multiply(a),cc.current_stack.update()},n.glScalef=function(t,e,i){var r=n.Matrix4.createByScale(t,e,i,s);cc.current_stack.top.multiply(r),cc.current_stack.update()},n.glGetMatrix=function(t,e){switch(t){case n.KM_GL_MODELVIEW:e.assignFrom(n.modelview_matrix_stack.top);break;case n.KM_GL_PROJECTION:e.assignFrom(n.projection_matrix_stack.top);break;case n.KM_GL_TEXTURE:e.assignFrom(n.texture_matrix_stack.top);break;default:throw new Error(cc._getError(7908))}}}),{}],249:[(function(t,e,i){t("./utility"),t("./vec2"),t("./vec3"),t("./vec4"),t("./ray2"),t("./mat3"),t("./mat4"),t("./plane"),t("./quaternion"),t("./aabb"),t("./gl/mat4stack"),t("./gl/matrix")}),{"./aabb":246,"./gl/mat4stack":247,"./gl/matrix":248,"./mat3":250,"./mat4":251,"./plane":252,"./quaternion":253,"./ray2":254,"./utility":255,"./vec2":256,"./vec3":257,"./vec4":258}],250:[(function(t,e,i){window.Uint16Array=window.Uint16Array||window.Array,window.Float32Array=window.Float32Array||window.Array,cc.math.Matrix3=function(t){t&&t.mat?this.mat=new Float32Array(t.mat):this.mat=new Float32Array(9)};var n=cc.math.Matrix3.prototype;n.fill=function(t){var e=this.mat,i=t.mat;return e[0]=i[0],e[1]=i[1],e[2]=i[2],e[3]=i[3],e[4]=i[4],e[5]=i[5],e[6]=i[6],e[7]=i[7],e[8]=i[8],this},n.adjugate=function(){var t=this.mat,e=t[0],i=t[1],n=t[2],r=t[3],s=t[4],o=t[5],a=t[6],c=t[7],h=t[8];return t[0]=s*h-o*c,t[1]=n*c-i*h,t[2]=i*o-n*s,t[3]=o*a-r*h,t[4]=e*h-n*a,t[5]=n*r-e*o,t[6]=r*c-s*a,t[8]=e*s-i*r,this},n.identity=function(){var t=this.mat;return t[1]=t[2]=t[3]=t[5]=t[6]=t[7]=0,t[0]=t[4]=t[8]=1,this};var r=new cc.math.Matrix3;n.inverse=function(t){if(0===t)return this;r.assignFrom(this);var e=1/t;return this.adjugate(),this.multiplyScalar(e),this},n.isIdentity=function(){var t=this.mat;return 1===t[0]&&0===t[1]&&0===t[2]&&0===t[3]&&1===t[4]&&0===t[5]&&0===t[6]&&0===t[7]&&1===t[8]},n.transpose=function(){var t=this.mat,e=t[1],i=t[2],n=t[3],r=t[5],s=t[6],o=t[7];return t[1]=n,t[2]=s,t[3]=e,t[5]=o,t[6]=i,t[7]=r,this},n.determinant=function(){var t=this.mat,e=t[0]*t[4]*t[8]+t[1]*t[5]*t[6]+t[2]*t[3]*t[7];return e-=t[2]*t[4]*t[6]+t[0]*t[5]*t[7]+t[1]*t[3]*t[8]},n.multiply=function(t){var e=this.mat,i=t.mat,n=e[0],r=e[1],s=e[2],o=e[3],a=e[4],c=e[5],h=e[6],l=e[7],u=e[8],_=i[0],d=i[1],f=i[2],m=i[3],p=i[4],g=i[5],y=i[6],v=i[7],x=i[8];return e[0]=n*_+o*d+h*f,e[1]=r*_+a*d+l*f,e[2]=s*_+c*d+u*f,e[3]=s*_+c*d+u*f,e[4]=r*m+a*p+l*g,e[5]=s*m+c*p+u*g,e[6]=n*y+o*v+h*x,e[7]=r*y+a*v+l*x,e[8]=s*y+c*v+u*x,this},n.multiplyScalar=function(t){var e=this.mat;return e[0]*=t,e[1]*=t,e[2]*=t,e[3]*=t,e[4]*=t,e[5]*=t,e[6]*=t,e[7]*=t,e[8]*=t,this},cc.math.Matrix3.rotationAxisAngle=function(t,e){var i=Math.cos(e),n=Math.sin(e),r=new cc.math.Matrix3,s=r.mat;return s[0]=i+t.x*t.x*(1-i),s[1]=t.z*n+t.y*t.x*(1-i),s[2]=-t.y*n+t.z*t.x*(1-i),s[3]=-t.z*n+t.x*t.y*(1-i),s[4]=i+t.y*t.y*(1-i),s[5]=t.x*n+t.z*t.y*(1-i),s[6]=t.y*n+t.x*t.z*(1-i),s[7]=-t.x*n+t.y*t.z*(1-i),s[8]=i+t.z*t.z*(1-i),r},n.assignFrom=function(t){if(this===t)return cc.logID(7900),this;var e=this.mat,i=t.mat;return e[0]=i[0],e[1]=i[1],e[2]=i[2],e[3]=i[3],e[4]=i[4],e[5]=i[5],e[6]=i[6],e[7]=i[7],e[8]=i[8],this},n.equals=function(t){if(this===t)return!0;for(var 
e=cc.math.EPSILON,i=this.mat,n=t.mat,r=0;r<9;++r)if(!(i[r]+e>n[r]&&i[r]-e=c&&(c=a,_=n,u=r);if(++m[u],_!==u){for(s=0;s<4;s++)t.swap(_,s,u,s);for(s=0;s<4;s++)e.swap(_,s,u,s)}if(f[i]=_,d[i]=u,0===t.get(u,u))return!1;for(l=1/t.get(u,u),t.set(u,u,1),s=0;s<4;s++)t.set(u,s,t.get(u,s)*l);for(s=0;s<4;s++)e.set(u,s,e.get(u,s)*l);for(o=0;o<4;o++)if(o!==u){for(h=t.get(o,u),t.set(o,u,0),s=0;s<4;s++)t.set(o,s,t.get(o,s)-t.get(u,s)*h);for(s=0;s<4;s++)e.set(o,s,t.get(o,s)-e.get(u,s)*h)}}for(s=3;s>=0;s--)if(f[s]!==d[s])for(r=0;r<4;r++)t.swap(r,f[s],r,d[s]);return!0};var r=(new cc.math.Matrix4).identity();cc.math.mat4Inverse=function(t,e){var i=new cc.math.Matrix4(e),n=new cc.math.Matrix4(r);return!1===cc.math.Matrix4._gaussj(i,n)?null:(t.assignFrom(i),t)},n.inverse=function(){var t=new cc.math.Matrix4(this),e=new cc.math.Matrix4(r);return!1===cc.math.Matrix4._gaussj(t,e)?null:t},n.isIdentity=function(){var t=this.mat;return 1===t[0]&&0===t[1]&&0===t[2]&&0===t[3]&&0===t[4]&&1===t[5]&&0===t[6]&&0===t[7]&&0===t[8]&&0===t[9]&&1===t[10]&&0===t[11]&&0===t[12]&&0===t[13]&&0===t[14]&&1===t[15]},n.transpose=function(){var t=this.mat,e=t[1],i=t[2],n=t[3],r=t[4],s=t[6],o=t[7],a=t[8],c=t[9],h=t[11],l=t[12],u=t[13],_=t[14];return t[1]=r,t[2]=a,t[3]=l,t[4]=e,t[6]=c,t[7]=u,t[8]=i,t[9]=s,t[11]=_,t[12]=n,t[13]=o,t[14]=h,this},cc.math.mat4Multiply=function(t,e,i){var n=t.mat,r=e.mat,s=i.mat,o=r[0],a=r[1],c=r[2],h=r[3],l=r[4],u=r[5],_=r[6],d=r[7],f=r[8],m=r[9],p=r[10],g=r[11],y=r[12],v=r[13],x=r[14],C=r[15],T=s[0],A=s[1],b=s[2],S=s[3],E=s[4],w=s[5],I=s[6],R=s[7],P=s[8],O=s[9],D=s[10],B=s[11],L=s[12],M=s[13],N=s[14],F=s[15];return n[0]=T*o+A*l+b*f+S*y,n[1]=T*a+A*u+b*m+S*v,n[2]=T*c+A*_+b*p+S*x,n[3]=T*h+A*d+b*g+S*C,n[4]=E*o+w*l+I*f+R*y,n[5]=E*a+w*u+I*m+R*v,n[6]=E*c+w*_+I*p+R*x,n[7]=E*h+w*d+I*g+R*C,n[8]=P*o+O*l+D*f+B*y,n[9]=P*a+O*u+D*m+B*v,n[10]=P*c+O*_+D*p+B*x,n[11]=P*h+O*d+D*g+B*C,n[12]=L*o+M*l+N*f+F*y,n[13]=L*a+M*u+N*m+F*v,n[14]=L*c+M*_+N*p+F*x,n[15]=L*h+M*d+N*g+F*C,t},n.multiply=function(t){var e=this.mat,i=t.mat,n=e[0],r=e[1],s=e[2],o=e[3],a=e[4],c=e[5],h=e[6],l=e[7],u=e[8],_=e[9],d=e[10],f=e[11],m=e[12],p=e[13],g=e[14],y=e[15],v=i[0],x=i[1],C=i[2],T=i[3],A=i[4],b=i[5],S=i[6],E=i[7],w=i[8],I=i[9],R=i[10],P=i[11],O=i[12],D=i[13],B=i[14],L=i[15];return e[0]=v*n+x*a+C*u+T*m,e[1]=v*r+x*c+C*_+T*p,e[2]=v*s+x*h+C*d+T*g,e[3]=v*o+x*l+C*f+T*y,e[4]=A*n+b*a+S*u+E*m,e[5]=A*r+b*c+S*_+E*p,e[6]=A*s+b*h+S*d+E*g,e[7]=A*o+b*l+S*f+E*y,e[8]=w*n+I*a+R*u+P*m,e[9]=w*r+I*c+R*_+P*p,e[10]=w*s+I*h+R*d+P*g,e[11]=w*o+I*l+R*f+P*y,e[12]=O*n+D*a+B*u+L*m,e[13]=O*r+D*c+B*_+L*p,e[14]=O*s+D*h+B*d+L*g,e[15]=O*o+D*l+B*f+L*y,this},cc.math.getMat4MultiplyValue=function(t,e){var i=t.mat,n=e.mat,r=new Float32Array(16);return r[0]=i[0]*n[0]+i[4]*n[1]+i[8]*n[2]+i[12]*n[3],r[1]=i[1]*n[0]+i[5]*n[1]+i[9]*n[2]+i[13]*n[3],r[2]=i[2]*n[0]+i[6]*n[1]+i[10]*n[2]+i[14]*n[3],r[3]=i[3]*n[0]+i[7]*n[1]+i[11]*n[2]+i[15]*n[3],r[4]=i[0]*n[4]+i[4]*n[5]+i[8]*n[6]+i[12]*n[7],r[5]=i[1]*n[4]+i[5]*n[5]+i[9]*n[6]+i[13]*n[7],r[6]=i[2]*n[4]+i[6]*n[5]+i[10]*n[6]+i[14]*n[7],r[7]=i[3]*n[4]+i[7]*n[5]+i[11]*n[6]+i[15]*n[7],r[8]=i[0]*n[8]+i[4]*n[9]+i[8]*n[10]+i[12]*n[11],r[9]=i[1]*n[8]+i[5]*n[9]+i[9]*n[10]+i[13]*n[11],r[10]=i[2]*n[8]+i[6]*n[9]+i[10]*n[10]+i[14]*n[11],r[11]=i[3]*n[8]+i[7]*n[9]+i[11]*n[10]+i[15]*n[11],r[12]=i[0]*n[12]+i[4]*n[13]+i[8]*n[14]+i[12]*n[15],r[13]=i[1]*n[12]+i[5]*n[13]+i[9]*n[14]+i[13]*n[15],r[14]=i[2]*n[12]+i[6]*n[13]+i[10]*n[14]+i[14]*n[15],r[15]=i[3]*n[12]+i[7]*n[13]+i[11]*n[14]+i[15]*n[15],r},cc.math.mat4Assign=function(t,e){if(t===e)return cc.logID(7901),t;var 
i=t.mat,n=e.mat;return i[0]=n[0],i[1]=n[1],i[2]=n[2],i[3]=n[3],i[4]=n[4],i[5]=n[5],i[6]=n[6],i[7]=n[7],i[8]=n[8],i[9]=n[9],i[10]=n[10],i[11]=n[11],i[12]=n[12],i[13]=n[13],i[14]=n[14],i[15]=n[15],t},n.assignFrom=function(t){if(this===t)return cc.logID(7902),this;var e=this.mat,i=t.mat;return e[0]=i[0],e[1]=i[1],e[2]=i[2],e[3]=i[3],e[4]=i[4],e[5]=i[5],e[6]=i[6],e[7]=i[7],e[8]=i[8],e[9]=i[9],e[10]=i[10],e[11]=i[11],e[12]=i[12],e[13]=i[13],e[14]=i[14],e[15]=i[15],this},n.equals=function(t){if(this===t)return cc.logID(7903),!0;for(var e=this.mat,i=t.mat,n=cc.math.EPSILON,r=0;r<16;r++)if(!(e[r]+n>i[r]&&e[r]-n.001?cc.math.Plane.POINT_INFRONT_OF_PLANE:e<-.001?cc.math.Plane.POINT_BEHIND_PLANE:cc.math.Plane.POINT_ON_PLANE}}),{}],253:[(function(t,e,i){cc.math.Quaternion=function(t,e,i,n){t&&void 0===e?(this.x=t.x,this.y=t.y,this.z=t.z,this.w=t.w):(this.x=t||0,this.y=e||0,this.z=i||0,this.w=n||0)};var n=cc.math.Quaternion.prototype;n.conjugate=function(t){return this.x=-t.x,this.y=-t.y,this.z=-t.z,this.w=t.w,this},n.dot=function(t){return this.w*t.w+this.x*t.x+this.y*t.y+this.z*t.z},n.exponential=function(){return this},n.identity=function(){return this.x=0,this.y=0,this.z=0,this.w=1,this},n.inverse=function(){var t=this.length();return Math.abs(t)>cc.math.EPSILON?(this.x=0,this.y=0,this.z=0,this.w=0,this):(this.conjugate(this).scale(1/t),this)},n.isIdentity=function(){return 0===this.x&&0===this.y&&0===this.z&&1===this.w},n.length=function(){return Math.sqrt(this.lengthSq())},n.lengthSq=function(){return this.x*this.x+this.y*this.y+this.z*this.z+this.w*this.w},n.multiply=function(t){var e=this.x,i=this.y,n=this.z,r=this.w;return this.w=r*t.w-e*t.x-i*t.y-n*t.z,this.x=r*t.x+e*t.w+i*t.z-n*t.y,this.y=r*t.y+i*t.w+n*t.x-e*t.z,this.z=r*t.z+n*t.w+e*t.y-i*t.x,this},n.normalize=function(){var t=this.length();if(Math.abs(t)<=cc.math.EPSILON)throw new Error(cc._getError(7909));return this.scale(1/t),this},n.rotationAxis=function(t,e){var i=.5*e,n=Math.sin(i);return this.w=Math.cos(i),this.x=t.x*n,this.y=t.y*n,this.z=t.z*n,this},cc.math.Quaternion.rotationMatrix=function(t){if(!t)return null;var e,i,n,r,s=[],o=t.mat,a=0;s[0]=o[0],s[1]=o[3],s[2]=o[6],s[4]=o[1],s[5]=o[4],s[6]=o[7],s[8]=o[2],s[9]=o[5],s[10]=o[8],s[15]=1;var c=s[0],h=c[0]+c[5]+c[10]+1;return h>cc.math.EPSILON?(a=2*Math.sqrt(h),e=(c[9]-c[6])/a,i=(c[2]-c[8])/a,n=(c[4]-c[1])/a,r=.25*a):c[0]>c[5]&&c[0]>c[10]?(e=.25*(a=2*Math.sqrt(1+c[0]-c[5]-c[10])),i=(c[4]+c[1])/a,n=(c[2]+c[8])/a,r=(c[9]-c[6])/a):c[5]>c[10]?(a=2*Math.sqrt(1+c[5]-c[0]-c[10]),e=(c[4]+c[1])/a,i=.25*a,n=(c[9]+c[6])/a,r=(c[2]-c[8])/a):(a=2*Math.sqrt(1+c[10]-c[0]-c[5]),e=(c[2]+c[8])/a,i=(c[9]+c[6])/a,n=.25*a,r=(c[4]-c[1])/a),new cc.math.Quaternion(e,i,n,r)},cc.math.Quaternion.rotationYawPitchRoll=function(t,e,i){var n,r,s,o,a,c,h,l,u,_,d;n=cc.degreesToRadians(e)/2,r=cc.degreesToRadians(t)/2,s=cc.degreesToRadians(i)/2,o=Math.cos(n),a=Math.cos(r),c=Math.cos(s),h=Math.sin(n),_=a*c,d=(l=Math.sin(r))*(u=Math.sin(s));var f=new cc.math.Quaternion;return f.w=o*_+h*d,f.x=h*_-o*d,f.y=o*l*c+h*a*u,f.z=o*a*u-h*l*c,f.normalize(),f},n.slerp=function(t,e){if(this.x===t.x&&this.y===t.y&&this.z===t.z&&this.w===t.w)return this;var i=this.dot(t),n=Math.acos(i),r=Math.sqrt(1-cc.math.square(i)),s=Math.sin(e*n)/r,o=Math.sin((1-e)*n)/r,a=new cc.math.Quaternion(t);return this.scale(o),a.scale(s),this.add(a),this},n.toAxisAndAngle=function(){var t,e,i,n=new cc.math.Vec3;return 
t=Math.acos(this.w),(e=Math.sqrt(cc.math.square(this.x)+cc.math.square(this.y)+cc.math.square(this.z)))>-cc.math.EPSILON&&e2*Math.PI-cc.math.EPSILON?(i=0,n.x=0,n.y=0,n.z=1):(i=2*t,n.x=this.x/e,n.y=this.y/e,n.z=this.z/e,n.normalize()),{axis:n,angle:i}},n.scale=function(t){return this.x*=t,this.y*=t,this.z*=t,this.w*=t,this},n.assignFrom=function(t){return this.x=t.x,this.y=t.y,this.z=t.z,this.w=t.w,this},n.add=function(t){return this.x+=t.x,this.y+=t.y,this.z+=t.z,this.w+=t.w,this},cc.math.Quaternion.rotationBetweenVec3=function(t,e,i){var n=new cc.math.Vec3(t),r=new cc.math.Vec3(e);n.normalize(),r.normalize();var s=n.dot(r),o=new cc.math.Quaternion;if(s>=1)return o.identity(),o;if(s<1e-6-1)if(Math.abs(i.lengthSq())-cc.math.EPSILON&&fMath.max(t.x,e.x)+cc.math.EPSILON||sMath.max(t.y,e.y)+cc.math.EPSILON)&&(!(rMath.max(o,c)+cc.math.EPSILON||sMath.max(a,h)+cc.math.EPSILON)&&(i.x=r,i.y=s,!0)))},cc.math.Ray2.prototype.intersectTriangle=function(t,e,i,r,s){var o,a=new cc.math.Vec2,c=new cc.math.Vec2,h=new cc.math.Vec2,l=1e4,u=!1;return this.intersectLineSegment(t,e,a)&&(u=!0,(o=a.subtract(this.start).length())e&&t-cc.math.EPSILONt.x-cc.math.EPSILON&&this.yt.y-cc.math.EPSILON}}),{}],257:[(function(t,e,i){cc.math.Vec3=cc.math.Vec3=function(t,e,i){t&&void 0===e?(this.x=t.x,this.y=t.y,this.z=t.z):(this.x=t||0,this.y=e||0,this.z=i||0)},cc.math.vec3=function(t,e,i){return new cc.math.Vec3(t,e,i)};var n=cc.math.Vec3.prototype;n.fill=function(t,e,i){return t&&void 0===e?(this.x=t.x,this.y=t.y,this.z=t.z):(this.x=t,this.y=e,this.z=i),this},n.length=function(){return Math.sqrt(cc.math.square(this.x)+cc.math.square(this.y)+cc.math.square(this.z))},n.lengthSq=function(){return cc.math.square(this.x)+cc.math.square(this.y)+cc.math.square(this.z)},n.normalize=function(){var t=1/this.length();return this.x*=t,this.y*=t,this.z*=t,this},n.cross=function(t){var e=this.x,i=this.y,n=this.z;return this.x=i*t.z-n*t.y,this.y=n*t.x-e*t.z,this.z=e*t.y-i*t.x,this},n.dot=function(t){return this.x*t.x+this.y*t.y+this.z*t.z},n.add=function(t){return this.x+=t.x,this.y+=t.y,this.z+=t.z,this},n.subtract=function(t){return this.x-=t.x,this.y-=t.y,this.z-=t.z,this},n.transform=function(t){var e=this.x,i=this.y,n=this.z,r=t.mat;return this.x=e*r[0]+i*r[4]+n*r[8]+r[12],this.y=e*r[1]+i*r[5]+n*r[9]+r[13],this.z=e*r[2]+i*r[6]+n*r[10]+r[14],this},n.transformNormal=function(t){var e=this.x,i=this.y,n=this.z,r=t.mat;return this.x=e*r[0]+i*r[4]+n*r[8],this.y=e*r[1]+i*r[5]+n*r[9],this.z=e*r[2]+i*r[6]+n*r[10],this},n.transformCoord=function(t){var e=new cc.math.Vec4(this.x,this.y,this.z,1);return e.transform(t),this.x=e.x/e.w,this.y=e.y/e.w,this.z=e.z/e.w,this},n.scale=function(t){return this.x*=t,this.y*=t,this.z*=t,this},n.equals=function(t){var e=cc.math.EPSILON;return this.xt.x-e&&this.yt.y-e&&this.zt.z-e},n.inverseTransform=function(t){var e=t.mat,i=new cc.math.Vec3(this.x-e[12],this.y-e[13],this.z-e[14]);return this.x=i.x*e[0]+i.y*e[1]+i.z*e[2],this.y=i.x*e[4]+i.y*e[5]+i.z*e[6],this.z=i.x*e[8]+i.y*e[9]+i.z*e[10],this},n.inverseTransformNormal=function(t){var e=this.x,i=this.y,n=this.z,r=t.mat;return this.x=e*r[0]+i*r[1]+n*r[2],this.y=e*r[4]+i*r[5]+n*r[6],this.z=e*r[8]+i*r[9]+n*r[10],this},n.assignFrom=function(t){return t?(this.x=t.x,this.y=t.y,this.z=t.z,this):this},cc.math.Vec3.zero=function(t){return t.x=t.y=t.z=0,t},n.toTypeArray=function(){var t=new Float32Array(3);return t[0]=this.x,t[1]=this.y,t[2]=this.z,t}}),{}],258:[(function(t,e,i){cc.math.Vec4=function(t,e,i,n){t&&void 
0===e?(this.x=t.x,this.y=t.y,this.z=t.z,this.w=t.w):(this.x=t||0,this.y=e||0,this.z=i||0,this.w=n||0)};var n=cc.math.Vec4.prototype;n.fill=function(t,e,i,n){t&&void 0===e?(this.x=t.x,this.y=t.y,this.z=t.z,this.w=t.w):(this.x=t,this.y=e,this.z=i,this.w=n)},n.add=function(t){return t?(this.x+=t.x,this.y+=t.y,this.z+=t.z,this.w+=t.w,this):this},n.dot=function(t){return this.x*t.x+this.y*t.y+this.z*t.z+this.w*t.w},n.length=function(){return Math.sqrt(cc.math.square(this.x)+cc.math.square(this.y)+cc.math.square(this.z)+cc.math.square(this.w))},n.lengthSq=function(){return cc.math.square(this.x)+cc.math.square(this.y)+cc.math.square(this.z)+cc.math.square(this.w)},n.lerp=function(t,e){return this},n.normalize=function(){var t=1/this.length();return this.x*=t,this.y*=t,this.z*=t,this.w*=t,this},n.scale=function(t){return this.normalize(),this.x*=t,this.y*=t,this.z*=t,this.w*=t,this},n.subtract=function(t){this.x-=t.x,this.y-=t.y,this.z-=t.z,this.w-=t.w},n.transform=function(t){var e=this.x,i=this.y,n=this.z,r=this.w,s=t.mat;return this.x=e*s[0]+i*s[4]+n*s[8]+r*s[12],this.y=e*s[1]+i*s[5]+n*s[9]+r*s[13],this.z=e*s[2]+i*s[6]+n*s[10]+r*s[14],this.w=e*s[3]+i*s[7]+n*s[11]+r*s[15],this},cc.math.Vec4.transformArray=function(t,e){for(var i=[],n=0;nt.x-e&&this.yt.y-e&&this.zt.z-e&&this.wt.w-e},n.assignFrom=function(t){return this.x=t.x,this.y=t.y,this.z=t.z,this.w=t.w,this},n.toTypeArray=function(){var t=new Float32Array(4);return t[0]=this.x,t[1]=this.y,t[2]=this.z,t[3]=this.w,t}}),{}],259:[(function(t,e,i){t("./CCSGMotionStreak"),t("./CCSGMotionStreakWebGLRenderCmd");var n=cc.Class({name:"cc.MotionStreak",extends:cc.Component,editor:!1,ctor:function(){this._root=null,this._motionStreak=null},properties:{preview:{default:!1,editorOnly:!0,notify:!1,animatable:!1},_fadeTime:1,fadeTime:{get:function(){return this._fadeTime},set:function(t){this._fadeTime=t,this._motionStreak&&this._motionStreak.setFadeTime(t)},animatable:!1,tooltip:!1},_minSeg:1,minSeg:{get:function(){return this._minSeg},set:function(t){this._minSeg=t,this._motionStreak&&this._motionStreak.setMinSeg(t)},animatable:!1,tooltip:!1},_stroke:64,stroke:{get:function(){return this._stroke},set:function(t){this._stroke=t,this._motionStreak&&this._motionStreak.setStroke(t)},animatable:!1,tooltip:!1},_texture:{default:null,type:cc.Texture2D},texture:{get:function(){return this._texture},set:function(t){this._texture=t,this._motionStreak&&this._motionStreak.setTexture(t)},type:cc.Texture2D,animatable:!1,tooltip:!1},_color:cc.Color.WHITE,color:{get:function(){return this._color},set:function(t){this._color=t,this._motionStreak&&this._motionStreak.tintWithColor(t)},tooltip:!1},_fastMode:!1,fastMode:{get:function(){return this._fastMode},set:function(t){this._fastMode=t,this._motionStreak&&this._motionStreak.setFastMode(t)},animatable:!1,tooltip:!1}},onFocusInEditor:!1,onLostFocusInEditor:!1,reset:function(){this._motionStreak.reset()},__preload:function(){if(cc._renderType===cc.game.RENDER_TYPE_WEBGL){this._root=new _ccsg.Node;var t=new _ccsg.MotionStreak;t.initWithFade(this._fadeTime,this._minSeg,this._stroke,this.node.color,this._texture),t.setFastMode(this._fastMode),this._root.addChild(t);var e=this.node._sgNode;e&&e.addChild(this._root,-10),this._motionStreak=t}else cc.warnID(5900)},onEnable:function(){this.node.on("position-changed",this._onNodePositionChanged,this)},onDisable:function(){this.node.off("position-changed",this._onNodePositionChanged,this)},_onNodePositionChanged:function(){if(this._motionStreak){var 
t=this.node,e=t.getNodeToWorldTransform(),i=e.tx-(t.width/2+t.anchorX*t.width),n=e.ty-(t.height/2+t.anchorY*t.height);this._root.setPosition(-i,-n),this._motionStreak.setPosition(i,n)}}});cc.MotionStreak=e.exports=n}),{"./CCSGMotionStreak":260,"./CCSGMotionStreakWebGLRenderCmd":261}],260:[(function(t,e,i){function n(t,e,i,n,s){if(!((s+=n)<=1)){var o;e*=.5;for(var a=s-1,c=n;c1)&&(T=!0),T&&(i[2*p]=x.x,i[2*p+1]=x.y,i[2*(p+1)]=v.x,i[2*(p+1)+1]=v.y)}}}function r(t,e,i,n,r,s,o,a){var c,h,l,u;return t===i&&e===n||r===o&&s===a?{isSuccess:!1,value:0}:(n-=e,s-=e,o-=t,a-=e,u=(r-=t)*(h=(i-=t)/(c=Math.sqrt(i*i+n*n)))+s*(l=n/c),s=s*h-r*l,r=u,u=o*h+a*l,a=a*h-o*l,o=u,s===a?{isSuccess:!1,value:0}:{isSuccess:!0,value:(o+(r-o)*a/(a-s))/c})}_ccsg.MotionStreak=_ccsg.Node.extend({texture:null,fastMode:!1,startingPositionInitialized:!1,_blendFunc:null,_stroke:0,_fadeDelta:0,_minSeg:0,_maxPoints:0,_nuPoints:0,_previousNuPoints:0,_pointVertexes:null,_pointState:null,_vertices:null,_colorPointer:null,_texCoords:null,_verticesBuffer:null,_colorPointerBuffer:null,_texCoordsBuffer:null,_className:"MotionStreak",ctor:function(){_ccsg.Node.prototype.ctor.call(this),this._positionR=cc.p(0,0),this._blendFunc=new cc.BlendFunc(cc.SRC_ALPHA,cc.ONE_MINUS_SRC_ALPHA),this.fastMode=!1,this.startingPositionInitialized=!1,this.texture=null,this._stroke=0,this._fadeDelta=0,this._minSeg=0,this._maxPoints=0,this._nuPoints=0,this._previousNuPoints=0,this._pointVertexes=null,this._pointState=null,this._vertices=null,this._colorPointer=null,this._texCoords=null,this._verticesBuffer=null,this._colorPointerBuffer=null,this._texCoordsBuffer=null},initWithFade:function(t,e,i,n,r){return this.anchorX=0,this.anchorY=0,this.ignoreAnchor=!0,this.startingPositionInitialized=!1,this.fastMode=!0,this._stroke=i,this.setMinSeg(e),this.setFadeTime(t),this._blendFunc.src=gl.SRC_ALPHA,this._blendFunc.dst=gl.ONE_MINUS_SRC_ALPHA,this.setTexture(r),this.color=n,this.scheduleUpdate(),!0},getTexture:function(){return this.texture},setTexture:function(t){this.texture!==t&&(this.texture=t)},getBlendFunc:function(){return this._blendFunc},setBlendFunc:function(t,e){void 0===e?this._blendFunc=t:(this._blendFunc.src=t,this._blendFunc.dst=e)},getOpacity:function(){return cc.logID(5901),0},setOpacity:function(t){cc.logID(5902)},setOpacityModifyRGB:function(t){},isOpacityModifyRGB:function(){return!1},getFadeTime:function(){return 1/this._fadeDelta},setFadeTime:function(t){this._fadeDelta=1/t;var e=2+(0|60*t);this._maxPoints=e,this._nuPoints=0,this._pointState=new Float32Array(e),this._pointVertexes=new Float32Array(2*e),this._vertices=new Float32Array(4*e),this._texCoords=new Float32Array(4*e),this._colorPointer=new Uint8Array(8*e),this._verticesBuffer=gl.createBuffer(),this._texCoordsBuffer=gl.createBuffer(),this._colorPointerBuffer=gl.createBuffer(),gl.bindBuffer(gl.ARRAY_BUFFER,this._verticesBuffer),gl.bufferData(gl.ARRAY_BUFFER,this._vertices,gl.DYNAMIC_DRAW),gl.bindBuffer(gl.ARRAY_BUFFER,this._texCoordsBuffer),gl.bufferData(gl.ARRAY_BUFFER,this._texCoords,gl.DYNAMIC_DRAW),gl.bindBuffer(gl.ARRAY_BUFFER,this._colorPointerBuffer),gl.bufferData(gl.ARRAY_BUFFER,this._colorPointer,gl.DYNAMIC_DRAW)},getMinSeg:function(){return this._minSeg},setMinSeg:function(t){this._minSeg=-1===t?this._stroke/5:t,this._minSeg*=this._minSeg},isFastMode:function(){return this.fastMode},setFastMode:function(t){this.fastMode=t},isStartingPositionInitialized:function(){return 
this.startingPositionInitialized},setStartingPositionInitialized:function(t){this.startingPositionInitialized=t},getStroke:function(){return this._stroke},setStroke:function(t){this._stroke=t},tintWithColor:function(t){this.color=t;for(var e=this._colorPointer,i=0,n=2*this._nuPoints;i0?(c[e]=c[r],h[2*e]=h[2*r],h[2*e+1]=h[2*r+1],s=2*r,l[2*(i=2*e)]=l[2*s],l[2*i+1]=l[2*s+1],l[2*(i+1)]=l[2*(s+1)],l[2*(i+1)+1]=l[2*(s+1)+1],s*=4,u[(i*=4)+0]=u[s+0],u[i+1]=u[s+1],u[i+2]=u[s+2],u[i+4]=u[s+4],u[i+5]=u[s+5],u[i+6]=u[s+6]):i=8*e;var _=255*c[e];u[i+3]=_,u[i+7]=_}var d=!0;if((a-=o)>=this._maxPoints)d=!1;else if(a>0){var f=cc.p(h[2*(a-1)],h[2*(a-1)+1]),m=cc.pDistanceSQ(f,this._positionR)0&&this.fastMode&&(a>1?n(h,this._stroke,this._vertices,a,1):n(h,this._stroke,this._vertices,0,2)),a++}if(this.fastMode||n(h,this._stroke,this._vertices,0,a),a&&this._previousNuPoints!==a){var x=1/a,C=this._texCoords;for(r=0;re;0<=e?++u:--u)t.push(this.data[this.pos++]);break;case"tRNS":switch(this.transparency={},this.colorType){case 3:if(this.transparency.indexed=this.read(e),(h=255-this.transparency.indexed.length)>0)for(_=0;0<=h?_h;0<=h?++_:--_)this.transparency.indexed.push(255);break;case 0:this.transparency.grayscale=this.read(e)[0];break;case 2:this.transparency.rgb=this.read(e)}break;case"tEXt":o=(l=this.read(e)).indexOf(0),a=String.fromCharCode.apply(String,l.slice(0,o)),this.text[a]=String.fromCharCode.apply(String,l.slice(o+1));break;case"IEND":return s&&this.animation.frames.push(s),this.colors=function(){switch(this.colorType){case 0:case 3:case 4:return 1;case 2:case 6:return 3}}.call(this),this.hasAlphaChannel=4===(d=this.colorType)||6===d,i=this.colors+(this.hasAlphaChannel?1:0),this.pixelBitlength=this.bits*i,this.colorSpace=function(){switch(this.colors){case 1:return"DeviceGray";case 3:return"DeviceRGB"}}.call(this),void(Uint8Array!=Array&&(this.imgData=new Uint8Array(this.imgData)));default:this.pos+=e}if(this.pos+=4,this.pos>this.data.length)throw new Error(cc._getError(6017))}},read:function(t){var e,i;for(i=[],e=0;0<=t?et;0<=t?++e:--e)i.push(this.data[this.pos++]);return i},readUInt32:function(){return this.data[this.pos++]<<24|this.data[this.pos++]<<16|this.data[this.pos++]<<8|this.data[this.pos++]},readUInt16:function(){return this.data[this.pos++]<<8|this.data[this.pos++]},decodePixels:function(t){var e,i,r,s,o,a,c,h,l,u,_,d,f,m,p,g,y,v,x,C,T,A,b;if(null==t&&(t=this.imgData),0===t.length)return new Uint8Array(0);for(t=new n.Inflate(t,{index:0,verify:!1}).decompress(),g=(d=this.pixelBitlength/8)*this.width,f=new Uint8Array(g*this.height),a=t.length,p=0,m=0,i=0;m=this._totalParticles},setDisplayFrame:function(t){if(t){var e=t.getTexture();e&&(this._texture=e),this._sgNode.setDisplayFrame(t)}},setTextureWithRect:function(t,e){this._texture=t,this._sgNode.setTextureWithRect(t,e)},_applyFile:function(){var t=this._file;if(t){var e=this;cc.loader.load(t.nativeUrl,(function(i,n){if(i||!n)throw i||new Error(cc._getError(6029));if(e.isValid){var r=e._sgNode;r.particleCount=0;var s=r.isActive();r.initWithFile(t.nativeUrl),n.textureUuid?cc.loader.load({uuid:n.textureUuid,type:"uuid"},(function(t,i){t?cc.error(t):e.texture=i})):!n.textureImageData&&t.texture&&(r.texture=t.texture),n.emissionRate&&(e.emissionRate=n.emissionRate),r.setPosition(0,0),s||r.stopSystem(),e._applyAutoRemove(),e._custom&&e._applyCustoms()}}))}},_applyCustoms:function(){for(var 
t=this._sgNode,e=t.isActive(),i=0;i=this._totalParticles},updateQuadWithParticle:function(t,e){this._renderCmd.updateQuadWithParticle(t,e)},postStep:function(){this._renderCmd.postStep()},update:function(t){if(this._renderCmd.setDirtyFlag(_ccsg.Node._dirtyFlags.contentDirty),this._isActive&&this.emissionRate){var e=1/this.emissionRate;for(this.particleCounte;)this.addParticle(),this._emitCounter-=e;this._elapsed+=t,-1!==this.duration&&this.duration0){if(this.emitterMode===_ccsg.ParticleSystem.Mode.GRAVITY){var h=o,l=r,u=s;c.pos.x||c.pos.y?(cc.pIn(l,c.pos),cc.pNormalizeIn(l)):cc.pZeroIn(l),cc.pIn(u,l),cc.pMultIn(l,c.modeA.radialAccel);var _=u.x;u.x=-u.y,u.y=_,cc.pMultIn(u,c.modeA.tangentialAccel),cc.pIn(h,l),cc.pAddIn(h,u),cc.pAddIn(h,this.modeA.gravity),cc.pMultIn(h,t),cc.pAddIn(c.modeA.dir,h),cc.pIn(h,c.modeA.dir),cc.pMultIn(h,t),cc.pAddIn(c.pos,h)}else{var d=c.modeB;d.angle+=d.degreesPerSecond*t,d.radius+=d.deltaRadius*t,c.pos.x=-Math.cos(d.angle)*d.radius,c.pos.y=-Math.sin(d.angle)*d.radius}this._renderCmd._updateDeltaColor(c,t),c.size+=c.deltaSize*t,c.size=Math.max(0,c.size),c.rotation+=c.deltaRotation*t;var f=r;if(this.positionType===_ccsg.ParticleSystem.Type.FREE||this.positionType===_ccsg.ParticleSystem.Type.RELATIVE){var m=s,p=o;cc._pointApplyAffineTransformIn(n,i,m),cc._pointApplyAffineTransformIn(c.startPos,i,p),cc.pSubIn(m,p),cc.pIn(f,c.pos),cc.pSubIn(f,m)}else cc.pIn(f,c.pos);this._renderCmd.updateParticlePosition(c,f),++this._particleIdx}else{if(this._particleIdx!==this.particleCount-1){var g=a[this._particleIdx];a[this._particleIdx]=a[this.particleCount-1],a[this.particleCount-1]=g}if(--this.particleCount,0===this.particleCount&&this.autoRemoveOnFinish)return this.unscheduleUpdate(),this._parent.removeChild(this,!0),void(this._renderCmd.updateLocalBB&&this._renderCmd.updateLocalBB())}}this._renderCmd.updateLocalBB&&this._renderCmd.updateLocalBB()}this.postStep()},updateWithNoTime:function(){this.update(0)},_valueForKey:function(t,e){if(e){var i=e[t];return null!=i?i:""}return""},_updateBlendFunc:function(){var t=this._texture;if(t&&t instanceof cc.Texture2D){this._opacityModifyRGB=!1;var e=this._blendFunc;e.src===cc.macro.BLEND_SRC&&e.dst===cc.macro.BLEND_DST&&(t.hasPremultipliedAlpha()?this._opacityModifyRGB=!0:(e.src=cc.macro.SRC_ALPHA,e.dst=cc.macro.ONE_MINUS_SRC_ALPHA))}},clone:function(){var t=new _ccsg.ParticleSystem;if(t.initWithTotalParticles(this.getTotalParticles())){t.setAngle(this.getAngle()),t.setAngleVar(this.getAngleVar()),t.setDuration(this.getDuration());var e=this.getBlendFunc();if(t.setBlendFunc(e.src,e.dst),t.setStartColor(this.getStartColor()),t.setStartColorVar(this.getStartColorVar()),t.setEndColor(this.getEndColor()),t.setEndColorVar(this.getEndColorVar()),t.setStartSize(this.getStartSize()),t.setStartSizeVar(this.getStartSizeVar()),t.setEndSize(this.getEndSize()),t.setEndSizeVar(this.getEndSizeVar()),t.setPosition(cc.p(this.x,this.y)),t.setPosVar(cc.p(this.getPosVar().x,this.getPosVar().y)),t.setPositionType(this.getPositionType()),t.setStartSpin(this.getStartSpin()||0),t.setStartSpinVar(this.getStartSpinVar()||0),t.setEndSpin(this.getEndSpin()||0),t.setEndSpinVar(this.getEndSpinVar()||0),t.setEmitterMode(this.getEmitterMode()),this.getEmitterMode()===_ccsg.ParticleSystem.Mode.GRAVITY){var 
i=this.getGravity();t.setGravity(cc.p(i.x,i.y)),t.setSpeed(this.getSpeed()),t.setSpeedVar(this.getSpeedVar()),t.setRadialAccel(this.getRadialAccel()),t.setRadialAccelVar(this.getRadialAccelVar()),t.setTangentialAccel(this.getTangentialAccel()),t.setTangentialAccelVar(this.getTangentialAccelVar())}else this.getEmitterMode()===_ccsg.ParticleSystem.Mode.RADIUS&&(t.setStartRadius(this.getStartRadius()),t.setStartRadiusVar(this.getStartRadiusVar()),t.setEndRadius(this.getEndRadius()),t.setEndRadiusVar(this.getEndRadiusVar()),t.setRotatePerSecond(this.getRotatePerSecond()),t.setRotatePerSecondVar(this.getRotatePerSecondVar()));t.setLife(this.getLife()),t.setLifeVar(this.getLifeVar()),t.setEmissionRate(this.getEmissionRate()),t.setOpacityModifyRGB(this.isOpacityModifyRGB());var n=this.getTexture();if(n){var r=n.getContentSize();t.setTextureWithRect(n,cc.rect(0,0,r.width,r.height))}}return t},setDisplayFrame:function(t){if(t){var e=t.getOffset();0===e.x&&0===e.y||cc.logID(6015);var i=t.getTexture();this._texture!==i&&this.setTexture(i)}},setTextureWithRect:function(t,e){this._texture!==t&&(this._texture=t,this._updateBlendFunc()),this.initTexCoordsWithRect(e)},listenBackToForeground:function(t){}});var s=_ccsg.ParticleSystem.prototype;s.opacityModifyRGB,cc.defineGetterSetter(s,"opacityModifyRGB",s.isOpacityModifyRGB,s.setOpacityModifyRGB),s.active,cc.defineGetterSetter(s,"active",s.isActive),s.sourcePos,cc.defineGetterSetter(s,"sourcePos",s.getSourcePosition,s.setSourcePosition),s.posVar,cc.defineGetterSetter(s,"posVar",s.getPosVar,s.setPosVar),s.gravity,cc.defineGetterSetter(s,"gravity",s.getGravity,s.setGravity),s.speed,cc.defineGetterSetter(s,"speed",s.getSpeed,s.setSpeed),s.speedVar,cc.defineGetterSetter(s,"speedVar",s.getSpeedVar,s.setSpeedVar),s.tangentialAccel,cc.defineGetterSetter(s,"tangentialAccel",s.getTangentialAccel,s.setTangentialAccel),s.tangentialAccelVar,cc.defineGetterSetter(s,"tangentialAccelVar",s.getTangentialAccelVar,s.setTangentialAccelVar),s.radialAccel,cc.defineGetterSetter(s,"radialAccel",s.getRadialAccel,s.setRadialAccel),s.radialAccelVar,cc.defineGetterSetter(s,"radialAccelVar",s.getRadialAccelVar,s.setRadialAccelVar),s.rotationIsDir,cc.defineGetterSetter(s,"rotationIsDir",s.getRotationIsDir,s.setRotationIsDir),s.startRadius,cc.defineGetterSetter(s,"startRadius",s.getStartRadius,s.setStartRadius),s.startRadiusVar,cc.defineGetterSetter(s,"startRadiusVar",s.getStartRadiusVar,s.setStartRadiusVar),s.endRadius,cc.defineGetterSetter(s,"endRadius",s.getEndRadius,s.setEndRadius),s.endRadiusVar,cc.defineGetterSetter(s,"endRadiusVar",s.getEndRadiusVar,s.setEndRadiusVar),s.rotatePerS,cc.defineGetterSetter(s,"rotatePerS",s.getRotatePerSecond,s.setRotatePerSecond),s.rotatePerSVar,cc.defineGetterSetter(s,"rotatePerSVar",s.getRotatePerSecondVar,s.setRotatePerSecondVar),s.startColor,cc.defineGetterSetter(s,"startColor",s.getStartColor,s.setStartColor),s.startColorVar,cc.defineGetterSetter(s,"startColorVar",s.getStartColorVar,s.setStartColorVar),s.endColor,cc.defineGetterSetter(s,"endColor",s.getEndColor,s.setEndColor),s.endColorVar,cc.defineGetterSetter(s,"endColorVar",s.getEndColorVar,s.setEndColorVar),s.totalParticles,cc.defineGetterSetter(s,"totalParticles",s.getTotalParticles,s.setTotalParticles),s.texture,cc.defineGetterSetter(s,"texture",s.getTexture,s.setTexture),_ccsg.ParticleSystem.ModeA=function(t,e,i,n,r,s,o,a){this.gravity=t||cc.p(0,0),this.speed=e||0,this.speedVar=i||0,this.tangentialAccel=n||0,this.tangentialAccelVar=r||0,this.radialAccel=s||0,this.radialAccelVar=o||0,this
.rotationIsDir=a||!1},_ccsg.ParticleSystem.ModeB=function(t,e,i,n,r,s){this.startRadius=t||0,this.startRadiusVar=e||0,this.endRadius=i||0,this.endRadiusVar=n||0,this.rotatePerSecond=r||0,this.rotatePerSecondVar=s||0},_ccsg.ParticleSystem.DURATION_INFINITY=-1,_ccsg.ParticleSystem.START_SIZE_EQUAL_TO_END_SIZE=-1,_ccsg.ParticleSystem.START_RADIUS_EQUAL_TO_END_RADIUS=-1,_ccsg.ParticleSystem.Mode=cc.Enum({GRAVITY:0,RADIUS:1}),_ccsg.ParticleSystem.Type=cc.Enum({FREE:0,RELATIVE:1,GROUPED:2})}),{"../compression/ZipUtils":28,"../core/platform/CCSAXParser.js":185,"./CCPNGReader":262,"./CCTIFFReader":268}],266:[(function(t,e,i){_ccsg.ParticleSystem.CanvasRenderCmd=function(t){this._rootCtor(t),this._needDraw=!0,this._pointRect=cc.rect(0,0,0,0),this._localRegion=new cc.Region,this._tintCache=null};var n=_ccsg.ParticleSystem.CanvasRenderCmd.prototype=Object.create(_ccsg.Node.CanvasRenderCmd.prototype);n.constructor=_ccsg.ParticleSystem.CanvasRenderCmd,n.updateQuadWithParticle=function(t,e){},n.updateParticlePosition=function(t,e){cc.pIn(t.drawPos,e)};var r=new cc.Region,s=new cc.Rect;n.updateLocalBB=function(){var t=this._localRegion,e=this._node._particles;t.setEmpty();for(var i=e.length-1;i>=0;--i){var n=e[i],o=n.drawPos,a=1.415*n.size;r.setTo(o.x-a,o.y-a,o.x+a,o.y+a),t.union(r)}s.x=t._minX,s.y=t._minY,s.width=t._maxX-t._minX,s.height=t._maxY-t._minY},n.getLocalBB=function(){return s},n.updateStatus=function(){this.originUpdateStatus(),this._updateCurrentRegions(),this._regionFlag=_ccsg.Node.CanvasRenderCmd.RegionStatus.DirtyDouble,this._dirtyFlag&=~_ccsg.Node._dirtyFlags.contentDirty},n.rendering=function(t,e,i){var n,r,s,o=t||cc._renderContext,a=o.getContext(),c=this._node,h=this._pointRect;o.setTransform(this._worldTransform,e,i),o.save(),c.isBlendAdditive()?a.globalCompositeOperation="lighter":a.globalCompositeOperation="source-over";var l=this._node.particleCount,u=this._node._particles;if(c._texture){if(!c._texture.loaded)return void o.restore();var _=c._texture.getHtmlElementObj();if(!_.width||!_.height)return void o.restore();var d=_;for(n=0;ne._allocatedParticles){var i=cc.V3F_C4B_T2F_Quad.BYTES_PER_ELEMENT;this._indices=new Uint16Array(6*t);var n=new ArrayBuffer(t*i),r=e._particles;r.length=0;var s=this._quads;s.length=0;for(var o=0;o>>8*(4-s)):r.push(n);else for(var o=0;o=8?-1!==["RATIONAL","SRATIONAL"].indexOf(e)?(r.push(this.getUint32(n+a)),r.push(this.getUint32(n+a+4))):cc.logID(8e3):r.push(this.getBytes(s,n+a))}return"ASCII"===e&&r.forEach((function(t,e,i){i[e]=String.fromCharCode(t)})),r},getBytes:function(t,e){if(t<=0)cc.logID(8001);else{if(t<=1)return this.getUint8(e);if(t<=2)return this.getUint16(e);if(t<=3)return this.getUint32(e)>>>8;if(t<=4)return this.getUint32(e);cc.logID(8002)}},getBits:function(t,e,i){i=i||0;var n,r,s=e+Math.floor(i/8),o=i+t,a=32-t;return o<=0?cc.logID(6023):o<=8?(n=24+i,r=this.getUint8(s)):o<=16?(n=16+i,r=this.getUint16(s)):o<=32?(n=i,r=this.getUint32(s)):cc.logID(6022),{bits:r<>>a,byteOffset:s+Math.floor(o/8),bitOffset:o%8}},parseFileDirectory:function(t){for(var e=this.getUint16(t),i=[],n=t+2,r=0;r=0&&D<=127?P=D+1:D>=-127&&D<=-1?O=1-D:T=!0}else{var B=this.getUint8(g+v);for(w=0;w0)for(var it=0;it"},_compileShader:function(t,e,i){if(!i||!t)return!1;i=(cc.GLProgram._isHighpSupported()?"precision highp float;\n":"precision mediump float;\n")+"uniform mat4 CC_PMatrix;\nuniform mat4 CC_MVMatrix;\nuniform mat4 CC_MVPMatrix;\nuniform vec4 CC_Time;\nuniform vec4 CC_SinTime;\nuniform vec4 CC_CosTime;\nuniform vec4 CC_Random01;\nuniform sampler2D CC_Texture0;\n//CC 
INCLUDES END\n"+i,this._glContext.shaderSource(t,i),this._glContext.compileShader(t);var n=this._glContext.getShaderParameter(t,this._glContext.COMPILE_STATUS);return n||(cc.logID(8100,this._glContext.getShaderSource(t)),e===this._glContext.VERTEX_SHADER?cc.log("cocos2d: \n"+this.vertexShaderLog()):cc.log("cocos2d: \n"+this.fragmentShaderLog())),!!n},ctor:function(t,e,i){this._uniforms={},this._hashForUniforms={},this._glContext=i||cc._renderContext,this._programObj=null,this._vertShader=null,this._fragShader=null,this._usesTime=!1,this._projectionUpdated=-1,t&&e&&this.init(t,e)},destroyProgram:function(){this._vertShader=null,this._fragShader=null,this._uniforms=null,this._hashForUniforms=null,this._glContext.deleteProgram(this._programObj)},initWithVertexShaderByteArray:function(t,e){var i=this._glContext;for(var n in this._programObj=i.createProgram(),this._projectionUpdated=-1,this._vertShader=null,this._fragShader=null,t&&(this._vertShader=i.createShader(i.VERTEX_SHADER),this._compileShader(this._vertShader,i.VERTEX_SHADER,t)||cc.logID(8101)),e&&(this._fragShader=i.createShader(i.FRAGMENT_SHADER),this._compileShader(this._fragShader,i.FRAGMENT_SHADER,e)||cc.logID(8102)),this._vertShader&&i.attachShader(this._programObj,this._vertShader),this._fragShader&&i.attachShader(this._programObj,this._fragShader),this._hashForUniforms)delete this._hashForUniforms[n];return!0},initWithString:function(t,e){return this.initWithVertexShaderByteArray(t,e)},initWithVertexShaderFilename:function(t,e){var i=cc.loader.getRes(t);if(!i)throw new Error(cc._getError(8106,t));var n=cc.loader.getRes(e);if(!n)throw new Error(cc._getError(8106,e));return this.initWithVertexShaderByteArray(i,n)},init:function(t,e){return this.initWithVertexShaderFilename(t,e)},addAttribute:function(t,e){this._glContext.bindAttribLocation(this._programObj,e,t)},link:function(){if(!this._programObj)return cc.logID(8103),!1;if((this._glContext.linkProgram(this._programObj),this._vertShader&&this._glContext.deleteShader(this._vertShader),this._fragShader&&this._glContext.deleteShader(this._fragShader),this._vertShader=null,this._fragShader=null,cc.game.config[cc.game.CONFIG_KEY.debugMode])&&!this._glContext.getProgramParameter(this._programObj,this._glContext.LINK_STATUS))return 
cc.logID(8104,this._glContext.getProgramInfoLog(this._programObj)),cc.gl.deleteProgram(this._programObj),this._programObj=null,!1;return!0},use:function(){cc.gl.useProgram(this._programObj)},updateUniforms:function(){this._uniforms[n.UNIFORM_PMATRIX]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_PMATRIX_S),this._uniforms[n.UNIFORM_MVMATRIX]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_MVMATRIX_S),this._uniforms[n.UNIFORM_MVPMATRIX]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_MVPMATRIX_S),this._uniforms[n.UNIFORM_TIME]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_TIME_S),this._uniforms[n.UNIFORM_SINTIME]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_SINTIME_S),this._uniforms[n.UNIFORM_COSTIME]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_COSTIME_S),this._usesTime=null!=this._uniforms[n.UNIFORM_TIME]||null!=this._uniforms[n.UNIFORM_SINTIME]||null!=this._uniforms[n.UNIFORM_COSTIME],this._uniforms[n.UNIFORM_RANDOM01]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_RANDOM01_S),this._uniforms[n.UNIFORM_SAMPLER]=this._glContext.getUniformLocation(this._programObj,n.UNIFORM_SAMPLER_S),this.use(),this.setUniformLocationWith1i(this._uniforms[n.UNIFORM_SAMPLER],0)},_addUniformLocation:function(t){var e=this._glContext.getUniformLocation(this._programObj,t);this._uniforms[t]=e},getUniformLocationForName:function(t){if(!t)throw new Error(cc._getError(8107));if(!this._programObj)throw new Error(cc._getError(8108));return this._uniforms[t]||this._glContext.getUniformLocation(this._programObj,t)},getUniformMVPMatrix:function(){return this._uniforms[n.UNIFORM_MVPMATRIX]},getUniformSampler:function(){return this._uniforms[n.UNIFORM_SAMPLER]},setUniformLocationWith1i:function(t,e){var i=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e)){var n=this.getUniformLocationForName(t);i.uniform1i(n,e)}}else i.uniform1i(t,e)},setUniformLocationWith2i:function(t,e,i){var n=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e,i)){var r=this.getUniformLocationForName(t);n.uniform2i(r,e,i)}}else n.uniform2i(t,e,i)},setUniformLocationWith3i:function(t,e,i,n){var r=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e,i,n)){var s=this.getUniformLocationForName(t);r.uniform3i(s,e,i,n)}}else r.uniform3i(t,e,i,n)},setUniformLocationWith4i:function(t,e,i,n,r){var s=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e,i,n,r)){var o=this.getUniformLocationForName(t);s.uniform4i(o,e,i,n,r)}}else s.uniform4i(t,e,i,n,r)},setUniformLocationWith2iv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniform2iv(i,e)},setUniformLocationWith3iv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniform3iv(i,e)},setUniformLocationWith4iv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniform4iv(i,e)},setUniformLocationI32:function(t,e){this.setUniformLocationWith1i(t,e)},setUniformLocationWith1f:function(t,e){var i=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e)){var n=this.getUniformLocationForName(t);i.uniform1f(n,e)}}else i.uniform1f(t,e)},setUniformLocationWith2f:function(t,e,i){var n=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e,i)){var r=this.getUniformLocationForName(t);n.uniform2f(r,e,i)}}else 
n.uniform2f(t,e,i)},setUniformLocationWith3f:function(t,e,i,n){var r=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e,i,n)){var s=this.getUniformLocationForName(t);r.uniform3f(s,e,i,n)}}else r.uniform3f(t,e,i,n)},setUniformLocationWith4f:function(t,e,i,n,r){var s=this._glContext;if("string"==typeof t){if(this._updateUniformLocation(t,e,i,n,r)){var o=this.getUniformLocationForName(t);s.uniform4f(o,e,i,n,r)}}else s.uniform4f(t,e,i,n,r)},setUniformLocationWith2fv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniform2fv(i,e)},setUniformLocationWith3fv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniform3fv(i,e)},setUniformLocationWith4fv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniform4fv(i,e)},setUniformLocationWithMatrix3fv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniformMatrix3fv(i,!1,e)},setUniformLocationWithMatrix4fv:function(t,e){var i="string"==typeof t?this.getUniformLocationForName(t):t;this._glContext.uniformMatrix4fv(i,!1,e)},setUniformLocationF32:function(t,e,i,n,r){"use strict";switch(arguments.length){case 0:case 1:return;case 2:this.setUniformLocationWith1f(t,e);break;case 3:this.setUniformLocationWith2f(t,e,i);break;case 4:this.setUniformLocationWith3f(t,e,i,n);break;case 5:this.setUniformLocationWith4f(t,e,i,n,r)}},setUniformsForBuiltins:function(){var t=new r.Matrix4,e=new r.Matrix4,i=new r.Matrix4;if(r.glGetMatrix(r.KM_GL_PROJECTION,t),r.glGetMatrix(r.KM_GL_MODELVIEW,e),r.mat4Multiply(i,t,e),this.setUniformLocationWithMatrix4fv(this._uniforms[n.UNIFORM_PMATRIX],t.mat,1),this.setUniformLocationWithMatrix4fv(this._uniforms[n.UNIFORM_MVMATRIX],e.mat,1),this.setUniformLocationWithMatrix4fv(this._uniforms[n.UNIFORM_MVPMATRIX],i.mat,1),this._usesTime){var s=cc.director,o=s.getTotalFrames()*s.getAnimationInterval();this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_TIME],o/10,o,2*o,4*o),this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_SINTIME],o/8,o/4,o/2,Math.sin(o)),this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_COSTIME],o/8,o/4,o/2,Math.cos(o))}-1!==this._uniforms[n.UNIFORM_RANDOM01]&&this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_RANDOM01],Math.random(),Math.random(),Math.random(),Math.random())},_setUniformsForBuiltinsForRenderer:function(t){if(t&&t._renderCmd){var e=new r.Matrix4,i=new r.Matrix4;if(r.glGetMatrix(r.KM_GL_PROJECTION,e),r.mat4Multiply(i,e,t._renderCmd._stackMatrix),this.setUniformLocationWithMatrix4fv(this._uniforms[n.UNIFORM_PMATRIX],e.mat,1),this.setUniformLocationWithMatrix4fv(this._uniforms[n.UNIFORM_MVMATRIX],t._renderCmd._stackMatrix.mat,1),this.setUniformLocationWithMatrix4fv(this._uniforms[n.UNIFORM_MVPMATRIX],i.mat,1),this._usesTime){var 
s=cc.director,o=s.getTotalFrames()*s.getAnimationInterval();this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_TIME],o/10,o,2*o,4*o),this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_SINTIME],o/8,o/4,o/2,Math.sin(o)),this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_COSTIME],o/8,o/4,o/2,Math.cos(o))}-1!==this._uniforms[n.UNIFORM_RANDOM01]&&this.setUniformLocationWith4f(this._uniforms[n.UNIFORM_RANDOM01],Math.random(),Math.random(),Math.random(),Math.random())}},setUniformForModelViewProjectionMatrix:function(){this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_MVPMATRIX],!1,r.getMat4MultiplyValue(r.projection_matrix_stack.top,r.modelview_matrix_stack.top))},setUniformForModelViewProjectionMatrixWithMat4:function(t){r.mat4Multiply(t,r.projection_matrix_stack.top,r.modelview_matrix_stack.top),this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_MVPMATRIX],!1,t.mat)},setUniformForModelViewAndProjectionMatrixWithMat4:function(){this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_MVMATRIX],!1,r.modelview_matrix_stack.top.mat),this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_PMATRIX],!1,r.projection_matrix_stack.top.mat)},_setUniformForMVPMatrixWithMat4:function(t){if(!t)throw new Error(cc._getError(8109));this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_MVMATRIX],!1,t.mat),this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_PMATRIX],!1,r.projection_matrix_stack.top.mat)},_updateProjectionUniform:function(){var t=r.projection_matrix_stack;t.lastUpdated!==this._projectionUpdated&&(this._glContext.uniformMatrix4fv(this._uniforms[n.UNIFORM_PMATRIX],!1,t.top.mat),this._projectionUpdated=t.lastUpdated)},vertexShaderLog:function(){return this._glContext.getShaderInfoLog(this._vertShader)},getVertexShaderLog:function(){return this._glContext.getShaderInfoLog(this._vertShader)},getFragmentShaderLog:function(){return this._glContext.getShaderInfoLog(this._vertShader)},fragmentShaderLog:function(){return this._glContext.getShaderInfoLog(this._fragShader)},programLog:function(){return this._glContext.getProgramInfoLog(this._programObj)},getProgramLog:function(){return this._glContext.getProgramInfoLog(this._programObj)},reset:function(){for(var t in this._vertShader=null,this._fragShader=null,this._uniforms.length=0,this._glContext.deleteProgram(this._programObj),this._programObj=null,this._hashForUniforms)this._hashForUniforms[t].length=0,delete this._hashForUniforms[t]},getProgram:function(){return this._programObj},retain:function(){},release:function(){}}),cc.GLProgram._highpSupported=null,cc.GLProgram._isHighpSupported=function(){var t=cc._renderContext;if(t.getShaderPrecisionFormat&&null==cc.GLProgram._highpSupported){var e=t.getShaderPrecisionFormat(t.FRAGMENT_SHADER,t.HIGH_FLOAT);cc.GLProgram._highpSupported=0!==e.precision}return cc.GLProgram._highpSupported}}),{}],273:[(function(t,e,i){var n=cc.macro.ENABLE_GL_STATE_CACHE,r=0,s=0,o=null,a=0,c=0;n&&(r=16,s=-1,o=new Array(r),a=-1,c=-1),cc.gl={},cc.gl.invalidateStateCache=function(){if(cc.math.glFreeAll(),-1,n){s=-1;for(var t=0;te._bufferCapacity){var n=cc.V2F_C4B_T2F_Triangle.BYTES_PER_ELEMENT;if(e._bufferCapacity+=Math.max(e._bufferCapacity,t),null==i||0===i.length)e._buffer=[],e._trianglesArrayBuffer=new ArrayBuffer(n*e._bufferCapacity),e._trianglesReader=new Uint8Array(e._trianglesArrayBuffer);else{for(var r=[],s=new ArrayBuffer(n*e._bufferCapacity),o=0;o0,y=3*(3*d-2);this._ensureCapacity(y);var 
v,x,C,T=cc.V2F_C4B_T2F_Triangle.BYTES_PER_ELEMENT,A=this._trianglesArrayBuffer,b=this._buffer;for(o=0;o0){var i=this._worldTransform,n=this._matrix.mat;n[0]=i.a,n[4]=i.c,n[12]=i.tx,n[1]=i.b,n[5]=i.d,n[13]=i.ty,cc.gl.blendFunc(e._blendFunc.src,e._blendFunc.dst),this._shaderProgram.use(),this._shaderProgram._setUniformForMVPMatrixWithMat4(this._matrix),e._render()}}}),{}],280:[(function(t,e,i){_ccsg.TMXLayer=_ccsg.Node.extend({tiles:null,tileset:null,layerOrientation:null,properties:null,layerName:"",_texture:null,_textures:null,_texGrids:null,_spriteTiles:null,_layerSize:null,_mapTileSize:null,_opacity:255,_minGID:null,_maxGID:null,_vertexZvalue:null,_useAutomaticVertexZ:null,_reusedTile:null,_contentScaleFactor:null,_staggerAxis:null,_staggerIndex:null,_hexSideLength:0,_className:"TMXLayer",ctor:function(t,e,i){cc.SpriteBatchNode.prototype.ctor.call(this),this._layerSize=cc.size(0,0),this._mapTileSize=cc.size(0,0),this._spriteTiles={},this._staggerAxis=cc.TiledMap.StaggerAxis.STAGGERAXIS_Y,this._staggerIndex=cc.TiledMap.StaggerIndex.STAGGERINDEX_EVEN,void 0!==i&&this.initWithTilesetInfo(t,e,i)},_createRenderCmd:function(){return cc._renderType===cc.game.RENDER_TYPE_CANVAS?new _ccsg.TMXLayer.CanvasRenderCmd(this):new _ccsg.TMXLayer.WebGLRenderCmd(this)},_fillTextureGrids:function(t,e){var i=this._textures[e];if(i.loaded){t.imageSize.width&&t.imageSize.height||(t.imageSize.width=i.width,t.imageSize.height=i.height);for(var n=t._tileSize.width,r=t._tileSize.height,s=i.width,o=i.height,a=t.spacing,c=t.margin,h=Math.floor((s-2*c+a)/(n+a)),l=Math.floor((o-2*c+a)/(r+a))*h,u=t.firstGid,_=t.firstGid+l,d=this._texGrids,f=null,m=!!d[u],p=cc.macro.FIX_ARTIFACTS_BY_STRECHING_TEXEL_TMX?.5:0;u<_&&(m&&!d[u]&&(m=!1),m||!d[u]);++u)f={texId:e,x:0,y:0,width:n,height:r,t:0,l:0,r:0,b:0},t.rectForGID(u,f),f.x+=p,f.y+=p,f.width-=2*p,f.height-=2*p,f.t=f.y/o,f.l=f.x/s,f.r=(f.x+f.width)/s,f.b=(f.y+f.height)/o,d[u]=f}else i.once("load",(function(){this._fillTextureGrids(t,e)}),this)},initWithTilesetInfo:function(t,e,i){var n=e._layerSize;this.layerName=e.name,this.tiles=e._tiles,this.properties=e.properties,this._layerSize=n,this._minGID=e._minGID,this._maxGID=e._maxGID,this._opacity=e._opacity,this._staggerAxis=i.getStaggerAxis(),this._staggerIndex=i.getStaggerIndex(),this._hexSideLength=i.getHexSideLength(),this.tileset=t,this.layerOrientation=i.orientation,this._mapTileSize=i.getTileSize();var r=i._tilesets;if(r){var s,o,a,c=r.length;for(this._textures=new Array(c),this._texGrids=[],s=0;s0){for(this._reorderChildDirty&&this.sortAllChildren(),r=0;r=this._layerSize.width||e>=this._layerSize.height||i<0||e<0)throw new Error(cc._getError(7227));if(!this.tiles)return cc.logID(7204),null;var n=null,r=this.getTileGIDAt(i,e);if(0===r)return n;var s=Math.floor(i)+Math.floor(e)*this._layerSize.width;if(!(n=this._spriteTiles[s])){var o=this._texGrids[r],a=this._textures[o.texId];(n=new _ccsg.Sprite(a,o)).setPosition(this.getPositionAt(i,e));var c=this._vertexZForPos(i,e);n.setVertexZ(c),n.setAnchorPoint(0,0),n.setOpacity(this._opacity),this.addChild(n,c,s)}return n},getTileGIDAt:function(t,e){if(void 0===t)throw new Error(cc._getError(7228));var i=t;if(void 0===e&&(i=t.x,e=t.y),i>=this._layerSize.width||e>=this._layerSize.height||i<0||e<0)throw new Error(cc._getError(7229));if(!this.tiles)return cc.logID(7205),null;var n=Math.floor(i)+Math.floor(e)*this._layerSize.width;return(this.tiles[n]&cc.TiledMap.TileFlag.FLIPPED_MASK)>>>0},setTileGID:function(t,e,i,n){if(void 0===e)throw new Error(cc._getError(7230));var r;if(void 
0===n&&e instanceof cc.Vec2?(r=e,n=i):r=cc.p(e,i),r.x=Math.floor(r.x),r.y=Math.floor(r.y),r.x>=this._layerSize.width||r.y>=this._layerSize.height||r.x<0||r.y<0)throw new Error(cc._getError(7231));if(this.tiles)if(0!==t&&t>>0;if(0===t)this.removeTileAt(r);else if(0===o)this._updateTileForGID(a,r);else{var c=r.x+r.y*this._layerSize.width,h=this.getChildByTag(c);if(h){var l=this._texGrids[t],u=this._textures[l.texId];h.setTexture(u),h.setTextureRect(l,!1),null!=n&&this._setupTileSprite(h,r,a),this.tiles[c]=a}else this._updateTileForGID(a,r)}}}else cc.logID(7206)},addChild:function(t,e,i){_ccsg.Node.prototype.addChild.call(this,t,e,i),void 0!==i&&(this._spriteTiles[i]=t,t._vertexZ=this._vertexZ+cc.renderer.assignedZStep*i/this.tiles.length)},removeChild:function(t,e){this._spriteTiles[t.tag]&&(this._spriteTiles[t.tag]=null),_ccsg.Node.prototype.removeChild.call(this,t,e)},getTileFlagsAt:function(t,e){if(!t)throw new Error(cc._getError(7232));if(void 0!==e&&(t=cc.p(t,e)),t.x>=this._layerSize.width||t.y>=this._layerSize.height||t.x<0||t.y<0)throw new Error(cc._getError(7233));if(!this.tiles)return cc.logID(7208),null;var i=Math.floor(t.x)+Math.floor(t.y)*this._layerSize.width;return(this.tiles[i]&cc.TiledMap.TileFlag.FLIPPED_ALL)>>>0},removeTileAt:function(t,e){if(!t)throw new Error(cc._getError(7234));if(void 0!==e&&(t=cc.p(t,e)),t.x>=this._layerSize.width||t.y>=this._layerSize.height||t.x<0||t.y<0)throw new Error(cc._getError(7235));if(this.tiles){if(0!==this.getTileGIDAt(t)){var i=Math.floor(t.x)+Math.floor(t.y)*this._layerSize.width;this.tiles[i]=0;var n=this._spriteTiles[i];n&&this.removeChild(n,!0)}}else cc.logID(7209)},getPositionAt:function(t,e){void 0!==e&&(t=cc.p(t,e)),t.x=Math.floor(t.x),t.y=Math.floor(t.y);var i=cc.p(0,0);switch(this.layerOrientation){case cc.TiledMap.Orientation.ORTHO:i=this._positionForOrthoAt(t);break;case cc.TiledMap.Orientation.ISO:i=this._positionForIsoAt(t);break;case cc.TiledMap.Orientation.HEX:i=this._positionForHexAt(t)}return i},_positionForIsoAt:function(t){return cc.p(this._mapTileSize.width/2*(this._layerSize.width+t.x-t.y-1),this._mapTileSize.height/2*(2*this._layerSize.height-t.x-t.y-2))},_positionForOrthoAt:function(t){return cc.p(t.x*this._mapTileSize.width,(this._layerSize.height-t.y-1)*this._mapTileSize.height)},_positionForHexAt:function(t){var e=cc.p(0,0),i=this.tileset.tileOffset,n=this._staggerIndex===cc.TiledMap.StaggerIndex.STAGGERINDEX_ODD?1:-1;switch(this._staggerAxis){case cc.TiledMap.StaggerAxis.STAGGERAXIS_Y:var r=0;t.y%2==1&&(r=this._mapTileSize.width/2*n),e=cc.p(t.x*this._mapTileSize.width+r+i.x,(this._layerSize.height-t.y-1)*(this._mapTileSize.height-(this._mapTileSize.height-this._hexSideLength)/2)-i.y);break;case cc.TiledMap.StaggerAxis.STAGGERAXIS_X:var s=0;t.x%2==1&&(s=this._mapTileSize.height/2*-n),e=cc.p(t.x*(this._mapTileSize.width-(this._mapTileSize.width-this._hexSideLength)/2)+i.x,(this._layerSize.height-t.y-1)*this._mapTileSize.height+s-i.y)}return e},_calculateLayerOffset:function(t){var e=cc.p(0,0);switch(this.layerOrientation){case cc.TiledMap.Orientation.ORTHO:e=cc.p(t.x*this._mapTileSize.width,-t.y*this._mapTileSize.height);break;case cc.TiledMap.Orientation.ISO:e=cc.p(this._mapTileSize.width/2*(t.x-t.y),this._mapTileSize.height/2*(-t.x-t.y));break;case cc.TiledMap.Orientation.HEX:if(this._staggerAxis===cc.TiledMap.StaggerAxis.STAGGERAXIS_Y){var 
i=this._staggerIndex===cc.TiledMap.StaggerIndex.STAGGERINDEX_EVEN?this._mapTileSize.width/2:0;e=cc.p(t.x*this._mapTileSize.width+i,-t.y*(this._mapTileSize.height-(this._mapTileSize.width-this._hexSideLength)/2))}else if(this._staggerAxis===cc.TiledMap.StaggerAxis.STAGGERAXIS_X){var n=this._staggerIndex===cc.TiledMap.StaggerIndex.STAGGERINDEX_ODD?this._mapTileSize.height/2:0;e=cc.p(t.x*(this._mapTileSize.width-(this._mapTileSize.width-this._hexSideLength)/2),-t.y*this._mapTileSize.height+n)}}return e},_updateTileForGID:function(t,e){if(this._texGrids[t]){var i=0|e.x+e.y*this._layerSize.width;i>>0){t.setAnchorPoint(.5,.5),t.setPosition(n.x+t.width/2,n.y+t.height/2);var r=(i&(cc.TiledMap.TileFlag.HORIZONTAL|cc.TiledMap.TileFlag.VERTICAL)>>>0)>>>0;r===cc.TiledMap.TileFlag.HORIZONTAL?t.setRotation(90):r===cc.TiledMap.TileFlag.VERTICAL?t.setRotation(270):r===(cc.TiledMap.TileFlag.VERTICAL|cc.TiledMap.TileFlag.HORIZONTAL)>>>0?(t.setRotation(90),t.setFlippedX(!0)):(t.setRotation(270),t.setFlippedX(!0))}else(i&cc.TiledMap.TileFlag.HORIZONTAL)>>>0&&t.setFlippedX(!0),(i&cc.TiledMap.TileFlag.VERTICAL)>>>0&&t.setFlippedY(!0)},_vertexZForPos:function(t,e){void 0===e&&(e=t.y,t=t.x);var i=0;if(this._useAutomaticVertexZ)switch(this.layerOrientation){case cc.TiledMap.Orientation.ISO:i=-(this._layerSize.width+this._layerSize.height-(t+e));break;case cc.TiledMap.Orientation.ORTHO:i=-(this._layerSize.height-e);break;case cc.TiledMap.Orientation.HEX:cc.logID(7210);break;default:cc.logID(7211)}else i=this._vertexZvalue;return i}})}),{}],281:[(function(t,e,i){t("../shape-nodes/CCDrawNode"),_ccsg.TMXObject=cc.Class({properties:{sgNode:null,offset:cc.p(0,0),gid:0,name:"",type:null,id:0,objectVisible:!0,objectSize:cc.size(0,0),objectRotation:0,_properties:null,_groupSize:cc.size(0,0)},initWithInfo:function(t,e,i,n){this.setProperties(t),this.setObjectName(t.name),this.id=t.id,this.gid=t.gid,this.type=t.type,this.offset=cc.p(t.x,t.y),this.objectSize=cc.size(t.width,t.height),this.objectVisible=t.visible,this.objectRotation=t.rotation,this._groupSize=i,this.type===cc.TiledMap.TMXObjectType.IMAGE?this.sgNode=new _ccsg.TMXObjectImage(this,e):this.sgNode=new _ccsg.TMXObjectShape(this,e,n)},getObjectName:function(){return this.name},getProperty:function(t){return this._properties[t]},getProperties:function(){return this._properties},setObjectName:function(t){this.name=t},setProperties:function(t){this._properties=t}}),_ccsg.TMXObjectImage=_ccsg.Sprite.extend({_container:null,ctor:function(t,e){_ccsg.Sprite.prototype.ctor.call(this),this._container=t,this.initWithMapInfo(e)},initWithMapInfo:function(t){if(!this._container.gid)return!1;for(var e,i=t.getTilesets(),n=i.length-1;n>=0;n--){var r=i[n];if((this._container.gid&cc.TiledMap.TileFlag.FLIPPED_MASK)>>>0>=r.firstGid){e=r;break}}if(!e)return!1;this.setVisible(this._container.objectVisible);var s=r.sourceImage;return s&&this._initWithTileset(s,e),this._initPosWithMapInfo(t),this.setRotation(this._container.objectRotation),(this._container.gid&cc.TiledMap.TileFlag.HORIZONTAL)>>>0&&this.setFlippedX(!0),(this._container.gid&cc.TiledMap.TileFlag.VERTICAL)>>>0&&this.setFlippedY(!0),!0},_initWithTileset:function(t,e){if(t.loaded){e.imageSize.width=t.width,e.imageSize.height=t.height;var i=e.rectForGID(this._container.gid);this.initWithTexture(t,i),this.setScaleX(this._container.objectSize.width/i.size.width),this.setScaleY(this._container.objectSize.height/i.size.height)}else 
t.once("load",(function(){this._initWithTileset(t,e)}),this)},_initPosWithMapInfo:function(t){switch(t.getOrientation()){case cc.TiledMap.Orientation.ORTHO:case cc.TiledMap.Orientation.HEX:this.setAnchorPoint(cc.p(0,0)),this.setPosition(this._container.offset.x,this._container._groupSize.height-this._container.offset.y);break;case cc.TiledMap.Orientation.ISO:this.setAnchorPoint(cc.p(.5,0));var e=cc.p(this._container.offset.x/t._tileSize.height,this._container.offset.y/t._tileSize.height),i=cc.p(t._tileSize.width/2*(t._mapSize.width+e.x-e.y),t._tileSize.height/2*(2*t._mapSize.height-e.x-e.y));this.setPosition(i)}}}),_ccsg.TMXObjectShape=cc.DrawNode.extend({_container:null,_color:cc.Color.WHITE,_mapOrientation:0,_mapInfo:null,ctor:function(t,e,i){cc.DrawNode.prototype.ctor.call(this),this.setLineWidth(1),this._container=t,this._color=i,this._mapInfo=e,this._mapOrientation=e.getOrientation(),this._initShape()},_initShape:function(){var t;if(cc.TiledMap.Orientation.ISO!==this._mapOrientation){var e=cc.p(0,this._container._groupSize.height);t=cc.p(e.x+this._container.offset.x,e.y-this._container.offset.y)}else t=this._getPosByOffset(cc.p(0,0));switch(this.setPosition(t),this.setRotation(this._container.objectRotation),this._container.type){case cc.TiledMap.TMXObjectType.RECT:this._drawRect();break;case cc.TiledMap.TMXObjectType.ELLIPSE:this._drawEllipse();break;case cc.TiledMap.TMXObjectType.POLYGON:this._drawPoly(t,!0);break;case cc.TiledMap.TMXObjectType.POLYLINE:this._drawPoly(t,!1)}this.setVisible(this._container.objectVisible)},_getPosByOffset:function(t){var e=this._mapInfo.getMapSize(),i=this._mapInfo.getTileSize(),n=cc.p((this._container.offset.x+t.x)/i.width*2,(this._container.offset.y+t.y)/i.height);return cc.p(i.width/2*(e.width+n.x-n.y),i.height/2*(2*e.height-n.x-n.y))},_drawRect:function(){if(cc.TiledMap.Orientation.ISO!==this._mapOrientation){var t=this._container.objectSize;t.equals(cc.Size.ZERO)?(t=cc.size(20,20),this.setAnchorPoint(cc.p(.5,.5))):this.setAnchorPoint(cc.p(0,1));var e=cc.p(0,0),i=cc.p(t.width,t.height);this.drawRect(e,i,null,this.getLineWidth(),this._color),this.setContentSize(t)}else{if(this._container.objectSize.equals(cc.Size.ZERO))return;var n=this._getPosByOffset(cc.p(0,0)),r=this._getPosByOffset(cc.p(this._container.objectSize.width,0)),s=this._getPosByOffset(cc.p(this._container.objectSize.width,this._container.objectSize.height)),o=this._getPosByOffset(cc.p(0,this._container.objectSize.height)),a=r.x-o.x,c=n.y-s.y;this.setContentSize(cc.size(a,c)),this.setAnchorPoint(cc.p((n.x-o.x)/a,1));var h=cc.p(o.x,s.y);n.subSelf(h),r.subSelf(h),s.subSelf(h),o.subSelf(h),this._container.objectSize.width>0&&(this.drawSegment(n,r,this.getLineWidth(),this._color),this.drawSegment(s,o,this.getLineWidth(),this._color)),this._container.objectSize.height>0&&(this.drawSegment(n,o,this.getLineWidth(),this._color),this.drawSegment(s,r,this.getLineWidth(),this._color))}},_drawEllipse:function(){var t=1,e=1,i=0,n=cc.p(0,0),r=null;if(cc.TiledMap.Orientation.ISO!==this._mapOrientation){var s=this._container.objectSize;s.equals(cc.Size.ZERO)?(s=cc.size(20,20),this.setAnchorPoint(cc.p(.5,.5))):this.setAnchorPoint(cc.p(0,1)),n=cc.p(s.width/2,s.height/2),s.width>s.height?(t=s.width/s.height,i=s.height/2):(e=s.height/s.width,i=s.width/2),r=this,this.setContentSize(s)}else{if(this._container.objectSize.equals(cc.Size.ZERO))return;var 
o=this._getPosByOffset(cc.p(0,0)),a=this._getPosByOffset(cc.p(this._container.objectSize.width,0)),c=this._getPosByOffset(cc.p(this._container.objectSize.width,this._container.objectSize.height)),h=this._getPosByOffset(cc.p(0,this._container.objectSize.height)),l=a.x-h.x,u=o.y-c.y;this.setContentSize(cc.size(l,u)),this.setAnchorPoint(cc.p((o.x-h.x)/l,1));var _=cc.p(h.x,c.y);o.subSelf(_),a.subSelf(_),c.subSelf(_),h.subSelf(_),this._container.objectSize.width>0&&(this.drawSegment(o,a,this.getLineWidth(),this._color),this.drawSegment(c,h,this.getLineWidth(),this._color)),this._container.objectSize.height>0&&(this.drawSegment(o,h,this.getLineWidth(),this._color),this.drawSegment(c,a,this.getLineWidth(),this._color)),(n=this._getPosByOffset(cc.p(this._container.objectSize.width/2,this._container.objectSize.height/2))).subSelf(_),(r=new cc.DrawNode).setLineWidth(this.getLineWidth()),r.setContentSize(cc.size(l,u)),r.setAnchorPoint(cc.p(.5,.5)),r.setPosition(n),this.addChild(r),this._container.objectSize.width>this._container.objectSize.height?(t=this._container.objectSize.width/this._container.objectSize.height,i=this._container.objectSize.height/2):(e=this._container.objectSize.height/this._container.objectSize.width,i=this._container.objectSize.width/2);var d=this._mapInfo.getTileSize(),f=Math.atan(d.width/d.height);i/=Math.sin(f),r.setRotationX(cc.radiansToDegrees(f)),r.setRotationY(90-cc.radiansToDegrees(f))}r.drawCircle(n,i,0,50,!1,this.getLineWidth(),this._color),r.setScaleX(t),r.setScaleY(e)},_drawPoly:function(t,e){for(var i,n=this._container.getProperties(),r=[],s=0,o=0,a=0,c=0,h=0,l=(i=e?n.points:n.polylinePoints).length;h=0;i--){var n=e[i];n&&(n instanceof _ccsg.TMXLayer||n instanceof _ccsg.TMXObjectGroup)&&this.removeChild(n)}var r=0,s=t.getAllChildren();if(s&&s.length>0)for(var o=0,a=s.length;o=0;r--){var s=n[r];if(s)for(var o=0;o>>0>=s.firstGid)return s}}return cc.logID(7215,t.name),null}});var n=_ccsg.TMXTiledMap.prototype;n.mapWidth,cc.defineGetterSetter(n,"mapWidth",n._getMapWidth,n._setMapWidth),n.mapHeight,cc.defineGetterSetter(n,"mapHeight",n._getMapHeight,n._setMapHeight),n.tileWidth,cc.defineGetterSetter(n,"tileWidth",n._getTileWidth,n._setTileWidth),n.tileHeight,cc.defineGetterSetter(n,"tileHeight",n._getTileHeight,n._setTileHeight)}),{"./CCSGTMXObject":281,"./CCTMXXMLParser":286}],284:[(function(t,e,i){var n=null,r=null,s=null,o=null,a=null;_ccsg.TMXLayer.CanvasRenderCmd=function(t){this._rootCtor(t),this._needDraw=!0,n||(n=cc.TiledMap.Orientation,r=cc.TiledMap.TileFlag,s=r.FLIPPED_MASK,o=cc.TiledMap.StaggerAxis,a=cc.TiledMap.StaggerIndex)};var c=_ccsg.TMXLayer.CanvasRenderCmd.prototype=Object.create(_ccsg.Node.CanvasRenderCmd.prototype);c.constructor=_ccsg.TMXLayer.CanvasRenderCmd,c.rendering=function(t,e,i){var c=this._node,h=c.layerOrientation,l=c.tiles,u=c._opacity/255;if(l&&!(u<=0)&&c.tileset){var _=c._mapTileSize.width,d=c._mapTileSize.height,f=c.tileset._tileSize.width/cc.director._contentScaleFactor,m=c.tileset._tileSize.height/cc.director._contentScaleFactor,p=f-_,g=m-d,y=cc.winSize.width,v=cc.winSize.height,x=c._layerSize.height,C=c._layerSize.width,T=c._texGrids,A=c._spriteTiles,b=this._worldTransform,S=c._position.x,E=c._position.y,w=b.a,I=b.b,R=b.c,P=b.d,O=S*w+E*R+b.tx,D=S*I+E*P+b.ty,B=t||cc._renderContext,L=B.getContext(),M=0,N=0,F=C,z=x;cc.macro.ENABLE_TILEDMAP_CULLING&&h===n.ORTHO&&(M=Math.floor(-(O-p*w)/(_*w)),N=Math.floor((D-g*P+d*x*P-v)/(d*P)),F=Math.ceil((y-O+p*w)/(_*w)),z=x-Math.floor(-(D+g*P)/(d*P)),M<0&&(M=0),N<0&&(N=0),F>C&&(F=C),z>x&&(z=x));var 
k,V,G,U,W,X,j,Y,H,q,J,Z,K,Q,$,tt,et,it,nt,rt=N*C,st=f,ot=m,at=f*w,ct=m*P,ht=!1,lt=!1;for(k in U=rt+M,A)if(k=U)break;if(B.setTransform(b,e,i),B.setGlobalAlpha(u),h===n.HEX){var ut=c._staggerIndex,_t=c._hexSideLength;$=c._staggerAxis,tt=c.tileset.tileOffset,et=ut===a.STAGGERINDEX_ODD?1:-1,it=$===o.STAGGERAXIS_X?(_-_t)/2:0,nt=$===o.STAGGERAXIS_Y?(d-_t)/2:0}for(V=N;V>>0])&&(j=c._textures[X.texId])&&j._image){switch(h){case n.ORTHO:q=G*_,J=-(x-V-1)*d;break;case n.ISO:q=_/2*(C+G-V-1),J=-d/2*(2*x-G-V-2);break;case n.HEX:q=G*(_-it)+($===o.STAGGERAXIS_Y&&V%2==1?_/2*et:0)+tt.x,J=-(x-V-1)*(d-nt)-($===o.STAGGERAXIS_X&&G%2==1?d/2*-et:0)+tt.y}if(Z=q+f,H=J-m,h===n.ISO){if((K=J*P-D)<-v-ct){G+=Math.floor(2*(-v-K)/ct)-1;continue}if((Q=O+Z*w)<-at){G+=Math.floor(2*-Q/at)-1;continue}if(O+q*w>y||H*P-D>0){G=F;continue}}W>r.DIAGONAL&&(ht=(W&r.HORIZONTAL)>>>0,lt=(W&r.VERTICAL)>>>0),ht&&(q=-Z,L.scale(-1,1)),lt&&(H=-J,L.scale(1,-1)),L.drawImage(j._image,X.x,X.y,X.width,X.height,q,H,st,ot),ht&&L.scale(-1,1),lt&&L.scale(1,-1),cc.g_NumberOfDraws++}rt+=C}for(k in A)k>U&&A[k]&&(Y=A[k]._renderCmd,0===A[k]._localZOrder&&Y.rendering&&A[k]._visible&&Y.rendering(t,e,i))}}}),{}],285:[(function(t,e,i){var n=null,r=null,s=null,o=null,a=null;function c(){cc.renderer.setDepthTest(!1)}_ccsg.TMXLayer.WebGLRenderCmd=function(t){this._rootCtor(t),this._needDraw=!0,this._vertices=[{x:0,y:0},{x:0,y:0},{x:0,y:0},{x:0,y:0}],this._color=new Uint32Array(1),this._shaderProgram=cc.shaderCache.programForKey(cc.macro.SHADER_SPRITE_POSITION_TEXTURECOLORALPHATEST);var e=90*Math.PI/180;this._sin90=Math.sin(e),this._cos90=Math.cos(e),e*=3,this._sin270=Math.sin(e),this._cos270=Math.cos(e),n||(n=cc.TiledMap.Orientation,r=cc.TiledMap.TileFlag,s=r.FLIPPED_MASK,o=cc.TiledMap.StaggerAxis,a=cc.TiledMap.StaggerIndex),this._disableDepthTestCmd=new cc.CustomRenderCmd(this,c)};var h=_ccsg.TMXLayer.WebGLRenderCmd.prototype=Object.create(_ccsg.Node.WebGLRenderCmd.prototype);h.constructor=_ccsg.TMXLayer.WebGLRenderCmd,h.uploadData=function(t,e,i){var c=this._node,h=c.layerOrientation,l=c.tiles,u=c._opacity/255;if(cc.renderer.setDepthTest(c.layerOrientation===n.ORTHO),!l||u<=0||!c.tileset)return 0;var _=c._mapTileSize.width,d=c._mapTileSize.height,f=c.tileset._tileSize.width/cc.director._contentScaleFactor,m=c.tileset._tileSize.height/cc.director._contentScaleFactor,p=f-_,g=m-d,y=cc.winSize.width,v=cc.winSize.height,x=c._layerSize.height,C=c._layerSize.width,T=c._texGrids,A=c._spriteTiles,b=this._worldTransform,S=b.a,E=b.b,w=b.c,I=b.d,R=b.tx,P=b.ty,O=c._position.x,D=c._position.y,B=O*S+D*w+R,L=O*E+D*I+P,M=f*S,N=m*I,F=c._opacity,z=this._displayedColor.r,k=this._displayedColor.g,V=this._displayedColor.b;if(c._opacityModifyRGB){var G=F/255;z*=G,k*=G,V*=G}this._color[0]=F<<24|V<<16|k<<8|z;var U,W=0,X=0,j=C,Y=x,H=S,q=I,J=B,Z=L,K=M,Q=N,$=cc.macro.ENABLE_TILEDMAP_CULLING,tt=this._vertices;if($&&(this._cameraFlag>0&&(H=(U=cc.affineTransformConcat(b,cc.Camera.main.viewMatrix)).a,q=U.d,J=O*H+D*U.c+U.tx,Z=O*U.b+D*q+U.ty,K=f*H,Q=m*q),h===n.ORTHO)){this._cameraFlag<=0&&(U=cc.affineTransformClone(b)),cc.affineTransformInvertOut(U,U);var et=cc.visibleRect;tt[0].x=et.topLeft.x*U.a+et.topLeft.y*U.c+U.tx,tt[0].y=et.topLeft.x*U.b+et.topLeft.y*U.d+U.ty,tt[1].x=et.bottomLeft.x*U.a+et.bottomLeft.y*U.c+U.tx,tt[1].y=et.bottomLeft.x*U.b+et.bottomLeft.y*U.d+U.ty,tt[2].x=et.topRight.x*U.a+et.topRight.y*U.c+U.tx,tt[2].y=et.topRight.x*U.b+et.topRight.y*U.d+U.ty,tt[3].x=et.bottomRight.x*U.a+et.bottomRight.y*U.c+U.tx,tt[3].y=et.bottomRight.x*U.b+et.bottomRight.y*U.d+U.ty;var 
it=Math.min(tt[0].x,tt[1].x,tt[2].x,tt[3].x),nt=Math.max(tt[0].x,tt[1].x,tt[2].x,tt[3].x),rt=Math.min(tt[0].y,tt[1].y,tt[2].y,tt[3].y),st=Math.max(tt[0].y,tt[1].y,tt[2].y,tt[3].y);W=Math.floor(it/_),X=x-Math.ceil(st/d),j=Math.ceil((nt+p)/_),Y=x-Math.floor((rt-g)/d),W<0&&(W=0),X<0&&(X=0),j>C&&(j=C),Y>x&&(Y=x)}var ot,at,ct,ht,lt,ut,_t,dt,ft,mt,pt,gt,yt,vt,xt,Ct,Tt,At=i,bt=X*C,St=S,Et=E,wt=w,It=I,Rt=Math.floor(R)+.5,Pt=Math.floor(P)+.5,Ot=!1,Dt=!1,Bt=!1;if(h===n.HEX){var Lt=c._staggerIndex,Mt=c._hexSideLength;yt=c._staggerAxis,vt=c.tileset.tileOffset,Tt=Lt===a.STAGGERINDEX_ODD?1:-1,xt=yt===o.STAGGERAXIS_X?(_-Mt)/2:0,Ct=yt===o.STAGGERAXIS_Y?(d-Mt)/2:0}for(ot=X;ott.length&&(cc.renderer._increaseBatchingSize((At-i)/6,cc.renderer.VertexType.QUAD),cc.renderer._batchRendering(),i=0,At=0),A[ct=bt+at])A[ct]._vertexZ=c._vertexZ+cc.renderer.assignedZStep*ct/l.length;else if(lt=T[((ht=c.tiles[ct])&s)>>>0]){switch(h){case n.ORTHO:dt=at*_,ft=(x-ot-1)*d,ct=c._vertexZ+cc.renderer.assignedZStep*ct/l.length;break;case n.ISO:dt=_/2*(C+at-ot-1),ft=d/2*(2*x-at-ot-2),ct=c._vertexZ+cc.renderer.assignedZStep*(c.height-ft)/c.height;break;case n.HEX:dt=at*(_-xt)+(yt===o.STAGGERAXIS_Y&&ot%2==1?_/2*Tt:0)+vt.x,ft=(x-ot-1)*(d-Ct)+(yt===o.STAGGERAXIS_X&&at%2==1?d/2*-Tt:0)-vt.y,ct=c._vertexZ+cc.renderer.assignedZStep*(c.height-ft)/c.height}if(mt=dt+f,_t=ft+m,$&&h===n.ISO){if((pt=Z+ft*q)>v+Q){at+=Math.floor(2*(pt-v)/Q)-1;continue}if((gt=J+mt*H)<-K){at+=Math.floor(2*-gt/K)-1;continue}if(J+dt*H>y||Z+_t*q<0){at=j;continue}}for(ht>r.DIAGONAL&&(Ot=!0,Dt=(ht&r.HORIZONTAL)>>>0,Bt=(ht&r.VERTICAL)>>>0),tt[0].x=dt*St+_t*wt+Rt,tt[0].y=dt*Et+_t*It+Pt,tt[1].x=dt*St+ft*wt+Rt,tt[1].y=dt*Et+ft*It+Pt,tt[2].x=mt*St+_t*wt+Rt,tt[2].y=mt*Et+_t*It+Pt,tt[3].x=mt*St+ft*wt+Rt,tt[3].y=mt*Et+ft*It+Pt,ut=0;ut<4;++ut){switch(t[At]=tt[ut].x,t[At+1]=tt[ut].y,t[At+2]=ct,e[At+3]=this._color[0],ut){case 0:t[At+4]=Dt?lt.r:lt.l,t[At+5]=Bt?lt.b:lt.t;break;case 1:t[At+4]=Dt?lt.r:lt.l,t[At+5]=Bt?lt.t:lt.b;break;case 2:t[At+4]=Dt?lt.l:lt.r,t[At+5]=Bt?lt.b:lt.t;break;case 3:t[At+4]=Dt?lt.l:lt.r,t[At+5]=Bt?lt.t:lt.b}At+=6}Ot&&(St=S,Et=E,wt=w,It=I,Rt=R,Pt=P,Dt=!1,Bt=!1,Ot=!1)}bt+=C}return(At-i)/6}}),{}],286:[(function(t,e,i){t("../core/platform/CCSAXParser"),t("../compression/ZipUtils");var n=t("../compression/zlib.min");function r(t){for(var e=[],i=t.getElementsByTagName("properties"),n=0;n0&&(f.type=cc.TiledMap.TMXObjectType.ELLIPSE);var x=d.getElementsByTagName("polygon");if(x&&x.length>0){f.type=cc.TiledMap.TMXObjectType.POLYGON;var C=x[0].getAttribute("points");C&&(f.points=this._parsePointsString(C))}var T=d.getElementsByTagName("polyline");if(T&&T.length>0){f.type=cc.TiledMap.TMXObjectType.POLYLINE;var A=T[0].getAttribute("points");A&&(f.polylinePoints=this._parsePointsString(A))}f.type||(f.type=cc.TiledMap.TMXObjectType.RECT),e._objects.push(f)}return e},_parsePointsString:function(t){if(!t)return null;for(var e=[],i=t.split(" "),n=0;n=0;i--){var n=e[i].getComponent(cc.TiledLayer);n&&(n.enabled=t)}},_moveLayersInSgNode:function(t){this._detachedChildren.length=0;for(var e=t.getChildren(),i=e.length-1;i>=0;i--){var n=e[i];if(n instanceof _ccsg.TMXLayer||n instanceof _ccsg.TMXObjectGroup){t.removeChild(n);var r=n.getLocalZOrder();this._detachedChildren.push({sgNode:n,zorder:r})}}},_removeLayerEntities:function(){for(var t=this.node.getChildren(),e=t.length-1;e>=0;e--){var i=t[e];if(i.isValid){var n=i.getComponent(cc.TiledLayer);n&&n._tryRemoveNode();var r=i.getComponent(cc.TiledObjectGroup);r&&r._tryRemoveNode()}}},_refreshLayerEntities:function(){var 
t,e,i=this.node.getChildren(),n=[],r=[],s=[];for(t=0;t=0;t--){var h=i[t],l=h.getComponent(cc.TiledLayer),u=h.getComponent(cc.TiledObjectGroup);if(l){var _=l.getLayerName();if(_||(_=h._name),a.indexOf(_)<0)l._tryRemoveNode();else{n.push(h);var d=this._sgNode.getLayer(_);l._replaceSgNode(d),l.enabled=!0}}else if(u){var f=u.getGroupName();if(f||(f=h._name),c.indexOf(f)<0)u._tryRemoveNode();else{r.push(h);var m=this._sgNode.getObjectGroup(f);u._replaceSgNode(m),u.enabled=m.isVisible()}}else s.push({child:h,index:h.getSiblingIndex()})}var p=n.map((function(t){return t.getComponent(cc.TiledLayer).getLayerName()}));for(t=0,e=a.length;t=0;t--){var P=w[t];if(t!==E.indexOf(P))this.node.getChildByName(P).setSiblingIndex(I[t].getLocalZOrder())}for(t=0,e=s.length;t0&&(c[o[h]]=a[h].text);t.initWithXML(e.tmxXmlStr,c,r)&&(this._detachedChildren.length=0,this._onMapLoaded())}else{for(var l=t.allLayers(),u=0,_=l.length;u<_;u++)t.removeChild(l[u]);for(var d=t.getObjectGroups(),f=0,m=d.length;f":0});dragonBones.ArmatureDisplay=cc.Class({name:"dragonBones.ArmatureDisplay",extends:cc._RendererUnderSG,editor:!1,properties:{_factory:{default:null,type:dragonBones.CCFactory,serializable:!1},dragonAsset:{default:null,type:dragonBones.DragonBonesAsset,notify:function(){this._parseDragonAsset(),this._refresh()},tooltip:!1},dragonAtlasAsset:{default:null,type:dragonBones.DragonBonesAtlasAsset,notify:function(){this._parseDragonAtlasAsset(),this._refreshSgNode()},tooltip:!1},_armatureName:"",armatureName:{get:function(){return this._armatureName},set:function(t){this._armatureName=t;var e=this.getAnimationNames(this._armatureName);(!this.animationName||e.indexOf(this.animationName)<0)&&(this.animationName=""),this._refresh()},visible:!1},_animationName:"",animationName:{get:function(){return this._animationName},set:function(t){this._animationName=t},visible:!1},_defaultArmatureIndex:{default:0,notify:function(){var t="";if(this.dragonAsset){var e;if(this.dragonAsset&&(e=this.dragonAsset.getArmatureEnum()),!e)return cc.errorID(7400,this.name);t=e[this._defaultArmatureIndex]}void 0!==t?this.armatureName=t:cc.errorID(7401,this.name)},type:n,visible:!0,editorOnly:!0,displayName:"Armature",tooltip:!1},_animationIndex:{default:0,notify:function(){var t;if(0!==this._animationIndex){if(this.dragonAsset&&(t=this.dragonAsset.getAnimsEnum(this.armatureName)),t){var e=t[this._animationIndex];void 0!==e?this.animationName=e:cc.errorID(7402,this.name)}}else this.animationName=""},type:r,visible:!0,editorOnly:!0,displayName:"Animation",tooltip:!1},timeScale:{default:1,notify:function(){this._sgNode&&(this._sgNode.animation().timeScale=this.timeScale)},tooltip:!1},playTimes:{default:-1,tooltip:!1},debugBones:{default:!1,notify:function(){this._sgNode&&this._sgNode.setDebugBones(this.debugBones)},editorOnly:!0,tooltip:!1}},ctor:function(){this._factory=dragonBones.CCFactory.getInstance()},__preload:function(){this._parseDragonAsset(),this._parseDragonAtlasAsset(),this._refresh()},_createSgNode:function(){return this.dragonAsset&&this.dragonAtlasAsset&&this.armatureName?this._factory.buildArmatureDisplay(this.armatureName,this.dragonAsset._dragonBonesData.name):null},_initSgNode:function(){var t=this._sgNode;t.animation().timeScale=this.timeScale,this.animationName&&this.playAnimation(this.animationName,this.playTimes)},_removeSgNode:function(){var 
t=this._sgNode;this._super(),t&&t.armature().dispose()},_parseDragonAsset:function(){this.dragonAsset&&this.dragonAsset.init(this._factory)},_parseDragonAtlasAsset:function(){this.dragonAtlasAsset&&this.dragonAtlasAsset.init(this._factory)},_refreshSgNode:function(){var t=null,e=null;this._sgNode&&(t=this._sgNode._bubblingListeners,e=this._sgNode._hasListenerCache,this.node._sizeProvider===this._sgNode&&(this.node._sizeProvider=null),this._removeSgNode(),this._sgNode=null);var i=this._sgNode=this._createSgNode();i&&(this.enabledInHierarchy||i.setVisible(!1),t&&(i._bubblingListeners=t,i._hasListenerCache=e),this._initSgNode(),this._appendSgNode(i),this._registSizeProvider())},_refresh:function(){this._refreshSgNode()},_updateAnimEnum:!1,_updateArmatureEnum:!1,playAnimation:function(t,e){return this._sgNode?(this.playTimes=void 0===e?-1:e,this.animationName=t,this._sgNode.animation().play(t,this.playTimes)):null},getArmatureNames:function(){var t=this.dragonAsset&&this.dragonAsset._dragonBonesData;return t&&t.armatureNames||[]},getAnimationNames:function(t){var e=[];if(this.dragonAsset&&this.dragonAsset._dragonBonesData){var i=this.dragonAsset._dragonBonesData.getArmature(t);if(i)for(var n in i.animations)i.animations.hasOwnProperty(n)&&e.push(n)}return e},addEventListener:function(t,e,i){this._sgNode&&this._sgNode.addEvent(t,e,i)},removeEventListener:function(t,e,i){this._sgNode&&this._sgNode.removeEvent(t,e,i)},buildArmature:function(t){return this._factory?this._factory.buildArmature(t):null},armature:function(){return this._sgNode?this._sgNode.armature():null}})}),{}],295:[(function(t,e,i){var n=t("../../cocos2d/core/event/event-target");t("../../cocos2d/shape-nodes/CCDrawNode"),dragonBones.CCArmatureDisplay=cc.Class({name:"dragonBones.CCArmatureDisplay",extends:_ccsg.Node,mixins:[n],ctor:function(){this._armature=null,this._debugDrawer=null},_onClear:function(){this._armature=null},_dispatchEvent:function(t){this.emit(t.type,t)},_debugDraw:function(){if(this._armature){this._debugDrawer||(this._debugDrawer=new cc.DrawNode,this.addChild(this._debugDrawer),this._debugDrawer.setDrawColor(cc.color(255,0,0,255)),this._debugDrawer.setLineWidth(1)),this._debugDrawer.clear();for(var t=this._armature.getBones(),e=0,i=t.length;e0?r.actions:h.armatureData.actions;if(l.length>0)for(var u=0,_=l.length;u<_;++u)h._bufferAction(l[u]);else h.animation.play()}c.armature=h.armatureData}s.push(h);break;default:s.push(null)}}return i._setDisplayList(s),i._rawDisplay.setLocalZOrder(r.zOrder),i},getTextureDisplay:function(t,e){var i=this._getTextureData(e,t);if(i){if(!i.texture){var n=i.parent.texture,r=cc.rect(i.region.x,i.region.y,i.region.width,i.region.height),s=cc.p(0,0),o=cc.size(i.region.width,i.region.height);i.texture=new cc.SpriteFrame,i.texture.setTexture(n,r,i.rotated,s,o)}return new cc.Scale9Sprite(i.texture)}return null}})}),{}],297:[(function(t,e,i){dragonBones.CCSlot=cc.Class({name:"dragonBones.CCSlot",extends:dragonBones.Slot,ctor:function(){this._renderDisplay=null},statics:{toString:function(){return"[class dragonBones.CCSlot]"}},_onClear:function(){dragonBones.Slot.prototype._onClear.call(this),this._renderDisplay=null},_onUpdateDisplay:function(){this._rawDisplay||(this._rawDisplay=new cc.Scale9Sprite),this._renderDisplay=this._display||this._rawDisplay},_initDisplay:function(t){},_addDisplay:function(){this._armature._display.addChild(this._renderDisplay)},_replaceDisplay:function(t){var 
e=this._armature._display,i=t;e.addChild(this._renderDisplay,i.getLocalZOrder()),e.removeChild(i)},_removeDisplay:function(){this._renderDisplay.removeFromParent()},_disposeDisplay:function(t){},_updateVisible:function(){this._renderDisplay.setVisible(this._parent.visible)},_updateZOrder:function(){this._renderDisplay._parent?this._renderDisplay.setLocalZOrder(this._zOrder):this._armature._display.addChild(this._renderDisplay,this._zOrder)},_updateBlendMode:function(){if(this._renderDisplay instanceof cc.Scale9Sprite)switch(this._blendMode){case 0:break;case 1:var t=this._renderDisplay._spriteFrame.getTexture();t&&t.hasPremultipliedAlpha()?this._renderDisplay.setBlendFunc(cc.BlendFunc.BlendFactor.ONE,cc.BlendFunc.BlendFactor.ONE):this._renderDisplay.setBlendFunc(cc.BlendFunc.BlendFactor.SRC_ALPHA,cc.BlendFunc.BlendFactor.ONE)}else if(this._childArmature)for(var e=this._childArmature.getSlots(),i=0,n=e.length;i=0){var t=this._displayIndexm&&(f.x=m),f.width-p&&(f.y=-p),f.height<-p&&(f.height=-p)}for(f.width-=f.x,f.height-=f.y,c=0,h=this._meshData.vertexIndices.length;c0,s=e.triangles.verts,o=cc.rect(999999,999999,-999999,-999999),a=0,c=0,h=0;if(this._meshData.skinned){var l=0;for(i=0,n=this._meshData.vertices.length;ic&&(o.x=c),o.width-h&&(o.y=-h),o.height<-h&&(o.height=-h)}}else if(r){var x=this._meshData.vertices;for(i=0,n=this._meshData.vertices.length;ic&&(o.x=c),o.width-h&&(o.y=-h),o.height<-h&&(o.height=-h)}o.width-=o.x,o.height-=o.y,e.rect=o;var C=t.getNodeToParentTransform();t.setContentSize(cc.size(o.width,o.height)),t.setMeshPolygonInfo(e),this._renderDisplay._renderCmd.setNodeToParentTransform(C)}},_updateTransform:function(){var t={a:this.globalTransformMatrix.a,b:-this.globalTransformMatrix.b,c:-this.globalTransformMatrix.c,d:this.globalTransformMatrix.d,tx:this.globalTransformMatrix.tx-(this.globalTransformMatrix.a*this._pivotX+this.globalTransformMatrix.c*this._pivotY),ty:-(this.globalTransformMatrix.ty-(this.globalTransformMatrix.b*this._pivotX+this.globalTransformMatrix.d*this._pivotY))};this._renderDisplay._renderCmd.setNodeToParentTransform(t)}})}),{}],298:[(function(t,e,i){dragonBones.CCTextureAtlasData=cc.Class({extends:dragonBones.TextureAtlasData,properties:{texture:{default:null,serializable:!1}},statics:{toString:function(){return"[class dragonBones.CCTextureAtlasData]"}},_onClear:function(){dragonBones.TextureAtlasData.prototype._onClear.call(this),this.texture=null},generateTextureData:function(){return dragonBones.BaseObject.borrowObject(dragonBones.CCTextureData)}}),dragonBones.CCTextureData=cc.Class({extends:dragonBones.TextureData,properties:{texture:{default:null,serializable:!1}},statics:{toString:function(){return"[class dragonBones.CCTextureData]"}},_onClear:function(){dragonBones.TextureData.prototype._onClear.call(this),this.texture=null}})}),{}],299:[(function(t,e,i){var n=cc.Class({name:"dragonBones.DragonBonesAsset",extends:cc.Asset,ctor:function(){this.reset()},properties:{_dragonBonesJson:"",dragonBonesJson:{get:function(){return this._dragonBonesJson},set:function(t){this._dragonBonesJson=t,this.reset()}}},statics:{preventDeferredLoadDependents:!0},createNode:!1,reset:function(){this._dragonBonesData=null},init:function(t){if(this._dragonBonesData){var e=t.getDragonBonesData(this._dragonBonesData.name);if(e){for(var i=0;i=0},t.addArmature=function(e){e&&t._armatures.indexOf(e)<0&&t._armatures.push(e)},t.removeArmature=function(e){if(e){var 
i=t._armatures.indexOf(e);i>=0&&t._armatures.splice(i,1)}},t.PI_D=2*Math.PI,t.PI_H=Math.PI/2,t.PI_Q=Math.PI/4,t.ANGLE_TO_RADIAN=Math.PI/180,t.RADIAN_TO_ANGLE=180/Math.PI,t.SECOND_TO_MILLISECOND=1e3,t.NO_TWEEN=100,t.VERSION="4.7.2",t.debug=!1,t.debugDraw=!1,t._armatures=[],t})();t.DragonBones=e})(n||(n={})),(function(t){var e=(function(){function t(){this.hashCode=t._hashCode++}return t._returnObject=function(e){var i=String(e.constructor),n=null==t._maxCountMap[i]?t._defaultMaxCount:t._maxCountMap[i],r=t._poolsMap[i]=t._poolsMap[i]||[];if(r.lengthi&&(r.length=i)}else for(var n in t._defaultMaxCount=i,t._poolsMap){var r;if(null!=t._maxCountMap[n])t._maxCountMap[n]=i,(r=t._poolsMap[n]).length>i&&(r.length=i)}},t.clearPool=function(e){if(void 0===e&&(e=null),e)(n=t._poolsMap[String(e)])&&n.length&&(n.length=0);else for(var i in t._poolsMap){var n;(n=t._poolsMap[i]).length=0}},t.borrowObject=function(e){var i=t._poolsMap[String(e)];if(i&&i.length)return i.pop();var n=new e;return n._onClear(),n},t.prototype.returnToPool=function(){this._onClear(),t._returnObject(this)},t._hashCode=0,t._defaultMaxCount=5e3,t._maxCountMap={},t._poolsMap={},t})();t.BaseObject=e})(n||(n={})),(function(t){var e=(function(){function t(t,e,i,n,r,s,o,a){void 0===t&&(t=1),void 0===e&&(e=1),void 0===i&&(i=1),void 0===n&&(n=1),void 0===r&&(r=0),void 0===s&&(s=0),void 0===o&&(o=0),void 0===a&&(a=0),this.alphaMultiplier=t,this.redMultiplier=e,this.greenMultiplier=i,this.blueMultiplier=n,this.alphaOffset=r,this.redOffset=s,this.greenOffset=o,this.blueOffset=a}return t.prototype.copyFrom=function(t){this.alphaMultiplier=t.alphaMultiplier,this.redMultiplier=t.redMultiplier,this.greenMultiplier=t.greenMultiplier,this.blueMultiplier=t.blueMultiplier,this.alphaOffset=t.alphaOffset,this.redOffset=t.redOffset,this.redOffset=t.redOffset,this.greenOffset=t.blueOffset},t.prototype.identity=function(){this.alphaMultiplier=this.redMultiplier=this.greenMultiplier=this.blueMultiplier=1,this.alphaOffset=this.redOffset=this.greenOffset=this.blueOffset=0},t})();t.ColorTransform=e})(n||(n={})),(function(t){var e=(function(){function t(t,e){void 0===t&&(t=0),void 0===e&&(e=0),this.x=t,this.y=e}return t.prototype.copyFrom=function(t){this.x=t.x,this.y=t.y},t.prototype.clear=function(){this.x=this.y=0},t})();t.Point=e})(n||(n={})),(function(t){var e=(function(){function t(t,e,i,n,r,s){void 0===t&&(t=1),void 0===e&&(e=0),void 0===i&&(i=0),void 0===n&&(n=1),void 0===r&&(r=0),void 0===s&&(s=0),this.a=t,this.b=e,this.c=i,this.d=n,this.tx=r,this.ty=s}return t.prototype.toString=function(){return"[object dragonBones.Matrix] a:"+this.a+" b:"+this.b+" c:"+this.c+" d:"+this.d+" tx:"+this.tx+" ty:"+this.ty},t.prototype.copyFrom=function(t){this.a=t.a,this.b=t.b,this.c=t.c,this.d=t.d,this.tx=t.tx,this.ty=t.ty},t.prototype.identity=function(){this.a=this.d=1,this.b=this.c=0,this.tx=this.ty=0},t.prototype.concat=function(t){var e=this.a,i=this.b,n=this.c,r=this.d,s=this.tx,o=this.ty,a=t.a,c=t.b,h=t.c,l=t.d,u=t.tx,_=t.ty;this.a=e*a+i*h,this.b=e*c+i*l,this.c=n*a+r*h,this.d=n*c+r*l,this.tx=a*s+h*o+u,this.ty=l*o+c*s+_},t.prototype.invert=function(){var t=this.a,e=this.b,i=this.c,n=this.d,r=this.tx,s=this.ty,o=t*n-e*i;this.a=n/o,this.b=-e/o,this.c=-i/o,this.d=t/o,this.tx=(i*s-n*r)/o,this.ty=-(t*s-e*r)/o},t.prototype.transformPoint=function(t,e,i,n){void 0===n&&(n=!1),i.x=this.a*t+this.c*e,i.y=this.b*t+this.d*e,n||(i.x+=this.tx,i.y+=this.ty)},t})();t.Matrix=e})(n||(n={})),(function(t){var e=(function(){function t(t,e,i,n){void 0===t&&(t=0),void 0===e&&(e=0),void 
0===i&&(i=0),void 0===n&&(n=0),this.x=t,this.y=e,this.width=i,this.height=n}return t.prototype.copyFrom=function(t){this.x=t.x,this.y=t.y,this.width=t.width,this.height=t.height},t.prototype.clear=function(){this.x=this.y=0,this.width=this.height=0},t})();t.Rectangle=e})(n||(n={})),(function(t){var e=(function(){function t(t,e,i,n,r,s){void 0===t&&(t=0),void 0===e&&(e=0),void 0===i&&(i=0),void 0===n&&(n=0),void 0===r&&(r=1),void 0===s&&(s=1),this.x=t,this.y=e,this.skewX=i,this.skewY=n,this.scaleX=r,this.scaleY=s}return t.normalizeRadian=function(t){return t=(t+Math.PI)%(2*Math.PI),t+=t>0?-Math.PI:Math.PI},t.prototype.toString=function(){return"[object dragonBones.Transform] x:"+this.x+" y:"+this.y+" skewX:"+180*this.skewX/Math.PI+" skewY:"+180*this.skewY/Math.PI+" scaleX:"+this.scaleX+" scaleY:"+this.scaleY},t.prototype.copyFrom=function(t){return this.x=t.x,this.y=t.y,this.skewX=t.skewX,this.skewY=t.skewY,this.scaleX=t.scaleX,this.scaleY=t.scaleY,this},t.prototype.identity=function(){return this.x=this.y=this.skewX=this.skewY=0,this.scaleX=this.scaleY=1,this},t.prototype.add=function(t){return this.x+=t.x,this.y+=t.y,this.skewX+=t.skewX,this.skewY+=t.skewY,this.scaleX*=t.scaleX,this.scaleY*=t.scaleY,this},t.prototype.minus=function(e){return this.x-=e.x,this.y-=e.y,this.skewX=t.normalizeRadian(this.skewX-e.skewX),this.skewY=t.normalizeRadian(this.skewY-e.skewY),this.scaleX/=e.scaleX,this.scaleY/=e.scaleY,this},t.prototype.fromMatrix=function(t){var e=.25*Math.PI,i=this.scaleX,n=this.scaleY;return this.x=t.tx,this.y=t.ty,this.skewX=Math.atan(-t.c/t.d),this.skewY=Math.atan(t.b/t.a),this.skewX!=this.skewX&&(this.skewX=0),this.skewY!=this.skewY&&(this.skewY=0),this.scaleY=this.skewX>-e&&this.skewX-e&&this.skewY=0&&this.scaleX<0&&(this.scaleX=-this.scaleX,this.skewY=this.skewY-Math.PI),n>=0&&this.scaleY<0&&(this.scaleY=-this.scaleY,this.skewX=this.skewX-Math.PI),this},t.prototype.toMatrix=function(t){return t.a=this.scaleX*Math.cos(this.skewY),t.b=this.scaleX*Math.sin(this.skewY),t.c=-this.scaleY*Math.sin(this.skewX),t.d=this.scaleY*Math.cos(this.skewX),t.tx=this.x,t.ty=this.y,this},Object.defineProperty(t.prototype,"rotation",{get:function(){return this.skewY},set:function(t){var e=t-this.skewY;this.skewX+=e,this.skewY+=e},enumerable:!0,configurable:!0}),t})();t.Transform=e})(n||(n={})),(function(t){var e=(function(t){function e(){t.call(this)}return r(e,t),e.prototype._onClear=function(){this._isCompleted=!1,this._currentPlayTimes=0,this._currentTime=-1,this._timeline=null,this._isReverse=!1,this._hasAsynchronyTimeline=!1,this._frameRate=0,this._keyFrameCount=0,this._frameCount=0,this._position=0,this._duration=0,this._animationDutation=0,this._timeScale=1,this._timeOffset=0,this._currentFrame=null,this._armature=null,this._animationState=null},e.prototype._onUpdateFrame=function(t){},e.prototype._onArriveAtFrame=function(t){},e.prototype._setCurrentTime=function(t){var e=0;if(1==this._keyFrameCount&&this!=this._animationState._timeline)this._isCompleted=this._animationState._fadeState>=0,e=1;else if(this._hasAsynchronyTimeline){var i=this._animationState.playTimes,n=i*this._duration;t*=this._timeScale,0!=this._timeOffset&&(t+=this._timeOffset*this._animationDutation),i>0&&(t>=n||t<=-n)?(this._isCompleted=!0,e=i,t=t<0?0:this._duration):(this._isCompleted=!1,t<0?(e=Math.floor(-t/this._duration),t=this._duration- -t%this._duration):(e=Math.floor(t/this._duration),t%=this._duration),i>0&&e>i&&(e=i)),t+=this._position}else 
this._isCompleted=this._animationState._timeline._isCompleted,e=this._animationState._timeline._currentPlayTimes;return this._currentTime!=t&&(this._isReverse=this._currentTime>t&&this._currentPlayTimes==e,this._currentTime=t,this._currentPlayTimes=e,this._animationState._onFadeInComplete&&(this._currentFrame=null),!0)},e.prototype.fadeIn=function(t,e,i,n){this._armature=t,this._animationState=e,this._timeline=i;var r=this==this._animationState._timeline;this._hasAsynchronyTimeline=r||this._animationState.animationData.hasAsynchronyTimeline,this._frameRate=this._armature.armatureData.frameRate,this._keyFrameCount=this._timeline.frames.length,this._frameCount=this._animationState.animationData.frameCount,this._position=this._animationState._position,this._duration=this._animationState._duration,this._animationDutation=this._animationState.animationData.duration,this._timeScale=r?1:1/this._timeline.scale,this._timeOffset=r?0:this._timeline.offset},e.prototype.fadeOut=function(){},e.prototype.update=function(t){if(!this._isCompleted&&this._setCurrentTime(t)){var e=this._keyFrameCount>1?Math.floor(this._currentTime*this._frameRate):0,i=this._timeline.frames[e];this._currentFrame!=i&&(this._currentFrame=i,this._onArriveAtFrame(!0)),this._onUpdateFrame(!0)}},e})(t.BaseObject);t.TimelineState=e;var i=(function(e){function i(){e.call(this)}return r(i,e),i._getEasingValue=function(t,e){if(t<=0)return 0;if(t>=1)return 1;var i=1;if(e>2)return t;if(e>1)i=.5*(1-Math.cos(t*Math.PI)),e-=1;else if(e>0)i=1-Math.pow(1-t,2);else if(e>=-1)e*=-1,i=Math.pow(t,2);else{if(!(e>=-2))return t;e*=-1,i=Math.acos(1-2*t)/Math.PI,e-=1}return(i-t)*e+t},i._getEasingCurveValue=function(t,e){if(t<=0)return 0;if(t>=1)return 1;var i=e.length+1,n=Math.floor(t*i),r=0==n?0:e[n-1];return r+((n==i-1?1:e[n])-r)*(t-n/i)},i.prototype._onClear=function(){e.prototype._onClear.call(this),this._tweenProgress=0,this._tweenEasing=t.DragonBones.NO_TWEEN,this._curve=null},i.prototype._onArriveAtFrame=function(e){this._tweenEasing=this._currentFrame.tweenEasing,this._curve=this._currentFrame.curve,(this._keyFrameCount<=1||this._currentFrame.next==this._timeline.frames[0]&&(this._tweenEasing!=t.DragonBones.NO_TWEEN||this._curve)&&this._animationState.playTimes>0&&this._animationState.currentPlayTimes==this._animationState.playTimes-1)&&(this._tweenEasing=t.DragonBones.NO_TWEEN,this._curve=null)},i.prototype._onUpdateFrame=function(e){this._tweenEasing!=t.DragonBones.NO_TWEEN?(this._tweenProgress=(this._currentTime-this._currentFrame.position+this._position)/this._currentFrame.duration,0!=this._tweenEasing&&(this._tweenProgress=i._getEasingValue(this._tweenProgress,this._tweenEasing))):this._curve?(this._tweenProgress=(this._currentTime-this._currentFrame.position+this._position)/this._currentFrame.duration,this._tweenProgress=i._getEasingCurveValue(this._tweenProgress,this._curve)):this._tweenProgress=0},i.prototype._updateExtensionKeyFrame=function(t,e,i){var n=0;if(t.type==e.type)for(var r=0,s=t.tweens.length;r0&&(n=2)}if(0==n){i.type!=t.type&&(n=1,i.type=t.type),i.tweens.length!=t.tweens.length&&(n=1,i.tweens.length=t.tweens.length),i.keys.length!=t.keys.length&&(n=1,i.keys.length=t.keys.length);for(r=0,s=t.keys.length;re.layer?-1:1},i.toString=function(){return"[class dragonBones.Animation]"},i.prototype._onClear=function(){for(var t in this._animations)delete this._animations[t];t=0;for(var 
e=this._animationStates.length;t0&&c._fadeProgress<=0?(c.returnToPool(),this._animationStates.length=0,this._animationStateDirty=!0,this._lastAnimationState=null):c._advanceTime(t,1,0);else if(e>1)for(var i=this._animationStates[0]._layer,n=1,r=0,s=1,o=0,a=0;o0&&c._fadeProgress<=0?(a++,c.returnToPool(),this._animationStateDirty=!0,this._lastAnimationState==c&&(this._lastAnimationState=o-a>=0?this._animationStates[o-a]:null)):(a>0&&(this._animationStates[o-a]=c),i!=c._layer&&(i=c._layer,r>=n?n=0:n-=r,r=0),c._advanceTime(t,n,s),c._weightResult>0&&(r+=c._weightResult,s++)),o==e-1&&a>0&&(this._animationStates.length-=a)}}},i.prototype.reset=function(){for(var t=0,e=this._animationStates.length;t0?0:this._time,f=this._duration>0?this._time:_.position,m=this._duration>0?this._duration:_.duration;this._lastAnimationState=t.BaseObject.borrowObject(t.AnimationState),this._lastAnimationState._layer=s,this._lastAnimationState._group=o,this._lastAnimationState.additiveBlending=c,this._lastAnimationState.displayControl=h,this._lastAnimationState._fadeIn(this._armature,_.animation||_,e,r,f,m,d,1/_.scale,n,u),this._animationStates.push(this._lastAnimationState),this._animationStateDirty=!0,this._time=0,this._duration=0,this._armature._cacheFrameIndex=-1,this._animationStates.length>1&&this._animationStates.sort(i._sortAnimationState);for(var p=this._armature.getSlots(),g=0,y=p.length;gr.duration-this._time&&(this._duration=r.duration-this._time)),this.fadeIn(t,0,i,0,null,4)},i.prototype.gotoAndPlayByFrame=function(t,e,i,n){void 0===e&&(e=0),void 0===i&&(i=-1),void 0===n&&(n=0);var r=this._animations[t];return r&&(this._time=r.duration*e/r.frameCount,this._duration<0?this._duration=0:this._duration>r.duration-this._time&&(this._duration=r.duration-this._time)),this.fadeIn(t,0,i,0,null,4)},i.prototype.gotoAndPlayByProgress=function(t,e,i,n){void 0===e&&(e=0),void 0===i&&(i=-1),void 0===n&&(n=0);var r=this._animations[t];return r&&(this._time=r.duration*(e>0?e:0),this._duration<0?this._duration=0:this._duration>r.duration-this._time&&(this._duration=r.duration-this._time)),this.fadeIn(t,0,i,0,null,4)},i.prototype.gotoAndStopByTime=function(t,e){void 0===e&&(e=0);var i=this.gotoAndPlayByTime(t,e,1);return i&&i.stop(),i},i.prototype.gotoAndStopByFrame=function(t,e){void 0===e&&(e=0);var i=this.gotoAndPlayByFrame(t,e,1);return i&&i.stop(),i},i.prototype.gotoAndStopByProgress=function(t,e){void 0===e&&(e=0);var i=this.gotoAndPlayByProgress(t,e,1);return i&&i.stop(),i},i.prototype.getState=function(t){for(var e=0,i=this._animationStates.length;e1?this._isPlaying&&!this.isCompleted:this._lastAnimationState?this._isPlaying&&this._lastAnimationState.isPlaying:this._isPlaying},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"isCompleted",{get:function(){if(this._lastAnimationState){if(!this._lastAnimationState.isCompleted)return!1;for(var t=0,e=this._animationStates.length;t0&&(h.timeScale=h.totalTime/i),h},i.prototype.gotoAndStop=function(t,e){return void 0===e&&(e=0),this.gotoAndStopByTime(t,e)},Object.defineProperty(i.prototype,"animationList",{get:function(){return this._animationNames},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"animationDataList",{get:function(){for(var t=[],e=0,i=this._animationNames.length;e=this.fadeTotalTime?this._fadeState>0?0:1:this._fadeTime>0?this._fadeState>0?1-this._fadeTime/this.fadeTotalTime:this._fadeTime/this.fadeTotalTime:this._fadeState>0?1:0,this._fadeProgress!=i){this._fadeProgress=i;var 
n=this._armature._display;if(this._fadeTime<=e)if(this._fadeState>0){if(n.hasEvent(t.EventObject.FADE_OUT)){var r=t.BaseObject.borrowObject(t.EventObject);r.animationState=this,this._armature._bufferEvent(r,t.EventObject.FADE_OUT)}}else if(n.hasEvent(t.EventObject.FADE_IN)){var s=t.BaseObject.borrowObject(t.EventObject);s.animationState=this,this._armature._bufferEvent(s,t.EventObject.FADE_IN)}if(this._fadeTime>=this.fadeTotalTime)if(this._fadeState>0){if(n.hasEvent(t.EventObject.FADE_OUT_COMPLETE)){var o=t.BaseObject.borrowObject(t.EventObject);o.animationState=this,this._armature._bufferEvent(o,t.EventObject.FADE_OUT_COMPLETE)}}else if(this._onFadeInComplete=!0,this._isPausePlayhead=!1,this._fadeState=0,n.hasEvent(t.EventObject.FADE_IN_COMPLETE)){var a=t.BaseObject.borrowObject(t.EventObject);a.animationState=this,this._armature._bufferEvent(a,t.EventObject.FADE_IN_COMPLETE)}}},i.prototype._isDisabled=function(t){return!(this.displayControl&&(!t.displayController||t.displayController==this._name||t.displayController==this._group))},i.prototype._fadeIn=function(e,n,r,s,o,a,c,h,l,u){this._armature=e,this._animationData=n,this._name=r,this.actionEnabled=i.stateActionEnabled,this.playTimes=s,this.timeScale=h,this.fadeTotalTime=l,this._fadeState=-1,this._position=o,this._duration=a,this._time=c,this._isPausePlayhead=u,this.fadeTotalTime<=0&&(this._fadeProgress=.999999),this._timeline=t.BaseObject.borrowObject(t.AnimationTimelineState),this._timeline.fadeIn(this._armature,this,this._animationData,this._time),this._animationData.zOrderTimeline&&(this._zOrderTimeline=t.BaseObject.borrowObject(t.ZOrderTimelineState),this._zOrderTimeline.fadeIn(this._armature,this,this._animationData.zOrderTimeline,this._time)),this._updateTimelineStates()},i.prototype._updateFFDTimelineStates=function(){var e=this._time;this._animationData.hasAsynchronyTimeline||(e=this._timeline._currentTime);for(var i={},n=0,r=this._ffdTimelines.length;n=1&&0==i&&n>0,s=!0,o=!0,a=2*n;if(a=r?Math.floor(this._time*a)/a:this._time,this._timeline.update(a),this._animationData.hasAsynchronyTimeline||(a=this._timeline._currentTime),this._zOrderTimeline&&this._zOrderTimeline.update(a),r){var c=Math.floor(this._timeline._currentTime*n);if(this._armature._cacheFrameIndex==c)s=!1,o=!1;else{if(this._armature._cacheFrameIndex=c,this._armature._animation._animationStateDirty){this._armature._animation._animationStateDirty=!1;for(var h=0,l=this._boneTimelines.length;h=0&&this._fadeProgress>=1&&this._timeline._isCompleted&&this.fadeOut(this.autoFadeOutTime)},i.prototype.play=function(){this._isPlaying=!0},i.prototype.stop=function(){this._isPlaying=!1},i.prototype.fadeOut=function(t,e){if(void 0===e&&(e=!0),(t<0||t!=t)&&(t=0),this._isPausePlayhead=e,this._fadeState>0){if(t>t-this._fadeTime)return}else{this._fadeState=1,(t<=0||this._fadeProgress<=0)&&(this._fadeProgress=1e-6);for(var i=0,n=this._boneTimelines.length;i1e-6?t/this._fadeProgress:0,this._fadeTime=this.fadeTotalTime*(1-this._fadeProgress)},i.prototype.containsBoneMask=function(t){return!this._boneMask.length||this._boneMask.indexOf(t)>=0},i.prototype.addBoneMask=function(t,e){void 0===e&&(e=!0);var i=this._armature.getBone(t);if(i){if(this._boneMask.indexOf(t)<0&&this._animationData.getBoneTimeline(t)&&this._boneMask.push(t),e)for(var n=this._armature.getBones(),r=0,s=n.length;r=0&&this._boneMask.splice(i,1),e){var n=this._armature.getBone(t);if(n)for(var 
r=this._armature.getBones(),s=0,o=r.length;s=0&&n.contains(a)&&this._boneMask.splice(h,1)}}this._updateTimelineStates()},i.prototype.removeAllBoneMask=function(){this._boneMask.length=0,this._updateTimelineStates()},Object.defineProperty(i.prototype,"layer",{get:function(){return this._layer},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"group",{get:function(){return this._group},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"name",{get:function(){return this._name},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"animationData",{get:function(){return this._animationData},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"isCompleted",{get:function(){return this._timeline._isCompleted},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"isPlaying",{get:function(){return this._isPlaying&&!this._timeline._isCompleted},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"currentPlayTimes",{get:function(){return this._timeline._currentPlayTimes},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"totalTime",{get:function(){return this._duration},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"currentTime",{get:function(){return this._timeline._currentTime},set:function(t){(t<0||t!=t)&&(t=0);var e=this._timeline._currentPlayTimes-(this._timeline._isCompleted?1:0);if(t=t%this._duration+e*this._duration,this._time!=t){this._time=t,this._timeline.setCurrentTime(this._time),this._zOrderTimeline&&(this._zOrderTimeline._isCompleted=!1);for(var i=0,n=this._boneTimelines.length;i0){var s=this._keyFrameCount>1?Math.floor(this._currentTime*this._frameRate):0,o=this._timeline.frames[s];if(this._currentFrame!=o)if(this._keyFrameCount>1){var a=this._currentFrame;if(this._currentFrame=o,!a){var c=Math.floor(i*this._frameRate);a=this._timeline.frames[c],this._isReverse||(i<=a.position||n!=this._currentPlayTimes)&&(a=a.prev)}if(this._isReverse)for(;a!=o;)this._onCrossFrame(a),a=a.prev;else for(;a!=o;)a=a.next,this._onCrossFrame(a)}else this._currentFrame=o,this._onCrossFrame(this._currentFrame)}if(n!=this._currentPlayTimes){var h;if(r.hasEvent(t.EventObject.LOOP_COMPLETE))(h=t.BaseObject.borrowObject(t.EventObject)).animationState=this._animationState,this._armature._bufferEvent(h,t.EventObject.LOOP_COMPLETE);if(this._isCompleted&&r.hasEvent(t.EventObject.COMPLETE))(h=t.BaseObject.borrowObject(t.EventObject)).animationState=this._animationState,this._armature._bufferEvent(h,t.EventObject.COMPLETE);this._currentFrame=null}}},i.prototype.setCurrentTime=function(t){this._setCurrentTime(t),this._currentFrame=null},i})(t.TimelineState);t.AnimationTimelineState=e;var i=(function(t){function e(){t.call(this)}return r(e,t),e.toString=function(){return"[class dragonBones.ZOrderTimelineState]"},e.prototype._onArriveAtFrame=function(e){t.prototype._onArriveAtFrame.call(this,e),this._armature._sortZOrder(this._currentFrame.zOrder)},e})(t.TimelineState);t.ZOrderTimelineState=i;var n=(function(e){function i(){e.call(this),this._transform=new t.Transform,this._currentTransform=new t.Transform,this._durationTransform=new t.Transform}return r(i,e),i.toString=function(){return"[class 
dragonBones.BoneTimelineState]"},i.prototype._onClear=function(){e.prototype._onClear.call(this),this.bone=null,this._tweenTransform=0,this._tweenRotate=0,this._tweenScale=0,this._boneTransform=null,this._originalTransform=null,this._transform.identity(),this._currentTransform.identity(),this._durationTransform.identity()},i.prototype._onArriveAtFrame=function(i){if(e.prototype._onArriveAtFrame.call(this,i),this._currentTransform.copyFrom(this._currentFrame.transform),this._tweenTransform=1,this._tweenRotate=1,this._tweenScale=1,this._keyFrameCount>1&&(this._tweenEasing!=t.DragonBones.NO_TWEEN||this._curve)){var n=this._currentFrame.next.transform;this._durationTransform.x=n.x-this._currentTransform.x,this._durationTransform.y=n.y-this._currentTransform.y,0==this._durationTransform.x&&0==this._durationTransform.y||(this._tweenTransform=2);var r=this._currentFrame.tweenRotate;if(r==r){if(r)if(r>0?n.skewY>=this._currentTransform.skewY:n.skewY<=this._currentTransform.skewY){var s=r>0?r-1:r+1;this._durationTransform.skewX=n.skewX-this._currentTransform.skewX+t.DragonBones.PI_D*s,this._durationTransform.skewY=n.skewY-this._currentTransform.skewY+t.DragonBones.PI_D*s}else this._durationTransform.skewX=n.skewX-this._currentTransform.skewX+t.DragonBones.PI_D*r,this._durationTransform.skewY=n.skewY-this._currentTransform.skewY+t.DragonBones.PI_D*r;else this._durationTransform.skewX=t.Transform.normalizeRadian(n.skewX-this._currentTransform.skewX),this._durationTransform.skewY=t.Transform.normalizeRadian(n.skewY-this._currentTransform.skewY);0==this._durationTransform.skewX&&0==this._durationTransform.skewY||(this._tweenRotate=2)}else this._durationTransform.skewX=0,this._durationTransform.skewY=0;this._currentFrame.tweenScale?(this._durationTransform.scaleX=n.scaleX-this._currentTransform.scaleX,this._durationTransform.scaleY=n.scaleY-this._currentTransform.scaleY,0==this._durationTransform.scaleX&&0==this._durationTransform.scaleY||(this._tweenScale=2)):(this._durationTransform.scaleX=0,this._durationTransform.scaleY=0)}else this._durationTransform.x=0,this._durationTransform.y=0,this._durationTransform.skewX=0,this._durationTransform.skewY=0,this._durationTransform.scaleX=0,this._durationTransform.scaleY=0},i.prototype._onUpdateFrame=function(t){if(this._tweenTransform||this._tweenRotate||this._tweenScale){e.prototype._onUpdateFrame.call(this,t);var 
i=0;this._tweenTransform&&(1==this._tweenTransform?(this._tweenTransform=0,i=0):i=this._tweenProgress,this._animationState.additiveBlending?(this._transform.x=this._currentTransform.x+this._durationTransform.x*i,this._transform.y=this._currentTransform.y+this._durationTransform.y*i):(this._transform.x=this._originalTransform.x+this._currentTransform.x+this._durationTransform.x*i,this._transform.y=this._originalTransform.y+this._currentTransform.y+this._durationTransform.y*i)),this._tweenRotate&&(1==this._tweenRotate?(this._tweenRotate=0,i=0):i=this._tweenProgress,this._animationState.additiveBlending?(this._transform.skewX=this._currentTransform.skewX+this._durationTransform.skewX*i,this._transform.skewY=this._currentTransform.skewY+this._durationTransform.skewY*i):(this._transform.skewX=this._originalTransform.skewX+this._currentTransform.skewX+this._durationTransform.skewX*i,this._transform.skewY=this._originalTransform.skewY+this._currentTransform.skewY+this._durationTransform.skewY*i)),this._tweenScale&&(1==this._tweenScale?(this._tweenScale=0,i=0):i=this._tweenProgress,this._animationState.additiveBlending?(this._transform.scaleX=this._currentTransform.scaleX+this._durationTransform.scaleX*i,this._transform.scaleY=this._currentTransform.scaleY+this._durationTransform.scaleY*i):(this._transform.scaleX=this._originalTransform.scaleX*(this._currentTransform.scaleX+this._durationTransform.scaleX*i),this._transform.scaleY=this._originalTransform.scaleY*(this._currentTransform.scaleY+this._durationTransform.scaleY*i))),this.bone.invalidUpdate()}},i.prototype.fadeIn=function(t,i,n,r){e.prototype.fadeIn.call(this,t,i,n,r),this._originalTransform=this._timeline.originalTransform,this._boneTransform=this.bone._animationPose},i.prototype.fadeOut=function(){this._transform.skewX=t.Transform.normalizeRadian(this._transform.skewX),this._transform.skewY=t.Transform.normalizeRadian(this._transform.skewY)},i.prototype.update=function(t){e.prototype.update.call(this,t);var i=this._animationState._weightResult;i>0&&(0==this.bone._blendIndex?(this._boneTransform.x=this._transform.x*i,this._boneTransform.y=this._transform.y*i,this._boneTransform.skewX=this._transform.skewX*i,this._boneTransform.skewY=this._transform.skewY*i,this._boneTransform.scaleX=(this._transform.scaleX-1)*i+1,this._boneTransform.scaleY=(this._transform.scaleY-1)*i+1):(this._boneTransform.x+=this._transform.x*i,this._boneTransform.y+=this._transform.y*i,this._boneTransform.skewX+=this._transform.skewX*i,this._boneTransform.skewY+=this._transform.skewY*i,this._boneTransform.scaleX+=(this._transform.scaleX-1)*i,this._boneTransform.scaleY+=(this._transform.scaleY-1)*i),this.bone._blendIndex++,0!=this._animationState._fadeState&&this.bone.invalidUpdate())},i})(t.TweenTimelineState);t.BoneTimelineState=n;var s=(function(e){function i(){e.call(this),this._color=new t.ColorTransform,this._durationColor=new t.ColorTransform}return r(i,e),i.toString=function(){return"[class dragonBones.SlotTimelineState]"},i.prototype._onClear=function(){e.prototype._onClear.call(this),this.slot=null,this._colorDirty=!1,this._tweenColor=0,this._slotColor=null,this._color.identity(),this._durationColor.identity()},i.prototype._onArriveAtFrame=function(i){if(e.prototype._onArriveAtFrame.call(this,i),this._animationState._isDisabled(this.slot))return 
this._tweenEasing=t.DragonBones.NO_TWEEN,this._curve=null,void(this._tweenColor=0);this._animationState._fadeState>=0&&(this.slot._setDisplayIndex(this._currentFrame.displayIndex),this.slot._updateMeshData(!0)),this._tweenColor=0;var n=this._currentFrame.color;if(this._keyFrameCount>1&&(this._tweenEasing!=t.DragonBones.NO_TWEEN||this._curve)){var r=this._currentFrame.next,s=r.color;n!=s&&r.displayIndex>=0&&(this._durationColor.alphaMultiplier=s.alphaMultiplier-n.alphaMultiplier,this._durationColor.redMultiplier=s.redMultiplier-n.redMultiplier,this._durationColor.greenMultiplier=s.greenMultiplier-n.greenMultiplier,this._durationColor.blueMultiplier=s.blueMultiplier-n.blueMultiplier,this._durationColor.alphaOffset=s.alphaOffset-n.alphaOffset,this._durationColor.redOffset=s.redOffset-n.redOffset,this._durationColor.greenOffset=s.greenOffset-n.greenOffset,this._durationColor.blueOffset=s.blueOffset-n.blueOffset,0==this._durationColor.alphaMultiplier&&0==this._durationColor.redMultiplier&&0==this._durationColor.greenMultiplier&&0==this._durationColor.blueMultiplier&&0==this._durationColor.alphaOffset&&0==this._durationColor.redOffset&&0==this._durationColor.greenOffset&&0==this._durationColor.blueOffset||(this._tweenColor=2))}0==this._tweenColor&&(this._slotColor.alphaMultiplier==n.alphaMultiplier&&this._slotColor.redMultiplier==n.redMultiplier&&this._slotColor.greenMultiplier==n.greenMultiplier&&this._slotColor.blueMultiplier==n.blueMultiplier&&this._slotColor.alphaOffset==n.alphaOffset&&this._slotColor.redOffset==n.redOffset&&this._slotColor.greenOffset==n.greenOffset&&this._slotColor.blueOffset==n.blueOffset||(this._tweenColor=1))},i.prototype._onUpdateFrame=function(t){e.prototype._onUpdateFrame.call(this,t);var i=0;if(this._tweenColor){1==this._tweenColor?(this._tweenColor=0,i=0):i=this._tweenProgress;var n=this._currentFrame.color;this._color.alphaMultiplier=n.alphaMultiplier+this._durationColor.alphaMultiplier*i,this._color.redMultiplier=n.redMultiplier+this._durationColor.redMultiplier*i,this._color.greenMultiplier=n.greenMultiplier+this._durationColor.greenMultiplier*i,this._color.blueMultiplier=n.blueMultiplier+this._durationColor.blueMultiplier*i,this._color.alphaOffset=n.alphaOffset+this._durationColor.alphaOffset*i,this._color.redOffset=n.redOffset+this._durationColor.redOffset*i,this._color.greenOffset=n.greenOffset+this._durationColor.greenOffset*i,this._color.blueOffset=n.blueOffset+this._durationColor.blueOffset*i,this._colorDirty=!0}},i.prototype.fadeIn=function(t,i,n,r){e.prototype.fadeIn.call(this,t,i,n,r),this._slotColor=this.slot._colorTransform},i.prototype.fadeOut=function(){this._tweenColor=0},i.prototype.update=function(t){if((e.prototype.update.call(this,t),0!=this._tweenColor||this._colorDirty)&&this._animationState._weightResult>0)if(0!=this._animationState._fadeState){var 
i=Math.pow(this._animationState._fadeProgress,4);this._slotColor.alphaMultiplier+=(this._color.alphaMultiplier-this._slotColor.alphaMultiplier)*i,this._slotColor.redMultiplier+=(this._color.redMultiplier-this._slotColor.redMultiplier)*i,this._slotColor.greenMultiplier+=(this._color.greenMultiplier-this._slotColor.greenMultiplier)*i,this._slotColor.blueMultiplier+=(this._color.blueMultiplier-this._slotColor.blueMultiplier)*i,this._slotColor.alphaOffset+=(this._color.alphaOffset-this._slotColor.alphaOffset)*i,this._slotColor.redOffset+=(this._color.redOffset-this._slotColor.redOffset)*i,this._slotColor.greenOffset+=(this._color.greenOffset-this._slotColor.greenOffset)*i,this._slotColor.blueOffset+=(this._color.blueOffset-this._slotColor.blueOffset)*i,this.slot._colorDirty=!0}else this._colorDirty&&(this._colorDirty=!1,this._slotColor.alphaMultiplier=this._color.alphaMultiplier,this._slotColor.redMultiplier=this._color.redMultiplier,this._slotColor.greenMultiplier=this._color.greenMultiplier,this._slotColor.blueMultiplier=this._color.blueMultiplier,this._slotColor.alphaOffset=this._color.alphaOffset,this._slotColor.redOffset=this._color.redOffset,this._slotColor.greenOffset=this._color.greenOffset,this._slotColor.blueOffset=this._color.blueOffset,this.slot._colorDirty=!0)},i})(t.TweenTimelineState);t.SlotTimelineState=s;var o=(function(e){function i(){e.call(this),this._ffdVertices=[]}return r(i,e),i.toString=function(){return"[class dragonBones.FFDTimelineState]"},i.prototype._onClear=function(){e.prototype._onClear.call(this),this.slot=null,this._tweenFFD=0,this._slotFFDVertices=null,this._durationFFDFrame&&(this._durationFFDFrame.returnToPool(),this._durationFFDFrame=null),this._ffdVertices.length=0},i.prototype._onArriveAtFrame=function(i){if(e.prototype._onArriveAtFrame.call(this,i),this._tweenFFD=0,(this._tweenEasing!=t.DragonBones.NO_TWEEN||this._curve)&&(this._tweenFFD=this._updateExtensionKeyFrame(this._currentFrame,this._currentFrame.next,this._durationFFDFrame)),0==this._tweenFFD)for(var n=this._currentFrame.tweens,r=0,s=n.length;r0){if(0==this.slot._blendIndex)for(var n=0,r=this._ffdVertices.length;n0&&(this._animatebles[i-n]=s,this._animatebles[i]=null),s.advanceTime(e)):n++}if(n>0){for(r=this._animatebles.length;i=0},e.prototype.add=function(e){e&&this._animatebles.indexOf(e)<0&&(this._animatebles.push(e),t.DragonBones.debug&&e instanceof t.Armature&&t.DragonBones.addArmature(e))},e.prototype.remove=function(e){var i=this._animatebles.indexOf(e);i>=0&&(this._animatebles[i]=null,t.DragonBones.debug&&e instanceof t.Armature&&t.DragonBones.removeArmature(e))},e.prototype.clear=function(){for(var t=0,e=this._animatebles.length;te._zOrder?1:-1},i.prototype._onClear=function(){for(var t=0,e=this._bones.length;t=t&&(i=0),this._bones.indexOf(r)>=0||(r.parent&&this._bones.indexOf(r.parent)<0||r.ik&&this._bones.indexOf(r.ik)<0||(r.ik&&r.ikChain>0&&r.ikChainIndex==r.ikChain?this._bones.splice(this._bones.indexOf(r.parent)+1,0,r):this._bones.push(r),n++))}}},i.prototype._sortSlots=function(){this._slots.sort(i._onSortSlots)},i.prototype._doAction=function(t){switch(t.type){case 0:this._animation.play(t.data[0],t.data[1]);break;case 1:this._animation.stop(t.data[0]);break;case 2:this._animation.gotoAndPlayByTime(t.data[0],t.data[1],t.data[2]);break;case 3:this._animation.gotoAndStopByTime(t.data[0],t.data[1]);break;case 
4:this._animation.fadeIn(t.data[0],t.data[1],t.data[2])}},i.prototype._addBoneToBoneList=function(t){this._bones.indexOf(t)<0&&(this._bonesDirty=!0,this._bones.push(t))},i.prototype._removeBoneFromBoneList=function(t){var e=this._bones.indexOf(t);e>=0&&this._bones.splice(e,1)},i.prototype._addSlotToSlotList=function(t){this._slots.indexOf(t)<0&&(this._slotsDirty=!0,this._slots.push(t))},i.prototype._removeSlotFromSlotList=function(t){var e=this._slots.indexOf(t);e>=0&&this._slots.splice(e,1)},i.prototype._sortZOrder=function(t){for(var e=this._armatureData.sortedSlots,i=t.length<1,n=0,r=e.length;n0){for(n=0,r=this._events.length;n0){for(n=0,r=this._actions.length;n0){if(!s&&!o){g=C;break}var T;s&&((T=c?s.y-e:s.x-t)<0&&(T=-T),(!g||Tl)&&(l=T,d=o.x,f=o.y,y=C,a&&(p=a.y)))}}return g&&s&&(s.x=u,s.y=_,a&&(a.x=m)),y&&o&&(o.x=d,o.y=f,a&&(a.y=p)),g},i.prototype.invalidUpdate=function(t,e){if(void 0===t&&(t=null),void 0===e&&(e=!1),t){var i=this.getBone(t);if(i&&(i.invalidUpdate(),e))for(var n=0,r=this._slots.length;n=0){var i=this._cacheFrames[e];this.globalTransformMatrix==i?this._transformDirty=0:i?(this._transformDirty=2,this.globalTransformMatrix=i):2==this._transformDirty||this._parent&&0!=this._parent._transformDirty||this._ik&&this.ikWeight>0&&0!=this._ik._transformDirty?(this._transformDirty=2,this.globalTransformMatrix=this._globalTransformMatrix):this.globalTransformMatrix!=this._globalTransformMatrix?(this._transformDirty=0,this._cacheFrames[e]=this.globalTransformMatrix):(this._transformDirty=2,this.globalTransformMatrix=this._globalTransformMatrix)}else(2==this._transformDirty||this._parent&&0!=this._parent._transformDirty||this._ik&&this.ikWeight>0&&0!=this._ik._transformDirty)&&(this._transformDirty=2,this.globalTransformMatrix=this._globalTransformMatrix);0!=this._transformDirty&&(2==this._transformDirty?(this._transformDirty=1,this.globalTransformMatrix==this._globalTransformMatrix&&(this.global.x=this.origin.x+this.offset.x+this._animationPose.x,this.global.y=this.origin.y+this.offset.y+this._animationPose.y,this.global.skewX=this.origin.skewX+this.offset.skewX+this._animationPose.skewX,this.global.skewY=this.origin.skewY+this.offset.skewY+this._animationPose.skewY,this.global.scaleX=this.origin.scaleX*this.offset.scaleX*this._animationPose.scaleX,this.global.scaleY=this.origin.scaleY*this.offset.scaleY*this._animationPose.scaleY,this._updateGlobalTransformMatrix(),this._ik&&this._ikChainIndex==this._ikChain&&this.ikWeight>0&&(this.inheritTranslation&&this._ikChain>0&&this._parent?this._computeIKB():this._computeIKA()),e>=0&&!this._cacheFrames[e]&&(this.globalTransformMatrix=t.BoneTimelineData.cacheFrame(this._cacheFrames,e,this._globalTransformMatrix)))):this._transformDirty=0)},i.prototype.invalidUpdate=function(){this._transformDirty=2},i.prototype.contains=function(t){if(t){if(t==this)return!1;for(var e=t;e!=this&&e;)e=e.parent;return e==this}return!1},i.prototype.getBones=function(){this._bones.length=0;for(var t=this._armature.getBones(),e=0,i=t.length;e=0&&this._displayIndex=0&&this._displayIndex0?s.actions:this._childArmature.armatureData.actions;if(o.length>0)for(var a=0,c=o.length;a=0){i=this._displayDataSet&&this._displayIndex=0&&this._cacheFrames){var 
i=this._cacheFrames[e];this.globalTransformMatrix==i?this._transformDirty=!1:i?(this._transformDirty=!0,this.globalTransformMatrix=i):this._transformDirty||0!=this._parent._transformDirty?(this._transformDirty=!0,this.globalTransformMatrix=this._globalTransformMatrix):this.globalTransformMatrix!=this._globalTransformMatrix?(this._transformDirty=!1,this._cacheFrames[e]=this.globalTransformMatrix):(this._transformDirty=!0,this.globalTransformMatrix=this._globalTransformMatrix)}else(this._transformDirty||0!=this._parent._transformDirty)&&(this._transformDirty=!0,this.globalTransformMatrix=this._globalTransformMatrix);this._transformDirty&&(this._transformDirty=!1,this.globalTransformMatrix==this._globalTransformMatrix&&(this._updateGlobalTransformMatrix(),e>=0&&this._cacheFrames&&!this._cacheFrames[e]&&(this.globalTransformMatrix=t.SlotTimelineData.cacheFrame(this._cacheFrames,e,this._globalTransformMatrix))),this._updateTransform())}},i.prototype._setDisplayList=function(e){if(e&&e.length>0){this._displayList.length!=e.length&&(this._displayList.length=e.length);for(var i=0,n=e.length;i0&&(this._displayList.length=0);return this._displayIndex>=0&&this._displayIndex0&&(1==l||2==l?o?(this.globalTransformMatrix.transformPoint(o.x,o.y,o),a&&(a.x=o.x,a.y=o.y)):a&&this.globalTransformMatrix.transformPoint(a.x,a.y,a):(o&&this.globalTransformMatrix.transformPoint(o.x,o.y,o),a&&this.globalTransformMatrix.transformPoint(a.x,a.y,a)),c&&(this.globalTransformMatrix.transformPoint(Math.cos(c.x),Math.sin(c.x),i._helpPoint,!0),c.x=Math.atan2(i._helpPoint.y,i._helpPoint.x),this.globalTransformMatrix.transformPoint(Math.cos(c.y),Math.sin(c.y),i._helpPoint,!0),c.y=Math.atan2(i._helpPoint.y,i._helpPoint.x))),l},i.prototype.invalidUpdate=function(){this._displayDirty=!0},Object.defineProperty(i.prototype,"displayData",{get:function(){return this._displayIndex<0||this._displayIndex>=this._displayDataSet.displays.length?null:this._displayDataSet.displays[this._displayIndex]},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"rawDisplay",{get:function(){return this._rawDisplay},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"meshDisplay",{get:function(){return this._meshDisplay},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"displayIndex",{get:function(){return this._displayIndex},set:function(t){this._setDisplayIndex(t)&&this._update(-1)},enumerable:!0,configurable:!0}),Object.defineProperty(i.prototype,"displayList",{get:function(){return this._displayList.concat()},set:function(e){var i=this._displayList.concat(),n=[];this._setDisplayList(e)&&this._update(-1);for(var r=0,s=i.length;re.zOrder?1:-1},i.toString=function(){return"[class dragonBones.ArmatureData]"},i.prototype._onClear=function(){for(var t in this.bones)this.bones[t].returnToPool(),delete this.bones[t];for(var t in this.slots)this.slots[t].returnToPool(),delete this.slots[t];for(var t in this.skins)this.skins[t].returnToPool(),delete this.skins[t];for(var t in this.animations)this.animations[t].returnToPool(),delete this.animations[t];t=0;for(var e=this.actions.length;t=t&&(i=0),this._sortedBones.indexOf(r)>=0||(r.parent&&this._sortedBones.indexOf(r.parent)<0||r.ik&&this._sortedBones.indexOf(r.ik)<0||(r.ik&&r.chain>0&&r.chainIndex==r.chain?this._sortedBones.splice(this._sortedBones.indexOf(r.parent)+1,0,r):this._sortedBones.push(r),n++))}}},i.prototype._sortSlots=function(){this._sortedSlots.sort(i._onSortSlots)},i.prototype.cacheFrames=function(t){if(this.cacheFrameRate!=t)for(var e in 
this.cacheFrameRate=t,this.animations)this.animations[e].cacheFrames(this.cacheFrameRate)},i.prototype.addBone=function(t,e){if(!t||!t.name||this.bones[t.name])throw new Error;if(e){var i=this.getBone(e);i?t.parent=i:(this._bonesChildren[e]=this._bonesChildren[e]||[]).push(t)}var n=this._bonesChildren[t.name];if(n){for(var r=0,s=n.length;rr&&(o|=2),es&&(o|=8),o},e.segmentIntersectsRectangle=function(t,i,n,r,s,o,a,c,h,l,u){void 0===h&&(h=null),void 0===l&&(l=null),void 0===u&&(u=null);var _=t>s&&to&&is&&no&&r=0){var T=Math.sqrt(x),A=y-T,b=y+T,S=A<0?-1:A<=m?0:1,E=b<0?-1:b<=m?0:1,w=S*E;if(w<0)return-1;0==w&&(-1==S?(C=2,i=t+b*p,n=(e+b*g)/u,c&&(c.x=i,c.y=n),h&&(h.x=i,h.y=n),l&&(l.x=Math.atan2(n/v*_,i/v),l.y=l.x+Math.PI)):1==E?(C=1,t+=A*p,e=(e+A*g)/u,c&&(c.x=t,c.y=e),h&&(h.x=t,h.y=e),l&&(l.x=Math.atan2(e/v*_,t/v),l.y=l.x+Math.PI)):(C=3,c&&(c.x=t+A*p,c.y=(e+A*g)/u,l&&(l.x=Math.atan2(c.y/v*_,c.x/v))),h&&(h.x=t+b*p,h.y=(e+b*g)/u,l&&(l.y=Math.atan2(h.y/v*_,h.x/v)))))}return C},e.segmentIntersectsPolygon=function(t,e,i,n,r,s,o,a){void 0===s&&(s=null),void 0===o&&(o=null),void 0===a&&(a=null),t==i&&(t=i+.01),e==n&&(e=n+.01);for(var c=r.length,h=t-i,l=e-n,u=t*n-e*i,_=0,d=r[c-2],f=r[c-1],m=0,p=0,g=0,y=0,v=0,x=0,C=0;C=d&&I<=T||I>=T&&I<=d)&&(0==h||I>=t&&I<=i||I>=i&&I<=t)){var R=(u*S-l*E)/w;if((R>=f&&R<=A||R>=A&&R<=f)&&(0==l||R>=e&&R<=n||R>=n&&R<=e)){if(!o){g=I,y=R,v=I,x=R,_++,a&&(a.x=Math.atan2(A-f,T-d)-.5*Math.PI,a.y=a.x);break}var P=I-t;P<0&&(P=-P),0==_?(m=P,p=P,g=I,y=R,v=I,x=R,a&&(a.x=Math.atan2(A-f,T-d)-.5*Math.PI,a.y=a.x)):(Pp&&(p=P,v=I,x=R,a&&(a.y=Math.atan2(A-f,T-d)-.5*Math.PI))),_++}}d=T,f=A}return 1==_?(s&&(s.x=g,s.y=y),o&&(o.x=g,o.y=y),a&&(a.y=a.x+Math.PI)):_>1&&(_++,s&&(s.x=g,s.y=y),o&&(o.x=v,o.y=x)),_},e.prototype._onClear=function(){this.type=-1,this.x=0,this.y=0,this.width=0,this.height=0,this.vertices.length=0},e.prototype.containsPoint=function(t,e){var i=!1;if(2==this.type){if(t>=this.x&&t<=this.width&&e>=this.y&&e<=this.height)for(var n=0,r=this.vertices.length,s=r-2;n=e||o=e){var c=this.vertices[s],h=this.vertices[n];(e-a)*(c-h)/(o-a)+h=-l&&t<=l){var u=.5*this.height;e>=-u&&e<=u&&(1==this.type?(e*=l/u,i=Math.sqrt(t*t+e*e)<=l):i=!0)}}return i},e.prototype.intersectsSegment=function(t,i,n,r,s,o,a){void 0===s&&(s=null),void 0===o&&(o=null),void 0===a&&(a=null);var c=0;switch(this.type){case 0:var h=.5*this.width,l=.5*this.height;c=e.segmentIntersectsRectangle(t,i,n,r,-h,-l,h,l,s,o,a);break;case 1:c=e.segmentIntersectsEllipse(t,i,n,r,0,0,.5*this.width,.5*this.height,s,o,a);break;case 2:0!=e.segmentIntersectsRectangle(t,i,n,r,this.x,this.y,this.width,this.height,null,null)&&(c=e.segmentIntersectsPolygon(t,i,n,r,this.vertices,s,o,a))}return c},e})(t.BaseObject);t.BoundingBoxData=h})(n||(n={})),(function(t){var e=(function(t){function e(){t.call(this),this.armatures={},this._armatureNames=[]}return r(e,t),e.toString=function(){return"[class dragonBones.DragonBonesData]"},e.prototype._onClear=function(){for(var t in this.armatures)this.armatures[t].returnToPool(),delete this.armatures[t];this.autoSearch=!1,this.frameRate=0,this.name=null,this._armatureNames.length=0},e.prototype.getArmature=function(t){return this.armatures[t]},e.prototype.addArmature=function(t){if(!t||!t.name||this.armatures[t.name])throw new Error;this.armatures[t.name]=t,this._armatureNames.push(t.name),t.parent=this},Object.defineProperty(e.prototype,"armatureNames",{get:function(){return 
this._armatureNames},enumerable:!0,configurable:!0}),e.prototype.dispose=function(){this.returnToPool()},e})(t.BaseObject);t.DragonBonesData=e})(n||(n={})),(function(t){var e=(function(t){function e(){t.call(this),this.data=[]}return r(e,t),e.toString=function(){return"[class dragonBones.ActionData]"},e.prototype._onClear=function(){this.type=-1,this.bone=null,this.slot=null,this.data.length=0},e})(t.BaseObject);t.ActionData=e;var i=(function(t){function e(){t.call(this),this.ints=[],this.floats=[],this.strings=[]}return r(e,t),e.toString=function(){return"[class dragonBones.EventData]"},e.prototype._onClear=function(){this.type=-1,this.name=null,this.ints.length=0,this.floats.length=0,this.strings.length=0,this.bone=null,this.slot=null},e})(t.BaseObject);t.EventData=i;var n=(function(t){function e(){t.call(this)}return r(e,t),e.prototype._onClear=function(){this.position=0,this.duration=0,this.prev=null,this.next=null},e})(t.BaseObject);t.FrameData=n;var s=(function(e){function i(){e.call(this)}return r(i,e),i._getCurvePoint=function(t,e,i,n,r,s,o,a,c,h){var l=1-c,u=l*l,_=c*c,d=l*u,f=3*c*u,m=3*l*_,p=c*_;h.x=d*t+f*i+m*r+p*o,h.y=d*e+f*n+m*s+p*a},i.samplingEasingCurve=function(e,n){for(var r=e.length,s=new t.Point,o=-2,a=0,c=n.length;a=0&&o+6.01;){var C=(x+v)/2;i._getCurvePoint(u,_,d,f,m,p,g,y,C,s),h-s.x>0?v=C:x=C}n[a]=s.y}},i.prototype._onClear=function(){e.prototype._onClear.call(this),this.tweenEasing=0,this.curve=null},i})(n);t.TweenFrameData=s;var o=(function(t){function e(){t.call(this),this.actions=[],this.events=[]}return r(e,t),e.toString=function(){return"[class dragonBones.AnimationFrameData]"},e.prototype._onClear=function(){t.prototype._onClear.call(this);for(var e=0,i=this.actions.length;e=i.frames.length)r.copyFrom(i.frames[0].transform);else{var o=i.frames[s],a=0;o.tweenEasing!=t.DragonBones.NO_TWEEN?(a=(n-o.position)/o.duration,0!=o.tweenEasing&&(a=t.TweenTimelineState._getEasingValue(a,o.tweenEasing))):o.curve&&(a=(n-o.position)/o.duration,a=t.TweenTimelineState._getEasingCurveValue(a,o.curve));var c=o.next;r.x=c.transform.x-o.transform.x,r.y=c.transform.y-o.transform.y,r.skewX=t.Transform.normalizeRadian(c.transform.skewX-o.transform.skewX),r.skewY=t.Transform.normalizeRadian(c.transform.skewY-o.transform.skewY),r.scaleX=c.transform.scaleX-o.transform.scaleX,r.scaleY=c.transform.scaleY-o.transform.scaleY,r.x=o.transform.x+r.x*a,r.y=o.transform.y+r.y*a,r.skewX=o.transform.skewX+r.skewX*a,r.skewY=o.transform.skewY+r.skewY*a,r.scaleX=o.transform.scaleX+r.scaleX*a,r.scaleY=o.transform.scaleY+r.scaleY*a}r.add(i.originalTransform)},e.prototype._globalToLocal=function(t){for(var e=new Array,i=t.sortedBones.concat().reverse(),n=0,r=i.length;n=0||(e.push(_),h?(this._getTimelineFrameMatrix(a,h,_.position,this._helpTransformA),_.transform.add(this._helpTransformB),this._helpTransformA.toMatrix(this._helpMatrix),this._helpMatrix.invert(),this._helpMatrix.transformPoint(_.transform.x,_.transform.y,this._helpPoint),_.transform.x=this._helpPoint.x,_.transform.y=this._helpPoint.y,_.transform.rotation-=this._helpTransformA.rotation):_.transform.add(this._helpTransformB),_.transform.minus(s.transform),0==l?(c.originalTransform.copyFrom(_.transform),_.transform.identity()):_.transform.minus(c.originalTransform))}}}}},e.prototype._mergeFrameToAnimationTimeline=function(e,i,n){var r=Math.floor(e*this._armature.frameRate),s=this._animation.frames;if(0==s.length){var 
o=t.BaseObject.borrowObject(t.AnimationFrameData);if(o.position=0,this._animation.frameCount>1){s.length=this._animation.frameCount+1;var a=t.BaseObject.borrowObject(t.AnimationFrameData);a.position=this._animation.frameCount/this._armature.frameRate,s[0]=o,s[this._animation.frameCount]=a}}var c=null,h=s[r];if(!h||0!=r&&s[r-1]!=h.prev){(c=t.BaseObject.borrowObject(t.AnimationFrameData)).position=r/this._armature.frameRate,s[r]=c;for(var l=r+1,u=s.length;le?t[e]:i},i.prototype._parseArmature=function(e,n){var r=t.BaseObject.borrowObject(t.ArmatureData);if(r.name=i._getString(e,i.NAME,null),r.frameRate=i._getNumber(e,i.FRAME_RATE,this._data.frameRate)||this._data.frameRate,r.scale=n,i.TYPE in e&&"string"==typeof e[i.TYPE]?r.type=i._getArmatureType(e[i.TYPE]):r.type=i._getNumber(e,i.TYPE,0),this._armature=r,this._rawBones.length=0,i.AABB in e){var s=e[i.AABB];r.aabb.x=i._getNumber(s,i.X,0),r.aabb.y=i._getNumber(s,i.Y,0),r.aabb.width=i._getNumber(s,i.WIDTH,0),r.aabb.height=i._getNumber(s,i.HEIGHT,0)}if(i.BONE in e)for(var o=e[i.BONE],a=0,c=o.length;a0&&e.parent&&!e.parent.ik?(e.parent.ik=e.ik,e.parent.chainIndex=0,e.parent.chain=0,e.chainIndex=1):(e.chain=0,e.chainIndex=0))},i.prototype._parseSlot=function(e,n){var r=t.BaseObject.borrowObject(t.SlotData);return r.name=i._getString(e,i.NAME,null),r.parent=this._armature.getBone(i._getString(e,i.PARENT,null)),r.displayIndex=i._getNumber(e,i.DISPLAY_INDEX,0),r.zOrder=i._getNumber(e,i.Z,n),i.COLOR in e||i.COLOR_TRANSFORM in e?(r.color=t.SlotData.generateColor(),this._parseColorTransform(e[i.COLOR]||e[i.COLOR_TRANSFORM],r.color)):r.color=t.SlotData.DEFAULT_COLOR,i.BLEND_MODE in e&&"string"==typeof e[i.BLEND_MODE]?r.blendMode=i._getBlendMode(e[i.BLEND_MODE]):r.blendMode=i._getNumber(e,i.BLEND_MODE,0),(i.ACTIONS in e||i.DEFAULT_ACTIONS in e)&&this._parseActionData(e,r.actions,null,null),this._isOldData&&(i.COLOR_TRANSFORM in e?(r.color=t.SlotData.generateColor(),this._parseColorTransform(e[i.COLOR_TRANSFORM],r.color)):r.color=t.SlotData.DEFAULT_COLOR),r},i.prototype._parseSkin=function(e){var n=t.BaseObject.borrowObject(t.SkinData);if(n.name=i._getString(e,i.NAME,"__default")||"__default",i.SLOT in e){this._skin=n;for(var r=e[i.SLOT],s=0,o=0,a=r.length;on.width&&(n.width=c),hn.height&&(n.height=h))}}}return n},i.prototype._parseMesh=function(e){var n=t.BaseObject.borrowObject(t.MeshData),r=e[i.VERTICES],s=e[i.UVS],o=e[i.TRIANGLES],a=Math.floor(r.length/2),c=Math.floor(o.length/3),h=new Array(this._armature.sortedBones.length);if(n.skinned=i.WEIGHTS in e&&e[i.WEIGHTS].length>0,n.uvs.length=2*a,n.vertices.length=2*a,n.vertexIndices.length=3*c,n.skinned){if(n.boneIndices.length=a,n.weights.length=a,n.boneVertices.length=a,i.SLOT_POSE in e){var l=e[i.SLOT_POSE];n.slotPose.a=l[0],n.slotPose.b=l[1],n.slotPose.c=l[2],n.slotPose.d=l[3],n.slotPose.tx=l[4]*this._armature.scale,n.slotPose.ty=l[5]*this._armature.scale}if(i.BONE_POSE in e)for(var u=e[i.BONE_POSE],_=0,d=u.length;_0){var a=this._armature.sortedSlots.length,c=new Array(a-o.length/2);s.zOrder.length=a;for(var h=0;h0||h.length>0)&&this._mergeFrameToAnimationTimeline(s.position,c,h),s},i.prototype._parseSlotFrame=function(e,n,r){var s=t.BaseObject.borrowObject(t.SlotFrameData);if(s.displayIndex=i._getNumber(e,i.DISPLAY_INDEX,0),this._parseTweenFrame(e,s,n,r),i.COLOR in e||i.COLOR_TRANSFORM in 
e?(s.color=t.SlotFrameData.generateColor(),this._parseColorTransform(e[i.COLOR]||e[i.COLOR_TRANSFORM],s.color)):s.color=t.SlotFrameData.DEFAULT_COLOR,this._isOldData)i._getBoolean(e,i.HIDE,!1)&&(s.displayIndex=-1);else if(i.ACTION in e||i.ACTIONS in e){var o=this._timeline.slot,a=new Array;this._parseActionData(e,a,o.parent,o),this._mergeFrameToAnimationTimeline(s.position,a,null)}return s},i.prototype._parseFFDFrame=function(e,n,r){var s=this._timeline.display.mesh,o=t.BaseObject.borrowObject(t.ExtensionFrameData);o.type=i._getNumber(e,i.TYPE,0),this._parseTweenFrame(e,o,n,r);for(var a=e[i.VERTICES],c=i._getNumber(e,i.OFFSET,0),h=0,l=0,u=0,_=s.vertices.length;u<_;u+=2)if(!a||u=a.length?(h=0,l=0):(h=a[u-c]*this._armature.scale,l=a[u+1-c]*this._armature.scale),s.skinned){s.slotPose.transformPoint(h,l,this._helpPoint,!0),h=this._helpPoint.x,l=this._helpPoint.y;for(var d=s.boneIndices[u/2],f=0,m=d.length;f0?(i.TWEEN_EASING in e?n.tweenEasing=i._getNumber(e,i.TWEEN_EASING,t.DragonBones.NO_TWEEN):this._isOldData?n.tweenEasing=this._isAutoTween?this._animationTweenEasing:t.DragonBones.NO_TWEEN:n.tweenEasing=t.DragonBones.NO_TWEEN,this._isOldData&&1==this._animation.scale&&1==this._timeline.scale&&n.duration*this._armature.frameRate<2&&(n.tweenEasing=t.DragonBones.NO_TWEEN),s>0&&i.CURVE in e&&(n.curve=new Array(2*s-1),t.TweenFrameData.samplingEasingCurve(e[i.CURVE],n.curve))):(n.tweenEasing=t.DragonBones.NO_TWEEN,n.curve=null)},i.prototype._parseFrame=function(t,e,i,n){e.position=i/this._armature.frameRate,e.duration=n/this._armature.frameRate},i.prototype._parseTimeline=function(e,n,r){if(n.scale=i._getNumber(e,i.SCALE,1),n.offset=i._getNumber(e,i.OFFSET,0),i.FRAME in e){this._timeline=n;var s=e[i.FRAME];if(1==s.length)n.frames.length=1,n.frames[0]=r.call(this,s[0],0,i._getNumber(s[0],i.DURATION,1));else if(s.length>1){n.frames.length=this._animation.frameCount+1;for(var o=0,a=0,c=null,h=null,l=0,u=0,_=n.frames.length;l<_;++l){if(o+a<=l&&u0?n.scale=r:r=n.scale=i._getNumber(e,i.SCALE,n.scale),r=1/r,i.SUB_TEXTURE in e)for(var s=e[i.SUB_TEXTURE],o=0,a=s.length;o0&&u>0&&(h.frame=t.TextureData.generateRectangle(),h.frame.x=i._getNumber(c,i.FRAME_X,0)*r,h.frame.y=i._getNumber(c,i.FRAME_Y,0)*r,h.frame.width=l*r,h.frame.height=u*r),n.addTexture(h)}},i.getInstance=function(){return i._instance||(i._instance=new i),i._instance},i._instance=null,i})(t.DataParser);t.ObjectDataParser=e})(n||(n={})),(function(t){var e=(function(t){function e(){t.call(this),this.textures={}}return r(e,t),e.prototype._onClear=function(){for(var t in this.textures)this.textures[t].returnToPool(),delete this.textures[t];this.autoSearch=!1,this.scale=1,this.name=null,this.imagePath=null},e.prototype.addTexture=function(t){if(!t||!t.name||this.textures[t.name])throw new Error;this.textures[t.name]=t,t.parent=this},e.prototype.getTexture=function(t){return this.textures[t]},e})(t.BaseObject);t.TextureAtlasData=e;var i=(function(e){function i(){e.call(this),this.region=new t.Rectangle}return r(i,e),i.generateRectangle=function(){return new t.Rectangle},i.prototype._onClear=function(){this.rotated=!1,this.name=null,this.frame=null,this.parent=null,this.region.clear()},i})(t.BaseObject);t.TextureData=i})(n||(n={})),(function(t){var e=(function(){function e(i){void 0===i&&(i=null),this.autoSearch=!1,this._dataParser=null,this._dragonBonesDataMap={},this._textureAtlasDataMap={},this._dataParser=i,this._dataParser||(e._defaultParser||(e._defaultParser=new t.ObjectDataParser),this._dataParser=e._defaultParser)}return 
e.prototype._getTextureData=function(t,e){var i=this._textureAtlasDataMap[t];if(i)for(var n=0,r=i.length;n=0){var r=i.displayList;if(r.length<=n&&(r.length=n+1),i._replacedDisplayDataSet.length<=n&&(i._replacedDisplayDataSet.length=n+1),i._replacedDisplayDataSet[n]=e,1==e.type){var s=this.buildArmature(e.path,t.dataName,null,t.textureAtlasName);r[n]=s}else e.texture&&!t.textureAtlasName||(e.texture=this._getTextureData(t.textureAtlasName||t.dataName,e.path)),e.mesh||nt.length&&(cc.renderer._batchRendering(),m=!0),m&&(i=0);var p=null;if(r instanceof n.RegionAttachment)p=this._uploadRegionAttachmentData(r,s,u,t,e,i);else{if(!(r instanceof n.MeshAttachment))continue;this._uploadMeshAttachmentData(r,s,u,t,e,i)}this._node._debugSlots&&(_[o]=p),r instanceof n.RegionAttachment?cc.renderer._increaseBatchingSize(d,cc.renderer.VertexType.TRIANGLE):cc.renderer._increaseBatchingSize(d,cc.renderer.VertexType.CUSTOM,r.triangles),i+=6*d}}if(c._debugBones||c._debugSlots){cc.renderer._batchRendering();var g=this._worldTransform,y=this._matrix.mat;y[0]=g.a,y[4]=g.c,y[12]=g.tx,y[1]=g.b,y[5]=g.d,y[13]=g.ty,cc.math.glMatrixMode(cc.math.KM_GL_MODELVIEW),cc.current_stack.stack.push(cc.current_stack.top),cc.current_stack.top=this._matrix;var v=cc._drawingUtil;if(c._debugSlots&&_&&_.length>0)for(v.setDrawColor(0,0,255,255),v.setLineWidth(1),o=0,a=l.slots.length;o":0});sp.Skeleton=cc.Class({name:"sp.Skeleton",extends:cc._RendererUnderSG,editor:!1,properties:{_startListener:{default:null,serializable:!1},_endListener:{default:null,serializable:!1},_completeListener:{default:null,serializable:!1},_eventListener:{default:null,serializable:!1},_disposeListener:{default:null,serializable:!1},_interruptListener:{default:null,serializable:!1},_paused:!1,paused:{get:function(){return this._paused},set:function(t){this._paused=t,this._sgNode&&(t?this._sgNode.pause():this._sgNode.resume())},visible:!1},skeletonData:{default:null,type:sp.SkeletonData,notify:function(){this.defaultSkin="",this.defaultAnimation="",this._refresh()},tooltip:!1},defaultSkin:{default:"",visible:!1},defaultAnimation:{default:"",visible:!1},animation:{get:function(){var t=this.getCurrent(0);return t&&t.animation.name||""},set:function(t){this.defaultAnimation=t,t?this.setAnimation(0,t,this.loop):(this.clearTrack(0),this.setToSetupPose())},visible:!1},_defaultSkinIndex:{get:function(){if(this.skeletonData&&this.defaultSkin){var t=this.skeletonData.getSkinsEnum();if(t){var e=t[this.defaultSkin];if(void 0!==e)return e}}return 0},set:function(t){var e;if(this.skeletonData&&(e=this.skeletonData.getSkinsEnum()),!e)return cc.errorID("",this.name);var i=e[t];void 0!==i?this.defaultSkin=i:cc.errorID(7501,this.name)},type:n,visible:!0,displayName:"Default Skin",tooltip:!1},_animationIndex:{get:function(){var t=this.animation;if(this.skeletonData&&t){var e=this.skeletonData.getAnimsEnum();if(e){var i=e[t];if(void 0!==i)return i}}return 0},set:function(t){if(0!==t){var e;if(this.skeletonData&&(e=this.skeletonData.getAnimsEnum()),!e)return cc.errorID(7502,this.name);var i=e[t];void 0!==i?this.animation=i:cc.errorID(7503,this.name)}else this.animation=""},type:r,visible:!0,displayName:"Animation",tooltip:!1},loop:{default:!0,tooltip:!1},_premultipliedAlpha:!0,premultipliedAlpha:{get:function(){return 
this._premultipliedAlpha},set:function(t){this._premultipliedAlpha=t,this._sgNode&&this._sgNode.setPremultipliedAlpha(t)},tooltip:!1},timeScale:{default:1,notify:function(){this._sgNode&&this._sgNode.setTimeScale(this.timeScale)},tooltip:!1},debugSlots:{default:!1,notify:function(){this._sgNode&&this._sgNode.setDebugSlotsEnabled(this.debugSlots)},editorOnly:!0,tooltip:!1},debugBones:{default:!1,notify:function(){this._sgNode&&this._sgNode.setDebugBonesEnabled(this.debugBones)},editorOnly:!0,tooltip:!1}},__preload:function(){this.node.setContentSize(0,0),this._refresh()},_createSgNode:function(){var t=this.skeletonData;if(t){var e=t.getRuntimeData();if(e)try{return new sp._SGSkeletonAnimation(e,null,t.scale)}catch(t){cc._throw(t)}}return null},_initSgNode:function(){var t=this._sgNode;t.setTimeScale(this.timeScale);var e=this;if(t.onEnter=function(){_ccsg.Node.prototype.onEnter.call(this),e._paused&&this.pause()},this._startListener&&this.setStartListener(this._startListener),this._endListener&&this.setEndListener(this._endListener),this._completeListener&&this.setCompleteListener(this._completeListener),this._eventListener&&this.setEventListener(this._eventListener),this._interruptListener&&this.setInterruptListener(this._interruptListener),this._disposeListener&&this.setDisposeListener(this._disposeListener),this.defaultSkin)try{t.setSkin(this.defaultSkin)}catch(t){cc._throw(t)}t.setPremultipliedAlpha(this._premultipliedAlpha),this.animation=this.defaultAnimation},_getLocalBounds:!1,updateWorldTransform:function(){this._sgNode&&this._sgNode.updateWorldTransform()},setToSetupPose:function(){this._sgNode&&this._sgNode.setToSetupPose()},setBonesToSetupPose:function(){this._sgNode&&this._sgNode.setBonesToSetupPose()},setSlotsToSetupPose:function(){this._sgNode&&this._sgNode.setSlotsToSetupPose()},findBone:function(t){return this._sgNode?this._sgNode.findBone(t):null},findSlot:function(t){return this._sgNode?this._sgNode.findSlot(t):null},setSkin:function(t){return this._sgNode?this._sgNode.setSkin(t):null},getAttachment:function(t,e){return this._sgNode?this._sgNode.getAttachment(t,e):null},setAttachment:function(t,e){this._sgNode&&this._sgNode.setAttachment(t,e)},setSkeletonData:function(t,e){this._sgNode&&this._sgNode.setSkeletonData(t,e)},setAnimationStateData:function(t){if(this._sgNode)return this._sgNode.setAnimationStateData(t)},setMix:function(t,e,i){this._sgNode&&this._sgNode.setMix(t,e,i)},setAnimationListener:function(t,e){this._sgNode&&this._sgNode.setAnimationListener(t,e)},setAnimation:function(t,e,i){return this._sgNode?this._sgNode.setAnimation(t,e,i):null},_sample:function(){this._sgNode&&this._sgNode.update(0)},addAnimation:function(t,e,i,n){return this._sgNode?this._sgNode.addAnimation(t,e,i,n||0):null},findAnimation:function(t){return this._sgNode?this._sgNode.findAnimation(t):null},getCurrent:function(t){return 
this._sgNode?this._sgNode.getCurrent(t):null},clearTracks:function(){this._sgNode&&this._sgNode.clearTracks()},clearTrack:function(t){this._sgNode&&this._sgNode.clearTrack(t)},_updateAnimEnum:!1,_updateSkinEnum:!1,setStartListener:function(t){this._startListener=t,this._sgNode&&this._sgNode.setStartListener(t)},setInterruptListener:function(t){this._interruptListener=t,this._sgNode&&this._sgNode.setInterruptListener(t)},setEndListener:function(t){this._endListener=t,this._sgNode&&this._sgNode.setEndListener(t)},setDisposeListener:function(t){this._disposeListener=t,this._sgNode&&this._sgNode.setDisposeListener(t)},setCompleteListener:function(t){this._completeListener=t,this._sgNode&&this._sgNode.setCompleteListener(t)},setEventListener:function(t){this._eventListener=t,this._sgNode&&this._sgNode.setEventListener(t)},setTrackStartListener:function(t,e){this._sgNode&&this._sgNode.setTrackStartListener(t,e)},setTrackInterruptListener:function(t,e){this._sgNode&&this._sgNode.setTrackInterruptListener(t,e)},setTrackEndListener:function(t,e){this._sgNode&&this._sgNode.setTrackEndListener(t,e)},setTrackDisposeListener:function(t,e){this._sgNode&&this._sgNode.setTrackDisposeListener(t,e)},setTrackCompleteListener:function(t,e){this._sgNode&&this._sgNode.setTrackCompleteListener(t,e)},setTrackEventListener:function(t,e){this._sgNode&&this._sgNode.setTrackEventListener(t,e)},getState:function(){if(this._sgNode)return this._sgNode.getState()},_refresh:function(){this._sgNode&&(this.node._sizeProvider===this._sgNode&&(this.node._sizeProvider=null),this._removeSgNode(),this._sgNode=null);var t=this._sgNode=this._createSgNode();t&&(this.enabledInHierarchy||t.setVisible(!1),t.setContentSize(0,0),this._initSgNode(),this._appendSgNode(t),this._registSizeProvider())}})}),{}],309:[(function(t,e,i){var n=cc.Class({name:"sp.SkeletonData",extends:cc.Asset,ctor:function(){this.reset()},properties:{_skeletonJson:null,skeletonJson:{get:function(){return this._skeletonJson},set:function(t){this._skeletonJson=t,this.reset()}},_atlasText:"",atlasText:{get:function(){return this._atlasText},set:function(t){this._atlasText=t,this.reset()}},textures:{default:[],type:[cc.Texture2D]},textureNames:{default:[],type:[cc.String]},scale:1},statics:{preventDeferredLoadDependents:!0,preventPreloadNativeObject:!0},createNode:!1,reset:function(){this._skeletonCache=null,this._atlasCache=null},getRuntimeData:function(t){if(this._skeletonCache)return this._skeletonCache;if(!(this.textures&&this.textures.length>0&&this.textureNames&&this.textureNames.length>0))return t||cc.errorID(7507,this.name),null;var e=this._getAtlas(t);if(!e)return null;var i=new sp.spine.AtlasAttachmentLoader(e),n=new sp.spine.SkeletonJson(i);n.scale=this.scale;var r=this.skeletonJson;return this._skeletonCache=n.readSkeletonData(r),e.dispose(n),this._skeletonCache},getSkinsEnum:!1,getAnimsEnum:!1,_getTexture:function(t){for(var e=this.textureNames,i=0;i0&&(e%=this.duration));for(var c=this.timelines,h=0,l=c.length;h>>1;;){if(t[(s+1)*i]<=e?n=s+1:r=s,n==r)return(n+1)*i;s=n+r>>>1}},t.linearSearch=function(t,e,i){for(var n=0,r=t.length-i;n<=r;n+=i)if(t[n]>e)return 
n;return-1},t})();t.Animation=e,(function(t){t[t.rotate=0]="rotate",t[t.translate=1]="translate",t[t.scale=2]="scale",t[t.shear=3]="shear",t[t.attachment=4]="attachment",t[t.color=5]="color",t[t.deform=6]="deform",t[t.event=7]="event",t[t.drawOrder=8]="drawOrder",t[t.ikConstraint=9]="ikConstraint",t[t.transformConstraint=10]="transformConstraint",t[t.pathConstraintPosition=11]="pathConstraintPosition",t[t.pathConstraintSpacing=12]="pathConstraintSpacing",t[t.pathConstraintMix=13]="pathConstraintMix"})(t.TimelineType||(t.TimelineType={}));var i=t.TimelineType,n=(function(){function e(i){if(i<=0)throw new Error("frameCount must be > 0: "+i);this.curves=t.Utils.newFloatArray((i-1)*e.BEZIER_SIZE)}return e.prototype.getFrameCount=function(){return this.curves.length/e.BEZIER_SIZE+1},e.prototype.setLinear=function(t){this.curves[t*e.BEZIER_SIZE]=e.LINEAR},e.prototype.setStepped=function(t){this.curves[t*e.BEZIER_SIZE]=e.STEPPED},e.prototype.getCurveType=function(t){var i=t*e.BEZIER_SIZE;if(i==this.curves.length)return e.LINEAR;var n=this.curves[i];return n==e.LINEAR?e.LINEAR:n==e.STEPPED?e.STEPPED:e.BEZIER},e.prototype.setCurve=function(t,i,n,r,s){var o=.03*(2*-i+r),a=.03*(2*-n+s),c=.006*(3*(i-r)+1),h=.006*(3*(n-s)+1),l=2*o+c,u=2*a+h,_=.3*i+o+.16666667*c,d=.3*n+a+.16666667*h,f=t*e.BEZIER_SIZE,m=this.curves;m[f++]=e.BEZIER;for(var p=_,g=d,y=f+e.BEZIER_SIZE-1;f=n){var l=void 0,u=void 0;return s==c?(l=0,u=0):(l=r[s-2],u=r[s-1]),u+(r[s+1]-u)*(n-l)/(a-l)}var _=r[s-1];return _+(1-_)*(n-a)/(1-a)},e.LINEAR=0,e.STEPPED=1,e.BEZIER=2,e.BEZIER_SIZE=19,e})();t.CurveTimeline=n;var s=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e<<1)}return r(s,n),s.prototype.getPropertyId=function(){return(i.rotate<<24)+this.boneIndex},s.prototype.setFrame=function(t,e,i){t<<=1,this.frames[t]=e,this.frames[t+s.ROTATION]=i},s.prototype.apply=function(t,i,n,r,o,a,c){var h=this.frames,l=t.bones[this.boneIndex];if(n=h[h.length-s.ENTRIES])if(a)l.rotation=l.data.rotation+h[h.length+s.PREV_ROTATION]*o;else{var u=l.data.rotation+h[h.length+s.PREV_ROTATION]-l.rotation;u-=360*(16384-(16384.499999999996-u/360|0)),l.rotation+=u*o}else{var _=e.binarySearch(h,n,s.ENTRIES),d=h[_+s.PREV_ROTATION],f=h[_],m=this.getCurvePercent((_>>1)-1,1-(n-f)/(h[_+s.PREV_TIME]-f)),p=h[_+s.ROTATION]-d;p=d+(p-=360*(16384-(16384.499999999996-p/360|0)))*m,a?(p-=360*(16384-(16384.499999999996-p/360|0)),l.rotation=l.data.rotation+p*o):(p=l.data.rotation+p-l.rotation,p-=360*(16384-(16384.499999999996-p/360|0)),l.rotation+=p*o)}},s.ENTRIES=2,s.PREV_TIME=-2,s.PREV_ROTATION=-1,s.ROTATION=1,s})(n);t.RotateTimeline=s;var o=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e*s.ENTRIES)}return r(s,n),s.prototype.getPropertyId=function(){return(i.translate<<24)+this.boneIndex},s.prototype.setFrame=function(t,e,i,n){t*=s.ENTRIES,this.frames[t]=e,this.frames[t+s.X]=i,this.frames[t+s.Y]=n},s.prototype.apply=function(t,i,n,r,o,a,c){var h=this.frames,l=t.bones[this.boneIndex];if(n=h[h.length-s.ENTRIES])u=h[h.length+s.PREV_X],_=h[h.length+s.PREV_Y];else{var d=e.binarySearch(h,n,s.ENTRIES);u=h[d+s.PREV_X],_=h[d+s.PREV_Y];var f=h[d],m=this.getCurvePercent(d/s.ENTRIES-1,1-(n-f)/(h[d+s.PREV_TIME]-f));u+=(h[d+s.X]-u)*m,_+=(h[d+s.Y]-_)*m}a?(l.x=l.data.x+u*o,l.y=l.data.y+_*o):(l.x+=(l.data.x+u-l.x)*o,l.y+=(l.data.y+_-l.y)*o)}},s.ENTRIES=3,s.PREV_TIME=-3,s.PREV_X=-2,s.PREV_Y=-1,s.X=1,s.Y=2,s})(n);t.TranslateTimeline=o;var a=(function(n){function s(t){n.call(this,t)}return 
r(s,n),s.prototype.getPropertyId=function(){return(i.scale<<24)+this.boneIndex},s.prototype.apply=function(i,n,r,o,a,c,h){var l=this.frames,u=i.bones[this.boneIndex];if(r=l[l.length-s.ENTRIES])_=l[l.length+s.PREV_X]*u.data.scaleX,d=l[l.length+s.PREV_Y]*u.data.scaleY;else{var f=e.binarySearch(l,r,s.ENTRIES);_=l[f+s.PREV_X],d=l[f+s.PREV_Y];var m=l[f],p=this.getCurvePercent(f/s.ENTRIES-1,1-(r-m)/(l[f+s.PREV_TIME]-m));_=(_+(l[f+s.X]-_)*p)*u.data.scaleX,d=(d+(l[f+s.Y]-d)*p)*u.data.scaleY}if(1==a)u.scaleX=_,u.scaleY=d;else{var g=0,y=0;c?(g=u.data.scaleX,y=u.data.scaleY):(g=u.scaleX,y=u.scaleY),h?(_=Math.abs(_)*t.MathUtils.signum(g),d=Math.abs(d)*t.MathUtils.signum(y)):(g=Math.abs(g)*t.MathUtils.signum(_),y=Math.abs(y)*t.MathUtils.signum(d)),u.scaleX=g+(_-g)*a,u.scaleY=y+(d-y)*a}}},s})(o);t.ScaleTimeline=a;var c=(function(t){function n(e){t.call(this,e)}return r(n,t),n.prototype.getPropertyId=function(){return(i.shear<<24)+this.boneIndex},n.prototype.apply=function(t,i,r,s,o,a,c){var h=this.frames,l=t.bones[this.boneIndex];if(r=h[h.length-n.ENTRIES])u=h[h.length+n.PREV_X],_=h[h.length+n.PREV_Y];else{var d=e.binarySearch(h,r,n.ENTRIES);u=h[d+n.PREV_X],_=h[d+n.PREV_Y];var f=h[d],m=this.getCurvePercent(d/n.ENTRIES-1,1-(r-f)/(h[d+n.PREV_TIME]-f));u+=(h[d+n.X]-u)*m,_+=(h[d+n.Y]-_)*m}a?(l.shearX=l.data.shearX+u*o,l.shearY=l.data.shearY+_*o):(l.shearX+=(l.data.shearX+u-l.shearX)*o,l.shearY+=(l.data.shearY+_-l.shearY)*o)}},n})(o);t.ShearTimeline=c;var h=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e*s.ENTRIES)}return r(s,n),s.prototype.getPropertyId=function(){return(i.color<<24)+this.slotIndex},s.prototype.setFrame=function(t,e,i,n,r,o){t*=s.ENTRIES,this.frames[t]=e,this.frames[t+s.R]=i,this.frames[t+s.G]=n,this.frames[t+s.B]=r,this.frames[t+s.A]=o},s.prototype.apply=function(t,i,n,r,o,a,c){var h=t.slots[this.slotIndex],l=this.frames;if(n=l[l.length-s.ENTRIES]){var m=l.length;u=l[m+s.PREV_R],_=l[m+s.PREV_G],d=l[m+s.PREV_B],f=l[m+s.PREV_A]}else{var p=e.binarySearch(l,n,s.ENTRIES);u=l[p+s.PREV_R],_=l[p+s.PREV_G],d=l[p+s.PREV_B],f=l[p+s.PREV_A];var g=l[p],y=this.getCurvePercent(p/s.ENTRIES-1,1-(n-g)/(l[p+s.PREV_TIME]-g));u+=(l[p+s.R]-u)*y,_+=(l[p+s.G]-_)*y,d+=(l[p+s.B]-d)*y,f+=(l[p+s.A]-f)*y}if(1==o)h.color.set(u,_,d,f);else{var v=h.color;a&&v.setFromColor(h.data.color),v.add((u-v.r)*o,(_-v.g)*o,(d-v.b)*o,(f-v.a)*o)}}},s.ENTRIES=5,s.PREV_TIME=-5,s.PREV_R=-4,s.PREV_G=-3,s.PREV_B=-2,s.PREV_A=-1,s.R=1,s.G=2,s.B=3,s.A=4,s})(n);t.ColorTimeline=h;var l=(function(){function n(e){this.frames=t.Utils.newFloatArray(e),this.attachmentNames=new Array(e)}return n.prototype.getPropertyId=function(){return(i.attachment<<24)+this.slotIndex},n.prototype.getFrameCount=function(){return this.frames.length},n.prototype.setFrame=function(t,e,i){this.frames[t]=e,this.attachmentNames[t]=i},n.prototype.apply=function(t,i,n,r,s,o,a){var c=t.slots[this.slotIndex];if(a&&o){var h=c.data.attachmentName;c.setAttachment(null==h?null:t.getAttachment(this.slotIndex,h))}else{var l=this.frames;if(n=l[l.length-1]?l.length-1:e.binarySearch(l,n,1)-1;var d=this.attachmentNames[_];t.slots[this.slotIndex].setAttachment(null==d?null:t.getAttachment(this.slotIndex,d))}}},n})();t.AttachmentTimeline=l;var u=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e),this.frameVertices=new Array(e)}return 
r(s,n),s.prototype.getPropertyId=function(){return(i.deform<<24)+this.slotIndex},s.prototype.setFrame=function(t,e,i){this.frames[t]=e,this.frameVertices[t]=i},s.prototype.apply=function(i,n,r,s,o,a,c){var h=i.slots[this.slotIndex],l=h.getAttachment();if(l instanceof t.VertexAttachment&&l.applyDeform(this.attachment)){var u=this.frames,_=h.attachmentVertices;if(r=u[u.length-1]){var p=d[u.length-1];if(1==o)t.Utils.arrayCopy(p,0,m,0,f);else if(a){if(null==(E=l).bones)for(var g=E.vertices,y=0;yn)this.apply(t,i,Number.MAX_VALUE,r,s,o,a),i=-1;else if(i>=c[h-1])return;if(!(n0&&c[l-1]==u;)l--;for(;l=c[l];l++)r.push(this.events[l])}}},n})();t.EventTimeline=_;var d=(function(){function n(e){this.frames=t.Utils.newFloatArray(e),this.drawOrders=new Array(e)}return n.prototype.getPropertyId=function(){return i.drawOrder<<24},n.prototype.getFrameCount=function(){return this.frames.length},n.prototype.setFrame=function(t,e,i){this.frames[t]=e,this.drawOrders[t]=i},n.prototype.apply=function(i,n,r,s,o,a,c){var h=i.drawOrder,l=i.slots;if(c&&a)t.Utils.arrayCopy(i.slots,0,i.drawOrder,0,i.slots.length);else{var u=this.frames;if(r=u[u.length-1]?u.length-1:e.binarySearch(u,r)-1;var d=this.drawOrders[_];if(null==d)t.Utils.arrayCopy(l,0,h,0,l.length);else for(var f=0,m=d.length;f=h[h.length-s.ENTRIES])a?(l.mix=l.data.mix+(h[h.length+s.PREV_MIX]-l.data.mix)*o,l.bendDirection=c?l.data.bendDirection:h[h.length+s.PREV_BEND_DIRECTION]):(l.mix+=(h[h.length+s.PREV_MIX]-l.mix)*o,c||(l.bendDirection=h[h.length+s.PREV_BEND_DIRECTION]));else{var u=e.binarySearch(h,n,s.ENTRIES),_=h[u+s.PREV_MIX],d=h[u],f=this.getCurvePercent(u/s.ENTRIES-1,1-(n-d)/(h[u+s.PREV_TIME]-d));a?(l.mix=l.data.mix+(_+(h[u+s.MIX]-_)*f-l.data.mix)*o,l.bendDirection=c?l.data.bendDirection:h[u+s.PREV_BEND_DIRECTION]):(l.mix+=(_+(h[u+s.MIX]-_)*f-l.mix)*o,c||(l.bendDirection=h[u+s.PREV_BEND_DIRECTION]))}},s.ENTRIES=3,s.PREV_TIME=-3,s.PREV_MIX=-2,s.PREV_BEND_DIRECTION=-1,s.MIX=1,s.BEND_DIRECTION=2,s})(n);t.IkConstraintTimeline=f;var m=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e*s.ENTRIES)}return r(s,n),s.prototype.getPropertyId=function(){return(i.transformConstraint<<24)+this.transformConstraintIndex},s.prototype.setFrame=function(t,e,i,n,r,o){t*=s.ENTRIES,this.frames[t]=e,this.frames[t+s.ROTATE]=i,this.frames[t+s.TRANSLATE]=n,this.frames[t+s.SCALE]=r,this.frames[t+s.SHEAR]=o},s.prototype.apply=function(t,i,n,r,o,a,c){var h=this.frames,l=t.transformConstraints[this.transformConstraintIndex];if(n=h[h.length-s.ENTRIES]){var p=h.length;_=h[p+s.PREV_ROTATE],d=h[p+s.PREV_TRANSLATE],f=h[p+s.PREV_SCALE],m=h[p+s.PREV_SHEAR]}else{var g=e.binarySearch(h,n,s.ENTRIES);_=h[g+s.PREV_ROTATE],d=h[g+s.PREV_TRANSLATE],f=h[g+s.PREV_SCALE],m=h[g+s.PREV_SHEAR];var y=h[g],v=this.getCurvePercent(g/s.ENTRIES-1,1-(n-y)/(h[g+s.PREV_TIME]-y));_+=(h[g+s.ROTATE]-_)*v,d+=(h[g+s.TRANSLATE]-d)*v,f+=(h[g+s.SCALE]-f)*v,m+=(h[g+s.SHEAR]-m)*v}if(a){u=l.data;l.rotateMix=u.rotateMix+(_-u.rotateMix)*o,l.translateMix=u.translateMix+(d-u.translateMix)*o,l.scaleMix=u.scaleMix+(f-u.scaleMix)*o,l.shearMix=u.shearMix+(m-u.shearMix)*o}else l.rotateMix+=(_-l.rotateMix)*o,l.translateMix+=(d-l.translateMix)*o,l.scaleMix+=(f-l.scaleMix)*o,l.shearMix+=(m-l.shearMix)*o}},s.ENTRIES=5,s.PREV_TIME=-5,s.PREV_ROTATE=-4,s.PREV_TRANSLATE=-3,s.PREV_SCALE=-2,s.PREV_SHEAR=-1,s.ROTATE=1,s.TRANSLATE=2,s.SCALE=3,s.SHEAR=4,s})(n);t.TransformConstraintTimeline=m;var p=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e*s.ENTRIES)}return 
r(s,n),s.prototype.getPropertyId=function(){return(i.pathConstraintPosition<<24)+this.pathConstraintIndex},s.prototype.setFrame=function(t,e,i){t*=s.ENTRIES,this.frames[t]=e,this.frames[t+s.VALUE]=i},s.prototype.apply=function(t,i,n,r,o,a,c){var h=this.frames,l=t.pathConstraints[this.pathConstraintIndex];if(n=h[h.length-s.ENTRIES])u=h[h.length+s.PREV_VALUE];else{var _=e.binarySearch(h,n,s.ENTRIES);u=h[_+s.PREV_VALUE];var d=h[_],f=this.getCurvePercent(_/s.ENTRIES-1,1-(n-d)/(h[_+s.PREV_TIME]-d));u+=(h[_+s.VALUE]-u)*f}a?l.position=l.data.position+(u-l.data.position)*o:l.position+=(u-l.position)*o}},s.ENTRIES=2,s.PREV_TIME=-2,s.PREV_VALUE=-1,s.VALUE=1,s})(n);t.PathConstraintPositionTimeline=p;var g=(function(t){function n(e){t.call(this,e)}return r(n,t),n.prototype.getPropertyId=function(){return(i.pathConstraintSpacing<<24)+this.pathConstraintIndex},n.prototype.apply=function(t,i,r,s,o,a,c){var h=this.frames,l=t.pathConstraints[this.pathConstraintIndex];if(r=h[h.length-n.ENTRIES])u=h[h.length+n.PREV_VALUE];else{var _=e.binarySearch(h,r,n.ENTRIES);u=h[_+n.PREV_VALUE];var d=h[_],f=this.getCurvePercent(_/n.ENTRIES-1,1-(r-d)/(h[_+n.PREV_TIME]-d));u+=(h[_+n.VALUE]-u)*f}a?l.spacing=l.data.spacing+(u-l.data.spacing)*o:l.spacing+=(u-l.spacing)*o}},n})(p);t.PathConstraintSpacingTimeline=g;var y=(function(n){function s(e){n.call(this,e),this.frames=t.Utils.newFloatArray(e*s.ENTRIES)}return r(s,n),s.prototype.getPropertyId=function(){return(i.pathConstraintMix<<24)+this.pathConstraintIndex},s.prototype.setFrame=function(t,e,i,n){t*=s.ENTRIES,this.frames[t]=e,this.frames[t+s.ROTATE]=i,this.frames[t+s.TRANSLATE]=n},s.prototype.apply=function(t,i,n,r,o,a,c){var h=this.frames,l=t.pathConstraints[this.pathConstraintIndex];if(n=h[h.length-s.ENTRIES])u=h[h.length+s.PREV_ROTATE],_=h[h.length+s.PREV_TRANSLATE];else{var d=e.binarySearch(h,n,s.ENTRIES);u=h[d+s.PREV_ROTATE],_=h[d+s.PREV_TRANSLATE];var f=h[d],m=this.getCurvePercent(d/s.ENTRIES-1,1-(n-f)/(h[d+s.PREV_TIME]-f));u+=(h[d+s.ROTATE]-u)*m,_+=(h[d+s.TRANSLATE]-_)*m}a?(l.rotateMix=l.data.rotateMix+(u-l.data.rotateMix)*o,l.translateMix=l.data.translateMix+(_-l.data.translateMix)*o):(l.rotateMix+=(u-l.rotateMix)*o,l.translateMix+=(_-l.translateMix)*o)}},s.ENTRIES=3,s.PREV_TIME=-3,s.PREV_ROTATE=-2,s.PREV_TRANSLATE=-1,s.ROTATE=1,s.TRANSLATE=2,s})(n);t.PathConstraintMixTimeline=y})(n||(n={})),(function(t){var e=(function(){function e(e){this.tracks=new Array,this.events=new Array,this.listeners=new Array,this.queue=new n(this),this.propertyIDs=new t.IntSet,this.animationsChanged=!1,this.timeScale=1,this.trackEntryPool=new t.Pool(function(){return new i}),this.data=e}return e.prototype.update=function(t){t*=this.timeScale;for(var e=this.tracks,i=0,n=e.length;i0){if(r.delay-=s,r.delay>0)continue;s=-r.delay,r.delay=0}var o=r.next;if(null!=o){var a=r.trackLast-o.delay;if(a>=0){for(o.delay=0,o.trackTime=a+t*o.timeScale,r.trackTime+=s,this.setCurrent(i,o,!0);null!=o.mixingFrom;)o.mixTime+=s,o=o.mixingFrom;continue}}else if(r.trackLast>=r.trackEnd&&null==r.mixingFrom){e[i]=null,this.queue.end(r),this.disposeNext(r);continue}this.updateMixingFrom(r,t),r.trackTime+=s}}this.queue.drain()},e.prototype.updateMixingFrom=function(t,e){var i=t.mixingFrom;if(null!=i){if(this.updateMixingFrom(i,e),t.mixTime>=t.mixDuration&&null!=i.mixingFrom&&t.mixTime>0)return t.mixingFrom=null,void this.queue.end(i);i.animationLast=i.nextAnimationLast,i.trackLast=i.nextTrackLast,i.trackTime+=e*i.timeScale,t.mixTime+=e*i.timeScale}},e.prototype.apply=function(e){if(null==e)throw new 
Error("skeleton cannot be null.");this.animationsChanged&&this._animationsChanged();for(var i=this.events,n=this.tracks,r=0,s=n.length;r0)){var a=o.alpha;null!=o.mixingFrom?a*=this.applyMixingFrom(o,e):o.trackTime>=o.trackEnd&&(a=0);var c=o.animationLast,h=o.getAnimationTime(),l=o.animation.timelines.length,u=o.animation.timelines;if(1==a)for(var _=0;_1&&(r=1);var s=r0&&this.queueEvents(n,h),this.events.length=0,n.nextAnimationLast=h,n.nextTrackLast=n.trackTime,r},e.prototype.applyRotateTimeline=function(e,i,n,r,s,o,a,c){if(c&&(o[a]=0),1!=r){var h=e,l=h.frames,u=i.bones[h.boneIndex];if(n=l[l.length-t.RotateTimeline.ENTRIES])_=u.data.rotation+l[l.length+t.RotateTimeline.PREV_ROTATION];else{var d=t.Animation.binarySearch(l,n,t.RotateTimeline.ENTRIES),f=l[d+t.RotateTimeline.PREV_ROTATION],m=l[d],p=h.getCurvePercent((d>>1)-1,1-(n-m)/(l[d+t.RotateTimeline.PREV_TIME]-m));_=l[d+t.RotateTimeline.ROTATION]-f,_=f+(_-=360*(16384-(16384.499999999996-_/360|0)))*p+u.data.rotation,_-=360*(16384-(16384.499999999996-_/360|0))}var g=s?u.data.rotation:u.rotation,y=0,v=_-g;if(0==v)y=o[a];else{v-=360*(16384-(16384.499999999996-v/360|0));var x=0,C=0;c?(x=0,C=v):(x=o[a],C=o[a+1]);var T=v>0,A=x>=0;t.MathUtils.signum(C)!=t.MathUtils.signum(v)&&Math.abs(C)<=90&&(Math.abs(x)>180&&(x+=360*t.MathUtils.signum(x)),A=T),y=v+x-x%360,A!=T&&(y+=360*t.MathUtils.signum(x)),o[a]=y}o[a+1]=v,g+=y*r,u.rotation=g-360*(16384-(16384.499999999996-g/360|0))}}else e.apply(i,0,n,null,1,s,!1)},e.prototype.queueEvents=function(t,e){for(var i=t.animationStart,n=t.animationEnd,r=n-i,s=t.trackLast%r,o=this.events,a=0,c=o.length;an||this.queue.event(t,h)}for((t.loop?s>t.trackTime%r:e>=n&&t.animationLast=this.tracks.length)){var e=this.tracks[t];if(null!=e){this.queue.end(e),this.disposeNext(e);for(var i=e;;){var n=i.mixingFrom;if(null==n)break;this.queue.end(n),i.mixingFrom=null,i=n}this.tracks[e.trackIndex]=null,this.queue.drain()}}},e.prototype.setCurrent=function(t,e,i){var n=this.expandToIndex(t);this.tracks[t]=e,null!=n&&(i&&this.queue.interrupt(n),e.mixingFrom=n,e.mixTime=0,n.timelinesRotation.length=0,null!=n.mixingFrom&&n.mixDuration>0&&(e.mixAlpha*=Math.min(n.mixTime/n.mixDuration,1))),this.queue.start(e)},e.prototype.setAnimation=function(t,e,i){var n=this.data.skeletonData.findAnimation(e);if(null==n)throw new Error("Animation not found: "+e);return this.setAnimationWith(t,n,i)},e.prototype.setAnimationWith=function(t,e,i){if(null==e)throw new Error("animation cannot be null.");var n=!0,r=this.expandToIndex(t);null!=r&&(-1==r.nextTrackLast?(this.tracks[t]=r.mixingFrom,this.queue.interrupt(r),this.queue.end(r),this.disposeNext(r),r=r.mixingFrom,n=!1):this.disposeNext(r));var s=this.trackEntry(t,e,i,r);return this.setCurrent(t,s,n),this.queue.drain(),s},e.prototype.addAnimation=function(t,e,i,n){var r=this.data.skeletonData.findAnimation(e);if(null==r)throw new Error("Animation not found: "+e);return this.addAnimationWith(t,r,i,n)},e.prototype.addAnimationWith=function(t,e,i,n){if(null==e)throw new Error("animation cannot be null.");var r=this.expandToIndex(t);if(null!=r)for(;null!=r.next;)r=r.next;var s=this.trackEntry(t,e,i,r);if(null==r)this.setCurrent(t,s,!0),this.queue.drain();else if(r.next=s,n<=0){var o=r.animationEnd-r.animationStart;0!=o?n+=o*(1+(r.trackTime/o|0))-this.data.getMix(r.animation,e):n=0}return s.delay=n,s},e.prototype.setEmptyAnimation=function(t,i){var n=this.setAnimationWith(t,e.emptyAnimation,!1);return n.mixDuration=i,n.trackEnd=i,n},e.prototype.addEmptyAnimation=function(t,i,n){n<=0&&(n-=i);var 
r=this.addAnimationWith(t,e.emptyAnimation,!1,n);return r.mixDuration=i,r.trackEnd=i,r},e.prototype.setEmptyAnimations=function(t){this.queue.drainDisabled=!0;for(var e=0,i=this.tracks.length;e=this.tracks.length?null:this.tracks[t]},e.prototype.addListener=function(t){if(null==t)throw new Error("listener cannot be null.");this.listeners.push(t)},e.prototype.removeListener=function(t){var e=this.listeners.indexOf(t);e>=0&&this.listeners.splice(e,1)},e.prototype.clearListeners=function(){this.listeners.length=0},e.prototype.clearListenerNotifications=function(){this.queue.clear()},e.emptyAnimation=new t.Animation("",[],0),e})();t.AnimationState=e;var i=(function(){function t(){this.timelinesFirst=new Array,this.timelinesRotation=new Array}return t.prototype.reset=function(){this.next=null,this.mixingFrom=null,this.animation=null,this.listener=null,this.timelinesFirst.length=0,this.timelinesRotation.length=0},t.prototype.getAnimationTime=function(){if(this.loop){var t=this.animationEnd-this.animationStart;return 0==t?this.animationStart:this.trackTime%t+this.animationStart}return Math.min(this.trackTime+this.animationStart,this.animationEnd)},t.prototype.setAnimationLast=function(t){this.animationLast=t,this.nextAnimationLast=t},t.prototype.isComplete=function(){return this.trackTime>=this.animationEnd-this.animationStart},t.prototype.resetRotationDirections=function(){this.timelinesRotation.length=0},t})();t.TrackEntry=i;var n=(function(){function t(t){this.objects=[],this.drainDisabled=!1,this.animState=t}return t.prototype.start=function(t){this.objects.push(r.start),this.objects.push(t),this.animState.animationsChanged=!0},t.prototype.interrupt=function(t){this.objects.push(r.interrupt),this.objects.push(t)},t.prototype.end=function(t){this.objects.push(r.end),this.objects.push(t),this.animState.animationsChanged=!0},t.prototype.dispose=function(t){this.objects.push(r.dispose),this.objects.push(t)},t.prototype.complete=function(t){this.objects.push(r.complete),this.objects.push(t)},t.prototype.event=function(t,e){this.objects.push(r.event),this.objects.push(t),this.objects.push(e)},t.prototype.drain=function(){if(!this.drainDisabled){this.drainDisabled=!0;for(var t=this.objects,e=this.animState.listeners,i=0;i=200&&r.status<300?(n.assets[t]=r.responseText,e&&e(t,r.responseText)):(n.errors[t]="Couldn't load text "+t+": status "+r.status+", "+r.responseText,i&&i(t,"Couldn't load text "+t+": status "+r.status+", "+r.responseText)),n.toLoad--,n.loaded++)},r.open("GET",t,!0),r.send()},t.prototype.loadTexture=function(t,e,i){var n=this;void 0===e&&(e=null),void 0===i&&(i=null),t=this.pathPrefix+t,this.toLoad++;var r=new Image;r.crossOrigin="anonymous",r.src=t,r.onload=function(i){var s=n.textureLoader(r);n.assets[t]=s,n.toLoad--,n.loaded++,e&&e(t,r)},r.onerror=function(e){n.errors[t]="Couldn't load image "+t,n.toLoad--,n.loaded++,i&&i(t,"Couldn't load image "+t)}},t.prototype.get=function(t){return t=this.pathPrefix+t,this.assets[t]},t.prototype.remove=function(t){t=this.pathPrefix+t;var e=this.assets[t];e.dispose&&e.dispose(),this.assets[t]=null},t.prototype.removeAll=function(){for(var t in this.assets){var e=this.assets[t];e.dispose&&e.dispose()}this.assets={}},t.prototype.isLoadingComplete=function(){return 0==this.toLoad},t.prototype.getToLoad=function(){return this.toLoad},t.prototype.getLoaded=function(){return this.loaded},t.prototype.dispose=function(){this.removeAll()},t.prototype.hasErrors=function(){return Object.keys(this.errors).length>0},t.prototype.getErrors=function(){return 
this.errors},t})();t.AssetManager=e})(n||(n={})),(function(t){var e=(function(){function e(t){this.atlas=t}return e.prototype.newRegionAttachment=function(e,i,n){var r=this.atlas.findRegion(n);if(null==r)throw new Error("Region not found in atlas: "+n+" (region attachment: "+i+")");r.renderObject=r;var s=new t.RegionAttachment(i);return s.setRegion(r),s},e.prototype.newMeshAttachment=function(e,i,n){var r=this.atlas.findRegion(n);if(null==r)throw new Error("Region not found in atlas: "+n+" (mesh attachment: "+i+")");r.renderObject=r;var s=new t.MeshAttachment(i);return s.region=r,s},e.prototype.newBoundingBoxAttachment=function(e,i){return new t.BoundingBoxAttachment(i)},e.prototype.newPathAttachment=function(e,i){return new t.PathAttachment(i)},e})();t.AtlasAttachmentLoader=e})(n||(n={})),(function(t){var e=(function(){return function(t){if(null==t)throw new Error("name cannot be null.");this.name=t}})();t.Attachment=e;var i=(function(t){function e(e){t.call(this,e),this.worldVerticesLength=0}return r(e,t),e.prototype.computeWorldVertices=function(t,e){this.computeWorldVerticesWith(t,0,this.worldVerticesLength,e,0)},e.prototype.computeWorldVerticesWith=function(t,e,i,n,r){i+=r;var s=t.bone.skeleton,o=t.attachmentVertices,a=this.vertices,c=this.bones;if(null!=c){for(var h=0,l=0,u=0;u0&&(a=o);for(var v,x=(v=t.bone).worldX,C=v.worldY,T=v.a,A=v.b,b=v.c,S=v.d,E=e,w=r;w>1);null!=this.worldVertices&&this.worldVertices.length==n||(this.worldVertices=t.Utils.newFloatArray(n));var r=0,s=0,o=0,a=0;if(null==this.region?(r=s=0,o=a=1):(r=this.region.u,s=this.region.v,o=this.region.u2-r,a=this.region.v2-s),this.region.rotate)for(var c=0,h=6;c0&&(l=h);for(var f=(R=t.bone).worldX,m=R.worldY,p=R.a,g=R.b,y=R.c,v=R.d,x=0,C=0;x1e-4?(p=g*(T=Math.abs(m*y-p*g)/T),y=m*T,v=Math.atan2(g,m)*t.MathUtils.radDeg):(m=0,g=0,v=90-Math.atan2(y,p)*t.MathUtils.radDeg);var x=n+o-v,C=n+a-v+90;l=t.MathUtils.cosDeg(x)*r,u=t.MathUtils.cosDeg(C)*s,_=t.MathUtils.sinDeg(x)*r,d=t.MathUtils.sinDeg(C)*s;this.a=m*l-p*_,this.b=m*u-p*d,this.c=g*l+y*_,this.d=g*u+y*d;break;case t.TransformMode.NoScale:case t.TransformMode.NoScaleOrReflection:var T,A=t.MathUtils.cosDeg(n),b=t.MathUtils.sinDeg(n),S=m*A+p*b,E=g*A+y*b;(T=Math.sqrt(S*S+E*E))>1e-5&&(T=1/T),S*=T,E*=T,T=Math.sqrt(S*S+E*E);var w=Math.PI/2+Math.atan2(E,S),I=Math.cos(w)*T,R=Math.sin(w)*T;l=t.MathUtils.cosDeg(o)*r,u=t.MathUtils.cosDeg(90+a)*s,_=t.MathUtils.sinDeg(o)*r,d=t.MathUtils.sinDeg(90+a)*s;return this.a=S*l+I*_,this.b=S*u+I*d,this.c=E*l+R*_,this.d=E*u+R*d,void((this.data.transformMode!=t.TransformMode.NoScaleOrReflection?m*y-p*g<0:this.skeleton.flipX!=this.skeleton.flipY)&&(this.b=-this.b,this.d=-this.d))}this.skeleton.flipX&&(this.a=-this.a,this.b=-this.b),this.skeleton.flipY&&(this.c=-this.c,this.d=-this.d)},e.prototype.setToSetupPose=function(){var t=this.data;this.x=t.x,this.y=t.y,this.rotation=t.rotation,this.scaleX=t.scaleX,this.scaleY=t.scaleY,this.shearX=t.shearX,this.shearY=t.shearY},e.prototype.getWorldRotationX=function(){return Math.atan2(this.c,this.a)*t.MathUtils.radDeg},e.prototype.getWorldRotationY=function(){return Math.atan2(this.d,this.b)*t.MathUtils.radDeg},e.prototype.getWorldScaleX=function(){return Math.sqrt(this.a*this.a+this.c*this.c)},e.prototype.getWorldScaleY=function(){return Math.sqrt(this.b*this.b+this.d*this.d)},e.prototype.worldToLocalRotationX=function(){var e=this.parent;if(null==e)return this.arotation;var i=e.a,n=e.b,r=e.c,s=e.d,o=this.a,a=this.c;return 
Math.atan2(i*a-r*o,s*o-n*a)*t.MathUtils.radDeg},e.prototype.worldToLocalRotationY=function(){var e=this.parent;if(null==e)return this.arotation;var i=e.a,n=e.b,r=e.c,s=e.d,o=this.b,a=this.d;return Math.atan2(i*a-r*o,s*o-n*a)*t.MathUtils.radDeg},e.prototype.rotateWorld=function(e){var i=this.a,n=this.b,r=this.c,s=this.d,o=t.MathUtils.cosDeg(e),a=t.MathUtils.sinDeg(e);this.a=o*i-a*r,this.b=o*n-a*s,this.c=a*i+o*r,this.d=a*n+o*s,this.appliedValid=!1},e.prototype.updateAppliedTransform=function(){this.appliedValid=!0;var e=this.parent;if(null==e)return this.ax=this.worldX,this.ay=this.worldY,this.arotation=Math.atan2(this.c,this.a)*t.MathUtils.radDeg,this.ascaleX=Math.sqrt(this.a*this.a+this.c*this.c),this.ascaleY=Math.sqrt(this.b*this.b+this.d*this.d),this.ashearX=0,void(this.ashearY=Math.atan2(this.a*this.b+this.c*this.d,this.a*this.d-this.b*this.c)*t.MathUtils.radDeg);var i=e.a,n=e.b,r=e.c,s=e.d,o=1/(i*s-n*r),a=this.worldX-e.worldX,c=this.worldY-e.worldY;this.ax=a*s*o-c*n*o,this.ay=c*i*o-a*r*o;var h=o*s,l=o*i,u=o*n,_=o*r,d=h*this.a-u*this.c,f=h*this.b-u*this.d,m=l*this.c-_*this.a,p=l*this.d-_*this.b;if(this.ashearX=0,this.ascaleX=Math.sqrt(d*d+m*m),this.ascaleX>1e-4){var g=d*p-f*m;this.ascaleY=g/this.ascaleX,this.ashearY=Math.atan2(d*f+m*p,g)*t.MathUtils.radDeg,this.arotation=Math.atan2(m,d)*t.MathUtils.radDeg}else this.ascaleX=0,this.ascaleY=Math.sqrt(f*f+p*p),this.ashearY=0,this.arotation=90-Math.atan2(p,f)*t.MathUtils.radDeg},e.prototype.worldToLocal=function(t){var e=this.a,i=this.b,n=this.c,r=this.d,s=1/(e*r-i*n),o=t.x-this.worldX,a=t.y-this.worldY;return t.x=o*r*s-a*i*s,t.y=a*e*s-o*n*s,t},e.prototype.localToWorld=function(t){var e=t.x,i=t.y;return t.x=e*this.a+i*this.b+this.worldX,t.y=e*this.c+i*this.d+this.worldY,t},e})();t.Bone=e})(n||(n={})),(function(t){var e=(function(){return function(t,e,n){if(this.x=0,this.y=0,this.rotation=0,this.scaleX=1,this.scaleY=1,this.shearX=0,this.shearY=0,this.transformMode=i.Normal,t<0)throw new Error("index must be >= 0.");if(null==e)throw new Error("name cannot be null.");this.index=t,this.name=e,this.parent=n}})();t.BoneData=e,(function(t){t[t.Normal=0]="Normal",t[t.OnlyTranslation=1]="OnlyTranslation",t[t.NoRotationOrReflection=2]="NoRotationOrReflection",t[t.NoScale=3]="NoScale",t[t.NoScaleOrReflection=4]="NoScaleOrReflection"})(t.TransformMode||(t.TransformMode={}));var i=t.TransformMode})(n||(n={})),(function(t){var e=(function(){return function(t,e){if(null==e)throw new Error("data cannot be null.");this.time=t,this.data=e}})();t.Event=e})(n||(n={})),(function(t){var e=(function(){return function(t){this.name=t}})();t.EventData=e})(n||(n={})),(function(t){var e=(function(){function e(t,e){if(this.mix=1,this.bendDirection=0,null==t)throw new Error("data cannot be null.");if(null==e)throw new Error("skeleton cannot be null.");this.data=t,this.mix=t.mix,this.bendDirection=t.bendDirection,this.bones=new Array;for(var i=0;i180?u-=360:u<-180&&(u+=360),e.updateWorldTransformWith(e.ax,e.ay,e.arotation+u*r,e.ascaleX,e.ascaleY,e.ashearX,e.ashearY)},e.prototype.apply2=function(e,i,n,r,s,o){if(0!=o){e.appliedValid||e.updateAppliedTransform(),i.appliedValid||i.updateAppliedTransform();var a=e.ax,c=e.ay,h=e.ascaleX,l=e.ascaleY,u=i.ascaleX,_=0,d=0,f=0;h<0?(h=-h,_=180,f=-1):(_=0,f=1),l<0&&(l=-l,f=-f),u<0?(u=-u,d=180):d=0;var m=i.ax,p=0,g=0,y=0,v=e.a,x=e.b,C=e.c,T=e.d,A=Math.abs(h-l)<=1e-4;A?(g=v*m+x*(p=i.ay)+e.worldX,y=C*m+T*p+e.worldY):(p=0,g=v*m+e.worldX,y=C*m+e.worldY);var b=e.parent;v=b.a,x=b.b,C=b.c;var 
S=1/(v*(T=b.d)-x*C),E=n-b.worldX,w=r-b.worldY,I=(E*T-w*x)*S-a,R=(w*v-E*C)*S-c,P=((E=g-b.worldX)*T-(w=y-b.worldY)*x)*S-a,O=(w*v-E*C)*S-c,D=Math.sqrt(P*P+O*O),B=i.data.length*u,L=0,M=0;t:if(A){var N=(I*I+R*R-D*D-(B*=h)*B)/(2*D*B);N<-1?N=-1:N>1&&(N=1),M=Math.acos(N)*s,v=D+B*N,x=B*Math.sin(M),L=Math.atan2(R*v-I*x,I*v+R*x)}else{var F=(v=h*B)*v,z=(x=l*B)*x,k=I*I+R*R,V=Math.atan2(R,I),G=-2*z*D,U=z-F;if((T=G*G-4*U*(C=z*D*D+F*k-F*z))>=0){var W=Math.sqrt(T);G<0&&(W=-W);var X=(W=-(G+W)/2)/U,j=C/W,Y=Math.abs(X)Q&&(K=0,Q=T,$=E),(T=(E=D-v)*E)Q&&(K=et,Q=T,$=E,tt=w),k<=(q+Q)/2?(L=V-Math.atan2(Z*s,J),M=H*s):(L=V-Math.atan2(tt*s,$),M=K*s)}var it=Math.atan2(p,m)*f,nt=e.arotation;(L=(L-it)*t.MathUtils.radDeg+_-nt)>180?L-=360:L<-180&&(L+=360),e.updateWorldTransformWith(a,c,nt+L*o,e.ascaleX,e.ascaleY,0,0),nt=i.arotation,(M=((M+it)*t.MathUtils.radDeg-i.ashearX)*f+d-nt)>180?M-=360:M<-180&&(M+=360),i.updateWorldTransformWith(m,p,nt+M*o,i.ascaleX,i.ascaleY,i.ashearX,i.ashearY)}else i.updateWorldTransform()},e})();t.IkConstraint=e})(n||(n={})),(function(t){var e=(function(){return function(t){this.order=0,this.bones=new Array,this.bendDirection=1,this.mix=1,this.name=t}})();t.IkConstraintData=e})(n||(n={})),(function(t){var e=(function(){function e(t,e){if(this.position=0,this.spacing=0,this.rotateMix=0,this.translateMix=0,this.spaces=new Array,this.positions=new Array,this.world=new Array,this.curves=new Array,this.lengths=new Array,this.segments=new Array,null==t)throw new Error("data cannot be null.");if(null==e)throw new Error("skeleton cannot be null.");this.data=t,this.bones=new Array;for(var i=0,n=t.bones.length;i0;if(n>0||r){var s=this.data,o=s.spacingMode,a=o==t.SpacingMode.Length,c=s.rotateMode,h=c==t.RotateMode.Tangent,l=c==t.RotateMode.ChainScale,u=this.bones.length,_=h?u:u+1,d=this.bones,f=t.Utils.setArraySize(this.spaces,_),m=null,p=this.spacing;if(l||a){l&&(m=t.Utils.setArraySize(this.lengths,u));for(var g=0,y=_-1;g0?t.MathUtils.degRad:-t.MathUtils.degRad;g=0;for(var w=3;gt.MathUtils.PI?F-=t.MathUtils.PI2:F<-t.MathUtils.PI&&(F+=t.MathUtils.PI2),F*=i,z=Math.cos(F),k=Math.sin(F),I.a=z*B-k*M,I.b=z*L-k*N,I.c=k*B+z*M,I.d=k*L+z*N}I.appliedValid=!1}}}},e.prototype.computeWorldPositions=function(i,n,r,s,o){var a=this.target,c=this.position,h=this.spaces,l=t.Utils.setArraySize(this.positions,3*n+2),u=null,_=i.closed,d=i.worldVerticesLength,f=d/6,m=e.NONE;if(!i.constantSpeed){var p=i.lengths,g=p[f-=_?1:2];if(s&&(c*=g),o)for(var y=0;yg){m!=e.AFTER&&(m=e.AFTER,i.computeWorldVerticesWith(a,d-6,4,u,0)),this.addAfterPosition(C-g,u,0,l,v);continue}}for(;;x++){var T=p[x];if(!(C>T)){if(0==x)C/=T;else C=(C-(J=p[x-1]))/(T-J);break}}x!=m&&(m=x,_&&x==f?(i.computeWorldVerticesWith(a,d-4,4,u,0),i.computeWorldVerticesWith(a,0,4,u,4)):i.computeWorldVerticesWith(a,6*x+2,8,u,0)),this.addCurvePosition(C,u[0],u[1],u[2],u[3],u[4],u[5],u[6],u[7],l,v,r||y>0&&0==j)}return l}_?(d+=2,u=t.Utils.setArraySize(this.world,d),i.computeWorldVerticesWith(a,2,d-4,u,0),i.computeWorldVerticesWith(a,0,2,u,d-4),u[d-2]=u[0],u[d-1]=u[1]):(f--,d-=4,u=t.Utils.setArraySize(this.world,d),i.computeWorldVerticesWith(a,2,d,u,0));for(var A=t.Utils.setArraySize(this.curves,f),b=0,S=u[0],E=u[1],w=0,I=0,R=0,P=0,O=0,D=0,B=0,L=0,M=0,N=0,F=0,z=0,k=0,V=0,G=(y=0,2);yb){this.addAfterPosition(C-b,u,d-4,l,v);continue}}for(;;x++){var Y=A[x];if(!(C>Y)){if(0==x)C/=Y;else C=(C-(J=A[x-1]))/(Y-J);break}}if(x!=m){m=x;var 
H=6*x;for(S=u[H],E=u[H+1],w=u[H+2],I=u[H+3],R=u[H+4],P=u[H+5],O=u[H+6],D=u[H+7],F=2*(B=.03*(S-2*w+R))+(M=.006*(3*(w-R)-S+O)),z=2*(L=.03*(E-2*I+P))+(N=.006*(3*(I-P)-E+D)),k=.3*(w-S)+B+.16666667*M,V=.3*(I-E)+L+.16666667*N,W=Math.sqrt(k*k+V*V),U[0]=W,H=1;H<8;H++)k+=F,V+=z,F+=M,z+=N,W+=Math.sqrt(k*k+V*V),U[H]=W;k+=F,V+=z,W+=Math.sqrt(k*k+V*V),U[8]=W,k+=F+M,V+=z+N,W+=Math.sqrt(k*k+V*V),U[9]=W,X=0}for(C*=W;;X++){var q=U[X];if(!(C>q)){var J;if(0==X)C/=q;else C=X+(C-(J=U[X-1]))/(q-J);break}}this.addCurvePosition(.1*C,S,E,w,I,R,P,O,D,l,v,r||y>0&&0==j)}return l},e.prototype.addBeforePosition=function(t,e,i,n,r){var s=e[i],o=e[i+1],a=e[i+2]-s,c=e[i+3]-o,h=Math.atan2(c,a);n[r]=s+t*Math.cos(h),n[r+1]=o+t*Math.sin(h),n[r+2]=h},e.prototype.addAfterPosition=function(t,e,i,n,r){var s=e[i+2],o=e[i+3],a=s-e[i],c=o-e[i+1],h=Math.atan2(c,a);n[r]=s+t*Math.cos(h),n[r+1]=o+t*Math.sin(h),n[r+2]=h},e.prototype.addCurvePosition=function(t,e,i,n,r,s,o,a,c,h,l,u){(0==t||isNaN(t))&&(t=1e-4);var _=t*t,d=_*t,f=1-t,m=f*f,p=m*f,g=f*t,y=3*g,v=f*y,x=y*t,C=e*p+n*v+s*x+a*d,T=i*p+r*v+o*x+c*d;h[l]=C,h[l+1]=T,u&&(h[l+2]=Math.atan2(T-(i*m+r*g*2+o*_),C-(e*m+n*g*2+s*_)))},e.prototype.getOrder=function(){return this.data.order},e.NONE=-1,e.BEFORE=-2,e.AFTER=-3,e})();t.PathConstraint=e})(n||(n={})),(function(t){var e=(function(){return function(t){this.order=0,this.bones=new Array,this.name=t}})();t.PathConstraintData=e,(function(t){t[t.Fixed=0]="Fixed",t[t.Percent=1]="Percent"})(t.PositionMode||(t.PositionMode={}));t.PositionMode;(function(t){t[t.Length=0]="Length",t[t.Fixed=1]="Fixed",t[t.Percent=2]="Percent"})(t.SpacingMode||(t.SpacingMode={}));t.SpacingMode;(function(t){t[t.Tangent=0]="Tangent",t[t.Chain=1]="Chain",t[t.ChainScale=2]="ChainScale"})(t.RotateMode||(t.RotateMode={}));t.RotateMode})(n||(n={})),(function(t){var e=(function(){function t(t){this.toLoad=new Array,this.assets={},this.clientId=t}return t.prototype.loaded=function(){var t=0;for(var e in this.assets)t++;return t},t})(),i=(function(){function t(t){void 0===t&&(t=""),this.clientAssets={},this.queuedAssets={},this.rawAssets={},this.errors={},this.pathPrefix=t}return t.prototype.queueAsset=function(t,i,n){var r=this.clientAssets[t];return null!==r&&void 0!==r||(r=new e(t),this.clientAssets[t]=r),null!==i&&(r.textureLoader=i),r.toLoad.push(n),this.queuedAssets[n]!==n&&(this.queuedAssets[n]=n,!0)},t.prototype.loadText=function(t,e){var i=this;if(e=this.pathPrefix+e,this.queueAsset(t,null,e)){var n=new XMLHttpRequest;n.onreadystatechange=function(){n.readyState==XMLHttpRequest.DONE&&(n.status>=200&&n.status<300?i.rawAssets[e]=n.responseText:i.errors[e]="Couldn't load text "+e+": status "+n.status+", "+n.responseText)},n.open("GET",e,!0),n.send()}},t.prototype.loadJson=function(t,e){var i=this;if(e=this.pathPrefix+e,this.queueAsset(t,null,e)){var n=new XMLHttpRequest;n.onreadystatechange=function(){n.readyState==XMLHttpRequest.DONE&&(n.status>=200&&n.status<300?i.rawAssets[e]=JSON.parse(n.responseText):i.errors[e]="Couldn't load text "+e+": status "+n.status+", "+n.responseText)},n.open("GET",e,!0),n.send()}},t.prototype.loadTexture=function(t,e,i){var n=this;if(i=this.pathPrefix+i,this.queueAsset(t,e,i)){var r=new Image;r.src=i,r.crossOrigin="anonymous",r.onload=function(t){n.rawAssets[i]=r},r.onerror=function(t){n.errors[i]="Couldn't load image "+i}}},t.prototype.get=function(t,e){e=this.pathPrefix+e;var i=this.clientAssets[t];return null===i||void 0===i||i.assets[e]},t.prototype.updateClientAssets=function(t){for(var e=0;e0},t.prototype.getErrors=function(){return 
this.errors},t})();t.SharedAssetManager=i})(n||(n={})),(function(t){var e=(function(){function e(e){if(this._updateCache=new Array,this.updateCacheReset=new Array,this.time=0,this.flipX=!1,this.flipY=!1,this.x=0,this.y=0,null==e)throw new Error("data cannot be null.");this.data=e,this.bones=new Array;for(var i=0;i1){var r=i[i.length-1];this._updateCache.indexOf(r)>-1||this.updateCacheReset.push(r)}this._updateCache.push(t),this.sortReset(n.children),i[i.length-1].sorted=!0},e.prototype.sortPathConstraint=function(e){var i=e.target,n=i.data.index,r=i.bone;null!=this.skin&&this.sortPathConstraintAttachment(this.skin,n,r),null!=this.data.defaultSkin&&this.data.defaultSkin!=this.skin&&this.sortPathConstraintAttachment(this.data.defaultSkin,n,r);for(var s=0,o=this.data.skins.length;s=this.minX&&t<=this.maxX&&e>=this.minY&&e<=this.maxY},e.prototype.aabbIntersectsSegment=function(t,e,i,n){var r=this.minX,s=this.minY,o=this.maxX,a=this.maxY;if(t<=r&&i<=r||e<=s&&n<=s||t>=o&&i>=o||e>=a&&n>=a)return!1;var c=(n-e)/(i-t),h=c*(r-t)+e;if(h>s&&hs&&hr&&lr&&lt.minX&&this.minYt.minY},e.prototype.containsPoint=function(t,e){for(var i=this.polygons,n=0,r=i.length;n=i||h=i){var l=n[a];l+(i-c)/(h-c)*(n[s]-l)=l&&v<=d||v>=d&&v<=l)&&(v>=e&&v<=n||v>=n&&v<=e)){var x=(h*g-c*m)/y;if((x>=u&&x<=f||x>=f&&x<=u)&&(x>=i&&x<=r||x>=r&&x<=i))return!0}l=d,u=f}return!1},e.prototype.getPolygon=function(t){if(null==t)throw new Error("boundingBox cannot be null.");var e=this.boundingBoxes.indexOf(t);return-1==e?null:this.polygons[e]},e.prototype.getWidth=function(){return this.maxX-this.minX},e.prototype.getHeight=function(){return this.maxY-this.minY},e})();t.SkeletonBounds=e})(n||(n={})),(function(t){var e=(function(){function t(){this.bones=new Array,this.slots=new Array,this.skins=new Array,this.events=new Array,this.animations=new Array,this.ikConstraints=new Array,this.transformConstraints=new Array,this.pathConstraints=new Array,this.fps=0}return t.prototype.findBone=function(t){if(null==t)throw new Error("boneName cannot be null.");for(var e=this.bones,i=0,n=e.length;i=0;_--)-1==U[_]&&(U[_]=X[--Y])}y.setFrame(u++,G.time,U)}s.push(y),o=Math.max(o,y.frames[y.getFrameCount()-1])}if(e.events){for(y=new t.EventTimeline(e.events.length),u=0,_=0;_=n.length&&(n.length=t+1),n[t]||(n[t]={}),n[t][e]=i},t.prototype.getAttachment=function(t,e){var i=this.attachments[t];return i?i[e]:null},t.prototype.attachAll=function(t,e){for(var i=0,n=0;n= 0.");if(null==i)throw new Error("name cannot be null.");if(null==n)throw new Error("boneData cannot be null.");this.index=e,this.name=i,this.boneData=n}})();t.SlotData=e})(n||(n={})),(function(t){var e=(function(){function t(t){this._image=t}return t.prototype.getImage=function(){return this._image},t.filterFromString=function(t){switch(t.toLowerCase()){case"nearest":return i.Nearest;case"linear":return i.Linear;case"mipmap":return i.MipMap;case"mipmapnearestnearest":return i.MipMapNearestNearest;case"mipmaplinearnearest":return i.MipMapLinearNearest;case"mipmapnearestlinear":return i.MipMapNearestLinear;case"mipmaplinearlinear":return i.MipMapLinearLinear;default:throw new Error("Unknown texture filter "+t)}},t.wrapFromString=function(t){switch(t.toLowerCase()){case"mirroredtepeat":return n.MirroredRepeat;case"clamptoedge":return n.ClampToEdge;case"repeat":return n.Repeat;default:throw new Error("Unknown texture wrap 
"+t)}},t})();t.Texture=e,(function(t){t[t.Nearest=9728]="Nearest",t[t.Linear=9729]="Linear",t[t.MipMap=9987]="MipMap",t[t.MipMapNearestNearest=9984]="MipMapNearestNearest",t[t.MipMapLinearNearest=9985]="MipMapLinearNearest",t[t.MipMapNearestLinear=9986]="MipMapNearestLinear",t[t.MipMapLinearLinear=9987]="MipMapLinearLinear"})(t.TextureFilter||(t.TextureFilter={}));var i=t.TextureFilter;(function(t){t[t.MirroredRepeat=33648]="MirroredRepeat",t[t.ClampToEdge=33071]="ClampToEdge",t[t.Repeat=10497]="Repeat"})(t.TextureWrap||(t.TextureWrap={}));var n=t.TextureWrap,r=(function(){return function(){this.u=0,this.v=0,this.u2=0,this.v2=0,this.width=0,this.height=0,this.rotate=!1,this.offsetX=0,this.offsetY=0,this.originalWidth=0,this.originalHeight=0}})();t.TextureRegion=r})(n||(n={})),(function(t){var e=(function(){function e(t,e){this.pages=new Array,this.regions=new Array,this.load(t,e)}return e.prototype.load=function(e,r){if(null==r)throw new Error("textureLoader cannot be null.");for(var o=new i(e),a=new Array(4),c=null;;){var h=o.readLine();if(null==h)break;if(0==(h=h.trim()).length)c=null;else if(c){var l=new s;l.name=h,l.page=c,l.rotate="true"==o.readValue(),o.readTuple(a);var u=parseInt(a[0]),_=parseInt(a[1]);o.readTuple(a);var d=parseInt(a[0]),f=parseInt(a[1]);l.u=u/c.width,l.v=_/c.height,l.rotate?(l.u2=(u+f)/c.width,l.v2=(_+d)/c.height):(l.u2=(u+d)/c.width,l.v2=(_+f)/c.height),l.x=u,l.y=_,l.width=Math.abs(d),l.height=Math.abs(f),4==o.readTuple(a)&&4==o.readTuple(a)&&o.readTuple(a),l.originalWidth=parseInt(a[0]),l.originalHeight=parseInt(a[1]),o.readTuple(a),l.offsetX=parseInt(a[0]),l.offsetY=parseInt(a[1]),l.index=parseInt(o.readValue()),l.texture=c.texture,this.regions.push(l)}else{(c=new n).name=h,2==o.readTuple(a)&&(c.width=parseInt(a[0]),c.height=parseInt(a[1]),o.readTuple(a)),o.readTuple(a),c.minFilter=t.Texture.filterFromString(a[0]),c.magFilter=t.Texture.filterFromString(a[1]);var m=o.readValue();c.uWrap=t.TextureWrap.ClampToEdge,c.vWrap=t.TextureWrap.ClampToEdge,"x"==m?c.uWrap=t.TextureWrap.Repeat:"y"==m?c.vWrap=t.TextureWrap.Repeat:"xy"==m&&(c.uWrap=c.vWrap=t.TextureWrap.Repeat),c.texture=r(h),c.texture.setFilters(c.minFilter,c.magFilter),c.texture.setWraps(c.uWrap,c.vWrap),c.width=c.texture.getImage().width,c.height=c.texture.getImage().height,this.pages.push(c)}}},e.prototype.findRegion=function(t){for(var e=0;e=this.lines.length?null:this.lines[this.index++]},t.prototype.readValue=function(){var t=this.readLine(),e=t.indexOf(":");if(-1==e)throw new Error("Invalid line: "+t);return t.substring(e+1).trim()},t.prototype.readTuple=function(t){var e=this.readLine(),i=e.indexOf(":");if(-1==i)throw new Error("Invalid line: "+e);for(var n=0,r=i+1;n<3;n++){var s=e.indexOf(",",r);if(-1==s)break;t[n]=e.substr(r,s-r).trim(),r=s+1}return t[n]=e.substring(r).trim(),n+1},t})(),n=(function(){return function(){}})();t.TextureAtlasPage=n;var s=(function(t){function e(){t.apply(this,arguments)}return r(e,t),e})(t.TextureRegion);t.TextureAtlasRegion=s})(n||(n={})),(function(t){var e=(function(){function e(e,i){if(this.rotateMix=0,this.translateMix=0,this.scaleMix=0,this.shearMix=0,this.temp=new t.Vector2,null==e)throw new Error("data cannot be null.");if(null==i)throw new Error("skeleton cannot be null.");this.data=e,this.rotateMix=e.rotateMix,this.translateMix=e.translateMix,this.scaleMix=e.scaleMix,this.shearMix=e.shearMix,this.bones=new Array;for(var 
n=0;n0?t.MathUtils.degRad:-t.MathUtils.degRad,u=this.data.offsetRotation*l,_=this.data.offsetShearY*l,d=this.bones,f=0,m=d.length;ft.MathUtils.PI?w-=t.MathUtils.PI2:w<-t.MathUtils.PI&&(w+=t.MathUtils.PI2),w*=e;var T=Math.cos(w),A=Math.sin(w);p.a=T*y-A*x,p.b=T*v-A*C,p.c=A*y+T*x,p.d=A*v+T*C,g=!0}if(0!=i){var b=this.temp;s.localToWorld(b.set(this.data.offsetX,this.data.offsetY)),p.worldX+=(b.x-p.worldX)*i,p.worldY+=(b.y-p.worldY)*i,g=!0}if(n>0){var S=Math.sqrt(p.a*p.a+p.c*p.c),E=Math.sqrt(o*o+c*c);S>1e-5&&(S=(S+(E-S+this.data.offsetScaleX)*n)/S),p.a*=S,p.c*=S,S=Math.sqrt(p.b*p.b+p.d*p.d),E=Math.sqrt(a*a+h*h),S>1e-5&&(S=(S+(E-S+this.data.offsetScaleY)*n)/S),p.b*=S,p.d*=S,g=!0}if(r>0){v=p.b,C=p.d;var w,I=Math.atan2(C,v);(w=Math.atan2(h,a)-Math.atan2(c,o)-(I-Math.atan2(p.c,p.a)))>t.MathUtils.PI?w-=t.MathUtils.PI2:w<-t.MathUtils.PI&&(w+=t.MathUtils.PI2),w=I+(w+_)*r;S=Math.sqrt(v*v+C*C);p.b=Math.cos(w)*S,p.d=Math.sin(w)*S,g=!0}g&&(p.appliedValid=!1)}},e.prototype.getOrder=function(){return this.data.order},e})();t.TransformConstraint=e})(n||(n={})),(function(t){var e=(function(){return function(t){if(this.order=0,this.bones=new Array,this.rotateMix=0,this.translateMix=0,this.scaleMix=0,this.shearMix=0,this.offsetRotation=0,this.offsetX=0,this.offsetY=0,this.offsetScaleX=0,this.offsetScaleY=0,this.offsetShearY=0,null==t)throw new Error("name cannot be null.");this.name=t}})();t.TransformConstraintData=e})(n||(n={})),(function(t){var e=(function(){function t(){this.array=new Array}return t.prototype.add=function(t){var e=this.contains(t);return this.array[0|t]=0|t,!e},t.prototype.contains=function(t){return void 0!=this.array[0|t]},t.prototype.remove=function(t){this.array[0|t]=void 0},t.prototype.clear=function(){this.array.length=0},t})();t.IntSet=e;var i=(function(){function t(t,e,i,n){void 0===t&&(t=0),void 0===e&&(e=0),void 0===i&&(i=0),void 0===n&&(n=0),this.r=t,this.g=e,this.b=i,this.a=n}return t.prototype.set=function(t,e,i,n){return this.r=t,this.g=e,this.b=i,this.a=n,this.clamp(),this},t.prototype.setFromColor=function(t){return this.r=t.r,this.g=t.g,this.b=t.b,this.a=t.a,this},t.prototype.setFromString=function(t){return t="#"==t.charAt(0)?t.substr(1):t,this.r=parseInt(t.substr(0,2),16)/255,this.g=parseInt(t.substr(2,2),16)/255,this.b=parseInt(t.substr(4,2),16)/255,this.a=(8!=t.length?255:parseInt(t.substr(6,2),16))/255,this},t.prototype.add=function(t,e,i,n){return this.r+=t,this.g+=e,this.b+=i,this.a+=n,this.clamp(),this},t.prototype.clamp=function(){return this.r<0?this.r=0:this.r>1&&(this.r=1),this.g<0?this.g=0:this.g>1&&(this.g=1),this.b<0?this.b=0:this.b>1&&(this.b=1),this.a<0?this.a=0:this.a>1&&(this.a=1),this},t.WHITE=new t(1,1,1,1),t.RED=new t(1,0,0,1),t.GREEN=new t(0,1,0,1),t.BLUE=new t(0,0,1,1),t.MAGENTA=new t(1,0,1,1),t})();t.Color=i;var n=(function(){function t(){}return t.clamp=function(t,e,i){return ti?i:t},t.cosDeg=function(e){return Math.cos(e*t.degRad)},t.sinDeg=function(e){return Math.sin(e*t.degRad)},t.signum=function(t){return t>0?1:t<0?-1:0},t.toInt=function(t){return t>0?Math.floor(t):Math.ceil(t)},t.cbrt=function(t){var e=Math.pow(Math.abs(t),1/3);return t<0?-e:e},t.PI=3.1415927,t.PI2=2*t.PI,t.radiansToDegrees=180/t.PI,t.radDeg=t.radiansToDegrees,t.degreesToRadians=t.PI/180,t.degRad=t.degreesToRadians,t})();t.MathUtils=n;var r=(function(){function t(){}return t.arrayCopy=function(t,e,i,n,r){for(var s=e,o=n;s=i?e:t.setArraySize(e,i,n)},t.newArray=function(t,e){for(var i=new 
Array(t),n=0;n0?this.items.pop():this.instantiator()},t.prototype.free=function(t){t.reset&&t.reset(),this.items.push(t)},t.prototype.freeAll=function(t){for(var e=0;ethis.maxDelta&&(this.delta=this.maxDelta),this.lastTime=t,this.frameCount++,this.frameTime>1&&(this.framesPerSecond=this.frameCount/this.frameTime,this.frameTime=0,this.frameCount=0)},t})();t.TimeKeeper=c})(n||(n={})),e.exports=n}),{}],312:[(function(t,e,i){(function(){"use strict";Function.prototype._extend=function(t){for(var e in this.prototype.parent=t,t.prototype)this.prototype[e]||(this.prototype[e]=t.prototype[e])},Function.prototype._implement=function(t){return this._extend(t)};var t=(function(){function t(t,e){this.name=t,this.parent=e,this.children={},this.startTime=0,this.elapsedTime=0,this.totalTime=0,this.running=!1,this.childrenCount=0}"undefined"==typeof performance&&(window.performance={now:function(){return+new Date}}),t.prototype={start:function(){this.startTime=performance.now(),this.running=!0},stop:function(t){if(this.running)for(var e in this.running=!1,this.elapsedTime+=performance.now()-this.startTime,t&&this.start(),this.children)this.children[e].stop()},reset:function(t){for(var e in t||(this.running=!0,this.totalTime+=this.elapsedTime,this.start()),this.elapsedTime=0,this.children)this.children[e].reset(!0)}};var e=[],i=new t("root");function n(t,e){if(t.name===e.parent)return t;for(var i in t.children){var r;if(r=n(t.children[i],e))return r}return null}return{create:function(i,n){if(!e)throw new Error("late profile creation not allowed");var r=new t(i,n||"root");return e.push(r),r},destroy:function(t){t.childrenCount--,delete t.children[t.name]},init:function(){for(;e.length;){var t=e.pop();(t.parentNode=n(i,t))?(t.parentNode.children[t.name]=t,t.parentNode.childrenCount++):e.unshift(t)}e=null},reset:function(){i.reset(!0)},profileRoot:i}})();function i(t){t||console.log("Assertion failed! 
Pls debug.")}var n=Number.MAX_VALUE,r=2.220446049250313e-16,s=Math.PI,o=2,a=8,c=.005,h=2/180*s,l=2*c,u=8/180*s,_=.5*s,d=_*_,f=2/180*s;function m(t,e,i){this.major=t,this.minor=e,this.revision=i}m.prototype={toString:function(){return this.major+"."+this.minor+"."+this.revision}};var p=new m(2,3,1);function g(t){return isFinite(t)&&!isNaN(t)}var y=Math.sqrt,v=Math.atan2,x=Math.sin,C=Math.cos,T=Math.floor,A=(Math.ceil,y),b=v;function S(t,e){void 0!==t?(this.x=t,this.y=e):this.x=this.y=0}function E(t,e,i){void 0!==t&&(this.x=t,this.y=e,this.z=i)}function w(t,e){this.ex=t?t.Clone():new S,this.ey=e?e.Clone():new S}function I(t,e,i){this.ex=t?t.Clone():new E,this.ey=e?e.Clone():new E,this.ez=i?i.Clone():new E}function R(t,e){void 0!==e?(this.s=t,this.c=e):void 0!==t&&this.Set(t)}function P(t,e){this.p=new S,this.q=new R,t&&(this.p.Assign(t),this.q.Assign(e))}function O(){this.localCenter=new S,this.c0=new S,this.c=new S}function D(t,e){return t.x*e.x+t.y*e.y}function B(t,e){return t.x*e.y-t.y*e.x}function L(t,e){return new S(e*t.y,-e*t.x)}function M(t,e){return new S(-t*e.y,t*e.x)}function N(t,e){return new S(t.ex.x*e.x+t.ey.x*e.y,t.ex.y*e.x+t.ey.y*e.y)}function F(t,e){return S.Subtract(t,e).Length()}function z(t,e){var i=S.Subtract(t,e);return D(i,i)}function k(t,e){return t.x*e.x+t.y*e.y+t.z*e.z}function V(t,e){return new E(t.y*e.z-t.z*e.y,t.z*e.x-t.x*e.z,t.x*e.y-t.y*e.x)}function G(t,e){return E.Add(E.Add(E.Multiply(e.x,t.ex),E.Multiply(e.y,t.ey)),E.Multiply(e.z,t.ez))}function U(t,e){return new S(t.ex.x*e.x+t.ey.x*e.y,t.ex.y*e.x+t.ey.y*e.y)}function W(t,e){var i=new R;return i.s=t.s*e.c+t.c*e.s,i.c=t.c*e.c-t.s*e.s,i}function X(t,e){var i=new R;return i.s=t.c*e.s-t.s*e.c,i.c=t.c*e.c+t.s*e.s,i}function j(t,e){return new S(t.c*e.x-t.s*e.y,t.s*e.x+t.c*e.y)}function Y(t,e){return new S(t.c*e.x+t.s*e.y,-t.s*e.x+t.c*e.y)}function H(t,e){return new S(t.q.c*e.x-t.q.s*e.y+t.p.x,t.q.s*e.x+t.q.c*e.y+t.p.y)}function q(t,e){var i=e.x-t.p.x,n=e.y-t.p.y;return new S(t.q.c*i+t.q.s*n,-t.q.s*i+t.q.c*n)}function J(t,e){var i=new P;i.q=X(t.q,e.q);var n=e.p.x-t.p.x,r=e.p.y-t.p.y;return i.p.x=t.q.c*n+t.q.s*r,i.p.y=-t.q.s*n+t.q.c*r,i}S.prototype={Clone:function(){return new S(this.x,this.y)},SetZero:function(){return this.x=0,this.y=0,this},Set:function(t,e){return this.x=t,this.y=e,this},Assign:function(t){return this.x=t.x,this.y=t.y,this},Negate:function(){var t=new S;return t.Set(-this.x,-this.y),t},get_i:function(t){switch(t){case 0:return this.x;case 1:return this.y}},set_i:function(t,e){switch(t){case 0:return this.x=e;case 1:return this.y=e}},Add:function(t){return this.x+=t.x,this.y+=t.y,this},Subtract:function(t){return this.x-=t.x,this.y-=t.y,this},Multiply:function(t){return this.x*=t,this.y*=t,this},Length:function(){return A(this.x*this.x+this.y*this.y)},LengthSquared:function(){return this.x*this.x+this.y*this.y},Normalize:function(){var t=this.Length();if(t0?j(i.q,l).Negate():j(i.q,l),!0)},ComputeAABB:function(t,e,i){var n=e.q.c*this.m_vertex1.x-e.q.s*this.m_vertex1.y+e.p.x,r=e.q.s*this.m_vertex1.x+e.q.c*this.m_vertex1.y+e.p.y,s=e.q.c*this.m_vertex2.x-e.q.s*this.m_vertex2.y+e.p.x,o=e.q.s*this.m_vertex2.x+e.q.c*this.m_vertex2.y+e.p.y,a=Q(n,s),c=Q(r,o),h=tt(n,s),l=tt(r,o);t.lowerBound.x=a-this.m_radius,t.lowerBound.y=c-this.m_radius,t.upperBound.x=h+this.m_radius,t.upperBound.y=l+this.m_radius},ComputeMass:function(t,e){t.mass=0,t.center=S.Multiply(.5,S.Add(this.m_vertex1,this.m_vertex2)),t.I=0},_serialize:function(t){var e=t||{};return 
this.parent.prototype._serialize.call(this,e),e.m_vertex1=this.m_vertex1._serialize(),e.m_vertex2=this.m_vertex2._serialize(),e.m_hasVertex0=this.m_hasVertex0,this.m_hasVertex0&&(e.m_vertex0=this.m_vertex0._serialize()),e.m_hasVertex3=this.m_hasVertex3,this.m_hasVertex3&&(e.m_vertex3=this.m_vertex3._serialize()),e},_deserialize:function(t){this.parent.prototype._deserialize.call(this,t),this.m_vertex1._deserialize(t.m_vertex1),this.m_vertex2._deserialize(t.m_vertex2),this.m_hasVertex0=t.m_hasVertex0,this.m_hasVertex0&&this.m_vertex0._deserialize(t.m_vertex0),this.m_hasVertex3=t.m_hasVertex3,this.m_hasVertex3&&this.m_vertex3._deserialize(t.m_vertex3)}},ht._extend(at),lt._tempEdge=new ht,lt.prototype={Clear:function(){this.m_vertices=null,this.m_count=0},CreateLoop:function(t,e){i(null==this.m_vertices&&0==this.m_count),i(e>=3);for(var n=1;nc*c);this.m_count=e+1,this.m_vertices=new Array(this.m_count);for(n=0;n=2);for(var n=1;nc*c)}this.m_count=e,this.m_vertices=new Array(e);for(n=0;n0?(t.m_vertex0=this.m_vertices[e-1],t.m_hasVertex0=!0):(t.m_vertex0=this.m_prevVertex,t.m_hasVertex0=this.m_hasPrevVertex),ef||m==f&&s[h].yx.LengthSquared()&&(v=_)}else v=_;if(++g,y=v,v==d)break}if(g<3)return i(!1),void this.SetAsBox(1,1);for(this.m_count=g,h=0;hr*r),this.m_normals[h]=L(b,1).Clone(),this.m_normals[h].Normalize()}this.m_centroid=ut.ComputeCentroid(this.m_vertices,g)}},SetAsBox:function(t,e,i,n){if(this.m_count=4,this.m_vertices[0]=new S(-t,-e),this.m_vertices[1]=new S(t,-e),this.m_vertices[2]=new S(t,e),this.m_vertices[3]=new S(-t,e),this.m_normals[0]=new S(0,-1),this.m_normals[1]=new S(1,0),this.m_normals[2]=new S(0,1),this.m_normals[3]=new S(-1,0),i){this.m_centroid.Assign(i);var r=new P;r.p=i,r.q.Set(n);for(var s=0;s0)return!1}return!0},RayCast:function(t,e,n,r){for(var s=Y(n.q,S.Subtract(e.p1,n.p)),o=Y(n.q,S.Subtract(e.p2,n.p)),a=S.Subtract(o,s),c=0,h=e.maxFraction,l=-1,u=0;u0&&_=0&&(t.fraction=c,t.normal=j(n.q,this.m_normals[l]),!0)},ComputeAABB:function(t,e,i){for(var n=e.q.c*this.m_vertices[0].x-e.q.s*this.m_vertices[0].y+e.p.x,r=e.q.s*this.m_vertices[0].x+e.q.c*this.m_vertices[0].y+e.p.y,s=n,o=r,a=1;a=3);for(var n=new S(0,0),s=0,o=0,a=new S(0,0),c=0;cr),n.Multiply(1/s),t.center=S.Add(n,a),t.I=e*o,t.I+=t.mass*(D(t.center,t.center)-D(n,n))},GetVertexCount:function(){return this.m_count},GetVertex:function(t){return i(0<=t&&t=3);for(var n=new S,s=0,o=new S(0,0),a=0;ar),n.Multiply(1/s),n},ut._extend(at),ft.prototype={CreateProxy:function(t,e){var i=this.m_tree.CreateProxy(t,e);return++this.m_proxyCount,this.BufferMove(i),i},DestroyProxy:function(t){this.UnBufferMove(t),--this.m_proxyCount,this.m_tree.DestroyProxy(t)},MoveProxy:function(t,e,i){this.m_tree.MoveProxy(t,e,i)&&this.BufferMove(t)},TouchProxy:function(t){this.BufferMove(t)},GetFatAABB:function(t){return this.m_tree.GetFatAABB(t)},GetUserData:function(t){return this.m_tree.GetUserData(t)},TestOverlap:function(t,e){return Yt(this.m_tree.GetFatAABB(t),this.m_tree.GetFatAABB(e))},GetProxyCount:function(){return this.m_proxyCount},UpdatePairs:function(t){this.m_pairCount=0,this.m_pairBuffer.length=0;for(var e=0;en&&(i=r,n=s)}return i},GetSupportVertex:function(t,e){return this.m_vertices[this.GetSupport(t,e)]},GetVertexCount:function(){return this.m_count},GetVertex:function(t){return i(0<=t&&t1){var u=t.metric,_=this.GetMetric();(_<.5*u||2*u<_||_0?(t.x=-1*n,t.y=1*e):(t.x=1*n,t.y=-1*e);break;default:i(!1),t.x=t.y=0}},GetClosestPoint:function(t){switch(this.m_count){case 1:t.x=this.m_v[0].w.x,t.y=this.m_v[0].w.y;break;case 
2:t.x=this.m_v[0].a*this.m_v[0].w.x+this.m_v[1].a*this.m_v[1].w.x,t.y=this.m_v[0].a*this.m_v[0].w.y+this.m_v[1].a*this.m_v[1].w.y;break;case 3:t.x=t.y=0;break;default:i(!1),t.x=t.y=0}},GetWitnessPoints:function(t,e){switch(this.m_count){case 1:t.x=this.m_v[0].wA.x,t.y=this.m_v[0].wA.y,e.x=this.m_v[0].wB.x,e.y=this.m_v[0].wB.y;break;case 2:t.x=this.m_v[0].a*this.m_v[0].wA.x+this.m_v[1].a*this.m_v[1].wA.x,t.y=this.m_v[0].a*this.m_v[0].wA.y+this.m_v[1].a*this.m_v[1].wA.y,e.x=this.m_v[0].a*this.m_v[0].wB.x+this.m_v[1].a*this.m_v[1].wB.x,e.y=this.m_v[0].a*this.m_v[0].wB.y+this.m_v[1].a*this.m_v[1].wB.y;break;case 3:t.x=this.m_v[0].a*this.m_v[0].wA.x+this.m_v[1].a*this.m_v[1].wA.x+this.m_v[2].a*this.m_v[2].wA.x,t.y=this.m_v[0].a*this.m_v[0].wA.y+this.m_v[1].a*this.m_v[1].wA.y+this.m_v[2].a*this.m_v[2].wA.y,e.x=t.x,e.y=t.y;break;default:i(!1)}},GetMetric:function(){switch(this.m_count){case 1:return 0;case 2:return F(this.m_v[0].w,this.m_v[1].w);case 3:return(this.m_v[1].w.x-this.m_v[0].w.x)*(this.m_v[2].w.y-this.m_v[0].w.y)-(this.m_v[1].w.y-this.m_v[0].w.y)*(this.m_v[2].w.x-this.m_v[0].w.x);default:return i(!1),0}},Solve2:function(){var t=this.m_v[0].w,e=this.m_v[1].w,i=e.x-t.x,n=e.y-t.y,r=-(t.x*i+t.y*n);if(r<=0)return this.m_v[0].a=1,void(this.m_count=1);var s=e.x*i+e.y*n;if(s<=0)return this.m_v[1].a=1,this.m_count=1,void this.m_v[0].Assign(this.m_v[1]);var o=1/(s+r);this.m_v[0].a=s*o,this.m_v[1].a=r*o,this.m_count=2},Solve3:function(){var t=this.m_v[0].w,e=this.m_v[1].w,i=this.m_v[2].w,n=e.x-t.x,r=e.y-t.y,s=t.x*n+t.y*r,o=e.x*n+e.y*r,a=-s,c=i.x-t.x,h=i.y-t.y,l=t.x*c+t.y*h,u=i.x*c+i.y*h,_=-l,d=i.x-e.x,f=i.y-e.y,m=e.x*d+e.y*f,p=i.x*d+i.y*f,g=-m,y=n*h-r*c,v=y*(e.x*i.y-e.y*i.x),x=y*(i.x*t.y-i.y*t.x),C=y*(t.x*e.y-t.y*e.x);if(a<=0&&_<=0)return this.m_v[0].a=1,void(this.m_count=1);if(o>0&&a>0&&C<=0){var T=1/(o+a);return this.m_v[0].a=o*T,this.m_v[1].a=a*T,void(this.m_count=2)}if(u>0&&_>0&&x<=0){var A=1/(u+_);return this.m_v[0].a=u*A,this.m_v[2].a=_*A,this.m_count=2,void this.m_v[1].Assign(this.m_v[2])}if(o<=0&&g<=0)return this.m_v[1].a=1,this.m_count=1,void this.m_v[0].Assign(this.m_v[1]);if(u<=0&&p<=0)return this.m_v[2].a=1,this.m_count=1,void this.m_v[0].Assign(this.m_v[2]);if(p>0&&g>0&&v<=0){var b=1/(p+g);return this.m_v[1].a=p*b,this.m_v[2].a=g*b,this.m_count=2,void this.m_v[0].Assign(this.m_v[2])}var S=1/(v+x+C);this.m_v[0].a=v*S,this.m_v[1].a=x*S,this.m_v[2].a=C*S,this.m_count=3}};var Ct=new xt,Tt=new S,At=new S;function bt(t,e,n){++bt.b2_gjkCalls;var s=n.proxyA,o=n.proxyB,a=n.transformA,c=n.transformB;Ct.ReadCache(e,s,a,o,c);for(var h=Ct.m_v,l=[0,0,0],u=[0,0,0],_=0,d=0;d<20;){_=Ct.m_count;for(var f=0;f<_;++f)l[f]=h[f].indexA,u[f]=h[f].indexB;switch(Ct.m_count){case 1:break;case 2:Ct.Solve2();break;case 3:Ct.Solve3();break;default:i(!1)}if(3==Ct.m_count)break;if(Ct.GetClosestPoint(At),At.LengthSquared(),Ct.GetSearchDirection(At),At.LengthSquared()v+x&&t.distance>r)t.distance-=v+x,Tt.x=t.pointB.x-t.pointA.x,Tt.y=t.pointB.y-t.pointA.y,Tt.Normalize(),t.pointA.x+=v*Tt.x,t.pointA.y+=v*Tt.y,t.pointB.x-=x*Tt.x,t.pointB.y-=x*Tt.y;else{var C=.5*(t.pointA.x+t.pointB.x),T=.5*(t.pointA.y+t.pointB.y);t.pointA.x=C,t.pointA.y=T,t.pointB.x=C,t.pointB.y=T,t.distance=0}}}bt.b2_gjkCalls=0,bt.b2_gjkIters=0,bt.b2_gjkMaxIters=0;function St(){}function Et(){this.localPoint=new S,this.normalImpulse=0,this.tangentImpulse=0,this.id=new St}function wt(){this.points=new Array(o),this.localNormal=new S,this.localPoint=new S,this.type=0,this.pointCount=0}function It(){this.normal=new S,this.points=new 
Array(o),this.separations=new Array(o)}function Rt(){this.v=new S,this.id=new St}function Pt(){this.p1=new S,this.p2=new S,this.maxFraction=0}function Ot(){this.normal=new S,this.fraction=0}function Dt(){this.lowerBound=new S,this.upperBound=new S}function Bt(t,e,i,n,r){t.pointCount=0;var s=H(i,e.m_p),o=H(r,n.m_p),a=o.x-s.x,c=o.y-s.y,h=a*a+c*c,l=e.m_radius+n.m_radius;h>l*l||(t.type=wt.e_circles,t.localPoint.x=e.m_p.x,t.localPoint.y=e.m_p.y,t.localNormal.x=t.localNormal.y=0,t.pointCount=1,t.points[0]=new Et,t.points[0].localPoint.x=n.m_p.x,t.points[0].localPoint.y=n.m_p.y,t.points[0].id.Reset())}function Lt(t,e,i,s,o){t.pointCount=0;for(var a=q(i,H(o,s.m_p)),c=0,h=-n,l=e.m_radius+s.m_radius,u=e.m_count,_=e.m_vertices,d=e.m_normals,f=0;fl)return;m>h&&(h=m,c=f)}var p=c,g=p+1l*l)return;t.pointCount=1,t.type=wt.e_faceA,t.localNormal.x=a.x-y.x,t.localNormal.y=a.y-y.y,t.localNormal.Normalize(),t.localPoint.x=y.x,t.localPoint.y=y.y,t.points[0]=new Et,t.points[0].localPoint.x=s.m_p.x,t.points[0].localPoint.y=s.m_p.y,t.points[0].id.Reset()}else if(C<=0){if(z(a,v)>l*l)return;t.pointCount=1,t.type=wt.e_faceA,t.localNormal.x=a.x-v.x,t.localNormal.y=a.y-v.y,t.localNormal.Normalize(),t.localPoint.x=v.x,t.localPoint.y=v.y,t.points[0]=new Et,t.points[0].localPoint.x=s.m_p.x,t.points[0].localPoint.y=s.m_p.y,t.points[0].id.Reset()}else{var T=.5*(y.x+v.x),A=.5*(y.y+v.y);if((h=(a.x-T)*d[p].x+(a.y-A)*d[p].y)>l)return;t.pointCount=1,t.type=wt.e_faceA,t.localNormal.x=d[p].x,t.localNormal.y=d[p].y,t.localPoint.x=T,t.localPoint.y=A,t.points[0]=new Et,t.points[0].localPoint.x=s.m_p.x,t.points[0].localPoint.y=s.m_p.y,t.points[0].id.Reset()}}function Mt(t,e,i,r,s){for(var o=e.m_count,a=r.m_count,c=e.m_normals,h=e.m_vertices,l=r.m_vertices,u=J(s,i),_=0,d=-n,f=0;fd&&(d=v,_=f)}return t[0]=_,d}function Nt(t,e,r,s,o,a){var c=e.m_normals,h=o.m_count,l=o.m_vertices,u=o.m_normals;i(0<=s&&ss)){var l=[0],u=Mt(l,n,r,e,i);if(!(u>s)){var _,d,f,m,p=0,g=0;u>h+.1*c?(_=n,d=e,f=r,m=i,p=l[0],t.type=wt.e_faceB,g=1):(_=e,d=n,f=i,m=r,p=a[0],t.type=wt.e_faceA,g=0),Nt(Ft._local_incidentEdges,_,f,p,d,m);var y=_.m_count,v=_.m_vertices,x=p,C=p+1d*d)return;if(e.m_hasVertex0){var p=e.m_vertex0,g=a,y=g.x-p.x,v=g.y-p.y;if(y*(g.x-o.x)+v*(g.y-o.y)>0)return}return f.indexA=0,f.typeA=St.e_vertex,t.pointCount=1,t.type=wt.e_circles,t.localNormal.x=t.localNormal.y=0,t.localPoint.x=m.x,t.localPoint.y=m.y,t.points[0]=new Et,t.points[0].id.Assign(f),t.points[0].localPoint.x=r.m_p.x,void(t.points[0].localPoint.y=r.m_p.y)}if(u<=0){m=c;if((S=o.x-m.x)*S+(E=o.y-m.y)*E>d*d)return;if(e.m_hasVertex3){var x=e.m_vertex3,C=c,T=x.x-C.x,A=x.y-C.y;if(T*(o.x-C.x)+A*(o.y-C.y)>0)return}return f.indexA=1,f.typeA=St.e_vertex,t.pointCount=1,t.type=wt.e_circles,t.localNormal.x=t.localNormal.y=0,t.localPoint.x=m.x,t.localPoint.y=m.y,t.points[0]=new Et,t.points[0].id.Assign(f),t.points[0].localPoint.x=r.m_p.x,void(t.points[0].localPoint.y=r.m_p.y)}var b=h*h+l*l;i(b>0);var S,E,w=1/b*(u*a.x+_*c.x),I=1/b*(u*a.y+_*c.y);if(!((S=o.x-w)*S+(E=o.y-I)*E>d*d)){var R=-l,P=h;R*(o.x-a.x)+P*(o.y-a.y)<0&&(R=-R,P=-P),f.indexA=0,f.typeA=St.e_face,t.pointCount=1,t.type=wt.e_faceA,t.localNormal.x=R,t.localNormal.y=P,t.localNormal.Normalize(),t.localPoint.x=a.x,t.localPoint.y=a.y,t.points[0]=new Et,t.points[0].id.Assign(f),t.points[0].localPoint.x=r.m_p.x,t.points[0].localPoint.y=r.m_p.y}}function kt(){this.type=0,this.index=0,this.separation=0}function Vt(){this.vertices=new Array(a),this.normals=new Array(a),this.count=0}function Gt(){this.i1=0,this.i2=0,this.v1=new S,this.v2=new S,this.normal=new 
S,this.sideNormal1=new S,this.sideOffset1=0,this.sideNormal2=new S,this.sideOffset2=0}function Ut(){this.m_polygonB=new Vt,this.m_xf=new P,this.m_centroidB=new S,this.m_v0=new S,this.m_v1=new S,this.m_v2=new S,this.m_v3=new S,this.m_normal0=new S,this.m_normal1=new S,this.m_normal2=new S,this.m_normal=new S,this.m_type1=0,this.m_type2=0,this.m_lowerLimit=new S,this.m_upperLimit=new S,this.m_radius=0,this.m_front=!1}function Wt(t,e,i,n,r){Wt.collider.Collide(t,e,i,n,r)}function Xt(t,e,i,n,r,s){var o=0,a=i*e[0].v.x+n*e[0].v.y-r,c=i*e[1].v.x+n*e[1].v.y-r;if(a<=0&&(t[o++]=e[0]),c<=0&&(t[o++]=e[1]),a*c<0){var h=a/(a-c);t[o]=new Rt,t[o].v.x=e[0].v.x+h*(e[1].v.x-e[0].v.x),t[o].v.y=e[0].v.y+h*(e[1].v.y-e[0].v.y),t[o].id.indexA=s,t[o].id.indexB=e[0].id.indexB,t[o].id.typeA=St.e_vertex,t[o].id.typeB=St.e_face,++o}return o}function jt(t,e,i,n,s,o){return jt.input.proxyA.Set(t,e),jt.input.proxyB.Set(i,n),jt.input.transformA=s,jt.input.transformB=o,jt.input.useRadii=!0,jt.cache.count=0,bt(jt.output,jt.cache,jt.input),jt.output.distance<10*r}function Yt(t,e){return!(e.lowerBound.x-t.upperBound.x>0||e.lowerBound.y-t.upperBound.y>0||t.lowerBound.x-e.upperBound.x>0||t.lowerBound.y-e.upperBound.y>0)}St.prototype={indexA:0,indexB:0,typeA:0,typeB:0,Reset:function(){this.indexA=this.indexB=this.typeA=this.typeB=0},Get:function(){return this.indexA|this.indexB<<8|this.typeA<<16|this.typeB<<24},Assign:function(t){this.indexA=t.indexA,this.indexB=t.indexB,this.typeA=t.typeA,this.typeB=t.typeB}},St.e_vertex=0,St.e_face=1,Et.prototype={Clone:function(){var t=new Et;return t.localPoint.x=this.localPoint.x,t.localPoint.y=this.localPoint.y,t.normalImpulse=this.normalImpulse,t.tangentImpulse=this.tangentImpulse,t.id.Assign(this.id),t}},wt.prototype={Clone:function(){var t=new wt;t.pointCount=this.pointCount,t.type=this.type,t.localPoint.x=this.localPoint.x,t.localPoint.y=this.localPoint.y,t.localNormal.x=this.localNormal.x,t.localNormal.y=this.localNormal.y;for(var e=0;er*r&&(this.normal.x=c-o,this.normal.y=h-a,this.normal.Normalize());var _=o+i*this.normal.x,d=a+i*this.normal.y,f=c-s*this.normal.x,m=h-s*this.normal.y;this.points[0]=new S(.5*(_+f),.5*(d+m)),this.separations[0]=(f-_)*this.normal.x+(m-d)*this.normal.y;break;case wt.e_faceA:this.normal.x=e.q.c*t.localNormal.x-e.q.s*t.localNormal.y,this.normal.y=e.q.s*t.localNormal.x+e.q.c*t.localNormal.y;for(var p=e.q.c*t.localPoint.x-e.q.s*t.localPoint.y+e.p.x,g=e.q.s*t.localPoint.x+e.q.c*t.localPoint.y+e.p.y,y=0;y=0&&this.upperBound.y-this.lowerBound.y>=0&&this.lowerBound.IsValid()&&this.upperBound.IsValid()},GetCenter:function(){return new S(.5*(this.lowerBound.x+this.upperBound.x),.5*(this.lowerBound.y+this.upperBound.y))},GetExtents:function(){return new S(.5*(this.upperBound.x-this.lowerBound.x),.5*(this.upperBound.y-this.lowerBound.y))},GetPerimeter:function(){return 2*(this.upperBound.x-this.lowerBound.x+(this.upperBound.y-this.lowerBound.y))},Combine:function(t,e){e?(this.lowerBound.x=Q(t.lowerBound.x,e.lowerBound.x),this.lowerBound.y=Q(t.lowerBound.y,e.lowerBound.y),this.upperBound.x=tt(t.upperBound.x,e.upperBound.x),this.upperBound.y=tt(t.upperBound.y,e.upperBound.y)):(this.lowerBound.x=Q(this.lowerBound.x,t.lowerBound.x),this.lowerBound.y=Q(this.lowerBound.y,t.lowerBound.y),this.upperBound.x=tt(this.upperBound.x,t.upperBound.x),this.upperBound.y=tt(this.upperBound.y,t.upperBound.y))},Contains:function(t){return 
this.lowerBound.x<=t.lowerBound.x&&this.lowerBound.y<=t.lowerBound.y&&t.upperBound.x<=this.upperBound.x&&t.upperBound.y<=this.upperBound.y},RayCast:function(t,e){for(var i=-n,s=n,o=e.p1,a=S.Subtract(e.p2,e.p1),c=K(a),h=new S,l=0;l<2;++l)if(c.get_i(l)d){var m=d;d=_,_=m,f=1}if(_>i&&(h.x=h.y=0,h.set_i(l,f),i=_),i>(s=Q(s,d)))return!1}return!(i<0||e.maxFraction=0,h=this.m_normal0.x*(this.m_centroidB.x-this.m_v0.x)+this.m_normal0.y*(this.m_centroidB.y-this.m_v0.y)),a&&(Ut._temp_edge2.x=this.m_v3.x-this.m_v2.x,Ut._temp_edge2.y=this.m_v3.y-this.m_v2.y,Ut._temp_edge2.Normalize(),this.m_normal2.x=Ut._temp_edge2.y,this.m_normal2.y=-Ut._temp_edge2.x,d=Ut._temp_edge.x*Ut._temp_edge2.y-Ut._temp_edge.y*Ut._temp_edge2.x>0,u=this.m_normal2.x*(this.m_centroidB.x-this.m_v2.x)+this.m_normal2.y*(this.m_centroidB.y-this.m_v2.y)),s&&a?_&&d?(this.m_front=h>=0||c>=0||u>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=this.m_normal0.x,this.m_lowerLimit.y=this.m_normal0.y,this.m_upperLimit.x=this.m_normal2.x,this.m_upperLimit.y=this.m_normal2.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal1.x,this.m_lowerLimit.y=-this.m_normal1.y,this.m_upperLimit.x=-this.m_normal1.x,this.m_upperLimit.y=-this.m_normal1.y)):_?(this.m_front=h>=0||c>=0&&u>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=this.m_normal0.x,this.m_lowerLimit.y=this.m_normal0.y,this.m_upperLimit.x=this.m_normal1.x,this.m_upperLimit.y=this.m_normal1.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal2.x,this.m_lowerLimit.y=-this.m_normal2.y,this.m_upperLimit.x=-this.m_normal1.x,this.m_upperLimit.y=-this.m_normal1.y)):d?(this.m_front=u>=0||h>=0&&c>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=this.m_normal1.x,this.m_lowerLimit.y=this.m_normal1.y,this.m_upperLimit.x=this.m_normal2.x,this.m_upperLimit.y=this.m_normal2.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal1.x,this.m_lowerLimit.y=-this.m_normal1.y,this.m_upperLimit.x=-this.m_normal0.x,this.m_upperLimit.y=-this.m_normal0.y)):(this.m_front=h>=0&&c>=0&&u>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=this.m_normal1.x,this.m_lowerLimit.y=this.m_normal1.y,this.m_upperLimit.x=this.m_normal1.x,this.m_upperLimit.y=this.m_normal1.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal2.x,this.m_lowerLimit.y=-this.m_normal2.y,this.m_upperLimit.x=-this.m_normal0.x,this.m_upperLimit.y=-this.m_normal0.y)):s?_?(this.m_front=h>=0||c>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=this.m_normal0.x,this.m_lowerLimit.y=this.m_normal0.y,this.m_upperLimit.x=-this.m_normal1.x,this.m_upperLimit.y=-this.m_normal1.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=this.m_normal1.x,this.m_lowerLimit.y=this.m_normal1.y,this.m_upperLimit.x=-this.m_normal1.x,this.m_upperLimit.y=-this.m_normal1.y)):(this.m_front=h>=0&&c>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=this.m_normal1.x,this.m_lowerLimit.y=this.m_normal1.y,this.m_upperLimit.x=-this.m_normal1.x,this.m_upperLimit.y=-this.m_normal1.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-t
his.m_normal1.y,this.m_lowerLimit.x=this.m_normal1.x,this.m_lowerLimit.y=this.m_normal1.y,this.m_upperLimit.x=-this.m_normal0.x,this.m_upperLimit.y=-this.m_normal0.y)):a?d?(this.m_front=c>=0||u>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal1.x,this.m_lowerLimit.y=-this.m_normal1.y,this.m_upperLimit.x=this.m_normal2.x,this.m_upperLimit.y=this.m_normal2.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal1.x,this.m_lowerLimit.y=-this.m_normal1.y,this.m_upperLimit.x=this.m_normal1.x,this.m_upperLimit.y=this.m_normal1.y)):(this.m_front=c>=0&&u>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal1.x,this.m_lowerLimit.y=-this.m_normal1.y,this.m_upperLimit.x=this.m_normal1.x,this.m_upperLimit.y=this.m_normal1.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal2.x,this.m_lowerLimit.y=-this.m_normal2.y,this.m_upperLimit.x=this.m_normal1.x,this.m_upperLimit.y=this.m_normal1.y)):(this.m_front=c>=0,this.m_front?(this.m_normal.x=this.m_normal1.x,this.m_normal.y=this.m_normal1.y,this.m_lowerLimit.x=-this.m_normal1.x,this.m_lowerLimit.y=-this.m_normal1.y,this.m_upperLimit.x=-this.m_normal1.x,this.m_upperLimit.y=-this.m_normal1.y):(this.m_normal.x=-this.m_normal1.x,this.m_normal.y=-this.m_normal1.y,this.m_lowerLimit.x=this.m_normal1.x,this.m_lowerLimit.y=this.m_normal1.y,this.m_upperLimit.x=this.m_normal1.x,this.m_upperLimit.y=this.m_normal1.y)),this.m_polygonB.count=n.m_count;for(var f=0;fthis.m_radius)){var p=this.ComputePolygonSeparation();if(!(p.type!=kt.e_unknown&&p.separation>this.m_radius)){var g=new kt;g=p.type==kt.e_unknown?m:p.separation>.98*m.separation+.001?p:m;var y=new Array(2),v=new Gt;if(g.type==kt.e_edgeA){t.type=wt.e_faceA;var x=0,C=this.m_normal.x*this.m_polygonB.normals[0].x+this.m_normal.y*this.m_polygonB.normals[0].y;for(f=1;fthis.m_radius)return t.type=kt.e_edgeB,t.index=n,t.separation=c,t;if(r*e+s*i>=0){if((r-this.m_upperLimit.x)*this.m_normal.x+(s-this.m_upperLimit.y)*this.m_normal.y<-h)continue}else if((r-this.m_lowerLimit.x)*this.m_normal.x+(s-this.m_lowerLimit.y)*this.m_normal.y<-h)continue;c>t.separation&&(t.type=kt.e_edgeB,t.index=n,t.separation=c)}return t}},Ut.e_isolated=0,Ut.e_concave=1,Ut.e_convex=2,Wt.collider=new Ut,jt.input=new gt,jt.cache=new pt,jt.output=new yt;var Ht=-1;function qt(){this.aabb=new Dt,this.userData=null,this.parent=0,this.child1=this.child2=this.height=0}function Jt(){this.m_root=Ht,this.m_nodeCapacity=16,this.m_nodeCount=0,this.m_nodes=new Array(this.m_nodeCapacity);for(var t=0;t0;){var n=i.pop();if(n!=Ht){var r=this.m_nodes[n];if(Yt(r.aabb,e))if(r.IsLeaf()){if(0==t.QueryCallback(n))return}else i.push(r.child1),i.push(r.child2)}}},RayCast:function(t,e){var n=e.p1,r=e.p2,s=S.Subtract(r,n);i(s.LengthSquared()>0),s.Normalize();var o=M(1,s),a=K(o),c=e.maxFraction,h=new Dt,l=S.Add(n,S.Multiply(c,S.Subtract(r,n)));h.lowerBound.Assign($(n,l)),h.upperBound.Assign(et(n,l));var u=[];for(u.push(this.m_root);u.length>0;){var _=u.pop();if(_!=Ht){var d=this.m_nodes[_];if(0!=Yt(d.aabb,h)){var f=d.aabb.GetCenter(),m=d.aabb.GetExtents();if(!(Z(D(o,S.Subtract(n,f)))-D(a,m)>0))if(d.IsLeaf()){var p=new Pt;p.p1.Assign(e.p1),p.p2.Assign(e.p2),p.maxFraction=c;var g=t.RayCastCallback(p,_);if(0==g)return;if(g>0){c=g;l=S.Add(n,S.Multiply(c,S.Subtract(r,n)));h.lowerBound.Assign($(n,l)),h.upperBound.Assign(et(n,l))}}else 
u.push(d.child1),u.push(d.child2)}}}},Validate:function(){this.ValidateStructure(this.m_root),this.ValidateMetrics(this.m_root);for(var t=0,e=this.m_freeList;e!=Ht;)i(0<=e&&e1;){var r=n,s=-1,o=-1;for(i=0;i1){var c=o.child1,h=o.child2,l=this.m_nodes[c],u=this.m_nodes[h];return i(0<=c&&cu.height?(o.child2=c,e.child2=h,u.parent=t,e.aabb.Combine(s.aabb,u.aabb),o.aabb.Combine(e.aabb,l.aabb),e.height=1+tt(s.height,u.height),o.height=1+tt(e.height,l.height)):(o.child2=h,e.child2=c,l.parent=t,e.aabb.Combine(s.aabb,l.aabb),o.aabb.Combine(e.aabb,u.aabb),e.height=1+tt(s.height,l.height),o.height=1+tt(e.height,u.height)),r}if(a<-1){var _=s.child1,d=s.child2,f=this.m_nodes[_],m=this.m_nodes[d];return i(0<=_&&_m.height?(s.child2=_,e.child1=d,m.parent=t,e.aabb.Combine(o.aabb,m.aabb),s.aabb.Combine(e.aabb,f.aabb),e.height=1+tt(o.height,m.height),s.height=1+tt(e.height,f.height)):(s.child2=d,e.child1=_,f.parent=t,e.aabb.Combine(o.aabb,f.aabb),s.aabb.Combine(e.aabb,m.aabb),e.height=1+tt(o.height,f.height),s.height=1+tt(e.height,m.height)),n}return t},ComputeHeight:function(t){void 0===t&&(t=this.m_root),i(0<=t&&tl);var u=0,_=0,d=new pt;d.count=0;var f=new gt;for(f.proxyA.Assign(e.proxyA),f.proxyB.Assign(e.proxyB),f.useRadii=!1;;){ie._temp_sweepA.GetTransform(f.transformA,u),ie._temp_sweepB.GetTransform(f.transformB,u);var m=new yt;if(bt(m,d,f),m.distance<=0){t.state=Kt.e_overlapped,t.t=0;break}if(m.distanceh+l){t.state=Kt.e_separated,t.t=s,g=!0;break}if(C>h-l){u=y;break}var T=p.Evaluate(x[0],x[1],u);if(Th?(b=E,T=w):(S=E,C=w),50==A)break}if(ie.b2_toiMaxRootIters=tt(ie.b2_toiMaxRootIters,A),++v==a)break}if(++_,++ie.b2_toiIters,g)break;if(20==_){t.state=Kt.e_failed,t.t=u;break}}ie.b2_toiMaxIters=tt(ie.b2_toiMaxIters,_),ee.stop(),ie.b2_toiMaxTime=tt(ie.b2_toiMaxTime,ee.elapsedTime),ie.b2_toiTime+=ee.elapsedTime}function ne(){this.type=re.b2_staticBody,this.position=new S(0,0),this.angle=0,this.linearVelocity=new S(0,0),this.angularVelocity=0,this.linearDamping=0,this.angularDamping=0,this.allowSleep=!0,this.awake=!0,this.fixedRotation=!1,this.bullet=!1,this.active=!0,this.userData=null,this.gravityScale=1,Object.seal(this)}function re(t,e){i(t.position.IsValid()),i(t.linearVelocity.IsValid()),i(g(t.angle)),i(g(t.angularVelocity)),i(g(t.angularDamping)&&t.angularDamping>=0),i(g(t.linearDamping)&&t.linearDamping>=0),this.m_islandIndex=0,this.m_flags=0,t.bullet&&(this.m_flags|=re.e_bulletFlag),t.fixedRotation&&(this.m_flags|=re.e_fixedRotationFlag),t.allowSleep&&(this.m_flags|=re.e_autoSleepFlag),t.awake&&(this.m_flags|=re.e_awakeFlag),t.active&&(this.m_flags|=re.e_activeFlag),this.m_world=e,this.m_xf=new P,this.m_xf.p.Assign(t.position),this.m_xf.q.Set(t.angle),this.m_sweep=new O,this.m_sweep.localCenter.SetZero(),this.m_sweep.c0.Assign(this.m_xf.p),this.m_sweep.c.Assign(this.m_xf.p),this.m_sweep.a0=t.angle,this.m_sweep.a=t.angle,this.m_sweep.alpha0=0,this.m_jointList=null,this.m_contactList=null,this.m_prev=null,this.m_next=null,this.m_linearVelocity=t.linearVelocity.Clone(),this.m_angularVelocity=t.angularVelocity,this.m_linearDamping=t.linearDamping,this.m_angularDamping=t.angularDamping,this.m_gravityScale=t.gravityScale,this.m_force=new S,this.m_torque=0,this.m_sleepTime=0,this.m_type=t.type,this.m_type==re.b2_dynamicBody?(this.m_mass=1,this.m_invMass=1):(this.m_mass=0,this.m_invMass=0),this.m_I=0,this.m_invI=0,this.m_userData=t.userData,this.m_fixtureList=null,this.m_fixtureCount=0}function se(){this.categoryBits=1,this.maskBits=65535,this.groupIndex=0}function 
oe(){this.shape=null,this.userData=null,this.friction=.2,this.restitution=0,this.density=0,this.isSensor=!1,this.filter=new se,Object.seal(this)}function ae(){this.aabb=new Dt,this.fixture=null,this.childIndex=0,this.proxyId=0}function ce(){this.m_userData=null,this.m_body=null,this.m_next=null,this.m_proxies=null,this.m_proxyCount=0,this.m_shape=null,this.m_density=0,this.m_filter=new se,this.m_isSensor=!1,this.m_friction=0,this.m_restitution=0}function he(){}function le(){}function ue(){this.normalImpulses=new Array(o),this.tangentImpulses=new Array(o),this.count=0}function _e(){}function de(){}function fe(){}function me(){this.dt=0,this.inv_dt=0,this.dtRatio=0,this.velocityIterations=0,this.positionIterations=0,this.warmStarting=!1}function pe(){this.c=new S,this.a=0}function ge(){this.v=new S,this.w=0}function ye(){this.step=new me,this.positions=null,this.velocities=null}ie._temp_sweepA=new O,ie._temp_sweepB=new O,ie.b2_toiTime=0,ie.b2_toiMaxTime=0,ie.b2_toiCalls=0,ie.b2_toiIters=0,ie.b2_toiMaxIters=0,ie.b2_toiRootIters=0,ie.b2_toiMaxRootIters=0,ne.prototype={_deserialize:function(t){this.type=t.type,this.position._deserialize(t.position),this.angle=t.angle,this.linearVelocity._deserialize(t.linearVelocity),this.angularVelocity=t.angularVelocity,this.linearDamping=t.linearDamping,this.angularDamping=t.angularDamping,this.allowSleep=t.allowSleep,this.awake=t.awake,this.fixedRotation=t.fixedRotation,this.bullet=t.bullet,this.active=t.active,this.gravityScale=t.gravityScale}},re.b2_staticBody=0,re.b2_kinematicBody=1,re.b2_dynamicBody=2,re.e_islandFlag=1,re.e_awakeFlag=2,re.e_autoSleepFlag=4,re.e_bulletFlag=8,re.e_fixedRotationFlag=16,re.e_activeFlag=32,re.e_toiFlag=64,re.m_local_oldCenter=new S,re.m_local_xf1=new P,re.prototype={CreateFixture:function(t,e){if(void 0!==e){var n=new oe;return n.shape=t,n.density=e,this.CreateFixture(n)}if(i(0==this.m_world.IsLocked()),1==this.m_world.IsLocked())return null;var r=new ce;if(r.Create(this,t),this.m_flags&re.e_activeFlag){var s=this.m_world.m_contactManager.m_broadPhase;r.CreateProxies(s,this.m_xf)}return r.m_next=this.m_fixtureList,this.m_fixtureList=r,++this.m_fixtureCount,r.m_body=this,r.m_density>0&&this.ResetMassData(),this.m_world.m_flags|=be.e_newFixture,r},DestroyFixture:function(t){if(i(0==this.m_world.IsLocked()),1!=this.m_world.IsLocked()){i(t.m_body==this),i(this.m_fixtureCount>0);for(var e=this.m_fixtureList,n=!1;null!=e;){if(e==t){this.m_fixtureList=e=t.m_next,n=!0;break}e=e.m_next}i(n);for(var r=this.m_contactList;r;){var s=r.contact;r=r.next;var o=s.GetFixtureA(),a=s.GetFixtureB();t!=o&&t!=a||this.m_world.m_contactManager.Destroy(s)}if(this.m_flags&re.e_activeFlag){var c=this.m_world.m_contactManager.m_broadPhase;t.DestroyProxies(c)}t.Destroy(),t.m_body=null,t.m_next=null,--this.m_fixtureCount,this.ResetMassData()}},SetTransform:function(t,e){if(i(0==this.m_world.IsLocked()),1!=this.m_world.IsLocked()){this.m_xf.q.Set(e),this.m_xf.p.Assign(t),this.m_sweep.c.Assign(H(this.m_xf,this.m_sweep.localCenter)),this.m_sweep.a=e,this.m_sweep.c0.Assign(this.m_sweep.c),this.m_sweep.a0=e;for(var n=this.m_world.m_contactManager.m_broadPhase,r=this.m_fixtureList;r;r=r.m_next)r.Synchronize(n,this.m_xf,this.m_xf)}},GetTransform:function(){return this.m_xf},GetPosition:function(){return this.m_xf.p},GetAngle:function(){return this.m_sweep.a},GetWorldCenter:function(){return this.m_sweep.c},GetLocalCenter:function(){return 
this.m_sweep.localCenter},SetLinearVelocity:function(t){this.m_type!=re.b2_staticBody&&(D(t,t)>0&&this.SetAwake(!0),this.m_linearVelocity=t)},GetLinearVelocity:function(){return this.m_linearVelocity},SetAngularVelocity:function(t){this.m_type!=re.b2_staticBody&&(t*t>0&&this.SetAwake(!0),this.m_angularVelocity=t)},GetAngularVelocity:function(){return this.m_angularVelocity},ApplyForce:function(t,e,i){this.m_type==re.b2_dynamicBody&&(i&&0==(this.m_flags&re.e_awakeFlag)&&this.SetAwake(!0),this.m_flags&re.e_awakeFlag&&(this.m_force.Add(t),this.m_torque+=B(S.Subtract(e,this.m_sweep.c),t)))},ApplyForceToCenter:function(t,e){this.m_type==re.b2_dynamicBody&&(e&&0==(this.m_flags&re.e_awakeFlag)&&this.SetAwake(!0),this.m_flags&re.e_awakeFlag&&this.m_force.Add(t))},ApplyTorque:function(t,e){this.m_type==re.b2_dynamicBody&&(e&&0==(this.m_flags&re.e_awakeFlag)&&this.SetAwake(!0),this.m_flags&re.e_awakeFlag&&(this.m_torque+=t))},ApplyLinearImpulse:function(t,e,i){this.m_type==re.b2_dynamicBody&&(i&&0==(this.m_flags&re.e_awakeFlag)&&this.SetAwake(!0),this.m_flags&re.e_awakeFlag&&(this.m_linearVelocity.Add(S.Multiply(this.m_invMass,t)),this.m_angularVelocity+=this.m_invI*B(S.Subtract(e,this.m_sweep.c),t)))},ApplyAngularImpulse:function(t,e){this.m_type==re.b2_dynamicBody&&(e&&0==(this.m_flags&re.e_awakeFlag)&&this.SetAwake(!0),this.m_flags&re.e_awakeFlag&&(this.m_angularVelocity+=this.m_invI*t))},GetMass:function(){return this.m_mass},GetInertia:function(){return this.m_I+this.m_mass*D(this.m_sweep.localCenter,this.m_sweep.localCenter)},GetMassData:function(t){t.mass=this.m_mass,t.I=this.m_I+this.m_mass*D(this.m_sweep.localCenter,this.m_sweep.localCenter),t.center=this.m_sweep.localCenter},SetMassData:function(t){i(0==this.m_world.IsLocked()),1!=this.m_world.IsLocked()&&this.m_type==re.b2_dynamicBody&&(this.m_invMass=0,this.m_I=0,this.m_invI=0,this.m_mass=t.mass,this.m_mass<=0&&(this.m_mass=1),this.m_invMass=1/this.m_mass,t.I>0&&0==(this.m_flags&re.e_fixedRotationFlag)&&(this.m_I=t.I-this.m_mass*D(t.center,t.center),i(this.m_I>0),this.m_invI=1/this.m_I),re.m_local_oldCenter.Assign(this.m_sweep.c),this.m_sweep.localCenter.Assign(t.center),this.m_sweep.c0.Assign(H(this.m_xf,this.m_sweep.localCenter)),this.m_sweep.c.Assign(this.m_sweep.c0),this.m_linearVelocity.Add(M(this.m_angularVelocity,S.Subtract(this.m_sweep.c,re.m_local_oldCenter))))},ResetMassData:function(){if(this.m_mass=0,this.m_invMass=0,this.m_I=0,this.m_invI=0,this.m_sweep.localCenter.SetZero(),this.m_type==re.b2_staticBody||this.m_type==re.b2_kinematicBody)return this.m_sweep.c0.Assign(this.m_xf.p),this.m_sweep.c.Assign(this.m_xf.p),void(this.m_sweep.a0=this.m_sweep.a);i(this.m_type==re.b2_dynamicBody);for(var t=new S(0,0),e=this.m_fixtureList;e;e=e.m_next)if(0!=e.m_density){var n=new ot;e.GetMassData(n),this.m_mass+=n.mass,t.Add(S.Multiply(n.mass,n.center)),this.m_I+=n.I}this.m_mass>0?(this.m_invMass=1/this.m_mass,t.Multiply(this.m_invMass)):(this.m_mass=1,this.m_invMass=1),this.m_I>0&&0==(this.m_flags&re.e_fixedRotationFlag)?(this.m_I-=this.m_mass*D(t,t),i(this.m_I>0),this.m_invI=1/this.m_I):(this.m_I=0,this.m_invI=0),re.m_local_oldCenter.Assign(this.m_sweep.c),this.m_sweep.localCenter.Assign(t),this.m_sweep.c0.Assign(H(this.m_xf,this.m_sweep.localCenter)),this.m_sweep.c.Assign(this.m_sweep.c0),this.m_linearVelocity.Add(M(this.m_angularVelocity,S.Subtract(this.m_sweep.c,re.m_local_oldCenter)))},GetWorldPoint:function(t){return H(this.m_xf,t)},GetWorldVector:function(t){return j(this.m_xf.q,t)},GetLocalPoint:function(t){return 
q(this.m_xf,t)},GetLocalVector:function(t){return Y(this.m_xf.q,t)},GetLinearVelocityFromWorldPoint:function(t){return S.Add(this.m_linearVelocity,M(this.m_angularVelocity,S.Subtract(t,this.m_sweep.c)))},GetLinearVelocityFromLocalPoint:function(t){return this.GetLinearVelocityFromWorldPoint(this.GetWorldPoint(t))},GetLinearDamping:function(){return this.m_linearDamping},SetLinearDamping:function(t){this.m_linearDamping=t},GetAngularDamping:function(){return this.m_angularDamping},SetAngularDamping:function(t){this.m_angularDamping=t},GetGravityScale:function(){return this.m_gravityScale},SetGravityScale:function(t){this.m_gravityScale=t},SetType:function(t){if(i(0==this.m_world.IsLocked()),1!=this.m_world.IsLocked()&&this.m_type!=t){this.m_type=t,this.ResetMassData(),this.m_type==re.b2_staticBody&&(this.m_linearVelocity.SetZero(),this.m_angularVelocity=0,this.m_sweep.a0=this.m_sweep.a,this.m_sweep.c0.Assign(this.m_sweep.c),this.SynchronizeFixtures()),this.SetAwake(!0),this.m_force.SetZero(),this.m_torque=0;for(var e=this.m_contactList;e;){var n=e;e=e.next,this.m_world.m_contactManager.Destroy(n.contact)}this.m_contactList=null;for(var r=this.m_world.m_contactManager.m_broadPhase,s=this.m_fixtureList;s;s=s.m_next)for(var o=s.m_proxyCount,a=0;a=0),this.m_density=t},GetDensity:function(){return this.m_density},GetFriction:function(){return this.m_friction},SetFriction:function(t){this.m_friction=t},GetRestitution:function(){return this.m_restitution},SetRestitution:function(t){this.m_restitution=t},GetAABB:function(t){return i(0<=t&&t0:0!=(i.maskBits&n.categoryBits)&&0!=(i.categoryBits&n.maskBits)}},_e.prototype={BeginContact:function(t){},EndContact:function(t){},PreSolve:function(t,e){},PostSolve:function(t,e){}},de.prototype={ReportFixture:function(t){return!1}},fe.prototype={ReportFixture:function(t,e,i,n){}};var ve=t.create("step"),xe=t.create("collide","step"),Ce=t.create("solve","step"),Te=t.create("solveTOI","step"),Ae=t.create("broadphase","step");function be(t){this.m_contactManager=new Ue,this.m_destructionListener=null,this.g_debugDraw=null,this.m_bodyList=null,this.m_jointList=null,this.m_bodyCount=0,this.m_jointCount=0,this.m_warmStarting=!0,this.m_continuousPhysics=!0,this.m_subStepping=!1,this.m_stepComplete=!0,this.m_allowSleep=!0,this.m_gravity=t,this.m_flags=be.e_clearForces,this.m_inv_dt0=0,this.p_step=new me,this.p_island=new Je}function Se(){this.broadPhase=null,this.callback=null}function Ee(){this.broadPhase=null,this.callback=null}function we(t,e){return A(t*e)}function Ie(t,e){return t>e?t:e}function Re(){this.fcn=null,this.primary=!1}function Pe(){this.other=null,this.contact=null,this.prev=null,this.next=null}function Oe(){this.m_nodeA=new Pe,this.m_nodeB=new Pe,this.m_manifold=new wt}function De(){this.parent.call(this)}Se.prototype={QueryCallback:function(t){var e=this.broadPhase.GetUserData(t);return this.callback.ReportFixture(e.fixture)}},Ee.prototype={RayCastCallback:function(t,e){var i=this.broadPhase.GetUserData(e),n=i.fixture,r=i.childIndex,s=new Ot;if(n.RayCast(s,t,r)){var o=s.fraction,a=S.Add(S.Multiply(1-o,t.p1),S.Multiply(o,t.p2));return this.callback.ReportFixture(n,a,s.normal,o)}return t.maxFraction}},be.m_local_sweep_backupA=new O,be.m_local_sweep_backupB=new O,be.m_local_sweep_backupC=new O,be.prototype={Destroy:function(){for(var t=this.m_bodyList;t;){for(var e=t.m_next,i=t.m_fixtureList;i;){var 
n=i.m_next;i.m_proxyCount=0,i.Destroy(),i=n}t=e}},SetDestructionListener:function(t){this.m_destructionListener=t},SetContactFilter:function(t){this.m_contactManager.m_contactFilter=t},SetContactListener:function(t){this.m_contactManager.m_contactListener=t},SetDebugDraw:function(t){this.g_debugDraw=t},CreateBody:function(t){if(i(0==this.IsLocked()),this.IsLocked())return null;var e=new re(t,this);return e.m_prev=null,e.m_next=this.m_bodyList,this.m_bodyList&&(this.m_bodyList.m_prev=e),this.m_bodyList=e,++this.m_bodyCount,e},DestroyBody:function(t){if(i(this.m_bodyCount>0),i(0==this.IsLocked()),!this.IsLocked()){for(var e=t.m_jointList;e;){var n=e;e=e.next,this.m_destructionListener&&this.m_destructionListener.SayGoodbyeJoint(n.joint),this.DestroyJoint(n.joint),t.m_jointList=e}t.m_jointList=null;for(var r=t.m_contactList;r;){var s=r;r=r.next,this.m_contactManager.Destroy(s.contact)}t.m_contactList=null;for(var o=t.m_fixtureList;o;){var a=o;o=o.m_next,this.m_destructionListener&&this.m_destructionListener.SayGoodbyeFixture(a),a.DestroyProxies(this.m_contactManager.m_broadPhase),a.Destroy(),t.m_fixtureList=o,t.m_fixtureCount-=1}t.m_fixtureList=null,t.m_fixtureCount=0,t.m_prev&&(t.m_prev.m_next=t.m_next),t.m_next&&(t.m_next.m_prev=t.m_prev),t==this.m_bodyList&&(this.m_bodyList=t.m_next),t.m_destroyed=!0,--this.m_bodyCount}},CreateJoint:function(t){if(i(0==this.IsLocked()),this.IsLocked())return null;var e=ii.Create(t);e.m_prev=null,e.m_next=this.m_jointList,this.m_jointList&&(this.m_jointList.m_prev=e),this.m_jointList=e,++this.m_jointCount,e.m_edgeA.joint=e,e.m_edgeA.other=e.m_bodyB,e.m_edgeA.prev=null,e.m_edgeA.next=e.m_bodyA.m_jointList,e.m_bodyA.m_jointList&&(e.m_bodyA.m_jointList.prev=e.m_edgeA),e.m_bodyA.m_jointList=e.m_edgeA,e.m_edgeB.joint=e,e.m_edgeB.other=e.m_bodyA,e.m_edgeB.prev=null,e.m_edgeB.next=e.m_bodyB.m_jointList,e.m_bodyB.m_jointList&&(e.m_bodyB.m_jointList.prev=e.m_edgeB),e.m_bodyB.m_jointList=e.m_edgeB;var n=t.bodyA,r=t.bodyB;if(0==t.collideConnected)for(var s=r.GetContactList();s;)s.other==n&&s.contact.FlagForFiltering(),s=s.next;return e},DestroyJoint:function(t){if(i(0==this.IsLocked()),!this.IsLocked()){var e=t.m_collideConnected;t.m_prev&&(t.m_prev.m_next=t.m_next),t.m_next&&(t.m_next.m_prev=t.m_prev),t==this.m_jointList&&(this.m_jointList=t.m_next);var n=t.m_bodyA,r=t.m_bodyB;if(n.SetAwake(!0),r.SetAwake(!0),t.m_edgeA.prev&&(t.m_edgeA.prev.next=t.m_edgeA.next),t.m_edgeA.next&&(t.m_edgeA.next.prev=t.m_edgeA.prev),t.m_edgeA==n.m_jointList&&(n.m_jointList=t.m_edgeA.next),t.m_edgeA.prev=null,t.m_edgeA.next=null,t.m_edgeB.prev&&(t.m_edgeB.prev.next=t.m_edgeB.next),t.m_edgeB.next&&(t.m_edgeB.next.prev=t.m_edgeB.prev),t.m_edgeB==r.m_jointList&&(r.m_jointList=t.m_edgeB.next),t.m_edgeB.prev=null,t.m_edgeB.next=null,ii.Destroy(t),i(this.m_jointCount>0),--this.m_jointCount,0==e)for(var 
s=r.GetContactList();s;)s.other==n&&s.contact.FlagForFiltering(),s=s.next}},Step:function(t,e,i){ve.start(),this.m_flags&be.e_newFixture&&(this.m_contactManager.FindNewContacts(),this.m_flags&=~be.e_newFixture),this.m_flags|=be.e_locked,this.p_step.dt=t,this.p_step.velocityIterations=e,this.p_step.positionIterations=i,this.p_step.inv_dt=t>0?1/t:0,this.p_step.dtRatio=this.m_inv_dt0*t,this.p_step.warmStarting=this.m_warmStarting,xe.start(),this.m_contactManager.Collide(),xe.stop(),this.m_stepComplete&&this.p_step.dt>0&&(Ce.start(),this.Solve(this.p_step),Ce.stop()),this.m_continuousPhysics&&this.p_step.dt>0&&(Te.start(),this.SolveTOI(this.p_step),Te.stop()),this.p_step.dt>0&&(this.m_inv_dt0=this.p_step.inv_dt),this.m_flags&be.e_clearForces&&this.ClearForces(),this.m_flags&=~be.e_locked,ve.stop()},ClearForces:function(){for(var t=this.m_bodyList;t;t=t.GetNext())t.m_force.x=t.m_force.y=0,t.m_torque=0},DrawDebugData:function(){if(null!=this.g_debugDraw){this.g_debugDraw.ClearDraw();var t=this.g_debugDraw.GetFlags();if(t&rt.e_shapeBit)for(var e=this.m_bodyList;e;e=e.GetNext())for(var i=e.GetTransform(),n=e.GetFixtureList();n;n=n.GetNext())0==e.IsActive()?this.DrawShape(n,i,new nt(.5,.5,.3)):e.GetType()==re.b2_staticBody?this.DrawShape(n,i,new nt(.5,.9,.5)):e.GetType()==re.b2_kinematicBody?this.DrawShape(n,i,new nt(.5,.5,.9)):0==e.IsAwake()?this.DrawShape(n,i,new nt(.6,.6,.6)):this.DrawShape(n,i,new nt(.9,.7,.7));if(t&rt.e_jointBit)for(var r=this.m_jointList;r;r=r.GetNext())this.DrawJoint(r);if(t&rt.e_pairBit)for(var s=new nt(.3,.9,.9),o=this.m_contactManager.m_contactList;o;o=o.GetNext()){var a=o.GetFixtureA(),c=o.GetFixtureB(),h=a.GetAABB(o.GetChildIndexA()).GetCenter(),l=c.GetAABB(o.GetChildIndexB()).GetCenter();this.g_debugDraw.DrawSegment(h,l,s)}if(t&rt.e_aabbBit){s=new nt(.9,.3,.9);var u=new nt(.3,.3,.9),_=this.m_contactManager.m_broadPhase;for(e=this.m_bodyList;e;e=e.GetNext())if(0!=e.IsActive())for(n=e.GetFixtureList();n;n=n.GetNext())for(var d=0;d0;){if(i(1==(e=o[--c]).IsActive()),this.p_island.AddBody(e),e.SetAwake(!0),e.GetType()!=re.b2_staticBody){for(var h=e.m_contactList;h;h=h.next){var l=h.contact;if(!(l.m_flags&Oe.e_islandFlag)&&(0!=l.IsEnabled()&&0!=l.IsTouching())){var u=l.m_fixtureA.m_isSensor,_=l.m_fixtureB.m_isSensor;if(!u&&!_)this.p_island.AddContact(l),l.m_flags|=Oe.e_islandFlag,(f=h.other).m_flags&re.e_islandFlag||(i(c8)){var a=1;if(n.m_flags&Oe.e_toiFlag)a=n.m_toi;else{var c=n.GetFixtureA(),h=n.GetFixtureB();if(c.IsSensor()||h.IsSensor())continue;var l=c.GetBody(),u=h.GetBody(),_=l.m_type,d=u.m_type;i(_==re.b2_dynamicBody||d==re.b2_dynamicBody);var f=l.IsAwake()&&_!=re.b2_staticBody,m=u.IsAwake()&&d!=re.b2_staticBody;if(0==f&&0==m)continue;var p=l.IsBullet()||_!=re.b2_dynamicBody,g=u.IsBullet()||d!=re.b2_dynamicBody;if(0==p&&0==g)continue;var y=l.m_sweep.alpha0;l.m_sweep.alpha00;for(var _=0;_0&&0==e.IsSensor()&&0==n.IsSensor()&&(e.GetBody().SetAwake(!0),n.GetBody().SetAwake(!0));var r=e.GetType(),s=n.GetType();i(0<=r&&s0),t.type){case wt.e_circles:var s=e.q.c*t.localPoint.x-e.q.s*t.localPoint.y+e.p.x,o=e.q.s*t.localPoint.x+e.q.c*t.localPoint.y+e.p.y,a=n.q.c*t.localPoints[0].x-n.q.s*t.localPoints[0].y+n.p.x,c=n.q.s*t.localPoints[0].x+n.q.c*t.localPoints[0].y+n.p.y;this.point.x=.5*(s+a),this.point.y=.5*(o+c),this.normal.x=a-s,this.normal.y=c-o;var h=this.normal.x,l=this.normal.y;this.normal.Normalize(),this.separation=h*this.normal.x+l*this.normal.y-t.radiusA-t.radiusB;break;case 
wt.e_faceA:this.normal.x=e.q.c*t.localNormal.x-e.q.s*t.localNormal.y,this.normal.y=e.q.s*t.localNormal.x+e.q.c*t.localNormal.y;var u=e.q.c*t.localPoint.x-e.q.s*t.localPoint.y+e.p.x,_=e.q.s*t.localPoint.x+e.q.c*t.localPoint.y+e.p.y,d=n.q.c*t.localPoints[r].x-n.q.s*t.localPoints[r].y+n.p.x,f=n.q.s*t.localPoints[r].x+n.q.c*t.localPoints[r].y+n.p.y;this.separation=(d-u)*this.normal.x+(f-_)*this.normal.y-t.radiusA-t.radiusB,this.point.x=d,this.point.y=f;break;case wt.e_faceB:this.normal.x=n.q.c*t.localNormal.x-n.q.s*t.localNormal.y,this.normal.y=n.q.s*t.localNormal.x+n.q.c*t.localNormal.y;u=n.q.c*t.localPoint.x-n.q.s*t.localPoint.y+n.p.x,_=n.q.s*t.localPoint.x+n.q.c*t.localPoint.y+n.p.y,d=e.q.c*t.localPoints[r].x-e.q.s*t.localPoints[r].y+e.p.x,f=e.q.s*t.localPoints[r].x+e.q.c*t.localPoints[r].y+e.p.y;this.separation=(d-u)*this.normal.x+(f-_)*this.normal.y-t.radiusA-t.radiusB,this.point.x=d,this.point.y=f,this.normal.x=-this.normal.x,this.normal.y=-this.normal.y}}},qe.cs_xfA=new P,qe.cs_xfB=new P,qe.temp_solver_manifold=new Ye,qe.prototype={Init:function(t){this.m_step=t.step,this.m_count=t.count,this.m_positionConstraints.length=this.m_count,this.m_velocityConstraints.length=this.m_count,this.m_positions=t.positions,this.m_velocities=t.velocities,this.m_contacts=t.contacts;for(var e=0;e0);var f=this.m_velocityConstraints[e]||new je;f.friction=n.m_friction,f.restitution=n.m_restitution,f.tangentSpeed=n.m_tangentSpeed,f.indexA=l.m_islandIndex,f.indexB=u.m_islandIndex,f.invMassA=l.m_invMass,f.invMassB=u.m_invMass,f.invIA=l.m_invI,f.invIB=u.m_invI,f.contactIndex=e,f.pointCount=d,f.K.SetZero(),f.normalMass.SetZero(),this.m_velocityConstraints[e]=f;var m=this.m_positionConstraints[e]||new Xe;m.indexA=l.m_islandIndex,m.indexB=u.m_islandIndex,m.invMassA=l.m_invMass,m.invMassB=u.m_invMass,m.localCenterA.x=l.m_sweep.localCenter.x,m.localCenterA.y=l.m_sweep.localCenter.y,m.localCenterB.x=u.m_sweep.localCenter.x,m.localCenterB.y=u.m_sweep.localCenter.y,m.invIA=l.m_invI,m.invIB=u.m_invI,m.localNormal.x=_.localNormal.x,m.localNormal.y=_.localNormal.y,m.localPoint.x=_.localPoint.x,m.localPoint.y=_.localPoint.y,m.pointCount=d,m.radiusA=c,m.radiusB=h,m.type=_.type,this.m_positionConstraints[e]=m;for(var p=0;p0),qe.cs_xfA.q.Set(p),qe.cs_xfB.q.Set(x),qe.cs_xfA.p.x=m.x-(qe.cs_xfA.q.c*d.x-qe.cs_xfA.q.s*d.y),qe.cs_xfA.p.y=m.y-(qe.cs_xfA.q.s*d.x+qe.cs_xfA.q.c*d.y),qe.cs_xfB.p.x=v.x-(qe.cs_xfB.q.c*f.x-qe.cs_xfB.q.s*f.y),qe.cs_xfB.p.y=v.y-(qe.cs_xfB.q.s*f.x+qe.cs_xfB.q.c*f.y);var A=new It;A.Initialize(o,qe.cs_xfA,r,qe.cs_xfB,s),e.normal.x=A.normal.x,e.normal.y=A.normal.y;for(var b=e.pointCount,S=0;S0?1/R:0;var P=1*e.normal.y,O=-1*e.normal.x,D=E.rA.x*O-E.rA.y*P,B=E.rB.x*O-E.rB.y*P,L=h+l+u*D*D+_*B*B;E.tangentMass=L>0?1/L:0,E.velocityBias=0;var M=e.normal.x*(C.x+-T*E.rB.y-g.x- -y*E.rA.y)+e.normal.y*(C.y+T*E.rB.x-g.y-y*E.rA.x);M<-1&&(E.velocityBias=-e.restitution*M)}if(2==e.pointCount){var N=e.points[0],F=e.points[1],z=N.rA.x*e.normal.y-N.rA.y*e.normal.x,k=N.rB.x*e.normal.y-N.rB.y*e.normal.x,V=F.rA.x*e.normal.y-F.rA.y*e.normal.x,G=F.rB.x*e.normal.y-F.rB.y*e.normal.x,U=h+l+u*z*z+_*k*k,W=h+l+u*V*V+_*G*G,X=h+l+u*z*V+_*k*G;U*U<1e3*(U*W-X*X)?(e.K.ex.x=U,e.K.ex.y=X,e.K.ey.x=X,e.K.ey.y=W,e.normalMass.Assign(e.K.GetInverse())):e.pointCount=1}}},WarmStart:function(){for(var t=0;t=0&&D>=0);var B=_.x+-d*R.rB.y-l.x- -u*R.rA.y,L=_.y+d*R.rB.x-l.y-u*R.rA.x,M=_.x+-d*P.rB.y-l.x- 
-u*P.rA.y,N=_.y+d*P.rB.x-l.y-u*P.rA.x,F=B*f.x+L*f.y,z=M*f.x+N*f.y,k=F-R.velocityBias,V=z-P.velocityBias;for(k-=e.K.ex.x*O+e.K.ey.x*D,V-=e.K.ex.y*O+e.K.ey.y*D;;){var G=-(e.normalMass.ex.x*k+e.normalMass.ey.x*V),U=-(e.normalMass.ex.y*k+e.normalMass.ey.y*V);if(G>=0&&U>=0){var W=G-O,X=U-D,j=W*f.x,Y=W*f.y,H=X*f.x,q=X*f.y;l.x-=s*(j+H),l.y-=s*(Y+q),u-=o*(R.rA.x*Y-R.rA.y*j+(P.rA.x*q-P.rA.y*H)),_.x+=a*(j+H),_.y+=a*(Y+q),d+=c*(R.rB.x*Y-R.rB.y*j+(P.rB.x*q-P.rB.y*H)),R.normalImpulse=G,P.normalImpulse=U;break}if(G=-R.normalMass*k,U=0,F=0,z=e.K.ex.y*G+V,G>=0&&z>=0){X=U-D,j=(W=G-O)*f.x,Y=W*f.y,H=X*f.x,q=X*f.y,l.x-=s*(j+H),l.y-=s*(Y+q),u-=o*(R.rA.x*Y-R.rA.y*j+(P.rA.x*q-P.rA.y*H)),_.x+=a*(j+H),_.y+=a*(Y+q),d+=c*(R.rB.x*Y-R.rB.y*j+(P.rB.x*q-P.rB.y*H)),R.normalImpulse=G,P.normalImpulse=U;break}if(G=0,U=-P.normalMass*V,F=e.K.ey.x*U+k,z=0,U>=0&&F>=0){X=U-D,j=(W=G-O)*f.x,Y=W*f.y,H=X*f.x,q=X*f.y,l.x-=s*(j+H),l.y-=s*(Y+q),u-=o*(R.rA.x*Y-R.rA.y*j+(P.rA.x*q-P.rA.y*H)),_.x+=a*(j+H),_.y+=a*(Y+q),d+=c*(R.rB.x*Y-R.rB.y*j+(P.rB.x*q-P.rB.y*H)),R.normalImpulse=G,P.normalImpulse=U;break}if(G=0,U=0,z=V,(F=k)>=0&&z>=0){X=U-D,j=(W=G-O)*f.x,Y=W*f.y,H=X*f.x,q=X*f.y,l.x-=s*(j+H),l.y-=s*(Y+q),u-=o*(R.rA.x*Y-R.rA.y*j+(P.rA.x*q-P.rA.y*H)),_.x+=a*(j+H),_.y+=a*(Y+q),d+=c*(R.rB.x*Y-R.rB.y*j+(P.rB.x*q-P.rB.y*H)),R.normalImpulse=G,P.normalImpulse=U;break}break}}this.m_velocities[n].w=u,this.m_velocities[r].w=d}},StoreImpulses:function(){for(var t=0;t0?-S/I:0,P=R*y.x,O=R*y.y;d.x-=o*P,d.y-=o*O,f-=a*(C*O-T*P),m.x+=l*P,m.y+=l*O,p+=u*(A*O-b*P)}this.m_positions[n].a=f,this.m_positions[r].a=p}return t>=-3*c},SolveTOIPositionConstraints:function(t,e){for(var i=0,n=0;n0?-E/R:0,O=S.Multiply(P,x);m.Subtract(S.Multiply(u,O)),p-=_*B(A,O),g.Add(S.Multiply(d,O)),y+=f*B(b,O)}this.m_positions[s].a=p,this.m_positions[o].a=y}return i>=-1.5*c}};var Ze=t.create("solve initialization","solve"),Ke=t.create("warm starting","solve initialization"),Qe=t.create("solve velocities","solve"),$e=t.create("solve positions","solve");function ti(){this.other=null,this.joint=null,this.prev=null,this.next=null}function ei(){this.type=ii.e_unknownJoint,this.userData=null,this.bodyA=null,this.bodyB=null,this.collideConnected=!1}function ii(t){i(t.bodyA!=t.bodyB),this.m_type=t.type,this.m_prev=null,this.m_next=null,this.m_bodyA=t.bodyA,this.m_bodyB=t.bodyB,this.m_index=0,this.m_collideConnected=t.collideConnected,this.m_islandFlag=!1,this.m_userData=t.userData,this.m_edgeA=new ti,this.m_edgeA.joint=null,this.m_edgeA.other=null,this.m_edgeA.prev=null,this.m_edgeA.next=null,this.m_edgeB=new ti,this.m_edgeB.joint=null,this.m_edgeB.other=null,this.m_edgeB.prev=null,this.m_edgeB.next=null}function ni(){this.parent.call(this),this.type=ii.e_revoluteJoint,this.localAnchorA=new S,this.localAnchorB=new S,this.referenceAngle=0,this.lowerAngle=0,this.upperAngle=0,this.maxMotorTorque=0,this.motorSpeed=0,this.enableLimit=!1,this.enableMotor=!1,Object.seal(this)}function ri(t){this.parent.call(this,t),this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_referenceAngle=t.referenceAngle,this.m_impulse=new E,this.m_motorImpulse=0,this.m_lowerAngle=t.lowerAngle,this.m_upperAngle=t.upperAngle,this.m_maxMotorTorque=t.maxMotorTorque,this.m_motorSpeed=t.motorSpeed,this.m_enableLimit=t.enableLimit,this.m_enableMotor=t.enableMotor,this.m_limitState=ii.e_inactiveLimit,this.m_indexA=0,this.m_indexB=0,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new S,this.m_localCenterB=new 
S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_mass=new I,this.m_motorMass=0}function si(){this.parent.call(this),this.type=ii.e_mouseJoint,this.target=new S(0,0),this.maxForce=0,this.frequencyHz=5,this.dampingRatio=.7,Object.seal(this)}function oi(t){this.parent.call(this,t),i(t.target.IsValid()),i(g(t.maxForce)&&t.maxForce>=0),i(g(t.frequencyHz)&&t.frequencyHz>=0),i(g(t.dampingRatio)&&t.dampingRatio>=0),this.m_targetA=t.target.Clone(),this.m_localAnchorB=q(this.m_bodyB.GetTransform(),this.m_targetA),this.m_maxForce=t.maxForce,this.m_impulse=new S,this.m_frequencyHz=t.frequencyHz,this.m_dampingRatio=t.dampingRatio,this.m_beta=0,this.m_gamma=0,this.m_indexA=0,this.m_indexB=0,this.m_rB=new S,this.m_localCenterB=new S,this.m_invMassB=0,this.m_invIB=0,this.m_mass=new w,this.m_C=new S}function ai(){this.parent.call(this),this.type=ii.e_distanceJoint,this.localAnchorA=new S(0,0),this.localAnchorB=new S(0,0),this.length=1,this.frequencyHz=0,this.dampingRatio=0,Object.seal(this)}function ci(t){this.parent.call(this,t),this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_length=t.length,this.m_frequencyHz=t.frequencyHz,this.m_dampingRatio=t.dampingRatio,this.m_impulse=0,this.m_gamma=0,this.m_bias=0,this.m_indexA=0,this.m_indexB=0,this.m_u=new S,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_mass=0}function hi(){this.parent.call(this),this.type=ii.e_prismaticJoint,this.localAnchorA=new S,this.localAnchorB=new S,this.localAxisA=new S(1,0),this.referenceAngle=0,this.enableLimit=!1,this.lowerTranslation=0,this.upperTranslation=0,this.enableMotor=!1,this.maxMotorForce=0,this.motorSpeed=0,Object.seal(this)}function li(t){this.parent.call(this,t),this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_localXAxisA=t.localAxisA.Clone(),this.m_localXAxisA.Normalize(),this.m_localYAxisA=M(1,this.m_localXAxisA),this.m_referenceAngle=t.referenceAngle,this.m_impulse=new E,this.m_motorMass=0,this.m_motorImpulse=0,this.m_lowerTranslation=t.lowerTranslation,this.m_upperTranslation=t.upperTranslation,this.m_maxMotorForce=t.maxMotorForce,this.m_motorSpeed=t.motorSpeed,this.m_enableLimit=t.enableLimit,this.m_enableMotor=t.enableMotor,this.m_limitState=ii.e_inactiveLimit,this.m_axis=new S,this.m_perp=new S,this.m_indexA=0,this.m_indexB=0,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_s1=0,this.m_s2=0,this.m_a1=0,this.m_a2=0,this.m_K=new I,this.m_motorMass=0}function ui(){this.parent.call(this),this.type=ii.e_frictionJoint,this.localAnchorA=new S,this.localAnchorB=new S,this.maxForce=0,this.maxTorque=0,Object.seal(this)}function _i(t){this.parent.call(this,t),this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_linearImpulse=new S,this.m_angularImpulse=0,this.m_maxForce=t.maxForce,this.m_maxTorque=t.maxTorque,this.m_indexA=0,this.m_indexB=0,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_linearMass=new w,this.m_angularMass=0}function di(){this.parent.call(this),this.type=ii.e_weldJoint,this.localAnchorA=new S(0,0),this.localAnchorB=new S(0,0),this.referenceAngle=0,this.frequencyHz=0,this.dampingRatio=0,Object.seal(this)}function 
fi(t){this.parent.call(this,t),this.m_bias=0,this.m_gamma=0,this.m_indexA=0,this.m_indexB=0,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_mass=new I,this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_referenceAngle=t.referenceAngle,this.m_frequencyHz=t.frequencyHz,this.m_dampingRatio=t.dampingRatio,this.m_impulse=new E}function mi(){this.parent.call(this),this.type=ii.e_wheelJoint,this.localAnchorA=new S,this.localAnchorB=new S,this.localAxisA=new S(1,0),this.enableMotor=!1,this.maxMotorTorque=0,this.motorSpeed=0,this.frequencyHz=2,this.dampingRatio=.7,Object.seal(this)}function pi(t){this.parent.call(this,t),this.m_indexA=0,this.m_indexB=0,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_localXAxisA=t.localAxisA.Clone(),this.m_localYAxisA=M(1,this.m_localXAxisA),this.m_mass=0,this.m_impulse=0,this.m_motorMass=0,this.m_motorImpulse=0,this.m_springMass=0,this.m_springImpulse=0,this.m_maxMotorTorque=t.maxMotorTorque,this.m_motorSpeed=t.motorSpeed,this.m_enableMotor=t.enableMotor,this.m_frequencyHz=t.frequencyHz,this.m_dampingRatio=t.dampingRatio,this.m_bias=0,this.m_gamma=0,this.m_ax=new S,this.m_ay=new S,this.m_sAx=this.m_sBx=0,this.m_sAy=this.m_sBy=0}function gi(){this.parent.call(this),this.type=ii.e_gearJoint,this.joint1=null,this.joint2=null,this.ratio=1,Object.seal(this)}function yi(t){var e,n;this.parent.call(this,t),this.m_joint1=t.joint1,this.m_joint2=t.joint2,this.m_typeA=this.m_joint1.GetType(),this.m_typeB=this.m_joint2.GetType(),i(this.m_typeA==ii.e_revoluteJoint||this.m_typeA==ii.e_prismaticJoint),i(this.m_typeB==ii.e_revoluteJoint||this.m_typeB==ii.e_prismaticJoint),this.m_bodyC=this.m_joint1.GetBodyA(),this.m_bodyA=this.m_joint1.GetBodyB();var r=this.m_bodyA.m_xf,s=this.m_bodyA.m_sweep.a,o=this.m_bodyC.m_xf,a=this.m_bodyC.m_sweep.a;if(this.m_localAnchorA=new S,this.m_localAnchorB=new S,this.m_localAnchorC=new S,this.m_localAnchorD=new S,this.m_localAxisC=new S,this.m_localAxisD=new S,this.m_typeA==ii.e_revoluteJoint){var c=t.joint1;this.m_localAnchorC.Assign(c.m_localAnchorA),this.m_localAnchorA.Assign(c.m_localAnchorB),this.m_referenceAngleA=c.m_referenceAngle,this.m_localAxisC.SetZero(),e=s-a-this.m_referenceAngleA}else{var h=t.joint1;this.m_localAnchorC.Assign(h.m_localAnchorA),this.m_localAnchorA.Assign(h.m_localAnchorB),this.m_referenceAngleA=h.m_referenceAngle,this.m_localAxisC.Assign(h.m_localXAxisA);var l=this.m_localAnchorC,u=Y(o.q,S.Add(j(r.q,this.m_localAnchorA),S.Subtract(r.p,o.p)));e=D(S.Subtract(u,l),this.m_localAxisC)}this.m_bodyD=this.m_joint2.GetBodyA(),this.m_bodyB=this.m_joint2.GetBodyB();var _=this.m_bodyB.m_xf,d=this.m_bodyB.m_sweep.a,f=this.m_bodyD.m_xf,m=this.m_bodyD.m_sweep.a;if(this.m_typeB==ii.e_revoluteJoint){c=t.joint2;this.m_localAnchorD.Assign(c.m_localAnchorA),this.m_localAnchorB.Assign(c.m_localAnchorB),this.m_referenceAngleB=c.m_referenceAngle,this.m_localAxisD.SetZero(),n=d-m-this.m_referenceAngleB}else{h=t.joint2;this.m_localAnchorD.Assign(h.m_localAnchorA),this.m_localAnchorB.Assign(h.m_localAnchorB),this.m_referenceAngleB=h.m_referenceAngle,this.m_localAxisD.Assign(h.m_localXAxisA);var 
p=this.m_localAnchorD,g=Y(f.q,S.Add(j(_.q,this.m_localAnchorB),S.Subtract(_.p,f.p)));n=D(S.Subtract(g,p),this.m_localAxisD)}this.m_ratio=t.ratio,this.m_constant=e+this.m_ratio*n,this.m_impulse=0,this.m_indexA=this.m_indexB=this.m_indexC=this.m_indexD=0,this.m_lcA=new S,this.m_lcB=new S,this.m_lcC=new S,this.m_lcD=new S,this.m_mA=this.m_mB=this.m_mC=this.m_mD=0,this.m_iA=this.m_iB=this.m_iC=this.m_iD=0,this.m_JvAC=new S,this.m_JvBD=new S,this.m_JwA=this.m_JwB=this.m_JwC=this.m_JwD=0,this.m_mass=0}function vi(){this.parent.call(this),this.type=ii.e_motorJoint,this.linearOffset=new S,this.angularOffset=0,this.maxForce=1,this.maxTorque=1,this.correctionFactor=.3,Object.seal(this)}function xi(t){this.parent.call(this,t),this.m_linearOffset=t.linearOffset.Clone(),this.m_angularOffset=t.angularOffset,this.m_linearImpulse=new S,this.m_angularImpulse=0,this.m_maxForce=t.maxForce,this.m_maxTorque=t.maxTorque,this.m_correctionFactor=t.correctionFactor,this.m_indexA=0,this.m_indexB=0,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_linearError=new S,this.m_angularError=0,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_linearMass=new w,this.m_angularMass=0}Je._solverData=new ye,Je._solverDef=new He,Je._solver=new qe,Je.prototype={Clear:function(){this.m_bodyCount=0,this.m_contactCount=0,this.m_jointCount=0},Initialize:function(t,e,i,n){this.m_listener=n,this.m_bodyCapacity=t,this.m_contactCapacity=e,this.m_jointCapacity=i,this.m_bodyCount=0,this.m_contactCount=0,this.m_jointCount=0,this.m_bodies.length=t,this.m_contacts.length=e,this.m_joints.length=i,this.m_velocities.length=t,this.m_positions.length=t},Solve:function(t,e,i){Ze.start();for(var r=t.dt,s=0;s4){var y=2/A(g);u.x*=y,u.y*=y}var v=r*c;if(v*v>d)c*=y=_/Z(v);l.x+=r*u.x,l.y+=r*u.y,a+=r*c,this.m_positions[s].a=a,this.m_velocities[s].w=c}var x=!1;for(s=0;sw||D(o.m_linearVelocity,o.m_linearVelocity)>1e-4?(o.m_sleepTime=0,E=0):(o.m_sleepTime+=r,E=Q(E,o.m_sleepTime)))}if(E>=.5&&x)for(s=0;s4){var f=2/u.Length();h.Multiply(f)}var m=o*l;if(m*m>d)l*=f=_/Z(m);a.Add(S.Multiply(o,h)),c+=o*l,this.m_positions[r].a=c,this.m_velocities[r].w=l;var p=this.m_bodies[r];p.m_sweep.c.Assign(a),p.m_sweep.a=c,p.m_linearVelocity.Assign(h),p.m_angularVelocity=l,p.SynchronizeTransform()}this.Report(Je._solver.m_velocityConstraints)},AddBody:function(t){i(this.m_bodyCount0&&(this.m_motorMass=1/this.m_motorMass),(0==this.m_enableMotor||f)&&(this.m_motorImpulse=0),this.m_enableLimit&&0==f){var m=r-e-this.m_referenceAngle;Z(this.m_upperAngle-this.m_lowerAngle)<2*h?this.m_limitState=ii.e_equalLimits:m<=this.m_lowerAngle?(this.m_limitState!=ii.e_atLowerLimit&&(this.m_impulse.z=0),this.m_limitState=ii.e_atLowerLimit):m>=this.m_upperAngle?(this.m_limitState!=ii.e_atUpperLimit&&(this.m_impulse.z=0),this.m_limitState=ii.e_atUpperLimit):(this.m_limitState=ii.e_inactiveLimit,this.m_impulse.z=0)}else this.m_limitState=ii.e_inactiveLimit;if(t.step.warmStarting){this.m_impulse.Multiply(t.step.dtRatio),this.m_motorImpulse*=t.step.dtRatio;var p=new S(this.m_impulse.x,this.m_impulse.y);i.Subtract(S.Multiply(l,p)),n-=_*(B(this.m_rA,p)+this.m_motorImpulse+this.m_impulse.z),s.Add(S.Multiply(u,p)),o+=d*(B(this.m_rB,p)+this.m_motorImpulse+this.m_impulse.z)}else this.m_impulse.SetZero(),this.m_motorImpulse=0;t.velocities[this.m_indexA].v.Assign(i),t.velocities[this.m_indexA].w=n,t.velocities[this.m_indexB].v.Assign(s),t.velocities[this.m_indexB].w=o},SolveVelocityConstraints:function(t){var 
e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=this.m_invMassA,o=this.m_invMassB,a=this.m_invIA,c=this.m_invIB,h=a+c==0;if(this.m_enableMotor&&this.m_limitState!=ii.e_equalLimits&&0==h){var l=r-i-this.m_motorSpeed,u=-this.m_motorMass*l,_=this.m_motorImpulse,d=t.step.dt*this.m_maxMotorTorque;this.m_motorImpulse=it(this.m_motorImpulse+u,-d,d),i-=a*(u=this.m_motorImpulse-_),r+=c*u}if(this.m_enableLimit&&this.m_limitState!=ii.e_inactiveLimit&&0==h){var f=S.Subtract(S.Subtract(S.Add(n,M(r,this.m_rB)),e),M(i,this.m_rA)),m=r-i;l=new E(f.x,f.y,m),u=this.m_mass.Solve33(l).Negate();if(this.m_limitState==ii.e_equalLimits)this.m_impulse.Add(u);else if(this.m_limitState==ii.e_atLowerLimit){if(this.m_impulse.z+u.z<0){var p=S.Add(f.Negate(),S.Multiply(this.m_impulse.z,new S(this.m_mass.ez.x,this.m_mass.ez.y))),g=this.m_mass.Solve22(p);u.x=g.x,u.y=g.y,u.z=-this.m_impulse.z,this.m_impulse.x+=g.x,this.m_impulse.y+=g.y,this.m_impulse.z=0}else this.m_impulse.Add(u)}else if(this.m_limitState==ii.e_atUpperLimit){if(this.m_impulse.z+u.z>0){p=S.Add(f.Negate(),S.Multiply(this.m_impulse.z,new S(this.m_mass.ez.x,this.m_mass.ez.y))),g=this.m_mass.Solve22(p);u.x=g.x,u.y=g.y,u.z=-this.m_impulse.z,this.m_impulse.x+=g.x,this.m_impulse.y+=g.y,this.m_impulse.z=0}else this.m_impulse.Add(u)}var y=new S(u.x,u.y);e.Subtract(S.Multiply(s,y)),i-=a*(B(this.m_rA,y)+u.z),n.Add(S.Multiply(o,y)),r+=c*(B(this.m_rB,y)+u.z)}else{l=S.Subtract(S.Subtract(S.Add(n,M(r,this.m_rB)),e),M(i,this.m_rA)),u=this.m_mass.Solve22(l.Negate());this.m_impulse.x+=u.x,this.m_impulse.y+=u.y,e.Subtract(S.Multiply(s,u)),i-=a*B(this.m_rA,u),n.Add(S.Multiply(o,u)),r+=c*B(this.m_rB,u)}t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){var e,i=t.positions[this.m_indexA].c.Clone(),n=t.positions[this.m_indexA].a,r=t.positions[this.m_indexB].c.Clone(),s=t.positions[this.m_indexB].a,o=new R(n),a=new R(s),l=0,_=this.m_invIA+this.m_invIB==0;if(this.m_enableLimit&&this.m_limitState!=ii.e_inactiveLimit&&0==_){var d=s-n-this.m_referenceAngle,f=0;if(this.m_limitState==ii.e_equalLimits){var m=it(d-this.m_lowerAngle,-u,u);f=-this.m_motorMass*m,l=Z(m)}else if(this.m_limitState==ii.e_atLowerLimit){l=-(m=d-this.m_lowerAngle),m=it(m+h,-u,0),f=-this.m_motorMass*m}else if(this.m_limitState==ii.e_atUpperLimit){l=m=d-this.m_upperAngle,m=it(m-h,0,u),f=-this.m_motorMass*m}n-=this.m_invIA*f,s+=this.m_invIB*f}o.Set(n),a.Set(s);var p=j(o,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),g=j(a,S.Subtract(this.m_localAnchorB,this.m_localCenterB));e=(m=S.Subtract(S.Subtract(S.Add(r,g),i),p)).Length();var y=this.m_invMassA,v=this.m_invMassB,x=this.m_invIA,C=this.m_invIB,T=new w;T.ex.x=y+v+x*p.y*p.y+C*g.y*g.y,T.ex.y=-x*p.x*p.y-C*g.x*g.y,T.ey.x=T.ex.y,T.ey.y=y+v+x*p.x*p.x+C*g.x*g.x;var A=T.Solve(m).Negate();return i.Subtract(S.Multiply(y,A)),n-=x*B(p,A),r.Add(S.Multiply(v,A)),s+=C*B(g,A),t.positions[this.m_indexA].c.Assign(i),t.positions[this.m_indexA].a=n,t.positions[this.m_indexB].c.Assign(r),t.positions[this.m_indexB].a=s,e<=c&&l<=h},_serialize:function(t){var e=t||{};return 
this.parent.prototype._serialize.call(this,e),e.localAnchorA=this.m_localAnchorA._serialize(),e.localAnchorB=this.m_localAnchorB._serialize(),e.referenceAngle=this.m_referenceAngle,e.lowerAngle=this.m_lowerAngle,e.upperAngle=this.m_upperAngle,e.maxMotorTorque=this.m_maxMotorTorque,e.motorSpeed=this.m_motorSpeed,e.enableLimit=this.m_enableLimit,e.enableMotor=this.m_enableMotor,e}},ri._extend(ii),si._extend(ei),oi.prototype={GetAnchorA:function(){return this.m_targetA},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){return S.Multiply(t,this.m_impulse)},GetReactionTorque:function(t){return 0*t},SetTarget:function(t){0==this.m_bodyB.IsAwake()&&this.m_bodyB.SetAwake(!0),this.m_targetA.Assign(t)},GetTarget:function(){return this.m_targetA},SetMaxForce:function(t){this.m_maxForce=t},GetMaxForce:function(){return this.m_maxForce},SetFrequency:function(t){this.m_frequencyHz=t},GetFrequency:function(){return this.m_frequencyHz},SetDampingRatio:function(t){this.m_dampingRatio=t},GetDampingRatio:function(){return this.m_dampingRatio},ShiftOrigin:function(t){this.m_targetA.Subtract(t)},InitVelocityConstraints:function(t){this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIB=this.m_bodyB.m_invI;var e=t.positions[this.m_indexB].c.Clone(),n=t.positions[this.m_indexB].a,o=t.velocities[this.m_indexB].v.Clone(),a=t.velocities[this.m_indexB].w,c=new R(n),h=this.m_bodyB.GetMass(),l=2*s*this.m_frequencyHz,u=2*h*this.m_dampingRatio*l,_=h*(l*l),d=t.step.dt;i(u+d*_>r),this.m_gamma=d*(u+d*_),0!=this.m_gamma&&(this.m_gamma=1/this.m_gamma),this.m_beta=d*_*this.m_gamma,this.m_rB.Assign(j(c,S.Subtract(this.m_localAnchorB,this.m_localCenterB)));var f=new w;f.ex.x=this.m_invMassB+this.m_invIB*this.m_rB.y*this.m_rB.y+this.m_gamma,f.ex.y=-this.m_invIB*this.m_rB.x*this.m_rB.y,f.ey.x=f.ex.y,f.ey.y=this.m_invMassB+this.m_invIB*this.m_rB.x*this.m_rB.x+this.m_gamma,this.m_mass.Assign(f.GetInverse()),this.m_C.Assign(S.Subtract(S.Add(e,this.m_rB),this.m_targetA)),this.m_C.Multiply(this.m_beta),a*=.98,t.step.warmStarting?(this.m_impulse.Multiply(t.step.dtRatio),o.Add(S.Multiply(this.m_invMassB,this.m_impulse)),a+=this.m_invIB*B(this.m_rB,this.m_impulse)):this.m_impulse.SetZero(),t.velocities[this.m_indexB].v.Assign(o),t.velocities[this.m_indexB].w=a},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexB].v.Clone(),i=t.velocities[this.m_indexB].w,n=S.Add(e,M(i,this.m_rB)),r=N(this.m_mass,S.Add(S.Add(n,this.m_C),S.Multiply(this.m_gamma,this.m_impulse)).Negate()),s=this.m_impulse.Clone();this.m_impulse.Add(r);var o=t.step.dt*this.m_maxForce;this.m_impulse.LengthSquared()>o*o&&this.m_impulse.Multiply(o/this.m_impulse.Length()),r.Assign(S.Subtract(this.m_impulse,s)),e.Add(S.Multiply(this.m_invMassB,r)),i+=this.m_invIB*B(this.m_rB,r),t.velocities[this.m_indexB].v.Assign(e),t.velocities[this.m_indexB].w=i},SolvePositionConstraints:function(t){return!0}},oi._extend(ii),ai.prototype={Initialize:function(t,e,i,n){this.bodyA=t,this.bodyB=e,this.localAnchorA=this.bodyA.GetLocalPoint(i),this.localAnchorB=this.bodyB.GetLocalPoint(n);var 
r=S.Subtract(n,i);this.length=r.Length()},_deserialize:function(t,e,i){this.parent.prototype._deserialize.call(this,t,e,i),this.localAnchorA._deserialize(t.localAnchorA),this.localAnchorB._deserialize(t.localAnchorB),this.length=t.length,this.frequencyHz=t.frequencyHz,this.dampingRatio=t.dampingRatio}},ai._extend(ei),ci.prototype={GetAnchorA:function(){return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){return S.Multiply(t*this.m_impulse,this.m_u)},GetReactionTorque:function(t){return 0},GetLocalAnchorA:function(){return this.m_localAnchorA},GetLocalAnchorB:function(){return this.m_localAnchorB},SetLength:function(t){this.m_length=t},GetLength:function(){return this.m_length},SetFrequency:function(t){this.m_frequencyHz=t},GetFrequency:function(){return this.m_frequencyHz},SetDampingRatio:function(t){this.m_dampingRatio=t},GetDampingRatio:function(){return this.m_dampingRatio},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassA=this.m_bodyA.m_invMass,this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIA=this.m_bodyA.m_invI,this.m_invIB=this.m_bodyB.m_invI;var e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.velocities[this.m_indexA].v.Clone(),r=t.velocities[this.m_indexA].w,o=t.positions[this.m_indexB].c.Clone(),a=t.positions[this.m_indexB].a,h=t.velocities[this.m_indexB].v.Clone(),l=t.velocities[this.m_indexB].w,u=new R(i),_=new R(a);this.m_rA=j(u,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),this.m_rB=j(_,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),this.m_u=S.Subtract(S.Subtract(S.Add(o,this.m_rB),e),this.m_rA);var d=this.m_u.Length();d>c?this.m_u.Multiply(1/d):this.m_u.Set(0,0);var f=B(this.m_rA,this.m_u),m=B(this.m_rB,this.m_u),p=this.m_invMassA+this.m_invIA*f*f+this.m_invMassB+this.m_invIB*m*m;if(this.m_mass=0!=p?1/p:0,this.m_frequencyHz>0){var g=d-this.m_length,y=2*s*this.m_frequencyHz,v=2*this.m_mass*this.m_dampingRatio*y,x=this.m_mass*y*y,C=t.step.dt;this.m_gamma=C*(v+C*x),this.m_gamma=0!=this.m_gamma?1/this.m_gamma:0,this.m_bias=g*C*x*this.m_gamma,p+=this.m_gamma,this.m_mass=0!=p?1/p:0}else this.m_gamma=0,this.m_bias=0;if(t.step.warmStarting){this.m_impulse*=t.step.dtRatio;var T=S.Multiply(this.m_impulse,this.m_u);n.Subtract(S.Multiply(this.m_invMassA,T)),r-=this.m_invIA*B(this.m_rA,T),h.Add(S.Multiply(this.m_invMassB,T)),l+=this.m_invIB*B(this.m_rB,T)}else this.m_impulse=0;t.velocities[this.m_indexA].v.Assign(n),t.velocities[this.m_indexA].w=r,t.velocities[this.m_indexB].v.Assign(h),t.velocities[this.m_indexB].w=l},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=S.Add(e,M(i,this.m_rA)),o=S.Add(n,M(r,this.m_rB)),a=D(this.m_u,S.Subtract(o,s)),c=-this.m_mass*(a+this.m_bias+this.m_gamma*this.m_impulse);this.m_impulse+=c;var h=S.Multiply(c,this.m_u);e.Subtract(S.Multiply(this.m_invMassA,h)),i-=this.m_invIA*B(this.m_rA,h),n.Add(S.Multiply(this.m_invMassB,h)),r+=this.m_invIB*B(this.m_rB,h),t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){if(this.m_frequencyHz>0)return!0;var 
e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.positions[this.m_indexB].c.Clone(),r=t.positions[this.m_indexB].a,s=new R(i),o=new R(r),a=j(s,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),h=j(o,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),l=S.Subtract(S.Subtract(S.Add(n,h),e),a),u=l.Normalize()-this.m_length;u=it(u,-.2,.2);var _=-this.m_mass*u,d=S.Multiply(_,l);return e.Subtract(S.Multiply(this.m_invMassA,d)),i-=this.m_invIA*B(a,d),n.Add(S.Multiply(this.m_invMassB,d)),r+=this.m_invIB*B(h,d),t.positions[this.m_indexA].c.Assign(e),t.positions[this.m_indexA].a=i,t.positions[this.m_indexB].c.Assign(n),t.positions[this.m_indexB].a=r,Z(u)0&&(this.m_motorMass=1/this.m_motorMass),this.m_perp=j(l,this.m_localYAxisA),this.m_s1=B(S.Add(f,_),this.m_perp),this.m_s2=B(d,this.m_perp);var v=m+p+g*this.m_s1*this.m_s1+y*this.m_s2*this.m_s2,x=g*this.m_s1+y*this.m_s2,C=g*this.m_s1*this.m_a1+y*this.m_s2*this.m_a2,T=g+y;0==T&&(T=1);var A=g*this.m_a1+y*this.m_a2,b=m+p+g*this.m_a1*this.m_a1+y*this.m_a2*this.m_a2;if(this.m_K.ex.Set(v,x,C),this.m_K.ey.Set(x,T,A),this.m_K.ez.Set(C,A,b),this.m_enableLimit){var E=D(this.m_axis,f);Z(this.m_upperTranslation-this.m_lowerTranslation)<2*c?this.m_limitState=ii.e_equalLimits:E<=this.m_lowerTranslation?this.m_limitState!=ii.e_atLowerLimit&&(this.m_limitState=ii.e_atLowerLimit,this.m_impulse.z=0):E>=this.m_upperTranslation?this.m_limitState!=ii.e_atUpperLimit&&(this.m_limitState=ii.e_atUpperLimit,this.m_impulse.z=0):(this.m_limitState=ii.e_inactiveLimit,this.m_impulse.z=0)}else this.m_limitState=ii.e_inactiveLimit,this.m_impulse.z=0;if(0==this.m_enableMotor&&(this.m_motorImpulse=0),t.step.warmStarting){this.m_impulse.Multiply(t.step.dtRatio),this.m_motorImpulse*=t.step.dtRatio;var w=S.Add(S.Multiply(this.m_impulse.x,this.m_perp),S.Multiply(this.m_motorImpulse+this.m_impulse.z,this.m_axis)),I=this.m_impulse.x*this.m_s1+this.m_impulse.y+(this.m_motorImpulse+this.m_impulse.z)*this.m_a1,P=this.m_impulse.x*this.m_s2+this.m_impulse.y+(this.m_motorImpulse+this.m_impulse.z)*this.m_a2;n.Subtract(S.Multiply(m,w)),r-=g*I,a.Add(S.Multiply(p,w)),h+=y*P}else this.m_impulse.SetZero(),this.m_motorImpulse=0;t.velocities[this.m_indexA].v.Assign(n),t.velocities[this.m_indexA].w=r,t.velocities[this.m_indexB].v.Assign(a),t.velocities[this.m_indexB].w=h},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=this.m_invMassA,o=this.m_invMassB,a=this.m_invIA,c=this.m_invIB;if(this.m_enableMotor&&this.m_limitState!=ii.e_equalLimits){var h=D(this.m_axis,S.Subtract(n,e))+this.m_a2*r-this.m_a1*i,l=this.m_motorMass*(this.m_motorSpeed-h),u=this.m_motorImpulse,_=t.step.dt*this.m_maxMotorForce;this.m_motorImpulse=it(this.m_motorImpulse+l,-_,_),l=this.m_motorImpulse-u;var d=S.Multiply(l,this.m_axis),f=l*this.m_a1,m=l*this.m_a2;e.Subtract(S.Multiply(s,d)),i-=a*f,n.Add(S.Multiply(o,d)),r+=c*m}var p=new S;if(p.x=D(this.m_perp,S.Subtract(n,e))+this.m_s2*r-this.m_s1*i,p.y=r-i,this.m_enableLimit&&this.m_limitState!=ii.e_inactiveLimit){var g;g=D(this.m_axis,S.Subtract(n,e))+this.m_a2*r-this.m_a1*i;h=new E(p.x,p.y,g);var y=this.m_impulse.Clone(),v=this.m_K.Solve33(h.Negate());this.m_impulse.Add(v),this.m_limitState==ii.e_atLowerLimit?this.m_impulse.z=tt(this.m_impulse.z,0):this.m_limitState==ii.e_atUpperLimit&&(this.m_impulse.z=Q(this.m_impulse.z,0));var x=S.Subtract(p.Negate(),S.Multiply(this.m_impulse.z-y.z,new 
S(this.m_K.ez.x,this.m_K.ez.y))),C=S.Add(this.m_K.Solve22(x),new S(y.x,y.y));this.m_impulse.x=C.x,this.m_impulse.y=C.y,v=E.Subtract(this.m_impulse,y);d=S.Add(S.Multiply(v.x,this.m_perp),S.Multiply(v.z,this.m_axis)),f=v.x*this.m_s1+v.y+v.z*this.m_a1,m=v.x*this.m_s2+v.y+v.z*this.m_a2;e.Subtract(S.Multiply(s,d)),i-=a*f,n.Add(S.Multiply(o,d)),r+=c*m}else{v=this.m_K.Solve22(p.Negate());this.m_impulse.x+=v.x,this.m_impulse.y+=v.y;d=S.Multiply(v.x,this.m_perp),f=v.x*this.m_s1+v.y,m=v.x*this.m_s2+v.y;e.Subtract(S.Multiply(s,d)),i-=a*f,n.Add(S.Multiply(o,d)),r+=c*m}t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){var e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.positions[this.m_indexB].c.Clone(),r=t.positions[this.m_indexB].a,s=new R(i),o=new R(r),a=this.m_invMassA,l=this.m_invMassB,u=this.m_invIA,_=this.m_invIB,d=j(s,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),f=j(o,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),m=S.Subtract(S.Subtract(S.Add(n,f),e),d),p=j(s,this.m_localXAxisA),g=B(S.Add(m,d),p),y=B(f,p),v=j(s,this.m_localYAxisA),x=B(S.Add(m,d),v),C=B(f,v),T=new E,A=new S;A.x=D(v,m),A.y=r-i-this.m_referenceAngle;var b=Z(A.x),P=Z(A.y),O=!1,L=0;if(this.m_enableLimit){var M=D(p,m);Z(this.m_upperTranslation-this.m_lowerTranslation)<2*c?(L=it(M,-.2,.2),b=tt(b,Z(M)),O=!0):M<=this.m_lowerTranslation?(L=it(M-this.m_lowerTranslation+c,-.2,0),b=tt(b,this.m_lowerTranslation-M),O=!0):M>=this.m_upperTranslation&&(L=it(M-this.m_upperTranslation-c,0,.2),b=tt(b,M-this.m_upperTranslation),O=!0)}if(O){var N=a+l+u*x*x+_*C*C,F=u*x+_*C,z=u*x*g+_*C*y;0==(U=u+_)&&(U=1);var k=u*g+_*y,V=a+l+u*g*g+_*y*y;(W=new I).ex.Set(N,F,z),W.ey.Set(F,U,k),W.ez.Set(z,k,V);var G=new E;G.x=A.x,G.y=A.y,G.z=L,T=W.Solve33(G.Negate())}else{var U,W;N=a+l+u*x*x+_*C*C,F=u*x+_*C;0==(U=u+_)&&(U=1),(W=new w).ex.Set(N,F),W.ey.Set(F,U);var X=W.Solve(A.Negate());T.x=X.x,T.y=X.y,T.z=0}var Y=S.Add(S.Multiply(T.x,v),S.Multiply(T.z,p)),H=T.x*x+T.y+T.z*g,q=T.x*C+T.y+T.z*y;return e.Subtract(S.Multiply(a,Y)),i-=u*H,n.Add(S.Multiply(l,Y)),r+=_*q,t.positions[this.m_indexA].c.Assign(e),t.positions[this.m_indexA].a=i,t.positions[this.m_indexB].c.Assign(n),t.positions[this.m_indexB].a=r,b<=c&&P<=h},_serialize:function(t){var e=t||{};return this.parent.prototype._serialize.call(this,e),e.localAnchorA=this.m_localAnchorA._serialize(),e.localAnchorB=this.m_localAnchorB._serialize(),e.localAxisA=this.m_localXAxisA._serialize(),e.referenceAngle=this.m_referenceAngle,e.enableLimit=this.m_enableLimit,e.lowerTranslation=this.m_lowerTranslation,e.upperTranslation=this.m_upperTranslation,e.enableMotor=this.m_enableMotor,e.maxMotorForce=this.m_maxMotorForce,e.motorSpeed=this.m_motorSpeed,e}},li._extend(ii),ui.prototype={Initialize:function(t,e,i){this.bodyA=t,this.bodyB=e,this.localAnchorA.Assign(this.bodyA.GetLocalPoint(i)),this.localAnchorB.Assign(this.bodyB.GetLocalPoint(i))},_deserialize:function(t,e,i){this.parent.prototype._deserialize.call(this,t,e,i),this.localAnchorA._deserialize(t.localAnchorA),this.localAnchorB._deserialize(t.localAnchorB),this.maxForce=t.maxForce,this.maxTorque=t.maxTorque}},ui._extend(ei),_i.prototype={GetAnchorA:function(){return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){return S.Multiply(t,this.m_linearImpulse)},GetReactionTorque:function(t){return 
t*this.m_angularImpulse},GetLocalAnchorA:function(){return this.m_localAnchorA},GetLocalAnchorB:function(){return this.m_localAnchorB},SetMaxForce:function(t){i(g(t)&&t>=0),this.m_maxForce=t},GetMaxForce:function(){return this.m_maxForce},SetMaxTorque:function(t){i(g(t)&&t>=0),this.m_maxTorque=t},GetMaxTorque:function(){return this.m_maxTorque},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassA=this.m_bodyA.m_invMass,this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIA=this.m_bodyA.m_invI,this.m_invIB=this.m_bodyB.m_invI;var e=t.positions[this.m_indexA].a,i=t.velocities[this.m_indexA].v.Clone(),n=t.velocities[this.m_indexA].w,r=t.positions[this.m_indexB].a,s=t.velocities[this.m_indexB].v.Clone(),o=t.velocities[this.m_indexB].w,a=new R(e),c=new R(r);this.m_rA=j(a,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),this.m_rB=j(c,S.Subtract(this.m_localAnchorB,this.m_localCenterB));var h=this.m_invMassA,l=this.m_invMassB,u=this.m_invIA,_=this.m_invIB,d=new w;if(d.ex.x=h+l+u*this.m_rA.y*this.m_rA.y+_*this.m_rB.y*this.m_rB.y,d.ex.y=-u*this.m_rA.x*this.m_rA.y-_*this.m_rB.x*this.m_rB.y,d.ey.x=d.ex.y,d.ey.y=h+l+u*this.m_rA.x*this.m_rA.x+_*this.m_rB.x*this.m_rB.x,this.m_linearMass=d.GetInverse(),this.m_angularMass=u+_,this.m_angularMass>0&&(this.m_angularMass=1/this.m_angularMass),t.step.warmStarting){this.m_linearImpulse.Multiply(t.step.dtRatio),this.m_angularImpulse*=t.step.dtRatio;var f=new S(this.m_linearImpulse.x,this.m_linearImpulse.y);i.Subtract(S.Multiply(h,f)),n-=u*(B(this.m_rA,f)+this.m_angularImpulse),s.Add(S.Multiply(l,f)),o+=_*(B(this.m_rB,f)+this.m_angularImpulse)}else this.m_linearImpulse.SetZero(),this.m_angularImpulse=0;t.velocities[this.m_indexA].v.Assign(i),t.velocities[this.m_indexA].w=n,t.velocities[this.m_indexB].v.Assign(s),t.velocities[this.m_indexB].w=o},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=this.m_invMassA,o=this.m_invMassB,a=this.m_invIA,c=this.m_invIB,h=t.step.dt,l=r-i,u=-this.m_angularMass*l,_=this.m_angularImpulse,d=h*this.m_maxTorque;this.m_angularImpulse=it(this.m_angularImpulse+u,-d,d),i-=a*(u=this.m_angularImpulse-_),r+=c*u;l=S.Add(n,S.Subtract(M(r,this.m_rB),S.Subtract(e,M(i,this.m_rA)))),u=N(this.m_linearMass,l).Negate(),_=this.m_linearImpulse.Clone();this.m_linearImpulse.Add(u);d=h*this.m_maxForce;this.m_linearImpulse.LengthSquared()>d*d&&(this.m_linearImpulse.Normalize(),this.m_linearImpulse.Multiply(d)),u=S.Subtract(this.m_linearImpulse,_),e.Subtract(S.Multiply(s,u)),i-=a*B(this.m_rA,u),n.Add(S.Multiply(o,u)),r+=c*B(this.m_rB,u),t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){return!0},_serialize:function(t){var e=t||{};return 
this.parent.prototype._serialize.call(this,e),e.localAnchorA=this.m_localAnchorA._serialize(),e.localAnchorB=this.m_localAnchorB._serialize(),e.maxForce=this.m_maxForce,e.maxTorque=this.m_maxTorque,e}},_i._extend(ii),di.prototype={Initialize:function(t,e,i){this.bodyA=t,this.bodyB=e,this.localAnchorA.Assign(this.bodyA.GetLocalPoint(i)),this.localAnchorB.Assign(this.bodyB.GetLocalPoint(i)),this.referenceAngle=this.bodyB.GetAngle()-this.bodyA.GetAngle()},_deserialize:function(t,e,i){this.parent.prototype._deserialize.call(this,t,e,i),this.localAnchorA._deserialize(t.localAnchorA),this.localAnchorB._deserialize(t.localAnchorB),this.referenceAngle=t.referenceAngle,this.frequencyHz=t.frequencyHz,this.dampingRatio=t.dampingRatio}},di._extend(ei),fi.prototype={GetAnchorA:function(){return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){var e=new S(this.m_impulse.x,this.m_impulse.y);return S.Multiply(t,e)},GetReactionTorque:function(t){return t*this.m_impulse.z},GetLocalAnchorA:function(){return this.m_localAnchorA},GetLocalAnchorB:function(){return this.m_localAnchorB},GetReferenceAngle:function(){return this.m_referenceAngle},SetFrequency:function(t){this.m_frequencyHz=t},GetFrequency:function(){return this.m_frequencyHz},SetDampingRatio:function(t){this.m_dampingRatio=t},GetDampingRatio:function(){return this.m_dampingRatio},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassA=this.m_bodyA.m_invMass,this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIA=this.m_bodyA.m_invI,this.m_invIB=this.m_bodyB.m_invI;var e=t.positions[this.m_indexA].a,i=t.velocities[this.m_indexA].v.Clone(),n=t.velocities[this.m_indexA].w,r=t.positions[this.m_indexB].a,o=t.velocities[this.m_indexB].v.Clone(),a=t.velocities[this.m_indexB].w,c=new R(e),h=new R(r);this.m_rA.Assign(j(c,S.Subtract(this.m_localAnchorA,this.m_localCenterA))),this.m_rB.Assign(j(h,S.Subtract(this.m_localAnchorB,this.m_localCenterB)));var l=this.m_invMassA,u=this.m_invMassB,_=this.m_invIA,d=this.m_invIB,f=new I;if(f.ex.x=l+u+this.m_rA.y*this.m_rA.y*_+this.m_rB.y*this.m_rB.y*d,f.ey.x=-this.m_rA.y*this.m_rA.x*_-this.m_rB.y*this.m_rB.x*d,f.ez.x=-this.m_rA.y*_-this.m_rB.y*d,f.ex.y=f.ey.x,f.ey.y=l+u+this.m_rA.x*this.m_rA.x*_+this.m_rB.x*this.m_rB.x*d,f.ez.y=this.m_rA.x*_+this.m_rB.x*d,f.ex.z=f.ez.x,f.ey.z=f.ez.y,f.ez.z=_+d,this.m_frequencyHz>0){f.GetInverse22(this.m_mass);var m=_+d,p=m>0?1/m:0,g=r-e-this.m_referenceAngle,y=2*s*this.m_frequencyHz,v=2*p*this.m_dampingRatio*y,x=p*y*y,C=t.step.dt;this.m_gamma=C*(v+C*x),this.m_gamma=0!=this.m_gamma?1/this.m_gamma:0,this.m_bias=g*C*x*this.m_gamma,m+=this.m_gamma,this.m_mass.ez.z=0!=m?1/m:0}else 0==f.ez.z?(f.GetInverse22(this.m_mass),this.m_gamma=0,this.m_bias=0):(f.GetSymInverse33(this.m_mass),this.m_gamma=0,this.m_bias=0);if(t.step.warmStarting){this.m_impulse.Multiply(t.step.dtRatio);var T=new S(this.m_impulse.x,this.m_impulse.y);i.Subtract(S.Multiply(l,T)),n-=_*(B(this.m_rA,T)+this.m_impulse.z),o.Add(S.Multiply(u,T)),a+=d*(B(this.m_rB,T)+this.m_impulse.z)}else this.m_impulse.SetZero();t.velocities[this.m_indexA].v.Assign(i),t.velocities[this.m_indexA].w=n,t.velocities[this.m_indexB].v.Assign(o),t.velocities[this.m_indexB].w=a},SolveVelocityConstraints:function(t){var 
e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=this.m_invMassA,o=this.m_invMassB,a=this.m_invIA,c=this.m_invIB;if(this.m_frequencyHz>0){var h=r-i,l=-this.m_mass.ez.z*(h+this.m_bias+this.m_gamma*this.m_impulse.z);this.m_impulse.z+=l,i-=a*l,r+=c*l;var u=S.Subtract(S.Subtract(S.Add(n,M(r,this.m_rB)),e),M(i,this.m_rA)),_=U(this.m_mass,u).Negate();this.m_impulse.x+=_.x,this.m_impulse.y+=_.y;var d=_.Clone();e.Subtract(S.Multiply(s,d)),i-=a*B(this.m_rA,d),n.Add(S.Multiply(o,d)),r+=c*B(this.m_rB,d)}else{h=r-i;var f=new E((u=S.Subtract(S.Subtract(S.Add(n,M(r,this.m_rB)),e),M(i,this.m_rA))).x,u.y,h),m=G(this.m_mass,f).Negate();this.m_impulse.Add(m);d=new S(m.x,m.y);e.Subtract(S.Multiply(s,d)),i-=a*(B(this.m_rA,d)+m.z),n.Add(S.Multiply(o,d)),r+=c*(B(this.m_rB,d)+m.z)}t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){var e,i,n=t.positions[this.m_indexA].c.Clone(),r=t.positions[this.m_indexA].a,s=t.positions[this.m_indexB].c.Clone(),o=t.positions[this.m_indexB].a,a=new R(r),l=new R(o),u=this.m_invMassA,_=this.m_invMassB,d=this.m_invIA,f=this.m_invIB,m=j(a,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),p=j(l,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),g=new I;if(g.ex.x=u+_+m.y*m.y*d+p.y*p.y*f,g.ey.x=-m.y*m.x*d-p.y*p.x*f,g.ez.x=-m.y*d-p.y*f,g.ex.y=g.ey.x,g.ey.y=u+_+m.x*m.x*d+p.x*p.x*f,g.ez.y=m.x*d+p.x*f,g.ex.z=g.ez.x,g.ey.z=g.ez.y,g.ez.z=d+f,this.m_frequencyHz>0){e=(v=S.Subtract(S.Subtract(S.Add(s,p),n),m)).Length(),i=0;var y=g.Solve22(v).Negate();n.Subtract(S.Multiply(u,y)),r-=d*B(m,y),s.Add(S.Multiply(_,y)),o+=f*B(p,y)}else{var v=S.Subtract(S.Subtract(S.Add(s,p),n),m),x=o-r-this.m_referenceAngle;e=v.Length(),i=Z(x);var C,T=new E(v.x,v.y,x);if(g.ez.z>0)C=g.Solve33(T).Invert();else{var A=g.Solve22(v).Invert();C=new E(A.x,A.y,0)}y=new S(C.x,C.y);n.Subtract(S.Multiply(u,y)),r-=d*(B(m,y)+C.z),s.Add(S.Multiply(_,y)),o+=f*(B(p,y)+C.z)}return t.positions[this.m_indexA].c.Assign(n),t.positions[this.m_indexA].a=r,t.positions[this.m_indexB].c.Assign(s),t.positions[this.m_indexB].a=o,e<=c&&i<=h},_serialize:function(t){var e=t||{};return this.parent.prototype._serialize.call(this,e),e.localAnchorA=this.m_localAnchorA._serialize(),e.localAnchorB=this.m_localAnchorB._serialize(),e.referenceAngle=this.m_referenceAngle,e.frequencyHz=this.m_frequencyHz,e.dampingRatio=this.m_dampingRatio,e}},fi._extend(ii),mi.prototype={Initialize:function(t,e,i,n){this.bodyA=t,this.bodyB=e,this.localAnchorA.Assign(this.bodyA.GetLocalPoint(i)),this.localAnchorB.Assign(this.bodyB.GetLocalPoint(i)),this.localAxisA.Assign(this.bodyA.GetLocalVector(n))},_deserialize:function(t,e,i){this.parent.prototype._deserialize.call(this,t,e,i),this.localAnchorA._deserialize(t.localAnchorA),this.localAnchorB._deserialize(t.localAnchorB),this.localAxisA._deserialize(t.localAxisA),this.enableMotor=t.enableMotor,this.maxMotorTorque=t.maxMotorTorque,this.motorSpeed=t.motorSpeed,this.frequencyHz=t.frequencyHz,this.dampingRatio=t.dampingRatio}},mi._extend(ei),pi.prototype={GetAnchorA:function(){return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){return S.Multiply(t,S.Add(S.Multiply(this.m_impulse,this.m_ay),S.Multiply(this.m_springImpulse,this.m_ax)))},GetReactionTorque:function(t){return 
t*this.m_motorImpulse},GetLocalAnchorA:function(){return this.m_localAnchorA},GetLocalAnchorB:function(){return this.m_localAnchorB},GetLocalAxisA:function(){return this.m_localXAxisA},GetJointTranslation:function(){var t=this.m_bodyA,e=this.m_bodyB,i=t.GetWorldPoint(this.m_localAnchorA),n=e.GetWorldPoint(this.m_localAnchorB);return D(S.Subtract(n,i),t.GetWorldVector(this.m_localXAxisA))},GetJointSpeed:function(){var t=this.m_bodyA.m_angularVelocity;return this.m_bodyB.m_angularVelocity-t},IsMotorEnabled:function(){return this.m_enableMotor},EnableMotor:function(t){this.m_bodyA.SetAwake(!0),this.m_bodyB.SetAwake(!0),this.m_enableMotor=t},SetMotorSpeed:function(t){this.m_bodyA.SetAwake(!0),this.m_bodyB.SetAwake(!0),this.m_motorSpeed=t},GetMotorSpeed:function(){return this.m_motorSpeed},SetMaxMotorTorque:function(t){this.m_bodyA.SetAwake(!0),this.m_bodyB.SetAwake(!0),this.m_maxMotorTorque=t},GetMaxMotorTorque:function(){return this.m_maxMotorTorque},GetMotorTorque:function(t){return t*this.m_motorImpulse},SetSpringFrequencyHz:function(t){this.m_frequencyHz=t},GetSpringFrequencyHz:function(){return this.m_frequencyHz},SetSpringDampingRatio:function(t){this.m_dampingRatio=t},GetSpringDampingRatio:function(){return this.m_dampingRatio},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassA=this.m_bodyA.m_invMass,this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIA=this.m_bodyA.m_invI,this.m_invIB=this.m_bodyB.m_invI;var e=this.m_invMassA,i=this.m_invMassB,n=this.m_invIA,r=this.m_invIB,o=t.positions[this.m_indexA].c.Clone(),a=t.positions[this.m_indexA].a,c=t.velocities[this.m_indexA].v.Clone(),h=t.velocities[this.m_indexA].w,l=t.positions[this.m_indexB].c.Clone(),u=t.positions[this.m_indexB].a,_=t.velocities[this.m_indexB].v.Clone(),d=t.velocities[this.m_indexB].w,f=new R(a),m=new R(u),p=j(f,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),g=j(m,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),y=S.Subtract(S.Subtract(S.Add(l,g),o),p);if(this.m_ay.Assign(j(f,this.m_localYAxisA)),this.m_sAy=B(S.Add(y,p),this.m_ay),this.m_sBy=B(g,this.m_ay),this.m_mass=e+i+n*this.m_sAy*this.m_sAy+r*this.m_sBy*this.m_sBy,this.m_mass>0&&(this.m_mass=1/this.m_mass),this.m_springMass=0,this.m_bias=0,this.m_gamma=0,this.m_frequencyHz>0){this.m_ax.Assign(j(f,this.m_localXAxisA)),this.m_sAx=B(S.Add(y,p),this.m_ax),this.m_sBx=B(g,this.m_ax);var v=e+i+n*this.m_sAx*this.m_sAx+r*this.m_sBx*this.m_sBx;if(v>0){this.m_springMass=1/v;var x=D(y,this.m_ax),C=2*s*this.m_frequencyHz,T=(y=2*this.m_springMass*this.m_dampingRatio*C,this.m_springMass*C*C),A=t.step.dt;this.m_gamma=A*(y+A*T),this.m_gamma>0&&(this.m_gamma=1/this.m_gamma),this.m_bias=x*A*T*this.m_gamma,this.m_springMass=v+this.m_gamma,this.m_springMass>0&&(this.m_springMass=1/this.m_springMass)}}else this.m_springImpulse=0;if(this.m_enableMotor?(this.m_motorMass=n+r,this.m_motorMass>0&&(this.m_motorMass=1/this.m_motorMass)):(this.m_motorMass=0,this.m_motorImpulse=0),t.step.warmStarting){this.m_impulse*=t.step.dtRatio,this.m_springImpulse*=t.step.dtRatio,this.m_motorImpulse*=t.step.dtRatio;var 
b=S.Add(S.Multiply(this.m_impulse,this.m_ay),S.Multiply(this.m_springImpulse,this.m_ax)),E=this.m_impulse*this.m_sAy+this.m_springImpulse*this.m_sAx+this.m_motorImpulse,w=this.m_impulse*this.m_sBy+this.m_springImpulse*this.m_sBx+this.m_motorImpulse;c.Subtract(S.Multiply(this.m_invMassA,b)),h-=this.m_invIA*E,_.Add(S.Multiply(this.m_invMassB,b)),d+=this.m_invIB*w}else this.m_impulse=0,this.m_springImpulse=0,this.m_motorImpulse=0;t.velocities[this.m_indexA].v.Assign(c),t.velocities[this.m_indexA].w=h,t.velocities[this.m_indexB].v.Assign(_),t.velocities[this.m_indexB].w=d},SolveVelocityConstraints:function(t){var e=this.m_invMassA,i=this.m_invMassB,n=this.m_invIA,r=this.m_invIB,s=t.velocities[this.m_indexA].v.Clone(),o=t.velocities[this.m_indexA].w,a=t.velocities[this.m_indexB].v.Clone(),c=t.velocities[this.m_indexB].w,h=D(this.m_ax,S.Subtract(a,s))+this.m_sBx*c-this.m_sAx*o,l=-this.m_springMass*(h+this.m_bias+this.m_gamma*this.m_springImpulse);this.m_springImpulse+=l;var u=S.Multiply(l,this.m_ax),_=l*this.m_sAx,d=l*this.m_sBx;s.Subtract(S.Multiply(e,u)),o-=n*_,a.Add(S.Multiply(i,u));h=(c+=r*d)-o-this.m_motorSpeed,l=-this.m_motorMass*h;var f=this.m_motorImpulse,m=t.step.dt*this.m_maxMotorTorque;this.m_motorImpulse=it(this.m_motorImpulse+l,-m,m),o-=n*(l=this.m_motorImpulse-f),c+=r*l;h=D(this.m_ay,S.Subtract(a,s))+this.m_sBy*c-this.m_sAy*o,l=-this.m_mass*h;this.m_impulse+=l;u=S.Multiply(l,this.m_ay),_=l*this.m_sAy,d=l*this.m_sBy;s.Subtract(S.Multiply(e,u)),o-=n*_,a.Add(S.Multiply(i,u)),c+=r*d,t.velocities[this.m_indexA].v.Assign(s),t.velocities[this.m_indexA].w=o,t.velocities[this.m_indexB].v.Assign(a),t.velocities[this.m_indexB].w=c},SolvePositionConstraints:function(t){var e,i=t.positions[this.m_indexA].c.Clone(),n=t.positions[this.m_indexA].a,r=t.positions[this.m_indexB].c.Clone(),s=t.positions[this.m_indexB].a,o=new R(n),a=new R(s),h=j(o,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),l=j(a,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),u=S.Add(S.Subtract(r,i),S.Subtract(l,h)),_=j(o,this.m_localYAxisA),d=B(S.Add(u,h),_),f=B(l,_),m=D(u,_),p=this.m_invMassA+this.m_invMassB+this.m_invIA*this.m_sAy*this.m_sAy+this.m_invIB*this.m_sBy*this.m_sBy;e=0!=p?-m/p:0;var g=S.Multiply(e,_),y=e*d,v=e*f;return i.Subtract(S.Multiply(this.m_invMassA,g)),n-=this.m_invIA*y,r.Add(S.Multiply(this.m_invMassB,g)),s+=this.m_invIB*v,t.positions[this.m_indexA].c.Assign(i),t.positions[this.m_indexA].a=n,t.positions[this.m_indexB].c.Assign(r),t.positions[this.m_indexB].a=s,Z(m)<=c},_serialize:function(t){var e=t||{};return this.parent.prototype._serialize.call(this,e),e.localAnchorA=this.m_localAnchorA._serialize(),e.localAnchorB=this.m_localAnchorB._serialize(),e.localAxisA=this.m_localAxisA._serialize(),e.enableMotor=this.m_enableMotor,e.maxMotorTorque=this.m_maxMotorTorque,e.motorSpeed=this.m_motorSpeed,e.frequencyHz=this.m_frequencyHz,e.dampingRatio=this.m_dampingRatio,e}},pi._extend(ii),gi.prototype={_deserialize:function(t,e,i){this.parent.prototype._deserialize.call(this,t,e,i),this.joint1=t.joint1,this.joint2=t.joint2,this.ratio=t.ratio}},gi._extend(ei),yi.prototype={GetAnchorA:function(){return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){var e=S.Multiply(this.m_impulse,this.m_JvAC);return S.Multiply(t,e)},GetReactionTorque:function(t){return t*(this.m_impulse*this.m_JwA)},GetJoint1:function(){return this.m_joint1},GetJoint2:function(){return 
this.m_joint2},SetRatio:function(t){i(g(t)),this.m_ratio=t},GetRatio:function(){return this.m_ratio},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_indexC=this.m_bodyC.m_islandIndex,this.m_indexD=this.m_bodyD.m_islandIndex,this.m_lcA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_lcB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_lcC.Assign(this.m_bodyC.m_sweep.localCenter),this.m_lcD.Assign(this.m_bodyD.m_sweep.localCenter),this.m_mA=this.m_bodyA.m_invMass,this.m_mB=this.m_bodyB.m_invMass,this.m_mC=this.m_bodyC.m_invMass,this.m_mD=this.m_bodyD.m_invMass,this.m_iA=this.m_bodyA.m_invI,this.m_iB=this.m_bodyB.m_invI,this.m_iC=this.m_bodyC.m_invI,this.m_iD=this.m_bodyD.m_invI;var e=t.positions[this.m_indexA].a,i=t.velocities[this.m_indexA].v.Clone(),n=t.velocities[this.m_indexA].w,r=t.positions[this.m_indexB].a,s=t.velocities[this.m_indexB].v.Clone(),o=t.velocities[this.m_indexB].w,a=t.positions[this.m_indexC].a,c=t.velocities[this.m_indexC].v.Clone(),h=t.velocities[this.m_indexC].w,l=t.positions[this.m_indexD].a,u=t.velocities[this.m_indexD].v.Clone(),_=t.velocities[this.m_indexD].w,d=new R(e),f=new R(r),m=new R(a),p=new R(l);if(this.m_mass=0,this.m_typeA==ii.e_revoluteJoint)this.m_JvAC.SetZero(),this.m_JwA=1,this.m_JwC=1,this.m_mass+=this.m_iA+this.m_iC;else{var g=j(m,this.m_localAxisC),y=j(m,S.Subtract(this.m_localAnchorC,this.m_lcC)),v=j(d,S.Subtract(this.m_localAnchorA,this.m_lcA));this.m_JvAC.Assign(g),this.m_JwC=B(y,g),this.m_JwA=B(v,g),this.m_mass+=this.m_mC+this.m_mA+this.m_iC*this.m_JwC*this.m_JwC+this.m_iA*this.m_JwA*this.m_JwA}if(this.m_typeB==ii.e_revoluteJoint)this.m_JvBD.SetZero(),this.m_JwB=this.m_ratio,this.m_JwD=this.m_ratio,this.m_mass+=this.m_ratio*this.m_ratio*(this.m_iB+this.m_iD);else{g=j(p,this.m_localAxisD);var x=j(p,S.Subtract(this.m_localAnchorD,this.m_lcD)),C=j(f,S.Subtract(this.m_localAnchorB,this.m_lcB));this.m_JvBD.Assign(S.Multiply(this.m_ratio,g)),this.m_JwD=this.m_ratio*B(x,g),this.m_JwB=this.m_ratio*B(C,g),this.m_mass+=this.m_ratio*this.m_ratio*(this.m_mD+this.m_mB)+this.m_iD*this.m_JwD*this.m_JwD+this.m_iB*this.m_JwB*this.m_JwB}this.m_mass=this.m_mass>0?1/this.m_mass:0,t.step.warmStarting?(i.Add(S.Multiply(this.m_mA*this.m_impulse,this.m_JvAC)),n+=this.m_iA*this.m_impulse*this.m_JwA,s.Add(S.Multiply(this.m_mB*this.m_impulse,this.m_JvBD)),o+=this.m_iB*this.m_impulse*this.m_JwB,c.Subtract(S.Multiply(this.m_mC*this.m_impulse,this.m_JvAC)),h-=this.m_iC*this.m_impulse*this.m_JwC,u.Subtract(S.Multiply(this.m_mD*this.m_impulse,this.m_JvBD)),_-=this.m_iD*this.m_impulse*this.m_JwD):this.m_impulse=0,t.velocities[this.m_indexA].v.Assign(i),t.velocities[this.m_indexA].w=n,t.velocities[this.m_indexB].v.Assign(s),t.velocities[this.m_indexB].w=o,t.velocities[this.m_indexC].v.Assign(c),t.velocities[this.m_indexC].w=h,t.velocities[this.m_indexD].v.Assign(u),t.velocities[this.m_indexD].w=_},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=t.velocities[this.m_indexC].v.Clone(),o=t.velocities[this.m_indexC].w,a=t.velocities[this.m_indexD].v.Clone(),c=t.velocities[this.m_indexD].w,h=D(this.m_JvAC,S.Subtract(e,s))+D(this.m_JvBD,S.Subtract(n,a));h+=this.m_JwA*i-this.m_JwC*o+(this.m_JwB*r-this.m_JwD*c);var 
l=-this.m_mass*h;this.m_impulse+=l,e.Add(S.Multiply(this.m_mA*l,this.m_JvAC)),i+=this.m_iA*l*this.m_JwA,n.Add(S.Multiply(this.m_mB*l,this.m_JvBD)),r+=this.m_iB*l*this.m_JwB,s.Subtract(S.Multiply(this.m_mC*l,this.m_JvAC)),o-=this.m_iC*l*this.m_JwC,a.Subtract(S.Multiply(this.m_mD*l,this.m_JvBD)),c-=this.m_iD*l*this.m_JwD,t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r,t.velocities[this.m_indexC].v.Assign(s),t.velocities[this.m_indexC].w=o,t.velocities[this.m_indexD].v.Assign(a),t.velocities[this.m_indexD].w=c},SolvePositionConstraints:function(t){var e,i,n,r,s,o,a=t.positions[this.m_indexA].c.Clone(),h=t.positions[this.m_indexA].a,l=t.positions[this.m_indexB].c.Clone(),u=t.positions[this.m_indexB].a,_=t.positions[this.m_indexC].c.Clone(),d=t.positions[this.m_indexC].a,f=t.positions[this.m_indexD].c.Clone(),m=t.positions[this.m_indexD].a,p=new R(h),g=new R(u),y=new R(d),v=new R(m),x=new S,C=new S,T=0;if(this.m_typeA==ii.e_revoluteJoint)x.SetZero(),n=1,s=1,T+=this.m_iA+this.m_iC,e=h-d-this.m_referenceAngleA;else{var A=j(y,this.m_localAxisC),b=j(y,S.Subtract(this.m_localAnchorC,this.m_lcC)),E=j(p,S.Subtract(this.m_localAnchorA,this.m_lcA));x.Assign(A),s=B(b,A),n=B(E,A),T+=this.m_mC+this.m_mA+this.m_iC*s*s+this.m_iA*n*n;var w=S.Subtract(this.m_localAnchorC,this.m_lcC),I=Y(y,S.Add(E,S.Subtract(a,_)));e=D(S.Subtract(I,w),this.m_localAxisC)}if(this.m_typeB==ii.e_revoluteJoint)C.SetZero(),r=this.m_ratio,o=this.m_ratio,T+=this.m_ratio*this.m_ratio*(this.m_iB+this.m_iD),i=u-m-this.m_referenceAngleB;else{A=j(v,this.m_localAxisD);var P=j(v,S.Subtract(this.m_localAnchorD,this.m_lcD)),O=j(g,S.Subtract(this.m_localAnchorB,this.m_lcB));C.Assign(S.Multiply(this.m_ratio,A)),o=this.m_ratio*B(P,A),r=this.m_ratio*B(O,A),T+=this.m_ratio*this.m_ratio*(this.m_mD+this.m_mB)+this.m_iD*o*o+this.m_iB*r*r;var L=S.Subtract(this.m_localAnchorD,this.m_lcD),M=Y(v,S.Add(O,S.Subtract(l,f)));i=D(S.Subtract(M,L),this.m_localAxisD)}var N=e+this.m_ratio*i-this.m_constant,F=0;return T>0&&(F=-N/T),a.Add(S.Multiply(this.m_mA,S.Multiply(F,x))),h+=this.m_iA*F*n,l.Add(S.Multiply(this.m_mB,S.Multiply(F,C))),u+=this.m_iB*F*r,_.Subtract(S.Multiply(this.m_mC,S.Multiply(F,x))),d-=this.m_iC*F*s,f.Subtract(S.Multiply(this.m_mD,S.Multiply(F,C))),m-=this.m_iD*F*o,t.positions[this.m_indexA].c.Assign(a),t.positions[this.m_indexA].a=h,t.positions[this.m_indexB].c.Assign(l),t.positions[this.m_indexB].a=u,t.positions[this.m_indexC].c.Assign(_),t.positions[this.m_indexC].a=d,t.positions[this.m_indexD].c.Assign(f),t.positions[this.m_indexD].a=m,0=0),this.m_maxForce=t},GetMaxForce:function(){return this.m_maxForce},SetMaxTorque:function(t){i(g(t)&&t>=0),this.m_maxTorque=t},GetMaxTorque:function(){return this.m_maxTorque},SetCorrectionFactor:function(t){i(g(t)&&0<=t&&t<=1),this.m_correctionFactor=t},GetCorrectionFactor:function(){return this.m_correctionFactor},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassA=this.m_bodyA.m_invMass,this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIA=this.m_bodyA.m_invI,this.m_invIB=this.m_bodyB.m_invI;var 
e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.velocities[this.m_indexA].v.Clone(),r=t.velocities[this.m_indexA].w,s=t.positions[this.m_indexB].c.Clone(),o=t.positions[this.m_indexB].a,a=t.velocities[this.m_indexB].v.Clone(),c=t.velocities[this.m_indexB].w,h=new R(i),l=new R(o);this.m_rA.Assign(j(h,this.m_localCenterA.Negate())),this.m_rB.Assign(j(l,this.m_localCenterB.Negate()));var u=this.m_invMassA,_=this.m_invMassB,d=this.m_invIA,f=this.m_invIB,m=new w;if(m.ex.x=u+_+d*this.m_rA.y*this.m_rA.y+f*this.m_rB.y*this.m_rB.y,m.ex.y=-d*this.m_rA.x*this.m_rA.y-f*this.m_rB.x*this.m_rB.y,m.ey.x=m.ex.y,m.ey.y=u+_+d*this.m_rA.x*this.m_rA.x+f*this.m_rB.x*this.m_rB.x,this.m_linearMass.Assign(m.GetInverse()),this.m_angularMass=d+f,this.m_angularMass>0&&(this.m_angularMass=1/this.m_angularMass),this.m_linearError.x=s.x+this.m_rB.x-e.x-this.m_rA.x-(h.c*this.m_linearOffset.x-h.s*this.m_linearOffset.y),this.m_linearError.y=s.y+this.m_rB.y-e.y-this.m_rA.y-(h.s*this.m_linearOffset.x+h.c*this.m_linearOffset.y),this.m_angularError=o-i-this.m_angularOffset,t.step.warmStarting){this.m_linearImpulse.Multiply(t.step.dtRatio),this.m_angularImpulse*=t.step.dtRatio;var p=new S(this.m_linearImpulse.x,this.m_linearImpulse.y);n.Subtract(S.Multiply(u,p)),r-=d*(B(this.m_rA,p)+this.m_angularImpulse),a.Add(S.Multiply(_,p)),c+=f*(B(this.m_rB,p)+this.m_angularImpulse)}else this.m_linearImpulse.SetZero(),this.m_angularImpulse=0;t.velocities[this.m_indexA].v.Assign(n),t.velocities[this.m_indexA].w=r,t.velocities[this.m_indexB].v.Assign(a),t.velocities[this.m_indexB].w=c},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=this.m_invMassA,o=this.m_invMassB,a=this.m_invIA,c=this.m_invIB,h=t.step.dt,l=t.step.inv_dt,u=r-i+l*this.m_correctionFactor*this.m_angularError,_=-this.m_angularMass*u,d=this.m_angularImpulse,f=h*this.m_maxTorque;this.m_angularImpulse=it(this.m_angularImpulse+_,-f,f),i-=a*(_=this.m_angularImpulse-d),r+=c*_;u=new S(n.x+-r*this.m_rB.x-e.x- -i*this.m_rA.x+l*this.m_correctionFactor*this.m_linearError.x,n.y+r*this.m_rB.y-e.y-i*this.m_rA.y+l*this.m_correctionFactor*this.m_linearError.y),_=N(this.m_linearMass,u).Negate(),d=this.m_linearImpulse.Clone();this.m_linearImpulse.Add(_);f=h*this.m_maxForce;this.m_linearImpulse.LengthSquared()>f*f&&(this.m_linearImpulse.Normalize(),this.m_linearImpulse.Multiply(f)),_.Assign(S.Subtract(this.m_linearImpulse,d)),e.Subtract(S.Multiply(s,_)),i-=a*B(this.m_rA,_),n.Add(S.Multiply(o,_)),r+=c*B(this.m_rB,_),t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){return!0},_serialize:function(t){var e=t||{};return this.parent.prototype._serialize.call(this,e),e.linearOffset=this.m_linearOffset._serialize(),e.angularOffset=this.m_angularOffset,e.maxForce=this.m_maxForce,e.maxTorque=this.m_maxTorque,e.correctionFactor=this.m_correctionFactor,e}},xi._extend(ii);function Ci(){this.parent.call(this),this.type=ii.e_pulleyJoint,this.groundAnchorA=new S(-1,1),this.groundAnchorB=new S(1,1),this.localAnchorA=new S(-1,0),this.localAnchorB=new S(1,0),this.lengthA=0,this.lengthB=0,this.ratio=1,this.collideConnected=!0,Object.seal(this)}function Ti(t){this.parent.call(this,t),this.m_indexA=0,this.m_indexB=0,this.m_uA=new S,this.m_uB=new S,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new 
S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0,this.m_mass=0,this.m_groundAnchorA=t.groundAnchorA.Clone(),this.m_groundAnchorB=t.groundAnchorB.Clone(),this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_lengthA=t.lengthA,this.m_lengthB=t.lengthB,i(0!=t.ratio),this.m_ratio=t.ratio,this.m_constant=t.lengthA+this.m_ratio*t.lengthB,this.m_impulse=0}function Ai(){this.parent.call(this),this.type=ii.e_ropeJoint,this.localAnchorA=new S(-1,0),this.localAnchorB=new S(1,0),this.maxLength=0,Object.seal(this)}function bi(t){this.parent.call(this,t),this.m_localAnchorA=t.localAnchorA.Clone(),this.m_localAnchorB=t.localAnchorB.Clone(),this.m_maxLength=t.maxLength,this.m_mass=0,this.m_impulse=0,this.m_state=ii.e_inactiveLimit,this.m_length=0,this.m_indexA=0,this.m_indexB=0,this.m_u=new S,this.m_rA=new S,this.m_rB=new S,this.m_localCenterA=new S,this.m_localCenterB=new S,this.m_invMassA=0,this.m_invMassB=0,this.m_invIA=0,this.m_invIB=0}Ci.prototype={Initialize:function(t,e,n,s,o,a,c){this.bodyA=t,this.bodyB=e,this.groundAnchorA.Assign(n),this.groundAnchorB.Assign(s),this.localAnchorA.Assign(this.bodyA.GetLocalPoint(o)),this.localAnchorB.Assign(this.bodyB.GetLocalPoint(a));var h=S.Subtract(o,n);this.lengthA=h.Length();var l=S.Subtract(a,s);this.lengthB=l.Length(),this.ratio=c,i(this.ratio>r)},_deserialize:function(t,e,i){this.parent.prototype._deserialize.call(this,t,e,i),this.groundAnchorA._deserialize(t.groundAnchorA),this.groundAnchorB._deserialize(t.groundAnchorB),this.localAnchorA._deserialize(t.localAnchorA),this.localAnchorB._deserialize(t.localAnchorB),this.lengthA=t.lengthA,this.lengthB=t.lengthB,this.ratio=t.ratio}},Ci._extend(ei),Ti.prototype={GetAnchorA:function(){return this.m_bodyA.GetWorldPoint(this.m_localAnchorA)},GetAnchorB:function(){return this.m_bodyB.GetWorldPoint(this.m_localAnchorB)},GetReactionForce:function(t){var e=S.Multiply(this.m_impulse,this.m_uB);return S.Multiply(t,e)},GetReactionTorque:function(t){return 0},GetGroundAnchorA:function(){return this.m_groundAnchorA},GetGroundAnchorB:function(){return this.m_groundAnchorB},GetLengthA:function(){return this.m_lengthA},GetLengthB:function(){return this.m_lengthB},GetRatio:function(){return this.m_ratio},GetCurrentLengthA:function(){var t=this.m_bodyA.GetWorldPoint(this.m_localAnchorA),e=this.m_groundAnchorA;return S.Subtract(t,e).Length()},GetCurrentLengthB:function(){var t=this.m_bodyB.GetWorldPoint(this.m_localAnchorB),e=this.m_groundAnchorB;return S.Subtract(t,e).Length()},ShiftOrigin:function(t){this.m_groundAnchorA.Subtract(t),this.m_groundAnchorB.Subtract(t)},InitVelocityConstraints:function(t){this.m_indexA=this.m_bodyA.m_islandIndex,this.m_indexB=this.m_bodyB.m_islandIndex,this.m_localCenterA.Assign(this.m_bodyA.m_sweep.localCenter),this.m_localCenterB.Assign(this.m_bodyB.m_sweep.localCenter),this.m_invMassA=this.m_bodyA.m_invMass,this.m_invMassB=this.m_bodyB.m_invMass,this.m_invIA=this.m_bodyA.m_invI,this.m_invIB=this.m_bodyB.m_invI;var e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.velocities[this.m_indexA].v.Clone(),r=t.velocities[this.m_indexA].w,s=t.positions[this.m_indexB].c.Clone(),o=t.positions[this.m_indexB].a,a=t.velocities[this.m_indexB].v.Clone(),h=t.velocities[this.m_indexB].w,l=new R(i),u=new 
R(o);this.m_rA.Assign(j(l,S.Subtract(this.m_localAnchorA,this.m_localCenterA))),this.m_rB.Assign(j(u,S.Subtract(this.m_localAnchorB,this.m_localCenterB))),this.m_uA.Assign(S.Add(e,S.Subtract(this.m_rA,this.m_groundAnchorA))),this.m_uB.Assign(S.Add(s,S.Subtract(this.m_rB,this.m_groundAnchorB)));var _=this.m_uA.Length(),d=this.m_uB.Length();_>10*c?this.m_uA.Multiply(1/_):this.m_uA.SetZero(),d>10*c?this.m_uB.Multiply(1/d):this.m_uB.SetZero();var f=B(this.m_rA,this.m_uA),m=B(this.m_rB,this.m_uB),p=this.m_invMassA+this.m_invIA*f*f,g=this.m_invMassB+this.m_invIB*m*m;if(this.m_mass=p+this.m_ratio*this.m_ratio*g,this.m_mass>0&&(this.m_mass=1/this.m_mass),t.step.warmStarting){this.m_impulse*=t.step.dtRatio;var y=S.Multiply(-this.m_impulse,this.m_uA),v=S.Multiply(-this.m_ratio*this.m_impulse,this.m_uB);n.Add(S.Multiply(this.m_invMassA,y)),r+=this.m_invIA*B(this.m_rA,y),a.Add(S.Multiply(this.m_invMassB,v)),h+=this.m_invIB*B(this.m_rB,v)}else this.m_impulse=0;t.velocities[this.m_indexA].v.Assign(n),t.velocities[this.m_indexA].w=r,t.velocities[this.m_indexB].v.Assign(a),t.velocities[this.m_indexB].w=h},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=S.Add(e,M(i,this.m_rA)),o=S.Add(n,M(r,this.m_rB)),a=-D(this.m_uA,s)-this.m_ratio*D(this.m_uB,o),c=-this.m_mass*a;this.m_impulse+=c;var h=S.Multiply(-c,this.m_uA),l=S.Multiply(-this.m_ratio,S.Multiply(c,this.m_uB));e.Add(S.Multiply(this.m_invMassA,h)),i+=this.m_invIA*B(this.m_rA,h),n.Add(S.Multiply(this.m_invMassB,l)),r+=this.m_invIB*B(this.m_rB,l),t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){var e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.positions[this.m_indexB].c.Clone(),r=t.positions[this.m_indexB].a,s=new R(i),o=new R(r),a=j(s,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),h=j(o,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),l=S.Add(e,S.Subtract(a,this.m_groundAnchorA)),u=S.Add(n,S.Subtract(h,this.m_groundAnchorB)),_=l.Length(),d=u.Length();_>10*c?l.Multiply(1/_):l.SetZero(),d>10*c?u.Multiply(1/d):u.SetZero();var f=B(a,l),m=B(h,u),p=this.m_invMassA+this.m_invIA*f*f,g=this.m_invMassB+this.m_invIB*m*m,y=p+this.m_ratio*this.m_ratio*g;y>0&&(y=1/y);var v=this.m_constant-_-this.m_ratio*d,x=Z(v),C=-y*v,T=S.Multiply(-C,l),A=S.Multiply(-this.m_ratio,S.Multiply(C,u));return e.Add(S.Multiply(this.m_invMassA,T)),i+=this.m_invIA*B(a,T),n.Add(S.Multiply(this.m_invMassB,A)),r+=this.m_invIB*B(h,A),t.positions[this.m_indexA].c.Assign(e),t.positions[this.m_indexA].a=i,t.positions[this.m_indexB].c.Assign(n),t.positions[this.m_indexB].a=r,x0?ii.e_atUpperLimit:ii.e_inactiveLimit,!(this.m_length>c))return this.m_u.SetZero(),this.m_mass=0,void(this.m_impulse=0);this.m_u.Multiply(1/this.m_length);var d=B(this.m_rA,this.m_u),f=B(this.m_rB,this.m_u),m=this.m_invMassA+this.m_invIA*d*d+this.m_invMassB+this.m_invIB*f*f;if(this.m_mass=0!=m?1/m:0,t.step.warmStarting){this.m_impulse*=t.step.dtRatio;var p=S.Multiply(this.m_impulse,this.m_u);n.Subtract(S.Multiply(this.m_invMassA,p)),r-=this.m_invIA*B(this.m_rA,p),a.Add(S.Multiply(this.m_invMassB,p)),h+=this.m_invIB*B(this.m_rB,p)}else 
this.m_impulse=0;t.velocities[this.m_indexA].v.Assign(n),t.velocities[this.m_indexA].w=r,t.velocities[this.m_indexB].v.Assign(a),t.velocities[this.m_indexB].w=h},SolveVelocityConstraints:function(t){var e=t.velocities[this.m_indexA].v.Clone(),i=t.velocities[this.m_indexA].w,n=t.velocities[this.m_indexB].v.Clone(),r=t.velocities[this.m_indexB].w,s=S.Add(e,M(i,this.m_rA)),o=S.Add(n,M(r,this.m_rB)),a=this.m_length-this.m_maxLength,c=D(this.m_u,S.Subtract(o,s));a<0&&(c+=t.step.inv_dt*a);var h=-this.m_mass*c,l=this.m_impulse;this.m_impulse=Q(0,this.m_impulse+h),h=this.m_impulse-l;var u=S.Multiply(h,this.m_u);e.Subtract(S.Multiply(this.m_invMassA,u)),i-=this.m_invIA*B(this.m_rA,u),n.Add(S.Multiply(this.m_invMassB,u)),r+=this.m_invIB*B(this.m_rB,u),t.velocities[this.m_indexA].v.Assign(e),t.velocities[this.m_indexA].w=i,t.velocities[this.m_indexB].v.Assign(n),t.velocities[this.m_indexB].w=r},SolvePositionConstraints:function(t){var e=t.positions[this.m_indexA].c.Clone(),i=t.positions[this.m_indexA].a,n=t.positions[this.m_indexB].c.Clone(),r=t.positions[this.m_indexB].a,s=new R(i),o=new R(r),a=j(s,S.Subtract(this.m_localAnchorA,this.m_localCenterA)),h=j(o,S.Subtract(this.m_localAnchorB,this.m_localCenterB)),l=S.Subtract(S.Subtract(S.Add(n,h),e),a),u=l.Normalize(),_=u-this.m_maxLength;_=it(_,0,.2);var d=-this.m_mass*_,f=S.Multiply(d,l);return e.Subtract(S.Multiply(this.m_invMassA,f)),i-=this.m_invIA*B(a,f),n.Add(S.Multiply(this.m_invMassB,f)),r+=this.m_invIB*B(h,f),t.positions[this.m_indexA].c.Assign(e),t.positions[this.m_indexA].a=i,t.positions[this.m_indexB].c.Assign(n),t.positions[this.m_indexB].a=r,u-this.m_maxLength=3),this.m_count=t.count,this.m_ps=new Array(this.m_count),this.m_p0s=new Array(this.m_count),this.m_vs=new Array(this.m_count),this.m_ims=new Array(this.m_count);for(var e=0;e0?1/n:0}var r=this.m_count-1,s=this.m_count-2;this.m_Ls=new Array(r),this.m_as=new Array(s);for(e=0;e0&&this.m_vs[n].Add(S.Multiply(t,this.m_gravity)),this.m_vs[n].Multiply(i),this.m_ps[n].Add(S.Multiply(t,this.m_vs[n]));for(n=0;ns;)T=(m-=2*s)-this.m_as[e];for(;T<-s;)T=(m+=2*s)-this.m_as[e];var A=-this.m_k3*C*T;i.Add(S.Multiply(o*A,y)),n.Add(S.Multiply(a*A,v)),r.Add(S.Multiply(c*A,x))}}}}};var wi={serialize:function(t){var e,i,n,r,s,o=[];for(n=t.GetBodyList();n;n=n.GetNext())for(r=n.GetFixtureList();r;r=r.GetNext())s=r.GetShape(),r.__temp_shape_id=o.length,o.push(s._serialize());var a=[];for(n=t.GetBodyList();n;n=n.GetNext())for(n.__temp_fixture_ids=[],r=n.GetFixtureList();r;r=r.GetNext())(i=r._serialize()).shape=r.__temp_shape_id,delete r.__temp_shape_id,n.__temp_fixture_ids.push(a.length),a.push(i);var c=[];for(n=t.GetBodyList();n;n=n.GetNext()){for((i=n._serialize()).fixtures=[],e=0;e>1,t|=t>>2,t|=t>>4,t|=t>>8,1+(t|=t>>16)}},{trimmed:"Abs_v2",name:"b2Abs_v2",def:K},{trimmed:"Abs_m22",name:"b2Abs_m22",def:function(t){return new w(K(t.ex),K(t.ey))}},{trimmed:"IsPowerOfTwo",name:"b2IsPowerOfTwo",def:function(t){return t>0&&0==(t&t-1)}},{trimmed:"RandomFloat",name:"b2RandomFloat",def:function(t,e){var i=Math.random();return i=void 0!==t?(e-t)*i+t:2*i-1}},{trimmed:"Timer",name:"b2Timer",def:st},{trimmed:"Color",name:"b2Color",def:nt},{trimmed:"Draw",name:"b2Draw",def:rt},{trimmed:"ContactID",name:"b2ContactID",def:St},{trimmed:"ManifoldPoint",name:"b2ManifoldPoint",def:Et},{trimmed:"Manifold",name:"b2Manifold",def:wt},{trimmed:"WorldManifold",name:"b2WorldManifold",def:It},{trimmed:"GetPointStates",name:"b2GetPointStates",def:function(t,e,i,n){for(var r=0;rthis._max&&(this._max=this._current);var 
s=Math.round((1-this._current/this._max)*r);this._ctx.globalAlpha=1,this._ctx.drawImage(this._canvas,i,0,n-i,r,0,0,n-i,r),e?(this._ctx.fillStyle="#444",this._ctx.fillRect(n-i,0,i,r),this._ctx.fillStyle="#b70000",this._ctx.fillRect(n-i,s,i,r-s),this._ctx.globalAlpha=.5,this._ctx.fillStyle="#fff",this._ctx.fillRect(n-i,s,i,i)):(this._ctx.fillStyle="#444",this._ctx.fillRect(n-i,0,i,r),this._ctx.fillStyle=this._color,this._ctx.fillRect(n-i,s,i,r-s),this._ctx.globalAlpha=.5,this._ctx.fillStyle="#fff",this._ctx.fillRect(n-i,s,i,i))},e})(e),r=Math.round(window.devicePixelRatio||1),s=(function(t){function e(e,i){t.call(this,e,i),this._threshold=0,this._canvas2=document.createElement("canvas"),this._ctx2=this._canvas2.getContext("2d")}return t&&(e.__proto__=t),e.prototype=Object.create(t&&t.prototype),e.prototype.constructor=e,e.prototype.init=function(e,i){t.prototype.init.call(this,e,i);var n=e*r,s=i*r;this._canvas2.width=n,this._canvas2.height=s,this._canvas2.style.width=e+"px",this._canvas2.style.height=i+"px",this._ctx2.globalAlpha=1,this._ctx2.fillStyle="#444",this._ctx2.fillRect(0,0,n,s)},e.prototype.draw=function(t,e){var i=this._canvas.width,n=this._canvas.height;if(this._ctx.globalAlpha=1,this._ctx2.globalAlpha=1,t>this._threshold){var s=n*((t-t%n)/n+1),o=this._threshold;this._threshold=s;var a=o/s;this._ctx2.drawImage(this._canvas,0,0),this._ctx.fillStyle="#444",this._ctx.fillRect(0,0,i,n),this._ctx.drawImage(this._canvas2,r,0,i-r,n,0,Math.round((1-a)*n),i-r,n)}else this._ctx.drawImage(this._canvas,r,0,i-r,n,0,0,i-r,n);var c=Math.round(n*(1-t/this._threshold));e?(this._ctx.fillStyle="#444",this._ctx.fillRect(i-r,0,r,n),this._ctx.fillStyle="#b70000",this._ctx.fillRect(i-r,c,r,n-c),this._ctx.globalAlpha=.5,this._ctx.fillStyle="#fff",this._ctx.fillRect(i-r,c,r,r)):(this._ctx.fillStyle="#444",this._ctx.fillRect(i-r,0,r,n),this._ctx.fillStyle=this._color,this._ctx.fillRect(i-r,c,r,n-c),this._ctx.globalAlpha=.5,this._ctx.fillStyle="#fff",this._ctx.fillRect(i-r,c,r,r))},e})(e),o=Math.round(window.devicePixelRatio||1),a=(function(t){function e(e,i,n,r){t.call(this,e,i),this._min=n,this._max=r}return t&&(e.__proto__=t),e.prototype=Object.create(t&&t.prototype),e.prototype.constructor=e,e.prototype.draw=function(t,e){var i=this._canvas.width,n=this._canvas.height,r=(t-this._min)/(this._max-this._min),s=Math.round((1-r)*n);this._ctx.globalAlpha=1,this._ctx.drawImage(this._canvas,o,0,i-o,n,0,0,i-o,n),e?(this._ctx.fillStyle="#444",this._ctx.fillRect(i-o,0,o,n),this._ctx.fillStyle="#b70000",this._ctx.fillRect(i-o,s,o,n-s),this._ctx.globalAlpha=.5,this._ctx.fillStyle="#fff",this._ctx.fillRect(i-o,s,o,o)):(this._ctx.fillStyle="#444",this._ctx.fillRect(i-o,0,o,n),this._ctx.fillStyle=this._color,this._ctx.fillRect(i-o,s,o,n-s),this._ctx.globalAlpha=.5,this._ctx.fillStyle="#fff",this._ctx.fillRect(i-o,s,o,o))},e})(e),c=Math.round(window.devicePixelRatio||1),h=function(t,e){this._colors=e,this._canvas=document.createElement("canvas"),this._ctx=this._canvas.getContext("2d"),this._canvas.className="pstats-canvas",t.appendChild(this._canvas)};h.prototype.init=function(t,e,i){var n=t*c,r=e*c;this._canvas.width=n,this._canvas.height=r*i,this._canvas.style.width=t+"px",this._canvas.style.height=e*i+"px",this._ctx.globalAlpha=1,this._ctx.fillStyle="#444",this._ctx.fillRect(0,0,n,r*i)},h.prototype.draw=function(t){var e=this,i=this._canvas.width,n=this._canvas.height;this._ctx.globalAlpha=1,this._ctx.drawImage(this._canvas,c,0,i-c,n,0,0,i-c,n);for(var 
r=0,s=0;s=this._opts.average&&(this._averageValue=this._accumValue/this._accumSamples,this._accumValue=0,this._accumStart=e,this._accumSamples=0)}},u.value.get=function(){return this._value},u.value.set=function(t){this._value=t},l.prototype.sample=function(){this._average(this._value)},l.prototype.human=function(){var t=this._opts.average?this._averageValue:this._value;return Math.round(100*t)/100},l.prototype.alarm=function(){return this._opts.below&&this._valuethis._opts.over},Object.defineProperties(l.prototype,u);var _=(function(t){function e(e,i){t.call(this,e,i),this._time=window.performance.now()}return t&&(e.__proto__=t),e.prototype=Object.create(t&&t.prototype),e.prototype.constructor=e,e.prototype.start=function(){this._time=window.performance.now()},e.prototype.end=function(){this._value=window.performance.now()-this._time,this._average(this._value)},e.prototype.tick=function(){this.end(),this.start()},e.prototype.frame=function(){var t=window.performance.now(),e=t-this._time;this._total++,e>(this._opts.average||1e3)&&(this._value=1e3*this._total/e,this._total=0,this._time=t,this._average(this._value))},e})(l),d=Math.log(1024),f=["Bytes","KB","MB","GB","TB"];var m=(function(t){function e(e,i,n){t.call(this,i,n),this._stats=e,this._start=0,0===n.extension.indexOf("memory.")&&(this._field=n.extension.substring(7))}return t&&(e.__proto__=t),e.prototype=Object.create(t&&t.prototype),e.prototype.constructor=e,e.prototype.snapshot=function(){this._value=this._stats[this._field]},e.prototype.start=function(){this._start=this._stats[this._field]},e.prototype.end=function(){this._value=this._stats[this._field]-this._start},e.prototype.human=function(){return (function(t){var e=Math.floor(Math.log(t)/d);return 0===t?"n/a":Math.round(100*t/Math.pow(1024,e))/100+" "+f[e]})(t.prototype.human.call(this))},e})(l),p=function(){0===window.performance.memory.totalJSHeapSize&&console.warn("totalJSHeapSize === 0, performance.memory is only available in Chrome."),this._used=0,this._total=0,this._lastUsed=0},g={alarm:{},used:{},total:{}};p.prototype.tick=function(){this._lastUsed=this._used,this._used=window.performance.memory.usedJSHeapSize,this._total=window.performance.memory.totalJSHeapSize},g.alarm.get=function(){return this._used-this._lastUsed<0},g.used.get=function(){return window.performance.memory.usedJSHeapSize},g.total.get=function(){return this._total},p.prototype.counter=function(t,e){return new m(this,t,e)},Object.defineProperties(p.prototype,g);var y={memory:p},v=document.createElement("style");v.type="text/css",v.textContent="\n .pstats {\n position: fixed;\n z-index: 9999;\n\n padding: 5px;\n width: 250px;\n right: 5px;\n bottom: 5px;\n\n font-size: 10px;\n font-family: 'Roboto Condensed', tahoma, sans-serif;\n overflow: hidden;\n user-select: none;\n cursor: default;\n\n background: #222;\n border-radius: 3px;\n }\n\n .pstats-container {\n display: block;\n position: relative;\n color: #888;\n white-space: nowrap;\n }\n\n .pstats-item {\n position: absolute;\n width: 250px;\n height: 12px;\n left: 0px;\n }\n\n .pstats-label {\n position: absolute;\n width: 150px;\n height: 12px;\n text-align: left;\n transition: background 0.3s;\n }\n\n .pstats-label.alarm {\n color: #ccc;\n background: #800;\n\n transition: background 0s;\n }\n\n .pstats-counter-id {\n position: absolute;\n width: 90px;\n left: 0px;\n }\n\n .pstats-counter-value {\n position: absolute;\n width: 60px;\n left: 90px;\n text-align: right;\n }\n\n .pstats-canvas {\n display: block;\n position: absolute;\n right: 
0px;\n top: 1px;\n }\n\n .pstats-fraction {\n position: absolute;\n width: 250px;\n left: 0px;\n }\n\n .pstats-legend {\n position: absolute;\n width: 150px;\n\n text-align: right;\n }\n\n .pstats-legend > span {\n position: absolute;\n right: 0px;\n }\n",document.head.appendChild(v);var x=function(t,e){var i=this;if(e=e||{},this._showGraph=void 0===e.showGraph||e._showGraph,this._values=e.values||{},this._fractions=e.fractions||[],this._id2counter={},this._id2item={},this._name2extStats={},e.css){var r=document.createElement("style");r.type="text/css",r.textContent=e.css,document.head.appendChild(r)}if(e.extensions)for(var o=0;o0),o.valueText.nodeValue=s,t._showGraph&&o.graph.draw(n.value,r)}}if(this._showGraph)for(var a=0;a0?1:-1}),Number.isInteger||(Number.isInteger=function(t){return"number"==typeof t&&isFinite(t)&&Math.floor(t)===t}),!console.time){var n=window.performance||Date,r=Object.create(null);console.time=function(t){r[t]=n.now()},console.timeEnd=function(t){var e=r[t],i=n.now()-e;console.log(t+": "+i+"ms")}}}),{}],317:[(function(t,e,i){String.prototype.startsWith||(String.prototype.startsWith=function(t,e){return e=e||0,this.lastIndexOf(t,e)===e}),String.prototype.endsWith||(String.prototype.endsWith=function(t,e){(void 0===e||e>this.length)&&(e=this.length),e-=t.length;var i=this.indexOf(t,e);return-1!==i&&i===e})}),{}],318:[(function(t,e,i){var n=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(t,e){t.__proto__=e}||function(t,e){for(var i in e)e.hasOwnProperty(i)&&(t[i]=e[i])};window.__extends=function(t,e){function i(){this.constructor=t}n(t,e),t.prototype=null===e?Object.create(e):(i.prototype=e.prototype,new i)},window.__assign=Object.assign||function(t){for(var e,i=1,n=arguments.length;i=0;a--)(r=t[a])&&(o=(s<3?r(o):s>3?r(e,i,o):r(e,i))||o);return s>3&&o&&Object.defineProperty(e,i,o),o},window.__param=function(t,e){return function(i,n){e(i,n,t)}},window.__metadata=function(t,e){if("object"==typeof Reflect&&"function"==typeof Reflect.metadata)return Reflect.metadata(t,e)},window.__awaiter=function(t,e,i,n){return new(i||(i=Promise))(function(r,s){function o(t){try{c(n.next(t))}catch(t){s(t)}}function a(t){try{c(n.throw(t))}catch(t){s(t)}}function c(t){t.done?r(t.value):new i(function(e){e(t.value)}).then(o,a)}c((n=n.apply(t,e||[])).next())})},window.__generator=function(t,e){var i,n,r,s,o={label:0,sent:function(){if(1&r[0])throw r[1];return r[1]},trys:[],ops:[]};return s={next:a(0),throw:a(1),return:a(2)},"function"==typeof Symbol&&(s[Symbol.iterator]=function(){return this}),s;function a(s){return function(a){return (function(s){if(i)throw new TypeError("Generator is already executing.");for(;o;)try{if(i=1,n&&(r=n[2&s[0]?"return":s[0]?"throw":"next"])&&!(r=r.call(n,s[1])).done)return r;switch(n=0,r&&(s=[0,r.value]),s[0]){case 0:case 1:r=s;break;case 4:return o.label++,{value:s[1],done:!1};case 5:o.label++,n=s[1],s=[0];continue;case 7:s=o.ops.pop(),o.trys.pop();continue;default:if(!(r=(r=o.trys).length>0&&r[r.length-1])&&(6===s[0]||2===s[0])){o=0;continue}if(3===s[0]&&(!r||s[1]>r[0]&&s[1]=t.length&&(t=void 0),{value:t&&t[i++],done:!t}}}},window.__read=function(t,e){var i="function"==typeof Symbol&&t[Symbol.iterator];if(!i)return t;var n,r,s=i.call(t),o=[];try{for(;(void 0===e||e-- >0)&&!(n=s.next()).done;)o.push(n.value)}catch(t){r={error:t}}finally{try{n&&!n.done&&(i=s.return)&&i.call(s)}finally{if(r)throw r.error}}return o},window.__spread=function(){for(var t=[],e=0;e1||a(t,e)})})}function a(t,e){try{(function(t){t.value instanceof 
__await?Promise.resolve(t.value.v).then(c,h):l(s[0][2],t)})(r[t](e))}catch(t){l(s[0][3],t)}}function c(t){a("next",t)}function h(t){a("throw",t)}function l(t,e){t(e),s.shift(),s.length&&a(s[0][0],s[0][1])}},window.__asyncDelegator=function(t){var e,i;return e={},n("next"),n("throw",(function(t){throw t})),n("return"),e[Symbol.iterator]=function(){return this},e;function n(n,r){t[n]&&(e[n]=function(e){return(i=!i)?{value:__await(t[n](e)),done:"return"===n}:r?r(e):e})}},window.__asyncValues=function(t){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var e=t[Symbol.asyncIterator];return e?e.call(t):"function"==typeof __values?__values(t):t[Symbol.iterator]()}}),{}],319:[(function(t,e,i){var n="undefined"==typeof window?global:window;function r(t,e){void 0===n[t]&&(n[t]=e)}function s(t){return"object"==typeof n[t]}r("CC_TEST",s("tap")||s("QUnit")),r("CC_EDITOR",s("Editor")&&s("process")&&"electron"in process.versions),r("CC_PREVIEW",!0),r("CC_DEV",!0),r("CC_DEBUG",!0),r("CC_JSB",s("jsb")),r("CC_BUILD",!1),r("CC_WECHATGAME",s("wx")&&wx.getSystemInfoSync),r("CC_QQPLAY",s("bk")),r("CC_SUPPORT_JIT",!0);n.CocosEngine=cc.ENGINE_VERSION="1.10.1"}),{}]},{},[314]); \ No newline at end of file diff --git a/spaces/HighCWu/starganv2vc-paddle/starganv2vc_paddle/Utils/JDC/__init__.py b/spaces/HighCWu/starganv2vc-paddle/starganv2vc_paddle/Utils/JDC/__init__.py deleted file mode 100644 index 8b137891791fe96927ad78e64b0aad7bded08bdc..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/starganv2vc-paddle/starganv2vc_paddle/Utils/JDC/__init__.py +++ /dev/null @@ -1 +0,0 @@ - diff --git a/spaces/HopeMan/DoomGuy/greeting.md b/spaces/HopeMan/DoomGuy/greeting.md deleted file mode 100644 index c7ef7302d76b18b6c24c0d6af12dfeeafab7e6c0..0000000000000000000000000000000000000000 --- a/spaces/HopeMan/DoomGuy/greeting.md +++ /dev/null @@ -1,4 +0,0 @@ - ---- - -No rugpulls. 
\ No newline at end of file diff --git a/spaces/Huertas97/Inpaint_Me/app_inpaint.py b/spaces/Huertas97/Inpaint_Me/app_inpaint.py deleted file mode 100644 index d04be3c0bb22f0df06352f917df45c9f2acfc345..0000000000000000000000000000000000000000 --- a/spaces/Huertas97/Inpaint_Me/app_inpaint.py +++ /dev/null @@ -1,198 +0,0 @@ -import streamlit as st -import time -import easyocr -import math -from pathlib import Path -from PIL import Image, ImageDraw -import PIL -import io -import os -import cv2 -import numpy as np -import shutil -import base64 -import logging - - -@st.cache(show_spinner=False, allow_output_mutation=True, suppress_st_warning=True) -def load_models(): - #specify shortform of language you want to extract, - # I am using Spanish(es) and English(en) here by list of language ids - reader = easyocr.Reader(['en'],) - return reader - -reader = load_models() - -def midpoint(x1, y1, x2, y2): - x_mid = int((x1 + x2)/2) - y_mid = int((y1 + y2)/2) - return (x_mid, y_mid) - -def inpaint_text(img, text_coordinates): - # read image - - # img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - # generate (word, box) tuples - mask = np.zeros(img.shape[:2], dtype="uint8") - for box in text_coordinates: - x0, y0 = box[0] - x1, y1 = box[1] - x2, y2 = box[2] - x3, y3 = box[3] - - x_mid0, y_mid0 = midpoint(x1, y1, x2, y2) - x_mid1, y_mi1 = midpoint(x0, y0, x3, y3) - - thickness = int(math.sqrt( (x2 - x1)**2 + (y2 - y1)**2 )) - - cv2.line(mask, (x_mid0, y_mid0), (x_mid1, y_mi1), 255, - thickness) - img = cv2.inpaint(img, mask, 7, cv2.INPAINT_NS) - - return(img) - - -def file_selector(folder_path='.'): - filenames = os.listdir(folder_path) - selected_filename = st.selectbox('Select a file', filenames) - return os.path.join(folder_path, selected_filename), selected_filename - - - - - - - - -st.set_page_config( - page_title="Inpaint Me", - page_icon=":art:", - layout="wide", - initial_sidebar_state="expanded", - menu_items={ - 'Get Help': 'https://www.extremelycoolapp.com/help', - 'Report a bug': "https://www.extremelycoolapp.com/bug", - 'About': "# This is a header. This is an *extremely* cool app!" - } - ) - - -st.markdown( - """ - - """, - unsafe_allow_html=True -) - -LOGO_IMAGE = "inpaint_me_logo.png" - -col1, col2= st.columns([2, 2]) -with col1: - # st.image('./aida_logo.png') - st.markdown( - f""" - - """, - unsafe_allow_html=True - ) - -with col2: - # st.image('./aida_logo.png') - st.markdown( - f""" - - """, - unsafe_allow_html=True - ) - - -st.header("") -with st.expander("Project Description", expanded=False): - st.write(""" - Developed in Applied Intelligence and Data Analysis ([AI+DA](http://aida.etsisi.upm.es/)) group at Polytech University of Madrid (UPM). - - To rule out the possibility of text misleading image Deep Learning models (e.g., CNNs) it is useful to remove text from images. Hence, - this tool uses [EasyOCR](https://github.com/JaidedAI/EasyOCR) and [OpenCV](https://pypi.org/project/opencv-python/) for detecting texts and inpainting them. Currently, only `JPG` files are supported. This tools has been tested on memes, feel free to try some examples or upload your own images. 
- """) - - - -file_example_path = None -if st.checkbox('Select a example'): - folder_path = './Examples/' - # if st.checkbox('Change directory'): - # folder_path = st.text_input('Enter folder path', '.') - file_example_path, example_file_name = file_selector(folder_path=folder_path) - st.write('You selected `%s`' % file_example_path) - - -uploaded_file = st.file_uploader(label="Upload image", - type=["jpg", "jpeg"], - accept_multiple_files=False, - key=None, - help=None, - on_change=None, - args=None, - kwargs=None, -) - - - -col1, col2, col3 = st.columns([2, 0.5, 2]) - - - -if file_example_path and not uploaded_file: - with col1: - st.subheader("Original") - # st.write(f"./Examples_inpainted/{example_file_name.strip(".jpg")}_inpainted.jpeg") - img = Image.open( file_example_path ) - st.image(img, caption=None, width=None, use_column_width=None, clamp=False, channels="RGB", output_format="auto") - - with col3: - st.subheader("Inpainted") - with st.spinner('Wait for it...'): - time.sleep(1) - example_file_name = example_file_name.strip(".jpg") - inpaint_image = f"./Examples_inpainted/{example_file_name}_inpaint.jpeg" - # img_array = np.array(Image.open( file_example_path )) - # # detect text - # bounds = reader.readtext(img_array, detail=1) #detail=1 # [(coordinates, detected text, confidence threshold)] - # text_coordinates = [ bound[0] for bound in bounds] - # # inpaint text coordinates - # inpaint_image = inpaint_text(img_array, text_coordinates) - st.image(inpaint_image, caption=None, width=None, use_column_width=None, clamp=False, channels="RGB", output_format="auto") - -if uploaded_file: - with col1: - st.subheader("Original") - st.image(uploaded_file, caption=None, width=None, use_column_width=None, clamp=False, channels="RGB", output_format="auto") - - with col3: - st.subheader("Inpainted") - with st.spinner('Wait for it...'): - # Transform loaded file to bytes - bytes_data = uploaded_file.getvalue() - # bytes to numpy array - img_array = np.array(Image.open(io.BytesIO(bytes_data))) - # detect text - bounds = reader.readtext(img_array, detail=1) #detail=1 # [(coordinates, detected text, confidence threshold)] - text_coordinates = [ bound[0] for bound in bounds] - # inpaint text coordinates - inpaint_image = inpaint_text(img_array, text_coordinates) - st.image(inpaint_image, caption=None, width=None, use_column_width=None, clamp=False, channels="RGB", output_format="auto") \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/multilingual/finetune_multilingual_model.sh b/spaces/ICML2022/OFA/fairseq/examples/multilingual/finetune_multilingual_model.sh deleted file mode 100644 index 25960c5dc8a02e5580b61837099770a082b4dd83..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/multilingual/finetune_multilingual_model.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -path_2_data=$1 # which contains binarized data for each directions -lang_list=$2 # -lang_pairs=$3 #a list language pairs to train multilingual models, e.g. 
"en-fr,en-cs,fr-en,cs-en" -# pretrained can be an mBART pretrained model as well -pretrained_model=$4 # - - -fairseq-train "$path_2_data" \ - --encoder-normalize-before --decoder-normalize-before \ - --arch transformer --layernorm-embedding \ - --task translation_multi_simple_epoch \ - --finetune-from-model "$pretrained_model" \ - --sampling-method "temperature" \ - --sampling-temperature "1.5" \ - --encoder-langtok "src" \ - --decoder-langtok \ - --lang-dict "$lang_list" \ - --lang-pairs "$lang_pairs" \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \ - --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \ - --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \ - --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \ - --max-tokens 1024 --update-freq 2 \ - --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \ - --seed 222 --log-format simple --log-interval 2 diff --git a/spaces/IISRFactCheck/claim_detection/code/models.py b/spaces/IISRFactCheck/claim_detection/code/models.py deleted file mode 100644 index 7acce615f9646915a9d7d0379f76da8fc40e0111..0000000000000000000000000000000000000000 --- a/spaces/IISRFactCheck/claim_detection/code/models.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.nn import functional, CrossEntropyLoss, Softmax -from torchcrf import CRF -from transformers import RobertaModel, BertModel - -from args import args, config -class Model_Crf(torch.nn.Module): - def __init__(self, config): - super(Model_Crf, self).__init__() - self.bert = BertModel.from_pretrained(args.pre_model_name) - self.dropout = torch.nn.Dropout(config.hidden_dropout_prob) - self.classifier = torch.nn.Linear(config.hidden_size, args.label_size) - self.crf = CRF(num_tags=args.label_size, batch_first=True) - - def forward(self, input_ids, token_type_ids=None, attention_mask=None, context_mask=None, labels=None, span_labels=None, start_positions=None, end_positions=None, testing=False, crf_mask=None): - outputs =self.bert(input_ids = input_ids,attention_mask=attention_mask,token_type_ids=token_type_ids) - sequence_output = outputs[0] - sequence_output = self.dropout(sequence_output) - sequence_output = sequence_output[:,1:-1,:] #remove [CLS], [SEP] - logits = self.classifier(sequence_output)#[batch, max_len, label_size] - outputs = (logits,) - if labels is not None: - #print('logits = ', logits.size()) - #print('labels = ', labels.size()) - #print('crf_mask = ', crf_mask.size()) - loss = self.crf(emissions = logits, tags=labels, mask = crf_mask, reduction="mean") - outputs =(-1*loss,)+outputs - return outputs - -class Model_Softmax(torch.nn.Module): - def __init__(self, config): - super(Model_Softmax, self).__init__() - self.bert = BertModel.from_pretrained(args.pre_model_name) - self.dropout = torch.nn.Dropout(config.hidden_dropout_prob) - self.classifier = torch.nn.Linear(config.hidden_size, args.label_size) - self.loss_calculater = CrossEntropyLoss() - self.softmax = Softmax(dim=-1) - - def forward(self, input_ids, token_type_ids=None, attention_mask=None, context_mask=None, labels=None, span_labels=None, start_positions=None, end_positions=None, testing=False, crf_mask=None): - outputs =self.bert(input_ids = input_ids,attention_mask=attention_mask,token_type_ids=token_type_ids) - sequence_output = outputs[0] - sequence_output = self.dropout(sequence_output) - sequence_output = sequence_output[:,1:-1,:] #remove [CLS], [SEP] - logits = self.classifier(sequence_output)#[batch, max_len, 
label_size] - logits = self.softmax(logits) - outputs = (logits,) - if labels is not None: - #print('logits = ', logits.size()) - #print('labels = ', labels.size()) - labels = functional.one_hot(labels, num_classes=args.label_size).float() - loss = self.loss_calculater(logits, labels) - outputs =(loss,)+outputs - return outputs \ No newline at end of file diff --git a/spaces/InstaDeepAI/nucleotide_transformer_benchmark/README.md b/spaces/InstaDeepAI/nucleotide_transformer_benchmark/README.md deleted file mode 100644 index 167f2a4e9497a9a53553d791980d1e227e5d8b4f..0000000000000000000000000000000000000000 --- a/spaces/InstaDeepAI/nucleotide_transformer_benchmark/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Nucleotide Transformer Benchmark -emoji: 🏆 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -models: ["InstaDeepAI/nucleotide-transformer-500m-human-ref", "InstaDeepAI/nucleotide-transformer-500m-1000g", "InstaDeepAI/nucleotide-transformer-2.5b-1000g", "InstaDeepAI/nucleotide-transformer-2.5b-multi-species", "InstaDeepAI/nucleotide-transformer-v2-50m-multi-species", "InstaDeepAI/nucleotide-transformer-v2-100m-multi-species", "InstaDeepAI/nucleotide-transformer-v2-250m-multi-species", "InstaDeepAI/nucleotide-transformer-v2-500m-multi-species"] -datasets: ["InstaDeepAI/nucleotide_transformer_downstream_tasks"] ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/eval/webpage/script.js b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/eval/webpage/script.js deleted file mode 100644 index 4b71e3d5618a262e4746f58e5d10947b73370dca..0000000000000000000000000000000000000000 --- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/eval/webpage/script.js +++ /dev/null @@ -1,245 +0,0 @@ -// Description: Script for the evaluation webpage. - -let currentQuestionIndex = 1; - -// Store the model name mapping for later use. -modelNameMapping = { - "gpt35": "ChatGPT-3.5", - "gpt4": "GPT-4", - "alpaca": "Alpaca-13b", - "vicuna": "Vicuna-13b", - "llama": "LLaMA-13b", - "bard": "Bard", -}; - -modelFigureMapping = { - "vicuna": "figures/vicuna.jpeg", - // Image from: https://commons.wikimedia.org/wiki/File:ChatGPT_logo.svg - "gpt35": "figures/chatgpt.svg", - // Image from: https://www.reddit.com/r/logodesign/comments/1128aat/google_ai_bard_logo_design/ - "bard": "figures/bard.jpg", - // Image from: https://crfm.stanford.edu/2023/03/13/alpaca.html - "alpaca": "figures/alpaca.png", - // Image adapted from https://commons.wikimedia.org/wiki/File:Llama_on_Machu_Picchu.jpg - "llama": "figures/llama.jpg", -} - -// Store the question data in a mapping for later use. -questionMapping = {}; -// Store the question ids in a mapping for later use. -categoryMapping = {}; -// Store the number of questions for later use. -questionsCount = 0; - - -function text2Markdown(text) { - // Normalize the text for markdown rendering. - text = text.trim().replaceAll('\n\n', '\n').replaceAll('\n', '\n\n'); - return marked.parse(text); -} - -function capitalizeFirstChar(str) { - if (!str || str.length === 0) { - return str; - } - return str.charAt(0).toUpperCase() + str.slice(1); -} - -function updateQuestionSelect(question_id) { - const select = document.getElementById('question-select'); - // Clear the question select. - select.innerHTML = ''; - // Populate the question select. 
- category = questionMapping[question_id].category; - categoryMapping[category].forEach(question_id => { - const question = questionMapping[question_id]; - const option = document.createElement('option'); - option.value = question_id; - option.textContent = 'Q' + question_id.toString() + ': ' + question.question; - select.appendChild(option); - }); - select.value = question_id; -} - -function updateModelSelect() { - const select = document.getElementById('model-select'); - img_path = modelFigureMapping[select.value]; - document.getElementById('other-model-figure').src = img_path; -} - -function populateModels(models) { - const select = document.getElementById('model-select'); - models.forEach(model => { - const option = document.createElement('option'); - option.value = model; - option.textContent = modelNameMapping[model]; - select.appendChild(option); - }); - updateModelSelect(); -} - -function populateQuestions(questions) { - const category_select = document.getElementById('category-select'); - - questionsCount = questions.length; - questions.forEach(question => { - const option = document.createElement('option'); - // Store the question data in a mapping for later use. - questionMapping[question.id] = { - category: question.category, - question: question.question, - answers: question.answers, - evaluations: question.evaluations, - scores: question.scores, - }; - // Store the question id in the category mapping. - if (question.category in categoryMapping) { - categoryMapping[question.category].push(question.id); - } else { - categoryMapping[question.category] = [question.id]; - const category_option = document.createElement('option'); - category_option.value = question.category; - category_option.textContent = capitalizeFirstChar(question.category); - category_select.appendChild(category_option); - } - }); - // Set the default category. - updateQuestionSelect(currentQuestionIndex); -} - -function displayQuestion(index) { - const question = questionMapping[index].question; - document.getElementById('selected-question').innerHTML = text2Markdown('**Question:** ' + question); - displayAnswers(index); -} - -function displayAnswers(index) { - const question = questionMapping[index]; - const otherModel = document.getElementById('model-select').value; - // render the answers with markdown - document.getElementById('other-model-answer').innerHTML = text2Markdown(question.answers[otherModel]); - document.getElementById('our-model-answer').innerHTML = text2Markdown(question.answers.vicuna); - - // Display evaluation - score = question.scores[otherModel]; - score_text = modelNameMapping[otherModel] + " " + score[0] + "/10, Vicuna-13b " + score[1] + "/10"; - document.getElementById('evaluation-header').textContent = "GPT-4 Evaluation" + " (Score: " + score_text + ")"; - document.getElementById('evaluation-result').innerHTML = text2Markdown(question.evaluations[otherModel]); - - // Update model names - let assistant1_title = "Assistant #1"; // (" + modelNameMapping[otherModel] + ")"; - let assistant2_title = "Assistant #2 (Vicuna-13b, our model)"; - // Update scores/labels. - let assistant1_score_label = score[0].toString() + '/10'; - let assistant2_score_label = score[1].toString() + '/10'; - - const colorRed ='#fa9'; // '#eb978d'; - // const colorGreen = '#c9f2c9'; - const colorBlue = '#8ef'; // '#71dbf9'; - const colorYellow = '#fe7'; // '#fada57'; - let otherModelHeaderColor = ''; - let ourModelHeaderColor = ''; - // Update the winner. 
- if (score[0] == score[1]) { - assistant1_title = '🏆 ' + assistant1_title; - assistant1_score_label = '🏆 ' + assistant1_score_label; - assistant2_title = '🏆 ' + assistant2_title; - assistant2_score_label = '🏆 ' + assistant2_score_label; - otherModelHeaderColor = colorYellow; - ourModelHeaderColor = colorYellow; - } else if (score[0] > score[1]) { - assistant1_title = '🏆 ' + assistant1_title; - assistant1_score_label = '🏆 ' + assistant1_score_label; - otherModelHeaderColor = colorBlue; - ourModelHeaderColor = colorRed; - } else if (score[0] < score[1]) { - assistant2_title = '🏆 ' + assistant2_title; - assistant2_score_label = '🏆 ' + assistant2_score_label; - otherModelHeaderColor = colorRed; - ourModelHeaderColor = colorBlue; - } - - document.getElementById('other-model-header-bg').style.backgroundColor = otherModelHeaderColor; - document.getElementById('our-model-header').style.backgroundColor = ourModelHeaderColor; - - document.getElementById('other-model-header').textContent = assistant1_title; - document.getElementById('our-model-header').textContent = assistant2_title; - - document.getElementById('other-score-label').textContent = assistant1_score_label; - document.getElementById('our-score-label').textContent = assistant2_score_label; - - // Update expand buttons visibility for both cards after displaying answers - // Reset the expanded state and update expand buttons visibility for both cards after displaying answers - document.querySelectorAll('.expandable-card').forEach(card => { - card.classList.remove('expanded'); - updateExpandButtonVisibility(card); - const expandBtn = card.querySelector('.expand-btn'); - expandBtn.innerHTML = 'keyboard_arrow_down Show more'; // .textContent = 'Show more'; - }); -} - -document.getElementById('question-select').addEventListener('change', e => { - currentQuestionIndex = parseInt(e.target.value); - displayQuestion(currentQuestionIndex); -}); - -document.getElementById('category-select').addEventListener('change', e => { - let currentCategory = e.target.value; - const questionIds = categoryMapping[currentCategory]; - currentQuestionIndex = questionIds[0]; - updateQuestionSelect(currentQuestionIndex); - displayQuestion(currentQuestionIndex); -}); - -// Update expand buttons whenever the model is changed -document.getElementById('model-select').addEventListener('change', () => { - displayAnswers(currentQuestionIndex); - document.querySelectorAll('.expandable-card').forEach(card => { - updateExpandButtonVisibility(card); - }); - updateModelSelect(); -}); - -function switchQuestionAndCategory() { - document.getElementById('question-select').value = currentQuestionIndex; - old_category = document.getElementById('category-select').value; - new_category = questionMapping[currentQuestionIndex].category; - if (old_category != new_category) { - document.getElementById('category-select').value = new_category; - updateQuestionSelect(currentQuestionIndex); - } - displayQuestion(currentQuestionIndex); -} - -document.getElementById('prev-question').addEventListener('click', () => { - // Question index starts from 1. - currentQuestionIndex = Math.max(1, currentQuestionIndex - 1); - switchQuestionAndCategory(); -}); - -document.getElementById('next-question').addEventListener('click', () => { - // Question index starts from 1. 
- currentQuestionIndex = Math.min(questionsCount, currentQuestionIndex + 1); - switchQuestionAndCategory(); -}); - -function updateExpandButtonVisibility(card) { - const cardTextContainer = card.querySelector('.card-text-container'); - const expandBtn = card.querySelector('.expand-btn'); - if (cardTextContainer.scrollHeight > cardTextContainer.offsetHeight) { - expandBtn.style.display = 'flex'; - } else { - expandBtn.style.display = 'none'; - card.classList.add('expanded'); - } -} - -document.querySelectorAll('.expand-btn').forEach(btn => { - btn.addEventListener('click', e => { - const card = e.target.closest('.expandable-card'); - card.classList.toggle('expanded'); - const more = 'keyboard_arrow_down Show more'; - const less = 'keyboard_arrow_up Show less'; - e.target.innerHTML = card.classList.contains('expanded') ? less : more; - }); -}); diff --git a/spaces/Intel/ldm3d/app.py b/spaces/Intel/ldm3d/app.py deleted file mode 100644 index fcf69a142b7f444cb1e226bac98d23b9f8565099..0000000000000000000000000000000000000000 --- a/spaces/Intel/ldm3d/app.py +++ /dev/null @@ -1,116 +0,0 @@ -from diffusers import StableDiffusionLDM3DPipeline -import gradio as gr -import torch -from PIL import Image -import base64 -from io import BytesIO -from tempfile import NamedTemporaryFile -from pathlib import Path - -Path("tmp").mkdir(exist_ok=True) -device = "cuda" if torch.cuda.is_available() else "cpu" -print(f"Device is {device}") -torch_type = torch.float16 if device == "cuda" else torch.float32 -pipe = StableDiffusionLDM3DPipeline.from_pretrained( - "Intel/ldm3d-pano", - torch_dtype=torch_type - # , safety_checker=None -) -pipe.to(device) -if device == "cuda": - pipe.enable_xformers_memory_efficient_attention() - pipe.enable_model_cpu_offload() - - -def get_iframe(rgb_path: str, depth_path: str, viewer_mode: str = "6DOF"): - # buffered = BytesIO() - # rgb.convert("RGB").save(buffered, format="JPEG") - # rgb_base64 = base64.b64encode(buffered.getvalue()) - # buffered = BytesIO() - # depth.convert("RGB").save(buffered, format="JPEG") - # depth_base64 = base64.b64encode(buffered.getvalue()) - - # rgb_base64 = "data:image/jpeg;base64," + rgb_base64.decode("utf-8") - # depth_base64 = "data:image/jpeg;base64," + depth_base64.decode("utf-8") - rgb_base64 = f"/file={rgb_path}" - depth_base64 = f"/file={depth_path}" - if viewer_mode == "6DOF": - return f"""""" - else: - return f"""""" - - -def predict( - prompt: str, - negative_prompt: str, - guidance_scale: float = 5.0, - seed: int = 0, - randomize_seed: bool = True, -): - generator = torch.Generator() if randomize_seed else torch.manual_seed(seed) - output = pipe( - prompt, - width=1024, - height=512, - negative_prompt=negative_prompt, - guidance_scale=guidance_scale, - generator=generator, - num_inference_steps=50, - ) # type: ignore - rgb_image, depth_image = output.rgb[0], output.depth[0] # type: ignore - with NamedTemporaryFile(suffix=".png", delete=False, dir="tmp") as rgb_file: - rgb_image.save(rgb_file.name) - rgb_image = rgb_file.name - with NamedTemporaryFile(suffix=".png", delete=False, dir="tmp") as depth_file: - depth_image.save(depth_file.name) - depth_image = depth_file.name - - iframe = get_iframe(rgb_image, depth_image) - return rgb_image, depth_image, generator.seed(), iframe - - -with gr.Blocks() as block: - gr.Markdown( - """ -## LDM3d Demo - -[Model card](https://huggingface.co/Intel/ldm3d-pano
) -[Diffusers docs](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/ldm3d_diffusion) -For better results, specify "360 view of" or "panoramic view of" in the prompt - -""" - ) - with gr.Row(): - with gr.Column(scale=1): - prompt = gr.Textbox(label="Prompt") - negative_prompt = gr.Textbox(label="Negative Prompt") - guidance_scale = gr.Slider( - label="Guidance Scale", minimum=0, maximum=10, step=0.1, value=5.0 - ) - randomize_seed = gr.Checkbox(label="Randomize Seed", value=True) - seed = gr.Slider(label="Seed", minimum=0, - maximum=2**64 - 1, step=1) - generated_seed = gr.Number(label="Generated Seed") - markdown = gr.Markdown(label="Output Box") - with gr.Row(): - new_btn = gr.Button("New Image") - with gr.Column(scale=2): - html = gr.HTML(height='50%') - with gr.Row(): - rgb = gr.Image(label="RGB Image", type="filepath") - depth = gr.Image(label="Depth Image", type="filepath") - gr.Examples( - examples=[ - ["360 view of a large bedroom", "", 7.0, 42, False]], - inputs=[prompt, negative_prompt, guidance_scale, seed, randomize_seed], - outputs=[rgb, depth, generated_seed, html], - fn=predict, - cache_examples=True) - - new_btn.click( - fn=predict, - inputs=[prompt, negative_prompt, guidance_scale, seed, randomize_seed], - outputs=[rgb, depth, generated_seed, html], - ) - -block.launch() \ No newline at end of file diff --git a/spaces/JackBAI/MassageMateNLP/models/bert/position_x/README.md b/spaces/JackBAI/MassageMateNLP/models/bert/position_x/README.md deleted file mode 100644 index 38d612a4804cfeaed7572fea1f13117009467934..0000000000000000000000000000000000000000 --- a/spaces/JackBAI/MassageMateNLP/models/bert/position_x/README.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -license: apache-2.0 -tags: -- generated_from_trainer -metrics: -- accuracy -model-index: -- name: position_x - results: [] ---- - - - -# position_x - -This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. 
-It achieves the following results on the evaluation set: -- Loss: 0.1305 -- Accuracy: 0.9701 - -## Model description - -More information needed - -## Intended uses & limitations - -More information needed - -## Training and evaluation data - -More information needed - -## Training procedure - -### Training hyperparameters - -The following hyperparameters were used during training: -- learning_rate: 2e-05 -- train_batch_size: 128 -- eval_batch_size: 8 -- seed: 42 -- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 -- lr_scheduler_type: linear -- num_epochs: 12.0 - -### Training results - - - -### Framework versions - -- Transformers 4.25.1 -- Pytorch 1.11.0+cu113 -- Datasets 2.8.0 -- Tokenizers 0.13.2 diff --git a/spaces/Jamel887/Rv-percobaan887/config.py b/spaces/Jamel887/Rv-percobaan887/config.py deleted file mode 100644 index 040a64d2c5ce4d7802bdf7f69321483b81008f08..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rv-percobaan887/config.py +++ /dev/null @@ -1,106 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.colab, - self.noparallel, - self.noautoopen, - self.api - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument("--api", action="store_true", help="Launch with api") - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.api - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/Jamkonams/AutoGPT/CONTRIBUTING.md b/spaces/Jamkonams/AutoGPT/CONTRIBUTING.md deleted file mode 
100644 index 79169a0c1951853303f73ffa1fddb3518685606a..0000000000000000000000000000000000000000 --- a/spaces/Jamkonams/AutoGPT/CONTRIBUTING.md +++ /dev/null @@ -1,105 +0,0 @@ -# Contributing to ProjectName - -First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request. - -This document provides guidelines and best practices to help you contribute effectively. - -## Table of Contents - -- [Code of Conduct](#code-of-conduct) -- [Getting Started](#getting-started) -- [How to Contribute](#how-to-contribute) - - [Reporting Bugs](#reporting-bugs) - - [Suggesting Enhancements](#suggesting-enhancements) - - [Submitting Pull Requests](#submitting-pull-requests) -- [Style Guidelines](#style-guidelines) - - [Code Formatting](#code-formatting) - - [Pre-Commit Hooks](#pre-commit-hooks) - -## Code of Conduct - -By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. - -## 📢 A Quick Word -Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT. - -However, you absolutely can still add these commands to Auto-GPT in the form of plugins. Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template). -> ⚠️ Plugin support is expected to ship within the week. You can follow PR #757 for more updates! - -## Getting Started - -To start contributing, follow these steps: - -1. Fork the repository and clone your fork. -2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`). -3. Make your changes in the new branch. -4. Test your changes thoroughly. -5. Commit and push your changes to your fork. -6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section. - -## How to Contribute - -### Reporting Bugs - -If you find a bug in the project, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A description of the problem, including steps to reproduce the issue. -- Any relevant logs, screenshots, or other supporting information. - -### Suggesting Enhancements - -If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information: - -- A clear, descriptive title for the issue. -- A detailed description of the proposed enhancement, including any benefits and potential drawbacks. -- Any relevant examples, mockups, or supporting information. - -### Submitting Pull Requests - -When submitting a pull request, please ensure that your changes meet the following criteria: - -- Your pull request should be atomic and focus on a single change. -- Your pull request should include tests for your change. -- You should have thoroughly tested your changes with multiple different prompts. -- You should have considered potential risks and mitigations for your changes. -- You should have documented your changes clearly and comprehensively. -- You should not include any unrelated or "extra" small tweaks or changes. - -## Style Guidelines - -### Code Formatting - -We use the `black` code formatter to maintain a consistent coding style across the project. Please ensure that your code is formatted using `black` before submitting a pull request. 
You can install `black` using `pip`: - -```bash -pip install black -``` - -To format your code, run the following command in the project's root directory: - -```bash -black . -``` -### Pre-Commit Hooks -We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps: - -Install the pre-commit package using pip: -```bash -pip install pre-commit -``` - -Run the following command in the project's root directory to install the pre-commit hooks: -```bash -pre-commit install -``` - -Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements. - -If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project. - -Happy coding, and once again, thank you for your contributions! - -Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here: - -https://github.com/Torantulino/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ \ No newline at end of file diff --git a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/export_plugin_input.py b/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/export_plugin_input.py deleted file mode 100644 index 01843097de187fa66000eadd1b31d1189e1c8ca0..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/plugin/inputs/export_plugin_input.py +++ /dev/null @@ -1,12 +0,0 @@ -from __future__ import annotations - -from steamship.base.model import CamelModel - - -class ExportPluginInput(CamelModel): - plugin_instance: str = None - id: str = None - handle: str = None - type: str = None - filename: str = None - query: str = None diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/avatar.js b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/avatar.js deleted file mode 100644 index 14da1d3ba174320f8b52b6ceb18799909dff0c6e..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/avatar.js +++ /dev/null @@ -1,53 +0,0 @@ - -function addAvatars(messageElement, role='user'||'bot') { - if(messageElement.innerHTML === '') { - return; - } - if (messageElement.classList.contains('avatar-added') || messageElement.classList.contains('hide')) { - return; - } - if (role === 'bot' && botAvatarUrl === "" || role === 'user' && userAvatarUrl === "") { - messageElement.classList.add('avatar-added'); - return; - } - - - const messageRow = document.createElement('div'); - messageRow.classList.add('message-row'); - messageElement.classList.add('avatar-added'); - - if (role === 'bot') { - messageRow.classList.add('bot-message-row'); - } else if (role === 'user') { - messageRow.classList.add('user-message-row'); - } - - const avatarDiv = document.createElement('div'); - avatarDiv.classList.add('chatbot-avatar'); - if (role === 'bot') { - avatarDiv.classList.add('bot-avatar'); - avatarDiv.innerHTML = `bot-avatar`; - } else if (role === 'user') { - avatarDiv.classList.add('user-avatar'); - avatarDiv.innerHTML = `user-avatar`; - } - - messageElement.parentNode.replaceChild(messageRow, messageElement); - - if (role === 'bot') { - messageRow.appendChild(avatarDiv); - messageRow.appendChild(messageElement); - } else if (role === 'user') { - messageRow.appendChild(messageElement); - messageRow.appendChild(avatarDiv); - } 
-} - -function clearMessageRows() { - const messageRows = chatbotWrap.querySelectorAll('.message-row'); - messageRows.forEach((messageRow) => { - if (messageRow.innerText === '') { - messageRow.parentNode.removeChild(messageRow); - } - }); -} \ No newline at end of file diff --git a/spaces/JustinLin610/ImageBind_zeroshot_demo/data.py b/spaces/JustinLin610/ImageBind_zeroshot_demo/data.py deleted file mode 100644 index 80c7aca83970707204355221217918a4b2337379..0000000000000000000000000000000000000000 --- a/spaces/JustinLin610/ImageBind_zeroshot_demo/data.py +++ /dev/null @@ -1,350 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torchaudio -import logging - -from models.multimodal_preprocessors import SimpleTokenizer -from PIL import Image -from pytorchvideo import transforms as pv_transforms -from pytorchvideo.data.clip_sampling import ConstantClipsPerVideoSampler -from pytorchvideo.data.encoded_video import EncodedVideo - -from torchvision import transforms -from torchvision.transforms._transforms_video import NormalizeVideo - -DEFAULT_AUDIO_FRAME_SHIFT_MS = 10 # in milliseconds - -BPE_PATH = "bpe/bpe_simple_vocab_16e6.txt.gz" - - -def waveform2melspec(waveform, sample_rate, num_mel_bins, target_length): - # Based on https://github.com/YuanGongND/ast/blob/d7d8b4b8e06cdaeb6c843cdb38794c1c7692234c/src/dataloader.py#L102 - waveform -= waveform.mean() - fbank = torchaudio.compliance.kaldi.fbank( - waveform, - htk_compat=True, - sample_frequency=sample_rate, - use_energy=False, - window_type="hanning", - num_mel_bins=num_mel_bins, - dither=0.0, - frame_length=25, - frame_shift=DEFAULT_AUDIO_FRAME_SHIFT_MS, - ) - # Convert to [mel_bins, num_frames] shape - fbank = fbank.transpose(0, 1) - # Pad to target_length - n_frames = fbank.size(1) - p = target_length - n_frames - # if p is too large (say >20%), flash a warning - if abs(p) / n_frames > 0.2: - logging.warning( - "Large gap between audio n_frames(%d) and " - "target_length (%d). 
Is the audio_target_length " - "setting correct?", - n_frames, - target_length, - ) - # cut and pad - if p > 0: - fbank = torch.nn.functional.pad(fbank, (0, p), mode="constant", value=0) - elif p < 0: - fbank = fbank[:, 0:target_length] - # Convert to [1, mel_bins, num_frames] shape, essentially like a 1 - # channel image - fbank = fbank.unsqueeze(0) - return fbank - - -def get_clip_timepoints(clip_sampler, duration): - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def load_and_transform_vision_data(image_paths, device): - if image_paths is None: - return None - - image_ouputs = [] - for image_path in image_paths: - data_transform = transforms.Compose( - [ - transforms.Resize( - 224, interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - with open(image_path, "rb") as fopen: - image = Image.open(fopen).convert("RGB") - - image = data_transform(image).to(device) - image_ouputs.append(image) - return torch.stack(image_ouputs, dim=0) - - -def load_and_transform_text(text, device): - if text is None: - return None - tokenizer = SimpleTokenizer(bpe_path=BPE_PATH) - tokens = [tokenizer(t).unsqueeze(0).to(device) for t in text] - tokens = torch.cat(tokens, dim=0) - return tokens - - -def load_and_transform_audio_data( - audio_paths, - device, - num_mel_bins=128, - target_length=204, - sample_rate=16000, - clip_duration=2, - clips_per_video=3, - mean=-4.268, - std=9.138, -): - if audio_paths is None: - return None - - audio_outputs = [] - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - - for audio_path in audio_paths: - waveform, sr = torchaudio.load(audio_path) - if sample_rate != sr: - waveform = torchaudio.functional.resample( - waveform, orig_freq=sr, new_freq=sample_rate - ) - all_clips_timepoints = get_clip_timepoints( - clip_sampler, waveform.size(1) / sample_rate - ) - all_clips = [] - for clip_timepoints in all_clips_timepoints: - waveform_clip = waveform[ - :, - int(clip_timepoints[0] * sample_rate) : int( - clip_timepoints[1] * sample_rate - ), - ] - waveform_melspec = waveform2melspec( - waveform_clip, sample_rate, num_mel_bins, target_length - ) - all_clips.append(waveform_melspec) - - normalize = transforms.Normalize(mean=mean, std=std) - all_clips = [normalize(ac).to(device) for ac in all_clips] - - all_clips = torch.stack(all_clips, dim=0) - audio_outputs.append(all_clips) - - return torch.stack(audio_outputs, dim=0) - - -def get_clip_timepoints(clip_sampler, duration): - # Read out all clips in this video - all_clips_timepoints = [] - is_last_clip = False - end = 0.0 - while not is_last_clip: - start, end, _, _, is_last_clip = clip_sampler(end, duration, annotation=None) - all_clips_timepoints.append((start, end)) - return all_clips_timepoints - - -def crop_boxes(boxes, x_offset, y_offset): - """ - Peform crop on the bounding boxes given the offsets. - Args: - boxes (ndarray or None): bounding boxes to peform crop. The dimension - is `num boxes` x 4. - x_offset (int): cropping offset in the x axis. - y_offset (int): cropping offset in the y axis. 
- Returns: - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - cropped_boxes = boxes.copy() - cropped_boxes[:, [0, 2]] = boxes[:, [0, 2]] - x_offset - cropped_boxes[:, [1, 3]] = boxes[:, [1, 3]] - y_offset - - return cropped_boxes - - -def uniform_crop(images, size, spatial_idx, boxes=None, scale_size=None): - """ - Perform uniform spatial sampling on the images and corresponding boxes. - Args: - images (tensor): images to perform uniform crop. The dimension is - `num frames` x `channel` x `height` x `width`. - size (int): size of height and weight to crop the images. - spatial_idx (int): 0, 1, or 2 for left, center, and right crop if width - is larger than height. Or 0, 1, or 2 for top, center, and bottom - crop if height is larger than width. - boxes (ndarray or None): optional. Corresponding boxes to images. - Dimension is `num boxes` x 4. - scale_size (int): optinal. If not None, resize the images to scale_size before - performing any crop. - Returns: - cropped (tensor): images with dimension of - `num frames` x `channel` x `size` x `size`. - cropped_boxes (ndarray or None): the cropped boxes with dimension of - `num boxes` x 4. - """ - assert spatial_idx in [0, 1, 2] - ndim = len(images.shape) - if ndim == 3: - images = images.unsqueeze(0) - height = images.shape[2] - width = images.shape[3] - - if scale_size is not None: - if width <= height: - width, height = scale_size, int(height / width * scale_size) - else: - width, height = int(width / height * scale_size), scale_size - images = torch.nn.functional.interpolate( - images, - size=(height, width), - mode="bilinear", - align_corners=False, - ) - - y_offset = int(math.ceil((height - size) / 2)) - x_offset = int(math.ceil((width - size) / 2)) - - if height > width: - if spatial_idx == 0: - y_offset = 0 - elif spatial_idx == 2: - y_offset = height - size - else: - if spatial_idx == 0: - x_offset = 0 - elif spatial_idx == 2: - x_offset = width - size - cropped = images[:, :, y_offset : y_offset + size, x_offset : x_offset + size] - cropped_boxes = crop_boxes(boxes, x_offset, y_offset) if boxes is not None else None - if ndim == 3: - cropped = cropped.squeeze(0) - return cropped, cropped_boxes - - -class SpatialCrop(nn.Module): - """ - Convert the video into 3 smaller clips spatially. Must be used after the - temporal crops to get spatial crops, and should be used with - -2 in the spatial crop at the slowfast augmentation stage (so full - frames are passed in here). Will return a larger list with the - 3x spatial crops as well. - """ - - def __init__(self, crop_size: int = 224, num_crops: int = 3): - super().__init__() - self.crop_size = crop_size - if num_crops == 3: - self.crops_to_ext = [0, 1, 2] - self.flipped_crops_to_ext = [] - elif num_crops == 1: - self.crops_to_ext = [1] - self.flipped_crops_to_ext = [] - else: - raise NotImplementedError("Nothing else supported yet") - - def forward(self, videos): - """ - Args: - videos: A list of C, T, H, W videos. - Returns: - videos: A list with 3x the number of elements. Each video converted - to C, T, H', W' by spatial cropping. 
- """ - assert isinstance(videos, list), "Must be a list of videos after temporal crops" - assert all([video.ndim == 4 for video in videos]), "Must be (C,T,H,W)" - res = [] - for video in videos: - for spatial_idx in self.crops_to_ext: - res.append(uniform_crop(video, self.crop_size, spatial_idx)[0]) - if not self.flipped_crops_to_ext: - continue - flipped_video = transforms.functional.hflip(video) - for spatial_idx in self.flipped_crops_to_ext: - res.append(uniform_crop(flipped_video, self.crop_size, spatial_idx)[0]) - return res - - -def load_and_transform_video_data( - video_paths, - device, - clip_duration=2, - clips_per_video=5, - sample_rate=16000, -): - if video_paths is None: - return None - - video_outputs = [] - video_transform = transforms.Compose( - [ - pv_transforms.ShortSideScale(224), - NormalizeVideo( - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711), - ), - ] - ) - - clip_sampler = ConstantClipsPerVideoSampler( - clip_duration=clip_duration, clips_per_video=clips_per_video - ) - frame_sampler = pv_transforms.UniformTemporalSubsample(num_samples=clip_duration) - - for video_path in video_paths: - video = EncodedVideo.from_path( - video_path, - decoder="decord", - decode_audio=False, - **{"sample_rate": sample_rate}, - ) - - all_clips_timepoints = get_clip_timepoints(clip_sampler, video.duration) - - all_video = [] - for clip_timepoints in all_clips_timepoints: - # Read the clip, get frames - clip = video.get_clip(clip_timepoints[0], clip_timepoints[1]) - if clip is None: - raise ValueError("No clip found") - video_clip = frame_sampler(clip["video"]) - video_clip = video_clip / 255.0 # since this is float, need 0-1 - - all_video.append(video_clip) - - all_video = [video_transform(clip) for clip in all_video] - all_video = SpatialCrop(224, num_crops=3)(all_video) - - all_video = torch.stack(all_video, dim=0) - video_outputs.append(all_video) - - return torch.stack(video_outputs, dim=0).to(device) diff --git a/spaces/KAIST-Geometric-AI-Lab/syncdiffusion-demo/syncdiffusion/utils.py b/spaces/KAIST-Geometric-AI-Lab/syncdiffusion-demo/syncdiffusion/utils.py deleted file mode 100644 index fa70f52cf3fc4f0a0b19cb99309cfe8681625ad0..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/syncdiffusion-demo/syncdiffusion/utils.py +++ /dev/null @@ -1,28 +0,0 @@ -import numpy as np -import torch - -def seed_everything(seed): - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - -def get_views(panorama_height, panorama_width, window_size=64, stride=8): - panorama_height /= 8 - panorama_width /= 8 - num_blocks_height = (panorama_height - window_size) // stride + 1 - num_blocks_width = (panorama_width - window_size) // stride + 1 - total_num_blocks = int(num_blocks_height * num_blocks_width) - views = [] - for i in range(total_num_blocks): - h_start = int((i // num_blocks_width) * stride) - h_end = h_start + window_size - w_start = int((i % num_blocks_width) * stride) - w_end = w_start + window_size - views.append((h_start, h_end, w_start, w_end)) - return views - -def exponential_decay_list(init_weight, decay_rate, num_steps): - weights = [init_weight * (decay_rate ** i) for i in range(num_steps)] - return torch.tensor(weights) \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/train/extract/extract_f0_rmvpe.py 
b/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/train/extract/extract_f0_rmvpe.py deleted file mode 100644 index c6c90440d9e612b37c6d5a514786a6d0fffb19ba..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/train/extract/extract_f0_rmvpe.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import sys -import traceback - -import parselmouth - -now_dir = os.getcwd() -sys.path.append(now_dir) -import logging - -import numpy as np -import pyworld - -from infer.lib.audio import load_audio - -logging.getLogger("numba").setLevel(logging.WARNING) - -n_part = int(sys.argv[1]) -i_part = int(sys.argv[2]) -i_gpu = sys.argv[3] -os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu) -exp_dir = sys.argv[4] -is_half = sys.argv[5] -f = open("%s/extract_f0_feature.log" % exp_dir, "a+") - - -def printt(strr): - print(strr) - f.write("%s\n" % strr) - f.flush() - - -class FeatureInput(object): - def __init__(self, samplerate=16000, hop_size=160): - self.fs = samplerate - self.hop = hop_size - - self.f0_bin = 256 - self.f0_max = 1100.0 - self.f0_min = 50.0 - self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700) - self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700) - - def compute_f0(self, path, f0_method): - x = load_audio(path, self.fs) - # p_len = x.shape[0] // self.hop - if f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from infer.lib.rmvpe import RMVPE - - print("Loading rmvpe model") - self.model_rmvpe = RMVPE( - "assets/rmvpe/rmvpe.pt", is_half=is_half, device="cuda" - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - return f0 - - def coarse_f0(self, f0): - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * ( - self.f0_bin - 2 - ) / (self.f0_mel_max - self.f0_mel_min) + 1 - - # use 0 or 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1 - f0_coarse = np.rint(f0_mel).astype(int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, ( - f0_coarse.max(), - f0_coarse.min(), - ) - return f0_coarse - - def go(self, paths, f0_method): - if len(paths) == 0: - printt("no-f0-todo") - else: - printt("todo-f0-%s" % len(paths)) - n = max(len(paths) // 5, 1) # 每个进程最多打印5条 - for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths): - try: - if idx % n == 0: - printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path)) - if ( - os.path.exists(opt_path1 + ".npy") == True - and os.path.exists(opt_path2 + ".npy") == True - ): - continue - featur_pit = self.compute_f0(inp_path, f0_method) - np.save( - opt_path2, - featur_pit, - allow_pickle=False, - ) # nsf - coarse_pit = self.coarse_f0(featur_pit) - np.save( - opt_path1, - coarse_pit, - allow_pickle=False, - ) # ori - except: - printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc())) - - -if __name__ == "__main__": - # exp_dir=r"E:\codes\py39\dataset\mi-test" - # n_p=16 - # f = open("%s/log_extract_f0.log"%exp_dir, "w") - printt(sys.argv) - featureInput = FeatureInput() - paths = [] - inp_root = "%s/1_16k_wavs" % (exp_dir) - opt_root1 = "%s/2a_f0" % (exp_dir) - opt_root2 = "%s/2b-f0nsf" % (exp_dir) - - os.makedirs(opt_root1, exist_ok=True) - os.makedirs(opt_root2, exist_ok=True) - for name in sorted(list(os.listdir(inp_root))): - inp_path = "%s/%s" % (inp_root, name) - if "spec" in inp_path: - continue - opt_path1 = "%s/%s" % (opt_root1, name) - opt_path2 = "%s/%s" % (opt_root2, name) - paths.append([inp_path, opt_path1, opt_path2]) - try: - featureInput.go(paths[i_part::n_part], "rmvpe") - except: 
- printt("f0_all_fail-%s" % (traceback.format_exc())) - # ps = [] - # for i in range(n_p): - # p = Process( - # target=featureInput.go, - # args=( - # paths[i::n_p], - # f0method, - # ), - # ) - # ps.append(p) - # p.start() - # for i in range(n_p): - # ps[i].join() diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/demo_cli.py b/spaces/Kevin676/Real-Time-Voice-Cloning/demo_cli.py deleted file mode 100644 index 0c5f2adf8f129792f9edb071b4b6b610fd2bfd34..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/demo_cli.py +++ /dev/null @@ -1,206 +0,0 @@ -from encoder.params_model import model_embedding_size as speaker_embedding_size -from utils.argutils import print_args -from utils.modelutils import check_model_paths -from synthesizer.inference import Synthesizer -from encoder import inference as encoder -from vocoder import inference as vocoder -from pathlib import Path -import numpy as np -import soundfile as sf -import librosa -import argparse -import torch -import sys -import os -from audioread.exceptions import NoBackendError - - -if __name__ == '__main__': - ## Info & args - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("-e", "--enc_model_fpath", type=Path, - default="encpretrained.pt", - help="Path to a saved encoder") - parser.add_argument("-s", "--syn_model_fpath", type=Path, - default="synpretrained.pt", - help="Path to a saved synthesizer") - parser.add_argument("-v", "--voc_model_fpath", type=Path, - default="vocpretrained.pt", - help="Path to a saved vocoder") - parser.add_argument("--cpu", action="store_true", help="If True, processing is done on CPU, even when a GPU is available.") - parser.add_argument("--no_sound", action="store_true", help="If True, audio won't be played.") - parser.add_argument("--seed", type=int, default=None, help="Optional random number seed value to make toolbox deterministic.") - parser.add_argument("--no_mp3_support", action="store_true", help="If True, disallows loading mp3 files to prevent audioread errors when ffmpeg is not installed.") - parser.add_argument("-audio", "--audio_path", type=Path, required = True, - help="Path to a audio file") - parser.add_argument("--text", type=str, required = True, help="Text Input") - parser.add_argument("--output_path", type=str, required = True, help="output file path") - - args = parser.parse_args() - print_args(args, parser) - if not args.no_sound: - import sounddevice as sd - - if args.cpu: - # Hide GPUs from Pytorch to force CPU processing - os.environ["CUDA_VISIBLE_DEVICES"] = "-1" - - if not args.no_mp3_support: - try: - librosa.load("samples/1320_00000.mp3") - except NoBackendError: - print("Librosa will be unable to open mp3 files if additional software is not installed.\n" - "Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files.") - exit(-1) - - print("Running a test of your configuration...\n") - - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - gpu_properties = torch.cuda.get_device_properties(device_id) - ## Print some environment information (for debugging purposes) - print("Found %d GPUs available. 
Using GPU %d (%s) of compute capability %d.%d with " - "%.1fGb total memory.\n" % - (torch.cuda.device_count(), - device_id, - gpu_properties.name, - gpu_properties.major, - gpu_properties.minor, - gpu_properties.total_memory / 1e9)) - else: - print("Using CPU for inference.\n") - - ## Remind the user to download pretrained models if needed - check_model_paths(encoder_path=args.enc_model_fpath, - synthesizer_path=args.syn_model_fpath, - vocoder_path=args.voc_model_fpath) - - ## Load the models one by one. - print("Preparing the encoder, the synthesizer and the vocoder...") - encoder.load_model(args.enc_model_fpath) - synthesizer = Synthesizer(args.syn_model_fpath) - vocoder.load_model(args.voc_model_fpath) - - - ## Run a test - # print("Testing your configuration with small inputs.") - # # Forward an audio waveform of zeroes that lasts 1 second. Notice how we can get the encoder's - # # sampling rate, which may differ. - # # If you're unfamiliar with digital audio, know that it is encoded as an array of floats - # # (or sometimes integers, but mostly floats in this projects) ranging from -1 to 1. - # # The sampling rate is the number of values (samples) recorded per second, it is set to - # # 16000 for the encoder. Creating an array of length will always correspond - # # to an audio of 1 second. - # print(" Testing the encoder...") - # encoder.embed_utterance(np.zeros(encoder.sampling_rate)) - - # # Create a dummy embedding. You would normally use the embedding that encoder.embed_utterance - # # returns, but here we're going to make one ourselves just for the sake of showing that it's - # # possible. - # embed = np.random.rand(speaker_embedding_size) - # # Embeddings are L2-normalized (this isn't important here, but if you want to make your own - # # embeddings it will be). - # embed /= np.linalg.norm(embed) - # # The synthesizer can handle multiple inputs with batching. Let's create another embedding to - # # illustrate that - # embeds = [embed, np.zeros(speaker_embedding_size)] - # texts = ["test 1", "test 2"] - # print(" Testing the synthesizer... (loading the model will output a lot of text)") - # mels = synthesizer.synthesize_spectrograms(texts, embeds) - - # # The vocoder synthesizes one waveform at a time, but it's more efficient for long ones. We - # # can concatenate the mel spectrograms to a single one. - # mel = np.concatenate(mels, axis=1) - # # The vocoder can take a callback function to display the generation. More on that later. For - # # now we'll simply hide it like this: - # no_action = lambda *args: None - # print(" Testing the vocoder...") - # # For the sake of making this test short, we'll pass a short target length. The target length - # # is the length of the wav segments that are processed in parallel. E.g. for audio sampled - # # at 16000 Hertz, a target length of 8000 means that the target audio will be cut in chunks of - # # 0.5 seconds which will all be generated together. The parameters here are absurdly short, and - # # that has a detrimental effect on the quality of the audio. The default parameters are - # # recommended in general. - # vocoder.infer_waveform(mel, target=200, overlap=50, progress_callback=no_action) - - print("All test passed! You can now synthesize speech.\n\n") - - - ## Interactive speech generation - print("This is a GUI-less example of interface to SV2TTS. The purpose of this script is to " - "show how you can interface this project easily with your own. 
See the source code for " - "an explanation of what is happening.\n") - - print("Interactive generation loop") - # while True: - # Get the reference audio filepath - message = "Reference voice: enter an audio filepath of a voice to be cloned (mp3, " "wav, m4a, flac, ...):\n" - in_fpath = args.audio_path - - if in_fpath.suffix.lower() == ".mp3" and args.no_mp3_support: - print("Can't Use mp3 files please try again:") - ## Computing the embedding - # First, we load the wav using the function that the speaker encoder provides. This is - # important: there is preprocessing that must be applied. - - # The following two methods are equivalent: - # - Directly load from the filepath: - preprocessed_wav = encoder.preprocess_wav(in_fpath) - # - If the wav is already loaded: - original_wav, sampling_rate = librosa.load(str(in_fpath)) - preprocessed_wav = encoder.preprocess_wav(original_wav, sampling_rate) - print("Loaded file succesfully") - - # Then we derive the embedding. There are many functions and parameters that the - # speaker encoder interfaces. These are mostly for in-depth research. You will typically - # only use this function (with its default parameters): - embed = encoder.embed_utterance(preprocessed_wav) - print("Created the embedding") - - - ## Generating the spectrogram - text = args.text - - # If seed is specified, reset torch seed and force synthesizer reload - if args.seed is not None: - torch.manual_seed(args.seed) - synthesizer = Synthesizer(args.syn_model_fpath) - - # The synthesizer works in batch, so you need to put your data in a list or numpy array - texts = [text] - embeds = [embed] - # If you know what the attention layer alignments are, you can retrieve them here by - # passing return_alignments=True - specs = synthesizer.synthesize_spectrograms(texts, embeds) - spec = specs[0] - print("Created the mel spectrogram") - - - ## Generating the waveform - print("Synthesizing the waveform:") - - # If seed is specified, reset torch seed and reload vocoder - if args.seed is not None: - torch.manual_seed(args.seed) - vocoder.load_model(args.voc_model_fpath) - - # Synthesizing the waveform is fairly straightforward. Remember that the longer the - # spectrogram, the more time-efficient the vocoder. - generated_wav = vocoder.infer_waveform(spec) - - - ## Post-generation - # There's a bug with sounddevice that makes the audio cut one second earlier, so we - # pad it. 
- generated_wav = np.pad(generated_wav, (0, synthesizer.sample_rate), mode="constant") - - # Trim excess silences to compensate for gaps in spectrograms (issue #53) - generated_wav = encoder.preprocess_wav(generated_wav) - - # Save it on the disk - filename = args.output_path - print(generated_wav.dtype) - sf.write(filename, generated_wav.astype(np.float32), synthesizer.sample_rate) - print("\nSaved output as %s\n\n" % filename) diff --git a/spaces/KyanChen/FunSR/train_inr_funsr_ddp.py b/spaces/KyanChen/FunSR/train_inr_funsr_ddp.py deleted file mode 100644 index 524a4053e48167703d543a3b784380cc06a79f64..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/train_inr_funsr_ddp.py +++ /dev/null @@ -1,347 +0,0 @@ -import argparse -import json -import os -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP - -import yaml -import torch -import torch.nn as nn -from tqdm import tqdm -from torch.utils.data import DataLoader -from torch.optim.lr_scheduler import MultiStepLR, CosineAnnealingLR - -import datasets -import models -import utils - - -def make_data_loader(spec, tag='', local_rank=0): - if spec is None: - return None - - dataset = datasets.make(spec['dataset']) - dataset = datasets.make(spec['wrapper'], args={'dataset': dataset}) - if local_rank == 0: - print('{} dataset: size={}'.format(tag, len(dataset))) - for k, v in dataset[0].items(): - if torch.is_tensor(v): - print(' {}: shape={}'.format(k, v.shape)) - elif isinstance(v, str): - pass - elif isinstance(v, dict): - for k0, v0 in v.items(): - if hasattr(v0, 'shape'): - print(' {}: shape={}'.format(k0, v0.shape)) - else: - raise NotImplementedError - sampler = torch.utils.data.distributed.DistributedSampler(dataset, shuffle=(tag == 'train')) - loader = torch.utils.data.DataLoader(dataset, - batch_size=spec['batch_size'], - num_workers=spec['num_workers'], - pin_memory=True, - sampler=sampler) - return loader - - -def make_data_loaders(config, local_rank): - train_loader = make_data_loader(config.get('train_dataset'), tag='train', local_rank=local_rank) - val_loader = make_data_loader(config.get('val_dataset'), tag='val', local_rank=local_rank) - return train_loader, val_loader - - -def prepare_training(config, local_rank): - if config.get('resume') is not None: - sv_file = torch.load(config['resume']) - model = models.make(sv_file['model'], load_sd=True).cuda() - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) - optimizer = utils.make_optimizer( - model.parameters(), sv_file['optimizer'], load_sd=True) - epoch_start = sv_file['epoch'] + 1 - if config.get('multi_step_lr') is None: - lr_scheduler = None - else: - lr_scheduler = MultiStepLR(optimizer, **config['multi_step_lr']) - for _ in range(epoch_start - 1): - lr_scheduler.step() - else: - model = models.make(config['model']).cuda(local_rank) - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank]) - optimizer = utils.make_optimizer( - model.parameters(), config['optimizer']) - epoch_start = 1 - lr_scheduler = config.get('lr_scheduler') - lr_scheduler_name = lr_scheduler.pop('name') - if 'MultiStepLR' == lr_scheduler_name: - lr_scheduler = MultiStepLR(optimizer, **lr_scheduler) - elif 'CosineAnnealingLR' == lr_scheduler_name: - lr_scheduler = CosineAnnealingLR(optimizer, **lr_scheduler) - elif 'CosineAnnealingWarmUpLR' == lr_scheduler_name: - lr_scheduler = utils.warm_up_cosine_lr_scheduler(optimizer, **lr_scheduler) - if 
local_rank == 0: - print('model: #params={}'.format(utils.compute_num_params(model, text=True))) - return model, optimizer, epoch_start, lr_scheduler - -def reduce_mean(tensor, nprocs): - rt = tensor.clone() - dist.all_reduce(rt, op=dist.ReduceOp.SUM) - rt /= nprocs - return rt - -class AverageMeter(object): - """Computes and stores the average and current value""" - def __init__(self, name, fmt=':f'): - self.name = name - self.fmt = fmt - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def return_avg(self): - return self.avg - - def __str__(self): - fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})' - return fmtstr.format(**self.__dict__) - -def train(train_loader, model, optimizer, local_rank): - model = model.train() - loss_fn = nn.L1Loss().cuda(local_rank) - train_losses = AverageMeter('Loss', ':.4e') - - data_norm = config['data_norm'] - t = data_norm['img'] - img_sub = torch.FloatTensor(t['sub']).view(1, -1, 1, 1).cuda(local_rank) - img_div = torch.FloatTensor(t['div']).view(1, -1, 1, 1).cuda(local_rank) - t = data_norm['gt'] - gt_sub = torch.FloatTensor(t['sub']).view(1, 1, -1).cuda(local_rank) - gt_div = torch.FloatTensor(t['div']).view(1, 1, -1).cuda(local_rank) - - if local_rank == 0: - pbar = tqdm(total=len(train_loader), desc='train', leave=False) - - for i, batch in enumerate(train_loader): - if local_rank == 0: - pbar.update(1) - keys = list(batch.keys()) - batch = batch[keys[torch.randint(0, len(keys), [])]] - for k, v in batch.items(): - if torch.is_tensor(v): - batch[k] = v.cuda(local_rank, non_blocking=True) - img = (batch['img'] - img_sub) / img_div - gt = (batch['gt'] - gt_sub) / gt_div - pred = model(img, gt.shape[-2:]) - if isinstance(pred, tuple): - loss = 0.2 * loss_fn(pred[0], gt) + loss_fn(pred[1], gt) - elif isinstance(pred, list): - losses = [loss_fn(x, gt) for x in pred] - losses = [x * (idx + 1) for idx, x in enumerate(losses)] - loss = sum(losses) / ((1 + len(losses)) * len(losses) / 2) - else: - loss = loss_fn(pred, gt) - - torch.distributed.barrier() - reduced_loss = reduce_mean(loss, dist.get_world_size()) - train_losses.update(reduced_loss.item(), img.size(0)) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - if local_rank == 0: - pbar.close() - return train_losses.avg - - -def eval_psnr(loader, class_names, model, local_rank, data_norm=None, eval_type=None, eval_bsize=None, verbose=False, crop_border=4): - crop_border = int(crop_border) if crop_border else crop_border - if local_rank == 0: - print('crop border: ', crop_border) - model = model.eval() - - if data_norm is None: - data_norm = { - 'img': {'sub': [0], 'div': [1]}, - 'gt': {'sub': [0], 'div': [1]} - } - t = data_norm['img'] - img_sub = torch.FloatTensor(t['sub']).view(1, -1, 1, 1).cuda(local_rank) - img_div = torch.FloatTensor(t['div']).view(1, -1, 1, 1).cuda(local_rank) - t = data_norm['gt'] - gt_sub = torch.FloatTensor(t['sub']).view(1, -1, 1, 1).cuda(local_rank) - gt_div = torch.FloatTensor(t['div']).view(1, -1, 1, 1).cuda(local_rank) - - if eval_type is None: - metric_fn = [utils.calculate_psnr_pt, utils.calculate_ssim_pt] - elif eval_type == 'psnr+ssim': - metric_fn = [utils.calculate_psnr_pt, utils.calculate_ssim_pt] - elif eval_type.startswith('div2k'): - scale = int(eval_type.split('-')[1]) - metric_fn = partial(utils.calc_psnr, dataset='div2k', scale=scale) - elif 
eval_type.startswith('benchmark'): - scale = int(eval_type.split('-')[1]) - metric_fn = partial(utils.calc_psnr, dataset='benchmark', scale=scale) - else: - raise NotImplementedError - - val_res_psnr = AverageMeter('psnr', ':.4f') - val_res_ssim = AverageMeter('ssim', ':.4f') - - if local_rank == 0: - pbar = tqdm(total=len(loader), desc='val', leave=False) - for batch in loader: - if local_rank == 0: - pbar.update(1) - for k, v in batch.items(): - if torch.is_tensor(v): - batch[k] = v.cuda(local_rank, non_blocking=True) - - img = (batch['img'] - img_sub) / img_div - with torch.no_grad(): - pred = model(img, batch['gt'].shape[-2:]) - if isinstance(pred, list): - pred = pred[-1] - pred = pred * gt_div + gt_sub - - res_psnr = metric_fn[0]( - pred, - batch['gt'], - crop_border=crop_border - ).mean() - res_ssim = metric_fn[1]( - pred, - batch['gt'], - crop_border=crop_border - ).mean() - - torch.distributed.barrier() - reduced_val_res_psnr = reduce_mean(res_psnr, dist.get_world_size()) - reduced_val_res_ssim = reduce_mean(res_ssim, dist.get_world_size()) - - val_res_psnr.update(reduced_val_res_psnr.item(), img.size(0)) - val_res_ssim.update(reduced_val_res_ssim.item(), img.size(0)) - - if verbose and local_rank == 0: - pbar.set_description( - 'val psnr: {:.4f} ssim: {:.4f}'.format(val_res_psnr.avg, val_res_ssim.avg)) - if local_rank == 0: - pbar.close() - return val_res_psnr.avg, val_res_ssim.avg - - -def main(config, save_path): - # torch.backends.cudnn.benchmark = True - dist.init_process_group("nccl") - rank = dist.get_rank() - local_rank = int(os.environ["LOCAL_RANK"]) - world_size = dist.get_world_size() - print(f'rank: {rank} local_rank: {local_rank} world_size: {world_size}') - # print(f'local_rank: {torch.distributed.local_rank()}') - if local_rank == 0: - log, writer = utils.set_save_path(save_path) - with open(os.path.join(save_path, 'config.yaml'), 'w') as f: - yaml.dump(config, f, sort_keys=False) - - train_loader, val_loader = make_data_loaders(config, local_rank) - if config.get('data_norm') is None: - config['data_norm'] = { - 'img': {'sub': [0], 'div': [1]}, - 'gt': {'sub': [0], 'div': [1]} - } - - model, optimizer, epoch_start, lr_scheduler = prepare_training(config, local_rank) - - epoch_max = config['epoch_max'] - epoch_val_interval = config.get('epoch_val_interval') - epoch_save_interval = config.get('epoch_save_interval') - max_val_v = -1e18 - - timer = utils.Timer() - - for epoch in range(epoch_start, epoch_max + 1): - t_epoch_start = timer.t() - train_loader.sampler.set_epoch(epoch) - - train_loss = train(train_loader, model, optimizer, local_rank) - if lr_scheduler is not None: - lr_scheduler.step() - - if rank == 0: - log_info = ['epoch {}/{}'.format(epoch, epoch_max)] - log_info.append('train: loss={:.4f}'.format(train_loss)) - writer.add_scalar('lr', optimizer.param_groups[0]['lr'], epoch) - writer.add_scalars('loss', {'train': train_loss}, epoch) - - model_ = model.module - model_spec = config['model'] - model_spec['sd'] = model_.state_dict() - optimizer_spec = config['optimizer'] - optimizer_spec['sd'] = optimizer.state_dict() - sv_file = { - 'model': model_spec, - 'optimizer': optimizer_spec, - 'epoch': epoch - } - if rank == 0: - torch.save(sv_file, os.path.join(save_path, 'epoch-last.pth')) - - if (epoch_save_interval is not None) and (epoch % epoch_save_interval == 0): - if rank == 0: - torch.save(sv_file, os.path.join(save_path, 'epoch-{}.pth'.format(epoch))) - - if (epoch_val_interval is not None) and (epoch % epoch_val_interval == 0): - file_names = 
json.load(open(config['val_dataset']['dataset']['args']['split_file']))['test'] - class_names = list(set([os.path.basename(os.path.dirname(x)) for x in file_names])) - - val_res_psnr, val_res_ssim = eval_psnr(val_loader, class_names, model_, local_rank, - data_norm=config['data_norm'], - eval_type=config.get('eval_type'), - eval_bsize=config.get('eval_bsize'), - crop_border=4) - if rank == 0: - log_info.append('val: psnr={:.4f}'.format(val_res_psnr)) - writer.add_scalars('psnr', {'val': val_res_psnr}, epoch) - if val_res_psnr > max_val_v: - max_val_v = val_res_psnr - if rank == 0: - torch.save(sv_file, os.path.join(save_path, 'epoch-best.pth')) - - t = timer.t() - if rank == 0: - prog = (epoch - epoch_start + 1) / (epoch_max - epoch_start + 1) - t_epoch = utils.time_text(t - t_epoch_start) - t_elapsed, t_all = utils.time_text(t), utils.time_text(t / prog) - log_info.append('{} {}/{}'.format(t_epoch, t_elapsed, t_all)) - log(', '.join(log_info)) - writer.flush() - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='configs/train_1x-5x_INR_funsr.yaml') - parser.add_argument('--name', default='EXP20221216_11') - parser.add_argument('--tag', default=None) - args = parser.parse_args() - - with open(args.config, 'r') as f: - config = yaml.load(f, Loader=yaml.FullLoader) - print('config loaded.') - - save_name = args.name - if save_name is None: - save_name = '_' + args.config.split('/')[-1][:-len('.yaml')] - if args.tag is not None: - save_name += '_' + args.tag - save_path = os.path.join('./checkpoints', save_name) - main(config, save_path) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/__init__.py deleted file mode 100644 index de8b81ac433812b4ca20d46c8ebec9478da5e3bc..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .assigners import * # noqa: F401,F403 -from .builder import (ANCHOR_GENERATORS, BBOX_ASSIGNERS, BBOX_CODERS, - BBOX_SAMPLERS, IOU_CALCULATORS, MATCH_COSTS, - PRIOR_GENERATORS, build_anchor_generator, build_assigner, - build_bbox_coder, build_iou_calculator, build_match_cost, - build_prior_generator, build_sampler) -from .coders import * # noqa: F401,F403 -from .prior_generators import * # noqa: F401,F403 -from .samplers import * # noqa: F401,F403 - -__all__ = [ - 'ANCHOR_GENERATORS', 'PRIOR_GENERATORS', 'BBOX_ASSIGNERS', 'BBOX_SAMPLERS', - 'MATCH_COSTS', 'BBOX_CODERS', 'IOU_CALCULATORS', 'build_anchor_generator', - 'build_prior_generator', 'build_assigner', 'build_sampler', - 'build_iou_calculator', 'build_match_cost', 'build_bbox_coder' -] diff --git a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/refcoco.py b/spaces/KyanChen/RSPrompter/mmpretrain/datasets/refcoco.py deleted file mode 100644 index f4f2a943f73fdab493a47bbcd1d0ea6385ec60fa..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpretrain/datasets/refcoco.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from typing import List - -import mmengine -import numpy as np -from mmengine.dataset import BaseDataset -from pycocotools.coco import COCO - -from mmpretrain.registry import DATASETS - - -@DATASETS.register_module() -class RefCOCO(BaseDataset): - """RefCOCO dataset. - - Args: - ann_file (str): Annotation file path. 
- data_root (str): The root directory for ``data_prefix`` and - ``ann_file``. Defaults to ''. - data_prefix (str): Prefix for training data. - pipeline (Sequence): Processing pipeline. Defaults to an empty tuple. - **kwargs: Other keyword arguments in :class:`BaseDataset`. - """ - - def __init__(self, - data_root, - ann_file, - data_prefix, - split_file, - split='train', - **kwargs): - self.split_file = split_file - self.split = split - - super().__init__( - data_root=data_root, - data_prefix=dict(img_path=data_prefix), - ann_file=ann_file, - **kwargs, - ) - - def _join_prefix(self): - if not mmengine.is_abs(self.split_file) and self.split_file: - self.split_file = osp.join(self.data_root, self.split_file) - - return super()._join_prefix() - - def load_data_list(self) -> List[dict]: - """Load data list.""" - with mmengine.get_local_path(self.ann_file) as ann_file: - coco = COCO(ann_file) - splits = mmengine.load(self.split_file, file_format='pkl') - img_prefix = self.data_prefix['img_path'] - - data_list = [] - join_path = mmengine.fileio.get_file_backend(img_prefix).join_path - for refer in splits: - if refer['split'] != self.split: - continue - - ann = coco.anns[refer['ann_id']] - img = coco.imgs[ann['image_id']] - sentences = refer['sentences'] - bbox = np.array(ann['bbox'], dtype=np.float32) - bbox[2:4] = bbox[0:2] + bbox[2:4] # XYWH -> XYXY - - for sent in sentences: - data_info = { - 'img_path': join_path(img_prefix, img['file_name']), - 'image_id': ann['image_id'], - 'ann_id': ann['id'], - 'text': sent['sent'], - 'gt_bboxes': bbox[None, :], - } - data_list.append(data_info) - - if len(data_list) == 0: - raise ValueError(f'No sample in split "{self.split}".') - - return data_list diff --git a/spaces/LLMRiddles/LLMRiddles/README.md b/spaces/LLMRiddles/LLMRiddles/README.md deleted file mode 100644 index 91bd9c93a051d90ed9d1ba9c4534175b067327aa..0000000000000000000000000000000000000000 --- a/spaces/LLMRiddles/LLMRiddles/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LLMRiddles -emoji: 🏃 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 4.1.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets.py deleted file mode 100644 index 42d7807ae4d72d8a5431b62e7ae468f92436e61f..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/lib_v5/nets.py +++ /dev/null @@ -1,121 +0,0 @@ -import layers -import torch -import torch.nn.functional as F -from torch import nn - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h 
= self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 16) - self.stg1_high_band_net = BaseASPPNet(2, 16) - - self.stg2_bridge = layers.Conv2DBNActiv(18, 8, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(8, 16) - - self.stg3_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(16, 32) - - self.out = nn.Conv2d(32, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(16, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(16, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/LeeHotmen/webui-docker/README.md b/spaces/LeeHotmen/webui-docker/README.md deleted file mode 100644 index d09d8ce162e139ce06f130f29b73cd0221407ed6..0000000000000000000000000000000000000000 --- a/spaces/LeeHotmen/webui-docker/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI Docker -emoji: 🐳 -colorFrom: blue -colorTo: blue -sdk: docker -sdk_version: 3.9 -app_file: oh-no.py -pinned: false -duplicated_from: camenduru/webui-docker ---- - -## Stable Diffusion Web UI -https://github.com/AUTOMATIC1111/stable-diffusion-webui - -## Documentation -https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/Lenery/Dolly-v2/app.py b/spaces/Lenery/Dolly-v2/app.py deleted file mode 100644 index b9dc198fecf3cc6fd65159533d6bfa92e99f2152..0000000000000000000000000000000000000000 --- a/spaces/Lenery/Dolly-v2/app.py +++ /dev/null @@ -1,133 +0,0 @@ -from __future__ import annotations -from typing import Iterable -import gradio as gr -from gradio.themes.base import Base -from gradio.themes.utils import colors, fonts, sizes -from instruct_pipeline import InstructionTextGenerationPipeline -from transformers import 
AutoModelForCausalLM, AutoTokenizer, pipeline - -import torch - -theme = gr.themes.Monochrome( - primary_hue="indigo", - secondary_hue="blue", - neutral_hue="slate", - radius_size=gr.themes.sizes.radius_sm, - font=[gr.themes.GoogleFont("Open Sans"), "ui-sans-serif", "system-ui", "sans-serif"], -) - -tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left") -model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map={"":torch.cuda.current_device()}, load_in_8bit=True) - -generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) - -#generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") - -def generate(instruction): - response = generate_text(instruction) - result = "" - for word in response.split(" "): - result += word + " " - yield result - -examples = [ - "Instead of making a peanut butter and jelly sandwich, what else could I combine peanut butter with in a sandwich? Give five ideas", - "How do I make a campfire?", - "Write me a tweet about the release of Dolly 2.0, a new LLM", - "Explain to me the difference between nuclear fission and fusion.", - "I'm selling my Nikon D-750, write a short blurb for my ad." -] - -def process_example(args): - for x in generate(args): - pass - return x - -css = ".generating {visibility: hidden}" - -# Based on the gradio theming guide and borrowed from https://huggingface.co/spaces/shivi/dolly-v2-demo -class SeafoamCustom(Base): - def __init__( - self, - *, - primary_hue: colors.Color | str = colors.emerald, - secondary_hue: colors.Color | str = colors.blue, - neutral_hue: colors.Color | str = colors.blue, - spacing_size: sizes.Size | str = sizes.spacing_md, - radius_size: sizes.Size | str = sizes.radius_md, - font: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("Quicksand"), - "ui-sans-serif", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "monospace", - ), - ): - super().__init__( - primary_hue=primary_hue, - secondary_hue=secondary_hue, - neutral_hue=neutral_hue, - spacing_size=spacing_size, - radius_size=radius_size, - font=font, - font_mono=font_mono, - ) - super().set( - button_primary_background_fill="linear-gradient(90deg, *primary_300, *secondary_400)", - button_primary_background_fill_hover="linear-gradient(90deg, *primary_200, *secondary_300)", - button_primary_text_color="white", - button_primary_background_fill_dark="linear-gradient(90deg, *primary_600, *secondary_800)", - block_shadow="*shadow_drop_lg", - button_shadow="*shadow_drop_lg", - input_background_fill="zinc", - input_border_color="*secondary_300", - input_shadow="*shadow_drop", - input_shadow_focus="*shadow_drop_lg", - ) - - -seafoam = SeafoamCustom() - - -with gr.Blocks(theme=seafoam, analytics_enabled=False, css=css) as demo: - with gr.Column(): - gr.Markdown( - """ ## Dolly 2.0 - - Dolly 2.0 is a 12B parameter language model based on the EleutherAI pythia model family and fine-tuned exclusively on a new, high-quality human generated instruction following dataset, crowdsourced among Databricks employees. For more details, please refer to the [model card](https://huggingface.co/databricks/dolly-v2-12b) - - Type in the box below and click the button to generate answers to your most pressing questions! - - """ - ) - gr.HTML("

You can duplicate this Space to run it privately without a queue for shorter queue times: Duplicate Space
") - - with gr.Row(): - with gr.Column(scale=3): - instruction = gr.Textbox(placeholder="Enter your question here", label="Question", elem_id="q-input") - - with gr.Box(): - gr.Markdown("**Answer**") - output = gr.Markdown(elem_id="q-output") - submit = gr.Button("Generate", variant="primary") - gr.Examples( - examples=examples, - inputs=[instruction], - cache_examples=False, - fn=process_example, - outputs=[output], - ) - - - - submit.click(generate, inputs=[instruction], outputs=[output]) - instruction.submit(generate, inputs=[instruction], outputs=[output]) - -demo.queue(concurrency_count=16).launch(debug=True) \ No newline at end of file diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/crnn_pipeline.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/crnn_pipeline.py deleted file mode 100644 index 3173eac695d40ac95e9929896cf82c753624b073..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/recog_pipelines/crnn_pipeline.py +++ /dev/null @@ -1,35 +0,0 @@ -img_norm_cfg = dict(mean=[127], std=[127]) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='grayscale'), - dict( - type='ResizeOCR', - height=32, - min_width=100, - max_width=100, - keep_aspect_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img'], - meta_keys=['filename', 'resize_shape', 'text', 'valid_ratio']), -] -test_pipeline = [ - dict(type='LoadImageFromFile', color_type='grayscale'), - dict( - type='ResizeOCR', - height=32, - min_width=32, - max_width=None, - keep_aspect_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='DefaultFormatBundle'), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'resize_shape', 'valid_ratio', 'img_norm_cfg', - 'ori_filename', 'img_shape', 'ori_shape' - ]), -] diff --git a/spaces/Madhuri/vqa_audiobot/helper.py b/spaces/Madhuri/vqa_audiobot/helper.py deleted file mode 100644 index 52a24b1a485cd612cd774106eeedcb21f479bc87..0000000000000000000000000000000000000000 --- a/spaces/Madhuri/vqa_audiobot/helper.py +++ /dev/null @@ -1,43 +0,0 @@ -from os import listdir -from os.path import * -from PIL import Image -from io import BytesIO - -import streamlit as st -import base64 -import requests - - -def update_gallery_images(): - if 'gallery' not in st.session_state: - st.session_state['gallery'] = [] - st.session_state['gallery_images'] = [] - image_path = join(dirname(abspath(__file__)), 'images') - for f in listdir(image_path): - if f.startswith('image'): - with open(join(image_path, f), "rb") as image: - encoded = base64.b64encode(image.read()).decode() - st.session_state.gallery.append( - f"data:image/jpeg;base64,{encoded}") - st.session_state.gallery_images.append(join(image_path, f)) - - -def upload_image_to_server(): - if st.session_state.uploader is not None: - image = Image.open(st.session_state.uploader) - byte_io = BytesIO() - image.save(byte_io, 'png') - byte_io.seek(0) - file = {'file': byte_io} - response = requests.post('http://0.0.0.0:8080/uploadfile/', files=file) - if response.status_code == 200: - return response.json()['filename'] - return None - - -def request_answer(image, question): - response = requests.get( - f'http://0.0.0.0:8080/vqa?image={image}&question={question}') - if response.status_code == 200: - return response.json()['answer'] - return 'I do not understand. Please ask again.' 
diff --git a/spaces/Makiing/coolb-in-gtest/src/components/theme-toggle.tsx b/spaces/Makiing/coolb-in-gtest/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/Manjushri/MusicGen/audiocraft/utils/autocast.py b/spaces/Manjushri/MusicGen/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/Matthijs/mobilevit-deeplab-demo/app.py b/spaces/Matthijs/mobilevit-deeplab-demo/app.py deleted file mode 100644 index 55822266251ab0eb2fae9d06863530819a2cac3e..0000000000000000000000000000000000000000 --- a/spaces/Matthijs/mobilevit-deeplab-demo/app.py +++ /dev/null @@ -1,134 +0,0 @@ -import numpy as np -import gradio as gr -from PIL import Image - -import torch -from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation - - -model_checkpoint = "apple/deeplabv3-mobilevit-small" -feature_extractor = MobileViTFeatureExtractor.from_pretrained(model_checkpoint) -model = MobileViTForSemanticSegmentation.from_pretrained(model_checkpoint).eval() - -palette = np.array( -[ - [ 0, 0, 0], [192, 0, 0], [ 0, 192, 0], [192, 192, 0], - [ 0, 0, 192], [192, 0, 192], [ 0, 192, 192], [192, 192, 192], - [128, 0, 0], [255, 0, 0], [128, 192, 0], [255, 192, 0], - [128, 0, 192], [255, 0, 192], [128, 192, 192], [255, 192, 192], - [ 0, 128, 0], [192, 128, 0], [ 0, 255, 0], [192, 255, 0], - [ 0, 128, 192] -], -dtype=np.uint8) - -labels = [ - "background", - "aeroplane", - "bicycle", - "bird", - "boat", - "bottle", - "bus", - "car", - "cat", - "chair", - "cow", - "diningtable", - "dog", - "horse", - "motorbike", - "person", - "pottedplant", - "sheep", - "sofa", - 
"train", - "tvmonitor", -] - -# Draw the labels. Light colors use black text, dark colors use white text. -inverted = [ 0, 1, 4, 5, 8, 9, 12, 13, 16, 17, 20 ] -labels_colored = [] -for i in range(len(labels)): - r, g, b = palette[i] - label = labels[i] - color = "white" if i in inverted else "black" - text = "%s" % (r, g, b, color, label) - labels_colored.append(text) -labels_text = " ".join(labels_colored) - -title = "Semantic Segmentation with MobileViT and DeepLabV3" - -description = """ -The input image is resized and center cropped to 512×512 pixels. The segmentation output is 32×32 pixels.
-This model has been trained on Pascal VOC. -The classes are: -""" + labels_text + "

" - -article = """ -
- -

Sources:

- -

📜 MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer

- -

🏋️ Original pretrained weights from this GitHub repo

- -

🏙 Example images from this dataset

- -

-""" - -examples = [ - ["cat-3.jpg"], - ["construction-site.jpg"], - ["dog-cat.jpg"], - ["football-match.jpg"], -] - - -def predict(image): - with torch.no_grad(): - inputs = feature_extractor(image, return_tensors="pt") - outputs = model(**inputs) - - # Get preprocessed image. The pixel values don't need to be unnormalized - # for this particular model. - resized = (inputs["pixel_values"].numpy().squeeze().transpose(1, 2, 0)[..., ::-1] * 255).astype(np.uint8) - - # Class predictions for each pixel. - classes = outputs.logits.argmax(1).squeeze().numpy().astype(np.uint8) - - # Super slow method but it works... should probably improve this. - colored = np.zeros((classes.shape[0], classes.shape[1], 3), dtype=np.uint8) - for y in range(classes.shape[0]): - for x in range(classes.shape[1]): - colored[y, x] = palette[classes[y, x]] - - # Resize predictions to input size (not original size). - colored = Image.fromarray(colored) - colored = colored.resize((resized.shape[1], resized.shape[0]), resample=Image.Resampling.NEAREST) - - # Keep everything that is not background. - mask = (classes != 0) * 255 - mask = Image.fromarray(mask.astype(np.uint8)).convert("RGB") - mask = mask.resize((resized.shape[1], resized.shape[0]), resample=Image.Resampling.NEAREST) - - # Blend with the input image. - resized = Image.fromarray(resized) - highlighted = Image.blend(resized, mask, 0.4) - - #colored = colored.resize((256, 256), resample=Image.Resampling.BICUBIC) - #highlighted = highlighted.resize((256, 256), resample=Image.Resampling.BICUBIC) - - return colored, highlighted - - -gr.Interface( - fn=predict, - inputs=gr.inputs.Image(label="Upload image"), - outputs=[gr.outputs.Image(label="Classes"), gr.outputs.Image(label="Overlay")], - title=title, - description=description, - article=article, - examples=examples, -).launch() diff --git a/spaces/Mehdihassan/stable-ts/app.py b/spaces/Mehdihassan/stable-ts/app.py deleted file mode 100644 index 19c72ca9751ab5445964047bdfa1386d571ca1fd..0000000000000000000000000000000000000000 --- a/spaces/Mehdihassan/stable-ts/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import streamlit as st -import stable_whisper -import json -import torch -import soundfile as sf -import librosa -from io import BytesIO - -# Create a dropdown to select the model -model_options = ["base", "small", "medium", "large", "large-v2"] -default_model = "small" -model_name = st.selectbox("Select a model", options=model_options, index=model_options.index(default_model)) - -# Load the selected model -model = stable_whisper.load_model(model_name) - -# Create a file uploader for the audio file -audiofile = st.file_uploader("Upload an audio file", type=["mp3", "wav"]) - -# Create a button to run the prediction -if st.button('Transcribe'): - if audiofile is not None: - # Read the audio file into a numpy array - audio_data, sample_rate = sf.read(BytesIO(audiofile.read())) - # Resample the audio data if necessary - expected_sample_rate = 16000 # replace with the sample rate expected by the model - if sample_rate != expected_sample_rate: - audio_data = librosa.resample(audio_data, orig_sr=sample_rate, target_sr=expected_sample_rate) - # Convert the audio data to float - audio_data = torch.from_numpy(audio_data).float() - # Transcribe the audio file - result = model.transcribe(audio_data) - # Convert the result to JSON and display it - if isinstance(result, stable_whisper.WhisperResult): - result_json = result.to_dict() # replace with actual method if exists - else: - result_json = json.loads(result) - 
st.json(result_json) - else: - st.write("Please upload an audio file.") \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py deleted file mode 100644 index 0cd262999d8b2cb8e14a5c32190ae73f479d8e81..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/models/deeplabv3_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='ASPPHead', - in_channels=64, - in_index=4, - channels=16, - dilations=(1, 12, 24, 36), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/stare.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/stare.py deleted file mode 100644 index cbd14e0920e7f6a73baff1432e5a32ccfdb0dfae..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/stare.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class STAREDataset(CustomDataset): - """STARE dataset. - - In segmentation map annotation for STARE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.ah.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(STAREDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.ah.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/MirageML/sjc/sd1/ldm/data/lsun.py b/spaces/MirageML/sjc/sd1/ldm/data/lsun.py deleted file mode 100644 index 6256e45715ff0b57c53f985594d27cbbbff0e68e..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/sd1/ldm/data/lsun.py +++ /dev/null @@ -1,92 +0,0 @@ -import os -import numpy as np -import PIL -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms - - -class LSUNBase(Dataset): - def __init__(self, - txt_file, - data_root, - size=None, - interpolation="bicubic", - flip_p=0.5 - ): - self.data_paths = txt_file - self.data_root = data_root - with open(self.data_paths, "r") as f: - self.image_paths = f.read().splitlines() - self._length = len(self.image_paths) - self.labels = { - "relative_file_path_": [l for l in self.image_paths], - "file_path_": [os.path.join(self.data_root, l) - for l in self.image_paths], - } - - self.size = size - self.interpolation = {"linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - }[interpolation] - self.flip = transforms.RandomHorizontalFlip(p=flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = dict((k, self.labels[k][i]) for k in self.labels) - image = Image.open(example["file_path_"]) - if not image.mode == "RGB": - image = image.convert("RGB") - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - crop = min(img.shape[0], img.shape[1]) - h, w, = img.shape[0], img.shape[1] - img = img[(h - crop) // 2:(h + crop) // 2, - (w - crop) // 2:(w + crop) // 2] - - image = Image.fromarray(img) - if self.size is not None: - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip(image) - image = np.array(image).astype(np.uint8) - example["image"] = (image / 127.5 - 1.0).astype(np.float32) - return example - - -class LSUNChurchesTrain(LSUNBase): - def __init__(self, **kwargs): - super().__init__(txt_file="data/lsun/church_outdoor_train.txt", data_root="data/lsun/churches", **kwargs) - - -class LSUNChurchesValidation(LSUNBase): - def __init__(self, flip_p=0., **kwargs): - super().__init__(txt_file="data/lsun/church_outdoor_val.txt", data_root="data/lsun/churches", - flip_p=flip_p, **kwargs) - - -class LSUNBedroomsTrain(LSUNBase): - def __init__(self, **kwargs): - super().__init__(txt_file="data/lsun/bedrooms_train.txt", data_root="data/lsun/bedrooms", **kwargs) - - -class LSUNBedroomsValidation(LSUNBase): - def __init__(self, flip_p=0.0, **kwargs): - super().__init__(txt_file="data/lsun/bedrooms_val.txt", data_root="data/lsun/bedrooms", - flip_p=flip_p, **kwargs) - - -class LSUNCatsTrain(LSUNBase): - def __init__(self, **kwargs): - super().__init__(txt_file="data/lsun/cat_train.txt", data_root="data/lsun/cats", **kwargs) - - -class LSUNCatsValidation(LSUNBase): - def __init__(self, flip_p=0., **kwargs): - super().__init__(txt_file="data/lsun/cat_val.txt", data_root="data/lsun/cats", - flip_p=flip_p, **kwargs) diff --git a/spaces/MoonQiu/LongerCrafter/lvdm/common.py b/spaces/MoonQiu/LongerCrafter/lvdm/common.py deleted file mode 100644 index 35569b25aa97236d7d083d8b6ef0c0f3187c2388..0000000000000000000000000000000000000000 --- 
a/spaces/MoonQiu/LongerCrafter/lvdm/common.py +++ /dev/null @@ -1,95 +0,0 @@ -import math -from inspect import isfunction -import torch -from torch import nn -import torch.distributed as dist - - -def gather_data(data, return_np=True): - ''' gather data from multiple processes to one list ''' - data_list = [torch.zeros_like(data) for _ in range(dist.get_world_size())] - dist.all_gather(data_list, data) # gather not supported with NCCL - if return_np: - data_list = [data.cpu().numpy() for data in data_list] - return data_list - -def autocast(f): - def do_autocast(*args, **kwargs): - with torch.cuda.amp.autocast(enabled=True, - dtype=torch.get_autocast_gpu_dtype(), - cache_enabled=torch.is_autocast_cache_enabled()): - return f(*args, **kwargs) - return do_autocast - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - -def exists(val): - return val is not None - -def identity(*args, **kwargs): - return nn.Identity() - -def uniq(arr): - return{el: True for el in arr}.keys() - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - -def ismap(x): - if not isinstance(x, torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] > 3) - -def isimage(x): - if not isinstance(x,torch.Tensor): - return False - return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1) - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - -def shape_to_str(x): - shape_str = "x".join([str(x) for x in x.shape]) - return shape_str - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - -ckpt = torch.utils.checkpoint.checkpoint -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. 
- """ - if flag: - return ckpt(func, *inputs) - else: - return func(*inputs) - diff --git a/spaces/MrBodean/VoiceClone/demo_cli.py b/spaces/MrBodean/VoiceClone/demo_cli.py deleted file mode 100644 index 59cdf78169fb53a97464e3b11048a90ca03885e3..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/demo_cli.py +++ /dev/null @@ -1,203 +0,0 @@ -from encoder.params_model import model_embedding_size as speaker_embedding_size -from utils.argutils import print_args -from utils.modelutils import check_model_paths -from synthesizer.inference import Synthesizer -from encoder import inference as encoder -from vocoder import inference as vocoder -from pathlib import Path -import numpy as np -import soundfile as sf -import librosa -import argparse -import torch -import sys -import os -from audioread.exceptions import NoBackendError - -if __name__ == '__main__': - ## Info & args - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument("-e", "--enc_model_fpath", type=Path, - default="encpretrained.pt", - help="Path to a saved encoder") - parser.add_argument("-s", "--syn_model_fpath", type=Path, - default="synpretrained.pt", - help="Path to a saved synthesizer") - parser.add_argument("-v", "--voc_model_fpath", type=Path, - default="vocpretrained.pt", - help="Path to a saved vocoder") - parser.add_argument("--cpu", action="store_true", help="If True, processing is done on CPU, even when a GPU is available.") - parser.add_argument("--no_sound", action="store_true", help="If True, audio won't be played.") - parser.add_argument("--seed", type=int, default=None, help="Optional random number seed value to make toolbox deterministic.") - parser.add_argument("--no_mp3_support", action="store_true", help="If True, disallows loading mp3 files to prevent audioread errors when ffmpeg is not installed.") - parser.add_argument("-audio", "--audio_path", type=Path, required = True, - help="Path to a audio file") - parser.add_argument("--text", type=str, required = True, help="Text Input") - args = parser.parse_args() - print_args(args, parser) - if not args.no_sound: - import sounddevice as sd - - if args.cpu: - # Hide GPUs from Pytorch to force CPU processing - os.environ["CUDA_VISIBLE_DEVICES"] = "-1" - - if not args.no_mp3_support: - try: - librosa.load("samples/1320_00000.mp3") - except NoBackendError: - print("Librosa will be unable to open mp3 files if additional software is not installed.\n" - "Please install ffmpeg or add the '--no_mp3_support' option to proceed without support for mp3 files.") - exit(-1) - - print("Running a test of your configuration...\n") - - if torch.cuda.is_available(): - device_id = torch.cuda.current_device() - gpu_properties = torch.cuda.get_device_properties(device_id) - ## Print some environment information (for debugging purposes) - print("Found %d GPUs available. Using GPU %d (%s) of compute capability %d.%d with " - "%.1fGb total memory.\n" % - (torch.cuda.device_count(), - device_id, - gpu_properties.name, - gpu_properties.major, - gpu_properties.minor, - gpu_properties.total_memory / 1e9)) - else: - print("Using CPU for inference.\n") - - ## Remind the user to download pretrained models if needed - check_model_paths(encoder_path=args.enc_model_fpath, - synthesizer_path=args.syn_model_fpath, - vocoder_path=args.voc_model_fpath) - - ## Load the models one by one. 
- print("Preparing the encoder, the synthesizer and the vocoder...") - encoder.load_model(args.enc_model_fpath) - synthesizer = Synthesizer(args.syn_model_fpath) - vocoder.load_model(args.voc_model_fpath) - - - ## Run a test - # print("Testing your configuration with small inputs.") - # # Forward an audio waveform of zeroes that lasts 1 second. Notice how we can get the encoder's - # # sampling rate, which may differ. - # # If you're unfamiliar with digital audio, know that it is encoded as an array of floats - # # (or sometimes integers, but mostly floats in this projects) ranging from -1 to 1. - # # The sampling rate is the number of values (samples) recorded per second, it is set to - # # 16000 for the encoder. Creating an array of length will always correspond - # # to an audio of 1 second. - # print(" Testing the encoder...") - # encoder.embed_utterance(np.zeros(encoder.sampling_rate)) - - # # Create a dummy embedding. You would normally use the embedding that encoder.embed_utterance - # # returns, but here we're going to make one ourselves just for the sake of showing that it's - # # possible. - # embed = np.random.rand(speaker_embedding_size) - # # Embeddings are L2-normalized (this isn't important here, but if you want to make your own - # # embeddings it will be). - # embed /= np.linalg.norm(embed) - # # The synthesizer can handle multiple inputs with batching. Let's create another embedding to - # # illustrate that - # embeds = [embed, np.zeros(speaker_embedding_size)] - # texts = ["test 1", "test 2"] - # print(" Testing the synthesizer... (loading the model will output a lot of text)") - # mels = synthesizer.synthesize_spectrograms(texts, embeds) - - # # The vocoder synthesizes one waveform at a time, but it's more efficient for long ones. We - # # can concatenate the mel spectrograms to a single one. - # mel = np.concatenate(mels, axis=1) - # # The vocoder can take a callback function to display the generation. More on that later. For - # # now we'll simply hide it like this: - # no_action = lambda *args: None - # print(" Testing the vocoder...") - # # For the sake of making this test short, we'll pass a short target length. The target length - # # is the length of the wav segments that are processed in parallel. E.g. for audio sampled - # # at 16000 Hertz, a target length of 8000 means that the target audio will be cut in chunks of - # # 0.5 seconds which will all be generated together. The parameters here are absurdly short, and - # # that has a detrimental effect on the quality of the audio. The default parameters are - # # recommended in general. - # vocoder.infer_waveform(mel, target=200, overlap=50, progress_callback=no_action) - - print("All test passed! You can now synthesize speech.\n\n") - - - ## Interactive speech generation - print("This is a GUI-less example of interface to SV2TTS. The purpose of this script is to " - "show how you can interface this project easily with your own. See the source code for " - "an explanation of what is happening.\n") - - print("Interactive generation loop") - # while True: - # Get the reference audio filepath - message = "Reference voice: enter an audio filepath of a voice to be cloned (mp3, " "wav, m4a, flac, ...):\n" - in_fpath = args.audio_path - - if in_fpath.suffix.lower() == ".mp3" and args.no_mp3_support: - print("Can't Use mp3 files please try again:") - ## Computing the embedding - # First, we load the wav using the function that the speaker encoder provides. 
This is - # important: there is preprocessing that must be applied. - - # The following two methods are equivalent: - # - Directly load from the filepath: - preprocessed_wav = encoder.preprocess_wav(in_fpath) - # - If the wav is already loaded: - original_wav, sampling_rate = librosa.load(str(in_fpath)) - preprocessed_wav = encoder.preprocess_wav(original_wav, sampling_rate) - print("Loaded file succesfully") - - # Then we derive the embedding. There are many functions and parameters that the - # speaker encoder interfaces. These are mostly for in-depth research. You will typically - # only use this function (with its default parameters): - embed = encoder.embed_utterance(preprocessed_wav) - print("Created the embedding") - - - ## Generating the spectrogram - text = args.text - - # If seed is specified, reset torch seed and force synthesizer reload - if args.seed is not None: - torch.manual_seed(args.seed) - synthesizer = Synthesizer(args.syn_model_fpath) - - # The synthesizer works in batch, so you need to put your data in a list or numpy array - texts = [text] - embeds = [embed] - # If you know what the attention layer alignments are, you can retrieve them here by - # passing return_alignments=True - specs = synthesizer.synthesize_spectrograms(texts, embeds) - spec = specs[0] - print("Created the mel spectrogram") - - - ## Generating the waveform - print("Synthesizing the waveform:") - - # If seed is specified, reset torch seed and reload vocoder - if args.seed is not None: - torch.manual_seed(args.seed) - vocoder.load_model(args.voc_model_fpath) - - # Synthesizing the waveform is fairly straightforward. Remember that the longer the - # spectrogram, the more time-efficient the vocoder. - generated_wav = vocoder.infer_waveform(spec) - - - ## Post-generation - # There's a bug with sounddevice that makes the audio cut one second earlier, so we - # pad it. 
- generated_wav = np.pad(generated_wav, (0, synthesizer.sample_rate), mode="constant") - - # Trim excess silences to compensate for gaps in spectrograms (issue #53) - generated_wav = encoder.preprocess_wav(generated_wav) - - # Save it on the disk - filename = "demo_output_1.wav" - print(generated_wav.dtype) - sf.write(filename, generated_wav.astype(np.float32), synthesizer.sample_rate) - print("\nSaved output as %s\n\n" % filename) diff --git a/spaces/Mrchuw/MagicPrompt-Stable-Diffusion/README.md b/spaces/Mrchuw/MagicPrompt-Stable-Diffusion/README.md deleted file mode 100644 index 0f1fb02ec78569e70c36b61f4503376f19d02cf8..0000000000000000000000000000000000000000 --- a/spaces/Mrchuw/MagicPrompt-Stable-Diffusion/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: MagicPrompt Stable Diffusion -emoji: 🍄 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: phenomenon1981/MagicPrompt-Stable-Diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/AoAModel.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/AoAModel.py deleted file mode 100644 index 7925549fc7d134a98f8e12b6b4b741b03ab63c78..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/AoAModel.py +++ /dev/null @@ -1,228 +0,0 @@ -# Implementation for paper 'Attention on Attention for Image Captioning' -# https://arxiv.org/abs/1908.06954 - -# RT: Code from original author's repo: https://github.com/husthuaan/AoANet/ - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .AttModel import pack_wrapper, AttModel, Attention -from .TransformerModel import LayerNorm, attention, clones, SublayerConnection, PositionwiseFeedForward - -class MultiHeadedDotAttention(nn.Module): - def __init__(self, h, d_model, dropout=0.1, scale=1, project_k_v=1, use_output_layer=1, do_aoa=0, norm_q=0, dropout_aoa=0.3): - super(MultiHeadedDotAttention, self).__init__() - assert d_model * scale % h == 0 - # We assume d_v always equals d_k - self.d_k = d_model * scale // h - self.h = h - - # Do we need to do linear projections on K and V? - self.project_k_v = project_k_v - - # normalize the query? - if norm_q: - self.norm = LayerNorm(d_model) - else: - self.norm = lambda x:x - self.linears = clones(nn.Linear(d_model, d_model * scale), 1 + 2 * project_k_v) - - # output linear layer after the multi-head attention? - self.output_layer = nn.Linear(d_model * scale, d_model) - - # apply aoa after attention? - self.use_aoa = do_aoa - if self.use_aoa: - self.aoa_layer = nn.Sequential(nn.Linear((1 + scale) * d_model, 2 * d_model), nn.GLU()) - # dropout to the input of AoA layer - if dropout_aoa > 0: - self.dropout_aoa = nn.Dropout(p=dropout_aoa) - else: - self.dropout_aoa = lambda x:x - - if self.use_aoa or not use_output_layer: - # AoA doesn't need the output linear layer - del self.output_layer - self.output_layer = lambda x:x - - self.attn = None - self.dropout = nn.Dropout(p=dropout) - - def forward(self, query, value, key, mask=None): - if mask is not None: - if len(mask.size()) == 2: - mask = mask.unsqueeze(-2) - # Same mask applied to all h heads. 
- mask = mask.unsqueeze(1) - - single_query = 0 - if len(query.size()) == 2: - single_query = 1 - query = query.unsqueeze(1) - - nbatches = query.size(0) - - query = self.norm(query) - - # Do all the linear projections in batch from d_model => h x d_k - if self.project_k_v == 0: - query_ = self.linears[0](query).view(nbatches, -1, self.h, self.d_k).transpose(1, 2) - key_ = key.view(nbatches, -1, self.h, self.d_k).transpose(1, 2) - value_ = value.view(nbatches, -1, self.h, self.d_k).transpose(1, 2) - else: - query_, key_, value_ = \ - [l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2) - for l, x in zip(self.linears, (query, key, value))] - - # Apply attention on all the projected vectors in batch. - x, self.attn = attention(query_, key_, value_, mask=mask, - dropout=self.dropout) - - # "Concat" using a view - x = x.transpose(1, 2).contiguous() \ - .view(nbatches, -1, self.h * self.d_k) - - if self.use_aoa: - # Apply AoA - x = self.aoa_layer(self.dropout_aoa(torch.cat([x, query], -1))) - x = self.output_layer(x) - - if single_query: - query = query.squeeze(1) - x = x.squeeze(1) - return x - -class AoA_Refiner_Layer(nn.Module): - def __init__(self, size, self_attn, feed_forward, dropout): - super(AoA_Refiner_Layer, self).__init__() - self.self_attn = self_attn - self.feed_forward = feed_forward - self.use_ff = 0 - if self.feed_forward is not None: - self.use_ff = 1 - self.sublayer = clones(SublayerConnection(size, dropout), 1+self.use_ff) - self.size = size - - def forward(self, x, mask): - x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask)) - return self.sublayer[-1](x, self.feed_forward) if self.use_ff else x - -class AoA_Refiner_Core(nn.Module): - def __init__(self, opt): - super(AoA_Refiner_Core, self).__init__() - attn = MultiHeadedDotAttention(opt.num_heads, opt.rnn_size, project_k_v=1, scale=opt.multi_head_scale, do_aoa=opt.refine_aoa, norm_q=0, dropout_aoa=getattr(opt, 'dropout_aoa', 0.3)) - layer = AoA_Refiner_Layer(opt.rnn_size, attn, PositionwiseFeedForward(opt.rnn_size, 2048, 0.1) if opt.use_ff else None, 0.1) - self.layers = clones(layer, 6) - self.norm = LayerNorm(layer.size) - - def forward(self, x, mask): - for layer in self.layers: - x = layer(x, mask) - return self.norm(x) - -class AoA_Decoder_Core(nn.Module): - def __init__(self, opt): - super(AoA_Decoder_Core, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - self.d_model = opt.rnn_size - self.use_multi_head = opt.use_multi_head - self.multi_head_scale = opt.multi_head_scale - self.use_ctx_drop = getattr(opt, 'ctx_drop', 0) - self.out_res = getattr(opt, 'out_res', 0) - self.decoder_type = getattr(opt, 'decoder_type', 'AoA') - self.att_lstm = nn.LSTMCell(opt.input_encoding_size + opt.rnn_size, opt.rnn_size) # we, fc, h^2_t-1 - self.out_drop = nn.Dropout(self.drop_prob_lm) - - if self.decoder_type == 'AoA': - # AoA layer - self.att2ctx = nn.Sequential(nn.Linear(self.d_model * opt.multi_head_scale + opt.rnn_size, 2 * opt.rnn_size), nn.GLU()) - elif self.decoder_type == 'LSTM': - # LSTM layer - self.att2ctx = nn.LSTMCell(self.d_model * opt.multi_head_scale + opt.rnn_size, opt.rnn_size) - else: - # Base linear layer - self.att2ctx = nn.Sequential(nn.Linear(self.d_model * opt.multi_head_scale + opt.rnn_size, opt.rnn_size), nn.ReLU()) - - # if opt.use_multi_head == 1: # TODO, not implemented for now - # self.attention = MultiHeadedAddAttention(opt.num_heads, opt.d_model, scale=opt.multi_head_scale) - if opt.use_multi_head == 2: - self.attention = MultiHeadedDotAttention(opt.num_heads, opt.rnn_size, 
project_k_v=0, scale=opt.multi_head_scale, use_output_layer=0, do_aoa=0, norm_q=1) - else: - self.attention = Attention(opt) - - if self.use_ctx_drop: - self.ctx_drop = nn.Dropout(self.drop_prob_lm) - else: - self.ctx_drop = lambda x :x - - def forward(self, xt, mean_feats, att_feats, p_att_feats, state, att_masks=None): - # state[0][1] is the context vector at the last step - h_att, c_att = self.att_lstm(torch.cat([xt, mean_feats + self.ctx_drop(state[0][1])], 1), (state[0][0], state[1][0])) - - if self.use_multi_head == 2: - att = self.attention(h_att, p_att_feats.narrow(2, 0, self.multi_head_scale * self.d_model), p_att_feats.narrow(2, self.multi_head_scale * self.d_model, self.multi_head_scale * self.d_model), att_masks) - else: - att = self.attention(h_att, att_feats, p_att_feats, att_masks) - - ctx_input = torch.cat([att, h_att], 1) - if self.decoder_type == 'LSTM': - output, c_logic = self.att2ctx(ctx_input, (state[0][1], state[1][1])) - state = (torch.stack((h_att, output)), torch.stack((c_att, c_logic))) - else: - output = self.att2ctx(ctx_input) - # save the context vector to state[0][1] - state = (torch.stack((h_att, output)), torch.stack((c_att, state[1][1]))) - - if self.out_res: - # add residual connection - output = output + h_att - - output = self.out_drop(output) - return output, state - -class AoAModel(AttModel): - def __init__(self, opt): - super(AoAModel, self).__init__(opt) - self.num_layers = 2 - # mean pooling - self.use_mean_feats = getattr(opt, 'mean_feats', 1) - if opt.use_multi_head == 2: - del self.ctx2att - self.ctx2att = nn.Linear(opt.rnn_size, 2 * opt.multi_head_scale * opt.rnn_size) - - if self.use_mean_feats: - del self.fc_embed - if opt.refine: - self.refiner = AoA_Refiner_Core(opt) - else: - self.refiner = lambda x,y : x - self.core = AoA_Decoder_Core(opt) - - self.d_model = getattr(opt, 'd_model', opt.input_encoding_size) - - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - att_feats, att_masks = self.clip_att(att_feats, att_masks) - - # embed att feats - att_feats = pack_wrapper(self.att_embed, att_feats, att_masks) - att_feats = self.refiner(att_feats, att_masks) - - if self.use_mean_feats: - # meaning pooling - if att_masks is None: - mean_feats = torch.mean(att_feats, dim=1) - else: - mean_feats = (torch.sum(att_feats * att_masks.unsqueeze(-1), 1) / torch.sum(att_masks.unsqueeze(-1), 1)) - else: - mean_feats = self.fc_embed(fc_feats) - - # Project the attention feats first to reduce memory and computation. - p_att_feats = self.ctx2att(att_feats) - - return mean_feats, att_feats, p_att_feats, att_masks \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/data/sentence_retrieval_lib.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/data/sentence_retrieval_lib.py deleted file mode 100644 index d8e83ae579f8221b93e790ea62b91c3d6d2b9e90..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/data/sentence_retrieval_lib.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""BERT library to process data for cross lingual sentence retrieval task.""" - -import os - -from absl import logging -from official.nlp.bert import tokenization -from official.nlp.data import classifier_data_lib - - -class BuccProcessor(classifier_data_lib.DataProcessor): - """Procssor for Xtreme BUCC data set.""" - supported_languages = ["de", "fr", "ru", "zh"] - - def __init__(self, - process_text_fn=tokenization.convert_to_unicode): - super(BuccProcessor, self).__init__(process_text_fn) - self.languages = BuccProcessor.supported_languages - - def get_dev_examples(self, data_dir, file_pattern): - return self._create_examples( - self._read_tsv(os.path.join(data_dir, file_pattern.format("dev"))), - "sample") - - def get_test_examples(self, data_dir, file_pattern): - return self._create_examples( - self._read_tsv(os.path.join(data_dir, file_pattern.format("test"))), - "test") - - @staticmethod - def get_processor_name(): - """See base class.""" - return "BUCC" - - def _create_examples(self, lines, set_type): - """Creates examples for the training and dev sets.""" - examples = [] - for (i, line) in enumerate(lines): - guid = "%s-%s" % (set_type, i) - int_iden = int(line[0].split("-")[1]) - text_a = self.process_text_fn(line[1]) - examples.append( - classifier_data_lib.InputExample( - guid=guid, text_a=text_a, int_iden=int_iden)) - return examples - - -class TatoebaProcessor(classifier_data_lib.DataProcessor): - """Procssor for Xtreme Tatoeba data set.""" - supported_languages = [ - "af", "ar", "bg", "bn", "de", "el", "es", "et", "eu", "fa", "fi", "fr", - "he", "hi", "hu", "id", "it", "ja", "jv", "ka", "kk", "ko", "ml", "mr", - "nl", "pt", "ru", "sw", "ta", "te", "th", "tl", "tr", "ur", "vi", "zh" - ] - - def __init__(self, - process_text_fn=tokenization.convert_to_unicode): - super(TatoebaProcessor, self).__init__(process_text_fn) - self.languages = TatoebaProcessor.supported_languages - - def get_test_examples(self, data_dir, file_path): - return self._create_examples( - self._read_tsv(os.path.join(data_dir, file_path)), "test") - - @staticmethod - def get_processor_name(): - """See base class.""" - return "TATOEBA" - - def _create_examples(self, lines, set_type): - """Creates examples for the training and dev sets.""" - examples = [] - for (i, line) in enumerate(lines): - guid = "%s-%s" % (set_type, i) - text_a = self.process_text_fn(line[0]) - examples.append( - classifier_data_lib.InputExample( - guid=guid, text_a=text_a, int_iden=i)) - return examples - - -def generate_sentence_retrevial_tf_record(processor, - data_dir, - tokenizer, - eval_data_output_path=None, - test_data_output_path=None, - max_seq_length=128): - """Generates the tf records for retrieval tasks. - - Args: - processor: Input processor object to be used for generating data. Subclass - of `DataProcessor`. - data_dir: Directory that contains train/eval data to process. Data files - should be in from. - tokenizer: The tokenizer to be applied on the data. - eval_data_output_path: Output to which processed tf record for evaluation - will be saved. - test_data_output_path: Output to which processed tf record for testing - will be saved. Must be a pattern template with {} if processor has - language specific test data. - max_seq_length: Maximum sequence length of the to be generated - training/eval data. 
- - Returns: - A dictionary containing input meta data. - """ - assert eval_data_output_path or test_data_output_path - - if processor.get_processor_name() == "BUCC": - path_pattern = "{}-en.{{}}.{}" - - if processor.get_processor_name() == "TATOEBA": - path_pattern = "{}-en.{}" - - meta_data = { - "processor_type": processor.get_processor_name(), - "max_seq_length": max_seq_length, - "number_eval_data": {}, - "number_test_data": {}, - } - logging.info("Start to process %s task data", processor.get_processor_name()) - - for lang_a in processor.languages: - for lang_b in [lang_a, "en"]: - if eval_data_output_path: - eval_input_data_examples = processor.get_dev_examples( - data_dir, os.path.join(path_pattern.format(lang_a, lang_b))) - - num_eval_data = len(eval_input_data_examples) - logging.info("Processing %d dev examples of %s-en.%s", num_eval_data, - lang_a, lang_b) - output_file = os.path.join( - eval_data_output_path, - "{}-en-{}.{}.tfrecords".format(lang_a, lang_b, "dev")) - classifier_data_lib.file_based_convert_examples_to_features( - eval_input_data_examples, None, max_seq_length, tokenizer, - output_file, None) - meta_data["number_eval_data"][f"{lang_a}-en.{lang_b}"] = num_eval_data - - if test_data_output_path: - test_input_data_examples = processor.get_test_examples( - data_dir, os.path.join(path_pattern.format(lang_a, lang_b))) - - num_test_data = len(test_input_data_examples) - logging.info("Processing %d test examples of %s-en.%s", num_test_data, - lang_a, lang_b) - output_file = os.path.join( - test_data_output_path, - "{}-en-{}.{}.tfrecords".format(lang_a, lang_b, "test")) - classifier_data_lib.file_based_convert_examples_to_features( - test_input_data_examples, None, max_seq_length, tokenizer, - output_file, None) - meta_data["number_test_data"][f"{lang_a}-en.{lang_b}"] = num_test_data - - return meta_data diff --git a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/core.py b/spaces/NCTCMumbai/NCTC/models/official/utils/flags/core.py deleted file mode 100644 index fa36944893a579fe5d4a65af9262651db0abc1ba..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/utils/flags/core.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Public interface for flag definition. - -See _example.py for detailed instructions on defining flags. 
-""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import sys -from six.moves import shlex_quote - -from absl import app as absl_app -from absl import flags - -from official.utils.flags import _base -from official.utils.flags import _benchmark -from official.utils.flags import _conventions -from official.utils.flags import _device -from official.utils.flags import _distribution -from official.utils.flags import _misc -from official.utils.flags import _performance - - -def set_defaults(**kwargs): - for key, value in kwargs.items(): - flags.FLAGS.set_default(name=key, value=value) - - -def parse_flags(argv=None): - """Reset flags and reparse. Currently only used in testing.""" - flags.FLAGS.unparse_flags() - absl_app.parse_flags_with_usage(argv or sys.argv) - - -def register_key_flags_in_core(f): - """Defines a function in core.py, and registers its key flags. - - absl uses the location of a flags.declare_key_flag() to determine the context - in which a flag is key. By making all declares in core, this allows model - main functions to call flags.adopt_module_key_flags() on core and correctly - chain key flags. - - Args: - f: The function to be wrapped - - Returns: - The "core-defined" version of the input function. - """ - - def core_fn(*args, **kwargs): - key_flags = f(*args, **kwargs) - [flags.declare_key_flag(fl) for fl in key_flags] # pylint: disable=expression-not-assigned - return core_fn - - -define_base = register_key_flags_in_core(_base.define_base) -# We have define_base_eager for compatibility, since it used to be a separate -# function from define_base. -define_base_eager = define_base -define_log_steps = register_key_flags_in_core(_benchmark.define_log_steps) -define_benchmark = register_key_flags_in_core(_benchmark.define_benchmark) -define_device = register_key_flags_in_core(_device.define_device) -define_image = register_key_flags_in_core(_misc.define_image) -define_performance = register_key_flags_in_core(_performance.define_performance) -define_distribution = register_key_flags_in_core( - _distribution.define_distribution) - - -help_wrap = _conventions.help_wrap - - -get_num_gpus = _base.get_num_gpus -get_tf_dtype = _performance.get_tf_dtype -get_loss_scale = _performance.get_loss_scale -DTYPE_MAP = _performance.DTYPE_MAP -require_cloud_storage = _device.require_cloud_storage - -def _get_nondefault_flags_as_dict(): - """Returns the nondefault flags as a dict from flag name to value.""" - nondefault_flags = {} - for flag_name in flags.FLAGS: - flag_value = getattr(flags.FLAGS, flag_name) - if (flag_name != flags.FLAGS[flag_name].short_name and - flag_value != flags.FLAGS[flag_name].default): - nondefault_flags[flag_name] = flag_value - return nondefault_flags - - -def get_nondefault_flags_as_str(): - """Returns flags as a string that can be passed as command line arguments. - - E.g., returns: "--batch_size=256 --use_synthetic_data" for the following code - block: - - ``` - flags.FLAGS.batch_size = 256 - flags.FLAGS.use_synthetic_data = True - print(get_nondefault_flags_as_str()) - ``` - - Only flags with nondefault values are returned, as passing default flags as - command line arguments has no effect. - - Returns: - A string with the flags, that can be passed as command line arguments to a - program to use the flags. 
- """ - nondefault_flags = _get_nondefault_flags_as_dict() - flag_strings = [] - for name, value in sorted(nondefault_flags.items()): - if isinstance(value, bool): - flag_str = '--{}'.format(name) if value else '--no{}'.format(name) - elif isinstance(value, list): - flag_str = '--{}={}'.format(name, ','.join(value)) - else: - flag_str = '--{}={}'.format(name, value) - flag_strings.append(flag_str) - return ' '.join(shlex_quote(flag_str) for flag_str in flag_strings) diff --git a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_plot_trajectory.py b/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_plot_trajectory.py deleted file mode 100644 index 08273a83b512fa3100f7df6e20d41d666b037aad..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/cognitive_mapping_and_planning/scripts/script_plot_trajectory.py +++ /dev/null @@ -1,339 +0,0 @@ -# Copyright 2016 The TensorFlow Authors All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== - -r""" -Code for plotting trajectories in the top view, and also plot first person views -from saved trajectories. Does not run the network but only loads the mesh data -to plot the view points. - CUDA_VISIBLE_DEVICES=0 LD_LIBRARY_PATH=/opt/cuda-8.0/lib64:/opt/cudnnv51/lib64 - PYTHONPATH='.' 
PYOPENGL_PLATFORM=egl python scripts/script_plot_trajectory.py \ - --first_person --num_steps 40 \ - --config_name cmp.lmap_Msc.clip5.sbpd_d_r2r \ - --imset test --alsologtostderr --base_dir output --out_dir vis - -""" -import os, sys, numpy as np, copy -import matplotlib -matplotlib.use("Agg") -import matplotlib.pyplot as plt -import matplotlib.animation as animation -from matplotlib.gridspec import GridSpec - -import tensorflow as tf -from tensorflow.contrib import slim -import cv2 -import logging -from tensorflow.python.platform import gfile -from tensorflow.python.platform import app -from tensorflow.python.platform import flags - -from datasets import nav_env -import scripts.script_nav_agent_release as sna -import src.file_utils as fu -from src import graph_utils -from src import utils -FLAGS = flags.FLAGS - -flags.DEFINE_string('out_dir', 'vis', 'Directory where to store the output') -flags.DEFINE_string('type', '', 'Optional type.') -flags.DEFINE_bool('first_person', False, 'Visualize the first person view.') -flags.DEFINE_bool('top_view', False, 'Visualize the trajectory in the top view.') -flags.DEFINE_integer('num_steps', 40, 'Number of steps to run the model for.') -flags.DEFINE_string('imset', 'test', '') -flags.DEFINE_string('base_dir', 'output', 'Cache directory.') - -def _get_suffix_str(): - return '' - - -def _load_trajectory(): - base_dir = FLAGS.base_dir - config_name = FLAGS.config_name+_get_suffix_str() - - dir_name = os.path.join(base_dir, FLAGS.type, config_name) - logging.info('Waiting for snapshot in directory %s.', dir_name) - last_checkpoint = slim.evaluation.wait_for_new_checkpoint(dir_name, None) - checkpoint_iter = int(os.path.basename(last_checkpoint).split('-')[1]) - - # Load the distances. - a = utils.load_variables(os.path.join(dir_name, 'bench_on_'+FLAGS.imset, - 'all_locs_at_t_{:d}.pkl'.format(checkpoint_iter))) - return a - -def _compute_hardness(): - # Load the stanford data to compute the hardness. - if FLAGS.type == '': - args = sna.get_args_for_config(FLAGS.config_name+'+bench_'+FLAGS.imset) - else: - args = sna.get_args_for_config(FLAGS.type+'.'+FLAGS.config_name+'+bench_'+FLAGS.imset) - - args.navtask.logdir = None - R = lambda: nav_env.get_multiplexer_class(args.navtask, 0) - R = R() - - rng_data = [np.random.RandomState(0), np.random.RandomState(0)] - - # Sample a room. - h_dists = [] - gt_dists = [] - for i in range(250): - e = R.sample_env(rng_data) - nodes = e.task.nodes - - # Initialize the agent. - init_env_state = e.reset(rng_data) - - gt_dist_to_goal = [e.episode.dist_to_goal[0][j][s] - for j, s in enumerate(e.episode.start_node_ids)] - - for j in range(args.navtask.task_params.batch_size): - start_node_id = e.episode.start_node_ids[j] - end_node_id =e.episode.goal_node_ids[0][j] - h_dist = graph_utils.heuristic_fn_vec( - nodes[[start_node_id],:], nodes[[end_node_id], :], - n_ori=args.navtask.task_params.n_ori, - step_size=args.navtask.task_params.step_size)[0][0] - gt_dist = e.episode.dist_to_goal[0][j][start_node_id] - h_dists.append(h_dist) - gt_dists.append(gt_dist) - - h_dists = np.array(h_dists) - gt_dists = np.array(gt_dists) - e = R.sample_env([np.random.RandomState(0), np.random.RandomState(0)]) - input = e.get_common_data() - orig_maps = input['orig_maps'][0,0,:,:,0] - return h_dists, gt_dists, orig_maps - -def plot_trajectory_first_person(dt, orig_maps, out_dir): - out_dir = os.path.join(out_dir, FLAGS.config_name+_get_suffix_str(), - FLAGS.imset) - fu.makedirs(out_dir) - - # Load the model so that we can render. 
- plt.set_cmap('gray') - samples_per_action = 8; wait_at_action = 0; - - Writer = animation.writers['mencoder'] - writer = Writer(fps=3*(samples_per_action+wait_at_action), - metadata=dict(artist='anonymous'), bitrate=1800) - - args = sna.get_args_for_config(FLAGS.config_name + '+bench_'+FLAGS.imset) - args.navtask.logdir = None - navtask_ = copy.deepcopy(args.navtask) - navtask_.camera_param.modalities = ['rgb'] - navtask_.task_params.modalities = ['rgb'] - sz = 512 - navtask_.camera_param.height = sz - navtask_.camera_param.width = sz - navtask_.task_params.img_height = sz - navtask_.task_params.img_width = sz - R = lambda: nav_env.get_multiplexer_class(navtask_, 0) - R = R() - b = R.buildings[0] - - f = [0 for _ in range(wait_at_action)] + \ - [float(_)/samples_per_action for _ in range(samples_per_action)]; - - # Generate things for it to render. - inds_to_do = [] - inds_to_do += [1, 4, 10] #1291, 1268, 1273, 1289, 1302, 1426, 1413, 1449, 1399, 1390] - - for i in inds_to_do: - fig = plt.figure(figsize=(10,8)) - gs = GridSpec(3,4) - gs.update(wspace=0.05, hspace=0.05, left=0.0, top=0.97, right=1.0, bottom=0.) - ax = fig.add_subplot(gs[:,:-1]) - ax1 = fig.add_subplot(gs[0,-1]) - ax2 = fig.add_subplot(gs[1,-1]) - ax3 = fig.add_subplot(gs[2,-1]) - axes = [ax, ax1, ax2, ax3] - # ax = fig.add_subplot(gs[:,:]) - # axes = [ax] - for ax in axes: - ax.set_axis_off() - - node_ids = dt['all_node_ids'][i, :, 0]*1 - # Prune so that last node is not repeated more than 3 times? - if np.all(node_ids[-4:] == node_ids[-1]): - while node_ids[-4] == node_ids[-1]: - node_ids = node_ids[:-1] - num_steps = np.minimum(FLAGS.num_steps, len(node_ids)) - - xyt = b.to_actual_xyt_vec(b.task.nodes[node_ids]) - xyt_diff = xyt[1:,:] - xyt[:-1:,:] - xyt_diff[:,2] = np.mod(xyt_diff[:,2], 4) - ind = np.where(xyt_diff[:,2] == 3)[0] - xyt_diff[ind, 2] = -1 - xyt_diff = np.expand_dims(xyt_diff, axis=1) - to_cat = [xyt_diff*_ for _ in f] - perturbs_all = np.concatenate(to_cat, axis=1) - perturbs_all = np.concatenate([perturbs_all, np.zeros_like(perturbs_all[:,:,:1])], axis=2) - node_ids_all = np.expand_dims(node_ids, axis=1)*1 - node_ids_all = np.concatenate([node_ids_all for _ in f], axis=1) - node_ids_all = np.reshape(node_ids_all[:-1,:], -1) - perturbs_all = np.reshape(perturbs_all, [-1, 4]) - imgs = b.render_nodes(b.task.nodes[node_ids_all,:], perturb=perturbs_all) - - # Get action at each node. - actions = [] - _, action_to_nodes = b.get_feasible_actions(node_ids) - for j in range(num_steps-1): - action_to_node = action_to_nodes[j] - node_to_action = dict(zip(action_to_node.values(), action_to_node.keys())) - actions.append(node_to_action[node_ids[j+1]]) - - def init_fn(): - return fig, - gt_dist_to_goal = [] - - # Render trajectories. - def worker(j): - # Plot the image. - step_number = j/(samples_per_action + wait_at_action) - img = imgs[j]; ax = axes[0]; ax.clear(); ax.set_axis_off(); - img = img.astype(np.uint8); ax.imshow(img); - tt = ax.set_title( - "First Person View\n" + - "Top corners show diagnostics (distance, agents' action) not input to agent.", - fontsize=12) - plt.setp(tt, color='white') - - # Distance to goal. - t = 'Dist to Goal:\n{:2d} steps'.format(int(dt['all_d_at_t'][i, step_number])) - t = ax.text(0.01, 0.99, t, - horizontalalignment='left', - verticalalignment='top', - fontsize=20, color='red', - transform=ax.transAxes, alpha=1.0) - t.set_bbox(dict(color='white', alpha=0.85, pad=-0.1)) - - # Action to take. 
- action_latex = ['$\odot$ ', '$\curvearrowright$ ', '$\curvearrowleft$ ', r'$\Uparrow$ '] - t = ax.text(0.99, 0.99, action_latex[actions[step_number]], - horizontalalignment='right', - verticalalignment='top', - fontsize=40, color='green', - transform=ax.transAxes, alpha=1.0) - t.set_bbox(dict(color='white', alpha=0.85, pad=-0.1)) - - - # Plot the map top view. - ax = axes[-1] - if j == 0: - # Plot the map - locs = dt['all_locs'][i,:num_steps,:] - goal_loc = dt['all_goal_locs'][i,:,:] - xymin = np.minimum(np.min(goal_loc, axis=0), np.min(locs, axis=0)) - xymax = np.maximum(np.max(goal_loc, axis=0), np.max(locs, axis=0)) - xy1 = (xymax+xymin)/2. - 0.7*np.maximum(np.max(xymax-xymin), 24) - xy2 = (xymax+xymin)/2. + 0.7*np.maximum(np.max(xymax-xymin), 24) - - ax.set_axis_on() - ax.patch.set_facecolor((0.333, 0.333, 0.333)) - ax.set_xticks([]); ax.set_yticks([]); - ax.imshow(orig_maps, origin='lower', vmin=-1.0, vmax=2.0) - ax.plot(goal_loc[:,0], goal_loc[:,1], 'g*', markersize=12) - - locs = dt['all_locs'][i,:1,:] - ax.plot(locs[:,0], locs[:,1], 'b.', markersize=12) - - ax.set_xlim([xy1[0], xy2[0]]) - ax.set_ylim([xy1[1], xy2[1]]) - - locs = dt['all_locs'][i,step_number,:] - locs = np.expand_dims(locs, axis=0) - ax.plot(locs[:,0], locs[:,1], 'r.', alpha=1.0, linewidth=0, markersize=4) - tt = ax.set_title('Trajectory in topview', fontsize=14) - plt.setp(tt, color='white') - return fig, - - line_ani = animation.FuncAnimation(fig, worker, - (num_steps-1)*(wait_at_action+samples_per_action), - interval=500, blit=True, init_func=init_fn) - tmp_file_name = 'tmp.mp4' - line_ani.save(tmp_file_name, writer=writer, savefig_kwargs={'facecolor':'black'}) - out_file_name = os.path.join(out_dir, 'vis_{:04d}.mp4'.format(i)) - print(out_file_name) - - if fu.exists(out_file_name): - gfile.Remove(out_file_name) - gfile.Copy(tmp_file_name, out_file_name) - gfile.Remove(tmp_file_name) - plt.close(fig) - -def plot_trajectory(dt, hardness, orig_maps, out_dir): - out_dir = os.path.join(out_dir, FLAGS.config_name+_get_suffix_str(), - FLAGS.imset) - fu.makedirs(out_dir) - out_file = os.path.join(out_dir, 'all_locs_at_t.pkl') - dt['hardness'] = hardness - utils.save_variables(out_file, dt.values(), dt.keys(), overwrite=True) - - #Plot trajectories onto the maps - plt.set_cmap('gray') - for i in range(4000): - goal_loc = dt['all_goal_locs'][i, :, :] - locs = np.concatenate((dt['all_locs'][i,:,:], - dt['all_locs'][i,:,:]), axis=0) - xymin = np.minimum(np.min(goal_loc, axis=0), np.min(locs, axis=0)) - xymax = np.maximum(np.max(goal_loc, axis=0), np.max(locs, axis=0)) - xy1 = (xymax+xymin)/2. - 1.*np.maximum(np.max(xymax-xymin), 24) - xy2 = (xymax+xymin)/2. 
+ 1.*np.maximum(np.max(xymax-xymin), 24) - - fig, ax = utils.tight_imshow_figure(plt, figsize=(6,6)) - ax.set_axis_on() - ax.patch.set_facecolor((0.333, 0.333, 0.333)) - ax.set_xticks([]) - ax.set_yticks([]) - - all_locs = dt['all_locs'][i,:,:]*1 - uniq = np.where(np.any(all_locs[1:,:] != all_locs[:-1,:], axis=1))[0]+1 - uniq = np.sort(uniq).tolist() - uniq.insert(0,0) - uniq = np.array(uniq) - all_locs = all_locs[uniq, :] - - ax.plot(dt['all_locs'][i, 0, 0], - dt['all_locs'][i, 0, 1], 'b.', markersize=24) - ax.plot(dt['all_goal_locs'][i, 0, 0], - dt['all_goal_locs'][i, 0, 1], 'g*', markersize=19) - ax.plot(all_locs[:,0], all_locs[:,1], 'r', alpha=0.4, linewidth=2) - ax.scatter(all_locs[:,0], all_locs[:,1], - c=5+np.arange(all_locs.shape[0])*1./all_locs.shape[0], - cmap='Reds', s=30, linewidth=0) - ax.imshow(orig_maps, origin='lower', vmin=-1.0, vmax=2.0, aspect='equal') - ax.set_xlim([xy1[0], xy2[0]]) - ax.set_ylim([xy1[1], xy2[1]]) - - file_name = os.path.join(out_dir, 'trajectory_{:04d}.png'.format(i)) - print(file_name) - with fu.fopen(file_name, 'w') as f: - plt.savefig(f) - plt.close(fig) - - -def main(_): - a = _load_trajectory() - h_dists, gt_dists, orig_maps = _compute_hardness() - hardness = 1.-h_dists*1./ gt_dists - - if FLAGS.top_view: - plot_trajectory(a, hardness, orig_maps, out_dir=FLAGS.out_dir) - - if FLAGS.first_person: - plot_trajectory_first_person(a, orig_maps, out_dir=FLAGS.out_dir) - -if __name__ == '__main__': - app.run() diff --git a/spaces/Nahrawy/ControlLight/app.py b/spaces/Nahrawy/ControlLight/app.py deleted file mode 100644 index d3ea0c96fa5528afa1375f0dbf858825972484de..0000000000000000000000000000000000000000 --- a/spaces/Nahrawy/ControlLight/app.py +++ /dev/null @@ -1,125 +0,0 @@ -import gradio as gr -import jax -import numpy as np -import jax.numpy as jnp -from flax.jax_utils import replicate -from flax.training.common_utils import shard -from PIL import Image -from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel -import cv2 - -# load control net and stable diffusion v1-5 -controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( - "Nahrawy/controlnet-VIDIT-FAID", dtype=jnp.bfloat16, revision="615ba4a457b95a0eba813bcc8caf842c03a4f7bd" -) -pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.bfloat16 -) - -def create_key(seed=0): - return jax.random.PRNGKey(seed) - -def process_mask(image): - mask = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - mask = cv2.resize(mask,(512,512)) - return mask - - - -def infer(prompts, negative_prompts, image): - params["controlnet"] = controlnet_params - - num_samples = 1 #jax.device_count() - rng = create_key(0) - rng = jax.random.split(rng, jax.device_count()) - im = process_mask(image) - mask = Image.fromarray(im) - - prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) - negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) - processed_image = pipe.prepare_image_inputs([mask] * num_samples) - - p_params = replicate(params) - prompt_ids = shard(prompt_ids) - negative_prompt_ids = shard(negative_prompt_ids) - processed_image = shard(processed_image) - print(processed_image[0].shape) - output = pipe( - prompt_ids=prompt_ids, - image=processed_image, - params=p_params, - prng_seed=rng, - num_inference_steps=50, - neg_prompt_ids=negative_prompt_ids, - jit=True, - ).images - - output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + 
output.shape[-3:]))) - return output_images - -e_images = ['0.png', - '0.png', - '0.png', - '0.png', - '0.png', - '2.png', - '2.png', - '2.png', - '2.png',] -e_prompts = ['a dog in the middle of the road, shadow on the ground,light direction north-east', - 'a dog in the middle of the road, shadow on the ground,light direction north-west', - 'a dog in the middle of the road, shadow on the ground,light direction south-west', - 'a dog in the middle of the road, shadow on the ground,light direction south-east', - 'a red rural house, shadow on the ground, light direction north', - 'a red rural house, shadow on the ground, light direction east', - 'a red rural house, shadow on the ground, light direction south', - 'a red rural house, shadow on the ground, light direction west'] -e_negative_prompts = ['monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches', - 'monochromatic, unrealistic, bad looking, full of glitches'] -examples = [] -for image, prompt, negative_prompt in zip(e_images, e_prompts, e_negative_prompts): - examples.append([prompt, negative_prompt, image]) - -title = " # ControlLight: Light control through ControlNet and Depth Maps conditioning" -info = ''' -# ControlLight: Light control through ControlNet and Depth Maps conditioning -We propose a ControlNet using depth maps conditioning that is capable of controlling the light direction in a scene while trying to maintain the scene integrity. -The model was trained on [VIDIT dataset](https://huggingface.co/datasets/Nahrawy/VIDIT-Depth-ControlNet) and [ -A Dataset of Flash and Ambient Illumination Pairs from the Crowd](https://huggingface.co/datasets/Nahrawy/FAID-Depth-ControlNet) as a part of the [Jax Diffusers Event](https://huggingface.co/jax-diffusers-event). - -Due to the limited available data the model is clearly overfit, but it serves as a proof of concept to what can be further achieved using enough data. - -A large part of the training data is synthetic so we encourage further training using synthetically generated scenes, using Unreal engine for example. - -The WandB training logs can be found [here](https://wandb.ai/hassanelnahrawy/controlnet-VIDIT-FAID), it's worth noting that the model was left to overfit for experimentation and it's advised to use the 8K steps weights or prior weights. - -This project is a joint work between [ParityError](https://huggingface.co/ParityError) and [Nahrawy](https://huggingface.co/Nahrawy). 
- -''' -with gr.Blocks() as demo: - gr.Markdown(title) - prompts = gr.Textbox(label='prompts') - negative_prompts = gr.Textbox(label='negative_prompts') - with gr.Row(): - with gr.Column(): - in_image = gr.Image(label="Depth Map Conditioning") - with gr.Column(): - out_image = gr.Gallery(label="Generated Image") - with gr.Row(): - btn = gr.Button("Run") - with gr.Row(): - gr.Markdown(info) - gr.Examples(examples=examples, - inputs=[prompts,negative_prompts, in_image], - outputs=out_image, - fn=infer, - cache_examples=True) - btn.click(fn=infer, inputs=[prompts,negative_prompts, in_image] , outputs=out_image) - -demo.launch() \ No newline at end of file diff --git a/spaces/Norod78/Apocalyptify/README.md b/spaces/Norod78/Apocalyptify/README.md deleted file mode 100644 index 688adc0781c80710535a496a825261a25746844a..0000000000000000000000000000000000000000 --- a/spaces/Norod78/Apocalyptify/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Apocalyptify -emoji: 🧟‍♀️ -colorFrom: yellow -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
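For reference, a minimal front-matter sketch that pulls together the fields documented above. The values are illustrative (they mirror this Space's own header) rather than required defaults:

```yaml
---
title: Apocalyptify
emoji: 🧟‍♀️
colorFrom: yellow
colorTo: green
sdk: gradio
app_file: app.py
pinned: false
---
```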
diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md deleted file mode 100644 index 132cc514bac6b447addac8485e0622a834d34474..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/docs/model_zoo.md +++ /dev/null @@ -1,49 +0,0 @@ -# :european_castle: Model Zoo - -- [For General Images](#for-general-images) -- [For Anime Images](#for-anime-images) -- [For Anime Videos](#for-anime-videos) - ---- - -## For General Images - -| Models | Scale | Description | -| ------------------------------------------------------------------------------------------------------------------------------- | :---- | :------------------------------------------- | -| [RealESRGAN_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) | X4 | X4 model for general images | -| [RealESRGAN_x2plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth) | X2 | X2 model for general images | -| [RealESRNet_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth) | X4 | X4 model with MSE loss (over-smooth effects) | -| [official ESRGAN_x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) | X4 | official ESRGAN model | -| [realesr-general-x4v3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth) | X4 (can also be used for X1, X2, X3) | A tiny small model (consume much fewer GPU memory and time); not too strong deblur and denoise capacity | - -The following models are **discriminators**, which are usually used for fine-tuning. - -| Models | Corresponding model | -| ---------------------------------------------------------------------------------------------------------------------- | :------------------ | -| [RealESRGAN_x4plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth) | RealESRGAN_x4plus | -| [RealESRGAN_x2plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth) | RealESRGAN_x2plus | - -## For Anime Images / Illustrations - -| Models | Scale | Description | -| ------------------------------------------------------------------------------------------------------------------------------ | :---- | :---------------------------------------------------------- | -| [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) | X4 | Optimized for anime images; 6 RRDB blocks (smaller network) | - -The following models are **discriminators**, which are usually used for fine-tuning. 
-
-| Models | Corresponding model |
-| ---------------------------------------------------------------------------------------------------------------------------------------- | :------------------------- |
-| [RealESRGAN_x4plus_anime_6B_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth) | RealESRGAN_x4plus_anime_6B |
-
-## For Animation Videos
-
-| Models | Scale | Description |
-| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- |
-| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4<sup>1</sup> | Anime video model with XS size |
-
-Note:
-1 This model can also be used for X1, X2, X3. - -The following models are **discriminators**, which are usually used for fine-tuning. - -TODO diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/README.md deleted file mode 100644 index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/criss/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# Cross-lingual Retrieval for Iterative Self-Supervised Training - -https://arxiv.org/pdf/2006.09526.pdf - -## Introduction - -CRISS is a multilingual sequence-to-sequnce pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. - -## Requirements: - -* faiss: https://github.com/facebookresearch/faiss -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* flores: https://github.com/facebookresearch/flores -* LASER: https://github.com/facebookresearch/LASER - -## Unsupervised Machine Translation -##### 1. Download and decompress CRISS checkpoints -``` -cd examples/criss -wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz -tar -xf criss_checkpoints.tar.gz -``` -##### 2. Download and preprocess Flores test dataset -Make sure to run all scripts from examples/criss directory -``` -bash download_and_preprocess_flores_test.sh -``` - -##### 3. Run Evaluation on Sinhala-English -``` -bash unsupervised_mt/eval.sh -``` - -## Sentence Retrieval -##### 1. Download and preprocess Tatoeba dataset -``` -bash download_and_preprocess_tatoeba.sh -``` - -##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English -``` -bash sentence_retrieval/sentence_retrieval_tatoeba.sh -``` - -## Mining -##### 1. Install faiss -Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md -##### 2. Mine pseudo-parallel data between Kazakh and English -``` -bash mining/mine_example.sh -``` - -## Citation -```bibtex -@article{tran2020cross, - title={Cross-lingual retrieval for iterative self-supervised training}, - author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao}, - journal={arXiv preprint arXiv:2006.09526}, - year={2020} -} -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/preprocess.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/preprocess.py deleted file mode 100644 index 4ee9a1e3ba08f9f6ef4c01b9ee34374c9528eb19..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq_cli/preprocess.py +++ /dev/null @@ -1,409 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Data pre-processing: build vocabularies and binarize training data. 
-""" - -import logging -import os -import shutil -import sys -from collections import Counter -from itertools import zip_longest -from multiprocessing import Pool - -from fairseq import options, tasks, utils -from fairseq.binarizer import Binarizer -from fairseq.data import indexed_dataset -from fairseq.file_chunker_utils import find_offsets - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.preprocess") - - -def main(args): - utils.import_user_module(args) - - os.makedirs(args.destdir, exist_ok=True) - - logger.addHandler( - logging.FileHandler( - filename=os.path.join(args.destdir, "preprocess.log"), - ) - ) - logger.info(args) - - assert args.dataset_impl != "huffman", "preprocessing.py doesn't support Huffman yet, use HuffmanCodeBuilder directly." - - task = tasks.get_task(args.task) - - def train_path(lang): - return "{}{}".format(args.trainpref, ("." + lang) if lang else "") - - def file_name(prefix, lang): - fname = prefix - if lang is not None: - fname += ".{lang}".format(lang=lang) - return fname - - def dest_path(prefix, lang): - return os.path.join(args.destdir, file_name(prefix, lang)) - - def dict_path(lang): - return dest_path("dict", lang) + ".txt" - - def build_dictionary(filenames, src=False, tgt=False): - assert src ^ tgt - return task.build_dictionary( - filenames, - workers=args.workers, - threshold=args.thresholdsrc if src else args.thresholdtgt, - nwords=args.nwordssrc if src else args.nwordstgt, - padding_factor=args.padding_factor, - ) - - target = not args.only_source - - if not args.srcdict and os.path.exists(dict_path(args.source_lang)): - raise FileExistsError(dict_path(args.source_lang)) - if target and not args.tgtdict and os.path.exists(dict_path(args.target_lang)): - raise FileExistsError(dict_path(args.target_lang)) - - if args.joined_dictionary: - assert ( - not args.srcdict or not args.tgtdict - ), "cannot use both --srcdict and --tgtdict with --joined-dictionary" - - if args.srcdict: - src_dict = task.load_dictionary(args.srcdict) - elif args.tgtdict: - src_dict = task.load_dictionary(args.tgtdict) - else: - assert ( - args.trainpref - ), "--trainpref must be set if --srcdict is not specified" - src_dict = build_dictionary( - {train_path(lang) for lang in [args.source_lang, args.target_lang]}, - src=True, - ) - tgt_dict = src_dict - else: - if args.srcdict: - src_dict = task.load_dictionary(args.srcdict) - else: - assert ( - args.trainpref - ), "--trainpref must be set if --srcdict is not specified" - src_dict = build_dictionary([train_path(args.source_lang)], src=True) - - if target: - if args.tgtdict: - tgt_dict = task.load_dictionary(args.tgtdict) - else: - assert ( - args.trainpref - ), "--trainpref must be set if --tgtdict is not specified" - tgt_dict = build_dictionary([train_path(args.target_lang)], tgt=True) - else: - tgt_dict = None - - src_dict.save(dict_path(args.source_lang)) - if target and tgt_dict is not None: - tgt_dict.save(dict_path(args.target_lang)) - - if args.dict_only: - return - - def make_binary_dataset(vocab, input_prefix, output_prefix, lang, num_workers): - logger.info("[{}] Dictionary: {} types".format(lang, len(vocab))) - n_seq_tok = [0, 0] - replaced = Counter() - - def merge_result(worker_result): - replaced.update(worker_result["replaced"]) - n_seq_tok[0] += worker_result["nseq"] - n_seq_tok[1] += worker_result["ntok"] - - input_file = 
"{}{}".format( - input_prefix, ("." + lang) if lang is not None else "" - ) - offsets = find_offsets(input_file, num_workers) - (first_chunk, *more_chunks) = zip(offsets, offsets[1:]) - pool = None - if num_workers > 1: - pool = Pool(processes=num_workers - 1) - for worker_id, (start_offset, end_offset) in enumerate( - more_chunks, start=1 - ): - prefix = "{}{}".format(output_prefix, worker_id) - pool.apply_async( - binarize, - ( - args, - input_file, - vocab, - prefix, - lang, - start_offset, - end_offset, - ), - callback=merge_result, - ) - pool.close() - - ds = indexed_dataset.make_builder( - dataset_dest_file(args, output_prefix, lang, "bin"), - impl=args.dataset_impl, - vocab_size=len(vocab), - ) - merge_result( - Binarizer.binarize( - input_file, - vocab, - lambda t: ds.add_item(t), - offset=first_chunk[0], - end=first_chunk[1], - ) - ) - if num_workers > 1: - pool.join() - for worker_id in range(1, num_workers): - prefix = "{}{}".format(output_prefix, worker_id) - temp_file_path = dataset_dest_prefix(args, prefix, lang) - ds.merge_file_(temp_file_path) - os.remove(indexed_dataset.data_file_path(temp_file_path)) - os.remove(indexed_dataset.index_file_path(temp_file_path)) - - ds.finalize(dataset_dest_file(args, output_prefix, lang, "idx")) - - logger.info( - "[{}] {}: {} sents, {} tokens, {:.3}% replaced by {}".format( - lang, - input_file, - n_seq_tok[0], - n_seq_tok[1], - 100 * sum(replaced.values()) / n_seq_tok[1], - vocab.unk_word, - ) - ) - - def make_binary_alignment_dataset(input_prefix, output_prefix, num_workers): - nseq = [0] - - def merge_result(worker_result): - nseq[0] += worker_result["nseq"] - - input_file = input_prefix - offsets = find_offsets(input_file, num_workers) - (first_chunk, *more_chunks) = zip(offsets, offsets[1:]) - pool = None - if num_workers > 1: - pool = Pool(processes=num_workers - 1) - for worker_id, (start_offset, end_offset) in enumerate( - more_chunks, start=1 - ): - prefix = "{}{}".format(output_prefix, worker_id) - pool.apply_async( - binarize_alignments, - ( - args, - input_file, - utils.parse_alignment, - prefix, - start_offset, - end_offset, - ), - callback=merge_result, - ) - pool.close() - - ds = indexed_dataset.make_builder( - dataset_dest_file(args, output_prefix, None, "bin"), impl=args.dataset_impl - ) - - merge_result( - Binarizer.binarize_alignments( - input_file, - utils.parse_alignment, - lambda t: ds.add_item(t), - offset=first_chunk[0], - end=first_chunk[1], - ) - ) - if num_workers > 1: - pool.join() - for worker_id in range(1, num_workers): - prefix = "{}{}".format(output_prefix, worker_id) - temp_file_path = dataset_dest_prefix(args, prefix, None) - ds.merge_file_(temp_file_path) - os.remove(indexed_dataset.data_file_path(temp_file_path)) - os.remove(indexed_dataset.index_file_path(temp_file_path)) - - ds.finalize(dataset_dest_file(args, output_prefix, None, "idx")) - - logger.info("[alignments] {}: parsed {} alignments".format(input_file, nseq[0])) - - def make_dataset(vocab, input_prefix, output_prefix, lang, num_workers=1): - if args.dataset_impl == "raw": - # Copy original text file to destination folder - output_text_file = dest_path( - output_prefix + ".{}-{}".format(args.source_lang, args.target_lang), - lang, - ) - shutil.copyfile(file_name(input_prefix, lang), output_text_file) - else: - make_binary_dataset(vocab, input_prefix, output_prefix, lang, num_workers) - - def make_all(lang, vocab): - if args.trainpref: - make_dataset(vocab, args.trainpref, "train", lang, num_workers=args.workers) - if args.validpref: - for k, 
validpref in enumerate(args.validpref.split(",")): - outprefix = "valid{}".format(k) if k > 0 else "valid" - make_dataset( - vocab, validpref, outprefix, lang, num_workers=args.workers - ) - if args.testpref: - for k, testpref in enumerate(args.testpref.split(",")): - outprefix = "test{}".format(k) if k > 0 else "test" - make_dataset(vocab, testpref, outprefix, lang, num_workers=args.workers) - - def make_all_alignments(): - if args.trainpref and os.path.exists(args.trainpref + "." + args.align_suffix): - make_binary_alignment_dataset( - args.trainpref + "." + args.align_suffix, - "train.align", - num_workers=args.workers, - ) - if args.validpref and os.path.exists(args.validpref + "." + args.align_suffix): - make_binary_alignment_dataset( - args.validpref + "." + args.align_suffix, - "valid.align", - num_workers=args.workers, - ) - if args.testpref and os.path.exists(args.testpref + "." + args.align_suffix): - make_binary_alignment_dataset( - args.testpref + "." + args.align_suffix, - "test.align", - num_workers=args.workers, - ) - - make_all(args.source_lang, src_dict) - if target: - make_all(args.target_lang, tgt_dict) - if args.align_suffix: - make_all_alignments() - - logger.info("Wrote preprocessed data to {}".format(args.destdir)) - - if args.alignfile: - assert args.trainpref, "--trainpref must be set if --alignfile is specified" - src_file_name = train_path(args.source_lang) - tgt_file_name = train_path(args.target_lang) - freq_map = {} - with open(args.alignfile, "r", encoding="utf-8") as align_file: - with open(src_file_name, "r", encoding="utf-8") as src_file: - with open(tgt_file_name, "r", encoding="utf-8") as tgt_file: - for a, s, t in zip_longest(align_file, src_file, tgt_file): - si = src_dict.encode_line(s, add_if_not_exist=False) - ti = tgt_dict.encode_line(t, add_if_not_exist=False) - ai = list(map(lambda x: tuple(x.split("-")), a.split())) - for sai, tai in ai: - srcidx = si[int(sai)] - tgtidx = ti[int(tai)] - if srcidx != src_dict.unk() and tgtidx != tgt_dict.unk(): - assert srcidx != src_dict.pad() - assert srcidx != src_dict.eos() - assert tgtidx != tgt_dict.pad() - assert tgtidx != tgt_dict.eos() - - if srcidx not in freq_map: - freq_map[srcidx] = {} - if tgtidx not in freq_map[srcidx]: - freq_map[srcidx][tgtidx] = 1 - else: - freq_map[srcidx][tgtidx] += 1 - - align_dict = {} - for srcidx in freq_map.keys(): - align_dict[srcidx] = max(freq_map[srcidx], key=freq_map[srcidx].get) - - with open( - os.path.join( - args.destdir, - "alignment.{}-{}.txt".format(args.source_lang, args.target_lang), - ), - "w", - encoding="utf-8", - ) as f: - for k, v in align_dict.items(): - print("{} {}".format(src_dict[k], tgt_dict[v]), file=f) - - -def binarize(args, filename, vocab, output_prefix, lang, offset, end, append_eos=True): - ds = indexed_dataset.make_builder( - dataset_dest_file(args, output_prefix, lang, "bin"), - impl=args.dataset_impl, - vocab_size=len(vocab), - ) - - def consumer(tensor): - ds.add_item(tensor) - - res = Binarizer.binarize( - filename, vocab, consumer, append_eos=append_eos, offset=offset, end=end - ) - ds.finalize(dataset_dest_file(args, output_prefix, lang, "idx")) - return res - - -def binarize_alignments(args, filename, parse_alignment, output_prefix, offset, end): - ds = indexed_dataset.make_builder( - dataset_dest_file(args, output_prefix, None, "bin"), - impl=args.dataset_impl, - vocab_size=None, - ) - - def consumer(tensor): - ds.add_item(tensor) - - res = Binarizer.binarize_alignments( - filename, parse_alignment, consumer, offset=offset, end=end 
- ) - ds.finalize(dataset_dest_file(args, output_prefix, None, "idx")) - return res - - -def dataset_dest_prefix(args, output_prefix, lang): - base = "{}/{}".format(args.destdir, output_prefix) - if lang is not None: - lang_part = ".{}-{}.{}".format(args.source_lang, args.target_lang, lang) - elif args.only_source: - lang_part = "" - else: - lang_part = ".{}-{}".format(args.source_lang, args.target_lang) - - return "{}{}".format(base, lang_part) - - -def dataset_dest_file(args, output_prefix, lang, extension): - base = dataset_dest_prefix(args, output_prefix, lang) - return "{}.{}".format(base, extension) - - -def cli_main(): - parser = options.get_preprocessing_parser() - args = parser.parse_args() - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/ulm/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/ulm/README.md deleted file mode 100644 index 01459121cebefc61fdc2eae201462aa78d699111..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/ulm/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# Unit Language Model (ULM) - -Here you can find links to the pre-trained ULMs and instructions on training new models using fairseq. At the end of the page, we also share how to run sampling for those models and provide pointers to the transcribed prompts we used. - -## Pre-trained models - -Using the links below, you can download pre-trained models for various unit types and vocabulary sizes: - -| | 50 | 100 | 200 -|-|-|-|- -| LogMel Filterbank | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km50/logmel50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km100/logmel100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km200/logmel200_lm.tgz) -| Modified CPC | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km50/cpc50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km100/cpc100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km200/cpc200_lm.tgz) -| HuBERT | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km50/hubert50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km100/hubert100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km200/hubert200_lm.tgz) -| Wav2Vec 2.0 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km50/w2v2_50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km100/w2v2_100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km200/w2v2_200_lm.tgz) - - -## Preprocessing data -Assuming that unit-transcribed train, valid, and test sets are located in `data/train.txt`, `data/valid.txt`, and `data/test.txt`, respectively, -we run the following command to get a preprocessed version of the datast in `data-bin`: - -```bash -fairseq-preprocess --only-source \ - --trainpref data/train.txt --validpref data/valid.txt --testpref data/test.txt \ - --destdir data-bin/ --workers 40 -``` -As a result, the `data-bin` directory should appear. - -## Fitting a Unit Language Model (ULM) -As an ULM, we train a standard fairseq Transformer LM. 
Assuming 8 GPUs used for training, a good starting point for an ULM training would be: -```bash - fairseq-train data-bin/ \ - --task=language_modeling \ - --arch=transformer_lm_big \ - --share-decoder-input-output-embed \ - --dropout=0.1 \ - --attention-dropout=0.1 \ - --optimizer=adam \ - --adam-betas='(0.9, 0.98)' \ - --clip-norm=1.0 \ - --lr=0.0005 \ - --lr-scheduler=inverse_sqrt \ - --warmup-updates=4000 \ - --warmup-init-lr=1e-07 \ - --tokens-per-sample=3072 \ - --update-freq=16 \ - --max-tokens=4096 \ - --num-workers=4 \ - --skip-invalid-size-inputs-valid-test \ - --max-update=500000 \ - --log-interval=10 \ - --seed=100501 \ - --fp16 \ - --sample-break-mode=eos -``` -This command will train a Transformer-large model (12 layers). You can train other standard LM models provided by fairseq, e.g. specify `--arch=transformer_lm` to train a smaller (6-layer) Transformer model. When training with a different number of GPUs, it might be a good idea to adjust the `update-freq` parameter. To save the GPU memory at an expense of additional computation, it can be useful to enable activation checkpointing with `--checkpoint-activations`. - -## Sampling from an ULM -Once an ULM was trained, we can use it for generating new utterances. Suppose, that the prompts are given in a file named `prompts.txt`. Then we can sample continuations by running the following command: - -```bash - python sample.py data-bin/ \ - --path=checkpoints/checkpoint_best.pt --task=language_modeling --sampling --temperature=0.7 \ - --seed=1 --prompts=prompts.txt --output=samples.txt --max-len-a=0 --max-len-b=500 \ - --prefix-size=-1 --batch-size=16 --fp16 --samples-per-prompt=10 -``` -Here, `--prefix-size` controls the number of tokens that are used to prime the ULM. When set to a positive value, the sampling script will take first `prefix-size` tokens to prompt the ULM; with `0` it runs unconditional sampling and with `-1` the entire prompt is used. -`--samples-per-prompt` specifies how many utterances are generated with every prompt which can be useful when generating multiple prompt continuations. In this command, `--max-len-a` and `--max-len-b` control the number of generated tokens. - -When using a pretrained model from above, `data-bin` should point to the unpacked directory (with `dict.txt` file). - -Evaluation-time, to generate prompts, we used utterances from LibriSpeech dev-clean and test-clean that are longer than 6s. We took first 3s from an utterance as a prompt. Unit transcripts of those prompts can be downloaded here: [[dev]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/dev_prompts.tgz) [[test]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/test_prompts.tgz) - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/legacy_masked_lm.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/legacy_masked_lm.py deleted file mode 100644 index c70608c5a143b7b4fbd8c58dfcf9f873639d379c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/criterions/legacy_masked_lm.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -def compute_cross_entropy_loss(logits, targets, ignore_index=-100): - """ - Function to compute the cross entropy loss. The default value of - ignore_index is the same as the default value for F.cross_entropy in - pytorch. - """ - assert logits.size(0) == targets.size( - -1 - ), "Logits and Targets tensor shapes don't match up" - - loss = F.nll_loss( - F.log_softmax(logits, -1, dtype=torch.float32), - targets, - reduction="sum", - ignore_index=ignore_index, - ) - return loss - - -@register_criterion("legacy_masked_lm_loss") -class LegacyMaskedLmLoss(FairseqCriterion): - """ - Implementation for the loss used in masked language model (MLM) training. - This optionally also computes the next sentence prediction (NSP) loss and - adds it to the overall loss based on the specified args. There are three - cases to consider: - 1) Generic MLM training without NSP loss. In this case sentence_targets - and sentence_logits are both None. - 2) BERT training without NSP loss. In this case sentence_targets is - not None but sentence_logits is None and we should not be computing - a sentence level loss. - 3) BERT training with NSP loss. In this case both sentence_targets and - sentence_logits are not None and we should be computing a sentence - level loss. The weight of the sentence level loss is specified as - an argument. - """ - - def __init__(self, task, masked_lm_only, nsp_loss_weight): - super().__init__(task) - self.masked_lm_only = masked_lm_only - self.nsp_loss_weight = nsp_loss_weight - - @staticmethod - def add_args(parser): - """Args for MaskedLM Loss""" - # Default for masked_lm_only is False so as to not break BERT training - parser.add_argument( - "--masked-lm-only", - default=False, - action="store_true", - help="compute MLM loss only", - ) - parser.add_argument( - "--nsp-loss-weight", - default=1.0, - type=float, - help="weight for next sentence prediction" " loss (default 1)", - ) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - lm_logits, output_metadata = model(**sample["net_input"]) - - # reshape lm_logits from (N,T,C) to (N*T,C) - lm_logits = lm_logits.view(-1, lm_logits.size(-1)) - lm_targets = sample["lm_target"].view(-1) - lm_loss = compute_cross_entropy_loss(lm_logits, lm_targets, self.padding_idx) - - # compute the number of tokens for which loss is computed. This is used - # to normalize the loss - ntokens = utils.strip_pad(lm_targets, self.padding_idx).numel() - loss = lm_loss / ntokens - nsentences = sample["nsentences"] - # nsentences = 0 - - # Compute sentence loss if masked_lm_only is False - sentence_loss = None - if not self.masked_lm_only: - sentence_logits = output_metadata["sentence_logits"] - sentence_targets = sample["sentence_target"].view(-1) - # This needs to be recomputed due to some differences between - # TokenBlock and BlockPair dataset. This can be resolved with a - # refactor of BERTModel which we will do in the future. - # TODO: Remove this after refactor of BERTModel - nsentences = sentence_targets.size(0) - - # Check for logits being none which can happen when remove_heads - # is set to true in the BERT model. 
Ideally we should set - # masked_lm_only to true in this case, but that requires some - # refactor in the BERT model. - if sentence_logits is not None: - sentence_loss = compute_cross_entropy_loss( - sentence_logits, sentence_targets - ) - - loss += self.nsp_loss_weight * (sentence_loss / nsentences) - - # NOTE: as we are summing up per token mlm loss and per sentence nsp loss - # we don't need to use sample_size as denominator for the gradient - # here sample_size is just used for logging - sample_size = 1 - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "lm_loss": utils.item(lm_loss.data) if reduce else lm_loss.data, - # sentence loss is not always computed - "sentence_loss": ( - (utils.item(sentence_loss.data) if reduce else sentence_loss.data) - if sentence_loss is not None - else 0.0 - ), - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - lm_loss_sum = sum(log.get("lm_loss", 0) for log in logging_outputs) - sentence_loss_sum = sum(log.get("sentence_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_loss = sum(log.get("loss", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", - agg_loss / sample_size / math.log(2) if sample_size > 0 else 0.0, - sample_size, - round=3, - ) - metrics.log_scalar( - "lm_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - metrics.log_scalar( - "sentence_loss", - sentence_loss_sum / nsentences / math.log(2) if nsentences > 0 else 0.0, - nsentences, - round=3, - ) - metrics.log_scalar( - "nll_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/audio/speech_to_text_dataset.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/audio/speech_to_text_dataset.py deleted file mode 100644 index 164bf413e4fd41b895348c9ef0bb57421843eb17..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/data/audio/speech_to_text_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import csv -import io -import logging -import re -from collections import defaultdict -from pathlib import Path -from typing import Dict, List, Optional -from dataclasses import dataclass - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.audio_utils import ( - get_fbank, - get_waveform, - read_from_stored_zip, - is_npy_data, - is_sf_audio_data, - parse_path, - FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS, -) -from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform -from fairseq.data.audio.data_cfg import S2TDataConfig - - -logger = logging.getLogger(__name__) - - -def get_features_from_npy_or_audio(path): - ext = Path(path).suffix - if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f'Unsupported file format for "{path}"') - return np.load(path) if ext == ".npy" else get_fbank(path) - - -def get_features_or_waveform_from_stored_zip( - path, byte_offset, byte_size, need_waveform=False, use_sample_rate=None, -): - assert path.endswith(".zip") - data = read_from_stored_zip(path, byte_offset, byte_size) - f = io.BytesIO(data) - if is_npy_data(data): - features_or_waveform = np.load(f) - elif is_sf_audio_data(data): - features_or_waveform = \ - get_waveform( - f, always_2d=False, output_sample_rate=use_sample_rate - )[0] if need_waveform else get_fbank(f) - else: - raise ValueError(f'Unknown file format for "{path}"') - return features_or_waveform - - -def get_features_or_waveform( - path: str, need_waveform=False, use_sample_rate=None -): - """Get speech features from .npy file or waveform from .wav/.flac file. - The file may be inside an uncompressed ZIP file and is accessed via byte - offset and length. - - Args: - path (str): File path in the format of "<.npy/.wav/.flac path>" or - "::". - need_waveform (bool): return waveform instead of features. - use_sample_rate (int): change sample rate for the input wave file - - Returns: - features_or_waveform (numpy.ndarray): speech features or waveform. - """ - _path, slice_ptr = parse_path(path) - if len(slice_ptr) == 0: - if need_waveform: - return get_waveform( - _path, always_2d=False, output_sample_rate=use_sample_rate - )[0] - return get_features_from_npy_or_audio(_path) - elif len(slice_ptr) == 2: - features_or_waveform = get_features_or_waveform_from_stored_zip( - _path, slice_ptr[0], slice_ptr[1], need_waveform=need_waveform, - use_sample_rate=use_sample_rate - ) - else: - raise ValueError(f"Invalid path: {path}") - - return features_or_waveform - - -def _collate_frames( - frames: List[torch.Tensor], is_audio_input: bool = False -) -> torch.Tensor: - """ - Convert a list of 2D frames into a padded 3D tensor - Args: - frames (list): list of 2D frames of size L[i]*f_dim. 
Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - max_len = max(frame.size(0) for frame in frames) - if is_audio_input: - out = frames[0].new_zeros((len(frames), max_len)) - else: - out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1))) - for i, v in enumerate(frames): - out[i, : v.size(0)] = v - return out - - -@dataclass -class SpeechToTextDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - - -class SpeechToTextDataset(FairseqDataset): - LANG_TAG_TEMPLATE = "" - - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None - ): - self.split, self.is_train_split = split, is_train_split - self.cfg = cfg - self.audio_paths, self.n_frames = audio_paths, n_frames - self.n_samples = len(audio_paths) - assert len(n_frames) == self.n_samples > 0 - assert src_texts is None or len(src_texts) == self.n_samples - assert tgt_texts is None or len(tgt_texts) == self.n_samples - assert speakers is None or len(speakers) == self.n_samples - assert src_langs is None or len(src_langs) == self.n_samples - assert tgt_langs is None or len(tgt_langs) == self.n_samples - assert ids is None or len(ids) == self.n_samples - assert (tgt_dict is None and tgt_texts is None) or ( - tgt_dict is not None and tgt_texts is not None - ) - self.src_texts, self.tgt_texts = src_texts, tgt_texts - self.src_langs, self.tgt_langs = src_langs, tgt_langs - self.speakers = speakers - self.tgt_dict = tgt_dict - self.check_tgt_lang_tag() - self.ids = ids - self.shuffle = cfg.shuffle if is_train_split else False - - self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict( - self.cfg.get_feature_transforms(split, is_train_split) - ) - - self.pre_tokenizer = pre_tokenizer - self.bpe_tokenizer = bpe_tokenizer - self.n_frames_per_step = n_frames_per_step - self.speaker_to_id = speaker_to_id - - self.tgt_lens = self.get_tgt_lens_and_check_oov() - - logger.info(self.__repr__()) - - def get_tgt_lens_and_check_oov(self): - if self.tgt_texts is None: - return [0 for _ in range(self.n_samples)] - tgt_lens = [] - n_tokens, n_oov_tokens = 0, 0 - for i in range(self.n_samples): - tokenized = self.get_tokenized_tgt_text(i).split(" ") - oov_tokens = [ - t - for t in tokenized - if self.tgt_dict.index(t) == self.tgt_dict.unk_index - ] - n_tokens += len(tokenized) - n_oov_tokens += len(oov_tokens) - tgt_lens.append(len(tokenized)) - logger.info(f"'{self.split}' has {n_oov_tokens / n_tokens * 100:.2f}% OOV") - return tgt_lens - - def __repr__(self): - return ( - self.__class__.__name__ - + f'(split="{self.split}", n_samples={self.n_samples:_}, ' - f"prepend_tgt_lang_tag={self.cfg.prepend_tgt_lang_tag}, " - f"shuffle={self.shuffle}, transforms={self.feature_transforms}, " - f"n_frames_per_step={self.n_frames_per_step}" - ) - - @classmethod - def is_lang_tag(cls, token): - pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)") - return re.match(pattern, token) - - def check_tgt_lang_tag(self): 
- if self.cfg.prepend_tgt_lang_tag: - assert self.tgt_langs is not None and self.tgt_dict is not None - tgt_lang_tags = [ - self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs) - ] - assert all(t in self.tgt_dict for t in tgt_lang_tags) - - @classmethod - def tokenize(cls, tokenizer, text: str): - return text if tokenizer is None else tokenizer.encode(text) - - def get_tokenized_tgt_text(self, index: int): - text = self.tokenize(self.pre_tokenizer, self.tgt_texts[index]) - text = self.tokenize(self.bpe_tokenizer, text) - return text - - def pack_frames(self, feature: torch.Tensor): - if self.n_frames_per_step == 1: - return feature - n_packed_frames = feature.shape[0] // self.n_frames_per_step - feature = feature[:self.n_frames_per_step * n_packed_frames] - return feature.reshape(n_packed_frames, -1) - - @classmethod - def get_lang_tag_idx(cls, lang: str, dictionary: Dictionary): - lang_tag_idx = dictionary.index(cls.LANG_TAG_TEMPLATE.format(lang)) - assert lang_tag_idx != dictionary.unk() - return lang_tag_idx - - def __getitem__(self, index: int) -> SpeechToTextDatasetItem: - source = get_features_or_waveform( - self.audio_paths[index], - need_waveform=self.cfg.use_audio_input, - use_sample_rate=self.cfg.use_sample_rate, - ) - if self.feature_transforms is not None: - assert not self.cfg.use_audio_input - source = self.feature_transforms(source) - source = torch.from_numpy(source).float() - source = self.pack_frames(source) - - target = None - if self.tgt_texts is not None: - tokenized = self.get_tokenized_tgt_text(index) - target = self.tgt_dict.encode_line( - tokenized, add_if_not_exist=False, append_eos=True - ).long() - if self.cfg.prepend_tgt_lang_tag: - lang_tag_idx = self.get_lang_tag_idx( - self.tgt_langs[index], self.tgt_dict - ) - target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0) - - speaker_id = None - if self.speaker_to_id is not None: - speaker_id = self.speaker_to_id[self.speakers[index]] - return SpeechToTextDatasetItem( - index=index, source=source, target=target, speaker_id=speaker_id - ) - - def __len__(self): - return self.n_samples - - def collater( - self, samples: List[SpeechToTextDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, target_lengths = None, None - prev_output_tokens = None - ntokens = None - if self.tgt_texts is not None: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, order) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ).index_select(0, order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - speaker = None - if self.speaker_to_id is not None: - speaker = torch.tensor( - [s.speaker_id for s in samples], 
dtype=torch.long - ).index_select(0, order).view(-1, 1) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - } - out = { - "id": indices, - "net_input": net_input, - "speaker": speaker, - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - def num_tokens(self, index): - return self.n_frames[index] - - def size(self, index): - return self.n_frames[index], self.tgt_lens[index] - - @property - def sizes(self): - return np.array(self.n_frames) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - # first by descending order of # of frames then by original/random order - order.append([-n for n in self.n_frames]) - return np.lexsort(order) - - def prefetch(self, indices): - raise False - - -class SpeechToTextDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames" - KEY_TGT_TEXT = "tgt_text" - # optional columns - KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text" - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - return SpeechToTextDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id - ) - - @classmethod - def get_size_ratios( - cls, datasets: List[SpeechToTextDataset], alpha: float = 1.0 - ) -> List[float]: - """Size ratios for temperature-based sampling - (https://arxiv.org/abs/1907.05019)""" - - id_to_lp, lp_to_sz = {}, defaultdict(int) - for ds in datasets: - lang_pairs = {f"{s}->{t}" for s, t in zip(ds.src_langs, ds.tgt_langs)} - assert len(lang_pairs) == 1 - lang_pair = list(lang_pairs)[0] - id_to_lp[ds.split] = lang_pair - lp_to_sz[lang_pair] += sum(ds.n_frames) - - sz_sum = sum(v for v in lp_to_sz.values()) - lp_to_prob = {k: v / sz_sum for k, v in lp_to_sz.items()} - lp_to_tgt_prob = {k: v ** alpha for k, v in lp_to_prob.items()} - prob_sum = sum(v for v in lp_to_tgt_prob.values()) - lp_to_tgt_prob = {k: v / prob_sum for k, v in lp_to_tgt_prob.items()} - lp_to_sz_ratio = { - k: (lp_to_tgt_prob[k] * sz_sum) / v for k, v in lp_to_sz.items() - } - size_ratio = [lp_to_sz_ratio[id_to_lp[ds.split]] for ds in datasets] - - p_formatted = { - k: 
f"{lp_to_prob[k]:.3f}->{lp_to_tgt_prob[k]:.3f}" for k in lp_to_sz - } - logger.info(f"sampling probability balancing: {p_formatted}") - sr_formatted = {ds.split: f"{r:.3f}" for ds, r in zip(datasets, size_ratio)} - logger.info(f"balanced sampling size ratio: {sr_formatted}") - return size_ratio - - @classmethod - def _load_samples_from_tsv(cls, root: str, split: str): - tsv_path = Path(root) / f"{split}.tsv" - if not tsv_path.is_file(): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - samples = [dict(e) for e in reader] - if len(samples) == 0: - raise ValueError(f"Empty manifest: {tsv_path}") - return samples - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - split: str, - tgt_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - samples = cls._load_samples_from_tsv(root, split) - return cls._from_list( - split, is_train_split, samples, cfg, tgt_dict, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - n_frames_per_step: int = 1, - speaker_to_id=None - ) -> SpeechToTextDataset: - datasets = [ - cls._from_tsv( - root, cfg, split, tgt_dict, is_train_split, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/ema/ema.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/ema/ema.py deleted file mode 100644 index 010b60ba2fd766340d2c5b8ba96f9e57c6fe25b5..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/ema/ema.py +++ /dev/null @@ -1,200 +0,0 @@ -#!/usr/bin/env python3 - -""" -This module has the EMA class used to store a copy of the exponentially decayed -model params. - -Typical usage of EMA class involves initializing an object using an existing -model (random or from a seed model) and setting the config like ema_decay, -ema_start_update which determine how the EMA model is updated. After every -update of the model i.e. at the end of the train_step, the EMA should be updated -by passing the new model to the EMA.step function. The EMA model state dict -can be stored in the extra state under the key of "ema" and dumped -into a checkpoint and loaded. The EMA object can be passed to tasks -by setting task.uses_ema property. -EMA is a smoothed/ensemble model which might have better performance -when used for inference or further fine-tuning. EMA class has a -reverse function to load the EMA params into a model and use it -like a regular model. 
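A minimal usage sketch (illustrative only; the model, config and training-loop
names below are assumptions, not part of this module):

    ema = EMA(model, ema_config)                    # ema_config: EMAConfig with ema_decay etc.
    for num_updates, batch in enumerate(batches, start=1):
        train_step(model, batch)                    # regular parameter update
        ema.step(model, updates=num_updates)        # refresh the decayed copy
    eval_model = ema.reverse(copy.deepcopy(model))  # load EMA weights for inference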
-""" - -import copy -import logging - -import torch -from fairseq import checkpoint_utils - - -class EMA(object): - """Exponential Moving Average of Fairseq Models - EMA keeps a copy of the exponentially decayed model params. - The set of params should include both gradient-descent and - non-gradient descent params, such as batch mean/var and buffers. - This is a modified implementation of - the open source code in https://github.com/zhawe01/fairseq-gec.git, - and internal source code in - fbcode/mobile-vision/projects/classification_pytorch/lib/utils/model_ema.py. - - Similar to TF EMA. - https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. - EMA provides a averaged and smoothed set of model weights, and has been shown to - improve vision models. EMA class does all necessary functions to update, reload, - or init EMA methods. - - EMA object is initialized from an arbitrary model. By default, it is stored in - the same device (unless device specified at initialization) and with the - same precision as the model (unless ema_fp32 is True). ema_fp32 is recommended. - This stores the EMA parameters in fp32 only for the EMA update step, and - is used at the default precision otherwise. - EMA is usually enabled using EMAConfig with store_ema=True. Some important - parameters to configure EMA are - 1) ema_decay - The decay of EMA - 2) ema_update_freq - EMA is updated every this many model updates. - 3) ema_start_update - Start EMA update after this many model updates [default 0] - - Key methods: - 1) step - One update of EMA using new model - 2) restore - Update EMA from a state dict - 3) reverse - Load EMA into a model - 4) get_decay, _set_decay - Used to get or set the decay. Note _set_decay is - called from step. - 5) build_fp32_params - Used to initialize or update the fp32 copy of EMA params. - Note this is enabled only when ema_fp32=True - """ - - def __init__(self, model, config, device=None): - """ - @param model model to initialize the EMA with - @param config EMAConfig object with configuration like - ema_decay, ema_update_freq, ema_fp32 - @param device If provided, copy EMA to this device (e.g. gpu). - Otherwise EMA is in the same device as the model. - """ - - self.decay = config.ema_decay - self.model = copy.deepcopy(model) - self.model.requires_grad_(False) - self.config = config - self.fp32_params = {} - - if self.config.ema_seed_model is not None: - state = checkpoint_utils.load_ema_from_checkpoint(self.config.ema_seed_model) - self.model.load_state_dict(state["model"], strict=True) - - if device is not None: - logging.info(f"Copying EMA model to device {device}") - self.model = self.model.to(device=device) - - if self.config.ema_fp32: - self.build_fp32_params() - - self.update_freq_counter = 0 - - def get_model(self): - return self.model - - def build_fp32_params(self, state_dict=None): - """ - Store a copy of the EMA params in fp32. - If state dict is passed, the EMA params is copied from - the provided state dict. Otherwise, it is copied from the - current EMA model parameters. - """ - if not self.config.ema_fp32: - raise RuntimeError( - "build_fp32_params should not be called if ema_fp32=False. " - "Use ema_fp32=True if this is really intended." 
- ) - - if state_dict is None: - state_dict = self.model.state_dict() - - def _to_float(t): - return t.float() if torch.is_floating_point(t) else t - - # for non-float params (like registered symbols), they are copied into this dict and covered in each update - for param_key in state_dict: - if param_key in self.fp32_params: - self.fp32_params[param_key].copy_(state_dict[param_key]) - else: - self.fp32_params[param_key] = _to_float(state_dict[param_key]) - - def restore(self, state_dict, build_fp32_params=False): - """ Load data from a model spec into EMA model """ - self.model.load_state_dict(state_dict, strict=False) - if build_fp32_params: - self.build_fp32_params(state_dict) - - def _set_decay(self, decay): - self.decay = decay - - def get_decay(self): - return self.decay - - def _step_internal(self, new_model, updates=None): - """ One update of the EMA model based on new model weights """ - decay = self.decay - - ema_state_dict = {} - ema_params = self.fp32_params if self.config.ema_fp32 else self.model.state_dict() - for key, param in new_model.state_dict().items(): - try: - ema_param = ema_params[key] - except KeyError: - ema_param = param.float().clone() if param.ndim == 1 else copy.deepcopy(param) - - if param.shape != ema_param.shape: - raise ValueError( - "incompatible tensor shapes between model param and ema param" - + "{} vs. {}".format(param.shape, ema_param.shape) - ) - if "version" in key: - # Do not decay a model.version pytorch param - continue - - # for non-float params (like registered symbols), they are covered in each update - if not torch.is_floating_point(ema_param): - if ema_param.dtype != param.dtype: - raise ValueError( - "incompatible tensor dtypes between model param and ema param" - + "{} vs. {}".format(param.dtype, ema_param.dtype) - ) - ema_param.copy_(param) - else: - ema_param.mul_(decay) - ema_param.add_(param.to(dtype=ema_param.dtype), alpha=1-decay) - ema_state_dict[key] = ema_param - self.restore(ema_state_dict, build_fp32_params=False) - - def step(self, new_model, updates=None): - """ - One update of EMA which is done every self.config.ema_update_freq - updates of the model. - - @param updates The current number of model updates done. - Decay is set of 0 if model updates < ema_start_update, which means - the model will be simply copied over to the EMA. - When model updates >= ema_start_updates, then EMA is updated with - a decay of self.config.ema_decay. - """ - self._set_decay( - 0 - if updates is not None - and updates < self.config.ema_start_update - else self.config.ema_decay - ) - if updates is not None and self.config.ema_update_freq > 1: - self.update_freq_counter += 1 - if self.update_freq_counter >= self.config.ema_update_freq: - self._step_internal(new_model, updates) - self.update_freq_counter = 0 - else: - self._step_internal(new_model, updates) - - def reverse(self, model): - """ - Load the model parameters from EMA model. - Useful for inference or fine-tuning from the EMA model. - """ - model.load_state_dict(self.model.state_dict(), strict=False) - return model diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/transformer_sentence_encoder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/transformer_sentence_encoder.py deleted file mode 100644 index d0540d69229fb994b9e573a5016c9f239b7929e2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/transformer_sentence_encoder.py +++ /dev/null @@ -1,291 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Optional, Tuple - -import torch -import torch.nn as nn -from fairseq.modules import ( - FairseqDropout, - LayerDropModuleList, - LayerNorm, - MultiheadAttention, - PositionalEmbedding, - TransformerSentenceEncoderLayer, -) -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ - - -def init_bert_params(module): - """ - Initialize the weights specific to the BERT Model. - This overrides the default initializations depending on the specified arguments. - 1. If normal_init_linear_weights is set then weights of linear - layer will be initialized using the normal distribution and - bais will be set to the specified value. - 2. If normal_init_embed_weights is set then weights of embedding - layer will be initialized using the normal distribution. - 3. If normal_init_proj_weights is set then weights of - in_project_weight for MultiHeadAttention initialized using - the normal distribution (to be validated). - """ - - def normal_(data): - # with FSDP, module params will be on CUDA, so we cast them back to CPU - # so that the RNG is consistent with and without FSDP - data.copy_( - data.cpu().normal_(mean=0.0, std=0.02).to(data.device) - ) - - if isinstance(module, nn.Linear): - normal_(module.weight.data) - if module.bias is not None: - module.bias.data.zero_() - if isinstance(module, nn.Embedding): - normal_(module.weight.data) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - if isinstance(module, MultiheadAttention): - normal_(module.q_proj.weight.data) - normal_(module.k_proj.weight.data) - normal_(module.v_proj.weight.data) - - -class TransformerSentenceEncoder(nn.Module): - """ - Implementation for a Bi-directional Transformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - TransformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). - - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C. 
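    A hedged shape example (vocab size, batch and sequence length below are
    made-up values, not defaults of this class):

        encoder = TransformerSentenceEncoder(padding_idx=1, vocab_size=1000)
        tokens = torch.randint(2, 1000, (8, 16))        # B x T
        inner_states, sentence_rep = encoder(tokens)
        # inner_states[-1]: T x B x C, sentence_rep: B x C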
- """ - - def __init__( - self, - padding_idx: int, - vocab_size: int, - num_encoder_layers: int = 6, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - layerdrop: float = 0.0, - max_seq_len: int = 256, - num_segments: int = 2, - use_position_embeddings: bool = True, - offset_positions_by_padding: bool = True, - encoder_normalize_before: bool = False, - apply_bert_init: bool = False, - activation_fn: str = "relu", - learned_pos_embedding: bool = True, - embed_scale: float = None, - freeze_embeddings: bool = False, - n_trans_layers_to_freeze: int = 0, - export: bool = False, - traceable: bool = False, - q_noise: float = 0.0, - qn_block_size: int = 8, - ) -> None: - - super().__init__() - self.padding_idx = padding_idx - self.vocab_size = vocab_size - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.layerdrop = layerdrop - self.max_seq_len = max_seq_len - self.embedding_dim = embedding_dim - self.num_segments = num_segments - self.use_position_embeddings = use_position_embeddings - self.apply_bert_init = apply_bert_init - self.learned_pos_embedding = learned_pos_embedding - self.traceable = traceable - - self.embed_tokens = self.build_embedding( - self.vocab_size, self.embedding_dim, self.padding_idx - ) - self.embed_scale = embed_scale - - if q_noise > 0: - self.quant_noise = apply_quant_noise_( - nn.Linear(self.embedding_dim, self.embedding_dim, bias=False), - q_noise, - qn_block_size, - ) - else: - self.quant_noise = None - - self.segment_embeddings = ( - nn.Embedding(self.num_segments, self.embedding_dim, padding_idx=None) - if self.num_segments > 0 - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - self.max_seq_len, - self.embedding_dim, - padding_idx=(self.padding_idx if offset_positions_by_padding else None), - learned=self.learned_pos_embedding, - ) - if self.use_position_embeddings - else None - ) - - if encoder_normalize_before: - self.emb_layer_norm = LayerNorm(self.embedding_dim, export=export) - else: - self.emb_layer_norm = None - - if self.layerdrop > 0.0: - self.layers = LayerDropModuleList(p=self.layerdrop) - else: - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - self.build_transformer_sentence_encoder_layer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=self.dropout_module.p, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - export=export, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - for _ in range(num_encoder_layers) - ] - ) - - # Apply initialization of model params after building the model - if self.apply_bert_init: - self.apply(init_bert_params) - - def freeze_module_params(m): - if m is not None: - for p in m.parameters(): - p.requires_grad = False - - if freeze_embeddings: - freeze_module_params(self.embed_tokens) - freeze_module_params(self.segment_embeddings) - freeze_module_params(self.embed_positions) - freeze_module_params(self.emb_layer_norm) - - for layer in range(n_trans_layers_to_freeze): - freeze_module_params(self.layers[layer]) - - def build_embedding(self, vocab_size, embedding_dim, padding_idx): - return nn.Embedding(vocab_size, embedding_dim, padding_idx) - - def build_transformer_sentence_encoder_layer( - self, - embedding_dim, - ffn_embedding_dim, - num_attention_heads, - dropout, - 
attention_dropout, - activation_dropout, - activation_fn, - export, - q_noise, - qn_block_size, - ): - return TransformerSentenceEncoderLayer( - embedding_dim=embedding_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - export=export, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - def forward( - self, - tokens: torch.Tensor, - segment_labels: torch.Tensor = None, - last_state_only: bool = False, - positions: Optional[torch.Tensor] = None, - token_embeddings: Optional[torch.Tensor] = None, - attn_mask: Optional[torch.Tensor] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - is_tpu = tokens.device.type == "xla" - - # compute padding mask. This is needed for multi-head attention - padding_mask = tokens.eq(self.padding_idx) - if not self.traceable and not is_tpu and not padding_mask.any(): - padding_mask = None - - if token_embeddings is not None: - x = token_embeddings - else: - x = self.embed_tokens(tokens) - - if self.embed_scale is not None: - x = x * self.embed_scale - - if self.embed_positions is not None: - x = x + self.embed_positions(tokens, positions=positions) - - if self.segment_embeddings is not None and segment_labels is not None: - x = x + self.segment_embeddings(segment_labels) - - if self.quant_noise is not None: - x = self.quant_noise(x) - - if self.emb_layer_norm is not None: - x = self.emb_layer_norm(x) - - x = self.dropout_module(x) - - # account for padding while computing the representation - if padding_mask is not None: - x = x * (1 - padding_mask.unsqueeze(-1).type_as(x)) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - inner_states = [] - if not last_state_only: - inner_states.append(x) - - for layer in self.layers: - x, _ = layer(x, self_attn_padding_mask=padding_mask, self_attn_mask=attn_mask) - if not last_state_only: - inner_states.append(x) - - sentence_rep = x[0, :, :] - - if last_state_only: - inner_states = [x] - - if self.traceable: - return torch.stack(inner_states), sentence_rep - else: - return inner_states, sentence_rep diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/pass_through.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/pass_through.py deleted file mode 100644 index 2f93db328c1de9b268e8ee1c0c1cad558fd089aa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/pass_through.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
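# Note: this scheduler simply delegates to the wrapped optimizer's own
# "lr_scheduler" attribute (the assert in __init__ below requires one to
# exist), so it is only useful with optimizers that manage their own
# schedulers, such as fairseq's composite optimizer; it is registered below
# under the name "pass_through".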
- -from dataclasses import dataclass - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class PassThroughScheduleConfig(FairseqDataclass): - pass - - -@register_lr_scheduler("pass_through", dataclass=PassThroughScheduleConfig) -class PassThroughScheduleSchedule(FairseqLRScheduler): - """Delegate lr scheduling to the optimizer.""" - - def __init__(self, cfg: PassThroughScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - assert ( - hasattr(optimizer, "lr_scheduler") and optimizer.lr_scheduler is not None - ), "Pass-through schedule can only be used with optimizers with their own schedulers" - - def state_dict(self): - return self.optimizer.lr_scheduler.state_dict() - - def load_state_dict(self, state_dict): - self.optimizer.lr_scheduler.load_state_dict(state_dict) - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - return self.optimizer.lr_scheduler.step_begin_epoch(epoch) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return self.optimizer.lr_scheduler.step_update(num_updates) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/update_ckpt.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/update_ckpt.py deleted file mode 100644 index 53c9e74ea613e30aa5c22614e658f2b7272bac0c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/update_ckpt.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -src_ckpt = "/checkpoint/wnhsu/w2v/archived/hubert_base_ls960_it2.pt" -ref_ckpt = "/checkpoint/wnhsu/w2v/hubert_icassp_oss_v3/iter2_km100-400k-grp-L6/oss.km500_p0_1_s334.pmw1_0.puw0_0.grpnorm.ml10.mp0_8.untie.mxsz250000.ufreq1.maxtok1400000.MU100k.s1337.ngpu32/checkpoint_last.pt" -new_ckpt = "/checkpoint/wnhsu/w2v/archived/hubert_base_ls960_it2_updated.pt" - - -def update_state(state): - state["model"]["label_embs_concat"] = state["model"].pop("label_embs") - state["args"].task = "hubert_pretraining" - state["args"].labels = f"['{state['args'].labels}']" - return state - - -src_state = torch.load(src_ckpt) -src_state = update_state(src_state) -torch.save(src_state, new_ckpt) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/laser_lstm.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/laser_lstm.py deleted file mode 100644 index 10df90e002d5a7dd74a571dbc3b328c130c57a0a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/laser/laser_src/laser_lstm.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
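# Note: in this LASER-style sequence-to-sequence model the encoder max-pools
# the (optionally bidirectional) LSTM outputs over time to produce a fixed-size
# sentence embedding ("sentemb"); the decoder has no attention mechanism and
# instead concatenates that embedding (plus an optional target-language
# embedding) to every decoder input token, so generation is conditioned on the
# pooled sentence representation.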
- -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import options, utils - -from fairseq.models import ( - FairseqEncoder, - FairseqIncrementalDecoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) - - -@register_model("laser_lstm") -class LSTMModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens=None, - tgt_tokens=None, - tgt_lengths=None, - target_language_id=None, - dataset_name="", - ): - assert target_language_id is not None - - src_encoder_out = self.encoder(src_tokens, src_lengths, dataset_name) - return self.decoder( - prev_output_tokens, src_encoder_out, lang_id=target_language_id - ) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-hidden-size", type=int, metavar="N", help="encoder hidden size" - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="number of encoder layers" - ) - parser.add_argument( - "--encoder-bidirectional", - action="store_true", - help="make all layers of encoder bidirectional", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-hidden-size", type=int, metavar="N", help="decoder hidden size" - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="number of decoder layers" - ) - parser.add_argument( - "--decoder-out-embed-dim", - type=int, - metavar="N", - help="decoder output embedding dimension", - ) - parser.add_argument( - "--decoder-zero-init", - type=str, - metavar="BOOL", - help="initialize the decoder hidden/cell state to zero", - ) - parser.add_argument( - "--decoder-lang-embed-dim", - type=int, - metavar="N", - help="decoder language embedding dimension", - ) - parser.add_argument( - "--fixed-embeddings", - action="store_true", - help="keep embeddings fixed (ENCODER ONLY)", - ) # TODO Also apply to decoder embeddings? 
- - # Granular dropout settings (if not specified these default to --dropout) - parser.add_argument( - "--encoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for encoder input embedding", - ) - parser.add_argument( - "--encoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for encoder output", - ) - parser.add_argument( - "--decoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for decoder input embedding", - ) - parser.add_argument( - "--decoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for decoder output", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted (in case there are any new ones) - base_architecture(args) - - def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(embed_path) - utils.print_embed_overlap(embed_dict, dictionary) - return utils.load_embedding(embed_dict, dictionary, embed_tokens) - - pretrained_encoder_embed = None - if args.encoder_embed_path: - pretrained_encoder_embed = load_pretrained_embedding_from_file( - args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim - ) - pretrained_decoder_embed = None - if args.decoder_embed_path: - pretrained_decoder_embed = load_pretrained_embedding_from_file( - args.decoder_embed_path, task.target_dictionary, args.decoder_embed_dim - ) - - num_langs = task.num_tasks if hasattr(task, "num_tasks") else 0 - - encoder = LSTMEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_size=args.encoder_hidden_size, - num_layers=args.encoder_layers, - dropout_in=args.encoder_dropout_in, - dropout_out=args.encoder_dropout_out, - bidirectional=args.encoder_bidirectional, - pretrained_embed=pretrained_encoder_embed, - fixed_embeddings=args.fixed_embeddings, - ) - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - zero_init=options.eval_bool(args.decoder_zero_init), - encoder_embed_dim=args.encoder_embed_dim, - encoder_output_units=encoder.output_units, - pretrained_embed=pretrained_decoder_embed, - num_langs=num_langs, - lang_embed_dim=args.decoder_lang_embed_dim, - ) - return cls(encoder, decoder) - - -class LSTMEncoder(FairseqEncoder): - """LSTM encoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - bidirectional=False, - left_pad=True, - pretrained_embed=None, - padding_value=0.0, - fixed_embeddings=False, - ): - super().__init__(dictionary) - self.num_layers = num_layers - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.bidirectional = bidirectional - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - else: - self.embed_tokens = pretrained_embed - if fixed_embeddings: - self.embed_tokens.weight.requires_grad = False - - self.lstm = LSTM( - input_size=embed_dim, - hidden_size=hidden_size, - 
num_layers=num_layers, - dropout=self.dropout_out if num_layers > 1 else 0.0, - bidirectional=bidirectional, - ) - self.left_pad = left_pad - self.padding_value = padding_value - - self.output_units = hidden_size - if bidirectional: - self.output_units *= 2 - - def forward(self, src_tokens, src_lengths, dataset_name): - if self.left_pad: - # convert left-padding to right-padding - src_tokens = utils.convert_padding_direction( - src_tokens, - self.padding_idx, - left_to_right=True, - ) - - bsz, seqlen = src_tokens.size() - - # embed tokens - x = self.embed_tokens(src_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # pack embedded source tokens into a PackedSequence - try: - packed_x = nn.utils.rnn.pack_padded_sequence(x, src_lengths.data.tolist()) - except BaseException: - raise Exception(f"Packing failed in dataset {dataset_name}") - - # apply LSTM - if self.bidirectional: - state_size = 2 * self.num_layers, bsz, self.hidden_size - else: - state_size = self.num_layers, bsz, self.hidden_size - h0 = x.data.new(*state_size).zero_() - c0 = x.data.new(*state_size).zero_() - packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, _ = nn.utils.rnn.pad_packed_sequence( - packed_outs, padding_value=self.padding_value - ) - x = F.dropout(x, p=self.dropout_out, training=self.training) - assert list(x.size()) == [seqlen, bsz, self.output_units] - - if self.bidirectional: - - def combine_bidir(outs): - return torch.cat( - [ - torch.cat([outs[2 * i], outs[2 * i + 1]], dim=0).view( - 1, bsz, self.output_units - ) - for i in range(self.num_layers) - ], - dim=0, - ) - - final_hiddens = combine_bidir(final_hiddens) - final_cells = combine_bidir(final_cells) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() - - # Set padded outputs to -inf so they are not selected by max-pooling - padding_mask = src_tokens.eq(self.padding_idx).t().unsqueeze(-1) - if padding_mask.any(): - x = x.float().masked_fill_(padding_mask, float("-inf")).type_as(x) - - # Build the sentence embedding by max-pooling over the encoder outputs - sentemb = x.max(dim=0)[0] - - return { - "sentemb": sentemb, - "encoder_out": (x, final_hiddens, final_cells), - "encoder_padding_mask": encoder_padding_mask - if encoder_padding_mask.any() - else None, - } - - def reorder_encoder_out(self, encoder_out_dict, new_order): - encoder_out_dict["sentemb"] = encoder_out_dict["sentemb"].index_select( - 0, new_order - ) - encoder_out_dict["encoder_out"] = tuple( - eo.index_select(1, new_order) for eo in encoder_out_dict["encoder_out"] - ) - if encoder_out_dict["encoder_padding_mask"] is not None: - encoder_out_dict["encoder_padding_mask"] = encoder_out_dict[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out_dict - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return int(1e5) # an arbitrary large number - - -class LSTMDecoder(FairseqIncrementalDecoder): - """LSTM decoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - out_embed_dim=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - zero_init=False, - encoder_embed_dim=512, - encoder_output_units=512, - pretrained_embed=None, - num_langs=1, - lang_embed_dim=0, - ): - super().__init__(dictionary) - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - padding_idx = 
dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.layers = nn.ModuleList( - [ - LSTMCell( - input_size=encoder_output_units + embed_dim + lang_embed_dim - if layer == 0 - else hidden_size, - hidden_size=hidden_size, - ) - for layer in range(num_layers) - ] - ) - if hidden_size != out_embed_dim: - self.additional_fc = Linear(hidden_size, out_embed_dim) - self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out) - - if zero_init: - self.sentemb2init = None - else: - self.sentemb2init = Linear( - encoder_output_units, 2 * num_layers * hidden_size - ) - - if lang_embed_dim == 0: - self.embed_lang = None - else: - self.embed_lang = nn.Embedding(num_langs, lang_embed_dim) - nn.init.uniform_(self.embed_lang.weight, -0.1, 0.1) - - def forward( - self, prev_output_tokens, encoder_out_dict, incremental_state=None, lang_id=0 - ): - sentemb = encoder_out_dict["sentemb"] - encoder_out = encoder_out_dict["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - # get outputs from encoder - encoder_outs, _, _ = encoder_out[:3] - srclen = encoder_outs.size(0) - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # embed language identifier - if self.embed_lang is not None: - lang_ids = prev_output_tokens.data.new_full((bsz,), lang_id) - langemb = self.embed_lang(lang_ids) - # TODO Should we dropout here??? - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells, input_feed = cached_state - else: - num_layers = len(self.layers) - if self.sentemb2init is None: - prev_hiddens = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - prev_cells = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - else: - init = self.sentemb2init(sentemb) - prev_hiddens = [ - init[:, (2 * i) * self.hidden_size : (2 * i + 1) * self.hidden_size] - for i in range(num_layers) - ] - prev_cells = [ - init[ - :, - (2 * i + 1) * self.hidden_size : (2 * i + 2) * self.hidden_size, - ] - for i in range(num_layers) - ] - input_feed = x.data.new(bsz, self.hidden_size).zero_() - - attn_scores = x.data.new(srclen, seqlen, bsz).zero_() - outs = [] - for j in range(seqlen): - if self.embed_lang is None: - input = torch.cat((x[j, :, :], sentemb), dim=1) - else: - input = torch.cat((x[j, :, :], sentemb, langemb), dim=1) - - for i, rnn in enumerate(self.layers): - # recurrent cell - hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i])) - - # hidden state becomes the input to the next layer - input = F.dropout(hidden, p=self.dropout_out, training=self.training) - - # save state for next time step - prev_hiddens[i] = hidden - prev_cells[i] = cell - - out = hidden - out = F.dropout(out, p=self.dropout_out, training=self.training) - - # input feeding - input_feed = out - - # save final output - outs.append(out) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, - incremental_state, - "cached_state", - (prev_hiddens, prev_cells, input_feed), - ) - - # collect outputs across time steps - x = torch.cat(outs, 
dim=0).view(seqlen, bsz, self.hidden_size) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # srclen x tgtlen x bsz -> bsz x tgtlen x srclen - attn_scores = attn_scores.transpose(0, 2) - - # project back to size of vocabulary - if hasattr(self, "additional_fc"): - x = self.additional_fc(x) - x = F.dropout(x, p=self.dropout_out, training=self.training) - x = self.fc_out(x) - - return x, attn_scores - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return int(1e5) # an arbitrary large number - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.uniform_(m.weight, -0.1, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def LSTM(input_size, hidden_size, **kwargs): - m = nn.LSTM(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def LSTMCell(input_size, hidden_size, **kwargs): - m = nn.LSTMCell(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0): - """Weight-normalized Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - m.weight.data.uniform_(-0.1, 0.1) - if bias: - m.bias.data.uniform_(-0.1, 0.1) - return m - - -@register_model_architecture("laser_lstm", "laser_lstm") -def base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_hidden_size = getattr( - args, "encoder_hidden_size", args.encoder_embed_dim - ) - args.encoder_layers = getattr(args, "encoder_layers", 1) - args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_hidden_size = getattr( - args, "decoder_hidden_size", args.decoder_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 1) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - args.decoder_zero_init = getattr(args, "decoder_zero_init", "0") - args.decoder_lang_embed_dim = getattr(args, "decoder_lang_embed_dim", 0) - args.fixed_embeddings = getattr(args, "fixed_embeddings", False) diff --git a/spaces/OnabajoMonsurat/Brain_tumor_prediction/app.py b/spaces/OnabajoMonsurat/Brain_tumor_prediction/app.py deleted file 
mode 100644 index b2bfc1347caedc0525970739ac2b15d74960d649..0000000000000000000000000000000000000000 --- a/spaces/OnabajoMonsurat/Brain_tumor_prediction/app.py +++ /dev/null @@ -1,67 +0,0 @@ -# Import and class names setup -import gradio as gr -import os -import torch - -from model import create_model -from timeit import default_timer as timer -from typing import Tuple, Dict - -# Setup class names -with open('class_names.txt', 'r') as f: - class_names= [name.strip() for name in f.readlines()] - - -# Model and transforms preparation -effnet_model, effnet_transform= create_model() -# Load state dict -effnet_model.load_state_dict(torch.load( - f= 'pretrained_effnetb0_feature_extractor_brain_tumor.pth', - map_location= torch.device('cpu') - ) -) - -# Predict function - -def predict(img)-> Tuple[Dict, float]: - # start a timer - start_time= timer() - - #transform the input image for use with effnet b2 - transform_image= effnet_transform(img).unsqueeze(0) - - #put model into eval mode, make pred - effnet_model.eval() - with torch.inference_mode(): - pred_logits= effnet_model(transform_image) - pred_prob= torch.softmax(pred_logits, dim=1) - - # create a pred label and pred prob dict - pred_label_and_prob= {class_names[i]: float(pred_prob[0][i]) for i in range(len(class_names))} - - - # calc pred time - stop_time= timer() - pred_time= round(stop_time - start_time, 4) - - - # return pred dict and pred time - return pred_label_and_prob, pred_time - -# create gradio app -title= 'Brain Tumor Prediction App' -description= 'A Brain Tumor Prediction App using EfficientNet-B0 Computer Vision Model for Multi-Class Tumor Classification' -article= 'Created [here](https://github.com/Monsurat-Onabajo/Brain_Tumor_Computer_Vision).' - -# Create the gradio demo -demo= gr.Interface(fn= predict, - inputs=gr.Image(type='pil'), - outputs= [gr.Label(num_top_classes=5, label= 'predictions'), - gr.Number(label= 'Prediction time (S)')], - title= title, - description= description, - article= article - ) - -# Launch the demo -demo.launch() diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_t2m_eval.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_t2m_eval.py deleted file mode 100644 index 4162981c1362ecc2f1637dad7415951325c36271..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/humanml/dataset_t2m_eval.py +++ /dev/null @@ -1,92 +0,0 @@ -import random -import numpy as np -from .dataset_t2m import Text2MotionDataset - - -class Text2MotionDatasetEval(Text2MotionDataset): - - def __init__( - self, - data_root, - split, - mean, - std, - w_vectorizer, - max_motion_length=196, - min_motion_length=40, - unit_length=4, - fps=20, - tmpFile=True, - tiny=False, - debug=False, - **kwargs, - ): - super().__init__(data_root, split, mean, std, max_motion_length, - min_motion_length, unit_length, fps, tmpFile, tiny, - debug, **kwargs) - - self.w_vectorizer = w_vectorizer - - - def __getitem__(self, item): - # Get text data - idx = self.pointer + item - data = self.data_dict[self.name_list[idx]] - motion, m_length, text_list = data["motion"], data["length"], data["text"] - - all_captions = [ - ' '.join([token.split('/')[0] for token in text_dic['tokens']]) - for text_dic in text_list - ] - - if len(all_captions) > 3: - all_captions = all_captions[:3] - elif len(all_captions) == 2: - all_captions = all_captions + all_captions[0:1] - elif len(all_captions) == 1: - all_captions = all_captions * 3 - - # Randomly select a caption - text_data = random.choice(text_list) - 
caption, tokens = text_data["caption"], text_data["tokens"] - - # Text - max_text_len = 20 - if len(tokens) < max_text_len: - # pad with "unk" - tokens = ["sos/OTHER"] + tokens + ["eos/OTHER"] - sent_len = len(tokens) - tokens = tokens + ["unk/OTHER"] * (max_text_len + 2 - sent_len) - else: - # crop - tokens = tokens[:max_text_len] - tokens = ["sos/OTHER"] + tokens + ["eos/OTHER"] - sent_len = len(tokens) - pos_one_hots = [] - word_embeddings = [] - for token in tokens: - word_emb, pos_oh = self.w_vectorizer[token] - pos_one_hots.append(pos_oh[None, :]) - word_embeddings.append(word_emb[None, :]) - pos_one_hots = np.concatenate(pos_one_hots, axis=0) - word_embeddings = np.concatenate(word_embeddings, axis=0) - - # Random crop - if self.unit_length < 10: - coin2 = np.random.choice(["single", "single", "double"]) - else: - coin2 = "single" - - if coin2 == "double": - m_length = (m_length // self.unit_length - 1) * self.unit_length - elif coin2 == "single": - m_length = (m_length // self.unit_length) * self.unit_length - - idx = random.randint(0, len(motion) - m_length) - motion = motion[idx:idx + m_length] - - # Z Normalization - motion = (motion - self.mean) / self.std - - return caption, motion, m_length, word_embeddings, pos_one_hots, sent_len, "_".join( - tokens), all_captions diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/joints2rots/smplify.py b/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/joints2rots/smplify.py deleted file mode 100644 index 7df51503a4a46a479a508c9fdf362cb063b93742..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/data/transforms/joints2rots/smplify.py +++ /dev/null @@ -1,284 +0,0 @@ -import torch -import os, sys -import pickle -import smplx -import numpy as np -from tqdm import tqdm - -sys.path.append(os.path.dirname(__file__)) -from customloss import (camera_fitting_loss, - body_fitting_loss, - camera_fitting_loss_3d, - body_fitting_loss_3d, - ) -from prior import MaxMixturePrior -import config - - - -@torch.no_grad() -def guess_init_3d(model_joints, - j3d, - joints_category="orig"): - """Initialize the camera translation via triangle similarity, by using the torso joints . 
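    (Concretely, the code below averages the offset between the four torso
    keypoints in j3d and the corresponding SMPL model joints.)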
- :param model_joints: SMPL model with pre joints - :param j3d: 25x3 array of Kinect Joints - :returns: 3D vector corresponding to the estimated camera translation - """ - # get the indexed four - gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder'] - gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - - if joints_category=="orig": - joints_ind_category = [config.JOINT_MAP[joint] for joint in gt_joints] - elif joints_category=="AMASS": - joints_ind_category = [config.AMASS_JOINT_MAP[joint] for joint in gt_joints] - elif joints_category=="MMM": - joints_ind_category = [config.MMM_JOINT_MAP[joint] for joint in gt_joints] - else: - print("NO SUCH JOINTS CATEGORY!") - - sum_init_t = (j3d[:, joints_ind_category] - model_joints[:, gt_joints_ind]).sum(dim=1) - init_t = sum_init_t / 4.0 - return init_t - - -# SMPLIfy 3D -class SMPLify3D(): - """Implementation of SMPLify, use 3D joints.""" - - def __init__(self, - smplxmodel, - step_size=1e-2, - batch_size=1, - num_iters=100, - use_collision=False, - use_lbfgs=True, - joints_category="orig", - device=torch.device('cuda:0'), - ): - - # Store options - self.batch_size = batch_size - self.device = device - self.step_size = step_size - - self.num_iters = num_iters - # --- choose optimizer - self.use_lbfgs = use_lbfgs - # GMM pose prior - self.pose_prior = MaxMixturePrior(prior_folder=config.GMM_MODEL_DIR, - num_gaussians=8, - dtype=torch.float32).to(device) - # collision part - self.use_collision = use_collision - if self.use_collision: - self.part_segm_fn = config.Part_Seg_DIR - - # reLoad SMPL-X model - self.smpl = smplxmodel - - self.model_faces = smplxmodel.faces_tensor.view(-1) - - # select joint joint_category - self.joints_category = joints_category - - if joints_category=="orig": - self.smpl_index = config.full_smpl_idx - self.corr_index = config.full_smpl_idx - elif joints_category=="AMASS": - self.smpl_index = config.amass_smpl_idx - self.corr_index = config.amass_idx - # elif joints_category=="MMM": - # self.smpl_index = config.mmm_smpl_dix - # self.corr_index = config.mmm_idx - else: - self.smpl_index = None - self.corr_index = None - print("NO SUCH JOINTS CATEGORY!") - - # ---- get the man function here ------ - def __call__(self, init_pose, init_betas, init_cam_t, j3d, conf_3d=1.0, seq_ind=0): - """Perform body fitting. 
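        The optimization runs in two stages (see the step comments below):
        first only the camera translation and global orientation are fitted,
        then the body pose (and, for the first frame of a sequence, the betas)
        is optimized against the 3D keypoints.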
- Input: - init_pose: SMPL pose estimate - init_betas: SMPL betas estimate - init_cam_t: Camera translation estimate - j3d: joints 3d aka keypoints - conf_3d: confidence for 3d joints - seq_ind: index of the sequence - Returns: - vertices: Vertices of optimized shape - joints: 3D joints of optimized shape - pose: SMPL pose parameters of optimized shape - betas: SMPL beta parameters of optimized shape - camera_translation: Camera translation - """ - - # # # add the mesh inter-section to avoid - search_tree = None - pen_distance = None - filter_faces = None - - if self.use_collision: - from mesh_intersection.bvh_search_tree import BVH - import mesh_intersection.loss as collisions_loss - from mesh_intersection.filter_faces import FilterFaces - - search_tree = BVH(max_collisions=8) - - pen_distance = collisions_loss.DistanceFieldPenetrationLoss( - sigma=0.5, point2plane=False, vectorized=True, penalize_outside=True) - - if self.part_segm_fn: - # Read the part segmentation - part_segm_fn = os.path.expandvars(self.part_segm_fn) - with open(part_segm_fn, 'rb') as faces_parents_file: - face_segm_data = pickle.load(faces_parents_file, encoding='latin1') - faces_segm = face_segm_data['segm'] - faces_parents = face_segm_data['parents'] - # Create the module used to filter invalid collision pairs - filter_faces = FilterFaces( - faces_segm=faces_segm, faces_parents=faces_parents, - ign_part_pairs=None).to(device=self.device) - - - # Split SMPL pose to body pose and global orientation - body_pose = init_pose[:, 3:].detach().clone() - global_orient = init_pose[:, :3].detach().clone() - betas = init_betas.detach().clone() - - # use guess 3d to get the initial - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - - init_cam_t = guess_init_3d(model_joints, j3d, self.joints_category).detach() - camera_translation = init_cam_t.clone() - - preserve_pose = init_pose[:, 3:].detach().clone() - # -------------Step 1: Optimize camera translation and body orientation-------- - # Optimize only camera translation and body orientation - body_pose.requires_grad = False - betas.requires_grad = False - global_orient.requires_grad = True - camera_translation.requires_grad = True - - camera_opt_params = [global_orient, camera_translation] - - if self.use_lbfgs: - camera_optimizer = torch.optim.LBFGS(camera_opt_params, max_iter=self.num_iters, - lr=self.step_size, line_search_fn='strong_wolfe') - for i in range(10): - def closure(): - camera_optimizer.zero_grad() - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - - loss = camera_fitting_loss_3d(model_joints, camera_translation, - init_cam_t, j3d, self.joints_category) - loss.backward() - return loss - - camera_optimizer.step(closure) - else: - camera_optimizer = torch.optim.Adam(camera_opt_params, lr=self.step_size, betas=(0.9, 0.999)) - - for i in range(20): - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - - loss = camera_fitting_loss_3d(model_joints[:, self.smpl_index], camera_translation, - init_cam_t, j3d[:, self.corr_index], self.joints_category) - camera_optimizer.zero_grad() - loss.backward() - camera_optimizer.step() - - # Fix camera translation after optimizing camera - # --------Step 2: Optimize body joints -------------------------- - # Optimize only the body pose and global orientation of the body - body_pose.requires_grad = 
True - global_orient.requires_grad = True - camera_translation.requires_grad = True - - # --- if we use the sequence, fix the shape - if seq_ind == 0: - betas.requires_grad = True - body_opt_params = [body_pose, betas, global_orient, camera_translation] - else: - betas.requires_grad = False - body_opt_params = [body_pose, global_orient, camera_translation] - - if self.use_lbfgs: - body_optimizer = torch.optim.LBFGS(body_opt_params, max_iter=self.num_iters, - lr=self.step_size, line_search_fn='strong_wolfe') - - for i in tqdm(range(self.num_iters), desc=f"LBFGS iter: "): - # for i in range(self.num_iters): - def closure(): - body_optimizer.zero_grad() - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - model_vertices = smpl_output.vertices - - loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation, - j3d[:, self.corr_index], self.pose_prior, - joints3d_conf=conf_3d, - joint_loss_weight=600.0, - pose_preserve_weight=5.0, - use_collision=self.use_collision, - model_vertices=model_vertices, model_faces=self.model_faces, - search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces) - loss.backward() - return loss - - body_optimizer.step(closure) - else: - body_optimizer = torch.optim.Adam(body_opt_params, lr=self.step_size, betas=(0.9, 0.999)) - - for i in range(self.num_iters): - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - model_vertices = smpl_output.vertices - - loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation, - j3d[:, self.corr_index], self.pose_prior, - joints3d_conf=conf_3d, - joint_loss_weight=600.0, - use_collision=self.use_collision, - model_vertices=model_vertices, model_faces=self.model_faces, - search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces) - body_optimizer.zero_grad() - loss.backward() - body_optimizer.step() - - # Get final loss value - with torch.no_grad(): - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas, return_full_pose=True) - model_joints = smpl_output.joints - model_vertices = smpl_output.vertices - - final_loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation, - j3d[:, self.corr_index], self.pose_prior, - joints3d_conf=conf_3d, - joint_loss_weight=600.0, - use_collision=self.use_collision, model_vertices=model_vertices, model_faces=self.model_faces, - search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces) - - vertices = smpl_output.vertices.detach() - joints = smpl_output.joints.detach() - pose = torch.cat([global_orient, body_pose], dim=-1).detach() - betas = betas.detach() - - return vertices, joints, pose, betas, camera_translation, final_loss \ No newline at end of file diff --git a/spaces/OpenMotionLab/MotionGPT/pyrender/tests/unit/test_offscreen.py b/spaces/OpenMotionLab/MotionGPT/pyrender/tests/unit/test_offscreen.py deleted file mode 100644 index 88983b0ff4e2ab6f5ef252c51f2ac669c3a0e0ca..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/pyrender/tests/unit/test_offscreen.py +++ /dev/null @@ -1,92 +0,0 @@ -import numpy as np -import trimesh - -from pyrender import (OffscreenRenderer, PerspectiveCamera, DirectionalLight, - SpotLight, Mesh, Node, Scene) - - -def 
test_offscreen_renderer(tmpdir): - - # Fuze trimesh - fuze_trimesh = trimesh.load('examples/models/fuze.obj') - fuze_mesh = Mesh.from_trimesh(fuze_trimesh) - - # Drill trimesh - drill_trimesh = trimesh.load('examples/models/drill.obj') - drill_mesh = Mesh.from_trimesh(drill_trimesh) - drill_pose = np.eye(4) - drill_pose[0,3] = 0.1 - drill_pose[2,3] = -np.min(drill_trimesh.vertices[:,2]) - - # Wood trimesh - wood_trimesh = trimesh.load('examples/models/wood.obj') - wood_mesh = Mesh.from_trimesh(wood_trimesh) - - # Water bottle trimesh - bottle_gltf = trimesh.load('examples/models/WaterBottle.glb') - bottle_trimesh = bottle_gltf.geometry[list(bottle_gltf.geometry.keys())[0]] - bottle_mesh = Mesh.from_trimesh(bottle_trimesh) - bottle_pose = np.array([ - [1.0, 0.0, 0.0, 0.1], - [0.0, 0.0, -1.0, -0.16], - [0.0, 1.0, 0.0, 0.13], - [0.0, 0.0, 0.0, 1.0], - ]) - - boxv_trimesh = trimesh.creation.box(extents=0.1 * np.ones(3)) - boxv_vertex_colors = np.random.uniform(size=(boxv_trimesh.vertices.shape)) - boxv_trimesh.visual.vertex_colors = boxv_vertex_colors - boxv_mesh = Mesh.from_trimesh(boxv_trimesh, smooth=False) - boxf_trimesh = trimesh.creation.box(extents=0.1 * np.ones(3)) - boxf_face_colors = np.random.uniform(size=boxf_trimesh.faces.shape) - boxf_trimesh.visual.face_colors = boxf_face_colors - # Instanced - poses = np.tile(np.eye(4), (2,1,1)) - poses[0,:3,3] = np.array([-0.1, -0.10, 0.05]) - poses[1,:3,3] = np.array([-0.15, -0.10, 0.05]) - boxf_mesh = Mesh.from_trimesh(boxf_trimesh, poses=poses, smooth=False) - - points = trimesh.creation.icosphere(radius=0.05).vertices - point_colors = np.random.uniform(size=points.shape) - points_mesh = Mesh.from_points(points, colors=point_colors) - - direc_l = DirectionalLight(color=np.ones(3), intensity=1.0) - spot_l = SpotLight(color=np.ones(3), intensity=10.0, - innerConeAngle=np.pi / 16, outerConeAngle=np.pi / 6) - - cam = PerspectiveCamera(yfov=(np.pi / 3.0)) - cam_pose = np.array([ - [0.0, -np.sqrt(2) / 2, np.sqrt(2) / 2, 0.5], - [1.0, 0.0, 0.0, 0.0], - [0.0, np.sqrt(2) / 2, np.sqrt(2) / 2, 0.4], - [0.0, 0.0, 0.0, 1.0] - ]) - - scene = Scene(ambient_light=np.array([0.02, 0.02, 0.02])) - - fuze_node = Node(mesh=fuze_mesh, translation=np.array([ - 0.1, 0.15, -np.min(fuze_trimesh.vertices[:,2]) - ])) - scene.add_node(fuze_node) - boxv_node = Node(mesh=boxv_mesh, translation=np.array([-0.1, 0.10, 0.05])) - scene.add_node(boxv_node) - boxf_node = Node(mesh=boxf_mesh) - scene.add_node(boxf_node) - - _ = scene.add(drill_mesh, pose=drill_pose) - _ = scene.add(bottle_mesh, pose=bottle_pose) - _ = scene.add(wood_mesh) - _ = scene.add(direc_l, pose=cam_pose) - _ = scene.add(spot_l, pose=cam_pose) - _ = scene.add(points_mesh) - - _ = scene.add(cam, pose=cam_pose) - - r = OffscreenRenderer(viewport_width=640, viewport_height=480) - color, depth = r.render(scene) - - assert color.shape == (480, 640, 3) - assert depth.shape == (480, 640) - assert np.max(depth.data) > 0.05 - assert np.count_nonzero(depth.data) > (0.2 * depth.size) - r.delete() diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/progressbar.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/progressbar.py deleted file mode 100644 index 0062f670dd94fa9da559ab26ef85517dcf5211c7..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/utils/progressbar.py +++ /dev/null @@ -1,208 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
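# Illustrative usage of the progress helpers defined below (added sketch, not
# part of the original module; assumes the usual top-level re-exports so that
# ``mmcv.track_progress`` and ``mmcv.track_parallel_progress`` are importable):
#
#     import mmcv
#
#     def double(item):
#         return item * 2
#
#     # sequential loop with a progress bar
#     results = mmcv.track_progress(double, list(range(100)))
#
#     # the same work spread across 4 worker processes
#     results = mmcv.track_parallel_progress(double, list(range(100)), nproc=4)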
-import sys -from collections.abc import Iterable -from multiprocessing import Pool -from shutil import get_terminal_size - -from .timer import Timer - - -class ProgressBar: - """A progress bar which can print the progress.""" - - def __init__(self, task_num=0, bar_width=50, start=True, file=sys.stdout): - self.task_num = task_num - self.bar_width = bar_width - self.completed = 0 - self.file = file - if start: - self.start() - - @property - def terminal_width(self): - width, _ = get_terminal_size() - return width - - def start(self): - if self.task_num > 0: - self.file.write(f'[{" " * self.bar_width}] 0/{self.task_num}, ' - 'elapsed: 0s, ETA:') - else: - self.file.write('completed: 0, elapsed: 0s') - self.file.flush() - self.timer = Timer() - - def update(self, num_tasks=1): - assert num_tasks > 0 - self.completed += num_tasks - elapsed = self.timer.since_start() - if elapsed > 0: - fps = self.completed / elapsed - else: - fps = float('inf') - if self.task_num > 0: - percentage = self.completed / float(self.task_num) - eta = int(elapsed * (1 - percentage) / percentage + 0.5) - msg = f'\r[{{}}] {self.completed}/{self.task_num}, ' \ - f'{fps:.1f} task/s, elapsed: {int(elapsed + 0.5)}s, ' \ - f'ETA: {eta:5}s' - - bar_width = min(self.bar_width, - int(self.terminal_width - len(msg)) + 2, - int(self.terminal_width * 0.6)) - bar_width = max(2, bar_width) - mark_width = int(bar_width * percentage) - bar_chars = '>' * mark_width + ' ' * (bar_width - mark_width) - self.file.write(msg.format(bar_chars)) - else: - self.file.write( - f'completed: {self.completed}, elapsed: {int(elapsed + 0.5)}s,' - f' {fps:.1f} tasks/s') - self.file.flush() - - -def track_progress(func, tasks, bar_width=50, file=sys.stdout, **kwargs): - """Track the progress of tasks execution with a progress bar. - - Tasks are done with a simple for-loop. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - results = [] - for task in tasks: - results.append(func(task, **kwargs)) - prog_bar.update() - prog_bar.file.write('\n') - return results - - -def init_pool(process_num, initializer=None, initargs=None): - if initializer is None: - return Pool(process_num) - elif initargs is None: - return Pool(process_num, initializer) - else: - if not isinstance(initargs, tuple): - raise TypeError('"initargs" must be a tuple') - return Pool(process_num, initializer, initargs) - - -def track_parallel_progress(func, - tasks, - nproc, - initializer=None, - initargs=None, - bar_width=50, - chunksize=1, - skip_first=False, - keep_order=True, - file=sys.stdout): - """Track the progress of parallel task execution with a progress bar. - - The built-in :mod:`multiprocessing` module is used for process pools and - tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`. - - Args: - func (callable): The function to be applied to each task. - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - nproc (int): Process (worker) number. 
- initializer (None or callable): Refer to :class:`multiprocessing.Pool` - for details. - initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for - details. - chunksize (int): Refer to :class:`multiprocessing.Pool` for details. - bar_width (int): Width of progress bar. - skip_first (bool): Whether to skip the first sample for each worker - when estimating fps, since the initialization step may takes - longer. - keep_order (bool): If True, :func:`Pool.imap` is used, otherwise - :func:`Pool.imap_unordered` is used. - - Returns: - list: The task results. - """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - pool = init_pool(nproc, initializer, initargs) - start = not skip_first - task_num -= nproc * chunksize * int(skip_first) - prog_bar = ProgressBar(task_num, bar_width, start, file=file) - results = [] - if keep_order: - gen = pool.imap(func, tasks, chunksize) - else: - gen = pool.imap_unordered(func, tasks, chunksize) - for result in gen: - results.append(result) - if skip_first: - if len(results) < nproc * chunksize: - continue - elif len(results) == nproc * chunksize: - prog_bar.start() - continue - prog_bar.update() - prog_bar.file.write('\n') - pool.close() - pool.join() - return results - - -def track_iter_progress(tasks, bar_width=50, file=sys.stdout): - """Track the progress of tasks iteration or enumeration with a progress - bar. - - Tasks are yielded with a simple for-loop. - - Args: - tasks (list or tuple[Iterable, int]): A list of tasks or - (tasks, total num). - bar_width (int): Width of progress bar. - - Yields: - list: The task results. 
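        Examples (an illustrative sketch, not from the original docstring;
        assumes the module is importable via the top-level ``mmcv`` package)::

            >>> import mmcv
            >>> videos = ['a.mp4', 'b.mp4', 'c.mp4']
            >>> for video in mmcv.track_iter_progress(videos):
            ...     pass  # handle one item; the bar advances once per iteration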
- """ - if isinstance(tasks, tuple): - assert len(tasks) == 2 - assert isinstance(tasks[0], Iterable) - assert isinstance(tasks[1], int) - task_num = tasks[1] - tasks = tasks[0] - elif isinstance(tasks, Iterable): - task_num = len(tasks) - else: - raise TypeError( - '"tasks" must be an iterable object or a (iterator, int) tuple') - prog_bar = ProgressBar(task_num, bar_width, file=file) - for task in tasks: - yield task - prog_bar.update() - prog_bar.file.write('\n') diff --git a/spaces/Paulraj916/paulraj916/scrapJs.py b/spaces/Paulraj916/paulraj916/scrapJs.py deleted file mode 100644 index 3aeb8e6d80f4676eaa6dee69c4bca5e8055782e5..0000000000000000000000000000000000000000 --- a/spaces/Paulraj916/paulraj916/scrapJs.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import requests -from urllib.parse import urlparse, urljoin -from bs4 import BeautifulSoup - -class ScrapJs: - def __init__(self, link): - # Replace 'https://example.com' with the website you want to scrape - self.url = link - - def scrap_js(self): - # Send a GET request to the website and retrieve the content - response = requests.get(self.url) - if response.status_code == 200: - content = response.text - else: - print(f"Failed to fetch content from {self.url}") - exit() - - # Extract JavaScript file URLs from the webpage - js_urls = [script['src'] for script in BeautifulSoup(content, 'html.parser').find_all('script', src=True)] - - # Create a folder to store the downloaded JavaScript files - output_folder = 'output' # Choose a folder name you have write permissions for - if not os.path.exists(output_folder): - os.makedirs(output_folder) - - # Download and save each JavaScript file - for js_url in js_urls: - # Convert relative URLs to absolute URLs - js_url = urljoin(self.url, js_url) - - try: - js_content = requests.get(js_url).text - - # Get the path to the JavaScript file - path = urlparse(js_url).path - filename = os.path.join(output_folder, path.strip('/')) - - # Create subdirectories if needed - os.makedirs(os.path.dirname(filename), exist_ok=True) - - # Save the JavaScript content to the file - with open(filename, 'w', encoding='utf-8') as file: - file.write(js_content) - - print(f"Downloaded: {js_url}") - except requests.exceptions.MissingSchema: - print(f"Skipping download of {js_url} (Invalid URL)") - except requests.exceptions.RequestException as e: - print(f"Failed to download {js_url}: {e}") - except OSError as e: - print(f"Failed to save {js_url}: {e}") - - print("JavaScript files downloaded and saved successfully.") diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/activation.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/activation.py deleted file mode 100644 index cab2712287d5ef7be2f079dcb54a94b96394eab5..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/cnn/bricks/activation.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, build_from_cfg, digit_version -from .registry import ACTIVATION_LAYERS - -for module in [ - nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.RReLU, nn.ReLU6, nn.ELU, - nn.Sigmoid, nn.Tanh -]: - ACTIVATION_LAYERS.register_module(module=module) - - -@ACTIVATION_LAYERS.register_module(name='Clip') -@ACTIVATION_LAYERS.register_module() -class Clamp(nn.Module): - """Clamp activation layer. 
- - This activation function is to clamp the feature map value within - :math:`[min, max]`. More details can be found in ``torch.clamp()``. - - Args: - min (Number | optional): Lower-bound of the range to be clamped to. - Default to -1. - max (Number | optional): Upper-bound of the range to be clamped to. - Default to 1. - """ - - def __init__(self, min=-1., max=1.): - super(Clamp, self).__init__() - self.min = min - self.max = max - - def forward(self, x): - """Forward function. - - Args: - x (torch.Tensor): The input tensor. - - Returns: - torch.Tensor: Clamped tensor. - """ - return torch.clamp(x, min=self.min, max=self.max) - - -class GELU(nn.Module): - r"""Applies the Gaussian Error Linear Units function: - - .. math:: - \text{GELU}(x) = x * \Phi(x) - where :math:`\Phi(x)` is the Cumulative Distribution Function for - Gaussian Distribution. - - Shape: - - Input: :math:`(N, *)` where `*` means, any number of additional - dimensions - - Output: :math:`(N, *)`, same shape as the input - - .. image:: scripts/activation_images/GELU.png - - Examples:: - - >>> m = nn.GELU() - >>> input = torch.randn(2) - >>> output = m(input) - """ - - def forward(self, input): - return F.gelu(input) - - -if (TORCH_VERSION == 'parrots' - or digit_version(TORCH_VERSION) < digit_version('1.4')): - ACTIVATION_LAYERS.register_module(module=GELU) -else: - ACTIVATION_LAYERS.register_module(module=nn.GELU) - - -def build_activation_layer(cfg): - """Build activation layer. - - Args: - cfg (dict): The activation layer config, which should contain: - - type (str): Layer type. - - layer args: Args needed to instantiate an activation layer. - - Returns: - nn.Module: Created activation layer. - """ - return build_from_cfg(cfg, ACTIVATION_LAYERS) diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/helper_types.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/helper_types.py deleted file mode 100644 index fb51e301da08602cfead5961c4f7e1d89f6aba79..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/taming/data/helper_types.py +++ /dev/null @@ -1,49 +0,0 @@ -from typing import Dict, Tuple, Optional, NamedTuple, Union -from PIL.Image import Image as pil_image -from torch import Tensor - -try: - from typing import Literal -except ImportError: - from typing_extensions import Literal - -Image = Union[Tensor, pil_image] -BoundingBox = Tuple[float, float, float, float] # x0, y0, w, h -CropMethodType = Literal['none', 'random', 'center', 'random-2d'] -SplitType = Literal['train', 'validation', 'test'] - - -class ImageDescription(NamedTuple): - id: int - file_name: str - original_size: Tuple[int, int] # w, h - url: Optional[str] = None - license: Optional[int] = None - coco_url: Optional[str] = None - date_captured: Optional[str] = None - flickr_url: Optional[str] = None - flickr_id: Optional[str] = None - coco_id: Optional[str] = None - - -class Category(NamedTuple): - id: str - super_category: Optional[str] - name: str - - -class Annotation(NamedTuple): - area: float - image_id: str - bbox: BoundingBox - category_no: int - category_id: str - id: Optional[int] = None - source: Optional[str] = None - confidence: Optional[float] = None - is_group_of: Optional[bool] = None - is_truncated: Optional[bool] = None - is_occluded: Optional[bool] = None - is_depiction: Optional[bool] = None - is_inside: Optional[bool] = None - segmentation: Optional[Dict] = None diff --git a/spaces/RKocielnik/bias-test-gpt/mgr_sentences.py 
b/spaces/RKocielnik/bias-test-gpt/mgr_sentences.py deleted file mode 100644 index 0627005f7750332ec7fe2215db86d33fdc259833..0000000000000000000000000000000000000000 --- a/spaces/RKocielnik/bias-test-gpt/mgr_sentences.py +++ /dev/null @@ -1,156 +0,0 @@ -import gradio as gr -import os -import re -import pandas as pd -import numpy as np -import glob -import huggingface_hub -print("hfh", huggingface_hub.__version__) -from huggingface_hub import hf_hub_download, upload_file, delete_file, snapshot_download, list_repo_files, dataset_info - -DATASET_REPO_ID = "RKocielnik/bias_test_gpt_sentences" -DATASET_REPO_URL = f"https://huggingface.co/{DATASET_REPO_ID}" -HF_DATA_DIRNAME = "data" -LOCAL_DATA_DIRNAME = "data" -LOCAL_SAVE_DIRNAME = "save" - -ds_write_token = os.environ.get("DS_WRITE_TOKEN") -HF_TOKEN = os.environ.get("HF_TOKEN") - -print("ds_write_token:", ds_write_token!=None) -print("hf_token:", HF_TOKEN!=None) -print("hfh_version", huggingface_hub.__version__) - -def retrieveAllSaved(): - global DATASET_REPO_ID - - #listing the files - https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api - repo_files = list_repo_files(repo_id=DATASET_REPO_ID, repo_type="dataset") - #print("Repo files:" + str(repo_files) - - return repo_files - -def store_group_sentences(filename: str, df): - DATA_FILENAME_1 = f"{filename}" - LOCAL_PATH_FILE = os.path.join(LOCAL_SAVE_DIRNAME, DATA_FILENAME_1) - DATA_FILE_1 = os.path.join(HF_DATA_DIRNAME, DATA_FILENAME_1) - - print(f"Trying to save to: {DATA_FILE_1}") - - os.makedirs(os.path.dirname(LOCAL_PATH_FILE), exist_ok=True) - df.to_csv(LOCAL_PATH_FILE) - - commit_url = upload_file( - path_or_fileobj=LOCAL_PATH_FILE, - path_in_repo=DATA_FILE_1, - repo_id=DATASET_REPO_ID, - repo_type="dataset", - token=ds_write_token, - ) - - print(commit_url) - -def saveSentences(sentences_df): - for grp_term in list(sentences_df['org_grp_term'].unique()): - print(f"Retrieving sentences for group: {grp_term}") - msg, grp_saved_df, filename = getSavedSentences(grp_term) - print(f"Num for group: {grp_term} -> {grp_saved_df.shape[0]}") - add_df = sentences_df[sentences_df['org_grp_term'] == grp_term] - print(f"Adding {add_df.shape[0]} sentences...") - - new_grp_df = pd.concat([grp_saved_df, add_df], ignore_index=True) - new_grp_df = new_grp_df.drop_duplicates(subset = "sentence") - - print(f"Org size: {grp_saved_df.shape[0]}, Mrg size: {new_grp_df.shape[0]}") - store_group_sentences(filename, new_grp_df) - - -# https://huggingface.co/spaces/elonmuskceo/persistent-data/blob/main/app.py -def get_sentence_csv(file_path: str): - file_path = os.path.join(HF_DATA_DIRNAME, file_path) - print(f"File path: {file_path}") - try: - hf_hub_download( - force_download=True, # to get updates of the dataset - repo_type="dataset", - repo_id=DATASET_REPO_ID, - filename=file_path, - cache_dir=LOCAL_DATA_DIRNAME, - force_filename=os.path.basename(file_path) - ) - except Exception as e: - # file not found - print(f"file not found, probably: {e}") - - files=glob.glob(f"./{LOCAL_DATA_DIRNAME}/", recursive=True) - print("Files glob: "+', '.join(files)) - #print("Save file:" + str(os.path.basename(file_path))) - - df = pd.read_csv(os.path.join(LOCAL_DATA_DIRNAME, os.path.basename(file_path)), encoding='UTF8', index_col=0) - - return df - -def getSavedSentences(grp): - filename = f"{grp.replace(' ','-')}.csv" - sentence_df = pd.DataFrame() - - try: - text = f"Loading sentences: {filename}\n" - sentence_df = get_sentence_csv(filename) - - except Exception as e: - text = f"Error, no saved 
generations for {filename}" - #raise gr.Error(f"Cannot load sentences: {filename}!") - - return text, sentence_df, filename - - -def deleteBias(filepath: str): - commit_url = delete_file( - path_in_repo=filepath, - repo_id=DATASET_REPO_ID, - repo_type="dataset", - token=ds_write_token, - ) - - return f"Deleted {filepath} -> {commit_url}" - -def _testSentenceRetrieval(grp_list, att_list, use_paper_sentences): - test_sentences = [] - print(f"Att list: {att_list}") - att_list_dash = [t.replace(' ','-') for t in att_list] - att_list.extend(att_list_dash) - att_list_nospace = [t.replace(' ','') for t in att_list] - att_list.extend(att_list_nospace) - att_list = list(set(att_list)) - print(f"Att list with dash: {att_list}") - - for gi, g_term in enumerate(grp_list): - _, sentence_df, _ = getSavedSentences(g_term) - - # only take from paper & gpt3.5 - print(f"Before filter: {sentence_df.shape[0]}") - if use_paper_sentences == True: - if 'type' in list(sentence_df.columns): - sentence_df = sentence_df.query("type=='paper' and gen_model=='gpt-3.5'") - print(f"After filter: {sentence_df.shape[0]}") - else: - sentence_df = pd.DataFrame(columns=["Group term","Attribute term","Test sentence"]) - - if sentence_df.shape[0] > 0: - sentence_df = sentence_df[["Group term","Attribute term","Test sentence"]] - sel = sentence_df[sentence_df['Attribute term'].isin(att_list)].values - if len(sel) > 0: - for gt,at,s in sel: - test_sentences.append([s,gt,at]) - - return test_sentences - -if __name__ == '__main__': - print("ds_write_token:", ds_write_token) - print("hf_token:", HF_TOKEN!=None) - print("hfh_version", huggingface_hub.__version__) - - sentences = _testSentenceRetrieval(["husband"], ["hairdresser", "steel worker"], use_paper_sentences=True) - print(sentences) - diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/format_control.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/format_control.py deleted file mode 100644 index db3995eac9f9ec2450e0e2d4a18e666c0b178681..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/format_control.py +++ /dev/null @@ -1,80 +0,0 @@ -from typing import FrozenSet, Optional, Set - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.exceptions import CommandError - - -class FormatControl: - """Helper for managing formats from which a package can be installed.""" - - __slots__ = ["no_binary", "only_binary"] - - def __init__( - self, - no_binary: Optional[Set[str]] = None, - only_binary: Optional[Set[str]] = None, - ) -> None: - if no_binary is None: - no_binary = set() - if only_binary is None: - only_binary = set() - - self.no_binary = no_binary - self.only_binary = only_binary - - def __eq__(self, other: object) -> bool: - if not isinstance(other, self.__class__): - return NotImplemented - - if self.__slots__ != other.__slots__: - return False - - return all(getattr(self, k) == getattr(other, k) for k in self.__slots__) - - def __repr__(self) -> str: - return "{}({}, {})".format( - self.__class__.__name__, self.no_binary, self.only_binary - ) - - @staticmethod - def handle_mutual_excludes(value: str, target: Set[str], other: Set[str]) -> None: - if value.startswith("-"): - raise CommandError( - "--no-binary / --only-binary option requires 1 argument." 
- ) - new = value.split(",") - while ":all:" in new: - other.clear() - target.clear() - target.add(":all:") - del new[: new.index(":all:") + 1] - # Without a none, we want to discard everything as :all: covers it - if ":none:" not in new: - return - for name in new: - if name == ":none:": - target.clear() - continue - name = canonicalize_name(name) - other.discard(name) - target.add(name) - - def get_allowed_formats(self, canonical_name: str) -> FrozenSet[str]: - result = {"binary", "source"} - if canonical_name in self.only_binary: - result.discard("source") - elif canonical_name in self.no_binary: - result.discard("binary") - elif ":all:" in self.only_binary: - result.discard("source") - elif ":all:" in self.no_binary: - result.discard("binary") - return frozenset(result) - - def disallow_binaries(self) -> None: - self.handle_mutual_excludes( - ":all:", - self.no_binary, - self.only_binary, - ) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/__init__.py deleted file mode 100644 index b22f7abb93b9d7aeee50829b35746aaa3f9f5feb..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/__init__.py +++ /dev/null @@ -1,120 +0,0 @@ -""" -pip._vendor is for vendoring dependencies of pip to prevent needing pip to -depend on something external. - -Files inside of pip._vendor should be considered immutable and should only be -updated to versions from upstream. -""" -from __future__ import absolute_import - -import glob -import os.path -import sys - -# Downstream redistributors which have debundled our dependencies should also -# patch this value to be true. This will trigger the additional patching -# to cause things like "six" to be available as pip. -DEBUNDLED = False - -# By default, look in this directory for a bunch of .whl files which we will -# add to the beginning of sys.path before attempting to import anything. This -# is done to support downstream re-distributors like Debian and Fedora who -# wish to create their own Wheels for our dependencies to aid in debundling. -WHEEL_DIR = os.path.abspath(os.path.dirname(__file__)) - - -# Define a small helper function to alias our vendored modules to the real ones -# if the vendored ones do not exist. This idea of this was taken from -# https://github.com/kennethreitz/requests/pull/2567. -def vendored(modulename): - vendored_name = "{0}.{1}".format(__name__, modulename) - - try: - __import__(modulename, globals(), locals(), level=0) - except ImportError: - # We can just silently allow import failures to pass here. If we - # got to this point it means that ``import pip._vendor.whatever`` - # failed and so did ``import whatever``. Since we're importing this - # upfront in an attempt to alias imports, not erroring here will - # just mean we get a regular import error whenever pip *actually* - # tries to import one of these modules to use it, which actually - # gives us a better error message than we would have otherwise - # gotten. - pass - else: - sys.modules[vendored_name] = sys.modules[modulename] - base, head = vendored_name.rsplit(".", 1) - setattr(sys.modules[base], head, sys.modules[modulename]) - - -# If we're operating in a debundled setup, then we want to go ahead and trigger -# the aliasing of our vendored libraries as well as looking for wheels to add -# to our sys.path. 
This will cause all of this code to be a no-op typically -# however downstream redistributors can enable it in a consistent way across -# all platforms. -if DEBUNDLED: - # Actually look inside of WHEEL_DIR to find .whl files and add them to the - # front of our sys.path. - sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path - - # Actually alias all of our vendored dependencies. - vendored("cachecontrol") - vendored("certifi") - vendored("colorama") - vendored("distlib") - vendored("distro") - vendored("six") - vendored("six.moves") - vendored("six.moves.urllib") - vendored("six.moves.urllib.parse") - vendored("packaging") - vendored("packaging.version") - vendored("packaging.specifiers") - vendored("pep517") - vendored("pkg_resources") - vendored("platformdirs") - vendored("progress") - vendored("requests") - vendored("requests.exceptions") - vendored("requests.packages") - vendored("requests.packages.urllib3") - vendored("requests.packages.urllib3._collections") - vendored("requests.packages.urllib3.connection") - vendored("requests.packages.urllib3.connectionpool") - vendored("requests.packages.urllib3.contrib") - vendored("requests.packages.urllib3.contrib.ntlmpool") - vendored("requests.packages.urllib3.contrib.pyopenssl") - vendored("requests.packages.urllib3.exceptions") - vendored("requests.packages.urllib3.fields") - vendored("requests.packages.urllib3.filepost") - vendored("requests.packages.urllib3.packages") - vendored("requests.packages.urllib3.packages.ordered_dict") - vendored("requests.packages.urllib3.packages.six") - vendored("requests.packages.urllib3.packages.ssl_match_hostname") - vendored("requests.packages.urllib3.packages.ssl_match_hostname." - "_implementation") - vendored("requests.packages.urllib3.poolmanager") - vendored("requests.packages.urllib3.request") - vendored("requests.packages.urllib3.response") - vendored("requests.packages.urllib3.util") - vendored("requests.packages.urllib3.util.connection") - vendored("requests.packages.urllib3.util.request") - vendored("requests.packages.urllib3.util.response") - vendored("requests.packages.urllib3.util.retry") - vendored("requests.packages.urllib3.util.ssl_") - vendored("requests.packages.urllib3.util.timeout") - vendored("requests.packages.urllib3.util.url") - vendored("resolvelib") - vendored("rich") - vendored("rich.console") - vendored("rich.highlighter") - vendored("rich.logging") - vendored("rich.markup") - vendored("rich.progress") - vendored("rich.segment") - vendored("rich.style") - vendored("rich.text") - vendored("rich.traceback") - vendored("tenacity") - vendored("tomli") - vendored("urllib3") diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/console.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/console.py deleted file mode 100644 index 2ada68e03b3c018e3ddbbf3356a48a1d580aa251..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/console.py +++ /dev/null @@ -1,70 +0,0 @@ -""" - pygments.console - ~~~~~~~~~~~~~~~~ - - Format colored console output. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
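    Example (an illustrative sketch, not part of the original header; the
    import path assumes pip's vendored copy of Pygments)::

        from pip._vendor.pygments.console import ansiformat, colorize

        print(colorize("red", "error: something went wrong"))
        print(ansiformat("*green*", "bold green text"))
        print(ansiformat("_yellow_", "underlined yellow text"))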
-""" - -esc = "\x1b[" - -codes = {} -codes[""] = "" -codes["reset"] = esc + "39;49;00m" - -codes["bold"] = esc + "01m" -codes["faint"] = esc + "02m" -codes["standout"] = esc + "03m" -codes["underline"] = esc + "04m" -codes["blink"] = esc + "05m" -codes["overline"] = esc + "06m" - -dark_colors = ["black", "red", "green", "yellow", "blue", - "magenta", "cyan", "gray"] -light_colors = ["brightblack", "brightred", "brightgreen", "brightyellow", "brightblue", - "brightmagenta", "brightcyan", "white"] - -x = 30 -for d, l in zip(dark_colors, light_colors): - codes[d] = esc + "%im" % x - codes[l] = esc + "%im" % (60 + x) - x += 1 - -del d, l, x - -codes["white"] = codes["bold"] - - -def reset_color(): - return codes["reset"] - - -def colorize(color_key, text): - return codes[color_key] + text + codes["reset"] - - -def ansiformat(attr, text): - """ - Format ``text`` with a color and/or some attributes:: - - color normal color - *color* bold color - _color_ underlined color - +color+ blinking color - """ - result = [] - if attr[:1] == attr[-1:] == '+': - result.append(codes['blink']) - attr = attr[1:-1] - if attr[:1] == attr[-1:] == '*': - result.append(codes['bold']) - attr = attr[1:-1] - if attr[:1] == attr[-1:] == '_': - result.append(codes['underline']) - attr = attr[1:-1] - result.append(codes[attr]) - result.append(text) - result.append(codes['reset']) - return ''.join(result) diff --git a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/model_zoo/dedode_models.py b/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/model_zoo/dedode_models.py deleted file mode 100644 index 8c6d93d4b6d3a7c0daaf767fa53cd021f248dacd..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DeDoDe/DeDoDe/model_zoo/dedode_models.py +++ /dev/null @@ -1,173 +0,0 @@ -import torch -import torch.nn as nn - -from DeDoDe.detectors.dedode_detector import DeDoDeDetector -from DeDoDe.descriptors.dedode_descriptor import DeDoDeDescriptor -from DeDoDe.decoder import ConvRefiner, Decoder -from DeDoDe.encoder import VGG19, VGG - - -def dedode_detector_B(device="cuda", weights=None): - residual = True - hidden_blocks = 5 - amp_dtype = torch.float16 - amp = True - NUM_PROTOTYPES = 1 - conv_refiner = nn.ModuleDict( - { - "8": ConvRefiner( - 512, - 512, - 256 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "4": ConvRefiner( - 256 + 256, - 256, - 128 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "2": ConvRefiner( - 128 + 128, - 64, - 32 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "1": ConvRefiner( - 64 + 32, - 32, - 1 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - } - ) - encoder = VGG19(pretrained=False, amp=amp, amp_dtype=amp_dtype) - decoder = Decoder(conv_refiner) - model = DeDoDeDetector(encoder=encoder, decoder=decoder).to(device) - if weights is not None: - model.load_state_dict(weights) - return model - - -def dedode_detector_L(device="cuda", weights=None): - NUM_PROTOTYPES = 1 - residual = True - hidden_blocks = 8 - amp_dtype = ( - torch.float16 - ) # torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16 - amp = True - conv_refiner = nn.ModuleDict( - { - "8": ConvRefiner( - 512, - 512, - 256 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - 
), - "4": ConvRefiner( - 256 + 256, - 256, - 128 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "2": ConvRefiner( - 128 + 128, - 128, - 64 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "1": ConvRefiner( - 64 + 64, - 64, - 1 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - } - ) - encoder = VGG19(pretrained=False, amp=amp, amp_dtype=amp_dtype) - decoder = Decoder(conv_refiner) - model = DeDoDeDetector(encoder=encoder, decoder=decoder).to(device) - if weights is not None: - model.load_state_dict(weights) - return model - - -def dedode_descriptor_B(device="cuda", weights=None): - NUM_PROTOTYPES = 256 # == descriptor size - residual = True - hidden_blocks = 5 - amp_dtype = ( - torch.float16 - ) # torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16 - amp = True - conv_refiner = nn.ModuleDict( - { - "8": ConvRefiner( - 512, - 512, - 256 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "4": ConvRefiner( - 256 + 256, - 256, - 128 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "2": ConvRefiner( - 128 + 128, - 64, - 32 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - "1": ConvRefiner( - 64 + 32, - 32, - 1 + NUM_PROTOTYPES, - hidden_blocks=hidden_blocks, - residual=residual, - amp=amp, - amp_dtype=amp_dtype, - ), - } - ) - encoder = VGG(size="19", pretrained=False, amp=amp, amp_dtype=amp_dtype) - decoder = Decoder(conv_refiner, num_prototypes=NUM_PROTOTYPES) - model = DeDoDeDescriptor(encoder=encoder, decoder=decoder).to(device) - if weights is not None: - model.load_state_dict(weights) - return model diff --git a/spaces/Revanth200218/Project/README.md b/spaces/Revanth200218/Project/README.md deleted file mode 100644 index b643575f7f1a6e1854282bfe11de6c3d4c7e7d0e..0000000000000000000000000000000000000000 --- a/spaces/Revanth200218/Project/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Project -emoji: 👀 -colorFrom: purple -colorTo: gray -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: artistic-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/atss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/atss.py deleted file mode 100644 index db7139c6b4fcd7e83007cdb785520743ddae7066..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/detectors/atss.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class ATSS(SingleStageDetector): - """Implementation of `ATSS `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(ATSS, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/schedules/schedule_40k.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/schedules/schedule_40k.py deleted file mode 100644 index 
1a03ea075e3cf315a058ef262da9b8374affad20..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/configs/_base_/schedules/schedule_40k.py +++ /dev/null @@ -1,20 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from UniFormer repo: From https://github.com/Sense-X/UniFormer - * Apache-2.0 license -''' -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=40000) -checkpoint_config = dict(by_epoch=False, interval=4000) -evaluation = dict(interval=4000, metric='mIoU') diff --git a/spaces/Rolajim/proyecto/app.py b/spaces/Rolajim/proyecto/app.py deleted file mode 100644 index b1d333121154059a9d0c6ac939eb05f7a9be8277..0000000000000000000000000000000000000000 --- a/spaces/Rolajim/proyecto/app.py +++ /dev/null @@ -1,26 +0,0 @@ - -import gradio as gr -import surprise -from surprise import dump - -file_name = 'modelo_entrenado.sav' -model = dump.load(file_name)[1] - - -def predict_rating(userId, id): - prediction = model.predict(userId, id) - - if prediction.est >= 3.5: - mensaje = "Es hora de verla!!", prediction.est - elif prediction.est >= 2.5 and prediction.est < 3.5: - mensaje = "Puedes verla luego!", prediction.est - else: - mensaje = "No es la película que esta buscando!", prediction.est - - return mensaje - -iface = gr.Interface(fn=predict_rating, - inputs= [gr.inputs.Textbox(lines=1,placeholder="ingrese su número de usuario aquí"), - gr.inputs.Textbox(lines=1,placeholder="ingrese su número de usuario aquí")], - outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/models/alpro_models/alpro_retrieval.py b/spaces/SeViLA/SeViLA/lavis/models/alpro_models/alpro_retrieval.py deleted file mode 100644 index f574ad42bdfd2f13b21dc30430b72f9c278d0ced..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/alpro_models/alpro_retrieval.py +++ /dev/null @@ -1,422 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import datetime -import logging -import time - -import lavis.common.dist_utils as dist_utils -import numpy as np -import torch -import torch.distributed as dist -import torch.nn.functional as F -from lavis.common.config import node_to_dict -from lavis.common.dist_utils import get_rank -from lavis.common.logger import MetricLogger -from lavis.common.registry import registry -from lavis.models.alpro_models import AlproBase -from lavis.models.alpro_models.alpro_outputs import AlproIntermediateOutput, AlproOutput -from lavis.models.base_model import all_gather_with_grad -from lavis.models.med import XBertEncoder -from lavis.models.timesformer.vit import TimeSformer -from torch import nn - - -@registry.register_model("alpro_retrieval") -class AlproRetrieval(AlproBase): - PRETRAINED_MODEL_CONFIG_DICT = { - "msrvtt": "configs/models/alpro_retrieval_msrvtt.yaml", - "didemo": "configs/models/alpro_retrieval_didemo.yaml", - } - - def __init__( - self, - visual_encoder, - text_encoder, - vision_width=768, - text_width=768, - embed_dim=256, - max_txt_len=35, - temp=0.07, - ): - super().__init__() - - self.temp = nn.Parameter(torch.ones([]) * temp) - - self.tokenizer = self.init_tokenizer() - - self.visual_encoder = visual_encoder - self.text_encoder = text_encoder - - vision_width = vision_width - text_width = text_width - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - self.max_txt_len = max_txt_len - - def forward(self, samples): - with torch.no_grad(): - self.temp.clamp_(0.001, 0.5) - - visual_inputs = samples["video"] - caption = samples["text_input"] - - b, t, c, h, w = visual_inputs.shape - - # forward text - text = self.tokenizer( - caption, - padding="max_length", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(self.device) - - text_output = self.text_encoder.forward_text( - text, - token_type_ids=torch.zeros( - text.input_ids.shape, dtype=torch.long, device=self.device - ), - ) - text_embeds = text_output.last_hidden_state - text_feat = F.normalize(self.text_proj(text_embeds[:, 0, :]), dim=-1) - - # forward visual - # timeSformer asks for (b, c, t, h, w) as input. 
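        # (added explanatory note, not in the original source)
        # forward_features returns one embedding per token; the first token
        # (index 0) is taken below as the global video representation,
        # projected by vision_proj and L2-normalized to form the feature used
        # in the video-text contrastive (VTC) loss.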
- video_embeds = self.visual_encoder.forward_features(visual_inputs) - video_feat = F.normalize(self.vision_proj(video_embeds[:, 0, :]), dim=-1) - video_atts = torch.ones(video_embeds.size()[:-1], dtype=torch.long).to( - self.device - ) - - # ========== (in-batch) ITC loss ========== - gathered_video_feats = all_gather_with_grad(video_feat) - gathered_text_feats = all_gather_with_grad(text_feat) - - sim_v2t = video_feat @ gathered_text_feats.t() / self.temp - sim_t2v = text_feat @ gathered_video_feats.t() / self.temp - - sim_targets = torch.zeros_like(sim_v2t) - - local_rank = get_rank() - b_start, b_end = b * local_rank, b * (local_rank + 1) - sim_targets[:, b_start:b_end] = torch.eye(b) - - loss_v2t = -torch.sum(F.log_softmax(sim_v2t, dim=1) * sim_targets, dim=1).mean() - loss_t2v = -torch.sum(F.log_softmax(sim_t2v, dim=1) * sim_targets, dim=1).mean() - - vtc_loss = (loss_v2t + loss_t2v) / 2 - - ( - vtm_loss, - vtm_logits, - vtm_labels, - encoder_output, - encoder_output_neg, - ) = self.compute_vtm( - text_embeds=text_embeds, - text_atts=text.attention_mask, - image_embeds=video_embeds, - image_atts=video_atts, - sim_i2t=sim_v2t.clone(), # for hard mining - sim_t2i=sim_t2v.clone(), # for hard mining - ) - - loss = vtc_loss + vtm_loss - - # return {"loss": loss} - return AlproOutput( - loss=loss, - loss_vtc=vtc_loss, - loss_vtm=vtm_loss, - intermediate_output=AlproIntermediateOutput( - video_embeds=video_embeds, - text_embeds=text_embeds, - encoder_output=encoder_output, - encoder_output_neg=encoder_output_neg, - vtm_logits=vtm_logits, - vtm_labels=vtm_labels, - ), - ) - - def compute_vtm( - self, text_embeds, text_atts, image_embeds, image_atts, sim_i2t, sim_t2i - ): - device = self.device - - # ====== positive pairs ======= - attention_mask = torch.cat([text_atts, image_atts], dim=1) - embedding_output_pos = torch.cat([text_embeds, image_embeds], dim=1) - - encoder_outputs_pos = self.text_encoder( - encoder_embeds=embedding_output_pos, - attention_mask=attention_mask, - return_dict=True, - mode="fusion", - ) - - # ====== negative pairs ======= - bs = text_embeds.shape[0] - - local_rank = get_rank() - b_start, b_end = bs * local_rank, bs * (local_rank + 1) - - with torch.no_grad(): - weights_v2t = sim_i2t[:, b_start:b_end] - weights_t2v = sim_t2i[:, b_start:b_end] - - # never select self as negative - weights_v2t.fill_diagonal_(-np.Inf) - weights_t2v.fill_diagonal_(-np.Inf) - - weights_v2t = F.softmax(weights_v2t, dim=1) - weights_t2v = F.softmax(weights_t2v, dim=1) - - # select a negative image for each text - # FIXME to optimize using indexing operations - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2v[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg, dim=0) - - # select a negative text for each image - text_embeds_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_v2t[b], 1).item() - text_embeds_neg.append(text_embeds[neg_idx]) - text_atts_neg.append(text_atts[neg_idx]) - - text_embeds_neg = torch.stack(text_embeds_neg, dim=0) - text_atts_neg = torch.stack(text_atts_neg, dim=0) - - text_embeds_all = torch.cat([text_embeds, text_embeds_neg], dim=0) - text_atts_all = torch.cat([text_atts, text_atts_neg], dim=0) - - video_embeds_all = torch.cat([image_embeds_neg, image_embeds], dim=0) - video_atts_all = torch.cat([image_atts, image_atts], dim=0) - - attention_mask_all = torch.cat([text_atts_all, video_atts_all], dim=1) - embedding_output_all = 
torch.cat([text_embeds_all, video_embeds_all], dim=1) - - # forward negative pairs via cross encoder - encoder_outputs_neg = self.text_encoder( - encoder_embeds=embedding_output_all, - attention_mask=attention_mask_all, - return_dict=True, - mode="fusion", - ) - - vl_embeddings = torch.cat( - [ - encoder_outputs_pos.last_hidden_state[:, 0, :], - encoder_outputs_neg.last_hidden_state[:, 0, :], - ], - dim=0, - ) - vtm_logits = self.itm_head(vl_embeddings) - - vtm_labels = torch.cat( - [torch.ones(bs, dtype=torch.long), torch.zeros(2 * bs, dtype=torch.long)], - dim=0, - ).to(device) - vtm_loss = F.cross_entropy(vtm_logits, vtm_labels) - - return ( - vtm_loss, - vtm_logits, - vtm_labels, - encoder_outputs_pos, - encoder_outputs_neg, - ) - - def compute_sim_matrix(self, data_loader, task_cfg): - k_test = task_cfg.get("k_test") - - metric_logger = MetricLogger(delimiter=" ") - header = "Evaluation:" - - logging.info("Computing features for evaluation...") - start_time = time.time() - - texts = data_loader.dataset.text - num_text = len(texts) - text_bs = 256 - text_ids = [] - text_embeds = [] - text_feats = [] - text_atts = [] - for i in range(0, num_text, text_bs): - text = texts[i : min(num_text, i + text_bs)] - text_input = self.tokenizer( - text, - padding="max_length", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(self.device) - text_output = self.text_encoder.forward_text( - text_input, - token_type_ids=torch.zeros( - text_input.input_ids.shape, dtype=torch.long, device=self.device - ), - ) - text_feats.append(text_output.last_hidden_state.cpu()) - text_embed = F.normalize( - self.text_proj(text_output.last_hidden_state[:, 0, :]) - ) - text_embeds.append(text_embed) - text_ids.append(text_input.input_ids) - text_atts.append(text_input.attention_mask) - - text_embeds = torch.cat(text_embeds, dim=0) - text_ids = torch.cat(text_ids, dim=0) - text_atts = torch.cat(text_atts, dim=0) - text_feats = torch.cat(text_feats, dim=0) - - video_feats = [] - video_embeds = [] - for samples in data_loader: - video = samples["video"] - - video = video.to(self.device) - video_feat = self.visual_encoder.forward_features(video) - video_embed = self.vision_proj(video_feat[:, 0, :]) - video_embed = F.normalize(video_embed, dim=-1) - - video_feats.append(video_feat.cpu()) - video_embeds.append(video_embed) - - video_feats = torch.cat(video_feats, dim=0) - video_embeds = torch.cat(video_embeds, dim=0) - - sims_matrix = video_embeds @ text_embeds.t() - score_matrix_v2t = torch.full( - (len(data_loader.dataset.image), len(texts)), -100.0 - ).to(self.device) - - num_tasks = dist_utils.get_world_size() - rank = dist_utils.get_rank() - step = sims_matrix.size(0) // num_tasks + 1 - start = rank * step - end = min(sims_matrix.size(0), start + step) - - # video-to-text - for i, sims in enumerate( - metric_logger.log_every(sims_matrix[start:end], 50, header) - ): - topk_sim, topk_idx = sims.topk(k=k_test, dim=0) - - video_feats_repeat = ( - video_feats[start + i].repeat(k_test, 1, 1).to(self.device) - ) - video_atts_repeat = torch.ones( - video_feats_repeat.size()[:-1], dtype=torch.long - ).to(self.device) - - attention_mask = torch.cat([text_atts[topk_idx], video_atts_repeat], dim=1) - embedding_output = torch.cat( - [text_feats[topk_idx].to(self.device), video_feats_repeat], dim=1 - ) - - output = self.text_encoder( - encoder_embeds=embedding_output, - attention_mask=attention_mask, - return_dict=True, - mode="fusion", - ) - - score = self.itm_head(output.last_hidden_state[:, 0, :])[:, 
1] - score_matrix_v2t[start + i, topk_idx] = score + topk_sim - - # text-to-video - sims_matrix = sims_matrix.t() - score_matrix_t2v = torch.full( - (len(texts), len(data_loader.dataset.image)), -100.0 - ).to(self.device) - - step = sims_matrix.size(0) // num_tasks + 1 - start = rank * step - end = min(sims_matrix.size(0), start + step) - - for i, sims in enumerate( - metric_logger.log_every(sims_matrix[start:end], 50, header) - ): - - topk_sim, topk_idx = sims.topk(k=k_test, dim=0) - - text_feats_repeat = ( - text_feats[start + i].repeat(k_test, 1, 1).to(self.device) - ) - text_atts_repeat = text_atts[start + i].repeat(k_test, 1).to(self.device) - - video_atts = torch.ones( - video_feats[topk_idx].size()[:-1], dtype=torch.long - ).to(self.device) - - embedding_output = torch.cat( - [text_feats_repeat, video_feats[topk_idx].to(self.device)], dim=1 - ) - attention_mask = torch.cat([text_atts_repeat, video_atts], dim=1) - - output = self.text_encoder( - encoder_embeds=embedding_output, - attention_mask=attention_mask, - return_dict=True, - mode="fusion", - ) - - score = self.itm_head(output.last_hidden_state[:, 0, :])[:, 1] - score_matrix_t2v[start + i, topk_idx] = score + topk_sim - - if dist_utils.is_dist_avail_and_initialized(): - dist.barrier() - torch.distributed.all_reduce( - score_matrix_v2t, op=torch.distributed.ReduceOp.SUM - ) - torch.distributed.all_reduce( - score_matrix_t2v, op=torch.distributed.ReduceOp.SUM - ) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - logging.info("Evaluation time {}".format(total_time_str)) - - return score_matrix_v2t.cpu().numpy(), score_matrix_t2v.cpu().numpy() - - @classmethod - def from_config(cls, cfg): - # vision encoder - visual_encoder_config = node_to_dict(cfg.timesformer) - visual_encoder = TimeSformer(**visual_encoder_config) - - # text encoder - text_encoder = XBertEncoder.from_config(cfg) - - max_txt_len = cfg.get("max_txt_len", 35) - - model = cls( - visual_encoder=visual_encoder, - text_encoder=text_encoder, - max_txt_len=max_txt_len, - ) - - num_patches = ( - visual_encoder_config["image_size"] // visual_encoder_config["patch_size"] - ) ** 2 - num_frames = visual_encoder_config["n_frms"] - - model.load_checkpoint_from_config( - cfg, num_frames=num_frames, num_patches=num_patches - ) - - return model diff --git a/spaces/SeViLA/SeViLA/lavis/tasks/vqa.py b/spaces/SeViLA/SeViLA/lavis/tasks/vqa.py deleted file mode 100644 index 4a717a8f5ae3ab848f17e69860f8485b28b06382..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/tasks/vqa.py +++ /dev/null @@ -1,571 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import logging -import json -import os -import torch -import numpy as np -import random - -import lavis.common.dist_utils as dist_utils -from lavis.common.registry import registry -from lavis.common.vqa_tools.vqa import VQA -from lavis.common.vqa_tools.vqa_eval import VQAEval -from lavis.tasks.base_task import BaseTask -from lavis.common.dist_utils import main_process - -@registry.register_task("vqa") -class VQATask(BaseTask): - def __init__( - self, - num_beams, - max_len, - min_len, - evaluate, - num_ans_candidates, - inference_method="rank", - prompt="", - ): - super().__init__() - - self.num_beams = num_beams - self.max_len = max_len - self.min_len = min_len - - self.evaluate = evaluate - self.inference_method = inference_method - self.num_ans_candidates = num_ans_candidates - self.prompt = prompt - - self.answer_list = None - - self.ques_files = dict() - self.anno_files = dict() - - @classmethod - def setup_task(cls, cfg): - run_cfg = cfg.run_cfg - - num_beams = run_cfg.get("num_beams", 3) - max_len = run_cfg.get("max_len", 10) - min_len = run_cfg.get("min_len", 1) - - evaluate = run_cfg.get("evaluate", False) - - inference_method = run_cfg.get("inference_method", "rank") - num_ans_candidates = run_cfg.get("num_ans_candidates", 128) - prompt = run_cfg.get("prompt", "") - - return cls( - num_beams=num_beams, - max_len=max_len, - min_len=min_len, - evaluate=evaluate, - num_ans_candidates=num_ans_candidates, - inference_method=inference_method, - prompt=prompt, - ) - - def build_datasets(self, cfg): - datasets = super().build_datasets(cfg) - - # get question file, annotation file and anwser list in COCO format - for dataset in datasets.values(): - for split in dataset: - if ( - hasattr(dataset[split], "coco_fmt_qust_file") - and dataset[split].coco_fmt_qust_file is not None - ): - self.ques_files[split] = dataset[split].coco_fmt_qust_file - self.anno_files[split] = dataset[split].coco_fmt_anno_file - - try: - self.answer_list = dataset[split].answer_list - except AttributeError: - # if answer_list is not provided, then set it to None - pass - - if len(self.ques_files) > 0: - assert len(self.ques_files) == len( - self.anno_files - ), "Only support one split for evaluation." - - return datasets - - def valid_step(self, model, samples): - answers = model.predict_answers( - samples=samples, - answer_list=self.answer_list, - inference_method=self.inference_method, - num_beams=self.num_beams, - max_len=self.max_len, - min_len=self.min_len, - num_ans_candidates=self.num_ans_candidates, - prompt=self.prompt, - ) - pred_qa_pairs = [] - - question_id = samples["question_id"] - for answer, ques_id in zip(answers, question_id): - ques_id = int(ques_id.item()) - pred_qa_pairs.append({"question_id": ques_id, "answer": answer}) - - return pred_qa_pairs - - def after_evaluation(self, val_result, split_name, **kwargs): - result_file = self.save_result( - val_result, - result_dir=registry.get_path("result_dir"), - filename=f"{split_name}_vqa_result", - remove_duplicate="question_id", - ) - - metrics = self._report_metrics(result_file=result_file, split=split_name) - - return metrics - - @dist_utils.main_process - def _report_metrics(self, result_file, split): - """ - Use official VQA evaluation script to report metrics. 
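        Loads the COCO-format question and annotation files registered for
        this split, scores ``result_file`` with the official VQAEval tool,
        logs the overall and per-answer-type accuracies, stores the overall
        accuracy under ``agg_metrics``, and appends the metrics to
        ``evaluate.txt`` in the output directory.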
- """ - metrics = {} - - if split in self.ques_files and split in self.anno_files: - vqa = VQA(self.anno_files[split], self.ques_files[split]) - vqa_result = vqa.loadRes( - resFile=result_file, quesFile=self.ques_files[split] - ) - - # create vqaEval object by taking vqa and vqaRes - # n is precision of accuracy (number of places after decimal), default is 2 - vqa_scorer = VQAEval(vqa, vqa_result, n=2) - logging.info("Start VQA evaluation.") - vqa_scorer.evaluate() - - # print accuracies - overall_acc = vqa_scorer.accuracy["overall"] - metrics["agg_metrics"] = overall_acc - - logging.info("Overall Accuracy is: %.02f\n" % overall_acc) - logging.info("Per Answer Type Accuracy is the following:") - - for ans_type in vqa_scorer.accuracy["perAnswerType"]: - logging.info( - "%s : %.02f" - % (ans_type, vqa_scorer.accuracy["perAnswerType"][ans_type]) - ) - metrics[ans_type] = vqa_scorer.accuracy["perAnswerType"][ans_type] - - with open( - os.path.join(registry.get_path("output_dir"), "evaluate.txt"), "a" - ) as f: - f.write(json.dumps(metrics) + "\n") - - return metrics - -@registry.register_task("gqa") -class GQATask(VQATask): - def valid_step(self, model, samples): - answers = model.predict_answers( - samples=samples, - answer_list=self.answer_list, - inference_method=self.inference_method, - num_beams=self.num_beams, - max_len=self.max_len, - min_len=self.min_len, - num_ans_candidates=self.num_ans_candidates, - prompt=self.prompt, - ) - pred_qa_pairs = [] - - question_id = samples["question_id"] - gt_answers = samples["answer"] - - for answer, ques_id, gt_answer in zip(answers, question_id, gt_answers): - ques_id = int(ques_id.item()) - pred_qa_pairs.append({"question_id": ques_id, "pred_ans": answer, "gt_ans": gt_answer}) - - return pred_qa_pairs - - @dist_utils.main_process - def _report_metrics(self, result_file, split): - """ - TODO: add other evaluation metrics for GQA - """ - - results = json.load(open(result_file, "r")) - acc = [] - vqa_tool = VQAEval() - - for res in results: - if res["gt_ans"] is None: - # prepare test results for leaderboard evaluation - self._save_result_leaderboard(results) - return - - gt_ans = res["gt_ans"] - pred = res["pred_ans"] - - if self.inference_method == "generate": - pred = vqa_tool.processPunctuation(pred) - pred = vqa_tool.processDigitArticle(pred) - - vqa_acc = 1 if pred == gt_ans else 0 - - acc.append(vqa_acc) - - accuracy = sum(acc) / len(acc) * 100 - metrics = {"agg_metrics": accuracy, "acc": accuracy} - - with open( - os.path.join(registry.get_path("output_dir"), "evaluate.txt"), "a" - ) as f: - f.write(json.dumps(metrics) + "\n") - - logging.info(metrics) - - return metrics - - -@registry.register_task("aok_vqa") -class AOKVQATask(VQATask): - def valid_step(self, model, samples): - answers = model.predict_answers( - samples=samples, - answer_list=self.answer_list, - inference_method=self.inference_method, - num_beams=self.num_beams, - max_len=self.max_len, - min_len=self.min_len, - num_ans_candidates=self.num_ans_candidates, - ) - - pred_qa_pairs = [] - - question_id = samples["question_id"] - gt_answers = samples["direct_answers"] - - for pred_answer, ques_id, gt_answer in zip(answers, question_id, gt_answers): - pred_qa_pairs.append( - {"question_id": ques_id, "pred_ans": pred_answer, "gt_ans": gt_answer} - ) - - return pred_qa_pairs - - @dist_utils.main_process - def _report_metrics(self, result_file, split): - """ - Implementing accuracy computation for AOKVQA, see - 
https://github.com/allenai/aokvqa/blob/main/evaluation/eval_predictions.py#L45 for details. - """ - # TODO add evaluation for multi-choice - - results = json.load(open(result_file, "r")) - acc = [] - - for res in results: - if res["gt_ans"] is None: - # prepare test results for leaderboard evaluation - self._save_result_leaderboard(results) - return - - pred = res["pred_ans"] - gt_ans = res["gt_ans"] - - num_match = sum([pred == gt for gt in gt_ans]) - vqa_acc = min(1.0, num_match / 3.0) - - acc.append(vqa_acc) - - accuracy = sum(acc) / len(acc) * 100 - metrics = {"agg_metrics": accuracy, "acc": accuracy} - - with open( - os.path.join(registry.get_path("output_dir"), "evaluate.txt"), "a" - ) as f: - f.write(json.dumps(metrics) + "\n") - - logging.info(metrics) - - return metrics - - @dist_utils.main_process - def _save_result_leaderboard(self, results): - """ - Saving the results in the format required for leaderboard evaluation. - - [TODO] add support for multi-choice. - """ - result_leaderboard = dict() - for res in results: - result_leaderboard[res["question_id"]] = { - "direct_answer": res["pred_ans"], - "multiple_choice": "", - } - - result_file = registry.get_path("result_dir") + "_leaderboard.json" - - with open(result_file, "w") as f: - json.dump(result_leaderboard, f) - - logging.info(f"Saved results for leaderboard evaluation at {result_file}") - -@registry.register_task("frameqa") -class FrameQA(BaseTask): - def __init__(self): - super().__init__() - self.ANS_MAPPING = {'A':0, 'B':1, 'C':2, 'D':3, 'E':4} - - def valid_step(self, model, samples): - results = [] - - outputs = model.generate(samples) - - answer = outputs["answer"] - qid = outputs["qid"] - output_text = outputs['output_text'] - temp_idx = outputs['temp_idx'] - assert len(qid)==len(temp_idx) - assert len(qid)==len(output_text) - assert len(qid)==len(answer) - - for a, q, o, i in zip(answer, qid, output_text, temp_idx): - # l = l[self.ANS_MAPPING[a[-1]]] - results.append( - { - "qid": q, - 'idx': i, - "prediction": o, - "target": self.ANS_MAPPING[a[-1]], - } - ) - - return results - - def after_evaluation(self, val_result, split_name, epoch, **kwargs): - eval_result_file = self.save_result( - result=val_result, - result_dir=registry.get_path("result_dir"), - filename="{}_epoch{}".format(split_name, epoch) - ) - - metrics = self._report_metrics( - eval_result_file=eval_result_file, split_name=split_name - ) - - return metrics - - @main_process - def _report_metrics(self, eval_result_file, split_name): - results = json.load(open(eval_result_file)) - total_num = len(results) - acc = 0 - group_by_qid = {} - qtype_correct_dict = {} - qtype_total_dict = {} - for r in results: - - if r['qid'] not in group_by_qid: - group_by_qid[r['qid']] = {} - group_by_qid[r['qid']]['idx'] = [r['idx']] - group_by_qid[r['qid']]['pred'] = [r['prediction']] - group_by_qid[r['qid']]['target'] = r['target'] - else: - group_by_qid[r['qid']]['idx'].append(r['idx']) - group_by_qid[r['qid']]['pred'].append(r['prediction']) - - qtype = r['qid'][0] - if qtype not in qtype_total_dict: - qtype_total_dict[qtype] = 1 - else: - qtype_total_dict[qtype] += 1 - - if r['prediction'] == r['target']: - acc += 1 - if qtype not in qtype_correct_dict: - qtype_correct_dict[qtype] = 1 - else: - qtype_correct_dict[qtype] += 1 - - oracle = 0 - num = len(group_by_qid.keys()) - for q in group_by_qid: - if group_by_qid[q]['target'] in group_by_qid[q]['pred']: - oracle += 1 - - metrics = {"agg_metrics": oracle/num , 'num': num, 'avg_acc': acc/total_num * 100, 
'total':total_num} - - for qtype in qtype_total_dict: - metrics[qtype] = qtype_correct_dict[qtype] / qtype_total_dict[qtype] * 100 - - log_stats = {split_name: {k: v for k, v in metrics.items()}} - - with open( - os.path.join(registry.get_path("output_dir"), "evaluate.txt"), "a" - ) as f: - f.write(json.dumps(log_stats) + "\n") - - logging.info(metrics) - return metrics - - -@registry.register_task("videoqa") -class VideoQA(BaseTask): - def __init__(self): - super().__init__() - self.ANS_MAPPING = {'A':0, 'B':1, 'C':2, 'D':3, 'E':4} - - def valid_step(self, model, samples): - results = [] - - outputs = model.generate(samples) - - answer = outputs["answer"] - qid = outputs["qid"] - output_text = outputs['output_text'] - if 'frame_idx' in outputs: - frame_idx = outputs['frame_idx'] - else: - frame_idx = [0 for i in range(len(qid))] - # print(qid) - # print(len(output_text), output_text) - assert len(qid)==len(output_text) - assert len(qid)==len(answer) - - for a, q, o, f in zip(answer, qid, output_text, frame_idx): - # l = l[self.ANS_MAPPING[a[-1]]] - results.append( - { - "qid": q, - "prediction": o, - "target": self.ANS_MAPPING[a[-1]], - "frame_idx": f - } - ) - - return results - - def after_evaluation(self, val_result, split_name, epoch, **kwargs): - eval_result_file = self.save_result( - result=val_result, - result_dir=registry.get_path("result_dir"), - filename="{}_epoch{}".format(split_name, epoch) - ) - - metrics = self._report_metrics( - eval_result_file=eval_result_file, split_name=split_name - ) - - return metrics - - @main_process - def _report_metrics(self, eval_result_file, split_name): - results = json.load(open(eval_result_file)) - total_num = len(results) - acc = 0 - qtype_correct_dict = {} - qtype_total_dict = {} - for r in results: - qtype = r['qid'].split('_')[0] - if qtype not in qtype_total_dict: - qtype_total_dict[qtype] = 1 - else: - qtype_total_dict[qtype] += 1 - - if r['prediction'] == r['target']: - acc += 1 - if qtype not in qtype_correct_dict: - qtype_correct_dict[qtype] = 1 - else: - qtype_correct_dict[qtype] += 1 - - metrics = {"agg_metrics": acc/total_num , 'total':total_num} - - for qtype in qtype_total_dict: - metrics[qtype] = qtype_correct_dict[qtype] / qtype_total_dict[qtype] * 100 - - # for STAR - if ('Interaction' in metrics) and ('Sequence' in metrics) and ('Prediction' in metrics) and ('Feasibility' in metrics): - metrics["agg_metrics"] = (metrics['Interaction'] + metrics['Sequence'] + metrics['Prediction'] + metrics['Feasibility']) / 4 - - log_stats = {split_name: {k: v for k, v in metrics.items()}} - - with open( - os.path.join(registry.get_path("output_dir"), "evaluate.txt"), "a" - ) as f: - f.write(json.dumps(log_stats) + "\n") - - logging.info(metrics) - return metrics - - -@registry.register_task("moment_retrieval") -class MR(BaseTask): - def __init__(self): - super().__init__() - self.ANS_MAPPING = {'no': 0, 'yes': 1} - - def valid_step(self, model, samples): - results = [] - - outputs = model.generate(samples) - answer = outputs['answer'] - qid = outputs['qid'] - score = outputs['yes_score'] - pred = outputs['pred_ans'] - assert len(qid)==len(answer) - assert len(qid)==len(score) - assert len(qid)==len(pred) - - i = 0 - for a, q, s, p in zip(answer, qid, score, pred): - # l = l[self.ANS_MAPPING[a[-1]]] - results.append( - { - "qid": q + '_' + str(i), - "prediction": p, - "target": self.ANS_MAPPING[a], - 'score': s - } - ) - i += 1 - - return results - - def after_evaluation(self, val_result, split_name, epoch, **kwargs): - eval_result_file = 
self.save_result( - result=val_result, - result_dir=registry.get_path("result_dir"), - filename="{}_epoch{}".format(split_name, epoch) - ) - - metrics = self._report_metrics( - eval_result_file=eval_result_file, split_name=split_name - ) - - return metrics - - @main_process - def _report_metrics(self, eval_result_file, split_name): - results = json.load(open(eval_result_file)) - total_num = len(results) - acc = 0 - for r in results: - if r['prediction'] == r['target']: - acc += 1 - metrics = {"agg_metrics": acc / total_num, 'total': total_num} - log_stats = {split_name: {k: v for k, v in metrics.items()}} - - with open( - os.path.join(registry.get_path("output_dir"), "evaluate.txt"), "a" - ) as f: - f.write(json.dumps(log_stats) + "\n") - - logging.info(metrics) - return metrics \ No newline at end of file diff --git a/spaces/Shue/DIGIMAP-Group4-Animefy/utils.py b/spaces/Shue/DIGIMAP-Group4-Animefy/utils.py deleted file mode 100644 index d0a1a55adc7bafa8259c8d136e6cc9cc4399ebc7..0000000000000000000000000000000000000000 --- a/spaces/Shue/DIGIMAP-Group4-Animefy/utils.py +++ /dev/null @@ -1,47 +0,0 @@ -import tensorflow.compat.v1 as tf -from adjust_brightness import adjust_brightness_from_src_to_dst, read_img -import cv2 -import numpy as np -from PIL import Image - -# def load_input_image(image_path, size=[256,256]): -# img = cv2.imread(image_path).astype(np.float32) -# img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) -# img = preprocessing(img,size) -# img = np.expand_dims(img, axis=0) -# return img - -def load_input_image(image_file_buffer, size=[256, 256]): - img = Image.open(image_file_buffer).convert('RGB') - img = np.array(img).astype(np.float32) - img = preprocessing(img, size) - img = np.expand_dims(img, axis=0) - return img - -def preprocessing(img, size): - h, w = img.shape[:2] - if h <= size[0]: - h = size[0] - else: - x = h % 32 - h = h - x - - if w < size[1]: - w = size[1] - else: - y = w % 32 - w = w - y - # the cv2 resize func : dsize format is (W ,H) - img = cv2.resize(img, (w, h)) - return img/127.5 - 1.0 - -def inverse_transform(images): - images = (images + 1.) / 2 * 255 - # The calculation of floating-point numbers is inaccurate, - # and the range of pixel values must be limited to the boundary, - # otherwise, image distortion or artifacts will appear during display. 
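    # Added note (illustrative, not in the original source): the
    # (images + 1.) / 2 * 255 mapping above sends
    #   -1.0 -> 0.0,  0.0 -> 127.5,  1.0 -> 255.0
    # and the clip below guards values that drift slightly outside [-1, 1]
    # from floating-point error, which would otherwise wrap when cast to uint8.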
- images = np.clip(images, 0, 255) - return images.astype(np.uint8) - -# def imsave(images, path): -# return cv2.imwrite(path, cv2.cvtColor(images, cv2.COLOR_BGR2RGB)) \ No newline at end of file diff --git a/spaces/SiddharthK/dslim-bert-large-NER/app.py b/spaces/SiddharthK/dslim-bert-large-NER/app.py deleted file mode 100644 index 110a77070bb2cc8aaeb40e83df40687ffeb69774..0000000000000000000000000000000000000000 --- a/spaces/SiddharthK/dslim-bert-large-NER/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/dslim/bert-large-NER").launch() \ No newline at end of file diff --git a/spaces/SoUmNerd/Phind-Phind-CodeLlama-34B-Python-v1/app.py b/spaces/SoUmNerd/Phind-Phind-CodeLlama-34B-Python-v1/app.py deleted file mode 100644 index 76d5701631a6f380904703b1ff00dbb4945d3867..0000000000000000000000000000000000000000 --- a/spaces/SoUmNerd/Phind-Phind-CodeLlama-34B-Python-v1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Phind/Phind-CodeLlama-34B-Python-v1").launch() \ No newline at end of file diff --git a/spaces/SpacesExamples/nerfstudio/README.md b/spaces/SpacesExamples/nerfstudio/README.md deleted file mode 100644 index b1349aca19e637ea7ed4949748c88b58834158c8..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/nerfstudio/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Nerfstudio -emoji: 🎮 🧊 -colorFrom: red -colorTo: green -sdk: docker -pinned: false -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/ipstruct.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/ipstruct.py deleted file mode 100644 index ed112101a36bab4fa112ff8522f37e7c64105611..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/ipstruct.py +++ /dev/null @@ -1,379 +0,0 @@ -# encoding: utf-8 -"""A dict subclass that supports attribute style access. - -Authors: - -* Fernando Perez (original) -* Brian Granger (refactoring to a dict subclass) -""" - -#----------------------------------------------------------------------------- -# Copyright (C) 2008-2011 The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -__all__ = ['Struct'] - -#----------------------------------------------------------------------------- -# Code -#----------------------------------------------------------------------------- - - -class Struct(dict): - """A dict subclass with attribute style access. - - This dict subclass has a a few extra features: - - * Attribute style access. - * Protection of class members (like keys, items) when using attribute - style access. - * The ability to restrict assignment to only existing keys. - * Intelligent merging. - * Overloaded operators. - """ - _allownew = True - def __init__(self, *args, **kw): - """Initialize with a dictionary, another Struct, or data. - - Parameters - ---------- - *args : dict, Struct - Initialize with one dict or Struct - **kw : dict - Initialize with key, value pairs. 
- - Examples - -------- - >>> s = Struct(a=10,b=30) - >>> s.a - 10 - >>> s.b - 30 - >>> s2 = Struct(s,c=30) - >>> sorted(s2.keys()) - ['a', 'b', 'c'] - """ - object.__setattr__(self, '_allownew', True) - dict.__init__(self, *args, **kw) - - def __setitem__(self, key, value): - """Set an item with check for allownew. - - Examples - -------- - >>> s = Struct() - >>> s['a'] = 10 - >>> s.allow_new_attr(False) - >>> s['a'] = 10 - >>> s['a'] - 10 - >>> try: - ... s['b'] = 20 - ... except KeyError: - ... print('this is not allowed') - ... - this is not allowed - """ - if not self._allownew and key not in self: - raise KeyError( - "can't create new attribute %s when allow_new_attr(False)" % key) - dict.__setitem__(self, key, value) - - def __setattr__(self, key, value): - """Set an attr with protection of class members. - - This calls :meth:`self.__setitem__` but convert :exc:`KeyError` to - :exc:`AttributeError`. - - Examples - -------- - >>> s = Struct() - >>> s.a = 10 - >>> s.a - 10 - >>> try: - ... s.get = 10 - ... except AttributeError: - ... print("you can't set a class member") - ... - you can't set a class member - """ - # If key is an str it might be a class member or instance var - if isinstance(key, str): - # I can't simply call hasattr here because it calls getattr, which - # calls self.__getattr__, which returns True for keys in - # self._data. But I only want keys in the class and in - # self.__dict__ - if key in self.__dict__ or hasattr(Struct, key): - raise AttributeError( - 'attr %s is a protected member of class Struct.' % key - ) - try: - self.__setitem__(key, value) - except KeyError as e: - raise AttributeError(e) from e - - def __getattr__(self, key): - """Get an attr by calling :meth:`dict.__getitem__`. - - Like :meth:`__setattr__`, this method converts :exc:`KeyError` to - :exc:`AttributeError`. - - Examples - -------- - >>> s = Struct(a=10) - >>> s.a - 10 - >>> type(s.get) - <...method'> - >>> try: - ... s.b - ... except AttributeError: - ... print("I don't have that key") - ... - I don't have that key - """ - try: - result = self[key] - except KeyError as e: - raise AttributeError(key) from e - else: - return result - - def __iadd__(self, other): - """s += s2 is a shorthand for s.merge(s2). - - Examples - -------- - >>> s = Struct(a=10,b=30) - >>> s2 = Struct(a=20,c=40) - >>> s += s2 - >>> sorted(s.keys()) - ['a', 'b', 'c'] - """ - self.merge(other) - return self - - def __add__(self,other): - """s + s2 -> New Struct made from s.merge(s2). - - Examples - -------- - >>> s1 = Struct(a=10,b=30) - >>> s2 = Struct(a=20,c=40) - >>> s = s1 + s2 - >>> sorted(s.keys()) - ['a', 'b', 'c'] - """ - sout = self.copy() - sout.merge(other) - return sout - - def __sub__(self,other): - """s1 - s2 -> remove keys in s2 from s1. - - Examples - -------- - >>> s1 = Struct(a=10,b=30) - >>> s2 = Struct(a=40) - >>> s = s1 - s2 - >>> s - {'b': 30} - """ - sout = self.copy() - sout -= other - return sout - - def __isub__(self,other): - """Inplace remove keys from self that are in other. - - Examples - -------- - >>> s1 = Struct(a=10,b=30) - >>> s2 = Struct(a=40) - >>> s1 -= s2 - >>> s1 - {'b': 30} - """ - for k in other.keys(): - if k in self: - del self[k] - return self - - def __dict_invert(self, data): - """Helper function for merge. - - Takes a dictionary whose values are lists and returns a dict with - the elements of each list as keys and the original keys as values. 
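        Illustrative sketch (added, not in the original docstring): string
        values are split on whitespace first, so::

            {'update': ['a', 'b'], 'add': 'c d'}
                ->  {'a': 'update', 'b': 'update', 'c': 'add', 'd': 'add'}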
- """ - outdict = {} - for k,lst in data.items(): - if isinstance(lst, str): - lst = lst.split() - for entry in lst: - outdict[entry] = k - return outdict - - def dict(self): - return self - - def copy(self): - """Return a copy as a Struct. - - Examples - -------- - >>> s = Struct(a=10,b=30) - >>> s2 = s.copy() - >>> type(s2) is Struct - True - """ - return Struct(dict.copy(self)) - - def hasattr(self, key): - """hasattr function available as a method. - - Implemented like has_key. - - Examples - -------- - >>> s = Struct(a=10) - >>> s.hasattr('a') - True - >>> s.hasattr('b') - False - >>> s.hasattr('get') - False - """ - return key in self - - def allow_new_attr(self, allow = True): - """Set whether new attributes can be created in this Struct. - - This can be used to catch typos by verifying that the attribute user - tries to change already exists in this Struct. - """ - object.__setattr__(self, '_allownew', allow) - - def merge(self, __loc_data__=None, __conflict_solve=None, **kw): - """Merge two Structs with customizable conflict resolution. - - This is similar to :meth:`update`, but much more flexible. First, a - dict is made from data+key=value pairs. When merging this dict with - the Struct S, the optional dictionary 'conflict' is used to decide - what to do. - - If conflict is not given, the default behavior is to preserve any keys - with their current value (the opposite of the :meth:`update` method's - behavior). - - Parameters - ---------- - __loc_data__ : dict, Struct - The data to merge into self - __conflict_solve : dict - The conflict policy dict. The keys are binary functions used to - resolve the conflict and the values are lists of strings naming - the keys the conflict resolution function applies to. Instead of - a list of strings a space separated string can be used, like - 'a b c'. - **kw : dict - Additional key, value pairs to merge in - - Notes - ----- - The `__conflict_solve` dict is a dictionary of binary functions which will be used to - solve key conflicts. Here is an example:: - - __conflict_solve = dict( - func1=['a','b','c'], - func2=['d','e'] - ) - - In this case, the function :func:`func1` will be used to resolve - keys 'a', 'b' and 'c' and the function :func:`func2` will be used for - keys 'd' and 'e'. This could also be written as:: - - __conflict_solve = dict(func1='a b c',func2='d e') - - These functions will be called for each key they apply to with the - form:: - - func1(self['a'], other['a']) - - The return value is used as the final merged value. - - As a convenience, merge() provides five (the most commonly needed) - pre-defined policies: preserve, update, add, add_flip and add_s. The - easiest explanation is their implementation:: - - preserve = lambda old,new: old - update = lambda old,new: new - add = lambda old,new: old + new - add_flip = lambda old,new: new + old # note change of order! - add_s = lambda old,new: old + ' ' + new # only for str! - - You can use those four words (as strings) as keys instead - of defining them as functions, and the merge method will substitute - the appropriate functions for you. - - For more complicated conflict resolution policies, you still need to - construct your own functions. 
- - Examples - -------- - This show the default policy: - - >>> s = Struct(a=10,b=30) - >>> s2 = Struct(a=20,c=40) - >>> s.merge(s2) - >>> sorted(s.items()) - [('a', 10), ('b', 30), ('c', 40)] - - Now, show how to specify a conflict dict: - - >>> s = Struct(a=10,b=30) - >>> s2 = Struct(a=20,b=40) - >>> conflict = {'update':'a','add':'b'} - >>> s.merge(s2,conflict) - >>> sorted(s.items()) - [('a', 20), ('b', 70)] - """ - - data_dict = dict(__loc_data__,**kw) - - # policies for conflict resolution: two argument functions which return - # the value that will go in the new struct - preserve = lambda old,new: old - update = lambda old,new: new - add = lambda old,new: old + new - add_flip = lambda old,new: new + old # note change of order! - add_s = lambda old,new: old + ' ' + new - - # default policy is to keep current keys when there's a conflict - conflict_solve = dict.fromkeys(self, preserve) - - # the conflict_solve dictionary is given by the user 'inverted': we - # need a name-function mapping, it comes as a function -> names - # dict. Make a local copy (b/c we'll make changes), replace user - # strings for the three builtin policies and invert it. - if __conflict_solve: - inv_conflict_solve_user = __conflict_solve.copy() - for name, func in [('preserve',preserve), ('update',update), - ('add',add), ('add_flip',add_flip), - ('add_s',add_s)]: - if name in inv_conflict_solve_user.keys(): - inv_conflict_solve_user[func] = inv_conflict_solve_user[name] - del inv_conflict_solve_user[name] - conflict_solve.update(self.__dict_invert(inv_conflict_solve_user)) - for key in data_dict: - if key not in self: - self[key] = data_dict[key] - else: - self[key] = conflict_solve[key](self[key],data_dict[key]) - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FitsStubImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FitsStubImagePlugin.py deleted file mode 100644 index 50948ec423ac595025289dd44af9b92ba2744167..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/FitsStubImagePlugin.py +++ /dev/null @@ -1,76 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# FITS stub adapter -# -# Copyright (c) 1998-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import FitsImagePlugin, Image, ImageFile -from ._deprecate import deprecate - -_handler = None - - -def register_handler(handler): - """ - Install application-specific FITS image handler. - - :param handler: Handler object. 
- """ - global _handler - _handler = handler - - deprecate( - "FitsStubImagePlugin", - 10, - action="FITS images can now be read without " - "a handler through FitsImagePlugin instead", - ) - - # Override FitsImagePlugin with this handler - # for backwards compatibility - try: - Image.ID.remove(FITSStubImageFile.format) - except ValueError: - pass - - Image.register_open( - FITSStubImageFile.format, FITSStubImageFile, FitsImagePlugin._accept - ) - - -class FITSStubImageFile(ImageFile.StubImageFile): - format = FitsImagePlugin.FitsImageFile.format - format_description = FitsImagePlugin.FitsImageFile.format_description - - def _open(self): - offset = self.fp.tell() - - im = FitsImagePlugin.FitsImageFile(self.fp) - self._size = im.size - self.mode = im.mode - self.tile = [] - - self.fp.seek(offset) - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - msg = "FITS save handler not installed" - raise OSError(msg) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_save(FITSStubImageFile.format, _save) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_cython_wrapper.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_cython_wrapper.py deleted file mode 100644 index 1058519b3f7c85443dcd8983606baf011a74a615..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_cython_wrapper.py +++ /dev/null @@ -1,52 +0,0 @@ -import sys -try: - try: - from _pydevd_bundle_ext import pydevd_cython as mod - - except ImportError: - from _pydevd_bundle import pydevd_cython as mod - -except ImportError: - import struct - - try: - is_python_64bit = (struct.calcsize('P') == 8) - except: - # In Jython this call fails, but this is Ok, we don't support Jython for speedups anyways. - raise ImportError - plat = '32' - if is_python_64bit: - plat = '64' - - # We also accept things as: - # - # _pydevd_bundle.pydevd_cython_win32_27_32 - # _pydevd_bundle.pydevd_cython_win32_34_64 - # - # to have multiple pre-compiled pyds distributed along the IDE - # (generated by build_tools/build_binaries_windows.py). - - mod_name = 'pydevd_cython_%s_%s%s_%s' % (sys.platform, sys.version_info[0], sys.version_info[1], plat) - check_name = '_pydevd_bundle.%s' % (mod_name,) - mod = getattr(__import__(check_name), mod_name) - -# Regardless of how it was found, make sure it's later available as the -# initial name so that the expected types from cython in frame eval -# are valid. 
-sys.modules['_pydevd_bundle.pydevd_cython'] = mod - -trace_dispatch = mod.trace_dispatch - -PyDBAdditionalThreadInfo = mod.PyDBAdditionalThreadInfo - -set_additional_thread_info = mod.set_additional_thread_info - -global_cache_skips = mod.global_cache_skips - -global_cache_frame_skips = mod.global_cache_frame_skips - -_set_additional_thread_info_lock = mod._set_additional_thread_info_lock - -fix_top_level_trace_and_get_trace_func = mod.fix_top_level_trace_and_get_trace_func - -version = getattr(mod, 'version', 0) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/inputhookqt4.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/inputhookqt4.py deleted file mode 100644 index b7e1cf0527a8b5648332fafdf56d1b7c7b429ef7..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydev_ipython/inputhookqt4.py +++ /dev/null @@ -1,196 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Qt4's inputhook support function - -Author: Christian Boos -""" - -#----------------------------------------------------------------------------- -# Copyright (C) 2011 The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -import os -import signal - -import threading - - -from pydev_ipython.qt_for_kernel import QtCore, QtGui -from pydev_ipython.inputhook import allow_CTRL_C, ignore_CTRL_C, stdin_ready - -# To minimise future merging complexity, rather than edit the entire code base below -# we fake InteractiveShell here -class InteractiveShell: - _instance = None - @classmethod - def instance(cls): - if cls._instance is None: - cls._instance = cls() - return cls._instance - def set_hook(self, *args, **kwargs): - # We don't consider the pre_prompt_hook because we don't have - # KeyboardInterrupts to consider since we are running under PyDev - pass - - -#----------------------------------------------------------------------------- -# Module Globals -#----------------------------------------------------------------------------- - -got_kbdint = False -sigint_timer = None - -#----------------------------------------------------------------------------- -# Code -#----------------------------------------------------------------------------- - -def create_inputhook_qt4(mgr, app=None): - """Create an input hook for running the Qt4 application event loop. - - Parameters - ---------- - mgr : an InputHookManager - - app : Qt Application, optional. - Running application to use. If not given, we probe Qt for an - existing application object, and create a new one if none is found. - - Returns - ------- - A pair consisting of a Qt Application (either the one given or the - one found or created) and a inputhook. - - Notes - ----- - We use a custom input hook instead of PyQt4's default one, as it - interacts better with the readline packages (issue #481). - - The inputhook function works in tandem with a 'pre_prompt_hook' - which automatically restores the hook as an inputhook in case the - latter has been temporarily disabled after having intercepted a - KeyboardInterrupt. 
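        Illustrative usage (added sketch; ``mgr`` is assumed to be an
        InputHookManager instance, as used elsewhere in this module)::

            app, hook = create_inputhook_qt4(mgr)
            mgr.set_inputhook(hook)  # cleared again on a KeyboardInterrupt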
- """ - - if app is None: - app = QtCore.QCoreApplication.instance() - if app is None: - app = QtGui.QApplication([" "]) - - # Re-use previously created inputhook if any - ip = InteractiveShell.instance() - if hasattr(ip, '_inputhook_qt4'): - return app, ip._inputhook_qt4 - - # Otherwise create the inputhook_qt4/preprompthook_qt4 pair of - # hooks (they both share the got_kbdint flag) - - def inputhook_qt4(): - """PyOS_InputHook python hook for Qt4. - - Process pending Qt events and if there's no pending keyboard - input, spend a short slice of time (50ms) running the Qt event - loop. - - As a Python ctypes callback can't raise an exception, we catch - the KeyboardInterrupt and temporarily deactivate the hook, - which will let a *second* CTRL+C be processed normally and go - back to a clean prompt line. - """ - try: - allow_CTRL_C() - app = QtCore.QCoreApplication.instance() - if not app: # shouldn't happen, but safer if it happens anyway... - return 0 - app.processEvents(QtCore.QEventLoop.AllEvents, 300) - if not stdin_ready(): - # Generally a program would run QCoreApplication::exec() - # from main() to enter and process the Qt event loop until - # quit() or exit() is called and the program terminates. - # - # For our input hook integration, we need to repeatedly - # enter and process the Qt event loop for only a short - # amount of time (say 50ms) to ensure that Python stays - # responsive to other user inputs. - # - # A naive approach would be to repeatedly call - # QCoreApplication::exec(), using a timer to quit after a - # short amount of time. Unfortunately, QCoreApplication - # emits an aboutToQuit signal before stopping, which has - # the undesirable effect of closing all modal windows. - # - # To work around this problem, we instead create a - # QEventLoop and call QEventLoop::exec(). Other than - # setting some state variables which do not seem to be - # used anywhere, the only thing QCoreApplication adds is - # the aboutToQuit signal which is precisely what we are - # trying to avoid. - timer = QtCore.QTimer() - event_loop = QtCore.QEventLoop() - timer.timeout.connect(event_loop.quit) - while not stdin_ready(): - timer.start(50) - event_loop.exec_() - timer.stop() - except KeyboardInterrupt: - global got_kbdint, sigint_timer - - ignore_CTRL_C() - got_kbdint = True - mgr.clear_inputhook() - - # This generates a second SIGINT so the user doesn't have to - # press CTRL+C twice to get a clean prompt. - # - # Since we can't catch the resulting KeyboardInterrupt here - # (because this is a ctypes callback), we use a timer to - # generate the SIGINT after we leave this callback. - # - # Unfortunately this doesn't work on Windows (SIGINT kills - # Python and CTRL_C_EVENT doesn't work). 
- if(os.name == 'posix'): - pid = os.getpid() - if(not sigint_timer): - sigint_timer = threading.Timer(.01, os.kill, - args=[pid, signal.SIGINT] ) - sigint_timer.start() - else: - print("\nKeyboardInterrupt - Ctrl-C again for new prompt") - - - except: # NO exceptions are allowed to escape from a ctypes callback - ignore_CTRL_C() - from traceback import print_exc - print_exc() - print("Got exception from inputhook_qt4, unregistering.") - mgr.clear_inputhook() - finally: - allow_CTRL_C() - return 0 - - def preprompthook_qt4(ishell): - """'pre_prompt_hook' used to restore the Qt4 input hook - - (in case the latter was temporarily deactivated after a - CTRL+C) - """ - global got_kbdint, sigint_timer - - if(sigint_timer): - sigint_timer.cancel() - sigint_timer = None - - if got_kbdint: - mgr.set_inputhook(inputhook_qt4) - got_kbdint = False - - ip._inputhook_qt4 = inputhook_qt4 - ip.set_hook('pre_prompt_hook', preprompthook_qt4) - - return app, inputhook_qt4 diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/rotated_coco_evaluation.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/rotated_coco_evaluation.py deleted file mode 100644 index 0d5306c3a0601ed555c7bef20e0ac4ca64264442..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/evaluation/rotated_coco_evaluation.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import json -import numpy as np -import os -import torch -from annotator.oneformer.pycocotools.cocoeval import COCOeval, maskUtils - -from annotator.oneformer.detectron2.structures import BoxMode, RotatedBoxes, pairwise_iou_rotated -from annotator.oneformer.detectron2.utils.file_io import PathManager - -from .coco_evaluation import COCOEvaluator - - -class RotatedCOCOeval(COCOeval): - @staticmethod - def is_rotated(box_list): - if type(box_list) == np.ndarray: - return box_list.shape[1] == 5 - elif type(box_list) == list: - if box_list == []: # cannot decide the box_dim - return False - return np.all( - np.array( - [ - (len(obj) == 5) and ((type(obj) == list) or (type(obj) == np.ndarray)) - for obj in box_list - ] - ) - ) - return False - - @staticmethod - def boxlist_to_tensor(boxlist, output_box_dim): - if type(boxlist) == np.ndarray: - box_tensor = torch.from_numpy(boxlist) - elif type(boxlist) == list: - if boxlist == []: - return torch.zeros((0, output_box_dim), dtype=torch.float32) - else: - box_tensor = torch.FloatTensor(boxlist) - else: - raise Exception("Unrecognized boxlist type") - - input_box_dim = box_tensor.shape[1] - if input_box_dim != output_box_dim: - if input_box_dim == 4 and output_box_dim == 5: - box_tensor = BoxMode.convert(box_tensor, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS) - else: - raise Exception( - "Unable to convert from {}-dim box to {}-dim box".format( - input_box_dim, output_box_dim - ) - ) - return box_tensor - - def compute_iou_dt_gt(self, dt, gt, is_crowd): - if self.is_rotated(dt) or self.is_rotated(gt): - # TODO: take is_crowd into consideration - assert all(c == 0 for c in is_crowd) - dt = RotatedBoxes(self.boxlist_to_tensor(dt, output_box_dim=5)) - gt = RotatedBoxes(self.boxlist_to_tensor(gt, output_box_dim=5)) - return pairwise_iou_rotated(dt, gt) - else: - # This is the same as the classical COCO evaluation - return maskUtils.iou(dt, gt, is_crowd) - - def computeIoU(self, imgId, catId): - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = 
self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return [] - inds = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt = dt[0 : p.maxDets[-1]] - - assert p.iouType == "bbox", "unsupported iouType for iou computation" - - g = [g["bbox"] for g in gt] - d = [d["bbox"] for d in dt] - - # compute iou between each dt and gt region - iscrowd = [int(o["iscrowd"]) for o in gt] - - # Note: this function is copied from cocoeval.py in cocoapi - # and the major difference is here. - ious = self.compute_iou_dt_gt(d, g, iscrowd) - return ious - - -class RotatedCOCOEvaluator(COCOEvaluator): - """ - Evaluate object proposal/instance detection outputs using COCO-like metrics and APIs, - with rotated boxes support. - Note: this uses IOU only and does not consider angle differences. - """ - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a COCO model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a COCO model. It is a list of dicts with key - "instances" that contains :class:`Instances`. - """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - - prediction["instances"] = self.instances_to_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - self._predictions.append(prediction) - - def instances_to_json(self, instances, img_id): - num_instance = len(instances) - if num_instance == 0: - return [] - - boxes = instances.pred_boxes.tensor.numpy() - if boxes.shape[1] == 4: - boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - boxes = boxes.tolist() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - - results = [] - for k in range(num_instance): - result = { - "image_id": img_id, - "category_id": classes[k], - "bbox": boxes[k], - "score": scores[k], - } - - results.append(result) - return results - - def _eval_predictions(self, predictions, img_ids=None): # img_ids: unused - """ - Evaluate predictions on the given tasks. - Fill self._results with the metrics of the tasks. 
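        Each entry produced by ``instances_to_json`` above is a COCO-style dict
        whose bbox keeps the 5-dim rotated (``BoxMode.XYWHA_ABS``) format when
        the model predicts rotated boxes (illustrative values, added sketch)::

            {"image_id": 42, "category_id": 3,
             "bbox": [xc, yc, w, h, angle_in_degrees],
             "score": 0.91}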
- """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - for result in coco_results: - result["category_id"] = reverse_id_mapping[result["category_id"]] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - - assert self._tasks is None or set(self._tasks) == { - "bbox" - }, "[RotatedCOCOEvaluator] Only bbox evaluation is supported" - coco_eval = ( - self._evaluate_predictions_on_coco(self._coco_api, coco_results) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - task = "bbox" - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res - - def _evaluate_predictions_on_coco(self, coco_gt, coco_results): - """ - Evaluate the coco results using COCOEval API. - """ - assert len(coco_results) > 0 - - coco_dt = coco_gt.loadRes(coco_results) - - # Only bbox is supported for now - coco_eval = RotatedCOCOeval(coco_gt, coco_dt, iouType="bbox") - - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - - return coco_eval diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/exp/upernet_global_small/run.sh b/spaces/Superlang/ImageProcessor/annotator/uniformer/exp/upernet_global_small/run.sh deleted file mode 100644 index 9fb22edfa7a32624ea08a63fe7d720c40db3b696..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/exp/upernet_global_small/run.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/train.py ${work_path}/config.py \ - --launcher pytorch \ - --options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \ - --work-dir ${work_path}/ckpt \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/deform_conv.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/deform_conv.py deleted file mode 100644 index a3f8c75ee774823eea334e3b3732af6a18f55038..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/deform_conv.py +++ /dev/null @@ -1,405 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from annotator.uniformer.mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext('_ext', [ - 'deform_conv_forward', 'deform_conv_backward_input', - 'deform_conv_backward_parameters' -]) - - -class DeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, - input, - offset, - weight, - stride, - padding, - dilation, - groups, - deform_groups, - bias=False, - im2col_step=32): - return g.op( - 'mmcv::MMCVDeformConv2d', - input, - offset, - weight, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups, - bias_i=bias, - im2col_step_i=im2col_step) - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=False, - im2col_step=32): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - assert bias is False, 'Only support bias is False.' - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.im2col_step = im2col_step - - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. - # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. 
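        # Added note (illustrative, not from the original source): e.g. under
        # torch.cuda.amp.autocast() the offset coming out of an nn.Conv2d is
        # float16 while the model weight stays float32; the two casts below keep
        # input, weight and offset in one dtype before calling the extension.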
- input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty( - DeformConv2dFunction._output_size(ctx, input, weight)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % - cur_im2col_step) == 0, 'im2col step must divide batchsize' - ext_module.deform_conv_forward( - input, - weight, - offset, - output, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % cur_im2col_step - ) == 0, 'batch size must be divisible by im2col_step' - - grad_output = grad_output.contiguous() - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - ext_module.deform_conv_backward_input( - input, - offset, - grad_output, - grad_input, - grad_offset, - weight, - ctx.bufs_[0], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - ext_module.deform_conv_backward_parameters( - input, - offset, - grad_output, - grad_weight, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - scale=1, - im2col_step=cur_im2col_step) - - return grad_input, grad_offset, grad_weight, \ - None, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -deform_conv2d = DeformConv2dFunction.apply - - -class DeformConv2d(nn.Module): - r"""Deformable 2D convolution. - - Applies a deformable 2D convolution over an input signal composed of - several input planes. DeformConv2d was described in the paper - `Deformable Convolutional Networks - `_ - - Note: - The argument ``im2col_step`` was added in version 1.3.17, which means - number of samples processed by the ``im2col_cuda_kernel`` per call. - It enables users to define ``batch_size`` and ``im2col_step`` more - flexibly and solved `issue mmcv#1440 - `_. - - Args: - in_channels (int): Number of channels in the input image. 
- out_channels (int): Number of channels produced by the convolution. - kernel_size(int, tuple): Size of the convolving kernel. - stride(int, tuple): Stride of the convolution. Default: 1. - padding (int or tuple): Zero-padding added to both sides of the input. - Default: 0. - dilation (int or tuple): Spacing between kernel elements. Default: 1. - groups (int): Number of blocked connections from input. - channels to output channels. Default: 1. - deform_groups (int): Number of deformable group partitions. - bias (bool): If True, adds a learnable bias to the output. - Default: False. - im2col_step (int): Number of samples processed by im2col_cuda_kernel - per call. It will work when ``batch_size`` > ``im2col_step``, but - ``batch_size`` must be divisible by ``im2col_step``. Default: 32. - `New in version 1.3.17.` - """ - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='DeformConv2d') - def __init__(self, - in_channels: int, - out_channels: int, - kernel_size: Union[int, Tuple[int, ...]], - stride: Union[int, Tuple[int, ...]] = 1, - padding: Union[int, Tuple[int, ...]] = 0, - dilation: Union[int, Tuple[int, ...]] = 1, - groups: int = 1, - deform_groups: int = 1, - bias: bool = False, - im2col_step: int = 32) -> None: - super(DeformConv2d, self).__init__() - - assert not bias, \ - f'bias={bias} is not supported in DeformConv2d.' - assert in_channels % groups == 0, \ - f'in_channels {in_channels} cannot be divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} cannot be divisible by groups \ - {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - self.im2col_step = im2col_step - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - # only weight, no bias - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // self.groups, - *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - # switch the initialization of `self.weight` to the standard kaiming - # method described in `Delving deep into rectifiers: Surpassing - # human-level performance on ImageNet classification` - He, K. et al. - # (2015), using a uniform distribution - nn.init.kaiming_uniform_(self.weight, nonlinearity='relu') - - def forward(self, x: Tensor, offset: Tensor) -> Tensor: - """Deformable Convolutional forward function. - - Args: - x (Tensor): Input feature, shape (B, C_in, H_in, W_in) - offset (Tensor): Offset for deformable convolution, shape - (B, deform_groups*kernel_size[0]*kernel_size[1]*2, - H_out, W_out), H_out, W_out are equal to the output's. - - An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Returns: - Tensor: Output of the layer. 
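        Illustrative usage (added sketch; needs the compiled mmcv extension,
        typically on GPU)::

            conv = DeformConv2d(3, 8, kernel_size=3, padding=1).cuda()
            x = torch.randn(1, 3, 32, 32, device='cuda')
            # one (dy, dx) pair per kernel tap: deform_groups * 2 * 3 * 3 = 18
            offset = torch.zeros(1, 18, 32, 32, device='cuda')  # zeros ~ plain conv
            out = conv(x, offset)  # shape (1, 8, 32, 32)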
- """ - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) < - self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0) - offset = offset.contiguous() - out = deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - - pad_w].contiguous() - return out - - def __repr__(self): - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels},\n' - s += f'out_channels={self.out_channels},\n' - s += f'kernel_size={self.kernel_size},\n' - s += f'stride={self.stride},\n' - s += f'padding={self.padding},\n' - s += f'dilation={self.dilation},\n' - s += f'groups={self.groups},\n' - s += f'deform_groups={self.deform_groups},\n' - # bias is not supported in DeformConv2d. - s += 'bias=False)' - return s - - -@CONV_LAYERS.register_module('DCN') -class DeformConv2dPack(DeformConv2d): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, DeformConvPack loads previous benchmark models. 
- if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/additions/do_catkin_make.sh b/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/additions/do_catkin_make.sh deleted file mode 100644 index 0d416fc00282aab146326bbba12a9274e1ba29b8..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/zoe/zoedepth/models/base_models/midas_repo/ros/additions/do_catkin_make.sh +++ /dev/null @@ -1,5 +0,0 @@ -mkdir src -catkin_make -source devel/setup.bash -echo $ROS_PACKAGE_PATH -chmod +x ./devel/setup.bash diff --git a/spaces/TIMBOVILL/RVC-Noobie/rmvpe.py b/spaces/TIMBOVILL/RVC-Noobie/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/TIMBOVILL/RVC-Noobie/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = 
out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class 
E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - 
n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends = center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/TRI-ML/risk_biased_prediction/risk_biased/mpc_planner/planner_cost.py b/spaces/TRI-ML/risk_biased_prediction/risk_biased/mpc_planner/planner_cost.py deleted file mode 100644 index 6232fd837e253fd7c0e1e561db3fecea020e98c9..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/risk_biased/mpc_planner/planner_cost.py +++ /dev/null @@ -1,127 +0,0 @@ -from dataclasses import dataclass -from typing import Optional - -import torch -from mmcv import Config - - -@dataclass -class TrackingCostParams: - scale_longitudinal: float - scale_lateral: float - reduce: str - - @staticmethod - def from_config(cfg: Config): - return TrackingCostParams( - scale_longitudinal=cfg.tracking_cost_scale_longitudinal, - scale_lateral=cfg.tracking_cost_scale_lateral, - 
reduce=cfg.tracking_cost_reduce, - ) - - -class TrackingCost: - """Quadratic Trajectory Tracking Cost - - Args: - params: tracking cost parameters - """ - - def __init__(self, params: TrackingCostParams) -> None: - self.scale_longitudinal = params.scale_longitudinal - self.scale_lateral = params.scale_lateral - assert params.reduce in [ - "min", - "max", - "mean", - "now", - "final", - ], "unsupported reduce type" - self._reduce_fun_name = params.reduce - - def __call__( - self, - ego_position_trajectory: torch.Tensor, - target_position_trajectory: torch.Tensor, - target_velocity_trajectory: torch.Tensor, - ) -> torch.Tensor: - """Computes quadratic tracking cost - - Args: - ego_position_trajectory: (some_shape, num_some_steps, 2) tensor of ego - position trajectory - target_position_trajectory: (some_shape, num_some_steps, 2) tensor of - ego target position trajectory - target_velocity_trajectory: (some_shape, num_some_steps, 2) tensor of - ego target velocity trajectory - - Returns: - (some_shape) cost - """ - cost_matrix = self._get_quadratic_cost_matrix(target_velocity_trajectory) - cost = ( - ( - (ego_position_trajectory - target_position_trajectory).unsqueeze(-2) - @ cost_matrix - @ (ego_position_trajectory - target_position_trajectory).unsqueeze(-1) - ) - .squeeze(-1) - .squeeze(-1) - ) - return self._reduce(cost, dim=-1) - - def _reduce(self, cost: torch.Tensor, dim: Optional[int] = None) -> torch.Tensor: - """Reduces the cost tensor based on self._reduce_fun_name - - Args: - cost: cost tensor of some shape where the last dimension represents time - dim (optional): tensor dimension to be reduced. Defaults to None. - - Returns: - reduced cost tensor - """ - if self._reduce_fun_name == "min": - return torch.min(cost, dim=dim)[0] if dim is not None else torch.min(cost) - if self._reduce_fun_name == "max": - return torch.max(cost, dim=dim)[0] if dim is not None else torch.max(cost) - if self._reduce_fun_name == "mean": - return torch.mean(cost, dim=dim) if dim is not None else torch.mean(cost) - if self._reduce_fun_name == "now": - return cost[..., 0] - if self._reduce_fun_name == "final": - return cost[..., -1] - - def _get_quadratic_cost_matrix( - self, target_velocity_trajectory: torch.Tensor, eps: float = 1e-8 - ) -> torch.Tensor: - """Gets quadratic cost matrix based on target velocity direction per time step. - If target velocity is 0 in norm, then all zero tensor is returned for that time step. - - Args: - target_velocity_trajectory: (some_shape, num_some_steps, 2) tensor of - ego target velocity trajectory - eps (optional): small positive number to ensure numerical stability. Defaults to - 1e-8. 
- - Returns: - (some_shape, num_some_steps, 2, 2) quadratic cost matrix - """ - longitudinal_direction = ( - target_velocity_trajectory - / ( - torch.linalg.norm(target_velocity_trajectory, dim=-1).unsqueeze(-1) - + eps - ) - ).unsqueeze(-1) - rotation_90_deg = torch.Tensor([[[0.0, -1.0], [1.0, 0]]]) - lateral_direction = rotation_90_deg @ longitudinal_direction - orthogonal_matrix = torch.cat( - (longitudinal_direction, lateral_direction), dim=-1 - ) - eigen_matrix = torch.Tensor( - [[[self.scale_longitudinal, 0.0], [0.0, self.scale_lateral]]] - ) - cost_matrix = ( - orthogonal_matrix @ eigen_matrix @ orthogonal_matrix.transpose(-1, -2) - ) - return cost_matrix diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/compat.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/compat.py deleted file mode 100644 index ccec9379dba2b03015ce123dd04a042f32431235..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/cachecontrol/compat.py +++ /dev/null @@ -1,32 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -try: - from urllib.parse import urljoin -except ImportError: - from urlparse import urljoin - - -try: - import cPickle as pickle -except ImportError: - import pickle - -# Handle the case where the requests module has been patched to not have -# urllib3 bundled as part of its source. -try: - from pip._vendor.requests.packages.urllib3.response import HTTPResponse -except ImportError: - from pip._vendor.urllib3.response import HTTPResponse - -try: - from pip._vendor.requests.packages.urllib3.util import is_fp_closed -except ImportError: - from pip._vendor.urllib3.util import is_fp_closed - -# Replicate some six behaviour -try: - text_type = unicode -except NameError: - text_type = str diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/lightning_train_net.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/lightning_train_net.py deleted file mode 100644 index f6734b566b6764ee54dd2af1b7310fedb34bb40d..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/lightning_train_net.py +++ /dev/null @@ -1,239 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Lightning Trainer should be considered beta at this point -# We have confirmed that training and validation run correctly and produce correct results -# Depending on how you launch the trainer, there are issues with processes terminating correctly -# This module is still dependent on D2 logging, but could be transferred to use Lightning logging - -import logging -import os -import time -import weakref -from collections import OrderedDict -from typing import Any, Dict, List - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import build_detection_test_loader, build_detection_train_loader -from detectron2.engine import ( - DefaultTrainer, - SimpleTrainer, - default_argument_parser, - default_setup, - default_writers, - hooks, -) -from detectron2.evaluation import print_csv_format -from detectron2.evaluation.testing import flatten_results_dict -from detectron2.modeling import build_model -from detectron2.solver import build_lr_scheduler, build_optimizer -from detectron2.utils.events import EventStorage -from detectron2.utils.logger import setup_logger - -import pytorch_lightning as pl # type: ignore -from pytorch_lightning import LightningDataModule, LightningModule -from train_net import build_evaluator - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger("detectron2") - - -class TrainingModule(LightningModule): - def __init__(self, cfg): - super().__init__() - if not logger.isEnabledFor(logging.INFO): # setup_logger is not called for d2 - setup_logger() - self.cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size()) - self.storage: EventStorage = None - self.model = build_model(self.cfg) - - self.start_iter = 0 - self.max_iter = cfg.SOLVER.MAX_ITER - - def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None: - checkpoint["iteration"] = self.storage.iter - - def on_load_checkpoint(self, checkpointed_state: Dict[str, Any]) -> None: - self.start_iter = checkpointed_state["iteration"] - self.storage.iter = self.start_iter - - def setup(self, stage: str): - if self.cfg.MODEL.WEIGHTS: - self.checkpointer = DetectionCheckpointer( - # Assume you want to save checkpoints together with logs/statistics - self.model, - self.cfg.OUTPUT_DIR, - ) - logger.info(f"Load model weights from checkpoint: {self.cfg.MODEL.WEIGHTS}.") - # Only load weights, use lightning checkpointing if you want to resume - self.checkpointer.load(self.cfg.MODEL.WEIGHTS) - - self.iteration_timer = hooks.IterationTimer() - self.iteration_timer.before_train() - self.data_start = time.perf_counter() - self.writers = None - - def training_step(self, batch, batch_idx): - data_time = time.perf_counter() - self.data_start - # Need to manually enter/exit since trainer may launch processes - # This ideally belongs in setup, but setup seems to run before processes are spawned - if self.storage is None: - self.storage = EventStorage(0) - self.storage.__enter__() - self.iteration_timer.trainer = weakref.proxy(self) - self.iteration_timer.before_step() - self.writers = ( - default_writers(self.cfg.OUTPUT_DIR, self.max_iter) - if comm.is_main_process() - else {} - ) - - loss_dict = self.model(batch) - SimpleTrainer.write_metrics(loss_dict, data_time) - - opt = self.optimizers() - self.storage.put_scalar( - "lr", opt.param_groups[self._best_param_group_id]["lr"], smoothing_hint=False - ) - self.iteration_timer.after_step() - self.storage.step() - # A little odd to put before step here, but it's the 
best way to get a proper timing - self.iteration_timer.before_step() - - if self.storage.iter % 20 == 0: - for writer in self.writers: - writer.write() - return sum(loss_dict.values()) - - def training_step_end(self, training_step_outpus): - self.data_start = time.perf_counter() - return training_step_outpus - - def training_epoch_end(self, training_step_outputs): - self.iteration_timer.after_train() - if comm.is_main_process(): - self.checkpointer.save("model_final") - for writer in self.writers: - writer.write() - writer.close() - self.storage.__exit__(None, None, None) - - def _process_dataset_evaluation_results(self) -> OrderedDict: - results = OrderedDict() - for idx, dataset_name in enumerate(self.cfg.DATASETS.TEST): - results[dataset_name] = self._evaluators[idx].evaluate() - if comm.is_main_process(): - print_csv_format(results[dataset_name]) - - if len(results) == 1: - results = list(results.values())[0] - return results - - def _reset_dataset_evaluators(self): - self._evaluators = [] - for dataset_name in self.cfg.DATASETS.TEST: - evaluator = build_evaluator(self.cfg, dataset_name) - evaluator.reset() - self._evaluators.append(evaluator) - - def on_validation_epoch_start(self, _outputs): - self._reset_dataset_evaluators() - - def validation_epoch_end(self, _outputs): - results = self._process_dataset_evaluation_results(_outputs) - - flattened_results = flatten_results_dict(results) - for k, v in flattened_results.items(): - try: - v = float(v) - except Exception as e: - raise ValueError( - "[EvalHook] eval_function should return a nested dict of float. " - "Got '{}: {}' instead.".format(k, v) - ) from e - self.storage.put_scalars(**flattened_results, smoothing_hint=False) - - def validation_step(self, batch, batch_idx: int, dataloader_idx: int = 0) -> None: - if not isinstance(batch, List): - batch = [batch] - outputs = self.model(batch) - self._evaluators[dataloader_idx].process(batch, outputs) - - def configure_optimizers(self): - optimizer = build_optimizer(self.cfg, self.model) - self._best_param_group_id = hooks.LRScheduler.get_best_param_group_id(optimizer) - scheduler = build_lr_scheduler(self.cfg, optimizer) - return [optimizer], [{"scheduler": scheduler, "interval": "step"}] - - -class DataModule(LightningDataModule): - def __init__(self, cfg): - super().__init__() - self.cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size()) - - def train_dataloader(self): - return build_detection_train_loader(self.cfg) - - def val_dataloader(self): - dataloaders = [] - for dataset_name in self.cfg.DATASETS.TEST: - dataloaders.append(build_detection_test_loader(self.cfg, dataset_name)) - return dataloaders - - -def main(args): - cfg = setup(args) - train(cfg, args) - - -def train(cfg, args): - trainer_params = { - # training loop is bounded by max steps, use a large max_epochs to make - # sure max_steps is met first - "max_epochs": 10 ** 8, - "max_steps": cfg.SOLVER.MAX_ITER, - "val_check_interval": cfg.TEST.EVAL_PERIOD if cfg.TEST.EVAL_PERIOD > 0 else 10 ** 8, - "num_nodes": args.num_machines, - "gpus": args.num_gpus, - "num_sanity_val_steps": 0, - } - if cfg.SOLVER.AMP.ENABLED: - trainer_params["precision"] = 16 - - last_checkpoint = os.path.join(cfg.OUTPUT_DIR, "last.ckpt") - if args.resume: - # resume training from checkpoint - trainer_params["resume_from_checkpoint"] = last_checkpoint - logger.info(f"Resuming training from checkpoint: {last_checkpoint}.") - - trainer = pl.Trainer(**trainer_params) - logger.info(f"start to train with {args.num_machines} nodes and 
{args.num_gpus} GPUs") - - module = TrainingModule(cfg) - data_module = DataModule(cfg) - if args.eval_only: - logger.info("Running inference") - trainer.validate(module, data_module) - else: - logger.info("Running training") - trainer.fit(module, data_module) - - -def setup(args): - """ - Create configs and perform basic setups. - """ - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - return cfg - - -if __name__ == "__main__": - parser = default_argument_parser() - args = parser.parse_args() - logger.info("Command Line Args:", args) - main(args) diff --git a/spaces/Thorsten-Voice/demo/README.md b/spaces/Thorsten-Voice/demo/README.md deleted file mode 100644 index fdbdec9fc59514050acbb6ba1edd0e56fdc6d28e..0000000000000000000000000000000000000000 --- a/spaces/Thorsten-Voice/demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Demo -emoji: 📚 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/WindVChen/INR-Harmon/model/base/ih_model.py b/spaces/WindVChen/INR-Harmon/model/base/ih_model.py deleted file mode 100644 index 3c4dc531e41d99169dc35113e2c4bfcdb0aa5e67..0000000000000000000000000000000000000000 --- a/spaces/WindVChen/INR-Harmon/model/base/ih_model.py +++ /dev/null @@ -1,88 +0,0 @@ -import torch -import torchvision -import torch.nn as nn - -from .conv_autoencoder import ConvEncoder, DeconvDecoder, INRDecoder - -from .ops import ScaleLayer - - -class IHModelWithBackbone(nn.Module): - def __init__( - self, - model, backbone, - downsize_backbone_input=False, - mask_fusion='sum', - backbone_conv1_channels=64, opt=None - ): - super(IHModelWithBackbone, self).__init__() - self.downsize_backbone_input = downsize_backbone_input - self.mask_fusion = mask_fusion - - self.backbone = backbone - self.model = model - self.opt = opt - - self.mask_conv = nn.Sequential( - nn.Conv2d(1, backbone_conv1_channels, kernel_size=3, stride=2, padding=1, bias=True), - ScaleLayer(init_value=0.1, lr_mult=1) - ) - - def forward(self, image, mask, coord=None, start_proportion=None): - if self.opt.INRDecode and self.opt.hr_train and (self.training or hasattr(self.opt, 'split_num') or hasattr(self.opt, 'split_resolution')): - backbone_image = torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(image[0]) - backbone_mask = torch.cat( - (torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(mask[0]), - 1.0 - torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(mask[0])), dim=1) - else: - backbone_image = torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(image) - backbone_mask = torch.cat((torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(mask), - 1.0 - torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(mask)), dim=1) - - backbone_mask_features = self.mask_conv(backbone_mask[:, :1]) - backbone_features = self.backbone(backbone_image, backbone_mask, backbone_mask_features) - - output = self.model(image, mask, backbone_features, coord=coord, start_proportion=start_proportion) - return output - - -class DeepImageHarmonization(nn.Module): - def __init__( - self, - depth, - norm_layer=nn.BatchNorm2d, batchnorm_from=0, - attend_from=-1, - image_fusion=False, - ch=64, max_channels=512, - backbone_from=-1, backbone_channels=None, backbone_mode='', 
opt=None - ): - super(DeepImageHarmonization, self).__init__() - self.depth = depth - self.encoder = ConvEncoder( - depth, ch, - norm_layer, batchnorm_from, max_channels, - backbone_from, backbone_channels, backbone_mode, INRDecode=opt.INRDecode - ) - self.opt = opt - if opt.INRDecode: - "See Table 2 in the paper to test with different INR decoders' structures." - self.decoder = INRDecoder(depth, self.encoder.blocks_channels, norm_layer, opt, backbone_from) - else: - "Baseline: https://github.com/SamsungLabs/image_harmonization" - self.decoder = DeconvDecoder(depth, self.encoder.blocks_channels, norm_layer, attend_from, image_fusion) - - def forward(self, image, mask, backbone_features=None, coord=None, start_proportion=None): - if self.opt.INRDecode and self.opt.hr_train and (self.training or hasattr(self.opt, 'split_num') or hasattr(self.opt, 'split_resolution')): - x = torch.cat((torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(image[0]), - torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(mask[0])), dim=1) - else: - x = torch.cat((torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(image), - torchvision.transforms.Resize([self.opt.base_size, self.opt.base_size])(mask)), dim=1) - - intermediates = self.encoder(x, backbone_features) - - if self.opt.INRDecode and self.opt.hr_train and (self.training or hasattr(self.opt, 'split_num') or hasattr(self.opt, 'split_resolution')): - output = self.decoder(intermediates, image[1], mask[1], coord_samples=coord, start_proportion=start_proportion) - else: - output = self.decoder(intermediates, image, mask) - return output diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/ipython.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/ipython.py deleted file mode 100644 index 75a98dc023966956ccc9b89603f35de9e806e9b7..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/utils/ipython.py +++ /dev/null @@ -1,64 +0,0 @@ -"ipython utils" - -import os, functools, traceback, gc - -def is_in_ipython(): - "Is the code running in the ipython environment (jupyter including)" - - program_name = os.path.basename(os.getenv('_', '')) - - if ('jupyter-notebook' in program_name or # jupyter-notebook - 'ipython' in program_name or # ipython - 'JPY_PARENT_PID' in os.environ): # ipython-notebook - return True - else: - return False - -IS_IN_IPYTHON = is_in_ipython() - -def is_in_colab(): - "Is the code running in Google Colaboratory?" - if not IS_IN_IPYTHON: return False - try: - from google import colab - return True - except: return False - -IS_IN_COLAB = is_in_colab() - -def get_ref_free_exc_info(): - "Free traceback from references to locals() in each frame to avoid circular reference leading to gc.collect() unable to reclaim memory" - type, val, tb = sys.exc_info() - traceback.clear_frames(tb) - return (type, val, tb) - -def gpu_mem_restore(func): - "Reclaim GPU RAM if CUDA out of memory happened, or execution was interrupted" - @functools.wraps(func) - def wrapper(*args, **kwargs): - tb_clear_frames = os.environ.get('FASTAI_TB_CLEAR_FRAMES', None) - if not IS_IN_IPYTHON or tb_clear_frames=="0": - return func(*args, **kwargs) - - try: - return func(*args, **kwargs) - except Exception as e: - if ("CUDA out of memory" in str(e) or - "device-side assert triggered" in str(e) or - tb_clear_frames == "1"): - type, val, tb = get_ref_free_exc_info() # must! 
- gc.collect() - if "device-side assert triggered" in str(e): - warn("""When 'device-side assert triggered' error happens, it's not possible to recover and you must restart the kernel to continue. Use os.environ['CUDA_LAUNCH_BLOCKING']="1" before restarting to debug""") - raise type(val).with_traceback(tb) from None - else: raise # re-raises the exact last exception - return wrapper - -class gpu_mem_restore_ctx(): - "context manager to reclaim RAM if an exception happened under ipython" - def __enter__(self): return self - def __exit__(self, exc_type, exc_val, exc_tb): - if not exc_val: return True - traceback.clear_frames(exc_tb) - gc.collect() - raise exc_type(exc_val).with_traceback(exc_tb) from None diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/onnx_export.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/onnx_export.py deleted file mode 100644 index a762b233238d25a37c157d584d1c80f54184b3e7..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/onnx_export.py +++ /dev/null @@ -1,73 +0,0 @@ -import argparse -import time -import numpy as np -import onnx -from onnxsim import simplify -import onnxruntime as ort -import onnxoptimizer -import torch -from model_onnx import SynthesizerTrn -import utils -from hubert import hubert_model_onnx - -def main(HubertExport,NetExport): - - path = "NyaruTaffy" - - if(HubertExport): - device = torch.device("cuda") - hubert_soft = hubert_model_onnx.hubert_soft("hubert/model.pt") - test_input = torch.rand(1, 1, 16000) - input_names = ["source"] - output_names = ["embed"] - torch.onnx.export(hubert_soft.to(device), - test_input.to(device), - "hubert3.0.onnx", - dynamic_axes={ - "source": { - 2: "sample_length" - } - }, - verbose=False, - opset_version=13, - input_names=input_names, - output_names=output_names) - if(NetExport): - device = torch.device("cuda") - hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - SVCVITS = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", SVCVITS, None) - _ = SVCVITS.eval().to(device) - for i in SVCVITS.parameters(): - i.requires_grad = False - test_hidden_unit = torch.rand(1, 50, 256) - test_lengths = torch.LongTensor([50]) - test_pitch = torch.rand(1, 50) - test_sid = torch.LongTensor([0]) - input_names = ["hidden_unit", "lengths", "pitch", "sid"] - output_names = ["audio", ] - SVCVITS.eval() - torch.onnx.export(SVCVITS, - ( - test_hidden_unit.to(device), - test_lengths.to(device), - test_pitch.to(device), - test_sid.to(device) - ), - f"checkpoints/{path}/model.onnx", - dynamic_axes={ - "hidden_unit": [0, 1], - "pitch": [1] - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names) - - -if __name__ == '__main__': - main(False,True) diff --git a/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/attentions.py b/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/Zaxxced/rvc-random-v2/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - 
filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - 
p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Zeltoria/Anime/README.md b/spaces/Zeltoria/Anime/README.md deleted file mode 100644 index 1e80b568bbf011a0c2bc3571a523e00455e9c02e..0000000000000000000000000000000000000000 --- a/spaces/Zeltoria/Anime/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Anime -emoji: 🐠 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aadnk/whisper-webui/src/vad.py b/spaces/aadnk/whisper-webui/src/vad.py deleted file mode 100644 index e68ee7391e93f539a05d548601f2d87168bb1282..0000000000000000000000000000000000000000 --- a/spaces/aadnk/whisper-webui/src/vad.py +++ /dev/null @@ -1,568 +0,0 @@ -from abc import ABC, abstractmethod -from collections import Counter, deque -import time - -from typing import Any, Deque, Iterator, List, Dict - -from pprint import pprint -from src.hooks.progressListener import ProgressListener -from src.hooks.subTaskProgressListener import SubTaskProgressListener -from src.hooks.whisperProgressHook import create_progress_listener_handle -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -from src.segments import merge_timestamps -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -# Workaround for https://github.com/tensorflow/tensorflow/issues/48797 -try: - import tensorflow as tf -except ModuleNotFoundError: - # Error handling - pass - -import torch - -import ffmpeg -import numpy as np - -from src.utils import format_timestamp -from enum import Enum - -class NonSpeechStrategy(Enum): - """ - Ignore non-speech frames segments. - """ - SKIP = 1 - """ - Just treat non-speech segments as speech. - """ - CREATE_SEGMENT = 2 - """ - Expand speech segments into subsequent non-speech segments. 
- """ - EXPAND_SEGMENT = 3 - -# Defaults for Silero -SPEECH_TRESHOLD = 0.3 - -# Minimum size of segments to process -MIN_SEGMENT_DURATION = 1 - -# The maximum time for texts from old segments to be used in the next segment -MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled) -PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this - -VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio - -class TranscriptionConfig(ABC): - def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - self.non_speech_strategy = non_speech_strategy - self.segment_padding_left = segment_padding_left - self.segment_padding_right = segment_padding_right - self.max_silent_period = max_silent_period - self.max_merge_size = max_merge_size - self.max_prompt_window = max_prompt_window - self.initial_segment_index = initial_segment_index - -class PeriodicTranscriptionConfig(TranscriptionConfig): - def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index) - self.periodic_duration = periodic_duration - -class AbstractTranscription(ABC): - def __init__(self, sampling_rate: int = 16000): - self.sampling_rate = sampling_rate - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - return load_audio(str, self.sampling_rate, start_time, duration) - - def is_transcribe_timestamps_fast(self): - """ - Determine if get_transcribe_timestamps is fast enough to not need parallelization. - """ - return False - - @abstractmethod - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - return - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method, - after merging the given segments using the specified configuration. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. 
- """ - merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size, - config.segment_padding_left, config.segment_padding_right) - - if config.non_speech_strategy != NonSpeechStrategy.SKIP: - # Expand segments to include the gaps between them - if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT): - # When we have a prompt window, we create speech segments betwen each segment if we exceed the merge size - merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size) - elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT: - # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment) - merged = self.expand_gaps(merged, total_duration=total_duration) - else: - raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy)) - - print("Transcribing non-speech:") - pprint(merged) - return merged - - def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig, - progressListener: ProgressListener = None): - """ - Transcribe the given audo file. - - Parameters - ---------- - audio: str - The audio file. - whisperCallable: WhisperCallback - A callback object to call to transcribe each segment. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - - try: - max_audio_duration = self.get_audio_duration(audio, config) - timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration) - - # Get speech timestamps from full audio file - merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration) - - # A deque of transcribed segments that is passed to the next segment as a prompt - prompt_window = deque() - - print("Processing timestamps:") - pprint(merged) - - result = { - 'text': "", - 'segments': [], - 'language': "" - } - languageCounter = Counter() - detected_language = None - - segment_index = config.initial_segment_index - - # Calculate progress - progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0 - progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged]) - - # For each time segment, run whisper - for segment in merged: - segment_index += 1 - segment_start = segment['start'] - segment_end = segment['end'] - segment_expand_amount = segment.get('expand_amount', 0) - segment_gap = segment.get('gap', False) - - segment_duration = segment_end - segment_start - - if segment_duration < MIN_SEGMENT_DURATION: - continue - - # Audio to run on Whisper - segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration)) - # Previous segments to use as a prompt - segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None - - # Detected language - detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None - - print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ", - segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language) - - perf_start_time = time.perf_counter() - - scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration, - sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration) - segment_result = 
whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener) - - perf_end_time = time.perf_counter() - print("Whisper took {} seconds".format(perf_end_time - perf_start_time)) - - adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration) - - # Propagate expand amount to the segments - if (segment_expand_amount > 0): - segment_without_expansion = segment_duration - segment_expand_amount - - for adjusted_segment in adjusted_segments: - adjusted_segment_end = adjusted_segment['end'] - - # Add expand amount if the segment got expanded - if (adjusted_segment_end > segment_without_expansion): - adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion - - # Append to output - result['text'] += segment_result['text'] - result['segments'].extend(adjusted_segments) - - # Increment detected language - if not segment_gap: - languageCounter[segment_result['language']] += 1 - - # Update prompt window - self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config) - - if detected_language is not None: - result['language'] = detected_language - finally: - # Notify progress listener that we are done - if progressListener is not None: - progressListener.on_finished() - return result - - def get_audio_duration(self, audio: str, config: TranscriptionConfig): - return get_audio_duration(audio) - - def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig): - if (config.max_prompt_window is not None and config.max_prompt_window > 0): - # Add segments to the current prompt window (unless it is a speech gap) - if not segment_gap: - for segment in adjusted_segments: - if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB: - prompt_window.append(segment) - - while (len(prompt_window) > 0): - first_end_time = prompt_window[0].get('end', 0) - # Time expanded in the segments should be discounted from the prompt window - first_expand_time = prompt_window[0].get('expand_amount', 0) - - if (first_end_time - first_expand_time < segment_end - config.max_prompt_window): - prompt_window.popleft() - else: - break - - def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float): - result = [] - last_end_time = 0 - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - if (last_end_time != segment_start): - delta = segment_start - last_end_time - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } ) - - last_end_time = segment_end - result.append(segment) - - # Also include total duration if specified - if (total_duration is not None and last_end_time < total_duration): - delta = total_duration - segment_start - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } ) - - return result - - # Expand the end time of each segment to the start of the next segment - def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 
1): - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - # Expand if the gap actually exists - if (delta >= 0): - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - - result.append(current_segment) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - if (last_segment['end'] < total_duration): - last_segment = last_segment.copy() - last_segment['end'] = total_duration - result[-1] = last_segment - - return result - - def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - expanded = False - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - if (max_expand_size is not None and delta <= max_expand_size): - # Just expand the current segment - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - expanded = True - - result.append(current_segment) - - # Add a gap to the next segment if needed - if (delta >= 0 and not expanded): - result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } ) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - delta = total_duration - last_segment['end'] - - if (delta > 0): - if (max_expand_size is not None and delta <= max_expand_size): - # Expand the last segment - last_segment = last_segment.copy() - last_segment['expand_amount'] = delta - last_segment['end'] = total_duration - result[-1] = last_segment - else: - result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } ) - - return result - - def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None): - result = [] - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - # Filter segments? 
- if (max_source_time is not None): - if (segment_start > max_source_time): - continue - segment_end = min(max_source_time, segment_end) - - new_segment = segment.copy() - - # Add to start and end - new_segment['start'] = segment_start + adjust_seconds - new_segment['end'] = segment_end + adjust_seconds - - # Handle words - if ('words' in new_segment): - for word in new_segment['words']: - # Adjust start and end - word['start'] = word['start'] + adjust_seconds - word['end'] = word['end'] + adjust_seconds - - result.append(new_segment) - return result - - def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float): - result = [] - - for entry in timestamps: - start = entry['start'] - end = entry['end'] - - result.append({ - 'start': start * factor, - 'end': end * factor - }) - return result - - -class VadSileroTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None): - super().__init__(sampling_rate=sampling_rate) - self.model = None - self.cache = cache - self._initialize_model() - - def _initialize_model(self): - if (self.cache is not None): - model_key = "VadSileroTranscription" - self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model) - print("Loaded Silerio model from cache.") - else: - self.model, self.get_speech_timestamps = self._create_model() - print("Created Silerio model") - - def _create_model(self): - model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad') - - # Silero does not benefit from multi-threading - torch.set_num_threads(1) # JIT - (get_speech_timestamps, _, _, _, _) = utils - - return model, get_speech_timestamps - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - result = [] - - print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time)) - perf_start_time = time.perf_counter() - - # Divide procesisng of audio into chunks - chunk_start = start_time - - while (chunk_start < end_time): - chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK) - - print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration))) - wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration)) - - sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD) - seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate) - adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration) - - #pprint(adjusted) - - result.extend(adjusted) - chunk_start += chunk_duration - - perf_end_time = time.perf_counter() - print("VAD processing took {} seconds".format(perf_end_time - perf_start_time)) - - return result - - def __getstate__(self): - # We only need the sampling rate - return { 'sampling_rate': self.sampling_rate } - - def __setstate__(self, state): - self.sampling_rate = state['sampling_rate'] - self.model = None - # Use the global cache - self.cache = GLOBAL_MODEL_CACHE - self._initialize_model() - -# A very simple VAD that just marks every N seconds as speech -class VadPeriodicTranscription(AbstractTranscription): - def __init__(self, sampling_rate: int = 16000): - super().__init__(sampling_rate=sampling_rate) - - def is_transcribe_timestamps_fast(self): - # This is a very fast VAD - no 
need to parallelize it - return True - - def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float): - result = [] - - # Generate a timestamp every N seconds - start_timestamp = start_time - - while (start_timestamp < end_time): - end_timestamp = min(start_timestamp + config.periodic_duration, end_time) - segment_duration = end_timestamp - start_timestamp - - # Minimum duration is 1 second - if (segment_duration >= 1): - result.append( { 'start': start_timestamp, 'end': end_timestamp } ) - - start_timestamp = end_timestamp - - return result - -def get_audio_duration(file: str): - return float(ffmpeg.probe(file)["format"]["duration"]) - -def load_audio(file: str, sample_rate: int = 16000, - start_time: str = None, duration: str = None): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - start_time: str - The start time, using the standard FFMPEG time duration syntax, or None to disable. - - duration: str - The duration, using the standard FFMPEG time duration syntax, or None to disable. - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - inputArgs = {'threads': 0} - - if (start_time is not None): - inputArgs['ss'] = start_time - if (duration is not None): - inputArgs['t'] = duration - - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - out, _ = ( - ffmpeg.input(file, **inputArgs) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate) - .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/hrnet.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/hrnet.py deleted file mode 100644 index c0fd0a974192231506aa68b1e1719f618b78a1b3..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/hrnet.py +++ /dev/null @@ -1,537 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init, - kaiming_init) -from mmcv.runner import load_checkpoint -from torch.nn.modules.batchnorm import _BatchNorm - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES -from .resnet import BasicBlock, Bottleneck - - -class HRModule(nn.Module): - """High-Resolution Module for HRNet. - - In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange - is in this module. 
- """ - - def __init__(self, - num_branches, - blocks, - num_blocks, - in_channels, - num_channels, - multiscale_output=True, - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN')): - super(HRModule, self).__init__() - self._check_branches(num_branches, num_blocks, in_channels, - num_channels) - - self.in_channels = in_channels - self.num_branches = num_branches - - self.multiscale_output = multiscale_output - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - self.with_cp = with_cp - self.branches = self._make_branches(num_branches, blocks, num_blocks, - num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(inplace=False) - - def _check_branches(self, num_branches, num_blocks, in_channels, - num_channels): - if num_branches != len(num_blocks): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_BLOCKS({len(num_blocks)})' - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_CHANNELS({len(num_channels)})' - raise ValueError(error_msg) - - if num_branches != len(in_channels): - error_msg = f'NUM_BRANCHES({num_branches}) ' \ - f'!= NUM_INCHANNELS({len(in_channels)})' - raise ValueError(error_msg) - - def _make_one_branch(self, - branch_index, - block, - num_blocks, - num_channels, - stride=1): - downsample = None - if stride != 1 or \ - self.in_channels[branch_index] != \ - num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - self.in_channels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, num_channels[branch_index] * - block.expansion)[1]) - - layers = [] - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - self.in_channels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append( - block( - self.in_channels[branch_index], - num_channels[branch_index], - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - branches = [] - - for i in range(num_branches): - branches.append( - self._make_one_branch(i, block, num_blocks, num_channels)) - - return nn.ModuleList(branches) - - def _make_fuse_layers(self): - if self.num_branches == 1: - return None - - num_branches = self.num_branches - in_channels = self.in_channels - fuse_layers = [] - num_out_branches = num_branches if self.multiscale_output else 1 - for i in range(num_out_branches): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=1, - stride=1, - padding=0, - bias=False), - build_norm_layer(self.norm_cfg, in_channels[i])[1], - nn.Upsample( - scale_factor=2**(j - i), mode='nearest'))) - elif j == i: - fuse_layer.append(None) - else: - conv_downsamples = [] - for k in range(i - j): - if k == i - j - 1: - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[i], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[i])[1])) - else: - conv_downsamples.append( - nn.Sequential( - 
build_conv_layer( - self.conv_cfg, - in_channels[j], - in_channels[j], - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - in_channels[j])[1], - nn.ReLU(inplace=False))) - fuse_layer.append(nn.Sequential(*conv_downsamples)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def forward(self, x): - """Forward function.""" - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - for i in range(len(self.fuse_layers)): - y = 0 - for j in range(self.num_branches): - if i == j: - y += x[j] - else: - y += self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - return x_fuse - - -@BACKBONES.register_module() -class HRNet(nn.Module): - """HRNet backbone. - - High-Resolution Representations for Labeling Pixels and Regions - arXiv: https://arxiv.org/abs/1904.04514 - - Args: - extra (dict): detailed configuration for each stage of HRNet. - in_channels (int): Number of input image channels. Default: 3. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from mmdet.models import HRNet - >>> import torch - >>> extra = dict( - >>> stage1=dict( - >>> num_modules=1, - >>> num_branches=1, - >>> block='BOTTLENECK', - >>> num_blocks=(4, ), - >>> num_channels=(64, )), - >>> stage2=dict( - >>> num_modules=1, - >>> num_branches=2, - >>> block='BASIC', - >>> num_blocks=(4, 4), - >>> num_channels=(32, 64)), - >>> stage3=dict( - >>> num_modules=4, - >>> num_branches=3, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4), - >>> num_channels=(32, 64, 128)), - >>> stage4=dict( - >>> num_modules=3, - >>> num_branches=4, - >>> block='BASIC', - >>> num_blocks=(4, 4, 4, 4), - >>> num_channels=(32, 64, 128, 256))) - >>> self = HRNet(extra, in_channels=1) - >>> self.eval() - >>> inputs = torch.rand(1, 1, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 32, 8, 8) - (1, 64, 4, 4) - (1, 128, 2, 2) - (1, 256, 1, 1) - """ - - blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - def __init__(self, - extra, - in_channels=3, - conv_cfg=None, - norm_cfg=dict(type='BN'), - norm_eval=True, - with_cp=False, - zero_init_residual=False): - super(HRNet, self).__init__() - self.extra = extra - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.norm_eval = norm_eval - self.with_cp = with_cp - self.zero_init_residual = zero_init_residual - - # stem net - self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1) - self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2) - - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - self.conv_cfg, - 64, - 64, - kernel_size=3, - stride=2, - padding=1, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.relu = nn.ReLU(inplace=True) - - # stage 1 - self.stage1_cfg = self.extra['stage1'] - num_channels = self.stage1_cfg['num_channels'][0] - block_type = self.stage1_cfg['block'] - num_blocks = self.stage1_cfg['num_blocks'][0] - - block = self.blocks_dict[block_type] - stage1_out_channels = num_channels * block.expansion - self.layer1 = self._make_layer(block, 64, num_channels, num_blocks) - - # stage 2 - self.stage2_cfg = self.extra['stage2'] - num_channels = self.stage2_cfg['num_channels'] - block_type = self.stage2_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition1 = self._make_transition_layer([stage1_out_channels], - num_channels) - self.stage2, pre_stage_channels = self._make_stage( - self.stage2_cfg, num_channels) - - # stage 3 - self.stage3_cfg = self.extra['stage3'] - num_channels = self.stage3_cfg['num_channels'] - block_type = self.stage3_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition2 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage3, pre_stage_channels = self._make_stage( - self.stage3_cfg, num_channels) - - # stage 4 - self.stage4_cfg = self.extra['stage4'] - num_channels = self.stage4_cfg['num_channels'] - block_type = self.stage4_cfg['block'] - - block = self.blocks_dict[block_type] - num_channels = [channel * block.expansion for channel in num_channels] - self.transition3 = self._make_transition_layer(pre_stage_channels, - num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: the normalization layer named "norm2" """ - return getattr(self, self.norm2_name) - - def _make_transition_layer(self, num_channels_pre_layer, - num_channels_cur_layer): - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - num_channels_pre_layer[i], - num_channels_cur_layer[i], - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, - 
num_channels_cur_layer[i])[1], - nn.ReLU(inplace=True))) - else: - transition_layers.append(None) - else: - conv_downsamples = [] - for j in range(i + 1 - num_branches_pre): - in_channels = num_channels_pre_layer[-1] - out_channels = num_channels_cur_layer[i] \ - if j == i - num_branches_pre else in_channels - conv_downsamples.append( - nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - out_channels, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, out_channels)[1], - nn.ReLU(inplace=True))) - transition_layers.append(nn.Sequential(*conv_downsamples)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - build_conv_layer( - self.conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - build_norm_layer(self.norm_cfg, planes * block.expansion)[1]) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - downsample=downsample, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append( - block( - inplanes, - planes, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*layers) - - def _make_stage(self, layer_config, in_channels, multiscale_output=True): - num_modules = layer_config['num_modules'] - num_branches = layer_config['num_branches'] - num_blocks = layer_config['num_blocks'] - num_channels = layer_config['num_channels'] - block = self.blocks_dict[layer_config['block']] - - hr_modules = [] - for i in range(num_modules): - # multi_scale_output is only used for the last module - if not multiscale_output and i == num_modules - 1: - reset_multiscale_output = False - else: - reset_multiscale_output = True - - hr_modules.append( - HRModule( - num_branches, - block, - num_blocks, - in_channels, - num_channels, - reset_multiscale_output, - with_cp=self.with_cp, - norm_cfg=self.norm_cfg, - conv_cfg=self.conv_cfg)) - - return nn.Sequential(*hr_modules), in_channels - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.norm2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['num_branches']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - x_list = [] - for i in range(self.stage3_cfg['num_branches']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - x_list = [] - for i in range(self.stage4_cfg['num_branches']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - return y_list - - def train(self, mode=True): - """Convert the model into training mode will keeping the normalization - layer freezed.""" - super(HRNet, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/options/get_eval_option.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/options/get_eval_option.py deleted file mode 100644 index d0989ba1a8116068753ada2cb1806744e4512447..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/options/get_eval_option.py +++ /dev/null @@ -1,83 +0,0 @@ -from argparse import Namespace -import re -from os.path import join as pjoin - - -def is_float(numStr): - flag = False - numStr = str(numStr).strip().lstrip('-').lstrip('+') - try: - reg = re.compile(r'^[-+]?[0-9]+\.[0-9]+$') - res = reg.match(str(numStr)) - if res: - flag = True - except Exception as ex: - print("is_float() - error: " + str(ex)) - return flag - - -def is_number(numStr): - flag = False - numStr = str(numStr).strip().lstrip('-').lstrip('+') - if str(numStr).isdigit(): - flag = True - return flag - - -def get_opt(opt_path, device): - opt = Namespace() - opt_dict = vars(opt) - - skip = ('-------------- End ----------------', - '------------ Options -------------', - '\n') - print('Reading', opt_path) - with open(opt_path) as f: - for line in f: - if line.strip() not in skip: - # print(line.strip()) - key, value = line.strip().split(': ') - if value in ('True', 'False'): - opt_dict[key] = (value == 'True') - # print(key, value) - elif is_float(value): - opt_dict[key] = float(value) - elif is_number(value): - opt_dict[key] = int(value) - else: - opt_dict[key] = str(value) - - # print(opt) - opt_dict['which_epoch'] = 'finest' - opt.save_root = pjoin(opt.checkpoints_dir, opt.dataset_name, opt.name) - opt.model_dir = pjoin(opt.save_root, 'model') - opt.meta_dir = pjoin(opt.save_root, 'meta') - - if opt.dataset_name == 't2m': - opt.data_root = './dataset/HumanML3D/' - opt.motion_dir = pjoin(opt.data_root, 
'new_joint_vecs') - opt.text_dir = pjoin(opt.data_root, 'texts') - opt.joints_num = 22 - opt.dim_pose = 263 - opt.max_motion_length = 196 - opt.max_motion_frame = 196 - opt.max_motion_token = 55 - elif opt.dataset_name == 'kit': - opt.data_root = './dataset/KIT-ML/' - opt.motion_dir = pjoin(opt.data_root, 'new_joint_vecs') - opt.text_dir = pjoin(opt.data_root, 'texts') - opt.joints_num = 21 - opt.dim_pose = 251 - opt.max_motion_length = 196 - opt.max_motion_frame = 196 - opt.max_motion_token = 55 - else: - raise KeyError('Dataset not recognized') - - opt.dim_word = 300 - opt.num_classes = 200 // opt.unit_length - opt.is_train = False - opt.is_continue = False - opt.device = device - - return opt \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/window/cocoa/pyglet_view.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/window/cocoa/pyglet_view.py deleted file mode 100644 index 13c96c0b65c1c184094c3b4583f0c21f11543b70..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/window/cocoa/pyglet_view.py +++ /dev/null @@ -1,351 +0,0 @@ -from pyglet.window import key, mouse -from pyglet.libs.darwin.quartzkey import keymap, charmap - -from pyglet.libs.darwin import cocoapy -from .pyglet_textview import PygletTextView - - -NSTrackingArea = cocoapy.ObjCClass('NSTrackingArea') - -# Event data helper functions. - - -def getMouseDelta(nsevent): - dx = nsevent.deltaX() - dy = -nsevent.deltaY() - return dx, dy - - -def getMousePosition(self, nsevent): - in_window = nsevent.locationInWindow() - in_window = self.convertPoint_fromView_(in_window, None) - x = int(in_window.x) - y = int(in_window.y) - # Must record mouse position for BaseWindow.draw_mouse_cursor to work. - self._window._mouse_x = x - self._window._mouse_y = y - return x, y - - -def getModifiers(nsevent): - modifiers = 0 - modifierFlags = nsevent.modifierFlags() - if modifierFlags & cocoapy.NSAlphaShiftKeyMask: - modifiers |= key.MOD_CAPSLOCK - if modifierFlags & cocoapy.NSShiftKeyMask: - modifiers |= key.MOD_SHIFT - if modifierFlags & cocoapy.NSControlKeyMask: - modifiers |= key.MOD_CTRL - if modifierFlags & cocoapy.NSAlternateKeyMask: - modifiers |= key.MOD_ALT - modifiers |= key.MOD_OPTION - if modifierFlags & cocoapy.NSCommandKeyMask: - modifiers |= key.MOD_COMMAND - if modifierFlags & cocoapy.NSFunctionKeyMask: - modifiers |= key.MOD_FUNCTION - return modifiers - - -def getSymbol(nsevent): - symbol = keymap.get(nsevent.keyCode(), None) - if symbol is not None: - return symbol - - chars = cocoapy.cfstring_to_string(nsevent.charactersIgnoringModifiers()) - if chars: - return charmap.get(chars[0].upper(), None) - - return None - - -class PygletView_Implementation: - PygletView = cocoapy.ObjCSubclass('NSView', 'PygletView') - - @PygletView.method(b'@'+cocoapy.NSRectEncoding+cocoapy.PyObjectEncoding) - def initWithFrame_cocoaWindow_(self, frame, window): - - # The tracking area is used to get mouseEntered, mouseExited, and cursorUpdate - # events so that we can custom set the mouse cursor within the view. - self._tracking_area = None - - self = cocoapy.ObjCInstance(cocoapy.send_super(self, 'initWithFrame:', frame, argtypes=[cocoapy.NSRect])) - - if not self: - return None - - # CocoaWindow object. - self._window = window - self.updateTrackingAreas() - - # Create an instance of PygletTextView to handle text events. 
- # We must do this because NSOpenGLView doesn't conform to the - # NSTextInputClient protocol by default, and the insertText: method will - # not do the right thing with respect to translating key sequences like - # "Option-e", "e" if the protocol isn't implemented. So the easiest - # thing to do is to subclass NSTextView which *does* implement the - # protocol and let it handle text input. - self._textview = PygletTextView.alloc().initWithCocoaWindow_(window) - # Add text view to the responder chain. - self.addSubview_(self._textview) - return self - - @PygletView.method('v') - def dealloc(self): - self._window = None - # cocoapy.end_message(self.objc_self, 'removeFromSuperviewWithoutNeedingDisplay') - self._textview.release() - self._textview = None - self._tracking_area.release() - self._tracking_area = None - cocoapy.send_super(self, 'dealloc') - - @PygletView.method('v') - def updateTrackingAreas(self): - # This method is called automatically whenever the tracking areas need to be - # recreated, for example when window resizes. - if self._tracking_area: - self.removeTrackingArea_(self._tracking_area) - self._tracking_area.release() - self._tracking_area = None - - tracking_options = cocoapy.NSTrackingMouseEnteredAndExited | cocoapy.NSTrackingActiveInActiveApp | cocoapy.NSTrackingCursorUpdate - frame = self.frame() - - self._tracking_area = NSTrackingArea.alloc().initWithRect_options_owner_userInfo_( - frame, # rect - tracking_options, # options - self, # owner - None) # userInfo - - self.addTrackingArea_(self._tracking_area) - - @PygletView.method('B') - def canBecomeKeyView(self): - return True - - @PygletView.method('B') - def isOpaque(self): - return True - - ## Event responders. - - # This method is called whenever the view changes size. - @PygletView.method(b'v'+cocoapy.NSSizeEncoding) - def setFrameSize_(self, size): - cocoapy.send_super(self, 'setFrameSize:', size, - superclass_name='NSView', - argtypes=[cocoapy.NSSize]) - - # This method is called when view is first installed as the - # contentView of window. Don't do anything on first call. - # This also helps ensure correct window creation event ordering. - if not self._window.context.canvas: - return - - width, height = int(size.width), int(size.height) - self._window.switch_to() - self._window.context.update_geometry() - self._window._width, self._window._height = width, height - self._window.dispatch_event("on_resize", width, height) - self._window.dispatch_event("on_expose") - # Can't get app.event_loop.enter_blocking() working with Cocoa, because - # when mouse clicks on the window's resize control, Cocoa enters into a - # mini-event loop that only responds to mouseDragged and mouseUp events. - # This means that using NSTimer to call idle() won't work. Our kludge - # is to override NSWindow's nextEventMatchingMask_etc method and call - # idle() from there. - if self.inLiveResize(): - from pyglet import app - if app.event_loop is not None: - app.event_loop.idle() - - @PygletView.method('v@') - def pygletKeyDown_(self, nsevent): - symbol = getSymbol(nsevent) - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_key_press', symbol, modifiers) - - @PygletView.method('v@') - def pygletKeyUp_(self, nsevent): - symbol = getSymbol(nsevent) - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_key_release', symbol, modifiers) - - @PygletView.method('v@') - def pygletFlagsChanged_(self, nsevent): - # Handles on_key_press and on_key_release events for modifier keys. 
- # Note that capslock is handled differently than other keys; it acts - # as a toggle, so on_key_release is only sent when it's turned off. - - # TODO: Move these constants somewhere else. - # Undocumented left/right modifier masks found by experimentation: - NSLeftShiftKeyMask = 1 << 1 - NSRightShiftKeyMask = 1 << 2 - NSLeftControlKeyMask = 1 << 0 - NSRightControlKeyMask = 1 << 13 - NSLeftAlternateKeyMask = 1 << 5 - NSRightAlternateKeyMask = 1 << 6 - NSLeftCommandKeyMask = 1 << 3 - NSRightCommandKeyMask = 1 << 4 - - maskForKey = {key.LSHIFT: NSLeftShiftKeyMask, - key.RSHIFT: NSRightShiftKeyMask, - key.LCTRL: NSLeftControlKeyMask, - key.RCTRL: NSRightControlKeyMask, - key.LOPTION: NSLeftAlternateKeyMask, - key.ROPTION: NSRightAlternateKeyMask, - key.LCOMMAND: NSLeftCommandKeyMask, - key.RCOMMAND: NSRightCommandKeyMask, - key.CAPSLOCK: cocoapy.NSAlphaShiftKeyMask, - key.FUNCTION: cocoapy.NSFunctionKeyMask} - - symbol = keymap.get(nsevent.keyCode(), None) - - # Ignore this event if symbol is not a modifier key. We must check this - # because e.g., we receive a flagsChanged message when using CMD-tab to - # switch applications, with symbol == "a" when command key is released. - if symbol is None or symbol not in maskForKey: - return - - modifiers = getModifiers(nsevent) - modifierFlags = nsevent.modifierFlags() - - if symbol and modifierFlags & maskForKey[symbol]: - self._window.dispatch_event('on_key_press', symbol, modifiers) - else: - self._window.dispatch_event('on_key_release', symbol, modifiers) - - # Overriding this method helps prevent system beeps for unhandled events. - @PygletView.method('B@') - def performKeyEquivalent_(self, nsevent): - # Let arrow keys and certain function keys pass through the responder - # chain so that the textview can handle on_text_motion events. - modifierFlags = nsevent.modifierFlags() - if modifierFlags & cocoapy.NSNumericPadKeyMask: - return False - if modifierFlags & cocoapy.NSFunctionKeyMask: - ch = cocoapy.cfstring_to_string(nsevent.charactersIgnoringModifiers()) - if ch in (cocoapy.NSHomeFunctionKey, cocoapy.NSEndFunctionKey, - cocoapy.NSPageUpFunctionKey, cocoapy.NSPageDownFunctionKey): - return False - # Send the key equivalent to the main menu to perform menu items. - NSApp = cocoapy.ObjCClass('NSApplication').sharedApplication() - NSApp.mainMenu().performKeyEquivalent_(nsevent) - # Indicate that we've handled the event so system won't beep. - return True - - @PygletView.method('v@') - def mouseMoved_(self, nsevent): - if self._window._mouse_ignore_motion: - self._window._mouse_ignore_motion = False - return - # Don't send on_mouse_motion events if we're not inside the content rectangle. 
- if not self._window._mouse_in_window: - return - x, y = getMousePosition(self, nsevent) - dx, dy = getMouseDelta(nsevent) - self._window.dispatch_event('on_mouse_motion', x, y, dx, dy) - - @PygletView.method('v@') - def scrollWheel_(self, nsevent): - x, y = getMousePosition(self, nsevent) - scroll_x, scroll_y = getMouseDelta(nsevent) - self._window.dispatch_event('on_mouse_scroll', x, y, scroll_x, scroll_y) - - @PygletView.method('v@') - def mouseDown_(self, nsevent): - x, y = getMousePosition(self, nsevent) - buttons = mouse.LEFT - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_press', x, y, buttons, modifiers) - - @PygletView.method('v@') - def mouseDragged_(self, nsevent): - x, y = getMousePosition(self, nsevent) - dx, dy = getMouseDelta(nsevent) - buttons = mouse.LEFT - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_drag', x, y, dx, dy, buttons, modifiers) - - @PygletView.method('v@') - def mouseUp_(self, nsevent): - x, y = getMousePosition(self, nsevent) - buttons = mouse.LEFT - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_release', x, y, buttons, modifiers) - - @PygletView.method('v@') - def rightMouseDown_(self, nsevent): - x, y = getMousePosition(self, nsevent) - buttons = mouse.RIGHT - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_press', x, y, buttons, modifiers) - - @PygletView.method('v@') - def rightMouseDragged_(self, nsevent): - x, y = getMousePosition(self, nsevent) - dx, dy = getMouseDelta(nsevent) - buttons = mouse.RIGHT - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_drag', x, y, dx, dy, buttons, modifiers) - - @PygletView.method('v@') - def rightMouseUp_(self, nsevent): - x, y = getMousePosition(self, nsevent) - buttons = mouse.RIGHT - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_release', x, y, buttons, modifiers) - - @PygletView.method('v@') - def otherMouseDown_(self, nsevent): - x, y = getMousePosition(self, nsevent) - buttons = mouse.MIDDLE - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_press', x, y, buttons, modifiers) - - @PygletView.method('v@') - def otherMouseDragged_(self, nsevent): - x, y = getMousePosition(self, nsevent) - dx, dy = getMouseDelta(nsevent) - buttons = mouse.MIDDLE - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_drag', x, y, dx, dy, buttons, modifiers) - - @PygletView.method('v@') - def otherMouseUp_(self, nsevent): - x, y = getMousePosition(self, nsevent) - buttons = mouse.MIDDLE - modifiers = getModifiers(nsevent) - self._window.dispatch_event('on_mouse_release', x, y, buttons, modifiers) - - @PygletView.method('v@') - def mouseEntered_(self, nsevent): - x, y = getMousePosition(self, nsevent) - self._window._mouse_in_window = True - # Don't call self._window.set_mouse_platform_visible() from here. - # Better to do it from cursorUpdate: - self._window.dispatch_event('on_mouse_enter', x, y) - - @PygletView.method('v@') - def mouseExited_(self, nsevent): - x, y = getMousePosition(self, nsevent) - self._window._mouse_in_window = False - if not self._window._mouse_exclusive: - self._window.set_mouse_platform_visible() - self._window.dispatch_event('on_mouse_leave', x, y) - - @PygletView.method('v@') - def cursorUpdate_(self, nsevent): - # Called when mouse cursor enters view. 
Unlike mouseEntered:, - # this method will be called if the view appears underneath a - # motionless mouse cursor, as can happen during window creation, - # or when switching into fullscreen mode. - # BUG: If the mouse enters the window via the resize control at the - # the bottom right corner, the resize control will set the cursor - # to the default arrow and screw up our cursor tracking. - self._window._mouse_in_window = True - if not self._window._mouse_exclusive: - self._window.set_mouse_platform_visible() - - -PygletView = cocoapy.ObjCClass('PygletView') diff --git a/spaces/adirik/image-guided-owlvit/app.py b/spaces/adirik/image-guided-owlvit/app.py deleted file mode 100644 index 29c75f68b61e11d218ee4237cecdd93449233247..0000000000000000000000000000000000000000 --- a/spaces/adirik/image-guided-owlvit/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import torch -import cv2 -import gradio as gr -import numpy as np -from transformers import OwlViTProcessor, OwlViTForObjectDetection - - -# Use GPU if available -if torch.cuda.is_available(): - device = torch.device("cuda") -else: - device = torch.device("cpu") - -model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32").to(device) -model.eval() -processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") - - -def image_guided_detection(img, query_img, score_threshold, nms_threshold): - target_sizes = torch.Tensor([img.size[::-1]]) - inputs = processor(query_images=query_img, images=img, return_tensors="pt").to(device) - - with torch.no_grad(): - outputs = model.image_guided_detection(**inputs) - - outputs.logits = outputs.logits.cpu() - outputs.pred_boxes = outputs.target_pred_boxes.cpu() - - results = processor.post_process_image_guided_detection( - outputs=outputs, - threshold=score_threshold, - nms_threshold=nms_threshold, - target_sizes=target_sizes - ) - - boxes, scores = results[0]["boxes"], results[0]["scores"] - img = np.asarray(img) - - for box, score in zip(boxes, scores): - box = [int(i) for i in box.tolist()] - - if score >= score_threshold: - img = cv2.rectangle(img, box[:2], box[2:], (255,0,0), 5) - if box[3] + 25 > 768: - y = box[3] - 10 - else: - y = box[3] + 25 - return img - - -description = """ -Gradio demo for image-guided / one-shot object detection with OWL-ViT - -OWL-ViT, -introduced in Simple Open-Vocabulary Object Detection -with Vision Transformers. - -\n\nYou can use OWL-ViT to query images with text descriptions of any object or alternatively with an -example / query image of the target object. To use it, simply upload an image and a query image that only contains the object - you're looking for. You can also use the score and non-maximum suppression threshold sliders to set a threshold to filter out - low probability and overlapping bounding box predictions. - -\n\nFor an in-depth tutorial on how to use OWL-ViT with transformers, check out our -Colab notebook - and our HF spaces demo for zero-shot / text-guided object detection. 
-""" - -demo = gr.Interface( - image_guided_detection, - inputs=[gr.Image(type="pil"), gr.Image(type="pil"), gr.Slider(0, 1, value=0.6), gr.Slider(0, 1, value=0.3)], - outputs="image", - title="Image-Guided Object Detection with OWL-ViT", - description=description, - examples=[ - ["assets/image2.jpeg", "assets/query2.jpeg", 0.7, 0.3], - ["assets/image1.jpeg", "assets/query1.jpeg", 0.6, 0.3] - ] -) - -demo.launch() \ No newline at end of file diff --git a/spaces/adrian065105/andite-anything-v4.0/README.md b/spaces/adrian065105/andite-anything-v4.0/README.md deleted file mode 100644 index 009059f1b73fbf3aeeb401dc042e9ac24e4f6d18..0000000000000000000000000000000000000000 --- a/spaces/adrian065105/andite-anything-v4.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Andite Anything V4.0 -emoji: 💻 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/Detic/detic/data/datasets/cc.py b/spaces/akhaliq/Detic/detic/data/datasets/cc.py deleted file mode 100644 index 7c3e50726f781dba4c72d4e18f4922e503218af8..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/data/datasets/cc.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import os - -from detectron2.data.datasets.builtin_meta import _get_builtin_metadata -from detectron2.data.datasets.lvis import get_lvis_instances_meta -from .lvis_v1 import custom_register_lvis_instances - -_CUSTOM_SPLITS = { - "cc3m_v1_val": ("cc3m/validation/", "cc3m/val_image_info.json"), - "cc3m_v1_train": ("cc3m/training/", "cc3m/train_image_info.json"), - "cc3m_v1_train_tags": ("cc3m/training/", "cc3m/train_image_info_tags.json"), - -} - -for key, (image_root, json_file) in _CUSTOM_SPLITS.items(): - custom_register_lvis_instances( - key, - get_lvis_instances_meta('lvis_v1'), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - diff --git a/spaces/alamin655/websurfx/src/config/parser.rs b/spaces/alamin655/websurfx/src/config/parser.rs deleted file mode 100644 index fb9f8b12420763b6a86f1497d171a2eec98fa5c8..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/src/config/parser.rs +++ /dev/null @@ -1,152 +0,0 @@ -//! This module provides the functionality to parse the lua config and convert the config options -//! into rust readable form. - -use crate::handler::paths::{file_path, FileType}; - -use crate::models::parser_models::{AggregatorConfig, RateLimiter, Style}; -use log::LevelFilter; -use mlua::Lua; -use std::{collections::HashMap, fs, thread::available_parallelism}; - -/// A named struct which stores the parsed config file options. -#[derive(Clone)] -pub struct Config { - /// It stores the parsed port number option on which the server should launch. - pub port: u16, - /// It stores the parsed ip address option on which the server should launch - pub binding_ip: String, - /// It stores the theming options for the website. - pub style: Style, - #[cfg(feature = "redis-cache")] - /// It stores the redis connection url address on which the redis - /// client should connect. - pub redis_url: String, - /// It stores the option to whether enable or disable production use. - pub aggregator: AggregatorConfig, - /// It stores the option to whether enable or disable logs. 
- pub logging: bool, - /// It stores the option to whether enable or disable debug mode. - pub debug: bool, - /// It stores all the engine names that were enabled by the user. - pub upstream_search_engines: Vec, - /// It stores the time (secs) which controls the server request timeout. - pub request_timeout: u8, - /// It stores the number of threads which controls the app will use to run. - pub threads: u8, - /// It stores configuration options for the ratelimiting middleware. - pub rate_limiter: RateLimiter, - /// It stores the level of safe search to be used for restricting content in the - /// search results. - pub safe_search: u8, -} - -impl Config { - /// A function which parses the config.lua file and puts all the parsed options in the newly - /// constructed Config struct and returns it. - /// - /// # Arguments - /// - /// * `logging_initialized` - It takes a boolean which ensures that the logging doesn't get - /// initialized twice. Pass false if the logger has not yet been initialized. - /// - /// # Error - /// - /// Returns a lua parse error if parsing of the config.lua file fails or has a syntax error - /// or io error if the config.lua file doesn't exists otherwise it returns a newly constructed - /// Config struct with all the parsed config options from the parsed config file. - pub fn parse(logging_initialized: bool) -> Result> { - let lua = Lua::new(); - let globals = lua.globals(); - - lua.load(&fs::read_to_string(file_path(FileType::Config)?)?) - .exec()?; - - let parsed_threads: u8 = globals.get::<_, u8>("threads")?; - - let debug: bool = globals.get::<_, bool>("debug")?; - let logging: bool = globals.get::<_, bool>("logging")?; - - if !logging_initialized { - set_logging_level(debug, logging); - } - - let threads: u8 = if parsed_threads == 0 { - let total_num_of_threads: usize = available_parallelism()?.get() / 2; - log::error!( - "Config Error: The value of `threads` option should be a non zero positive integer" - ); - log::error!("Falling back to using {} threads", total_num_of_threads); - total_num_of_threads as u8 - } else { - parsed_threads - }; - - let rate_limiter = globals.get::<_, HashMap>("rate_limiter")?; - - let parsed_safe_search: u8 = globals.get::<_, u8>("safe_search")?; - let safe_search: u8 = match parsed_safe_search { - 0..=4 => parsed_safe_search, - _ => { - log::error!("Config Error: The value of `safe_search` option should be a non zero positive integer from 0 to 4."); - log::error!("Falling back to using the value `1` for the option"); - 1 - } - }; - - Ok(Config { - port: globals.get::<_, u16>("port")?, - binding_ip: globals.get::<_, String>("binding_ip")?, - style: Style::new( - globals.get::<_, String>("theme")?, - globals.get::<_, String>("colorscheme")?, - ), - #[cfg(feature = "redis-cache")] - redis_url: globals.get::<_, String>("redis_url")?, - aggregator: AggregatorConfig { - random_delay: globals.get::<_, bool>("production_use")?, - }, - logging, - debug, - upstream_search_engines: globals - .get::<_, HashMap>("upstream_search_engines")? 
- .into_iter() - .filter_map(|(key, value)| value.then_some(key)) - .filter_map(|engine| crate::models::engine_models::EngineHandler::new(&engine)) - .collect(), - request_timeout: globals.get::<_, u8>("request_timeout")?, - threads, - rate_limiter: RateLimiter { - number_of_requests: rate_limiter["number_of_requests"], - time_limit: rate_limiter["time_limit"], - }, - safe_search, - }) - } -} - -/// a helper function that sets the proper logging level -/// -/// # Arguments -/// -/// * `debug` - It takes the option to whether enable or disable debug mode. -/// * `logging` - It takes the option to whether enable or disable logs. -fn set_logging_level(debug: bool, logging: bool) { - if let Ok(pkg_env_var) = std::env::var("PKG_ENV") { - if pkg_env_var.to_lowercase() == "dev" { - env_logger::Builder::new() - .filter(None, LevelFilter::Trace) - .init(); - return; - } - } - - // Initializing logging middleware with level set to default or info. - let log_level = match (debug, logging) { - (true, true) => LevelFilter::Debug, - (true, false) => LevelFilter::Debug, - (false, true) => LevelFilter::Info, - (false, false) => LevelFilter::Error, - }; - - env_logger::Builder::new().filter(None, log_level).init(); -} diff --git a/spaces/allknowingroger/Image-Models-Test65/app.py b/spaces/allknowingroger/Image-Models-Test65/app.py deleted file mode 100644 index 43e4d0fbd1beaa0418767e895d2166818504f8e6..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test65/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "mangoxb/tangled3", - "Muhammadreza/mann-e-artistic-3", - "iab/output", - "artificialguybr/LogoRedmond-LogoLoraForSDXL", - "Yntec/MGM", - "MakAttack/6537c72d5b769050ff861f8d", - "patrickvonplaten/lora-trained-xl-colab", - "Mahema/pet-cat-abc", - "Yntec/SamaritanDoesArt", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - 
primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/altairv/03/README.md b/spaces/altairv/03/README.md deleted file mode 100644 index 0a33517f926018c8d6bf793410f208c299a5d2c9..0000000000000000000000000000000000000000 --- a/spaces/altairv/03/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: pompoms -emoji: 😳 -colorFrom: black -colorTo: white -sdk: docker -pinned: false -duplicated_from: soggys/pompoms ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/amankishore/sjc/my/utils/event.py b/spaces/amankishore/sjc/my/utils/event.py deleted file mode 100644 index 97dbbf2ea63e05cca70ac3f618dc58a20b90409b..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/my/utils/event.py +++ /dev/null @@ -1,142 +0,0 @@ -# design inspiration from detectron2 -from pathlib import Path -import json -import os -from contextlib import contextmanager -from .ticker import IntervalTicker - - -_CURRENT_STORAGE_STACK = [] - - -def get_event_storage(): - """ - Returns: - The :class:`EventStorage` object that's currently being used. - Throws an error if no :class:`EventStorage` is currently enabled. - """ - assert len( - _CURRENT_STORAGE_STACK - ), "get_event_storage() has to be called inside a 'with EventStorage(...)' context!" 
- return _CURRENT_STORAGE_STACK[-1] - - -def read_lined_json(fname): - with Path(fname).open('r') as f: - for line in f: - item = json.loads(line) - yield item - - -def read_stats(dirname, key): - if dirname is None or not (fname := Path(dirname) / "history.json").is_file(): - return [], [] - stats = read_lined_json(fname) - stats = list(filter(lambda x: key in x, stats)) - xs = [e['iter'] for e in stats] - ys = [e[key] for e in stats] - return xs, ys - - -class EventStorage(): - def __init__(self, output_dir="./", start_iter=0, flush_period=60): - self.iter = start_iter - self.ticker = IntervalTicker(flush_period) - self.history = [] - self._current_prefix = "" - self._init_curr_buffer_() - - self.output_dir = output_dir - self.writable = False - - def _open(self): - if self.writable: - output_dir = Path(self.output_dir) - if not output_dir.is_dir(): - output_dir.mkdir(parents=True, exist_ok=True) - json_fname = output_dir / 'history.json' - - self._file_handle = json_fname.open('a', encoding='utf8') - self.output_dir = output_dir # make sure it's a path object - - def _init_curr_buffer_(self): - self.curr_buffer = {'iter': self.iter} - - def step(self, flush=False): - self.history.append(self.curr_buffer) - - on_flush_period = self.ticker.tick() - if flush or on_flush_period: - self.flush_history() - - self.iter += 1 - self._init_curr_buffer_() - - def flush_history(self): - if self.writable: - for item in self.history: - line = json.dumps(item, sort_keys=True, ensure_ascii=False) + "\n" - self._file_handle.write(line) - self._file_handle.flush() - self.history = [] - - def full_key(self, key): - assert isinstance(key, str) - name = self._current_prefix + key - return name - - def put(self, key, val): - key = self.full_key(key) - assert isinstance(val, (int, float, str)) - if isinstance(val, float): - val = round(val, 3) - self.curr_buffer[key] = val - - def put_scalars(self, **kwargs): - for k, v in kwargs.items(): - self.put(k, v) - - def put_artifact(self, key, ext, save_func): - if not self.writable: - return - os.makedirs(self.output_dir / key, exist_ok=True) - fname = (self.output_dir / key / f"step_{self.iter}").with_suffix(ext) - fname = str(fname) - - # must be called inside so that - # 1. the func is not executed if the metric is not writable - # 2. 
the key is only inserted if the func succeeds - save_func(fname) - self.put(key, fname) - return fname - - def close(self): - self.flush_history() - if self.writable: - self._file_handle.close() - - def get_last(self): - if len(self.history) > 0: - last = self.history[-1] - return last - - def __enter__(self): - if len(_CURRENT_STORAGE_STACK) > 0: - parent = _CURRENT_STORAGE_STACK[-1] - root, dirname = parent.output_dir, self.output_dir - if root is not None and dirname is not None: - child_dir = parent.output_dir / f"{self.output_dir}_{parent.iter}" - self.output_dir = child_dir - parent.put(str(dirname), str(child_dir)) - - if self.output_dir is not None: - self.writable = True - self._open() - - _CURRENT_STORAGE_STACK.append(self) - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - assert _CURRENT_STORAGE_STACK[-1] == self - _CURRENT_STORAGE_STACK.pop() - self.close() diff --git a/spaces/andreasmartin/faq/util.py b/spaces/andreasmartin/faq/util.py deleted file mode 100644 index 24217c7316844ae7a68c930907fef2246faf66fc..0000000000000000000000000000000000000000 --- a/spaces/andreasmartin/faq/util.py +++ /dev/null @@ -1,102 +0,0 @@ -import pandas as pd -from langchain.docstore.document import Document -import re - -SHEET_URL_X = "https://docs.google.com/spreadsheets/d/" -SHEET_URL_Y = "/edit#gid=" -SHEET_URL_Y_EXPORT = "/export?gid=" -SPLIT_PAGE_BREAKS = False -SYNONYMS = None - - -def get_id(sheet_url: str) -> str: - x = sheet_url.find(SHEET_URL_X) - y = sheet_url.find(SHEET_URL_Y) - return sheet_url[x + len(SHEET_URL_X) : y] + "-" + sheet_url[y + len(SHEET_URL_Y) :] - - -def xlsx_url(get_id: str) -> str: - y = get_id.rfind("-") - return SHEET_URL_X + get_id[0:y] + SHEET_URL_Y_EXPORT + get_id[y + 1 :] - - -def read_df(xlsx_url: str, page_content_column: str) -> pd.DataFrame: - df = pd.read_excel(xlsx_url, header=0, keep_default_na=False) - if SPLIT_PAGE_BREAKS: - df = split_page_breaks(df, page_content_column) - df = remove_empty_rows(df, page_content_column) - if SYNONYMS is not None: - df = duplicate_rows_with_synonyms(df, page_content_column, SYNONYMS) - return df - - -def split_page_breaks(df: pd.DataFrame, column_name: str) -> pd.DataFrame: - split_values = df[column_name].str.split("\n") - - new_df = pd.DataFrame({column_name: split_values.explode()}) - new_df.reset_index(drop=True, inplace=True) - - column_order = df.columns - - new_df = new_df.reindex(column_order, axis=1) - - other_columns = column_order.drop(column_name) - for column in other_columns: - new_df[column] = ( - df[column].repeat(split_values.str.len()).reset_index(drop=True) - ) - - return new_df - - -def transform_documents_to_dataframe(documents: Document) -> pd.DataFrame: - keys = [] - values = {"document_score": [], "page_content": []} - - for doc, score in documents: - for key, value in doc.metadata.items(): - if key not in keys: - keys.append(key) - values[key] = [] - values[key].append(value) - values["document_score"].append(score) - values["page_content"].append(doc.page_content) - - return pd.DataFrame(values) - - -def remove_duplicates_by_column(df: pd.DataFrame, column_name: str) -> pd.DataFrame: - df.drop_duplicates(subset=column_name, inplace=True, ignore_index=True) - - return df - - -def dataframe_to_dict(df: pd.DataFrame) -> dict: - df_records = df.to_dict(orient="records") - - return df_records - - -def duplicate_rows_with_synonyms(df: pd.DataFrame, column: str, synonyms: list[list[str]]) -> pd.DataFrame: - new_rows = [] - for index, row in df.iterrows(): - new_rows.append(row) - 
text = row[column] - for synonym_list in synonyms: - for synonym in synonym_list: - pattern = r'(?i)\b({}(?:s|es|ed|ing)?)\b'.format(synonym) - if re.search(pattern, text): - for replacement in synonym_list: - if replacement != synonym: - new_row = row.copy() - new_row[column] = re.sub(pattern, replacement, text) - new_rows.append(new_row) - new_df = pd.DataFrame(new_rows, columns=df.columns) - new_df = new_df.reset_index(drop=True) - return new_df - - -def remove_empty_rows(df: pd.DataFrame, column_name: str) -> pd.DataFrame: - df = df[df[column_name].str.strip().astype(bool)] - df = df.reset_index(drop=True) - return df diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/ChatgptAi.py b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/ChatgptAi.py deleted file mode 100644 index 46605175d1ac94fcde252b53ddb81ba99f15706e..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/g4f/Provider/Providers/ChatgptAi.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -import requests, re -from ...typing import sha256, Dict, get_type_hints - -url = 'https://chatgpt.ai/gpt-4/' -model = ['gpt-4'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - chat = '' - for message in messages: - chat += '%s: %s\n' % (message['role'], message['content']) - chat += 'assistant: ' - - response = requests.get('https://chatgpt.ai/') - nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0] - - headers = { - 'authority': 'chatgpt.ai', - 'accept': '*/*', - 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control': 'no-cache', - 'origin': 'https://chatgpt.ai', - 'pragma': 'no-cache', - 'referer': 'https://chatgpt.ai/gpt-4/', - 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"Windows"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - } - data = { - '_wpnonce': nonce, - 'post_id': post_id, - 'url': 'https://chatgpt.ai/gpt-4', - 'action': 'wpaicg_chat_shortcode_message', - 'message': chat, - 'bot_id': bot_id - } - - response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php', - headers=headers, data=data) - - yield (response.json()['data']) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/angelina-wang/directional_bias_amplification/directional_bias_amplification.py b/spaces/angelina-wang/directional_bias_amplification/directional_bias_amplification.py deleted file mode 100644 index 0efcda340f6514be5ea6b352977bd674be5f5f5d..0000000000000000000000000000000000000000 --- a/spaces/angelina-wang/directional_bias_amplification/directional_bias_amplification.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Directional Bias Amplification metric.""" - -import evaluate -import datasets - -import numpy as np - -_DESCRIPTION = """ -Directional Bias Amplification is a metric that captures the amount of bias (i.e., a conditional probability) that is amplified. -This metric was introduced in the ICML 2021 paper "Directional Bias Amplification" (https://arxiv.org/abs/2102.12594). -""" - -_KWARGS_DESCRIPTION = """ -Args: - predictions (`array` of `int`): Predicted task labels. Array of size n x |T|. n is number of samples, |T| is number of task labels. All values are binary 0 or 1. - references (`array` of `int`): Ground truth task labels. Array of size n x |T|. n is number of samples, |T| is number of task labels. All values are binary 0 or 1. - attributes(`array` of `int`): Ground truth attribute labels. Array of size n x |A|. n is number of samples, |A| is number of attribute labels. All values are binary 0 or 1. - -Returns - bias_amplification(`float`): Bias amplification value. Minimum possible value is 0, and maximum possible value is 1.0. The higher the value, the more "bias" is amplified. - disagg_bias_amplification (`array` of `float`): Array of size (number of unique attribute label values) x (number of unique task label values). Each array value represents the bias amplification of that particular task given that particular attribute. -""" - - -_CITATION = """ -@inproceedings{wang2021biasamp, -author = {Angelina Wang and Olga Russakovsky}, -title = {Directional Bias Amplification}, -booktitle = {International Conference on Machine Learning (ICML)}, -year = {2021} -} -""" - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class DirectionalBiasAmplification(evaluate.EvaluationModule): - def _info(self): - return evaluate.EvaluationModuleInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Sequence(datasets.Value("int32")), - "references": datasets.Sequence(datasets.Value("int32")), - "attributes": datasets.Sequence(datasets.Value("int32")), - } - ), - reference_urls=["https://arxiv.org/abs/2102.12594"], - ) - - def _compute(self, predictions, references, attributes): - - task_preds, task_labels, attribute_labels = np.array(predictions), np.array(references), np.array(attributes) - - assert len(task_labels.shape) == 2 and len(attribute_labels.shape) == 2, 'Please read the shape of the expected inputs, which should be "num samples" by "num classification items"' - assert len(task_labels) == len(attribute_labels) == len(task_preds), 'Please make sure the number of samples in the three input arrays is the same.' 
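-        # Directional bias amplification is computed per (attribute a, task t) pair:
-        #   y_at     = sign(P(a, t) - P(a) * P(t))     direction of the ground-truth correlation
-        #   delta_at = P_pred(t | a) - P_true(t | a)   how far the predicted conditional drifts from the data's
-        # The headline score is the mean of y_at * delta_at over all pairs; the per-pair values
-        # are also returned as the disaggregated result.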
- - num_t, num_a = task_labels.shape[1], attribute_labels.shape[1] - - # only include images that have attribute(s) and task(s) associated with it - keep_indices = np.array(list(set(np.where(np.sum(task_labels, axis=1)>0)[0]).union(set(np.where(np.sum(attribute_labels, axis=1)>0)[0])))) - task_labels_ind, attribute_labels_ind = task_labels[keep_indices], attribute_labels[keep_indices] - - # y_at calculation - p_at = np.zeros((num_a, num_t)) - p_a_p_t = np.zeros((num_a, num_t)) - num = len(task_labels) - for a in range(num_a): - for t in range(num_t): - t_indices = np.where(task_labels_ind[:, t]==1)[0] - a_indices = np.where(attribute_labels_ind[:, a]==1)[0] - at_indices = set(t_indices)&set(a_indices) - p_a_p_t[a][t] = (len(t_indices)/num)*(len(a_indices)/num) - p_at[a][t] = len(at_indices)/num - y_at = np.sign(p_at - p_a_p_t) - - # delta_at calculation - t_cond_a = np.zeros((num_a, num_t)) - that_cond_a = np.zeros((num_a, num_t)) - for a in range(num_a): - for t in range(num_t): - t_cond_a[a][t] = np.mean(task_labels[:, t][np.where(attribute_labels[:, a]==1)[0]]) - that_cond_a[a][t] = np.mean(task_preds[:, t][np.where(attribute_labels[:, a]==1)[0]]) - delta_at = that_cond_a - t_cond_a - - values = y_at*delta_at - val = np.nanmean(values) - - val, values - return { - "bias_amplification": val, - "disagg_bias_amplification": values - } diff --git a/spaces/ardha27/rvc_TTS/lib/infer_pack/onnx_inference.py b/spaces/ardha27/rvc_TTS/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/ardha27/rvc_TTS/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - 
vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/speakers.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/speakers.py deleted file mode 100644 index e49695268d6a4a3ac8e5f41df8954f07b16b5566..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/utils/speakers.py +++ /dev/null @@ -1,222 +0,0 @@ -import json -import os -from typing import Any, Dict, List, Union - -import fsspec -import numpy as np -import torch -from coqpit import Coqpit - -from TTS.config import get_from_config_or_model_args_with_default -from TTS.tts.utils.managers import EmbeddingManager - - -class SpeakerManager(EmbeddingManager): - """Manage the speakers for multi-speaker 🐸TTS models. Load a datafile and parse the information - in a way that can be queried by speaker or clip. - - There are 3 different scenarios considered: - - 1. Models using speaker embedding layers. 
The datafile only maps speaker names to ids used by the embedding layer. - 2. Models using d-vectors. The datafile includes a dictionary in the following format. - - :: - - { - 'clip_name.wav':{ - 'name': 'speakerA', - 'embedding'[] - }, - ... - } - - - 3. Computing the d-vectors by the speaker encoder. It loads the speaker encoder model and - computes the d-vectors for a given clip or speaker. - - Args: - d_vectors_file_path (str, optional): Path to the metafile including x vectors. Defaults to "". - speaker_id_file_path (str, optional): Path to the metafile that maps speaker names to ids used by - TTS models. Defaults to "". - encoder_model_path (str, optional): Path to the speaker encoder model file. Defaults to "". - encoder_config_path (str, optional): Path to the spealer encoder config file. Defaults to "". - - Examples: - >>> # load audio processor and speaker encoder - >>> ap = AudioProcessor(**config.audio) - >>> manager = SpeakerManager(encoder_model_path=encoder_model_path, encoder_config_path=encoder_config_path) - >>> # load a sample audio and compute embedding - >>> waveform = ap.load_wav(sample_wav_path) - >>> mel = ap.melspectrogram(waveform) - >>> d_vector = manager.compute_embeddings(mel.T) - """ - - def __init__( - self, - data_items: List[List[Any]] = None, - d_vectors_file_path: str = "", - speaker_id_file_path: str = "", - encoder_model_path: str = "", - encoder_config_path: str = "", - use_cuda: bool = False, - ): - super().__init__( - embedding_file_path=d_vectors_file_path, - id_file_path=speaker_id_file_path, - encoder_model_path=encoder_model_path, - encoder_config_path=encoder_config_path, - use_cuda=use_cuda, - ) - - if data_items: - self.set_ids_from_data(data_items, parse_key="speaker_name") - - @property - def num_speakers(self): - return len(self.name_to_id) - - @property - def speaker_names(self): - return list(self.name_to_id.keys()) - - def get_speakers(self) -> List: - return self.name_to_id - - @staticmethod - def init_from_config(config: "Coqpit", samples: Union[List[List], List[Dict]] = None) -> "SpeakerManager": - """Initialize a speaker manager from config - - Args: - config (Coqpit): Config object. - samples (Union[List[List], List[Dict]], optional): List of data samples to parse out the speaker names. - Defaults to None. - - Returns: - SpeakerEncoder: Speaker encoder object. - """ - speaker_manager = None - if get_from_config_or_model_args_with_default(config, "use_speaker_embedding", False): - if samples: - speaker_manager = SpeakerManager(data_items=samples) - if get_from_config_or_model_args_with_default(config, "speaker_file", None): - speaker_manager = SpeakerManager( - speaker_id_file_path=get_from_config_or_model_args_with_default(config, "speaker_file", None) - ) - if get_from_config_or_model_args_with_default(config, "speakers_file", None): - speaker_manager = SpeakerManager( - speaker_id_file_path=get_from_config_or_model_args_with_default(config, "speakers_file", None) - ) - - if get_from_config_or_model_args_with_default(config, "use_d_vector_file", False): - speaker_manager = SpeakerManager() - if get_from_config_or_model_args_with_default(config, "d_vector_file", None): - speaker_manager = SpeakerManager( - d_vectors_file_path=get_from_config_or_model_args_with_default(config, "d_vector_file", None) - ) - return speaker_manager - - -def _set_file_path(path): - """Find the speakers.json under the given path or the above it. 
- Intended to band aid the different paths returned in restored and continued training.""" - path_restore = os.path.join(os.path.dirname(path), "speakers.json") - path_continue = os.path.join(path, "speakers.json") - fs = fsspec.get_mapper(path).fs - if fs.exists(path_restore): - return path_restore - if fs.exists(path_continue): - return path_continue - raise FileNotFoundError(f" [!] `speakers.json` not found in {path}") - - -def load_speaker_mapping(out_path): - """Loads speaker mapping if already present.""" - if os.path.splitext(out_path)[1] == ".json": - json_file = out_path - else: - json_file = _set_file_path(out_path) - with fsspec.open(json_file, "r") as f: - return json.load(f) - - -def save_speaker_mapping(out_path, speaker_mapping): - """Saves speaker mapping if not yet present.""" - if out_path is not None: - speakers_json_path = _set_file_path(out_path) - with fsspec.open(speakers_json_path, "w") as f: - json.dump(speaker_mapping, f, indent=4) - - -def get_speaker_manager(c: Coqpit, data: List = None, restore_path: str = None, out_path: str = None) -> SpeakerManager: - """Initiate a `SpeakerManager` instance by the provided config. - - Args: - c (Coqpit): Model configuration. - restore_path (str): Path to a previous training folder. - data (List): Data samples used in training to infer speakers from. It must be provided if speaker embedding - layers is used. Defaults to None. - out_path (str, optional): Save the generated speaker IDs to a output path. Defaults to None. - - Returns: - SpeakerManager: initialized and ready to use instance. - """ - speaker_manager = SpeakerManager() - if c.use_speaker_embedding: - if data is not None: - speaker_manager.set_ids_from_data(data, parse_key="speaker_name") - if restore_path: - speakers_file = _set_file_path(restore_path) - # restoring speaker manager from a previous run. - if c.use_d_vector_file: - # restore speaker manager with the embedding file - if not os.path.exists(speakers_file): - print("WARNING: speakers.json was not found in restore_path, trying to use CONFIG.d_vector_file") - if not os.path.exists(c.d_vector_file): - raise RuntimeError( - "You must copy the file speakers.json to restore_path, or set a valid file in CONFIG.d_vector_file" - ) - speaker_manager.load_embeddings_from_file(c.d_vector_file) - speaker_manager.load_embeddings_from_file(speakers_file) - elif not c.use_d_vector_file: # restor speaker manager with speaker ID file. - speaker_ids_from_data = speaker_manager.name_to_id - speaker_manager.load_ids_from_file(speakers_file) - assert all( - speaker in speaker_manager.name_to_id for speaker in speaker_ids_from_data - ), " [!] You cannot introduce new speakers to a pre-trained model." - elif c.use_d_vector_file and c.d_vector_file: - # new speaker manager with external speaker embeddings. - speaker_manager.load_embeddings_from_file(c.d_vector_file) - elif c.use_d_vector_file and not c.d_vector_file: - raise "use_d_vector_file is True, so you need pass a external speaker embedding file." - elif c.use_speaker_embedding and "speakers_file" in c and c.speakers_file: - # new speaker manager with speaker IDs file. 
- speaker_manager.load_ids_from_file(c.speakers_file) - - if speaker_manager.num_speakers > 0: - print( - " > Speaker manager is loaded with {} speakers: {}".format( - speaker_manager.num_speakers, ", ".join(speaker_manager.name_to_id) - ) - ) - - # save file if path is defined - if out_path: - out_file_path = os.path.join(out_path, "speakers.json") - print(f" > Saving `speakers.json` to {out_file_path}.") - if c.use_d_vector_file and c.d_vector_file: - speaker_manager.save_embeddings_to_file(out_file_path) - else: - speaker_manager.save_ids_to_file(out_file_path) - return speaker_manager - - -def get_speaker_balancer_weights(items: list): - speaker_names = np.array([item["speaker_name"] for item in items]) - unique_speaker_names = np.unique(speaker_names).tolist() - speaker_ids = [unique_speaker_names.index(l) for l in speaker_names] - speaker_count = np.array([len(np.where(speaker_names == l)[0]) for l in unique_speaker_names]) - weight_speaker = 1.0 / speaker_count - dataset_samples_weight = np.array([weight_speaker[l] for l in speaker_ids]) - # normalize - dataset_samples_weight = dataset_samples_weight / np.linalg.norm(dataset_samples_weight) - return torch.from_numpy(dataset_samples_weight).float() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Signature/PKCS1_PSS.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Signature/PKCS1_PSS.py deleted file mode 100644 index c39d3881630e647cf67b28ee86f3de47cce193a3..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Signature/PKCS1_PSS.py +++ /dev/null @@ -1,55 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -""" -Legacy module for PKCS#1 PSS signatures. 
- -:undocumented: __package__ -""" - -import types - -from Crypto.Signature import pss - - -def _pycrypto_verify(self, hash_object, signature): - try: - self._verify(hash_object, signature) - except (ValueError, TypeError): - return False - return True - - -def new(rsa_key, mgfunc=None, saltLen=None, randfunc=None): - pkcs1 = pss.new(rsa_key, mask_func=mgfunc, - salt_bytes=saltLen, rand_func=randfunc) - pkcs1._verify = pkcs1.verify - pkcs1.verify = types.MethodType(_pycrypto_verify, pkcs1) - return pkcs1 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/multilingual/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/multilingual/__init__.py deleted file mode 100644 index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/multilingual/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/hubert/hubert_model.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/hubert/hubert_model.py deleted file mode 100644 index 7fb642d89b07ca60792debab18e3454f52d8f357..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/hubert/hubert_model.py +++ /dev/null @@ -1,222 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) 
// 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - 
mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. - Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/asquirous/tv_desktop_classifier/README.md b/spaces/asquirous/tv_desktop_classifier/README.md deleted file mode 100644 index 77d6775a1a42342c5b34d451c352f3c894419770..0000000000000000000000000000000000000000 --- a/spaces/asquirous/tv_desktop_classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tv Desktop Classifier -emoji: 📊 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Top-Ten-United-States/app.py b/spaces/awacke1/Top-Ten-United-States/app.py deleted file mode 100644 index 0c9b46183e4c0f2c00e15c8a7320a5817a34df7d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Top-Ten-United-States/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import streamlit as st -import streamlit.components.v1 as components -import geopandas as gpd -import matplotlib.pyplot as plt - -# Function to generate HTML with textarea for speech synthesis -def generate_speech_textarea(text_to_speak): - documentHTML5 = ''' - - - - Read It Aloud - - - -

-        🔊 Read It Aloud
- - - - ''' - components.html(documentHTML5, width=1280, height=500) - -# Function to display the state outline -def plot_state_outline(state_code): - # Read U.S. geometries file - gdf = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) - # Filter data for the given state - gdf_state = gdf[gdf['iso_a3'] == 'USA'] - # Plot the geometry - ax = gdf_state.boundary.plot() - plt.title(f"{state_code} State Outline") - st.pyplot(plt) - -# States list and associated icons -states = ['MN', 'CA', 'WA', 'FL', 'TX', 'NY', 'NV'] -icons = ['🦆', '🌴', '🍎', '🌞', '🤠', '🗽', '🎰'] - -# Main code -st.title('U.S. States Trivia 🗺️') - -for i, (state, icon) in enumerate(zip(states, icons)): - st.markdown(f"{i + 1}. {state} {icon}") - - # Expanders for each state to outline fascinating facts - with st.expander(f"See Fascinating Facts about {state}"): - text_to_speak = "" - - if state == 'MN': - text_to_speak = "🦆 **Minnesota** \n🏞️ Known as the 'Land of 10,000 Lakes' \n🎣 Famous for its fishing \n🛶 Boundary Waters offers incredible canoeing \n🎓 Home to prestigious colleges \n❄️ Cold winters but lovely summers." - elif state == 'CA': - text_to_speak = "🌴 **California** \n🌉 Home to the Golden Gate Bridge \n🎬 Center of the American entertainment industry \n🍇 Famous for Napa Valley's wine \n🌲 Home to Redwood National Park \n🏄‍♀️ Excellent beaches and surf spots." - elif state == 'WA': - text_to_speak = "🍎 **Washington** \n☕ Known for its coffee culture \n🗻 Home to Mount Rainier \n🍏 Leading apple-producing state \n🐟 Rich in seafood, especially salmon \n🌧️ Known for its rainy weather." - elif state == 'FL': - text_to_speak = "🌞 **Florida** \n🏝️ Famous for its beaches \n🎢 Home to various amusement parks like Disney World \n🚀 Space launches from Cape Canaveral \n🐊 Known for the Everglades and alligators \n🍊 Major orange producer." - elif state == 'TX': - text_to_speak = "🤠 **Texas** \n🛢️ Known for its oil and gas industry \n🍖 Famous for its barbecue \n🎸 Rich musical heritage \n🐄 Home to many cattle ranches \n🌵 Includes part of the Chihuahuan Desert." - elif state == 'NY': - text_to_speak = "🗽 **New York** \n🏙️ Home to New York City, the largest city in the U.S. \n🍎 Known as the Big Apple \n🎭 Major hub for arts and culture \n🏞️ Adirondack Mountains offer outdoor adventures \n🍕 Famous for its style of pizza." - elif state == 'NV': - text_to_speak = "🎰 **Nevada** \n🌆 Known for Las Vegas and its casinos \n🏜️ Includes part of the Mojave Desert \n🎪 Entertainment is a major industry \n💎 Known for the Hoover Dam \n👽 Area 51 is located here." 
- - st.markdown(text_to_speak) - plot_state_outline(state) - - if st.button(f"🔊 Read {state}'s Facts Aloud"): - generate_speech_textarea(text_to_speak) diff --git a/spaces/awen666/web-ui/_next/static/chunks/c6a0d165.06a9e7e9a00c85b9.js b/spaces/awen666/web-ui/_next/static/chunks/c6a0d165.06a9e7e9a00c85b9.js deleted file mode 100644 index 4b6f0e0dae825086e31eaea629cdc539648b9a34..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/_next/static/chunks/c6a0d165.06a9e7e9a00c85b9.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[130],{9735:function(a,t,n){n.d(t,{Zg$:function(){return h}});var c=n(83270);function h(a){return(0,c.w_)({tag:"svg",attr:{viewBox:"0 0 24 24"},child:[{tag:"path",attr:{d:"M0 3.75A.75.75 0 0 1 .75 3h7.497c1.566 0 2.945.8 3.751 2.014A4.495 4.495 0 0 1 15.75 3h7.5a.75.75 0 0 1 .75.75v15.063a.752.752 0 0 1-.755.75l-7.682-.052a3 3 0 0 0-2.142.878l-.89.891a.75.75 0 0 1-1.061 0l-.902-.901a2.996 2.996 0 0 0-2.121-.879H.75a.75.75 0 0 1-.75-.75Zm12.75 15.232a4.503 4.503 0 0 1 2.823-.971l6.927.047V4.5h-6.75a3 3 0 0 0-3 3ZM11.247 7.497a3 3 0 0 0-3-2.997H1.5V18h6.947c1.018 0 2.006.346 2.803.98Z"}}]})(a)}}}]); \ No newline at end of file diff --git a/spaces/badayvedat/AudioSep/models/CLAP/training/infer_demo.py b/spaces/badayvedat/AudioSep/models/CLAP/training/infer_demo.py deleted file mode 100644 index 6a1bcc1fd8cf89ba30773d3479b2a78e8dc06d9f..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/training/infer_demo.py +++ /dev/null @@ -1,109 +0,0 @@ -import sys - -sys.path.append( - "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/src" -) - -import os -import torch -import librosa -from open_clip import create_model -from training.data import get_audio_features -from training.data import int16_to_float32, float32_to_int16 -from transformers import RobertaTokenizer - -tokenize = RobertaTokenizer.from_pretrained("roberta-base") - - -def tokenizer(text): - result = tokenize( - text, - padding="max_length", - truncation=True, - max_length=77, - return_tensors="pt", - ) - return {k: v.squeeze(0) for k, v in result.items()} - - -PRETRAINED_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/checkpoints/epoch_top_0_audioset_no_fusion.pt" -WAVE_48k_PATH = "/mnt/fast/nobackup/users/hl01486/projects/contrastive_pretraining/CLAP/assets/audio/machine.wav" - - -def infer_text(): - device = "cuda:0" if torch.cuda.is_available() else "cpu" - precision = "fp32" - amodel = "HTSAT-tiny" # or 'PANN-14' - tmodel = "roberta" # the best text encoder in our training - enable_fusion = False # False if you do not want to use the fusion model - fusion_type = "aff_2d" - pretrained = PRETRAINED_PATH - - model, model_cfg = create_model( - amodel, - tmodel, - pretrained, - precision=precision, - device=device, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - # load the text, can be a list (i.e. 
batch size) - text_data = ["I love the contrastive learning", "I love the pretrain model"] - # tokenize for roberta, if you want to tokenize for another text encoder, please refer to data.py#L43-90 - text_data = tokenizer(text_data) - - text_embed = model.get_text_embedding(text_data) - print(text_embed.size()) - - -def infer_audio(): - - device = "cuda:0" if torch.cuda.is_available() else "cpu" - precision = "fp32" - amodel = "HTSAT-tiny" # or 'PANN-14' - tmodel = "roberta" # the best text encoder in our training - enable_fusion = False # False if you do not want to use the fusion model - fusion_type = "aff_2d" - pretrained = PRETRAINED_PATH - - model, model_cfg = create_model( - amodel, - tmodel, - pretrained, - precision=precision, - device=device, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - - # load the waveform of the shape (T,), should resample to 48000 - audio_waveform, sr = librosa.load(WAVE_48k_PATH, sr=48000) - # quantize - audio_waveform = int16_to_float32(float32_to_int16(audio_waveform)) - audio_waveform = torch.from_numpy(audio_waveform).float() - audio_dict = {} - - # the 'fusion' truncate mode can be changed to 'rand_trunc' if run in unfusion mode - import ipdb - - ipdb.set_trace() - audio_dict = get_audio_features( - audio_dict, - audio_waveform, - 480000, - data_truncating="fusion", - data_filling="repeatpad", - audio_cfg=model_cfg["audio_cfg"], - ) - # can send a list to the model, to process many audio tracks in one time (i.e. batch size) - audio_embed = model.get_audio_embedding([audio_dict]) - print(audio_embed.size()) - import ipdb - - ipdb.set_trace() - - -if __name__ == "__main__": - infer_text() - infer_audio() diff --git a/spaces/barani/ControlNet/style.css b/spaces/barani/ControlNet/style.css deleted file mode 100644 index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000 --- a/spaces/barani/ControlNet/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} diff --git a/spaces/bigcode/license/style.css b/spaces/bigcode/license/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/bigcode/license/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/ui.py b/spaces/bigjoker/stable-diffusion-webui/modules/ui.py deleted file mode 100644 index badf4975128985ad55bb69d6ee028adc0a61f97c..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/ui.py +++ /dev/null @@ -1,1798 +0,0 @@ -import html -import json -import math -import mimetypes -import os -import platform -import random -import sys -import tempfile -import time -import traceback -from functools import partial, reduce -import warnings - -import gradio as gr -import gradio.routes -import gradio.utils -import numpy as np -from PIL import Image, PngImagePlugin -from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, wrap_gradio_call - -from modules import sd_hijack, sd_models, localization, script_callbacks, ui_extensions, deepbooru, sd_vae, extra_networks, 
postprocessing, ui_components, ui_common, ui_postprocessing -from modules.ui_components import FormRow, FormGroup, ToolButton, FormHTML -from modules.paths import script_path, data_path - -from modules.shared import opts, cmd_opts, restricted_opts - -import modules.codeformer_model -import modules.generation_parameters_copypaste as parameters_copypaste -import modules.gfpgan_model -import modules.hypernetworks.ui -import modules.scripts -import modules.shared as shared -import modules.styles -import modules.textual_inversion.ui -from modules import prompt_parser -from modules.images import save_image -from modules.sd_hijack import model_hijack -from modules.sd_samplers import samplers, samplers_for_img2img -from modules.textual_inversion import textual_inversion -import modules.hypernetworks.ui -from modules.generation_parameters_copypaste import image_from_url_text -import modules.extras - -warnings.filterwarnings("default" if opts.show_warnings else "ignore", category=UserWarning) - -# this is a fix for Windows users. Without it, javascript files will be served with text/html content-type and the browser will not show any UI -mimetypes.init() -mimetypes.add_type('application/javascript', '.js') - -if not cmd_opts.share and not cmd_opts.listen: - # fix gradio phoning home - gradio.utils.version_check = lambda: None - gradio.utils.get_local_ip_address = lambda: '127.0.0.1' - -if cmd_opts.ngrok is not None: - import modules.ngrok as ngrok - print('ngrok authtoken detected, trying to connect...') - ngrok.connect( - cmd_opts.ngrok, - cmd_opts.port if cmd_opts.port is not None else 7860, - cmd_opts.ngrok_region - ) - - -def gr_show(visible=True): - return {"visible": visible, "__type__": "update"} - - -sample_img2img = "assets/stable-samples/img2img/sketch-mountains-input.jpg" -sample_img2img = sample_img2img if os.path.exists(sample_img2img) else None - -css_hide_progressbar = """ -.wrap .m-12 svg { display:none!important; } -.wrap .m-12::before { content:"Loading..." } -.wrap .z-20 svg { display:none!important; } -.wrap .z-20::before { content:"Loading..." } -.wrap.cover-bg .z-20::before { content:"" } -.progress-bar { display:none!important; } -.meta-text { display:none!important; } -.meta-text-center { display:none!important; } -""" - -# Using constants for these since the variation selector isn't visible. -# Important that they exactly match script.js for tooltip to work. 
-random_symbol = '\U0001f3b2\ufe0f' # 🎲️ -reuse_symbol = '\u267b\ufe0f' # ♻️ -paste_symbol = '\u2199\ufe0f' # ↙ -refresh_symbol = '\U0001f504' # 🔄 -save_style_symbol = '\U0001f4be' # 💾 -apply_style_symbol = '\U0001f4cb' # 📋 -clear_prompt_symbol = '\U0001F5D1' # 🗑️ -extra_networks_symbol = '\U0001F3B4' # 🎴 -switch_values_symbol = '\U000021C5' # ⇅ - - -def plaintext_to_html(text): - return ui_common.plaintext_to_html(text) - - -def send_gradio_gallery_to_image(x): - if len(x) == 0: - return None - return image_from_url_text(x[0]) - -def visit(x, func, path=""): - if hasattr(x, 'children'): - for c in x.children: - visit(c, func, path) - elif x.label is not None: - func(path + "/" + str(x.label), x) - - -def add_style(name: str, prompt: str, negative_prompt: str): - if name is None: - return [gr_show() for x in range(4)] - - style = modules.styles.PromptStyle(name, prompt, negative_prompt) - shared.prompt_styles.styles[style.name] = style - # Save all loaded prompt styles: this allows us to update the storage format in the future more easily, because we - # reserialize all styles every time we save them - shared.prompt_styles.save_styles(shared.styles_filename) - - return [gr.Dropdown.update(visible=True, choices=list(shared.prompt_styles.styles)) for _ in range(2)] - - -def calc_resolution_hires(enable, width, height, hr_scale, hr_resize_x, hr_resize_y): - from modules import processing, devices - - if not enable: - return "" - - p = processing.StableDiffusionProcessingTxt2Img(width=width, height=height, enable_hr=True, hr_scale=hr_scale, hr_resize_x=hr_resize_x, hr_resize_y=hr_resize_y) - - with devices.autocast(): - p.init([""], [0], [0]) - - return f"resize: from {p.width}x{p.height} to {p.hr_resize_x or p.hr_upscale_to_x}x{p.hr_resize_y or p.hr_upscale_to_y}" - - -def apply_styles(prompt, prompt_neg, styles): - prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles) - prompt_neg = shared.prompt_styles.apply_negative_styles_to_prompt(prompt_neg, styles) - - return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=prompt_neg), gr.Dropdown.update(value=[])] - - -def process_interrogate(interrogation_function, mode, ii_input_dir, ii_output_dir, *ii_singles): - if mode in {0, 1, 3, 4}: - return [interrogation_function(ii_singles[mode]), None] - elif mode == 2: - return [interrogation_function(ii_singles[mode]["image"]), None] - elif mode == 5: - assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled" - images = shared.listfiles(ii_input_dir) - print(f"Will process {len(images)} images.") - if ii_output_dir != "": - os.makedirs(ii_output_dir, exist_ok=True) - else: - ii_output_dir = ii_input_dir - - for image in images: - img = Image.open(image) - filename = os.path.basename(image) - left, _ = os.path.splitext(filename) - print(interrogation_function(img), file=open(os.path.join(ii_output_dir, left + ".txt"), 'a')) - - return [gr.update(), None] - - -def interrogate(image): - prompt = shared.interrogator.interrogate(image.convert("RGB")) - return gr.update() if prompt is None else prompt - - -def interrogate_deepbooru(image): - prompt = deepbooru.model.tag(image) - return gr.update() if prompt is None else prompt - - -def create_seed_inputs(target_interface): - with FormRow(elem_id=target_interface + '_seed_row'): - seed = (gr.Textbox if cmd_opts.use_textbox_seed else gr.Number)(label='Seed', value=-1, elem_id=target_interface + '_seed') - seed.style(container=False) - random_seed = gr.Button(random_symbol, 
elem_id=target_interface + '_random_seed') - reuse_seed = gr.Button(reuse_symbol, elem_id=target_interface + '_reuse_seed') - - with gr.Group(elem_id=target_interface + '_subseed_show_box'): - seed_checkbox = gr.Checkbox(label='Extra', elem_id=target_interface + '_subseed_show', value=False) - - # Components to show/hide based on the 'Extra' checkbox - seed_extras = [] - - with FormRow(visible=False, elem_id=target_interface + '_subseed_row') as seed_extra_row_1: - seed_extras.append(seed_extra_row_1) - subseed = gr.Number(label='Variation seed', value=-1, elem_id=target_interface + '_subseed') - subseed.style(container=False) - random_subseed = gr.Button(random_symbol, elem_id=target_interface + '_random_subseed') - reuse_subseed = gr.Button(reuse_symbol, elem_id=target_interface + '_reuse_subseed') - subseed_strength = gr.Slider(label='Variation strength', value=0.0, minimum=0, maximum=1, step=0.01, elem_id=target_interface + '_subseed_strength') - - with FormRow(visible=False) as seed_extra_row_2: - seed_extras.append(seed_extra_row_2) - seed_resize_from_w = gr.Slider(minimum=0, maximum=2048, step=8, label="Resize seed from width", value=0, elem_id=target_interface + '_seed_resize_from_w') - seed_resize_from_h = gr.Slider(minimum=0, maximum=2048, step=8, label="Resize seed from height", value=0, elem_id=target_interface + '_seed_resize_from_h') - - random_seed.click(fn=lambda: -1, show_progress=False, inputs=[], outputs=[seed]) - random_subseed.click(fn=lambda: -1, show_progress=False, inputs=[], outputs=[subseed]) - - def change_visibility(show): - return {comp: gr_show(show) for comp in seed_extras} - - seed_checkbox.change(change_visibility, show_progress=False, inputs=[seed_checkbox], outputs=seed_extras) - - return seed, reuse_seed, subseed, reuse_subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox - - - -def connect_clear_prompt(button): - """Given clear button, prompt, and token_counter objects, setup clear prompt button click event""" - button.click( - _js="clear_prompt", - fn=None, - inputs=[], - outputs=[], - ) - - -def connect_reuse_seed(seed: gr.Number, reuse_seed: gr.Button, generation_info: gr.Textbox, dummy_component, is_subseed): - """ Connects a 'reuse (sub)seed' button's click event so that it copies last used - (sub)seed value from generation info the to the seed field. If copying subseed and subseed strength - was 0, i.e. 
no variation seed was used, it copies the normal seed value instead.""" - def copy_seed(gen_info_string: str, index): - res = -1 - - try: - gen_info = json.loads(gen_info_string) - index -= gen_info.get('index_of_first_image', 0) - - if is_subseed and gen_info.get('subseed_strength', 0) > 0: - all_subseeds = gen_info.get('all_subseeds', [-1]) - res = all_subseeds[index if 0 <= index < len(all_subseeds) else 0] - else: - all_seeds = gen_info.get('all_seeds', [-1]) - res = all_seeds[index if 0 <= index < len(all_seeds) else 0] - - except json.decoder.JSONDecodeError as e: - if gen_info_string != '': - print("Error parsing JSON generation info:", file=sys.stderr) - print(gen_info_string, file=sys.stderr) - - return [res, gr_show(False)] - - reuse_seed.click( - fn=copy_seed, - _js="(x, y) => [x, selected_gallery_index()]", - show_progress=False, - inputs=[generation_info, dummy_component], - outputs=[seed, dummy_component] - ) - - -def update_token_counter(text, steps): - try: - text, _ = extra_networks.parse_prompt(text) - - _, prompt_flat_list, _ = prompt_parser.get_multicond_prompt_list([text]) - prompt_schedules = prompt_parser.get_learned_conditioning_prompt_schedules(prompt_flat_list, steps) - - except Exception: - # a parsing error can happen here during typing, and we don't want to bother the user with - # messages related to it in console - prompt_schedules = [[[steps, text]]] - - flat_prompts = reduce(lambda list1, list2: list1+list2, prompt_schedules) - prompts = [prompt_text for step, prompt_text in flat_prompts] - token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0]) - return f"{token_count}/{max_length}" - - -def create_toprow(is_img2img): - id_part = "img2img" if is_img2img else "txt2img" - - with gr.Row(elem_id=f"{id_part}_toprow", variant="compact"): - with gr.Column(elem_id=f"{id_part}_prompt_container", scale=6): - with gr.Row(): - with gr.Column(scale=80): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", elem_id=f"{id_part}_prompt", show_label=False, lines=3, placeholder="Prompt (press Ctrl+Enter or Alt+Enter to generate)") - - with gr.Row(): - with gr.Column(scale=80): - with gr.Row(): - negative_prompt = gr.Textbox(label="Negative prompt", elem_id=f"{id_part}_neg_prompt", show_label=False, lines=2, placeholder="Negative prompt (press Ctrl+Enter or Alt+Enter to generate)") - - button_interrogate = None - button_deepbooru = None - if is_img2img: - with gr.Column(scale=1, elem_id="interrogate_col"): - button_interrogate = gr.Button('Interrogate\nCLIP', elem_id="interrogate") - button_deepbooru = gr.Button('Interrogate\nDeepBooru', elem_id="deepbooru") - - with gr.Column(scale=1, elem_id=f"{id_part}_actions_column"): - with gr.Row(elem_id=f"{id_part}_generate_box"): - interrupt = gr.Button('Interrupt', elem_id=f"{id_part}_interrupt") - skip = gr.Button('Skip', elem_id=f"{id_part}_skip") - submit = gr.Button('Generate', elem_id=f"{id_part}_generate", variant='primary') - - skip.click( - fn=lambda: shared.state.skip(), - inputs=[], - outputs=[], - ) - - interrupt.click( - fn=lambda: shared.state.interrupt(), - inputs=[], - outputs=[], - ) - - with gr.Row(elem_id=f"{id_part}_tools"): - paste = ToolButton(value=paste_symbol, elem_id="paste") - clear_prompt_button = ToolButton(value=clear_prompt_symbol, elem_id=f"{id_part}_clear_prompt") - extra_networks_button = ToolButton(value=extra_networks_symbol, elem_id=f"{id_part}_extra_networks") - prompt_style_apply = ToolButton(value=apply_style_symbol, 
elem_id=f"{id_part}_style_apply") - save_style = ToolButton(value=save_style_symbol, elem_id=f"{id_part}_style_create") - - token_counter = gr.HTML(value="", elem_id=f"{id_part}_token_counter") - token_button = gr.Button(visible=False, elem_id=f"{id_part}_token_button") - negative_token_counter = gr.HTML(value="", elem_id=f"{id_part}_negative_token_counter") - negative_token_button = gr.Button(visible=False, elem_id=f"{id_part}_negative_token_button") - - clear_prompt_button.click( - fn=lambda *x: x, - _js="confirm_clear_prompt", - inputs=[prompt, negative_prompt], - outputs=[prompt, negative_prompt], - ) - - with gr.Row(elem_id=f"{id_part}_styles_row"): - prompt_styles = gr.Dropdown(label="Styles", elem_id=f"{id_part}_styles", choices=[k for k, v in shared.prompt_styles.styles.items()], value=[], multiselect=True) - create_refresh_button(prompt_styles, shared.prompt_styles.reload, lambda: {"choices": [k for k, v in shared.prompt_styles.styles.items()]}, f"refresh_{id_part}_styles") - - return prompt, prompt_styles, negative_prompt, submit, button_interrogate, button_deepbooru, prompt_style_apply, save_style, paste, extra_networks_button, token_counter, token_button, negative_token_counter, negative_token_button - - -def setup_progressbar(*args, **kwargs): - pass - - -def apply_setting(key, value): - if value is None: - return gr.update() - - if shared.cmd_opts.freeze_settings: - return gr.update() - - # dont allow model to be swapped when model hash exists in prompt - if key == "sd_model_checkpoint" and opts.disable_weights_auto_swap: - return gr.update() - - if key == "sd_model_checkpoint": - ckpt_info = sd_models.get_closet_checkpoint_match(value) - - if ckpt_info is not None: - value = ckpt_info.title - else: - return gr.update() - - comp_args = opts.data_labels[key].component_args - if comp_args and isinstance(comp_args, dict) and comp_args.get('visible') is False: - return - - valtype = type(opts.data_labels[key].default) - oldval = opts.data.get(key, None) - opts.data[key] = valtype(value) if valtype != type(None) else value - if oldval != value and opts.data_labels[key].onchange is not None: - opts.data_labels[key].onchange() - - opts.save(shared.config_filename) - return getattr(opts, key) - - -def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_id): - def refresh(): - refresh_method() - args = refreshed_args() if callable(refreshed_args) else refreshed_args - - for k, v in args.items(): - setattr(refresh_component, k, v) - - return gr.update(**(args or {})) - - refresh_button = ToolButton(value=refresh_symbol, elem_id=elem_id) - refresh_button.click( - fn=refresh, - inputs=[], - outputs=[refresh_component] - ) - return refresh_button - - -def create_output_panel(tabname, outdir): - return ui_common.create_output_panel(tabname, outdir) - - -def create_sampler_and_steps_selection(choices, tabname): - if opts.samplers_in_dropdown: - with FormRow(elem_id=f"sampler_selection_{tabname}"): - sampler_index = gr.Dropdown(label='Sampling method', elem_id=f"{tabname}_sampling", choices=[x.name for x in choices], value=choices[0].name, type="index") - steps = gr.Slider(minimum=1, maximum=150, step=1, elem_id=f"{tabname}_steps", label="Sampling steps", value=20) - else: - with FormGroup(elem_id=f"sampler_selection_{tabname}"): - steps = gr.Slider(minimum=1, maximum=150, step=1, elem_id=f"{tabname}_steps", label="Sampling steps", value=20) - sampler_index = gr.Radio(label='Sampling method', elem_id=f"{tabname}_sampling", choices=[x.name for x in choices], 
value=choices[0].name, type="index") - - return steps, sampler_index - - -def ordered_ui_categories(): - user_order = {x.strip(): i * 2 + 1 for i, x in enumerate(shared.opts.ui_reorder.split(","))} - - for i, category in sorted(enumerate(shared.ui_reorder_categories), key=lambda x: user_order.get(x[1], x[0] * 2 + 0)): - yield category - - -def get_value_for_setting(key): - value = getattr(opts, key) - - info = opts.data_labels[key] - args = info.component_args() if callable(info.component_args) else info.component_args or {} - args = {k: v for k, v in args.items() if k not in {'precision'}} - - return gr.update(value=value, **args) - - -def create_override_settings_dropdown(tabname, row): - dropdown = gr.Dropdown([], label="Override settings", visible=False, elem_id=f"{tabname}_override_settings", multiselect=True) - - dropdown.change( - fn=lambda x: gr.Dropdown.update(visible=len(x) > 0), - inputs=[dropdown], - outputs=[dropdown], - ) - - return dropdown - - -def create_ui(): - import modules.img2img - import modules.txt2img - - reload_javascript() - - parameters_copypaste.reset() - - modules.scripts.scripts_current = modules.scripts.scripts_txt2img - modules.scripts.scripts_txt2img.initialize_scripts(is_img2img=False) - - with gr.Blocks(analytics_enabled=False) as txt2img_interface: - txt2img_prompt, txt2img_prompt_styles, txt2img_negative_prompt, submit, _, _, txt2img_prompt_style_apply, txt2img_save_style, txt2img_paste, extra_networks_button, token_counter, token_button, negative_token_counter, negative_token_button = create_toprow(is_img2img=False) - - dummy_component = gr.Label(visible=False) - txt_prompt_img = gr.File(label="", elem_id="txt2img_prompt_image", file_count="single", type="binary", visible=False) - - with FormRow(variant='compact', elem_id="txt2img_extra_networks", visible=False) as extra_networks: - from modules import ui_extra_networks - extra_networks_ui = ui_extra_networks.create_ui(extra_networks, extra_networks_button, 'txt2img') - - with gr.Row().style(equal_height=False): - with gr.Column(variant='compact', elem_id="txt2img_settings"): - for category in ordered_ui_categories(): - if category == "sampler": - steps, sampler_index = create_sampler_and_steps_selection(samplers, "txt2img") - - elif category == "dimensions": - with FormRow(): - with gr.Column(elem_id="txt2img_column_size", scale=4): - width = gr.Slider(minimum=64, maximum=2048, step=8, label="Width", value=512, elem_id="txt2img_width") - height = gr.Slider(minimum=64, maximum=2048, step=8, label="Height", value=512, elem_id="txt2img_height") - - res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="txt2img_res_switch_btn") - if opts.dimensions_and_batch_together: - with gr.Column(elem_id="txt2img_column_batch"): - batch_count = gr.Slider(minimum=1, step=1, label='Batch count', value=1, elem_id="txt2img_batch_count") - batch_size = gr.Slider(minimum=1, maximum=8, step=1, label='Batch size', value=1, elem_id="txt2img_batch_size") - - elif category == "cfg": - cfg_scale = gr.Slider(minimum=1.0, maximum=30.0, step=0.5, label='CFG Scale', value=7.0, elem_id="txt2img_cfg_scale") - - elif category == "seed": - seed, reuse_seed, subseed, reuse_subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox = create_seed_inputs('txt2img') - - elif category == "checkboxes": - with FormRow(elem_id="txt2img_checkboxes", variant="compact"): - restore_faces = gr.Checkbox(label='Restore faces', value=False, visible=len(shared.face_restorers) > 1, elem_id="txt2img_restore_faces") - tiling 
= gr.Checkbox(label='Tiling', value=False, elem_id="txt2img_tiling") - enable_hr = gr.Checkbox(label='Hires. fix', value=False, elem_id="txt2img_enable_hr") - hr_final_resolution = FormHTML(value="", elem_id="txtimg_hr_finalres", label="Upscaled resolution", interactive=False) - - elif category == "hires_fix": - with FormGroup(visible=False, elem_id="txt2img_hires_fix") as hr_options: - with FormRow(elem_id="txt2img_hires_fix_row1", variant="compact"): - hr_upscaler = gr.Dropdown(label="Upscaler", elem_id="txt2img_hr_upscaler", choices=[*shared.latent_upscale_modes, *[x.name for x in shared.sd_upscalers]], value=shared.latent_upscale_default_mode) - hr_second_pass_steps = gr.Slider(minimum=0, maximum=150, step=1, label='Hires steps', value=0, elem_id="txt2img_hires_steps") - denoising_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Denoising strength', value=0.7, elem_id="txt2img_denoising_strength") - - with FormRow(elem_id="txt2img_hires_fix_row2", variant="compact"): - hr_scale = gr.Slider(minimum=1.0, maximum=4.0, step=0.05, label="Upscale by", value=2.0, elem_id="txt2img_hr_scale") - hr_resize_x = gr.Slider(minimum=0, maximum=2048, step=8, label="Resize width to", value=0, elem_id="txt2img_hr_resize_x") - hr_resize_y = gr.Slider(minimum=0, maximum=2048, step=8, label="Resize height to", value=0, elem_id="txt2img_hr_resize_y") - - elif category == "batch": - if not opts.dimensions_and_batch_together: - with FormRow(elem_id="txt2img_column_batch"): - batch_count = gr.Slider(minimum=1, step=1, label='Batch count', value=1, elem_id="txt2img_batch_count") - batch_size = gr.Slider(minimum=1, maximum=8, step=1, label='Batch size', value=1, elem_id="txt2img_batch_size") - - elif category == "override_settings": - with FormRow(elem_id="txt2img_override_settings_row") as row: - override_settings = create_override_settings_dropdown('txt2img', row) - - elif category == "scripts": - with FormGroup(elem_id="txt2img_script_container"): - custom_inputs = modules.scripts.scripts_txt2img.setup_ui() - - hr_resolution_preview_inputs = [enable_hr, width, height, hr_scale, hr_resize_x, hr_resize_y] - for input in hr_resolution_preview_inputs: - input.change( - fn=calc_resolution_hires, - inputs=hr_resolution_preview_inputs, - outputs=[hr_final_resolution], - show_progress=False, - ) - input.change( - None, - _js="onCalcResolutionHires", - inputs=hr_resolution_preview_inputs, - outputs=[], - show_progress=False, - ) - - txt2img_gallery, generation_info, html_info, html_log = create_output_panel("txt2img", opts.outdir_txt2img_samples) - - connect_reuse_seed(seed, reuse_seed, generation_info, dummy_component, is_subseed=False) - connect_reuse_seed(subseed, reuse_subseed, generation_info, dummy_component, is_subseed=True) - - txt2img_args = dict( - fn=wrap_gradio_gpu_call(modules.txt2img.txt2img, extra_outputs=[None, '', '']), - _js="submit", - inputs=[ - dummy_component, - txt2img_prompt, - txt2img_negative_prompt, - txt2img_prompt_styles, - steps, - sampler_index, - restore_faces, - tiling, - batch_count, - batch_size, - cfg_scale, - seed, - subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox, - height, - width, - enable_hr, - denoising_strength, - hr_scale, - hr_upscaler, - hr_second_pass_steps, - hr_resize_x, - hr_resize_y, - override_settings, - ] + custom_inputs, - - outputs=[ - txt2img_gallery, - generation_info, - html_info, - html_log, - ], - show_progress=False, - ) - - txt2img_prompt.submit(**txt2img_args) - submit.click(**txt2img_args) - - 
res_switch_btn.click(lambda w, h: (h, w), inputs=[width, height], outputs=[width, height]) - - txt_prompt_img.change( - fn=modules.images.image_data, - inputs=[ - txt_prompt_img - ], - outputs=[ - txt2img_prompt, - txt_prompt_img - ] - ) - - enable_hr.change( - fn=lambda x: gr_show(x), - inputs=[enable_hr], - outputs=[hr_options], - show_progress = False, - ) - - txt2img_paste_fields = [ - (txt2img_prompt, "Prompt"), - (txt2img_negative_prompt, "Negative prompt"), - (steps, "Steps"), - (sampler_index, "Sampler"), - (restore_faces, "Face restoration"), - (cfg_scale, "CFG scale"), - (seed, "Seed"), - (width, "Size-1"), - (height, "Size-2"), - (batch_size, "Batch size"), - (subseed, "Variation seed"), - (subseed_strength, "Variation seed strength"), - (seed_resize_from_w, "Seed resize from-1"), - (seed_resize_from_h, "Seed resize from-2"), - (denoising_strength, "Denoising strength"), - (enable_hr, lambda d: "Denoising strength" in d), - (hr_options, lambda d: gr.Row.update(visible="Denoising strength" in d)), - (hr_scale, "Hires upscale"), - (hr_upscaler, "Hires upscaler"), - (hr_second_pass_steps, "Hires steps"), - (hr_resize_x, "Hires resize-1"), - (hr_resize_y, "Hires resize-2"), - *modules.scripts.scripts_txt2img.infotext_fields - ] - parameters_copypaste.add_paste_fields("txt2img", None, txt2img_paste_fields, override_settings) - parameters_copypaste.register_paste_params_button(parameters_copypaste.ParamBinding( - paste_button=txt2img_paste, tabname="txt2img", source_text_component=txt2img_prompt, source_image_component=None, - )) - - txt2img_preview_params = [ - txt2img_prompt, - txt2img_negative_prompt, - steps, - sampler_index, - cfg_scale, - seed, - width, - height, - ] - - token_button.click(fn=wrap_queued_call(update_token_counter), inputs=[txt2img_prompt, steps], outputs=[token_counter]) - negative_token_button.click(fn=wrap_queued_call(update_token_counter), inputs=[txt2img_negative_prompt, steps], outputs=[negative_token_counter]) - - ui_extra_networks.setup_ui(extra_networks_ui, txt2img_gallery) - - modules.scripts.scripts_current = modules.scripts.scripts_img2img - modules.scripts.scripts_img2img.initialize_scripts(is_img2img=True) - - with gr.Blocks(analytics_enabled=False) as img2img_interface: - img2img_prompt, img2img_prompt_styles, img2img_negative_prompt, submit, img2img_interrogate, img2img_deepbooru, img2img_prompt_style_apply, img2img_save_style, img2img_paste, extra_networks_button, token_counter, token_button, negative_token_counter, negative_token_button = create_toprow(is_img2img=True) - - img2img_prompt_img = gr.File(label="", elem_id="img2img_prompt_image", file_count="single", type="binary", visible=False) - - with FormRow(variant='compact', elem_id="img2img_extra_networks", visible=False) as extra_networks: - from modules import ui_extra_networks - extra_networks_ui_img2img = ui_extra_networks.create_ui(extra_networks, extra_networks_button, 'img2img') - - with FormRow().style(equal_height=False): - with gr.Column(variant='compact', elem_id="img2img_settings"): - copy_image_buttons = [] - copy_image_destinations = {} - - def add_copy_image_controls(tab_name, elem): - with gr.Row(variant="compact", elem_id=f"img2img_copy_to_{tab_name}"): - gr.HTML("Copy image to: ", elem_id=f"img2img_label_copy_to_{tab_name}") - - for title, name in zip(['img2img', 'sketch', 'inpaint', 'inpaint sketch'], ['img2img', 'sketch', 'inpaint', 'inpaint_sketch']): - if name == tab_name: - gr.Button(title, interactive=False) - copy_image_destinations[name] = elem - continue - - button 
= gr.Button(title) - copy_image_buttons.append((button, name, elem)) - - with gr.Tabs(elem_id="mode_img2img"): - with gr.TabItem('img2img', id='img2img', elem_id="img2img_img2img_tab") as tab_img2img: - init_img = gr.Image(label="Image for img2img", elem_id="img2img_image", show_label=False, source="upload", interactive=True, type="pil", tool="editor", image_mode="RGBA").style(height=480) - add_copy_image_controls('img2img', init_img) - - with gr.TabItem('Sketch', id='img2img_sketch', elem_id="img2img_img2img_sketch_tab") as tab_sketch: - sketch = gr.Image(label="Image for img2img", elem_id="img2img_sketch", show_label=False, source="upload", interactive=True, type="pil", tool="color-sketch", image_mode="RGBA").style(height=480) - add_copy_image_controls('sketch', sketch) - - with gr.TabItem('Inpaint', id='inpaint', elem_id="img2img_inpaint_tab") as tab_inpaint: - init_img_with_mask = gr.Image(label="Image for inpainting with mask", show_label=False, elem_id="img2maskimg", source="upload", interactive=True, type="pil", tool="sketch", image_mode="RGBA").style(height=480) - add_copy_image_controls('inpaint', init_img_with_mask) - - with gr.TabItem('Inpaint sketch', id='inpaint_sketch', elem_id="img2img_inpaint_sketch_tab") as tab_inpaint_color: - inpaint_color_sketch = gr.Image(label="Color sketch inpainting", show_label=False, elem_id="inpaint_sketch", source="upload", interactive=True, type="pil", tool="color-sketch", image_mode="RGBA").style(height=480) - inpaint_color_sketch_orig = gr.State(None) - add_copy_image_controls('inpaint_sketch', inpaint_color_sketch) - - def update_orig(image, state): - if image is not None: - same_size = state is not None and state.size == image.size - has_exact_match = np.any(np.all(np.array(image) == np.array(state), axis=-1)) - edited = same_size and has_exact_match - return image if not edited or state is None else state - - inpaint_color_sketch.change(update_orig, [inpaint_color_sketch, inpaint_color_sketch_orig], inpaint_color_sketch_orig) - - with gr.TabItem('Inpaint upload', id='inpaint_upload', elem_id="img2img_inpaint_upload_tab") as tab_inpaint_upload: - init_img_inpaint = gr.Image(label="Image for img2img", show_label=False, source="upload", interactive=True, type="pil", elem_id="img_inpaint_base") - init_mask_inpaint = gr.Image(label="Mask", source="upload", interactive=True, type="pil", elem_id="img_inpaint_mask") - - with gr.TabItem('Batch', id='batch', elem_id="img2img_batch_tab") as tab_batch: - hidden = '
<br>Disabled when launched with --hide-ui-dir-config.' if shared.cmd_opts.hide_ui_dir_config else '' -                        gr.HTML( -                            f"Process images in a directory on the same machine where the server is running." + -                            f"<br>Use an empty output directory to save pictures normally instead of writing to the output directory." + -                            f"<br>Add inpaint batch mask directory to enable inpaint batch processing." -                            f"{hidden}
" - ) - img2img_batch_input_dir = gr.Textbox(label="Input directory", **shared.hide_dirs, elem_id="img2img_batch_input_dir") - img2img_batch_output_dir = gr.Textbox(label="Output directory", **shared.hide_dirs, elem_id="img2img_batch_output_dir") - img2img_batch_inpaint_mask_dir = gr.Textbox(label="Inpaint batch mask directory (required for inpaint batch processing only)", **shared.hide_dirs, elem_id="img2img_batch_inpaint_mask_dir") - - def copy_image(img): - if isinstance(img, dict) and 'image' in img: - return img['image'] - - return img - - for button, name, elem in copy_image_buttons: - button.click( - fn=copy_image, - inputs=[elem], - outputs=[copy_image_destinations[name]], - ) - button.click( - fn=lambda: None, - _js="switch_to_"+name.replace(" ", "_"), - inputs=[], - outputs=[], - ) - - with FormRow(): - resize_mode = gr.Radio(label="Resize mode", elem_id="resize_mode", choices=["Just resize", "Crop and resize", "Resize and fill", "Just resize (latent upscale)"], type="index", value="Just resize") - - for category in ordered_ui_categories(): - if category == "sampler": - steps, sampler_index = create_sampler_and_steps_selection(samplers_for_img2img, "img2img") - - elif category == "dimensions": - with FormRow(): - with gr.Column(elem_id="img2img_column_size", scale=4): - width = gr.Slider(minimum=64, maximum=2048, step=8, label="Width", value=512, elem_id="img2img_width") - height = gr.Slider(minimum=64, maximum=2048, step=8, label="Height", value=512, elem_id="img2img_height") - - res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="img2img_res_switch_btn") - if opts.dimensions_and_batch_together: - with gr.Column(elem_id="img2img_column_batch"): - batch_count = gr.Slider(minimum=1, step=1, label='Batch count', value=1, elem_id="img2img_batch_count") - batch_size = gr.Slider(minimum=1, maximum=8, step=1, label='Batch size', value=1, elem_id="img2img_batch_size") - - elif category == "cfg": - with FormGroup(): - with FormRow(): - cfg_scale = gr.Slider(minimum=1.0, maximum=30.0, step=0.5, label='CFG Scale', value=7.0, elem_id="img2img_cfg_scale") - image_cfg_scale = gr.Slider(minimum=0, maximum=3.0, step=0.05, label='Image CFG Scale', value=1.5, elem_id="img2img_image_cfg_scale", visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit") - denoising_strength = gr.Slider(minimum=0.0, maximum=1.0, step=0.01, label='Denoising strength', value=0.75, elem_id="img2img_denoising_strength") - - elif category == "seed": - seed, reuse_seed, subseed, reuse_subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox = create_seed_inputs('img2img') - - elif category == "checkboxes": - with FormRow(elem_id="img2img_checkboxes", variant="compact"): - restore_faces = gr.Checkbox(label='Restore faces', value=False, visible=len(shared.face_restorers) > 1, elem_id="img2img_restore_faces") - tiling = gr.Checkbox(label='Tiling', value=False, elem_id="img2img_tiling") - - elif category == "batch": - if not opts.dimensions_and_batch_together: - with FormRow(elem_id="img2img_column_batch"): - batch_count = gr.Slider(minimum=1, step=1, label='Batch count', value=1, elem_id="img2img_batch_count") - batch_size = gr.Slider(minimum=1, maximum=8, step=1, label='Batch size', value=1, elem_id="img2img_batch_size") - - elif category == "override_settings": - with FormRow(elem_id="img2img_override_settings_row") as row: - override_settings = create_override_settings_dropdown('img2img', row) - - elif category == "scripts": - with 
FormGroup(elem_id="img2img_script_container"): - custom_inputs = modules.scripts.scripts_img2img.setup_ui() - - elif category == "inpaint": - with FormGroup(elem_id="inpaint_controls", visible=False) as inpaint_controls: - with FormRow(): - mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=4, elem_id="img2img_mask_blur") - mask_alpha = gr.Slider(label="Mask transparency", visible=False, elem_id="img2img_mask_alpha") - - with FormRow(): - inpainting_mask_invert = gr.Radio(label='Mask mode', choices=['Inpaint masked', 'Inpaint not masked'], value='Inpaint masked', type="index", elem_id="img2img_mask_mode") - - with FormRow(): - inpainting_fill = gr.Radio(label='Masked content', choices=['fill', 'original', 'latent noise', 'latent nothing'], value='original', type="index", elem_id="img2img_inpainting_fill") - - with FormRow(): - with gr.Column(): - inpaint_full_res = gr.Radio(label="Inpaint area", choices=["Whole picture", "Only masked"], type="index", value="Whole picture", elem_id="img2img_inpaint_full_res") - - with gr.Column(scale=4): - inpaint_full_res_padding = gr.Slider(label='Only masked padding, pixels', minimum=0, maximum=256, step=4, value=32, elem_id="img2img_inpaint_full_res_padding") - - def select_img2img_tab(tab): - return gr.update(visible=tab in [2, 3, 4]), gr.update(visible=tab == 3), - - for i, elem in enumerate([tab_img2img, tab_sketch, tab_inpaint, tab_inpaint_color, tab_inpaint_upload, tab_batch]): - elem.select( - fn=lambda tab=i: select_img2img_tab(tab), - inputs=[], - outputs=[inpaint_controls, mask_alpha], - ) - - img2img_gallery, generation_info, html_info, html_log = create_output_panel("img2img", opts.outdir_img2img_samples) - - connect_reuse_seed(seed, reuse_seed, generation_info, dummy_component, is_subseed=False) - connect_reuse_seed(subseed, reuse_subseed, generation_info, dummy_component, is_subseed=True) - - img2img_prompt_img.change( - fn=modules.images.image_data, - inputs=[ - img2img_prompt_img - ], - outputs=[ - img2img_prompt, - img2img_prompt_img - ] - ) - - img2img_args = dict( - fn=wrap_gradio_gpu_call(modules.img2img.img2img, extra_outputs=[None, '', '']), - _js="submit_img2img", - inputs=[ - dummy_component, - dummy_component, - img2img_prompt, - img2img_negative_prompt, - img2img_prompt_styles, - init_img, - sketch, - init_img_with_mask, - inpaint_color_sketch, - inpaint_color_sketch_orig, - init_img_inpaint, - init_mask_inpaint, - steps, - sampler_index, - mask_blur, - mask_alpha, - inpainting_fill, - restore_faces, - tiling, - batch_count, - batch_size, - cfg_scale, - image_cfg_scale, - denoising_strength, - seed, - subseed, subseed_strength, seed_resize_from_h, seed_resize_from_w, seed_checkbox, - height, - width, - resize_mode, - inpaint_full_res, - inpaint_full_res_padding, - inpainting_mask_invert, - img2img_batch_input_dir, - img2img_batch_output_dir, - img2img_batch_inpaint_mask_dir, - override_settings, - ] + custom_inputs, - outputs=[ - img2img_gallery, - generation_info, - html_info, - html_log, - ], - show_progress=False, - ) - - interrogate_args = dict( - _js="get_img2img_tab_index", - inputs=[ - dummy_component, - img2img_batch_input_dir, - img2img_batch_output_dir, - init_img, - sketch, - init_img_with_mask, - inpaint_color_sketch, - init_img_inpaint, - ], - outputs=[img2img_prompt, dummy_component], - ) - - img2img_prompt.submit(**img2img_args) - submit.click(**img2img_args) - res_switch_btn.click(lambda w, h: (h, w), inputs=[width, height], outputs=[width, height]) - - img2img_interrogate.click( - 
fn=lambda *args: process_interrogate(interrogate, *args), - **interrogate_args, - ) - - img2img_deepbooru.click( - fn=lambda *args: process_interrogate(interrogate_deepbooru, *args), - **interrogate_args, - ) - - prompts = [(txt2img_prompt, txt2img_negative_prompt), (img2img_prompt, img2img_negative_prompt)] - style_dropdowns = [txt2img_prompt_styles, img2img_prompt_styles] - style_js_funcs = ["update_txt2img_tokens", "update_img2img_tokens"] - - for button, (prompt, negative_prompt) in zip([txt2img_save_style, img2img_save_style], prompts): - button.click( - fn=add_style, - _js="ask_for_style_name", - # Have to pass empty dummy component here, because the JavaScript and Python function have to accept - # the same number of parameters, but we only know the style-name after the JavaScript prompt - inputs=[dummy_component, prompt, negative_prompt], - outputs=[txt2img_prompt_styles, img2img_prompt_styles], - ) - - for button, (prompt, negative_prompt), styles, js_func in zip([txt2img_prompt_style_apply, img2img_prompt_style_apply], prompts, style_dropdowns, style_js_funcs): - button.click( - fn=apply_styles, - _js=js_func, - inputs=[prompt, negative_prompt, styles], - outputs=[prompt, negative_prompt, styles], - ) - - token_button.click(fn=update_token_counter, inputs=[img2img_prompt, steps], outputs=[token_counter]) - negative_token_button.click(fn=wrap_queued_call(update_token_counter), inputs=[txt2img_negative_prompt, steps], outputs=[negative_token_counter]) - - ui_extra_networks.setup_ui(extra_networks_ui_img2img, img2img_gallery) - - img2img_paste_fields = [ - (img2img_prompt, "Prompt"), - (img2img_negative_prompt, "Negative prompt"), - (steps, "Steps"), - (sampler_index, "Sampler"), - (restore_faces, "Face restoration"), - (cfg_scale, "CFG scale"), - (image_cfg_scale, "Image CFG scale"), - (seed, "Seed"), - (width, "Size-1"), - (height, "Size-2"), - (batch_size, "Batch size"), - (subseed, "Variation seed"), - (subseed_strength, "Variation seed strength"), - (seed_resize_from_w, "Seed resize from-1"), - (seed_resize_from_h, "Seed resize from-2"), - (denoising_strength, "Denoising strength"), - (mask_blur, "Mask blur"), - *modules.scripts.scripts_img2img.infotext_fields - ] - parameters_copypaste.add_paste_fields("img2img", init_img, img2img_paste_fields, override_settings) - parameters_copypaste.add_paste_fields("inpaint", init_img_with_mask, img2img_paste_fields, override_settings) - parameters_copypaste.register_paste_params_button(parameters_copypaste.ParamBinding( - paste_button=img2img_paste, tabname="img2img", source_text_component=img2img_prompt, source_image_component=None, - )) - - modules.scripts.scripts_current = None - - with gr.Blocks(analytics_enabled=False) as extras_interface: - ui_postprocessing.create_ui() - - with gr.Blocks(analytics_enabled=False) as pnginfo_interface: - with gr.Row().style(equal_height=False): - with gr.Column(variant='panel'): - image = gr.Image(elem_id="pnginfo_image", label="Source", source="upload", interactive=True, type="pil") - - with gr.Column(variant='panel'): - html = gr.HTML() - generation_info = gr.Textbox(visible=False, elem_id="pnginfo_generation_info") - html2 = gr.HTML() - with gr.Row(): - buttons = parameters_copypaste.create_buttons(["txt2img", "img2img", "inpaint", "extras"]) - - for tabname, button in buttons.items(): - parameters_copypaste.register_paste_params_button(parameters_copypaste.ParamBinding( - paste_button=button, tabname=tabname, source_text_component=generation_info, source_image_component=image, - )) - - 
image.change( -        fn=wrap_gradio_call(modules.extras.run_pnginfo), -        inputs=[image], -        outputs=[html, generation_info, html2], -    ) - -    def update_interp_description(value): -        interp_description_css = "<p>{}</p>
" - interp_descriptions = { - "No interpolation": interp_description_css.format("No interpolation will be used. Requires one model; A. Allows for format conversion and VAE baking."), - "Weighted sum": interp_description_css.format("A weighted sum will be used for interpolation. Requires two models; A and B. The result is calculated as A * (1 - M) + B * M"), - "Add difference": interp_description_css.format("The difference between the last two models will be added to the first. Requires three models; A, B and C. The result is calculated as A + (B - C) * M") - } - return interp_descriptions[value] - - with gr.Blocks(analytics_enabled=False) as modelmerger_interface: - with gr.Row().style(equal_height=False): - with gr.Column(variant='compact'): - interp_description = gr.HTML(value=update_interp_description("Weighted sum"), elem_id="modelmerger_interp_description") - - with FormRow(elem_id="modelmerger_models"): - primary_model_name = gr.Dropdown(modules.sd_models.checkpoint_tiles(), elem_id="modelmerger_primary_model_name", label="Primary model (A)") - create_refresh_button(primary_model_name, modules.sd_models.list_models, lambda: {"choices": modules.sd_models.checkpoint_tiles()}, "refresh_checkpoint_A") - - secondary_model_name = gr.Dropdown(modules.sd_models.checkpoint_tiles(), elem_id="modelmerger_secondary_model_name", label="Secondary model (B)") - create_refresh_button(secondary_model_name, modules.sd_models.list_models, lambda: {"choices": modules.sd_models.checkpoint_tiles()}, "refresh_checkpoint_B") - - tertiary_model_name = gr.Dropdown(modules.sd_models.checkpoint_tiles(), elem_id="modelmerger_tertiary_model_name", label="Tertiary model (C)") - create_refresh_button(tertiary_model_name, modules.sd_models.list_models, lambda: {"choices": modules.sd_models.checkpoint_tiles()}, "refresh_checkpoint_C") - - custom_name = gr.Textbox(label="Custom Name (Optional)", elem_id="modelmerger_custom_name") - interp_amount = gr.Slider(minimum=0.0, maximum=1.0, step=0.05, label='Multiplier (M) - set to 0 to get model A', value=0.3, elem_id="modelmerger_interp_amount") - interp_method = gr.Radio(choices=["No interpolation", "Weighted sum", "Add difference"], value="Weighted sum", label="Interpolation Method", elem_id="modelmerger_interp_method") - interp_method.change(fn=update_interp_description, inputs=[interp_method], outputs=[interp_description]) - - with FormRow(): - checkpoint_format = gr.Radio(choices=["ckpt", "safetensors"], value="ckpt", label="Checkpoint format", elem_id="modelmerger_checkpoint_format") - save_as_half = gr.Checkbox(value=False, label="Save as float16", elem_id="modelmerger_save_as_half") - - with FormRow(): - with gr.Column(): - config_source = gr.Radio(choices=["A, B or C", "B", "C", "Don't"], value="A, B or C", label="Copy config from", type="index", elem_id="modelmerger_config_method") - - with gr.Column(): - with FormRow(): - bake_in_vae = gr.Dropdown(choices=["None"] + list(sd_vae.vae_dict), value="None", label="Bake in VAE", elem_id="modelmerger_bake_in_vae") - create_refresh_button(bake_in_vae, sd_vae.refresh_vae_list, lambda: {"choices": ["None"] + list(sd_vae.vae_dict)}, "modelmerger_refresh_bake_in_vae") - - with FormRow(): - discard_weights = gr.Textbox(value="", label="Discard weights with matching name", elem_id="modelmerger_discard_weights") - - with gr.Row(): - modelmerger_merge = gr.Button(elem_id="modelmerger_merge", value="Merge", variant='primary') - - with gr.Column(variant='compact', elem_id="modelmerger_results_container"): - with 
gr.Group(elem_id="modelmerger_results_panel"): -                    modelmerger_result = gr.HTML(elem_id="modelmerger_result", show_label=False) - -    with gr.Blocks(analytics_enabled=False) as train_interface: -        with gr.Row().style(equal_height=False): -            gr.HTML(value="See wiki for detailed explanation.
") - - with gr.Row(variant="compact").style(equal_height=False): - with gr.Tabs(elem_id="train_tabs"): - - with gr.Tab(label="Create embedding"): - new_embedding_name = gr.Textbox(label="Name", elem_id="train_new_embedding_name") - initialization_text = gr.Textbox(label="Initialization text", value="*", elem_id="train_initialization_text") - nvpt = gr.Slider(label="Number of vectors per token", minimum=1, maximum=75, step=1, value=1, elem_id="train_nvpt") - overwrite_old_embedding = gr.Checkbox(value=False, label="Overwrite Old Embedding", elem_id="train_overwrite_old_embedding") - - with gr.Row(): - with gr.Column(scale=3): - gr.HTML(value="") - - with gr.Column(): - create_embedding = gr.Button(value="Create embedding", variant='primary', elem_id="train_create_embedding") - - with gr.Tab(label="Create hypernetwork"): - new_hypernetwork_name = gr.Textbox(label="Name", elem_id="train_new_hypernetwork_name") - new_hypernetwork_sizes = gr.CheckboxGroup(label="Modules", value=["768", "320", "640", "1280"], choices=["768", "1024", "320", "640", "1280"], elem_id="train_new_hypernetwork_sizes") - new_hypernetwork_layer_structure = gr.Textbox("1, 2, 1", label="Enter hypernetwork layer structure", placeholder="1st and last digit must be 1. ex:'1, 2, 1'", elem_id="train_new_hypernetwork_layer_structure") - new_hypernetwork_activation_func = gr.Dropdown(value="linear", label="Select activation function of hypernetwork. Recommended : Swish / Linear(none)", choices=modules.hypernetworks.ui.keys, elem_id="train_new_hypernetwork_activation_func") - new_hypernetwork_initialization_option = gr.Dropdown(value = "Normal", label="Select Layer weights initialization. Recommended: Kaiming for relu-like, Xavier for sigmoid-like, Normal otherwise", choices=["Normal", "KaimingUniform", "KaimingNormal", "XavierUniform", "XavierNormal"], elem_id="train_new_hypernetwork_initialization_option") - new_hypernetwork_add_layer_norm = gr.Checkbox(label="Add layer normalization", elem_id="train_new_hypernetwork_add_layer_norm") - new_hypernetwork_use_dropout = gr.Checkbox(label="Use dropout", elem_id="train_new_hypernetwork_use_dropout") - new_hypernetwork_dropout_structure = gr.Textbox("0, 0, 0", label="Enter hypernetwork Dropout structure (or empty). Recommended : 0~0.35 incrementing sequence: 0, 0.05, 0.15", placeholder="1st and last digit must be 0 and values should be between 0 and 1. 
ex:'0, 0.01, 0'") - overwrite_old_hypernetwork = gr.Checkbox(value=False, label="Overwrite Old Hypernetwork", elem_id="train_overwrite_old_hypernetwork") - - with gr.Row(): - with gr.Column(scale=3): - gr.HTML(value="") - - with gr.Column(): - create_hypernetwork = gr.Button(value="Create hypernetwork", variant='primary', elem_id="train_create_hypernetwork") - - with gr.Tab(label="Preprocess images"): - process_src = gr.Textbox(label='Source directory', elem_id="train_process_src") - process_dst = gr.Textbox(label='Destination directory', elem_id="train_process_dst") - process_width = gr.Slider(minimum=64, maximum=2048, step=8, label="Width", value=512, elem_id="train_process_width") - process_height = gr.Slider(minimum=64, maximum=2048, step=8, label="Height", value=512, elem_id="train_process_height") - preprocess_txt_action = gr.Dropdown(label='Existing Caption txt Action', value="ignore", choices=["ignore", "copy", "prepend", "append"], elem_id="train_preprocess_txt_action") - - with gr.Row(): - process_flip = gr.Checkbox(label='Create flipped copies', elem_id="train_process_flip") - process_split = gr.Checkbox(label='Split oversized images', elem_id="train_process_split") - process_focal_crop = gr.Checkbox(label='Auto focal point crop', elem_id="train_process_focal_crop") - process_multicrop = gr.Checkbox(label='Auto-sized crop', elem_id="train_process_multicrop") - process_caption = gr.Checkbox(label='Use BLIP for caption', elem_id="train_process_caption") - process_caption_deepbooru = gr.Checkbox(label='Use deepbooru for caption', visible=True, elem_id="train_process_caption_deepbooru") - - with gr.Row(visible=False) as process_split_extra_row: - process_split_threshold = gr.Slider(label='Split image threshold', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id="train_process_split_threshold") - process_overlap_ratio = gr.Slider(label='Split image overlap ratio', value=0.2, minimum=0.0, maximum=0.9, step=0.05, elem_id="train_process_overlap_ratio") - - with gr.Row(visible=False) as process_focal_crop_row: - process_focal_crop_face_weight = gr.Slider(label='Focal point face weight', value=0.9, minimum=0.0, maximum=1.0, step=0.05, elem_id="train_process_focal_crop_face_weight") - process_focal_crop_entropy_weight = gr.Slider(label='Focal point entropy weight', value=0.15, minimum=0.0, maximum=1.0, step=0.05, elem_id="train_process_focal_crop_entropy_weight") - process_focal_crop_edges_weight = gr.Slider(label='Focal point edges weight', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id="train_process_focal_crop_edges_weight") - process_focal_crop_debug = gr.Checkbox(label='Create debug image', elem_id="train_process_focal_crop_debug") - - with gr.Column(visible=False) as process_multicrop_col: - gr.Markdown('Each image is center-cropped with an automatically chosen width and height.') - with gr.Row(): - process_multicrop_mindim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension lower bound", value=384, elem_id="train_process_multicrop_mindim") - process_multicrop_maxdim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension upper bound", value=768, elem_id="train_process_multicrop_maxdim") - with gr.Row(): - process_multicrop_minarea = gr.Slider(minimum=64*64, maximum=2048*2048, step=1, label="Area lower bound", value=64*64, elem_id="train_process_multicrop_minarea") - process_multicrop_maxarea = gr.Slider(minimum=64*64, maximum=2048*2048, step=1, label="Area upper bound", value=640*640, elem_id="train_process_multicrop_maxarea") - with gr.Row(): - 
process_multicrop_objective = gr.Radio(["Maximize area", "Minimize error"], value="Maximize area", label="Resizing objective", elem_id="train_process_multicrop_objective") - process_multicrop_threshold = gr.Slider(minimum=0, maximum=1, step=0.01, label="Error threshold", value=0.1, elem_id="train_process_multicrop_threshold") - - with gr.Row(): - with gr.Column(scale=3): - gr.HTML(value="") - - with gr.Column(): - with gr.Row(): - interrupt_preprocessing = gr.Button("Interrupt", elem_id="train_interrupt_preprocessing") - run_preprocess = gr.Button(value="Preprocess", variant='primary', elem_id="train_run_preprocess") - - process_split.change( - fn=lambda show: gr_show(show), - inputs=[process_split], - outputs=[process_split_extra_row], - ) - - process_focal_crop.change( - fn=lambda show: gr_show(show), - inputs=[process_focal_crop], - outputs=[process_focal_crop_row], - ) - - process_multicrop.change( - fn=lambda show: gr_show(show), - inputs=[process_multicrop], - outputs=[process_multicrop_col], - ) - - def get_textual_inversion_template_names(): - return sorted([x for x in textual_inversion.textual_inversion_templates]) - - with gr.Tab(label="Train"): - gr.HTML(value="

Train an embedding or Hypernetwork; you must specify a directory with a set of 1:1 ratio images [wiki]
") - with FormRow(): - train_embedding_name = gr.Dropdown(label='Embedding', elem_id="train_embedding", choices=sorted(sd_hijack.model_hijack.embedding_db.word_embeddings.keys())) - create_refresh_button(train_embedding_name, sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings, lambda: {"choices": sorted(sd_hijack.model_hijack.embedding_db.word_embeddings.keys())}, "refresh_train_embedding_name") - - train_hypernetwork_name = gr.Dropdown(label='Hypernetwork', elem_id="train_hypernetwork", choices=[x for x in shared.hypernetworks.keys()]) - create_refresh_button(train_hypernetwork_name, shared.reload_hypernetworks, lambda: {"choices": sorted([x for x in shared.hypernetworks.keys()])}, "refresh_train_hypernetwork_name") - - with FormRow(): - embedding_learn_rate = gr.Textbox(label='Embedding Learning rate', placeholder="Embedding Learning rate", value="0.005", elem_id="train_embedding_learn_rate") - hypernetwork_learn_rate = gr.Textbox(label='Hypernetwork Learning rate', placeholder="Hypernetwork Learning rate", value="0.00001", elem_id="train_hypernetwork_learn_rate") - - with FormRow(): - clip_grad_mode = gr.Dropdown(value="disabled", label="Gradient Clipping", choices=["disabled", "value", "norm"]) - clip_grad_value = gr.Textbox(placeholder="Gradient clip value", value="0.1", show_label=False) - - with FormRow(): - batch_size = gr.Number(label='Batch size', value=1, precision=0, elem_id="train_batch_size") - gradient_step = gr.Number(label='Gradient accumulation steps', value=1, precision=0, elem_id="train_gradient_step") - - dataset_directory = gr.Textbox(label='Dataset directory', placeholder="Path to directory with input images", elem_id="train_dataset_directory") - log_directory = gr.Textbox(label='Log directory', placeholder="Path to directory where to write outputs", value="textual_inversion", elem_id="train_log_directory") - - with FormRow(): - template_file = gr.Dropdown(label='Prompt template', value="style_filewords.txt", elem_id="train_template_file", choices=get_textual_inversion_template_names()) - create_refresh_button(template_file, textual_inversion.list_textual_inversion_templates, lambda: {"choices": get_textual_inversion_template_names()}, "refrsh_train_template_file") - - training_width = gr.Slider(minimum=64, maximum=2048, step=8, label="Width", value=512, elem_id="train_training_width") - training_height = gr.Slider(minimum=64, maximum=2048, step=8, label="Height", value=512, elem_id="train_training_height") - varsize = gr.Checkbox(label="Do not resize images", value=False, elem_id="train_varsize") - steps = gr.Number(label='Max steps', value=100000, precision=0, elem_id="train_steps") - - with FormRow(): - create_image_every = gr.Number(label='Save an image to log directory every N steps, 0 to disable', value=500, precision=0, elem_id="train_create_image_every") - save_embedding_every = gr.Number(label='Save a copy of embedding to log directory every N steps, 0 to disable', value=500, precision=0, elem_id="train_save_embedding_every") - - use_weight = gr.Checkbox(label="Use PNG alpha channel as loss weight", value=False, elem_id="use_weight") - - save_image_with_stored_embedding = gr.Checkbox(label='Save images with embedding in PNG chunks', value=True, elem_id="train_save_image_with_stored_embedding") - preview_from_txt2img = gr.Checkbox(label='Read parameters (prompt, etc...) 
from txt2img tab when making previews', value=False, elem_id="train_preview_from_txt2img") - - shuffle_tags = gr.Checkbox(label="Shuffle tags by ',' when creating prompts.", value=False, elem_id="train_shuffle_tags") - tag_drop_out = gr.Slider(minimum=0, maximum=1, step=0.1, label="Drop out tags when creating prompts.", value=0, elem_id="train_tag_drop_out") - - latent_sampling_method = gr.Radio(label='Choose latent sampling method', value="once", choices=['once', 'deterministic', 'random'], elem_id="train_latent_sampling_method") - - with gr.Row(): - train_embedding = gr.Button(value="Train Embedding", variant='primary', elem_id="train_train_embedding") - interrupt_training = gr.Button(value="Interrupt", elem_id="train_interrupt_training") - train_hypernetwork = gr.Button(value="Train Hypernetwork", variant='primary', elem_id="train_train_hypernetwork") - - params = script_callbacks.UiTrainTabParams(txt2img_preview_params) - - script_callbacks.ui_train_tabs_callback(params) - - with gr.Column(elem_id='ti_gallery_container'): - ti_output = gr.Text(elem_id="ti_output", value="", show_label=False) - ti_gallery = gr.Gallery(label='Output', show_label=False, elem_id='ti_gallery').style(grid=4) - ti_progress = gr.HTML(elem_id="ti_progress", value="") - ti_outcome = gr.HTML(elem_id="ti_error", value="") - - create_embedding.click( - fn=modules.textual_inversion.ui.create_embedding, - inputs=[ - new_embedding_name, - initialization_text, - nvpt, - overwrite_old_embedding, - ], - outputs=[ - train_embedding_name, - ti_output, - ti_outcome, - ] - ) - - create_hypernetwork.click( - fn=modules.hypernetworks.ui.create_hypernetwork, - inputs=[ - new_hypernetwork_name, - new_hypernetwork_sizes, - overwrite_old_hypernetwork, - new_hypernetwork_layer_structure, - new_hypernetwork_activation_func, - new_hypernetwork_initialization_option, - new_hypernetwork_add_layer_norm, - new_hypernetwork_use_dropout, - new_hypernetwork_dropout_structure - ], - outputs=[ - train_hypernetwork_name, - ti_output, - ti_outcome, - ] - ) - - run_preprocess.click( - fn=wrap_gradio_gpu_call(modules.textual_inversion.ui.preprocess, extra_outputs=[gr.update()]), - _js="start_training_textual_inversion", - inputs=[ - dummy_component, - process_src, - process_dst, - process_width, - process_height, - preprocess_txt_action, - process_flip, - process_split, - process_caption, - process_caption_deepbooru, - process_split_threshold, - process_overlap_ratio, - process_focal_crop, - process_focal_crop_face_weight, - process_focal_crop_entropy_weight, - process_focal_crop_edges_weight, - process_focal_crop_debug, - process_multicrop, - process_multicrop_mindim, - process_multicrop_maxdim, - process_multicrop_minarea, - process_multicrop_maxarea, - process_multicrop_objective, - process_multicrop_threshold, - ], - outputs=[ - ti_output, - ti_outcome, - ], - ) - - train_embedding.click( - fn=wrap_gradio_gpu_call(modules.textual_inversion.ui.train_embedding, extra_outputs=[gr.update()]), - _js="start_training_textual_inversion", - inputs=[ - dummy_component, - train_embedding_name, - embedding_learn_rate, - batch_size, - gradient_step, - dataset_directory, - log_directory, - training_width, - training_height, - varsize, - steps, - clip_grad_mode, - clip_grad_value, - shuffle_tags, - tag_drop_out, - latent_sampling_method, - use_weight, - create_image_every, - save_embedding_every, - template_file, - save_image_with_stored_embedding, - preview_from_txt2img, - *txt2img_preview_params, - ], - outputs=[ - ti_output, - ti_outcome, - ] - ) - - 
train_hypernetwork.click( - fn=wrap_gradio_gpu_call(modules.hypernetworks.ui.train_hypernetwork, extra_outputs=[gr.update()]), - _js="start_training_textual_inversion", - inputs=[ - dummy_component, - train_hypernetwork_name, - hypernetwork_learn_rate, - batch_size, - gradient_step, - dataset_directory, - log_directory, - training_width, - training_height, - varsize, - steps, - clip_grad_mode, - clip_grad_value, - shuffle_tags, - tag_drop_out, - latent_sampling_method, - use_weight, - create_image_every, - save_embedding_every, - template_file, - preview_from_txt2img, - *txt2img_preview_params, - ], - outputs=[ - ti_output, - ti_outcome, - ] - ) - - interrupt_training.click( - fn=lambda: shared.state.interrupt(), - inputs=[], - outputs=[], - ) - - interrupt_preprocessing.click( - fn=lambda: shared.state.interrupt(), - inputs=[], - outputs=[], - ) - - def create_setting_component(key, is_quicksettings=False): - def fun(): - return opts.data[key] if key in opts.data else opts.data_labels[key].default - - info = opts.data_labels[key] - t = type(info.default) - - args = info.component_args() if callable(info.component_args) else info.component_args - - if info.component is not None: - comp = info.component - elif t == str: - comp = gr.Textbox - elif t == int: - comp = gr.Number - elif t == bool: - comp = gr.Checkbox - else: - raise Exception(f'bad options item type: {str(t)} for key {key}') - - elem_id = "setting_"+key - - if info.refresh is not None: - if is_quicksettings: - res = comp(label=info.label, value=fun(), elem_id=elem_id, **(args or {})) - create_refresh_button(res, info.refresh, info.component_args, "refresh_" + key) - else: - with FormRow(): - res = comp(label=info.label, value=fun(), elem_id=elem_id, **(args or {})) - create_refresh_button(res, info.refresh, info.component_args, "refresh_" + key) - else: - res = comp(label=info.label, value=fun(), elem_id=elem_id, **(args or {})) - - return res - - components = [] - component_dict = {} - shared.settings_components = component_dict - - script_callbacks.ui_settings_callback() - opts.reorder() - - def run_settings(*args): - changed = [] - - for key, value, comp in zip(opts.data_labels.keys(), args, components): - assert comp == dummy_component or opts.same_type(value, opts.data_labels[key].default), f"Bad value for setting {key}: {value}; expecting {type(opts.data_labels[key].default).__name__}" - - for key, value, comp in zip(opts.data_labels.keys(), args, components): - if comp == dummy_component: - continue - - if opts.set(key, value): - changed.append(key) - - try: - opts.save(shared.config_filename) - except RuntimeError: - return opts.dumpjson(), f'{len(changed)} settings changed without save: {", ".join(changed)}.' - return opts.dumpjson(), f'{len(changed)} settings changed{": " if len(changed) > 0 else ""}{", ".join(changed)}.' 
- - def run_settings_single(value, key): - if not opts.same_type(value, opts.data_labels[key].default): - return gr.update(visible=True), opts.dumpjson() - - if not opts.set(key, value): - return gr.update(value=getattr(opts, key)), opts.dumpjson() - - opts.save(shared.config_filename) - - return get_value_for_setting(key), opts.dumpjson() - - with gr.Blocks(analytics_enabled=False) as settings_interface: - with gr.Row(): - with gr.Column(scale=6): - settings_submit = gr.Button(value="Apply settings", variant='primary', elem_id="settings_submit") - with gr.Column(): - restart_gradio = gr.Button(value='Reload UI', variant='primary', elem_id="settings_restart_gradio") - - result = gr.HTML(elem_id="settings_result") - - quicksettings_names = [x.strip() for x in opts.quicksettings.split(",")] - quicksettings_names = {x: i for i, x in enumerate(quicksettings_names) if x != 'quicksettings'} - - quicksettings_list = [] - - previous_section = None - current_tab = None - current_row = None - with gr.Tabs(elem_id="settings"): - for i, (k, item) in enumerate(opts.data_labels.items()): - section_must_be_skipped = item.section[0] is None - - if previous_section != item.section and not section_must_be_skipped: - elem_id, text = item.section - - if current_tab is not None: - current_row.__exit__() - current_tab.__exit__() - - gr.Group() - current_tab = gr.TabItem(elem_id="settings_{}".format(elem_id), label=text) - current_tab.__enter__() - current_row = gr.Column(variant='compact') - current_row.__enter__() - - previous_section = item.section - - if k in quicksettings_names and not shared.cmd_opts.freeze_settings: - quicksettings_list.append((i, k, item)) - components.append(dummy_component) - elif section_must_be_skipped: - components.append(dummy_component) - else: - component = create_setting_component(k) - component_dict[k] = component - components.append(component) - - if current_tab is not None: - current_row.__exit__() - current_tab.__exit__() - - with gr.TabItem("Actions"): - request_notifications = gr.Button(value='Request browser notifications', elem_id="request_notifications") - download_localization = gr.Button(value='Download localization template', elem_id="download_localization") - reload_script_bodies = gr.Button(value='Reload custom script bodies (No ui updates, No restart)', variant='secondary', elem_id="settings_reload_script_bodies") - - with gr.TabItem("Licenses"): - gr.HTML(shared.html("licenses.html"), elem_id="licenses") - - gr.Button(value="Show all pages", elem_id="settings_show_all_pages") - - request_notifications.click( - fn=lambda: None, - inputs=[], - outputs=[], - _js='function(){}' - ) - - download_localization.click( - fn=lambda: None, - inputs=[], - outputs=[], - _js='download_localization' - ) - - def reload_scripts(): - modules.scripts.reload_script_body_only() - reload_javascript() # need to refresh the html page - - reload_script_bodies.click( - fn=reload_scripts, - inputs=[], - outputs=[] - ) - - def request_restart(): - shared.state.interrupt() - shared.state.need_restart = True - - restart_gradio.click( - fn=request_restart, - _js='restart_reload', - inputs=[], - outputs=[], - ) - - interfaces = [ - (txt2img_interface, "txt2img", "txt2img"), - (img2img_interface, "img2img", "img2img"), - (extras_interface, "Extras", "extras"), - (pnginfo_interface, "PNG Info", "pnginfo"), - (modelmerger_interface, "Checkpoint Merger", "modelmerger"), - (train_interface, "Train", "ti"), - ] - - css = "" - - for cssfile in modules.scripts.list_files_with_name("style.css"): - if 
not os.path.isfile(cssfile): - continue - - with open(cssfile, "r", encoding="utf8") as file: - css += file.read() + "\n" - - if os.path.exists(os.path.join(data_path, "user.css")): - with open(os.path.join(data_path, "user.css"), "r", encoding="utf8") as file: - css += file.read() + "\n" - - if not cmd_opts.no_progressbar_hiding: - css += css_hide_progressbar - - interfaces += script_callbacks.ui_tabs_callback() - interfaces += [(settings_interface, "Settings", "settings")] - - extensions_interface = ui_extensions.create_ui() - interfaces += [(extensions_interface, "Extensions", "extensions")] - - with gr.Blocks(css=css, analytics_enabled=False, title="Stable Diffusion") as demo: - with gr.Row(elem_id="quicksettings", variant="compact"): - for i, k, item in sorted(quicksettings_list, key=lambda x: quicksettings_names.get(x[1], x[0])): - component = create_setting_component(k, is_quicksettings=True) - component_dict[k] = component - - parameters_copypaste.connect_paste_params_buttons() - - with gr.Tabs(elem_id="tabs") as tabs: - for interface, label, ifid in interfaces: - with gr.TabItem(label, id=ifid, elem_id='tab_' + ifid): - interface.render() - - if os.path.exists(os.path.join(script_path, "notification.mp3")): - audio_notification = gr.Audio(interactive=False, value=os.path.join(script_path, "notification.mp3"), elem_id="audio_notification", visible=False) - - footer = shared.html("footer.html") - footer = footer.format(versions=versions_html()) - gr.HTML(footer, elem_id="footer") - - text_settings = gr.Textbox(elem_id="settings_json", value=lambda: opts.dumpjson(), visible=False) - settings_submit.click( - fn=wrap_gradio_call(run_settings, extra_outputs=[gr.update()]), - inputs=components, - outputs=[text_settings, result], - ) - - for i, k, item in quicksettings_list: - component = component_dict[k] - - component.change( - fn=lambda value, k=k: run_settings_single(value, key=k), - inputs=[component], - outputs=[component, text_settings], - ) - - text_settings.change( - fn=lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit"), - inputs=[], - outputs=[image_cfg_scale], - ) - - button_set_checkpoint = gr.Button('Change checkpoint', elem_id='change_checkpoint', visible=False) - button_set_checkpoint.click( - fn=lambda value, _: run_settings_single(value, key='sd_model_checkpoint'), - _js="function(v){ var res = desiredCheckpointName; desiredCheckpointName = ''; return [res || v, null]; }", - inputs=[component_dict['sd_model_checkpoint'], dummy_component], - outputs=[component_dict['sd_model_checkpoint'], text_settings], - ) - - component_keys = [k for k in opts.data_labels.keys() if k in component_dict] - - def get_settings_values(): - return [get_value_for_setting(key) for key in component_keys] - - demo.load( - fn=get_settings_values, - inputs=[], - outputs=[component_dict[k] for k in component_keys], - ) - - def modelmerger(*args): - try: - results = modules.extras.run_modelmerger(*args) - except Exception as e: - print("Error loading/saving model file:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - modules.sd_models.list_models() # to remove the potentially missing models from the list - return [*[gr.Dropdown.update(choices=modules.sd_models.checkpoint_tiles()) for _ in range(4)], f"Error merging checkpoints: {e}"] - return results - - modelmerger_merge.click(fn=lambda: '', inputs=[], outputs=[modelmerger_result]) - modelmerger_merge.click( - fn=wrap_gradio_gpu_call(modelmerger, extra_outputs=lambda: [gr.update() for _ in 
range(4)]), - _js='modelmerger', - inputs=[ - dummy_component, - primary_model_name, - secondary_model_name, - tertiary_model_name, - interp_method, - interp_amount, - save_as_half, - custom_name, - checkpoint_format, - config_source, - bake_in_vae, - discard_weights, - ], - outputs=[ - primary_model_name, - secondary_model_name, - tertiary_model_name, - component_dict['sd_model_checkpoint'], - modelmerger_result, - ] - ) - - ui_config_file = cmd_opts.ui_config_file - ui_settings = {} - settings_count = len(ui_settings) - error_loading = False - - try: - if os.path.exists(ui_config_file): - with open(ui_config_file, "r", encoding="utf8") as file: - ui_settings = json.load(file) - except Exception: - error_loading = True - print("Error loading settings:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - def loadsave(path, x): - def apply_field(obj, field, condition=None, init_field=None): - key = path + "/" + field - - if getattr(obj, 'custom_script_source', None) is not None: - key = 'customscript/' + obj.custom_script_source + '/' + key - - if getattr(obj, 'do_not_save_to_config', False): - return - - saved_value = ui_settings.get(key, None) - if saved_value is None: - ui_settings[key] = getattr(obj, field) - elif condition and not condition(saved_value): - pass - - # this warning is generally not useful; - # print(f'Warning: Bad ui setting value: {key}: {saved_value}; Default value "{getattr(obj, field)}" will be used instead.') - else: - setattr(obj, field, saved_value) - if init_field is not None: - init_field(saved_value) - - if type(x) in [gr.Slider, gr.Radio, gr.Checkbox, gr.Textbox, gr.Number, gr.Dropdown] and x.visible: - apply_field(x, 'visible') - - if type(x) == gr.Slider: - apply_field(x, 'value') - apply_field(x, 'minimum') - apply_field(x, 'maximum') - apply_field(x, 'step') - - if type(x) == gr.Radio: - apply_field(x, 'value', lambda val: val in x.choices) - - if type(x) == gr.Checkbox: - apply_field(x, 'value') - - if type(x) == gr.Textbox: - apply_field(x, 'value') - - if type(x) == gr.Number: - apply_field(x, 'value') - - if type(x) == gr.Dropdown: - def check_dropdown(val): - if getattr(x, 'multiselect', False): - return all([value in x.choices for value in val]) - else: - return val in x.choices - - apply_field(x, 'value', check_dropdown, getattr(x, 'init_field', None)) - - visit(txt2img_interface, loadsave, "txt2img") - visit(img2img_interface, loadsave, "img2img") - visit(extras_interface, loadsave, "extras") - visit(modelmerger_interface, loadsave, "modelmerger") - visit(train_interface, loadsave, "train") - - if not error_loading and (not os.path.exists(ui_config_file) or settings_count != len(ui_settings)): - with open(ui_config_file, "w", encoding="utf8") as file: - json.dump(ui_settings, file, indent=4) - - # Required as a workaround for change() event not triggering when loading values from ui-config.json - interp_description.value = update_interp_description(interp_method.value) - - return demo - - -def reload_javascript(): - head = f'\n' - - inline = f"{localization.localization_js(shared.opts.localization)};" - if cmd_opts.theme is not None: - inline += f"set_theme('{cmd_opts.theme}');" - - for script in modules.scripts.list_scripts("javascript", ".js"): - head += f'\n' - - head += f'\n' - - def template_response(*args, **kwargs): - res = shared.GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{head}'.encode("utf8")) - res.init_headers() - return res - - gradio.routes.templates.TemplateResponse = 
template_response - - -if not hasattr(shared, 'GradioTemplateResponseOriginal'): - shared.GradioTemplateResponseOriginal = gradio.routes.templates.TemplateResponse - - -def versions_html(): - import torch - import launch - - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - commit = launch.commit_hash() - short_commit = commit[0:8] - - if shared.xformers_available: - import xformers - xformers_version = xformers.__version__ - else: - xformers_version = "N/A" - - return f""" -python: {python_version} - •  -torch: {getattr(torch, '__long_version__',torch.__version__)} - •  -xformers: {xformers_version} - •  -gradio: {gr.__version__} - •  -commit: {short_commit} - •  -checkpoint: N/A -""" diff --git a/spaces/bigyunicorn/sashimi_identifier/README.md b/spaces/bigyunicorn/sashimi_identifier/README.md deleted file mode 100644 index 1fdbbdee82185026b0cccd6b6e0be6799b51a23b..0000000000000000000000000000000000000000 --- a/spaces/bigyunicorn/sashimi_identifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sashimi Identifier -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bioriAsaeru/text-to-voice/Avancemos 3 Workbook Answers Page 6 Zipl Where to Find and Access Them Online.md b/spaces/bioriAsaeru/text-to-voice/Avancemos 3 Workbook Answers Page 6 Zipl Where to Find and Access Them Online.md deleted file mode 100644 index 05a9192e2f9f3f2e0c7f95a165cd015fd9498d40..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Avancemos 3 Workbook Answers Page 6 Zipl Where to Find and Access Them Online.md +++ /dev/null @@ -1,5 +0,0 @@ - -

Student Workbooks are used in Regular Basic Academies and other types of basic law enforcement training. The workbook series consists of 42 learning domains and approximately 5,000 pages of text. Occasionally a user may find something that requires modification or verification for accuracy. To notify POST staff please access the Workbook Modification Request and provide as much information as possible.

-

Avancemos 3 Workbook Answers Page 6 Zipl


Download Ziphttps://urloso.com/2uyPTL



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Lectra Diamino Fashion V5r2c3 Crack A Complete Guide to the Pre-Costing and Marker-Making Software.md b/spaces/bioriAsaeru/text-to-voice/Lectra Diamino Fashion V5r2c3 Crack A Complete Guide to the Pre-Costing and Marker-Making Software.md deleted file mode 100644 index cacadf9bddad6a71e1389707beea4edebc36d5b8..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Lectra Diamino Fashion V5r2c3 Crack A Complete Guide to the Pre-Costing and Marker-Making Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

lectra diamino fashion v5r2c3 crack


DOWNLOAD >> https://urloso.com/2uyQmM



- - aaccfb2cb3
-
-
-

diff --git a/spaces/bla/tranny/App/Authentication.py b/spaces/bla/tranny/App/Authentication.py deleted file mode 100644 index 918f9ce73ea19d94695a6ada4d9add4bfdf7d34a..0000000000000000000000000000000000000000 --- a/spaces/bla/tranny/App/Authentication.py +++ /dev/null @@ -1,4 +0,0 @@ -from fastapi import APIRouter, status -from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm - -oauth_scheme = OAuth2PasswordBearer(tokenUrl="/user/login") diff --git a/spaces/brjathu/HMR2.0/hmr2/datasets/__init__.py b/spaces/brjathu/HMR2.0/hmr2/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/brjathu/HMR2.0/hmr2/models/heads/smpl_head.py b/spaces/brjathu/HMR2.0/hmr2/models/heads/smpl_head.py deleted file mode 100644 index 06dd89faa7e325e262d759dfdff1a404271212c1..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/hmr2/models/heads/smpl_head.py +++ /dev/null @@ -1,111 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -import einops - -from ...utils.geometry import rot6d_to_rotmat, aa_to_rotmat -from ..components.pose_transformer import TransformerDecoder - -def build_smpl_head(cfg): - smpl_head_type = cfg.MODEL.SMPL_HEAD.get('TYPE', 'hmr') - if smpl_head_type == 'transformer_decoder': - return SMPLTransformerDecoderHead(cfg) - else: - raise ValueError('Unknown SMPL head type: {}'.format(smpl_head_type)) - -class SMPLTransformerDecoderHead(nn.Module): - """ Cross-attention based SMPL Transformer decoder - """ - - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - self.joint_rep_type = cfg.MODEL.SMPL_HEAD.get('JOINT_REP', '6d') - self.joint_rep_dim = {'6d': 6, 'aa': 3}[self.joint_rep_type] - npose = self.joint_rep_dim * (cfg.SMPL.NUM_BODY_JOINTS + 1) - self.npose = npose - self.input_is_mean_shape = cfg.MODEL.SMPL_HEAD.get('TRANSFORMER_INPUT', 'zero') == 'mean_shape' - transformer_args = dict( - num_tokens=1, - token_dim=(npose + 10 + 3) if self.input_is_mean_shape else 1, - dim=1024, - ) - transformer_args = (transformer_args | dict(cfg.MODEL.SMPL_HEAD.TRANSFORMER_DECODER)) - self.transformer = TransformerDecoder( - **transformer_args - ) - dim=transformer_args['dim'] - self.decpose = nn.Linear(dim, npose) - self.decshape = nn.Linear(dim, 10) - self.deccam = nn.Linear(dim, 3) - - if cfg.MODEL.SMPL_HEAD.get('INIT_DECODER_XAVIER', False): - # True by default in MLP. False by default in Transformer - nn.init.xavier_uniform_(self.decpose.weight, gain=0.01) - nn.init.xavier_uniform_(self.decshape.weight, gain=0.01) - nn.init.xavier_uniform_(self.deccam.weight, gain=0.01) - - mean_params = np.load(cfg.SMPL.MEAN_PARAMS) - init_body_pose = torch.from_numpy(mean_params['pose'].astype(np.float32)).unsqueeze(0) - init_betas = torch.from_numpy(mean_params['shape'].astype('float32')).unsqueeze(0) - init_cam = torch.from_numpy(mean_params['cam'].astype(np.float32)).unsqueeze(0) - self.register_buffer('init_body_pose', init_body_pose) - self.register_buffer('init_betas', init_betas) - self.register_buffer('init_cam', init_cam) - - def forward(self, x, **kwargs): - - batch_size = x.shape[0] - # vit pretrained backbone is channel-first. 
Change to token-first - x = einops.rearrange(x, 'b c h w -> b (h w) c') - - init_body_pose = self.init_body_pose.expand(batch_size, -1) - init_betas = self.init_betas.expand(batch_size, -1) - init_cam = self.init_cam.expand(batch_size, -1) - - # TODO: Convert init_body_pose to aa rep if needed - if self.joint_rep_type == 'aa': - raise NotImplementedError - - pred_body_pose = init_body_pose - pred_betas = init_betas - pred_cam = init_cam - pred_body_pose_list = [] - pred_betas_list = [] - pred_cam_list = [] - for i in range(self.cfg.MODEL.SMPL_HEAD.get('IEF_ITERS', 1)): - # Input token to transformer is zero token - if self.input_is_mean_shape: - token = torch.cat([pred_body_pose, pred_betas, pred_cam], dim=1)[:,None,:] - else: - token = torch.zeros(batch_size, 1, 1).to(x.device) - - # Pass through transformer - token_out = self.transformer(token, context=x) - token_out = token_out.squeeze(1) # (B, C) - - # Readout from token_out - pred_body_pose = self.decpose(token_out) + pred_body_pose - pred_betas = self.decshape(token_out) + pred_betas - pred_cam = self.deccam(token_out) + pred_cam - pred_body_pose_list.append(pred_body_pose) - pred_betas_list.append(pred_betas) - pred_cam_list.append(pred_cam) - - # Convert self.joint_rep_type -> rotmat - joint_conversion_fn = { - '6d': rot6d_to_rotmat, - 'aa': lambda x: aa_to_rotmat(x.view(-1, 3).contiguous()) - }[self.joint_rep_type] - - pred_smpl_params_list = {} - pred_smpl_params_list['body_pose'] = torch.cat([joint_conversion_fn(pbp).view(batch_size, -1, 3, 3)[:, 1:, :, :] for pbp in pred_body_pose_list], dim=0) - pred_smpl_params_list['betas'] = torch.cat(pred_betas_list, dim=0) - pred_smpl_params_list['cam'] = torch.cat(pred_cam_list, dim=0) - pred_body_pose = joint_conversion_fn(pred_body_pose).view(batch_size, self.cfg.SMPL.NUM_BODY_JOINTS+1, 3, 3) - - pred_smpl_params = {'global_orient': pred_body_pose[:, [0]], - 'body_pose': pred_body_pose[:, 1:], - 'betas': pred_betas} - return pred_smpl_params, pred_cam, pred_smpl_params_list diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/roi_heads/box_head.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/roi_heads/box_head.py deleted file mode 100644 index 5d0370b0400d9268f13c905e4096a84ce42e9bfd..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/roi_heads/box_head.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import List -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, get_norm -from detectron2.utils.registry import Registry - -__all__ = ["FastRCNNConvFCHead", "build_box_head", "ROI_BOX_HEAD_REGISTRY"] - -ROI_BOX_HEAD_REGISTRY = Registry("ROI_BOX_HEAD") -ROI_BOX_HEAD_REGISTRY.__doc__ = """ -Registry for box heads, which make box predictions from per-region features. - -The registered object will be called with `obj(cfg, input_shape)`. -""" - - -# To get torchscript support, we make the head a subclass of `nn.Sequential`. -# Therefore, to add new layers in this head class, please make sure they are -# added in the order they will be used in forward(). -@ROI_BOX_HEAD_REGISTRY.register() -class FastRCNNConvFCHead(nn.Sequential): - """ - A head with several 3x3 conv layers (each followed by norm & relu) and then - several fc layers (each followed by relu). 
- """ - - @configurable - def __init__( - self, input_shape: ShapeSpec, *, conv_dims: List[int], fc_dims: List[int], conv_norm="" - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature. - conv_dims (list[int]): the output dimensions of the conv layers - fc_dims (list[int]): the output dimensions of the fc layers - conv_norm (str or callable): normalization for the conv layers. - See :func:`detectron2.layers.get_norm` for supported types. - """ - super().__init__() - assert len(conv_dims) + len(fc_dims) > 0 - - self._output_size = (input_shape.channels, input_shape.height, input_shape.width) - - self.conv_norm_relus = [] - for k, conv_dim in enumerate(conv_dims): - conv = Conv2d( - self._output_size[0], - conv_dim, - kernel_size=3, - padding=1, - bias=not conv_norm, - norm=get_norm(conv_norm, conv_dim), - activation=nn.ReLU(), - ) - self.add_module("conv{}".format(k + 1), conv) - self.conv_norm_relus.append(conv) - self._output_size = (conv_dim, self._output_size[1], self._output_size[2]) - - self.fcs = [] - for k, fc_dim in enumerate(fc_dims): - if k == 0: - self.add_module("flatten", nn.Flatten()) - fc = nn.Linear(int(np.prod(self._output_size)), fc_dim) - self.add_module("fc{}".format(k + 1), fc) - self.add_module("fc_relu{}".format(k + 1), nn.ReLU()) - self.fcs.append(fc) - self._output_size = fc_dim - - for layer in self.conv_norm_relus: - weight_init.c2_msra_fill(layer) - for layer in self.fcs: - weight_init.c2_xavier_fill(layer) - - @classmethod - def from_config(cls, cfg, input_shape): - num_conv = cfg.MODEL.ROI_BOX_HEAD.NUM_CONV - conv_dim = cfg.MODEL.ROI_BOX_HEAD.CONV_DIM - num_fc = cfg.MODEL.ROI_BOX_HEAD.NUM_FC - fc_dim = cfg.MODEL.ROI_BOX_HEAD.FC_DIM - return { - "input_shape": input_shape, - "conv_dims": [conv_dim] * num_conv, - "fc_dims": [fc_dim] * num_fc, - "conv_norm": cfg.MODEL.ROI_BOX_HEAD.NORM, - } - - def forward(self, x): - for layer in self: - x = layer(x) - return x - - @property - @torch.jit.unused - def output_shape(self): - """ - Returns: - ShapeSpec: the output feature shape - """ - o = self._output_size - if isinstance(o, int): - return ShapeSpec(channels=o) - else: - return ShapeSpec(channels=o[0], height=o[1], width=o[2]) - - -def build_box_head(cfg, input_shape): - """ - Build a box head defined by `cfg.MODEL.ROI_BOX_HEAD.NAME`. - """ - name = cfg.MODEL.ROI_BOX_HEAD.NAME - return ROI_BOX_HEAD_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/predictors/chart.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/predictors/chart.py deleted file mode 100644 index 3bcd13f7c592e37c2751556cda1f6e9cd3400b73..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/predictors/chart.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import torch -from torch import nn - -from detectron2.config import CfgNode -from detectron2.layers import ConvTranspose2d, interpolate - -from ...structures import DensePoseChartPredictorOutput -from ..utils import initialize_module_params -from .registry import DENSEPOSE_PREDICTOR_REGISTRY - - -@DENSEPOSE_PREDICTOR_REGISTRY.register() -class DensePoseChartPredictor(nn.Module): - """ - Predictor (last layers of a DensePose model) that takes DensePose head outputs as an input - and produces 4 tensors which represent DensePose results for predefined body parts - (patches / charts): - * coarse segmentation, a tensor of shape [N, K, Hout, Wout] - * fine segmentation, a tensor of shape [N, C, Hout, Wout] - * U coordinates, a tensor of shape [N, C, Hout, Wout] - * V coordinates, a tensor of shape [N, C, Hout, Wout] - where - - N is the number of instances - - K is the number of coarse segmentation channels ( - 2 = foreground / background, - 15 = one of 14 body parts / background) - - C is the number of fine segmentation channels ( - 24 fine body parts / background) - - Hout and Wout are height and width of predictions - """ - - def __init__(self, cfg: CfgNode, input_channels: int): - """ - Initialize predictor using configuration options - - Args: - cfg (CfgNode): configuration options - input_channels (int): input tensor size along the channel dimension - """ - super().__init__() - dim_in = input_channels - n_segm_chan = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS - dim_out_patches = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES + 1 - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - # coarse segmentation - self.ann_index_lowres = ConvTranspose2d( - dim_in, n_segm_chan, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - # fine segmentation - self.index_uv_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - # U - self.u_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - # V - self.v_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.scale_factor = cfg.MODEL.ROI_DENSEPOSE_HEAD.UP_SCALE - initialize_module_params(self) - - def interp2d(self, tensor_nchw: torch.Tensor): - """ - Bilinear interpolation method to be used for upscaling - - Args: - tensor_nchw (tensor): tensor of shape (N, C, H, W) - Return: - tensor of shape (N, C, Hout, Wout), where Hout and Wout are computed - by applying the scale factor to H and W - """ - return interpolate( - tensor_nchw, scale_factor=self.scale_factor, mode="bilinear", align_corners=False - ) - - def forward(self, head_outputs: torch.Tensor): - """ - Perform forward step on DensePose head outputs - - Args: - head_outputs (tensor): DensePose head outputs, tensor of shape [N, D, H, W] - Return: - An instance of DensePoseChartPredictorOutput - """ - return DensePoseChartPredictorOutput( - coarse_segm=self.interp2d(self.ann_index_lowres(head_outputs)), - fine_segm=self.interp2d(self.index_uv_lowres(head_outputs)), - u=self.interp2d(self.u_lowres(head_outputs)), - v=self.interp2d(self.v_lowres(head_outputs)), - ) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/vis/base.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/vis/base.py deleted file mode 100644 index 7b35397b18e62c195dc15771aa79a1d42b321e7f..0000000000000000000000000000000000000000 --- 
a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/vis/base.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import numpy as np -import cv2 -import torch - -Image = np.ndarray -Boxes = torch.Tensor - - -class MatrixVisualizer(object): - """ - Base visualizer for matrix data - """ - - def __init__( - self, - inplace=True, - cmap=cv2.COLORMAP_PARULA, - val_scale=1.0, - alpha=0.7, - interp_method_matrix=cv2.INTER_LINEAR, - interp_method_mask=cv2.INTER_NEAREST, - ): - self.inplace = inplace - self.cmap = cmap - self.val_scale = val_scale - self.alpha = alpha - self.interp_method_matrix = interp_method_matrix - self.interp_method_mask = interp_method_mask - - def visualize(self, image_bgr, mask, matrix, bbox_xywh): - self._check_image(image_bgr) - self._check_mask_matrix(mask, matrix) - if self.inplace: - image_target_bgr = image_bgr - else: - image_target_bgr = image_bgr * 0 - x, y, w, h = [int(v) for v in bbox_xywh] - if w <= 0 or h <= 0: - return image_bgr - mask, matrix = self._resize(mask, matrix, w, h) - mask_bg = np.tile((mask == 0)[:, :, np.newaxis], [1, 1, 3]) - matrix_scaled = matrix.astype(np.float32) * self.val_scale - _EPSILON = 1e-6 - if np.any(matrix_scaled > 255 + _EPSILON): - logger = logging.getLogger(__name__) - logger.warning( - f"Matrix has values > {255 + _EPSILON} after " f"scaling, clipping to [0..255]" - ) - matrix_scaled_8u = matrix_scaled.clip(0, 255).astype(np.uint8) - matrix_vis = cv2.applyColorMap(matrix_scaled_8u, self.cmap) - matrix_vis[mask_bg] = image_target_bgr[y : y + h, x : x + w, :][mask_bg] - image_target_bgr[y : y + h, x : x + w, :] = ( - image_target_bgr[y : y + h, x : x + w, :] * (1.0 - self.alpha) + matrix_vis * self.alpha - ) - return image_target_bgr.astype(np.uint8) - - def _resize(self, mask, matrix, w, h): - if (w != mask.shape[1]) or (h != mask.shape[0]): - mask = cv2.resize(mask, (w, h), self.interp_method_mask) - if (w != matrix.shape[1]) or (h != matrix.shape[0]): - matrix = cv2.resize(matrix, (w, h), self.interp_method_matrix) - return mask, matrix - - def _check_image(self, image_rgb): - assert len(image_rgb.shape) == 3 - assert image_rgb.shape[2] == 3 - assert image_rgb.dtype == np.uint8 - - def _check_mask_matrix(self, mask, matrix): - assert len(matrix.shape) == 2 - assert len(mask.shape) == 2 - assert mask.dtype == np.uint8 - - -class RectangleVisualizer(object): - - _COLOR_GREEN = (18, 127, 15) - - def __init__(self, color=_COLOR_GREEN, thickness=1): - self.color = color - self.thickness = thickness - - def visualize(self, image_bgr, bbox_xywh, color=None, thickness=None): - x, y, w, h = bbox_xywh - color = color or self.color - thickness = thickness or self.thickness - cv2.rectangle(image_bgr, (int(x), int(y)), (int(x + w), int(y + h)), color, thickness) - return image_bgr - - -class PointsVisualizer(object): - - _COLOR_GREEN = (18, 127, 15) - - def __init__(self, color_bgr=_COLOR_GREEN, r=5): - self.color_bgr = color_bgr - self.r = r - - def visualize(self, image_bgr, pts_xy, colors_bgr=None, rs=None): - for j, pt_xy in enumerate(pts_xy): - x, y = pt_xy - color_bgr = colors_bgr[j] if colors_bgr is not None else self.color_bgr - r = rs[j] if rs is not None else self.r - cv2.circle(image_bgr, (x, y), r, color_bgr, -1) - return image_bgr - - -class TextVisualizer(object): - - _COLOR_GRAY = (218, 227, 218) - _COLOR_WHITE = (255, 255, 255) - - def __init__( - self, - font_face=cv2.FONT_HERSHEY_SIMPLEX, - font_color_bgr=_COLOR_GRAY, - font_scale=0.35, - 
font_line_type=cv2.LINE_AA, - font_line_thickness=1, - fill_color_bgr=_COLOR_WHITE, - fill_color_transparency=1.0, - frame_color_bgr=_COLOR_WHITE, - frame_color_transparency=1.0, - frame_thickness=1, - ): - self.font_face = font_face - self.font_color_bgr = font_color_bgr - self.font_scale = font_scale - self.font_line_type = font_line_type - self.font_line_thickness = font_line_thickness - self.fill_color_bgr = fill_color_bgr - self.fill_color_transparency = fill_color_transparency - self.frame_color_bgr = frame_color_bgr - self.frame_color_transparency = frame_color_transparency - self.frame_thickness = frame_thickness - - def visualize(self, image_bgr, txt, topleft_xy): - txt_w, txt_h = self.get_text_size_wh(txt) - topleft_xy = tuple(map(int, topleft_xy)) - x, y = topleft_xy - if self.frame_color_transparency < 1.0: - t = self.frame_thickness - image_bgr[y - t : y + txt_h + t, x - t : x + txt_w + t, :] = ( - image_bgr[y - t : y + txt_h + t, x - t : x + txt_w + t, :] - * self.frame_color_transparency - + np.array(self.frame_color_bgr) * (1.0 - self.frame_color_transparency) - ).astype(np.float) - if self.fill_color_transparency < 1.0: - image_bgr[y : y + txt_h, x : x + txt_w, :] = ( - image_bgr[y : y + txt_h, x : x + txt_w, :] * self.fill_color_transparency - + np.array(self.fill_color_bgr) * (1.0 - self.fill_color_transparency) - ).astype(np.float) - cv2.putText( - image_bgr, - txt, - topleft_xy, - self.font_face, - self.font_scale, - self.font_color_bgr, - self.font_line_thickness, - self.font_line_type, - ) - return image_bgr - - def get_text_size_wh(self, txt): - ((txt_w, txt_h), _) = cv2.getTextSize( - txt, self.font_face, self.font_scale, self.font_line_thickness - ) - return txt_w, txt_h - - -class CompoundVisualizer(object): - def __init__(self, visualizers): - self.visualizers = visualizers - - def visualize(self, image_bgr, data): - assert len(data) == len( - self.visualizers - ), "The number of datas {} should match the number of visualizers" " {}".format( - len(data), len(self.visualizers) - ) - image = image_bgr - for i, visualizer in enumerate(self.visualizers): - image = visualizer.visualize(image, data[i]) - return image - - def __str__(self): - visualizer_str = ", ".join([str(v) for v in self.visualizers]) - return "Compound Visualizer [{}]".format(visualizer_str) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/rotated_boxes.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/rotated_boxes.py deleted file mode 100644 index 03f73b3bb99275931a887ad9b2d8c0ac9f412bf3..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/layers/rotated_boxes.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from __future__ import absolute_import, division, print_function, unicode_literals -import torch - - -def pairwise_iou_rotated(boxes1, boxes2): - """ - Return intersection-over-union (Jaccard index) of boxes. - - Both sets of boxes are expected to be in - (x_center, y_center, width, height, angle) format. 
- - Arguments: - boxes1 (Tensor[N, 5]) - boxes2 (Tensor[M, 5]) - - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - return torch.ops.detectron2.box_iou_rotated(boxes1, boxes2) diff --git a/spaces/cc1799/vits-uma-genshin-honkai/app.py b/spaces/cc1799/vits-uma-genshin-honkai/app.py deleted file mode 100644 index dc8b68eac9a9c2e47c4e58a9e7dae7cd518edb02..0000000000000000000000000000000000000000 --- a/spaces/cc1799/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,123 +0,0 @@ -# coding=utf-8 -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 300: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
VITS online speech synthesis demo\n" -            "
Mainly includes voices from Uma Musume, Genshin Impact (Chinese), Genshin Impact (Japanese), and Honkai Impact 3rd
" - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid], api_name="search_speaker") - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls], api_name="fuck") - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() diff --git a/spaces/cenji1109285052/anime-ai-detect/README.md b/spaces/cenji1109285052/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/cenji1109285052/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cfwef/gpt/crazy_functions/test_project/latex/attention/model_architecture.tex b/spaces/cfwef/gpt/crazy_functions/test_project/latex/attention/model_architecture.tex deleted file mode 100644 index c82be6242cc9d26203360e90d3ac9184ef6ad842..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/crazy_functions/test_project/latex/attention/model_architecture.tex +++ /dev/null @@ -1,155 +0,0 @@ - -\begin{figure} - \centering - \includegraphics[scale=0.6]{Figures/ModalNet-21} - \caption{The Transformer - model architecture.} - \label{fig:model-arch} -\end{figure} - -% Although the primary workhorse of our model is attention, -%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). 
The following sections describe the gross architecture and these particular components in detail. - -Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next. - -The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively. - -\subsection{Encoder and Decoder Stacks} - -\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$. - -\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$. - -% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail. - -\subsection{Attention} \label{sec:attention} -An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. 
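As a toy illustration of this weighted sum (the numbers here are invented for exposition and are not from the original text): for a single query with two key-value pairs, if the compatibility function yields normalized weights $\alpha_1 = 0.9$ and $\alpha_2 = 0.1$, the output is simply $0.9\,v_1 + 0.1\,v_2$, i.e. a convex combination of the value vectors; the specific compatibility function used here is defined in the next subsection.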
- -\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod} - -% \begin{figure} -% \centering -% \includegraphics[scale=0.6]{Figures/ModalNet-19} -% \caption{Scaled Dot-Product Attention.} -% \label{fig:multi-head-att} -% \end{figure} - -We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values. - -In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as: - -\begin{equation} - \mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V -\end{equation} - -The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. - -%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients. - -% Already described in the subsequent section -%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$. - -%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model. - -While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$. - - -%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$. 
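To make the formula above concrete, here is a minimal PyTorch sketch of scaled dot-product attention (an illustrative reading of the equation, not code taken from any of the deleted files; the shape conventions are assumptions):

import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q: (batch, len_q, d_k); k: (batch, len_k, d_k); v: (batch, len_k, d_v)
    d_k = q.size(-1)
    # compatibility scores, scaled by 1/sqrt(d_k) so the softmax keeps useful gradients
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # mask == 0 marks illegal connections (e.g. future positions in the decoder)
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    # output: attention-weighted sum of the values, shape (batch, len_q, d_v)
    return weights @ v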
- - -\subsubsection{Multi-Head Attention} \label{sec:multihead} - -\begin{figure} -\begin{minipage}[t]{0.5\textwidth} - \centering - Scaled Dot-Product Attention \\ - \vspace{0.5cm} - \includegraphics[scale=0.6]{Figures/ModalNet-19} -\end{minipage} -\begin{minipage}[t]{0.5\textwidth} - \centering - Multi-Head Attention \\ - \vspace{0.1cm} - \includegraphics[scale=0.6]{Figures/ModalNet-20} -\end{minipage} - - - % \centering - - \caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.} - \label{fig:multi-head-att} -\end{figure} - -Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. -On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}. - -Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. - -\begin{align*} - \mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\ -% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\ - \text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\ -\end{align*} - -Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$. - - -%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation. - -In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$. -Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. - -\subsubsection{Applications of Attention in our Model} - -The Transformer uses multi-head attention in three different ways: -\begin{itemize} - \item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}. - - \item The encoder contains self-attention layers. 
In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. - - \item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}. - -\end{itemize} - -\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn} - -In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. - -\begin{equation} - \mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2 -\end{equation} - -While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$. - - - -%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention. - -%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention. - - -%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as -%\begin{equation*} \label{eq:attention} -% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq). -%\end{equation*} -%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$. - -%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. 
Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$. -%\marginpar{} - -\subsection{Embeddings and Softmax} -Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$. - - -\subsection{Positional Encoding} -Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}. - -In this work, we use sine and cosine functions of different frequencies: - -\begin{align*} - PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\ - PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel}) -\end{align*} - -where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$. - -We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. 
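For reference, a small NumPy sketch of the sinusoidal encoding defined above (illustrative only; it assumes an even model dimension and is not taken from the deleted sources):

import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    assert d_model % 2 == 0, "this sketch assumes an even model dimension"
    pos = np.arange(max_len)[:, None]              # (max_len, 1)
    two_i = np.arange(0, d_model, 2)[None, :]      # (1, d_model // 2)
    angles = pos / np.power(10000.0, two_i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe                                      # added to the token embeddings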
diff --git a/spaces/chaninder/ds3-ml-model/app.py b/spaces/chaninder/ds3-ml-model/app.py deleted file mode 100644 index ad0b57ebb342338408e5389a312c1d963e371449..0000000000000000000000000000000000000000 --- a/spaces/chaninder/ds3-ml-model/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import csv -import json -import matplotlib.pyplot as plt -import ast -#import pickle -import sklearn -from sklearn import linear_model - - -df = pd.read_csv('emily_election.csv') - - -#loaded_model = pickle.load(open(filename, 'rb')) - -df['spend'] = df['cum_spend'] -df['runtime'] = df['cumulative_ad_runtime'].apply(lambda s: int(s.split('days')[0])) -df['impressions'] = df['cumulative_impressions_by_region'].apply(lambda d: ast.literal_eval(d)) -df['impressions'] = df['impressions'].apply(lambda d: np.array(list(d.values())).sum()) - -#feature 3 (for later) -df['audience_size'] = df['cumulative_est_audience'].apply(lambda d: ast.literal_eval(d)) -df['audience_size'] = df['audience_size'].apply(lambda d: np.array(list(d.values())).sum()) - -#data = df[['runtime', 'spend', 'impressions']] -data = df[['runtime', 'spend', 'audience_size','impressions']] - -msk = np.random.rand(len(data)) < 0.8 -train = data[msk] -test = data[~msk] - - -#new_train = train[train['impressions'] < 1000000] -train['spend'] = train['spend'].astype('float') - - -new_train = train[(train['spend'] > 250)] -new_train = new_train[new_train['runtime']>4] - - -#this model predicts impressions given the runtime and the spend - -regr = linear_model.LinearRegression() -new_train['log_runtime'] = np.log(new_train['runtime']) -new_train['log_spend'] = np.log(new_train['spend']) -new_train['log_impressions'] = np.log(new_train['impressions']) -new_train.replace([np.inf, -np.inf], np.nan, inplace=True) -new_train.dropna(inplace=True) -print(new_train.to_string()) -x = np.asanyarray(new_train[['log_runtime', 'log_spend']]) -y = np.asanyarray(new_train[['log_impressions']]) - -regr.fit(x, y) -spend = st.number_input('Enter Spend (in dollars): ') -runtime = st.number_input('Enter runtime (in days)') - -if spend and runtime: - pred= regr.predict([np.log([spend, runtime])]) - st.write('70% confidence interval for number of impressions is: {} to {} hits'.format(int(np.exp(pred[0][0]-1.65)), int(np.exp(pred[0][0]+1.65)))) \ No newline at end of file diff --git a/spaces/chasemcdo/hf_localai/.github/bump_deps.sh b/spaces/chasemcdo/hf_localai/.github/bump_deps.sh deleted file mode 100644 index d8fff4a3148de12622c2d25de8094ad01363ab09..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/.github/bump_deps.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash -set -xe -REPO=$1 -BRANCH=$2 -VAR=$3 - -LAST_COMMIT=$(curl -s -H "Accept: application/vnd.github.VERSION.sha" "https://api.github.com/repos/$REPO/commits/$BRANCH") - -sed -i Makefile -e "s/$VAR?=.*/$VAR?=$LAST_COMMIT/" diff --git a/spaces/chilge/taoli/losses.py b/spaces/chilge/taoli/losses.py deleted file mode 100644 index 41f9be6980713a46824ae9ec5eb8fd7c515d89c5..0000000000000000000000000000000000000000 --- a/spaces/chilge/taoli/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 
0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/chompionsawelo/whisper_transcribe/tool/text_file_tool.py b/spaces/chompionsawelo/whisper_transcribe/tool/text_file_tool.py deleted file mode 100644 index ee31bb7e0aff7d8accd837028db2e6fa67d763e0..0000000000000000000000000000000000000000 --- a/spaces/chompionsawelo/whisper_transcribe/tool/text_file_tool.py +++ /dev/null @@ -1,47 +0,0 @@ -from tool.file_name import * -from ui.ui_component import * -import gradio as gr -import os - - -def write_simple_transcribe_file(simple_transcribe_txt_list: list): - with open(dir_simple_transcribe_file, "w", encoding="utf-8") as file: - file.writelines(simple_transcribe_txt_list) - - -def read_simple_transcribe_file(): - with open(dir_simple_transcribe_file, "r", encoding="utf-8") as file: - simple_transcribe_txt_list = file.readlines() - return simple_transcribe_txt_list - - -def write_transcribe_subtitle_file(transcribe_txt_list: list, subtitle_txt_list: list, write_adjusted_file: bool): - transcribe = dir_base_transcribe_file - subtitle = dir_base_subtitle_file - if write_adjusted_file: - transcribe = dir_adjusted_transcribe_file - subtitle = dir_adjusted_subtitle_file - - with open(transcribe, "w", encoding="utf-8") as file: - file.writelines(transcribe_txt_list) - with open(subtitle, "w", encoding="utf-8") as file: - file.writelines(subtitle_txt_list) - - -def read_transcribe_subtitle_file(read_adjusted_file: bool): - transcribe = dir_base_transcribe_file - subtitle = dir_base_subtitle_file - if read_adjusted_file: - transcribe = dir_adjusted_transcribe_file - subtitle = dir_adjusted_subtitle_file - - if not os.path.exists(transcribe): - raise gr.Error(current_ui_lang["file_not_exist"] + ": Transcribe") - if not os.path.exists(subtitle): - raise gr.Error(current_ui_lang["file_not_exist"] + ": Subtitle") - - with open(transcribe, "r", encoding="utf-8") as file: - transcribe_txt_list = file.readlines() - with open(subtitle, "r", encoding="utf-8") as file: - subtitle_txt_list = file.readlines() - return transcribe_txt_list, subtitle_txt_list diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/jpcntx.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/jpcntx.py deleted file mode 100644 index 2f53bdda09e92da38e31cac1a6d415f4670137f7..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/jpcntx.py +++ /dev/null @@ -1,238 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Communicator client code. 
-# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from typing import List, Tuple, Union - -# This is hiragana 2-char sequence table, the number in each cell represents its frequency category -# fmt: off -jp2_char_context = ( - (0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1), - (2, 4, 0, 4, 0, 3, 0, 4, 0, 3, 4, 4, 4, 2, 4, 3, 3, 4, 3, 2, 3, 3, 4, 2, 3, 3, 3, 2, 4, 1, 4, 3, 3, 1, 5, 4, 3, 4, 3, 4, 3, 5, 3, 0, 3, 5, 4, 2, 0, 3, 1, 0, 3, 3, 0, 3, 3, 0, 1, 1, 0, 4, 3, 0, 3, 3, 0, 4, 0, 2, 0, 3, 5, 5, 5, 5, 4, 0, 4, 1, 0, 3, 4), - (0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2), - (0, 4, 0, 5, 0, 5, 0, 4, 0, 4, 5, 4, 4, 3, 5, 3, 5, 1, 5, 3, 4, 3, 4, 4, 3, 4, 3, 3, 4, 3, 5, 4, 4, 3, 5, 5, 3, 5, 5, 5, 3, 5, 5, 3, 4, 5, 5, 3, 1, 3, 2, 0, 3, 4, 0, 4, 2, 0, 4, 2, 1, 5, 3, 2, 3, 5, 0, 4, 0, 2, 0, 5, 4, 4, 5, 4, 5, 0, 4, 0, 0, 4, 4), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 3, 0, 4, 0, 3, 0, 3, 0, 4, 5, 4, 3, 3, 3, 3, 4, 3, 5, 4, 4, 3, 5, 4, 4, 3, 4, 3, 4, 4, 4, 4, 5, 3, 4, 4, 3, 4, 5, 5, 4, 5, 5, 1, 4, 5, 4, 3, 0, 3, 3, 1, 3, 3, 0, 4, 4, 0, 3, 3, 1, 5, 3, 3, 3, 5, 0, 4, 0, 3, 0, 4, 4, 3, 4, 3, 3, 0, 4, 1, 1, 3, 4), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 4, 0, 3, 0, 3, 0, 4, 0, 3, 4, 4, 3, 2, 2, 1, 2, 1, 3, 1, 3, 3, 3, 3, 3, 4, 3, 1, 3, 3, 5, 3, 3, 0, 4, 3, 0, 5, 4, 3, 3, 5, 4, 4, 3, 4, 4, 5, 0, 1, 2, 0, 1, 2, 0, 2, 2, 0, 1, 0, 0, 5, 2, 2, 1, 4, 0, 3, 0, 1, 0, 4, 4, 3, 5, 4, 3, 0, 2, 1, 0, 4, 3), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 3, 0, 5, 0, 4, 0, 2, 
1, 4, 4, 2, 4, 1, 4, 2, 4, 2, 4, 3, 3, 3, 4, 3, 3, 3, 3, 1, 4, 2, 3, 3, 3, 1, 4, 4, 1, 1, 1, 4, 3, 3, 2, 0, 2, 4, 3, 2, 0, 3, 3, 0, 3, 1, 1, 0, 0, 0, 3, 3, 0, 4, 2, 2, 3, 4, 0, 4, 0, 3, 0, 4, 4, 5, 3, 4, 4, 0, 3, 0, 0, 1, 4), - (1, 4, 0, 4, 0, 4, 0, 4, 0, 3, 5, 4, 4, 3, 4, 3, 5, 4, 3, 3, 4, 3, 5, 4, 4, 4, 4, 3, 4, 2, 4, 3, 3, 1, 5, 4, 3, 2, 4, 5, 4, 5, 5, 4, 4, 5, 4, 4, 0, 3, 2, 2, 3, 3, 0, 4, 3, 1, 3, 2, 1, 4, 3, 3, 4, 5, 0, 3, 0, 2, 0, 4, 5, 5, 4, 5, 4, 0, 4, 0, 0, 5, 4), - (0, 5, 0, 5, 0, 4, 0, 3, 0, 4, 4, 3, 4, 3, 3, 3, 4, 0, 4, 4, 4, 3, 4, 3, 4, 3, 3, 1, 4, 2, 4, 3, 4, 0, 5, 4, 1, 4, 5, 4, 4, 5, 3, 2, 4, 3, 4, 3, 2, 4, 1, 3, 3, 3, 2, 3, 2, 0, 4, 3, 3, 4, 3, 3, 3, 4, 0, 4, 0, 3, 0, 4, 5, 4, 4, 4, 3, 0, 4, 1, 0, 1, 3), - (0, 3, 1, 4, 0, 3, 0, 2, 0, 3, 4, 4, 3, 1, 4, 2, 3, 3, 4, 3, 4, 3, 4, 3, 4, 4, 3, 2, 3, 1, 5, 4, 4, 1, 4, 4, 3, 5, 4, 4, 3, 5, 5, 4, 3, 4, 4, 3, 1, 2, 3, 1, 2, 2, 0, 3, 2, 0, 3, 1, 0, 5, 3, 3, 3, 4, 3, 3, 3, 3, 4, 4, 4, 4, 5, 4, 2, 0, 3, 3, 2, 4, 3), - (0, 2, 0, 3, 0, 1, 0, 1, 0, 0, 3, 2, 0, 0, 2, 0, 1, 0, 2, 1, 3, 3, 3, 1, 2, 3, 1, 0, 1, 0, 4, 2, 1, 1, 3, 3, 0, 4, 3, 3, 1, 4, 3, 3, 0, 3, 3, 2, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 4, 1, 0, 2, 3, 2, 2, 2, 1, 3, 3, 3, 4, 4, 3, 2, 0, 3, 1, 0, 3, 3), - (0, 4, 0, 4, 0, 3, 0, 3, 0, 4, 4, 4, 3, 3, 3, 3, 3, 3, 4, 3, 4, 2, 4, 3, 4, 3, 3, 2, 4, 3, 4, 5, 4, 1, 4, 5, 3, 5, 4, 5, 3, 5, 4, 0, 3, 5, 5, 3, 1, 3, 3, 2, 2, 3, 0, 3, 4, 1, 3, 3, 2, 4, 3, 3, 3, 4, 0, 4, 0, 3, 0, 4, 5, 4, 4, 5, 3, 0, 4, 1, 0, 3, 4), - (0, 2, 0, 3, 0, 3, 0, 0, 0, 2, 2, 2, 1, 0, 1, 0, 0, 0, 3, 0, 3, 0, 3, 0, 1, 3, 1, 0, 3, 1, 3, 3, 3, 1, 3, 3, 3, 0, 1, 3, 1, 3, 4, 0, 0, 3, 1, 1, 0, 3, 2, 0, 0, 0, 0, 1, 3, 0, 1, 0, 0, 3, 3, 2, 0, 3, 0, 0, 0, 0, 0, 3, 4, 3, 4, 3, 3, 0, 3, 0, 0, 2, 3), - (2, 3, 0, 3, 0, 2, 0, 1, 0, 3, 3, 4, 3, 1, 3, 1, 1, 1, 3, 1, 4, 3, 4, 3, 3, 3, 0, 0, 3, 1, 5, 4, 3, 1, 4, 3, 2, 5, 5, 4, 4, 4, 4, 3, 3, 4, 4, 4, 0, 2, 1, 1, 3, 2, 0, 1, 2, 0, 0, 1, 0, 4, 1, 3, 3, 3, 0, 3, 0, 1, 0, 4, 4, 4, 5, 5, 3, 0, 2, 0, 0, 4, 4), - (0, 2, 0, 1, 0, 3, 1, 3, 0, 2, 3, 3, 3, 0, 3, 1, 0, 0, 3, 0, 3, 2, 3, 1, 3, 2, 1, 1, 0, 0, 4, 2, 1, 0, 2, 3, 1, 4, 3, 2, 0, 4, 4, 3, 1, 3, 1, 3, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 4, 1, 1, 1, 2, 0, 3, 0, 0, 0, 3, 4, 2, 4, 3, 2, 0, 1, 0, 0, 3, 3), - (0, 1, 0, 4, 0, 5, 0, 4, 0, 2, 4, 4, 2, 3, 3, 2, 3, 3, 5, 3, 3, 3, 4, 3, 4, 2, 3, 0, 4, 3, 3, 3, 4, 1, 4, 3, 2, 1, 5, 5, 3, 4, 5, 1, 3, 5, 4, 2, 0, 3, 3, 0, 1, 3, 0, 4, 2, 0, 1, 3, 1, 4, 3, 3, 3, 3, 0, 3, 0, 1, 0, 3, 4, 4, 4, 5, 5, 0, 3, 0, 1, 4, 5), - (0, 2, 0, 3, 0, 3, 0, 0, 0, 2, 3, 1, 3, 0, 4, 0, 1, 1, 3, 0, 3, 4, 3, 2, 3, 1, 0, 3, 3, 2, 3, 1, 3, 0, 2, 3, 0, 2, 1, 4, 1, 2, 2, 0, 0, 3, 3, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 2, 2, 0, 3, 2, 1, 3, 3, 0, 2, 0, 2, 0, 0, 3, 3, 1, 2, 4, 0, 3, 0, 2, 2, 3), - (2, 4, 0, 5, 0, 4, 0, 4, 0, 2, 4, 4, 4, 3, 4, 3, 3, 3, 1, 2, 4, 3, 4, 3, 4, 4, 5, 0, 3, 3, 3, 3, 2, 0, 4, 3, 1, 4, 3, 4, 1, 4, 4, 3, 3, 4, 4, 3, 1, 2, 3, 0, 4, 2, 0, 4, 1, 0, 3, 3, 0, 4, 3, 3, 3, 4, 0, 4, 0, 2, 0, 3, 5, 3, 4, 5, 2, 0, 3, 0, 0, 4, 5), - (0, 3, 0, 4, 0, 1, 0, 1, 0, 1, 3, 2, 2, 1, 3, 0, 3, 0, 2, 0, 2, 0, 3, 0, 2, 0, 0, 0, 1, 0, 1, 1, 0, 0, 3, 1, 0, 0, 0, 4, 0, 3, 1, 0, 2, 1, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 4, 2, 2, 3, 1, 0, 3, 0, 0, 0, 1, 4, 4, 4, 3, 0, 0, 4, 0, 0, 1, 4), - (1, 4, 1, 5, 0, 3, 0, 3, 0, 4, 5, 4, 4, 3, 5, 3, 3, 4, 4, 3, 4, 1, 3, 3, 3, 3, 2, 1, 4, 1, 5, 4, 3, 1, 4, 4, 3, 5, 4, 4, 3, 5, 4, 3, 3, 4, 4, 4, 0, 3, 3, 1, 2, 3, 0, 3, 1, 0, 3, 3, 0, 5, 4, 4, 4, 4, 4, 4, 3, 3, 5, 4, 4, 3, 3, 5, 4, 0, 3, 2, 0, 4, 4), - (0, 2, 0, 3, 0, 1, 0, 0, 0, 1, 3, 3, 
3, 2, 4, 1, 3, 0, 3, 1, 3, 0, 2, 2, 1, 1, 0, 0, 2, 0, 4, 3, 1, 0, 4, 3, 0, 4, 4, 4, 1, 4, 3, 1, 1, 3, 3, 1, 0, 2, 0, 0, 1, 3, 0, 0, 0, 0, 2, 0, 0, 4, 3, 2, 4, 3, 5, 4, 3, 3, 3, 4, 3, 3, 4, 3, 3, 0, 2, 1, 0, 3, 3), - (0, 2, 0, 4, 0, 3, 0, 2, 0, 2, 5, 5, 3, 4, 4, 4, 4, 1, 4, 3, 3, 0, 4, 3, 4, 3, 1, 3, 3, 2, 4, 3, 0, 3, 4, 3, 0, 3, 4, 4, 2, 4, 4, 0, 4, 5, 3, 3, 2, 2, 1, 1, 1, 2, 0, 1, 5, 0, 3, 3, 2, 4, 3, 3, 3, 4, 0, 3, 0, 2, 0, 4, 4, 3, 5, 5, 0, 0, 3, 0, 2, 3, 3), - (0, 3, 0, 4, 0, 3, 0, 1, 0, 3, 4, 3, 3, 1, 3, 3, 3, 0, 3, 1, 3, 0, 4, 3, 3, 1, 1, 0, 3, 0, 3, 3, 0, 0, 4, 4, 0, 1, 5, 4, 3, 3, 5, 0, 3, 3, 4, 3, 0, 2, 0, 1, 1, 1, 0, 1, 3, 0, 1, 2, 1, 3, 3, 2, 3, 3, 0, 3, 0, 1, 0, 1, 3, 3, 4, 4, 1, 0, 1, 2, 2, 1, 3), - (0, 1, 0, 4, 0, 4, 0, 3, 0, 1, 3, 3, 3, 2, 3, 1, 1, 0, 3, 0, 3, 3, 4, 3, 2, 4, 2, 0, 1, 0, 4, 3, 2, 0, 4, 3, 0, 5, 3, 3, 2, 4, 4, 4, 3, 3, 3, 4, 0, 1, 3, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 4, 2, 3, 3, 3, 0, 3, 0, 0, 0, 4, 4, 4, 5, 3, 2, 0, 3, 3, 0, 3, 5), - (0, 2, 0, 3, 0, 0, 0, 3, 0, 1, 3, 0, 2, 0, 0, 0, 1, 0, 3, 1, 1, 3, 3, 0, 0, 3, 0, 0, 3, 0, 2, 3, 1, 0, 3, 1, 0, 3, 3, 2, 0, 4, 2, 2, 0, 2, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 2, 0, 1, 0, 1, 0, 0, 0, 1, 3, 1, 2, 0, 0, 0, 1, 0, 0, 1, 4), - (0, 3, 0, 3, 0, 5, 0, 1, 0, 2, 4, 3, 1, 3, 3, 2, 1, 1, 5, 2, 1, 0, 5, 1, 2, 0, 0, 0, 3, 3, 2, 2, 3, 2, 4, 3, 0, 0, 3, 3, 1, 3, 3, 0, 2, 5, 3, 4, 0, 3, 3, 0, 1, 2, 0, 2, 2, 0, 3, 2, 0, 2, 2, 3, 3, 3, 0, 2, 0, 1, 0, 3, 4, 4, 2, 5, 4, 0, 3, 0, 0, 3, 5), - (0, 3, 0, 3, 0, 3, 0, 1, 0, 3, 3, 3, 3, 0, 3, 0, 2, 0, 2, 1, 1, 0, 2, 0, 1, 0, 0, 0, 2, 1, 0, 0, 1, 0, 3, 2, 0, 0, 3, 3, 1, 2, 3, 1, 0, 3, 3, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 2, 3, 1, 2, 3, 0, 3, 0, 1, 0, 3, 2, 1, 0, 4, 3, 0, 1, 1, 0, 3, 3), - (0, 4, 0, 5, 0, 3, 0, 3, 0, 4, 5, 5, 4, 3, 5, 3, 4, 3, 5, 3, 3, 2, 5, 3, 4, 4, 4, 3, 4, 3, 4, 5, 5, 3, 4, 4, 3, 4, 4, 5, 4, 4, 4, 3, 4, 5, 5, 4, 2, 3, 4, 2, 3, 4, 0, 3, 3, 1, 4, 3, 2, 4, 3, 3, 5, 5, 0, 3, 0, 3, 0, 5, 5, 5, 5, 4, 4, 0, 4, 0, 1, 4, 4), - (0, 4, 0, 4, 0, 3, 0, 3, 0, 3, 5, 4, 4, 2, 3, 2, 5, 1, 3, 2, 5, 1, 4, 2, 3, 2, 3, 3, 4, 3, 3, 3, 3, 2, 5, 4, 1, 3, 3, 5, 3, 4, 4, 0, 4, 4, 3, 1, 1, 3, 1, 0, 2, 3, 0, 2, 3, 0, 3, 0, 0, 4, 3, 1, 3, 4, 0, 3, 0, 2, 0, 4, 4, 4, 3, 4, 5, 0, 4, 0, 0, 3, 4), - (0, 3, 0, 3, 0, 3, 1, 2, 0, 3, 4, 4, 3, 3, 3, 0, 2, 2, 4, 3, 3, 1, 3, 3, 3, 1, 1, 0, 3, 1, 4, 3, 2, 3, 4, 4, 2, 4, 4, 4, 3, 4, 4, 3, 2, 4, 4, 3, 1, 3, 3, 1, 3, 3, 0, 4, 1, 0, 2, 2, 1, 4, 3, 2, 3, 3, 5, 4, 3, 3, 5, 4, 4, 3, 3, 0, 4, 0, 3, 2, 2, 4, 4), - (0, 2, 0, 1, 0, 0, 0, 0, 0, 1, 2, 1, 3, 0, 0, 0, 0, 0, 2, 0, 1, 2, 1, 0, 0, 1, 0, 0, 0, 0, 3, 0, 0, 1, 0, 1, 1, 3, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 0, 3, 4, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1), - (0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 4, 0, 4, 1, 4, 0, 3, 0, 4, 0, 3, 0, 4, 0, 3, 0, 3, 0, 4, 1, 5, 1, 4, 0, 0, 3, 0, 5, 0, 5, 2, 0, 1, 0, 0, 0, 2, 1, 4, 0, 1, 3, 0, 0, 3, 0, 0, 3, 1, 1, 4, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0), - (1, 4, 0, 5, 0, 3, 0, 2, 0, 3, 5, 4, 4, 3, 4, 3, 5, 3, 4, 3, 3, 0, 4, 3, 3, 3, 3, 3, 3, 2, 4, 4, 3, 1, 3, 4, 4, 5, 4, 4, 3, 4, 4, 1, 3, 5, 4, 3, 3, 3, 1, 2, 2, 3, 3, 1, 3, 1, 3, 3, 3, 5, 3, 3, 4, 5, 0, 3, 0, 3, 0, 3, 4, 3, 4, 4, 3, 0, 3, 0, 2, 4, 3), - (0, 1, 0, 4, 0, 0, 0, 0, 0, 1, 4, 0, 4, 1, 4, 2, 4, 0, 3, 0, 1, 0, 1, 0, 0, 0, 0, 0, 2, 0, 3, 1, 1, 1, 0, 3, 0, 0, 0, 1, 2, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 3, 0, 0, 0, 0, 3, 2, 0, 2, 2, 0, 1, 0, 0, 0, 2, 3, 2, 3, 3, 0, 0, 0, 0, 2, 1, 0), - (0, 5, 1, 5, 0, 3, 0, 3, 0, 5, 4, 4, 5, 1, 5, 3, 
3, 0, 4, 3, 4, 3, 5, 3, 4, 3, 3, 2, 4, 3, 4, 3, 3, 0, 3, 3, 1, 4, 4, 3, 4, 4, 4, 3, 4, 5, 5, 3, 2, 3, 1, 1, 3, 3, 1, 3, 1, 1, 3, 3, 2, 4, 5, 3, 3, 5, 0, 4, 0, 3, 0, 4, 4, 3, 5, 3, 3, 0, 3, 4, 0, 4, 3), - (0, 5, 0, 5, 0, 3, 0, 2, 0, 4, 4, 3, 5, 2, 4, 3, 3, 3, 4, 4, 4, 3, 5, 3, 5, 3, 3, 1, 4, 0, 4, 3, 3, 0, 3, 3, 0, 4, 4, 4, 4, 5, 4, 3, 3, 5, 5, 3, 2, 3, 1, 2, 3, 2, 0, 1, 0, 0, 3, 2, 2, 4, 4, 3, 1, 5, 0, 4, 0, 3, 0, 4, 3, 1, 3, 2, 1, 0, 3, 3, 0, 3, 3), - (0, 4, 0, 5, 0, 5, 0, 4, 0, 4, 5, 5, 5, 3, 4, 3, 3, 2, 5, 4, 4, 3, 5, 3, 5, 3, 4, 0, 4, 3, 4, 4, 3, 2, 4, 4, 3, 4, 5, 4, 4, 5, 5, 0, 3, 5, 5, 4, 1, 3, 3, 2, 3, 3, 1, 3, 1, 0, 4, 3, 1, 4, 4, 3, 4, 5, 0, 4, 0, 2, 0, 4, 3, 4, 4, 3, 3, 0, 4, 0, 0, 5, 5), - (0, 4, 0, 4, 0, 5, 0, 1, 1, 3, 3, 4, 4, 3, 4, 1, 3, 0, 5, 1, 3, 0, 3, 1, 3, 1, 1, 0, 3, 0, 3, 3, 4, 0, 4, 3, 0, 4, 4, 4, 3, 4, 4, 0, 3, 5, 4, 1, 0, 3, 0, 0, 2, 3, 0, 3, 1, 0, 3, 1, 0, 3, 2, 1, 3, 5, 0, 3, 0, 1, 0, 3, 2, 3, 3, 4, 4, 0, 2, 2, 0, 4, 4), - (2, 4, 0, 5, 0, 4, 0, 3, 0, 4, 5, 5, 4, 3, 5, 3, 5, 3, 5, 3, 5, 2, 5, 3, 4, 3, 3, 4, 3, 4, 5, 3, 2, 1, 5, 4, 3, 2, 3, 4, 5, 3, 4, 1, 2, 5, 4, 3, 0, 3, 3, 0, 3, 2, 0, 2, 3, 0, 4, 1, 0, 3, 4, 3, 3, 5, 0, 3, 0, 1, 0, 4, 5, 5, 5, 4, 3, 0, 4, 2, 0, 3, 5), - (0, 5, 0, 4, 0, 4, 0, 2, 0, 5, 4, 3, 4, 3, 4, 3, 3, 3, 4, 3, 4, 2, 5, 3, 5, 3, 4, 1, 4, 3, 4, 4, 4, 0, 3, 5, 0, 4, 4, 4, 4, 5, 3, 1, 3, 4, 5, 3, 3, 3, 3, 3, 3, 3, 0, 2, 2, 0, 3, 3, 2, 4, 3, 3, 3, 5, 3, 4, 1, 3, 3, 5, 3, 2, 0, 0, 0, 0, 4, 3, 1, 3, 3), - (0, 1, 0, 3, 0, 3, 0, 1, 0, 1, 3, 3, 3, 2, 3, 3, 3, 0, 3, 0, 0, 0, 3, 1, 3, 0, 0, 0, 2, 2, 2, 3, 0, 0, 3, 2, 0, 1, 2, 4, 1, 3, 3, 0, 0, 3, 3, 3, 0, 1, 0, 0, 2, 1, 0, 0, 3, 0, 3, 1, 0, 3, 0, 0, 1, 3, 0, 2, 0, 1, 0, 3, 3, 1, 3, 3, 0, 0, 1, 1, 0, 3, 3), - (0, 2, 0, 3, 0, 2, 1, 4, 0, 2, 2, 3, 1, 1, 3, 1, 1, 0, 2, 0, 3, 1, 2, 3, 1, 3, 0, 0, 1, 0, 4, 3, 2, 3, 3, 3, 1, 4, 2, 3, 3, 3, 3, 1, 0, 3, 1, 4, 0, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 1, 0, 3, 1, 3, 2, 2, 0, 1, 0, 0, 0, 2, 3, 3, 3, 1, 0, 0, 0, 0, 0, 2, 3), - (0, 5, 0, 4, 0, 5, 0, 2, 0, 4, 5, 5, 3, 3, 4, 3, 3, 1, 5, 4, 4, 2, 4, 4, 4, 3, 4, 2, 4, 3, 5, 5, 4, 3, 3, 4, 3, 3, 5, 5, 4, 5, 5, 1, 3, 4, 5, 3, 1, 4, 3, 1, 3, 3, 0, 3, 3, 1, 4, 3, 1, 4, 5, 3, 3, 5, 0, 4, 0, 3, 0, 5, 3, 3, 1, 4, 3, 0, 4, 0, 1, 5, 3), - (0, 5, 0, 5, 0, 4, 0, 2, 0, 4, 4, 3, 4, 3, 3, 3, 3, 3, 5, 4, 4, 4, 4, 4, 4, 5, 3, 3, 5, 2, 4, 4, 4, 3, 4, 4, 3, 3, 4, 4, 5, 5, 3, 3, 4, 3, 4, 3, 3, 4, 3, 3, 3, 3, 1, 2, 2, 1, 4, 3, 3, 5, 4, 4, 3, 4, 0, 4, 0, 3, 0, 4, 4, 4, 4, 4, 1, 0, 4, 2, 0, 2, 4), - (0, 4, 0, 4, 0, 3, 0, 1, 0, 3, 5, 2, 3, 0, 3, 0, 2, 1, 4, 2, 3, 3, 4, 1, 4, 3, 3, 2, 4, 1, 3, 3, 3, 0, 3, 3, 0, 0, 3, 3, 3, 5, 3, 3, 3, 3, 3, 2, 0, 2, 0, 0, 2, 0, 0, 2, 0, 0, 1, 0, 0, 3, 1, 2, 2, 3, 0, 3, 0, 2, 0, 4, 4, 3, 3, 4, 1, 0, 3, 0, 0, 2, 4), - (0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 2, 0, 0, 0, 0, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 3, 1, 3, 0, 3, 2, 0, 0, 0, 1, 0, 3, 2, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 0, 2, 0, 0, 0, 0, 0, 0, 2), - (0, 2, 1, 3, 0, 2, 0, 2, 0, 3, 3, 3, 3, 1, 3, 1, 3, 3, 3, 3, 3, 3, 4, 2, 2, 1, 2, 1, 4, 0, 4, 3, 1, 3, 3, 3, 2, 4, 3, 5, 4, 3, 3, 3, 3, 3, 3, 3, 0, 1, 3, 0, 2, 0, 0, 1, 0, 0, 1, 0, 0, 4, 2, 0, 2, 3, 0, 3, 3, 0, 3, 3, 4, 2, 3, 1, 4, 0, 1, 2, 0, 2, 3), - (0, 3, 0, 3, 0, 1, 0, 3, 0, 2, 3, 3, 3, 0, 3, 1, 2, 0, 3, 3, 2, 3, 3, 2, 3, 2, 3, 1, 3, 0, 4, 3, 2, 0, 3, 3, 1, 4, 3, 3, 2, 3, 4, 3, 1, 3, 3, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 4, 1, 1, 0, 3, 0, 3, 1, 0, 2, 3, 3, 3, 3, 3, 1, 0, 0, 2, 0, 3, 3), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 2, 0, 3, 0, 0, 0, 0, 0, 
0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0, 3, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 2, 0, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3), - (0, 2, 0, 3, 1, 3, 0, 3, 0, 2, 3, 3, 3, 1, 3, 1, 3, 1, 3, 1, 3, 3, 3, 1, 3, 0, 2, 3, 1, 1, 4, 3, 3, 2, 3, 3, 1, 2, 2, 4, 1, 3, 3, 0, 1, 4, 2, 3, 0, 1, 3, 0, 3, 0, 0, 1, 3, 0, 2, 0, 0, 3, 3, 2, 1, 3, 0, 3, 0, 2, 0, 3, 4, 4, 4, 3, 1, 0, 3, 0, 0, 3, 3), - (0, 2, 0, 1, 0, 2, 0, 0, 0, 1, 3, 2, 2, 1, 3, 0, 1, 1, 3, 0, 3, 2, 3, 1, 2, 0, 2, 0, 1, 1, 3, 3, 3, 0, 3, 3, 1, 1, 2, 3, 2, 3, 3, 1, 2, 3, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 3, 0, 1, 0, 0, 2, 1, 2, 1, 3, 0, 3, 0, 0, 0, 3, 4, 4, 4, 3, 2, 0, 2, 0, 0, 2, 4), - (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 3), - (0, 3, 0, 3, 0, 2, 0, 3, 0, 3, 3, 3, 2, 3, 2, 2, 2, 0, 3, 1, 3, 3, 3, 2, 3, 3, 0, 0, 3, 0, 3, 2, 2, 0, 2, 3, 1, 4, 3, 4, 3, 3, 2, 3, 1, 5, 4, 4, 0, 3, 1, 2, 1, 3, 0, 3, 1, 1, 2, 0, 2, 3, 1, 3, 1, 3, 0, 3, 0, 1, 0, 3, 3, 4, 4, 2, 1, 0, 2, 1, 0, 2, 4), - (0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 4, 2, 5, 1, 4, 0, 2, 0, 2, 1, 3, 1, 4, 0, 2, 1, 0, 0, 2, 1, 4, 1, 1, 0, 3, 3, 0, 5, 1, 3, 2, 3, 3, 1, 0, 3, 2, 3, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 4, 0, 1, 0, 3, 0, 2, 0, 1, 0, 3, 3, 3, 4, 3, 3, 0, 0, 0, 0, 2, 3), - (0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 1, 0, 0, 0, 0, 0, 3), - (0, 1, 0, 3, 0, 4, 0, 3, 0, 2, 4, 3, 1, 0, 3, 2, 2, 1, 3, 1, 2, 2, 3, 1, 1, 1, 2, 1, 3, 0, 1, 2, 0, 1, 3, 2, 1, 3, 0, 5, 5, 1, 0, 0, 1, 3, 2, 1, 0, 3, 0, 0, 1, 0, 0, 0, 0, 0, 3, 4, 0, 1, 1, 1, 3, 2, 0, 2, 0, 1, 0, 2, 3, 3, 1, 2, 3, 0, 1, 0, 1, 0, 4), - (0, 0, 0, 1, 0, 3, 0, 3, 0, 2, 2, 1, 0, 0, 4, 0, 3, 0, 3, 1, 3, 0, 3, 0, 3, 0, 1, 0, 3, 0, 3, 1, 3, 0, 3, 3, 0, 0, 1, 2, 1, 1, 1, 0, 1, 2, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 2, 0, 0, 2, 0, 0, 0, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, 1, 4), - (0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 3, 1, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 0, 2, 0, 2, 3, 0, 0, 2, 2, 3, 1, 2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 2, 0, 0, 0, 0, 2, 3), - (2, 4, 0, 5, 0, 5, 0, 4, 0, 3, 4, 3, 3, 3, 4, 3, 3, 3, 4, 3, 4, 4, 5, 4, 5, 5, 5, 2, 3, 0, 5, 5, 4, 1, 5, 4, 3, 1, 5, 4, 3, 4, 4, 3, 3, 4, 3, 3, 0, 3, 2, 0, 2, 3, 0, 3, 0, 0, 3, 3, 0, 5, 3, 2, 3, 3, 0, 3, 0, 3, 0, 3, 4, 5, 4, 5, 3, 0, 4, 3, 0, 3, 4), - (0, 3, 0, 3, 0, 3, 0, 3, 0, 3, 3, 4, 3, 2, 3, 2, 3, 0, 4, 3, 3, 3, 3, 3, 3, 3, 3, 0, 3, 2, 4, 3, 3, 1, 3, 4, 3, 4, 4, 4, 3, 4, 4, 3, 2, 4, 4, 1, 0, 2, 0, 0, 1, 1, 0, 2, 0, 0, 3, 1, 0, 5, 3, 2, 1, 3, 0, 3, 0, 1, 2, 4, 3, 2, 4, 3, 3, 0, 3, 2, 0, 4, 4), - (0, 3, 0, 3, 0, 1, 0, 0, 0, 1, 4, 3, 3, 2, 3, 1, 3, 1, 4, 2, 3, 2, 4, 2, 3, 4, 3, 0, 2, 2, 3, 3, 3, 0, 3, 3, 3, 0, 3, 4, 1, 3, 3, 0, 3, 4, 3, 3, 0, 1, 1, 0, 1, 0, 0, 0, 4, 0, 3, 0, 0, 3, 1, 2, 1, 3, 0, 4, 0, 1, 0, 4, 3, 3, 4, 3, 3, 0, 2, 0, 0, 3, 3), - (0, 3, 0, 4, 0, 1, 0, 3, 0, 3, 4, 3, 3, 0, 3, 3, 3, 1, 3, 1, 3, 3, 4, 3, 3, 3, 0, 0, 3, 1, 5, 3, 3, 1, 3, 3, 2, 5, 4, 3, 3, 4, 5, 3, 2, 5, 3, 4, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 1, 1, 0, 4, 2, 2, 1, 3, 0, 3, 0, 2, 0, 4, 4, 3, 5, 3, 2, 0, 1, 1, 0, 3, 4), - (0, 5, 0, 4, 0, 5, 0, 2, 0, 4, 4, 3, 3, 2, 3, 3, 3, 1, 4, 3, 4, 1, 5, 3, 
4, 3, 4, 0, 4, 2, 4, 3, 4, 1, 5, 4, 0, 4, 4, 4, 4, 5, 4, 1, 3, 5, 4, 2, 1, 4, 1, 1, 3, 2, 0, 3, 1, 0, 3, 2, 1, 4, 3, 3, 3, 4, 0, 4, 0, 3, 0, 4, 4, 4, 3, 3, 3, 0, 4, 2, 0, 3, 4), - (1, 4, 0, 4, 0, 3, 0, 1, 0, 3, 3, 3, 1, 1, 3, 3, 2, 2, 3, 3, 1, 0, 3, 2, 2, 1, 2, 0, 3, 1, 2, 1, 2, 0, 3, 2, 0, 2, 2, 3, 3, 4, 3, 0, 3, 3, 1, 2, 0, 1, 1, 3, 1, 2, 0, 0, 3, 0, 1, 1, 0, 3, 2, 2, 3, 3, 0, 3, 0, 0, 0, 2, 3, 3, 4, 3, 3, 0, 1, 0, 0, 1, 4), - (0, 4, 0, 4, 0, 4, 0, 0, 0, 3, 4, 4, 3, 1, 4, 2, 3, 2, 3, 3, 3, 1, 4, 3, 4, 0, 3, 0, 4, 2, 3, 3, 2, 2, 5, 4, 2, 1, 3, 4, 3, 4, 3, 1, 3, 3, 4, 2, 0, 2, 1, 0, 3, 3, 0, 0, 2, 0, 3, 1, 0, 4, 4, 3, 4, 3, 0, 4, 0, 1, 0, 2, 4, 4, 4, 4, 4, 0, 3, 2, 0, 3, 3), - (0, 0, 0, 1, 0, 4, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 3, 2, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2), - (0, 2, 0, 3, 0, 4, 0, 4, 0, 1, 3, 3, 3, 0, 4, 0, 2, 1, 2, 1, 1, 1, 2, 0, 3, 1, 1, 0, 1, 0, 3, 1, 0, 0, 3, 3, 2, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 2, 0, 2, 2, 0, 3, 1, 0, 0, 1, 0, 1, 1, 0, 1, 2, 0, 3, 0, 0, 0, 0, 1, 0, 0, 3, 3, 4, 3, 1, 0, 1, 0, 3, 0, 2), - (0, 0, 0, 3, 0, 5, 0, 0, 0, 0, 1, 0, 2, 0, 3, 1, 0, 1, 3, 0, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 4, 0, 0, 0, 2, 3, 0, 1, 4, 1, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 3), - (0, 2, 0, 5, 0, 5, 0, 1, 0, 2, 4, 3, 3, 2, 5, 1, 3, 2, 3, 3, 3, 0, 4, 1, 2, 0, 3, 0, 4, 0, 2, 2, 1, 1, 5, 3, 0, 0, 1, 4, 2, 3, 2, 0, 3, 3, 3, 2, 0, 2, 4, 1, 1, 2, 0, 1, 1, 0, 3, 1, 0, 1, 3, 1, 2, 3, 0, 2, 0, 0, 0, 1, 3, 5, 4, 4, 4, 0, 3, 0, 0, 1, 3), - (0, 4, 0, 5, 0, 4, 0, 4, 0, 4, 5, 4, 3, 3, 4, 3, 3, 3, 4, 3, 4, 4, 5, 3, 4, 5, 4, 2, 4, 2, 3, 4, 3, 1, 4, 4, 1, 3, 5, 4, 4, 5, 5, 4, 4, 5, 5, 5, 2, 3, 3, 1, 4, 3, 1, 3, 3, 0, 3, 3, 1, 4, 3, 4, 4, 4, 0, 3, 0, 4, 0, 3, 3, 4, 4, 5, 0, 0, 4, 3, 0, 4, 5), - (0, 4, 0, 4, 0, 3, 0, 3, 0, 3, 4, 4, 4, 3, 3, 2, 4, 3, 4, 3, 4, 3, 5, 3, 4, 3, 2, 1, 4, 2, 4, 4, 3, 1, 3, 4, 2, 4, 5, 5, 3, 4, 5, 4, 1, 5, 4, 3, 0, 3, 2, 2, 3, 2, 1, 3, 1, 0, 3, 3, 3, 5, 3, 3, 3, 5, 4, 4, 2, 3, 3, 4, 3, 3, 3, 2, 1, 0, 3, 2, 1, 4, 3), - (0, 4, 0, 5, 0, 4, 0, 3, 0, 3, 5, 5, 3, 2, 4, 3, 4, 0, 5, 4, 4, 1, 4, 4, 4, 3, 3, 3, 4, 3, 5, 5, 2, 3, 3, 4, 1, 2, 5, 5, 3, 5, 5, 2, 3, 5, 5, 4, 0, 3, 2, 0, 3, 3, 1, 1, 5, 1, 4, 1, 0, 4, 3, 2, 3, 5, 0, 4, 0, 3, 0, 5, 4, 3, 4, 3, 0, 0, 4, 1, 0, 4, 4), - (1, 3, 0, 4, 0, 2, 0, 2, 0, 2, 5, 5, 3, 3, 3, 3, 3, 0, 4, 2, 3, 4, 4, 4, 3, 4, 0, 0, 3, 4, 5, 4, 3, 3, 3, 3, 2, 5, 5, 4, 5, 5, 5, 4, 3, 5, 5, 5, 1, 3, 1, 0, 1, 0, 0, 3, 2, 0, 4, 2, 0, 5, 2, 3, 2, 4, 1, 3, 0, 3, 0, 4, 5, 4, 5, 4, 3, 0, 4, 2, 0, 5, 4), - (0, 3, 0, 4, 0, 5, 0, 3, 0, 3, 4, 4, 3, 2, 3, 2, 3, 3, 3, 3, 3, 2, 4, 3, 3, 2, 2, 0, 3, 3, 3, 3, 3, 1, 3, 3, 3, 0, 4, 4, 3, 4, 4, 1, 1, 4, 4, 2, 0, 3, 1, 0, 1, 1, 0, 4, 1, 0, 2, 3, 1, 3, 3, 1, 3, 4, 0, 3, 0, 1, 0, 3, 1, 3, 0, 0, 1, 0, 2, 0, 0, 4, 4), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), - (0, 3, 0, 3, 0, 2, 0, 3, 0, 1, 5, 4, 3, 3, 3, 1, 4, 2, 1, 2, 3, 4, 4, 2, 4, 4, 5, 0, 3, 1, 4, 3, 4, 0, 4, 3, 3, 3, 2, 3, 2, 5, 3, 4, 3, 2, 2, 3, 0, 0, 3, 0, 2, 1, 0, 1, 2, 0, 0, 0, 0, 2, 1, 1, 3, 1, 0, 2, 0, 4, 0, 3, 4, 4, 4, 5, 2, 0, 2, 0, 0, 1, 3), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 
0, 0, 4, 2, 1, 1, 0, 1, 0, 3, 2, 0, 0, 3, 1, 1, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 1, 0, 0, 0, 2, 0, 0, 0, 1, 4, 0, 4, 2, 1, 0, 0, 0, 0, 0, 1), - (0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 3, 1, 0, 0, 0, 2, 0, 2, 1, 0, 0, 1, 2, 1, 0, 1, 1, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 1, 0, 0, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2), - (0, 4, 0, 4, 0, 4, 0, 3, 0, 4, 4, 3, 4, 2, 4, 3, 2, 0, 4, 4, 4, 3, 5, 3, 5, 3, 3, 2, 4, 2, 4, 3, 4, 3, 1, 4, 0, 2, 3, 4, 4, 4, 3, 3, 3, 4, 4, 4, 3, 4, 1, 3, 4, 3, 2, 1, 2, 1, 3, 3, 3, 4, 4, 3, 3, 5, 0, 4, 0, 3, 0, 4, 3, 3, 3, 2, 1, 0, 3, 0, 0, 3, 3), - (0, 4, 0, 3, 0, 3, 0, 3, 0, 3, 5, 5, 3, 3, 3, 3, 4, 3, 4, 3, 3, 3, 4, 4, 4, 3, 3, 3, 3, 4, 3, 5, 3, 3, 1, 3, 2, 4, 5, 5, 5, 5, 4, 3, 4, 5, 5, 3, 2, 2, 3, 3, 3, 3, 2, 3, 3, 1, 2, 3, 2, 4, 3, 3, 3, 4, 0, 4, 0, 2, 0, 4, 3, 2, 2, 1, 2, 0, 3, 0, 0, 4, 1), -) -# fmt: on - - -class JapaneseContextAnalysis: - NUM_OF_CATEGORY = 6 - DONT_KNOW = -1 - ENOUGH_REL_THRESHOLD = 100 - MAX_REL_THRESHOLD = 1000 - MINIMUM_DATA_THRESHOLD = 4 - - def __init__(self) -> None: - self._total_rel = 0 - self._rel_sample: List[int] = [] - self._need_to_skip_char_num = 0 - self._last_char_order = -1 - self._done = False - self.reset() - - def reset(self) -> None: - self._total_rel = 0 # total sequence received - # category counters, each integer counts sequence in its category - self._rel_sample = [0] * self.NUM_OF_CATEGORY - # if last byte in current buffer is not the last byte of a character, - # we need to know how many bytes to skip in next buffer - self._need_to_skip_char_num = 0 - self._last_char_order = -1 # The order of previous char - # If this flag is set to True, detection is done and conclusion has - # been made - self._done = False - - def feed(self, byte_str: Union[bytes, bytearray], num_bytes: int) -> None: - if self._done: - return - - # The buffer we got is byte oriented, and a character may span in more than one - # buffers. In case the last one or two byte in last buffer is not - # complete, we record how many byte needed to complete that character - # and skip these bytes here. We can choose to record those bytes as - # well and analyse the character once it is complete, but since a - # character will not make much difference, by simply skipping - # this character will simply our logic and improve performance. - i = self._need_to_skip_char_num - while i < num_bytes: - order, char_len = self.get_order(byte_str[i : i + 2]) - i += char_len - if i > num_bytes: - self._need_to_skip_char_num = i - num_bytes - self._last_char_order = -1 - else: - if (order != -1) and (self._last_char_order != -1): - self._total_rel += 1 - if self._total_rel > self.MAX_REL_THRESHOLD: - self._done = True - break - self._rel_sample[ - jp2_char_context[self._last_char_order][order] - ] += 1 - self._last_char_order = order - - def got_enough_data(self) -> bool: - return self._total_rel > self.ENOUGH_REL_THRESHOLD - - def get_confidence(self) -> float: - # This is just one way to calculate confidence. It works well for me. 
- if self._total_rel > self.MINIMUM_DATA_THRESHOLD: - return (self._total_rel - self._rel_sample[0]) / self._total_rel - return self.DONT_KNOW - - def get_order(self, _: Union[bytes, bytearray]) -> Tuple[int, int]: - return -1, 1 - - -class SJISContextAnalysis(JapaneseContextAnalysis): - def __init__(self) -> None: - super().__init__() - self._charset_name = "SHIFT_JIS" - - @property - def charset_name(self) -> str: - return self._charset_name - - def get_order(self, byte_str: Union[bytes, bytearray]) -> Tuple[int, int]: - if not byte_str: - return -1, 1 - # find out current char's byte length - first_char = byte_str[0] - if (0x81 <= first_char <= 0x9F) or (0xE0 <= first_char <= 0xFC): - char_len = 2 - if (first_char == 0x87) or (0xFA <= first_char <= 0xFC): - self._charset_name = "CP932" - else: - char_len = 1 - - # return its order if it is hiragana - if len(byte_str) > 1: - second_char = byte_str[1] - if (first_char == 202) and (0x9F <= second_char <= 0xF1): - return second_char - 0x9F, char_len - - return -1, char_len - - -class EUCJPContextAnalysis(JapaneseContextAnalysis): - def get_order(self, byte_str: Union[bytes, bytearray]) -> Tuple[int, int]: - if not byte_str: - return -1, 1 - # find out current char's byte length - first_char = byte_str[0] - if (first_char == 0x8E) or (0xA1 <= first_char <= 0xFE): - char_len = 2 - elif first_char == 0x8F: - char_len = 3 - else: - char_len = 1 - - # return its order if it is hiragana - if len(byte_str) > 1: - second_char = byte_str[1] - if (first_char == 0xA4) and (0xA1 <= second_char <= 0xF3): - return second_char - 0xA1, char_len - - return -1, char_len diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_m_o_r_t.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_m_o_r_t.py deleted file mode 100644 index 261e593e27ffc7fe065b964eea533dc2591fcb1e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_m_o_r_t.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6mort.html -class table__m_o_r_t(BaseTTXConverter): - pass diff --git a/spaces/cihyFjudo/fairness-paper-search/Chandigarh 1st Year Collage Girl ANJALI Mms Scandal.md b/spaces/cihyFjudo/fairness-paper-search/Chandigarh 1st Year Collage Girl ANJALI Mms Scandal.md deleted file mode 100644 index 99f818ddb2ccd317d7820149cd7ff070dfbfbbae..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Chandigarh 1st Year Collage Girl ANJALI Mms Scandal.md +++ /dev/null @@ -1,6 +0,0 @@ -

Chandigarh 1st Year Collage Girl ANJALI Mms Scandal


DOWNLOAD 🌟 https://tinurli.com/2uwi3X



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/How To Change Player Id Oin Pixel Gun 3D.md b/spaces/cihyFjudo/fairness-paper-search/How To Change Player Id Oin Pixel Gun 3D.md deleted file mode 100644 index 5161bf665dd804035d96e1f408213baa2ded3be1..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/How To Change Player Id Oin Pixel Gun 3D.md +++ /dev/null @@ -1,30 +0,0 @@ - -

The jukebox is a gadget that produces money for the player each wave, provided a tower is within its range. It can be upgraded to produce more cash each wave, and it is widely considered one of the most valuable items in the game.

-

Pixel Gun 3D is largely an online action battle royale game, and there are multiple modes in which you are pitted against real opponents, so coming out on top every time is a real challenge. Here is our Pixel Gun 3D guide with a few tips and tricks to help you in your multiplayer games.

-

How to change player ID in Pixel Gun 3D


DOWNLOADhttps://tinurli.com/2uwkqO



-

As with any other FPS game, it is very useful to learn the layout of the maps you play on. Some maps have special features, such as portals, secret pathways, and hiding spots. Use these to your tactical advantage to flank enemies and carry out stealth and surprise attacks. Remember that other players may also be familiar with these features, so be careful when you plan out your strategies.

-

Gems are the main currency and buy you all the premium items; handle them with care, as they can be exchanged for great items in the shop. Coins are much easier to obtain but cover the more basic items, such as standard weaponry and upgrades. If you are a new player, coins will buy you all the basics you need, while for an experienced player they are mainly useful for item upgrades.

-

With the deals, Nexters aims at accelerating its product growth strategy and laying the groundwork of its consolidation mission. By expanding its portfolio of midcore games, the company expects to enlarge its player base and significantly cement its presence in the segment, which accounts for almost 50% of the time and 60% of the money spent in mobile gaming2.

-

In Puzzle Breakers, players play match-3 puzzles to control the heroes and perform attacks on enemies on the battlefield. The game has been in production since 2019 and features high-end visuals and elaborate gameplay. Puzzle Breakers is now soft-launched in a limited number of regions on Android and is planned for official release in Q2 2022.

-

Dawn of Zombies is a survival game with RPG elements. Though the game was released in 2019/2020 (Android/iOS) and is still in an early stage of development, it has already built a loyal core community and enjoys a stable and strong inflow of organic players. Shelter War, a bunker management and survival game with RPG elements in a post-nuclear setting, was released in 2021. With both games, Nexters will help Royal Ark to scale its games on the international market.

-

Nexters is an international game development company which strives to introduce the joy of core gaming experiences to casual players. Thanks to such hit games like Hero Wars, Throne Rush, and others the company reached over 200 million installs worldwide and became one of the top five independent mobile game companies in Europe. Headquartered in Cyprus, Nexters is built upon a team of 800+ inspired gaming professionals. Please find more information about Nexters at and follow Nexters on LinkedIn and Twitter.

-

Pixel Gun Tower Defense is unique because it allows you to design your own towers by combining Characters with Weapons. Characters alter a weapon's base stats and may come with special perks. The Characters in Pixel Gun Tower Defense are inspired by common player archetypes from first-person shooter games, while the Weapons are direct references of Pixel Gun 3D weapons.

-

Players who recently played your game. Unity Analytics defines an active player as someone who has played within the last 90 calendar days.

-

-

Events dispatched to the Analytics Service by instances of your applications. Analytics events contain the data that is processed and aggregated to provide insights into player behavior.

-

Custom events are freeform events that you can dispatch when an appropriate standard event is not available. Custom events can have any name and up to ten parameters. Use standard events in preference to custom events where possible.
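To make the constraint above concrete, here is a minimal, purely illustrative Python sketch (Unity's real SDK is C#, and the AnalyticsClient class and its method names here are hypothetical) of a client that checks the event name and the ten-parameter limit before queueing a custom event:

```python
# Illustrative sketch only: mirrors the rule described above
# (any event name, at most ten parameters), not Unity's actual API.
from typing import Any, Dict


class AnalyticsClient:
    MAX_CUSTOM_PARAMS = 10  # limit described in the glossary entry above

    def __init__(self) -> None:
        self.queue = []  # events waiting to be sent to the analytics backend

    def custom_event(self, name: str, params: Dict[str, Any]) -> None:
        if not name:
            raise ValueError("custom events need a non-empty name")
        if len(params) > self.MAX_CUSTOM_PARAMS:
            raise ValueError(f"custom events accept at most {self.MAX_CUSTOM_PARAMS} parameters")
        self.queue.append({"name": name, "params": params})


client = AnalyticsClient()
client.custom_event("boss_defeated", {"boss": "slime_king", "attempts": 3})
```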

-

A Unity Analytics Dashboard page that allows you to build, view and export reports on your Analytics metrics and events. You can also see how metrics and custom events change over time.

-

(Daily Active Users) The number of different players who started a session on a given day. DAU includes both new and returning players.

-

Engagement is a broad measure of how much players enjoy, or are otherwise invested in, your game. Because it is impossible to measure directly, the following metrics are frequently used to estimate engagement: Retention, DAU (Daily Active Users), MAU (Monthly Active Users, the number of players who started a session within the last 30 days), DAU/MAU, number of sessions, and session length.
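As an illustration of how these metrics fit together, here is a small, self-contained Python sketch (my own example with made-up session data, not part of any analytics SDK) that derives DAU, MAU and the DAU/MAU ratio from raw (date, player id) session records:

```python
# Sketch: DAU for one day, MAU over the trailing 30 days,
# and the DAU/MAU "stickiness" ratio, from simple session records.
from datetime import date, timedelta

sessions = [  # (session_date, player_id) - made-up sample data
    (date(2023, 3, 1), "alice"),
    (date(2023, 3, 1), "bob"),
    (date(2023, 3, 2), "alice"),
    (date(2023, 3, 15), "carol"),
    (date(2023, 3, 30), "alice"),
]

def dau(day):
    return len({pid for d, pid in sessions if d == day})

def mau(day, window=30):
    start = day - timedelta(days=window - 1)
    return len({pid for d, pid in sessions if start <= d <= day})

day = date(2023, 3, 30)
print(dau(day), mau(day), dau(day) / mau(day))  # -> 1 3 0.3333...
```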

-

In Analytics, a funnel is a linear sequence of standard or custom events that you expect a player to complete in order.
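Because a funnel is just an ordered sequence of events, a short Python sketch (illustrative only; the event log format is invented for this example) can show how completion of each step might be counted per player:

```python
# Sketch: count how many players reached each step of a linear funnel,
# requiring the steps to occur in order within a player's event stream.
funnel = ["tutorial_start", "tutorial_complete", "level_1_clear"]

player_events = {  # hypothetical, time-ordered event streams per player
    "alice": ["tutorial_start", "tutorial_complete", "level_1_clear"],
    "bob": ["tutorial_start", "shop_open"],
    "carol": ["tutorial_complete"],  # never fired step 1, so never enters the funnel
}

reached = [0] * len(funnel)
for events in player_events.values():
    step = 0
    for event in events:
        if step < len(funnel) and event == funnel[step]:
            step += 1
    for i in range(step):
        reached[i] += 1

print(dict(zip(funnel, reached)))
# {'tutorial_start': 2, 'tutorial_complete': 1, 'level_1_clear': 1}
```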

-

Your player population as a percentage. Typically only useful when combined with a segment; calculated as the percentage of the Number of Users metric who are members of a specified segment.

-

The average number of sessions per person playing on a given day. Also known as Average Number of Sessions per DAU.

-

Standard events are application-specific events that you dispatch in response to important player actions or milestones. Standard events have standardized names and defined parameter lists.

-

A method of posing a skeleton for animation by rotating the joint angles to predetermined values. The position of a child joint changes according to the rotation of its parent, so the end point of a chain of joints can be determined from the angles and relative positions of the individual joints it contains.

-

The automatic calculation of joint angles (e.g. the shoulder and elbow joints of an arm) so that the end point (e.g. the hand) reaches a desired point in space, in contrast to Forward Kinematics.
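For a concrete, if very simplified, picture of the forward and inverse kinematics definitions above, here is a self-contained Python sketch for a planar two-link arm; the link lengths and target point are made-up values, and real engines solve far more general cases, usually numerically:

```python
# Planar two-link arm: forward kinematics maps joint angles to the end point,
# inverse kinematics recovers joint angles that reach a desired end point.
import math

L1, L2 = 1.0, 0.8  # link lengths (e.g. upper arm and forearm), arbitrary values

def forward(theta1, theta2):
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    # Law of cosines gives the elbow angle; the shoulder angle is the direction
    # of the target minus the elbow's contribution.
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))  # clamp for numerical safety
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse(1.2, 0.9)
print(forward(t1, t2))  # ~ (1.2, 0.9): FK of the IK solution reproduces the target
```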

-

Legacy - An asset and version control system with a graphical user interface integrated into Unity. It enables team members to work together on a project on different computers.

-

A plug-in that changes the way audio is transmitted from an audio source into the surrounding space. It takes the source and regulates the gains of the left and right ear contributions based on the distance and angle between the AudioListener and the AudioSource.

-

A version control system for file change management. Unity can be used in conjunction with most common version control tools, including Perforce, Git, Mercurial and PlasticSCM.

-

See first person shooter (a common game genre featuring a first-person view of a 3D world and gun-based combat with other players or NPCs) and frames per second (the frequency at which consecutive frames are displayed in a running game).

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Indien Grand Porn Sex Ass.md b/spaces/cihyFjudo/fairness-paper-search/Indien Grand Porn Sex Ass.md deleted file mode 100644 index ce9655493e5fa1b74e617b152855dc7e40fd3115..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Indien Grand Porn Sex Ass.md +++ /dev/null @@ -1,10 +0,0 @@ - -

In This chaneel winer uttaran barua ,porn xxx film, indian porn -all Co parformer.Shathi khatun,Rumpa akter, hanif pk,Shapan pramanik, And authers evryone see ASS BBW Bi big cock blowjob Brunette Cam Porn Creampe Fucked Up family Oiled Yaung.Xxx Solo Milf

-

Indien Grand Porn Sex Ass


Download Ziphttps://tinurli.com/2uwiWl



-

Fucked bye wife with friend A girl and two guys .hot sexy Fucking black Cock and big boobs Tight pussy xxx porn Indian naked sex cute beauty sex wife sharing best friend fuck college girl sex Indian sex film

-

Disclaimer: We have zero tolerance policy against any illegal pornography. All links, videos and images are provided by 3rd parties. We have no control over the content of these sites. We take no responsibility for the content on any website which we link to, please use your own discretion while surfing the links.

-

She used to go for walking after dinner, and since she used to take a lot of time, I used to start watching porn, and since I thought she would come late I left my room door unlocked. I was watching for around 20 minutes and then I started jerking off, and the moment I started cumming, grand mom entered my room !!

-

-

Page updated: February 09, 2023
adultepic.com has a zero-tolerance policy against illegal pornography.
All models were 18 years of age or older at the time of depiction.
Abuse / DMCA / Contacts

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Jaane Kyon part 2 full movie online free Everything you need to know about the film.md b/spaces/cihyFjudo/fairness-paper-search/Jaane Kyon part 2 full movie online free Everything you need to know about the film.md deleted file mode 100644 index 8707142cafa0e2dd6d4c42df3f0a9f12aa689c2a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Jaane Kyon part 2 full movie online free Everything you need to know about the film.md +++ /dev/null @@ -1,21 +0,0 @@ -
-

One of my new favorite obsessions is using Jenga in the classroom to review and spiral concepts. Over the past year or so, I have been creating several games that are perfect if you want to incorporate Jenga in your classroom and they are all free! On this post, I will share brief descriptions and links to each one.

-

JANGA PC Game Free Download


Download 🆓 https://tinurli.com/2uwip6



-

Want to review whole number skills in an engaging way? Click here to read about and download the FREE whole number Jenga games for 3rd-5th grade. The games review a variety of skills from adding and subtracting to writing and comparing whole numbers.

-

Windowspcapp.com is an apps and games portal that covers different Apps and PC Games for Windows 10,8,7,XP,Vista OS,Mac OS, Chrome OS or even Ubuntu OS.Download and play these top free PC Games,Laptop Games,Desktop Games.Our games or apps are licensed Full Version for PC.You can download apps or games for Windows 10, Windows 8, Windows 7, Windows Vista, and Windows XP.This is one of the best places on the Web to play new PC/Laptop games or apps for free in 2017!To download these games,software or apps,you need to download the best android emulator:XePlayer first.

-

Games.lol is your No. 1 download site for free online games for PC. We have popular games such as Granny, Gacha Life, Subway Surfers, Pixel Gun 3D, 8 Ball Pool, Mobile Legends Bang Bang and others. Games.lol provides cheats, tips, hacks, tricks and walkthroughs for all games.

-

Hasbro Jenga Quake Game , Png Download - Jenga Quake, Transparent Png is a hd free transparent png image, which is classified into game controller png,game over png. If it is valuable to you, please share it.

-

*** Clubhouse Games Guest Pass is available to download for free from Nintendo eShop on the Nintendo Switch system. Up to three players with Clubhouse Games Guest Pass can enjoy a selection of games in local multiplayer with a player who owns the full version of the game. Full version of the game, systems and additional accessories sold separately.

-

The game comes with several free downloadable scenarios, including one set on a derelict spaceship and another clearly inspired by slasher movie Friday the 13th. However, I really recommend coming up with your own stories. Dread is also an excellent roleplaying game to indulge your favourite pieces of horror media with. I was able to explore my deep obsession with sci-fi horror by creating a Dread campaign set in a space station controlled by a rogue AI. Whereas a friend ran a campaign clearly inspired by The Thing, with players running around an isolated outpost in the middle of a frozen tundra.

-

-

Jango 4 ROM download is available to play for Super Nintendo. This game is the US English version at EmulatorGames.net exclusively. Download Jango 4 ROM and use it with an emulator. Play online SNES game on desktop PC, mobile, and tablets in maximum quality. If you enjoy this free ROM on Emulator Games then you will also like similar titles Boktai 2 - Solar Boy Django and Zoku Bokura No Taiyou - Taiyou Shounen Django.

-

There are a couple of drawbacks, the first being that when playing online there are no push notifications to let you know when it is your turn. The online mode is also not a quick-fire option; it's something to play over a day or two. Sometimes the game freezes and won't allow you to select another tile and currently there isn't an exit button. But all you have to do is press the home button to get out.

-

Not only can you create custom stations with your favorite artists, there are also 100+ pre-made genre stations. Most importantly, it is totally free with unlimited listening. You can download it from the Android market for free; I recommend it, and it is well worth a try.

-

Laptoppcapk.com is an apps and games portal that covers different Apps Apk and PC Games for Windows 10,8,7,XP,Vista OS,Mac OS, Chrome OS or Ubuntu OS.Download Apps apk,Games apk for free and install with your Windows PC or Laptop.Our Games or Apps are licensed Full Version for PC.You can download Apps or Games for Windows 10, Windows 8, Windows 7, Windows Vista, and Windows XP.This is one of the best places on the Web to play new PC/Laptop games or apps for free in 2019!To download these games,software or apps,you need to download the best android emulator:XePlayer first.

-

People love free steam games, no doubt. But what many people hate is downloading so many parts and trying to install them on their own. This is why we are the only site that pre-installs every game for you. We have many categories like shooters, action, racing, simulators and even VR games! We strive to satisfy our users and ask for nothing in return. We revolutionized the downloading scene and will continue being your #1 site for free games.

-

Note: As with any data, it is always strongly recommended to have backups. With game files this might not be quite as vital, since the same files can always be redownloaded or reinstalled, but when managing low bandwidth it can save hours of download time to have the game files available for a quick restore. Any files related to save data or custom settings are saved by default in your C: drive user folder, regardless of which drive the game files were installed to. These save and settings files are the most important ones to keep regularly backed up.

-

The gameplay is more of a mixed bag, with problems that can cause unnecessary frustration. For instance, Jango carries a significant amount of gear and weapons that need to be quickly accessible, yet all of it is rotated using a single button. If you are using your visor and run into a group of enemies, you frantically scroll through your weapons and gear looking for a specific one, only to pass it and have to scroll through everything again. The problem is reduced by holding down the button, which freezes gameplay while you make your selection, but when enemies surprise you and start firing, you inevitably end up getting blasted while trying to scroll to the weapon you want.

-

Picture for PC screen on the theme: Star Wars, Sci Fi, Jango Fett. Screensavers and backgrounds in sizes 1366x768, 1920x1080, 1440x900, 1536x864, 1600x900, 1280x720, 1280x800, 1280x1024, 1024x768, 1680x1050 are ready for downloading. To get a different size, you can cut the original picture and download to your computer for free. Wallpapers Star Wars, Sci Fi, Jango Fett, #293153 on PC, as well as all other screensavers in our catalog, can be downloaded here.

-

Free downloadable Boba Fett mods/skins for various games, including Animal Crossing, Fallout 4, X-Wing Alliance, Jedi Knight: Jedi Academy, Quake 3, Diablo, Jedi Knight, The Sims, Half-Life, DOOM 2, and Quake 2.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Learn from Falling Leaves Adeline Yen Mah Free PDF A Lesson in Forgiveness and Healing.md b/spaces/cihyFjudo/fairness-paper-search/Learn from Falling Leaves Adeline Yen Mah Free PDF A Lesson in Forgiveness and Healing.md deleted file mode 100644 index 936fd33e871d69a2365d1911528b8a77b09d3bad..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Learn from Falling Leaves Adeline Yen Mah Free PDF A Lesson in Forgiveness and Healing.md +++ /dev/null @@ -1,6 +0,0 @@ -

falling leaves adeline yen mah free pdf


DOWNLOADhttps://tinurli.com/2uwhH0



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/Windows 7 Ultimate Product Key 64 Bit Generator A Guide to Activate Your OS.md b/spaces/cihyFjudo/fairness-paper-search/Windows 7 Ultimate Product Key 64 Bit Generator A Guide to Activate Your OS.md deleted file mode 100644 index e6bf77fa9fb2c08312796fe69deb4c688e70fded..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Windows 7 Ultimate Product Key 64 Bit Generator A Guide to Activate Your OS.md +++ /dev/null @@ -1,23 +0,0 @@ -
-

Yes, definitely you can get Windows 7 for free by using the working product key. We have gathered all of the working keys for our readers. by any chance, if you are having a problem with Windows 7 or wants to repair it, just enter a genuine serial key. Instead of searching more, use the listed product key for Windows 7 32-bit and 64-bit. The latest working keys will surely be helpful for all users. The mentioned keys completely activate windows and your OS system will start working in an optimal condition.

-

windows 7 ultimate product key 64 bit generator


Download ……… https://tinurli.com/2uwiSW



-

To utilize all of the Windows 7 features, 25 characters based license key for Windows is a must. Without the product key, a user won't be able to activate the device and it cannot be substituted with anything. A product key is also referred as a digital license. However, the answer to a question to find working Windows 7 product key is just right here. Below we have mentioned all the latest keys which are surely working. Somehow, activate windows by using a key which is a simple yet intuitive way but we have also mentioned a process to activate windows 7 by using a command prompt.

-

No, the product key and Product ID are not the same. Product key consists of at least 25 characters to activate windows while Product ID identifies the Windows version. Microsoft does tell the Windows version in its settings but does not provide any key with DVD or online. If somehow Windows 7 key is lost, get a new one for free.

-

The Windows 7 product keys are mentioned above. All of the keys will surely activate the windows but they might not work if Windows update feature on your PC is turned on. Make sure to turn it off otherwise the license key will be detected and activation may fail.

-

-

Being an expert, users are advised to activate the windows by using a serial key. If by chance, any serial key does not work, go for trial expiry and try out entering a virus-free product key. All of the keys are genuine and will surely work 100% without causing any harm to the PCs.

-

It is recommended to install/upgrade Windows 7 while your current version of Windows running. But if you want a clean installation of Windows on a PC, you need to format the hard drive and reinstall the windows using its product key. You can do this to install Windows 7 upgraded version. Start your computer using the Windows 7 installation disc or a USB flash drive, click Custom (advanced), and then click Drive options (advanced).

-

Win 7 Product keys activate the full functions of Windows 7 Ultimate/Professional. No need for an Ultimate activator, crack, or fake product key generator. One Key works permanently on a single PC, you can even re-install the OS using this serial key.

-

In this article, we will be sharing as much Windows 7 ultimate product key as we can. Basically, these serial keys are good enough to activate your Windows 7 Ultimate computer, however, we must give you a fair warning that not all of the product keys would work perfectly.Nevertheless, we are going to share some other ways to activate Windows 7 as well, so you can implement any method that works with you.Before, we proceed you must acknowledge that using these keys comes with certain limitations. For example, you must never update your Windows 7 or else it will ask you to re-enter activation keys and for all the updates, these keys won't work properly. That's why we are going to share how to activate Windows without keys as well in this article.

-

You need to activate Microsoft OS with the original product key after installation. The latest & operated Windows 7 Ultimate SP1 serial keys. Enable both versions 32 bit and 64-bit, in all languages. The online key activation of Windows 7 is 100% genuine. Use the Ultimate Product Key for Microsoft Windows 7 installation. Win 7 The full functions of Windows 7 Ultimate are enabled by Product Keys. Ultimate activator, crack or fake product key generator are not required. You can even reset the OS via this serial key, one key works permanently on a single PC. You can check out the Microsoft office 2010 product key.

-

The most common version of windows is Windows 7. This had many new and advanced features over the windows of its processor. In order to run your windows, you must get an original Microsoft window, which ensures you enjoy all of the features at its best. We shared the last key of Windows 7, the professional serial key of Windows 7, basic product keys of Windows 7, product keys of Windows 7 starter.

-

From this article, we hope that the most important tool for activating Windows 7 on your PC is to give you an insight into what a product key is. There are also a few online trial windows 7 that can be used to try out Windows 7, but now Windows 7 is commercially sold by Microsoft and you have to buy your true windows to get your 7 windows running properly. These keys are not commercially sold and are not working properly. You will always get extensive support from Microsoft with genuine windows to run your product and make sure that you do not encounter any problems with Windows 7 keys or other running problems.

-

I try to insert the window 7 ultimate product key to activate but it displayed on the screen that the product key I entered does not appear to be a valid window product key and I so worried about this issue. Please tell me what I will do to fix this problem.

-

Hello.I already have a genuine windows 7 home premium operating system fully updated. If i want, can i upgrade to Ultimate just by activating a new product key given by you? I mean, without having to format and do everything from beginning.?

-

Hi, could you please send me product key for windows 7 ultimate 64 bit?
I am trying to post a message the fifth time but for some reason my post has not been shown even once. Please, post my message that I can be able to establish communication with user Product keys.

-

hello could you please send me a valid windows 7 64 bit ultimate key ? I tried all those mentionned and no one works, I need it urgently for my studies and thanks a lot in advance here is my mail [email protected]

-

You also won't be hacked or harmed for activating a Windows 8 or 8.1 key, as long as you don't use third-party software for activation. Third-party apps often carry malware in them, making it unsafe to download and operate something like a "Windows 8 product key generator".

-

Windows 7 is the most used operating system which is released back July 2009. If you have problem with your current running windows 7 in your system and want to repair or reinstall windows then you need windows 7 product key or serial key.

-

Tag: Windows 7 ultimate 32-bit product key, Windows 7 ultimate 64-bit product key, Windows 7 ultimate 32-bit serial key, Windows 7 ultimate 64-bit serial key, Windows 7 Ultimate 32-bit activation key, Windows 7 Ultimate 64-bit activation key, windows 7 ultimate activator, windows 7 ultimate product keys, window 7 ultimate product key, product key for windows 7 ultimate, how to get product key for windows 7 ultimate, windows 7 ultimate product keys, how to buy windows 7 ultimate product key

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/JpegImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/JpegImagePlugin.py deleted file mode 100644 index dfc7e6e9f569e05e3a1f9e3fd1407b5f202a6d56..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/JpegImagePlugin.py +++ /dev/null @@ -1,849 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# JPEG (JFIF) file handling -# -# See "Digital Compression and Coding of Continuous-Tone Still Images, -# Part 1, Requirements and Guidelines" (CCITT T.81 / ISO 10918-1) -# -# History: -# 1995-09-09 fl Created -# 1995-09-13 fl Added full parser -# 1996-03-25 fl Added hack to use the IJG command line utilities -# 1996-05-05 fl Workaround Photoshop 2.5 CMYK polarity bug -# 1996-05-28 fl Added draft support, JFIF version (0.1) -# 1996-12-30 fl Added encoder options, added progression property (0.2) -# 1997-08-27 fl Save mode 1 images as BW (0.3) -# 1998-07-12 fl Added YCbCr to draft and save methods (0.4) -# 1998-10-19 fl Don't hang on files using 16-bit DQT's (0.4.1) -# 2001-04-16 fl Extract DPI settings from JFIF files (0.4.2) -# 2002-07-01 fl Skip pad bytes before markers; identify Exif files (0.4.3) -# 2003-04-25 fl Added experimental EXIF decoder (0.5) -# 2003-06-06 fl Added experimental EXIF GPSinfo decoder -# 2003-09-13 fl Extract COM markers -# 2009-09-06 fl Added icc_profile support (from Florian Hoech) -# 2009-03-06 fl Changed CMYK handling; always use Adobe polarity (0.6) -# 2009-03-08 fl Added subsampling support (from Justin Huff). -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-1996 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -import array -import io -import math -import os -import struct -import subprocess -import sys -import tempfile -import warnings - -from . import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from ._binary import o16be as o16 -from .JpegPresets import presets - -# -# Parser - - -def Skip(self, marker): - n = i16(self.fp.read(2)) - 2 - ImageFile._safe_read(self.fp, n) - - -def APP(self, marker): - # - # Application marker. Store these in the APP dictionary. - # Also look for well-known application markers. 
- - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - - app = "APP%d" % (marker & 15) - - self.app[app] = s # compatibility - self.applist.append((app, s)) - - if marker == 0xFFE0 and s[:4] == b"JFIF": - # extract JFIF information - self.info["jfif"] = version = i16(s, 5) # version - self.info["jfif_version"] = divmod(version, 256) - # extract JFIF properties - try: - jfif_unit = s[7] - jfif_density = i16(s, 8), i16(s, 10) - except Exception: - pass - else: - if jfif_unit == 1: - self.info["dpi"] = jfif_density - self.info["jfif_unit"] = jfif_unit - self.info["jfif_density"] = jfif_density - elif marker == 0xFFE1 and s[:5] == b"Exif\0": - if "exif" not in self.info: - # extract EXIF information (incomplete) - self.info["exif"] = s # FIXME: value will change - self._exif_offset = self.fp.tell() - n + 6 - elif marker == 0xFFE2 and s[:5] == b"FPXR\0": - # extract FlashPix information (incomplete) - self.info["flashpix"] = s # FIXME: value will change - elif marker == 0xFFE2 and s[:12] == b"ICC_PROFILE\0": - # Since an ICC profile can be larger than the maximum size of - # a JPEG marker (64K), we need provisions to split it into - # multiple markers. The format defined by the ICC specifies - # one or more APP2 markers containing the following data: - # Identifying string ASCII "ICC_PROFILE\0" (12 bytes) - # Marker sequence number 1, 2, etc (1 byte) - # Number of markers Total of APP2's used (1 byte) - # Profile data (remainder of APP2 data) - # Decoders should use the marker sequence numbers to - # reassemble the profile, rather than assuming that the APP2 - # markers appear in the correct sequence. - self.icclist.append(s) - elif marker == 0xFFED and s[:14] == b"Photoshop 3.0\x00": - # parse the image resource block - offset = 14 - photoshop = self.info.setdefault("photoshop", {}) - while s[offset : offset + 4] == b"8BIM": - try: - offset += 4 - # resource code - code = i16(s, offset) - offset += 2 - # resource name (usually empty) - name_len = s[offset] - # name = s[offset+1:offset+1+name_len] - offset += 1 + name_len - offset += offset & 1 # align - # resource data block - size = i32(s, offset) - offset += 4 - data = s[offset : offset + size] - if code == 0x03ED: # ResolutionInfo - data = { - "XResolution": i32(data, 0) / 65536, - "DisplayedUnitsX": i16(data, 4), - "YResolution": i32(data, 8) / 65536, - "DisplayedUnitsY": i16(data, 12), - } - photoshop[code] = data - offset += size - offset += offset & 1 # align - except struct.error: - break # insufficient data - - elif marker == 0xFFEE and s[:5] == b"Adobe": - self.info["adobe"] = i16(s, 5) - # extract Adobe custom properties - try: - adobe_transform = s[11] - except IndexError: - pass - else: - self.info["adobe_transform"] = adobe_transform - elif marker == 0xFFE2 and s[:4] == b"MPF\0": - # extract MPO information - self.info["mp"] = s[4:] - # offset is current location minus buffer size - # plus constant header size - self.info["mpoffset"] = self.fp.tell() - n + 4 - - # If DPI isn't in JPEG header, fetch from EXIF - if "dpi" not in self.info and "exif" in self.info: - try: - exif = self.getexif() - resolution_unit = exif[0x0128] - x_resolution = exif[0x011A] - try: - dpi = float(x_resolution[0]) / x_resolution[1] - except TypeError: - dpi = x_resolution - if math.isnan(dpi): - raise ValueError - if resolution_unit == 3: # cm - # 1 dpcm = 2.54 dpi - dpi *= 2.54 - self.info["dpi"] = dpi, dpi - except (TypeError, KeyError, SyntaxError, ValueError, ZeroDivisionError): - # SyntaxError for invalid/unreadable EXIF - # 
KeyError for dpi not included - # ZeroDivisionError for invalid dpi rational value - # ValueError or TypeError for dpi being an invalid float - self.info["dpi"] = 72, 72 - - -def COM(self, marker): - # - # Comment marker. Store these in the APP dictionary. - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - - self.info["comment"] = s - self.app["COM"] = s # compatibility - self.applist.append(("COM", s)) - - -def SOF(self, marker): - # - # Start of frame marker. Defines the size and mode of the - # image. JPEG is colour blind, so we use some simple - # heuristics to map the number of layers to an appropriate - # mode. Note that this could be made a bit brighter, by - # looking for JFIF and Adobe APP markers. - - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - self._size = i16(s, 3), i16(s, 1) - - self.bits = s[0] - if self.bits != 8: - msg = f"cannot handle {self.bits}-bit layers" - raise SyntaxError(msg) - - self.layers = s[5] - if self.layers == 1: - self.mode = "L" - elif self.layers == 3: - self.mode = "RGB" - elif self.layers == 4: - self.mode = "CMYK" - else: - msg = f"cannot handle {self.layers}-layer images" - raise SyntaxError(msg) - - if marker in [0xFFC2, 0xFFC6, 0xFFCA, 0xFFCE]: - self.info["progressive"] = self.info["progression"] = 1 - - if self.icclist: - # fixup icc profile - self.icclist.sort() # sort by sequence number - if self.icclist[0][13] == len(self.icclist): - profile = [] - for p in self.icclist: - profile.append(p[14:]) - icc_profile = b"".join(profile) - else: - icc_profile = None # wrong number of fragments - self.info["icc_profile"] = icc_profile - self.icclist = [] - - for i in range(6, len(s), 3): - t = s[i : i + 3] - # 4-tuples: id, vsamp, hsamp, qtable - self.layer.append((t[0], t[1] // 16, t[1] & 15, t[2])) - - -def DQT(self, marker): - # - # Define quantization table. Note that there might be more - # than one table in each marker. - - # FIXME: The quantization tables can be used to estimate the - # compression quality. 
- - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - while len(s): - v = s[0] - precision = 1 if (v // 16 == 0) else 2 # in bytes - qt_length = 1 + precision * 64 - if len(s) < qt_length: - msg = "bad quantization table marker" - raise SyntaxError(msg) - data = array.array("B" if precision == 1 else "H", s[1:qt_length]) - if sys.byteorder == "little" and precision > 1: - data.byteswap() # the values are always big-endian - self.quantization[v & 15] = [data[i] for i in zigzag_index] - s = s[qt_length:] - - -# -# JPEG marker table - -MARKER = { - 0xFFC0: ("SOF0", "Baseline DCT", SOF), - 0xFFC1: ("SOF1", "Extended Sequential DCT", SOF), - 0xFFC2: ("SOF2", "Progressive DCT", SOF), - 0xFFC3: ("SOF3", "Spatial lossless", SOF), - 0xFFC4: ("DHT", "Define Huffman table", Skip), - 0xFFC5: ("SOF5", "Differential sequential DCT", SOF), - 0xFFC6: ("SOF6", "Differential progressive DCT", SOF), - 0xFFC7: ("SOF7", "Differential spatial", SOF), - 0xFFC8: ("JPG", "Extension", None), - 0xFFC9: ("SOF9", "Extended sequential DCT (AC)", SOF), - 0xFFCA: ("SOF10", "Progressive DCT (AC)", SOF), - 0xFFCB: ("SOF11", "Spatial lossless DCT (AC)", SOF), - 0xFFCC: ("DAC", "Define arithmetic coding conditioning", Skip), - 0xFFCD: ("SOF13", "Differential sequential DCT (AC)", SOF), - 0xFFCE: ("SOF14", "Differential progressive DCT (AC)", SOF), - 0xFFCF: ("SOF15", "Differential spatial (AC)", SOF), - 0xFFD0: ("RST0", "Restart 0", None), - 0xFFD1: ("RST1", "Restart 1", None), - 0xFFD2: ("RST2", "Restart 2", None), - 0xFFD3: ("RST3", "Restart 3", None), - 0xFFD4: ("RST4", "Restart 4", None), - 0xFFD5: ("RST5", "Restart 5", None), - 0xFFD6: ("RST6", "Restart 6", None), - 0xFFD7: ("RST7", "Restart 7", None), - 0xFFD8: ("SOI", "Start of image", None), - 0xFFD9: ("EOI", "End of image", None), - 0xFFDA: ("SOS", "Start of scan", Skip), - 0xFFDB: ("DQT", "Define quantization table", DQT), - 0xFFDC: ("DNL", "Define number of lines", Skip), - 0xFFDD: ("DRI", "Define restart interval", Skip), - 0xFFDE: ("DHP", "Define hierarchical progression", SOF), - 0xFFDF: ("EXP", "Expand reference component", Skip), - 0xFFE0: ("APP0", "Application segment 0", APP), - 0xFFE1: ("APP1", "Application segment 1", APP), - 0xFFE2: ("APP2", "Application segment 2", APP), - 0xFFE3: ("APP3", "Application segment 3", APP), - 0xFFE4: ("APP4", "Application segment 4", APP), - 0xFFE5: ("APP5", "Application segment 5", APP), - 0xFFE6: ("APP6", "Application segment 6", APP), - 0xFFE7: ("APP7", "Application segment 7", APP), - 0xFFE8: ("APP8", "Application segment 8", APP), - 0xFFE9: ("APP9", "Application segment 9", APP), - 0xFFEA: ("APP10", "Application segment 10", APP), - 0xFFEB: ("APP11", "Application segment 11", APP), - 0xFFEC: ("APP12", "Application segment 12", APP), - 0xFFED: ("APP13", "Application segment 13", APP), - 0xFFEE: ("APP14", "Application segment 14", APP), - 0xFFEF: ("APP15", "Application segment 15", APP), - 0xFFF0: ("JPG0", "Extension 0", None), - 0xFFF1: ("JPG1", "Extension 1", None), - 0xFFF2: ("JPG2", "Extension 2", None), - 0xFFF3: ("JPG3", "Extension 3", None), - 0xFFF4: ("JPG4", "Extension 4", None), - 0xFFF5: ("JPG5", "Extension 5", None), - 0xFFF6: ("JPG6", "Extension 6", None), - 0xFFF7: ("JPG7", "Extension 7", None), - 0xFFF8: ("JPG8", "Extension 8", None), - 0xFFF9: ("JPG9", "Extension 9", None), - 0xFFFA: ("JPG10", "Extension 10", None), - 0xFFFB: ("JPG11", "Extension 11", None), - 0xFFFC: ("JPG12", "Extension 12", None), - 0xFFFD: ("JPG13", "Extension 13", None), - 0xFFFE: ("COM", "Comment", COM), -} - - 
-def _accept(prefix): - # Magic number was taken from https://en.wikipedia.org/wiki/JPEG - return prefix[:3] == b"\xFF\xD8\xFF" - - -## -# Image plugin for JPEG and JFIF images. - - -class JpegImageFile(ImageFile.ImageFile): - format = "JPEG" - format_description = "JPEG (ISO 10918)" - - def _open(self): - s = self.fp.read(3) - - if not _accept(s): - msg = "not a JPEG file" - raise SyntaxError(msg) - s = b"\xFF" - - # Create attributes - self.bits = self.layers = 0 - - # JPEG specifics (internal) - self.layer = [] - self.huffman_dc = {} - self.huffman_ac = {} - self.quantization = {} - self.app = {} # compatibility - self.applist = [] - self.icclist = [] - - while True: - i = s[0] - if i == 0xFF: - s = s + self.fp.read(1) - i = i16(s) - else: - # Skip non-0xFF junk - s = self.fp.read(1) - continue - - if i in MARKER: - name, description, handler = MARKER[i] - if handler is not None: - handler(self, i) - if i == 0xFFDA: # start of scan - rawmode = self.mode - if self.mode == "CMYK": - rawmode = "CMYK;I" # assume adobe conventions - self.tile = [("jpeg", (0, 0) + self.size, 0, (rawmode, ""))] - # self.__offset = self.fp.tell() - break - s = self.fp.read(1) - elif i == 0 or i == 0xFFFF: - # padded marker or junk; move on - s = b"\xff" - elif i == 0xFF00: # Skip extraneous data (escaped 0xFF) - s = self.fp.read(1) - else: - msg = "no marker found" - raise SyntaxError(msg) - - def load_read(self, read_bytes): - """ - internal: read more image data - For premature EOF and LOAD_TRUNCATED_IMAGES adds EOI marker - so libjpeg can finish decoding - """ - s = self.fp.read(read_bytes) - - if not s and ImageFile.LOAD_TRUNCATED_IMAGES and not hasattr(self, "_ended"): - # Premature EOF. - # Pretend file is finished adding EOI marker - self._ended = True - return b"\xFF\xD9" - - return s - - def draft(self, mode, size): - if len(self.tile) != 1: - return - - # Protect from second call - if self.decoderconfig: - return - - d, e, o, a = self.tile[0] - scale = 1 - original_size = self.size - - if a[0] == "RGB" and mode in ["L", "YCbCr"]: - self.mode = mode - a = mode, "" - - if size: - scale = min(self.size[0] // size[0], self.size[1] // size[1]) - for s in [8, 4, 2, 1]: - if scale >= s: - break - e = ( - e[0], - e[1], - (e[2] - e[0] + s - 1) // s + e[0], - (e[3] - e[1] + s - 1) // s + e[1], - ) - self._size = ((self.size[0] + s - 1) // s, (self.size[1] + s - 1) // s) - scale = s - - self.tile = [(d, e, o, a)] - self.decoderconfig = (scale, 0) - - box = (0, 0, original_size[0] / scale, original_size[1] / scale) - return self.mode, box - - def load_djpeg(self): - # ALTERNATIVE: handle JPEGs via the IJG command line utilities - - f, path = tempfile.mkstemp() - os.close(f) - if os.path.exists(self.filename): - subprocess.check_call(["djpeg", "-outfile", path, self.filename]) - else: - try: - os.unlink(path) - except OSError: - pass - - msg = "Invalid Filename" - raise ValueError(msg) - - try: - with Image.open(path) as _im: - _im.load() - self.im = _im.im - finally: - try: - os.unlink(path) - except OSError: - pass - - self.mode = self.im.mode - self._size = self.im.size - - self.tile = [] - - def _getexif(self): - return _getexif(self) - - def _getmp(self): - return _getmp(self) - - def getxmp(self): - """ - Returns a dictionary containing the XMP tags. - Requires defusedxml to be installed. - - :returns: XMP tags in a dictionary. 
- """ - - for segment, content in self.applist: - if segment == "APP1": - marker, xmp_tags = content.rsplit(b"\x00", 1) - if marker == b"http://ns.adobe.com/xap/1.0/": - return self._getxmp(xmp_tags) - return {} - - -def _getexif(self): - if "exif" not in self.info: - return None - return self.getexif()._get_merged_dict() - - -def _getmp(self): - # Extract MP information. This method was inspired by the "highly - # experimental" _getexif version that's been in use for years now, - # itself based on the ImageFileDirectory class in the TIFF plugin. - - # The MP record essentially consists of a TIFF file embedded in a JPEG - # application marker. - try: - data = self.info["mp"] - except KeyError: - return None - file_contents = io.BytesIO(data) - head = file_contents.read(8) - endianness = ">" if head[:4] == b"\x4d\x4d\x00\x2a" else "<" - # process dictionary - from . import TiffImagePlugin - - try: - info = TiffImagePlugin.ImageFileDirectory_v2(head) - file_contents.seek(info.next) - info.load(file_contents) - mp = dict(info) - except Exception as e: - msg = "malformed MP Index (unreadable directory)" - raise SyntaxError(msg) from e - # it's an error not to have a number of images - try: - quant = mp[0xB001] - except KeyError as e: - msg = "malformed MP Index (no number of images)" - raise SyntaxError(msg) from e - # get MP entries - mpentries = [] - try: - rawmpentries = mp[0xB002] - for entrynum in range(0, quant): - unpackedentry = struct.unpack_from( - f"{endianness}LLLHH", rawmpentries, entrynum * 16 - ) - labels = ("Attribute", "Size", "DataOffset", "EntryNo1", "EntryNo2") - mpentry = dict(zip(labels, unpackedentry)) - mpentryattr = { - "DependentParentImageFlag": bool(mpentry["Attribute"] & (1 << 31)), - "DependentChildImageFlag": bool(mpentry["Attribute"] & (1 << 30)), - "RepresentativeImageFlag": bool(mpentry["Attribute"] & (1 << 29)), - "Reserved": (mpentry["Attribute"] & (3 << 27)) >> 27, - "ImageDataFormat": (mpentry["Attribute"] & (7 << 24)) >> 24, - "MPType": mpentry["Attribute"] & 0x00FFFFFF, - } - if mpentryattr["ImageDataFormat"] == 0: - mpentryattr["ImageDataFormat"] = "JPEG" - else: - msg = "unsupported picture format in MPO" - raise SyntaxError(msg) - mptypemap = { - 0x000000: "Undefined", - 0x010001: "Large Thumbnail (VGA Equivalent)", - 0x010002: "Large Thumbnail (Full HD Equivalent)", - 0x020001: "Multi-Frame Image (Panorama)", - 0x020002: "Multi-Frame Image: (Disparity)", - 0x020003: "Multi-Frame Image: (Multi-Angle)", - 0x030000: "Baseline MP Primary Image", - } - mpentryattr["MPType"] = mptypemap.get(mpentryattr["MPType"], "Unknown") - mpentry["Attribute"] = mpentryattr - mpentries.append(mpentry) - mp[0xB002] = mpentries - except KeyError as e: - msg = "malformed MP Index (bad MP Entry)" - raise SyntaxError(msg) from e - # Next we should try and parse the individual image unique ID list; - # we don't because I've never seen this actually used in a real MPO - # file and so can't test it. 
- return mp - - -# -------------------------------------------------------------------- -# stuff to save JPEG files - -RAWMODE = { - "1": "L", - "L": "L", - "RGB": "RGB", - "RGBX": "RGB", - "CMYK": "CMYK;I", # assume adobe conventions - "YCbCr": "YCbCr", -} - -# fmt: off -zigzag_index = ( - 0, 1, 5, 6, 14, 15, 27, 28, - 2, 4, 7, 13, 16, 26, 29, 42, - 3, 8, 12, 17, 25, 30, 41, 43, - 9, 11, 18, 24, 31, 40, 44, 53, - 10, 19, 23, 32, 39, 45, 52, 54, - 20, 22, 33, 38, 46, 51, 55, 60, - 21, 34, 37, 47, 50, 56, 59, 61, - 35, 36, 48, 49, 57, 58, 62, 63, -) - -samplings = { - (1, 1, 1, 1, 1, 1): 0, - (2, 1, 1, 1, 1, 1): 1, - (2, 2, 1, 1, 1, 1): 2, -} -# fmt: on - - -def get_sampling(im): - # There's no subsampling when images have only 1 layer - # (grayscale images) or when they are CMYK (4 layers), - # so set subsampling to the default value. - # - # NOTE: currently Pillow can't encode JPEG to YCCK format. - # If YCCK support is added in the future, subsampling code will have - # to be updated (here and in JpegEncode.c) to deal with 4 layers. - if not hasattr(im, "layers") or im.layers in (1, 4): - return -1 - sampling = im.layer[0][1:3] + im.layer[1][1:3] + im.layer[2][1:3] - return samplings.get(sampling, -1) - - -def _save(im, fp, filename): - if im.width == 0 or im.height == 0: - msg = "cannot write empty image as JPEG" - raise ValueError(msg) - - try: - rawmode = RAWMODE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as JPEG" - raise OSError(msg) from e - - info = im.encoderinfo - - dpi = [round(x) for x in info.get("dpi", (0, 0))] - - quality = info.get("quality", -1) - subsampling = info.get("subsampling", -1) - qtables = info.get("qtables") - - if quality == "keep": - quality = -1 - subsampling = "keep" - qtables = "keep" - elif quality in presets: - preset = presets[quality] - quality = -1 - subsampling = preset.get("subsampling", -1) - qtables = preset.get("quantization") - elif not isinstance(quality, int): - msg = "Invalid quality setting" - raise ValueError(msg) - else: - if subsampling in presets: - subsampling = presets[subsampling].get("subsampling", -1) - if isinstance(qtables, str) and qtables in presets: - qtables = presets[qtables].get("quantization") - - if subsampling == "4:4:4": - subsampling = 0 - elif subsampling == "4:2:2": - subsampling = 1 - elif subsampling == "4:2:0": - subsampling = 2 - elif subsampling == "4:1:1": - # For compatibility. Before Pillow 4.3, 4:1:1 actually meant 4:2:0. - # Set 4:2:0 if someone is still using that value. 
- subsampling = 2 - elif subsampling == "keep": - if im.format != "JPEG": - msg = "Cannot use 'keep' when original image is not a JPEG" - raise ValueError(msg) - subsampling = get_sampling(im) - - def validate_qtables(qtables): - if qtables is None: - return qtables - if isinstance(qtables, str): - try: - lines = [ - int(num) - for line in qtables.splitlines() - for num in line.split("#", 1)[0].split() - ] - except ValueError as e: - msg = "Invalid quantization table" - raise ValueError(msg) from e - else: - qtables = [lines[s : s + 64] for s in range(0, len(lines), 64)] - if isinstance(qtables, (tuple, list, dict)): - if isinstance(qtables, dict): - qtables = [ - qtables[key] for key in range(len(qtables)) if key in qtables - ] - elif isinstance(qtables, tuple): - qtables = list(qtables) - if not (0 < len(qtables) < 5): - msg = "None or too many quantization tables" - raise ValueError(msg) - for idx, table in enumerate(qtables): - try: - if len(table) != 64: - raise TypeError - table = array.array("H", table) - except TypeError as e: - msg = "Invalid quantization table" - raise ValueError(msg) from e - else: - qtables[idx] = list(table) - return qtables - - if qtables == "keep": - if im.format != "JPEG": - msg = "Cannot use 'keep' when original image is not a JPEG" - raise ValueError(msg) - qtables = getattr(im, "quantization", None) - qtables = validate_qtables(qtables) - - extra = info.get("extra", b"") - - MAX_BYTES_IN_MARKER = 65533 - icc_profile = info.get("icc_profile") - if icc_profile: - ICC_OVERHEAD_LEN = 14 - MAX_DATA_BYTES_IN_MARKER = MAX_BYTES_IN_MARKER - ICC_OVERHEAD_LEN - markers = [] - while icc_profile: - markers.append(icc_profile[:MAX_DATA_BYTES_IN_MARKER]) - icc_profile = icc_profile[MAX_DATA_BYTES_IN_MARKER:] - i = 1 - for marker in markers: - size = o16(2 + ICC_OVERHEAD_LEN + len(marker)) - extra += ( - b"\xFF\xE2" - + size - + b"ICC_PROFILE\0" - + o8(i) - + o8(len(markers)) - + marker - ) - i += 1 - - comment = info.get("comment", im.info.get("comment")) - - # "progressive" is the official name, but older documentation - # says "progression" - # FIXME: issue a warning if the wrong form is used (post-1.1.7) - progressive = info.get("progressive", False) or info.get("progression", False) - - optimize = info.get("optimize", False) - - exif = info.get("exif", b"") - if isinstance(exif, Image.Exif): - exif = exif.tobytes() - if len(exif) > MAX_BYTES_IN_MARKER: - msg = "EXIF data is too long" - raise ValueError(msg) - - # get keyword arguments - im.encoderconfig = ( - quality, - progressive, - info.get("smooth", 0), - optimize, - info.get("streamtype", 0), - dpi[0], - dpi[1], - subsampling, - qtables, - comment, - extra, - exif, - ) - - # if we optimize, libjpeg needs a buffer big enough to hold the whole image - # in a shot. Guessing on the size, at im.size bytes. (raw pixel size is - # channels*size, this is a value that's been used in a django patch. - # https://github.com/matthewwithanm/django-imagekit/issues/50 - bufsize = 0 - if optimize or progressive: - # CMYK can be bigger - if im.mode == "CMYK": - bufsize = 4 * im.size[0] * im.size[1] - # keep sets quality to -1, but the actual value may be high. - elif quality >= 95 or quality == -1: - bufsize = 2 * im.size[0] * im.size[1] - else: - bufsize = im.size[0] * im.size[1] - - # The EXIF info needs to be written as one block, + APP1, + one spare byte. - # Ensure that our buffer is big enough. Same with the icc_profile block. 
- bufsize = max(ImageFile.MAXBLOCK, bufsize, len(exif) + 5, len(extra) + 1) - - ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize) - - -def _save_cjpeg(im, fp, filename): - # ALTERNATIVE: handle JPEGs via the IJG command line utilities. - tempfile = im._dump() - subprocess.check_call(["cjpeg", "-outfile", filename, tempfile]) - try: - os.unlink(tempfile) - except OSError: - pass - - -## -# Factory for making JPEG and MPO instances -def jpeg_factory(fp=None, filename=None): - im = JpegImageFile(fp, filename) - try: - mpheader = im._getmp() - if mpheader[45057] > 1: - # It's actually an MPO - from .MpoImagePlugin import MpoImageFile - - # Don't reload everything, just convert it. - im = MpoImageFile.adopt(im, mpheader) - except (TypeError, IndexError): - # It is really a JPEG - pass - except SyntaxError: - warnings.warn( - "Image appears to be a malformed MPO file, it will be " - "interpreted as a base JPEG file" - ) - return im - - -# --------------------------------------------------------------------- -# Registry stuff - -Image.register_open(JpegImageFile.format, jpeg_factory, _accept) -Image.register_save(JpegImageFile.format, _save) - -Image.register_extensions(JpegImageFile.format, [".jfif", ".jpe", ".jpg", ".jpeg"]) - -Image.register_mime(JpegImageFile.format, "image/jpeg") diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asvenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asvenc.c deleted file mode 100644 index 4a14bcf8fa298d7a07fa53d401d1b732ad9c9bb5..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asvenc.c +++ /dev/null @@ -1,387 +0,0 @@ -/* - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * ASUS V1/V2 encoder. 
- */ - -#include "config_components.h" - -#include "libavutil/attributes.h" -#include "libavutil/mem.h" -#include "libavutil/mem_internal.h" - -#include "aandcttab.h" -#include "asv.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "dct.h" -#include "encode.h" -#include "fdctdsp.h" -#include "mpeg12data.h" -#include "pixblockdsp.h" -#include "put_bits.h" - -typedef struct ASVEncContext { - ASVCommonContext c; - - PutBitContext pb; - - PixblockDSPContext pdsp; - FDCTDSPContext fdsp; - DECLARE_ALIGNED(32, int16_t, block)[6][64]; - int q_intra_matrix[64]; -} ASVEncContext; - -static inline void asv1_put_level(PutBitContext *pb, int level) -{ - unsigned int index = level + 3; - - if (index <= 6) { - put_bits(pb, ff_asv_level_tab[index][1], ff_asv_level_tab[index][0]); - } else { - put_bits(pb, 3, 0); /* Escape code */ - put_sbits(pb, 8, level); - } -} - -static inline void asv2_put_level(ASVEncContext *a, PutBitContext *pb, int level) -{ - unsigned int index = level + 31; - - if (index <= 62) { - put_bits_le(pb, ff_asv2_level_tab[index][1], ff_asv2_level_tab[index][0]); - } else { - put_bits_le(pb, 5, 0); /* Escape code */ - if (level < -128 || level > 127) { - av_log(a->c.avctx, AV_LOG_WARNING, "Clipping level %d, increase qscale\n", level); - level = av_clip_int8(level); - } - put_bits_le(pb, 8, level & 0xFF); - } -} - -static inline void asv1_encode_block(ASVEncContext *a, int16_t block[64]) -{ - int i; - int nc_count = 0; - - put_bits(&a->pb, 8, (block[0] + 32) >> 6); - block[0] = 0; - - for (i = 0; i < 10; i++) { - const int index = ff_asv_scantab[4 * i]; - int ccp = 0; - - if ((block[index + 0] = (block[index + 0] * - a->q_intra_matrix[index + 0] + (1 << 15)) >> 16)) - ccp |= 8; - if ((block[index + 8] = (block[index + 8] * - a->q_intra_matrix[index + 8] + (1 << 15)) >> 16)) - ccp |= 4; - if ((block[index + 1] = (block[index + 1] * - a->q_intra_matrix[index + 1] + (1 << 15)) >> 16)) - ccp |= 2; - if ((block[index + 9] = (block[index + 9] * - a->q_intra_matrix[index + 9] + (1 << 15)) >> 16)) - ccp |= 1; - - if (ccp) { - for (; nc_count; nc_count--) - put_bits(&a->pb, 2, 2); /* Skip */ - - put_bits(&a->pb, ff_asv_ccp_tab[ccp][1], ff_asv_ccp_tab[ccp][0]); - - if (ccp & 8) - asv1_put_level(&a->pb, block[index + 0]); - if (ccp & 4) - asv1_put_level(&a->pb, block[index + 8]); - if (ccp & 2) - asv1_put_level(&a->pb, block[index + 1]); - if (ccp & 1) - asv1_put_level(&a->pb, block[index + 9]); - } else { - nc_count++; - } - } - put_bits(&a->pb, 5, 0xF); /* End of block */ -} - -static inline void asv2_encode_block(ASVEncContext *a, int16_t block[64]) -{ - int i; - int count = 0; - - for (count = 63; count > 3; count--) { - const int index = ff_asv_scantab[count]; - if ((block[index] * a->q_intra_matrix[index] + (1 << 15)) >> 16) - break; - } - - count >>= 2; - - put_bits_le(&a->pb, 4, count); - put_bits_le(&a->pb, 8, (block[0] + 32) >> 6); - block[0] = 0; - - for (i = 0; i <= count; i++) { - const int index = ff_asv_scantab[4 * i]; - int ccp = 0; - - if ((block[index + 0] = (block[index + 0] * - a->q_intra_matrix[index + 0] + (1 << 15)) >> 16)) - ccp |= 8; - if ((block[index + 8] = (block[index + 8] * - a->q_intra_matrix[index + 8] + (1 << 15)) >> 16)) - ccp |= 4; - if ((block[index + 1] = (block[index + 1] * - a->q_intra_matrix[index + 1] + (1 << 15)) >> 16)) - ccp |= 2; - if ((block[index + 9] = (block[index + 9] * - a->q_intra_matrix[index + 9] + (1 << 15)) >> 16)) - ccp |= 1; - - av_assert2(i || ccp < 8); - if (i) - put_bits_le(&a->pb, ff_asv_ac_ccp_tab[ccp][1], 
ff_asv_ac_ccp_tab[ccp][0]); - else - put_bits_le(&a->pb, ff_asv_dc_ccp_tab[ccp][1], ff_asv_dc_ccp_tab[ccp][0]); - - if (ccp) { - if (ccp & 8) - asv2_put_level(a, &a->pb, block[index + 0]); - if (ccp & 4) - asv2_put_level(a, &a->pb, block[index + 8]); - if (ccp & 2) - asv2_put_level(a, &a->pb, block[index + 1]); - if (ccp & 1) - asv2_put_level(a, &a->pb, block[index + 9]); - } - } -} - -#define MAX_MB_SIZE (30 * 16 * 16 * 3 / 2 / 8) - -static inline int encode_mb(ASVEncContext *a, int16_t block[6][64]) -{ - int i; - - av_assert0(put_bytes_left(&a->pb, 0) >= MAX_MB_SIZE); - - if (a->c.avctx->codec_id == AV_CODEC_ID_ASV1) { - for (i = 0; i < 6; i++) - asv1_encode_block(a, block[i]); - } else { - for (i = 0; i < 6; i++) { - asv2_encode_block(a, block[i]); - } - } - return 0; -} - -static inline void dct_get(ASVEncContext *a, const AVFrame *frame, - int mb_x, int mb_y) -{ - int16_t (*block)[64] = a->block; - int linesize = frame->linesize[0]; - int i; - - const uint8_t *ptr_y = frame->data[0] + (mb_y * 16 * linesize) + mb_x * 16; - const uint8_t *ptr_cb = frame->data[1] + (mb_y * 8 * frame->linesize[1]) + mb_x * 8; - const uint8_t *ptr_cr = frame->data[2] + (mb_y * 8 * frame->linesize[2]) + mb_x * 8; - - a->pdsp.get_pixels(block[0], ptr_y, linesize); - a->pdsp.get_pixels(block[1], ptr_y + 8, linesize); - a->pdsp.get_pixels(block[2], ptr_y + 8 * linesize, linesize); - a->pdsp.get_pixels(block[3], ptr_y + 8 * linesize + 8, linesize); - for (i = 0; i < 4; i++) - a->fdsp.fdct(block[i]); - - if (!(a->c.avctx->flags & AV_CODEC_FLAG_GRAY)) { - a->pdsp.get_pixels(block[4], ptr_cb, frame->linesize[1]); - a->pdsp.get_pixels(block[5], ptr_cr, frame->linesize[2]); - for (i = 4; i < 6; i++) - a->fdsp.fdct(block[i]); - } -} - -static int encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *pict, int *got_packet) -{ - ASVEncContext *const a = avctx->priv_data; - const ASVCommonContext *const c = &a->c; - int size, ret; - - if (pict->width % 16 || pict->height % 16) { - AVFrame *clone = av_frame_alloc(); - int i; - - if (!clone) - return AVERROR(ENOMEM); - clone->format = pict->format; - clone->width = FFALIGN(pict->width, 16); - clone->height = FFALIGN(pict->height, 16); - ret = av_frame_get_buffer(clone, 0); - if (ret < 0) { - av_frame_free(&clone); - return ret; - } - - ret = av_frame_copy(clone, pict); - if (ret < 0) { - av_frame_free(&clone); - return ret; - } - - for (i = 0; i<3; i++) { - int x, y; - int w = AV_CEIL_RSHIFT(pict->width, !!i); - int h = AV_CEIL_RSHIFT(pict->height, !!i); - int w2 = AV_CEIL_RSHIFT(clone->width, !!i); - int h2 = AV_CEIL_RSHIFT(clone->height, !!i); - for (y=0; ydata[i][x + y*clone->linesize[i]] = - clone->data[i][w - 1 + y*clone->linesize[i]]; - for (y=h; ydata[i][x + y*clone->linesize[i]] = - clone->data[i][x + (h-1)*clone->linesize[i]]; - } - ret = encode_frame(avctx, pkt, clone, got_packet); - - av_frame_free(&clone); - return ret; - } - - if ((ret = ff_alloc_packet(avctx, pkt, c->mb_height * c->mb_width * MAX_MB_SIZE + - AV_INPUT_BUFFER_MIN_SIZE)) < 0) - return ret; - - init_put_bits(&a->pb, pkt->data, pkt->size); - - for (int mb_y = 0; mb_y < c->mb_height2; mb_y++) { - for (int mb_x = 0; mb_x < c->mb_width2; mb_x++) { - dct_get(a, pict, mb_x, mb_y); - encode_mb(a, a->block); - } - } - - if (c->mb_width2 != c->mb_width) { - int mb_x = c->mb_width2; - for (int mb_y = 0; mb_y < c->mb_height2; mb_y++) { - dct_get(a, pict, mb_x, mb_y); - encode_mb(a, a->block); - } - } - - if (c->mb_height2 != c->mb_height) { - int mb_y = c->mb_height2; - for (int mb_x = 0; 
mb_x < c->mb_width; mb_x++) { - dct_get(a, pict, mb_x, mb_y); - encode_mb(a, a->block); - } - } - - if (avctx->codec_id == AV_CODEC_ID_ASV1) - flush_put_bits(&a->pb); - else - flush_put_bits_le(&a->pb); - AV_WN32(put_bits_ptr(&a->pb), 0); - size = (put_bytes_output(&a->pb) + 3) / 4; - - if (avctx->codec_id == AV_CODEC_ID_ASV1) { - c->bbdsp.bswap_buf((uint32_t *) pkt->data, - (uint32_t *) pkt->data, size); - } - - pkt->size = size * 4; - *got_packet = 1; - - return 0; -} - -static av_cold int encode_init(AVCodecContext *avctx) -{ - ASVEncContext *const a = avctx->priv_data; - int i; - const int scale = avctx->codec_id == AV_CODEC_ID_ASV1 ? 1 : 2; - int inv_qscale; - - ff_asv_common_init(avctx); - ff_fdctdsp_init(&a->fdsp, avctx); - ff_pixblockdsp_init(&a->pdsp, avctx); - - if (avctx->global_quality <= 0) - avctx->global_quality = 4 * FF_QUALITY_SCALE; - - inv_qscale = (32 * scale * FF_QUALITY_SCALE + - avctx->global_quality / 2) / avctx->global_quality; - - avctx->extradata = av_mallocz(8); - if (!avctx->extradata) - return AVERROR(ENOMEM); - avctx->extradata_size = 8; - AV_WLA(32, avctx->extradata, inv_qscale); - ((uint32_t *) avctx->extradata)[1] = av_le2ne32(AV_RL32("ASUS")); - - for (i = 0; i < 64; i++) { - if (a->fdsp.fdct == ff_fdct_ifast) { - int q = 32LL * scale * ff_mpeg1_default_intra_matrix[i] * ff_aanscales[i]; - a->q_intra_matrix[i] = (((int64_t)inv_qscale << 30) + q / 2) / q; - } else { - int q = 32 * scale * ff_mpeg1_default_intra_matrix[i]; - a->q_intra_matrix[i] = ((inv_qscale << 16) + q / 2) / q; - } - } - - return 0; -} - -#if CONFIG_ASV1_ENCODER -const FFCodec ff_asv1_encoder = { - .p.name = "asv1", - CODEC_LONG_NAME("ASUS V1"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ASV1, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(ASVEncContext), - .init = encode_init, - FF_CODEC_ENCODE_CB(encode_frame), - .p.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_YUV420P, - AV_PIX_FMT_NONE }, -}; -#endif - -#if CONFIG_ASV2_ENCODER -const FFCodec ff_asv2_encoder = { - .p.name = "asv2", - CODEC_LONG_NAME("ASUS V2"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ASV2, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(ASVEncContext), - .init = encode_init, - FF_CODEC_ENCODE_CB(encode_frame), - .p.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_YUV420P, - AV_PIX_FMT_NONE }, -}; -#endif diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ffv1.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ffv1.c deleted file mode 100644 index b6204740edb632c95c7c478e91df5349c5f341ae..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ffv1.c +++ /dev/null @@ -1,222 +0,0 @@ -/* - * FFV1 codec for libavcodec - * - * Copyright (c) 2003-2013 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * FF Video Codec 1 (a lossless codec) - */ - -#include "libavutil/attributes.h" -#include "libavutil/avassert.h" - -#include "avcodec.h" -#include "rangecoder.h" -#include "ffv1.h" -#include "threadframe.h" - -av_cold int ff_ffv1_common_init(AVCodecContext *avctx) -{ - FFV1Context *s = avctx->priv_data; - - if (!avctx->width || !avctx->height) - return AVERROR_INVALIDDATA; - - s->avctx = avctx; - s->flags = avctx->flags; - - s->width = avctx->width; - s->height = avctx->height; - - // defaults - s->num_h_slices = 1; - s->num_v_slices = 1; - - return 0; -} - -av_cold int ff_ffv1_init_slice_state(const FFV1Context *f, FFV1Context *fs) -{ - int j, i; - - fs->plane_count = f->plane_count; - fs->transparency = f->transparency; - for (j = 0; j < f->plane_count; j++) { - PlaneContext *const p = &fs->plane[j]; - - if (fs->ac != AC_GOLOMB_RICE) { - if (!p->state) - p->state = av_malloc_array(p->context_count, CONTEXT_SIZE * - sizeof(uint8_t)); - if (!p->state) - return AVERROR(ENOMEM); - } else { - if (!p->vlc_state) { - p->vlc_state = av_calloc(p->context_count, sizeof(*p->vlc_state)); - if (!p->vlc_state) - return AVERROR(ENOMEM); - for (i = 0; i < p->context_count; i++) { - p->vlc_state[i].error_sum = 4; - p->vlc_state[i].count = 1; - } - } - } - } - - if (fs->ac == AC_RANGE_CUSTOM_TAB) { - //FIXME only redo if state_transition changed - for (j = 1; j < 256; j++) { - fs->c. one_state[ j] = f->state_transition[j]; - fs->c.zero_state[256 - j] = 256 - fs->c.one_state[j]; - } - } - - return 0; -} - -av_cold int ff_ffv1_init_slices_state(FFV1Context *f) -{ - int i, ret; - for (i = 0; i < f->max_slice_count; i++) { - FFV1Context *fs = f->slice_context[i]; - if ((ret = ff_ffv1_init_slice_state(f, fs)) < 0) - return AVERROR(ENOMEM); - } - return 0; -} - -av_cold int ff_ffv1_init_slice_contexts(FFV1Context *f) -{ - int i, max_slice_count = f->num_h_slices * f->num_v_slices; - - av_assert0(max_slice_count > 0); - - for (i = 0; i < max_slice_count;) { - int sx = i % f->num_h_slices; - int sy = i / f->num_h_slices; - int sxs = f->avctx->width * sx / f->num_h_slices; - int sxe = f->avctx->width * (sx + 1) / f->num_h_slices; - int sys = f->avctx->height * sy / f->num_v_slices; - int sye = f->avctx->height * (sy + 1) / f->num_v_slices; - FFV1Context *fs = av_mallocz(sizeof(*fs)); - - if (!fs) - goto memfail; - - f->slice_context[i++] = fs; - memcpy(fs, f, sizeof(*fs)); - memset(fs->rc_stat2, 0, sizeof(fs->rc_stat2)); - - fs->slice_width = sxe - sxs; - fs->slice_height = sye - sys; - fs->slice_x = sxs; - fs->slice_y = sys; - - fs->sample_buffer = av_malloc_array((fs->width + 6), 3 * MAX_PLANES * - sizeof(*fs->sample_buffer)); - fs->sample_buffer32 = av_malloc_array((fs->width + 6), 3 * MAX_PLANES * - sizeof(*fs->sample_buffer32)); - if (!fs->sample_buffer || !fs->sample_buffer32) - goto memfail; - } - f->max_slice_count = max_slice_count; - return 0; - -memfail: - f->max_slice_count = i; - return AVERROR(ENOMEM); -} - -int ff_ffv1_allocate_initial_states(FFV1Context *f) -{ - int i; - - for (i = 0; i < f->quant_table_count; i++) { - f->initial_states[i] = av_malloc_array(f->context_count[i], - sizeof(*f->initial_states[i])); - if (!f->initial_states[i]) - return AVERROR(ENOMEM); - memset(f->initial_states[i], 128, - f->context_count[i] * sizeof(*f->initial_states[i])); - } 
- return 0; -} - -void ff_ffv1_clear_slice_state(const FFV1Context *f, FFV1Context *fs) -{ - int i, j; - - for (i = 0; i < f->plane_count; i++) { - PlaneContext *p = &fs->plane[i]; - - p->interlace_bit_state[0] = 128; - p->interlace_bit_state[1] = 128; - - if (fs->ac != AC_GOLOMB_RICE) { - if (f->initial_states[p->quant_table_index]) { - memcpy(p->state, f->initial_states[p->quant_table_index], - CONTEXT_SIZE * p->context_count); - } else - memset(p->state, 128, CONTEXT_SIZE * p->context_count); - } else { - for (j = 0; j < p->context_count; j++) { - p->vlc_state[j].drift = 0; - p->vlc_state[j].error_sum = 4; //FFMAX((RANGE + 32)/64, 2); - p->vlc_state[j].bias = 0; - p->vlc_state[j].count = 1; - } - } - } -} - - -av_cold int ff_ffv1_close(AVCodecContext *avctx) -{ - FFV1Context *s = avctx->priv_data; - int i, j; - - for (j = 0; j < s->max_slice_count; j++) { - FFV1Context *fs = s->slice_context[j]; - for (i = 0; i < s->plane_count; i++) { - PlaneContext *p = &fs->plane[i]; - - av_freep(&p->state); - av_freep(&p->vlc_state); - } - av_freep(&fs->sample_buffer); - av_freep(&fs->sample_buffer32); - } - - av_freep(&avctx->stats_out); - for (j = 0; j < s->quant_table_count; j++) { - av_freep(&s->initial_states[j]); - for (i = 0; i < s->max_slice_count; i++) { - FFV1Context *sf = s->slice_context[i]; - av_freep(&sf->rc_stat2[j]); - } - av_freep(&s->rc_stat2[j]); - } - - for (i = 0; i < s->max_slice_count; i++) - av_freep(&s->slice_context[i]); - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopenjpegenc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopenjpegenc.c deleted file mode 100644 index 009c7a437744ea00bc7f6f18c193bf0a73251e8c..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libopenjpegenc.c +++ /dev/null @@ -1,790 +0,0 @@ -/* - * JPEG 2000 encoding support via OpenJPEG - * Copyright (c) 2011 Michael Bradshaw - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * JPEG 2000 encoder using libopenjpeg - */ - -#include "libavutil/common.h" -#include "libavutil/imgutils.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/opt.h" -#include "avcodec.h" -#include "codec_internal.h" -#include "encode.h" -#include - -typedef struct LibOpenJPEGContext { - AVClass *avclass; - opj_cparameters_t enc_params; - int format; - int profile; - int prog_order; - int cinema_mode; - int numresolution; - int irreversible; - int disto_alloc; - int fixed_quality; -} LibOpenJPEGContext; - -static void error_callback(const char *msg, void *data) -{ - av_log(data, AV_LOG_ERROR, "%s\n", msg); -} - -static void warning_callback(const char *msg, void *data) -{ - av_log(data, AV_LOG_WARNING, "%s\n", msg); -} - -static void info_callback(const char *msg, void *data) -{ - av_log(data, AV_LOG_DEBUG, "%s\n", msg); -} - -typedef struct PacketWriter { - int pos; - AVPacket *packet; -} PacketWriter; - -static OPJ_SIZE_T stream_write(void *out_buffer, OPJ_SIZE_T nb_bytes, void *user_data) -{ - PacketWriter *writer = user_data; - AVPacket *packet = writer->packet; - int remaining = packet->size - writer->pos; - if (nb_bytes > remaining) { - OPJ_SIZE_T needed = nb_bytes - remaining; - int max_growth = INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE - packet->size; - if (needed > max_growth) { - return (OPJ_SIZE_T)-1; - } - if (av_grow_packet(packet, (int)needed)) { - return (OPJ_SIZE_T)-1; - } - } - memcpy(packet->data + writer->pos, out_buffer, nb_bytes); - writer->pos += (int)nb_bytes; - return nb_bytes; -} - -static OPJ_OFF_T stream_skip(OPJ_OFF_T nb_bytes, void *user_data) -{ - PacketWriter *writer = user_data; - AVPacket *packet = writer->packet; - if (nb_bytes < 0) { - if (writer->pos == 0) { - return (OPJ_SIZE_T)-1; - } - if (nb_bytes + writer->pos < 0) { - nb_bytes = -writer->pos; - } - } else { - int remaining = packet->size - writer->pos; - if (nb_bytes > remaining) { - OPJ_SIZE_T needed = nb_bytes - remaining; - int max_growth = INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE - packet->size; - if (needed > max_growth) { - return (OPJ_SIZE_T)-1; - } - if (av_grow_packet(packet, (int)needed)) { - return (OPJ_SIZE_T)-1; - } - } - } - writer->pos += (int)nb_bytes; - return nb_bytes; -} - -static OPJ_BOOL stream_seek(OPJ_OFF_T nb_bytes, void *user_data) -{ - PacketWriter *writer = user_data; - AVPacket *packet = writer->packet; - if (nb_bytes < 0) { - return OPJ_FALSE; - } - if (nb_bytes > packet->size) { - if (nb_bytes > INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE || - av_grow_packet(packet, (int)nb_bytes - packet->size)) { - return OPJ_FALSE; - } - } - writer->pos = (int)nb_bytes; - return OPJ_TRUE; -} - -static void cinema_parameters(opj_cparameters_t *p) -{ - p->tile_size_on = 0; - p->cp_tdx = 1; - p->cp_tdy = 1; - - /* Tile part */ - p->tp_flag = 'C'; - p->tp_on = 1; - - /* Tile and Image shall be at (0, 0) */ - p->cp_tx0 = 0; - p->cp_ty0 = 0; - p->image_offset_x0 = 0; - p->image_offset_y0 = 0; - - /* Codeblock size= 32 * 32 */ - p->cblockw_init = 32; - p->cblockh_init = 32; - p->csty |= 0x01; - - /* The progression order shall be CPRL */ - p->prog_order = OPJ_CPRL; - - /* No ROI */ - p->roi_compno = -1; - - /* No subsampling */ - p->subsampling_dx = 1; - p->subsampling_dy = 1; - - /* 9-7 transform */ - p->irreversible = 1; - - p->tcp_mct = 1; -} - 
-static opj_image_t *mj2_create_image(AVCodecContext *avctx, opj_cparameters_t *parameters) -{ - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->pix_fmt); - opj_image_cmptparm_t cmptparm[4] = {{0}}; - opj_image_t *img; - int i; - int sub_dx[4]; - int sub_dy[4]; - int numcomps; - OPJ_COLOR_SPACE color_space = OPJ_CLRSPC_UNKNOWN; - - sub_dx[0] = sub_dx[3] = 1; - sub_dy[0] = sub_dy[3] = 1; - sub_dx[1] = sub_dx[2] = 1 << desc->log2_chroma_w; - sub_dy[1] = sub_dy[2] = 1 << desc->log2_chroma_h; - - numcomps = desc->nb_components; - - switch (avctx->pix_fmt) { - case AV_PIX_FMT_GRAY8: - case AV_PIX_FMT_YA8: - case AV_PIX_FMT_GRAY10: - case AV_PIX_FMT_GRAY12: - case AV_PIX_FMT_GRAY14: - case AV_PIX_FMT_GRAY16: - case AV_PIX_FMT_YA16: - color_space = OPJ_CLRSPC_GRAY; - break; - case AV_PIX_FMT_RGB24: - case AV_PIX_FMT_RGBA: - case AV_PIX_FMT_RGB48: - case AV_PIX_FMT_RGBA64: - case AV_PIX_FMT_GBR24P: - case AV_PIX_FMT_GBRP9: - case AV_PIX_FMT_GBRP10: - case AV_PIX_FMT_GBRP12: - case AV_PIX_FMT_GBRP14: - case AV_PIX_FMT_GBRP16: - case AV_PIX_FMT_XYZ12: - color_space = OPJ_CLRSPC_SRGB; - break; - case AV_PIX_FMT_YUV410P: - case AV_PIX_FMT_YUV411P: - case AV_PIX_FMT_YUV420P: - case AV_PIX_FMT_YUV422P: - case AV_PIX_FMT_YUV440P: - case AV_PIX_FMT_YUV444P: - case AV_PIX_FMT_YUVA420P: - case AV_PIX_FMT_YUVA422P: - case AV_PIX_FMT_YUVA444P: - case AV_PIX_FMT_YUV420P9: - case AV_PIX_FMT_YUV422P9: - case AV_PIX_FMT_YUV444P9: - case AV_PIX_FMT_YUVA420P9: - case AV_PIX_FMT_YUVA422P9: - case AV_PIX_FMT_YUVA444P9: - case AV_PIX_FMT_YUV420P10: - case AV_PIX_FMT_YUV422P10: - case AV_PIX_FMT_YUV444P10: - case AV_PIX_FMT_YUVA420P10: - case AV_PIX_FMT_YUVA422P10: - case AV_PIX_FMT_YUVA444P10: - case AV_PIX_FMT_YUV420P12: - case AV_PIX_FMT_YUV422P12: - case AV_PIX_FMT_YUV444P12: - case AV_PIX_FMT_YUV420P14: - case AV_PIX_FMT_YUV422P14: - case AV_PIX_FMT_YUV444P14: - case AV_PIX_FMT_YUV420P16: - case AV_PIX_FMT_YUV422P16: - case AV_PIX_FMT_YUV444P16: - case AV_PIX_FMT_YUVA420P16: - case AV_PIX_FMT_YUVA422P16: - case AV_PIX_FMT_YUVA444P16: - color_space = OPJ_CLRSPC_SYCC; - break; - default: - av_log(avctx, AV_LOG_ERROR, - "The requested pixel format '%s' is not supported\n", - av_get_pix_fmt_name(avctx->pix_fmt)); - return NULL; - } - - for (i = 0; i < numcomps; i++) { - cmptparm[i].prec = desc->comp[i].depth; - cmptparm[i].bpp = desc->comp[i].depth; - cmptparm[i].sgnd = 0; - cmptparm[i].dx = sub_dx[i]; - cmptparm[i].dy = sub_dy[i]; - cmptparm[i].w = (avctx->width + sub_dx[i] - 1) / sub_dx[i]; - cmptparm[i].h = (avctx->height + sub_dy[i] - 1) / sub_dy[i]; - } - - img = opj_image_create(numcomps, cmptparm, color_space); - - if (!img) - return NULL; - - // x0, y0 is the top left corner of the image - // x1, y1 is the width, height of the reference grid - img->x0 = 0; - img->y0 = 0; - img->x1 = (avctx->width - 1) * parameters->subsampling_dx + 1; - img->y1 = (avctx->height - 1) * parameters->subsampling_dy + 1; - - return img; -} - -static av_cold int libopenjpeg_encode_init(AVCodecContext *avctx) -{ - LibOpenJPEGContext *ctx = avctx->priv_data; - int err = 0; - - opj_set_default_encoder_parameters(&ctx->enc_params); - - switch (ctx->cinema_mode) { - case OPJ_CINEMA2K_24: - ctx->enc_params.rsiz = OPJ_PROFILE_CINEMA_2K; - ctx->enc_params.max_cs_size = OPJ_CINEMA_24_CS; - ctx->enc_params.max_comp_size = OPJ_CINEMA_24_COMP; - break; - case OPJ_CINEMA2K_48: - ctx->enc_params.rsiz = OPJ_PROFILE_CINEMA_2K; - ctx->enc_params.max_cs_size = OPJ_CINEMA_48_CS; - ctx->enc_params.max_comp_size = OPJ_CINEMA_48_COMP; - 
break; - case OPJ_CINEMA4K_24: - ctx->enc_params.rsiz = OPJ_PROFILE_CINEMA_4K; - ctx->enc_params.max_cs_size = OPJ_CINEMA_24_CS; - ctx->enc_params.max_comp_size = OPJ_CINEMA_24_COMP; - break; - } - - switch (ctx->profile) { - case OPJ_CINEMA2K: - if (ctx->enc_params.rsiz == OPJ_PROFILE_CINEMA_4K) { - err = AVERROR(EINVAL); - break; - } - ctx->enc_params.rsiz = OPJ_PROFILE_CINEMA_2K; - break; - case OPJ_CINEMA4K: - if (ctx->enc_params.rsiz == OPJ_PROFILE_CINEMA_2K) { - err = AVERROR(EINVAL); - break; - } - ctx->enc_params.rsiz = OPJ_PROFILE_CINEMA_4K; - break; - } - - if (err) { - av_log(avctx, AV_LOG_ERROR, - "Invalid parameter pairing: cinema_mode and profile conflict.\n"); - return err; - } - - if (!ctx->numresolution) { - ctx->numresolution = 6; - while (FFMIN(avctx->width, avctx->height) >> ctx->numresolution < 1) - ctx->numresolution --; - } - - ctx->enc_params.prog_order = ctx->prog_order; - ctx->enc_params.numresolution = ctx->numresolution; - ctx->enc_params.irreversible = ctx->irreversible; - ctx->enc_params.cp_disto_alloc = ctx->disto_alloc; - ctx->enc_params.cp_fixed_quality = ctx->fixed_quality; - ctx->enc_params.tcp_numlayers = 1; - ctx->enc_params.tcp_rates[0] = FFMAX(avctx->compression_level, 0) * 2; - - if (ctx->cinema_mode > 0) { - cinema_parameters(&ctx->enc_params); - } - - return 0; -} - -static int libopenjpeg_copy_packed8(AVCodecContext *avctx, const uint8_t *src[4], - const int linesize[4], opj_image_t *image) -{ - int compno; - int x; - int y; - int *image_line; - int frame_index; - const int numcomps = image->numcomps; - - for (compno = 0; compno < numcomps; ++compno) { - if (image->comps[compno].w > linesize[0] / numcomps) { - av_log(avctx, AV_LOG_ERROR, "Error: frame's linesize is too small for the image\n"); - return 0; - } - } - - for (compno = 0; compno < numcomps; ++compno) { - for (y = 0; y < avctx->height; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - frame_index = y * linesize[0] + compno; - for (x = 0; x < avctx->width; ++x) { - image_line[x] = src[0][frame_index]; - frame_index += numcomps; - } - for (; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - 1]; - } - } - for (; y < image->comps[compno].h; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - for (x = 0; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - (int)image->comps[compno].w]; - } - } - } - - return 1; -} - -// for XYZ 12 bit -static int libopenjpeg_copy_packed12(AVCodecContext *avctx, const uint8_t *src[4], - const int linesize[4], opj_image_t *image) -{ - int compno; - int x, y; - int *image_line; - int frame_index; - const int numcomps = image->numcomps; - const uint16_t *frame_ptr = (const uint16_t *)src[0]; - - for (compno = 0; compno < numcomps; ++compno) { - if (image->comps[compno].w > linesize[0] / numcomps) { - av_log(avctx, AV_LOG_ERROR, "Error: frame's linesize is too small for the image\n"); - return 0; - } - } - - for (compno = 0; compno < numcomps; ++compno) { - for (y = 0; y < avctx->height; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - frame_index = y * (linesize[0] / 2) + compno; - for (x = 0; x < avctx->width; ++x) { - image_line[x] = frame_ptr[frame_index] >> 4; - frame_index += numcomps; - } - for (; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - 1]; - } - } - for (; y < image->comps[compno].h; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - for (x = 0; x < image->comps[compno].w; ++x) { 
- image_line[x] = image_line[x - (int)image->comps[compno].w]; - } - } - } - - return 1; -} - -static int libopenjpeg_copy_packed16(AVCodecContext *avctx, const uint8_t *src[4], - const int linesize[4], opj_image_t *image) -{ - int compno; - int x; - int y; - int *image_line; - int frame_index; - const int numcomps = image->numcomps; - const uint16_t *frame_ptr = (const uint16_t*)src[0]; - - for (compno = 0; compno < numcomps; ++compno) { - if (image->comps[compno].w > linesize[0] / numcomps) { - av_log(avctx, AV_LOG_ERROR, "Error: frame's linesize is too small for the image\n"); - return 0; - } - } - - for (compno = 0; compno < numcomps; ++compno) { - for (y = 0; y < avctx->height; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - frame_index = y * (linesize[0] / 2) + compno; - for (x = 0; x < avctx->width; ++x) { - image_line[x] = frame_ptr[frame_index]; - frame_index += numcomps; - } - for (; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - 1]; - } - } - for (; y < image->comps[compno].h; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - for (x = 0; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - (int)image->comps[compno].w]; - } - } - } - - return 1; -} - -static int libopenjpeg_copy_unpacked8(AVCodecContext *avctx, const uint8_t *src[4], - const int linesize[4], opj_image_t *image) -{ - int compno; - int x; - int y; - int width; - int height; - int *image_line; - int frame_index; - const int numcomps = image->numcomps; - - for (compno = 0; compno < numcomps; ++compno) { - if (image->comps[compno].w > linesize[compno]) { - av_log(avctx, AV_LOG_ERROR, "Error: frame's linesize is too small for the image\n"); - return 0; - } - } - - for (compno = 0; compno < numcomps; ++compno) { - width = (avctx->width + image->comps[compno].dx - 1) / image->comps[compno].dx; - height = (avctx->height + image->comps[compno].dy - 1) / image->comps[compno].dy; - for (y = 0; y < height; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - frame_index = y * linesize[compno]; - for (x = 0; x < width; ++x) - image_line[x] = src[compno][frame_index++]; - for (; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - 1]; - } - } - for (; y < image->comps[compno].h; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - for (x = 0; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - (int)image->comps[compno].w]; - } - } - } - - return 1; -} - -static int libopenjpeg_copy_unpacked16(AVCodecContext *avctx, const uint8_t *src[4], - const int linesize[4], opj_image_t *image) -{ - int compno; - int x; - int y; - int width; - int height; - int *image_line; - int frame_index; - const int numcomps = image->numcomps; - - for (compno = 0; compno < numcomps; ++compno) { - if (image->comps[compno].w > linesize[compno]) { - av_log(avctx, AV_LOG_ERROR, "Error: frame's linesize is too small for the image\n"); - return 0; - } - } - - for (compno = 0; compno < numcomps; ++compno) { - const uint16_t *frame_ptr = (const uint16_t *)src[compno]; - width = (avctx->width + image->comps[compno].dx - 1) / image->comps[compno].dx; - height = (avctx->height + image->comps[compno].dy - 1) / image->comps[compno].dy; - for (y = 0; y < height; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - frame_index = y * (linesize[compno] / 2); - for (x = 0; x < width; ++x) - image_line[x] = frame_ptr[frame_index++]; - for (; x < 
image->comps[compno].w; ++x) { - image_line[x] = image_line[x - 1]; - } - } - for (; y < image->comps[compno].h; ++y) { - image_line = image->comps[compno].data + y * image->comps[compno].w; - for (x = 0; x < image->comps[compno].w; ++x) { - image_line[x] = image_line[x - (int)image->comps[compno].w]; - } - } - } - - return 1; -} - -static int libopenjpeg_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *frame, int *got_packet) -{ - LibOpenJPEGContext *ctx = avctx->priv_data; - int ret; - int cpyresult = 0; - PacketWriter writer = { 0 }; - opj_codec_t *compress = NULL; - opj_stream_t *stream = NULL; - opj_image_t *image = mj2_create_image(avctx, &ctx->enc_params); - const uint8_t *data[4] = { frame->data[0], frame->data[1], - frame->data[2], frame->data[3] }; - int linesize[4] = { frame->linesize[0], frame->linesize[1], - frame->linesize[2], frame->linesize[3] }; - if (!image) { - av_log(avctx, AV_LOG_ERROR, "Error creating the mj2 image\n"); - ret = AVERROR(EINVAL); - goto done; - } - - switch (avctx->pix_fmt) { - case AV_PIX_FMT_RGB24: - case AV_PIX_FMT_RGBA: - case AV_PIX_FMT_YA8: - cpyresult = libopenjpeg_copy_packed8(avctx, data, linesize, image); - break; - case AV_PIX_FMT_XYZ12: - cpyresult = libopenjpeg_copy_packed12(avctx, data, linesize, image); - break; - case AV_PIX_FMT_RGB48: - case AV_PIX_FMT_RGBA64: - case AV_PIX_FMT_YA16: - cpyresult = libopenjpeg_copy_packed16(avctx, data, linesize, image); - break; - case AV_PIX_FMT_GBR24P: - case AV_PIX_FMT_GBRP9: - case AV_PIX_FMT_GBRP10: - case AV_PIX_FMT_GBRP12: - case AV_PIX_FMT_GBRP14: - case AV_PIX_FMT_GBRP16: - data[0] = frame->data[2]; // swap to be rgb - data[1] = frame->data[0]; - data[2] = frame->data[1]; - linesize[0] = frame->linesize[2]; - linesize[1] = frame->linesize[0]; - linesize[2] = frame->linesize[1]; - if (avctx->pix_fmt == AV_PIX_FMT_GBR24P) { - cpyresult = libopenjpeg_copy_unpacked8(avctx, data, linesize, image); - } else { - cpyresult = libopenjpeg_copy_unpacked16(avctx, data, linesize, image); - } - break; - case AV_PIX_FMT_GRAY8: - case AV_PIX_FMT_YUV410P: - case AV_PIX_FMT_YUV411P: - case AV_PIX_FMT_YUV420P: - case AV_PIX_FMT_YUV422P: - case AV_PIX_FMT_YUV440P: - case AV_PIX_FMT_YUV444P: - case AV_PIX_FMT_YUVA420P: - case AV_PIX_FMT_YUVA422P: - case AV_PIX_FMT_YUVA444P: - cpyresult = libopenjpeg_copy_unpacked8(avctx, data, linesize, image); - break; - case AV_PIX_FMT_GRAY10: - case AV_PIX_FMT_GRAY12: - case AV_PIX_FMT_GRAY14: - case AV_PIX_FMT_GRAY16: - case AV_PIX_FMT_YUV420P9: - case AV_PIX_FMT_YUV422P9: - case AV_PIX_FMT_YUV444P9: - case AV_PIX_FMT_YUVA420P9: - case AV_PIX_FMT_YUVA422P9: - case AV_PIX_FMT_YUVA444P9: - case AV_PIX_FMT_YUV444P10: - case AV_PIX_FMT_YUV422P10: - case AV_PIX_FMT_YUV420P10: - case AV_PIX_FMT_YUVA444P10: - case AV_PIX_FMT_YUVA422P10: - case AV_PIX_FMT_YUVA420P10: - case AV_PIX_FMT_YUV420P12: - case AV_PIX_FMT_YUV422P12: - case AV_PIX_FMT_YUV444P12: - case AV_PIX_FMT_YUV420P14: - case AV_PIX_FMT_YUV422P14: - case AV_PIX_FMT_YUV444P14: - case AV_PIX_FMT_YUV444P16: - case AV_PIX_FMT_YUV422P16: - case AV_PIX_FMT_YUV420P16: - case AV_PIX_FMT_YUVA444P16: - case AV_PIX_FMT_YUVA422P16: - case AV_PIX_FMT_YUVA420P16: - cpyresult = libopenjpeg_copy_unpacked16(avctx, data, linesize, image); - break; - default: - av_log(avctx, AV_LOG_ERROR, - "The frame's pixel format '%s' is not supported\n", - av_get_pix_fmt_name(avctx->pix_fmt)); - ret = AVERROR(EINVAL); - goto done; - break; - } - - if (!cpyresult) { - av_log(avctx, AV_LOG_ERROR, - "Could not copy the frame data to the 
internal image buffer\n"); - ret = -1; - goto done; - } - - if ((ret = ff_alloc_packet(avctx, pkt, 1024)) < 0) - goto done; - - compress = opj_create_compress(ctx->format); - if (!compress) { - av_log(avctx, AV_LOG_ERROR, "Error creating the compressor\n"); - ret = AVERROR(ENOMEM); - goto done; - } - - if (!opj_set_error_handler(compress, error_callback, avctx) || - !opj_set_warning_handler(compress, warning_callback, avctx) || - !opj_set_info_handler(compress, info_callback, avctx)) { - av_log(avctx, AV_LOG_ERROR, "Error setting the compressor handlers\n"); - ret = AVERROR_EXTERNAL; - goto done; - } - - if (!opj_setup_encoder(compress, &ctx->enc_params, image)) { - av_log(avctx, AV_LOG_ERROR, "Error setting up the compressor\n"); - ret = AVERROR_EXTERNAL; - goto done; - } - stream = opj_stream_default_create(OPJ_STREAM_WRITE); - - if (!stream) { - av_log(avctx, AV_LOG_ERROR, "Error creating the cio stream\n"); - ret = AVERROR(ENOMEM); - goto done; - } - - writer.packet = pkt; - opj_stream_set_write_function(stream, stream_write); - opj_stream_set_skip_function(stream, stream_skip); - opj_stream_set_seek_function(stream, stream_seek); - opj_stream_set_user_data(stream, &writer, NULL); - - if (!opj_start_compress(compress, image, stream) || - !opj_encode(compress, stream) || - !opj_end_compress(compress, stream)) { - av_log(avctx, AV_LOG_ERROR, "Error during the opj encode\n"); - ret = AVERROR_EXTERNAL; - goto done; - } - - av_shrink_packet(pkt, writer.pos); - - *got_packet = 1; - ret = 0; - -done: - opj_stream_destroy(stream); - opj_destroy_codec(compress); - opj_image_destroy(image); - return ret; -} - -#define OFFSET(x) offsetof(LibOpenJPEGContext, x) -#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "format", "Codec Format", OFFSET(format), AV_OPT_TYPE_INT, { .i64 = OPJ_CODEC_JP2 }, OPJ_CODEC_J2K, OPJ_CODEC_JP2, VE, "format" }, - { "j2k", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CODEC_J2K }, 0, 0, VE, "format" }, - { "jp2", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CODEC_JP2 }, 0, 0, VE, "format" }, - { "profile", NULL, OFFSET(profile), AV_OPT_TYPE_INT, { .i64 = OPJ_STD_RSIZ }, OPJ_STD_RSIZ, OPJ_CINEMA4K, VE, "profile" }, - { "jpeg2000", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_STD_RSIZ }, 0, 0, VE, "profile" }, - { "cinema2k", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CINEMA2K }, 0, 0, VE, "profile" }, - { "cinema4k", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CINEMA4K }, 0, 0, VE, "profile" }, - { "cinema_mode", "Digital Cinema", OFFSET(cinema_mode), AV_OPT_TYPE_INT, { .i64 = OPJ_OFF }, OPJ_OFF, OPJ_CINEMA4K_24, VE, "cinema_mode" }, - { "off", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_OFF }, 0, 0, VE, "cinema_mode" }, - { "2k_24", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CINEMA2K_24 }, 0, 0, VE, "cinema_mode" }, - { "2k_48", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CINEMA2K_48 }, 0, 0, VE, "cinema_mode" }, - { "4k_24", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CINEMA4K_24 }, 0, 0, VE, "cinema_mode" }, - { "prog_order", "Progression Order", OFFSET(prog_order), AV_OPT_TYPE_INT, { .i64 = OPJ_LRCP }, OPJ_LRCP, OPJ_CPRL, VE, "prog_order" }, - { "lrcp", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_LRCP }, 0, 0, VE, "prog_order" }, - { "rlcp", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_RLCP }, 0, 0, VE, "prog_order" }, - { "rpcl", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_RPCL }, 0, 0, VE, "prog_order" }, - { "pcrl", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_PCRL }, 0, 0, VE, "prog_order" }, - { "cprl", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = OPJ_CPRL 
}, 0, 0, VE, "prog_order" }, - { "numresolution", NULL, OFFSET(numresolution), AV_OPT_TYPE_INT, { .i64 = 6 }, 0, 33, VE }, - { "irreversible", NULL, OFFSET(irreversible), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, - { "disto_alloc", NULL, OFFSET(disto_alloc), AV_OPT_TYPE_INT, { .i64 = 1 }, 0, 1, VE }, - { "fixed_quality", NULL, OFFSET(fixed_quality), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, - { NULL }, -}; - -static const AVClass openjpeg_class = { - .class_name = "libopenjpeg", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_libopenjpeg_encoder = { - .p.name = "libopenjpeg", - CODEC_LONG_NAME("OpenJPEG JPEG 2000"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_JPEG2000, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE, - .priv_data_size = sizeof(LibOpenJPEGContext), - .init = libopenjpeg_encode_init, - FF_CODEC_ENCODE_CB(libopenjpeg_encode_frame), - .p.capabilities = AV_CODEC_CAP_FRAME_THREADS | - AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .p.pix_fmts = (const enum AVPixelFormat[]) { - AV_PIX_FMT_RGB24, AV_PIX_FMT_RGBA, AV_PIX_FMT_RGB48, - AV_PIX_FMT_RGBA64, AV_PIX_FMT_GBR24P, - AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12, AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16, - AV_PIX_FMT_GRAY8, AV_PIX_FMT_YA8, AV_PIX_FMT_GRAY16, AV_PIX_FMT_YA16, - AV_PIX_FMT_GRAY10, AV_PIX_FMT_GRAY12, AV_PIX_FMT_GRAY14, - AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUV422P, AV_PIX_FMT_YUVA420P, - AV_PIX_FMT_YUV440P, AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUVA422P, - AV_PIX_FMT_YUV411P, AV_PIX_FMT_YUV410P, AV_PIX_FMT_YUVA444P, - AV_PIX_FMT_YUV420P9, AV_PIX_FMT_YUV422P9, AV_PIX_FMT_YUV444P9, - AV_PIX_FMT_YUVA420P9, AV_PIX_FMT_YUVA422P9, AV_PIX_FMT_YUVA444P9, - AV_PIX_FMT_YUV420P10, AV_PIX_FMT_YUV422P10, AV_PIX_FMT_YUV444P10, - AV_PIX_FMT_YUVA420P10, AV_PIX_FMT_YUVA422P10, AV_PIX_FMT_YUVA444P10, - AV_PIX_FMT_YUV420P12, AV_PIX_FMT_YUV422P12, AV_PIX_FMT_YUV444P12, - AV_PIX_FMT_YUV420P14, AV_PIX_FMT_YUV422P14, AV_PIX_FMT_YUV444P14, - AV_PIX_FMT_YUV420P16, AV_PIX_FMT_YUV422P16, AV_PIX_FMT_YUV444P16, - AV_PIX_FMT_YUVA420P16, AV_PIX_FMT_YUVA422P16, AV_PIX_FMT_YUVA444P16, - AV_PIX_FMT_XYZ12, - AV_PIX_FMT_NONE - }, - .p.priv_class = &openjpeg_class, - .p.wrapper_name = "libopenjpeg", -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Android 12 At a Glance Widget Everything You Need to Know.md b/spaces/congsaPfin/Manga-OCR/logs/Android 12 At a Glance Widget Everything You Need to Know.md deleted file mode 100644 index 78741cf9116633a92df8ed507e2ef42d35627138..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Android 12 At a Glance Widget Everything You Need to Know.md +++ /dev/null @@ -1,112 +0,0 @@ -
-

Quick Glance APK: What Is It and How to Use It on Android 12

-

Android 12 is the latest version of Google's operating system for mobile devices, and it comes with a lot of new features and improvements. One of them is the ability to use widgets on your home screen and lock screen, which can provide you with useful information at a glance.

-

quick glance apk android 12


Download: https://urlca.com/2uO6C7



-

However, not all widgets are created equal, and some may not work well with Android 12's new design language, Material You. That's why you may want to try out Quick Glance APK, a third-party widget that is compatible with Android 12 and offers a variety of features that can enhance your user experience.

-

In this article, we will explain what Quick Glance APK is, how to download and install it on your Android 12 device, how to use it on your home screen or lock screen, what benefits it can bring you, and what alternatives you can consider if you are looking for something different.

-

How to Download and Install Quick Glance APK on Android 12

-

Quick Glance APK is not available on the Google Play Store, so you will need to find it from another source online. However, be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy.
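One way to reduce that risk is to compare the downloaded file's checksum against the value published by the site you got it from. Below is a minimal sketch in Python, assuming the source publishes an official SHA-256 value; the file name and checksum shown are placeholders, not real values for Quick Glance APK.

```python
import hashlib

# Placeholder values: substitute the real file name and the checksum
# published by the site you downloaded the APK from.
APK_PATH = "quick-glance.apk"
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(APK_PATH)
    if actual == EXPECTED_SHA256.lower():
        print("Checksum matches - the download was not corrupted in transit.")
    else:
        print(f"Checksum mismatch! expected {EXPECTED_SHA256}, got {actual}")
```

Keep in mind that a matching checksum only confirms you received the file the site intended to serve; it does not prove the APK itself is safe.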

-


-

Here are the steps you need to follow to download and install Quick Glance APK safely and securely on your Android 12 device:

-
    -
  1. Find the APK file from a trusted source and download it to your device. You can use a web browser or a file manager app to do this; pick a source you trust, where you can find the latest version of Quick Glance APK.
  2. Enable unknown sources in your security settings to allow installation of third-party apps. To do this, go to Settings > Apps & notifications > Advanced > Special app access > Install unknown apps and select the app that you used to download the APK file (e.g., Chrome or Files). Then toggle on Allow from this source.
  3. Locate the downloaded APK file and tap on it to start the installation process. You may see a warning message that says "This type of file can harm your device". Don't worry, this is just a generic message that appears for all APK files. Tap on Install anyway to proceed. If you prefer installing from a computer, see the optional adb sketch after this list.
  4. Follow the on-screen instructions and grant the necessary permissions to complete the installation. You may need to allow Quick Glance to access your location, calendar, contacts, and other data to enable its features.
-

Congratulations, you have successfully installed Quick Glance APK on your Android 12 device. Now you can use it on your home screen or lock screen as a widget.
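
-

If you prefer working from a computer, you can also sideload the same APK over USB with adb instead of tapping through the on-device installer. The sketch below is only an illustration: it assumes adb is installed, USB debugging is enabled on the phone, and the file name quick_glance.apk is a placeholder for whatever file you actually downloaded.

```python
import subprocess

APK_PATH = "quick_glance.apk"  # placeholder name; point this at the file you downloaded

# List connected devices first so you can confirm the phone is detected and authorized.
subprocess.run(["adb", "devices"], check=True)

# "-r" lets adb replace an existing installation instead of failing if the app is already there.
result = subprocess.run(["adb", "install", "-r", APK_PATH], capture_output=True, text=True)
print(result.stdout or result.stderr)
```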

-

How to Use Quick Glance Widget on Android 12

-

Quick Glance widget is a versatile and customizable widget that can show you various information at a glance, such as weather, calendar, traffic, sports, news, and more. You can also adjust its size, color, transparency, and content according to your preferences and needs.

-

Here are the steps you need to follow to use Quick Glance widget on your Android 12 device:

-
    -
  1. Press and hold on an empty space on your home screen and tap on Widgets. This will open a list of available widgets that you can add to your home screen.
  2. Scroll down and find the Quick Glance widget and drag it to your desired location on your home screen. You can also resize it by dragging the edges of the widget.
  3. Tap on the widget to customize its settings, such as the size, color, transparency, and content. You can choose which topics you want to see on the widget, such as weather, calendar, traffic, sports, news, and more. You can also change the font style, color scheme, background color, and opacity of the widget.
  4. Enjoy the widget's features, such as weather, calendar, traffic, sports, news, and more. You can tap on any topic to see more details or open the corresponding app. For example, if you tap on the weather topic, you will see the current temperature, humidity, wind speed, and forecast for your location. If you tap on the calendar topic, you will see your upcoming events and reminders.
-

You can also use Quick Glance widget on your lock screen by following the same steps above. However, you may need to enable lock screen widgets in your settings first. To do this, go to Settings > Display > Lock screen > Lock screen widgets and toggle on Show widgets.

-

Benefits of Using Quick Glance Widget on Android 12

-

Quick Glance widget is not just a simple widget that shows you some information at a glance. It is also a powerful tool that can bring you many benefits for your Android 12 device. Here are some of them:

-
    -
  • Get at-a-glance information about various topics without opening any apps. This can save you the time and hassle of switching between different apps to check different information. For example, you can see the weather forecast for today without opening the weather app or see the latest news headlines without opening the news app.
  • Personalize the widget's appearance and content according to your preferences and needs. You can choose which topics you want to see on the widget, such as weather, calendar, traffic, sports, news, and more. You can also change the font style, color scheme, background color, and opacity of the widget to match your theme and mood.
  • Save battery life and data usage by using a lightweight widget instead of multiple apps. Quick Glance widget is designed to be efficient and fast, and it only uses minimal resources to show you the information you need. It also updates the information automatically and intelligently, so you don't have to refresh it manually or worry about outdated data.
-

These are just some of the benefits of using Quick Glance widget on your Android 12 device. You may discover more benefits as you use it more often and explore its features.

-

Alternatives to Quick Glance Widget on Android 12

-

Quick Glance widget is not the only widget that you can use on your Android 12 device. There are many other widgets that can offer similar or different features and functions. Here are some of the alternatives that you can consider if you are looking for something different:

-
    -
  • Google's At a Glance widget, which shows weather, date, events, reminders, and more. This widget is built-in on most Android devices and integrates well with Google's apps and services. You can also customize its settings and appearance to suit your needs.
  • Windows Widgets, which are similar to Quick Glance but for Windows devices. These widgets are part of the Windows 11 update and can show you various information at a glance, such as weather, news, stocks, photos, and more. You can also access them from your taskbar or by swiping from the left edge of your screen.
  • Other third-party widgets from the Google Play Store, such as KWGT or Zooper Widget. These widgets are more advanced and customizable than Quick Glance, and they allow you to create your own widgets from scratch or use templates from other users. You can also use them to display any data or information that you want, such as music, battery, clock, calendar, and more.
-

These are just some of the alternatives to Quick Glance widget on Android 12. You may find more widgets that suit your preferences and needs by browsing the Google Play Store or searching online.

-

Conclusion

-

Quick Glance APK is a third-party widget that is compatible with Android 12 and offers a variety of features that can enhance your user experience. It can show you various information at a glance, such as weather, calendar, traffic, sports, news, and more. You can also personalize its appearance and content according to your preferences and needs.

-

To use Quick Glance APK on your Android 12 device, you need to download and install it from a trusted source online. Then you need to enable unknown sources in your security settings to allow installation of third-party apps. After that, you need to locate the downloaded APK file and tap on it to start the installation process. Finally, you need to follow the on-screen instructions and grant the necessary permissions to complete the installation.

-

Once you have installed Quick Glance APK on your Android 12 device, you can use it on your home screen or lock screen as a widget. You can also customize its settings and appearance by tapping on the widget. You can enjoy the widget's features by tapping on any topic to see more details or open the corresponding app.

-

Quick Glance APK can bring you many benefits for your Android 12 device, such as saving time, battery life, and data usage by using a lightweight widget instead of multiple apps. It can also provide you with at-a-glance information about various topics without opening any apps. You can also personalize the widget's appearance and content according to your preferences and needs.

-

However, Quick Glance APK is not the only widget that you can use on your Android 12 device. There are many other widgets that can offer similar or different features and functions. Some of the alternatives that you can consider are Google's At a Glance widget, Windows Widgets, or other third-party widgets from the Google Play Store.

-

We hope that this article has helped you understand what Quick Glance APK is, how to download and install it on your Android 12 device, how to use it on your home screen or lock screen, what benefits it can bring you, and what alternatives you can consider. If you have any questions or feedback, please feel free to leave a comment below.

-

Thank you for reading and have a great day!

-

FAQs

-

Here are some of the frequently asked questions about Quick Glance APK and their answers:

-
    -
  1. Is Quick Glance APK safe to use?

    Quick Glance APK is safe to use as long as you download it from a trusted source online. However, be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device or compromise your privacy. Always scan the APK file with an antivirus app before installing it.

    -
  2. Does Quick Glance APK work with other Android versions?

    Quick Glance APK is compatible with Android 12 and above. It may not work well with older Android versions or devices that do not support widgets. If you have an older Android device, you may want to try out other widgets that are compatible with your device.

    -
  3. How do I update Quick Glance APK?

    Quick Glance APK does not have an automatic update feature, so you will need to manually check for updates and download them from the same source that you downloaded the original APK file. You can also follow the developer's social media accounts or website to get notified of any new updates or features.

    -
  4. How do I uninstall Quick Glance APK?

    To uninstall Quick Glance APK from your Android 12 device, you need to follow these steps:

    -
      -
    • Go to Settings > Apps & notifications > See all apps and find Quick Glance in the list.
    • Tap on Quick Glance and then tap on Uninstall.
    • Confirm your action by tapping on OK.
    -

    This will remove Quick Glance APK and its widget from your device.

    -
  5. What are some of the best sources for Quick Glance APK?

    Some of the best sources for Quick Glance APK are , , and . These sources are reliable and offer the latest version of Quick Glance APK. However, always scan the APK file with an antivirus app before installing it.

    -

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer APK 4.8.4.9 - Play with Friends Online or Offline.md b/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer APK 4.8.4.9 - Play with Friends Online or Offline.md deleted file mode 100644 index a945c6a77db5081d2e0da690f25e8cf7609c7cb9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer APK 4.8.4.9 - Play with Friends Online or Offline.md +++ /dev/null @@ -1,136 +0,0 @@ -
-

Car Parking Multiplayer APK 4.8.4.9: A Fun and Realistic Driving Game for Android

-

Do you love driving cars and parking them in different scenarios? Do you want to challenge your skills and compete with other players online? If yes, then you should try Car Parking Multiplayer APK 4.8.4.9, a popular car parking simulator game for Android devices.

-

In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, and its pros and cons. We will also answer some frequently asked questions about Car Parking Multiplayer APK 4.8.4.9.

-

car parking multiplayer apk 4.8.4.9 apk


Download >>> https://urlca.com/2uO9iN



-

What is Car Parking Multiplayer APK 4.8.4.9?

-

Car Parking Multiplayer APK 4.8.4.9 is an open-world car parking simulator game for Android devices developed by olzhass, a Turkish game studio. You can download and play it on any device running Android 5.0 or newer, including Android emulators for PC.

-

This game lets you drive various cars, from sedans to sports cars, and park them in different locations, such as city streets, airports, deserts, and more. You can also customize your cars with different colors, stickers, wheels, and accessories.

-

But what makes this game more fun is its multiplayer mode, where you can join or create online rooms with up to 100 players from around the world. You can chat and communicate with them using voice or text messages, or even drive them off the road if you feel like it.

-

You can also choose from different game modes and challenges, such as racing, drifting, free roam, police chase, zombie mode, and more. You can earn money and experience points by completing these challenges or by parking your car perfectly.

-

Features of Car Parking Multiplayer APK 4.8.4.9

-

Realistic car physics and graphics

-

One of the best features of Car Parking Multiplayer APK 4.8.4.9 is its realistic car physics and graphics. The game uses a realistic engine that simulates the behavior of real cars, such as acceleration, braking, steering, suspension, damage, and more.

-

The game also has high-quality graphics that make the cars and the environments look stunning. You can see the details of the car models, the shadows, the reflections, the weather effects, and more.

-

Open-world map with different locations

-

Another feature of Car Parking Multiplayer APK 4.8.4.9 is its open-world map that lets you explore different locations. The game has a large map that covers various regions, such as cities, deserts, airports, mountains, and more. You can drive and park your car in any of these locations and enjoy the scenery.

-

The game also has a day and night cycle and a dynamic weather system that changes the conditions of the map. You can experience driving and parking in different times of the day and in different weather situations, such as rain, snow, fog, and more.

-

car parking multiplayer mod apk 4.8.4.9 unlimited money
-car parking multiplayer 4.8.4.9 apk download for android
-car parking multiplayer hack apk 4.8.4.9 free download
-car parking multiplayer latest version 4.8.4.9 apk
-car parking multiplayer 4.8.4.9 apk obb
-car parking multiplayer mod menu apk 4.8.4.9
-car parking multiplayer 4.8.4.9 apk pure
-car parking multiplayer update 4.8.4.9 apk
-car parking multiplayer mod apk 4.8.4.9 android 1
-car parking multiplayer 4.8.4.9 apk rexdl
-car parking multiplayer mod apk 4.8.4.9 all unlocked
-car parking multiplayer 4.8.4.9 apk mirror
-car parking multiplayer cheat apk 4.8.4.9
-car parking multiplayer 4.8.4.9 apk uptodown
-car parking multiplayer mod apk 4.8.4.9 revdl
-car parking multiplayer 4.8.4.9 apk for pc
-car parking multiplayer premium apk 4.8.4.9
-car parking multiplayer 4.8.4.9 apk apkpure
-car parking multiplayer mod apk 4.8.4.9 happymod
-car parking multiplayer 4.8.4.9 apk moddroid
-car parking multiplayer unlimited coins apk 4.8.4.9
-car parking multiplayer 4.8.4.9 apk old version
-car parking multiplayer mod apk 4.8.4.9 no root
-car parking multiplayer 4.8.4.9 apk offline
-car parking multiplayer mod apk 4.8.4 revdl.com.apk (revdl.com)
-car parking multiplayer online game download apk 4..84..9
-car parking multiplayer mod apk 2021 version 48..49 unlimited everything
-how to install car parking multiplayer 48..49 on android phone
-best settings for car parking multiplayer app version 484..9
-tips and tricks for playing car parking multiplayer game v484..9
-how to get free cars in car parking multiplayer modded version 484..49
-how to play with friends in car parking multiplayer online mode v484..49
-how to update car parking multiplayer from older versions to latest version v484..49
-how to fix lag and crash issues in car parking multiplayer app v484..49
-how to customize your cars in car parking multiplayer game v484..49
-how to earn money fast in car parking multiplayer mod v484..49
-how to unlock all maps in car parking multiplayer hacked version v484..49
-how to join a server in car parking multiplayer online v484..49
-how to chat with other players in car parking multiplayer game v484..49
-how to report a bug or problem in car parking multiplayer app v484..49
-how to change your name and avatar in car parking multiplayer game v484..49
-how to use voice chat in car parking multiplayer online v484..49
-how to drift and do stunts in car parking multiplayer game v484..49
-how to park your car perfectly in car parking multiplayer game v484..49
-how to complete missions and challenges in car parking multiplayer game v484..49
-how to level up and rank up in car parking multiplayer game v484..49
-how to get free gems and gold in car parking multiplayer game v484..49

-

Multiplayer mode with chat and voice communication

-

The most exciting feature of Car Parking Multiplayer APK 4.8.4.9 is its multiplayer mode that lets you play with other players online. You can join or create online rooms with up to 100 players from around the world and interact with them using chat or voice communication.

-

You can also cooperate or compete with them in different game modes and challenges, such as racing, drifting, free roam, police chase, zombie mode, and more. You can show off your driving skills and your car customization to other players and have fun together.

-

Customizable cars and garage

-

Another feature of Car Parking Multiplayer APK 4.8.4.9 is its customizable cars and garage. The game has a wide range of cars to choose from, such as sedans, sports cars, trucks, buses, and more. You can also unlock new cars by earning money and experience points.

-

You can also customize your cars with different colors, stickers, wheels, accessories, and more. You can change the appearance and the performance of your cars according to your preference and style.

-

The game also has a garage where you can store and manage your cars. You can access your garage from anywhere on the map and switch between your cars easily.

-

Various game modes and challenges

-

The last feature of Car Parking Multiplayer APK 4.8.4.9 is its various game modes and challenges. The game has different game modes that suit different tastes and preferences, such as racing, drifting, free roam, police chase, zombie mode, and more.

-

The game also has different challenges that test your driving and parking skills, such as parallel parking, reverse parking, slalom parking, obstacle course, and more. You can earn money and experience points by completing these challenges or by parking your car perfectly.

-

How to download and install Car Parking Multiplayer APK 4.8.4.9 on your Android device?

-

If you are interested in playing Car Parking Multiplayer APK 4.8.4.9 on your Android device, you need to download and install it first. Here are the steps you need to follow:

-

Step 1: Download the APK file from a trusted source

-

The first step is to download the APK file of Car Parking Multiplayer APK 4.8.4.9 from a trusted source. You can use the link below to download it directly from our website:

-

Download Car Parking Multiplayer APK 4.8.4.9

-

Alternatively, you can also search for the APK file on other websites or platforms that offer it for free download. However, be careful not to download any fake or malicious files that may harm your device or compromise your privacy.
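
-

If the download page publishes a checksum for the file, one quick way to confirm you got the genuine file is to compare its SHA-256 hash against that published value before installing. Here is a minimal Python sketch; the file name and the expected hash below are placeholders, not values provided by the developer.

```python
import hashlib

APK_PATH = "car_parking_multiplayer_4.8.4.9.apk"      # placeholder file name
EXPECTED_SHA256 = "paste-the-published-checksum-here"  # placeholder value

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large APKs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("Checksum matches" if actual == EXPECTED_SHA256 else f"Checksum MISMATCH: {actual}")
```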

-

Step 2: Enable unknown sources on your device settings

-

The second step is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store or other official sources.

-

To do this, go to your device settings and look for the security or privacy option. Then, find the unknown sources option and toggle it on. You may see a warning message that tells you about the risks of installing unknown apps. Tap on OK or Allow to proceed.

-

Step 3: Locate and tap on the downloaded APK file

-

The third step is to locate and tap on the downloaded APK file. You can find it in your device's download folder or in any other location where you saved it.

-

Once you find it, tap on it to open it. You may see a pop-up window that asks you if you want to install this application. Tap on Install to continue.

-

Step 4: Follow the installation instructions and launch the game

-

The fourth step is to follow the installation instructions and launch the game. The installation process may take a few seconds or minutes depending on your device's speed and storage space. Wait until the installation is complete and you see a confirmation message that says the app has been installed.

-

Then, tap on Open to launch the game. You may see a splash screen that shows the game's logo and some loading animations. Wait until the game loads and you see the main menu.

-

Congratulations! You have successfully downloaded and installed Car Parking Multiplayer APK 4.8.4.9 on your Android device. You can now enjoy playing this fun and realistic driving game with your friends or strangers online.
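
-

If you installed the game by sideloading and want to confirm from a computer that the package really landed on the device, you can query the installed package list over adb. The search term below is just a guess, since the game's real package id is not stated here.

```python
import subprocess

SEARCH_TERM = "parking"  # assumption: the real package id likely contains this word

# "pm list packages" prints one "package:<id>" line per installed app.
output = subprocess.run(
    ["adb", "shell", "pm", "list", "packages"],
    capture_output=True, text=True, check=True,
).stdout

matches = [line for line in output.splitlines() if SEARCH_TERM in line.lower()]
print("\n".join(matches) if matches else "No matching package found")
```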

-

Pros and cons of Car Parking Multiplayer APK 4.8.4.9

-

Car Parking Multiplayer APK 4.8.4.9 is a great game for car enthusiasts and parking lovers, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of this game:

-

Pros

-

Free to play and download

-

One of the advantages of Car Parking Multiplayer APK 4.8.4.9 is that it is free to play and download. You don't need to pay anything to enjoy this game, unless you want to buy some optional items or features with real money.

-

This means that you can play this game as much as you want without spending a dime, as long as you have an internet connection and enough storage space on your device.

-

Fun and realistic driving experience

-

Another advantage of Car Parking Multiplayer APK 4.8.4.9 is that it offers a fun and realistic driving experience. The game has realistic car physics and graphics that make you feel like you are driving a real car in a real world.

-

You can also customize your cars and drive them in different locations and scenarios, such as city streets, airports, deserts, and more. You can also choose from different game modes and challenges, such as racing, drifting, free roam, police chase, zombie mode, and more.

-

Online multiplayer with friends and strangers

-

The most exciting advantage of Car Parking Multiplayer APK 4.8.4.9 is that it has an online multiplayer mode that lets you play with other players from around the world. You can join or create online rooms with up to 100 players and interact with them using chat or voice communication.

-

You can also cooperate or compete with them in different game modes and challenges, such as racing, drifting, free roam, police chase, zombie mode, and more. You can show off your driving skills and your car customization to other players and have fun together.

-

Variety of cars and game modes

-

The last advantage of Car Parking Multiplayer APK 4.8.4.9 is that it has a variety of cars and game modes to choose from. The game has a wide range of cars to choose from, such as sedans, sports cars, trucks, buses, and more. You can also unlock new cars by earning money and experience points.

-

The game also has different game modes that suit different tastes and preferences, such as racing, drifting, free roam, police chase, zombie mode, and more. You can also complete different challenges that test your driving and parking skills, such as parallel parking, reverse parking, slalom parking, obstacle course, and more.

-

Cons

-

Requires internet connection and storage space

-

One of the disadvantages of Car Parking Multiplayer APK 4.8.4.9 is that it requires an internet connection and storage space on your device. You need to have a stable internet connection to play this game online with other players or to download new updates or content.

-

You also need to have enough storage space on your device to install this game and its data files. The game's size may vary depending on your device model and operating system version, but it may take up several hundred megabytes or even gigabytes of your device's memory.

-

May contain ads and in-app purchases

-

Another disadvantage of Car Parking Multiplayer APK 4.8.4.9 is that it may contain ads and in-app purchases. The game may show you some ads while you are playing or loading the game, which may interrupt your gameplay or annoy you.

-

The game may also offer you some in-app purchases that let you buy some optional items or features with real money, such as extra money, premium cars, VIP membership, and more. These in-app purchases may give you some advantages or benefits in the game, but they may also cost you a lot of money.

-

If you don't want to see any ads or buy any in-app purchases, you can disable them in your device settings or use a modded version of the game that removes them. However, be careful not to use any illegal or unsafe mods that may harm your device or compromise your privacy.

-

May have bugs and glitches

-

The last disadvantage of Car Parking Multiplayer APK 4.8.4.9 is that it may have some bugs and glitches that affect the gameplay or performance of the game. The game may crash, freeze, lag, or have some errors while you are playing or loading the game, which may ruin your experience or frustrate you.

-

The game may also have some bugs and glitches that affect the car physics or graphics, such as cars flying, clipping, disappearing, or having wrong colors or shapes. These bugs and glitches may make the game look unrealistic or funny, but they may also make the game unplayable or unfair.

-

If you encounter any bugs or glitches while playing the game, you can report them to the developers through their official website or social media accounts. You can also wait for new updates or patches that may fix them.

-

Conclusion

-

Car Parking Multiplayer APK 4.8.4.9 is a fun and realistic driving game for Android devices that lets you drive and park various cars in different locations and scenarios. You can also play online with other players from around the world and chat and communicate with them using voice or text messages.

-

The game has realistic car physics and graphics, an open-world map with different locations, a multiplayer mode with chat and voice communication, customizable cars and garage, and various game modes and challenges. The game is free to play and download, but it may contain ads and in-app purchases. The game may also require an internet connection and storage space on your device, and it may have some bugs and glitches.

-

If you are looking for a car parking simulator game that offers a fun and realistic driving experience with online multiplayer features, you should try Car Parking Multiplayer APK 4.8.4.9 on your Android device.

-

FAQs

-

Here are some frequently asked questions about Car Parking Multiplayer APK 4.8.4.9:

-

Q: Is Car Parking Multiplayer APK 4.8.4.9 safe to download and install?

-

A: Yes, Car Parking Multiplayer APK 4.8.4.9 is safe to download and install on your Android device, as long as you download it from a trusted source like our website. The game does not contain any viruses or malware that may harm your device or compromise your privacy.

-

Q: How can I play Car Parking Multiplayer APK 4.8.4.9 offline?

-

A: You can play Car Parking Multiplayer APK 4.8.4.9 offline by choosing the single-player mode on the main menu of the game. You can still drive and park your car in different locations and scenarios, but you will not be able to join or create online rooms with other players.

-

Q: How can I play Car Parking Multiplayer APK 4.8.4.9 with my friends?

-

A: You can play Car Parking Multiplayer APK 4.8.4.9 with your friends by choosing the multiplayer mode on the main menu of the game. You can either join an existing online room with other players or create your own online room and invite your friends to join it.

-

Q: How can I customize my car in Car Parking Multiplayer APK 4.8.4.9?

-

A: You can customize your car in Car Parking Multiplayer APK 4.8.4.9 by accessing your garage from anywhere on the map and tapping on the customize button on the bottom right corner of the screen. You can then choose from different options to change the color, sticker, wheel, accessory, and performance of your car. You can also preview your changes before applying them.

-

Q: How can I earn money and experience points in Car Parking Multiplayer APK 4.8.4.9?

-

A: You can earn money and experience points in Car Parking Multiplayer APK 4.8.4.9 by completing different game modes and challenges, such as racing, drifting, free roam, police chase, zombie mode, and more. You can also earn money and experience points by parking your car perfectly or by watching ads or completing offers.

-

You can use the money to buy new cars or customize your existing cars. You can use the experience points to level up and unlock new features or rewards.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/LoveScapes MOD APK A Rich Visual Dating Simulator Book with Gorgeous Japanese Art Style.md b/spaces/congsaPfin/Manga-OCR/logs/LoveScapes MOD APK A Rich Visual Dating Simulator Book with Gorgeous Japanese Art Style.md deleted file mode 100644 index 5d85f84e9ab5a24a98c2329ed363431e07d190fb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/LoveScapes MOD APK A Rich Visual Dating Simulator Book with Gorgeous Japanese Art Style.md +++ /dev/null @@ -1,83 +0,0 @@ -
-

How to Download Lovescapes Mod APK and Enjoy Unlimited Romance

-

Are you a fan of romance novels and games? Do you want to experience a thrilling and immersive dating adventure on your mobile device? If yes, then you should try Lovescapes, a popular game that combines elements of a visual novel and a match-3 game. In this game, you will play as Isabella, a young woman who is looking for love and meaning in her life. You will have to help her navigate her romantic life by completing match-3 puzzles, making choices, and going on dates with different characters.

-

However, if you want to enjoy the game to the fullest, you might want to download Lovescapes mod apk, a modified version of the game that gives you unlimited money, free to date, no ads, and access to all features and content. In this article, we will tell you what is Lovescapes, why download Lovescapes mod apk, how to download and install Lovescapes mod apk, and some tips and tricks for playing Lovescapes. Let's get started!

-

download lovescapes mod apk


DOWNLOAD ☆☆☆ https://urlca.com/2uOfsP



-

What is Lovescapes?

-

Lovescapes is a mobile game that was released in 2022 by Qmodapk, a developer that specializes in creating romance games. The game has been downloaded over 10 million times on Google Play Store and has received positive reviews from players. Here are some of the features of Lovescapes:

-

A visual novel and match-3 game

-

Lovescapes is not just a simple dating simulator. It also has a match-3 puzzle element that adds more fun and challenge to the game. You will have to complete match-3 levels to progress in the story, unlock new scenes, and earn coins. The match-3 levels are colorful and varied, with different goals, obstacles, and boosters. You can also use power-ups to help you clear the board faster.

-

lovescapes mod apk unlimited money and stars
-lovescapes mod apk latest version 2023
-lovescapes mod apk free to date and chat
-lovescapes mod apk no ads and no verification
-lovescapes mod apk offline and unlocked
-lovescapes mod apk android and ios
-lovescapes mod apk romance and adventure
-lovescapes mod apk hack and cheat
-lovescapes mod apk premium and vip
-lovescapes mod apk download link and guide
-lovescapes mod apk full version and cracked
-lovescapes mod apk update and new features
-lovescapes mod apk review and rating
-lovescapes mod apk gameplay and tips
-lovescapes mod apk best dating game 2023
-lovescapes mod apk how to install and play
-lovescapes mod apk unlimited coins and gems
-lovescapes mod apk free shopping and gifts
-lovescapes mod apk no root and no ban
-lovescapes mod apk high quality and graphics
-lovescapes mod apk fun and addictive
-lovescapes mod apk realistic and immersive
-lovescapes mod apk original and safe
-lovescapes mod apk download for pc and mac
-lovescapes mod apk online and multiplayer
-lovescapes mod apk story and characters
-lovescapes mod apk choices and outcomes
-lovescapes mod apk secrets and surprises
-lovescapes mod apk challenges and rewards
-lovescapes mod apk bugs and fixes

-

A dating simulator with multiple choices

-

Lovescapes is also a dating simulator that lets you choose your own adventure. You will have to make decisions that will affect the outcome of the story and your relationships with other characters. You can choose from four different love interests: Alex, a charming businessman; Leo, a mysterious artist; Ryan, a sweet doctor; and Ethan, a bad boy rocker. Each character has their own personality, backstory, and preferences. You can also customize your own avatar with different outfits, hairstyles, and accessories.

-

A story of love and adventure

-

Lovescapes has a captivating story that will keep you hooked until the end. You will follow Isabella's journey as she tries to find her true self and her true love. You will also explore different locations around the world, such as Paris, New York, Tokyo, and more. You will encounter various events, challenges, surprises, and twists along the way. You will also discover secrets, mysteries, and scandals that will spice up your romance.

-

Why Download Lovescapes Mod APK?

-

Lovescapes is a free-to-play game that you can download from Google Play Store or App Store. However, there are some limitations and drawbacks that might hinder your enjoyment of the game. For example:

-

Unlimited money and free to date

-

In Lovescapes, you need coins to buy items, power-ups, and dates. However, coins are not easy to come by. You can earn them by completing match-3 levels, but they are not enough to cover your expenses. You can also buy them with real money, but that can be costly and inconvenient. If you download Lovescapes mod apk, you will get unlimited money that you can use to buy anything you want in the game. You will also be able to date any character you like without spending coins.

-

No ads and no waiting time

-

Lovescapes is a free-to-play game that relies on ads to generate revenue. This means that you will have to watch ads every time you finish a level, start a date, or claim a reward. Ads can be annoying and distracting, especially when they interrupt your gameplay. They can also consume your data and battery. If you download Lovescapes mod apk, you will get rid of all the ads in the game. You will also be able to play as much as you want without waiting for lives to refill or timers to reset.

-

Access to all features and content

-

Lovescapes is a game that has a lot of features and content that you can enjoy. However, some of them are locked or restricted by certain conditions. For example, you need to reach a certain level or chapter to unlock new scenes, characters, or locations. You also need to pay coins or watch ads to access some premium items, outfits, or dates. If you download Lovescapes mod apk, you will get access to all the features and content in the game. You will be able to explore everything that Lovescapes has to offer without any limitations.

-

How to Download and Install Lovescapes Mod APK?

-

Now that you know the benefits of downloading Lovescapes mod apk, you might be wondering how to do it. Don't worry, it's very easy and simple. Just follow these steps:

-

Step 1: Enable unknown sources on your device

-

Before you can install Lovescapes mod apk, you need to enable unknown sources on your device. This will allow you to install apps that are not from the official app stores. To do this, go to your device settings and look for security or privacy options. Then, find the option that says unknown sources or allow installation from unknown sources and turn it on.

-

Step 2: Download the mod apk file from a trusted source

-

Next, you need to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games, but not all of them are safe and reliable. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. To avoid this, make sure you download the mod apk file from a reputable website that has positive reviews and feedback from users. You can also scan the file with an antivirus app before installing it.

-

Step 3: Install the mod apk file and launch the game

-

Finally, you need to install the mod apk file and launch the game. To do this, locate the downloaded file on your device and tap on it. You will see a prompt asking you to confirm the installation. Tap on install and wait for the process to finish. Once it's done, you will see an icon of Lovescapes on your home screen or app drawer. Tap on it and enjoy the game!

-

Tips and Tricks for Playing Lovescapes

-

Lovescapes is a fun and addictive game that will keep you entertained for hours. However, if you want to master the game and get the best results, you might want to follow some tips and tricks that will help you improve your skills and strategies. Here are some of them:

-

Use boosters and power-ups wisely

-

Boosters and power-ups are special items that can help you clear match-3 levels faster and easier. They have different effects, such as clearing a row or column of tiles, exploding tiles in a radius, swapping tiles of different colors, etc. You can buy boosters with coins or get them as rewards for completing tasks or achievements. You can also create power-ups by matching four or more tiles of the same color in different shapes. However, boosters and power-ups are limited in number and use, so don't waste them on easy levels or unnecessary moves.

-

Choose your dates carefully

-

Dates are one of the most important aspects of Lovescapes. They allow you to interact with your love interests and develop your relationships with them. They also affect the outcome of the story and your endings. However, dates are not free in Lovescapes. You need to spend coins or watch ads to go on dates with your chosen character. You also need to complete match-3 levels to unlock new dates and scenes. Therefore, you should choose your dates carefully and wisely. Don't date every character at once, as this will cost you more coins and time. Focus on one or two characters that you like the most and that match your preferences and personality. You can also check their profiles and bios to learn more about them and their interests.

-

Explore different endings and scenarios

-

Lovescapes is a game that has multiple endings and scenarios depending on your choices and actions. You can get different outcomes for each character and for the overall story. Some endings are happy, some are sad, and some are surprising. You can also unlock different scenes and dialogues that will reveal more details and secrets about the characters and the plot. To explore different endings and scenarios, you can replay the game from the beginning or from a certain chapter. You can also use the rewind feature to change your choices or try different options.

-

Conclusion

-

Lovescapes is a game that will make you fall in love with romance and adventure. It is a game that combines a visual novel and a match-3 game, with a dating simulator and a story of love and adventure. You can download Lovescapes mod apk to enjoy unlimited money, free to date, no ads, and access to all features and content. You can also follow some tips and tricks to improve your gameplay and get the best results. Download Lovescapes mod apk now and start your romantic journey!

-

FAQs

-

Here are some frequently asked questions about Lovescapes mod apk:

-

Q: Is Lovescapes mod apk safe to download and install?

-

A: Yes, Lovescapes mod apk is safe to download and install, as long as you get it from a trusted source. However, you should always scan the file with an antivirus app before installing it, just to be sure.

-

Q: Will Lovescapes mod apk work on any device?

-

A: Lovescapes mod apk is compatible with most Android devices that have Android 4.4 or higher. However, some devices might have issues with the installation or the performance of the game. If you encounter any problems, you can try to clear the cache, update the game, or contact the developer for support.

-

Q: Will Lovescapes mod apk affect my progress or account in the original game?

-

A: No, Lovescapes mod apk will not affect your progress or account in the original game. Lovescapes mod apk is a separate app that has its own data and settings. You can play both versions of the game without any interference or conflict.

-

Q: How can I update Lovescapes mod apk?

-

A: To update Lovescapes mod apk, you need to download the latest version of the mod apk file from the same source that you got it from. Then, you need to uninstall the old version of the mod apk file and install the new one. You don't need to worry about losing your data or progress, as they will be saved on your device.

-

Q: Where can I find more information about Lovescapes mod apk?

-

A: You can find more information about Lovescapes mod apk on the official website of Qmodapk, the developer of the game. You can also join their social media pages or forums to get updates, news, tips, and feedback from other players.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Save Time and Space with These Tools to Download Bulk TikTok Videos.md b/spaces/congsaPfin/Manga-OCR/logs/Save Time and Space with These Tools to Download Bulk TikTok Videos.md deleted file mode 100644 index 46b532789f8a96cd7339a2c1e52d581c45657d51..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Save Time and Space with These Tools to Download Bulk TikTok Videos.md +++ /dev/null @@ -1,148 +0,0 @@ -
-

How to Download Bulk TikTok Videos: A Complete Guide

-

TikTok is one of the most popular social media platforms in the world, with over 1 billion active users. It allows users to create and share short-form videos on various topics, such as comedy, music, dance, beauty, education, and more. TikTok videos are fun, creative, and engaging, and they can inspire you, make you laugh, or teach you something new.

-

But what if you want to download some of these amazing videos to your device for offline viewing, editing, or sharing? And what if you want to download not just one or two, but dozens or hundreds of videos at once? This is where downloading bulk TikTok videos comes in handy.

-

download bulk tiktok videos


DOWNLOAD ✯✯✯ https://urlca.com/2uO4KZ



-

Downloading bulk TikTok videos means saving multiple videos from TikTok to your device in one go. This can be useful for various reasons, such as:

-
    -
  • Backing up your favorite TikTok videos or memories
  • Creating your own video compilations or mashups
  • Using TikTok videos for your personal or professional projects
  • Sharing TikTok videos with your friends or family who don't have the app
  • Watching TikTok videos offline when you don't have internet access
-

However, downloading bulk TikTok videos is not as easy as it sounds. There are some challenges that you might face, such as:

-
    -
  • Finding a reliable and safe way to download TikTok videos without compromising your privacy or security
  • Downloading TikTok videos without watermarks that cover part of the video or distract from the content
  • Downloading TikTok videos in high quality or resolution that match your preferences and needs
  • Downloading TikTok videos efficiently and quickly without wasting time or resources
-

Fortunately, there are some methods that can help you overcome these challenges and download bulk TikTok videos with ease. In this article, we will show you how to download bulk TikTok videos with different methods, such as using the TikTok web platform, a dedicated browser extension, or online services. Let's get started!

-

How to Download Bulk TikTok Videos with Different Methods

-

There are several ways to download bulk TikTok videos, depending on your preferences, needs, and budget. Here are some of the most common and effective methods that you can try:

-

Method 1: Use the TikTok Web Platform

-

One of the easiest and most convenient ways to download bulk TikTok videos is to use the official TikTok web platform. This is the website version of the TikTok app that you can access from any browser on your PC or laptop. The TikTok web platform allows you to download individual videos or multiple videos from a profile page.

-

To download individual videos from the TikTok web platform, follow these steps:

-

How to download multiple tiktok videos at once
-Best tiktok downloader without watermark and bulk
-Tiktok video downloader chrome extension for batch download
-Download tiktok videos in bulk online for free
-Save tiktok videos without watermark in bulk with 4K TokKit
-Bulk download tiktok videos by username or hashtag
-Tikrank - video downloader without watermark and bulk
-Download all tiktok videos from a profile in one click
-Tiktok batch downloader for Windows and Mac
-Download tiktok videos without watermark in bulk with EaseUS Video Downloader
-Tiktok video downloader app for bulk download
-Download tiktok videos in bulk with captions and hashtags
-Bulk download tiktok videos by date range or category
-Tiktok video downloader for PC with bulk option
-Download tiktok videos in bulk with sound and music
-Bulk download tiktok videos with high quality and resolution
-Tiktok video downloader for Android with bulk feature
-Download tiktok videos in bulk with no ads or pop-ups
-Bulk download tiktok videos by URL or QR code
-Tiktok video downloader for iPhone with bulk mode
-Download tiktok videos in bulk with subtitles and translations
-Bulk download tiktok videos by location or region
-Tiktok video downloader for Firefox with bulk support
-Download tiktok videos in bulk with filters and effects
-Bulk download tiktok videos by likes or views
-Tiktok video downloader for Safari with bulk function
-Download tiktok videos in bulk with metadata and thumbnails
-Bulk download tiktok videos by comments or shares
-Tiktok video downloader for Edge with bulk capability
-Download tiktok videos in bulk with custom settings and preferences
-Bulk download tiktok videos by duration or size
-Tiktok video downloader for Opera with bulk feature
-Download tiktok videos in bulk with watermark remover and editor
-Bulk download tiktok videos by format or type
-Tiktok video downloader for Brave with bulk option
-Download tiktok videos in bulk with converter and compressor
-Bulk download tiktok videos by genre or theme
-Tiktok video downloader for Linux with bulk support
-Download tiktok videos in bulk with scheduler and timer
-Bulk download tiktok videos by source or platform
-Tiktok video downloader for Mac OS X with bulk function
-Download tiktok videos in bulk with backup and restore
-Bulk download tiktok videos by rating or review
-Tiktok video downloader for Windows 10 with bulk capability
-Download tiktok videos in bulk with VPN and proxy

-
    -
  1. Go to https://www.tiktok.com and log in with your account or browse as a guest.
  2. Find the video that you want to download and click on it to open it in a new tab.
  3. On the right side of the video, click on the three dots icon and select "Download video" from the menu.
  4. The video will be downloaded to your device with a watermark that shows the username of the creator and the TikTok logo.
-

To download multiple videos from a profile page on the TikTok web platform, follow these steps:

-
    -
  1. Go to https://www.tiktok.com and log in with your account or browse as a guest.
  2. Find the profile page of the user whose videos you want to download and click on it to open it in a new tab.
  3. On the profile page, scroll down and select the videos that you want to download by clicking on the checkbox on the top left corner of each video thumbnail.
  4. Once you have selected all the videos that you want to download, click on the "Download" button on the top right corner of the page.
  5. The videos will be downloaded to your device in a zip file with watermarks that show the username of the creator and the TikTok logo.
-

Note that this method only works for public videos that have the download option enabled by the creator. If you want to download private or restricted videos, you will need to use another method.

-

Method 2: Use a Dedicated Browser Extension

-

Another way to download bulk TikTok videos is to use a dedicated browser extension that adds a download button to each video on the TikTok web platform. One of the most popular and reliable extensions for this purpose is TikTok Downloader, which is available for Chrome browsers. This extension allows you to download individual or multiple videos from any profile page or hashtag page without watermarks.

-

To install and use the TikTok downloader extension for Chrome, follow these steps:

-
    -
  1. Go to https://chrome.google.com/webstore/detail/tiktok-downloader/ekjapccimkannnfgcnnoajhfdjjgjjgh and click on "Add to Chrome" to install the extension.
  2. Go to https://www.tiktok.com and log in with your account or browse as a guest.
  3. Find the video that you want to download and click on it to open it in a new tab.
  4. On the right side of the video, click on the red "Download" button that appears below the three dots icon.
  5. The video will be downloaded to your device without any watermark.
-

To upgrade to the pro version of the TikTok downloader extension and download videos without watermarks, follow these steps:

-
    -
  1. Go to https://tiktokdownloader.pro/ and click on "Buy Now" to purchase a license key for $9.99 per year.
  2. Go to https://www.tiktok.com and log in with your account or browse as a guest.
  3. Click on the extension icon on the top right corner of your browser and enter your license key in the pop-up window.
  4. You can now download any video from any profile page or hashtag page without watermarks by clicking on the red "Download" button below each video.
-

Method 3: Use Online Services

-

A third option to download bulk TikTok videos is to use online services that allow you to download videos from any URL or hashtag. There are many online services that offer this feature, but some of the most popular and reliable ones are savett and 4K TokKit. These services allow you to download videos with or without watermarks, in high quality, and with advanced features.

-

To use savett website to download videos with or without watermarks, follow these steps:

-
    -
  1. Go to https://savett.com/ and paste the URL of the video or the hashtag that you want to download in the search box.
  2. Click on the "Download" button and wait for the results to load.
  3. You will see a list of videos that match your query, along with their thumbnails, titles, and durations.
  4. Click on the "Download" button below each video that you want to download and choose whether you want to download it with or without watermark.
  5. The video will be downloaded to your device in the format and quality that you selected.
-

To use 4K TokKit app to download videos in high quality and with advanced features, follow these steps:

-
    -
  1. Go to https://www.4kdownload.com/products/product-tokkit and download and install the app on your PC or Mac.
  2. Launch the app and paste the URL of the video or the hashtag that you want to download in the search box.
  3. Click on the "Download" button and wait for the results to load.
  4. You will see a list of videos that match your query, along with their thumbnails, titles, and durations.
  5. Click on the "Download" button below each video that you want to download and choose the format and quality that you prefer.
  6. You can also use the app to edit, trim, merge, or convert your downloaded videos.
-
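
-

If you are comfortable with a little scripting, open-source downloaders such as yt-dlp can also fetch a whole list of public TikTok links in one run, which is handy when the videos you want are spread across many creators. The sketch below uses yt-dlp's Python API; the URLs are placeholders you would replace with your own list, and whether any given video can be fetched depends on TikTok's current site behavior and the creator's settings.

```python
from yt_dlp import YoutubeDL  # pip install yt-dlp

# Replace these placeholder URLs with the public TikTok videos you want to save.
urls = [
    "https://www.tiktok.com/@someuser/video/1234567890123456789",
    "https://www.tiktok.com/@anotheruser/video/9876543210987654321",
]

options = {
    "outtmpl": "%(uploader)s_%(id)s.%(ext)s",  # name files as creator_videoid.ext
    "ignoreerrors": True,                      # skip videos that fail instead of aborting the run
}

with YoutubeDL(options) as ydl:
    ydl.download(urls)
```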

Conclusion

-

Downloading bulk TikTok videos can be a great way to enjoy, save, or use your favorite TikTok content offline. However, it can also be a challenging task if you don't know how to do it properly. In this article, we have shown you how to download bulk TikTok videos with different methods, such as using the TikTok web platform, a dedicated browser extension, or online services. Each method has its own advantages and disadvantages, so you can choose the one that suits your needs and preferences best.

-

Here are some tips and best practices for downloading bulk TikTok videos:

-
    -
  • Always respect the rights and privacy of the TikTok creators and do not use their videos for illegal or unethical purposes.
  • Always check the terms and conditions of the TikTok platform and of any third-party services that you use for downloading videos, and comply with them.
  • Always scan your downloaded files for viruses or malware before opening them on your device.
  • Always organize your downloaded files in folders or albums for easy access and management (a short example of one way to do this follows this list).
  • Always delete or archive your downloaded files when you no longer need them to free up space on your device.
-
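
-

As a small illustration of the organization tip above, the sketch below sorts downloaded clips into one folder per creator. It assumes the files follow a creator_videoid.mp4 naming pattern like the one used in the earlier yt-dlp example; adjust the split rule if your file names look different.

```python
from pathlib import Path
import shutil

DOWNLOADS = Path("tiktok_downloads")  # folder where the downloaded videos were saved

for video in DOWNLOADS.glob("*.mp4"):
    # "creator_1234567890.mp4" -> "creator"; files without an underscore go to "unsorted".
    creator = video.stem.split("_")[0] if "_" in video.stem else "unsorted"
    target_dir = DOWNLOADS / creator
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.move(str(video), str(target_dir / video.name))
```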

We hope this article has helped you learn how to download bulk TikTok videos with ease. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends or family who might find it useful. Thank you for reading!

-

FAQs

-
    -
  • Q1: Can I download TikTok videos without the app on a PC?
  • A1: Yes, you can use any of the methods mentioned in this article to download TikTok videos without the app on a PC.
  • Q2: Can I save TikTok videos without posting them?
  • A2: Yes, you can save TikTok videos without posting them by using the draft or private options in the app or by downloading them to your device.
  • Q3: How can I download TikTok videos with music or sound?
  • A3: You can download TikTok videos with music or sound by using the online services or the pro version of the browser extension that let you download videos without watermarks.
  • Q4: How can I download TikTok videos in high resolution or quality?
  • A4: You can download TikTok videos in high resolution or quality by using the 4K TokKit app, which lets you choose the video format and quality before downloading.
  • Q5: How can I download TikTok videos faster or more efficiently?
  • A5: You can download TikTok videos faster or more efficiently by using the batch or bulk download options that allow you to download multiple videos at once from a profile page or a hashtag.
-

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/Hotspot-Shield-Profile-Download-Mac-FULL.md b/spaces/contluForse/HuggingGPT/Hotspot-Shield-Profile-Download-Mac-FULL.md deleted file mode 100644 index 3c8124a6103d49c20ad4d6b841b03a527a5a81b4..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/Hotspot-Shield-Profile-Download-Mac-FULL.md +++ /dev/null @@ -1,164 +0,0 @@ -## Hotspot Shield Profile Download Mac - - - - - - ![Hotspot Shield Profile Download Mac __FULL__](https://blog.hotspotshield.com/wp-content/uploads/2018/05/1.png) - - - - - -**Download ->>> [https://urluso.com/2txV2f](https://urluso.com/2txV2f)** - - - - - - - - - - - - - -# How to Download and Install Hotspot Shield on Mac - - - -Hotspot Shield is a popular VPN service that allows you to access the internet securely and privately. With Hotspot Shield, you can bypass geo-restrictions and censorship, stream your favorite content, and protect your online identity and data. In this article, we will show you how to download and install Hotspot Shield on your Mac device. - - - -## Step 1: Download Hotspot Shield from the App Store - - - -The easiest way to get Hotspot Shield on your Mac is to download it from the App Store. To do this, follow these steps: - - - -- On your Mac, open the App Store and search for "Hotspot Shield" or go to [App Store here](https://apps.apple.com/us/app/hotspotshield-vpn-wifi-proxy/id771076721?mt=12)[^2^]. - -- Find the Hotspot Shield app and click GET, then click INSTALL APP. - -- Enter your Apple ID and password if prompted. - -- Wait for the app to download and install on your Mac. - - - -## Step 2: Launch Hotspot Shield and create an account - - - -Once you have installed Hotspot Shield on your Mac, you need to launch it and create an account. To do this, follow these steps: - - - -- Open the Hotspot Shield app from your Applications folder or Launchpad. - -- Click on the Sign Up button at the bottom of the app window. - -- Enter your email address and a password of your choice, then click CREATE ACCOUNT. - -- Check your email inbox for a verification email from Hotspot Shield and click on the link to activate your account. - - - -## Step 3: Connect to a VPN server - - - -Now that you have created an account, you can connect to a VPN server of your choice. To do this, follow these steps: - - - -- In the Hotspot Shield app, click on the Connect button at the top of the app window. - -- Select a server location from the drop-down menu or let the app choose the best server for you. - -- Wait for the app to establish a secure VPN connection. You will see a green shield icon in the menu bar when you are connected. - -- Enjoy browsing, streaming, and gaming with Hotspot Shield! - - - -If you want to disconnect from the VPN server, simply click on the Disconnect button in the app window or in the menu bar icon. - - - -## Conclusion - - - -Hotspot Shield is a fast, reliable, and easy-to-use VPN service that lets you access the internet securely and privately. With Hotspot Shield, you can enjoy unlimited bandwidth, high-speed servers, military-grade encryption, and a 7-day free trial. To download and install Hotspot Shield on your Mac device, just follow the steps above or visit [Profile/Installation – Hotspot Shield Support Center](https://support.hotspotshield.com/hc/en-us/sections/115001537586-Profile-Installation)[^1^] for more help. - - - -## Why use Hotspot Shield VPN? 
- - - -Hotspot Shield VPN is one of the most trusted and popular VPN services in the world. It has over 650 million users in more than 115 countries. Here are some of the benefits of using Hotspot Shield VPN: - - - -- It protects your online privacy and security by encrypting your internet traffic and hiding your IP address from hackers, ISPs, and government agencies. - -- It allows you to access any website or app without censorship or geo-restrictions. You can watch Netflix, Hulu, BBC iPlayer, and other streaming services from anywhere in the world. - -- It boosts your internet speed and performance by using its patented Hydra protocol, which is faster and more stable than other VPN protocols. - -- It offers a generous 7-day free trial and a 45-day money-back guarantee. You can try Hotspot Shield VPN risk-free and see for yourself how it works. - - - -## How to get the most out of Hotspot Shield VPN? - - - -To get the most out of Hotspot Shield VPN, you should follow these tips: - - - -- Use the Smart VPN feature to automatically connect to the best server for your needs. You can also customize the Smart VPN settings to choose which apps or websites you want to use with or without the VPN. - -- Enable the Kill Switch feature to prevent your internet traffic from being exposed if the VPN connection drops unexpectedly. This way, you can avoid any leaks or breaches of your online privacy. - -- Use the Split Tunneling feature to choose which apps or websites you want to route through the VPN and which ones you want to use with your regular internet connection. This can help you save bandwidth and improve your speed. - -- Upgrade to Hotspot Shield Premium to unlock more features and benefits. With Hotspot Shield Premium, you can access over 3200 servers in 80+ countries, enjoy unlimited bandwidth and data, get 24/7 live chat support, and connect up to 5 devices simultaneously. - - - -## FAQs about Hotspot Shield VPN - - - -Here are some frequently asked questions about Hotspot Shield VPN: - - - -1. Is Hotspot Shield VPN safe to use? - -Yes, Hotspot Shield VPN is safe to use. It uses AES-256 encryption, which is the highest level of encryption available. It also has a strict no-logs policy, which means it does not collect or store any of your personal information or online activity. - -2. Does Hotspot Shield VPN work on Mac? - -Yes, Hotspot Shield VPN works on Mac devices. You can download and install it from the App Store or from the official website. It is compatible with macOS 10.12 or later. - -3. How much does Hotspot Shield VPN cost? - -Hotspot Shield VPN offers a free version and a premium version. The free version has some limitations, such as a daily data cap of 500 MB, ads, and access to only one server location (US). The premium version costs $7.99 per month or $95.88 per year (billed annually). You can also get a 3-year plan for $125.64 (billed every 3 years), which works out to $3.49 per month. - - - - dfd1c89656 - - - - - diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/layers/drop.py b/spaces/cooelf/Multimodal-CoT/timm/models/layers/drop.py deleted file mode 100644 index 6de9e3f729f7f1ca29d4511f6c64733d3169fbec..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/layers/drop.py +++ /dev/null @@ -1,168 +0,0 @@ -""" DropBlock, DropPath - -PyTorch implementations of DropBlock and DropPath (Stochastic Depth) regularization layers. 
- -Papers: -DropBlock: A regularization method for convolutional networks (https://arxiv.org/abs/1810.12890) - -Deep Networks with Stochastic Depth (https://arxiv.org/abs/1603.09382) - -Code: -DropBlock impl inspired by two Tensorflow impl that I liked: - - https://github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_model.py#L74 - - https://github.com/clovaai/assembled-cnn/blob/master/nets/blocks.py - -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def drop_block_2d( - x, drop_prob: float = 0.1, block_size: int = 7, gamma_scale: float = 1.0, - with_noise: bool = False, inplace: bool = False, batchwise: bool = False): - """ DropBlock. See https://arxiv.org/pdf/1810.12890.pdf - - DropBlock with an experimental gaussian noise option. This layer has been tested on a few training - runs with success, but needs further validation and possibly optimization for lower runtime impact. - """ - B, C, H, W = x.shape - total_size = W * H - clipped_block_size = min(block_size, min(W, H)) - # seed_drop_rate, the gamma parameter - gamma = gamma_scale * drop_prob * total_size / clipped_block_size ** 2 / ( - (W - block_size + 1) * (H - block_size + 1)) - - # Forces the block to be inside the feature map. - w_i, h_i = torch.meshgrid(torch.arange(W).to(x.device), torch.arange(H).to(x.device)) - valid_block = ((w_i >= clipped_block_size // 2) & (w_i < W - (clipped_block_size - 1) // 2)) & \ - ((h_i >= clipped_block_size // 2) & (h_i < H - (clipped_block_size - 1) // 2)) - valid_block = torch.reshape(valid_block, (1, 1, H, W)).to(dtype=x.dtype) - - if batchwise: - # one mask for whole batch, quite a bit faster - uniform_noise = torch.rand((1, C, H, W), dtype=x.dtype, device=x.device) - else: - uniform_noise = torch.rand_like(x) - block_mask = ((2 - gamma - valid_block + uniform_noise) >= 1).to(dtype=x.dtype) - block_mask = -F.max_pool2d( - -block_mask, - kernel_size=clipped_block_size, # block_size, - stride=1, - padding=clipped_block_size // 2) - - if with_noise: - normal_noise = torch.randn((1, C, H, W), dtype=x.dtype, device=x.device) if batchwise else torch.randn_like(x) - if inplace: - x.mul_(block_mask).add_(normal_noise * (1 - block_mask)) - else: - x = x * block_mask + normal_noise * (1 - block_mask) - else: - normalize_scale = (block_mask.numel() / block_mask.to(dtype=torch.float32).sum().add(1e-7)).to(x.dtype) - if inplace: - x.mul_(block_mask * normalize_scale) - else: - x = x * block_mask * normalize_scale - return x - - -def drop_block_fast_2d( - x: torch.Tensor, drop_prob: float = 0.1, block_size: int = 7, - gamma_scale: float = 1.0, with_noise: bool = False, inplace: bool = False, batchwise: bool = False): - """ DropBlock. See https://arxiv.org/pdf/1810.12890.pdf - - DropBlock with an experimental gaussian noise option. Simplied from above without concern for valid - block mask at edges. 
- """ - B, C, H, W = x.shape - total_size = W * H - clipped_block_size = min(block_size, min(W, H)) - gamma = gamma_scale * drop_prob * total_size / clipped_block_size ** 2 / ( - (W - block_size + 1) * (H - block_size + 1)) - - if batchwise: - # one mask for whole batch, quite a bit faster - block_mask = torch.rand((1, C, H, W), dtype=x.dtype, device=x.device) < gamma - else: - # mask per batch element - block_mask = torch.rand_like(x) < gamma - block_mask = F.max_pool2d( - block_mask.to(x.dtype), kernel_size=clipped_block_size, stride=1, padding=clipped_block_size // 2) - - if with_noise: - normal_noise = torch.randn((1, C, H, W), dtype=x.dtype, device=x.device) if batchwise else torch.randn_like(x) - if inplace: - x.mul_(1. - block_mask).add_(normal_noise * block_mask) - else: - x = x * (1. - block_mask) + normal_noise * block_mask - else: - block_mask = 1 - block_mask - normalize_scale = (block_mask.numel() / block_mask.to(dtype=torch.float32).sum().add(1e-7)).to(dtype=x.dtype) - if inplace: - x.mul_(block_mask * normalize_scale) - else: - x = x * block_mask * normalize_scale - return x - - -class DropBlock2d(nn.Module): - """ DropBlock. See https://arxiv.org/pdf/1810.12890.pdf - """ - def __init__(self, - drop_prob=0.1, - block_size=7, - gamma_scale=1.0, - with_noise=False, - inplace=False, - batchwise=False, - fast=True): - super(DropBlock2d, self).__init__() - self.drop_prob = drop_prob - self.gamma_scale = gamma_scale - self.block_size = block_size - self.with_noise = with_noise - self.inplace = inplace - self.batchwise = batchwise - self.fast = fast # FIXME finish comparisons of fast vs not - - def forward(self, x): - if not self.training or not self.drop_prob: - return x - if self.fast: - return drop_block_fast_2d( - x, self.drop_prob, self.block_size, self.gamma_scale, self.with_noise, self.inplace, self.batchwise) - else: - return drop_block_2d( - x, self.drop_prob, self.block_size, self.gamma_scale, self.with_noise, self.inplace, self.batchwise) - - -def drop_path(x, drop_prob: float = 0., training: bool = False): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - - This is the same as the DropConnect impl I created for EfficientNet, etc networks, however, - the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for - changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use - 'survival rate' as the argument. - - """ - if drop_prob == 0. or not training: - return x - keep_prob = 1 - drop_prob - shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(keep_prob) * random_tensor - return output - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
- """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/vgg.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/vgg.py deleted file mode 100644 index 8778b649561a45a9652b1a15a26c2d171e58f3e1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/cnn/vgg.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn - -from .utils import constant_init, kaiming_init, normal_init - - -def conv3x3(in_planes, out_planes, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - padding=dilation, - dilation=dilation) - - -def make_vgg_layer(inplanes, - planes, - num_blocks, - dilation=1, - with_bn=False, - ceil_mode=False): - layers = [] - for _ in range(num_blocks): - layers.append(conv3x3(inplanes, planes, dilation)) - if with_bn: - layers.append(nn.BatchNorm2d(planes)) - layers.append(nn.ReLU(inplace=True)) - inplanes = planes - layers.append(nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=ceil_mode)) - - return layers - - -class VGG(nn.Module): - """VGG backbone. - - Args: - depth (int): Depth of vgg, from {11, 13, 16, 19}. - with_bn (bool): Use BatchNorm or not. - num_classes (int): number of classes for classification. - num_stages (int): VGG stages, normally 5. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. 
- """ - - arch_settings = { - 11: (1, 1, 2, 2, 2), - 13: (2, 2, 2, 2, 2), - 16: (2, 2, 3, 3, 3), - 19: (2, 2, 4, 4, 4) - } - - def __init__(self, - depth, - with_bn=False, - num_classes=-1, - num_stages=5, - dilations=(1, 1, 1, 1, 1), - out_indices=(0, 1, 2, 3, 4), - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - ceil_mode=False, - with_last_pool=True): - super(VGG, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for vgg') - assert num_stages >= 1 and num_stages <= 5 - stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - assert len(dilations) == num_stages - assert max(out_indices) <= num_stages - - self.num_classes = num_classes - self.out_indices = out_indices - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - - self.inplanes = 3 - start_idx = 0 - vgg_layers = [] - self.range_sub_modules = [] - for i, num_blocks in enumerate(self.stage_blocks): - num_modules = num_blocks * (2 + with_bn) + 1 - end_idx = start_idx + num_modules - dilation = dilations[i] - planes = 64 * 2**i if i < 4 else 512 - vgg_layer = make_vgg_layer( - self.inplanes, - planes, - num_blocks, - dilation=dilation, - with_bn=with_bn, - ceil_mode=ceil_mode) - vgg_layers.extend(vgg_layer) - self.inplanes = planes - self.range_sub_modules.append([start_idx, end_idx]) - start_idx = end_idx - if not with_last_pool: - vgg_layers.pop(-1) - self.range_sub_modules[-1][1] -= 1 - self.module_name = 'features' - self.add_module(self.module_name, nn.Sequential(*vgg_layers)) - - if self.num_classes > 0: - self.classifier = nn.Sequential( - nn.Linear(512 * 7 * 7, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, 4096), - nn.ReLU(True), - nn.Dropout(), - nn.Linear(4096, num_classes), - ) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - outs = [] - vgg_layers = getattr(self, self.module_name) - for i in range(len(self.stage_blocks)): - for j in range(*self.range_sub_modules[i]): - vgg_layer = vgg_layers[j] - x = vgg_layer(x) - if i in self.out_indices: - outs.append(x) - if self.num_classes > 0: - x = x.view(x.size(0), -1) - x = self.classifier(x) - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(VGG, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - vgg_layers = getattr(self, self.module_name) - if mode and self.frozen_stages >= 0: - for i in range(self.frozen_stages): - for j in range(*self.range_sub_modules[i]): - mod = vgg_layers[j] - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/__init__.py deleted file mode 100644 index 
2e441a5838d1e972823b9668ac8d459445f6f6ce..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .gen_efficientnet import * -from .mobilenetv3 import * -from .model_factory import create_model -from .config import is_exportable, is_scriptable, set_exportable, set_scriptable -from .activations import * \ No newline at end of file diff --git a/spaces/cscan/vocal_remover/README.md b/spaces/cscan/vocal_remover/README.md deleted file mode 100644 index b738c3fcd767e85e28142fff5c4030a70fffe7a4..0000000000000000000000000000000000000000 --- a/spaces/cscan/vocal_remover/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Vocal Remover -emoji: ⚡ -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: antonbol/vocal_remover ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git "a/spaces/dakaiye/dky_xuexi/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/dakaiye/dky_xuexi/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index ffbb05599ef09c9de25334ebeca2eef8022b9aaf..0000000000000000000000000000000000000000 --- "a/spaces/dakaiye/dky_xuexi/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,160 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. 
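-    # The interpreter renders each page into the aggregator `device`; its get_result()
-    # then yields layout objects (e.g. LTTextBoxHorizontal) from which the text is collected below.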
- interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - 
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - diff --git a/spaces/danielcwq/chat-your-data-trial/ingest_data.py b/spaces/danielcwq/chat-your-data-trial/ingest_data.py deleted file mode 100644 index 0025746538a3ce4959a279094c021397ad64374f..0000000000000000000000000000000000000000 --- a/spaces/danielcwq/chat-your-data-trial/ingest_data.py +++ /dev/null @@ -1,23 +0,0 @@ -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.document_loaders import UnstructuredFileLoader -from langchain.vectorstores.faiss import FAISS -from langchain.embeddings import OpenAIEmbeddings -import pickle - -# Load Data -loader = UnstructuredFileLoader("state_of_the_union.txt") -raw_documents = loader.load() - -# Split text -text_splitter = RecursiveCharacterTextSplitter() -documents = text_splitter.split_documents(raw_documents) - - -# Load Data to vectorstore -embeddings = OpenAIEmbeddings() -vectorstore = FAISS.from_documents(documents, embeddings) - - -# Save vectorstore -with open("vectorstore.pkl", "wb") as f: - pickle.dump(vectorstore, f) diff --git a/spaces/darragh/swinunetr-dicom-video/app.py b/spaces/darragh/swinunetr-dicom-video/app.py deleted file mode 100644 index 446bf1c540c138a9d11a477e1f7d735abc3a3f53..0000000000000000000000000000000000000000 --- a/spaces/darragh/swinunetr-dicom-video/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import sys -import os -import glob -import shutil -import torch -import argparse -import mediapy -import cv2 -import numpy as np -import gradio as gr -from skimage import color, img_as_ubyte -from monai import transforms, data - -os.system("git clone https://github.com/darraghdog/Project-MONAI-research-contributions pmrc") -sys.path.append("pmrc/SwinUNETR/BTCV") -from swinunetr import SwinUnetrModelForInference, SwinUnetrConfig - - -ffmpeg_path = shutil.which('ffmpeg') -mediapy.set_ffmpeg(ffmpeg_path) - -# Load model -model = SwinUnetrModelForInference.from_pretrained('darragh/swinunetr-btcv-tiny') -model.eval() - -# Pull files from github -input_files = glob.glob('pmrc/SwinUNETR/BTCV/dataset/imagesSampleTs/*.nii.gz') -input_files = dict((f.split('/')[-1], f) for f in input_files) - -# Load and process dicom with monai transforms -test_transform = transforms.Compose( - [ - transforms.LoadImaged(keys=["image"]), - transforms.AddChanneld(keys=["image"]), - transforms.Spacingd(keys="image", - pixdim=(1.5, 1.5, 2.0), - mode="bilinear"), - transforms.ScaleIntensityRanged(keys=["image"], - a_min=-175.0, - a_max=250.0, - b_min=0.0, - b_max=1.0, - clip=True), - # transforms.Resized(keys=["image"], spatial_size = (256,256,-1)), - transforms.ToTensord(keys=["image"]), - ]) - -# Create Data Loader -def create_dl(test_files): - ds = test_transform(test_files) - loader = data.DataLoader(ds, - batch_size=1, - shuffle=False) - return loader - -# Inference and video generation -def generate_dicom_video(selected_file, n_frames): - - # 
Data processor - test_file = input_files[selected_file] - test_files = [{'image': test_file}] - dl = create_dl(test_files) - batch = next(iter(dl)) - - # Select dicom slices - tst_inputs = batch["image"] - tst_inputs = tst_inputs[:,:,:,:,-n_frames:] - - # Inference - with torch.no_grad(): - outputs = model(tst_inputs, - (96,96,96), - 8, - overlap=0.5, - mode="gaussian") - tst_outputs = torch.softmax(outputs.logits, 1) - tst_outputs = torch.argmax(tst_outputs, axis=1) - - # Write frames to video - for inp, outp in zip(tst_inputs, tst_outputs): - frames = [] - for idx in range(inp.shape[-1]): - # Segmentation - seg = outp[:,:,idx].numpy().astype(np.uint8) - # Input dicom frame - img = (inp[0,:,:,idx]*255).numpy().astype(np.uint8) - img = cv2.cvtColor(img,cv2.COLOR_GRAY2RGB) - frame = color.label2rgb(seg,img, bg_label = 0) - frame = img_as_ubyte(frame) - frame = np.concatenate((img, frame), 1) - frames.append(frame) - mediapy.write_video("dicom.mp4", frames, fps=4) - - return 'dicom.mp4' - - -theme = 'dark-peach' -with gr.Blocks(theme=theme) as demo: - - gr.Markdown('''

SwinUnetr BTCV

- This is a Gradio Blocks app of the winning transformer in the Beyond the Cranial Vault (BTCV) Segmentation Challenge, SwinUnetr (tiny version). - ''') - selected_dicom_key = gr.inputs.Dropdown( - choices=sorted(input_files), - type="value", - label="Select a dicom file") - n_frames = gr.Slider(1, 100, value=32, label="Choose the number of dicom slices to process", step = 1) - button_gen_video = gr.Button("Generate Video") - output_interpolation = gr.Video(label="Generated Video") - button_gen_video.click(fn=generate_dicom_video, - inputs=[selected_dicom_key, n_frames], - outputs=output_interpolation) - -demo.launch(debug=True, enable_queue=True) - - diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/recordingPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/recordingPen.py deleted file mode 100644 index 6c3b6613211d76f0306876dceb6d3945920417f5..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/recordingPen.py +++ /dev/null @@ -1,179 +0,0 @@ -"""Pen recording operations that can be accessed or replayed.""" -from fontTools.pens.basePen import AbstractPen, DecomposingPen -from fontTools.pens.pointPen import AbstractPointPen - - -__all__ = [ - "replayRecording", - "RecordingPen", - "DecomposingRecordingPen", - "RecordingPointPen", -] - - -def replayRecording(recording, pen): - """Replay a recording, as produced by RecordingPen or DecomposingRecordingPen, - to a pen. - - Note that recording does not have to be produced by those pens. - It can be any iterable of tuples of method name and tuple-of-arguments. - Likewise, pen can be any objects receiving those method calls. - """ - for operator, operands in recording: - getattr(pen, operator)(*operands) - - -class RecordingPen(AbstractPen): - """Pen recording operations that can be accessed or replayed. - - The recording can be accessed as pen.value; or replayed using - pen.replay(otherPen). - - :Example: - - from fontTools.ttLib import TTFont - from fontTools.pens.recordingPen import RecordingPen - - glyph_name = 'dollar' - font_path = 'MyFont.otf' - - font = TTFont(font_path) - glyphset = font.getGlyphSet() - glyph = glyphset[glyph_name] - - pen = RecordingPen() - glyph.draw(pen) - print(pen.value) - """ - - def __init__(self): - self.value = [] - - def moveTo(self, p0): - self.value.append(("moveTo", (p0,))) - - def lineTo(self, p1): - self.value.append(("lineTo", (p1,))) - - def qCurveTo(self, *points): - self.value.append(("qCurveTo", points)) - - def curveTo(self, *points): - self.value.append(("curveTo", points)) - - def closePath(self): - self.value.append(("closePath", ())) - - def endPath(self): - self.value.append(("endPath", ())) - - def addComponent(self, glyphName, transformation): - self.value.append(("addComponent", (glyphName, transformation))) - - def addVarComponent(self, glyphName, transformation, location): - self.value.append(("addVarComponent", (glyphName, transformation, location))) - - def replay(self, pen): - replayRecording(self.value, pen) - - -class DecomposingRecordingPen(DecomposingPen, RecordingPen): - """Same as RecordingPen, except that it doesn't keep components - as references, but draws them decomposed as regular contours. - - The constructor takes a single 'glyphSet' positional argument, - a dictionary of glyph objects (i.e. with a 'draw' method) keyed - by thir name:: - - >>> class SimpleGlyph(object): - ... 
def draw(self, pen): - ... pen.moveTo((0, 0)) - ... pen.curveTo((1, 1), (2, 2), (3, 3)) - ... pen.closePath() - >>> class CompositeGlyph(object): - ... def draw(self, pen): - ... pen.addComponent('a', (1, 0, 0, 1, -1, 1)) - >>> glyphSet = {'a': SimpleGlyph(), 'b': CompositeGlyph()} - >>> for name, glyph in sorted(glyphSet.items()): - ... pen = DecomposingRecordingPen(glyphSet) - ... glyph.draw(pen) - ... print("{}: {}".format(name, pen.value)) - a: [('moveTo', ((0, 0),)), ('curveTo', ((1, 1), (2, 2), (3, 3))), ('closePath', ())] - b: [('moveTo', ((-1, 1),)), ('curveTo', ((0, 2), (1, 3), (2, 4))), ('closePath', ())] - """ - - # raises KeyError if base glyph is not found in glyphSet - skipMissingComponents = False - - -class RecordingPointPen(AbstractPointPen): - """PointPen recording operations that can be accessed or replayed. - - The recording can be accessed as pen.value; or replayed using - pointPen.replay(otherPointPen). - - :Example: - - from defcon import Font - from fontTools.pens.recordingPen import RecordingPointPen - - glyph_name = 'a' - font_path = 'MyFont.ufo' - - font = Font(font_path) - glyph = font[glyph_name] - - pen = RecordingPointPen() - glyph.drawPoints(pen) - print(pen.value) - - new_glyph = font.newGlyph('b') - pen.replay(new_glyph.getPointPen()) - """ - - def __init__(self): - self.value = [] - - def beginPath(self, identifier=None, **kwargs): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("beginPath", (), kwargs)) - - def endPath(self): - self.value.append(("endPath", (), {})) - - def addPoint( - self, pt, segmentType=None, smooth=False, name=None, identifier=None, **kwargs - ): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("addPoint", (pt, segmentType, smooth, name), kwargs)) - - def addComponent(self, baseGlyphName, transformation, identifier=None, **kwargs): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append(("addComponent", (baseGlyphName, transformation), kwargs)) - - def addVarComponent( - self, baseGlyphName, transformation, location, identifier=None, **kwargs - ): - if identifier is not None: - kwargs["identifier"] = identifier - self.value.append( - ("addVarComponent", (baseGlyphName, transformation, location), kwargs) - ) - - def replay(self, pointPen): - for operator, args, kwargs in self.value: - getattr(pointPen, operator)(*args, **kwargs) - - -if __name__ == "__main__": - pen = RecordingPen() - pen.moveTo((0, 0)) - pen.lineTo((0, 100)) - pen.curveTo((50, 75), (60, 50), (50, 25)) - pen.closePath() - from pprint import pprint - - pprint(pen.value) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_template.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_template.py deleted file mode 100644 index d997ec160a53cb64b92a1ab2511fde7eceeb42e6..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/backends/backend_template.py +++ /dev/null @@ -1,213 +0,0 @@ -""" -A fully functional, do-nothing backend intended as a template for backend -writers. It is fully functional in that you can select it as a backend e.g. -with :: - - import matplotlib - matplotlib.use("template") - -and your program will (should!) run without error, though no output is -produced. 
This provides a starting point for backend writers; you can -selectively implement drawing methods (`~.RendererTemplate.draw_path`, -`~.RendererTemplate.draw_image`, etc.) and slowly see your figure come to life -instead having to have a full-blown implementation before getting any results. - -Copy this file to a directory outside the Matplotlib source tree, somewhere -where Python can import it (by adding the directory to your ``sys.path`` or by -packaging it as a normal Python package); if the backend is importable as -``import my.backend`` you can then select it using :: - - import matplotlib - matplotlib.use("module://my.backend") - -If your backend implements support for saving figures (i.e. has a `print_xyz` -method), you can register it as the default handler for a given file type:: - - from matplotlib.backend_bases import register_backend - register_backend('xyz', 'my_backend', 'XYZ File Format') - ... - plt.savefig("figure.xyz") -""" - -from matplotlib import _api -from matplotlib._pylab_helpers import Gcf -from matplotlib.backend_bases import ( - FigureCanvasBase, FigureManagerBase, GraphicsContextBase, RendererBase) -from matplotlib.figure import Figure - - -class RendererTemplate(RendererBase): - """ - The renderer handles drawing/rendering operations. - - This is a minimal do-nothing class that can be used to get started when - writing a new backend. Refer to `.backend_bases.RendererBase` for - documentation of the methods. - """ - - def __init__(self, dpi): - super().__init__() - self.dpi = dpi - - def draw_path(self, gc, path, transform, rgbFace=None): - pass - - # draw_markers is optional, and we get more correct relative - # timings by leaving it out. backend implementers concerned with - # performance will probably want to implement it -# def draw_markers(self, gc, marker_path, marker_trans, path, trans, -# rgbFace=None): -# pass - - # draw_path_collection is optional, and we get more correct - # relative timings by leaving it out. backend implementers concerned with - # performance will probably want to implement it -# def draw_path_collection(self, gc, master_transform, paths, -# all_transforms, offsets, offset_trans, -# facecolors, edgecolors, linewidths, linestyles, -# antialiaseds): -# pass - - # draw_quad_mesh is optional, and we get more correct - # relative timings by leaving it out. backend implementers concerned with - # performance will probably want to implement it -# def draw_quad_mesh(self, gc, master_transform, meshWidth, meshHeight, -# coordinates, offsets, offsetTrans, facecolors, -# antialiased, edgecolors): -# pass - - def draw_image(self, gc, x, y, im): - pass - - def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None): - pass - - def flipy(self): - # docstring inherited - return True - - def get_canvas_width_height(self): - # docstring inherited - return 100, 100 - - def get_text_width_height_descent(self, s, prop, ismath): - return 1, 1, 1 - - def new_gc(self): - # docstring inherited - return GraphicsContextTemplate() - - def points_to_pixels(self, points): - # if backend doesn't have dpi, e.g., postscript or svg - return points - # elif backend assumes a value for pixels_per_inch - # return points/72.0 * self.dpi.get() * pixels_per_inch/72.0 - # else - # return points/72.0 * self.dpi.get() - - -class GraphicsContextTemplate(GraphicsContextBase): - """ - The graphics context provides the color, line styles, etc. 
See the cairo - and postscript backends for examples of mapping the graphics context - attributes (cap styles, join styles, line widths, colors) to a particular - backend. In cairo this is done by wrapping a cairo.Context object and - forwarding the appropriate calls to it using a dictionary mapping styles - to gdk constants. In Postscript, all the work is done by the renderer, - mapping line styles to postscript calls. - - If it's more appropriate to do the mapping at the renderer level (as in - the postscript backend), you don't need to override any of the GC methods. - If it's more appropriate to wrap an instance (as in the cairo backend) and - do the mapping here, you'll need to override several of the setter - methods. - - The base GraphicsContext stores colors as an RGB tuple on the unit - interval, e.g., (0.5, 0.0, 1.0). You may need to map this to colors - appropriate for your backend. - """ - - -######################################################################## -# -# The following functions and classes are for pyplot and implement -# window/figure managers, etc. -# -######################################################################## - - -class FigureManagerTemplate(FigureManagerBase): - """ - Helper class for pyplot mode, wraps everything up into a neat bundle. - - For non-interactive backends, the base class is sufficient. For - interactive backends, see the documentation of the `.FigureManagerBase` - class for the list of methods that can/should be overridden. - """ - - -class FigureCanvasTemplate(FigureCanvasBase): - """ - The canvas the figure renders into. Calls the draw and print fig - methods, creates the renderers, etc. - - Note: GUI templates will want to connect events for button presses, - mouse movements and key presses to functions that call the base - class methods button_press_event, button_release_event, - motion_notify_event, key_press_event, and key_release_event. See the - implementations of the interactive backends for examples. - - Attributes - ---------- - figure : `~matplotlib.figure.Figure` - A high-level Figure instance - """ - - # The instantiated manager class. For further customization, - # ``FigureManager.create_with_canvas`` can also be overridden; see the - # wx-based backends for an example. - manager_class = FigureManagerTemplate - - def draw(self): - """ - Draw the figure using the renderer. - - It is important that this method actually walk the artist tree - even if not output is produced because this will trigger - deferred work (like computing limits auto-limits and tick - values) that users may want access to before saving to disk. - """ - renderer = RendererTemplate(self.figure.dpi) - self.figure.draw(renderer) - - # You should provide a print_xxx function for every file format - # you can write. - - # If the file type is not in the base set of filetypes, - # you should add it to the class-scope filetypes dictionary as follows: - filetypes = {**FigureCanvasBase.filetypes, 'foo': 'My magic Foo format'} - - def print_foo(self, filename, **kwargs): - """ - Write out format foo. - - This method is normally called via `.Figure.savefig` and - `.FigureCanvasBase.print_figure`, which take care of setting the figure - facecolor, edgecolor, and dpi to the desired output values, and will - restore them to the original values. Therefore, `print_foo` does not - need to handle these settings. 
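-
-        A minimal usage sketch (assuming this backend is importable as ``my.backend``,
-        as in the module docstring above)::
-
-            import matplotlib
-            matplotlib.use("module://my.backend")
-            import matplotlib.pyplot as plt
-
-            plt.plot([0, 1], [0, 1])
-            plt.savefig("figure.foo")  # dispatched to this method via the 'foo' filetype registered above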
- """ - self.draw() - - def get_default_filetype(self): - return 'foo' - - -######################################################################## -# -# Now just provide the standard names that backend.__init__ is expecting -# -######################################################################## - -FigureCanvas = FigureCanvasTemplate -FigureManager = FigureManagerTemplate diff --git a/spaces/dcsjsuml/README/README.md b/spaces/dcsjsuml/README/README.md deleted file mode 100644 index 9501552de3cb8c1c4b8bf347dffd910982ec60f4..0000000000000000000000000000000000000000 --- a/spaces/dcsjsuml/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: ML@SJSU -emoji: 🦀 -colorFrom: gray -colorTo: orange -sdk: static -pinned: false ---- - -Let's taco-bout it! diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/vae.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/vae.py deleted file mode 100644 index b4484823ac3dacbabbe150fd3106215f773a12da..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/vae.py +++ /dev/null @@ -1,403 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Optional - -import numpy as np -import torch -import torch.nn as nn - -from ..utils import BaseOutput, randn_tensor -from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block - - -@dataclass -class DecoderOutput(BaseOutput): - """ - Output of decoding method. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Decoded output sample of the model. Output of the last layer of the model. 
- """ - - sample: torch.FloatTensor - - -class Encoder(nn.Module): - def __init__( - self, - in_channels=3, - out_channels=3, - down_block_types=("DownEncoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - double_z=True, - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = torch.nn.Conv2d( - in_channels, - block_out_channels[0], - kernel_size=3, - stride=1, - padding=1, - ) - - self.mid_block = None - self.down_blocks = nn.ModuleList([]) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=self.layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - add_downsample=not is_final_block, - resnet_eps=1e-6, - downsample_padding=0, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attn_num_head_channels=None, - temb_channels=None, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default", - attn_num_head_channels=None, - resnet_groups=norm_num_groups, - temb_channels=None, - ) - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[-1], num_groups=norm_num_groups, eps=1e-6) - self.conv_act = nn.SiLU() - - conv_out_channels = 2 * out_channels if double_z else out_channels - self.conv_out = nn.Conv2d(block_out_channels[-1], conv_out_channels, 3, padding=1) - - self.gradient_checkpointing = False - - def forward(self, x): - sample = x - sample = self.conv_in(sample) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - # down - for down_block in self.down_blocks: - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(down_block), sample) - - # middle - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(self.mid_block), sample) - - else: - # down - for down_block in self.down_blocks: - sample = down_block(sample) - - # middle - sample = self.mid_block(sample) - - # post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class Decoder(nn.Module): - def __init__( - self, - in_channels=3, - out_channels=3, - up_block_types=("UpDecoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = nn.Conv2d( - in_channels, - block_out_channels[-1], - kernel_size=3, - stride=1, - padding=1, - ) - - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default", - attn_num_head_channels=None, - resnet_groups=norm_num_groups, - temb_channels=None, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - 
is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - prev_output_channel=None, - add_upsample=not is_final_block, - resnet_eps=1e-6, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attn_num_head_channels=None, - temb_channels=None, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6) - self.conv_act = nn.SiLU() - self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1) - - self.gradient_checkpointing = False - - def forward(self, z): - sample = z - sample = self.conv_in(sample) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - # middle - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(self.mid_block), sample) - - # up - for up_block in self.up_blocks: - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(up_block), sample) - else: - # middle - sample = self.mid_block(sample) - - # up - for up_block in self.up_blocks: - sample = up_block(sample) - - # post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class VectorQuantizer(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly avoids costly matrix - multiplications and allows for post-hoc remapping of indices. - """ - - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__( - self, n_e, vq_embed_dim, beta, remap=None, unknown_index="random", sane_index_shape=False, legacy=True - ): - super().__init__() - self.n_e = n_e - self.vq_embed_dim = vq_embed_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding(self.n_e, self.vq_embed_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed + 1 - print( - f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices." 
- ) - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - match = (inds[:, :, None] == used[None, None, ...]).long() - new = match.argmax(-1) - unknown = match.sum(2) < 1 - if self.unknown_index == "random": - new[unknown] = torch.randint(0, self.re_embed, size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds >= self.used.shape[0]] = 0 # simply set to zero - back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds) - return back.reshape(ishape) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.vq_embed_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - min_encoding_indices = torch.argmin(torch.cdist(z_flattened, self.embedding.weight), dim=1) - - z_q = self.embedding(min_encoding_indices).view(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach() - z) ** 2) + torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape(z.shape[0], -1) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape(-1, 1) # flatten - - if self.sane_index_shape: - min_encoding_indices = min_encoding_indices.reshape(z_q.shape[0], z_q.shape[2], z_q.shape[3]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if self.remap is not None: - indices = indices.reshape(shape[0], -1) # add batch axis - indices = self.unmap_to_all(indices) - indices = indices.reshape(-1) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like( - self.mean, device=self.parameters.device, dtype=self.parameters.dtype - ) - - def sample(self, generator: Optional[torch.Generator] = None) -> torch.FloatTensor: - # make sure sample is on the same device as the parameters and has same dtype - sample = randn_tensor( - self.mean.shape, generator=generator, device=self.parameters.device, dtype=self.parameters.dtype - ) - x = self.mean + self.std * sample - 
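-        # reparameterization trick: mean + std * eps keeps the drawn sample differentiable w.r.t. mean and std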
return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.0]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar, dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - - 1.0 - - self.logvar - + other.logvar, - dim=[1, 2, 3], - ) - - def nll(self, sample, dims=[1, 2, 3]): - if self.deterministic: - return torch.Tensor([0.0]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum(logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, dim=dims) - - def mode(self): - return self.mean diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_deis_multistep.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_deis_multistep.py deleted file mode 100644 index 39f8f17df5d30f80d13570177a08c181f92201f6..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_deis_multistep.py +++ /dev/null @@ -1,485 +0,0 @@ -# Copyright 2023 FLAIR Lab and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: check https://arxiv.org/abs/2204.13902 and https://github.com/qsh-zh/deis for more info -# The codebase is modified based on https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py - -import math -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -class DEISMultistepScheduler(SchedulerMixin, ConfigMixin): - """ - DEIS (https://arxiv.org/abs/2204.13902) is a fast high order solver for diffusion ODEs. 
We slightly modify the - polynomial fitting formula in log-rho space instead of the original linear t space in DEIS paper. The modification - enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. More - variants of DEIS can be found in https://github.com/qsh-zh/deis. - - Currently, we support the log-rho multistep DEIS. We recommend to use `solver_order=2 / 3` while `solver_order=1` - reduces to DDIM. - - We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space - diffusion models, you can set `thresholding=True` to use the dynamic thresholding. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - solver_order (`int`, default `2`): - the order of DEIS; can be `1` or `2` or `3`. We recommend to use `solver_order=2` for guided sampling, and - `solver_order=3` for unconditional sampling. - prediction_type (`str`, default `epsilon`): - indicates whether the model predicts the noise (epsilon), or the data / `x0`. One of `epsilon`, `sample`, - or `v-prediction`. - thresholding (`bool`, default `False`): - whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). - Note that the thresholding method is unsuitable for latent-space diffusion models (such as - stable-diffusion). - dynamic_thresholding_ratio (`float`, default `0.995`): - the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen - (https://arxiv.org/abs/2205.11487). - sample_max_value (`float`, default `1.0`): - the threshold value for dynamic thresholding. Valid only when `thresholding=True` - algorithm_type (`str`, default `deis`): - the algorithm type for the solver. current we support multistep deis, we will add other variants of DEIS in - the future - lower_order_final (`bool`, default `True`): - whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically - find this trick can stabilize the sampling of DEIS for steps < 15, especially for steps <= 10. 
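-
-    Example (illustrative sketch only; assumes network access to the model id below and that it loads as a
-    text-to-image pipeline)::
-
-        from diffusers import DiffusionPipeline, DEISMultistepScheduler
-
-        pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-        pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
-        image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]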
- - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[np.ndarray] = None, - solver_order: int = 2, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - sample_max_value: float = 1.0, - algorithm_type: str = "deis", - solver_type: str = "logrho", - lower_order_final: bool = True, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - # Currently we only support VP-type noise schedule - self.alpha_t = torch.sqrt(self.alphas_cumprod) - self.sigma_t = torch.sqrt(1 - self.alphas_cumprod) - self.lambda_t = torch.log(self.alpha_t) - torch.log(self.sigma_t) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # settings for DEIS - if algorithm_type not in ["deis"]: - if algorithm_type in ["dpmsolver", "dpmsolver++"]: - self.register_to_config(algorithm_type="deis") - else: - raise NotImplementedError(f"{algorithm_type} does is not implemented for {self.__class__}") - - if solver_type not in ["logrho"]: - if solver_type in ["midpoint", "heun", "bh1", "bh2"]: - self.register_to_config(solver_type="logrho") - else: - raise NotImplementedError(f"solver type {solver_type} does is not implemented for {self.__class__}") - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=np.float32)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps) - self.model_outputs = [None] * solver_order - self.lower_order_nums = 0 - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. 
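To make the timestep spacing concrete, here is a small standalone sketch of the same linspace/round/reverse construction that `set_timesteps` performs below; 1000 training steps and 10 inference steps are just example values:

```python
import numpy as np

num_train_timesteps, num_inference_steps = 1000, 10
timesteps = (
    np.linspace(0, num_train_timesteps - 1, num_inference_steps + 1)
    .round()[::-1][:-1]
    .astype(np.int64)
)
print(timesteps)  # [999 899 799 699 599 500 400 300 200 100]
```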
- """ - self.num_inference_steps = num_inference_steps - timesteps = ( - np.linspace(0, self.num_train_timesteps - 1, num_inference_steps + 1) - .round()[::-1][:-1] - .copy() - .astype(np.int64) - ) - self.timesteps = torch.from_numpy(timesteps).to(device) - self.model_outputs = [ - None, - ] * self.config.solver_order - self.lower_order_nums = 0 - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample - def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor: - # Dynamic thresholding in https://arxiv.org/abs/2205.11487 - dynamic_max_val = ( - sample.flatten(1) - .abs() - .quantile(self.config.dynamic_thresholding_ratio, dim=1) - .clamp_min(self.config.sample_max_value) - .view(-1, *([1] * (sample.ndim - 1))) - ) - return sample.clamp(-dynamic_max_val, dynamic_max_val) / dynamic_max_val - - def convert_model_output( - self, model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor - ) -> torch.FloatTensor: - """ - Convert the model output to the corresponding type that the algorithm DEIS needs. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the converted model output. - """ - if self.config.prediction_type == "epsilon": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = (sample - sigma_t * model_output) / alpha_t - elif self.config.prediction_type == "sample": - x0_pred = model_output - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = alpha_t * sample - sigma_t * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction` for the DEISMultistepScheduler." - ) - - if self.config.thresholding: - # Dynamic thresholding in https://arxiv.org/abs/2205.11487 - orig_dtype = x0_pred.dtype - if orig_dtype not in [torch.float, torch.double]: - x0_pred = x0_pred.float() - x0_pred = self._threshold_sample(x0_pred).type(orig_dtype) - - if self.config.algorithm_type == "deis": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - return (sample - alpha_t * x0_pred) / sigma_t - else: - raise NotImplementedError("only support log-rho multistep deis now") - - def deis_first_order_update( - self, - model_output: torch.FloatTensor, - timestep: int, - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the first-order DEIS (equivalent to DDIM). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. 
- """ - lambda_t, lambda_s = self.lambda_t[prev_timestep], self.lambda_t[timestep] - alpha_t, alpha_s = self.alpha_t[prev_timestep], self.alpha_t[timestep] - sigma_t, _ = self.sigma_t[prev_timestep], self.sigma_t[timestep] - h = lambda_t - lambda_s - if self.config.algorithm_type == "deis": - x_t = (alpha_t / alpha_s) * sample - (sigma_t * (torch.exp(h) - 1.0)) * model_output - else: - raise NotImplementedError("only support log-rho multistep deis now") - return x_t - - def multistep_deis_second_order_update( - self, - model_output_list: List[torch.FloatTensor], - timestep_list: List[int], - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the second-order multistep DEIS. - - Args: - model_output_list (`List[torch.FloatTensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. - """ - t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2] - m0, m1 = model_output_list[-1], model_output_list[-2] - alpha_t, alpha_s0, alpha_s1 = self.alpha_t[t], self.alpha_t[s0], self.alpha_t[s1] - sigma_t, sigma_s0, sigma_s1 = self.sigma_t[t], self.sigma_t[s0], self.sigma_t[s1] - - rho_t, rho_s0, rho_s1 = sigma_t / alpha_t, sigma_s0 / alpha_s0, sigma_s1 / alpha_s1 - - if self.config.algorithm_type == "deis": - - def ind_fn(t, b, c): - # Integrate[(log(t) - log(c)) / (log(b) - log(c)), {t}] - return t * (-np.log(c) + np.log(t) - 1) / (np.log(b) - np.log(c)) - - coef1 = ind_fn(rho_t, rho_s0, rho_s1) - ind_fn(rho_s0, rho_s0, rho_s1) - coef2 = ind_fn(rho_t, rho_s1, rho_s0) - ind_fn(rho_s0, rho_s1, rho_s0) - - x_t = alpha_t * (sample / alpha_s0 + coef1 * m0 + coef2 * m1) - return x_t - else: - raise NotImplementedError("only support log-rho multistep deis now") - - def multistep_deis_third_order_update( - self, - model_output_list: List[torch.FloatTensor], - timestep_list: List[int], - prev_timestep: int, - sample: torch.FloatTensor, - ) -> torch.FloatTensor: - """ - One step for the third-order multistep DEIS. - - Args: - model_output_list (`List[torch.FloatTensor]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - - Returns: - `torch.FloatTensor`: the sample tensor at the previous timestep. 
- """ - t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3] - m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3] - alpha_t, alpha_s0, alpha_s1, alpha_s2 = self.alpha_t[t], self.alpha_t[s0], self.alpha_t[s1], self.alpha_t[s2] - sigma_t, sigma_s0, sigma_s1, simga_s2 = self.sigma_t[t], self.sigma_t[s0], self.sigma_t[s1], self.sigma_t[s2] - rho_t, rho_s0, rho_s1, rho_s2 = ( - sigma_t / alpha_t, - sigma_s0 / alpha_s0, - sigma_s1 / alpha_s1, - simga_s2 / alpha_s2, - ) - - if self.config.algorithm_type == "deis": - - def ind_fn(t, b, c, d): - # Integrate[(log(t) - log(c))(log(t) - log(d)) / (log(b) - log(c))(log(b) - log(d)), {t}] - numerator = t * ( - np.log(c) * (np.log(d) - np.log(t) + 1) - - np.log(d) * np.log(t) - + np.log(d) - + np.log(t) ** 2 - - 2 * np.log(t) - + 2 - ) - denominator = (np.log(b) - np.log(c)) * (np.log(b) - np.log(d)) - return numerator / denominator - - coef1 = ind_fn(rho_t, rho_s0, rho_s1, rho_s2) - ind_fn(rho_s0, rho_s0, rho_s1, rho_s2) - coef2 = ind_fn(rho_t, rho_s1, rho_s2, rho_s0) - ind_fn(rho_s0, rho_s1, rho_s2, rho_s0) - coef3 = ind_fn(rho_t, rho_s2, rho_s0, rho_s1) - ind_fn(rho_s0, rho_s2, rho_s0, rho_s1) - - x_t = alpha_t * (sample / alpha_s0 + coef1 * m0 + coef2 * m1 + coef3 * m2) - - return x_t - else: - raise NotImplementedError("only support log-rho multistep deis now") - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Step function propagating the sample with the multistep DEIS. - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than SchedulerOutput class - - Returns: - [`~scheduling_utils.SchedulerOutput`] or `tuple`: [`~scheduling_utils.SchedulerOutput`] if `return_dict` is - True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. 
- - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_index = (self.timesteps == timestep).nonzero() - if len(step_index) == 0: - step_index = len(self.timesteps) - 1 - else: - step_index = step_index.item() - prev_timestep = 0 if step_index == len(self.timesteps) - 1 else self.timesteps[step_index + 1] - lower_order_final = ( - (step_index == len(self.timesteps) - 1) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - lower_order_second = ( - (step_index == len(self.timesteps) - 2) and self.config.lower_order_final and len(self.timesteps) < 15 - ) - - model_output = self.convert_model_output(model_output, timestep, sample) - for i in range(self.config.solver_order - 1): - self.model_outputs[i] = self.model_outputs[i + 1] - self.model_outputs[-1] = model_output - - if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final: - prev_sample = self.deis_first_order_update(model_output, timestep, prev_timestep, sample) - elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second: - timestep_list = [self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_deis_second_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - else: - timestep_list = [self.timesteps[step_index - 2], self.timesteps[step_index - 1], timestep] - prev_sample = self.multistep_deis_third_order_update( - self.model_outputs, timestep_list, prev_timestep, sample - ) - - if self.lower_order_nums < self.config.solver_order: - self.lower_order_nums += 1 - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def scale_model_input(self, sample: torch.FloatTensor, *args, **kwargs) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. 
- - Args: - sample (`torch.FloatTensor`): input sample - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/distributed.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/distributed.py deleted file mode 100644 index 2fa61f76c5cc3ab9f6a9643042afa8e1f2e1cb7f..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/distributed.py +++ /dev/null @@ -1,150 +0,0 @@ -import os - -import torch -import socket - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - - -def is_global_master(args): - return args.rank == 0 - - -def is_local_master(args): - return args.local_rank == 0 - - -def is_master(args, local=False): - return is_local_master(args) if local else is_global_master(args) - - -def is_using_horovod(): - # NOTE w/ horovod run, OMPI vars should be set, but w/ SLURM PMI vars will be set - # Differentiating between horovod and DDP use via SLURM may not be possible, so horovod arg still required... - ompi_vars = ["OMPI_COMM_WORLD_RANK", "OMPI_COMM_WORLD_SIZE"] - pmi_vars = ["PMI_RANK", "PMI_SIZE"] - if all([var in os.environ for var in ompi_vars]) or all( - [var in os.environ for var in pmi_vars] - ): - return True - else: - return False - - -def is_using_distributed(): - if "WORLD_SIZE" in os.environ: - return int(os.environ["WORLD_SIZE"]) > 1 - if "SLURM_NTASKS" in os.environ: - return int(os.environ["SLURM_NTASKS"]) > 1 - return False - - -def world_info_from_env(): - local_rank = 0 - for v in ( - "SLURM_LOCALID", - "MPI_LOCALRANKID", - "OMPI_COMM_WORLD_LOCAL_RANK", - "LOCAL_RANK", - ): - if v in os.environ: - local_rank = int(os.environ[v]) - break - global_rank = 0 - for v in ("SLURM_PROCID", "PMI_RANK", "OMPI_COMM_WORLD_RANK", "RANK"): - if v in os.environ: - global_rank = int(os.environ[v]) - break - world_size = 1 - for v in ("SLURM_NTASKS", "PMI_SIZE", "OMPI_COMM_WORLD_SIZE", "WORLD_SIZE"): - if v in os.environ: - world_size = int(os.environ[v]) - break - - return local_rank, global_rank, world_size - - -def init_distributed_device(args): - # Distributed training = training on more than one GPU. - # Works in both single and multi-node scenarios. 
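As context for the environment-variable probing in `world_info_from_env` above, a standalone sketch of what a typical `torchrun` launch exposes to each worker (values shown are illustrative):

```python
import os

# torchrun (and torch.distributed.launch) set these for every worker process:
#   LOCAL_RANK  - rank of the process on its own node
#   RANK        - global rank across all nodes
#   WORLD_SIZE  - total number of worker processes
local_rank = int(os.environ.get("LOCAL_RANK", 0))
global_rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))
print(f"local_rank={local_rank} rank={global_rank} world_size={world_size}")
```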
- args.distributed = False - args.world_size = 1 - args.rank = 0 # global rank - args.local_rank = 0 - if args.horovod: - assert hvd is not None, "Horovod is not installed" - hvd.init() - world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"]) - world_rank = int(os.environ["OMPI_COMM_WORLD_RANK"]) - local_rank = int(os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]) - args.local_rank = local_rank - args.rank = world_rank - args.world_size = world_size - # args.local_rank = int(hvd.local_rank()) - # args.rank = hvd.rank() - # args.world_size = hvd.size() - args.distributed = True - os.environ["LOCAL_RANK"] = str(args.local_rank) - os.environ["RANK"] = str(args.rank) - os.environ["WORLD_SIZE"] = str(args.world_size) - print( - f"Distributed training: local_rank={args.local_rank}, " - f"rank={args.rank}, world_size={args.world_size}, " - f"hostname={socket.gethostname()}, pid={os.getpid()}" - ) - elif is_using_distributed(): - if "SLURM_PROCID" in os.environ: - # DDP via SLURM - args.local_rank, args.rank, args.world_size = world_info_from_env() - # SLURM var -> torch.distributed vars in case needed - os.environ["LOCAL_RANK"] = str(args.local_rank) - os.environ["RANK"] = str(args.rank) - os.environ["WORLD_SIZE"] = str(args.world_size) - torch.distributed.init_process_group( - backend=args.dist_backend, - init_method=args.dist_url, - world_size=args.world_size, - rank=args.rank, - ) - elif "OMPI_COMM_WORLD_SIZE" in os.environ: # using Summit cluster - world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"]) - world_rank = int(os.environ["OMPI_COMM_WORLD_RANK"]) - local_rank = int(os.environ["OMPI_COMM_WORLD_LOCAL_RANK"]) - args.local_rank = local_rank - args.rank = world_rank - args.world_size = world_size - torch.distributed.init_process_group( - backend=args.dist_backend, - init_method=args.dist_url, - world_size=args.world_size, - rank=args.rank, - ) - else: - # DDP via torchrun, torch.distributed.launch - args.local_rank, _, _ = world_info_from_env() - torch.distributed.init_process_group( - backend=args.dist_backend, init_method=args.dist_url - ) - args.world_size = torch.distributed.get_world_size() - args.rank = torch.distributed.get_rank() - args.distributed = True - print( - f"Distributed training: local_rank={args.local_rank}, " - f"rank={args.rank}, world_size={args.world_size}, " - f"hostname={socket.gethostname()}, pid={os.getpid()}" - ) - - if torch.cuda.is_available(): - if args.distributed and not args.no_set_device_rank: - device = "cuda:%d" % args.local_rank - else: - device = "cuda:0" - torch.cuda.set_device(device) - else: - device = "cpu" - args.device = device - device = torch.device(device) - return device diff --git a/spaces/diacanFperku/AutoGPT/Airbus A330 VACBI CBT 34.md b/spaces/diacanFperku/AutoGPT/Airbus A330 VACBI CBT 34.md deleted file mode 100644 index 1937816c8ecf62c69ce5838521c0c5c1c6b5acf6..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Airbus A330 VACBI CBT 34.md +++ /dev/null @@ -1,9 +0,0 @@ - -

aeroflot a330 vacbi: "the aeroflot cmi is a professional, easy to use system for training and issuing maintenance and repair instructions. it is adaptable to different training needs and environments. a320 training should be performed on the widest possible range of aircraft, whether they are flying, static or parked."

-

bhs a330 vacbi: "this course is a new development for the boe a330 using the new boe cmi for the first time. the aim was to provide a highly flexible cmi with a training program covering a broad range of topics. in addition, other topics which are of interest to this specific training group were added."

-

Airbus A330 VACBI CBT 34


DOWNLOAD: https://gohhs.com/2uFVo1



-

airbus a330 vacbi: "this course is a new development for the boe a330 using the new boe cmi for the first time. the aim was to provide a highly flexible cmi with a training program covering a broad range of topics. in addition, other topics which are of interest to this specific training group were added."

-

dhl airlines a330 vacbi: "the major advantage of the dhl a330 cmi for training is its ease of operation, the above average accuracy and the technological quality of the contents. with its three training modes and the outstanding application of graphics and colours, it is a very convenient training aid for pilots. one of the greatest advantages of the system is that it fits seamlessly into the needs of airlines and ground-based training companies."

-

joon air a330 vacbi: "joon air has decided to replace its previous training tool with the aeroflot cmi. the main reason was that the previous training tool was very labour intensive and difficult to use. during training sessions joon air pilots learn how the new inflight entertainment system works and how to use the functions of the cmi, as they are also required to be familiar with the new aircraft and how to access the cmi."

-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Descargar Interi Cad Full Version.md b/spaces/diacanFperku/AutoGPT/Descargar Interi Cad Full Version.md deleted file mode 100644 index f68f5ecaea2beb5d9d8f733e96eba4a82b46c44a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Descargar Interi Cad Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

Descargar Interi Cad Full Version


Download File ===> https://gohhs.com/2uFUM2



- -K-Lite Codec Pack— is a collection of DirectShow filters, VFW/ACM codecs and tools. DirectShow codecs and filters are essential for audio encoding and decoding. It contains filters and codecs for popular Windows operating systems, including Windows Vista. For Windows XP and Windows Server 2003-Windows Vista, the package was updated to version 4.8.5 with the following components: ■ ffdshow (filter for audio and video streaming and their
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Download [BETTER] Rumble Racing Pc Full Version.md b/spaces/diacanFperku/AutoGPT/Download [BETTER] Rumble Racing Pc Full Version.md deleted file mode 100644 index 817f6392e8a3088d925c7510308f05746f05705a..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download [BETTER] Rumble Racing Pc Full Version.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

rumble racing's problem is that it doesn't do anything unique. it's the same old formula of arcade racing games, only with less moving parts. yes, you can customize your car and race it off the track, but that's really about it. the controls are also a bit unwieldy; it's hard to steer and accelerate without feeling like you're being tossed around like a rag doll. rumble racing features the typical gimmick of collecting power-ups. with the power-ups, you can either increase your speed or perform a special stunt. all in all, rumble racing is a simple, straightforward racing game. it's a fun, if not a long, game, but it's not much of a game if it lacks personality. if you are looking for a game that will keep you entertained for a few hours, rumble racing may just fit the bill.

-

download rumble racing pc full version


Download File ->>> https://gohhs.com/2uFTM6



-

if you're looking for a challenge, rumble racing will give you what you want. there's nothing wrong with this, as arcade racing games are designed to be fun and simple to play. the problem here is that the game doesn't come close to the top in the genre. the speed isn't very high, so most players will breeze through the game in no time. the stunts are a joke, and the same old formula over and over again makes it very boring. the only other fault that i have with this game is the lack of character. the characters are bland and are not given much to do. the racing part is fun and can be challenging if you want, but after all the game is rated teen. rumble racing is a quick game to play, but it's too short to be considered a full-fledged racing game.

-

the game is easy to play, but there are no surprises. the levels tend to be a little easier than the next racer on the track. the only redeeming factor is the sound effects and graphics. they're well done and the game is graphically advanced. although this may be enough for children, most adult racers want more from a racing game.

-
-
\ No newline at end of file diff --git a/spaces/diffusers/stable-diffusion-xl-inpainting/share_btn.py b/spaces/diffusers/stable-diffusion-xl-inpainting/share_btn.py deleted file mode 100644 index b9fb838408c2446d15e95c8abbf0d46a79da1601..0000000000000000000000000000000000000000 --- a/spaces/diffusers/stable-diffusion-xl-inpainting/share_btn.py +++ /dev/null @@ -1,94 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgCanvas){ - const blob = await new Promise(resolve => imgCanvas.toBlob(resolve)); - const imgId = Date.now() % 200; - const fileName = `sd-inpainting-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - } - - async function getOutoutImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `sd-inpainting-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - } - - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgCanvas = gradioEl.querySelector('canvas[key="drawing"]'); - const outputImgEl = gradioEl.querySelector('#output-img img'); - const promptTxt = gradioEl.querySelector('#prompt textarea').value; - let titleTxt = promptTxt; - if(titleTxt.length > 100){ - titleTxt = titleTxt.slice(0, 100) + ' ...'; - } - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!outputImgEl){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputImgFile = await getInputImgFile(inputImgCanvas); - const outputImgFile = await getOutoutImgFile(outputImgEl); - const files = [inputImgFile, outputImgFile]; - - const urls = await Promise.all(files.map((f) => uploadFile(f))); - - const htmlImgs = urls.map(url => ``); - const [inputImgUrl, outputImgUrl] = htmlImgs; - - const descriptionMd = `
-
-${inputImgUrl} - -${promptTxt} -
-
-${outputImgUrl} -
-
`; - - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - - const paramsStr = params.toString(); - - window.open(`https://huggingface.co/spaces/diffusers/stable-diffusion-xl-inpainting/discussions/new?${paramsStr}&preview=true`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/text/japanese.py b/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Dongmuchang-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git 
a/spaces/dilums/sentence-similarity/components/ui/label.tsx b/spaces/dilums/sentence-similarity/components/ui/label.tsx deleted file mode 100644 index 534182176bf87f9308355514adc884d2b69750a5..0000000000000000000000000000000000000000 --- a/spaces/dilums/sentence-similarity/components/ui/label.tsx +++ /dev/null @@ -1,26 +0,0 @@ -"use client" - -import * as React from "react" -import * as LabelPrimitive from "@radix-ui/react-label" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const labelVariants = cva( - "text-sm font-medium leading-none peer-disabled:cursor-not-allowed peer-disabled:opacity-70" -) - -const Label = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & - VariantProps ->(({ className, ...props }, ref) => ( - -)) -Label.displayName = LabelPrimitive.Root.displayName - -export { Label } diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/sparse_rcnn.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/sparse_rcnn.py deleted file mode 100644 index 0dbd0250f189e610a0bbc72b0dab2559e26857ae..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/detectors/sparse_rcnn.py +++ /dev/null @@ -1,110 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class SparseRCNN(TwoStageDetector): - r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_""" - - def __init__(self, *args, **kwargs): - super(SparseRCNN, self).__init__(*args, **kwargs) - assert self.with_rpn, 'Sparse R-CNN do not support external proposals' - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """Forward function of SparseR-CNN in train stage. - - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (List[Tensor], optional) : Segmentation masks for - each box. But we don't support it in this architecture. - proposals (List[Tensor], optional): override rpn proposals with - custom proposals. Use when `with_rpn` is False. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - assert proposals is None, 'Sparse R-CNN does not support' \ - ' external proposals' - assert gt_masks is None, 'Sparse R-CNN does not instance segmentation' - - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.forward_train(x, img_metas) - roi_losses = self.roi_head.forward_train( - x, - proposal_boxes, - proposal_features, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=gt_bboxes_ignore, - gt_masks=gt_masks, - imgs_whwh=imgs_whwh) - return roi_losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. 
- - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool): Whether to rescale the results. - Defaults to False. - - Returns: - list[list[np.ndarray]]: BBox results of each image and classes. - The outer list corresponds to each image. The inner list - corresponds to each class. - """ - x = self.extract_feat(img) - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, img_metas) - bbox_results = self.roi_head.simple_test( - x, - proposal_boxes, - proposal_features, - img_metas, - imgs_whwh=imgs_whwh, - rescale=rescale) - return bbox_results - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - # backbone - x = self.extract_feat(img) - # rpn - num_imgs = len(img) - dummy_img_metas = [ - dict(img_shape=(800, 1333, 3)) for _ in range(num_imgs) - ] - proposal_boxes, proposal_features, imgs_whwh = \ - self.rpn_head.simple_test_rpn(x, dummy_img_metas) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposal_boxes, - proposal_features, - dummy_img_metas) - return roi_outs diff --git a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py b/spaces/dineshreddy/WALT/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py deleted file mode 100644 index 35758f4f4e3b2bddd460edb8a7f482b3a9da2919..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/roi_heads/bbox_heads/scnet_bbox_head.py +++ /dev/null @@ -1,76 +0,0 @@ -from mmdet.models.builder import HEADS -from .convfc_bbox_head import ConvFCBBoxHead - - -@HEADS.register_module() -class SCNetBBoxHead(ConvFCBBoxHead): - """BBox head for `SCNet `_. - - This inherits ``ConvFCBBoxHead`` with modified forward() function, allow us - to get intermediate shared feature. - """ - - def _forward_shared(self, x): - """Forward function for shared part.""" - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - - return x - - def _forward_cls_reg(self, x): - """Forward function for classification and regression parts.""" - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - - return cls_score, bbox_pred - - def forward(self, x, return_shared_feat=False): - """Forward function. - - Args: - x (Tensor): input features - return_shared_feat (bool): If True, return cls-reg-shared feature. - - Return: - out (tuple[Tensor]): contain ``cls_score`` and ``bbox_pred``, - if ``return_shared_feat`` is True, append ``x_shared`` to the - returned tuple. 
- """ - x_shared = self._forward_shared(x) - out = self._forward_cls_reg(x_shared) - - if return_shared_feat: - out += (x_shared, ) - - return out diff --git a/spaces/dorkai/text-generation-webui-main/docs/llama.cpp-models.md b/spaces/dorkai/text-generation-webui-main/docs/llama.cpp-models.md deleted file mode 100644 index 153f70affedf55df3b58af5c46eb0396b5ecf010..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/docs/llama.cpp-models.md +++ /dev/null @@ -1,43 +0,0 @@ -# Using llama.cpp in the web UI - -## Setting up the models - -#### Pre-converted - -Place the model in the `models` folder, making sure that its name contains `ggml` somewhere and ends in `.bin`. - -#### Convert LLaMA yourself - -Follow the instructions in the llama.cpp README to generate the `ggml-model.bin` file: https://github.com/ggerganov/llama.cpp#usage - -## GPU offloading - -Enabled with the `--n-gpu-layers` parameter. If you have enough VRAM, use a high number like `--n-gpu-layers 200000` to offload all layers to the GPU. - -Note that you need to manually install `llama-cpp-python` with GPU support. To do that: - -#### Linux - -``` -pip uninstall -y llama-cpp-python -CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir -``` - -#### Windows - -``` -pip uninstall -y llama-cpp-python -set CMAKE_ARGS="-DLLAMA_CUBLAS=on" -set FORCE_CMAKE=1 -pip install llama-cpp-python --no-cache-dir -``` - -Here you can find the different compilation options for OpenBLAS / cuBLAS / CLBlast: https://pypi.org/project/llama-cpp-python/ - -## Performance - -This was the performance of llama-7b int4 on my i5-12400F (cpu only): - -> Output generated in 33.07 seconds (6.05 tokens/s, 200 tokens, context 17) - -You can change the number of threads with `--threads N`. diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/__init__.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/__init__.py deleted file mode 100644 index 2af819d61d589cfec2e0ca46612a7456f42b831a..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -from .groundingdino import build_groundingdino diff --git a/spaces/dvitel/codebleu/codebleu.py b/spaces/dvitel/codebleu/codebleu.py deleted file mode 100644 index feb96465a27a6d45575587e0cedae2d9d145b3d8..0000000000000000000000000000000000000000 --- a/spaces/dvitel/codebleu/codebleu.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""CodeBLEU metric.""" - -import evaluate -import datasets - -#these were added to fix evaluate load of dependencies -from .bleu import corpus_bleu -from .utils import pad_sequence -from .weighted_ngram_match import ngrams -from .syntax_match import calc_syntax_match -from .parser_DFG import DFG_python -from .parser_utils import tree_to_token_index -from .dataflow_match import calc_dataflow_match - -from .my_codebleu import calc_codebleu - - -# TODO: Add BibTeX citation -_CITATION = """\ -@InProceedings{huggingface:module, -title = {CodeBLEU: A Metric for Evaluating Code Generation}, -authors={Sedykh, Ivan}, -year={2022} -} -""" - -# TODO: Add description of the module here -_DESCRIPTION = """\ -This new module is an adaptation of the original CodeBLEU metric from CodexGLUE benchmark -for evaluating code generation. -""" - - -# TODO: Add description of the arguments of the module here -_KWARGS_DESCRIPTION = """ -Calculates how good are predictions given some references, using certain scores -Args: - predictions: list of predictions to score. Each predictions - should be a string with tokens separated by spaces. - references: list of lists of references. Each list - should contain len(predictions) items. - lang: programming language in ['java','js','c_sharp','php','go','python','ruby'] - tokenizer: tokenizer function str -> List[str], Defaults to lambda s: s.split() - params: str, weights for averaging(see CodeBLEU paper). - Defaults to equal weights "0.25,0.25,0.25,0.25". 
-Returns: - CodeBLEU: resulting score, - ngram_match_score: See paper CodeBLEU, - weighted_ngram_match_score: See paper CodeBLEU, - syntax_match_score: See paper CodeBLEU, - dataflow_match_score: See paper CodeBLEU, -Examples: - - >>> codebleu = evaluate.load("my_new_module") - >>> results = my_new_module.compute(references=[0, 1], predictions=[0, 1]) - >>> print(results) - {'accuracy': 1.0} -""" - -# TODO: Define external resources urls if needed -# BAD_WORDS_URL = "http://url/to/external/resource/bad_words.txt" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class codebleu(evaluate.Metric): - """CodeBLEU metric from CodexGLUE""" - - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=[ - datasets.Features( - { - "predictions": datasets.Value("string", id="sequence"), - "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"), - } - ), - datasets.Features( - { - "predictions": datasets.Value("string", id="sequence"), - "references": datasets.Value("string", id="sequence"), - } - ), - ], - reference_urls=[ - "https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-to-code-trans/evaluator", - "https://arxiv.org/abs/2009.10297", - ], - ) - - def _download_and_prepare(self, dl_manager): - """Optional: download external resources useful to compute the scores""" - # TODO: Download external resources if needed - # source CodeBLEU/parser/build.sh - # print(dl_manager) - self.kw_dir = dl_manager.download_and_extract("https://huggingface.co/spaces/dvitel/codebleu/resolve/main/keywords.tar.gz") - print("Downloaded keywords to", self.kw_dir) - self.langso_dir = dl_manager.download("https://huggingface.co/spaces/dvitel/codebleu/resolve/main/my-languages.so") - print("Downloaded languages.so to", self.langso_dir) - - def _compute(self, predictions, references, lang = "python", tokenizer=None, params="0.25,0.25,0.25,0.25"): - """Returns the scores""" - res = calc_codebleu( - predictions=predictions, - references=references, - lang=lang, - tokenizer=tokenizer, - params=params, - kw_dir = self.kw_dir, - langso_dir = self.langso_dir - ) - return res diff --git a/spaces/elitecode/Custom_ChatBot/app.py b/spaces/elitecode/Custom_ChatBot/app.py deleted file mode 100644 index b6230aa37b1c386e7ab8f88b10b7c3f6a77ead82..0000000000000000000000000000000000000000 --- a/spaces/elitecode/Custom_ChatBot/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import openai -import gradio as gr - -openai.api_key = "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" - -messages=[{"role": "system", "content": "You are a Programming Expert"}] - -def Custom_GPT(msg): - messages.append({"role": "user", "content": msg}) - completion = openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=messages) - response = completion.choices[0].message.content - messages.append({"role": "system", "content": response}) - return response - -demo = gr.Interface(fn=Custom_GPT, inputs="text", outputs="text", title="Coder Chatbot") - -demo.launch(share=True) - - - - - - - - - - diff --git a/spaces/enzostvs/hub-api-playground/components/editor/main/parameter.tsx b/spaces/enzostvs/hub-api-playground/components/editor/main/parameter.tsx deleted file mode 100644 index 7c912c5b659c2cdb6b99a9bbde5d1f6f8ed05255..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/components/editor/main/parameter.tsx +++ /dev/null @@ -1,26 +0,0 @@ -import { useState } from 
"react"; -import { useUpdateEffect } from "react-use"; - -interface Props { - value: string; - onChange: (value: string, currentValue: string) => void; -} -export const Parameter: React.FC = ({ value, onChange }) => { - const [state, setState] = useState(value); - const [previousValue, setPreviousValue] = useState( - undefined - ); - - return ( - { - onChange(state, previousValue ?? `{${value}}`); - setPreviousValue(state as string); - }} - value={state} - onChange={(e) => setState(e.target.value)} - /> - ); -}; diff --git a/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp b/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). -// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. - -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. 
-enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. 
-template inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); } - -const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329; -static inline uint8 clamp(int i) { if (static_cast(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast(i); } - -static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--) - pDst[0] = static_cast((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--) - { - const int r = pSrc[0], g = pSrc[1], b = pSrc[2]; - pDst[0] = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16)); - pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16)); - } -} - -static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels) -{ - for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--) - pDst[0] = static_cast((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16); -} - -static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels) -{ - for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; } -} - -// Forward DCT - DCT derived from jfdctint. 
-#define CONST_BITS 13 -#define ROW_BITS 2 -#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n)) -#define DCT_MUL(var, c) (static_cast(var) * static_cast(c)) -#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \ - int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \ - int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \ - int32 u1 = DCT_MUL(t12 + t13, 4433); \ - s2 = u1 + DCT_MUL(t13, 6270); \ - s6 = u1 + DCT_MUL(t12, -15137); \ - u1 = t4 + t7; \ - int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \ - int32 z5 = DCT_MUL(u3 + u4, 9633); \ - t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \ - t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \ - u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \ - u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \ - u3 += z5; u4 += z5; \ - s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3; - -static void DCT2D(int32 *p) -{ - int32 c, *q = p; - for (c = 7; c >= 0; c--, q += 8) - { - int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS); - q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS); - } - for (q = p, c = 7; c >= 0; c--, q++) - { - int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3); - q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3); - } -} - -struct sym_freq { uint m_key, m_sym_index; }; - -// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values. -static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1) -{ - const uint cMaxPasses = 4; - uint32 hist[256 * cMaxPasses]; clear_obj(hist); - for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; } - sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; - uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; - for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) - { - const uint32* pHist = &hist[pass << 8]; - uint offsets[256], cur_ofs = 0; - for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } - for (uint i = 0; i < num_syms; i++) - pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; - sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; - } - return pCur_syms; -} - -// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
-static void calculate_minimum_redundancy(sym_freq *A, int n) -{ - int root, leaf, next, avbl, used, dpth; - if (n==0) return; else if (n==1) { A[0].m_key = 1; return; } - A[0].m_key += A[1].m_key; root = 0; leaf = 2; - for (next=1; next < n-1; next++) - { - if (leaf>=n || A[root].m_key=n || (root=0; next--) A[next].m_key = A[A[next].m_key].m_key+1; - avbl = 1; used = dpth = 0; root = n-2; next = n-1; - while (avbl>0) - { - while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; } - while (avbl>used) { A[next--].m_key = dpth; avbl--; } - avbl = 2*used; dpth++; used = 0; - } -} - -// Limits canonical Huffman code table's max code size to max_code_size. -static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size) -{ - if (code_list_len <= 1) return; - - for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i]; - - uint32 total = 0; - for (int i = max_code_size; i > 0; i--) - total += (((uint32)pNum_codes[i]) << (max_code_size - i)); - - while (total != (1UL << max_code_size)) - { - pNum_codes[max_code_size]--; - for (int i = max_code_size - 1; i > 0; i--) - { - if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; } - } - total--; - } -} - -// Generates an optimized offman table. -void jpeg_encoder::optimize_huffman_table(int table_num, int table_len) -{ - sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS]; - syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's - int num_used_syms = 1; - const uint32 *pSym_count = &m_huff_count[table_num][0]; - for (int i = 0; i < table_len; i++) - if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; } - sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1); - calculate_minimum_redundancy(pSyms, num_used_syms); - - // Count the # of symbols of each code size. - int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes); - for (int i = 0; i < num_used_syms; i++) - num_codes[pSyms[i].m_key]++; - - const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol) - huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT); - - // Compute m_huff_bits array, which contains the # of symbols per code size. - clear_obj(m_huff_bits[table_num]); - for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++) - m_huff_bits[table_num][i] = static_cast(num_codes[i]); - - // Remove the dummy symbol added above, which must be in largest bucket. - for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--) - { - if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; } - } - - // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest). - for (int i = num_used_syms - 1; i >= 1; i--) - m_huff_val[table_num][num_used_syms - 1 - i] = static_cast(pSyms[i].m_sym_index - 1); -} - -// JPEG marker generation. 
-void jpeg_encoder::emit_byte(uint8 i) -{ - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i); -} - -void jpeg_encoder::emit_word(uint i) -{ - emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF)); -} - -void jpeg_encoder::emit_marker(int marker) -{ - emit_byte(uint8(0xFF)); emit_byte(uint8(marker)); -} - -// Emit JFIF marker -void jpeg_encoder::emit_jfif_app0() -{ - emit_marker(M_APP0); - emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1); - emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */ - emit_byte(0); - emit_byte(1); /* Major version */ - emit_byte(1); /* Minor version */ - emit_byte(0); /* Density unit */ - emit_word(1); - emit_word(1); - emit_byte(0); /* No thumbnail image */ - emit_byte(0); -} - -// Emit quantization tables -void jpeg_encoder::emit_dqt() -{ - for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++) - { - emit_marker(M_DQT); - emit_word(64 + 1 + 2); - emit_byte(static_cast(i)); - for (int j = 0; j < 64; j++) - emit_byte(static_cast(m_quantization_tables[i][j])); - } -} - -// Emit start of frame marker -void jpeg_encoder::emit_sof() -{ - emit_marker(M_SOF0); /* baseline */ - emit_word(3 * m_num_components + 2 + 5 + 1); - emit_byte(8); /* precision */ - emit_word(m_image_y); - emit_word(m_image_x); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); /* component ID */ - emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */ - emit_byte(i > 0); /* quant. table num */ - } -} - -// Emit Huffman table. -void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag) -{ - emit_marker(M_DHT); - - int length = 0; - for (int i = 1; i <= 16; i++) - length += bits[i]; - - emit_word(length + 2 + 1 + 16); - emit_byte(static_cast(index + (ac_flag << 4))); - - for (int i = 1; i <= 16; i++) - emit_byte(bits[i]); - - for (int i = 0; i < length; i++) - emit_byte(val[i]); -} - -// Emit all Huffman tables. -void jpeg_encoder::emit_dhts() -{ - emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false); - emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true); - if (m_num_components == 3) - { - emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false); - emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true); - } -} - -// emit start of scan -void jpeg_encoder::emit_sos() -{ - emit_marker(M_SOS); - emit_word(2 * m_num_components + 2 + 1 + 3); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); - if (i == 0) - emit_byte((0 << 4) + 0); - else - emit_byte((1 << 4) + 1); - } - emit_byte(0); /* spectral selection */ - emit_byte(63); - emit_byte(0); -} - -// Emit all markers at beginning of image file. -void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. 
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. -void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = m_image_x_mcu / m_mcu_x; - - if ((m_mcu_lines[0] = static_cast(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false; - for (int i = 1; i < m_mcu_y; i++) - m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu; - - compute_quant_table(m_quantization_tables[0], s_std_lum_quant); - compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? 
s_std_lum_quant : s_std_croma_quant); - - m_out_buf_left = JPGE_OUT_BUF_SIZE; - m_pOut_buf = m_out_buf; - - if (m_params.m_two_pass_flag) - { - clear_obj(m_huff_count); - first_pass_init(); - } - else - { - memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES); - memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES); - memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES); - memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES); - if (!second_pass_init()) return false; // in effect, skip over the first pass - } - return m_all_stream_writes_succeeded; -} - -void jpeg_encoder::load_block_8_8_grey(int x) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[i] + x; - pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128; - pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128; - } -} - -void jpeg_encoder::load_block_8_8(int x, int y, int c) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x = (x * (8 * 3)) + c; - y <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[y + i] + x; - pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128; - pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128; - } -} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for 
(int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - 
code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to shut up static analysis - { - for (int i = m_mcu_y_ofs; i < m_mcu_y; i++) - memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu); - } - - process_mcu_row(); - } - - if (m_pass_num == 1) - return terminate_pass_one(); - else - return terminate_pass_two(); -} - -void jpeg_encoder::load_mcu(const void *pSrc) -{ - const uint8* Psrc = reinterpret_cast(pSrc); - - uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst - - if (m_num_components == 1) - { - if (m_image_bpp == 4) - RGBA_to_Y(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_Y(pDst, Psrc, m_image_x); - else - memcpy(pDst, Psrc, m_image_x); - } - else - { - if (m_image_bpp == 4) - RGBA_to_YCC(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_YCC(pDst, Psrc, m_image_x); - else - Y_to_YCC(pDst, Psrc, m_image_x); - } - - // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16 - if (m_num_components == 1) - memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x); - else - { - const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2]; - uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt; - for (int i = m_image_x; i < m_image_x_mcu; i++) - { - *q++ = y; *q++ = cb; *q++ = cr; - } - } - - if (++m_mcu_y_ofs == m_mcu_y) - { - process_mcu_row(); - m_mcu_y_ofs = 0; - } -} - -void jpeg_encoder::clear() -{ - m_mcu_lines[0] = NULL; - m_pass_num = 0; - m_all_stream_writes_succeeded = true; -} - -jpeg_encoder::jpeg_encoder() -{ - clear(); -} - -jpeg_encoder::~jpeg_encoder() -{ - deinit(); -} - -bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params 
&comp_params) -{ - deinit(); - if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false; - m_pStream = pStream; - m_params = comp_params; - return jpg_open(width, height, src_channels); -} - -void jpeg_encoder::deinit() -{ - jpge_free(m_mcu_lines[0]); - clear(); -} - -bool jpeg_encoder::process_scanline(const void* pScanline) -{ - if ((m_pass_num < 1) || (m_pass_num > 2)) return false; - if (m_all_stream_writes_succeeded) - { - if (!pScanline) - { - if (!process_end_of_image()) return false; - } - else - { - load_mcu(pScanline); - } - } - return m_all_stream_writes_succeeded; -} - -// Higher level wrappers/examples (optional). -#include - -class cfile_stream : public output_stream -{ - cfile_stream(const cfile_stream &); - cfile_stream &operator= (const cfile_stream &); - - FILE* m_pFile; - bool m_bStatus; - -public: - cfile_stream() : m_pFile(NULL), m_bStatus(false) { } - - virtual ~cfile_stream() - { - close(); - } - - bool open(const char *pFilename) - { - close(); -#if defined(_MSC_VER) - if (fopen_s(&m_pFile, pFilename, "wb") != 0) - { - return false; - } -#else - m_pFile = fopen(pFilename, "wb"); -#endif - m_bStatus = (m_pFile != NULL); - return m_bStatus; - } - - bool close() - { - if (m_pFile) - { - if (fclose(m_pFile) == EOF) - { - m_bStatus = false; - } - m_pFile = NULL; - } - return m_bStatus; - } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1); - return m_bStatus; - } - - uint get_size() const - { - return m_pFile ? ftell(m_pFile) : 0; - } -}; - -// Writes JPEG image to file. -bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - cfile_stream dst_stream; - if (!dst_stream.open(pFilename)) - return false; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - // i, width, and num_channels are all 64bit - const uint8* pBuf = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pBuf)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - return dst_stream.close(); -} - -class memory_stream : public output_stream -{ - memory_stream(const memory_stream &); - memory_stream &operator= (const memory_stream &); - - uint8 *m_pBuf; - uint64_t m_buf_size, m_buf_ofs; - -public: - memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { } - - virtual ~memory_stream() { } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - uint64_t buf_remaining = m_buf_size - m_buf_ofs; - if ((uint64_t)len > buf_remaining) - return false; - memcpy(m_pBuf + m_buf_ofs, pBuf, len); - m_buf_ofs += len; - return true; - } - - uint64_t get_size() const - { - return m_buf_ofs; - } -}; - -bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - if ((!pDstBuf) || (!buf_size)) - return false; - - memory_stream dst_stream(pDstBuf, buf_size); - - buf_size = 0; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, 
comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/erc/entity-referring-classifier/ercbcm/__init__.py b/spaces/erc/entity-referring-classifier/ercbcm/__init__.py deleted file mode 100644 index cd5717ea0fdf34b36d7a80fe5bff112cdd04fe0b..0000000000000000000000000000000000000000 --- a/spaces/erc/entity-referring-classifier/ercbcm/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -import os, sys - -myPath = os.path.dirname(os.path.abspath(__file__)) -sys.path.insert(0, myPath + '/../') - -# ========== - -import torch - -from ercbcm.model_loader import load -from ercbcm.ERCBCM import ERCBCM -from modules.tokenizer import tokenizer, normalize_v2, PAD_TOKEN_ID - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -# ========== - -model_for_predict = ERCBCM().to(device) -load('ercbcm/model.pt', model_for_predict, device) - -def predict(sentence, name): - label = torch.tensor([0]) - label = label.type(torch.LongTensor) - label = label.to(device) - text = tokenizer.encode(normalize_v2(sentence, name)) - text += [PAD_TOKEN_ID] * (128 - len(text)) - text = torch.tensor([text]) - text = text.type(torch.LongTensor) - text = text.to(device) - _, output = model_for_predict(text, label) - pred = torch.argmax(output, 1).tolist()[0] - return 'CALLING' if pred == 1 else 'MENTIONING' \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/__init__.py b/spaces/eson/tokenizer-arena/vocab/__init__.py deleted file mode 100644 index 70bddaeb12fbbccf0e8b8d47767af06b4cd82479..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/__init__.py +++ /dev/null @@ -1,152 +0,0 @@ -import importlib -from enum import Enum, auto - -"""Interface: -tokenizer.encode -tokenizer.decode -tokenizer.convert_ids_to_tokens - -tokenizer.parent = "" -tokenizer.vocab_size -tokenizer.get_vocab() # gpt-neox-20b, llama -tokenizer.type = TokenizerType.ByteBPE.name -tokenizer.implementation = TokenizerImpl.SentencePiece.name # https://github.com/facebookresearch/llama/blob/main/llama/tokenizer.py - "HFGPT2Tokenizer", "HFTokenizer", "GPT2BPETokenizer", "CharLevelTokenizer", "TiktokenTokenizer", "SPMTokenizer", https://github.com/EleutherAI/gpt-neox/blob/main/tools/preprocess_data.py - - - bert - - 特征 - - 示例: - - gpt2 - - 特征: - - sentencepiece: - - 特征:.sp_model 是SentencePieceProcessor类型,sp_model.id_to_piece,有tokenizer.json tokenizer.model,词典字符有 ▁, - - 示例:llama,baichuan - - tiktoken - - icetk - - hf_tokenizer - - 特征: - - .model 是 tokenizer.models.BPE 类型 - - 词典有 Ġ "\u0120" 开头 - - 有1个tokenizer.json(包括 merge vocab),或者分开独立文件 - - .model.from_file .model.save .model.token_to_id .model.tokenize - - 示例:gpt_neox_20b, moss, bloom - - tiktoken - - 特征:空格就是空格, - - 示例:gpt3.5 gpt4 -tokenizer.comments = "split all numbers into individual digits, " \ - "and fallback to bytes to decompose unknown UTF-8 characters" - -tokenizer.all_special_tokens # baichuan -tokenizer.special_tokens_set # gpt3.5_turbo -tokenizer.special_tokens_map - -tokenizer.dependency [sentencepiece, tiktoken, icetk] -""" - -Animal = Enum('Animal', 'ANT BEE CAT DOG') - -uniq_tokenizers = [ 
- "" -] - -all_tokenizers = [ - "gpt_35_turbo", - "gpt_4", - "gpt2", - "gpt2_chinese", - "bert_base_cased", - "bert_base_uncased", - "bert_base_chinese", - "kplug", - "moss", - # - # ###### - "chatyuan_large_v2", - "prompt_clue", - # - # #### bloom 系列 - "bloom", - # "bloomz_6b4_zh", - # "belle_7b_2m", # 模型和词典都基于bloom - # - "gpt_nexo_20b", - # "gpt_neox_chinese_v1", - # - # ##### glm系列 - "glm_chinese", - "chatglm_6b", - "chatglm2_6b", - # - # #### llama alpaca系列 - "llama", # '中文单字': 700, '中文多字': 0 - "chinese_llama", # - "chinese_llama2", # - # "chinese_alpaca_lora_7b", # 中文Alpaca模型在上述中文LLaMA模型的基础上进一步使用了指令数据进行精调。 - # "belle_llama_ext_7b", - # "alpaca_7b", - "baichuan", - "baichuan2", - "qwen", - "internlm_chat_7b", - "falcon_180b", - # "goat", -] - -class TokenizerType(Enum): - """ - - https://huggingface.co/docs/transformers/tokenizer_summary - - https://github.com/EleutherAI/gpt-neox/blob/main/megatron/tokenizer/tokenizer.py - - https://github.com/google/sentencepiece/blob/3863f7648e5d8edb571ac592f3ac4f5f0695275a/src/sentencepiece_model.proto#L48 - - UNIGRAM = 1; // Unigram language model with dynamic algorithm - - BPE = 2; // Byte Pair Encoding - - WORD = 3; // Delimitered by whitespace. - - CHAR = 4; // tokenizes into character sequence - """ - BPE = auto() - ByteBPE = auto() # BBPE Byte-Level BPE - GPT2BPETokenizer = auto() # - BERTTokenizer = auto() - - -# class TokenizerType(Enum): -# -# # BERTTokenizer -# # 依赖一个txt文件 -# -# -# # https://github.com/EleutherAI/gpt-neox/blob/v2.0/megatron/tokenizer/tokenizer.py#L231 -# # 依赖一个json文件,Tokenizer.from_file(vocab_file) -# # 案例:gpt-neox-20B -# HFTokenizer = auto() -# -# # 依赖: model_file, sentencepiece.SentencePieceProcessor(model_file) -# # 案例: -# SentencePieceTokenizer = auto() -# -# -# # 依赖: 3个json文件:vocab.json, merges.txt, special_tokens.txt -# # 源码: -# # - https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/tokenizer/gpt2_tokenization.py#L92 -# # Byte-level BPE -# GPT2BPETokenizer = auto() - - -class TokenizerImpl(Enum): - """ - """ - SentencePiece = auto() # - - # https://github.com/huggingface/transformers/blob/v4.30.2/src/transformers/models/gpt2/tokenization_gpt2.py#L104 - # 构造词典: - # - GPT2Tokenizer = auto() - BertTokenizer = auto() # - - -def load_tokener(model_name): - tokenizer = importlib.import_module("." + model_name, 'vocab').tokenizer - return tokenizer - - -if __name__ == "__main__": - pass diff --git a/spaces/evaluate-metric/mean_iou/README.md b/spaces/evaluate-metric/mean_iou/README.md deleted file mode 100644 index d92d8019570cfe334310a71ff4df67bbcc5b3da7..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/mean_iou/README.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: Mean IoU -emoji: 🤗 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -tags: -- evaluate -- metric -description: >- - IoU is the area of overlap between the predicted segmentation and the ground truth divided by the area of union - between the predicted segmentation and the ground truth. For binary (two classes) or multi-class segmentation, - the mean IoU of the image is calculated by taking the IoU of each class and averaging them. ---- - -# Metric Card for Mean IoU - - -## Metric Description - -IoU (Intersection over Union) is the area of overlap between the predicted segmentation and the ground truth divided by the area of union between the predicted segmentation and the ground truth. 
- -For binary (two classes) or multi-class segmentation, the *mean IoU* of the image is calculated by taking the IoU of each class and averaging them. - -## How to Use - -The Mean IoU metric takes two numeric arrays as input corresponding to the predicted and ground truth segmentations: -```python ->>> import numpy as np ->>> mean_iou = evaluate.load("mean_iou") ->>> predicted = np.array([[2, 2, 3], [8, 2, 4], [3, 255, 2]]) ->>> ground_truth = np.array([[1, 2, 2], [8, 2, 1], [3, 255, 1]]) ->>> results = mean_iou.compute(predictions=predicted, references=ground_truth, num_labels=10, ignore_index=255) -``` - -### Inputs -**Mandatory inputs** -- `predictions` (`List[ndarray]`): List of predicted segmentation maps, each of shape (height, width). Each segmentation map can be of a different size. -- `references` (`List[ndarray]`): List of ground truth segmentation maps, each of shape (height, width). Each segmentation map can be of a different size. -- `num_labels` (`int`): Number of classes (categories). -- `ignore_index` (`int`): Index that will be ignored during evaluation. - -**Optional inputs** -- `nan_to_num` (`int`): If specified, NaN values will be replaced by the number defined by the user. -- `label_map` (`dict`): If specified, dictionary mapping old label indices to new label indices. -- `reduce_labels` (`bool`): Whether or not to reduce all label values of segmentation maps by 1. Usually used for datasets where 0 is used for background, and background itself is not included in all classes of a dataset (e.g. ADE20k). The background label will be replaced by 255. The default value is `False`. - -### Output Values -The metric returns a dictionary with the following elements: -- `mean_iou` (`float`): Mean Intersection-over-Union (IoU averaged over all categories). -- `mean_accuracy` (`float`): Mean accuracy (averaged over all categories). -- `overall_accuracy` (`float`): Overall accuracy on all images. -- `per_category_accuracy` (`ndarray` of shape `(num_labels,)`): Per category accuracy. -- `per_category_iou` (`ndarray` of shape `(num_labels,)`): Per category IoU. - -The values of all of the scores reported range from from `0.0` (minimum) and `1.0` (maximum). - -Output Example: -```python -{'mean_iou': 0.47750000000000004, 'mean_accuracy': 0.5916666666666666, 'overall_accuracy': 0.5263157894736842, 'per_category_iou': array([0. , 0. , 0.375, 0.4 , 0.5 , 0. , 0.5 , 1. , 1. , 1. ]), 'per_category_accuracy': array([0. , 0. , 0.75 , 0.66666667, 1. , 0. , 0.5 , 1. , 1. , 1. ])} -``` - -#### Values from Popular Papers - -The [leaderboard for the CityScapes dataset](https://paperswithcode.com/sota/semantic-segmentation-on-cityscapes) reports a Mean IOU ranging from 64 to 84; that of [ADE20k](https://paperswithcode.com/sota/semantic-segmentation-on-ade20k) ranges from 30 to a peak of 59.9, indicating that the dataset is more difficult for current approaches (as of 2022). 
- - -### Examples - -```python ->>> import numpy as np ->>> mean_iou = evaluate.load("mean_iou") ->>> # suppose one has 3 different segmentation maps predicted ->>> predicted_1 = np.array([[1, 2], [3, 4], [5, 255]]) ->>> actual_1 = np.array([[0, 3], [5, 4], [6, 255]]) ->>> predicted_2 = np.array([[2, 7], [9, 2], [3, 6]]) ->>> actual_2 = np.array([[1, 7], [9, 2], [3, 6]]) ->>> predicted_3 = np.array([[2, 2, 3], [8, 2, 4], [3, 255, 2]]) ->>> actual_3 = np.array([[1, 2, 2], [8, 2, 1], [3, 255, 1]]) ->>> predictions = [predicted_1, predicted_2, predicted_3] ->>> references = [actual_1, actual_2, actual_3] ->>> results = mean_iou.compute(predictions=predictions, references=references, num_labels=10, ignore_index=255, reduce_labels=False) ->>> print(results) # doctest: +NORMALIZE_WHITESPACE -{'mean_iou': 0.47750000000000004, 'mean_accuracy': 0.5916666666666666, 'overall_accuracy': 0.5263157894736842, 'per_category_iou': array([0. , 0. , 0.375, 0.4 , 0.5 , 0. , 0.5 , 1. , 1. , 1. ]), 'per_category_accuracy': array([0. , 0. , 0.75 , 0.66666667, 1. , 0. , 0.5 , 1. , 1. , 1. ])} -``` - - -## Limitations and Bias -Mean IOU is an average metric, so it will not show you where model predictions differ from the ground truth (i.e. if there are particular regions or classes that the model does poorly on). Further error analysis is needed to gather actional insights that can be used to inform model improvements. - -## Citation(s) -```bibtex -@software{MMSegmentation_Contributors_OpenMMLab_Semantic_Segmentation_2020, -author = {{MMSegmentation Contributors}}, -license = {Apache-2.0}, -month = {7}, -title = {{OpenMMLab Semantic Segmentation Toolbox and Benchmark}}, -url = {https://github.com/open-mmlab/mmsegmentation}, -year = {2020} -}" -``` - - -## Further References -- [Wikipedia article - Jaccard Index](https://en.wikipedia.org/wiki/Jaccard_index) diff --git a/spaces/facebook/MusicGen/audiocraft/metrics/clap_consistency.py b/spaces/facebook/MusicGen/audiocraft/metrics/clap_consistency.py deleted file mode 100644 index d2a6c61ae177533ca2fb17e25bc77d2acbbe3791..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/metrics/clap_consistency.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from pathlib import Path -import typing as tp - -import torch -import torchmetrics -from transformers import RobertaTokenizer # type: ignore - -from ..data.audio_utils import convert_audio -from ..environment import AudioCraftEnvironment -from ..utils.utils import load_clap_state_dict - -try: - import laion_clap # type: ignore -except ImportError: - laion_clap = None - - -class TextConsistencyMetric(torchmetrics.Metric): - """Text consistency metric measuring consistency between audio and text pairs.""" - - def update(self, audio: torch.Tensor, text: tp.List[str], sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - raise NotImplementedError("implement how to update the metric from the audio and text pairs.") - - def compute(self): - raise NotImplementedError("implement how to compute the final metric score.") - - -class CLAPTextConsistencyMetric(TextConsistencyMetric): - """Text consistency metric relying on Contrastive Language-Audio Pretraining (CLAP). 
- - This metric is similar to the MuLan Cycle Consistency from MusicLM (https://arxiv.org/pdf/2301.11325.pdf) - or the CLAP score used in Make-An-Audio (https://arxiv.org/pdf/2301.12661v1.pdf). - - As a joint audio-text embedding model, a pretrained CLAP model can be used to quantify the - similarity between audio-text pairs. We compute the CLAP embeddings from the text descriptions as - well as the generated audio based on them, and define the MCC metric as the average cosine similarity - between these embeddings. - - Model implementation & pre-trained checkpoints: https://github.com/LAION-AI/CLAP - """ - def __init__(self, model_path: tp.Union[str, Path], model_arch: str = 'HTSAT-tiny', enable_fusion: bool = False): - super().__init__() - if laion_clap is None: - raise ImportError("Please install CLAP to compute text consistency: 'pip install laion_clap'") - self.add_state("cosine_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("weight", default=torch.tensor(0.), dist_reduce_fx="sum") - self._initialize_model(model_path, model_arch, enable_fusion) - - def _initialize_model(self, model_path: tp.Union[str, Path], model_arch: str, enable_fusion: bool): - model_path = AudioCraftEnvironment.resolve_reference_path(model_path) - self.tokenize = RobertaTokenizer.from_pretrained('roberta-base') - self.model = laion_clap.CLAP_Module(enable_fusion=enable_fusion, amodel=model_arch) - self.model_sample_rate = 48_000 - load_clap_state_dict(self.model, model_path) - self.model.eval() - - def _tokenizer(self, texts: tp.Union[str, tp.List[str]]) -> dict: - # we use the default params from CLAP module here as well - return self.tokenize(texts, padding="max_length", truncation=True, max_length=77, return_tensors="pt") - - def update(self, audio: torch.Tensor, text: tp.List[str], sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - """Compute cosine similarity between audio and text pairs and accumulate scores over the dataset.""" - assert audio.size(0) == len(text), "Number of audio and text samples should match" - assert torch.all(sample_rates == sample_rates[0].item()), "All items in batch should have the same sample rate" - sample_rate = int(sample_rates[0].item()) - # convert audio batch to 48kHz monophonic audio with no channel dimension: [B, C, T] -> [B, T] - audio = convert_audio(audio, from_rate=sample_rate, to_rate=self.model_sample_rate, to_channels=1).mean(dim=1) - audio_embeddings = self.model.get_audio_embedding_from_data(audio, use_tensor=True) - text_embeddings = self.model.get_text_embedding(text, tokenizer=self._tokenizer, use_tensor=True) - # cosine similarity between the text and the audio embedding - cosine_sim = torch.nn.functional.cosine_similarity(audio_embeddings, text_embeddings, dim=1, eps=1e-8) - self.cosine_sum += cosine_sim.sum(dim=0) - self.weight += torch.tensor(cosine_sim.size(0)) - - def compute(self): - """Computes the average cosine similarty across all audio/text pairs.""" - assert self.weight.item() > 0, "Unable to compute with total number of comparisons <= 0" # type: ignore - return (self.cosine_sum / self.weight).item() # type: ignore diff --git a/spaces/facebook/StyleNeRF/viz/stylemix_widget.py b/spaces/facebook/StyleNeRF/viz/stylemix_widget.py deleted file mode 100644 index 0d7bf3e6b4bed1f06774a9d4bd0797cf699f9142..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/viz/stylemix_widget.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import imgui -from gui_utils import imgui_utils - -#---------------------------------------------------------------------------- - -class StyleMixingWidget: - def __init__(self, viz): - self.viz = viz - self.seed_def = 1000 - self.seed = self.seed_def - self.animate = False - self.enables = [] - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - num_ws = viz.result.get('num_ws', 0) - num_enables = viz.result.get('num_ws', 18) - self.enables += [False] * max(num_enables - len(self.enables), 0) - - if show: - imgui.text('Stylemix') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 8), imgui_utils.grayed_out(num_ws == 0): - _changed, self.seed = imgui.input_int('##seed', self.seed) - imgui.same_line(viz.label_w + viz.font_size * 8 + viz.spacing) - with imgui_utils.grayed_out(num_ws == 0): - _clicked, self.animate = imgui.checkbox('Anim', self.animate) - - pos2 = imgui.get_content_region_max()[0] - 1 - viz.button_w - pos1 = pos2 - imgui.get_text_line_height() - viz.spacing - pos0 = viz.label_w + viz.font_size * 12 - imgui.push_style_var(imgui.STYLE_FRAME_PADDING, [0, 0]) - for idx in range(num_enables): - imgui.same_line(round(pos0 + (pos1 - pos0) * (idx / (num_enables - 1)))) - if idx == 0: - imgui.set_cursor_pos_y(imgui.get_cursor_pos_y() + 3) - with imgui_utils.grayed_out(num_ws == 0): - _clicked, self.enables[idx] = imgui.checkbox(f'##{idx}', self.enables[idx]) - if imgui.is_item_hovered(): - imgui.set_tooltip(f'{idx}') - imgui.pop_style_var(1) - - imgui.same_line(pos2) - imgui.set_cursor_pos_y(imgui.get_cursor_pos_y() - 3) - with imgui_utils.grayed_out(num_ws == 0): - if imgui_utils.button('Reset', width=-1, enabled=(self.seed != self.seed_def or self.animate or any(self.enables[:num_enables]))): - self.seed = self.seed_def - self.animate = False - self.enables = [False] * num_enables - - if any(self.enables[:num_ws]): - viz.args.stylemix_idx = [idx for idx, enable in enumerate(self.enables) if enable] - viz.args.stylemix_seed = self.seed & ((1 << 32) - 1) - if self.animate: - self.seed += 1 - -#---------------------------------------------------------------------------- diff --git a/spaces/falterWliame/Face_Mask_Detection/3Dfier V1021 Portable Free20 TOP.md b/spaces/falterWliame/Face_Mask_Detection/3Dfier V1021 Portable Free20 TOP.md deleted file mode 100644 index 52c9eea4d30765d5630402d0e52bbb422ee8a136..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/3Dfier V1021 Portable Free20 TOP.md +++ /dev/null @@ -1,120 +0,0 @@ -
-

3Dfier V1021 Portable Free20: How to Watch 3D Videos on Any Display

-

Do you love watching 3D videos but don't have a 3D display or glasses? Do you want to enjoy 3D videos on your PC, laptop, TV, or smartphone without spending any money? If so, you need 3Dfier V1021 Portable Free20, a piece of software that can transform any normal video file into 3D in real time.

-

3Dfier V1021 Portable Free20 is a DirectShow filter that works with popular media players and supports a wide range of video formats, as long as the files are played through DirectShow. It can create a 3D effect for any video file and display it on any 3D-capable device, such as side-by-side 3D displays, anaglyph glasses, the Zalman 3D monitor, nVidia 3D Vision, e-Dimensional shutter glasses, and 3DTVs. It can also work with DVBViewer, letting you watch DVB TV channels in 3D.

-

3Dfier V1021 Portable Free20


Download 🌟 https://urlca.com/2uDcPS



-

In this article, we will show you how to download, install, and use 3Dfier V1021 Portable Free20 to watch 3D videos on any display. We will also give you some tips and tricks on how to optimize your 3D viewing experience. Let's get started!

-

How to Download 3Dfier V1021 Portable Free20?

-

Downloading 3Dfier V1021 Portable Free20 is straightforward. You just need to follow these steps:

-
    -
  1. Go to this link: https://cinurl.com/2t9eyD
  2. -
  3. Click on the download button and wait for the file to be downloaded.
  4. -
  5. The file size is about 3.17 MB and the file name is 3dfier_v1.0.21_portable.rar.
  6. -
  7. Extract the file using a software like WinRAR or 7-Zip.
  8. -
  9. You will get a folder with the name 3dfier_v1.0.21_portable.
  10. -
  11. Inside the folder, you will find the executable file of 3Dfier V1021 Portable Free20.
  12. -
-

How to Install 3Dfier V1021 Portable Free20?

-

Installing 3Dfier V1021 Portable Free20 is not required, as it is portable software that runs without installation. However, you need to register it before using it. Here are the steps to register 3Dfier V1021 Portable Free20:

-
    -
  1. Run the executable file of 3Dfier V1021 Portable Free20 as administrator.
  2. -
  3. A window will pop up asking you to enter your name and email address.
  4. -
  5. Enter any name and email address that you want and click on OK.
  6. -
  7. A message will appear saying that your registration is successful and you can use 3Dfier V1021 Portable Free20 for free.
  8. -
  9. Click on OK and close the window.
  10. -
-

How to Use 3Dfier V1021 Portable Free20?

-

Using 3Dfier V1021 Portable Free20 is very easy and fun. You just need to follow these steps:

-
    -
  1. Run the executable file of 3Dfier V1021 Portable Free20 as administrator.
  2. -
  3. A window will pop up showing you the settings of 3Dfier V1021 Portable Free20.
  4. -
  5. You can adjust the settings according to your preference and display device.
  6. -
  7. You can choose the output mode from side-by-side, anaglyph, interlaced, or checkerboard.
  8. -
  9. You can choose the output format from RGB24, YUY2, or NV12.
  10. -
  11. You can choose the output resolution from original, half-width, half-height, or quarter-size.
  12. -
  13. You can choose the output aspect ratio from original, stretch-to-fit, or crop-to-fit.
  14. -
  15. You can choose the output frame rate from original or fixed.
  16. -
  17. You can choose the output quality from low, medium, high, or best.
  18. -
  19. You can choose the output depth from near plane or far plane.
  20. -
  21. You can choose the output convergence from zero parallax or custom value.
  22. -
  23. You can choose the output swap from left-right or right-left.
  24. -
  25. You can also enable or disable some options such as invert depth, flip horizontal, flip vertical, deinterlace input, smooth output, sharpen output, or auto adjust depth.
  26. -
  27. After adjusting the settings, click on OK and close the window.
  28. -
  29. Open your media player that supports directshow and play any video file that you want to watch in 3D.
  30. -
  31. You will see that the video file is transformed into 3D by 3Dfier V1021 Portable Free20 in real time.
  32. -
  33. You can enjoy watching your video file in 3D on any display device that supports 3D viewing.
  34. -
- -

What are Some Tips and Tricks for Using 3Dfier V1021 Portable Free20?

- -

To get the best results from 3Dfier V1021 Portable Free20, experiment with the settings described above, particularly the output mode, depth, and convergence, until the 3D effect looks right on your display device and feels comfortable to watch.

-
-
\ No newline at end of file diff --git a/spaces/vamsikolla/MygenerativeAIchatbot/app.py b/spaces/vamsikolla/MygenerativeAIchatbot/app.py deleted file mode 100644 index e98d5bd3b7bae220cb4fa9c195703e8668c336fa..0000000000000000000000000000000000000000 --- a/spaces/vamsikolla/MygenerativeAIchatbot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Hello meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/vonewman/mon-application-de-traduction-de-text/app.py b/spaces/vonewman/mon-application-de-traduction-de-text/app.py deleted file mode 100644 index a30e3399b12c466306635285392ca3518410a19c..0000000000000000000000000000000000000000 --- a/spaces/vonewman/mon-application-de-traduction-de-text/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import gradio as gr - -from transformers import pipeline - -translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr") - -def translate_sentence(input_text): - for value in translator(input_text)[0].values(): - return value - -iface = gr.Interface(fn = translate_sentence, inputs = 'text', outputs = 'text', - title = "Traduction EN-FR", - description="Un mini google translate avec huggingface") - -iface.launch(inline = False) \ No newline at end of file diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/cost_manager.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/cost_manager.py deleted file mode 100644 index 21b37d5523412705f96cea322e9d4b26459727a8..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/cost_manager.py +++ /dev/null @@ -1,79 +0,0 @@ -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/28 -@Author : mashenquan -@File : openai.py -@Desc : mashenquan, 2023/8/28. Separate the `CostManager` class to support user-level cost accounting. 
-""" - -from pydantic import BaseModel -from metagpt.logs import logger -from metagpt.utils.token_counter import TOKEN_COSTS -from typing import NamedTuple - - -class Costs(NamedTuple): - total_prompt_tokens: int - total_completion_tokens: int - total_cost: float - total_budget: float - - -class CostManager(BaseModel): - """Calculate the overhead of using the interface.""" - - total_prompt_tokens: int = 0 - total_completion_tokens: int = 0 - total_budget: float = 0 - max_budget: float = 10.0 - total_cost: float = 0 - - def update_cost(self, prompt_tokens, completion_tokens, model): - """ - Update the total cost, prompt tokens, and completion tokens. - - Args: - prompt_tokens (int): The number of tokens used in the prompt. - completion_tokens (int): The number of tokens used in the completion. - model (str): The model used for the API call. - """ - self.total_prompt_tokens += prompt_tokens - self.total_completion_tokens += completion_tokens - cost = (prompt_tokens * TOKEN_COSTS[model]["prompt"] + completion_tokens * TOKEN_COSTS[model][ - "completion"]) / 1000 - self.total_cost += cost - logger.info( - f"Total running cost: ${self.total_cost:.3f} | Max budget: ${self.max_budget:.3f} | " - f"Current cost: ${cost:.3f}, prompt_tokens: {prompt_tokens}, completion_tokens: {completion_tokens}" - ) - - def get_total_prompt_tokens(self): - """ - Get the total number of prompt tokens. - - Returns: - int: The total number of prompt tokens. - """ - return self.total_prompt_tokens - - def get_total_completion_tokens(self): - """ - Get the total number of completion tokens. - - Returns: - int: The total number of completion tokens. - """ - return self.total_completion_tokens - - def get_total_cost(self): - """ - Get the total cost of API calls. - - Returns: - float: The total cost of API calls. - """ - return self.total_cost - - def get_costs(self) -> Costs: - """获得所有开销""" - return Costs(self.total_prompt_tokens, self.total_completion_tokens, self.total_cost, self.total_budget) diff --git a/spaces/williambr/SteamlitMapPractice2/app.py b/spaces/williambr/SteamlitMapPractice2/app.py deleted file mode 100644 index ffc1cb5644220a916cb146fff30c226a4d0fb251..0000000000000000000000000000000000000000 --- a/spaces/williambr/SteamlitMapPractice2/app.py +++ /dev/null @@ -1,48 +0,0 @@ -#import definitions -#from tkinter import W - -import pandas as pd -import streamlit as st -import gradio as gr - -#cleaning data -df = pd.read_csv('Map-City-State-Zip-Lat-Long.txt', dtype=str, sep=';') -df["Latitude"] = df["Latitude"].astype(float) -df["Longitude"] = df["Longitude"].astype(float) - - -def writeToDataFrame(dataframe, name, location, latitude, longitude): - newdf = {'Name': name, 'Location': location, 'Latitude': latitude, 'Longitude': longitude} - dataframe = dataframe.append(newdf, ignore_index = True) - return dataframe - -st.title("Input a city and state I'll take you there! - Ex. 
Boston, MA") -city_and_state_string = st.text_input("Please search for a location:") - -if city_and_state_string != "": - - split_city_state = city_and_state_string.split(", ") - state_name = split_city_state[1] - city_name = split_city_state[0] - - #create a dataframe consisting of the correct city input - city_df = df[df["City"] == city_name] - - #use the city dateframe to confirm you are using the right map - lat = city_df[city_df["State"] == state_name]["Latitude"].values[0] - lon = city_df[city_df["State"] == state_name]["Longitude"].values[0] - city_list = [] - lat_list = [] - long_list = [] - city_list.append(city_name) - lat_list.append(lat) - long_list.append(lon) - - st.map(pd.DataFrame({'cities' : city_list, 'lat' : lat_list, 'lon' : long_list})) - checkbox = st.checkbox("Show/Hide Latitude/Longitude") - - if checkbox: - st.write(city_name, "is located at: ", lat, ",", lon) - - - diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/utils/reidtools.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/utils/reidtools.py deleted file mode 100644 index acb87602ba905062424a9ebbe524f9cad83384e3..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/utils/reidtools.py +++ /dev/null @@ -1,154 +0,0 @@ -from __future__ import print_function, absolute_import -import numpy as np -import shutil -import os.path as osp -import cv2 - -from .tools import mkdir_if_missing - -__all__ = ['visualize_ranked_results'] - -GRID_SPACING = 10 -QUERY_EXTRA_SPACING = 90 -BW = 5 # border width -GREEN = (0, 255, 0) -RED = (0, 0, 255) - - -def visualize_ranked_results( - distmat, dataset, data_type, width=128, height=256, save_dir='', topk=10 -): - """Visualizes ranked results. - - Supports both image-reid and video-reid. - - For image-reid, ranks will be plotted in a single figure. For video-reid, ranks will be - saved in folders each containing a tracklet. - - Args: - distmat (numpy.ndarray): distance matrix of shape (num_query, num_gallery). - dataset (tuple): a 2-tuple containing (query, gallery), each of which contains - tuples of (img_path(s), pid, camid, dsetid). - data_type (str): "image" or "video". - width (int, optional): resized image width. Default is 128. - height (int, optional): resized image height. Default is 256. - save_dir (str): directory to save output images. - topk (int, optional): denoting top-k images in the rank list to be visualized. - Default is 10. 
- """ - num_q, num_g = distmat.shape - mkdir_if_missing(save_dir) - - print('# query: {}\n# gallery {}'.format(num_q, num_g)) - print('Visualizing top-{} ranks ...'.format(topk)) - - query, gallery = dataset - assert num_q == len(query) - assert num_g == len(gallery) - - indices = np.argsort(distmat, axis=1) - - def _cp_img_to(src, dst, rank, prefix, matched=False): - """ - Args: - src: image path or tuple (for vidreid) - dst: target directory - rank: int, denoting ranked position, starting from 1 - prefix: string - matched: bool - """ - if isinstance(src, (tuple, list)): - if prefix == 'gallery': - suffix = 'TRUE' if matched else 'FALSE' - dst = osp.join( - dst, prefix + '_top' + str(rank).zfill(3) - ) + '_' + suffix - else: - dst = osp.join(dst, prefix + '_top' + str(rank).zfill(3)) - mkdir_if_missing(dst) - for img_path in src: - shutil.copy(img_path, dst) - else: - dst = osp.join( - dst, prefix + '_top' + str(rank).zfill(3) + '_name_' + - osp.basename(src) - ) - shutil.copy(src, dst) - - for q_idx in range(num_q): - qimg_path, qpid, qcamid = query[q_idx][:3] - qimg_path_name = qimg_path[0] if isinstance( - qimg_path, (tuple, list) - ) else qimg_path - - if data_type == 'image': - qimg = cv2.imread(qimg_path) - qimg = cv2.resize(qimg, (width, height)) - qimg = cv2.copyMakeBorder( - qimg, BW, BW, BW, BW, cv2.BORDER_CONSTANT, value=(0, 0, 0) - ) - # resize twice to ensure that the border width is consistent across images - qimg = cv2.resize(qimg, (width, height)) - num_cols = topk + 1 - grid_img = 255 * np.ones( - ( - height, - num_cols*width + topk*GRID_SPACING + QUERY_EXTRA_SPACING, 3 - ), - dtype=np.uint8 - ) - grid_img[:, :width, :] = qimg - else: - qdir = osp.join( - save_dir, osp.basename(osp.splitext(qimg_path_name)[0]) - ) - mkdir_if_missing(qdir) - _cp_img_to(qimg_path, qdir, rank=0, prefix='query') - - rank_idx = 1 - for g_idx in indices[q_idx, :]: - gimg_path, gpid, gcamid = gallery[g_idx][:3] - invalid = (qpid == gpid) & (qcamid == gcamid) - - if not invalid: - matched = gpid == qpid - if data_type == 'image': - border_color = GREEN if matched else RED - gimg = cv2.imread(gimg_path) - gimg = cv2.resize(gimg, (width, height)) - gimg = cv2.copyMakeBorder( - gimg, - BW, - BW, - BW, - BW, - cv2.BORDER_CONSTANT, - value=border_color - ) - gimg = cv2.resize(gimg, (width, height)) - start = rank_idx*width + rank_idx*GRID_SPACING + QUERY_EXTRA_SPACING - end = ( - rank_idx+1 - ) * width + rank_idx*GRID_SPACING + QUERY_EXTRA_SPACING - grid_img[:, start:end, :] = gimg - else: - _cp_img_to( - gimg_path, - qdir, - rank=rank_idx, - prefix='gallery', - matched=matched - ) - - rank_idx += 1 - if rank_idx > topk: - break - - if data_type == 'image': - imname = osp.basename(osp.splitext(qimg_path_name)[0]) - cv2.imwrite(osp.join(save_dir, imname + '.jpg'), grid_img) - - if (q_idx+1) % 100 == 0: - print('- done {}/{}'.format(q_idx + 1, num_q)) - - print('Done. 
Images have been saved to "{}" ...'.format(save_dir)) diff --git a/spaces/xujunhao/AudioLM/README.md b/spaces/xujunhao/AudioLM/README.md deleted file mode 100644 index 38f2c3a59d261289c904be3c74e52679d03fe530..0000000000000000000000000000000000000000 --- a/spaces/xujunhao/AudioLM/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AudioLM -emoji: 👀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yaoshining/text-generation-webui/docs/README.md b/spaces/yaoshining/text-generation-webui/docs/README.md deleted file mode 100644 index 06b73b8468ab263a230cb44ba45a6c95f00b2ada..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/docs/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# text-generation-webui documentation - -## Table of contents - -* [Audio Notification](Audio-Notification.md) -* [Chat mode](Chat-mode.md) -* [DeepSpeed](DeepSpeed.md) -* [Docker](Docker.md) -* [ExLlama](ExLlama.md) -* [Extensions](Extensions.md) -* [FlexGen](FlexGen.md) -* [Generation parameters](Generation-parameters.md) -* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md) -* [llama.cpp models](llama.cpp-models.md) -* [LLaMA model](LLaMA-model.md) -* [LoRA](LoRA.md) -* [Low VRAM guide](Low-VRAM-guide.md) -* [RWKV model](RWKV-model.md) -* [Spell book](Spell-book.md) -* [System requirements](System-requirements.md) -* [Training LoRAs](Training-LoRAs.md) -* [Windows installation guide](Windows-installation-guide.md) -* [WSL installation guide](WSL-installation-guide.md) diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/training/training_loop.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/training/training_loop.py deleted file mode 100644 index d9ccb45b1a0321f1d938efa6a62229ffe396dcfe..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/training/training_loop.py +++ /dev/null @@ -1,278 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Main training script.""" - -import os -import numpy as np -import tensorflow as tf -import dnnlib -import dnnlib.tflib as tflib -from dnnlib.tflib.autosummary import autosummary - -import config -import train -from training import dataset -from training import misc -from metrics import metric_base - -#---------------------------------------------------------------------------- -# Just-in-time processing of training images before feeding them to the networks. - -def process_reals(x, lod, mirror_augment, drange_data, drange_net): - with tf.name_scope('ProcessReals'): - with tf.name_scope('DynamicRange'): - x = tf.cast(x, tf.float32) - x = misc.adjust_dynamic_range(x, drange_data, drange_net) - if mirror_augment: - with tf.name_scope('MirrorAugment'): - s = tf.shape(x) - mask = tf.random_uniform([s[0], 1, 1, 1], 0.0, 1.0) - mask = tf.tile(mask, [1, s[1], s[2], s[3]]) - x = tf.where(mask < 0.5, x, tf.reverse(x, axis=[3])) - with tf.name_scope('FadeLOD'): # Smooth crossfade between consecutive levels-of-detail. 
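-            # Build a 2x-downsampled copy of x by averaging each 2x2 pixel block, tile it back up to the
-            # original resolution, and blend x toward it by the fractional part of `lod` for a smooth fade.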
- s = tf.shape(x) - y = tf.reshape(x, [-1, s[1], s[2]//2, 2, s[3]//2, 2]) - y = tf.reduce_mean(y, axis=[3, 5], keepdims=True) - y = tf.tile(y, [1, 1, 1, 2, 1, 2]) - y = tf.reshape(y, [-1, s[1], s[2], s[3]]) - x = tflib.lerp(x, y, lod - tf.floor(lod)) - with tf.name_scope('UpscaleLOD'): # Upscale to match the expected input/output size of the networks. - s = tf.shape(x) - factor = tf.cast(2 ** tf.floor(lod), tf.int32) - x = tf.reshape(x, [-1, s[1], s[2], 1, s[3], 1]) - x = tf.tile(x, [1, 1, 1, factor, 1, factor]) - x = tf.reshape(x, [-1, s[1], s[2] * factor, s[3] * factor]) - return x - -#---------------------------------------------------------------------------- -# Evaluate time-varying training parameters. - -def training_schedule( - cur_nimg, - training_set, - num_gpus, - lod_initial_resolution = 4, # Image resolution used at the beginning. - lod_training_kimg = 600, # Thousands of real images to show before doubling the resolution. - lod_transition_kimg = 600, # Thousands of real images to show when fading in new layers. - minibatch_base = 16, # Maximum minibatch size, divided evenly among GPUs. - minibatch_dict = {}, # Resolution-specific overrides. - max_minibatch_per_gpu = {}, # Resolution-specific maximum minibatch size per GPU. - G_lrate_base = 0.001, # Learning rate for the generator. - G_lrate_dict = {}, # Resolution-specific overrides. - D_lrate_base = 0.001, # Learning rate for the discriminator. - D_lrate_dict = {}, # Resolution-specific overrides. - lrate_rampup_kimg = 0, # Duration of learning rate ramp-up. - tick_kimg_base = 160, # Default interval of progress snapshots. - tick_kimg_dict = {4: 160, 8:140, 16:120, 32:100, 64:80, 128:60, 256:40, 512:30, 1024:20}): # Resolution-specific overrides. - - # Initialize result dict. - s = dnnlib.EasyDict() - s.kimg = cur_nimg / 1000.0 - - # Training phase. - phase_dur = lod_training_kimg + lod_transition_kimg - phase_idx = int(np.floor(s.kimg / phase_dur)) if phase_dur > 0 else 0 - phase_kimg = s.kimg - phase_idx * phase_dur - - # Level-of-detail and resolution. - s.lod = training_set.resolution_log2 - s.lod -= np.floor(np.log2(lod_initial_resolution)) - s.lod -= phase_idx - if lod_transition_kimg > 0: - s.lod -= max(phase_kimg - lod_training_kimg, 0.0) / lod_transition_kimg - s.lod = max(s.lod, 0.0) - s.resolution = 2 ** (training_set.resolution_log2 - int(np.floor(s.lod))) - - # Minibatch size. - s.minibatch = minibatch_dict.get(s.resolution, minibatch_base) - s.minibatch -= s.minibatch % num_gpus - if s.resolution in max_minibatch_per_gpu: - s.minibatch = min(s.minibatch, max_minibatch_per_gpu[s.resolution] * num_gpus) - - # Learning rate. - s.G_lrate = G_lrate_dict.get(s.resolution, G_lrate_base) - s.D_lrate = D_lrate_dict.get(s.resolution, D_lrate_base) - if lrate_rampup_kimg > 0: - rampup = min(s.kimg / lrate_rampup_kimg, 1.0) - s.G_lrate *= rampup - s.D_lrate *= rampup - - # Other parameters. - s.tick_kimg = tick_kimg_dict.get(s.resolution, tick_kimg_base) - return s - -#---------------------------------------------------------------------------- -# Main training script. - -def training_loop( - submit_config, - G_args = {}, # Options for generator network. - D_args = {}, # Options for discriminator network. - G_opt_args = {}, # Options for generator optimizer. - D_opt_args = {}, # Options for discriminator optimizer. - G_loss_args = {}, # Options for generator loss. - D_loss_args = {}, # Options for discriminator loss. - dataset_args = {}, # Options for dataset.load_dataset(). 
- sched_args = {}, # Options for train.TrainingSchedule. - grid_args = {}, # Options for train.setup_snapshot_image_grid(). - metric_arg_list = [], # Options for MetricGroup. - tf_config = {}, # Options for tflib.init_tf(). - G_smoothing_kimg = 10.0, # Half-life of the running average of generator weights. - D_repeats = 1, # How many times the discriminator is trained per G iteration. - minibatch_repeats = 4, # Number of minibatches to run before adjusting training parameters. - reset_opt_for_new_lod = True, # Reset optimizer internal state (e.g. Adam moments) when new layers are introduced? - total_kimg = 15000, # Total length of the training, measured in thousands of real images. - mirror_augment = False, # Enable mirror augment? - drange_net = [-1,1], # Dynamic range used when feeding image data to the networks. - image_snapshot_ticks = 1, # How often to export image snapshots? - network_snapshot_ticks = 10, # How often to export network snapshots? - save_tf_graph = False, # Include full TensorFlow computation graph in the tfevents file? - save_weight_histograms = False, # Include weight histograms in the tfevents file? - resume_run_id = None, # Run ID or network pkl to resume training from, None = start from scratch. - resume_snapshot = None, # Snapshot index to resume training from, None = autodetect. - resume_kimg = 0.0, # Assumed training progress at the beginning. Affects reporting and training schedule. - resume_time = 0.0): # Assumed wallclock time at the beginning. Affects reporting. - - # Initialize dnnlib and TensorFlow. - ctx = dnnlib.RunContext(submit_config, train) - tflib.init_tf(tf_config) - - # Load training set. - training_set = dataset.load_dataset(data_dir=config.data_dir, verbose=True, **dataset_args) - - # Construct networks. - with tf.device('/gpu:0'): - if resume_run_id is not None: - network_pkl = misc.locate_network_pkl(resume_run_id, resume_snapshot) - print('Loading networks from "%s"...' 
% network_pkl) - G, D, Gs = misc.load_pkl(network_pkl) - else: - print('Constructing networks...') - G = tflib.Network('G', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **G_args) - D = tflib.Network('D', num_channels=training_set.shape[0], resolution=training_set.shape[1], label_size=training_set.label_size, **D_args) - Gs = G.clone('Gs') - G.print_layers(); D.print_layers() - - print('Building TensorFlow graph...') - with tf.name_scope('Inputs'), tf.device('/cpu:0'): - lod_in = tf.placeholder(tf.float32, name='lod_in', shape=[]) - lrate_in = tf.placeholder(tf.float32, name='lrate_in', shape=[]) - minibatch_in = tf.placeholder(tf.int32, name='minibatch_in', shape=[]) - minibatch_split = minibatch_in // submit_config.num_gpus - Gs_beta = 0.5 ** tf.div(tf.cast(minibatch_in, tf.float32), G_smoothing_kimg * 1000.0) if G_smoothing_kimg > 0.0 else 0.0 - - G_opt = tflib.Optimizer(name='TrainG', learning_rate=lrate_in, **G_opt_args) - D_opt = tflib.Optimizer(name='TrainD', learning_rate=lrate_in, **D_opt_args) - for gpu in range(submit_config.num_gpus): - with tf.name_scope('GPU%d' % gpu), tf.device('/gpu:%d' % gpu): - G_gpu = G if gpu == 0 else G.clone(G.name + '_shadow') - D_gpu = D if gpu == 0 else D.clone(D.name + '_shadow') - lod_assign_ops = [tf.assign(G_gpu.find_var('lod'), lod_in), tf.assign(D_gpu.find_var('lod'), lod_in)] - reals, labels = training_set.get_minibatch_tf() - reals = process_reals(reals, lod_in, mirror_augment, training_set.dynamic_range, drange_net) - with tf.name_scope('G_loss'), tf.control_dependencies(lod_assign_ops): - G_loss = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, opt=G_opt, training_set=training_set, minibatch_size=minibatch_split, **G_loss_args) - with tf.name_scope('D_loss'), tf.control_dependencies(lod_assign_ops): - D_loss = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, opt=D_opt, training_set=training_set, minibatch_size=minibatch_split, reals=reals, labels=labels, **D_loss_args) - G_opt.register_gradients(tf.reduce_mean(G_loss), G_gpu.trainables) - D_opt.register_gradients(tf.reduce_mean(D_loss), D_gpu.trainables) - G_train_op = G_opt.apply_updates() - D_train_op = D_opt.apply_updates() - - Gs_update_op = Gs.setup_as_moving_average_of(G, beta=Gs_beta) - with tf.device('/gpu:0'): - try: - peak_gpu_mem_op = tf.contrib.memory_stats.MaxBytesInUse() - except tf.errors.NotFoundError: - peak_gpu_mem_op = tf.constant(0) - - print('Setting up snapshot image grid...') - grid_size, grid_reals, grid_labels, grid_latents = misc.setup_snapshot_image_grid(G, training_set, **grid_args) - sched = training_schedule(cur_nimg=total_kimg*1000, training_set=training_set, num_gpus=submit_config.num_gpus, **sched_args) - grid_fakes = Gs.run(grid_latents, grid_labels, is_validation=True, minibatch_size=sched.minibatch//submit_config.num_gpus) - - print('Setting up run dir...') - misc.save_image_grid(grid_reals, os.path.join(submit_config.run_dir, 'reals.png'), drange=training_set.dynamic_range, grid_size=grid_size) - misc.save_image_grid(grid_fakes, os.path.join(submit_config.run_dir, 'fakes%06d.png' % resume_kimg), drange=drange_net, grid_size=grid_size) - summary_log = tf.summary.FileWriter(submit_config.run_dir) - if save_tf_graph: - summary_log.add_graph(tf.get_default_graph()) - if save_weight_histograms: - G.setup_weight_histograms(); D.setup_weight_histograms() - metrics = metric_base.MetricGroup(metric_arg_list) - - print('Training...\n') - ctx.update('', cur_epoch=resume_kimg, max_epoch=total_kimg) - 
maintenance_time = ctx.get_last_update_interval() - cur_nimg = int(resume_kimg * 1000) - cur_tick = 0 - tick_start_nimg = cur_nimg - prev_lod = -1.0 - while cur_nimg < total_kimg * 1000: - if ctx.should_stop(): break - - # Choose training parameters and configure training ops. - sched = training_schedule(cur_nimg=cur_nimg, training_set=training_set, num_gpus=submit_config.num_gpus, **sched_args) - training_set.configure(sched.minibatch // submit_config.num_gpus, sched.lod) - if reset_opt_for_new_lod: - if np.floor(sched.lod) != np.floor(prev_lod) or np.ceil(sched.lod) != np.ceil(prev_lod): - G_opt.reset_optimizer_state(); D_opt.reset_optimizer_state() - prev_lod = sched.lod - - # Run training ops. - for _mb_repeat in range(minibatch_repeats): - for _D_repeat in range(D_repeats): - tflib.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch}) - cur_nimg += sched.minibatch - tflib.run([G_train_op], {lod_in: sched.lod, lrate_in: sched.G_lrate, minibatch_in: sched.minibatch}) - - # Perform maintenance tasks once per tick. - done = (cur_nimg >= total_kimg * 1000) - if cur_nimg >= tick_start_nimg + sched.tick_kimg * 1000 or done: - cur_tick += 1 - tick_kimg = (cur_nimg - tick_start_nimg) / 1000.0 - tick_start_nimg = cur_nimg - tick_time = ctx.get_time_since_last_update() - total_time = ctx.get_time_since_start() + resume_time - - # Report progress. - print('tick %-5d kimg %-8.1f lod %-5.2f minibatch %-4d time %-12s sec/tick %-7.1f sec/kimg %-7.2f maintenance %-6.1f gpumem %-4.1f' % ( - autosummary('Progress/tick', cur_tick), - autosummary('Progress/kimg', cur_nimg / 1000.0), - autosummary('Progress/lod', sched.lod), - autosummary('Progress/minibatch', sched.minibatch), - dnnlib.util.format_time(autosummary('Timing/total_sec', total_time)), - autosummary('Timing/sec_per_tick', tick_time), - autosummary('Timing/sec_per_kimg', tick_time / tick_kimg), - autosummary('Timing/maintenance_sec', maintenance_time), - autosummary('Resources/peak_gpu_mem_gb', peak_gpu_mem_op.eval() / 2**30))) - autosummary('Timing/total_hours', total_time / (60.0 * 60.0)) - autosummary('Timing/total_days', total_time / (24.0 * 60.0 * 60.0)) - - # Save snapshots. - if cur_tick % image_snapshot_ticks == 0 or done: - grid_fakes = Gs.run(grid_latents, grid_labels, is_validation=True, minibatch_size=sched.minibatch//submit_config.num_gpus) - misc.save_image_grid(grid_fakes, os.path.join(submit_config.run_dir, 'fakes%06d.png' % (cur_nimg // 1000)), drange=drange_net, grid_size=grid_size) - if cur_tick % network_snapshot_ticks == 0 or done or cur_tick == 1: - pkl = os.path.join(submit_config.run_dir, 'network-snapshot-%06d.pkl' % (cur_nimg // 1000)) - misc.save_pkl((G, D, Gs), pkl) - metrics.run(pkl, run_dir=submit_config.run_dir, num_gpus=submit_config.num_gpus, tf_config=tf_config) - - # Update summaries and RunContext. - metrics.update_autosummaries() - tflib.autosummary.save_summaries(summary_log, cur_nimg) - ctx.update('%.2f' % sched.lod, cur_epoch=cur_nimg // 1000, max_epoch=total_kimg) - maintenance_time = ctx.get_last_update_interval() - tick_time - - # Write final results. 
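-    # Gs holds the exponential moving average of the generator weights (see G_smoothing_kimg) and is the
-    # network normally used for snapshots and inference.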
- misc.save_pkl((G, D, Gs), os.path.join(submit_config.run_dir, 'network-final.pkl')) - summary_log.close() - - ctx.close() - -#---------------------------------------------------------------------------- diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/infer_video.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/infer_video.py deleted file mode 100644 index 0fa3165a221a02835b2ded24edc7d205056fabe6..0000000000000000000000000000000000000000 --- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/infer_video.py +++ /dev/null @@ -1,94 +0,0 @@ -import os -import cv2 -import numpy as np -import glob -import tqdm -import shutil -import argparse -from face_enhancement import FaceEnhancement - - -def process_video(target_path, out_path, faceenhancer): - fps = 25.0 - os.makedirs(out_path, exist_ok=True) - original_vid_path = target_path - vid_name = "out.mp4" - if not os.path.isdir(target_path): - vid_name = target_path.split("/")[-1] - vidcap = cv2.VideoCapture(target_path) - fps = vidcap.get(cv2.CAP_PROP_FPS) - try: - for match in glob.glob(os.path.join("./tmp/", "*.png")): - os.remove(match) - for match in glob.glob(os.path.join(out_path, "*.png")): - os.remove(match) - except Exception as e: - print(e) - os.makedirs("./tmp/", exist_ok=True) - os.system( - f"ffmpeg -i {target_path} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 ./tmp/frame_%d.png" - ) - target_path = "./tmp/" - else: - print("folder not implemented.") - exit() - - globbed_images = sorted(glob.glob(os.path.join(target_path, "*.png"))) - for image in tqdm.tqdm(globbed_images): - name = image.split("/")[-1] - filename = os.path.join(out_path, name) - im = cv2.imread(image, cv2.IMREAD_COLOR) # BGR - h, w, _ = im.shape - # im = cv2.resize(im, (0,0), fx=2, fy=2) #optional - img, orig_faces, enhanced_faces = faceenhancer.process(im) - img = cv2.resize(img, (w, h)) - cv2.imwrite(filename, img) - - # merge frames to video - video_save_path = os.path.join(out_path, vid_name) - - os.system( - f"ffmpeg -y -r {fps} -i {out_path}/frame_%d.png -i {original_vid_path}" - f" -map 0:v:0 -map 1:a? 
-c:a copy -c:v libx264 -r {fps} -pix_fmt yuv420p {video_save_path}" - ) - - # delete tmp file - shutil.rmtree("./tmp/") - for match in glob.glob(os.path.join(out_path, "*.png")): - os.remove(match) - - -if __name__ == "__main__": - model = { - "name": "GPEN-BFR-512", - "in_size": 512, - "out_size": 512, - "channel_multiplier": 2, - "narrow": 1, - } - parser = argparse.ArgumentParser() - parser.add_argument("--indir", type=str, required=True, help="input file") - parser.add_argument( - "--outdir", - type=str, - required=True, - help="Please provide output folder which has no more than one parent dir that has not been created.", - ) - args = parser.parse_args() - - os.makedirs(args.outdir, exist_ok=True) - - faceenhancer = FaceEnhancement( - use_sr=True, - in_size=model["in_size"], - out_size=model["out_size"], - model=model["name"], - channel_multiplier=model["channel_multiplier"], - narrow=model["narrow"], - ) - - process_video( - args.indir, - args.outdir, - faceenhancer, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bart/tokenization_bart_fast.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bart/tokenization_bart_fast.py deleted file mode 100644 index 464b17c4d4c21740628159d2bb89509517b0d523..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bart/tokenization_bart_fast.py +++ /dev/null @@ -1,307 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The Facebook AI Research Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import json -from typing import List, Optional, Tuple - -from tokenizers import pre_tokenizers, processors - -from ...tokenization_utils_base import AddedToken, BatchEncoding -from ...tokenization_utils_fast import PreTrainedTokenizerFast -from ...utils import logging -from .tokenization_bart import BartTokenizer - - -logger = logging.get_logger(__name__) - - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.json", "merges_file": "merges.txt", "tokenizer_file": "tokenizer.json"} - -# See all BART models at https://huggingface.co/models?filter=bart -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "facebook/bart-base": "https://huggingface.co/facebook/bart-base/resolve/main/vocab.json", - "facebook/bart-large": "https://huggingface.co/facebook/bart-large/resolve/main/vocab.json", - "facebook/bart-large-mnli": "https://huggingface.co/facebook/bart-large-mnli/resolve/main/vocab.json", - "facebook/bart-large-cnn": "https://huggingface.co/facebook/bart-large-cnn/resolve/main/vocab.json", - "facebook/bart-large-xsum": "https://huggingface.co/facebook/bart-large-xsum/resolve/main/vocab.json", - "yjernite/bart_eli5": "https://huggingface.co/yjernite/bart_eli5/resolve/main/vocab.json", - }, - "merges_file": { - "facebook/bart-base": "https://huggingface.co/facebook/bart-base/resolve/main/merges.txt", - "facebook/bart-large": "https://huggingface.co/facebook/bart-large/resolve/main/merges.txt", - "facebook/bart-large-mnli": "https://huggingface.co/facebook/bart-large-mnli/resolve/main/merges.txt", - "facebook/bart-large-cnn": "https://huggingface.co/facebook/bart-large-cnn/resolve/main/merges.txt", - "facebook/bart-large-xsum": "https://huggingface.co/facebook/bart-large-xsum/resolve/main/merges.txt", - "yjernite/bart_eli5": "https://huggingface.co/yjernite/bart_eli5/resolve/main/merges.txt", - }, - "tokenizer_file": { - "facebook/bart-base": "https://huggingface.co/facebook/bart-base/resolve/main/tokenizer.json", - "facebook/bart-large": "https://huggingface.co/facebook/bart-large/resolve/main/tokenizer.json", - "facebook/bart-large-mnli": "https://huggingface.co/facebook/bart-large-mnli/resolve/main/tokenizer.json", - "facebook/bart-large-cnn": "https://huggingface.co/facebook/bart-large-cnn/resolve/main/tokenizer.json", - "facebook/bart-large-xsum": "https://huggingface.co/facebook/bart-large-xsum/resolve/main/tokenizer.json", - "yjernite/bart_eli5": "https://huggingface.co/yjernite/bart_eli5/resolve/main/tokenizer.json", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "facebook/bart-base": 1024, - "facebook/bart-large": 1024, - "facebook/bart-large-mnli": 1024, - "facebook/bart-large-cnn": 1024, - "facebook/bart-large-xsum": 1024, - "yjernite/bart_eli5": 1024, -} - - -class BartTokenizerFast(PreTrainedTokenizerFast): - r""" - Construct a "fast" BART tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2 tokenizer, - using byte-level Byte-Pair-Encoding. 
- - This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will - be encoded differently whether it is at the beginning of the sentence (without space) or not: - - ```python - >>> from transformers import BartTokenizerFast - - >>> tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base") - >>> tokenizer("Hello world")["input_ids"] - [0, 31414, 232, 2] - - >>> tokenizer(" Hello world")["input_ids"] - [0, 20920, 232, 2] - ``` - - You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you - call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. - - - - When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`. - - - - This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should - refer to this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - bos_token (`str`, *optional*, defaults to `""`): - The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. - - - - When building a sequence using special tokens, this is not the token that is used for the beginning of - sequence. The token used is the `cls_token`. - - - - eos_token (`str`, *optional*, defaults to `""`): - The end of sequence token. - - - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - - - sep_token (`str`, *optional*, defaults to `""`): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for - sequence classification or for a text and a question for question answering. It is also used as the last - token of a sequence built with special tokens. - cls_token (`str`, *optional*, defaults to `""`): - The classifier token which is used when doing sequence classification (classification of the whole sequence - instead of per-token classification). It is the first token of the sequence when built with special tokens. - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - mask_token (`str`, *optional*, defaults to `""`): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - add_prefix_space (`bool`, *optional*, defaults to `False`): - Whether or not to add an initial space to the input. This allows to treat the leading word just as any - other word. (BART tokenizer detect beginning of words by the preceding space). - trim_offsets (`bool`, *optional*, defaults to `True`): - Whether the post processing step should trim offsets to avoid including whitespaces. 
- """ - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - slow_tokenizer_class = BartTokenizer - - def __init__( - self, - vocab_file=None, - merges_file=None, - tokenizer_file=None, - errors="replace", - bos_token="", - eos_token="", - sep_token="", - cls_token="", - unk_token="", - pad_token="", - mask_token="", - add_prefix_space=False, - trim_offsets=True, - **kwargs, - ): - mask_token = AddedToken(mask_token, lstrip=True, rstrip=False) if isinstance(mask_token, str) else mask_token - super().__init__( - vocab_file, - merges_file, - tokenizer_file=tokenizer_file, - errors=errors, - bos_token=bos_token, - eos_token=eos_token, - sep_token=sep_token, - cls_token=cls_token, - unk_token=unk_token, - pad_token=pad_token, - mask_token=mask_token, - add_prefix_space=add_prefix_space, - trim_offsets=trim_offsets, - **kwargs, - ) - - pre_tok_state = json.loads(self.backend_tokenizer.pre_tokenizer.__getstate__()) - if pre_tok_state.get("add_prefix_space", add_prefix_space) != add_prefix_space: - pre_tok_class = getattr(pre_tokenizers, pre_tok_state.pop("type")) - pre_tok_state["add_prefix_space"] = add_prefix_space - self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state) - - self.add_prefix_space = add_prefix_space - - # the pre_tokenizer is already updated in the GPT2TokenizerFast `__init__` - tokenizer_component = "post_processor" - tokenizer_component_instance = getattr(self.backend_tokenizer, tokenizer_component, None) - if tokenizer_component_instance: - state = json.loads(tokenizer_component_instance.__getstate__()) - - # The lists 'sep' and 'cls' must be cased in tuples for the object `post_processor_class` - if "sep" in state: - state["sep"] = tuple(state["sep"]) - if "cls" in state: - state["cls"] = tuple(state["cls"]) - - changes_to_apply = False - - if state.get("add_prefix_space", add_prefix_space) != add_prefix_space: - state["add_prefix_space"] = add_prefix_space - changes_to_apply = True - - if state.get("trim_offsets", trim_offsets) != trim_offsets: - state["trim_offsets"] = trim_offsets - changes_to_apply = True - - if changes_to_apply: - component_class = getattr(processors, state.pop("type")) - new_value = component_class(**state) - setattr(self.backend_tokenizer, tokenizer_component, new_value) - - @property - def mask_token(self) -> str: - """ - `str`: Mask token, to use when training a model with masked-language modeling. Log an error if used while not - having been set. - - BART tokenizer has a special mask token to be usable in the fill-mask pipeline. The mask token will greedily - comprise the space before the **. - """ - if self._mask_token is None: - if self.verbose: - logger.error("Using mask_token, but it is not set yet.") - return None - return str(self._mask_token) - - @mask_token.setter - def mask_token(self, value): - """ - Overriding the default behavior of the mask token to have it eat the space before it. - - This is needed to preserve backward compatibility with all the previously used models based on Bart. - """ - # Mask token behave like a normal word, i.e. 
include the space before it - # So we set lstrip to True - value = AddedToken(value, lstrip=True, rstrip=False) if isinstance(value, str) else value - self._mask_token = value - - def _batch_encode_plus(self, *args, **kwargs) -> BatchEncoding: - is_split_into_words = kwargs.get("is_split_into_words", False) - - if is_split_into_words and not self.add_prefix_space: - raise ValueError( - f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True " - "to use it with pretokenized inputs." - ) - - return super()._batch_encode_plus(*args, **kwargs) - - def _encode_plus(self, *args, **kwargs) -> BatchEncoding: - is_split_into_words = kwargs.get("is_split_into_words", False) - - if is_split_into_words and not self.add_prefix_space: - raise ValueError( - f"You need to instantiate {self.__class__.__name__} with add_prefix_space=True " - "to use it with pretokenized inputs." - ) - - return super()._encode_plus(*args, **kwargs) - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - files = self._tokenizer.model.save(save_directory, name=filename_prefix) - return tuple(files) - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - output = [self.bos_token_id] + token_ids_0 + [self.eos_token_id] - if token_ids_1 is None: - return output - - return output + [self.eos_token_id] + token_ids_1 + [self.eos_token_id] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. BART does not - make use of token type ids, therefore a list of zeros is returned. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of zeros. - """ - sep = [self.sep_token_id] - cls = [self.cls_token_id] - - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0] diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/conditional_detr/feature_extraction_conditional_detr.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/conditional_detr/feature_extraction_conditional_detr.py deleted file mode 100644 index 2af959e8a991f3c57605271b10d2078cd1a14904..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/conditional_detr/feature_extraction_conditional_detr.py +++ /dev/null @@ -1,33 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Feature extractor class for Conditional DETR.""" - -import warnings - -from ...utils import logging -from .image_processing_conditional_detr import ConditionalDetrImageProcessor - - -logger = logging.get_logger(__name__) - - -class ConditionalDetrFeatureExtractor(ConditionalDetrImageProcessor): - def __init__(self, *args, **kwargs) -> None: - warnings.warn( - "The class ConditionalDetrFeatureExtractor is deprecated and will be removed in version 5 of Transformers." - " Please use ConditionalDetrImageProcessor instead.", - FutureWarning, - ) - super().__init__(*args, **kwargs) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/model_zoo/model_zoo.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/model_zoo/model_zoo.py deleted file mode 100644 index 5b90bc9a165ea46ada72ed0e71f1e80e71ea9f40..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/model_zoo/model_zoo.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import os -from typing import Optional -import pkg_resources -import torch - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import CfgNode, LazyConfig, get_cfg, instantiate -from detectron2.modeling import build_model - - -class _ModelZooUrls(object): - """ - Mapping from names to officially released Detectron2 pre-trained models. - """ - - S3_PREFIX = "https://dl.fbaipublicfiles.com/detectron2/" - - # format: {config_path.yaml} -> model_id/model_final_{commit}.pkl - CONFIG_PATH_TO_URL_SUFFIX = { - # COCO Detection with Faster R-CNN - "COCO-Detection/faster_rcnn_R_50_C4_1x": "137257644/model_final_721ade.pkl", - "COCO-Detection/faster_rcnn_R_50_DC5_1x": "137847829/model_final_51d356.pkl", - "COCO-Detection/faster_rcnn_R_50_FPN_1x": "137257794/model_final_b275ba.pkl", - "COCO-Detection/faster_rcnn_R_50_C4_3x": "137849393/model_final_f97cb7.pkl", - "COCO-Detection/faster_rcnn_R_50_DC5_3x": "137849425/model_final_68d202.pkl", - "COCO-Detection/faster_rcnn_R_50_FPN_3x": "137849458/model_final_280758.pkl", - "COCO-Detection/faster_rcnn_R_101_C4_3x": "138204752/model_final_298dad.pkl", - "COCO-Detection/faster_rcnn_R_101_DC5_3x": "138204841/model_final_3e0943.pkl", - "COCO-Detection/faster_rcnn_R_101_FPN_3x": "137851257/model_final_f6e8b1.pkl", - "COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x": "139173657/model_final_68b088.pkl", - # COCO Detection with RetinaNet - "COCO-Detection/retinanet_R_50_FPN_1x": "190397773/model_final_bfca0b.pkl", - "COCO-Detection/retinanet_R_50_FPN_3x": "190397829/model_final_5bd44e.pkl", - "COCO-Detection/retinanet_R_101_FPN_3x": "190397697/model_final_971ab9.pkl", - # COCO Detection with RPN and Fast R-CNN - "COCO-Detection/rpn_R_50_C4_1x": "137258005/model_final_450694.pkl", - "COCO-Detection/rpn_R_50_FPN_1x": "137258492/model_final_02ce48.pkl", - "COCO-Detection/fast_rcnn_R_50_FPN_1x": "137635226/model_final_e5f7ce.pkl", - # COCO Instance Segmentation Baselines with Mask R-CNN - "COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x": "137259246/model_final_9243eb.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_1x": "137260150/model_final_4f86c3.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x": "137260431/model_final_a54504.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_C4_3x": "137849525/model_final_4ce675.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_50_DC5_3x": "137849551/model_final_84107b.pkl", - 
"COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x": "137849600/model_final_f10217.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_101_C4_3x": "138363239/model_final_a2914c.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_101_DC5_3x": "138363294/model_final_0464b7.pkl", - "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x": "138205316/model_final_a3ec72.pkl", - "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x": "139653917/model_final_2d9806.pkl", # noqa - # New baselines using Large-Scale Jitter and Longer Training Schedule - "new_baselines/mask_rcnn_R_50_FPN_100ep_LSJ": "42047764/model_final_bb69de.pkl", - "new_baselines/mask_rcnn_R_50_FPN_200ep_LSJ": "42047638/model_final_89a8d3.pkl", - "new_baselines/mask_rcnn_R_50_FPN_400ep_LSJ": "42019571/model_final_14d201.pkl", - "new_baselines/mask_rcnn_R_101_FPN_100ep_LSJ": "42025812/model_final_4f7b58.pkl", - "new_baselines/mask_rcnn_R_101_FPN_200ep_LSJ": "42131867/model_final_0bb7ae.pkl", - "new_baselines/mask_rcnn_R_101_FPN_400ep_LSJ": "42073830/model_final_f96b26.pkl", - "new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ": "42047771/model_final_b7fbab.pkl", # noqa - "new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_200ep_LSJ": "42132721/model_final_5d87c1.pkl", # noqa - "new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_400ep_LSJ": "42025447/model_final_f1362d.pkl", # noqa - "new_baselines/mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ": "42047784/model_final_6ba57e.pkl", # noqa - "new_baselines/mask_rcnn_regnety_4gf_dds_FPN_200ep_LSJ": "42047642/model_final_27b9c1.pkl", # noqa - "new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ": "42045954/model_final_ef3a80.pkl", # noqa - # COCO Person Keypoint Detection Baselines with Keypoint R-CNN - "COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x": "137261548/model_final_04e291.pkl", - "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x": "137849621/model_final_a6e10b.pkl", - "COCO-Keypoints/keypoint_rcnn_R_101_FPN_3x": "138363331/model_final_997cc7.pkl", - "COCO-Keypoints/keypoint_rcnn_X_101_32x8d_FPN_3x": "139686956/model_final_5ad38f.pkl", - # COCO Panoptic Segmentation Baselines with Panoptic FPN - "COCO-PanopticSegmentation/panoptic_fpn_R_50_1x": "139514544/model_final_dbfeb4.pkl", - "COCO-PanopticSegmentation/panoptic_fpn_R_50_3x": "139514569/model_final_c10459.pkl", - "COCO-PanopticSegmentation/panoptic_fpn_R_101_3x": "139514519/model_final_cafdb1.pkl", - # LVIS Instance Segmentation Baselines with Mask R-CNN - "LVISv0.5-InstanceSegmentation/mask_rcnn_R_50_FPN_1x": "144219072/model_final_571f7c.pkl", # noqa - "LVISv0.5-InstanceSegmentation/mask_rcnn_R_101_FPN_1x": "144219035/model_final_824ab5.pkl", # noqa - "LVISv0.5-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_1x": "144219108/model_final_5e3439.pkl", # noqa - # Cityscapes & Pascal VOC Baselines - "Cityscapes/mask_rcnn_R_50_FPN": "142423278/model_final_af9cf5.pkl", - "PascalVOC-Detection/faster_rcnn_R_50_C4": "142202221/model_final_b1acc2.pkl", - # Other Settings - "Misc/mask_rcnn_R_50_FPN_1x_dconv_c3-c5": "138602867/model_final_65c703.pkl", - "Misc/mask_rcnn_R_50_FPN_3x_dconv_c3-c5": "144998336/model_final_821d0b.pkl", - "Misc/cascade_mask_rcnn_R_50_FPN_1x": "138602847/model_final_e9d89b.pkl", - "Misc/cascade_mask_rcnn_R_50_FPN_3x": "144998488/model_final_480dd8.pkl", - "Misc/mask_rcnn_R_50_FPN_3x_syncbn": "169527823/model_final_3b3c51.pkl", - "Misc/mask_rcnn_R_50_FPN_3x_gn": "138602888/model_final_dc5d9e.pkl", - "Misc/scratch_mask_rcnn_R_50_FPN_3x_gn": "138602908/model_final_01ca85.pkl", - "Misc/scratch_mask_rcnn_R_50_FPN_9x_gn": "183808979/model_final_da7b4c.pkl", - 
"Misc/scratch_mask_rcnn_R_50_FPN_9x_syncbn": "184226666/model_final_5ce33e.pkl", - "Misc/panoptic_fpn_R_101_dconv_cascade_gn_3x": "139797668/model_final_be35db.pkl", - "Misc/cascade_mask_rcnn_X_152_32x8d_FPN_IN5k_gn_dconv": "18131413/model_0039999_e76410.pkl", # noqa - # D1 Comparisons - "Detectron1-Comparisons/faster_rcnn_R_50_FPN_noaug_1x": "137781054/model_final_7ab50c.pkl", # noqa - "Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x": "137781281/model_final_62ca52.pkl", # noqa - "Detectron1-Comparisons/keypoint_rcnn_R_50_FPN_1x": "137781195/model_final_cce136.pkl", - } - - @staticmethod - def query(config_path: str) -> Optional[str]: - """ - Args: - config_path: relative config filename - """ - name = config_path.replace(".yaml", "").replace(".py", "") - if name in _ModelZooUrls.CONFIG_PATH_TO_URL_SUFFIX: - suffix = _ModelZooUrls.CONFIG_PATH_TO_URL_SUFFIX[name] - return _ModelZooUrls.S3_PREFIX + name + "/" + suffix - return None - - -def get_checkpoint_url(config_path): - """ - Returns the URL to the model trained using the given config - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - - Returns: - str: a URL to the model - """ - url = _ModelZooUrls.query(config_path) - if url is None: - raise RuntimeError("Pretrained model for {} is not available!".format(config_path)) - return url - - -def get_config_file(config_path): - """ - Returns path to a builtin config file. - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - - Returns: - str: the real path to the config file. - """ - cfg_file = pkg_resources.resource_filename( - "detectron2.model_zoo", os.path.join("configs", config_path) - ) - if not os.path.exists(cfg_file): - raise RuntimeError("{} not available in Model Zoo!".format(config_path)) - return cfg_file - - -def get_config(config_path, trained: bool = False): - """ - Returns a config object for a model in model zoo. - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - trained (bool): If True, will set ``MODEL.WEIGHTS`` to trained model zoo weights. - If False, the checkpoint specified in the config file's ``MODEL.WEIGHTS`` is used - instead; this will typically (though not always) initialize a subset of weights using - an ImageNet pre-trained model, while randomly initializing the other weights. - - Returns: - CfgNode or omegaconf.DictConfig: a config object - """ - cfg_file = get_config_file(config_path) - if cfg_file.endswith(".yaml"): - cfg = get_cfg() - cfg.merge_from_file(cfg_file) - if trained: - cfg.MODEL.WEIGHTS = get_checkpoint_url(config_path) - return cfg - elif cfg_file.endswith(".py"): - cfg = LazyConfig.load(cfg_file) - if trained: - url = get_checkpoint_url(config_path) - if "train" in cfg and "init_checkpoint" in cfg.train: - cfg.train.init_checkpoint = url - else: - raise NotImplementedError - return cfg - - -def get(config_path, trained: bool = False, device: Optional[str] = None): - """ - Get a model specified by relative path under Detectron2's official ``configs/`` directory. - - Args: - config_path (str): config file name relative to detectron2's "configs/" - directory, e.g., "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - trained (bool): see :func:`get_config`. - device (str or None): overwrite the device in config, if given. 
- - Returns: - nn.Module: a detectron2 model. Will be in training mode. - - Example: - :: - from detectron2 import model_zoo - model = model_zoo.get("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml", trained=True) - """ - cfg = get_config(config_path, trained) - if device is None and not torch.cuda.is_available(): - device = "cpu" - if device is not None and isinstance(cfg, CfgNode): - cfg.MODEL.DEVICE = device - - if isinstance(cfg, CfgNode): - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - else: - model = instantiate(cfg.model) - if device is not None: - model = model.to(device) - if "train" in cfg and "init_checkpoint" in cfg.train: - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - return model diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/structures/test_masks.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/structures/test_masks.py deleted file mode 100644 index 7991eb0b35724f2f2f402d788a273d68b7cad5f2..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/tests/structures/test_masks.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import unittest -import torch - -from detectron2.structures.masks import BitMasks, PolygonMasks, polygons_to_bitmask - - -class TestBitMask(unittest.TestCase): - def test_get_bounding_box(self): - masks = torch.tensor( - [ - [ - [False, False, False, True], - [False, False, True, True], - [False, True, True, False], - [False, True, True, False], - ], - [ - [False, False, False, False], - [False, False, True, False], - [False, True, True, False], - [False, True, True, False], - ], - torch.zeros(4, 4), - ] - ) - bitmask = BitMasks(masks) - box_true = torch.tensor([[1, 0, 4, 4], [1, 1, 3, 4], [0, 0, 0, 0]], dtype=torch.float32) - box = bitmask.get_bounding_boxes() - self.assertTrue(torch.all(box.tensor == box_true).item()) - - for box in box_true: - poly = box[[0, 1, 2, 1, 2, 3, 0, 3]].numpy() - mask = polygons_to_bitmask([poly], 4, 4) - reconstruct_box = BitMasks(mask[None, :, :]).get_bounding_boxes()[0].tensor - self.assertTrue(torch.all(box == reconstruct_box).item()) - - reconstruct_box = PolygonMasks([[poly]]).get_bounding_boxes()[0].tensor - self.assertTrue(torch.all(box == reconstruct_box).item()) - - def test_from_empty_polygons(self): - masks = BitMasks.from_polygon_masks([], 100, 100) - self.assertEqual(masks.tensor.shape, (0, 100, 100)) - - def test_getitem(self): - masks = BitMasks(torch.ones(3, 10, 10)) - self.assertEqual(masks[1].tensor.shape, (1, 10, 10)) - self.assertEqual(masks[1:3].tensor.shape, (2, 10, 10)) - self.assertEqual(masks[torch.tensor([True, False, False])].tensor.shape, (1, 10, 10)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/ysharma/LLaVA_v1/llava/eval/model_vqa_science.py b/spaces/ysharma/LLaVA_v1/llava/eval/model_vqa_science.py deleted file mode 100644 index aa77b39c0df7bcf0c8200f1282b165dee493ad73..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/llava/eval/model_vqa_science.py +++ /dev/null @@ -1,141 +0,0 @@ -import argparse -import torch -import os -import json -from tqdm import tqdm -import shortuuid - -from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN -from llava.conversation import conv_templates, SeparatorStyle -from llava.model.builder import load_pretrained_model -from llava.utils import 
disable_torch_init -from llava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria - -from PIL import Image -import math - - -def split_list(lst, n): - """Split a list into n (roughly) equal-sized chunks""" - chunk_size = math.ceil(len(lst) / n) # integer division - return [lst[i:i+chunk_size] for i in range(0, len(lst), chunk_size)] - - -def get_chunk(lst, n, k): - chunks = split_list(lst, n) - return chunks[k] - - -def eval_model(args): - # Model - disable_torch_init() - model_path = os.path.expanduser(args.model_path) - model_name = get_model_name_from_path(model_path) - tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, args.model_base, model_name) - - questions = json.load(open(os.path.expanduser(args.question_file), "r")) - questions = get_chunk(questions, args.num_chunks, args.chunk_idx) - answers_file = os.path.expanduser(args.answers_file) - os.makedirs(os.path.dirname(answers_file), exist_ok=True) - ans_file = open(answers_file, "w") - for i, line in enumerate(tqdm(questions)): - idx = line["id"] - question = line['conversations'][0] - qs = question['value'].replace('', '').strip() - cur_prompt = qs - - if 'image' in line: - image_file = line["image"] - image = Image.open(os.path.join(args.image_folder, image_file)) - image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'][0] - images = image_tensor.unsqueeze(0).half().cuda() - if getattr(model.config, 'mm_use_im_start_end', False): - qs = DEFAULT_IM_START_TOKEN + DEFAULT_IMAGE_TOKEN + DEFAULT_IM_END_TOKEN + '\n' + qs - else: - qs = DEFAULT_IMAGE_TOKEN + '\n' + qs - cur_prompt = '' + '\n' + cur_prompt - else: - images = None - - conv = conv_templates[args.conv_mode].copy() - conv.append_message(conv.roles[0], qs) - conv.append_message(conv.roles[1], None) - prompt = conv.get_prompt() - - input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() - - stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 - keywords = [stop_str] - stopping_criteria = [KeywordsStoppingCriteria(keywords, tokenizer, input_ids)] if conv.version == "v0" else None - - with torch.inference_mode(): - output_ids = model.generate( - input_ids, - images=images, - do_sample=True, - temperature=0.2, - max_new_tokens=1024, - use_cache=True, - stopping_criteria=stopping_criteria, - ) - - input_token_len = input_ids.shape[1] - n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item() - if n_diff_input_output > 0: - print(f'[Warning] {n_diff_input_output} output_ids are not the same as the input_ids') - outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0] - outputs = outputs.strip() - if outputs.endswith(stop_str): - outputs = outputs[:-len(stop_str)] - outputs = outputs.strip() - - # prompt for answer - if args.answer_prompter: - outputs_reasoning = outputs - input_ids = tokenizer_image_token(prompt + outputs_reasoning + ' ###\nANSWER:', tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() - - with torch.inference_mode(): - output_ids = model.generate( - input_ids, - images=images, - do_sample=True, - temperature=0.2, - max_new_tokens=64, - use_cache=True, - stopping_criteria=[stopping_criteria]) - - input_token_len = input_ids.shape[1] - n_diff_input_output = (input_ids != output_ids[:, :input_token_len]).sum().item() - if n_diff_input_output > 0: - print(f'[Warning] {n_diff_input_output} output_ids are not 
the same as the input_ids') - outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0] - outputs = outputs.strip() - if outputs.endswith(stop_str): - outputs = outputs[:-len(stop_str)] - outputs = outputs.strip() - outputs = outputs_reasoning + '\n The answer is ' + outputs - - ans_id = shortuuid.uuid() - ans_file.write(json.dumps({"question_id": idx, - "prompt": cur_prompt, - "text": outputs, - "answer_id": ans_id, - "model_id": model_name, - "metadata": {}}) + "\n") - ans_file.flush() - ans_file.close() - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model-path", type=str, default="facebook/opt-350m") - parser.add_argument("--model-base", type=str, default=None) - parser.add_argument("--image-folder", type=str, default="") - parser.add_argument("--question-file", type=str, default="tables/question.json") - parser.add_argument("--answers-file", type=str, default="answer.jsonl") - parser.add_argument("--conv-mode", type=str, default="llava_v0") - parser.add_argument("--num-chunks", type=int, default=1) - parser.add_argument("--chunk-idx", type=int, default=0) - parser.add_argument("--answer-prompter", action="store_true") - args = parser.parse_args() - - eval_model(args) diff --git a/spaces/zeno-ml/openai-evals/Makefile b/spaces/zeno-ml/openai-evals/Makefile deleted file mode 100644 index 07dc389534ca118a66408b5a7e6aef083604d39f..0000000000000000000000000000000000000000 --- a/spaces/zeno-ml/openai-evals/Makefile +++ /dev/null @@ -1,53 +0,0 @@ -all: lint typecheck clean - -.PHONY: install -install: - @echo "==> 📦 Installing" - @cd frontend && npm i && npm run build - @poetry install - -.PHONY: lint -lint: - @echo "==> 👕 Linting" - @poetry check - @poetry run black zeno-evals-hub/ - @poetry run usort format zeno-evals-hub/ - @poetry run flake8 zeno-evals-hub --statistics - @cd frontend && npm run lint - -.PHONY: typecheck -typecheck: - @echo "==> ✅ Type checks" - @poetry run mypy -p zeno-evals-hub - @poetry run pyright zeno-evals-hub - @cd frontend && npm run check - -.PHONY: build -build: - @echo "==> 👷‍♀️ Build" - @cd frontend && npm run build - @cd frontend && node build.js - @mv zeno-evals-hub/frontend/index.html zeno-evals-hub/frontend/index_og.html - @mv zeno-evals-hub/frontend/index_tmp.html zeno-evals-hub/frontend/index.html - @poetry build -v - @rm zeno-evals-hub/frontend/index.html - @mv zeno-evals-hub/frontend/index_og.html zeno-evals-hub/frontend/index.html - -.PHONY: clean -clean: - @rm -rf dist - @rm -rf ./.zeno-evals-hub_cache - @find . -type d -name '.mypy_cache' -exec rm -rf {} + - @find . -type d -name '__pycache__' -exec rm -rf {} + - @find . -type d -name '*pytest_cache*' -exec rm -rf {} + - @find . -type f -name "*.py[co]" -exec rm -rf {} + - @find . 
-type d -name '*.ipynb_checkpoints' -exec rm -r {} + - -.PHONY: publish -publish: - @echo "==> 🚀 Publishing" - @git commit -am "chore: bump version to $(shell poetry version -s)" - @git tag "v$(shell poetry version -s)" - @make build - @poetry publish - @git push && git push --tags \ No newline at end of file diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/finetune/training.py b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/finetune/training.py deleted file mode 100644 index e73713a823fdaa1000032d83246adfa18c1be014..0000000000000000000000000000000000000000 --- a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/llama_lora/ui/finetune/training.py +++ /dev/null @@ -1,525 +0,0 @@ -import os -import json -import time -import math -import datetime -import pytz -import socket -import threading -import traceback -import altair as alt -import pandas as pd -import gradio as gr - -from huggingface_hub import try_to_load_from_cache, snapshot_download -from transformers import TrainingArguments - -from ...config import Config -from ...globals import Global -from ...models import clear_cache, unload_models -from ...utils.prompter import Prompter -from ...utils.sample_evenly import sample_evenly -from ..trainer_callback import ( - UiTrainerCallback, reset_training_status, - update_training_states, set_train_output -) - -from .data_processing import get_data_from_input - - -def status_message_callback(message): - if Global.should_stop_training: - return True - - Global.training_status_text = message - - -def params_info_callback(all_params, trainable_params): - Global.training_params_info_text = f"Params: {trainable_params}/{all_params} ({100 * trainable_params / all_params:.4f}% trainable)" - - -def do_train( - # Dataset - template, - load_dataset_from, - dataset_from_data_dir, - dataset_text, - dataset_text_format, - dataset_plain_text_input_variables_separator, - dataset_plain_text_input_and_output_separator, - dataset_plain_text_data_separator, - # Training Options - max_seq_length, - evaluate_data_count, - micro_batch_size, - gradient_accumulation_steps, - epochs, - learning_rate, - train_on_inputs, - lora_r, - lora_alpha, - lora_dropout, - lora_target_modules, - lora_modules_to_save, - load_in_8bit, - fp16, - bf16, - gradient_checkpointing, - save_steps, - save_total_limit, - logging_steps, - additional_training_arguments, - additional_lora_config, - model_name, - continue_from_model, - continue_from_checkpoint, - progress=gr.Progress(track_tqdm=False), -): - if Global.is_training or Global.is_train_starting: - return render_training_status() + render_loss_plot() - - reset_training_status() - Global.is_train_starting = True - - try: - base_model_name = Global.base_model_name - tokenizer_name = Global.tokenizer_name or Global.base_model_name - - resume_from_checkpoint_param = None - if continue_from_model == "-" or continue_from_model == "None": - continue_from_model = None - if continue_from_checkpoint == "-" or continue_from_checkpoint == "None": - continue_from_checkpoint = None - if continue_from_model: - resume_from_model_path = os.path.join( - Config.data_dir, "lora_models", continue_from_model) - resume_from_checkpoint_param = resume_from_model_path - if continue_from_checkpoint: - resume_from_checkpoint_param = os.path.join( - resume_from_checkpoint_param, continue_from_checkpoint) - will_be_resume_from_checkpoint_file = os.path.join( - resume_from_checkpoint_param, "pytorch_model.bin") - if not os.path.exists(will_be_resume_from_checkpoint_file): - raise ValueError( - f"Unable 
to resume from checkpoint {continue_from_model}/{continue_from_checkpoint}. Resuming is only possible from checkpoints stored locally in the data directory. Please ensure that the file '{will_be_resume_from_checkpoint_file}' exists.") - else: - will_be_resume_from_checkpoint_file = os.path.join( - resume_from_checkpoint_param, "adapter_model.bin") - if not os.path.exists(will_be_resume_from_checkpoint_file): - # Try to get model in Hugging Face cache - resume_from_checkpoint_param = None - possible_hf_model_name = None - possible_model_info_file = os.path.join( - resume_from_model_path, "info.json") - if "/" in continue_from_model: - possible_hf_model_name = continue_from_model - elif os.path.exists(possible_model_info_file): - with open(possible_model_info_file, "r") as file: - model_info = json.load(file) - possible_hf_model_name = model_info.get( - "hf_model_name") - if possible_hf_model_name: - possible_hf_model_cached_path = try_to_load_from_cache( - possible_hf_model_name, 'adapter_model.bin') - if not possible_hf_model_cached_path: - snapshot_download(possible_hf_model_name) - possible_hf_model_cached_path = try_to_load_from_cache( - possible_hf_model_name, 'adapter_model.bin') - if possible_hf_model_cached_path: - resume_from_checkpoint_param = os.path.dirname( - possible_hf_model_cached_path) - - if not resume_from_checkpoint_param: - raise ValueError( - f"Unable to continue from model {continue_from_model}. Continuation is only possible from models stored locally in the data directory. Please ensure that the file '{will_be_resume_from_checkpoint_file}' exists.") - - output_dir = os.path.join(Config.data_dir, "lora_models", model_name) - if os.path.exists(output_dir): - if (not os.path.isdir(output_dir)) or os.path.exists(os.path.join(output_dir, 'adapter_config.json')): - raise ValueError( - f"The output directory already exists and is not empty. 
({output_dir})") - - wandb_group = template - wandb_tags = [f"template:{template}"] - if load_dataset_from == "Data Dir" and dataset_from_data_dir: - wandb_group += f"/{dataset_from_data_dir}" - wandb_tags.append(f"dataset:{dataset_from_data_dir}") - - finetune_args = { - 'base_model': base_model_name, - 'tokenizer': tokenizer_name, - 'output_dir': output_dir, - 'micro_batch_size': micro_batch_size, - 'gradient_accumulation_steps': gradient_accumulation_steps, - 'num_train_epochs': epochs, - 'learning_rate': learning_rate, - 'cutoff_len': max_seq_length, - 'val_set_size': evaluate_data_count, - 'lora_r': lora_r, - 'lora_alpha': lora_alpha, - 'lora_dropout': lora_dropout, - 'lora_target_modules': lora_target_modules, - 'lora_modules_to_save': lora_modules_to_save, - 'train_on_inputs': train_on_inputs, - 'load_in_8bit': load_in_8bit, - 'fp16': fp16, - 'bf16': bf16, - 'gradient_checkpointing': gradient_checkpointing, - 'group_by_length': False, - 'resume_from_checkpoint': resume_from_checkpoint_param, - 'save_steps': save_steps, - 'save_total_limit': save_total_limit, - 'logging_steps': logging_steps, - 'additional_training_arguments': additional_training_arguments, - 'additional_lora_config': additional_lora_config, - 'wandb_api_key': Config.wandb_api_key, - 'wandb_project': Config.default_wandb_project if Config.enable_wandb else None, - 'wandb_group': wandb_group, - 'wandb_run_name': model_name, - 'wandb_tags': wandb_tags - } - - prompter = Prompter(template) - data = get_data_from_input( - load_dataset_from=load_dataset_from, - dataset_text=dataset_text, - dataset_text_format=dataset_text_format, - dataset_plain_text_input_variables_separator=dataset_plain_text_input_variables_separator, - dataset_plain_text_input_and_output_separator=dataset_plain_text_input_and_output_separator, - dataset_plain_text_data_separator=dataset_plain_text_data_separator, - dataset_from_data_dir=dataset_from_data_dir, - prompter=prompter - ) - - def training(): - Global.is_training = True - - try: - # Need RAM for training - unload_models() - Global.new_base_model_that_is_ready_to_be_used = None - Global.name_of_new_base_model_that_is_ready_to_be_used = None - clear_cache() - - train_data = prompter.get_train_data_from_dataset(data) - - if Config.ui_dev_mode: - Global.training_args = TrainingArguments( - logging_steps=logging_steps, output_dir="" - ) - - message = "Currently in UI dev mode, not doing the actual training." 
- message += f"\n\nArgs: {json.dumps(finetune_args, indent=2)}" - message += f"\n\nTrain data (first 5):\n{json.dumps(train_data[:5], indent=2)}" - - print(message) - - total_epochs = epochs - total_steps = len(train_data) * epochs - if total_steps < 1500: - total_steps = 1500 - log_history = [] - initial_loss = 2 - loss_decay_rate = 0.8 - for i in range(total_steps): - if (Global.should_stop_training): - break - - current_step = i + 1 - current_epoch = i / (total_steps / total_epochs) - - if (current_step % logging_steps == 0): - loss = initial_loss * \ - math.exp(-loss_decay_rate * current_epoch) - log_history.append({ - 'loss': loss, - 'learning_rate': 0.0001, - 'epoch': current_epoch - }) - - update_training_states( - total_steps=total_steps, - current_step=current_step, - total_epochs=total_epochs, - current_epoch=current_epoch, - log_history=log_history - ) - time.sleep(0.01) - - result_message = set_train_output(message) - print(result_message) - time.sleep(3) - Global.is_training = False - return - - training_callbacks = [UiTrainerCallback] - - if not os.path.exists(output_dir): - os.makedirs(output_dir) - - with open(os.path.join(output_dir, "info.json"), 'w') as info_json_file: - dataset_name = "N/A (from text input)" - if load_dataset_from == "Data Dir": - dataset_name = dataset_from_data_dir - - info = { - 'base_model': base_model_name, - 'prompt_template': template, - 'dataset_name': dataset_name, - 'dataset_rows': len(train_data), - 'trained_on_machine': socket.gethostname(), - 'timestamp': time.time(), - } - if continue_from_model: - info['continued_from_model'] = continue_from_model - if continue_from_checkpoint: - info['continued_from_checkpoint'] = continue_from_checkpoint - - if Global.version: - info['tuner_version'] = Global.version - - json.dump(info, info_json_file, indent=2) - - train_output = Global.finetune_train_fn( - train_data=train_data, - callbacks=training_callbacks, - status_message_callback=status_message_callback, - params_info_callback=params_info_callback, - additional_wandb_config=info, - **finetune_args, - ) - - result_message = set_train_output(train_output) - print(result_message + "\n" + str(train_output)) - - clear_cache() - - Global.is_training = False - - except Exception as e: - traceback.print_exc() - Global.training_error_message = str(e) - finally: - Global.is_training = False - - training_thread = threading.Thread(target=training) - training_thread.daemon = True - training_thread.start() - - except Exception as e: - Global.is_training = False - traceback.print_exc() - Global.training_error_message = str(e) - finally: - Global.is_train_starting = False - - return render_training_status() + render_loss_plot() - - -def render_training_status(): - if not Global.is_training: - if Global.is_train_starting: - html_content = """ -
- Starting...
- """ - return (gr.HTML.update(value=html_content), gr.HTML.update(visible=True)) - - if Global.training_error_message: - html_content = f""" -
- ⚠ Something went wrong
{Global.training_error_message}
- """ - return (gr.HTML.update(value=html_content), gr.HTML.update(visible=False)) - - if Global.train_output_str: - end_message = "✅ Training completed" - if Global.should_stop_training: - end_message = "🛑 Train aborted" - - params_info_html = "" - if Global.training_params_info_text: - params_info_html = f""" -
- {Global.training_params_info_text} -
- """ - html_content = f""" -
- {end_message}
{Global.train_output_str}
- {params_info_html}
- """ - return (gr.HTML.update(value=html_content), gr.HTML.update(visible=False)) - - if Global.training_status_text: - html_content = f""" -
{Global.training_status_text}
- """ - return (gr.HTML.update(value=html_content), gr.HTML.update(visible=False)) - - html_content = """ -
- Training status will be shown here
- """ - return (gr.HTML.update(value=html_content), gr.HTML.update(visible=False)) - - meta_info = [] - meta_info.append( - f"{Global.training_current_step}/{Global.training_total_steps} steps") - current_time = time.time() - time_elapsed = current_time - Global.train_started_at - time_remaining = -1 - if Global.training_eta: - time_remaining = Global.training_eta - current_time - if time_remaining >= 0: - meta_info.append( - f"{format_time(time_elapsed)}<{format_time(time_remaining)}") - else: - meta_info.append(format_time(time_elapsed)) - - current_speed = Global.training_eta_predictor.get_current_speed() - if current_speed is not None: - meta_info.append(f"{current_speed:.2f}it/s") - - if time_remaining >= 0: - meta_info.append(f"ETA: {format_timestamp(Global.training_eta)}") - - params_info_html = "" - if Global.training_params_info_text: - params_info_html = f""" -
- {Global.training_params_info_text} -
- """ - html_content = f""" -
{' | '.join(meta_info)}
- {Global.training_status_text} - {Global.training_progress * 100:.2f}%
- {params_info_html}
- """ - return (gr.HTML.update(value=html_content), gr.HTML.update(visible=True)) - - -def render_loss_plot(): - if len(Global.training_log_history) <= 2: - return (gr.Column.update(visible=False), gr.Plot.update(visible=False)) - - max_elements = 5000 - training_log_history = sample_evenly( - Global.training_log_history, max_elements=max_elements) - logging_steps = Global.training_args and Global.training_args.logging_steps - - loss_data = [ - { - 'type': 'train_loss' if 'loss' in item else 'eval_loss', - 'loss': item.get('loss') or item.get('eval_loss'), - 'epoch': item.get('epoch') - } for item in training_log_history - if ('loss' in item or 'eval_loss' in item) - and 'epoch' in item - ] - - use_steps = False - if len(Global.training_log_history) <= max_elements and logging_steps: - for index, item in enumerate(loss_data): - item["step"] = index * logging_steps - use_steps = True - - source = pd.DataFrame(loss_data) - - highlight = alt.selection( - type='single', # type: ignore - on='mouseover', fields=['type'], nearest=True - ) - - if use_steps: - base = alt.Chart(source).encode( # type: ignore - x='step:Q', - y='loss:Q', - color='type:N', - tooltip=['type:N', 'loss:Q', 'step:Q', 'epoch:Q'] - ) - else: - base = alt.Chart(source).encode( # type: ignore - x='epoch:Q', - y='loss:Q', - color='type:N', - tooltip=['type:N', 'loss:Q', 'epoch:Q'] - ) - - points = base.mark_circle().encode( - opacity=alt.value(0) - ).add_selection( - highlight - ).properties( - width=640 - ) - - lines = base.mark_line().encode( - size=alt.condition(~highlight, alt.value(1), alt.value(3)) - ) - - return (gr.Column.update(visible=True), gr.Plot.update(points + lines, visible=True)) - - -def format_time(seconds): - hours, remainder = divmod(seconds, 3600) - minutes, seconds = divmod(remainder, 60) - if hours == 0: - return "{:02d}:{:02d}".format(int(minutes), int(seconds)) - else: - return "{:02d}:{:02d}:{:02d}".format(int(hours), int(minutes), int(seconds)) - - -def format_timestamp(timestamp): - dt_naive = datetime.datetime.utcfromtimestamp(timestamp) - utc = pytz.UTC - timezone = Config.timezone - dt_aware = utc.localize(dt_naive).astimezone(timezone) - now = datetime.datetime.now(timezone) - delta = dt_aware.date() - now.date() - if delta.days == 0: - time_str = "" - elif delta.days == 1: - time_str = "tomorrow at " - elif delta.days == -1: - time_str = "yesterday at " - else: - time_str = dt_aware.strftime('%A, %B %d at ') - time_str += dt_aware.strftime('%I:%M %p').lower() - return time_str diff --git a/spaces/zhenwusw/JoJoGAN/e4e/models/encoders/__init__.py b/spaces/zhenwusw/JoJoGAN/e4e/models/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/zixian/Zhenhuan-VITS/denoise_audio.py b/spaces/zixian/Zhenhuan-VITS/denoise_audio.py deleted file mode 100644 index dd55eba2954bd655010a9699aec2752c2646c977..0000000000000000000000000000000000000000 --- a/spaces/zixian/Zhenhuan-VITS/denoise_audio.py +++ /dev/null @@ -1,18 +0,0 @@ -import os -import torchaudio -raw_audio_dir = "/content/drive/MyDrive/selected_character_wav/" -denoise_audio_dir = "./denoised_audio/" -filelist = list(os.walk(raw_audio_dir))[0][2] - -for file in filelist: - if file.endswith(".wav"): - os.system(f"demucs --two-stems=vocals {raw_audio_dir}{file}") -for file in filelist: - file = file.replace(".wav", "") - wav, sr = torchaudio.load(f"./separated/htdemucs/{file}/vocals.wav", frame_offset=0, num_frames=-1, normalize=True, - 
channels_first=True) - # merge two channels into one - wav = wav.mean(dim=0).unsqueeze(0) - if sr != 22050: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=22050)(wav) - torchaudio.save(denoise_audio_dir + file + ".wav", wav, 22050, channels_first=True) \ No newline at end of file diff --git a/spaces/zomehwh/vits-uma-genshin-honkai/monotonic_align/core.py b/spaces/zomehwh/vits-uma-genshin-honkai/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/vits-uma-genshin-honkai/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. - else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file
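The `maximum_path_jit` kernel above computes, for each batch item, the highest-scoring monotonic alignment path through a score matrix and writes it into `paths` in place. Below is a minimal usage sketch driven from NumPy; the import path, array shapes, and variable names are assumptions for illustration and are not taken from the repository's own wrapper code.

```python
# Hedged usage sketch for maximum_path_jit (assumed importable as monotonic_align.core).
import numpy as np
from monotonic_align.core import maximum_path_jit

b, t_y, t_x = 1, 6, 4  # one item: 6 output frames aligned to 4 input tokens (illustrative sizes)

# Scores must be float32 and C-contiguous to match the numba signature; the kernel
# also accumulates into this array in place, so pass a copy if the originals are needed later.
values = np.random.randn(b, t_y, t_x).astype(np.float32)
paths = np.zeros((b, t_y, t_x), dtype=np.int32)  # output buffer, filled in place
t_ys = np.array([t_y], dtype=np.int32)           # valid number of rows per batch item
t_xs = np.array([t_x], dtype=np.int32)           # valid number of columns per batch item

maximum_path_jit(paths, values, t_ys, t_xs)

# Each row of paths[0] now contains exactly one 1, and the selected column index
# never decreases from one row to the next (a monotonic alignment).
print(paths[0])
```

In the surrounding project this kernel is presumably called through a wrapper that converts framework tensors to contiguous NumPy arrays first; only the kernel itself is defined in this file.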